Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

BlueHackers: A bit about “jlg”

Tue, 2014-09-02 23:37

I’m 29, male, from sunny Brisbane (sunny at the moment). I was born in Adelaide, SA, Australia in a hospital called Modbury Hospital. It’s still there. I have one son. I also have a daughter who by law I am not allowed to see, as I am not on the birth certificate, but I’m 99.99 percent sure I’m her father. Her name for the record is Annabel; I’m unsure of the spelling. Our son (mine and Bec’s) is being raised with so much love and care and I only wish the same for my daughter. I should mention I’m no street thug or criminal. I actually have no criminal record. I survive on $500AUD a fortnight as of right this moment, which is not much for an overweight male. I don’t really have any vices per se, but I use computers so frequently that at my age of 29 I have short-sighted vision. I should mention I don’t have diabetes.

My story is a common one I think? Man meets woman; the man, Steve Cullen, is my bio dad. Has sex, finds out there’s a baby, and does a runner. I have to this day never met my bio dad. I have seen a photo when I was younger. He was some bald dude. I don’t think much of him and I actually don’t speak much of him. My mother was awesome and she still is, albeit after her last suicide attempt. I will get to this later. I should mention I was a heavy smoker of cannabis from 2003 to 2007. I attended a place called HUMBUG. Ironically it was a friend I made called Daniel who got me into marijuana. He would write code, I’d hack computers. We kind of worked as a team, the trust between us built by consuming so much (I will call it weed). I’m not proud of my drug usage, but little did I know my Mum was a heavy user of other “drugs”. She was also in the army. For roughly 6 years she taught Army servicemen and women English, maths and so on, as she was a UniSA educated teacher. I on the other hand am self taught. From a young age I was somewhat unwillingly writing phish attacks, but for chat websites. I would call these fake logins via HTML. I did this all roughly during high school. I admit freely that the school network was a joke. That doesn’t mean I abused it. I just made sure I couldn’t use their computers by inputting the ASCII character alt+256, the invisible char, into the login screen I was using. It was Novell, and it would not log in if you entered this char. Do it quickly without the teacher looking and you’d get moved, say, next to a girl you liked, and flirt with her…. :-o)

For the sake of keeping things realistic and true, I was actually very frigid. I dated some real nice girls; I just couldn’t get up the courage to do anything more than sitting near them. That obviously changed in my final few years. I have always been anti-authority, because I actually had zero parent supervision for most of my teens. I would sit in front of my IBM Aptiva listening to god-awful rap music I won’t mention, online. I would sit reading RFCs, reading how to write HTML, then thinking outside the box and doing what’s now called XSS (aka cross-site scripting). Yahoo was one I did. Obviously I have never been the type of guy to go “hey, here’s my handle, here I am, LEA track me down”. I prefer doing these things without an identity; I always have, always will. I am a freelance individual. Whilst I sympathize with various well-known hacktivists, I do not go out of my way to engage them.

I think this is enough for now… I will update soon. It’s 10.37pm, not that late I know, but all this writing has exhausted me. More later.

Andrew Pollock: [life] Day 216: Kindergarten, startup stuff, tennis and baking

Tue, 2014-09-02 22:25

I wanted to do a run this morning, but the flesh was weak, so it didn't happen.

I was a bit late getting away to pick up Zoe from Sarah's place, and the traffic was pretty bad again. I got to Kindergarten a bit later than I'd have liked.

I had a meeting with an accountant to discuss some stuff at 9:30am, which I managed to comfortably make. I even got the 30 minute consult for free, which was pretty awesome. I'm pretty close to shelving this particular startup idea, because I can't structure it in a way that I'd like, but I can possibly pivot slightly and still solve the overall problem I'm trying to solve. I have some more research to do before I can make a final decision.

After I got home, I procrastinated for a bit and went and had a cup of tea with my neighbour, before finishing off the assessment for the unit of my real estate course that I've been making slow progress on. It finally went into the mail today, so I'm happy about that.

I drove to Kindergarten to take Zoe to tennis class. After tennis class, she wanted Megan to come back for a play date, and Laura had some errands to run, so that worked out well. I had some baking to do for the Father's Day thing at Kindergarten tomorrow, and Megan was keen to help. Zoe was more interested in playing dress ups. Ultimately all they wanted to do was lick the spatulas anyway.

I made my favourite red velvet cheesecake brownie, and tried milling some whole wheat down into flour. The texture was a bit gritty like cornmeal, but it seemed to turn out okay. I've been having problems with it not setting properly in the middle the last few times I've made it, and this time was no different, so I'll have to try making another batch in the morning.

Anshu dropped in after work, and then Laura got back to pick up Megan just before Sarah arrived to pick up Zoe. An easy afternoon.

Craige McWhirter: Matching Ceph Volumes To Nova Instances

Tue, 2014-09-02 15:27
Introduction

If you have a variety of Ceph pools on different hardware, you may want to be able to find out which Ceph pools each instance is on for either performance or billing purposes.

Here's how I go about it...

Obtain the Volume ID

If your OpenStack service is configured to boot instances from volumes, then you can get the volume ID from the os-extended-volumes:volumes_attached field:

$ nova show Tutorial0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "96321d31-27f7-47e5-b2a8-fcd1c0c30767"}] |

Query the Ceph Cluster

I've not found an elegant way to go about this yet, so log into one of your Ceph servers and query the Ceph pools...

Grab a list of your Ceph pools:

# ceph osd lspools
10 poola,11 poolb,

Then query each pool to locate your volume:

# rbd -p poola ls | grep 96321d31-27f7-47e5-b2a8-fcd1c0c30767
# rbd -p poolb ls | grep 96321d31-27f7-47e5-b2a8-fcd1c0c30767
volume-96321d31-27f7-47e5-b2a8-fcd1c0c30767

Voilà! The volume for your instance Tutorial0 is in the Ceph pool poolb.
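
Since that means grepping each pool by hand, a small shell loop can do the searching for you. This is a minimal sketch, assuming the lspools output format shown above (comma-separated "ID name" pairs):

#!/bin/bash
# find-volume.sh -- hypothetical helper: search every Ceph pool for a volume ID
VOLUME_ID="$1"
# lspools prints "10 poola,11 poolb,"; split on commas and keep the names
for pool in $(ceph osd lspools | tr ',' '\n' | awk '{print $2}'); do
    if rbd -p "$pool" ls | grep -q "$VOLUME_ID"; then
        echo "Found $VOLUME_ID in pool: $pool"
    fi
done

Run it as ./find-volume.sh 96321d31-27f7-47e5-b2a8-fcd1c0c30767 on one of your Ceph servers.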

Andrew Pollock: [life] Day 215: Kindergarten, tinkering, massage and some shopping

Mon, 2014-09-01 18:25

Zoe was yelling out for me at 1:30am because her polar bear had fallen out of bed. She then proceeded to have a massive sleep in until 7:30am, so the morning was a little bit rushed.

That said, she took it upon herself to make breakfast while I was in the shower, which was pretty impressive.

Being the first day of Spring and a nice day at that, I wanted to get back into the habit of biking to Kindergarten, so despite it being a bit late, even though we did a very good job of getting ready in a hurry, we biked to Kindergarten. Zoe was singing all the way there, it was very cute.

I got home and spent the day getting Puppet to manage my BeagleBone Black, since I'd had to reinstall it as it had semi-died over the weekend.

I'd moved my massage from Wednesday to today, since there's a Father's Day thing on at Kindergarten on Wednesday, so I had a massage, and then went directly to pick up Zoe.

We went out to Westfield Carindale after pick up, to try and get some digital cameras donated to the Kindergarten to replace the ones they've got, which have died. I wasn't successful on the spot. Then we dropped past the pet shop to get some more kitty litter for Smudge, and then got home.

We'd barely gotten home and then Sarah arrived to pick Zoe up.

Andrew Pollock: [life] Day 212: A trip to the pool

Mon, 2014-09-01 18:25

This is what I get for not blogging on the day of, I can't clearly remember what we did on Friday now...

I had the plumber out in the morning, and then some cleaners to give a quote. I can't remember what we did after that.

After lunch we biked to Colmslie Pool and swam for a bit, I remember that much, and then I had some friends join Anshu and us for dinner, but the rest is coming up blank.

Brendan Scott: brendanscott

Mon, 2014-09-01 15:28

AGs is seeking public submissions on Online Copyright Infringement.

Some thoughts are:

The cover letter to the inquiry cites a PWC report prepared for the Australian Copyright Council.  The letter fails to note that gains are offset by the role of intellectual property in transfer pricing by multinationals.  There is strong evidence to suggest that intellectual property regimes have the effect of substantially reducing Australian taxation revenue through the use of transfer pricing mechanisms.

Page 3 of the discussion paper states that  the High Court in Roadshow said “that there were no reasonable steps that could have been taken by iiNet to reduce its subscribers’ infringements.”  The discussion paper goes on to enquire about what reasonable steps a network operator could take to reduce subscribers’ infringements. The whole of the debate about copyright infringement on the internet is infected by this sort of double speak.

The discussion paper does not specifically ask about a three strikes regime. However, it invites discussion of one by raising it in the cover matter and then inviting proposals as to what might be a “reasonable step”. Where noted, my responses on a particular question relate to a three strikes regime.

Question 1:

Compelling an innocent person to assist a third party is to deprive that person of their liberty.  The only reasonable steps that come to mind are for network operators to respond to subpoenas validly issued to them – at least that is determined on a case by case basis under the supervision of a court.

Question 2:

Innocent third parties should not be required to assist in the enforcement of someone else’s rights. Any assistance that an innocent third party is required to give should be at the rights holder’s cost. To do otherwise is effectively to require (in the case of a network) all customers to subsidise the enforcement of the rights holders’ private rights. This is an inefficient and inequitable equivalent to a taxation scheme for public services. The Government may as well compulsorily acquire the rights in question and equitably spread the cost through a levy.

Question 3:

No.  The existing section 36/101 was specifically inserted to provide exactly the clarity proposed here.  Rights holders were satisfied at the time.

Question 4:

Presumably reasonable is an objective test.

Question 5:

This response assumes the proposed implementation of a “three strikes” regime.

There is a Federal Magistrates court which is able to hear copyright infringement cases.  Defendants should have the right to have the case against them heard in a judicial forum. Under a three strikes regime an individual is required to justify their actions based on an accusation of infringement.  In the absence of a justification they suffer a sanction.  Our legal system should not be driven by mere accusations.  Defendants also have the right to know that the case against them is particular to them and not a cookie cutter accusation.

Question 6:

The court should have regard to what aims a block is intended to achieve, whether a block will be effective in achieving those aims and what impact a block will have on innocent third parties which may be affected by it.  For example, when Megaupload was taken down many innocent people lost their data with no warning.  This is more likely to be the case in the future as computing resources are increasingly shared in load balanced cloud storage implementations. These third parties are not represented in court and have no opportunity to put their case before a block is implemented.

A practice should be established whereby the court requires an undertaking from any person seeking a block to indemnify any innocent third party affected by the block against any damage suffered by them. Alternatively, the Government could establish a victims’ compensation scheme that can run alongside such a block. These third parties will be collateral damage from such a scheme. Indeed, if the test for a site is only a “dominant purpose” test then collateral damage is necessarily a consequence of the block. An indemnity will serve the purpose of guiding incentives to reduce damage to innocent third parties.

Question 7

If the Government implements proposals which extend the applicability of authorisation infringements to smaller and smaller entities (e.g. a cafe providing wifi), then the safe harbour provisions need to be sufficiently simple and certain as to allow those entities to rely on them. At the moment they are complex and convoluted. If a cafe is forced to pay hundreds or thousands of dollars for legal advice about their wifi service, they will simply not provide it.

Question 8

Before the impact of measures can be measured [sic] a baseline first needs to be established for the purpose the Copyright Act is intended to serve.   In particular, the purpose of the Copyright Act is not to reduce infringement.  Rather, its titular purpose is to promote the creation of works and other subject matter.  This receives no mention in the discussion paper.  Historically, the Copyright Act has been promoted as necessary to maintain distribution networks (pre 1980s), as a means of providing creators with an income (last 2 centuries, but repeatedly contradicted empirically – most recently in the Don’t Give Up Your Day Job report),  as a natural right of authors (00s – contrary to judicial pronouncements on the issue) and now, apparently, as a means of stimulating the economy.  An Act which has so mutable a purpose ought to be considered with a jaundiced eye.

The reference to the PWC document suggests that the Hargreaves report would be a good starting point for further policy making.

Question 9

The retail price of downloadable copies of copyright works in Australia (exclusive of GST) should not exceed the price in their country of origin by more than 5% when sold directly.  The 5% figure is intended to allow for some additional costs of selling into Australia.

Implement the Productivity Commission’s recommendations on parallel importation.

Question 10, 11

The next two paragraphs of the response to this question deal primarily with a possible three strikes regime, although the final observations are of a general character.

“Three strikes” regulation will effectively shift the burden of enforcement further away from rights holders to the people who are least equipped to implement it. What will parents who receive warning letters do? Will they implement a sophisticated filtering system on their home router? Will they send their children off to a reeducation camp run by the rights holders? More likely they will blanket-ban internet access. How will cafes manage their risk? More likely they will not provide wifi access. This has already been the death knell of community wifi networks in the US. The collateral damage from these proposals is difficult to quantify, but there is every reason to believe it will be widespread. This damage is routinely ignored in policy making.

Will rights holders use such a system against everyone? That is unlikely. Rather, it will be used against some individuals unlucky enough to be first on the list. Those individuals will be used as examples for others. This will be a law which will be enforced in an arbitrary and discriminatory fashion. As such it will undermine respect for the law more generally.

The comments on the proposals above assume that they are acted on bona fide.  Once network operators are conditioned to a Pavlovian response to requests the system will be abused – the Get Up! organisation already believes it has been the subject of misuse: https://www.getup.org.au/campaigns/great-barrier-reef–3/adani-video/someone-wants-to-silence-us-dont-let-them

Evasion technologies have previously been a niche interest.  The size of the market limited their growth.  These provisions will sheet home to all citizens the need to implement evasion technologies, thereby greatly increasing the market and therefore the economic incentive for their evolution.  The long run effect of implementing proposals which effect this form of general surveillance of the population is to weaken national security.

By insulating rights holders from the costs of enforcement, the proposals disconnect rights holders from the very externalities that enforcement creates. If there were ever a recipe for poor policy, such a disconnection would be a key element of it.

Russell Coker: Links August 2014

Mon, 2014-09-01 02:26

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that’s designed to address some of the failings of SimCity type games [3]. I haven’t played it yet but the page describing it is interesting.

Carlos Buento’s article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.

Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There’s a lot of work in keeping up with that.

Racialicious.com has an interesting article about “Moff’s Law” about discussion of media in which someone says “why do you have to analyze it” [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US; it’s a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.

Related posts:

  1. Links August 2013 Mark Cuban wrote an interesting article titled “What Business is...
  2. Links February 2014 The Economist has an interesting and informative article about the...
  3. Links July 2014 Dave Johnson wrote an interesting article for Salon about companies...

Sridhar Dhanapalan: Twitter posts: 2014-08-25 to 2014-08-31

Mon, 2014-09-01 01:27

linux.conf.au News: Big announcement about upcoming announcements...

Sun, 2014-08-31 15:27

The Papers Committee weekend went extremely well and, according to Steve, without bloodshed, although there was some very strong discussion from time to time! The upshot is that we have a fantastic program now, with excellent presentations all across the board.

We have already begun contacting Miniconf organisers to let them know who has been successful and who hasn’t, and over the next couple of weeks we will be sending emails out to everyone who submitted a presentation to let them know how they fared.

If you have been accepted to run a Miniconf then your contact will be Simon Lyall (miniconfs@lca2015.linux.org.au) and if you have been accepted as a speaker then your contact will be Lisa Sands (speakers@lca2015.linux.org.au). We will be asking for a photo of you and your twitter name, as we will be running a Speaker Feature about a different presenter each day - don’t worry - you will be notified on your day!

We want to give great thanks to everyone who submitted papers - you are all still winners in our eyes, and we hope that even if you weren’t selected this time that won’t put you off attending the conference and having a great time. Please note that due to the large volume of submissions, we are unable to provide feedback on why any particular submission was unsuccessful.

Our earlybird registration will be opening soon, so watch this space!

Francois Marier: Outsourcing your webapp maintenance to Debian

Sun, 2014-08-31 08:46

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem, and of course one way to avoid it is to write everything yourself. But that's neither realistic nor desirable.
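
To make this concrete: when you hear about a vulnerability in some module, the first questions are whether it's in your tree at all and which direct dependency pulls it in. npm can answer both (the module name below is just a placeholder):

$ npm ls some-vulnerable-module

Knowing that a vulnerability exists in the first place is the harder part.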

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack: Django, Apache and Postgres.

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
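
You can reproduce both pieces from the command line. A quick sketch, assuming the Gravatar-style MD5 hash of the email address; the _avatars._tcp SRV name is my assumption here, so check the Libravatar documentation for the authoritative record name:

$ echo -n "francois@debian.org" | md5sum
7cc352a2907216992f0f16d2af50b070  -
$ dig +short SRV _avatars._tcp.fmarier.org

A non-empty answer to the dig query is what tells avatar clients to use the domain's own base URL instead of cdn.libravatar.org.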

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)
Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
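
From a mirror operator's point of view, that last step looks something like this (the repository URL and suite are illustrative, not the project's actual ones):

# /etc/apt/sources.list.d/libravatar.list -- hypothetical repository URL
deb http://apt.example.org/debian wheezy main

$ sudo apt-get update && sudo apt-get upgrade    # Libravatar packages ride along with system updates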
Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Maxim Zakharov: Australian Singing Competition

Sun, 2014-08-31 03:25

The Finals concert of the 2014 Australian Singing Competition was an amazing experience, and it was the first time I listened to opera singers live.

Congratulations to the winner, Isabella Moore from New Zealand!

Matt Palmer: Chromium tabs crashing and not rendering correctly?

Sat, 2014-08-30 15:26

If you’ve noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to workaround. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you’re like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)] Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect next time you login, or perhaps even only when you restart X (or, at worst, your entire machine).
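
After your next login, you can check that the new limit has taken effect in a fresh shell (and therefore in anything you launch from it):

$ ulimit -n
65535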

That doesn’t help you if you’ve got Chromium already open, though. If you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record), you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
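
To confirm the new limit has stuck, you can read the live process’s limits back out of /proc:

$ grep 'Max open files' /proc/<PID>/limits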

And now, you can go back to using your tabs as ersatz bookmarks, like I do.