Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Gabriel Noronha: New Electricity Retailer

Thu, 2014-09-04 19:26

So after crunching some more numbers and reading the Greenpeace green energy guide, I decided to change electricity retailers. Based on my need for a high VFIT (see previous post), it was a choice between AGL (my current provider), Click Energy and Diamond Energy.

Power Saving Calculations

OK, so the savings comparison isn't completely fair on AGL: $55 of that $70 saving is the 100% green energy which I'm no longer buying, as Click doesn't offer it on their solar plan, leaving a like-for-like saving of about $15. However, I can buy green energy from an environmental trust for 4.2c/kWh, and it's a tax deduction.

Click saved me the most money and has no contract, against AGL's 3-year killer and Diamond's 1-year one; it was also rated by Greenpeace as mid-range green. I've decided to move to Click Energy, and I'll officially switch at my next meter read.

What about gas? Well, it's going to be switched later, when Click supports it. From Twitter today:

It’s official! We’re pleased to announce Click Energy will be a #naturalgassupplier by the end of the year http://t.co/SOtVNIIDJK

— Click Energy (@click4energy) September 4, 2014

If I've convinced you to switch and you want to get $50, Click has a mates-rates referral program: drop me a message and we'll go from there.

Russell Coker: Inteltech/Clicksend SMS Script

Thu, 2014-09-04 14:26

USER=username
API_KEY=1234ABC
OUTPUTDIR=/var/spool/sms
LOG_SERVICE=local1

I've just written the below script to send SMS via the inteltech.com/clicksend.com service. It takes the above configuration in /etc/sms-pass.cfg, where the username is assigned with the clicksend web page and the API key is a long hex string that clicksend provides as a password. LOG_SERVICE is the syslog facility to use for the log messages; on systems that are expected to send many messages I use "local1", and I use "user" for development systems.

I hope this is useful to someone, and if you have any ideas for improvement then please let me know.

#!/bin/sh
# $1 is destination number
# text is on standard input
# standard output gives message ID on success, and 0 is returned
# standard error gives error from server on failure, and 1 is returned

. /etc/sms-pass.cfg
OUTPUT=$OUTPUTDIR/out.$$

# read the message from stdin, URL-encode whitespace as "+" and
# truncate it to 159 characters
TEXT=`tr "[:space:]" + | cut -c 1-159`

logger -t sms -p $LOG_SERVICE.info "sending message to $1"
wget -O $OUTPUT "https://api.clicksend.com/http/v2/send.php?method=http&username=$USER&key=$API_KEY&to=$1&message=$TEXT" > /dev/null 2> /dev/null
if [ "$?" != "0" ]; then
  echo "Error running wget" >&2
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 - wget error"
  exit 1
fi

# the server replies with XML; success is indicated by <errortext>Success</errortext>
if ! grep -q ^.errortext.Success $OUTPUT ; then
  cat $OUTPUT >&2
  echo >&2
  ERR=$(grep ^.errortext $OUTPUT | sed -e s/^.errortext.// -e s/..errortext.$//)
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 - $ERR"
  rm $OUTPUT
  exit 1
fi

# extract the message ID from the <messageid> element of the response
ID=$(grep ^.messageid $OUTPUT | sed -e s/^.messageid.// -e s/..messageid.$//)
rm $OUTPUT
logger -t sms -p $LOG_SERVICE.info "sent message to $1 with ID $ID"
exit 0
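
For example, assuming the script is saved as /usr/local/bin/sms and made executable (both the path and the destination number here are made up):

echo "backup completed" | /usr/local/bin/sms 61412345678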

Jeremy Kerr: Customising OpenPower firmware

Wed, 2014-09-03 23:26

Now that the OpenPower sources are available, it's possible to build custom firmware images for OpenPower machines. Here's a little guide to show how that's done.

The build process

OpenPower firmware has a number of different components, and some infrastructure to pull it all together. We use buildroot to do most of the heavy lifting, plus a little wrapper, called op-build.

There's a README file, containing build instructions in the op-build git repository, but here's a quick overview:

To build an OpenPower PNOR image from scratch, we'll need a few prerequisites (assuming recent Ubuntu):

sudo apt-get install cscope ctags libz-dev libexpat-dev libc6-dev-i386 \
  gcc g++ git bison flex gcc-multilib g++-multilib libxml-simple-perl \
  libxml-sax-perl

Then we can grab the op-build repository, along with the git submodules:

git clone --recursive git://github.com/open-power/op-build.git

set up our environment and configure using the "palmetto" machine configuration:

. op-build-env
op-build palmetto_defconfig

and build:

op-build

After a while (there is quite a bit of downloading to do on the first build), the build should complete successfully, and you'll have a PNOR image built at output/images/palmetto.pnor.

If you have an existing op-build tree around (colleagues working on OpenPower perhaps?), you can share or copy the dl/ directory to save on download time.
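
For example, to seed a fresh tree from an existing checkout (the source path here is made up):

cp -a ~/src/op-build-existing/dl ./dl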

The op-build command is just a shortcut for a make in the buildroot tree, so the general buildroot documentation applies here too. Just replace "make" with "op-build". For example, we can enable a verbose build with:

op-build V=1

Changing the build configuration

Above, we used palmetto_defconfig as the base buildroot configuration. This defines overall options for the build; things like:

  • Toolchain details used to build the image
  • Which firmware packages are used
  • Which packages are used in the petitboot bootloader environment
  • Which kernel configuration is used for the petitboot bootloader environment

This configuration can be changed through buildroot's menuconfig UI. To adjust the configuration:

op-build menuconfig

And buildroot's configuration interface will be shown:

As an example, let's say we want to add the "file" utility to the petitboot environment. To do this, we can navigate to that option in the Target Packages section (Target Packages → Shell and Utilities → file), and enable the option:

Then exit (saving changes) and rebuild:

op-build

- the resulting image will have the file command present in the petitboot shell environment.

Kernel configuration

There are a few other configuration targets to influence the build process; the most interesting for our case will be the kernel configuration. Since we use petitboot as our bootloader, it requires a Linux kernel for the initial bootloader environment. The set of drivers in this kernel will dictate which devices you'll be able to boot from.

So, if we want to enable booting from a new device, we'll need to include an appropriate driver in the kernel. To adjust the kernel configuration, use the linux-menuconfig target:

op-build linux-menuconfig

- which will show the standard Linux "menuconfig" interface:

From here, you can alter the kernel configuration. Once you're done, save changes and exit. Then, to build the new PNOR image:

op-build

Customised packages

If you have a customised version of one of the packages used in the OpenPower build, you can easily tell op-build to use your local package. There are a number of package-specific make variables documented in the buildroot generic package reference, the most interesting ones being the _VERSION and _SITE variables.

For example, let's say we have a custom petitboot tree that we want to use for the build. We've committed our changes in the petitboot tree and want to build a new PNOR image. For the sake of this example, the git SHA of the petitboot commit we'd like to build is 2468ace0, and our custom petitboot tree is at /home/jk/devel/petitboot.

To build a new PNOR image with this particular petitboot source, we need to specify a few buildroot make variables:

op-build PETITBOOT_SITE=/home/jk/devel/petitboot \
  PETITBOOT_SITE_METHOD=git \
  PETITBOOT_VERSION=2468ace0

This is what these variables are doing:

  • PETITBOOT_SITE=/home/jk/devel/petitboot - tells op-build where our custom source tree is. This could be a git URL or a local path.
  • PETITBOOT_SITE_METHOD=git - tells op-build that PETITBOOT_SITE is a git tree. If we were using a git:// URL for PETITBOOT_SITE, then this variable would be set automatically.
  • PETITBOOT_VERSION=2468ace0 - tells op-build which version of petitboot to checkout. This can be any commit reference that git understands.

The same method can be used for any of the other packages used during build. For OpenPower builds, you may also want to use the SKIBOOT_* and LINUX_* variables to include custom skiboot firmware and kernel in the build.
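
For instance, pulling in a custom skiboot tree would look much the same (a sketch; the path and commit ID here are made up):

op-build SKIBOOT_SITE=/home/jk/devel/skiboot \
  SKIBOOT_SITE_METHOD=git \
  SKIBOOT_VERSION=13579bd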

If you'd prefer to test new sources without committing to git, you can use _SITE_METHOD=local. This will copy the source tree (defined by _SITE) to the buildroot tree and use it directly. For example:

op-build SKIBOOT_SITE=/home/jk/devel/skiboot \
  SKIBOOT_SITE_METHOD=local

- will build the current (and not-necessarily-committed) sources in /home/jk/devel/skiboot. Note that buildroot has no way to tell if your code has changed with _SITE_METHOD=local. If you re-build with this, it's safer to clean the relevant source tree first:

op-build skiboot-dirclean

Andrew Pollock: [life] Day 217: Father's Day at Kindergarten, spring cleaning, and swim class

Wed, 2014-09-03 23:25

I managed to crank out a 10 km run this morning. I was even happy with the pace of it.

I had my chiropractic appointment, and then the cleaners that I'd booked for a deep spring clean of my apartment descended on the place to get to work. Not long after that, it was time to head to Kindergarten for their Father's Day morning tea extravaganza.

Because my baking hadn't turned out to my satisfaction, I had to swing by Brumby's on the way there to get some last minute stuff, which made me about 10 minutes late. I got a phone call from Jason as I was walking in asking how far away I was, because Zoe was worried about where I was.

What ensued was a lovely morning. Zoe dragged me all over the playground, despite me being quite familiar with the Kindergarten, and then we played some group games and had morning tea. Dad arrived just in time for morning tea.

After that, we made pin-on paper neck ties (apparently Father's Day is all about the neck tie) and then it was time to go.

I got back just as the cleaners were finishing up, and used the remaining couple of hours to do some emails and phone calls before biking back to Kindergarten to pick Zoe up.

We had some time to kill before her swim class, so we went to the park near the pool, and Zoe did some more monkey bar practice.

After swim class we biked home, and had a nice dinner.

Lev Lafayette: Stepping Down as President of Linux Users Victoria and 2014 Committee (President and Secretary's) Report 2014

Wed, 2014-09-03 22:28

LUV annual general meetings are typically our smallest meetings of the year. It is a bold few technically-inspired individuals who wish to sit through the necessary administrivia that keeps the organisation alive in a formal sense, and the lack of an advertised speaker does suggest the possibility of ad-hoc pot luck when it comes to the short, technical lightning talks. However, I would like to make a special plea for LUV members to attend this AGM. The reason is that, after four years as president of LUV, I am going to step down from this position.

read more

BlueHackers: Follow up

Tue, 2014-09-02 23:00

I have a mental illness from consuming weed for all those years. I have major depression & anxiety. I also get paranoid about germs/what people think of me/my health. I think sometimes I make things worse for myself. The best thing that has ever happened is meeting my lovely Becci. She definitely has taken my unwell self and made me well. I had long quit the weed, but recovering from heavy usage takes the brain a while. Years in fact. I have been in and out of work, fired for having a mental illness (CBA). More recently, as in last year, my mother doused herself in gasoline and set herself alight. I haven't walked easy street. But I try to keep my head up and wits about me. I have a family to care for, and my grandparents who helped raise me quite a bit. Well, a lot.

BlueHackers: A bit about “jlg”

Tue, 2014-09-02 22:37

I'm 29, male, from sunny Brisbane (sunny at the moment). I was born in Adelaide, SA, Australia in a hospital called Modbury Hospital. It's still there. I have one son. I also have a daughter who by law I am not legally allowed to see, as I am not on the birth certificate, but I'm 99.99 percent sure I'm her father. Her name for the record is Annabel; I'm unsure of the spelling. Our son (mine and Bec's) is being raised with so much love and care, and I only wish the same for my daughter. I should mention I'm no street thug or criminal. I actually have no criminal record. I survive on $500AUD a fortnight currently, as of right this moment, which is not much for an overweight male. I don't really have any vices per se, but I do use computers so frequently that at my age of 29 I have short-sighted vision. I should mention I don't have diabetes.

My story is a common one, I think: man meets woman (Steve Cullen, my bio dad), has sex, finds out there's a baby and does a runner. I have to this day never met my bio dad. I have seen a photo when I was younger. He was some bald dude. I don't think much of him and I actually don't speak much of him. My mother was awesome and she still is, albeit after her last suicide attempt. I will get to this later. I should mention I was a heavy smoker of cannabis from 2003 to 2007. I attended a place called HUMBUG. Ironically it was a friend I made called Daniel who got me into marijuana. He would write code, I'd hack computers. We kind of worked as a team, because the trust was set by us consuming so much (I will call it weed). I'm not proud of my drug usage, but little did I know my Mum was a heavy user of other "drugs". She was also in the army. For roughly 6 years she taught Army service men and women about English/maths etc., as she was a UniSA-educated teacher. I on the other hand am self taught. From a young age I was somewhat unwillingly writing phishing attacks, but for chat websites. I would call these fake logins via HTML. I did this all roughly during high school. I admit freely that the school network was a joke. That doesn't mean I abused it. I just made sure I couldn't use their computers by inputting ASCII characters, alt+256 the invisible char, into the login screen I was using. It was Novell and it would not log in if you entered this char; do it quickly without the teacher looking and you'd get moved, say, to a girl you liked, and flirt with her... :-o)

For the sake of keeping things realistic and true, I was actually very frigid. I dated some real nice girls; I just couldn't even get any courage to do anything more than sitting near them. That obviously changed in my final few years. I have always been anti-authority, because I actually had 0 parent supervision for most of my teens. I would sit in front of my IBM Aptiva listening to god-awful rap music I won't mention, online. I would sit reading RFCs, reading how to write HTML, then thinking outside the box and doing what's now called XSS (aka cross-site scripting). Yahoo was one I did. Obviously I have never been the type of guy to go "hey, here's my handle and here I am, LEA, track me down". I prefer doing these things without an identity. I always have, always will. I am a freelance individual. Whilst I sympathize with various well known hacktivists, I do not go out of my way to engage them.

I think this is enough for now... I will update soon. It's 10.37pm, I know, not that late, but all this writing has exhausted me. More later.

Andrew Pollock: [life] Day 216: Kindergarten, startup stuff, tennis and baking

Tue, 2014-09-02 21:25

I wanted to do a run this morning, but the flesh was weak, so it didn't happen.

I was a bit late getting away to pick up Zoe from Sarah's place, and the traffic was pretty bad again. I got to Kindergarten a bit later than I'd have liked.

I had a meeting with an accountant to discuss some stuff at 9:30am, which I managed to comfortably make. I even got the 30 minute consult for free, which was pretty awesome. I'm pretty close to shelving this particular startup idea, because I can't structure it in the way that I'd like, but I can possibly pivot slightly and still solve the overall problem I'm trying to solve. I have some more research to do before I can make a final decision.

After I got home, I procrastinated for a bit and went and had a cup of tea with my neighbour, before finishing off the assessment for the unit of my real estate course that I've been making slow progress on. It finally went into the mail today, so I'm happy about that.

I drove to Kindergarten to take Zoe to tennis class. After tennis class, she wanted Megan to come back for a play date, and Laura had some errands to run, so that worked out well. I had some baking to do for the Father's Day thing at Kindergarten tomorrow, and Megan was keen to help. Zoe was more interested in playing dress ups. Ultimately all they wanted to do was lick the spatulas anyway.

I made my favourite red velvet cheesecake brownie, and tried milling some whole wheat down into flour. The texture was a bit gritty like cornmeal, but it seemed to turn out okay. I've been having problems with it not setting properly in the middle the last few times I've made it, and this time was no different, so I'll have to try making another batch in the morning.

Anshu dropped in after work, and then Laura got back to pick up Megan just before Sarah arrived to pick up Zoe. An easy afternoon.

Craige McWhirter: Matching Ceph Volumes To Nova Instances

Tue, 2014-09-02 14:27
Assumptions:

Introduction:

If you have a variety of Ceph pools on different hardware, you may want to be able to find out which Ceph pool each instance is on, for either performance or billing purposes.

Here's how I go about it...

Obtain the Volume ID

If your OpenStack service is configured to boot instances from volumes, then you can get the volume ID from the os-extended-volumes:volumes_attached field:

$ nova show Tutorial0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "96321d31-27f7-47e5-b2a8-fcd1c0c30767"}] |

Query the Ceph Cluster

I've not found an elegant way to go about this yet, so log into one of your Ceph servers and query the Ceph pools...

Grab a list of your Ceph pools:

# ceph osd lspools
10 poola,11 poolb,

Then query each pool to locate your volume:

# rbd -p poola ls | grep 96321d31-27f7-47e5-b2a8-fcd1c0c30767
# rbd -p poolb ls | grep 96321d31-27f7-47e5-b2a8-fcd1c0c30767
volume-96321d31-27f7-47e5-b2a8-fcd1c0c30767

Voila! The volume for your instance Tutorial0 is in the Ceph pool poolb.
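
Since querying each pool by hand gets tedious, here's a minimal sketch that loops the same check over every pool (it parses the "ceph osd lspools" output shown above and assumes pool names contain no spaces):

VOL=96321d31-27f7-47e5-b2a8-fcd1c0c30767
# pool names are the second field of each comma-separated lspools entry
for pool in $(ceph osd lspools | tr ',' '\n' | awk '{print $2}'); do
  rbd -p "$pool" ls | grep -q "$VOL" && echo "$VOL is in pool: $pool"
done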

Andrew Pollock: [life] Day 215: Kindergarten, tinkering, massage and some shopping

Mon, 2014-09-01 17:25

Zoe was yelling out for me at 1:30am because her polar bear had fallen out of bed. She then proceeded to have a massive sleep in until 7:30am, so the morning was a little bit rushed.

That said, she took it upon herself to make breakfast while I was in the shower, which was pretty impressive.

Being the first day of Spring and a nice day at that, I wanted to get back into the habit of biking to Kindergarten, so despite it being a bit late, even though we did a very good job of getting ready in a hurry, we biked to Kindergarten. Zoe was singing all the way there, it was very cute.

I got home and spent the day getting Puppet to manage my BeagleBone Black, since I'd had to reinstall it as it had semi-died over the weekend.

I'd moved my massage from Wednesday to today, since there's a Father's Day thing on at Kindergarten on Wednesday, so I had a massage, and then went directly to pick up Zoe.

We went out to Westfield Carindale after pick up, to try and get some digital cameras donated to the Kindergarten to replace the ones they've got, which have died. I wasn't successful on the spot. Then we dropped past the pet shop to get some more kitty litter for Smudge, and then got home.

We'd barely gotten home and then Sarah arrived to pick Zoe up.

Andrew Pollock: [life] Day 212: A trip to the pool

Mon, 2014-09-01 17:25

This is what I get for not blogging on the day of, I can't clearly remember what we did on Friday now...

I had the plumber out in the morning, and then some cleaners to give a quote. I can't remember what we did after that.

After lunch we biked to Colmslie Pool and swam for a bit, I remember that much, and then I had some friends join Anshu and us for dinner, but the rest is coming up blank.

Brendan Scott: brendanscott

Mon, 2014-09-01 14:28

The Attorney-General's Department (AGs) is seeking public submissions on Online Copyright Infringement.

Some thoughts are:

The cover letter to the inquiry cites a PWC report prepared for the Australian Copyright Council.  The letter fails to note that gains are offset by the role of intellectual property in transfer pricing by multinationals.  There is strong evidence to suggest that intellectual property regimes have the effect of substantially reducing Australian taxation revenue through the use of transfer pricing mechanisms.

Page 3 of the discussion paper states that  the High Court in Roadshow said “that there were no reasonable steps that could have been taken by iiNet to reduce its subscribers’ infringements.”  The discussion paper goes on to enquire about what reasonable steps a network operator could take to reduce subscribers’ infringements. The whole of the debate about copyright infringement on the internet is infected by this sort of double speak.

The discussion paper does not specifically ask about a three strikes regime. However, it invites discussion of a three strikes regime by raising it in the cover matter, then inviting proposals as to what might be a "reasonable step". Where noted, my responses on a particular question relate to a three strikes regime.

Question 1:

Compelling an innocent person to assist a third party is to deprive that person of their liberty. The only reasonable steps that come to mind are for network operators to respond to subpoenas validly issued to them – at least that way, assistance is determined on a case by case basis under the supervision of a court.

Question 2:

Innocent third parties should not be required to assist in the enforcement of someone else's rights. Any assistance that an innocent third party is required to give should be at the rights holder's cost. To do otherwise is to effectively require (in the case of a network) all customers to subsidise the enforcement of the rights holders' private rights. This is an inefficient and inequitable equivalent to a taxation scheme for public services. The Government may as well compulsorily acquire the rights in question and equitably spread the cost through a levy.

Question 3:

No.  The existing section 36/101 was specifically inserted to provide exactly the clarity proposed here.  Rights holders were satisfied at the time.

Question 4:

Presumably reasonable is an objective test.

Question 5:

This response assumes the proposed implementation of a “three strikes” regime.

There is a Federal Magistrates Court which is able to hear copyright infringement cases. Defendants should have the right to have the case against them heard in a judicial forum. Under a three strikes regime an individual is required to justify their actions based on an accusation of infringement. In the absence of a justification they suffer a sanction. Our legal system should not be driven by mere accusations. Defendants also have the right to know that the case against them is particular to them and not a cookie-cutter accusation.

Question 6:

The court should have regard to what aims a block is intended to achieve, whether a block will be effective in achieving those aims and what impact a block will have on innocent third parties which may be affected by it.  For example, when Megaupload was taken down many innocent people lost their data with no warning.  This is more likely to be the case in the future as computing resources are increasingly shared in load balanced cloud storage implementations. These third parties are not represented in court and have no opportunity to put their case before a block is implemented.

A practice should be established whereby the court requires an undertaking from any person seeking a block to indemnify any innocent third party affected by the block against any damage suffered by them. Alternatively, the Government could establish a victims' compensation scheme that can run alongside such a block. These third parties will be collateral damage from such a scheme. Indeed, if the test for a site is only a "dominant purpose" test then collateral damage is necessarily a consequence of the block. An indemnity will serve the purpose of guiding incentives to reduce damage to innocent third parties.

Question 7

If the Government implements proposals which extend the applicability of authorisation infringement to smaller and smaller entities (eg a cafe providing wifi), then the safe harbour provisions need to be sufficiently simple and certain as to allow those entities to rely on them. At the moment they are complex and convoluted. If a cafe is forced to pay hundreds or thousands of dollars for legal advice about their wifi service, they will simply not provide it.

Question 8

Before the impact of measures can be measured [sic] a baseline first needs to be established for the purpose the Copyright Act is intended to serve.   In particular, the purpose of the Copyright Act is not to reduce infringement.  Rather, its titular purpose is to promote the creation of works and other subject matter.  This receives no mention in the discussion paper.  Historically, the Copyright Act has been promoted as necessary to maintain distribution networks (pre 1980s), as a means of providing creators with an income (last 2 centuries, but repeatedly contradicted empirically – most recently in the Don’t Give Up Your Day Job report),  as a natural right of authors (00s – contrary to judicial pronouncements on the issue) and now, apparently, as a means of stimulating the economy.  An Act which has so mutable a purpose ought to be considered with a jaundiced eye.

The reference to the PWC document suggests that the Hargreaves report would be a good starting point for further policy making.

Question 9

The retail price of downloadable copies of copyright works in Australia (exclusive of GST) should not exceed the price in their country of origin by more than 5% when sold directly.  The 5% figure is intended to allow for some additional costs of selling into Australia.

Implement the Productivity Commission’s recommendations on parallel importation.

Question 10, 11

The next two paragraphs of the response to this question deals primarily with a possible three strikes regime although the final observations are of a general character.

"Three strikes" regulation will effectively shift the burden of enforcement further away from rights holders to the people who are least equipped to implement it. What will parents who receive warning letters do? Will they implement a sophisticated filtering system on their home router? Will they send their children off to a re-education camp run by the rights holders? More likely they will blanket-ban internet access. How will cafes manage their risk? More likely they will not provide wifi access. This has already been the death knell of community wifi networks in the US. The collateral damage from these proposals is difficult to quantify, but there is every reason to believe it will be widespread. This damage is routinely ignored in policy making.

Will rights holders use such a system against everyone? That is unlikely. Rather, it will be used against some individuals unlucky enough to be first on the list. Those individuals will be used as examples for others. This will be a law which will be enforced in an arbitrary and discriminatory fashion. As such it will undermine respect for the law more generally.

The comments on the proposals above assume that they are acted on bona fide.  Once network operators are conditioned to a Pavlovian response to requests the system will be abused – the Get Up! organisation already believes it has been the subject of misuse: https://www.getup.org.au/campaigns/great-barrier-reef–3/adani-video/someone-wants-to-silence-us-dont-let-them

Evasion technologies have previously been a niche interest.  The size of the market limited their growth.  These provisions will sheet home to all citizens the need to implement evasion technologies, thereby greatly increasing the market and therefore the economic incentive for their evolution.  The long run effect of implementing proposals which effect this form of general surveillance of the population is to weaken national security.

By insulating rights holders from the costs of enforcement the proposals disconnect rights holders from the very externalities that enforcement creates. If there were ever a recipe for poor policy, such a disconnection would be a key element of it.

Russell Coker: Links August 2014

Mon, 2014-09-01 01:26

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that’s designed to address some of the failings of SimCity type games [3]. I haven’t played it yet but the page describing it is interesting.

Carlos Bueno's article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.

Kiwix is an offline Wikipedia reader for Android, needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There’s a lot of work in keeping up with that.

Racialicious.com has an interesting article about “Moff’s Law” about discussion of media in which someone says “why do you have to analyze it” [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US, it’s a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.

Sridhar Dhanapalan: Twitter posts: 2014-08-25 to 2014-08-31

Mon, 2014-09-01 00:27

linux.conf.au News: Big announcement about upcoming announcements...

Sun, 2014-08-31 14:27

The Papers Committee weekend went extremely well and without bloodshed according to Steve, although there was some very strong discussion from time to time! The upshot is that we have a fantastic program now, with excellent presentations all across the board.

We have already begun contacting Miniconf organisers to let them know who has been successful and who hasn’t, and over the next couple of weeks we will be sending emails out to everyone who submitted a presentation to let them know how they fared.

If you have been accepted to run a Miniconf then your contact will be Simon Lyall (miniconfs@lca2015.linux.org.au) and if you have been accepted as a speaker then your contact will be Lisa Sands (speakers@lca2015.linux.org.au). We will be asking for a photo of you and your twitter name, as we will be running a Speaker Feature about a different presenter each day - don’t worry - you will be notified on your day!

We want to give great thanks to everyone who submitted papers - you are all still winners in our eyes, and we hope that even if you weren’t selected this time that won’t put you off attending the conference and having a great time. Please note that due to the large volume of submissions, we are unable to provide feedback on why any particular submission was unsuccessful.

Our earlybird registration will be opening soon, so watch this space!

Francois Marier: Outsourcing your webapp maintenance to Debian

Sun, 2014-08-31 07:46

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack:

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
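
To make the hashing step concrete: like Gravatar, Libravatar accepts an MD5 hash of the trimmed, lowercased email address, so the hash can be computed with standard tools (the SRV record name below is an assumption based on the Libravatar federation documentation):

$ echo -n "francois@debian.org" | md5sum
7cc352a2907216992f0f16d2af50b070  -
$ dig +short SRV _avatars._tcp.fmarier.org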

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.

Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Maxim Zakharov: Australian Singing Competition

Sun, 2014-08-31 02:25

The Finals concert of the 2014 Australian Singing Competition was an amazing experience, and it was the first time I listened to opera singers live.

Congratulations to the winner, Isabella Moore from New Zealand!

Matt Palmer: Chromium tabs crashing and not rendering correctly?

Sat, 2014-08-30 14:26

If you've noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you're not alone. Thankfully, it's relatively easy to work around. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you're like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)] Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect the next time you log in, or perhaps even only when you restart X (or, at worst, your entire machine).
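
Once you've logged back in, you can confirm the new limit with the shell's built-in ulimit (a quick check, not part of the original workaround):

$ ulimit -n
65535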

If you've got Chromium already open and you'd like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record), then you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
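
If you do have the util-linux prlimit program, the equivalent one-liner would be something along these lines (a sketch; check prlimit(1) for the exact flag syntax on your system):

prlimit --pid 22161 --nofile=65535:65535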

And now, you can go back to using your tabs as ersatz bookmarks, like I do.