Planet Linux Australia

News: Speaker Feature: Lana Brindley & Alexandra Settle, Olivier Bilodeau

Mon, 2014-10-27 11:28
Lana Brindley and Alexandra Settle: 8 writers in under 8 months: from zero to a docs team in no time flat

11:35am Thursday 15th January 2015

Lana and Alexandra are both technical writers at Rackspace, the open cloud company.

Lana has been writing open source technical documentation for about eight years, and is currently working on documenting OpenStack at Rackspace. She does a lot of speaking, mostly about writing, but also covers topics ranging from open source software to geek feminism and working in IT.

Lana is also involved in several volunteer projects, including Girl Geek Dinners, LinuxChix, OWOOT (Oceania Women of Open Tech), and various Linux Users Groups (LUGs).

Alexandra is a technical writer with the Rackspace Cloud Builders Australia team. She began her career as a writer on the cloud documentation team at Red Hat, Australia. Alexandra prefers Fedora over other Linux distributions.

Recently she was part of a team that authored the OpenStack Design Architecture Guide, and hopes to further promote involvement in the OpenStack community within Australia.

For more information on Lana and Alexandra and their presentation, see here. You can follow them as @Loquacities (Lana) or @dewsday (Alexandra) and don’t forget to mention #LCA2015.

Olivier Bilodeau: Advanced Linux Server-Side Threats: How they work and what you can do about them

1:20pm Friday 16th January 2015

Olivier is an engineer who loves technology, software, security, open source, Linux, brewing beer, travel and Android.

Coming from the dusty Unix server room world, Olivier evolved professionally through networking, information security and open source software development to finally become a malware researcher at ESET Canada. Presenting at Defcon, publishing in (In)secure Mag, teaching infosec to undergraduates (ÉTS), driving the NorthSec Hacker Jeopardy and co-organizing the MontréHack training initiative are among his noteworthy successes.

For more information on Olivier and his presentation, see here. You can follow him as @obilodeau and don’t forget to mention #LCA2015.

Sridhar Dhanapalan: Twitter posts: 2014-10-20 to 2014-10-26

Mon, 2014-10-27 01:27

Craige McWhirter: Automating Building and Synchronising Local & Remote Git Repos With GitHub

Sat, 2014-10-25 22:28

I've blogged about some git configurations in the past, in particular working with remote git repos.

I have a particular workflow for most git repos:

  • I have a local repo on my laptop
  • I have a remote git repo on my server
  • I have a public repo on GitHub that functions as a backup.

When I push to my remote server, a post receive hook automatically pushes the updates to Github. Yay for automation.
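A hook like that is easy to replicate. Here's a minimal sketch of such a post-receive hook (my own reconstruction; the remote name "github" is an assumption, not necessarily what Craige uses):

```shell
#!/bin/sh
# post-receive hook, installed in the bare repo on the remote server.
# Mirrors every accepted push on to GitHub, assuming a remote named
# "github" has already been added to this repository.
git push --mirror github
```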

However, this wasn't enough automation, as I found myself creating git repos and running through the setup steps more often than I'd like. As a result I wrote a script that takes all the manual steps I go through to set up my workflow and automates them.

The script currently does the following:

  • Builds a git repo locally
  • Adds a README.mdwn and a LICENCE, then commits the changes
  • Builds a git repo hosted on your remote git server
  • Adds a git hook to the remote server for automatically pushing to GitHub
  • Adds a git remote for GitHub to the remote server
  • Creates a repo at GitHub via the v3 API
  • Pushes the README and LICENCE to the remote, which pushes to GitHub

It's currently written in bash and has no error handling.
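The script itself isn't shown in the post, but the listed steps could be sketched roughly like this (entirely my reconstruction: the function name, server variable, paths and remote names are placeholders; only the create-repo call uses the standard GitHub v3 API endpoint):

```shell
#!/bin/sh
# Rough sketch of the setup steps described above. All names here
# (new_repo, GIT_SERVER, paths) are placeholders, not the author's.
set -e

new_repo() {
    repo=$1
    server=${GIT_SERVER:-git.example.com}   # assumption: your remote git host

    # 1. Build the repo locally, add a README.mdwn and LICENCE, commit
    mkdir "$repo" && cd "$repo"
    git init
    touch README.mdwn LICENCE
    git add README.mdwn LICENCE
    git commit -m "Initial commit"

    # 2. Build the bare repo on the remote git server
    ssh "$server" "git init --bare git/$repo.git"

    # 3. Add the GitHub remote and a post-receive hook on the server
    ssh "$server" "git -C git/$repo.git remote add github git@github.com:$GITHUB_USER/$repo.git"
    ssh "$server" "printf '%s\n' '#!/bin/sh' 'git push --mirror github' \
        > git/$repo.git/hooks/post-receive && chmod +x git/$repo.git/hooks/post-receive"

    # 4. Create the repo at GitHub via the v3 API
    curl -H "Authorization: token $GITHUB_TOKEN" \
         -d "{\"name\": \"$repo\"}" https://api.github.com/user/repos

    # 5. Push to the remote, whose hook mirrors everything to GitHub
    git remote add origin "ssh://$server/~/git/$repo.git"
    git push -u origin master
}
```

Called as `new_repo myproject`, a script along these lines would leave you with a local repo whose pushes to origin are mirrored to GitHub by the server-side hook.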

I've planned a rewrite in Haskell which will have error handling.

If this is of use to you, enjoy :-)

Mark Terle: That rare feeling …

Sat, 2014-10-25 16:25

… of actually completing things.

Upon reflection, it appears to have been a successful week.

Work – We relocated offices (including my own desk (again)) over the previous week, from one slightly pre-used office building to another more well-used office building. My role in this project was to ensure that the mechanics of the move, as far as IT and comms went, occurred and proceeded smoothly. After recabling the floor and working with the networks, telephones and desktop staff, it was an almost flawless move, and everyone was up and running easily on Monday morning. I received lots of positive feedback, which was good.

Choir – The wrap-up SGM for the 62nd Australian Intervarsity Choral Festival Perth 2011, Inc happened. Pending the incorporation of the next festival, it is all over bar a few cheques and some paperwork. Overall it was a great festival, and as Treasurer I was pleased with the final financial result (positive).

Hacking – This week's little project has been virtualsnack. This is a curses emulator of the UCC Snack Machine and its associated ROM. It is based on a previous emulator, written with PyGTK and Glade, that had bitrotted over the past ten years to be non-functioning and not worth the effort to resurrect. The purpose of the emulator is to enable development of code that speaks to the machine without having to have the real machine available to test against.

I chose to keep the code in Python and used npyscreen as the curses UI library. One of the intermediate steps was creating a code sample, which creates a daemon that speaks to a curses interface.

I hereby present V1.0 “Gobbledok” of virtualsnack. virtualsnack is hosted up on Github for the moment, but may move in future. I suspect this item of software will only be of interest to my friends at UCC.

Andrew Pollock: [life] Day 268: Science Friday, TumbleTastics, haircuts and a big bike outing

Fri, 2014-10-24 23:26

I didn't realise how jam packed today was until we sat down at dinner time and recounted what we'd done today.

I started the day pretty early, because Anshu had to be up for an early flight. I pottered around at home cleaning up a bit until Sarah dropped Zoe off.

After Zoe had watched a bit of TV, I thought we'd try some bottle rocket launching for Science Friday. I'd impulse-purchased an AquaPod at Jaycar last year, and hadn't gotten around to using it yet.

We wandered down to Hawthorne Park with the AquaPod, an empty 2 litre Sprite bottle, the bicycle pump and a funnel.

My one complaint with the AquaPod would have to be that the feet are too smooth. If you don't tug the string strongly enough you end up just dragging the whole thing across the ground, which isn't really what you want to be doing. Once Zoe figured out how to yank the string the right way, we were all good.

We launched the bottle a few times, but I didn't want to waste a huge amount of water, so we stopped after about half a dozen launches. Zoe wanted to have a play in the playground, so we wandered over to that side of the park for a bit.

It was getting close to time for TumbleTastics, and we needed to go via home to get changed, so we started the longish walk back home. It was slow going in the mid-morning heat with no scooter, but we got there eventually. We had another mad rush to get to TumbleTastics on time, and miraculously managed to make it there just as they were calling her name.

Lachlan wasn't there today, and I was feeling lazy, and Zoe was keen for a milkshake, so we dropped into Ooniverse on the way home. Zoe had a great old time playing with everything there.

After we got home again, we biked down to the Bulimba post office to collect some mail, and then biked over for a haircut.

After our haircuts, Zoe wanted to play in Hardcastle Park, so we biked over there for a bit. I'd been wanting to go and check out the newly opened Riverwalk and try taking the bike and trailer on a CityCat. A CityCat just happened to be arriving when we got to the park, but Zoe wasn't initially up for it. As luck would have it, she changed her mind as the CityCat docked, but it was too late to try and get on that one. We got on the next one instead.

I wasn't sure how the bike and the trailer were going to work out on the CityCat, but it worked out pretty well going from Hawthorne to New Farm Park. We boarded at Hawthorne from the front left hand side, and disembarked at New Farm Park from the front right hand side, so I basically just rolled the bike on and off again, without needing to worry about turning it around. It was a bit tight cornering from the pontoon to the gangway, but the deckhand helped me manoeuvre the trailer.

It was quite a nice little ride through the back streets of New Farm to get to the start of the Riverwalk, and we had a nice quick ride into the city. We biked all the way along the riverside through to the Old Botanic Gardens. We stopped for a little play in the playground that Zoe had played in the other weekend when we were wandering around for Brisbane Open House, and then continued through the gardens, over the Goodwill Bridge, and the bottom of the Kangaroo Point cliffs.

We wound our way back home through Dockside, and Mowbray Park and along the bikeway alongside Wynnum Road. It was a pretty huge ride, and I'm excited that it's opened up an easy way to access Southbank by bicycle. I'm looking forward to some bigger forays in the near future.

Tim Serong: Watching Grass Grow

Fri, 2014-10-24 19:27

For Hackweek 11 I thought it’d be fun to learn something about creating Android apps. The basic training is pretty straightforward, and the auto-completion (and auto-just-about-everything-else) in Android Studio is excellent. So having created a “hello world” app, and having learned something about activities and application lifecycle, I figured it was time to create something else. Something fun, but something I could reasonably complete in a few days. Given that Android devices are essentially just high res handheld screens with a bit of phone hardware tacked on, it seemed a crime not to write an app that draws something pretty.

The openSUSE desktop wallpaper, with its happy little Geeko sitting on a vine, combined with all the green growing stuff outside my house (it’s spring here) made me wonder if I couldn’t grow a little vine jungle on my phone, with many happy Geekos inhabiting it.

Android has OpenGL ES, so thinking that might be the way to go I went through the relevant lesson, and was surprised to see nothing on the screen where there should have been a triangle. Turns out the view is wrong in the sample code. I also realised I’d probably have to be generating triangle strips from curvy lines, then animating them, and the brain cells I have that were once devoted to this sort of graphical trickery are so covered in rust that I decided I’d probably be better off fiddling around with beziers on a canvas.

So, I created an app with a SurfaceView and a rendering thread which draws one vine after another, up from the bottom of the screen. Depending on Math.random() it extends a branch out to one side, or the other, or both, and might draw a Geeko sitting on the bottom most branch. Originally the thread lifecycle was tied to the Activity (started in onResume(), killed in onPause()), but this causes problems when you blank the screen while the app is running. So I simplified the implementation by tying the thread lifecycle to Surface create/destroy, at the probable expense of continuing to chew battery if you blank the screen while the app is active.

Then I realised that it would make much more sense to implement this as live wallpaper, rather than as a separate app, because then I’d see it running any time I used my phone. Turns out this simplified the implementation further. Goodbye annoying thread logic and lifecycle problems (although I did keep the previous source just in case). Here’s a screenshot:

The final source is on github, and I’ve put up a release build APK too in case anyone would like to try it out – assuming of course that you trust me not to have built a malicious binary, trust github to host it, and trust SSL to deliver it safely.


Update 2014-10-27: The Geeko Live Wallpaper is now up on the Google Play store, although for some reason the “Live Wallpaper” category wasn’t available, so it’s in “Personalization” until (hopefully) someone in support gets back to me and tells me what I’m missing to get it into the right category.

Updated Update: Someone in support got back to me. “Live Wallpaper” can’t be selected as a category in the developer console, rather you have to wait for Google’s algorithms to detect that the app is live wallpaper and recategorize it automatically.

Michael Still: Specs for Kilo

Fri, 2014-10-24 14:27
Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.


  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 api: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.


  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).

Containers Service

Hypervisor: Docker

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).

Hypervisor: Ironic

Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).

Hypervisor: libvirt

Instance features


  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.


  • Enable lazy translations of strings: review 126717 (fast tracked).


  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.


  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.

Tags for this post: openstack kilo blueprint spec

Related posts: One week of Nova Kilo specifications; Compute Kilo specs are open; On layers; Juno nova mid-cycle meetup summary: slots; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration


Andrew Pollock: [life] Day 267: An outing to the Valley for lunch, and swim class

Fri, 2014-10-24 09:25

I was supposed to go to yoga in the morning, but I just couldn't drag my sorry arse out of bed with my man cold.

Sarah dropped Zoe around, and she watched a bit of TV while we were waiting for a structural engineer to come and take a look at the building's movement-related issues.

While I was downstairs showing the engineer around, Zoe decided she'd watched enough TV and, remembering that I'd said we needed to tidy up her room the previous morning, but not had time to, took herself off to her room and tidied it up. I was so impressed.

After the engineer was finished, we walked to the ferry terminal to take the cross-river ferry over to Teneriffe, and catch the CityGlider bus to the Valley for another one of the group lunches I get invited to.

After lunch, we reversed our travel, dropping into the hairdresser on the way home to make an appointment for the next day. We grabbed a few things from the Hawthorne Garage on the way through.

We pottered around at home for a little bit before it was time to bike to swim class.

After swim class, we biked home, and Zoe watched some TV while I got organised for a demonstration that night.

Sarah picked up Zoe, and I headed out to my demo. Another full day.

News: Call for Volunteers

Fri, 2014-10-24 08:27

The Earlybird registrations are going extremely well – over 50% of the available tickets have sold in just two weeks! This is no longer a conference we are planning – this is a conference that is happening and that makes the Organisation Team very happy!

Speakers have been scheduled. Delegates are coming. We now urgently need to expand our team of volunteers to manage and assist all these wonderful visitors to ensure that LCA 2015 is unforgettable – for all the right reasons.

Volunteers are needed to register our delegates, show them to their accommodation, guide them around the University and transport them here and there. They will also manage our speakers by making sure that their presentations don't overrun, recording their presentations and assisting them in many other ways during their time at the conference.

Anyone who has been a volunteer before will tell you that it’s an extremely busy time, but so worthwhile. It’s rewarding to know that you’ve helped everybody at the conference to get the most out of it. There's nothing quite like knowing that you've made a difference.

But there is more, membership has other privileges and advantages! You don't just get to meet the delegates and speakers, you get to know many of them while helping them as well. You get a unique opportunity to get behind the scenes and close to the action. You can forge new relationships with amazing, interesting, wonderful people you might not ever get the chance to meet any other way.

Every volunteer's contribution is valued and vital to the overall running and success of the conference. We need all kinds of skills too – not just the technically savvy ones (although knowing which is the noisy end of a walkie-talkie may help). We want you! We need you! It just wouldn't be the same without you! If you would like to be an LCA 2015 volunteer it's easy to register. Just go to our volunteer page for more information. We review volunteer registrations regularly and if you’re based in Auckland (or would like a break away from wherever you are) then we would love to meet you at one of our regular meetings. Registered volunteers will receive information about these via email.

Jonathan Adamczewski: Assembly Primer Part 7 — Working with Strings — ARM

Thu, 2014-10-23 16:26

These are my notes for where I can see ARM varying from IA32, as presented in the video Part 7 — Working with Strings.

I’ve not remotely attempted to implement anything approximating optimal string operations for this part — I’m just working my way through the examples and finding obvious mappings to the ARM arch (or, at least what seem to be obvious). When I do something particularly stupid, leave a comment and let me know :)

Working with Strings

.data
HelloWorldString:
    .asciz "Hello World of Assembly!"
H3110:
    .asciz "H3110"

.bss
.lcomm Destination, 100
.lcomm DestinationUsingRep, 100
.lcomm DestinationUsingStos, 100

Here’s the storage that the provided example StringBasics.s uses. No changes are required to compile this for ARM.

1. Simple copying using movsb, movsw, movsl

@movl $HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString
@movl $Destination, %edi
movw r1, #:lower16:Destination
movt r1, #:upper16:Destination
@movsb
ldrb r2, [r0], #1
strb r2, [r1], #1
@movsw
ldrh r3, [r0], #2
strh r3, [r1], #2
@movsl
ldr r4, [r0], #4
str r4, [r1], #4

More visible complexity than IA32, but not too bad overall.

IA32’s movs instructions implicitly take their source and destination addresses from %esi and %edi, and increment/decrement both. Because of ARM’s load/store architecture, separate load and store instructions are required in each case, but there is support for indexing of these registers:

ARM addressing modes

According to ARM A8.5, memory access instructions commonly support three addressing modes:

  • Offset addressing — An offset is applied to an address from a base register and the result is used to perform the memory access. It’s the form of addressing I’ve used in previous parts and looks like [rN, offset]
  • Pre-indexed addressing — An offset is applied to an address from a base register, the result is used to perform the memory access and also written back into the base register. It looks like [rN, offset]!
  • Post-indexed addressing — An address is used as-is from a base register for memory access. The offset is applied and the result is stored back to the base register. It looks like [rN], offset and is what I’ve used in the example above.
2. Setting / Clearing the DF flag

ARM doesn’t have a DF flag (to the best of my understanding). It could perhaps be simulated through the use of two instructions and conditional execution to select the right direction. I’ll look further into conditional execution of instructions on ARM in a later post.

3. Using Rep

ARM also doesn’t appear to have an instruction quite like IA32’s rep instruction. A conditional branch and a decrement will be the long-form equivalent. As branches are part of a later section, I’ll skip them for now.

@movl $HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString
@movl $DestinationUsingRep, %edi
movw r1, #:lower16:DestinationUsingRep
movt r1, #:upper16:DestinationUsingRep
@movl $25, %ecx # set the string length in ECX
@cld # clear the DF
@rep movsb
@std
ldm r0!, {r2,r3,r4,r5,r6,r7}
ldrb r8, [r0,#0]
stm r1!, {r2,r3,r4,r5,r6,r7}
strb r8, [r1,#0]

To avoid conditional branches, I’ll start with the assumption that the string length is known (25 bytes). One approach would be using multiple load instructions, but the load multiple (ldm) instruction makes it somewhat easier for us — one instruction to fetch 24 bytes, and a load register byte (ldrb) for the last one. Using the ! after the source-address register indicates that it should be updated with the address of the next byte after those that have been read.

The storing of the data back to memory is done analogously. Store multiple (stm) writes 6 registers×4 bytes = 24 bytes (with the ! to have the destination address updated). The final byte is written using strb.

4. Loading string from memory into EAX register

@cld
@leal HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString
@lodsb
ldrb r1, [r0, #0]
@movb $0, %al
mov r1, #0
@dec %esi @ unneeded. equiv: sub r0, r0, #1
@lodsw
ldrh r1, [r0, #0]
@movw $0, %ax
mov r1, #0
@subl $2, %esi # Make ESI point back to the original string. unneeded. equiv: sub r0, r0, #2
@lodsl
ldr r1, [r0, #0]

In this section, we are shown how the IA32 lodsb, lodsw and lodsl instructions work. Again, they have implicitly assigned register usage, which isn’t how ARM operates.

So, instead of a simple, no-operand instruction like lodsb, we have a ldrb r1, [r0, #0] loading a byte from the address in r0 into r1. Because I didn’t use post indexed addressing, there’s no need to dec or subl the address after the load. If I were to do so, it could look like this:

ldrb r1, [r0], #1
sub r0, r0, #1
ldrh r1, [r0], #2
sub r0, r0, #2
ldr r1, [r0], #4

If you trace through it in gdb, look at how the value in r0 changes after each instruction.

5. Storing strings from EAX to memory

@leal DestinationUsingStos, %edi
movw r0, #:lower16:DestinationUsingStos
movt r0, #:upper16:DestinationUsingStos
@stosb
strb r1, [r0], #1
@stosw
strh r1, [r0], #2
@stosl
str r1, [r0], #4

Same kind of thing as for the loads. Writes the letters in r1 (being “Hell” — leftovers from the previous section) into DestinationUsingStos (the result being “HHeHell”). String processing on little endian architectures has its appeal.

6. Comparing Strings

@cld
@leal HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString
@leal H3110, %edi
movw r1, #:lower16:H3110
movt r1, #:upper16:H3110
@cmpsb
ldrb r2, [r0,#0]
ldrb r3, [r1,#0]
cmp r2, r3
@dec %esi
@dec %edi
@not needed because of the addressing mode used
@cmpsw
ldrh r2, [r0,#0]
ldrh r3, [r1,#0]
cmp r2, r3
@subl $2, %esi
@subl $2, %edi
@not needed because of the addressing mode used
@cmpsl
ldr r2, [r0,#0]
ldr r3, [r1,#0]
cmp r2, r3

Where IA32’s cmps instructions implicitly load through the pointers in %edi and %esi, explicit loads are needed for ARM. The compare then works in pretty much the same way as for IA32, setting condition code flags in the current program status register (cpsr). If you run the above code, and check the status registers before and after execution of the cmp instructions, you’ll see the zero flag set and unset in the same way as is demonstrated in the video.

The condition code flags are:

  • bit 31 — negative (N)
  • bit 30 — zero (Z)
  • bit 29 — carry (C)
  • bit 28 — overflow (V)

There’s other flags in that register — all the details are on page B1-16 and B1-17 in the ARM Architecture Reference Manual.

And with that, I think we’ve made it (finally) to the end of this part for ARM.

Other assembly primer notes are linked here.

Stewart Smith: CFP for Developer, Testing, Release and Continuous Integration Automation Miniconf at 2015

Thu, 2014-10-23 10:26

This is the Call for Papers for the Developer, Testing, Release and Continuous Integration Automation Miniconf at 2015 in Auckland. See

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

– code review tools and techniques (e.g. gerrit)

– continuous integration tools (e.g. jenkins)

– CI techniques (e.g. gated trunk, zuul)

– testing tools and techniques (e.g. subunit, fuzz testing tools)

– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.

– applying CI in your workplace/project

We’re looking for talks about open source technology *and* the human side of things.

Speakers at this miniconf must be registered for the main conference (although there are a limited number of miniconf-only tickets available for miniconf speakers if required).

There will be a projector, and there is a possibility the talk will be recorded (depending on whether the conference A/V is up and running) – if recorded, talks will be posted in the same place, with the same CC license, as the main LCA talks.

CFP is open until midnight November 21st 2014.

By submitting a presentation, you’re agreeing to the following:

I allow Linux Australia to record my talk.

I allow Linux Australia to release any recordings of my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I allow Linux Australia to release any other material (such as slides) from my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I confirm that I have the authority to allow Linux Australia to release the above material. That is, if your talk includes any information about your employer, or another person's copyrighted material, that person has given you authority to release it.

Any questions? Contact me:

Andrew Pollock: [life] Day 266: Prep play date, shopping and a play date

Thu, 2014-10-23 10:25

Zoe's sleep seems a bit messed up lately. She yelled out for me at 3:53am, and I resettled her, but she wound up in bed with me at 4:15am anyway. It took me a while to get back to sleep, maybe around 5am, but then we slept in until about 7:30am.

That made for a bit of a mad rush to get out the door to Zoe's primary school for her "Prep Play Date" orientation. We managed to make it out the door by a bit after 8:30am.

15 minutes is what it appears to take to scooter to school, which is okay. With local traffic being what it is, I think this will be a nice way to get to and from school next year, weather permitting.

We signed in, and Zoe got paired up with an existing (extremely tall) Prep student to be her buddy. The other girl was very keen to hold Zoe's hand, which Zoe was a bit dubious about at first, but they got there eventually.

The kids spent about 20 minutes rotating through the three classrooms, with a different buddy in each classroom. They were all given a 9 station name badge when they signed in, and they got a sticker for each station that they visited in each classroom.

It was a really nice morning, and I discovered there's one other girl from Zoe's Kindergarten going to her school, so I made a point of introducing myself to her mother.

I've got a really great vibe about the school, and Zoe enjoyed the morning. I'm looking forward to the next stage of her education.

We scootered home afterwards, and Zoe got the speed wobbles going down the hill and had a spectacular crash, luckily without any injuries thanks to all of her safety gear.

Once we got home, we headed out to the food wholesaler at West End to pick up a few bits and pieces, and then I had to get to Kindergarten to chair the monthly PAG meeting. I dropped Zoe at Megan's place for a play date while I was at the Kindergarten.

After the meeting, I picked up Zoe and we headed over to Westfield Carindale to buy a birthday present for Zoe's Kindergarten friend, Ivy, who is having a birthday party on Saturday.

We got home from Carindale with just enough time to spare before Sarah arrived to pick Zoe up.

I then headed over to Anshu's place for a Diwali dinner.

News: Speaker Feature: Audrey Lobo-Pulo, Jack Moffitt

Thu, 2014-10-23 08:27
Audrey Lobo-Pulo: Evaluating government policies using open source models

10:40am Wednesday 14th January 2015

Dr. Audrey Lobo-Pulo is a passionate advocate of open government and the use of open source software in government modelling. Having started out as a physicist developing theoretical models in the field of high speed data transmission, she moved into the economic policy modelling sphere and worked at the Australian Treasury from 2005 to 2011.

Currently working at the Australian Taxation Office in Sydney, Audrey enjoys discussions on modelling economic policy.

For more information on Audrey and her presentation, see here. You can follow her as @AudreyMatty and don’t forget to mention #LCA2015.

Jack Moffitt Servo: Building a Parallel Browser

10:40am Friday 16th January 2015

Jack’s current project is called Chesspark, an online community for chess players built on top of technologies like XMPP (aka Jabber), AJAX, and Python.

He previously created the Icecast Streaming Media Server, spent a lot of time developing and managing the Ogg Vorbis project, and helped create and run the Foundation. All these efforts exist to create a common, royalty-free, and open standard for multimedia on the Internet.

Jack is also passionate about Free Software and Open Source, technology, music, and photography.

For more information on Jack and his presentation, see here. You can follow him as @metajack and don’t forget to mention #LCA2015.

News: Speaker Feature: Denise Paolucci, Gernot Heiser

Wed, 2014-10-22 08:28
Denise Paolucci When Your Codebase Is Nearly Old Enough To Vote

11:35 am Friday 16th January 2015

Denise is one of the founders of Dreamwidth, a journalling site and open source project forked from Livejournal, and one of only two majority-female open source projects.

Denise has appeared at multiple open source conferences to speak about Dreamwidth, including OSCON 2010.

For more information on Denise and her presentation, see here.

Gernot Heiser seL4 Is Free - What Does This Mean For You?

4:35pm Thursday 15th January 2015

Gernot is a Scientia Professor and the John Lions Chair for operating systems at the University of New South Wales (UNSW).

He is also leader of the Software Systems Research Group (SSRG) at NICTA. In 2006 he co-founded Open Kernel Labs (OK Labs, acquired in 2012 by General Dynamics) to commercialise his L4 microkernel technology.

For more information on Gernot and his presentation, see here. You can follow him as @GernotHeiser and don’t forget to mention #LCA2015.

Joshua Hesketh: OpenStack infrastructure swift logs and performance

Wed, 2014-10-22 01:25

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.
The set up

Fetching files from object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, which gives us the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example, the fetching and upload processes write to Jenkins’ console log, but because the log has already been fetched those entries are missing from the uploaded copy. Therefore this wants to be the very last thing you do in a job. I did see somebody keep the download process running in a fork so that they could fetch the full log, but we’ll look at that another time.
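The upload step can be sketched roughly like this. This is a minimal illustration, not the actual zuul-swift script (which is bash); the environment variable names, the container name and the raw-PUT approach are all assumptions made for the example:

```python
import os
import urllib.request


def object_url(storage_url, container, job, build, filename="console.html"):
    """Lay logs out as container/job/build/filename (an illustrative layout)."""
    return "/".join([storage_url, container, job, build, filename])


def upload_console_log():
    # Grab our own console log from the Jenkins web UI. Anything the job
    # prints after this point never makes it into the uploaded copy.
    console = urllib.request.urlopen(os.environ["BUILD_URL"] + "consoleText").read()

    # A swift object is created by PUTting its full URL with an auth token.
    # Credentials arrive via environment variables supplied to the job.
    req = urllib.request.Request(
        object_url(os.environ["SWIFT_URL"], "logs",
                   os.environ["JOB_NAME"], os.environ["BUILD_NUMBER"]),
        data=console,
        headers={"X-Auth-Token": os.environ["SWIFT_TOKEN"],
                 "Content-Type": "text/html"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```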

When a request comes in, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz, rewrite the URL to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway
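In Apache terms, steps 2–4 might look something like the following. This is an illustrative mod_rewrite fragment, not the actual vhost configuration:

```apache
RewriteEngine On

# 2. known log file types get marked up by os-loganalyze
RewriteRule ^/(.*\.txt\.gz|.*console\.html(\.gz)?)$ /htmlify/$1 [L]

# 3. real files and directories on disk are served by apache as normal;
# 4. anything else falls through to os-loganalyze
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^/(.*)$ /htmlify/$1 [L]
```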

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift, the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned
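The lookup order above can be condensed into a small dispatcher. This is a simplified sketch of the flow, not os-loganalyze’s actual code; the function names and the chunk size are assumptions:

```python
CHUNK = 64 * 1024  # swift objects are streamed in chunks, not buffered whole


def choose_source(on_disk, in_swift, forced=None):
    """Pick a backend: disk is checked first, unless ?source=swift forces swift.

    Returns "disk", "swift", or None (meaning a 404).
    """
    if forced == "swift":
        return "swift" if in_swift else None
    if on_disk:
        return "disk"
    if in_swift:
        return "swift"
    return None


def chunked(data, size=CHUNK):
    """Yield data in fixed-size chunks, as the swift path streams to the user."""
    for i in range(0, len(data), size):
        yield data[i:i + size]
```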

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance so this feature is necessary.

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little Python script did this for me.

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatably fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
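The original script isn’t reproduced here, but the idea can be sketched as follows. URLs are placeholders, and the real script also timed the request, response and transfer phases separately rather than just the total:

```python
import statistics
import time
import urllib.request


def time_fetch(url):
    """Fetch url once; return (total_seconds, bytes_received)."""
    start = time.monotonic()
    body = urllib.request.urlopen(url).read()
    return time.monotonic() - start, len(body)


def summarise(samples):
    """Mean and sample standard deviation of a list of timings."""
    return statistics.mean(samples), statistics.stdev(samples)


def benchmark(url, runs=100):
    # Repeatedly fetch the same file to get a decent average; appending
    # ?source=swift makes os-loganalyze serve the swift copy instead of disk.
    disk = [time_fetch(url)[0] for _ in range(runs)]
    swift = [time_fetch(url + "?source=swift")[0] for _ in range(runs)]
    return summarise(disk), summarise(swift)
```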

I then ran this in two environments.

  1. On my local network the other side of the world to the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.


Results Home computer, sequential requests of one file


The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect, the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk a connection is made to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.

The total time from request to transfer can be found by adding the times together. I didn’t do this because when requesting files of different sizes (in the next scenario) there is nothing worth comparing, as the file sizes differ. Arguably we could compare them anyway, since the log sizes for identical jobs are similar, but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

                     request sent (ms)            response (ms)                transfer (ms)                size (KB)
                     disk         swift           disk         swift           disk         swift           disk         swift
Standard Deviation   54.89516183  43.71917948     56.74750291  194.7547117     849.8545127  838.9172066     7.121600095  7.311125275
Mean                 283.9594368  282.5074598     373.7328851  531.8043908     5091.536092  5122.686897     1219.804598  1220.735632


I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.


                     request sent (ms)            response (ms)                transfer (ms)                size (KB)
                     disk         swift           disk         swift           disk         swift           disk         swift
Standard Deviation   13.88664039  14.84054789     44.0860569   115.5299781     541.3912899  515.4364601     7.038111654  6.98399691
Mean                 274.9291111  276.2813889     364.6289583  503.9393472     5008.439028  5013.627083     1220.013889  1220.888889


I then moved the outliers to the end of the results list instead of removing them completely, and used the newly calculated standard deviation (i.e. without the outliers) as the error margin.
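That filtering step looks something like this. It is a sketch of the approach described (one pass, two standard deviations), not the exact script used:

```python
import statistics


def move_outliers(samples, k=2.0):
    """Move points more than k standard deviations from the mean to the end of
    the list, and return the error margin recomputed without them."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    kept = [s for s in samples if abs(s - mu) <= k * sd]
    outliers = [s for s in samples if abs(s - mu) > k * sd]
    # One pass only: the error margin comes from the outlier-free stdev.
    return kept + outliers, statistics.stdev(kept)
```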

Then to get a better indication of what are average times I plotted the histograms of each of these metrics.

Here we can see a similar request time.


Here it is quite clear that swift is slower at actually responding.


Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.


Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.



Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

                     request sent (ms)            response (ms)                transfer (ms)                size (KB)
                     disk         swift           disk         swift           disk         swift           disk         swift
Standard Deviation   54.89516183  43.71917948     194.7547117  56.74750291     849.8545127  838.9172066     7.121600095  7.311125275
Mean                 283.9594368  282.5074598     531.8043908  373.7328851     5091.536092  5122.686897     1219.804598  1220.735632

Second pass without outliers
Standard Deviation   13.88664039  14.84054789     115.5299781  44.0860569      541.3912899  515.4364601     7.038111654  6.98399691
Mean                 274.9291111  276.2813889     503.9393472  364.6289583     5008.439028  5013.627083     1220.013889  1220.888889


What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift and so on, the congestion affects both sources evenly, causing the latency seen in both.


Swift is very much slower here.


Although comparable in transfer times. Again this is likely due to my network limitation.


The size histograms don’t really add much here.


Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise), but I didn’t do that at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request in both my testing and normal use is going to have competing load.


I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
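The rolling average is just a sliding window over the samples in arrival order (the window size here is arbitrary, not the one used for the graph):

```python
def rolling_average(samples, window=50):
    """Average each consecutive window of samples to smooth out the noise."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]
```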


You can see now that we’re closer to the server that swift is noticeably slower. This is confirmed by the averages:


                     request sent (ms)             response (ms)                transfer (ms)                size (KB)
                     disk         swift            disk         swift           disk         swift           disk    swift
Standard Deviation   32.42528982  9.749368282      245.3197219  781.8807534     1082.253253  2737.059103     0       0
Mean                 4.87337544   4.05191168       39.51898688  245.0792916     1553.098063  4167.07851      1226    1232

Second pass without outliers
Standard Deviation   1.375875503  0.8390193564     28.38377158  191.4744331     878.6703183  2132.654898     0       0
Mean                 3.487575109  3.418433003      7.550682037  96.65978872     1389.405618  3660.501404     1226    1232


Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now gotten very small. We’ve clearly made a difference by moving closer to the logserver.


Very nice and close.


Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.


The transfer for swift is consistently slower.


Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.


Again the graph is too crowded to see what is happening so I took a rolling average.



                     request sent (ms)             response (ms)                transfer (ms)                size (KB)
                     disk          swift           disk         swift           disk         swift           disk         swift
Standard Deviation   0.7227904332  0.8900549012    434.8600827  909.095546      1913.9587    2132.992773     6.341238774  7.659678352
Mean                 3.515711867   3.56191383      145.5941102  189.947818      2427.776165  2875.289455     1219.940039  1221.384913

Second pass without outliers
Standard Deviation   0.4798803247  0.4966553679    109.6540634  171.1102999     1348.939342  1440.2851       6.137625464  7.565931993
Mean                 3.379718381   3.405770445     70.31323922  86.16522485     2016.900047  2426.312363     1220.318912  1221.881335


The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we were hitting limitations with swift’s serving abilities.


I’m not sure why we have a sinc-like shape here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis, other than the fact that both disk and swift match.


Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.


Swift still loses out on transfers but again does a much better job of keeping up.


Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc) for:

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to keep authenticated with swift, however

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.


As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, the difference was only on the order of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

Andrew Pollock: [life] Day 265: Kindergarten and startup stuff

Tue, 2014-10-21 21:25

Zoe yelled out for me at 5:15am for some reason, but went back to sleep after I resettled her, and we had a slow start to the day a bit after 7am. I've got a mild version of whatever cold she's currently got, so I'm not feeling quite as chipper as usual.

We biked to Kindergarten, which was a bit of a slog up Hawthorne Road, given the aforementioned cold, but we got there in the end.

I left the trailer at the Kindergarten and biked home again.

I finally managed to get some more work done on my real estate course, and after a little more obsessing over one unit, got it into the post. I've almost got another unit finished as well. I'll try to get it finished in the evenings or something, because I'm feeling very behind, and I'd like to get it into the mail too. I'm due to get the second half of my course material, and I still have one more unit to do after this one I've almost finished.

I biked back to Kindergarten to pick up Zoe. She wanted to watch Megan's tennis class, but I needed to grab some stuff for dinner, so it took a bit of coaxing to get her to leave. I think she may have been a bit tired from her cold as well.

We biked home, and jumped in the car. I'd heard from Matthew's Dad that FoodWorks in Morningside had a good meat selection, so I wanted to check it out.

They had some good roasting meat, but that was about it. I gave up trying to mince my own pork and bought some pork mince instead.

We had a really nice dinner together, and I tried to get her to bed a little bit early. Every time I try to start the bed time routine early, the spare time manages to disappear anyway.

David Rowe: SM1000 Part 7 – Over the air in Germany

Tue, 2014-10-21 09:29

Michael Wild DL2FW in Germany recently attended a Hamfest where he demonstrated his SM1000. Michael sent me the following email (hint: I used Google translate on the web sites):

Here is the link to the review of our local hamfest.

At the bottom is a video of a short QSO on 40m using the SM-1000 over about 400km. The other station was Hermann (DF2DR). Hermann documented this QSO very well on his homepage also showing a snapshot of the waterfall during this QSO. Big selective fading as you can see, but we were doing well!

He also explains that, when switching to SSB at the same average power level, the voice was almost not understandable!

SM1000 Beta and FreeDV Update

Rick KA8BMA has been working hard on the Beta CAD work, and fighting a few Eagle DRC battles. Thanks to all his hard work we now have an up to date schematic and BOM for the Betas. He is now working on the Beta PCB layout, and we are refining the BOM with Edwin from Dragino in China. Ike, W3IKIE, has kindly been working with Rick to come up with a suitable enclosure. Thanks guys!

My current estimate is that the Beta SM1000s will be assembled in November. Once I’ve tested a few I’ll put them up on my store and start taking orders.

In the mean time I’ve thrown myself into modem simulations – playing with a 450 bit/s version of Codec 2, LDPC FEC codes, diversity schemes and coherent QPSK demodulation. I’m pushing towards a new FreeDV mode that works on fading channels at negative SNRs. More on that in later posts. The SM1000 and a new FreeDV mode are part of my goals for 2014. The SM1000 will make FreeDV easy to use, the new mode(s) will make it competitive with SSB on HF radio.

Everything is open source, both hardware and software. No vendor lock in, no software licenses and you are free to experiment and innovate.

Chris Samuel: IBM Pays GlobalFoundries to take Microprocessor Business

Tue, 2014-10-21 07:26

Interesting times for IBM, having already divested themselves of the x86 business by selling it on to Lenovo they’ve now announced that they’re paying GlobalFoundries $1.5bn to take pretty much that entire side of the business!

IBM (NYSE: IBM) and GLOBALFOUNDRIES today announced that they have signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM’s global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. GLOBALFOUNDRIES will also become IBM’s exclusive server processor semiconductor technology provider for 22 nanometer (nm), 14nm and 10nm semiconductors for the next 10 years.

It includes IBM’s IP and patents, though IBM will continue to do research for 5 years and GlobalFoundries will get access to that. Now what happens to those researchers (one of whom happens to be a friend of mine) after that isn’t clear.

When I heard the rumours yesterday I was wondering if IBM was aiming to do an ARM and become a fab-less CPU designer but this is much more like exiting the whole processor business altogether. The fact that they seem to be paying GlobalFoundries to take this off their hands also makes it sound pretty bad.

What this all means for their Power CPU is uncertain, and if I were nVidia or Mellanox in the OpenPOWER alliance I would be hoping I’d known about this before joining up!

Update: I’ve spoken to some IBMers about this and they assert they’re not leaving the chip business; they are offloading the fabs and the manufacturing IP to GlobalFoundries, but not the chip design side of things. In my opinion, though, it does mean that should they decide to exit the chip business at some point it’ll be easier for them to do so.


Andrew Pollock: [life] Day 264: Pupil Free Day means lots of park play

Mon, 2014-10-20 21:25

Today was a Kindergarten (and it seemed most of the schools in Brisbane) Pupil Free Day.

Grace, the head honcho of Thermomix in Australia, was supposed to be in town for a meet and greet, and a picnic in New Farm Park had been organised, but at the last minute she wasn't able to make it due to needing to be in Perth for a meeting. The plan changed and we had a Branch-level picnic meeting at the Colmslie Beach Reserve.

So after Sarah dropped Zoe off, I whipped up some red velvet cheesecake brownie, which seems to be my go to baked good when required to bring a plate (it's certainly popular) and I had some leftover sundried tomatoes, so I whipped up some sundried tomato dip as well.

The meet up in the park was great. My group leader's daughters were there, as were plenty of other consultants' kids due to the Pupil Free Day, and Zoe was happy to hang out and have a play. There was lots of yummy food, and we were able to graze and socialise a bit. We called it lunch.

After we got home, we had a bit of a clean up of the balcony, which had quite a lot of detritus from various play dates and craft activities. Once that was done, we had some nice down time in the hammock.

We then biked over to a park to catch up with Zoe's friend Mackensie for a play date. The girls had a really nice time, and I discovered that the missing link in the riverside bike path has been completed, which is rather nice for both cycling and running. (It goes to show how long it's been since I've gone for a run, I really need to fix that).

After that, we biked home, and I made dinner. We got through dinner pretty quickly, and so Zoe and I made a batch of ginger beer after dinner, since there was a Thermomix recipe for it. It was cloudy though, and Zoe was more used to the Bunderberg ginger beer, which is probably a bit better filtered, so she wasn't so keen on it.

All in all, it was a really lovely way to spend a Pupil Free Day.