Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

linux.conf.au News: Big announcement about upcoming announcements...

Sun, 2014-08-31 15:27

The Papers Committee weekend went extremely well and without bloodshed according to Steve, although there was some very strong discussion from time to time! The upshot is that we have a fantastic program now, with excellent presentations all across the board.

We have already begun contacting Miniconf organisers to let them know who has been successful and who hasn’t, and over the next couple of weeks we will be sending emails out to everyone who submitted a presentation to let them know how they fared.

If you have been accepted to run a Miniconf then your contact will be Simon Lyall (miniconfs@lca2015.linux.org.au) and if you have been accepted as a speaker then your contact will be Lisa Sands (speakers@lca2015.linux.org.au). We will be asking for a photo of you and your twitter name, as we will be running a Speaker Feature about a different presenter each day - don’t worry - you will be notified on your day!

We want to give great thanks to everyone who submitted papers - you are all still winners in our eyes, and we hope that even if you weren’t selected this time that won’t put you off attending the conference and having a great time. Please note that due to the large volume of submissions, we are unable to provide feedback on why any particular submission was unsuccessful.

Our earlybird registration will be opening soon, so watch this space!

Francois Marier: Outsourcing your webapp maintenance to Debian

Sun, 2014-08-31 08:46

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer's point of view, it's a fairly simple stack.

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
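
To make that concrete, here is a minimal Python sketch of the client side of that protocol, assuming the hash is the MD5 of the trimmed, lowercased email address (the 32-character hex digests above look like MD5). The build_avatar_url helper name is just made up for illustration, and the SRV lookup is only described in a comment:

import hashlib

def build_avatar_url(email, base_url="http://cdn.libravatar.org/avatar/"):
    # A federation-aware client would first look up the avatar SRV record
    # for the email's domain and derive the base URL from it; this sketch
    # simply takes the base URL as a parameter.
    email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return base_url + email_hash

# Should print the first URL above if the MD5 assumption holds:
print(build_avatar_url("francois@debian.org"))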

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)
Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh (see the sketch after this list).
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
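
As an illustration of the fabric step mentioned above, here is a minimal sketch of what such a fabfile might look like, using the classic fabric 1.x API. The host names and the libravatar-master package name are invented for the example; the real deployment scripts are more involved:

# fabfile.py
from fabric.api import env, sudo, task

# hypothetical host list; the real one would cover the master and the mirrors
env.hosts = ['master.example.com', 'mirror1.example.com']

@task
def upgrade():
    # refresh the package lists and upgrade the (placeholder) Libravatar package
    sudo('apt-get update')
    sudo('apt-get install --only-upgrade -y libravatar-master')

Running "fab upgrade" then executes those commands over ssh on every host in the list.
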
Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent users of Fedora (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Maxim Zakharov: Australian Singing Competition

Sun, 2014-08-31 03:25

The Finals concert of the 2014 Australian Singing Competition was an amazing experience, and it was the first time I listened to opera singers live.

Congratulations to the winner, Isabella Moore from New Zealand!

Matt Palmer: Chromium tabs crashing and not rendering correctly?

Sat, 2014-08-30 15:26

If you’ve noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to workaround. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you’re like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)] Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect next time you login, or perhaps even only when you restart X (or, at worst, your entire machine).

This doesn’t help you if you’ve got Chromium already open and you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record). In that case, you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
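
If you have Python 3.4 or later handy, the same syscall is also exposed through the resource module, so a small script like this sketch should do the job too (assumption: raising the hard limit of another process generally needs to be done as root):

#!/usr/bin/env python3
# raise_nofile.py -- raise the open-files limit of a running process
import resource
import sys

pid = int(sys.argv[1])        # the first number from the log lines above
new_limits = (65535, 65535)   # (soft, hard)

resource.prlimit(pid, resource.RLIMIT_NOFILE, new_limits)
print("RLIMIT_NOFILE for pid %d raised to %s" % (pid, new_limits))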

And now, you can go back to using your tabs as ersatz bookmarks, like I do.

David Rowe: SM1000 Part 4 – Killing a PCB and PTT Working

Fri, 2014-08-29 21:30

Last Sunday the ADC1 net on the first SM1000 prototype went open circuit all of a sudden. After messing about for a few hours I lifted the uC pin for that net and soldered a fine wire to the other end of the net. That lasted a few days then fell off. I then broke the uC pin trying to put it all back together. So then I tried to use some Chip Quick I had laying about from the Mesh Potato days to remove the uC. I royally screwed that up, breaking several pads.

It’s been 7 years since my last surface mount assembly project and it shows!

However when the uC came off the reason for the open circuit became apparent. The photo below was taken through the microscope I use for surface mount assembly:

At the top is the bottom part of a square pad that is part of the ADC1 net. The track is broken just before the lower left corner of the pad. Many of the pads under the uC were in various stages of decomposition, e.g. solder mask and tinning gone, down to bare copper. Turns out I used too much flux and it wasn’t cleaned out from under the chip when I washed the PCB. For the past few weeks it was busy eating away at the PCB.

Oh well, one step back! So this week I built another SM1000, and today I brought it to life. After fixing a few small assembly bugs I debugged the “switches and leds” driver and sm1000_main.c, which means I now have PTT operation working. So it’s normally in receive mode, but press PTT and it swaps to tx mode. The sync, PTT, and error LEDs work too. Cool.

Here is a picture of prototype number 2:

The three trimmers along the bottom set the internal mic amp, and line levels to the “mic” and “speaker” ports of the radio. The pot on the RHS is the internal speaker volume control. The two switches upper RHS are PTT and power. On the left is a RJ45 for the audio connections to the radio and under the PCB (not visible) are a bunch of 3.5mm sockets that provide alternate audio connections to the radio.

What next? Well the speaker audio is a bit distorted at high volume so I might look into that and see if the LM386 is behaving as specified. Then hook it up to a real radio and test it over the air. That will shake down the interfaces some more and see if it’s affected by strong nearby RF. Oh, and I need to test USB and a few other minor interfaces.

I’m very happy with progress and we are on track to release the SM1000 in beta form commercially in late 2014.

Tridge on UAVs: Lidar landing with APM:Plane

Fri, 2014-08-29 20:47

Over the last couple of days I have been testing the Lidar based auto-landing code that will be in the upcoming 3.1.1 release of APM:Plane. I'm delighted to say that it has gone very well!

Testing has been done on two planes - one is a Meridian sports plane with an OS46 Nitro motor. That has a tricycle undercarriage, so it has very easy ground steering. The tests today were with the larger VQ Porter 2.7m tail-dragger with a DLE-35 petrol motor. That has a lot of equipment on board for the CanberraUAV OBC entry, so it weighs 14kg at takeoff, making it a much more difficult plane to land well.

The Lidar is a SF/02 from LightWare, a really nice laser rangefinder that works nicely with Pixhawk. It has a 40m range, which is great for landing, allowing the plane plenty of time to lock onto the glide slope in the landing approach.

APM:Plane has supported these Lidars and other rangefinders for a while, but until now has not been able to use them for landing. Instead they were just being logged to the microSD card, but not actively used. After some very useful discussions with Paul Riseborough we now have the Lidar properly integrated into the landing code.

The test flights today were an auto-takeoff (with automatic ground steering), a quick auto-circuit then an automatic landing. The first landing went long as I'd forgotten to drop THR_MIN down to zero (I normally have it at 20% to ensure the engine stays at a reasonable level during auto flight). After fixing that we got a series of good auto flights.

The flight was flown with a 4 second flare time, which is probably a bit long as it allowed the plane to lose too much speed on the final part of the flare. That is why it bounces a bit as it starts losing height. I'll try with a bit shorter flare time tomorrow.

Here is the video of one of the Meridian flights yesterday. Sorry for missing part of the flight, the video was shot with a cell phone by a friend at the field.

Here is another video of the Porter flying today, but taken from the north of the runway.

I'd like to thank Charles Wannop from Flying Finish Productions for the video of the Porter today with help from Darrell Burkey.

Gary Pendergast: The Next Adventure

Fri, 2014-08-29 16:26

Over my past few years at Automattic, I’ve worked on a bunch of different teams and projects – VideoPress, the WordPress iOS app, various Social projects, and most recently, o2. I even took a few months to work on WordPress core, helping build the auto-update functionality that we now see rolling out security updates within hours of their release.

The few months I spent working on WordPress core made me realise something – there’s a lot more I have to contribute. So, with the WordPress 4.0 RC out the door, I’m super excited to be moving to my next project – working on WordPress core full time!

Automattic naturally puts a lot of people-hours into WordPress, with over 30 of us contributing to WordPress 3.9. I’m looking forward to being a bigger part of that, and giving more back to the WordPress community!

Robert Collins: Test processes as servers

Fri, 2014-08-29 15:29

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive, particularly in test suites with tens of thousands of tests.

Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is letting a user debug, input has to go to testr and through to child workers in a chain, so it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eagerly load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so lets stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be stdin, a command providing a packet of stdin – used for interacting with debuggers (see the sketch below)
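
To make the shape of that interface concrete, here is a rough Python sketch of such a server loop built on unittest discovery. The command names and wire format are just the ones proposed above, not anything subunit.run implements today, and the stdin/debugger packet command is left out:

import sys
import unittest

def _flatten(suite):
    # walk a TestSuite tree and yield the individual test cases
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from _flatten(item)
        else:
            yield item

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # eager load of the available tests at startup
    suite = unittest.TestLoader().discover('.')
    tests = {t.id(): t for t in _flatten(suite)}
    for line in stdin:
        command, _, argument = line.strip().partition(' ')
        if command == 'list-tests':
            for test_id in sorted(tests):
                stdout.write(test_id + '\n')
        elif command == 'run-tests':
            wanted = [t for t in argument.split() if t in tests]
            runner = unittest.TextTestRunner(stream=stdout)
            runner.run(unittest.TestSuite(tests[t] for t in wanted))
        elif command == 'quit':
            return

if __name__ == '__main__':
    serve()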

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…



Glen Turner: Raspberry Pi versus Cray X-MP supercomputer

Fri, 2014-08-29 00:11

It's often said that today we have in the phone in our pocket computers more powerful than the supercomputers of the past. Let's see if that is true.

The Raspberry Pi contains a Broadcom BCM2835 system on chip. The CPU within that system is a single ARMv6 core clocked at 700MHz. Located under the system on chip is 512MB of RAM -- this arrangement is called "package-on-package". As well as in the RPi, the BCM2835 SoC was also used in some phones; these days they are the cheapest of smartphones.

The Whetstone benchmark was widely used in the 1980s to measure the performance of supercomputers. It gives a result in millions of floating point operations per second. Running Whetstone on the Raspberry Pi gives 380 MFLOPS. See Appendix 1 for the details.

Let's see what supercomputer comes closest to 380 MFLOPS. That would be the Cray X-MP/EA/164 supercomputer from 1988. That is a classic supercomputer: the X-MP was a 1982 revision of the 1975 Cray 1. So good was the revision work by Steve Chen that its performance rivalled the company's own later Cray 2. The Cray X-MP was the workhorse supercomputer for most of the 1980s; the EA series was the last version of the X-MP and its major feature was to allow a selectable word size -- either 24-bit or 32-bit -- which allowed the X-MP to run programs designed for the Cray 1 (24-bit), Cray X-MP (24-bit) or Cray Y-MP (32-bit).

Let's do some comparisons of the shipped hardware.

 

Basic specifications. Raspberry Pi versus Cray X-MP/EA/164

  Item                            Cray X-MP/EA/164              Raspberry Pi Model B+
  Price                           US$8m (1988)                  A$38
  Price, adjusted for inflation   US$16m                        A$38
  Number of CPUs                  1                             1
  Word size                       24 or 32                      32
  RAM                             64MB                          512MB
  Cooling                         Air cooled, heatsinks, fans   Air cooled, no heatsink, no fan

 

Neither unit comes with a power supply. The Cray does come with housing, famously including a free bench seat. The RPi requires third-party housing, typically for A$10; bench seats are not available as an option.

The Cray had the option of solid-state storage. A Secure Digital card is needed to contain the RPi's boot image and, usually, its operating system and data.

 

I/O systems. Raspberry Pi versus Cray X-MP/EA/164

  Item                            Cray                          Raspberry Pi
  SSD size                        512MB                         Third party, minimum of 4096MB
  Price                           US$6m (1988)                  A$20
  Price, adjusted for inflation   US$12m                        A$20

 

Of course the Cray X-MP also had rotating disk. Each disk unit could contain 1.2GB and had a peak transfer rate of 10MBps. This was achieved by using a large number of platters to compensate for the low density of the recorded data, giving the typical "top loading washing machine" look of disks of that era. The disk was attached to a I/O channel. The channel could connect many disks, collectively called a "string" of disks. The Cray X-MP had two to four I/O channels, each capable of 13MBps.

In comparison the Raspberry Pi's SDHC connector attaches one SDHC card at a speed of 25MBps. The performance of the SD cards themselves varies hugely, ranging from 2MBps to 30MBps.

Analysis

What is clear from the numbers is that the floating point performance of the Cray X-MP/EA has fared better with the passage of time than the other aspects of the system. That's because floating point performance was the major design goal of that era of supercomputers. Ignoring the floating point performance, the Raspberry Pi handily beats out every computer in the Cray X-MP range.

Would Cray have been surprised by these results? I doubt it. Seymour Cray left CDC when they decided to build a larger supercomputer. He viewed this as showing CDC as not "getting it": larger computers have longer wires, more electronics to drive the wires, more heat from the electronics, more design issues such as crosstalk and more latency. Cray's main design insight was that computers needed to be as small as possible. There's not much smaller you can make a computer than a system-on-chip.

So why aren't today's supercomputers systems-on-chip? The answer has two parts. Firstly, the chip would be too small to remove the heat from. This is why "chip packaging" has moved to near the centre of chip design. Secondly, chip design, verification, and tooling (called "tape out") is astonishingly expensive for advanced chips. It's simply not affordable. You can afford a small variation on a proven design, but that is about the extent of the financial risk which designers care to take. A failed tape out was one of the causes of the downfall of the network processor design of Procket Networks.

Appendix 1. Whetstone benchmark

The whets.c benchmark was downloaded from Roy Longbottom's PC Benchmark Collection.

Compiling this for the RPi is simple enough. Since benchmark geeks care about the details, here they are.

$ diff -d -U 0 whets.c.orig whets.c
@@ -886 +886 @@
-#ifdef UNIX
+#ifdef linux
$ gcc --version | head -1
gcc (Debian 4.6.3-14+rpi1) 4.6.3
$ gcc -O3 -lm -s -o whets whets.c

Here's the run. This is using a Raspbian updated to 2014-08-23 on a Raspberry Pi Model B+ with the "turbo" overclocking to 1000MHz (this runs the RPi between 700MHz and 1000MHz depending upon demand and the SoC temperature). The Model B+ has 512MB of RAM. The machine was in multiuser text mode. There was no swap used before and after the run.

$ uname -a
Linux raspberry 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l GNU/Linux
$ cat /etc/debian_version
7.6
$ ./whets
##########################################
Single Precision C/C++ Whetstone Benchmark

Calibrate
  0.04 Seconds    1 Passes (x 100)
  0.19 Seconds    5 Passes (x 100)
  0.74 Seconds   25 Passes (x 100)
  3.25 Seconds  125 Passes (x 100)

Use 3849 passes (x 100)

Single Precision C/C++ Whetstone Benchmark

Loop content          Result                 MFLOPS      MOPS    Seconds
N1 floating point     -1.12475013732910156   138.651               0.533
N2 floating point     -1.12274742126464844   143.298               3.610
N3 if then else        1.00000000000000000             971.638     0.410
N4 fixed point        12.00000000000000000               0.000     0.000
N5 sin,cos etc.        0.49911010265350342               7.876    40.660
N6 floating point      0.99999982118606567   122.487              16.950
N7 assignments         3.00000000000000000             592.747     1.200
N8 exp,sqrt etc.       0.75110864639282227               3.869    37.010

MWIPS                                        383.470             100.373

It is worthwhile making the point that this took maybe ten minutes. Cray Research had multiple staff working on making benchmark numbers such as Whetstone as high as possible.

Ben Martin: Terry is getting In-Terry-gence.

Thu, 2014-08-28 23:15

I had hoped to use a quad core ARM machine running ROS to spruce up Terry the robot, performing tasks like robotic pick and place, controlling Tiny Tim, and autonomous "docking". Unfortunately I found that trying to use a Kinect from an ARM based Linux machine can make for some interesting times. So I thought I'd dig at the ultra low end Intel chipset "SBC". The below is a J1900 Atom machine which can have up to 8GB of RAM and sports the features that one expects from a contemporary desktop machine: Gb net, USB3, SATA3, and even a PCI-e expansion.





A big draw to this is the "DC" version, which takes a normal laptop style power connector instead of the much larger ATX connectors. This makes it much simpler to hook up to a battery pack for mobile use. The board runs nicely from a laptop extension battery, even if the on button is a bit funky looking. On the left is a nice battery pack which is running the whole PC.



An interesting feature of this motherboard is that it has no LEDs at all. I had sort of gotten used to Intel boards having blinking lights and power LEDs and the like.

There should be enough CPU grunt to handle the Kinect and start looking at doing DSLAM and thus autonomous navigation.

Andrew Pollock: [life] Day 211: A trip to the museum with Megan and a swim assessment

Thu, 2014-08-28 22:25

Today was a nice full day. It was go go go from the moment my feet hit the floor, though.

I had a magnificent 9 hours of uninterrupted sleep courtesy of a sleeping pill, some pseudoephedrine and a decongestant spray.

I started off with a great yoga class, and then headed over to Sarah's to pick up Zoe. The traffic was phenomenally bad on the way there, and I got there quite late, so I offered to drop Sarah at work on the way back to my place, since I had no particular schedule.

After I'd dropped Sarah off, I was pondering what to do today, as the weather looked a bit dubious. I asked Zoe if she wanted to go to the museum. She was keen for that, and asked if Megan could come too, so I called up Jason on the way home to see if she wanted to come with us, and picked her up directly on the way home.

We briefly stopped at home to grab my Dad Bag and some snacks, and headed to the bus stop. We managed to walk (well, make a run for) straight onto the bus and headed into the museum.

The Museum and the Sciencecentre are synonymous to Zoe, despite the latter requiring admission (we've got an annual membership). In trying to use my membership to get a discounted Sciencecentre ticket for Megan, I managed to score a free family pass instead, which I was pretty happy about.

We got into the Sciencecentre, which was pretty busy with a school excursion, and the girls started checking it out. The problem was they both wanted to call the shots on what they did, and didn't like not getting their way. Once I instituted some turn-taking, everything went much more smoothly, and they had a great time.

We had some lunch in the cafe and then spent some more time in the museum itself before heading for the bus. We narrowly missed a bus 30 minutes earlier than the one I was aiming for, so I asked the girls if they wanted to take the CityCat instead. Megan was very keen for that, so we walked over to Southbank and caught the CityCat.

I was half expecting Zoe to want to be carried back from the CityCat stop, but she was good with walking back. Again, some turn taking as to who was the "leader" walking back helped keep the peace.

I had to get over to Chandler for Zoe's new potential swim school to assess her for level placement, so I dropped Megan off on the way, and we got to the pool just in time.

Zoe did me very proud and did a fantastic job of swimming, and was placed in the second-highest level of their learn to swim program. We also ran into her friend from Kindergarten, Vaeda, who was killing time while her brother had a swim class. So Zoe and Vaeda ended up splashing around in the splash pool for a while after her assessment.

Once I'd managed to extract Zoe from the splash pool, and got her changed, we headed straight back to Sarah's place to drop her off. So we pretty much spent the entire day out of the house. Zoe and Megan had a good day together, and I managed to figure out pretty quickly how to keep the peace.

Richard Jones: When testing goes bad

Thu, 2014-08-28 19:25

I've recently started working on a large, mature code base (some 65,000 lines of Python code). It has 1048 unit tests implemented in the standard unittest.TestCase fashion using the mox framework for mocking support (I'm not surprised you've not heard of it).

Recently I fixed a bug which was causing a user interface panel to display when it shouldn't have been. The fix basically amounts to a couple of lines of code added to the panel in question:

+    def can_access(self, context):
+        # extend basic permission-based check with a check to see whether
+        # the Aggregates extension is even enabled in nova
+        if not nova.extension_supported('Aggregates', context['request']):
+            return False
+        return super(Aggregates, self).can_access(context)

When I ran the unit test suite, I discovered to my horror that 498 of the 1048 tests now failed. The reason for this is that the can_access() method here is called as a side-effect of those 498 tests, and the nova.extension_supported call (which is a REST call under the hood) needed to be mocked correctly to support it being called.

I quickly discovered that given the size of the test suite, and the testing tools used, each of those 498 tests must be fixed by hand, one at a time (if I'm lucky, some of them can be knocked off two at a time).

The main cause is mox's mocking of callables like the one above, which enforces the order in which those callables are invoked. It also enforces that the calls are made at all (uncalled mocks are treated as test failures).

This means there is no possibility of providing a blanket mock for "nova.extension_supported". Tests with existing calls to that API need careful attention to ensure the ordering is correct. Tests which don't result in the side-effect call to the above method will raise an error, so even adding a mock setup in a TestCase.setUp() doesn't work in most cases.

It doesn't help that the codebase is so large, and has been developed by so many people over years. Mocking isn't consistently implemented; even the basic structure of tests in TestCases is inconsistent.

It's worth noting that the ordering check that mox provides is never used as far as I can tell in this codebase. I haven't sighted an example of multiple calls to the same mocked API without the additional use of the mox InAnyOrder() modifier. mox does not provide a mechanism to turn the ordering check off completely.

The pretend library (my go-to for stubbing) splits out the mocking step and the verification of calls so the ordering will only be enforced if you deem it absolutely necessary.
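
For comparison, here is roughly what a blanket stub for that call looks like with pretend: the stub records calls without enforcing any ordering, and verification is a separate, opt-in step. The request stub below just stands in for the real request object:

from pretend import stub, call_recorder, call

request = stub()

# blanket stub: always report the extension as supported, and record each call
nova = stub(extension_supported=call_recorder(lambda name, req: True))

# ... exercise the code under test, which ends up calling ...
nova.extension_supported('Aggregates', request)

# verify only where it matters, with no ordering constraint
assert call('Aggregates', request) in nova.extension_supported.calls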

The choice to use unittest-style TestCase classes makes managing fixtures much more difficult (it becomes a nightmare of classes and mixins and setUp() super() calls or alternatively a nightmare of mixin classes and multiple explicit setup calls in test bodies). This is exacerbated by the test suite in question introducing its own mock-generating decorator which will generate a mock, but again leaves the implementation of the mocking to the test cases. py.test's fixtures are a far superior mechanism for managing mocking fixtures, allowing simpler, central creation of mocks and overriding of them through fixture dependencies.
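
A minimal sketch of what such a central fixture could look like (the dashboard.panels.aggregates module path is invented for the example; the real code base would have its own layout):

# conftest.py
import pytest

import dashboard.panels.aggregates as aggregates  # hypothetical module path

@pytest.fixture
def extensions_enabled(monkeypatch):
    # blanket stub: report every nova extension as supported
    monkeypatch.setattr(aggregates.nova, 'extension_supported',
                        lambda name, request: True)

# in a test module, any test that wants the stub simply asks for the fixture:
#
# def test_panel_is_shown(extensions_enabled):
#     ...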

The result is that I spent some time working through some of the test suite and discovered that in an afternoon I could fix about 10% of the failing tests. I have decided that spending a week fixing the tests for my 5 line bug fix is just not worth it, and I've withdrawn the patch.

Andrew Pollock: [life] Day 210: Running and a picnic, with a play date and some rain

Wed, 2014-08-27 19:25

I had a rotten night's sleep last night. Zoe woke up briefly around midnight wanting a cuddle, and then I woke up again at around 3am and couldn't get back to sleep. I'm surprised I'm not more trashed, really.

It was a nice day today, so I made a picnic lunch, and we headed out to Minnippi Parklands to do a run with Zoe in the jogging stroller. It was around 10am by the time we arrived, and I had grand plans of running 10 km. I ran out of steam after about 3.5 km, conveniently at the "Rocket Park" at Carindale, which Zoe's been to a few times before.

So we stopped there for a bit of a breather, and then I ran back again for another 3 km or so, in a slightly different route, before I again ran out of puff, and walked the rest of the way back.

We then proceeded to have our picnic lunch and a bit of a play, before I dropped her off at Megan's house for a play while I chaired the PAG meeting at Kindergarten.

After that, and extracting Zoe, which is never a quick task, we headed home to get ready for swim class. It started to rain and look a bit thundery, and as we arrived at swim class we were informed that lessons were canceled, so we turned around and headed back home.

Zoe watched a bit of TV and then Sarah arrived to pick her up. I'm going to knock myself out with a variety of drugs tonight and hope I get a good night's sleep with minimum of cold symptoms.

Maxim Zakharov: Central Sydney WordPress Meetup: E-mail marketing

Wed, 2014-08-27 02:25

Andrew Beeston from Clicky! speaks about email marketing at Central Sydney WordPress meetup:

Andrew Pollock: [life] Day 209: Startup stuff, Kindergarten, tennis and a play date

Tue, 2014-08-26 22:25

Last night was not a good night for sleep. I woke up around 12:30am for some reason, and then Zoe woke up around 2:30am (why is it always 2:30am?), but I managed to get her to go back to bed in her own bed. I was not feeling very chipper this morning.

Today was photo day at Kindergarten, so I spent some extra time braiding Zoe's hair (at her request) before we headed to Kindergarten.

When I got home, I got stuck into my real estate license assessment and made a lot of progress on the current unit today. I also mixed in some research on another idea I'm running with at the moment, which I'm very excited about.

I biked to Kindergarten to pick Zoe up, and managed to get her all sorted out in time for her tennis class, and she did the full class without any interruptions.

After tennis, we went to Megan's house for a bit. As we were leaving, her neighbour asked if we could help video one of her daughters doing the ALS ice bucket challenge thing, so we got a bit waylaid doing that, before we got home.

I managed to get Zoe down to bed a bit early tonight. My cold is really kicking my butt today. I hope we both sleep well tonight.

BlueHackers: About your breakfast

Tue, 2014-08-26 13:00

We know that eating well (good nutritional balance) and at the right times is good for your mental as well as your physical health.

There’s some new research out on breakfast. The article I spotted (Breakfast no longer ‘most important meal of the day’ | SBS) goes a bit popular and funny on it, so I’ll phrase it independently in an attempt to get the real information out.

One of the researchers makes the point that skipping breakfast is not the same as deferring. So consider the reason, are you going to eat properly a bit later, or are you not eating at all?

When you do have breakfast, note that really most cereals contain an atrocious amount of sugar (and other carbs) that you can’t realistically burn off even with a hard day’s work. And from my own personal observation, there’s often way too much salt in there also. Check out Kellogg’s Cornflakes for a neat example of way-too-much-salt.

Basically, the research comes back to the fact that just eating is not the point, it’s what you eat that actually really does matter.

What do you have for breakfast, and at what point/time in your day?

Tridge on UAVs: APM:Rover 2.46 released

Tue, 2014-08-26 09:34

The ardupilot development team is proud to announce the release of version 2.46 of APM:Rover. This is a major release with a lot of new features and bug fixes.



This release is based on a lot of development and testing that happened prior to the AVC competition where APM based vehicles performed very well.



Full changes list for this release:

  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported to companion boards.
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added logging of power status on Pixhawk
  • added PIVOT_TURN_ANGLE parameter for pivot based turns on skid steering rovers
  • lots of improvements to the EKF support for Rover, thanks to Paul Riseborough and testing from Tom Coyle. Using the EKF can greatly improve navigation accuracy for fast rovers. Enable with AHRS_EKF_USE=1.
  • improved support for dual GPS on Pixhawk. Using a 2nd GPS can greatly improve performance when in an area with an obstructed view of the sky
  • support for up to 14 RC channels on Pixhawk
  • added BRAKING_PERCENT and BRAKING_SPEEDERR parameters for better braking support when cornering
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints on Pixhawk.
  • improved reporting of magnetometer and barometer errors to the GCS
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control



Many thanks to everyone who contributed to this release, especially Tom Coyle and Linus Penzlien for their excellent testing and feedback.



Happy driving!

Tridge on UAVs: APM:Plane 3.1.0 released

Tue, 2014-08-26 08:33

The ardupilot development team is proud to announce the release of version 3.1.0 of APM:Plane. This is a major release with a lot of new features and bug fixes.



The biggest change in this release is the addition of automatic terrain following. Terrain following allows the autopilot to guide the aircraft over varying terrain at a constant height above the ground using an on-board terrain database. Uses include safer RTL, more accurate and easier photo mapping and much easier mission planning in hilly areas.



There have also been a lot of updates to auto takeoff, especially for tail dragger aircraft. It is now much easier to get the steering right for a tail dragger on takeoff.



Another big change is the support of Linux based autopilots, starting with the PXF cape for the BeagleBoneBlack and the Erle robotics autopilot.



Full list of changes in this release:

  • added terrain following support. See http://plane.ardupilot.com/wiki/common- ... following/
  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported to companion boards.
  • added new takeoff code, including new parameters TKOFF_TDRAG_ELEV, TKOFF_TDRAG_SPD1, TKOFF_ROTATE_SPD, TKOFF_THR_SLEW and TKOFF_THR_MAX. This gives fine grained control of auto takeoff for tail dragger aircraft.
  • overhauled glide slope code to fix glide slope handling in many situations. This makes transitions between different altitudes much smoother.
  • prevent early waypoint completion for straight ahead waypoints. This makes for more accurate servo release at specific locations, for applications such as dropping water bottles.
  • added MAV_CMD_DO_INVERTED_FLIGHT command in missions, to change from normal to inverted flight in AUTO (thanks to Philip Rowse for testing of this feature).
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)

  • added new STAB_PITCH_DOWN parameter to improve low throttle behaviour in FBWA mode, making a stall less likely (thanks to Jack Pittar for the idea).
  • added GLIDE_SLOPE_MIN parameter for better handling of small altitude deviations in AUTO. This makes for more accurate altitude tracking in AUTO.
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port. See http://diydrones.com/profiles/blogs/fir ... t-on-linux for details.
  • prevent cross-tracking on some waypoint types, such as when initially entering AUTO or when the user commands a change of target waypoint.
  • fixed servo demo on startup (thanks to Klrill-ka)
  • added AFS (Advanced Failsafe) support on 32 bit boards by default. See http://plane.ardupilot.com/wiki/advance ... iguration/
  • added support for monitoring voltage of a 2nd battery via BATTERY2 MAVLink message
  • added airspeed sensor support in HIL
  • fixed HIL on APM2. HIL should now work again on all boards.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints, 50 rally points and 84 fence points on Pixhawk.
  • improved steering on landing, so the plane is actively steered right through the landing.
  • improved reporting of magnetometer and barometer errors to the GCS
  • added FBWA_TDRAG_CHAN parameter, for easier FBWA takeoffs of tail draggers, and better testing of steering tuning for auto takeoff.
  • fixed failsafe pass through with no RC input (thanks to Klrill-ka)
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • improved simulation of rangefinders in SITL
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control

Many thanks to everyone who contributed to this release, especially our beta testers Marco, Paul, Philip and Iam.



Happy flying!

Andrew Pollock: [life] Day 208: Kindergarten, running, insurance assessments, home improvements, BJJ and a babyccino

Mon, 2014-08-25 22:25

Today was a pretty busy day. I started off with a run, and managed to do 6 km this morning. I feel like I'm coming down with yet another cold, so I'm happy that I managed to get out and run at all, let alone last 6 km.

Next up, I had to get the car assessed after a minor rear-end collision it suffered on Saturday night (nobody was hurt, I wasn't at fault). I was really impressed with NRMA Insurance's claim processing, it was all very smooth. I've since learned that they even have a smartphone app for ensuring that one gets all the pertinent information after an accident.

I dropped into Bunnings on the way home to pick up a sliding rubbish bin. I've been wanting one of these ever since I moved into my place, and finally got around to doing it. I also grabbed some LED bulbs from Beacon.

After I got home, I spent the morning installing and reinstalling the rubbish bin (I suck at getting these things right first go) and swapping light bulbs around. Overall, it was a very satisfying morning scratching a few itches around the house that had been bugging me for a while.

I biked over to Kindergarten for pick up again, and we biked back home and didn't have a lot of time before we had to head out for Zoe's second freebie Brazilian Jiu-Jitsu class. This class was excellent, there were 8 kids in total, and 2 other girls. Zoe got to do some "rolling" with a partner. It was so cute to watch. They just had to try and block each other from touching their knees, and if they failed, they had to drop to the floor and hop back up again. Each of Zoe's partners was very civilized, and they took turns at failing to block.

Zoe was pretty tired after the class. It was definitely the most strenuous class she's had to date, and she briefly fell asleep in the car on the way home. We had to make a stop at the Garage to grab some mushrooms for the mushroom soup we were making for dinner.

Zoe helped me make the mushroom soup, and after dinner we popped out for a babyccino. It's been a while since we've had a post-dinner one, and it was nice to do it again. We also managed to get through the entire afternoon without any TV, which I thought was excellent.

Gary Pendergast: My Media Server

Mon, 2014-08-25 16:26

Over the years, I’ve experimented with a bunch of different methods for media servers, and I think I’ve finally arrived at something that works well for me. Here are the details:

The Server

An old Dell Zino HD I had lying around, running Windows 7. Pretty much any server will be sufficient; this is just the one I had available. Dell doesn’t sell micro-PCs anymore, so just choose your favourite brand that sells something small and with low power requirements. The main things you need from it are a reasonable processor (fast enough to handle transcoding a few video streams in at least realtime), and lots of HD space. I don’t bother with RAID, because I won’t be sad about losing videos that I can easily re-download (the internet is my backup service).

Downloading

I make no excuses, nor apologies for downloading movies and TV shows in a manner that some may describe as involving “copyright violation”.

If you’re in a similar position, there are plenty of BitTorrent sites that allow you to register and add videos to a personal RSS feed. Most BitTorrent clients can then subscribe to that feed, and automatically download anything added to it. Some sites even allow you to subscribe to collections, so you can subscribe to a TV show at the start of the season, and automatically get new episodes as soon as they arrive.

For your BitTorrent client, there are two features you need: the ability to subscribe to an RSS feed, and the ability to automatically run a command when the download finishes. I’ve found qBittorrent to be a good option for this.

Sorting

Once a file is downloaded, you need to sort it. By using a standard file layout, you have a much easier time of loading it into your media server later. For automatically sorting your files when they download, nothing compares to the amazing FileBot, which will automatically grab info about the download from TheMovieDB or TheTVDB, and pass it on to your media server. It’s entirely scriptable, but you don’t need to worry about that, because there’s already a great script to do all this for you, called Automated Media Center (AMC). The initial setup for this was a bit annoying, so here’s the command I use (you can tweak the file locations for your own server, and you’ll need to fix the %n if you use something other than qBittorrent):

"C:/Program Files/FileBot/filebot" -script fn:amc --output "C:/Media/Other" --log-file C:/Media/amc.log --action hardlink --conflict override -non-strict --def "seriesFormat=C:/Media/TV/{n}/{'S'+s}/{fn}" "movieFormat=C:/Media/Movies/{n} {y}/{fn}" excludeList=C:/Media/amc-input.txt plex=localhost "ut_dir=C:/Media/Downloads/%n" "ut_kind=multi" "ut_title=%n"

Media Server

Plex is the answer to this question. It looks great, it’ll automatically download extra information about your media, and it has really nice mobile apps for remote control. Extra features include offline syncing to your mobile device, so you can take your media when you’re flying, and Chromecast support so you can watch everything on your TV.

The FileBot command above will automatically tell Plex that a new file has arrived, which is great if you choose to have your media stored on a NAS (Plex may not be able to automatically watch a directory on a NAS for when new files are added).

Backup

Having a local server is great for keeping a local backup of things that do matter – your photos and documents, for example. I use CrashPlan to sync my most important things to my server, so I have a copy immediately available if my laptop dies. I also use CrashPlan’s remote backup service to keep an offsite backup of everything.

Conclusion

While I’ve enjoyed figuring out how to get this all working smoothly, I’d love to be able to pay a monthly fee for an Rdio or Spotify style service, where I get the latest movies and TV shows as soon as they’re available. If you’re wondering what your next startup should be, get onto that.