Planet Linux Australia


BlueHackers: Interactive Self-Care Guide

Sun, 2015-09-27 13:20

Interesting find:

[…] interactive flow chart for people who struggle with self care, executive dysfunction, and/or who have trouble reading internal signals. It’s designed to take as much of the weight off of you as possible, so each decision is very easy and doesn’t require much judgement.

Some readers may find it of use. I think it’d be useful to have the source code for this available so that a broad group of people can tweak and improve it, or make personalised versions.

James Purser: New episode out this weekend

Fri, 2015-09-25 00:29

So I finally managed to catch up with Chris Arnade last Friday night for a chat about his Faces of Addiction project. I don't think I was at my best (but after a year off, and with the interview at 11pm, I'm going to cut myself a little slack).

I'll be putting the episode out this weekend and will let everyone know when it's up.


OpenSTEM: Trolling Self-Driving Cars

Thu, 2015-09-24 19:29

XKCD’s Randall nails it beautifully, as usual…

Sure, you can code around this particular "attack vector", but there are infinite possibilities; these are things we do have to consider along the way.

Michael Still: How we got to test_init_instance_retries_reboot_pending_soft_became_hard

Thu, 2015-09-24 17:28
I've been asked some questions about a recent change to nova that I am responsible for, and I thought it would be easier to address those in this format than trying to explain what's happening in IRC. That way whenever someone compliments me on possibly the longest unit test name ever written, I can point them here.

Let's start with some definitions. What is the difference between a soft reboot and a hard reboot in Nova? The short answer is that a soft reboot gives the operating system running in the instance an opportunity to respond to an ACPI power event gracefully before the rug is pulled out from under the instance, whereas a hard reboot just punches the instance in the face immediately.

There is a bit more complexity than that of course, because this is OpenStack. A hard reboot also re-fetches image metadata and rebuilds the XML description of the instance that we hand to libvirt. It also re-populates any missing backing files, ensures that the networking is configured correctly, and then boots the instance again. In other words, a hard reboot is kind of like an initial instance boot, in that it makes fewer assumptions about how much you can trust the current state of the instance on the hypervisor node. Finally, a soft reboot which fails (probably because the instance operating system didn't respond to the ACPI event in a timely manner) is turned into a hard reboot after libvirt.wait_soft_reboot_seconds. So we already perform hard reboots in certain error cases when a user asked for a soft reboot.
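That escalation path can be sketched in miniature. This is purely illustrative Python; FakeGuest and the timeout constant are hypothetical stand-ins, not nova or libvirt code:

```python
import time

WAIT_SOFT_REBOOT_SECONDS = 2  # stands in for libvirt.wait_soft_reboot_seconds

class FakeGuest:
    """Minimal stand-in for a guest VM. responds_to_acpi controls
    whether a soft reboot can ever succeed."""
    def __init__(self, responds_to_acpi):
        self.responds_to_acpi = responds_to_acpi
        self.running = True

    def send_acpi_shutdown(self):
        if self.responds_to_acpi:
            self.running = False

    def force_off(self):
        self.running = False

    def start(self):
        self.running = True

def reboot(guest, requested_type="SOFT"):
    """Try a graceful ACPI shutdown first; escalate to a hard reboot
    if the guest hasn't shut down within the timeout."""
    if requested_type == "SOFT":
        guest.send_acpi_shutdown()  # ask the guest politely
        deadline = time.monotonic() + WAIT_SOFT_REBOOT_SECONDS
        while time.monotonic() < deadline:
            if not guest.running:
                guest.start()       # guest complied; boot it again
                return "SOFT"
            time.sleep(0.1)
        # Timeout expired: fall through to the hard path.
    guest.force_off()               # the "punch in the face"
    guest.start()
    return "HARD"
```

Note that both paths end the same way, a stop followed by a start; only the patience differs.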

It's important to note that the actual reboot mechanism is similar in both cases though -- it's just how patient we are and what side effects we create that change. In libvirt they both end up as a shutdown of the virtual machine followed by a startup.

Bug 1072751 reported an interesting edge case with a soft reboot though. If nova-compute crashes after shutting down the virtual machine, but before the virtual machine is started again, then the instance is left in an inconsistent state. We can demonstrate this with a devstack installation:

  • Setup the right version of nova:

        cd /opt/stack/nova
        git checkout dc6942c1218279097cda98bb5ebe4f273720115d

  • Patch nova so it crashes on a soft reboot:

        cat - > /tmp/patch <<EOF
        > diff --git a/nova/virt/libvirt/ b/nova/virt/libvirt/
        > index ce19f22..6c565be 100644
        > --- a/nova/virt/libvirt/
        > +++ b/nova/virt/libvirt/
        > @@ -34,6 +34,7 @@ import itertools
        >  import mmap
        >  import operator
        >  import os
        > +import sys
        >  import shutil
        >  import tempfile
        >  import time
        > @@ -2082,6 +2083,10 @@ class LibvirtDriver(driver.ComputeDriver):
        >              # is already shutdown.
        >              if state == power_state.RUNNING:
        >                  dom.shutdown()
        > +
        > +            # NOTE(mikal): temporarily crash
        > +            sys.exit(1)
        > +
        >              # NOTE(vish): This actually could take slightly longer than the
        >              # FLAG defines depending on how long the get_info
        >              # call takes to return.
        > EOF
        patch -p1 < /tmp/patch

    Then restart nova-compute inside devstack to make sure you're running the patched version.

  • Boot a victim instance:

        cd ~/devstack
        source openrc admin
        glance image-list
        nova boot --image=cirros-0.3.4-x86_64-uec --flavor=1 foo

  • Soft reboot, and verify it's gone:

        nova list
        nova reboot cacf99de-117d-4ab7-bd12-32cc2265e906
        sudo virsh list

virsh list should now show no virtual machines running, as nova-compute crashed before it could start the instance again. However, nova-api knows that the instance should be rebooting:

    $ nova list
    +--------------------------------------+------+--------+----------------+-------------+----------+
    | ID                                   | Name | Status | Task State     | Power State | Networks |
    +--------------------------------------+------+--------+----------------+-------------+----------+
    | cacf99de-117d-4ab7-bd12-32cc2265e906 | foo  | REBOOT | reboot_started | Running     | private= |
    +--------------------------------------+------+--------+----------------+-------------+----------+

Start nova-compute again; it detects the missing instance on boot and tries to start it up again:
    sg libvirtd '/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf' \
    > & echo $! >/opt/stack/status/stack/; fg || \
    > echo "n-cpu failed to start" | tee "/opt/stack/status/stack/n-cpu.failure"

    [...snip...]

    Traceback (most recent call last):
      File "/opt/stack/nova/nova/conductor/", line 444, in _object_dispatch
        return getattr(target, method)(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/", line 213, in wrapper
        return fn(self, *args, **kwargs)
      File "/opt/stack/nova/nova/objects/", line 728, in save
        columns_to_join=_expected_cols(expected_attrs))
      File "/opt/stack/nova/nova/db/", line 764, in instance_update_and_get_original
        expected=expected)
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 216, in wrapper
        return f(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/oslo_db/", line 146, in wrapper
        ectxt.value = e.inner_exc
      File "/usr/local/lib/python2.7/dist-packages/oslo_utils/", line 195, in __exit__
        six.reraise(self.type_, self.value, self.tb)
      File "/usr/local/lib/python2.7/dist-packages/oslo_db/", line 136, in wrapper
        return f(*args, **kwargs)
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 2464, in instance_update_and_get_original
        expected, original=instance_ref))
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 2602, in _instance_update
        raise exc(**exc_props)
    UnexpectedTaskStateError: Conflict updating instance cacf99de-117d-4ab7-bd12-32cc2265e906. Expected: {'task_state': [u'rebooting_hard', u'reboot_pending_hard', u'reboot_started_hard']}. Actual: {'task_state': u'reboot_started'}

So what happened here? This is a bit confusing, because we asked for a soft reboot of the instance, yet the error we see is from an attempted hard reboot: we're trying to update an instance object, and all the task states we expect the instance to be in relate to a hard reboot, but the task state it's actually in is for a soft reboot.

We need to take a tour of the compute manager code to understand what happened here. nova-compute is implemented at nova/compute/ in the nova code base. Specifically, ComputeVirtAPI.init_host() sets up the service to start handling compute requests for a specific hypervisor node. As part of startup, this method calls ComputeVirtAPI._init_instance() once per instance on the hypervisor node. This method tries to do some sanity checking for each instance that nova thinks should be on the hypervisor:

  • Detecting if the instance was part of a failed evacuation.
  • Detecting instances that are soft deleted, deleting, or in an error state, and ignoring them apart from a log message.
  • Detecting instances which we think are fully deleted, but which aren't in fact gone.
  • Moving instances we thought were booting, but which never completed, into an error state. This happens if nova-compute crashes during the instance startup process.
  • Similarly, moving instances which were rebuilding into an error state.
  • Clearing the task state for uncompleted tasks like snapshots or preparing for resize.
  • Finishing the deletion of instances which were only partially deleted last time we saw them.
  • And finally, if the instance should be running but isn't, trying to reboot it to get it running.
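Condensed into code, those checks look roughly like this. The state names and return values here are made-up simplifications for illustration, not the real ComputeVirtAPI._init_instance():

```python
def init_instance_action(task_state, power_state, vm_state):
    """Illustrative decision tree for an instance found at startup.
    All state names and actions are hypothetical simplifications."""
    if vm_state in ("soft-delete", "deleting", "error"):
        return "log-and-ignore"
    if task_state in ("spawning", "block_device_mapping", "networking"):
        return "set-error"       # boot never completed before the crash
    if task_state == "rebuilding":
        return "set-error"
    if task_state in ("image_snapshot", "resize_prep"):
        return "clear-task-state"
    if vm_state == "active" and power_state != "running":
        return "reboot"          # the case this bug is about
    return "nothing"
```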

It is this final state which is relevant in this case -- we think the instance should be running and it's not, so we're going to reboot it. We do that by calling ComputeVirtAPI.reboot_instance(). The code which does this work looks like this:

    try_reboot, reboot_type = self._retry_reboot(context, instance)
    current_power_state = self._get_power_state(context, instance)

    if try_reboot:
        LOG.debug("Instance in transitional state (%(task_state)s) at "
                  "start-up and power state is (%(power_state)s), "
                  "triggering reboot",
                  {'task_state': instance.task_state,
                   'power_state': current_power_state},
                  instance=instance)
        self.reboot_instance(context, instance, block_device_info=None,
                             reboot_type=reboot_type)
        return

    [...snip...]

    def _retry_reboot(self, context, instance):
        current_power_state = self._get_power_state(context, instance)
        current_task_state = instance.task_state
        retry_reboot = False
        reboot_type = compute_utils.get_reboot_type(current_task_state,
                                                    current_power_state)

        pending_soft = (current_task_state == task_states.REBOOT_PENDING and
                        instance.vm_state in vm_states.ALLOW_SOFT_REBOOT)
        pending_hard = (current_task_state == task_states.REBOOT_PENDING_HARD
                        and instance.vm_state in vm_states.ALLOW_HARD_REBOOT)
        started_not_running = (current_task_state in
                               [task_states.REBOOT_STARTED,
                                task_states.REBOOT_STARTED_HARD] and
                               current_power_state != power_state.RUNNING)

        if pending_soft or pending_hard or started_not_running:
            retry_reboot = True

        return retry_reboot, reboot_type

So, we ask ComputeVirtAPI._retry_reboot() if a reboot is required, and if so what type. ComputeVirtAPI._retry_reboot() just uses nova.compute.utils.get_reboot_type() (aliased as compute_utils.get_reboot_type) to determine what type of reboot to use. This is the crux of the matter. Read on for a surprising discovery!

nova.compute.utils.get_reboot_type() looks like this:

    def get_reboot_type(task_state, current_power_state):
        """Checks if the current instance state requires a HARD reboot."""
        if current_power_state != power_state.RUNNING:
            return 'HARD'
        soft_types = [task_states.REBOOT_STARTED, task_states.REBOOT_PENDING,
                      task_states.REBOOTING]
        reboot_type = 'SOFT' if task_state in soft_types else 'HARD'
        return reboot_type

So, after all that, it comes down to this: if the instance isn't running, then it's a hard reboot. In our case we shut down the instance but hadn't started it yet, so it's not running, and this will therefore be a hard reboot. This is where our problem lies -- we chose a hard reboot. The code doesn't blow up until later though, when we try to do the reboot itself.
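A standalone restatement of that logic makes the trap easy to see (constants inlined for illustration; the real function is nova.compute.utils.get_reboot_type()):

```python
RUNNING = "running"  # stands in for power_state.RUNNING

# Task states that still qualify for a SOFT reboot, per the code above.
SOFT_TYPES = ["reboot_started", "reboot_pending", "rebooting"]

def get_reboot_type(task_state, current_power_state):
    """Any instance that isn't currently powered on gets a HARD reboot,
    no matter what kind of reboot the user originally asked for."""
    if current_power_state != RUNNING:
        return "HARD"
    return "SOFT" if task_state in SOFT_TYPES else "HARD"

# Our exact scenario: the soft reboot crashed between shutdown and start,
# so the task state is soft but the domain is powered off.
print(get_reboot_type("reboot_started", "shutdown"))  # HARD, not SOFT
```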

    @wrap_exception()
    @reverts_task_state
    @wrap_instance_event
    @wrap_instance_fault
    def reboot_instance(self, context, instance, block_device_info,
                        reboot_type):
        """Reboot an instance on this host."""
        # acknowledge the request made it to the manager
        if reboot_type == "SOFT":
            instance.task_state = task_states.REBOOT_PENDING
            expected_states = (task_states.REBOOTING,
                               task_states.REBOOT_PENDING,
                               task_states.REBOOT_STARTED)
        else:
            instance.task_state = task_states.REBOOT_PENDING_HARD
            expected_states = (task_states.REBOOTING_HARD,
                               task_states.REBOOT_PENDING_HARD,
                               task_states.REBOOT_STARTED_HARD)

        context = context.elevated()
        LOG.info(_LI("Rebooting instance"), context=context,
                 instance=instance)

        block_device_info = self._get_instance_block_device_info(context,
                                                                 instance)
        network_info = self.network_api.get_instance_nw_info(context,
                                                             instance)

        self._notify_about_instance_usage(context, instance, "reboot.start")

        instance.power_state = self._get_power_state(context, instance)

        [...snip...]

And there's our problem. We have a reboot_type of HARD, which means we set the expected_states to those matching a hard reboot. However, the state the instance is actually in will be one correlating to a soft reboot, because that's what the user requested. We therefore experience an exception when we try to save our changes to the instance. This is the exception we saw above.

The fix in my patch is simply to change the current task state for an instance in this situation to one matching a hard reboot. It all just works then.
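A toy compare-and-swap shows both the conflict and the shape of the fix (hypothetical helper names; in nova the check happens when instance.save() hits the database layer):

```python
class UnexpectedTaskStateError(Exception):
    pass

HARD_STATES = ("rebooting_hard", "reboot_pending_hard", "reboot_started_hard")

def save_task_state(current, new, expected):
    """Toy instance.save(): the update only succeeds if the stored
    task state is one of the states we expected to find."""
    if current not in expected:
        raise UnexpectedTaskStateError(
            "Expected: %s. Actual: %s" % (list(expected), current))
    return new

# Before the fix: a hard reboot checks for hard states, but the crashed
# soft reboot left the instance in 'reboot_started', so the save blows up.
try:
    save_task_state("reboot_started", "reboot_pending_hard", HARD_STATES)
except UnexpectedTaskStateError as e:
    print(e)

# The fix: first translate the stale soft state to its hard equivalent.
current = "reboot_started"
if current == "reboot_started":
    current = "reboot_started_hard"
print(save_task_state(current, "reboot_pending_hard", HARD_STATES))
```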

So why do we decide to use a hard reboot if the current power state is not RUNNING? This code was introduced in this patch and there isn't much discussion in the review comments as to why a hard reboot is the right choice here. That said, we already fall back to a hard reboot in error cases of a soft reboot inside the libvirt driver, and a hard reboot requires less trust of the surrounding state for the instance (block device mappings, networks and all those side effects mentioned at the very beginning), so I think it is the right call.

In conclusion, we use a hard reboot for soft reboots that fail, and a nova-compute crash during a soft reboot counts as one of those failure cases. So, when nova-compute detects a failed soft reboot, it converts it to a hard reboot and tries again.

Tags for this post: openstack reboot nova nova-compute

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


BlueHackers: Suicide doesn’t take away the pain, it gives it to someone else

Thu, 2015-09-24 15:21

This is something that I feel quite strongly about. Both of my parents tried to commit suicide when I was young, at different times and stages of my life. The first attempt was when I was about 11; I don't remember too much about it, as there was a lot of pain flying around the family at that time and I was probably shielded from the details. The second parent (by then long divorced from the other) tried when I was 21 and away at uni in a different city. That one I remember vividly, even though I wasn't there.

My reactions to the second were still those of a child. Perhaps when it’s a parent, one’s reactions are always those of a child. For me the most devastating thought was a purely selfish one (as fits a child) “Do I mean that little to them? Am I not even worth staying alive for?” The pain of that thought was overwhelming.

At the time I was young, saw myself as an optimist and simply could not relate in any way to the amount of pain that would bring one to such an action. I was angry. I described suicide as “the most selfish act anyone could do”.

Now decades of time and a world of life experience later, I have stared into that dark abyss myself and I know the pain that leads one there. I know how all-encompassing the pain and darkness seems and how the needs of others fade. An end to the pain is all one wants and it seems inconceivable that one’s life has any relevance any more. In fact, one can even argue to oneself that others would be better off without one there.

In those dark times it was the certain knowledge of that pain I had experienced myself as one (almost) left behind that kept me from that road more firmly than anything else. By then I was a parent myself and there was just no way I was going to send my children the message that they meant so little to me they were not even worth living for.  Although living seemed to be the hardest thing I could do, there was no hesitation that they were worth it.

And beyond the children there are always others. Others who will be affected by a suicide, no matter of whom. None of us is truly alone. We all have parents, we may have siblings. Even if all our family is gone and we feel we have no friends, it is likely that there are people who care. The person at the corner shop from whom you buy milk on weekends and who may think “should I have known? Is there anything I could have done?” Even if you can argue that there is no-one that would notice or care, let’s be frank, someone is going to have to deal with the body and winding up of financial and other affairs. And I’m sure it’s really going to make their day!

Whenever I hear about trains being delayed because of incidents on the track, I am immediately concerned for those on the train, not least the drivers. What have they ever done to that person to deserve the images that will now be impossible to erase from memory, which will haunt their nights and dark moments, and which may lead them to require therapy?

There are many people, working for many organisations, some sitting at telephones in shifts 24 hrs a day, who want more than anything else to help people wrestling with these dark issues. They care. They really do. About everyone.

Help is always available. So let's all acknowledge that suicide always causes pain to others.

Need help?

Tridge on UAVs: APM:Plane 3.4.0 released

Thu, 2015-09-24 12:32

The ArduPilot development team is proud to announce the release of version 3.4.0 of APM:Plane. This is a major release with a lot of changes so please read the notes carefully!

First release with EKF by default

This is also the first release that enables the EKF (Extended Kalman Filter) for attitude and position estimation by default. This has been in development for a long time, and significantly improves flight performance. You can still disable the EKF if you want to, using the AHRS_EKF_USE parameter, but it is strongly recommended that you use the EKF. Note that if an issue is discovered with the EKF in flight it will automatically be disabled and the older DCM system will be used instead. That should be very rare.

In order to use the EKF we need to be a bit more careful about the setup of the aircraft. That is why in the last release we enabled arming and pre-arm checks by default. Please don't disable the arming checks, they are there for very good reasons.

Last release with APM1/APM2 support

This will be the last major release that supports the old APM1/APM2 AVR based boards. We have finally run out of flash space and memory. In the last few releases we spent quite a bit of time trying to squeeze more and more into the small flash space of the APM1/APM2, but it had to end someday if ArduPilot is to continue to develop. I am open to the idea of someone else volunteering to keep doing development of APM1/APM2 so if you have the skills and inclination do please get in touch. Otherwise I will only do small point release changes for major bugs.

Even to get this release onto the APM1/APM2 we had to make sacrifices in terms of functionality. The APM1/APM2 release is missing quite a few features that are on the Pixhawk and other boards. For example:

  • no rangefinder support for landing
  • no terrain following
  • no EKF support
  • no camera control
  • no CLI support
  • no advanced failsafe support
  • no HIL support (sorry!)
  • support for far fewer GPS types

That is just the most obvious list of major features missing on APM1/APM2. There are also numerous other smaller things where we needed to take shortcuts on the APM1/APM2. Some of these features were available on older APM1/APM2 releases but needed to be removed to allow us to squeeze the new release onto the board. So if you are happy with a previous release on your APM2, and want a feature that is in that older release and not in this one, then perhaps you shouldn't upgrade.

PID Tuning

While most people are happy with autotune to tune the PIDs for their planes, it is nice to also be able to do fine tuning by hand. This release includes new dataflash and MAVLink messages to help with that tuning. You can now see the individual contributions of the P, I and D components of each PID in the logs, allowing you to get a much better picture of the performance.

A simple application of this new tuning is that you can easily see if your trim is off. If the pitch I term is constantly contributing a significant positive factor, then you know that ArduPilot is having to constantly apply up elevator, which means your plane is nose heavy. The same goes for roll, and this can also be used to help tune your ground steering.

Vibration Logging

This release includes a lot more options for diagnosing vibration issues. You will notice new VIBRATION messages in MAVLink and VIBE messages in the dataflash logs. Those give you a good idea of your (unfiltered) vibration levels. For really detailed analysis you can set up your LOG_BITMASK to include raw logging, which gives you every accel and gyro sample on your Pixhawk. You can then do an FFT on the result and plot the distribution of vibration level with frequency. That is great for finding the cause of vibration issues. Note that you need a very fast microSD card for that to work!

Rudder Disarm

This is the first release that allows you to disarm using the rudder if you want to. It isn't enabled by default (due to the slight risk of accidentally disarming while doing aerobatics). You can enable it with the ARMING_RUDDER parameter by setting it to 2. It will only allow you to disarm if the autopilot thinks you are not flying at the time (thanks to the "is_flying" heuristics from Tom Pittenger).

More Sensors

This release includes support for a bunch more sensors. It now supports three different interfaces for the LightWare range of Lidars (serial, I2C and analog), and also supports the very nice Septentrio RTK dual-frequency GPS (the first dual-frequency GPS we have support for). It also supports the new "blue label" Lidar from Pulsed Light (both on I2C and PWM).

For the uBlox GPS, we now have a lot more configurability of the driver, with the ability to set the GNSS mode for different constellations. Also in the uBlox driver we support logging of the raw carrier phase and pseudo range data, which allows for post-flight RTK analysis with raw-capable receivers for really accurate photo missions.

Better Linux support

This release includes a lot of improvements to the Linux based autopilot boards, including the NavIO+, the PXF and ERLE boards, the BBBMini and the new RasPilot board. If you like the idea of flying with Linux then please try it out!

On-board compass calibrator

We also have a new on-board compass calibrator, which adds calibration for soft iron effects, allowing for much more accurate compass calibration. Support for starting the compass calibration from the various ground stations is still under development, but it looks like this will be a big improvement to compass calibration.

Lots of other changes!

The above list is just a taste of the changes that have gone into this release. Thousands of small changes have gone into this release with dozens of people contributing. Many thanks to everyone who helped!

Other key changes include:

  • fixed return point on geofence breach
  • enable messages for MAVLink gimbal support
  • use 64 bit timestamps in dataflash logs
  • added realtime PID tuning messages and PID logging
  • fixed a failure case for the px4 failsafe mixer
  • added DSM binding support on Pixhawk
  • added ALTITUDE_WAIT mission command
  • added vibration level logging
  • ignore low voltage failsafe while disarmed
  • added delta velocity and delta angle logging
  • fix LOITER_TO_ALT to verify headings towards waypoints within the loiter radius
  • allow rudder disarm based on ARMING_RUDDER parameter
  • fix default behaviour of flaps
  • prevent mode switch changes changing WP tracking
  • make TRAINING mode obey stall prevention roll limits
  • disable TRIM_RC_AT_START by default
  • fixed parameter documentation spelling errors
  • send MISSION_ITEM_REACHED messages on waypoint completion
  • fixed airspeed handling in SITL simulators
  • enable EKF by default on plane
  • Improve gyro bias learning rate for plane and rover
  • Allow switching primary GPS instance with 1 sat difference
  • added NSH over MAVLink support
  • added support for mpu9250 on pixhawk and pixhawk2
  • Add support for logging ublox RXM-RAWX messages
  • lots of updates to improve support for Linux based boards
  • added ORGN message in dataflash
  • added support for new "blue label" Lidar
  • switched to real hdop in uBlox driver
  • improved auto-config of uBlox
  • raise accel discrepancy arming threshold to 0.75
  • improved support for tcp and udp connections on Linux
  • switched to delta-velocity and delta-angles in DCM
  • improved detection of which accel to use in EKF
  • improved auto-detections of flow control on pixhawk UARTs
  • Failsafe actions are not executed if already on final approach or land.
  • Option to trigger GCS failsafe only in AUTO mode.
  • added climb/descend parameter to CONTINUE_AND_CHANGE_ALT
  • added HDOP to uavcan GPS driver
  • improved sending of autopilot version
  • prevent motor startup with bad throttle trim on reboot
  • log zero rangefinder distance when unhealthy
  • added PRU firmware files for BeagleBoneBlack port
  • fix for recent STORM32 gimbal support
  • changed sending of STATUSTEXT severity to use correct values
  • added new RSSI library with PWM input support
  • fixed MAVLink heading report for UAVCAN GPS
  • support LightWare I2C rangefinder on Linux
  • improved staging of parameters and formats on startup to dataflash
  • added new on-board compass calibrator
  • improved RCOutput code for NavIO port
  • added support for Septentrio GPS receiver
  • support DO_MOUNT_CONTROL via command-long interface
  • added CAM_RELAY_ON parameter
  • moved SKIP_GYRO_CAL functionality to INS_GYR_CAL
  • added detection of bad lidar settings for landing

Note that the documentation hasn't yet caught up with all the changes in this release. We are still working on that, but meanwhile if you see a feature that interests you and it isn't documented yet then please ask.

David Rowe: SNR and Eb/No Worked Example

Thu, 2015-09-24 09:29

German Hams Helmut and Alfred have been doing some fine work with FreeDV 700B at power levels as low as 50mW and SNRs down to 0dB over a 300km path. I thought it might be useful to show how SNR relates to Eb/No and Bit Error Rate (BER). Also, I keep having to work this out myself on scraps of paper, so it's nice to get it written down somewhere I can Google it.

This plot shows the Eb/No versus BER curves for a bunch of modems and channels. The curves show how much Eb/No we need for a certain Bit Error Rate (BER). Click for a larger version.

The lower three curves show the performance of modems in an AWGN channel – a channel that just has additive noise (like a very slow fading HF channel or VHF). The Blue curve just above the Red (ideal QPSK) is the cohpsk modem in an AWGN channel. Time for some math:

The energy per bit Eb = power / bit rate = S/Rb. The total noise power the demod sees is No (the noise power in a 1 Hz bandwidth) multiplied by the bandwidth B, so N = NoB. Re-arranging a bit we get:

    SNR = S/N = EbRb/(NoB)

or in dB:

    SNR(dB) = Eb/No(dB) + 10log10(Rb/B)

So for FreeDV 700B, the bit rate Rb = 700, B = 3000 Hz (for SNR in a 3000Hz bandwidth) so we get:

    SNR(dB) = Eb/No(dB) - 6.3

Now, say we need a BER of 2% or 0.02 for speech, the lower Blue curve says we need an Eb/No = 4dB, so we get:

    SNR = 4 – 6.3 = -2.3dB
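That arithmetic is easy to check in a few lines of Python (stdlib only; the 4dB Eb/No figure for a 2% BER is read off the plot above):

```python
from math import log10

def snr_db(ebno_db, bit_rate, bandwidth_hz):
    """SNR(dB) = Eb/No(dB) + 10*log10(Rb/B)."""
    return ebno_db + 10 * log10(bit_rate / float(bandwidth_hz))

# FreeDV 700B: Rb = 700 bit/s, measured in a B = 3000 Hz noise bandwidth.
print(round(10 * log10(700 / 3000.0), 1))  # bandwidth term: -6.3 dB
print(round(snr_db(4, 700, 3000), 1))      # SNR for 2% BER: -2.3 dB
```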

So if the modem is working down to "just" 0dB, we are about 2dB worse than theoretical. This is due to the extra bandwidth taken by the pilot symbols (which translates to 1.5dB), some implementation "loss" in the sync algorithms, and non-linearities in the system.

I thought it worth explaining this a little more. These skills will be just as important to people experimenting with the radios of the 21st century as Ohm's law was in the 20th.

Linux Users of Victoria (LUV) Announce: LUV Main October 2015 Meeting: Networking Fundamentals / High Performance Open Source Storage

Wed, 2015-09-23 21:29
Start: Oct 6 2015 18:30
End: Oct 6 2015 20:30
Location:

6th Floor, 200 Victoria St. Carlton VIC 3053



• Fraser McGlinn, Networking Fundamentals, Troubleshooting and Packet Analysis

• Sam McLeod, High Performance, Open Source Storage

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Michael Still: First trail run

Wed, 2015-09-23 12:28
So, now I trail run apparently. This was a test run for a hydration vest (thanks Steven Hanley for the loaner!). It was fun, but running up hills is evil.

Interactive map for this route.

Tags for this post: blog canberra trail run

Related posts: Second trail run; Chicken run; Update on the chickens; Boston; Random learning for the day


Chris Smart: Reset keyboard shortcuts in GNOME

Wed, 2015-09-23 11:29

Recently we had a Korora user ask how to reset the keybindings in GNOME, which they had changed.

I don’t think that the shortcuts program has a way to reset them, but you can use dconf-editor.

Open the dconf-editor program and browse to:


Anything that's been modified should be shown in a bold font. Select it, then click the "Set to Default" button at the bottom right.

Hope that helps!

Pia Waugh: Government as an API: how to change the system

Wed, 2015-09-23 11:26

A couple of months ago I gave a short speech about Gov as an API at an AIIA event. Basically I believe that unless we make government data, content and transaction services API-enabled and mashable, we are simply improving upon the status quo. 1000 services designed to be much better are still 1000 services that could be integrated for users, automated at the backend, or otherwise transformed into part of a system, rather than remaining the unique siloed systems that we have today. I think the future is mashable government, and the private sector has already gone down this path, so governments need to catch up!

When I rewatched it I felt it captured my thoughts around this topic really well, so below is the video and the transcript. Enjoy! Comments welcome.

The first thing is I want to talk about gov as an API. This is kind of like on steroids, but it goes way above and beyond data and gets into something far more profound. But just a step back to the concept of Government as a platform. Around the world a lot of Governments have adopted the idea of Government as a platform: let's use common platforms, let's use common standards, let's try to be more efficient and effective. It's generally been interpreted as creating platforms within Government that are common. But I think that we can do a lot better.

So Government as an API is about making Government one big conceptual API. Making the stuff that Government does discoverable programmatically, making the stuff that it does consumable programmatically, making Government the platform or a platform on which industry and citizens and indeed other Governments can actually innovate and value add. So there are many examples of this which I’ll get to but the concept here is getting towards the idea of mashable Government. Now I’m not here representing my employers or my current job or any of that kind of stuff. I’m just here speaking as a geek in Government doing some cool stuff. And obviously you’ve had the Digital Transformation Office mentioned today. There’s stuff coming about that but I’m working in there at the moment doing some cool stuff that I’m looking forward to telling you all about. So keep an eye out.

But I want you to consider the concept of mashable Government. So Australia is a country where we have a fairly egalitarian democratic view of the world. So in our minds and this is important to note, in our minds there is a role for Government. Now there’s obviously some differences around the edges about how big or small or how much I should do or shouldn’t do or whatever but the concept is that, that we’re not going to have Government going anywhere. Government will continue to deliver things, Government has a role of delivering things. The idea of mashable Government is making what the Government does more accessible, more mashable. As a citizen when you want to find something out you don’t care which jurisdiction it is, you don’t care which agency it is, you don’t care in some cases you know you don’t care who you’re talking to, you don’t care what number you have to call, you just want to get what you need. Part of the problem of course is what are all the services of Government? There is no single place right now. What are all of the, you know what’s all the content, you know with over a thousand websites or more but with lots and lots of websites just in the Federal Government and thousands more across the state and territories, where’s the right place to go? And you know sometimes people talk about you know what if we had improved SEO? Or what if we had improved themes or templates and such. If everyone has improved SEO you still have the same exact problem today, don’t you? You do a google search and then you still have lots of things to choose from and which one’s authoritative? Which one’s the most useful? Which one’s the most available?

The concept of Government as an API is making content, services, APIs, data, you know the stuff that Government produces either directly or indirectly, more available to collate in a way that is user centric. That actually puts the user at the centre of the design, but then also builds in the understanding that other people, businesses or Governments will be able to provide value on top of what we do. So I want you to imagine that all of that is available and that everything is API enabled. I want you to imagine third party re-use, new applications; I mean we see small examples of that today. So to give you a couple of examples of where Governments are already experimenting with this idea: obviously my little baby is one little example of this, it’s a microcosm. But whilst ever open data was just a list of things, a catalogue of stuff, it was never going to be that high value.

So what we did when we re-launched a couple of years ago was we said, what makes data valuable to people? Well, programmatic access. Discovery is useful but if you can’t get access to it, it’s almost just annoying to be able to find it but not be able to access it. So how do we make it most useful? How do we make it most reusable, most high value in capacity shall we say? In potentia? So it was about programmatic access. It was about good metadata, it was about making it of value to citizens and industry but also to Government itself. If a Government agency needs to build a service, a citizen service to do something, rather than building an API to an internal system that’s privately available only to their application, which would cost them money, they could put the data in. Whether it’s spatial or tabular and soon to be relational, you know, different data types have different data provision needs, so being able to centralise that function reduces the cost of providing it, making it easy for agencies to get the most out of their data, reducing the cost of delivering what they need to deliver on top of the data, and also creating an opportunity for external innovation. And I know that there’s already been loads of applications and analysis and uses of data that’s on there, and it’s only increasing every day. Because we took open data from being a retrospective, freedom of information, compliance issue, which was never going to be sexy, right? We moved it towards how you can do things better. This is how we can enable innovation. This is how agencies can find each other’s data better and re-use it and not have to continually reinvent the wheel. So we built a business proposition, and that started to make it successful. So that’s been cool.

There’s been experimentation with gov as an API in the ATO, with the SBR API, and with the ABN lookup API. There’s so many businesses out there, I’m sure there’s a bunch in the room. When you build an application where someone puts a business name into an app or into a transaction or whatever, you can use the ABN lookup API to validate the business name. It’s a really simple validation service, and it means you don’t end up with, as unfortunately we have right now in the whole of Government contracts data set, 279 different spellings for the Department of Defence. You can start to actually use what Government already has as validation services, as something to build upon. You know I really look forward to having whole of Government up to date spatial data that’s really available so people can build value on top of it. That’ll be very exciting. You know at some point I hope that happens. Industry experimented with this with the energy ratings data set. It’s a very quick example: they had to build an app, as you know Ministers love to see. But they built a very, very useful app to actually compare, when you’re in the store, you know, your fridges and all the rest of it to see what’s best for you. But what they found, by putting the data out there, is that they saved money immediately, and there’s a brilliant video if you go looking for this that the Department of Industry put together with Martin Hoffman that you should have a look at, which is very good. What they found is that by having the data out there, all the retail companies that have to by law put the energy rating of every electrical device they sell on their brochures, which traditionally they did by googling, right? What’s the energy rating of this? Whatever other retail companies are using, we’ll use that.

Completely out of date, unauthoritative, inaccurate. So by having the data set publicly available, kept up to date on a daily basis, suddenly they were able to massively reduce the cost of compliance with a piece of regulation, you know, so it actually reduced red tape. And then other applications started being developed that were very useful, and you know Government doesn’t have all the answers and no one pretends that. People love to pretend also that Government has no answers. I think there’s a healthy balance in between. We’ve got a whole bunch of cool innovators in Government doing cool stuff, but we have to work in partnership, and part of that includes using our stuff to enable cool innovation out there.
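To make the ABN validation idea mentioned above concrete, here’s a rough sketch of what a business-name check against a lookup service could look like. The function names, the ABN, and the response field here are all made up for illustration; the real ABN Lookup web service has its own registration, endpoints and response format, so treat this purely as the shape of the idea.

```python
# Hypothetical sketch of validating a user-entered business name against
# a government lookup service.  The lookup function is injected so the
# HTTP details (endpoint, auth, response fields) stay out of the picture;
# everything named here is illustrative, not the real ABN Lookup contract.

def validate_business_name(entered_name, abn, lookup):
    """Return the canonical registered name on a match, else None."""
    record = lookup(abn)              # would be an HTTP call in a real system
    if record is None:
        return None
    registered = record.get("EntityName", "")
    # Compare loosely: case and surrounding whitespace shouldn't
    # produce yet another spelling of the same agency.
    if entered_name.strip().lower() == registered.strip().lower():
        return registered
    return None


def fake_lookup(abn):
    # Stand-in registry for development; the ABN below is made up.
    registry = {"12004044937": {"EntityName": "Department of Defence"}}
    return registry.get(abn)
```

With something like this sitting behind a form, every variant a user types either normalises to the one registered spelling or gets flagged, which is exactly how you avoid ending up with 279 spellings of the Department of Defence in a contracts data set.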

ABS obviously does a lot of work with API’s and that’s been wonderful to see. But also the National Health Services Directory. I don’t know who, how many people here know that? But you know it’s a directory of thousands, tens of thousands, of health services across Australia. All API enabled. Brilliant sort of work. So API enabled computing and systems and modular program design, agile program design is you know pretty typical for all of you. Because you’re in industry and you’re kind of used to that and you’re used to getting up to date with the latest thing that’ll make you competitive.

Moving Government towards that kind of approach will take a little longer but, you know, it has started. And if you take an API enabled approach to your systems design it is relatively easy to progress to taking an API approach to exposing that publicly.

So, I think I only had ten minutes, so imagine if all the public Government information services were carefully, usefully discoverable. Not just through a google search, but with appropriate metadata, and even consumable in some cases: what if you could actually consume some of those transaction systems or information or services and be able to then re-use them somewhere else? Because when someone is, you know, about to, I don’t know, have a baby, they google for it first, right, and then they probably go to a baby site; they don’t think to come to government in the first instance. So we need to make it easier for Government to go to them. When they go there, why wouldn’t we be able to present to them the information that they need from Government as well? This is where we’re starting to think when we start following the rabbit warren of gov as an API.

So, start thinking about what you would use. If all of these things were discoverable or if even some of them were discoverable and consumable, how would you use it? How would you innovate? How would you better serve your customers by leveraging Government as an API? So Government has and always will play a part. This is about making Government just another platform to help enable our wonderful egalitarian and democratic society. Thank you very much.

Postnote: adopting APIs as a strategy, not just a technical side effect, is key here. Adopting modular architecture means agencies can adopt the best of breed components for a system today, tomorrow and into the future, without lock-in. I think just cobbling APIs on top of existing systems would miss the greater opportunity: taking a modular architecture design approach creates more flexible, adaptable, affordable and resilient systems than the traditional single stack solution.

Binh Nguyen: More JSF Thoughts, Theme Hospital, George W. Bush Vs Tony Abbott, and More

Wed, 2015-09-23 06:35
- people in charge of running the PR behind the JSF program have handled it really badly at times. If anyone wants to really put the BVR combat claims into perspective they should point back to the history of other 'stealth aircraft' such as the B-2, instead of simply repeating the mantra that it will work in the future. People can judge the past; they can only speculate about the future, and watch as problem after problem is highlighted in the program

F-35 not a Dog Fighter???

- for a lot of countries the single engined nature of the aircraft makes little sense. It will be interesting to see how the end game plays out. It seems clear that some countries have been coerced into purchasing the JSF rather than the JSF earning its stripes entirely on merit

Norway to reduce F-35 order?

F-35 - Runaway Fighter - the fifth estate

- one thing I don't like about the program is the fact that if there is a crack in the security of the program, all countries participating in the program are in trouble. Think about computer security. Once upon a time it was claimed that Apple's Mac OS X and Google's Android were impervious to security threats. It's become clear that these beliefs are nonsensical. If all allies switch to stealth based technologies, all enemies will switch to trying to find a way to defeat them

- one possible attack against stealth aircraft I've been thinking of revolves around sensory deprivation of the aircraft's sensors. It is said that the AESA RADAR capability of the JSF is capable of frying other aircraft's electronics. I'd be curious to see how attacks against airspeed, attitude, and other sensors would work. Both the B-2 and F-22 have had trouble with this...

- I'd do what the US military does, to be honest. Purchase in limited numbers early on and test it, or let others do the same thing. Watch and see how the program progresses before joining in

- never, ever make the assumption that the US will give back technology that you have helped to develop alongside them if they have iterated on it. A good example of this is the Japanese F-2 program, which used higher levels of composites in the airframe structure and the world's first AESA RADAR. Always have a backup or keep a local research effort going, even if the US promises to transfer knowledge back to a partner country

- as I've stated before, the nature of deterrence as a core defensive theory means that you are effectively still at war because it diverts resources from other industries back into defense. I'm curious to see how economies would change if everyone mutually agreed to drop weapons and platforms with power projection capabilities (a single US aircraft carrier alone costs about $14B USD, a B-2 bomber $2B, an F-22 fighter $250M USD, an F-35 JSF ~$100M USD, etc...) and only worried about local, regional defense...

- people often accuse the US of poking into areas where they shouldn't. The problem is that they have so many defense agreements that it's difficult for them not to. They don't really have a choice sometimes. The obvious question is whether or not they respond in a wise fashion

- in spite of what armchair generals keep on saying, the Chinese and Russians would probably make life at least a little difficult for the US and her allies if things came to a head. It's clear that a lot of weapons platforms and systems that are now being pursued are struggles for everyone engaged in them (technically as well as cost wise) and they already have some possible counter measures in place. How good those actually are is the obvious question though. I'm also curious how good their OPSEC is. If they're able to seal off their scientists entirely in internal test environments then details regarding their programs and capabilities will be very difficult to obtain, owing to the heavy dependence by the West purely on SIGINT/COMINT capabilities. They've always had a hard time gaining HUMINT but not the other way around...

- some analysts/journalists say that the 'Cold War' never really ended, that it's effectively been in hibernation for a while. The interesting thing is that in spite of what China has said regarding a peaceful rise, it is pushing farther out with its weapons systems and platforms. You don't need an aircraft carrier to defend your territory. You just need longer range weapons systems and platforms. It will be interesting to see how far China chooses to push out. In spite of what is said by some public servants and politicians, it is clear that China wants to take a more global role

- technically, the US wins many of the wars that it chooses. Realistically, though, it's not so clear. Nearly every single adversary now engages in longer term, guerrilla style tactics. In Afghanistan, Iraq, Iran, Libya, and elsewhere they've basically been waiting for allied forces to clear out before taking their opportunity

- a lot of claims regarding US defense technology superiority make no sense. If old Soviet era SAM systems are so worthless against US manufactured jets, then why bother going to such extents with regard to cyberwarfare when it comes to shutting them down? I am absolutely certain that the claim that some classes of aircraft have never been shot down is untrue

- part of me wonders just exactly how much effort and resources are the Chinese and Russians genuinely throwing at their 5th gen fighter programs. Is it possible that they are simply waiting until most of the development is completed by the West and then they'll 'magically' have massive breakthroughs and begin full scale production of their programs? They've had a history of stealing and reverse engineering a lot of technology for a long time now

- the US defense budget seems exorbitant. True, their requirements are substantially different, but look at the way they structure a lot of programs and it becomes obvious why as well. They're often very ambitious, with multiple core technologies that need to be developed in order for the overall program to work. Part of me thinks that there is almost a zero sum game at times. They think that they can throw money at some problems and they will be solved. It's not as simple as that. They've been working on some core problems like directed energy weapons and rail guns for a long time now and have had limited success. If they want a genuine chance at this they're better off understanding the problem and then funding the core science. It's much like their space and intelligence programs, where a lot of spin off technologies were subsequently developed

- reading a lot of stuff online and elsewhere, it becomes much clearer that both sides often underestimate one another (less often by people in the defense or intelligence community). You should track and watch things based on what people do, not what they say

- a lot of countries just seem to want to stay out of the geo-political game. They don't want to choose sides and couldn't care less. Understandable, seeing the role that both countries play throughout the world now

- the funny thing is that some of the countries that are pushed back (Iran, North Korea, Russia, etc...) don't have much to lose. US defense alone has struggled to identify targets worth bombing in North Korea, and how do you force a country to comply if it has nothing left to lose, such as Iran or North Korea? It's unlikely China or Russia will engage in all out attack in the near to medium future. It's likely they'll continue to do the exact same thing and skirt around the edges with cyberwarfare and aggressive intelligence collection

- It's clear that the superpower struggle has been underway for a while now. The irony is that this is a game of economies as well as technology. If the West attempts to compete purely via defense technology/deterrence then part of me fears it will head down the same pathway that the USSR went: collapse under the strain of a defense sector (and other industries) that is largely worthless under most circumstances and does nothing for the general population. Of course, this is partially offset by a potential new trade pact in the APAC region, but I am certain that this will inevitably still be in favour of the US, especially with their extensive SIGINT/COMINT capability, economic intelligence, and their use of it in trade negotiations

- you don't really realise how many jobs and how much money are on the line with regard to the JSF program until you do the numbers

An old but still enjoyable/playable game with updates to run under Windows 7

Watching footage of George W. Bush, it becomes much clearer that he was somewhat of a clown who realised his limitations. That's not the case with Tony Abbott, who can be scary and hilarious at times

Last Week Tonight with John Oliver: Tony Abbott, President of the USA of Australia (HBO)

Must See Hilarious George Bush Bloopers! - VERY FUNNY

Once upon a time I read about a Chinese girl who used a pin in her soldering iron to do extremely fine soldering work. I use solder paste or wire glue. It takes less time, and using sticky/masking tape you can achieve a really clean finish!

Anthony Towns: Lightning network thoughts

Tue, 2015-09-22 19:26

I’ve been intrigued by micropayments for, like, ever, so I’ve been following Rusty’s experiments with bitcoin with interest. Bitcoin itself, of course, has a roughly 10 minute delay, and a fee of effectively about 3c per transaction (or $3.50 if you count inflation/mining rewards) so isn’t really suitable for true microtransactions; but pettycoin was going to be faster and cheaper until it got torpedoed by sidechains, and more recently the lightning network offers the prospect of small payments that are effectively instant, and have fees that scale linearly with the amount (so if a $10 transaction costs 3c like in bitcoin, a 10c transaction will only cost 0.03c).

(Why do I think that’s cool? I’d like to be able to charge anyone who emails me 1c, and make $130/month just from the spam I get. Or you could have a 10c signup fee for webservice trials to limit spam but not have to tie everything to your facebook account or undergo Turing trials. You could have an open wifi access point, that people don’t have to register against, and just bill them per MB. You could maybe do the same with tor nodes. Or you could setup bittorrent so that in order to receive a block I pay maybe 0.2c/MB to whoever sent it to me, and I charge 0.2c/MB to anyone who wants a block from me — leechers paying while seeders earn a profit would be fascinating. It’d mean you could setup a webstore to sell apps or books without having to sell your soul to a corporate giant like Apple, Google, Paypal, Amazon, Visa or Mastercard. I’m sure there’s other fun ideas)

A bit over a year ago I critiqued sky-high predictions of bitcoin valuations on the basis that “I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints)” — which is currently playing out as “OMG the block size is too small” debates. But the cool thing about lightning is that it lets you avoid that problem entirely; hundreds, thousands or millions of transactions over weeks or years can be summarised in just a handful of transactions on the blockchain.

(How does lightning do that? It sets up a mesh network of “channels” between everyone, and provides a way of determining a route via those channels between any two people. Each individual channel is between two people, and each channel is funded with a particular amount of bitcoin, which is split between the two people in whatever way. When you route a payment across a channel, the amount of that payment’s bitcoins moves from one side of the channel to the other, in the direction of the payment. The amount of bitcoins in a channel doesn’t change, but when you receive a payment, the amount of bitcoins on your side of your channels does. When you simply forward a payment, you get more money in one channel, and less in another by the same amount (or less a small handling fee). Some bitcoin-based crypto-magic ensues to ensure you can’t steal money, and that the original payer gets a “receipt”. The end result is that the only bitcoin transactions that need to happen are to open a channel, close a channel, or change the total amount of bitcoin in a channel. Rusty gave a pretty good interview with the “Let’s talk bitcoin” podcast if the handwaving here wasn’t enough background)
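The channel accounting described above can be sketched in a few lines of Python. This is just the bookkeeping, with none of the crypto-magic that makes it trustless, and the 1% forwarding fee is an arbitrary choice for illustration:

```python
# Toy model of lightning channels: each channel holds a fixed total,
# split between its two ends.  Routing a payment shifts the split at
# each hop, and intermediaries keep a small cut.  No crypto here --
# just the accounting that the real protocol enforces trustlessly.

class Channel:
    def __init__(self, a, b, a_funds, b_funds):
        self.ends = {a: a_funds, b: b_funds}

    def pay(self, frm, to, amount):
        if self.ends[frm] < amount:
            raise ValueError("channel exhausted")
        self.ends[frm] -= amount
        self.ends[to] += amount


def route_payment(channels, path, amount, fee_rate=0.01):
    """Push `amount` along `path` (a list of node names).

    Each intermediate node forwards slightly less than it received,
    keeping the difference as its fee."""
    hops = list(zip(path, path[1:]))
    for i, (frm, to) in enumerate(hops):
        channels[frozenset({frm, to})].pay(frm, to, amount)
        if i < len(hops) - 1:
            amount *= 1 - fee_rate   # the forwarder keeps its cut
    return amount                     # what the final recipient got
```

Note that the total in each channel never changes; only the split moves, which is why the blockchain only needs to hear about channels opening, closing, or being topped up.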

Of course, this doesn’t work very well if you’re only spending money: it doesn’t take long for all the bitcoins on your lightning channels to end up on the other side, and at that point you can’t spend any more. If you only receive money over lightning, the reverse happens, and you’re still stuck just as quickly. It’s still marginally better than raw bitcoin, in that you have two bitcoin transactions to open and close a channel worth, say, $200, rather than forty bitcoin transactions, one for each $5 you spend on coffee. But that’s only a fairly minor improvement.

You could handwave that away by saying “oh, but once lightning takes off, you’ll get your salary paid in lightning anyway, and you’ll pay your rent in lightning, and it’ll all be circular, just money flowing around, lubricating the economy”. But I think that’s unrealistic in two ways: first, it won’t be that way to start with, and if things don’t work when lightning is only useful for a few things, it will never take off; and second, money doesn’t flow around the economy completely fluidly, it accumulates in some places (capitalism! profits!) and drains away from others. So it seems useful to have some way of making degenerate scenarios actually work — like someone who only uses lightning to spend money, or someone who receives money by lightning but only wants to spend cold hard cash.

One way you can do that is if you imagine there’s someone on the lightning network who’ll act as an exchange — who’ll send you some bitcoin over lightning if you send them some cash from your bank account, or who’ll deposit some cash in your bank account when you send them bitcoins over lightning. That seems like a pretty simple and realistic scenario to me, and it makes a pretty big improvement.

I did a simulation to see just how well that actually works out. With “Alice” as a coffee consumer, who does nothing with lightning but buy $5 espressos from “Emma” and refill her lightning wallet by exchanging cash with “Xavier” who runs an exchange, converting dollars (or gold or shares etc) to lightning funds. Bob, Carol and Dave run lightning nodes and take a 1% cut of any transactions they forward. I uploaded a video to youtube that I think helps visualise the payment flows and channel states (there’s no sound):

It starts off with Alice and Xavier putting $200 in channels in the network; Bob, Carol and Dave putting in $600 each, and Emma just waiting for cash to arrive. The statistics box in the top right tracks how much each player has on the lightning network (“ln”), how much profit they’ve made (“pf”), and how many coffees Alice has ordered from Emma. About 3000 coffees later, it ends up with Alice having spent about $15,750 in real money on coffee ($5.05/coffee), Emma having about $15,350 in her bank account from making Alice’s coffees ($4.92/coffee), and Bob, Carol and Dave having collectively made about $400 profit on their $1800 investment (about 22%, or the $0.13/coffee difference between what Alice paid and Emma received). At that point, though, Bob, Carol and Dave have pretty much all the funds in the lightning network, and since they only forward transactions but never initiate them, the simulation grinds to a halt.

You could imagine a few ways of keeping the simulation going: Xavier could refresh his channels with another $200 via a blockchain transaction, for instance. Or Bob, Carol and Dave could buy coffees from Emma with their profits. Or Bob, Carol and Dave could cash some of their profits out via Xavier. Or maybe they buy some furniture from Alice. Basically, whatever happens, you end up relying on “other economic activity” happening either within lightning itself, or in bitcoin, or in regular cash.

But grinding to a halt after earning 22% and spending/receiving $15k isn’t actually too bad even as it is. So as a first pass, it seems like a pretty promising indicator that lightning might be feasible economically, as well as technically.

One somewhat interesting effect is that the profits don’t get distributed particularly evenly — Bob, Carol and Dave each invest $600 initially, but make $155.50 (25.9%), $184.70 (30.7%) and $52.20 (8.7%) respectively. I think that’s mostly a result of how I chose to route payments — it optimises the route to choose channels with the most funds in order to avoid payments getting stuck, and Dave just ends up handling less transaction volume. Having a better routing algorithm (that optimises based on minimum fees, and relies on channel fees increasing when they become unbalanced) might improve things here. Or it might not, and maybe Dave needs to quote lower fees in general or establish a channel with Xavier in order to bring his profits up to match Bob and Carol.
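My reading of the routing heuristic described above — preferring channels with the most funds so payments don’t get stuck — is roughly a "widest path" search. This is a guess at the idea, not the simulator’s actual code:

```python
import heapq

# Pick the route that maximises the minimum sendable balance along the
# path (the "widest path").  Preferring well-funded channels keeps
# payments from getting stuck, at the cost of uneven fee income for
# the intermediaries -- roughly the bias described in the post.

def widest_path(balances, src, dst):
    """balances[(a, b)] = funds a can currently push towards b."""
    best = {src: float("inf")}
    frontier = [(-best[src], src, [src])]
    while frontier:
        width, node, path = heapq.heappop(frontier)
        width = -width
        if node == dst:
            return path, width
        for (a, b), funds in balances.items():
            if a == node and b not in path:
                w = min(width, funds)
                if w > best.get(b, 0):
                    best[b] = w
                    heapq.heappush(frontier, (-w, b, path + [b]))
    return None, 0
```

An alternative that optimised for minimum fees instead would swap the "width" objective for a fee sum, which is essentially the better routing algorithm the post speculates about.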

OpenSTEM: Building a Rope Bridge Using Quadcopters

Tue, 2015-09-22 13:30

Or, how to do something really useful with these critters…

Quadcopters are regularly in the news, as they’re fairly cheap and lots of people are playing about with them and quite often creating a nuisance or even dangerous situations. I suppose it’s a phase, but I don’t blame people for wondering what positive use quadcopters can have.

At STEM and Management University ETH Zurich (Switzerland), software tools have been developed to calculate the appropriate structure for a rope bridge, after a physical location has been measured up. The resulting structure is also virtually tested before the quadcopters start, autonomously, with the actual build.

The built physical structure can hold humans crossing. Imagine this getting used in disaster areas, to help save people. Just one example… quite fabulous, isn’t it!

The experiments are done in the Flying Machine Arena of ETH Zurich, a 10x10x10 meter space with fast motion capture cameras.

Michael Still: Camp Cottermouth

Tue, 2015-09-22 12:28
I spent the weekend at a Scout camp at Camp Cottermouth. The light on the hills here in the mornings is magic.


Interactive map for this route.

Tags for this post: blog pictures 20150920 photo canberra bushwalk


Michael Davies: Mocking python objects and object functions using both class-level and function-level mocks

Mon, 2015-09-21 17:51
Had some fun solving an issue with partitions larger than 2TB, and came across a little gotcha when it comes to mocking in python when a) you want to mock both an object and a function in that object, and b) you want to mock.patch.object at both the test class and test method level.

Say you have a function you want to test that looks like this:

def make_partitions(...):
    ...
    dp = disk_partitioner.DiskPartitioner(...)
    dp.add_partition(...)
    ...

where the DiskPartitioner class looks like this:

class DiskPartitioner(object):

    def __init__(self, ...):
        ...

    def add_partition(self, ...):
        ...

and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
    ...

and you want to add a new test function, with an extra patch applied just for that new test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with the mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and the __init__ of the DiskPartitioner class?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):
        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        ...
        disk_utils.make_partitions(...)   # Function under test

        mock_dp.assert_called_once_with(...)
        mock_dp.add_partition.assert_called_once_with(...)

Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least to me).  You specify the function level mocks first, followed by the class level mocks.

2) You need to manually mock the instance method of the mocked class.  (i.e. the add_partition function)

You can see the whole enchilada over here in the review.
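A stripped-down, self-contained version of the same pattern (with toy classes standing in for the Ironic code) makes the ordering rule visible: the decorator closest to the function supplies the first mock argument, so method-level mocks arrive before class-level ones:

```python
import unittest
from unittest import mock


class Engine:
    def start(self):
        return "vroom"


class Car:
    def drive(self):
        return Engine().start()


# Class-level patch: applied to every test method in the class.
@mock.patch.object(Engine, "start", return_value="mocked-start")
class CarTestCase(unittest.TestCase):

    # Method-level patch: the decorator closest to the function, so its
    # mock arrives FIRST, before the class-level mock -- the ordering
    # gotcha from the post.
    @mock.patch.object(Car, "drive", return_value="mocked-drive")
    def test_ordering(self, mock_drive, mock_start):
        self.assertEqual(Car().drive(), "mocked-drive")
        mock_drive.assert_called_once_with()
        # Car.drive was mocked out, so Engine.start never runs.
        mock_start.assert_not_called()
```

Running this, test_ordering receives mock_drive first even though the Engine.start patch is declared at the top of the class.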

David Rowe: Codec 2 Masking Model Part 2

Mon, 2015-09-21 09:30

I’ve been making steady progress on my new ideas for amplitude quantisation for Codec 2. The goal is to increase speech quality, in particular for the very low bit rate 700 bit/s modes.

Here are the signal processing steps I’m working on:

The signal processing algorithms I have developed since Part 1 are coloured in blue. I still need to nail the yellow work. The white stuff has been around for years.

Actually I spent a few weeks on the yellow steps but wasn’t satisfied so looked for something a bit easier to do for a while. The progress has made me feel like I am getting somewhere, and pumped me up to hit the tough bits again. Sometimes we need to organise the engineering to suit our emotional needs. We need to see (or rather “feel”) constant progress. Research and Disappointment is hard!

Transformations and Sample Rate Changes

The goal of a codec is to reduce the bit rate, but still maintain some target speech quality. The “quality bar” varies with your application. For my current work low quality speech is OK, as I’m competing with analog HF SSB. Just getting the message through after a few tries is a lower bar, the upper bar being easy conversation over that nasty old HF channel.

While drawing the figure above I realised that a codec can be viewed as a bunch of processing steps that either (i) transform the speech signal or (ii) change the sample rate. An example of transforming is performing an FFT to convert the time domain speech signal into the frequency domain. We then decimate in the time and frequency domain to change the sample rate of the speech signal.

Lowering the sample rate is an effective way to lower the bit rate. This process is called decimation. In Codec 2 we start with a bunch of sinusoidal amplitudes that we update every 10ms (100Hz sampling rate). We then throw away 3 out of every 4, giving a sample rate of 25Hz. This means there are fewer samples every second, so the bit rate is reduced.

At the decoder we use interpolation to smoothly fill in the missing gaps, raising the sample rate back up to 100Hz. We eventually transform back to the time domain using an inverse FFT to play the signal out of the speaker. Speakers like time domain signals.
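The decimate-then-interpolate round trip can be sketched in a few lines of Python (a toy illustration on a single amplitude track, not the actual Codec 2 code):

```python
# Encoder side: keep every 4th amplitude frame (100Hz -> 25Hz),
# throwing away 3 out of every 4.
amps_100hz = [float(i) for i in range(16)]   # 16 frames of one amplitude track
amps_25hz = amps_100hz[::4]                  # frames 0, 4, 8, 12 survive

# Decoder side: linear interpolation smoothly fills in the missing
# frames, raising the sample rate back up to 100Hz.
def interpolate(kept, step):
    out = []
    for i in range(len(kept) - 1):
        a, b = kept[i], kept[i + 1]
        for j in range(step):
            out.append(a + (b - a) * j / step)
    out.append(kept[-1])
    return out

reconstructed = interpolate(amps_25hz, 4)    # back to (nearly) 100Hz frames
```

For a smoothly varying track like this ramp the reconstruction is exact; real amplitude tracks wiggle between kept frames, and that is where the quality cost of decimation comes from.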

In the figure above we start with chunks of speech samples in the time domain, then transform into the frequency domain, where we fit a sinusoidal, then masking model.

The sinusoidal model takes us from a 512 point FFT to 20-80 amplitudes. It fits a sinusoidal speech model to the incoming signal. The number of sinusoidal amplitudes varies with the pitch of the incoming voice. It is time varying, which complicates our life if we desire a constant bit rate.
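As a rough illustration of where that 20-80 range comes from (assuming the 8kHz sample rate Codec 2 uses), the number of amplitudes is the number of pitch harmonics that fit below half the sample rate:

```python
# Rough sketch: one sinusoidal amplitude per pitch harmonic that fits
# below the Nyquist frequency (half the sample rate).
def num_amplitudes(pitch_hz, sample_rate_hz=8000):
    return int((sample_rate_hz / 2) // pitch_hz)

# A low 50Hz pitch gives 80 amplitudes; a high 200Hz pitch gives 20.
```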

The masking model fits a smoothed envelope that represents the way we produce and hear speech. For example, we don’t talk in whistles (unless you are R2D2), so there is no point wasting bits on coding very narrow bandwidth signals. The ear masks weak tones near strong ones, so there is no point coding them either. The ear also has a log frequency and amplitude response, so we take advantage of that too.
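A toy sketch of the masking idea (the span and threshold here are made up for illustration, not Codec 2's actual model): flag any tone that sits well below a strong neighbour close enough in frequency to mask it.

```python
def masked_flags(freqs_hz, amps_db, span_hz=200.0, threshold_db=20.0):
    """Return True for each tone that a much louder nearby tone would mask."""
    flags = []
    for f, a in zip(freqs_hz, amps_db):
        # A tone is masked if any other tone within span_hz is more
        # than threshold_db louder than it.
        dominated = any(
            abs(f2 - f) < span_hz and a2 - a > threshold_db
            for f2, a2 in zip(freqs_hz, amps_db)
        )
        flags.append(dominated)
    return flags
```

For example, a 30dB tone at 150Hz sitting next to a 60dB tone at 100Hz gets flagged as masked, while an isolated 50dB tone at 500Hz does not.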

In this way the speech signal winds its way through the codec, being transformed this way and that, as we carve off samples until we get something that we can send over the channel.

Next Steps

Need to sort out those remaining yellow blocks, and come up with a fully quantised codec candidate.

An idea that occurred to me while drawing the diagram – can we estimate the mask directly from the FFT samples? We may not need the intermediate estimation of the sinusoidal amplitudes any more.

It may also be possible to analyse/synthesise using filters modelling the masks running in the time domain. For example, on the analysis side, look at the energy at the output of a bunch of masking filters spaced closely enough that we can’t perceive the difference.

Writing stuff up on a blog is cool. It’s “the cardboard colleague” effect: the process of clearly articulating your work can lead to new ideas and bug fixes. It doesn’t matter whom you articulate the problems to; just talking about them can lead to solutions.

Sridhar Dhanapalan: Twitter posts: 2015-09-14 to 2015-09-20

Mon, 2015-09-21 01:27

Ben Martin: Terry Motor Upgrade -- no stopping it!

Sun, 2015-09-20 15:48
I have now updated the code and PID control for the new RoboClaw and HD Planetary motor configuration. As part of the upgrade I had to move to using a lipo battery because these motors stall at 20 amps. While it is a bad idea to leave it stalled, it's a worse idea to have the battery have issues due to drawing too much current. It's always best to choose where the system will fail rather than letting the cards fall where they may. In this case, leaving it stalled will result in drive train damage in the motors, not a controller board failure or a lipo issue.
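The PID control itself is the standard loop; a generic sketch (not Terry's actual code — in practice the RoboClaw can run its own velocity PID from the encoder counts onboard):

```python
class PID:
    """Classic PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        # Error between the commanded velocity and what the quadrature
        # encoders say the wheel is actually doing.
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Tuning the three gains for the new higher-torque motors is what changes when the drive train is swapped out; the loop structure stays the same.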

One of the more telling images is below, which compares not only the size of the motors but also the size of the wires supplying power to the motors. I used 14AWG wire with silicone coating for the new motors so that a 20A draw will not cause any issues in the wiring. Printing out new holders for the high precision quadrature encoders took a while. Each print was about 1 hour long, and there was always a millimetre or two that could be changed in the design, which then spurred another print job.

Below is the old controller board (the 5A roboclaw) with the new controller sitting on the bench in front of Terry (45A controller). I know I only really needed the 30A controller for this job, but when I decided to grab the items the 30A was sold out so I bumped up to the next model.

The RoboClaw is isolated from the channel by being attached via nylon bolts to a 3d printed cross over panel.

One of the downsides to the 45A model, which I imagine will fix itself in time, was that the manual didn't seem to be available. The commands are largely the same as for the other models in the series, but I had to work out the connections for the quad encoders and have currently powered them off the BEC because the screw terminal version of the RoboClaw doesn't have +/- terminals for the quads.

One little surprise was that these motors are quite magnetic without power. Nuts and the like want to move in and the motors will attract each other too. Granted it's not like they will attract themselves from any great distance, but it's interesting compared to the lower torque motors I've been using in the past.

I also had a go at wiring 4mm connectors to 10AWG cable. Almost got it right after a few attempts but the lugs are not 100% fixed into their HXT plastic chassis because of some solder or flux debris I accidentally left on the job. I guess some time soon I'll be wiring my 100A monster automotive switch inline in the 10AWG cable for solid battery isolation when Terry is idle. ServoCity has some nice bundles of 14AWG wire (which are the yellow and blue ones I used to the motors) and I got a bunch of other wire from HobbyKing.

Francois Marier: Hooking into docking and undocking events to run scripts

Sun, 2015-09-20 10:55

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.

This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock.

The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events

Create the following ACPI event scripts as suggested in this guide.

Firstly, /etc/acpi/events/thinkpad-dock:

event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

Secondly, /etc/acpi/events/thinkpad-undock:

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"

then restart acpid (which reads the scripts in /etc/acpi/events):

sudo service acpid restart

Finding the right events

To make sure you are hooking into the right events, watch the output of:

sudo acpi_listen

and ensure that your script is actually running by adding:

logger "ACPI event: $*"

at the beginning of it and then looking in /var/log/syslog for lines like:

logger: external-monitor undock
logger: external-monitor dock

If that doesn't work for some reason, try using an ACPI event script like this:

event=ibm/hotkey action=logger %e

to see which event you should hook into.

Using xrandr inside an ACPI event script

Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:

#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1