Planet Linux Australia


Linux Users of Victoria (LUV) Announce: LUV Main October 2015 Meeting: Networking Fundamentals / High Performance Open Source Storage

Wed, 2015-09-23 21:29
Start: Oct 6 2015 18:30 End: Oct 6 2015 20:30 Location:

6th Floor, 200 Victoria St. Carlton VIC 3053



• Fraser McGlinn, Networking Fundamentals, Troubleshooting and Packet Analysis

• Sam McLeod, High Performance, Open Source Storage

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

October 6, 2015 - 18:30


Michael Still: First trail run

Wed, 2015-09-23 12:28
So, now I trail run apparently. This was a test run for a hydration vest (thanks Steven Hanley for the loaner!). It was fun, but running up hills is evil.

Interactive map for this route.

Tags for this post: blog canberra trail run

Related posts: Second trail run; Chicken run; Update on the chickens; Boston; Random learning for the day


Chris Smart: Reset keyboard shortcuts in GNOME

Wed, 2015-09-23 11:29

Recently we had a Korora user ask how to reset the keybindings in GNOME, which they had changed.

I don’t think that the shortcuts program has a way to reset them, but you can use dconf-editor.

Open the dconf-editor program and browse to the schema holding the shortcuts you changed (window manager shortcuts live under org / gnome / desktop / wm / keybindings, and media key shortcuts under org / gnome / settings-daemon / plugins / media-keys):


Anything that’s been modified should be in bold font. Select it then down the bottom on the right click the “Set to Default” button.
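If you'd rather script it than click through dconf-editor, here is a rough sketch of my own (not from the Korora post) that shells out to gsettings from Python. It assumes the shortcuts you changed live in the stock GNOME schemas for window manager keybindings and media keys, so adjust the list if your bindings live elsewhere.

import subprocess

# Schemas where stock GNOME keeps most keyboard shortcuts (an assumption,
# adjust if your bindings live under a different schema)
SCHEMAS = [
    "org.gnome.desktop.wm.keybindings",
    "org.gnome.settings-daemon.plugins.media-keys",
]

for schema in SCHEMAS:
    # reset-recursively puts every key in the schema back to its default
    subprocess.check_call(["gsettings", "reset-recursively", schema])
    print("Reset", schema)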

Hope that helps!

Pia Waugh: Government as an API: how to change the system

Wed, 2015-09-23 11:26

A couple of months ago I gave a short speech about Gov as an API at an AIIA event. Basically I believe that unless we make government data, content and transaction services API enabled and mashable, then we are simply improving upon the status quo. 1000 services designed to be much better are still 1000 services that could be integrated for users, automated at the backend, or otherwise transformed into part of a system rather than the unique siloed systems that we have today. I think the future is mashable government, and the private sector has already gone down this path so governments need to catch up!

When I rewatched it I felt it captured my thoughts around this topic really well, so below is the video and the transcript. Enjoy! Comments welcome.

The first thing is I want to talk about gov as an API. This is kind of like open data on steroids, but it goes way above and beyond data and gets into something far more profound. But just to step back to the concept of Government as a platform: around the world a lot of Governments have adopted the idea of Government as a platform: let’s use common platforms, let’s use common standards, let’s try and be more efficient and effective. It’s generally been interpreted as creating platforms within Government that are common. But I think that we can do a lot better.

So Government as an API is about making Government one big conceptual API. Making the stuff that Government does discoverable programmatically, making the stuff that it does consumable programmatically, making Government the platform or a platform on which industry and citizens and indeed other Governments can actually innovate and value add. So there are many examples of this which I’ll get to but the concept here is getting towards the idea of mashable Government. Now I’m not here representing my employers or my current job or any of that kind of stuff. I’m just here speaking as a geek in Government doing some cool stuff. And obviously you’ve had the Digital Transformation Office mentioned today. There’s stuff coming about that but I’m working in there at the moment doing some cool stuff that I’m looking forward to telling you all about. So keep an eye out.

But I want you to consider the concept of mashable Government. So Australia is a country where we have a fairly egalitarian democratic view of the world. So in our minds, and this is important to note, in our minds there is a role for Government. Now there’s obviously some differences around the edges about how big or small it should be or how much it should or shouldn’t do, but the concept is that we’re not going to have Government going anywhere. Government will continue to deliver things, Government has a role of delivering things. The idea of mashable Government is making what the Government does more accessible, more mashable. As a citizen when you want to find something out you don’t care which jurisdiction it is, you don’t care which agency it is, in some cases you don’t care who you’re talking to, you don’t care what number you have to call, you just want to get what you need. Part of the problem of course is: what are all the services of Government? There is no single place right now. What is all the content? With over a thousand websites just in the Federal Government and thousands more across the states and territories, where’s the right place to go? And sometimes people talk about, you know, what if we had improved SEO? Or what if we had improved themes or templates and such. If everyone has improved SEO you still have the same exact problem today, don’t you? You do a google search and then you still have lots of things to choose from, and which one’s authoritative? Which one’s the most useful? Which one’s the most available?

The concept of Government as an API is making content, services, APIs, data, you know, the stuff that Government produces either directly or indirectly, more available to collate in a way that is user centric. That actually puts the user at the centre of the design, but also builds in the understanding that other people, businesses or Governments will be able to provide value on top of what we do. So I want you to imagine that all of that is available and that everything is API enabled. I want you to imagine third party re-use, new applications; I mean we see small examples of that today. So to give you a couple of examples of where Governments are already experimenting with this idea: obviously my little baby data.gov.au is one little example of this, it’s a microcosm. But whilst ever open data was just a list of things, a catalogue of stuff, it was never going to be that high value.

So what we did when we re-launched data.gov.au a couple of years ago was we said: what makes data valuable to people? Well, programmatic access. Discovery is useful, but if you can’t get access to it, it’s almost just annoying to be able to find it but not be able to use it. So how do we make it most useful? How do we make it most reusable, most high value in capacity shall we say? In potentia? So it was about programmatic access. It was about good metadata, it was about making it of value to citizens and industry but also to Government itself. If a Government agency needs to build a citizen service, rather than building an API to an internal system that’s privately available only to their application, which would cost them money, they could put the data in data.gov.au. Whether it’s spatial or tabular and soon to be relational, different data types have different data provision needs, so being able to centralise that function reduces the cost of providing it, makes it easy for agencies to get the most out of their data, reduces the cost of delivering what they need to deliver on top of the data, and also creates an opportunity for external innovation. And I know that there’s already been loads of applications and analysis and uses of the data that’s on data.gov.au, and it’s only increasing every day. Because we took open data from being a retrospective, freedom of information, compliance issue, which was never going to be sexy, right? We moved it towards how you can do things better. This is how we can enable innovation. This is how agencies can find each other’s data better and re-use it and not have to continually reinvent the wheel. So we built a business proposition for data.gov.au that started to make it successful. So that’s been cool.

There’s been experimentation with gov as an API in the ATO, with the SBR API, and with the ABN Lookup API. There’s so many businesses out there, I’m sure there’s a bunch in the room. When you build an application where someone puts a business name into an app or a transaction or whatever, you can use the ABN Lookup API to validate the business name. So it’s a really simple validation service, and it means that you don’t end up with, as unfortunately we have right now in the whole of Government contracts data set, 279 different spellings for the Department of Defence. You can start to actually use what Government already has as validation services, as something to build upon. You know, I really look forward to having whole of Government up to date spatial data that’s really available so people can build value on top of it. That’ll be very exciting. You know, at some point I hope that happens. Industry experimented with this with the energy ratings data set. It’s a very quick example: they had to build an app, as you know Ministers love to see. But they built a very, very useful app to actually compare, when you’re in the store, your fridges and all the rest of it to see what’s best for you. And by putting the data on data.gov.au they saved money immediately, and there’s a brilliant video, if you go looking for it, that the Department of Industry put together with Martin Hoffman that you should have a look at, which is very good. But what they found is that all the retail companies that have to, by law, put the energy rating of every electrical device they sell on their brochures, traditionally did it by googling, right? What’s the energy rating of this? Whatever other retail companies are using, we’ll use that.

Completely out of date, unauthorised, untrue, inaccurate. So by having the data set publicly available and kept up to date on a daily basis, suddenly they were able to massively reduce the cost of compliance with a piece of regulation, so it actually reduced red tape. And then other applications started being developed that were very useful. And you know, Government doesn’t have all the answers and no one pretends that. People also love to pretend that Government has no answers. I think there’s a healthy balance in between. We’ve got a whole bunch of cool innovators in Government doing cool stuff, but we have to work in partnership, and part of that includes using our stuff to enable cool innovation out there.

ABS obviously does a lot of work with APIs and that’s been wonderful to see. But also the National Health Services Directory. I don’t know how many people here know that one, but it’s a directory of thousands, tens of thousands, of health services across Australia. All API enabled. Brilliant sort of work. So API enabled computing and systems, and modular, agile program design, is pretty typical for all of you, because you’re in industry and you’re kind of used to that and you’re used to getting up to date with the latest thing that’ll make you competitive.

Moving Government towards that kind of approach will take a little longer, but it has started. And if you take an API enabled approach to your systems design it is relatively easy to progress to taking an API approach to exposing that publicly.

So, I think I only had ten minutes, so imagine if all the public Government information and services were usefully discoverable, not just through a google search but with appropriate metadata, and even consumable in some cases. What if you could actually consume some of those transaction systems or information or services and be able to then re-use them somewhere else? Because when someone is, you know, about to, I don’t know, have a baby, they google for it first, right, and then they probably go to a baby website; they don’t think to come to government in the first instance. So we need to make it easier for Government to go to them. When they go there, why wouldn’t we be able to present to them the information that they need from Government as well? This is where we’re starting to think when we start following the rabbit warren of gov as an API.

So, start thinking about what you would use. If all of these things were discoverable, or if even some of them were discoverable and consumable, how would you use it? How would you innovate? How would you better serve your customers by leveraging Government as an API? Government has always played, and always will play, a part. This is about making Government just another platform to help enable our wonderful egalitarian and democratic society. Thank you very much.

Postnote: adopting APIs as a strategy, not just a technical side effect, is key here. Adopting modular architecture means agencies can adopt best of breed components for a system today, tomorrow and into the future, without lock-in. I think just cobbling APIs on top of existing systems would miss the greater opportunity: taking a modular architecture design approach creates more flexible, adaptable, affordable and resilient systems than the traditional single stack solution.

Binh Nguyen: More JSF Thoughts, Theme Hospital, George W. Bush Vs Tony Abbott, and More

Wed, 2015-09-23 06:35
- people in charge of running the PR behind the JSF program have handled it really badly at times. If anyone wants to really put the BVR combat debate back into perspective they should point back to the history of other 'stealth aircraft' such as the B-2 instead of simply repeating the mantra that it will work in the future. People can judge the past; they can only speculate about the future and watch as problem after problem seems to be highlighted with the program

F-35 not a Dog Fighter???

- for a lot of countries the single engined nature of the aircraft makes little sense. It will be interesting to see how the end game plays out. It seems clear that some countries have been coerced into purchasing the JSF rather than the JSF earning its stripes entirely on merit

Norway to reduce F-35 order?

F-35 - Runaway Fighter - the fifth estate

- one thing I don't like about the program is the fact that if there is a crack in the security of the program, all countries participating in the program are in trouble. Think about computer security. Once upon a time it was claimed that Apple's Mac OS X was secure, that Google's technology was best and that Android was impervious to security threats. It's become clear that these beliefs are nonsensical. If all allies switch to stealth based technologies, all enemies will switch to trying to find a way to defeat them

- one possible attack against stealth aircraft I've been thinking of revolves around sensory deprivation of the aircraft's sensors. It is said that the AESA RADAR capability of the JSF is capable of frying other aircraft's electronics. I'd be curious to see how attacks against airspeed, attitude, and other sensors would work. Both the B-2 and F-22 have had trouble with this...

- I'd be like the US military, to be honest. Purchase in limited numbers early on and test it, or let others do the same thing. Watch and see how the program progresses before joining in

- never, ever make the assumption that the US will give back technology that you have helped to develop alongside them if they have iterated on it. A good example of this is the Japanese F-2 program which used higher levels of composite in the airframe structure and the world's first AESA RADAR. Always have a backup or keep a local research effort going even if the US promises to transfer knowledge back to a partner country

- as I've stated before, the nature of deterrence as a core defensive theory means that you are effectively still at war because it diverts resources from other industries back into defense. I'm curious to see how economies would change if everyone mutually agreed to drop weapons and platforms with projected power capabilities (a single US aircraft carrier alone costs about $14B USD, a B-2 bomber $2B, an F-22 fighter $250M USD, an F-35 JSF ~$100M USD, etc...) and only worried about local, regional defense...

- people often accuse the US of poking into areas where they shouldn't. The problem is that they have so many defense agreements that it's difficult for them not to. They don't really have a choice sometimes. The obvious thing is whether or not they respond in a wise fashion

- in spite of what armchair generals keep on saying, the Chinese and Russians would probably make life at least a little difficult for the US and her allies if things came to a head. It's clear that a lot of the weapons platforms and systems that are now being pursued are struggles for everyone who is engaged in them (technically as well as cost wise), and they already have some possible counter measures in place. How good those actually are is the obvious question though. I'm also curious how good their OPSEC is. If they're able to seal off their scientists entirely in internal test environments then details regarding their programs and capabilities will be very difficult to obtain, owing to the heavy dependence by the West purely on SIGINT/COMINT capabilities. They've always had a hard time gaining HUMINT, but not the other way around...

- some analysts/journalists say that the 'Cold War' never really ended, that it's effectively been in hibernation for a while. The interesting thing is that in spite of what China has said regarding a peaceful rise, it is pushing farther out with its weapons systems and platforms. You don't need an aircraft carrier to defend your territory. You just need longer range weapons systems and platforms. It will be interesting to see how far China chooses to push out; in spite of what is said by some public servants and politicians, it is clear that China wants to take a more global role

- technically, the US wins many of the wars that it chooses. Realistically, though it's not so clear. Nearly every single adversary now engages in longer term, guerilla style tactics. In Afghanistan, Iraq, Iran, Libya, and elsewhere they've basically been waiting for allied forces to clear out before taking their opportunity

- a lot of claims regarding US defense technology superiority make no sense. If old Soviet era SAM systems are so worthless against US manufactured jets then why bother going to such extents with regard to cyberwarfare when it comes to shutting them down? I am absolutely certain that the claim that some classes of aircraft have never been shot down is not true

- part of me wonders just exactly how much effort and resources are the Chinese and Russians genuinely throwing at their 5th gen fighter programs. Is it possible that they are simply waiting until most of the development is completed by the West and then they'll 'magically' have massive breakthroughs and begin full scale production of their programs? They've had a history of stealing and reverse engineering a lot of technology for a long time now

- the US defense budget seems exorbitant. True, their requirements are substantially different, but look at the way they structure a lot of programs and it becomes obvious why as well. They're often very ambitious, with multiple core technologies that need to be developed in order for the overall program to work. Part of me thinks that there is almost a zero sum game at times. They think that they can throw money at some problems and they will be solved. It's not as simple as that. They've been working on some core problems like directed energy weapons and rail guns for a long time now and have had limited success. If they want a genuine chance at this they're better off understanding the problem and then funding the core science. It's much like their space and intelligence programs where a lot of other spin off technologies were subsequently developed

- reading a lot of stuff online and elsewhere, it becomes much clearer that both sides often underestimate one another (less often by people in the defense or intelligence community). You should track and watch things based on what people do, not what they say

- a lot of countries just seem to want to stay out of the geo-political game. They don't want to choose sides and couldn't care less. Understandable, seeing the role that both countries play throughout the world now

- the funny thing is that some of the countries that are pushed back (Iran, North Korea, Russia, etc...) don't have much to lose. US defense alone has struggled to identify targets worth bombing in North Korea, and how do you force a country to comply if it has nothing left to lose, such as Iran or North Korea? It's unlikely China or Russia will engage in all out attack in the near to medium future. It's likely they'll continue to do the exact same thing and skirt around the edges with cyberwarfare and aggressive intelligence collection

- It's clear that the superpower struggle has been underway for a while now. The irony is that this is a game of economies as well as technology. If the West attempts to compete purely via defense technology/deterrence then part of me fears it will head down the same pathway that the USSR went: it will collapse under the strain of a defense industry (and other industries) that are largely worthless (under most circumstances) and do nothing for the general population. Of course, this is partially offset by a potential new trade pact in the APAC region, but I am certain that this will inevitably still be in favour of the US, especially with their extensive SIGINT/COMINT capability, economic intelligence, and their use of it in trade negotiations

- you don't really realise how many jobs and how much money are on the line with regard to the JSF program until you do the numbers

An old but still enjoyable/playable game with updates to run under Windows 7

Watching footage about George W. Bush it becomes much clearer that he was somewhat of a clown who realised his limitations. It's not the case with Tony Abbott who can be scary and hilarious at times

Last Week Tonight with John Oliver: Tony Abbott, President of the USA of Australia (HBO)

Must See Hilarious George Bush Bloopers! - VERY FUNNY

Once upon a time I read about a Chinese girl who used a pin in her soldering iron to do extremely fine soldering work. I use solder paste or wire glue. It takes less time, and using sticky/masking tape you can achieve a really clean finish!

Anthony Towns: Lightning network thoughts

Tue, 2015-09-22 19:26

I’ve been intrigued by micropayments for, like, ever, so I’ve been following Rusty’s experiments with bitcoin with interest. Bitcoin itself, of course, has a roughly 10 minute delay, and a fee of effectively about 3c per transaction (or $3.50 if you count inflation/mining rewards) so isn’t really suitable for true microtransactions; but pettycoin was going to be faster and cheaper until it got torpedoed by sidechains, and more recently the lightning network offers the prospect of small payments that are effectively instant, and have fees that scale linearly with the amount (so if a $10 transaction costs 3c like in bitcoin, a 10c transaction will only cost 0.03c).

(Why do I think that’s cool? I’d like to be able to charge anyone who emails me 1c, and make $130/month just from the spam I get. Or you could have a 10c signup fee for webservice trials to limit spam but not have to tie everything to your facebook account or undergo turing trials. You could have an open wifi access point, that people don’t have to register against, and just bill them per MB. You could maybe do the same with tor nodes. Or you could setup bittorrent so that in order to receive a block I pay maybe 0.2c/MB to whoever sent it to me, and I charge 0.2c/MB to anyone who wants a block from me — leechers paying while seeders earn a profit would be fascinating. It’d mean you could setup a webstore to sell apps or books without having to sell your soul to a corporate giant like Apple, Google, Paypal, Amazon, Visa or Mastercard. I’m sure there’s other fun ideas)

A bit over a year ago I critiqued sky-high predictions of bitcoin valuations on the basis that “I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints)” — which is currently playing out as “OMG the block size is too small” debates. But the cool thing about lightning is that it lets you avoid that problem entirely; hundreds, thousands or millions of transactions over weeks or years can be summarised in just a handful of transactions on the blockchain.

(How does lightning do that? It sets up a mesh network of “channels” between everyone, and provides a way of determining a route via those channels between any two people. Each individual channel is between two people, and each channel is funded with a particular amount of bitcoin, which is split between the two people in whatever way. When you route a payment across a channel, the amount of that payment’s bitcoins moves from one side of the channel to the other, in the direction of the payment. The amount of bitcoins in a channel doesn’t change, but when you receive a payment, the amount of bitcoins on your side of your channels does. When you simply forward a payment, you get more money in one channel, and less in another by the same amount (or less a small handling fee). Some bitcoin-based crypto-magic ensues to ensure you can’t steal money, and that the original payer gets a “receipt”. The end result is that the only bitcoin transactions that need to happen are to open a channel, close a channel, or change the total amount of bitcoin in a channel. Rusty gave a pretty good interview with the “Let’s talk bitcoin” podcast if the handwaving here wasn’t enough background)
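To make that hand-waving a little more concrete, here is a toy Python sketch of my own (not Rusty's code, and ignoring all the crypto) of a payment hopping across two channels, with the forwarding node keeping a 1% fee:

class Channel:
    """A toy payment channel: a fixed pot of funds split between two ends."""

    def __init__(self, a, b, a_funds, b_funds):
        self.ends = {a: a_funds, b: b_funds}

    def pay(self, sender, receiver, amount):
        # Funds move from one side to the other; the channel total never changes
        if self.ends[sender] < amount:
            raise ValueError("not enough funds on %s's side" % sender)
        self.ends[sender] -= amount
        self.ends[receiver] += amount


FEE = 0.01  # the 1% cut that forwarding nodes charge in the simulation below

# Alice -> Bob -> Emma: Bob forwards the payment and keeps the fee
alice_bob = Channel("alice", "bob", 200.0, 0.0)
bob_emma = Channel("bob", "emma", 600.0, 0.0)

price = 5.00
alice_bob.pay("alice", "bob", price * (1 + FEE))  # Alice pays the price plus Bob's cut
bob_emma.pay("bob", "emma", price)                # Bob forwards just the price

print(alice_bob.ends)  # Alice's side shrinks, Bob's side grows
print(bob_emma.ends)   # Bob's side shrinks, Emma's side grows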

Of course, this doesn’t work very well if you’re only spending money: it doesn’t take long for all the bitcoins on your lightning channels to end up on the other side, and at that point you can’t spend any more. If you only receive money over lightning, the reverse happens, and you’re still stuck just as quickly. It’s still marginally better than raw bitcoin, in that you have two bitcoin transactions to open and close a channel worth, say, $200, rather than forty bitcoin transactions, one for each $5 you spend on coffee. But that’s only a fairly minor improvement.

You could handwave that away by saying “oh, but once lightning takes off, you’ll get your salary paid in lightning anyway, and you’ll pay your rent in lightning, and it’ll all be circular, just money flowing around, lubricating the economy”. But I think that’s unrealistic in two ways: first, it won’t be that way to start with, and if things don’t work when lightning is only useful for a few things, it will never take off; and second, money doesn’t flow around the economy completely fluidly, it accumulates in some places (capitalism! profits!) and drains away from others. So it seems useful to have some way of making degenerate scenarios actually work — like someone who only uses lightning to spend money, or someone who receives money by lightning but only wants to spend cold hard cash.

One way you can do that is if you imagine there’s someone on the lightning network who’ll act as an exchange — who’ll send you some bitcoin over lightning if you send them some cash from your bank account, or who’ll deposit some cash in your bank account when you send them bitcoins over lightning. That seems like a pretty simple and realistic scenario to me, and it makes a pretty big improvement.

I did a simulation to see just how well that actually works out, with "Alice" as a coffee consumer who does nothing with lightning but buy $5 espressos from "Emma" and refill her lightning wallet by exchanging cash with "Xavier", who runs an exchange converting dollars (or gold or shares etc) to lightning funds. Bob, Carol and Dave run lightning nodes and take a 1% cut of any transactions they forward. I uploaded a video to youtube that I think helps visualise the payment flows and channel states (there's no sound):

It starts off with Alice and Xavier putting $200 in channels in the network; Bob, Carol and Dave putting in $600 each, and Emma just waiting for cash to arrive. The statistics box in the top right tracks how much each player has on the lightning network (“ln”), how much profit they’ve made (“pf”), and how many coffees Alice has ordered from Emma. About 3000 coffees later, it ends up with Alice having spent about $15,750 in real money on coffee ($5.05/coffee), Emma having about $15,350 in her bank account from making Alice’s coffees ($4.92/coffee), and Bob, Carol and Dave having collectively made about $400 profit on their $1800 investment (about 22%, or the $0.13/coffee difference between what Alice paid and Emma received). At that point, though, Bob, Carol and Dave have pretty much all the funds in the lightning network, and since they only forward transactions but never initiate them, the simulation grinds to a halt.

You could imagine a few ways of keeping the simulation going: Xavier could refresh his channels with another $200 via a blockchain transaction, for instance. Or Bob, Carol and Dave could buy coffees from Emma with their profits. Or Bob, Carol and Dave could cash some of their profits out via Xavier. Or maybe they buy some furniture from Alice. Basically, whatever happens, you end up relying on “other economic activity” happening either within lightning itself, or in bitcoin, or in regular cash.

But grinding to a halt after earning 22% and spending/receiving $15k isn’t actually too bad even as it is. So as a first pass, it seems like a pretty promising indicator that lightning might be feasible economically, as well as technically.

One somewhat interesting effect is that the profits don’t get distributed particularly evenly — Bob, Carol and Dave each invest $600 initially, but make $155.50 (25.9%), $184.70 (30.7%) and $52.20 (8.7%) respectively. I think that’s mostly a result of how I chose to route payments — it optimises the route to choose channels with the most funds in order to avoid payments getting stuck, and Dave just ends up handling less transaction volume. Having a better routing algorithm (that optimises based on minimum fees, and relies on channel fees increasing when they become unbalanced) might improve things here. Or it might not, and maybe Dave needs to quote lower fees in general or establish a channel with Xavier in order to bring his profits up to match Bob and Carol.
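As an aside, the "route via the channels with the most funds" heuristic is essentially a widest-path search: pick the route that maximises the smallest per-hop balance. A rough sketch of the idea (mine, not the simulator's actual code):

import heapq

def widest_route(balances, source, target):
    # balances maps (from_node, to_node) -> funds spendable in that direction
    # Max-heap on the bottleneck (smallest per-hop balance) seen so far
    heap = [(-float("inf"), source, [source])]
    best = {}
    while heap:
        neg_width, node, path = heapq.heappop(heap)
        width = -neg_width
        if node == target:
            return path, width
        if best.get(node, -1.0) >= width:
            continue
        best[node] = width
        for (a, b), funds in balances.items():
            if a == node and funds > 0:
                heapq.heappush(heap, (-min(width, funds), b, path + [b]))
    return None, 0.0

balances = {
    ("alice", "bob"): 200, ("bob", "carol"): 450, ("bob", "dave"): 150,
    ("carol", "emma"): 500, ("dave", "emma"): 600,
}
print(widest_route(balances, "alice", "emma"))

A fee-optimising router, as suggested above, would swap the bottleneck criterion for a cumulative-fee one, and let channel fees rise as a channel becomes unbalanced.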

OpenSTEM: Building a Rope Bridge Using Quadcopters

Tue, 2015-09-22 13:30

Or, how to do something really useful with these critters…

Quadcopters are regularly in the news, as they’re fairly cheap and lots of people are playing about with them and quite often creating a nuisance or even dangerous situations. I suppose it’s a phase, but I don’t blame people for wondering what positive use quadcopters can have.

At STEM and Management University ETH Zurich (Switzerland), software tools have been developed to calculate the appropriate structure for a rope bridge, after a physical location has been measured up. The resulting structure is also virtually tested before the quadcopters start, autonomously, with the actual build.

The built physical structure can hold humans crossing. Imagine this getting used in disaster areas, to help save people. Just one example… quite fabulous, isn’t it!

The experiments are done in the Flying Machine Arena of ETH Zurich, a 10x10x10 meter space with fast motion capture cameras.

Michael Still: Camp Cottermouth

Tue, 2015-09-22 12:28
I spent the weekend at a Scout camp at Camp Cottermouth. The light on the hills here in the mornings is magic.


Interactive map for this route.

Tags for this post: blog pictures 20150920 photo canberra bushwalk


Michael Davies: Mocking python objects and object functions using both class-level and function-level mocks

Mon, 2015-09-21 17:51
Had some fun solving an issue with partitions larger than 2Tb, and came across a little gotcha when it comes to mocking in python when a) you want to mock both an object and a function in that object, and b) when you want to mock.patch.object at both the test class and test method level.

Say you have a function you want to test that looks like this:

def make_partitions(...):
    ...
    dp = disk_partitioner.DiskPartitioner(...)
    dp.add_partition(...)
    ...

where the DiskPartitioner class looks like this:

class DiskPartitioner(object):

    def __init__(self, ...):
        ...

    def add_partition(self, ...):
        ...

and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):
    ...

and you want to add a new test function, with a new patch that applies just to your new test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with the mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and the __init__ of the DiskPartitioner class?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner',
                       autospec=True)
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):

        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        ...
        disk_utils.make_partitions(...)   # Function under test
        mock_dp.assert_called_once_with(...)
        mock_dp.add_partition.assert_called_once_with(...)

Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least to me).  You specify the function level mocks first, followed by the class level mocks (see the sketch after these notes).

2) You need to manually mock the instance method of the mocked class.  (i.e. the add_partition function)
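To make the ordering point concrete, here is a minimal self-contained sketch using made-up names (a hypothetical Helper class rather than the real disk_partitioner/utils code from the review); the method level mock arrives before the class level one:

import unittest
from unittest import mock


class Helper(object):
    """Stand-in for the real objects being patched (hypothetical)."""

    def execute(self):
        return "real execute"

    def add_partition(self):
        return "real add_partition"


@mock.patch.object(Helper, 'execute')            # class level patch
class OrderingTestCase(unittest.TestCase):

    @mock.patch.object(Helper, 'add_partition')  # method level patch
    def test_ordering(self, mock_add, mock_exec):
        # Method level mocks are passed first, then class level mocks
        h = Helper()
        h.add_partition()
        h.execute()
        mock_add.assert_called_once_with()
        mock_exec.assert_called_once_with()


if __name__ == '__main__':
    unittest.main()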

You can see the whole enchilada over here in the review.

David Rowe: Codec 2 Masking Model Part 2

Mon, 2015-09-21 09:30

I’ve been making steady progress on my new ideas for amplitude quantisation for Codec 2. The goal is to increase speech quality, in particular for the very low bit rate 700 bit/s modes.

Here are the signal processing steps I’m working on:

The signal processing algorithms I have developed since Part 1 are coloured in blue. I still need to nail the yellow work. The white stuff has been around for years.

Actually I spent a few weeks on the yellow steps but wasn’t satisfied so looked for something a bit easier to do for a while. The progress has made me feel like I am getting somewhere, and pumped me up to hit the tough bits again. Sometimes we need to organise the engineering to suit our emotional needs. We need to see (or rather “feel”) constant progress. Research and Disappointment is hard!

Transformations and Sample Rate Changes

The goal of a codec is to reduce the bit rate, but still maintain some target speech quality. The “quality bar” varies with your application. For my current work low quality speech is OK, as I’m competing with analog HF SSB. Just getting the message through after a few tries is a lower bar, the upper bar being easy conversation over that nasty old HF channel.

While drawing the figure above I realised that a codec can be viewed as a bunch of processing steps that either (i) transform the speech signal or (ii) change the sample rate. An example of transforming is performing a FFT to convert the time domain speech signal into the frequency domain. We then decimate in the time and frequency domain to change the sample rate of the speech signal.

Lowering the sample rate is an effective way to lower the bit rate. This process is called decimation. In Codec 2 we start with a bunch of sinusoidal amplitudes that we update every 10ms (100Hz sampling rate). We then throw away 3 out of every 4 to give a sample rate of 25Hz. This means there are fewer samples every second, so the bit rate is reduced.

At the decoder we use interpolation to smoothly fill in the missing gaps, raising the sample rate back up to 100Hz. We eventually transform back to the time domain using an inverse FFT to play the signal out of the speaker. Speakers like time domain signals.
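As a toy illustration of that decimate-then-interpolate step (my own sketch, not actual Codec 2 code), here is a single amplitude track being thinned from a 100Hz frame rate to 25Hz and then linearly interpolated back at the decoder:

import numpy as np

# One sinusoidal amplitude (in dB) tracked every 10ms, i.e. a 100Hz frame rate
frames_100hz = np.array([10.0, 12.0, 15.0, 14.0, 11.0, 9.0, 8.0, 10.0, 13.0])
t_100hz = np.arange(len(frames_100hz)) * 0.01          # seconds

# Decimate: keep every 4th frame, giving a 25Hz frame rate and 1/4 of the bits
t_25hz = t_100hz[::4]
frames_25hz = frames_100hz[::4]

# Decoder side: interpolate back up to 100Hz to smoothly fill in the gaps
reconstructed = np.interp(t_100hz, t_25hz, frames_25hz)

print("sent over channel:", frames_25hz)
print("reconstructed    :", np.round(reconstructed, 2))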

In the figure above we start with chunks of speech samples in the time domain, then transform into the frequency domain, where we fit a sinusoidal, then masking model.

The sinusoidal model takes us from a 512 point FFT to 20-80 amplitudes. It fits a sinusoidal speech model to the incoming signal. The number of sinusoidal amplitudes varies with the pitch of the incoming voice. It is time varying, which complicates our life if we desire a constant bit rate.

The masking model fits a smoothed envelope that represents the way we produce and hear speech. For example we don’t talk in whistles (unless you are R2D2) so there is no point wasting bits in being able to code very narrow bandwidth signals. The ear masks weak tones near strong ones so no point coding them either. The ear also has a log frequency and amplitude response so we take advantage of that too.

In this way the speech signal winds its way through the codec, being transformed this way and that, as we carve off samples until we get something that we can send over the channel.

Next Steps

Need to sort out those remaining yellow blocks, and come up with a fully quantised codec candidate.

An idea that occurred to me while drawing the diagram – can we estimate the mask directly from the FFT samples? We may not need the intermediate estimation of the sinusoidal amplitudes any more.

It may also be possible to analyse/synthesise using filters modeling the masks running in the time domain. For example on the analysis side look at the energy at the output at a bunch of masking filters spaced closely enough that we can’t perceive the difference.

Writing stuff up on a blog is cool. It’s “the cardboard colleague” effect: the process of clearly articulating your work can lead to new ideas and bug fixes. It doesn’t matter who you articulate the problems to, just talking about them can lead to solutions.

Sridhar Dhanapalan: Twitter posts: 2015-09-14 to 2015-09-20

Mon, 2015-09-21 01:27

Ben Martin: Terry Motor Upgrade -- no stopping it!

Sun, 2015-09-20 15:48
I have now updated the code and PID control for the new RoboClaw and HD Planetary motor configuration. As part of the upgrade I had to move to using a lipo battery because these motors stall at 20 amps. While it is a bad idea to leave it stalled, it's a worse idea to have the battery have issues due to drawing too much current. It's always best to choose where the system will fail rather than letting the cards fall where they may. In this case, leaving it stalled will result in drive train damage in the motors, not a controller board failure or a lipo issue.

One of the more telling images is below which compares not only the size of the motors but also the size of the wires servicing the power to the motors. I used 14AWG wire with silicon coating for the new motors so that a 20A draw will not cause any issues in the wiring. Printing out new holders for the high precision quadrature encoders took a while. Each print was about 1 hour long and there was always a millimetre or two that could be changed in the design which then spurred another print job.

Below is the old controller board (the 5A roboclaw) with the new controller sitting on the bench in front of Terry (45A controller). I know I only really needed the 30A controller for this job, but when I decided to grab the items the 30A was sold out so I bumped up to the next model.

The RoboClaw is isolated from the channel by being attached via nylon bolts to a 3d printed cross over panel.

One of the downsides to the 45A model, which I imagine will fix itself in time, was that the manual didn't seem to be available. The commands are largely the same as for the other models in the series, but I had to work out the connections for the quad encoders and have currently powered them off the BEC because the screw terminal version of the RoboClaw doesn't have +/- terminals for the quads.

One little surprise was that these motors are quite magnetic without power. Nuts and the like want to move in and the motors will attract each other too. Granted it's not like they will attract themselves from any great distance, but it's interesting compared to the lower torque motors I've been using in the past.

I also had a go at wiring 4mm connectors to 10AWG cable. Almost got it right after a few attempts but the lugs are not 100% fixed into their HXT plastic chassis because of some solder or flux debris I accidentally left on the job. I guess some time soon I'll be wiring my 100A monster automotive switch inline in the 10AWG cable for solid battery isolation when Terry is idle. ServoCity has some nice bundles of 14AWG wire (which are the yellow and blue ones I used to the motors) and I got a bunch of other wire from HobbyKing.

Francois Marier: Hooking into docking and undocking events to run scripts

Sun, 2015-09-20 10:55

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.

This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock.

The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events

Create the following ACPI event scripts as suggested in this guide.

Firstly, /etc/acpi/events/thinkpad-dock:

event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

Secondly, /etc/acpi/events/thinkpad-undock:

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"

then restart udev:

sudo service udev restart

Finding the right events

To make sure the events are the right ones, lift them off of:

sudo acpi_listen

and ensure that your script is actually running by adding:

logger "ACPI event: $*"

at the beginning of it and then looking in /var/log/syslog for lines like:

logger: external-monitor undock
logger: external-monitor dock

If that doesn't work for some reason, try using an ACPI event script like this:

event=ibm/hotkey
action=logger %e

to see which event you should hook into.

Using xrandr inside an ACPI event script

Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:

#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1

David Rowe: Phase from Magnitude Spectra

Fri, 2015-09-18 09:30

For my latest Codec 2 brainstorms I need to generate a phase spectrum from a magnitude spectrum. I’m using cepstral/minimum phase techniques. Despite plenty of theory and even code on the Internet it took me a while to get something working. So I thought I’d post a worked example here. I must admit the theory still makes my eyes glaze over. However a working demo is a great start to understanding the theory if you’re even nerdier than me.

Codec 2 just transmits the magnitude of the speech spectrum to the decoder. The phases are estimated at the encoder but take too many bits to encode, and aren’t that important for communications quality speech. So we toss them away and reconstruct them at the decoder using some sort of rule based approach. I’m messing about with a new way of modeling the speech spectrum so needed a new way to generate the phase spectra at the decoder.

Here is the mag_to_phase.m function, which is a slightly modified version of this Octave code that I found in my meanderings on the InterWebs. I think there is also a Matlab/Octave function called mps.m which does a similar job.
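If you want to play along without Octave, here is my rough numpy sketch of the same cepstral trick (an approximation of what mag_to_phase.m does, not the actual function). It assumes the magnitude samples run from DC to Fs/2 inclusive and returns the minimum phase in radians:

import numpy as np

def mag_to_min_phase(mag_half):
    # mag_half: magnitude samples from DC to Fs/2 inclusive (length N/2 + 1)
    nhalf = len(mag_half) - 1
    n = 2 * nhalf
    # Rebuild the full symmetric magnitude spectrum and take its log
    mag_full = np.concatenate([mag_half, mag_half[-2:0:-1]])
    log_mag = np.log(np.maximum(mag_full, 1e-12))   # floor to avoid log(0)
    cep = np.real(np.fft.ifft(log_mag))             # real cepstrum
    # Fold onto positive quefrencies to get the minimum phase cepstrum
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:nhalf] = 2.0 * cep[1:nhalf]
    fold[nhalf] = cep[nhalf]
    # Imaginary part of the FFT of the folded cepstrum is the phase
    return np.imag(np.fft.fft(fold))[:nhalf + 1]

# Sanity check against a known minimum phase filter, H(z) = 1/(1 - 0.9z^-1)
w = np.linspace(0, np.pi, 257)
H = 1.0 / (1.0 - 0.9 * np.exp(-1j * w))
print(np.allclose(mag_to_min_phase(np.abs(H)), np.angle(H), atol=1e-3))  # True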

I decided to test it using a 10th order LPC synthesis filter. These filters are known to have a minimum-phase phase spectrum. So if the algorithm is working it will generate exactly the same phase spectrum.

So we start with 40ms of speech:

Then we find the phase spectra (bottom) given the magnitude spectrum (top):

On the bottom the green line is the measured phase spectrum of the filter, and the blue line is what the mag_to_phase.m function came up with. They are identical, I’ve just offset them by 0.5 rads on the plot. So it works, yayyyy – we can find a minimum phase spectrum from just the magnitude spectrum of a filter.

This is the impulse response, which the algorithm spits out as an intermediate product. One interpretation of minimum phase (so I’m told) is that the energy is all collected near the start of the pulse:

As the DFT is cyclical the bit on the right is actually concatenated with the bit on the left to make one continuous pulse centered on time = 0. All a bit “Dr Who” I know but this is DSP after all! With a bit of imagination you can see it looks like one period of the original input speech in the first plot above.

Michael Still: Exploring for a navex

Thu, 2015-09-17 09:28
I feel like I need more detailed maps of Mount Stranger than I currently have in order to lay out a possible navex. I therefore spent a little time this afternoon wandering down the fire trail to mark all the gates in the fence. I need to do a little more of this before it's ready for a navex.

Interactive map for this route.

Tags for this post: blog canberra bush walk

Related posts: Walking to work; First jog, and a walk to Los Altos


Ben Martin: 10 Foot Pound Boots for Terry

Wed, 2015-09-16 23:40
A sad day when your robot outgrows its baby motors. On carpet this happened when the robot started to tip the scales at over 10kg. So now I have some lovely new motors that can generate almost 10 foot pounds of torque.

This has caused me to move to a more rigid motor attachment and a subsequent modification and reprint of the rotary encoder holders (not shown above). The previous motors were spur motors, so I could rotate the motor itself within its mounting bracket to mate the large gear to the encoders. Not so anymore. Apart from looking super cool, the larger alloy gear gives me an 8 to 1 reduction to the encoders; nothing like the feeling of picking up 3 bits of extra precision.

This has also meant using some rather sizable cables. The yellow and purple cables are 14 AWG silicon wires. For the uplink I have an almost store bought 12AWG and some hand made 10 AWG monsters. Each motor stalls at 20A so there is the potential for a noticeable amount of current to flow around the base of Terry now.

Pia Waugh: Returning to data and Gov 2.0 from the DTO

Wed, 2015-09-16 10:27

I have been working at the newly created Digital Transformation Office in the Federal Government since January this year helping to set it up, create a vision, get some good people in and build some stuff. I was working in and then running a small, highly skilled and awesome team focused on how to dramatically improve information (websites) and transaction services across government. This included a bunch of cool ideas around whole of government service analytics, building a discovery layer (read APIs) for all government data, content and services, working with agencies to improve content and SEO, working on reporting mechanisms for the DTO, and looking at ways to usefully reduce the huge number of websites currently run by the Federal public service amongst other things. You can see some of our team blog posts about this work.

It has been an awesome trip and we built some great stuff, but now I need to return to my work on data, gov 2.0 and supporting the Australian Government CTO John Sheridan in looking at whole of government technology, procurement and common platforms. I can also work more closely with Sharyn Clarkson and the Online Services Branch on the range of whole of government platforms and solutions they run today, particularly the highly popular GovCMS. It has been a difficult choice but basically it came down to where my skills and efforts are best placed at this point in time. Plus I miss working on open data!

I wanted to say a final public thank you to everyone I worked with at the DTO, past and present. It has been a genuine privilege to work with the diverse teams and leadership from across over 20 agencies in the one team! It gave me a lot of insight into the different cultures, capabilities and assumptions in different departments, and I think we all challenged each other and created a bigger and better vision for the effort. I have learned much and enjoyed the collaborative nature of the broader DTO team.

I believe the DTO has two major opportunities ahead: as a force of awesome and a catalyst for change. As a force of awesome, the DTO can show how delivery and service design can be done with modern tools and methods, can provide a safe sandpit for experimentation, can set the baseline for the whole APS through the digital service standard, and can support genuine culture change across the APS through training, guidance and provision of expertise/advisers in agencies. As a catalyst for change, the DTO can support the many, many people across the APS who want transformation, who want to do things better, and who can be further empowered, armed and supported to do just that through the work of the DTO. Building stronger relationships across the public services of Australia will be critical to this broader cultural change and evolution to modern technologies and methodologies.

I continue to support the efforts of the DTO and the broader digital transformation agenda and I wish Paul Shetler and the whole team good luck with an ambitious and inspiring vision for the future. If we could all make an approach that was data/evidence driven, user centric, mashable/modular, collaborative and cross government(s) the norm, we would overcome the natural silos of government, we would establish the truly collaborative public service we all crave and we would be better able to support the community. I have long believed that the path of technical integrity is the most important guiding principle of everything I do, and I will continue to contribute to the broader discussions about “digital transformation” in government.

Stay tuned for updates on the blog, and I look forward to spending the next 4 months kicking a few goals before I go on maternity leave

James Purser: First PETW in over two years!

Tue, 2015-09-15 16:30

So tomorrow night I'm going to be conducting my first interview for Purser Explores The World in over two years, wootness!

And even more awesome it's going to be with New York based photographer Chris Arnade who has been documenting the stories of people battling addiction and poverty in the New York neighbourhood of the South Bronx via the Faces of Addiction project.

I'm excited, both because this is the first PETW episode in aaaages, and also because the stories that Chris tells, through his photography as well as his facebook page and other media, humanise people who have long been swept under the rug by society.

Blog Categories: angrybeanie, Purser Explores The World

David Rowe: FreeDV Voice Keyer and Spotting Demo

Tue, 2015-09-15 14:30

I’ve added a Voice Keyer feature to the FreeDV GUI program. It will play a pre-recorded wave file, key your transmitter, then pause to listen. Use the Tools-PTT menu to select the wave file to use, the rx pause duration, and the number of times to repeat. If you hit space bar the keyer exits. It also stops if it detects a valid FreeDV sync for 5 seconds, to avoid congesting the band if others are using it.

I’m going to leave the voice keyer running while I’m working at my bench, to stimulate local FreeDV activity.

Spotting Demo

FreeDV has a low bit rate text message stream that allows you to send information such as your call-sign and location. Last year I added some code to parse the received text messages, and generate a system command if certain patterns are received. In the last few hours I worked up a simple FreeDV “spotting” system using this feature and a shell script.

Take a look at this screen shot of Tool-Options:

I’m sending myself the text message “s=vk5dgr hi there from David” as an example. Every time FreeDV receives a text message it issues a “rx_txtmsg” event. This is then parsed by the regular expressions on the left “rx_txtmsg s=(.*)”. If there is a match, the system command on the right is executed.

In this case any events matching “rx_txtmsg s=something” will result in a call to the shell script, with “something” passed to it as an argument. Here is what the script looks like:

#!/bin/bash
# Placeholders -- point these at your own spot file and ftp server
SPOTFILE=/path/to/freedv_spot.html
FTPSERVER=ftp.example.com

echo `date -u` "  " $1 "<br>" >> $SPOTFILE

tail -n 25 $SPOTFILE > /tmp/spot.tmp1

mv /tmp/spot.tmp1 $SPOTFILE

lftp -e "cd www;put $SPOTFILE;quit" $FTPSERVER

So this script adds a time stamp, limits the spot file to the last 25 lines, then ftps it to my webserver. You can see the web page here. It’s pretty crude, but you get the idea. It needs proper HTML formatting, a title, and a way to prevent the same person’s spot being repeated all the time.

You can add other regular expressions and system commands if you like. For example you could make a bell ring if someone puts your callsign in a text message, or put a pin on a map at their grid coordinates. Or send a message to FreeDV QSO finder to say you are “on line” and listening. If a few of us set up spotters around the world it will be a useful testing tool, like websdr for FreeDV.

To help debug you can mess with the regular expressions and system commands in real time, just click on Apply.

I like to use full duplex (Tools-PTT Half duplex unchecked) and “modprobe snd-aloop” to loopback the modem audio when testing. Talking to myself is much easier than two laptops.

If we get any really useful regexp/system commands we can bake them into FreeDV. I realise not everyone is up to regexp coding!

I’ll leave this running for a bit on 14.236 MHz, FreeDV 700B. See if you can hit it!

David Rowe: FreeDV QSO Party Weekend

Tue, 2015-09-15 13:30

A great weekend with the AREG team working FreeDV around VK; and despite some poor band conditions, the world.

We were located at Younghusband, on the banks of the river Murray, a 90 minute drive due East of Adelaide:

We had two K3 radios, one with a SM1000 on 20M, and one using laptop based FreeDV on 40M:

Here is the enormous 40M beam we had available, with my young son at the base:

It was great to see FreeDV 700B performing well under quite adverse conditions. Over time I became more and more accustomed to the sound of 700B, and could understand it comfortably from a laptop loudspeaker across the room.

When we had good quality FreeDV 1600 signals, it really sounded great, especially the lack of SSB channel noise. As we spoke to people, we noticed a lot of other FreeDV traffic popping up around our frequency.

We did have some problems with S7 power line hash from a nearby HT line. The ambient HF RF noise issue is a problem for HF radio everywhere these days. I have some ideas for DSP based active noise cancellation using 2 or more receivers that I might try in 2016. Mark, VK5QI, had a novel solution. He connected FreeDV on his laptop to Andy’s (VK5AKH) websdr in Adelaide. With the lower noise level we successfully completed a QSO with Gerhard, OE3GBB, in Austria. Here are Andy (left) and Mark working Gerhard:

I was in the background most of the time, working on FreeDV on my laptop! Thank you very much AREG and especially Chris, VK5CP, for hosting the event!