Planet Linux Australia


David Rowe: Codec 2 Masking Model Part 2

Mon, 2015-09-21 09:30

I’ve been making steady progress on my new ideas for amplitude quantisation for Codec 2. The goal is to increase speech quality, in particular for the very low bit rate 700 bit/s modes.

Here are the signal processing steps I’m working on:

The signal processing algorithms I have developed since Part 1 are coloured in blue. I still need to nail the yellow work. The white stuff has been around for years.

Actually I spent a few weeks on the yellow steps but wasn’t satisfied, so I looked for something a bit easier to do for a while. The progress has made me feel like I am getting somewhere, and pumped me up to hit the tough bits again. Sometimes we need to organise the engineering to suit our emotional needs. We need to see (or rather “feel”) constant progress. Research and Disappointment is hard!

Transformations and Sample Rate Changes

The goal of a codec is to reduce the bit rate, but still maintain some target speech quality. The “quality bar” varies with your application. For my current work low quality speech is OK, as I’m competing with analog HF SSB. Just getting the message through after a few tries is a lower bar, the upper bar being easy conversation over that nasty old HF channel.

While drawing the figure above I realised that a codec can be viewed as a bunch of processing steps that either (i) transform the speech signal or (ii) change the sample rate. An example of transforming is performing an FFT to convert the time domain speech signal into the frequency domain. We then decimate in the time and frequency domain to change the sample rate of the speech signal.

Lowering the sample rate is an effective way to lower the bit rate. This process is called decimation. In Codec 2 we start with a bunch of sinusoidal amplitudes that we update every 10ms (a 100Hz sampling rate). We then throw away three out of every four to give a sample rate of 25Hz. This means there are fewer samples each second, so the bit rate is reduced.

At the decoder we use interpolation to smoothly fill in the missing gaps, raising the sample rate back up to 100Hz. We eventually transform back to the time domain using an inverse FFT to play the signal out of the speaker. Speakers like time domain signals.
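The decimate-then-interpolate round trip can be sketched in a few lines (my own toy illustration in Python, not code from Codec 2; the ramp of "frames" is made up):

```python
import numpy as np

# Amplitude "frames" arrive every 10 ms (100 Hz); the encoder keeps one
# frame in four (25 Hz) and the decoder linearly interpolates the gaps
# back up to 100 Hz.
frames = np.arange(13, dtype=float)   # 13 frames of a slowly varying parameter
kept_idx = np.arange(0, 13, 4)        # keep frames 0, 4, 8, 12 -> 25 Hz
kept = frames[kept_idx]               # this is all we'd send over the channel
recovered = np.interp(np.arange(13), kept_idx, kept)
print(np.allclose(recovered, frames))  # True: a linear ramp survives perfectly
```

Real speech parameters are not linear ramps, of course; the interpolation error on fast-moving sounds is exactly where this kind of decimation costs quality.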

In the figure above we start with chunks of speech samples in the time domain, then transform into the frequency domain, where we fit first a sinusoidal model, then a masking model.

The sinusoidal model takes us from a 512 point FFT to 20-80 amplitudes. It fits a sinusoidal speech model to the incoming signal. The number of sinusoidal amplitudes varies with the pitch of the incoming voice. It is time varying, which complicates our life if we desire a constant bit rate.
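That 20-80 range is easy to sanity check (my own back-of-envelope arithmetic, assuming a 4 kHz bandwidth at an 8 kHz sample rate):

```python
# Harmonics sit at multiples of the pitch F0 up to the 4 kHz band edge,
# so the number of sinusoidal amplitudes is roughly 4000 / F0.
for f0 in (50, 100, 200):            # deep, average and high-pitched voices (Hz)
    print(f0, "Hz pitch ->", 4000 // f0, "amplitudes")
```

A deep 50 Hz voice needs about 80 amplitudes, a high 200 Hz voice only about 20, which is why the count is time varying.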

The masking model fits a smoothed envelope that represents the way we produce and hear speech. For example we don’t talk in whistles (unless you are R2D2), so there is no point wasting bits on being able to code very narrow bandwidth signals. The ear masks weak tones near strong ones, so no point coding them either. The ear also has a log frequency and amplitude response, so we take advantage of that too.
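For a feel of that log-like frequency response, here is one common approximation of the ear's critical-band (Bark) scale, Zwicker's formula; it is used purely as an illustration and is not necessarily the mapping Codec 2 itself uses:

```python
import math

def hz_to_bark(f):
    """Zwicker's approximation of the Bark critical-band scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

# Equal 500 Hz steps cover fewer and fewer Bark as frequency rises,
# i.e. the ear resolves high frequencies more coarsely:
print([round(hz_to_bark(f), 1) for f in (500, 1000, 1500, 2000)])
```

Spacing masking filters evenly in Bark rather than Hz puts the coding effort where the ear can actually hear the difference.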

In this way the speech signal winds its way through the codec, being transformed this way and that, as we carve off samples until we get something that we can send over the channel.

Next Steps

Need to sort out those remaining yellow blocks, and come up with a fully quantised codec candidate.

An idea that occurred to me while drawing the diagram – can we estimate the mask directly from the FFT samples? We may not need the intermediate estimation of the sinusoidal amplitudes any more.

It may also be possible to analyse/synthesise using filters modeling the masks running in the time domain. For example on the analysis side look at the energy at the output at a bunch of masking filters spaced closely enough that we can’t perceive the difference.

Writing stuff up on a blog is cool. It’s “the cardboard colleague” effect: the process of clearly articulating your work can lead to new ideas and bug fixes. It doesn’t matter who you articulate the problems to; just talking about them can lead to solutions.

Sridhar Dhanapalan: Twitter posts: 2015-09-14 to 2015-09-20

Mon, 2015-09-21 01:27

Ben Martin: Terry Motor Upgrade -- no stopping it!

Sun, 2015-09-20 15:48
I have now updated the code and PID control for the new RoboClaw and HD Planetary motor configuration. As part of the upgrade I had to move to using a lipo battery because these motors stall at 20 amps. While it is a bad idea to leave it stalled, it's a worse idea to have the battery have issues due to drawing too much current. It's always best to choose where the system will fail rather than letting the cards fall where they may. In this case, leaving it stalled will result in drive train damage in the motors, not a controller board failure or a lipo issue.

One of the more telling images is below which compares not only the size of the motors but also the size of the wires servicing the power to the motors. I used 14AWG wire with silicon coating for the new motors so that a 20A draw will not cause any issues in the wiring. Printing out new holders for the high precision quadrature encoders took a while. Each print was about 1 hour long and there was always a millimetre or two that could be changed in the design which then spurred another print job.

Below is the old controller board (the 5A roboclaw) with the new controller sitting on the bench in front of Terry (45A controller). I know I only really needed the 30A controller for this job, but when I decided to grab the items the 30A was sold out so I bumped up to the next model.

The RoboClaw is isolated from the channel by being attached via nylon bolts to a 3d printed cross over panel.

One of the downsides to the 45A model, which I imagine will fix itself in time, was that the manual didn't seem to be available. The commands are largely the same as for the other models in the series, but I had to work out the connections for the quad encoders and have currently powered them off the BEC because the screw terminal version of the RoboClaw doesn't have +/- terminals for the quads.

One little surprise was that these motors are quite magnetic without power. Nuts and the like want to move in and the motors will attract each other too. Granted it's not like they will attract themselves from any great distance, but it's interesting compared to the lower torque motors I've been using in the past.

I also had a go at wiring 4mm connectors to 10AWG cable. Almost got it right after a few attempts but the lugs are not 100% fixed into their HXT plastic chassis because of some solder or flux debris I accidentally left on the job. I guess some time soon I'll be wiring my 100A monster automotive switch inline in the 10AWG cable for solid battery isolation when Terry is idle. ServoCity has some nice bundles of 14AWG wire (which are the yellow and blue ones I used to the motors) and I got a bunch of other wire from HobbyKing.

Francois Marier: Hooking into docking and undocking events to run scripts

Sun, 2015-09-20 10:55

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.

This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock.

The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events

Create the following ACPI event scripts as suggested in this guide.

Firstly, /etc/acpi/events/thinkpad-dock:

event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

Secondly, /etc/acpi/events/thinkpad-undock:

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"

then restart udev:

sudo service udev restart

Finding the right events

To make sure the events are the right ones, check the output of:

sudo acpi_listen

and ensure that your script is actually running by adding:

logger "ACPI event: $*"

at the beginning of it and then looking in /var/log/syslog for lines like:

logger: external-monitor undock
logger: external-monitor dock

If that doesn't work for some reason, try using an ACPI event script like this:

event=ibm/hotkey
action=logger %e

to see which event you should hook into.

Using xrandr inside an ACPI event script

Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:

#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1

David Rowe: Phase from Magnitude Spectra

Fri, 2015-09-18 09:30

For my latest Codec 2 brainstorms I need to generate a phase spectrum from a magnitude spectrum. I’m using cepstral/minimum phase techniques. Despite plenty of theory and even code on the Internet it took me a while to get something working. So I thought I’d post a worked example here. I must admit the theory still makes my eyes glaze over. However a working demo is a great start to understanding the theory if you’re even nerdier than me.

Codec 2 just transmits the magnitude of the speech spectrum to the decoder. The phases are estimated at the encoder but take too many bits to encode, and aren’t that important for communications quality speech. So we toss them away and reconstruct them at the decoder using some sort of rule based approach. I’m messing about with a new way of modeling the speech spectrum so needed a new way to generate the phase spectra at the decoder.

Here is the mag_to_phase.m function, which is a slightly modified version of this Octave code that I found in my meanderings on the InterWebs. I think there is also a Matlab/Octave function called mps.m which does a similar job.

I decided to test it using a 10th order LPC synthesis filter. These filters are known to have a minimum-phase spectrum. So if the algorithm is working it will generate exactly the same phase spectrum.

So we start with 40ms of speech:

Then we find the phase spectra (bottom) given the magnitude spectrum (top):

On the bottom the green line is the measured phase spectrum of the filter, and the blue line is what the mag_to_phase.m function came up with. They are identical; I’ve just offset them by 0.5 rads on the plot. So it works, yayyyy – we can find a minimum phase spectrum from just the magnitude spectrum of a filter.

This is the impulse response, which the algorithm spits out as an intermediate product. One interpretation of minimum phase (so I’m told) is that the energy is all collected near the start of the pulse:

As the DFT is cyclical the bit on the right is actually concatenated with the bit on the left to make one continuous pulse centered on time = 0. All a bit “Dr Who” I know but this is DSP after all! With a bit of imagination you can see it looks like one period of the original input speech in the first plot above.
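For the curious, the cepstral recipe can be sketched in a few lines of Python (my own paraphrase of the standard homomorphic technique, not the mag_to_phase.m source), with a self-check against a filter whose minimum phase we know analytically:

```python
import numpy as np

def mag_to_phase(mag):
    """Minimum-phase spectrum from an N-point FFT magnitude spectrum,
    via the real cepstrum (homomorphic method)."""
    N = len(mag)
    c = np.fft.ifft(np.log(mag)).real  # real cepstrum (an even sequence)
    fold = np.zeros(N)                 # window that folds the anti-causal
    fold[0] = 1.0                      # half of the cepstrum onto the
    fold[1:N // 2] = 2.0               # causal half (minimum-phase window)
    fold[N // 2] = 1.0
    return np.fft.fft(fold * c).imag   # imag part of log spectrum = phase

# Self-check with a known minimum-phase filter H(z) = 1/(1 - 0.9 z^-1)
N = 512
w = 2 * np.pi * np.arange(N) / N
H = 1.0 / (1.0 - 0.9 * np.exp(-1j * w))
err = np.max(np.abs(np.angle(H) - mag_to_phase(np.abs(H))))
print(err)  # tiny: the recovered phase matches the filter's true phase
```

The folding step is the "energy collected near the start of the pulse" property in cepstral form: zeroing the second half of the cepstrum forces all the phase to be minimum phase.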

Michael Still: Exploring for a navex

Thu, 2015-09-17 09:28
I feel like I need more detailed maps of Mount Stranger than I currently have in order to lay out a possible navex. I therefore spent a little time this afternoon wandering down the fire trail to mark all the gates in the fence. I need to do a little more of this before it's ready for a navex.

Interactive map for this route.

Tags for this post: blog canberra bush walk

Related posts: Walking to work; First jog, and a walk to Los Altos


Ben Martin: 10 Foot Pound Boots for Terry

Wed, 2015-09-16 23:40
A sad day when your robot outgrows its baby motors. On carpet this happened when the robot started to tip the scales at over 10kg. So now I have some lovely new motors that can generate almost 10 foot pounds of torque.

This has caused me to move to a more rigid motor attachment and a subsequent modification and reprint of the rotary encoder holders (not shown above). The previous motors were spur motors, so I could rotate the motor itself within its mounting bracket to mate the large gear to the encoders. Not so anymore. Apart from looking super cool, the larger alloy gear gives me an 8 to 1 reduction to the encoders; nothing like the feeling of picking up 3 bits of extra precision.
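The "3 bits of extra precision" follows directly from the gear ratio (a quick check with my own illustrative encoder count; the 1024 counts/rev figure is made up):

```python
import math

# Gearing the encoder up 8:1 multiplies counts per wheel revolution by 8,
# which is log2(8) = 3 extra bits of angular resolution.
base_counts = 1024                 # hypothetical encoder counts per revolution
geared_counts = base_counts * 8    # with the 8:1 gear-up to the encoder
extra_bits = math.log2(geared_counts) - math.log2(base_counts)
print(extra_bits)  # 3.0
```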

This has also meant using some more sizable cables. The yellow and purple cables are 14 AWG silicon wires. For the uplink I have an almost store bought 12AWG and some hand made 10 AWG monsters. Each motor stalls at 20A so there is the potential for a noticeable amount of current to flow around the base of Terry now.

Pia Waugh: Returning to data and Gov 2.0 from the DTO

Wed, 2015-09-16 10:27

I have been working at the newly created Digital Transformation Office in the Federal Government since January this year helping to set it up, create a vision, get some good people in and build some stuff. I was working in and then running a small, highly skilled and awesome team focused on how to dramatically improve information (websites) and transaction services across government. This included a bunch of cool ideas around whole of government service analytics, building a discovery layer (read APIs) for all government data, content and services, working with agencies to improve content and SEO, working on reporting mechanisms for the DTO, and looking at ways to usefully reduce the huge number of websites currently run by the Federal public service amongst other things. You can see some of our team blog posts about this work.

It has been an awesome trip and we built some great stuff, but now I need to return to my work on data, gov 2.0 and supporting the Australian Government CTO John Sheridan in looking at whole of government technology, procurement and common platforms. I can also work more closely with Sharyn Clarkson and the Online Services Branch on the range of whole of government platforms and solutions they run today, particularly the highly popular GovCMS. It has been a difficult choice but basically it came down to where my skills and efforts are best placed at this point in time. Plus I miss working on open data!

I wanted to say a final public thank you to everyone I worked with at the DTO, past and present. It has been a genuine privilege to work in the diverse teams and leadership from across over 20 agencies in the one team! It gave me a lot of insight to the different cultures, capabilities and assumptions in different departments, and I think we all challenged each other and created a bigger and better vision for the effort. I have learned much and enjoyed the collaborative nature of the broader DTO team.

I believe the DTO has two major opportunities ahead: as a force of awesome and a catalyst for change. As a force of awesome, the DTO can show how delivery and service design can be done with modern tools and methods, can provide a safe sandpit for experimentation, can set the baseline for the whole APS through the digital service standard, and can support genuine culture change across the APS through training, guidance and provision of expertise/advisers in agencies. As a catalyst for change, the DTO can support the many, many people across the APS who want transformation, who want to do things better, and who can be further empowered, armed and supported to do just that through the work of the DTO. Building stronger relationships across the public services of Australia will be critical to this broader cultural change and evolution to modern technologies and methodologies.

I continue to support the efforts of the DTO and the broader digital transformation agenda and I wish Paul Shetler and the whole team good luck with an ambitious and inspiring vision for the future. If we could all make an approach that was data/evidence driven, user centric, mashable/modular, collaborative and cross government(s) the norm, we would overcome the natural silos of government, we would establish the truly collaborative public service we all crave and we would be better able to support the community. I have long believed that the path of technical integrity is the most important guiding principle of everything I do, and I will continue to contribute to the broader discussions about “digital transformation” in government.

Stay tuned for updates on the blog, and I look forward to spending the next 4 months kicking a few goals before I go on maternity leave.

James Purser: First PETW in over two years!

Tue, 2015-09-15 16:30

So tomorrow night I'm going to be conducting my first interview for Purser Explores The World in over two years, wootness!

And even more awesome it's going to be with New York based photographer Chris Arnade who has been documenting the stories of people battling addiction and poverty in the New York neighbourhood of the South Bronx via the Faces of Addiction project.

I'm excited, both because this is the first PETW episode in aaaages, and also because the stories that Chris tells through his photography, his Facebook page and other media humanise people who have long been swept under the rug by society.

Blog Categories: angrybeanie, Purser Explores The World

David Rowe: FreeDV Voice Keyer and Spotting Demo

Tue, 2015-09-15 14:30

I’ve added a Voice Keyer feature to the FreeDV GUI program. It will play a pre-recorded wave file, key your transmitter, then pause to listen. Use the Tools-PTT menu to select the wave file to use, the rx pause duration, and the number of times to repeat. If you hit space bar the keyer exits. It also stops if it detects a valid FreeDV sync for 5 seconds, to avoid congesting the band if others are using it.

I’m going to leave the voice keyer running while I’m working at my bench, to stimulate local FreeDV activity.

Spotting Demo

FreeDV has a low bit rate text message stream that allows you to send information such as your call-sign and location. Last year I added some code to parse the received text messages, and generate a system command if certain patterns are received. In the last few hours I worked up a simple FreeDV “spotting” system using this feature and a shell script.

Take a look at this screen shot of Tool-Options:

I’m sending myself the text message “s=vk5dgr hi there from David” as an example. Every time FreeDV receives a text message it issues a “rx_txtmsg” event. This is then parsed by the regular expressions on the left “rx_txtmsg s=(.*)”. If there is a match, the system command on the right is executed.

In this case any events with “rx_txtmsg s=something” will result in the call to the shell script “ something”, passing the text to the shell script. Here is what the script looks like:



echo `date -u` "  " $1 "<br>" >> $SPOTFILE
tail -n 25 $SPOTFILE > /tmp/spot.tmp1
mv /tmp/spot.tmp1 $SPOTFILE
lftp -e "cd www;put $SPOTFILE;quit" $FTPSERVER

So this script adds a time stamp, trims the spot file to the last 25 lines, then ftps it to my webserver. You can see the web page here. It’s pretty crude, but you get the idea. It needs proper HTML formatting, a title, and a way to prevent the same person’s spot being repeated all the time.

You can add other regular expressions and system commands if you like. For example you could make a bell ring if someone puts your callsign in a text message, or put a pin on a map at their grid coordinates. Or send a message to the FreeDV QSO finder to say you are “on line” and listening. If a few of us set up spotters around the world it will be a useful testing tool, like websdr for FreeDV.
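The regexp-to-command dispatch is simple enough to sketch (my own mock-up in Python, not the FreeDV source; the rule and the echo command are made up):

```python
import re
import subprocess

# Table of (event pattern, shell command template) pairs, like the
# Tools-Options table: the capture group is substituted into the command.
rules = [
    (re.compile(r"rx_txtmsg s=(.*)"), "echo spotted {0}"),  # hypothetical rule
]

def handle_event(event):
    """Run the first matching rule's command with the captured text."""
    for pattern, command in rules:
        m = pattern.match(event)
        if m:
            subprocess.run(command.format(m.group(1)), shell=True, check=True)
            return True
    return False

print(handle_event("rx_txtmsg s=vk5dgr hi there from David"))  # True
print(handle_event("ptt pressed"))                             # False
```

A real spotter should shell-quote the captured text (e.g. with shlex.quote) before handing it to the shell, since it arrives over the air.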

To help debug you can mess with the regular expressions and system commands in real time, just click on Apply.

I like to use full duplex (Tools-PTT Half duplex unchecked) and “modprobe snd-aloop” to loopback the modem audio when testing. Talking to myself is much easier than two laptops.

If we get any really useful regexp/system commands we can bake them into FreeDV. I realise not everyone is up to regexp coding!

I’ll leave this running for a bit on 14.236 MHz, FreeDV 700B. See if you can hit it!

David Rowe: FreeDV QSO Party Weekend

Tue, 2015-09-15 13:30

A great weekend with the AREG team working FreeDV around VK; and despite some poor band conditions, the world.

We were located at Younghusband, on the banks of the river Murray, a 90 minute drive due East of Adelaide:

We had two K3 radios, one with a SM1000 on 20M, and one using laptop based FreeDV on 40M:

Here is the enormous 40M beam we had available, with my young son at the base:

It was great to see FreeDV 700B performing well under quite adverse conditions. Over time I became more and more accustomed to the sound of 700B, and could understand it comfortably from a laptop loudspeaker across the room.

When we had good quality FreeDV 1600 signals, it really sounded great, especially the lack of SSB channel noise. As we spoke to people, we noticed a lot of other FreeDV traffic popping up around our frequency.

We did have some problems with S7 power line hash from a nearby HT line. The ambient HF RF noise issue is a problem for HF radio everywhere these days. I have some ideas for DSP based active noise cancellation using 2 or more receivers that I might try in 2016. Mark, VK5QI, had a novel solution. He connected FreeDV on his laptop to Andy’s (VK5AKH) websdr in Adelaide. With the lower noise level we successfully completed a QSO with Gerhard, OE3GBB, in Austria. Here are Andy (left) and Mark working Gerhard:

I was in the background most of the time, working on FreeDV on my laptop! Thank you very much AREG and especially Chris, VK5CP, for hosting the event!

Francois Marier: Setting up a network scanner using SANE

Mon, 2015-09-14 16:44

Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).

Install SANE

The packages you need on both the client and the server are:

You should check whether or not your scanner is supported by the latest stable release or by the latest development version.

In my case, I needed to get a Canon LiDE 220 working so I had to grab the libsane 1.0.25+git20150528-1 package from Debian experimental.

Test the scanner locally

Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:

scanimage -L

This should give you output similar to this:

device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner

If that doesn't work, make sure that the scanner is actually detected by the USB stack:

$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.

and that its USB ID shows up in the SANE backend it needs:

$ grep 190f /etc/sane.d/genesys.conf
usb 0x04a9 0x190f

To do a test scan, simply run:

scanimage > test.ppm

and then take a look at the (greyscale) image it produced (test.ppm).

Configure the server

With the scanner working locally, it's time to expose it to network clients by adding the client IP addresses to /etc/sane.d/saned.conf:

## Access list

and then opening the appropriate port on your firewall (typically /etc/network/iptables in Debian):

-A INPUT -s -p tcp --dport 6566 -j ACCEPT

Then you need to ensure that the SANE server is running by setting the following in /etc/default/saned:


if you're using the sysv init system, or by running this command:

systemctl enable saned.socket

if using systemd.

I actually had to reboot to make saned visible to systemd, so if you still run into these errors:

$ service saned start
Failed to start saned.service: Unit saned.service is masked.

you're probably just one reboot away from getting it to work.

Configure the client

On the client, all you need to do is add the following to /etc/sane.d/net.conf:

connect_timeout = 60
myserver

where myserver is the hostname or IP address of the server running saned.

Test the scanner remotely

With everything in place, you should be able to see the scanner from the client computer:

$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner

and successfully perform a test scan using this command:

scanimage > test.ppm

Michael Still: On running

Mon, 2015-09-14 08:28
I've been running for a little while now, but I don't mention it here much. I think I mostly don't mention it because I normally just post photos here, and I don't tend to stop and take happy snaps on my runs. The runs started off pretty modest -- initially I struggled with shin splints after more than a couple of minutes. I've worked through that and a couple of injuries along the way and am consistently doing 5km runs now.

That said, my longest runs have been in the last week when I did a 7.5km and an 8.1km. I'm building up to 10km, mostly because it's a nice round number. I think ultimately trail running might be the thing for me; I get quite bored running around suburbs over and over again.

It's interesting that I come from an aggressively unsporting family, and yet all of my middle aged siblings and I have started running in the last year or two. It's a mid-life crisis thing perhaps?

Interactive map for this route.

Tags for this post: blog running fitness sport

Related posts: First jog, and a walk to Los Altos; Martin retires from his work netball league


Linux Users of Victoria (LUV) Announce: Software Freedom Day Meeting 2015

Sun, 2015-09-13 00:29
Start: Sep 19 2015 11:00 End: Sep 19 2015 16:00 Location:

Electron Workshop 31 Arden Street, North Melbourne.


There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!

This month, Free Software Melbourne[1], Linux Users of Victoria[2] and Electron Workshop[3] are joining forces to bring you the local Software Freedom Day event for Melbourne.

The event will take place on Saturday 19th September between 11am and 4pm at:

Electron Workshop

31 Arden Street, North Melbourne.


Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Linux Users of Victoria (LUV) Announce: Adjourned 2015 LUV Annual General Meeting

Sun, 2015-09-13 00:29
Start: Sep 19 2015 15:30 End: Sep 19 2015 16:15 Location:

Boardroom, Electron Workshop, 31 Arden Street, North Melbourne


Confirmation of adjourned LUV 2015 AGM

This notice is to confirm that Linux Users of Victoria Inc. will be holding the adjournment of its Annual General Meeting, on Saturday 19th September 2015. The meeting will be held in the Boardroom of Electron Workshop at 3.30pm.

Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to thank Electron Workshop for making their boardroom available for this meeting, also Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Michael Still: CBC Navigation Course

Sat, 2015-09-12 19:28
So today was the day long map and compass class with the Canberra Bushwalking Club. I liked this walk a lot. It was a good length at about 15km, and included a few things I'd wanted to do for a while, like wander around McQuoids Hill and the northern crest of Urambi Hills. Some nice terrain, and Red Rocks Gorge clearly requires further exploration.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150912 photo canberra bushwalk


Clinton Roy: clintonroy

Sat, 2015-09-12 15:28
  • Keynote, Brenda Wallace, State of Copyright. Lots of interesting information, NZ centric though (naturally) video
  • Keynote, Allison Kaptur, Ways we can be more effective learners. Focusing on the idea of a mindset, the context of our learning and different approaches; a highlight of the conference for me video
  • Keynote, Katie Bell, Python as a Teaching Language video
  • Brian Thorne, Blind Analytics, algorithms on encrypted data. I’ll probably have to watch this fifty times to actually understand. Video
  • Katie McLaughlin, Build a Better Hat Rack. Being nice in open source, not assuming that people know your work is appreciated. not all work is
  • Chris Neugebauer, Python’s type hints, in comparison to JavaScript’s. A very promising talk about the future of type hinting in Python. video
  • Martin Henschke, Eloise “Ducky” Macdonald-Meyer, Coding workshops for school kids in Tasmania. Similar outcomes to the keynote. video
  • Tim Mitchell, Database Migrations using alembic, programmatically upgrading database schemas; highlights for me were the use cases. The example used was a good one as it’s multi staged (it’s a bad example for other reasons :) video
  • Jeremy Stott, Practical Web Security, takes a hobby project and secures it. Lots of little tips. video
  • Thomi Richards, Connascence in Python, a language for talking about the different types of coupling video
  • Cory Benfield, You Don’t care About Efficiency. All about sync code wasting cpu cycles while IO is happening, don’t do that. Doesn’t cover threading though. video
  • Lee Symes, Why Python is awesome, always good to see how languages are learning off each other video
  • Various (including myself), Lightning talks video
  • Ben Shaw, Micro-Services: Is HTTP the only way? video
  • Chris LeBlanc, Cython, I’ve done a lot of Cython, but there’s a a lot of features and it’s a fast moving target video
  • Steve Baker, The Pythonista’s 3D printing toolchain video
  • Tom Eastman, security. Tom gave a lightning talk about serial protocols everywhere; based on that, this should be good video
  • Fraser Tweedale, Integrating Python apps with Centralised Identity systems. I believe that this talk is mostly focused on configuring your web server to do authnz, rather than coding it incorrectly. video
  • Rand Huso, MPI and IoC video
  • Gagan Sharma, Simon Salinas, Custom Python Applications in Neuroscience video
  • Fei Long Wang, Zaqar, struggled to find a reason this exists, it might just need to exist to be an open replacement for SQS. video
  • WxPython Tuning app for FreeEMS, a Python app taking serial data from a car control system. They jumped to using threads and mutexes and things and didn’t seem to try to use an async read from the serial port, when they were already using a GUI mainloop. I asked why not, but they didn’t seem to understand my question. There doesn’t appear to be a video, may be because of a poorly named command line tool that sounds like a swear word.

Filed under: Uncategorized

Russell Coker: Running a Shell in a Daemon Domain

Fri, 2015-09-11 19:26

allow unconfined_t logrotate_t:process transition;
allow logrotate_t { shell_exec_t bin_t }:file entrypoint;
allow logrotate_t unconfined_t:fd use;
allow logrotate_t unconfined_t:process sigchld;

I recently had a problem with SE Linux policy related to logrotate. To test it out I decided to run a shell in the domain logrotate_t to interactively perform some of the operations that logrotate performs when run from cron. I used the above policy to allow unconfined_t (the default domain for a sysadmin shell) to enter the daemon domain.

Then I used the command “runcon -r system_r -t logrotate_t bash” to run a shell in the domain logrotate_t. The utility runcon will attempt to run a program in any SE Linux context you specify, but to succeed the system has to be in permissive mode or you need policy to permit it. I could have written policy to allow the logrotate_t domain to be in the role unconfined_r but it was easier to just use runcon to change roles.

Then I had a shell in the logrotate_t domain to test out the post-rotate scripts. It turned out that I didn’t really need to do this (I had misread the output of an earlier sesearch command). But this technique can be used for debugging other SE Linux related problems so it seemed worth blogging about.

Related posts:

  1. Xen CPU use per Domain The command “xm list” displays the number of seconds of...
  2. SE Linux Lenny Status Update I previously described four levels of SE Linux support on...
  3. UBAC and SE Linux in Debian A recent development in SE Linux policy is the concept...

OpenSTEM: 2015 Science in Society Journalism Awards winners announced

Fri, 2015-09-11 12:30

Brendan Scott: brendanscott

Fri, 2015-09-11 12:30

You may or may not know that I’ve been moonlighting with Python for a number of years now. This year I have written a new book, Python for Kids for Dummies, and as of this week it’s available from Amazon:

When I was engaged to do the book it was only going to be released in the US. I’ve recently learnt that it will be available in Australia (and elsewhere I guess) as well.

It is set to be available in Australia in a week or two (as at 11 Sept) from places like:



Angus and Robertson

Robinsons (Melbourne)

QBD (Hello Banana Benders!)

and Dymocks (or so I’m told, but it’s not on their website; I’ve always found Dymocks’ website difficult) – update: saw a copy in Dymocks today (20 Sept 2015)