news aggregator

Tridge on UAVs: APM:Plane 3.7.1 released and 3.8.0 beta nearly ready for testing

Planet Linux Australia - Sun, 2016-10-23 21:03


Development is really hotting up for fixed wing and quadplanes! I've just released 3.7.1 and I plan on releasing the first beta of the major 3.8.0 release in the next week.
The 3.7.1 release fixes some significant bugs reported by users since the 3.7.0 release. Many thanks for all the great feedback on the release from users!
The 3.8.0 release includes a lot more new stuff, including a new servo mapping backend and a new persistent auto-trim system that makes getting the servo trim just right a breeze.
Happy flying!

Chris Smart: Building and Booting Upstream Linux and U-Boot for Orange Pi One ARM Board (with Ethernet)

Planet Linux Australia - Sun, 2016-10-23 17:03

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria.

The most important factor for me is that the device must be supported in upstream Linux (preferable stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Orange Pi One is a small, cheap ARM board based on the AllWinner H3 (sun8iw7p1) SOC with a quad-Core Cortex-A7 ARM CPU and 512MB RAM. It has no wifi, but does have an onboard 10/100 Ethernet provided by the SOC (Linux patches incoming). It has no NAND (not supported upstream yet anyway), but does support SD. There is lots of information available at

Orange Pi One

Note that while Fedora 25 does not yet support this board specifically, it does support both the Orange Pi PC (which is effectively a more full-featured version of this device) and the Orange Pi Lite (which is the same but swaps Ethernet for WiFi). Using either of those configurations should at least boot on the Orange Pi One.

Connecting UART

The UART on the Pi One uses the GND, TX and RX connections which are next to the Ethernet jack. Plug the corresponding cables from a 3.3V UART cable onto these pins and then into a USB port on your machine.

UART Pin Connections (RX yellow, TX orange, GND black)

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200
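As a quick sanity check (a sketch, not part of the original instructions), you can confirm whether you are already in the dialout group before changing anything; newgrp (or logging out and back in) is only needed after an actual change.

```shell
# Print whether the current user already has dialout membership.
if id -nG | grep -qw dialout; then
    echo "already in dialout"
else
    echo "need: sudo gpasswd -a ${USER} dialout"
fi
```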

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

While U-Boot itself is written to the raw card and doesn’t need a partition table, one will be required later so that U-Boot can read the boot files.

U-Boot needs the card to have an msdos partition table with a small boot partition (ext now supported) that starts at 1MB. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos mkpart \
primary ext3 1M 10M \
mkpart primary ext4 10M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.ext3 /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Leave your SD card plugged in, we will need to write the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01) but you could leave this off to get master, or change it to a different tag if you want.
cd ${HOME}
git clone --depth 1 -b v2016.09.01 git://
cd u-boot

There is a defconfig already for this board, so simply make this and build the bootloader binary.
CROSS_COMPILE=arm-linux-gnu- make orangepi_one_defconfig
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Write the bootloader to the SD card (replace /dev/sdx, like before).
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/sdx bs=1024 seek=8
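The seek=8 offset is not arbitrary: the Allwinner boot ROM loads the SPL from 8 KiB into the card. A small arithmetic sketch (my numbers, not from the post) shows why this coexists with the first partition starting at 1MB:

```shell
# bs=1024 seek=8 places the image at byte offset 8192, where the H3's
# boot ROM expects it; the first partition starts at 1 MiB, so the
# bootloader image must fit in the gap between the two.
offset=$(( 8 * 1024 ))           # where dd writes the SPL image
part_start=$(( 1024 * 1024 ))    # first partition starts at 1 MiB
echo "bootloader may be up to $(( part_start - offset )) bytes"
# prints: bootloader may be up to 1040384 bytes
```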

Testing our bootloader

Now we can remove the SD card and plug it into the powered off Orange Pi One to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Orange Pi One. Note that the device will try to netboot by default, so you’ll need to hit the enter key when you see a line that says the following.

(Or you can just repeatedly hit enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

U-Boot version

Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Orange Pi One.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone
cd custom-initramfs
./ --arch arm --dir "${PWD}" --tty ttyS0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though, we need to convert this to the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.
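If you are ever unsure whether a given file is still gzipped or already the raw cpio that mkimage wants, the magic bytes tell you: gzip data begins with 1f 8b, while a newc cpio archive begins with the ASCII string 070701. A throwaway sketch (file names are illustrative):

```shell
# Make a tiny gzip file and show its two magic bytes with od.
printf 'x' | gzip -c > /tmp/demo.gz
od -An -tx1 -N2 /tmp/demo.gz | tr -d ' '   # prints 1f8b
```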

Upstream Linux Kernel

The Ethernet driver has been submitted to the arm-linux mailing list (it’s up to its 4th iteration) and will hopefully land in 4.10 (it’s too late for 4.9 with RC1 already out).

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git:// linux

Now go into the linux directory.
cd linux

Patching in EMAC support for SOC

If you don’t need the onboard Ethernet, you can skip this step.

We can get the patches from the Linux kernel’s Patchwork instance, just make sure you’re in the directory for your Linux Git repository.

Note that these will probably only apply cleanly on top of mainline v4.9 Linux tree, not stable v4.8.

# [v4,01/10] ethernet: add sun8i-emac driver
wget \
-O sun8i-emac-patch-1.patch
# [v4,04/10] ARM: dts: sun8i-h3: Add dt node for the syscon
wget \
-O sun8i-emac-patch-4.patch
# [v4,05/10] ARM: dts: sun8i-h3: add sun8i-emac ethernet driver
wget \
-O sun8i-emac-patch-5.patch
# [v4,07/10] ARM: dts: sun8i: Enable sun8i-emac on the Orange PI One
wget \
-O sun8i-emac-patch-7.patch
# [v4,09/10] ARM: sunxi: Enable sun8i-emac driver on sunxi_defconfig
wget \
-O sun8i-emac-patch-9.patch

We will apply these patches (you could also use git apply, or grab the mbox if you want and use git am).

for patch in 1 4 5 7 9 ; do
    patch -p1 < sun8i-emac-patch-${patch}.patch
done

Hopefully that will apply cleanly.

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the sunxi boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make sunxi_defconfig

If you want, you could modify the kernel config here, for example remove support for other AllWinner SOCs.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Mount the boot partition and copy on the kernel and device tree file.
sudo cp arch/arm/boot/zImage /mnt/
sudo cp arch/arm/boot/dts/sun8i-h3-orangepi-one.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << \EOF
ext2load mmc 0 ${kernel_addr_r} zImage
ext2load mmc 0 ${fdt_addr_r} sun8i-h3-orangepi-one.dtb
ext2load mmc 0 ${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr
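If you later decide to boot from the second partition instead, only the root= argument needs to change. A throwaway sketch of that edit with sed (the file path here is a scratch copy, not the real boot.cmd):

```shell
# Write a one-line stand-in for the bootargs line, then flip root=
# from the initramfs stub to the second SD card partition.
cat > /tmp/boot.cmd << 'EOF'
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root rootwait panic=10
EOF
sed -i 's|root=/dev/root|root=/dev/mmcblk0p2|' /tmp/boot.cmd
grep -o 'root=[^ ]*' /tmp/boot.cmd   # prints root=/dev/mmcblk0p2
```

After changing the real boot.cmd you would re-run the mkimage step to regenerate boot.scr, since U-Boot only reads the compiled script.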

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Orange Pi One and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt

That’s it! You’ve built your own Linux system for the Orange Pi One!

If you want networking, at the root prompt give the Ethernet device an IP address on your network and test with a ping.

Here’s an example:

Networking on Orange Pi One

There is clearly lots more you can do with this device.

Memory usage


Chris Smart: My Custom Open Source Home Automation Project – Part 3, Roll Out

Planet Linux Australia - Sat, 2016-10-22 19:03

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. Part 2 discussed the prototype and other experiments.

Because we were having to fit in with the builder, I didn’t have enough time to finalise the smart system, so I needed a dumb mode. This Part 3 is about rolling out dumb mode in Smart house!

Operation “dumb mode, Smart house” begins

We had a proven prototype, now we had to translate that into a house-wide implementation.

First we designed and mapped out the cabling.

Cabling Plans


  • Cat5e (sometimes multiple runs) for room Arduinos
  • Cat5e to windows for future curtain motors
  • Reed switch cables to light switch
  • Regular Cat6 data cabling too, of course!
  • Whatever else we thought we might need down the track

Time was tight (fitting in with the builder) but we got there (mostly).

  • Ran almost 2 km of cable in total
  • This was a LOT of work and took a LOT of time


Cabled Wall Plates

Cabled Wireless Access Point

Cable Run

Electrical cable

This is the electrician’s job.

  • Electrician ran each bank of lights on their own circuit
  • Multiple additional electrical circuits
    • HA on its own electrical circuit, UPS backed
    • Study/computers on own circuit, UPS backed
    • Various others like dryer, ironing board, entertainment unit, alarm, ceiling fans, ovens, etc
  • Can leave the house and turn everything off
    (except essentials)

The relays had to be reliable, but also available off-the-shelf as I didn’t want to get something that’s custom or hard to replace. Again, for devices that draw too much current for the relay, it will throw a contactor instead so that the device can still be controlled.

  • Went with Finder 39 series relays, specifically
  • Very thin profile
  • Built in fuses
  • Common bus bars
  • Single Pole Double Throw (SPDT)

Finder Relays with Din Mount Module

These are triggered by 24V DC which switches the 240V AC for the circuit. The light switches are running 24V and when you press the button it completes the circuit, providing 24V input to the relay (which turns on the 240V and therefore the light).

There are newer relays now which have an input range (e.g. 0-24V), I would probably use those instead if I was doing it again today so that it can be more easily fired from an array of outputs (not just a 24V relay driver shield).

The fact that they are SPDT means that I can set the relay to be normally open (NO) in which case the relay is off by default, or normally closed (NC) in which case the relay is on by default. This means that if the smart system goes down and can’t provide voltage to the input side of the relay, certain banks of lights (such as the hallway, stairs, kitchen, bathroom and garage lights) will turn on (so that the house is safe while I fix it).
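That fail-safe behaviour boils down to a tiny truth table. A sketch of the logic (hypothetical helper function, just modelling the wiring choices described above):

```shell
# With the input side dead (smart system down, 0 V), an NC relay leaves
# its circuit powered while an NO relay leaves it off.
relay_output() {   # $1 = wiring (NO or NC), $2 = 24V input present (0 or 1)
    case "$1$2" in
        NO1|NC0) echo on ;;
        *)       echo off ;;
    esac
}
relay_output NC 0   # prints on  (hallway/stairs bank stays lit if the system dies)
relay_output NO 0   # prints off (ordinary bank stays dark)
```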

Bank of Relays

In the photo above you can see the Cat5e 24V lines from the light switch circuits coming into the grey terminal block at the bottom. They are then cabled to the input side of the relays. This means that we don’t touch any AC and I can easily change what’s providing the input to the relay (to a relay board on an Arduino, for example).

Rack (excuse the messy data cabling)

There are two racks, one upstairs and one downstairs, that provide the infrastructure for the HA and data networks.

PSU running at 24V

Each rack has a power supply unit (PSU) running at 24V which provides the power for the light switches in dumb mode. These are running in parallel to provide redundancy for the dumb network in case one dies.

You can see that there is very little voltage drop.

Relay Timers

The Finder relays also support timer modules, which is very handy for something that you want to automatically turn off after a certain (configurable) amount of time.

  • Heated towel racks are bell press switches
  • Uses a timer relay to auto turn off
  • Modes and delay configurable via dip switches on relay
UPS backed GPO Circuits

UPS in-lined GPO Circuits

Two GPO circuits are backed by UPS which I can simply plug in-line and feed back to the circuit. These are the HA network as well as the computer network. If the power is cut to the house, my HA will still have power (for a while) and the Internet connection will remain up.

Clearly I’ll have no lights if the power is cut, but I could power some emergency LED lighting strips from the UPS lines – that hasn’t been done yet though.


The switches for dumb mode are also regular, off-the-shelf light switches.

  • Playing with light switches (yay DC!)
  • Cabling up the switches using standard Clipsal gear
  • Single Cat5e cable can power up to 4 switches
  • Support one, two, three and four way switches
  • Bedroom switches use switch with LED (can see where the switch is at night)

Bathroom Light Switch (Dumb Mode)

We have up to 4-way switches so you can turn a single light on or off from 4 locations. The entrance light is wired up this way, you can control it from:

  • Front door
  • Hallway near internal garage door
  • Bottom of the stairs
  • Top of the stairs

A single Cat5e cable can run up to 4 switches.

Cat5e Cabling for Light Switch

  • Blue and orange +ve
  • White-blue and white-orange -ve
  • Green switch 1
  • White-green switch 2
  • Brown switch 3
  • White-brown switch 4

Note that we’re using two wires together for +ve and -ve which helps increase capacity and reduce voltage drop (followed the Clipsal standard wiring of blue and orange pairs).
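To see why doubling the conductors helps, here is a back-of-envelope voltage drop calculation. All the numbers are my assumptions, not from the post: a 20 m run, roughly 0.188 ohm/m for a 24 AWG conductor, and a 50 mA load.

```shell
awk 'BEGIN {
    len  = 20           # metres, one-way run length (assumed)
    r    = 0.188 / 2    # ohm/m: two 24 AWG conductors in parallel
    amps = 0.05         # current drawn by the switch circuit (assumed)
    drop = 2 * len * r * amps    # out and back
    printf "%.3f V\n", drop      # prints 0.188 V
}'
```

Less than a fifth of a volt out of 24 V, which matches the "very little voltage drop" observed on the meter.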

Later in Smart Mode, this Cat5e cable will be re-purposed as Ethernet for an Arduino or embedded Linux device.

Hallway Passive Infrared (PIR) Motion Sensors

I wanted the lights in the hallway to automatically turn on at night if someone was walking to the bathroom or what not.

  • Upstairs hallway uses two 24V PIRs in parallel
  • Either one turns on lights
  • Connected to the dumb mode network, so they fire relays just like everything else
  • Can be overridden by switch on the wall
  • Adjustable for sensitivity, light level and length of time

Hallway PIRs

Tunable settings for PIR

Tweaking the PIR means I have it only turning on the lights at night.

Dumb mode results

We have been using dumb mode for over a year now, it has never skipped a beat.

Now I just need to find the time to start working on the Smart Mode…

Chris Smart: My Custom Open Source Home Automation Project – Part 2, Design and Prototype

Planet Linux Australia - Sat, 2016-10-22 19:03

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. In this Part 2 I discuss the design and the prototype that proved the design.

Wired Design

Although there are options like 1-Wire, I decided that I wanted more flexibility at the light switches.

  • Inspired by Jon Oxer’s awesome
  • Individual circuits for lights and some General Purpose Outlets (GPO)
  • Bank of relays control the circuits
  • Arduinos and/or embedded Linux devices control the relays

How would it work?

  • One Arduino or embedded Linux device per room
  • Run C-Bus Cat5e cable to light switches to power Arduino, provide access to HA network
  • Room Arduino takes buttons (lights, fans, etc) and sensors (temp, humidity, reed switch, PIR, etc) as inputs
  • Room Arduino sends network message to relay Arduino
  • Arduino with relay shield fires relay to enable/disable power to device (such as a light, fan, etc)

Basic design

Of course this doesn’t just control lights, but also towel racks, fans, etc. Running C-Bus cable means that I can more easily switch to their proprietary system if I fail (or perhaps sell the house).

A relay is fine for devices such as LED downlights which don’t draw much current, but for larger devices (such as ovens and airconditioning) I will use the relay to throw a contactor.

Network Messaging

For the network messaging I chose MQTT (as many others have).

  • Lightweight
  • Supports encryption
  • Uses publish/subscribe model with a broker
  • Very easy to set up and well supported by Arduino and Linux

The way it works would be for the relay driver to subscribe to topics from devices around the house. Those devices publish messages to the topic, such as when buttons are pressed or sensors are triggered. The relay driver parses the messages and reacts accordingly (turning a device on or off).
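A minimal sketch of that flow, with the MQTT subscription replaced by two echoed sample messages so it runs standalone. Topic names and payloads are illustrative, not taken from the actual system; a real relay driver would read from something like mosquitto_sub instead of the printf.

```shell
# Feed two sample "topic payload" messages through the same parsing loop
# a relay driver would run against a live broker subscription.
printf '%s\n' 'house/study/light ON' 'house/study/light OFF' |
while read -r topic payload; do
    case "$payload" in
        ON)  echo "relay ${topic}: close" ;;
        OFF) echo "relay ${topic}: open" ;;
    esac
done
```

Because the driver only ever sees topic/payload pairs, swapping the printf for a real broker subscription does not change the handling logic.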

Cabling Benefits
  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler

Chris Smart: My Custom Open Source Home Automation Project – Part 1, Motivation and Research

Planet Linux Australia - Sat, 2016-10-22 19:03

In January 2016 I gave a presentation at the Canberra Linux Users Group about my journey developing my own Open Source home automation system. This is an adaptation of that presentation for my blog. Big thanks to my brother, Tim, for all his help with this project!

Comments and feedback welcome.

Why home automation?
  • It’s cool
  • Good way to learn something new
  • Leverage modern technology to make things easier in the home

At the same time, it’s kinda scary. There is a lack of standards and lack of decent security applied to most Internet of Things (IoT) solutions.

Motivation and opportunity
  • Building a new house
  • Chance to do things more easily at frame stage while there are no walls

Frame stage

Some things that I want to do with HA
  • Respond to the environment and people in the home
  • Alert me when there’s a problem (fridge left open, oven left on!)
  • Gather information about the home, e.g.
    • Temperature, humidity, CO2, light level
    • Open doors and windows and whether the house is locked
    • Electricity usage
  • Manage lighting automatically, switches, PIR, mood, sunset, etc
  • Control power circuits
  • Manage access to the house via pin pad, proxy card, voice activation, retina scans
  • Control gadgets, door bell/intercom, hot water, AC heating/cooling, exhaust fans, blinds and curtains, garage door
  • Automate security system
  • Integrate media around the house (movie starts, dim the lights!)
  • Water my garden, and more..
My requirements for HA
  • Open
  • Secure
  • Extensible
  • Prefer DC only, not AC
  • High Wife Acceptance Factor (important!)

There’s no existing open source IoT framework that I could simply install, sit back and enjoy. Where’s the fun in that, anyway?

Research time!

Three main options:

  • Wireless
  • Wired
  • Combination of both
Wireless Solutions
  • Dominated by proprietary Z-Wave (although has since become more open)
  • Although open standards based also exist, like ZigBee and 6LoWPAN

Z-Wave modules

Wireless Pros

  • Lots of different gadgets available
  • Gadgets are pretty cheap and easy to find
  • Easy to get up and running
  • Widely supported by all kinds of systems

Wireless Cons

  • Wireless gadgets are pretty cheap and nasty
  • Most are not open
  • Often not updateable, potentially insecure
  • Connect to AC
  • Replacing or installing a unit requires an electrician
  • Often talk to the “cloud”

So yeah, I could whack those up around my house, install a bridge and move on with my life, but…

  • Not as much fun!
  • Don’t want to rely on wireless
  • Don’t want to rely on an electrician
  • Don’t really want to touch AC
  • Cheap gadgets that are never updated
  • Security vulnerabilities makes it high risk

Wired Solutions

  • Proprietary systems like Clipsal’s C-Bus
  • Open standards based systems like KNX
  • Custom hardware
  • Expensive

Russell Coker: Another Broken Nexus 5

Planet Linux Australia - Sat, 2016-10-22 17:02

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was ok, the call-back was good but unfortunately there was some confusion which delayed replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones, the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.
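The argument can be put as a rough expected-value calculation. The failure probability and the residual value of a replacement phone are my assumptions, chosen to match the 1/7 figure above:

```shell
awk 'BEGIN {
    price = 700; warranty = 108        # figures from the post
    p = 1.0 / 7                        # assumed failure probability, years 2-3
    replacement_value = 0.5 * price    # an outdated replacement is worth less (assumed)
    expected_payout = p * replacement_value
    printf "warranty cost %d, expected payout %.0f\n", warranty, expected_payout
    # prints: warranty cost 108, expected payout 50
}'
```

On those assumptions the warranty costs roughly twice its expected payout, which is the post's conclusion in favour of self-insuring.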

Related posts:

  1. The Nexus 5 The Nexus 5 is the latest Android phone to be...
  2. Nexus 4 Ringke Fusion Case I’ve been using Android phones for 2.5 years and...
  3. Nexus 4 My wife has had a LG Nexus 4 for about...

Stewart Smith: Workaround for opal-prd using 100% CPU

Planet Linux Australia - Thu, 2016-10-20 19:00

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that on OpenPower systems is responsible for some of the runtime diagnostics. Although a userspace process, it memory maps (as in mmap) in some code loaded by early firmware (Hostboot) called the HostBoot RunTime (HBRT) and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in user space gives us benefits such as being able to attach gdb, recover from segfaults etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. There’s a fix for this.

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. Primarily, pflash is meant only for developers and those who know what they’re doing. You can turn your computer into a brick using it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone
cd skiboot/external/pflash
make

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel
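As a sanity check on those numbers (my arithmetic, not from the post): with bs=1 the dd count is a byte count, and 147456 bytes works out to a round number of KiB, matching the size of the HBEL partition being zero-filled:

```shell
echo "$(( 147456 / 1024 )) KiB"   # prints 144 KiB
```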

Note: you cannot just erase the partition or use the pflash option to do an ECC erase, you may render your system unbootable if you get it wrong.

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.

Binh Nguyen: Common Russian Media Themes, Has Western Liberal Capitalist Democracy Failed?, and More

Planet Linux Australia - Tue, 2016-10-18 03:22
After watching international media for a while (particularly those who aren't part of the standard 'Western Alliance') you'll realise that there are common themes: they are clearly against the current international order, believe that things will be better if changed, and want the rules changed (especially as they seem to have favoured some countries who went through the World Wars relatively

Tridge on UAVs: CanberraUAV Outback Challenge 2016 Debrief

Planet Linux Australia - Mon, 2016-10-17 18:17

I have finally written up an article on our successful Outback Challenge 2016 entry

The members of CanberraUAV are home from the Outback Challenge and life is starting to return to normal after an extremely hectic (and fun!) time preparing our aircraft for this year’s challenge. It is time to write up our usual debrief article to give those of you who weren't able to be there some idea of what happened.

For reference here are the articles from the 2012 and 2014 challenges:

Medical Express

The Outback Challenge is held every two years in Queensland, Australia. As the challenge was completed by multiple teams in 2014 the organisers needed to come up with a new challenge. The new challenge for 2016 was called "Medical Express" and the challenge was to retrieve a blood sample from Joe at a remote landing site.

Outback Joe

The back-story is that poor Outback Joe is trapped behind flood waters on his remote property in Queensland. Unfortunately he is ill, and doctors at a nearby hospital need a blood sample to diagnose his illness. A UAV is called in to fly a 23km path to a place where Joe is waiting. We only know Joe's approximate position (within 100 meters), so first off the UAV needs to find Joe using an on-board camera. After finding Joe, the aircraft needs to find a good landing site in an area littered with obstacles. The landing site needs to be more than 30 meters from Joe (to meet CASA safety requirements) but less than 80 meters (so Joe doesn't have to walk too far).

The aircraft then needs to perform an automatic landing, and then wait for Joe to load the blood sample into an easily accessible carrier. Joe then presses a button to indicate he is done loading the blood sample. The aircraft needs to wait for one minute for Joe to get clear, and then perform an automatic takeoff and flight back to the home location to deliver the blood sample to waiting hospital staff.

That story hides a lot of very challenging detail. For example, the UAV must maintain continuous telemetry contact with the operators back at the base. That needs to be done despite not knowing exactly where the landing site will be until the day before the challenge starts.

Also, the landing area has trees around it and no landing strip, so a normal fixed wing landing and takeoff is very problematic. The organisers wanted teams to come up with a VTOL solution and in this
they were very successful, kickstarting a huge effort to develop the VTOL capabilities of multiple open source autopilot systems.

The organisers also had provided a strict flight path that the teams have to follow to reach the search area where Joe is located. The winding path over the rural terrain of Dalby is strictly enforced, with any aircraft breaching the geofence required to immediately and automatically terminate by crashing into the ground.

The organisers also gave quite a wide range of flight distance and weather conditions that the teams had to be able to cope with. The distance to the search area could be up to 30km, meaning a round trip distance of 60km without taking into account all the time spent above the search area trying to find Joe. The teams had to be able to fly in up to 25 knots average wind on the ground, which could mean well over 30 knots in the air.

The mission also needed to be completed in one hour, including the time spent loading the blood sample and circling above Joe.
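Those constraints already imply a minimum cruise performance. A rough sketch of the arithmetic (the ten minutes over the search area is my assumption, not a contest figure):

```shell
awk 'BEGIN {
    dist_km    = 60          # worst-case round trip from the rules
    search_min = 10          # assumed minutes circling/loading at Joe
    cruise_hours = (60 - search_min) / 60
    printf "%.0f km/h average ground speed needed\n", dist_km / cruise_hours
    # prints: 72 km/h average ground speed needed
}'
```

And that is before headwinds of up to 25 knots on the ground are factored in.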

Russell Coker: Improving Memory

Planet Linux Australia - Mon, 2016-10-17 17:02

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things many of which won’t be needed when they are younger as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google maps tends to give the more confusing routes (IE routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?

Related posts:

  1. Xen Memory Use and Zope I am currently considering what to do regarding a Zope...
  2. Improving Computer Reliability In a comment on my post about Taxing Inferior Products...
  3. Chilled Memory Attacks In 1996 Peter Gutmann wrote a paper titled “Secure Deletion...

Clinton Roy: In Memory of Gary Curtis

Planet Linux Australia - Sun, 2016-10-16 17:00

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand, we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.


Filed under: Uncategorized

Glen Turner: Activating IPv6 stable privacy addressing from RFC7217

Planet Linux Australia - Thu, 2016-10-13 11:34
Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217.* This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme in a Linux which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:

[connection]
ipv6.ip6-privacy=0
ipv6.addr-gen-mode=stable-privacy

Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the lower 64 bits; b03a:86e8:e163:2714 in the example above) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address — the Network Prefix, Subnet Identifier and Prefix Length — should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.
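As a quick sanity check, the EUI-64 giveaway is the last MAC octets (and usually the ff:fe filler) sitting verbatim inside the Interface Identifier. A small sketch of that test — the helper name and the first sample address are my own illustrations; the second address is the stable privacy one from the output above:

```shell
#!/bin/sh
# Sketch: flag a likely EUI-64 interface identifier by looking for the
# last two MAC octets verbatim in the IPv6 address.
looks_like_eui64() {
    mac="$1"; addr="$2"
    suffix=$(echo "$mac" | awk -F: '{print $5$6}')
    echo "$addr" | grep -qi "$suffix"
}

# EUI-64 derived from 52:54:00:12:34:56 ends in ...ff:fe12:3456
looks_like_eui64 52:54:00:12:34:56 2001:db8:1:2:5054:ff:fe12:3456 && echo "EUI-64 suspected"

# the stable privacy address from the output above
looks_like_eui64 52:54:00:12:34:56 2001:db8:1:2:b03a:86e8:e163:2714 || echo "no MAC bytes visible"
```

On a live host you would feed it the contents of /sys/class/net/eth0/address and the output of ip -6 addr instead of literal strings.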

Maxim Zakharov: One more fix for AMP WordPress plugin

Planet Linux Australia - Thu, 2016-10-13 11:05

With the recent AMP update at Google you may notice an increased number of AMP parsing errors in your search console. They look like:

The mandatory tag 'html ⚡ for top-level html' is missing or incorrect.

Some plugins, e.g. Add Meta Tags, may alter language_attributes() using the 'language_attributes' filter, adding XML-related attributes which are disallowed (see ), and that causes the error mentioned above.

I have made a fix solving this problem and opened a pull request for the WordPress AMP plugin; you may see it here:

Linux Users of Victoria (LUV) Announce: LUV Main November 2016 Meeting: The Internet of Toys / Special General Meeting / Functional Programming

Planet Linux Australia - Tue, 2016-10-11 03:02
Start: Nov 2 2016 18:30 End: Nov 2 2016 20:30 Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053



• Nick Moore, The Internet of Toys: ESP8266 and MicroPython
• Special General Meeting
• Les Kitchen, Functional Programming

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 627 326.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Linux Users of Victoria (LUV) Announce: LUV Beginners October Meeting: Build a Simple RC Bot!

Planet Linux Australia - Sun, 2016-10-09 21:03
Start: Oct 15 2016 12:30 End: Oct 15 2016 16:30 Location: 

Infoxchange, 33 Elizabeth St. Richmond


Build a Simple RC Bot! Getting started with Arduino and Android

In this introductory talk, Ivan Lim Siu Kee will take you through the process of building a simple remote controlled bot. Find out how you can get started on building simple remote controlled bots of your own. While effort has been made to keep the presentation as beginner friendly as possible, some programming experience is still recommended to get the most out of this talk.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Craig Sanders: Converting to a ZFS rootfs

Planet Linux Australia - Sun, 2016-10-09 17:03

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1MB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size        Code  Name
     1              40            2047  1004.0 KiB  EF02  BIOS boot partition
     2            2048         2099199  1024.0 MiB  EF00  EFI System
     3         2099200         6293503  2.0 GiB     8300  Linux filesystem
     4         6293504        14682111  4.0 GiB     8200  Linux swap
     5        14682112       455084031  210.0 GiB   BF07  Solaris Reserved 1
     6       455084032       459278335  2.0 GiB     BF08  Solaris Reserved 2
     7       459278336       537234734  37.2 GiB    BF09  Solaris Reserved 3

I then cloned the partition table to the other three SSDs with this little script:

#! /bin/bash

src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
    sgdisk --replicate="/dev/$tgt" /dev/"$src"
    sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm for /boot, the zpool, and the root filesystem.

Most rootfs on ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs. i.e. ganesh/root

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'
md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5
# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"
mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
# bootfs is a pool property, so the pool name is needed as well
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
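For the bit-torrent case, a sketch of what those datasets might look like — the dataset and mountpoint names here are my own examples, not from the post:

```shell
# Small-recordsize dataset for in-progress downloads, to limit COW
# fragmentation while pieces arrive out of order:
zfs create -o recordsize=16K -o mountpoint=/data/bt-incoming ganesh/bt-incoming

# Completed downloads get moved here by the BT client; the default
# 128K recordsize is fine for finished files:
zfs create -o mountpoint=/data/bt-done ganesh/bt-done
```

The BT client is then configured to download into /data/bt-incoming and move finished torrents to /data/bt-done.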

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
    -o primarycache=metadata ganesh/postgres
zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
    -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Logout all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).


hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s but that’s good enough. The Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
    mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and ext4 on raid-1 /boot:

/ganesh/root  /      zfs   defaults  0  0
/dev/md0      /boot  ext4  defaults,relatime,nodiratime,errors=remount-ro  0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
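For reference, that deferred swap setup could look something like this sketch, based on the partitioning above (the device glob, placeholder serial, and priorities are my assumptions; the zswap side is not shown):

```shell
# Initialise the 4GB swap partition (partition 4) on each SSD and
# enable them with equal priority so the kernel stripes across all four:
for part in /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*-part4; do
    mkswap "$part"
    swapon --priority 1 "$part"
done

# plus one /etc/fstab line per partition so they come back at boot, e.g.:
#   /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_XXXX-part4  none  swap  sw,pri=1  0  0
```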

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, first copy the edited /etc/fstab and /etc/default/grub to the old root filesystem (I suggest as /etc/fstab.zfs and /etc/default/grub.zfs), so you can restore them after the rsync overwrites your changes.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type ef02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:

#! /bin/sh
cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot, otherwise you’ll get that error every time you run update-grub in future.
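A udev rule along these lines should do it (the rule file name is hypothetical and the matches are my own untested assumptions; the stock persistent-storage rules use the same $env variables):

```
# /etc/udev/rules.d/99-grub-probe-links.rules (hypothetical name)
# Recreate the top-level /dev/ata-* symlinks that grub-probe expects.
ENV{DEVTYPE}=="disk", ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="?*", \
    SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"
ENV{DEVTYPE}=="partition", ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="?*", \
    SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
```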

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /

#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
    umount "/${hn}/root/$i"
done

umount "$md"
zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes
  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
        mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
               ata-Crucial_CT275MX300SSD1_163313AB002C-part6

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.

Converting to a ZFS rootfs is a post from: Errata

Maxim Zakharov: Data structure for word relative cooccurence frequencies, counts and prefix tree

Planet Linux Australia - Sat, 2016-10-08 19:05

Trying to solve the task of calculating word cooccurrence relative frequencies fast, I have created an interesting data structure, which also allows calculating counts for the first word in each pair checked; and it builds a word prefix tree during text processing, which can be used for further text analysis.

The source code is available on GitHub:

When you execute make command you should see the following output:

cc -O3 -funsigned-char cooccur.c -o cooccur -lm
Example 1
./cooccur a.txt 2 < | tee a.out
Checking pair d e
Count:3 cocount:3 Relative frequency: 1.00
Checking pair a b
Count:3 cocount:1 Relative frequency: 0.33
Example 2
./cooccur b.txt 3 < | tee b.out
Checking pair a penny
Count:3 cocount:3 Relative frequency: 1.00
Checking pair penny earned
Count:4 cocount:1 Relative frequency: 0.25

The cooccur program takes two arguments: the filename of a text file to process and the size of the word window within which to calculate relative frequencies. The program then takes pairs of words from its standard input, one pair per line, and calculates the count of appearances of the first word in the processed text and the cooccurrence count for the pair in that text. If the second word appears more than once in the window, only one appearance is counted.
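The counting rule just described can be sketched in a few lines of awk (the sample text, word pair, and window size below are my own illustration, not the author's a.txt or b.txt):

```shell
#!/bin/sh
# For each occurrence of w1, check whether w2 appears within the next
# win-1 words (counted at most once per window), then report the same
# three numbers the cooccur program prints.
echo "a penny saved is a penny earned and a penny spent" | \
awk -v w1=a -v w2=penny -v win=3 '
{
    n = split($0, t, " ")
    for (i = 1; i <= n; i++) {
        if (t[i] != w1) continue
        count++
        for (j = i + 1; j < i + win && j <= n; j++)
            if (t[j] == w2) { cocount++; break }
    }
}
END {
    printf "Count:%d cocount:%d Relative frequency: %.2f\n",
           count, cocount, cocount / count
}'
# prints: Count:3 cocount:3 Relative frequency: 1.00
```

The real program avoids this O(n × window) rescan by using the prefix tree and per-word counts, which is the point of the data structure.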

Examples were taken here:

Michael Davies: Fixing broken Debian packages

Planet Linux Australia - Fri, 2016-10-07 11:06
In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue by doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you get the dreaded unmet dependencies roadblock which prevents you from future package manager updates and operations. i.e.

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulically fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (and thanks to tonyb for pointing me in the right direction!)

So what we want to do is to create a dummy Debian package that will satisfy the libqt4-gui requirement.  So first off, let's uninstall vidyodesktop, and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd  ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF

And now, let's build and install the dummy package:

equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb
And now vidyodesktop installs cleanly!
sudo dpkg -i VidyoDesktopInstaller-*.deb
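If you want to double-check the result, something like this should show the dummy package installed and a clean dependency state (a sketch; the exact output varies by system):

```shell
# the fake package should report "install ok installed" and version 1:100
dpkg -s libqt4-gui | grep -E '^(Status|Version):'

# apt-get check exits non-zero if any dependencies are still broken
sudo apt-get check
```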

James Morris: LinuxCon Europe Kernel Security Slides

Planet Linux Australia - Fri, 2016-10-07 01:03

Yesterday I gave an update on the Linux kernel security subsystem at LinuxCon Europe, in Berlin.

The slides are available here:

The talk began with a brief overview and history of the Linux kernel security subsystem, and then I provided an update on significant changes in the v4 kernel series, up to v4.8.  Some expected upcoming features were also covered.  Skip to slide 31 if you just want to see the changes.  There are quite a few!

It’s my first visit to Berlin, and it’s been fascinating to see the remnants of the Cold War, which dominated life in the 1980s when I was at school, but which also seemed so impossibly far from Australia.

Brandenburg Gate, Berlin. Unity Day 2016.

I hope to visit again with more time to explore.

Russell Coker: 10 Years of Glasses

Planet Linux Australia - Mon, 2016-10-03 23:02

10 years ago I first blogged about getting glasses [1]. I’ve just ordered my 4th pair of glasses. When you buy new glasses the first step is to scan your old glasses to use as a base point for assessing your eyes: instead of going in cold and trying lots of different lenses, they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses, but they may be hesitant to do so if you don’t buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I’m wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

        New     4 Years Old  Really Old
R-SPH   0.00    0.00         -0.25
R-CYL  -1.50   -1.50         -1.50
R-AXS   180     179           180
L-SPH   0.00   -0.25         -0.25
L-CYL  -1.00   -1.00         -1.00
L-AXS   5       10            179

The Specsavers website has a good description of what this means [2]. In summary SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism, which is where the focal lengths for horizontal and vertical aren’t equal. AXS is the angle for astigmatism. There are other fields which you can read about on the Specsavers page, but they aren’t relevant for me.

The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.

Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this [3]). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read it with my left eye but it is fuzzy, and with my right eye reading it is like reading the last line of an eye test, something I can do if I concentrate a lot for test purposes but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse, not better) then my ability to read the text with my left eye is worse than my right eye without glasses. This is as expected, as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.

The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.

Related posts:

  1. more on vision I had a few comments on my last so I...
  2. right-side visual migraine This afternoon I had another visual migraine. It was a...
  3. New Portslave release after 5 Years I’ve just uploaded Portslave version 2010.03.30 to Debian, it replaces...