news aggregator

Michael Still: Art with condiments

Planet Linux Australia - Thu, 2018-04-19 19:00

Mr 15 just made me watch this video, it’s pretty awesome…

You’re welcome.

Michael Still: City2Surf 2018

Planet Linux Australia - Wed, 2018-04-18 15:00

I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 16 second improvement despite the injury.

This year I’ve done a few things differently: I’ve started training much earlier, mostly as a side effect of recovering from the achilles injury; and I’ve decided to try to raise some money for charity during the run.

Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!

David Rowe: Lithium Cell Amp Hour Tester and Electric Sailing

Planet Linux Australia - Wed, 2018-04-18 09:04

I recently electrocuted my little sail boat. I built the battery pack using some second hand Lithium cells donated by my EV. However, after 8 years of abuse from my kids and me, those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low value power resistor (OK some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot, it’s only 86W being dissipated along about 1m of wire. When I built my EV I used the coat hanger wire load trick to test 3kW loads, that was a bit more exciting!
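
A quick sanity check of those load figures (my own arithmetic, not from the original write-up, using the cell voltage and resistance quoted above):

    # Back-of-the-envelope check on the coat hanger load figures quoted above.
    V_CELL = 3.2    # volts, nominal cell voltage under load
    R_LOAD = 0.12   # ohms, the wire once it has warmed up

    current = V_CELL / R_LOAD        # roughly 27 A
    power = V_CELL ** 2 / R_LOAD     # roughly 85 W dissipated in the wire
    print('%.0f A, %.0f W' % (current, power))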

The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller for it. I borrowed a small part of that circuit, a two transistor flip flop, and paired it with a Battery Management System (BMS) module:

Across the cell under test is a CM090 BMS module from EV Power. That’s the good looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop, it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended so I don’t want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!

In operation, I point a cell phone at the LED and some multi-meters, set it taking time lapse video, and start the test:

I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on:

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH
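
The same conversion as a small Python sketch (the 10x speed up and 27A discharge current are as quoted above; the function name is mine):

    # Convert a time lapse duration (MM:SS) into cell capacity in amp-hours.
    SPEEDUP = 10    # the time lapse video runs at 1/10 of real time
    CURRENT = 27    # approximate discharge current in amps

    def capacity_ah(minutes, seconds):
        real_seconds = (minutes * 60 + seconds) * SPEEDUP
        return CURRENT * real_seconds / 3600.0

    print('%.0f AH' % capacity_ah(12, 12))   # Cell #9: about 55 AH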

So this cell’s a bit low, and won’t be finding its way onto my boat!

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where’s the fun in that?

Results

It was fun to develop: a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load; dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels.

One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed). This means the boat moves a lot faster than under motor or sail alone in light winds. For example the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent; we forgot it was on for hours at a time!

Reading Further

Electric Car BMS Controller
New Lithium Battery Pack for my EV
Engage the Silent Drive
EV Bugs

Linux Users of Victoria (LUV) Announce: LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Planet Linux Australia - Tue, 2018-04-17 21:03
Start: Apr 21 2018 12:00
End: Apr 21 2018 16:00
Location: Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne
Link: https://www.meetup.com/drupalmelbourne/events/qsvdwcyxgbcc/

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria.

Linux Users of Victoria is a subcommittee of Linux Australia.

Gary Pendergast: Introducing: Click Sync

Planet Linux Australia - Tue, 2018-04-17 17:04

Chrome’s syncing is pretty magical: you can see your browsing history from your phone, tablet, and computers, all in one place. When you install Chrome on a new computer, it automatically downloads your extensions. You can see your bookmarks everywhere, and it even lets you open a tab from another device.

There’s one thing that’s always bugged me, however. When you click a link, it turns purple, as all visited links should. But it doesn’t turn purple on your other devices. Google have had this bug on their radar for ages, but it hasn’t made much progress. There’s already an extension that kind of fixes this, but it works by hashing every URL you visit and sending them to a server run by the extension author: not something I’m particularly comfortable with.

And so, I wrote Click Sync!

Click Sync: Syncs your ‘visited’ link status between Chrome installs (chrome.google.com).

When you click a link, it’ll use Chrome’s inbuilt sync service to tell all your other computers to mark it as visited. If you like watching videos of links turn purple without being clicked, I have just the thing for you:

While you’re thinking about how Chrome syncs between all your devices, it’s good to set up a Chrome Passphrase, if you haven’t already. This encrypts your personal data before it passes through Google’s servers.

Unfortunately, Chrome mobile doesn’t support extensions, so this is only good for syncing between computers. If you run into any bugs, head on over to the Click Sync repository, and let me know!

David Rowe: Testing HAB Telemetry Protocols

Planet Linux Australia - Mon, 2018-04-16 09:04

On Saturday Mark and I had a pleasant day bench testing High Altitude Balloon (HAB) Telemetry protocols and demodulators.

Project Horus HAB flights use a low power transmitter to send regular updates of the balloons position and status. To date, this has been sent using RTTY, and demodulated using Fldigi, or a special version modified for HAB work called dl-Fldigi.

Lora is becoming common in HAB circles; however, I am confident we can do better using a custom protocol and well engineered, and most importantly open source, modems. While very well designed and conveniently packaged, Lora is not magic – modem performance is defined by physics.

A few years ago, Mark and I developed and flight tested a binary protocol (Horus Binary) for HAB flights. We have dusted this off, and I’ve written a C callable API (horus_api.c) to make Horus RTTY and Binary easy to use. The plan is to release a cross platform GUI application that supports Horus Binary, so anyone with an SSB receiver can join in the fun of tracking Horus flights using Horus Binary.

A good HAB telemetry protocol works at low SNRs, and has fast updates to allow accurate positioning of the payload during the final descent. A way of measuring the performance is Packet Error Rate (PER) – how many telemetry packets get through at a given Signal to Noise Ratio (SNR).

So we generated some synthetic Horus RTTY and Binary packets at calibrated SNRs using GNU Octave simulation code (fsk_horus.m), then played the wave files through several modems.

Here are the results (click for a larger version):

The X-axis is in Eb/No, which is proportional to SNR:

SNR (dB) = Eb/No (dB) + 10*log10(Rb/BW)

where Rb is the bit rate and BW is the noise bandwidth you want to measure SNR in. Eb/No is handy as it normalises for the effect of bit rate and noise bandwidth, making modem comparison easier.
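
As a quick sketch of how that conversion plays out for the figures in the table below (my code; the 3000Hz bandwidth matches the table's SNR column):

    import math

    def snr_db(ebno_db, rb, bw=3000.0):
        # SNR in noise bandwidth bw, from Eb/No and bit rate Rb.
        return ebno_db + 10 * math.log10(rb / bw)

    print(round(snr_db(13.0, 100), 1))   # dl-Fldigi RTTY: about -1.7 in the table
    print(round(snr_db(4.5, 200), 1))    # Horus Binary: about -7.2 in the table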

Protocol              dl-Fldigi RTTY   Fldigi RTTY   Horus RTTY   Horus Binary
Eb/No (50% PER, dB)   13.0             12.0          11.5         4.5
Rb (bits/s)           100              100           100          200
SNR (3000Hz, dB)      -1.7             -2.7          -3.2         -7.2
Packet Duration (s)   6                6             6            1.6
Wave File             Listen           Listen        Listen       Listen

Discussion

The older dl-Fldigi is a few dB behind the more modern Fldigi. Our Horus RTTY and especially Binary protocols are doing very well. At the same bit rate (Eb/No curve), Horus Binary is 9dB ahead of dl-Fldigi, which is a very useful gain; at least double the Line of Sight (LOS) range, and equivalent to having nearly 10x the transmit power. The Binary packets are fast as well, allowing for rapid position updates in the final descent.

Trade offs are possible, for example if we slowed Horus Binary to 50 bits/s, its packet duration would be 6.4s (about the same as RTTY), however 50% PER would occur at an SNR of -13dB, a 15dB improvement over dl-Fldigi.
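
Checking that trade off with the same conversion (again my own arithmetic using the numbers above):

    import math

    # Horus Binary slowed from 200 to 50 bits/s:
    print(1.6 * 200 / 50)                                  # 6.4 second packets
    print(round(4.5 + 10 * math.log10(50 / 3000.0), 1))    # about -13 dB SNR at 50% PER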

Reading Further

Project Horus
Binary Telemetry Protocol
All Your Modem are Belong To Us
SNR and Eb/No Worked Example

Michael Still: On Selecting a Well Engaged Open Source Vendor

Planet Linux Australia - Sun, 2018-04-15 23:00

Aptira is in an interesting position in the Open Source market, because we don’t usually sell software. Instead, our customers come to us seeking assistance with deciding which OpenStack to use, or how to embed ONAP into their nationwide networks, or how to move their legacy networks to the software defined future. Therefore, our most common role is as a trusted advisor to help our customers decide which Open Source products to buy.

(My boss would insist that I point out here that we do customisation of Open Source for our customers, and have assisted many in the past with deploying pure upstream solutions. Basically, we do what is the right fit for the customer, and aren’t obsessed with fitting customers into pre-defined moulds that suit our partners.)

That makes it important that we recommend products from companies that are well engaged with their upstream Open Source communities. That might be OpenStack, or ONAP, or even something like Open Daylight. This raises the obvious question – what makes a company well engaged with an upstream project?

Read more over at my employer’s blog

Michael Still: Configuring docker to use rexray and Ceph for persistent storage

Planet Linux Australia - Sun, 2018-04-15 21:00

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working…

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    rexray has been installed to /usr/bin/rexray

    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST

    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST

Which is of course horrid. What that script seems to have done is install a deb’d version of rexray based on an alien’d package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that allows local visibility
     and management from cloud and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)

If I was building anything more than a test environment I think I’d want to do a better job of installing rexray than this, so you’ve been warned.

Next to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren’t mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the the ceph we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                              100%   92   0.1KB/s   00:00
    ceph.conf                           100%  681   0.7KB/s   00:00
    ceph.client.admin.keyring           100%   63   0.1KB/s   00:00
    ceph.client.glance.keyring          100%   64   0.1KB/s   00:00
    ceph.client.cinder.keyring          100%   64   0.1KB/s   00:00
    ceph.client.cinder-backup.keyring         71   0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph

And the rexray output sure made it look like it worked…

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f

    May 29 10:14:07 labosa systemd[1]: Started rexray.

Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting rex-ray" error.driver=ceph time=1496016848216

That’s because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    rbd:
      defaultPool: rbd

Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce

Now let’s make a rexray volume.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    (A size of 1 here means 1gb)
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1

Let’s start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b

And now to prove that persistence works and that there’s nothing up my sleeve…

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)

    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)

    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)

    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

Now let’s re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

So there you go.

Michael Still: I think I found a bug in python’s unittest.mock library

Planet Linux Australia - Sun, 2018-04-15 21:00

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we’ve used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that Python mocks are magical. It’s an object where you can call any method name, and the mock will happily pretend it has that method, and return None. You can then later ask what “methods” were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein is the problem — the mock object doesn’t know if you’re the code under test, or the code that’s making assertions. So, if you fat finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here’s an example:

#!/usr/bin/python3

from unittest import mock


class foo(object):
    def dummy(a, b):
        return a + b


@mock.patch.object(foo, 'dummy')
def call_dummy(mock_dummy):
    f = foo()
    f.dummy(1, 2)

    print('Asserting a call should work if the call was made')
    mock_dummy.assert_has_calls([mock.call(1, 2)])
    print('Assertion for expected call passed')
    print()

    print('Asserting a call should raise an exception if the call wasn\'t made')
    mock_worked = False
    try:
        mock_dummy.assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
    if not mock_worked:
        print('*** Assertion should have failed ***')
    print()

    print('Asserting a call where the assertion has a typo should fail, but '
          'doesn\'t')
    mock_worked = False
    try:
        mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
    print()
    if not mock_worked:
        print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)
        print()


if __name__ == '__main__':
    call_dummy()

If I run that code, I get this:

$ python3 mock_assert_errors.py
Asserting a call should work if the call was made
Assertion for expected call passed

Asserting a call should raise an exception if the call wasn't made
Expected failure, Calls not found.
Expected: [call(3, 4)]
Actual: [call(1, 2)]

Asserting a call where the assertion has a typo should fail, but doesn't

*** Assertion should have failed ***
[call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn’t a thing, but we didn’t notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don’t really have a solution to this right now (I’m home sick and not thinking straight), but it would be interesting to see what other people think.
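
For what it's worth, here is a minimal sketch (my suggestion, not something from the analysis above) of how auto-speccing can catch this class of typo: with autospec=True the mock only exposes attributes the real object actually has, so a misspelled assertion method raises AttributeError instead of silently recording a call.

    #!/usr/bin/python3
    # Sketch only: autospec'd patching to make typo'd assertions fail loudly.
    from unittest import mock


    class Foo(object):
        def dummy(self, a, b):
            return a + b


    @mock.patch.object(Foo, 'dummy', autospec=True)
    def call_dummy(mock_dummy):
        f = Foo()
        f.dummy(1, 2)

        # The real assertion helper still works (autospec records self too).
        mock_dummy.assert_has_calls([mock.call(f, 1, 2)])

        # A misspelled assertion now raises instead of silently passing.
        try:
            mock_dummy.typo_assert_has_calls([mock.call(f, 3, 4)])
        except AttributeError as e:
            print('Typo caught: %s' % e)


    if __name__ == '__main__':
        call_dummy()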

Michael Still: Python3 venvs for people who are old and grumpy

Planet Linux Australia - Sun, 2018-04-15 21:00

I’ve been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn’t a thing over there as best as I can tell.

So how do I make a venv? It’s really not too bad…

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc

Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot

You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot

Where system is the system installed python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate

I’ll probably write wrappers at some point so that this looks like virtualenvwrapper, but it’s good enough for now.

Michael Still: Giving serial devices meaningful names

Planet Linux Australia - Sun, 2018-04-15 21:00

This is a hack I’ve been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking in the right place.

For the trivial case, this is pretty easy with udev:

$ cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor and product ID match the relevant values, to symlink the device to “/dev/radish”.

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.

So that’s great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more… difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \ ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \ PROGRAM="/usr/bin/usbtest /dev/%k", \ SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is — in my case either a currentcost or a solar panel inverter.
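
The usbtest script itself isn't shown in the post, but a hypothetical sketch of the idea might look like the following; the probe logic, device names and the pyserial dependency are all assumptions for illustration.

    #!/usr/bin/python3
    # Hypothetical udev PROGRAM helper: probe the serial device passed on the
    # command line and print a short name on stdout, which udev then uses as
    # the symlink name via SYMLINK+="%c".
    import sys

    import serial  # pyserial, assumed installed


    def identify(port):
        with serial.Serial(port, 9600, timeout=2) as conn:
            banner = conn.read(200)
            # Made-up heuristic: currentcost meters chatter XML constantly.
            if b'<msg>' in banner:
                return 'currentcost'
        return 'unknownserial'


    if __name__ == '__main__':
        print(identify(sys.argv[1]))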

Michael Still: Hugo nominees for 2018

Planet Linux Australia - Sun, 2018-04-15 21:00

Lifehacker kindly pointed out that the Hugo nominees are out for 2018. They are:

  • The Collapsing Empire, by John Scalzi. I’ve read this one and liked it.
  • New York 2140, by Kim Stanley Robinson. I’ve had a difficult time with Kim’s work in the past, but perhaps I’ll one day read this.
  • Provenance, by Ann Leckie. I liked Ancillary Justice, but failed to fully read the sequel, so I guess we’ll wait and see on this one.
  • Raven Stratagem, by Yoon Ha Lee. I know nothing!
  • Six Wakes, by Mur Lafferty. Again, I know nothing about this book or this author.

So a few there to consider in the future.

Michael Still: The Collapsing Empire

Planet Linux Australia - Sun, 2018-04-15 21:00

This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don’t know that and are busy having petty trade wars instead. It isn’t a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire…

Title: The Collapsing Empire
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Release Date: March 21, 2017
Pages: 336

Our universe is ruled by physics and faster than light travel is not possible—until the discovery of The Flow, an extra-dimensional field we can access at certain points in space-time that transport us to other worlds, around other stars. Humanity flows away from Earth, into space, and in time forgets our home world and creates a new empire, the Interdependency, whose ethos requires that no one human outpost can survive without the others. It’s a hedge against interstellar war—and a system of control for the rulers of the empire. The Flow is eternal—but it is not static. Just as a river changes course, The Flow changes as well, cutting off worlds from the rest of humanity. When it’s discovered that The Flow is moving, possibly cutting off all human worlds from faster than light travel forever, three individuals -- a scientist, a starship captain and the Empress of the Interdependency—are in a race against time to discover what, if anything, can be salvaged from an interstellar empire on the brink of collapse. “John Scalzi is the most entertaining, accessible writer working in SF today.” —Joe Hill "If anyone stands at the core of the American science fiction tradition at the moment, it is Scalzi." —The Encyclopedia of Science Fiction, Third Edition

Michael Still: Things I read today: the best description I’ve seen of metadata routing in neutron

Planet Linux Australia - Sun, 2018-04-15 21:00

I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Michael Still: Escaping from blosxom

Planet Linux Australia - Sun, 2018-04-15 21:00

I’ve been running my personal blog on a very hacked version of blosxom for a hilariously long time, and it’s time to escape. I’ve therefore started converting all of the content to wordpress here, and will eventually redirect the old domain to here as well.

Why blog when it’s so 2000? I’m increasingly uninterested in social media like Facebook and Twitter. I figure if I’m going to note something down that looks like it might be useful to others I’ll put it on ye olde blog instead.

I’m sure the conversion isn’t perfect, and I’ve decided not to migrate very old content that is simply not interesting any more (linux kernel patches from 2004 for example). If you find a post which has converted badly, just comment on it and I’ll do something about it. I am very sure that pretty much no one will do that thing however.

Michael Still: Nova vendordata deployment, an excessively detailed guide

Planet Linux Australia - Sun, 2018-04-15 21:00

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.

Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it’s the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service in a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

{ "testing": { "value1": 1, "value2": 2, "value3": "three" } }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
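
To make the shape of that request and response concrete, here is a minimal sketch of a receiving service (this is not the mikalstill/vendordata sample; the reply keys are made up and authentication is ignored). Whatever JSON the service returns becomes the value under the target's name in vendor_data2.json.

    # Minimal sketch of a vendordata REST endpoint using only the standard library.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer


    class VendordataHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            request = json.loads(self.rfile.read(length) or b'{}')
            # nova sends project-id, instance-id, image-id, user-data,
            # hostname and metadata; build whatever reply you need from them.
            reply = {'hostname-seen': request.get('hostname'),
                     'example-value': 'hello from the sketch service'}
            body = json.dumps(reply).encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)


    if __name__ == '__main__':
        # Port 8888 matches the examples later in this post.
        HTTPServer(('127.0.0.1', 8888), VendordataHandler).serve_forever()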

Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behaviour is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper   (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it’s configured to work with an openstack-ansible all in one install that I set up for my private testing, which probably isn’t what you’re using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" \
    http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}

Michael Still: So you want to setup a Ceph dev environment using OSA

Planet Linux Australia - Sun, 2018-04-15 21:00

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical that I would build it by building an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I’ve never seen before called a “Scenario”. Basically this means that you need to export an environment variable called “SCENARIO” before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     # foo: 1234
     # bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS. Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false

That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I’ll never need to think about it again which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000

OpenSTEM: Australia and the Commonwealth Games

Planet Linux Australia - Sun, 2018-04-15 15:05
Australia has been doing exceptionally well at the 2018 Commonwealth Games, held at the Gold Coast, Queensland. We can be very proud of our athletes, not only for their sporting prowess, but also because of their friendly demeanour and wonderful examples of the spirit of sportsmanship. I’m sure we all felt proud when the Australian […]

Ben Martin: My little robotic pals

Planet Linux Australia - Fri, 2018-04-13 15:30
Years ago I decided to build an indoor robot with multiple kinects for navigation and a robotic arm for manipulation. It was an interesting time working out how to do this and what is needed to get a mobile base to map and navigate a static and dynamic indoor space. Any young players reading this might think that ROS can just magically make this all happen. There are some interesting issues to discover building your own base and some, um, "issues" shall we say that you will need to address that are not in the books or docs. I won't spoil it here for the new players other than to say be prepared to be persistent. 


There are two active wheels at the front and a single drag wheel at the back, about 12 inches behind the front wheels. I wrote the code to control the arm myself as custom ROS nodes. A great trick here is that you can inject sinusoidal movement by adding a shim ROS node that takes one target and smoothly moves towards it.

Now I have a new friend for outdoor activity, the "hound bot". The little furry friend is still sans hair but has GPS, IMU, RC control override, and a PS4 Eye camera mounted for depth perception and mapping. I'm taking a leaf out of one of the big car makers' books and only using cameras for navigation. But for me it is about cost, since a good lidar is still much too expensive for the hound.


The hound is a sort of monocoque, where the copper looking square part at the front is part of a 1/4 inch aircraft grade alloy solid welded chassis that extends the length of the robot. The hound can do about 20km/h and is around 20kg in heft. The electronics bay in the middle is protected by a reinforced carbon fibre layup that I did. Mixing materials for fun and slight weight loss.

One great part about doing this "because I want to" is that I am unbounded. Academic institutions might say that building robust alloy shells is not a worthwhile task and only the abstract algorithms matter. I get to pick and choose what matters based purely on what is interesting, what is hard to do (yay!), and what will help me get the robot to perform a task that I want.

The hound will get gripper(s) so it can autonomously "fetch" things for me such as the mail or go find and pick up objects on the lawn.

Donna Benjamin: Leadership, and teamwork.

Planet Linux Australia - Fri, 2018-04-13 07:02
I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us that Hitler was elected. I too was elected, to the board of the Drupal Association, and I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.