Planet Linux Australia
ArduPilot now supports TiltWing VTOL aircraft! First test flights done earlier today in Canberra
We did the first test flights of the Unique Models CL84 TiltWing VTOL with ArduPilot today. Support for tilt-wing aircraft is in the 3.7.1 stable release, with enhancements to support the tilt mechanism of the CL84 added for the upcoming 3.8.0 release.
This CL84 model operates as a tricopter in VTOL flight, and as a normal aileron/elevator plane in forward flight. The unusual thing from the point of view of ArduPilot is that the tilt mechanism uses a retract style servo, which means it can be commanded to be fully up or fully down, but you can't ask it to hold any angle in between. That makes for some interesting challenges in the VTOL transition code.
For the 3.8.0 release there is a new parameter Q_TILT_TYPE that controls whether the tilt mechanism for tiltrotors and tiltwings is a continuous servo or a binary (retract style) servo. In either case the Q_TILT_RATE parameter sets the rate at which the servo changes angle (in degrees/second).
This aircraft has been previously tested in hover by Greg Covey (see http://discuss.ardupilot.org/t/tiltrotor-support-for-plane/8805) but has not previously been tested with automatic transitions.
Many thanks to Grant, Peter, James and Jack from CanberraUAV for their assistance with testing the CL84 today and the videos and photos!
By default, EasyBuild will delete the build directory of a successful installation, and will save the build directories of failed installs for diagnostic purposes. In some cases, however, one may want to save the build directory. This can be useful, for example, for diagnosis of *successful* builds. Another example is the installation of plugins for applications such as ParaView, which *require* access to the successful build directory.
The University of Melbourne was recently ranked #1 in Australia and #33 in the world, according to the Times Higher Education World University Rankings of 2015-2016. The rankings are based on a balanced metric across citations, industry income, international outlook, research, and teaching.
I did manage to go from opening the packet to having the firmware environment set up, built, and uploaded in less than 2 hours total. No bridges, no hassles, cable shrink around the place and 90 degree headers across the bottom of the board for future tinkering.
This is going to look extremely stylish in a CNCed hardwood case. My current plan is to turn it into a smart remote control. Rotary encoder for volume, maybe modal so that the desired "program" can be selected quickly from a list without needing to flick or page through things.
In this follow-up to Command and Control, Schlosser explores the conscientious objectors and protestors who have sought to highlight not just the immorality of nuclear weapons, but the hilariously insecure conditions in which the US government stores them. In all seriousness, we are talking grannies with heart conditions being able to break in.
My only real objection to this book is that it is more of a pamphlet than a book, and feels a bit like material that didn't make it into the main work. That said, it is well worth the read.
Tags for this post: book eric_schlosser nuclear weapons safety protest
Related posts: Command and Control; Random linkage; Fast Food Nation; Starfish Prime; Why you should stand away from the car when the cop tells you to; Random fact for the day
Code of Conduct and Safety
- Putting preferred pronouns
- Free Childcare
- Sponsored by Github
- Approx 10 kids
- Assistance Grants
- Breakdown by gender etc
- Roughly 25% of attendees and speakers not men
- More numbers
- 104 Matrix chat users
- 554 attendees
- 2900 coffee cups
- Network claimed up to 7.5Gb/s
- 1.6 TB over the week, 200Mb/s max
- 30 Session Chairs
- 12 Miniconfs
- 491 Proposals (130 more than the others)
- 6 Tutorials, 75 talks, 80 speakers
- 4 Keynote speakers
- 21 Sponsors
Linux.conf.au 2018 – Sydney
- A little bit of history repeating
- 2001, 2007, 2018
- Venue is UTS
- 5 minutes to food, train station
- @lca2018 on twitter
- Looking for a few extra helpers
- In support of Outreachy
- 3 interns funded
- Thanks to team members
Since the last launch, Mark and I have put a lot of work into carefully integrating a rate 0.8 LDPC code developed by Bill, VK5DSP. The coded 115 kbit/s system is now working error free on the bench down to -112dBm, and can transfer a new hi-res image in just a few seconds. With a tx power of 50mW, we estimate a line of sight range of 100km. We are now out-performing commercial FSK telemetry chip sets using our open source system.
However disaster struck soon after launch at Mt Barker High School oval. High winds blew the payloads into a tree and three of them were chopped off, leaving the balloon and a lone payload to continue into the stratosphere. One of the payloads that hit the tree was our SSDV, tumbling into a neighboring back yard. Oh well, we’ll have another try in December.
Now I’ve been playing a lot of Kerbal Space Program lately. It’s got me thinking about vectors, for example in Kerbal I learned how to land two space craft at exactly the same point on the Mun (Moon) using vectors and some high school equations of motion. I’ve also taken up sailing – more vectors involved in how sails propel a ship.
The high altitude balloon consists of a latex, helium filled weather balloon a few meters in diameter. Strung out beneath that on 50m of fishing line are a series of “payloads”, our electronic gizmos in little foam boxes. The physical distance helps avoid interference between the radios in each box.
While the balloon was held near the ground, it was keeled over at an angle:
It’s tethered, and not moving, but is acted on by the lift force from the helium and the drag force from the wind. These forces pivot the balloon around an arc whose radius is the tether length. If these forces were equal the balloon would sit at 45 degrees. Today it was lower, perhaps 30 degrees.
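As a quick sketch of that geometry (Python, with illustrative numbers; the post doesn't give the actual forces):

```python
import math

def tether_angle(lift, drag):
    """Angle of the tether above horizontal: the balloon settles where
    the lift (vertical) and drag (horizontal) forces balance."""
    return math.degrees(math.atan2(lift, drag))

# Equal lift and drag puts the balloon at 45 degrees, as in the text.
# A 30 degree angle implies the wind drag was ~1.7x the lift.
drag_over_lift_at_30 = 1.0 / math.tan(math.radians(30))
```

So the lower angle on launch day tells you the wind drag was noticeably stronger than the helium lift.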
When the balloon is released, it is accelerated by the wind until it reaches a horizontal velocity that matches the wind speed. The payloads will also reach wind speed and eventually hang vertically under the balloon due to the force of gravity. Likewise the lift accelerates the balloon upwards. This is balanced by drag to reach a vertical velocity (the ascent rate). The horizontal and vertical velocity components will vary over time, but lets assume they are roughly constant over the duration of our launch.
Now today the wind speed was 40 km/hr, just over 10 m/s. Mark suggested a typical balloon ascent rate of 5 m/s. The high school oval was 100m wide, so the balloon would take 100/10 = 10s to traverse the oval from one side to the gum tree. In 10 seconds the balloon would rise 5×10 = 50m, approximately the length of the payload string. Our gum tree, however, rises to a height of 30m, and reached out to snag the lower 3 payloads…..
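The arithmetic above, as a few lines of Python:

```python
wind_speed = 10.0    # m/s (40 km/hr, rounded down as in the post)
ascent_rate = 5.0    # m/s, Mark's typical balloon ascent rate
oval_width = 100.0   # m, launch point to the gum tree

t_to_tree = oval_width / wind_speed        # 10 s to traverse the oval
height_at_tree = ascent_rate * t_to_tree   # 50 m, about the payload string length

tree_height = 30.0   # m: anything on the string below 30 m gets snagged
```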
A few days ago while riding my bike I was involved in a spirited exchange of opinions with a gentleman in a motor vehicle. After said exchange he attempted to run me off the road, and got out of his car, presumably with intent to assault me. Despite the surge of adrenaline I declined to engage in fisticuffs, dodged around him, and rode off into the sunset. I may have been laughing and communicating further with sign language. It’s hard to recall.
I thought I’d apply some year 11 physics to see what all the fuss was about. I was in the middle of the road, preparing to turn right at a T-junction (this is Australia remember). While his motivations were unclear, his vehicle didn’t look like an ambulance. I am assuming he was not an organ courier, and that there probably wasn’t a live heart beating in an icebox on the front seat as he raced to the transplant recipient. Rather, I am guessing he objected to me being in that position, as that impeded his ability to travel at full speed.
The street in question is 140m long. Our paths crossed half way along at the 70m point, with him traveling at the legal limit of 14 m/s, and me a sedate 5 m/s.
Let’s say he intended to brake sharply 10m before the T junction, so he could maintain 14 m/s for at most 60m. His optimal journey duration was therefore about 4 seconds. My monopolization of the taxpayer-funded side street meant he was forced to endure a 12 second journey. The 8 second difference must have seemed like an eternity; no wonder he was angry, prepared to risk physical injury and an assault charge!
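A sketch of that arithmetic in Python (the post rounds 60/14 down to 4 seconds):

```python
street = 140.0            # m
crossing_point = street / 2          # our paths crossed at 70 m
braking_margin = 10.0                # brake 10 m before the T junction
full_speed_run = street - crossing_point - braking_margin  # 60 m

car_speed, bike_speed = 14.0, 5.0    # m/s: legal limit vs sedate cyclist
t_car = full_speed_run / car_speed   # ~4.3 s
t_bike = full_speed_run / bike_speed # 12 s
delay = t_bike - t_car               # the ~8 seconds of eternity
```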
My endeavor to produce a digital voice mode that competes with SSB continues. For a big chunk of 2016 I took a break from this work as I was gainfully employed on a commercial HF modem project. However since December I have once again been working on a 700 bit/s codec. The goal is voice quality roughly the same as the current 1300 bit/s mode. This can then be mated with the coherent PSK modem, and possibly the 4FSK modem for trials over HF channels.
I have diverged somewhat from the prototype I discussed in the last post in this saga. Lots of twists and turns in R&D, and sometimes you just have to forge ahead in one direction leaving other branches unexplored.
Samples: hts1a, hts2a, forig, ve9qrp_10s, mmt1, vk5qi, vk5qi with 1% BER, and cq_ref – each available at both 1300 and 700C (audio players in the original post).
Note the 700C samples are a little lower in level, an artifact of the post filtering discussed below. What I listen for is intelligibility: how easy is the sample to understand compared to the reference 1300 bit/s version? Is it muffled? I feel that 700C is roughly the same as 1300. Some samples are a little better (cq_ref), some (ve9qrp_10s, mmt1) a little worse. The artifacts and frequency response are different. But close enough for now, and worth testing over air. And hey – it’s half the bit rate!
I threw in a vk5qi sample with 1% random errors, and it’s still usable. No squealing or ear damage, but perhaps more sensitive than 1300 to the same BER. I guess that’s expected: every bit means more at a lower bit rate.
Some of the samples like vk5qi and cq_ref are strongly low pass filtered, others like ve9qrp are “flat” spectrally, with the high frequencies at about the same level as the low frequencies. The spectral flatness doesn’t affect intelligibility much but can upset speech codecs. Might be worth trying some high pass (vk5qi, cq_ref) or low pass (ve9qrp_10s) filtering before encoding.
Below is a block diagram of the signal processing. The resampling step is the key: it converts the time-varying number of harmonic amplitudes to a fixed number (K=20) of samples. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.
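As a sketch of the idea (this is not the actual Octave code, and the 200-3700 Hz endpoints are assumptions for illustration), mel-spaced sample points can be generated like this:

```python
import math

def mel(f_hz):
    """Hz to mel: compresses high frequencies, like the ear."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_inv(m):
    """Mel back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_sample_freqs(k=20, f_lo=200.0, f_hi=3700.0):
    """K frequencies equally spaced on the mel scale: finely spaced
    at low frequencies, coarser steps at high frequencies."""
    m_lo, m_hi = mel(f_lo), mel(f_hi)
    return [mel_inv(m_lo + i * (m_hi - m_lo) / (k - 1)) for i in range(k)]

freqs = mel_sample_freqs()
gaps = [b - a for a, b in zip(freqs, freqs[1:])]
assert gaps == sorted(gaps)  # spacing in Hz grows with frequency
```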
The amplitudes and even the Vector Quantiser (VQ) entries are in dB, which is very nice to work in and matches the ear’s logarithmic amplitude response. The VQ was trained on just 120 seconds of data from a training database that doesn’t include any of the samples above. More work is required on the VQ design and training, but I’m encouraged that it works so well already.
Here is a 3D plot of amplitude in dB against time (300 frames) and the K=20 frequency vectors for hts1a. You can see the signal evolving over time, and the low levels at the high frequency end.
The post filter is another key step. It raises the spectral peaks (formants) and lowers the valleys (anti-formants), greatly improving the speech quality. When the peak/valley ratio is low, the speech takes on a muffled quality. This is an important area for further investigation. Gain normalisation after post filtering is why the 700C samples are lower in level than the 1300 samples. Need some more work here.
The two stage VQ uses 18 bits, energy 4 bits, and pitch 6 bits, for a total of 28 bits every 40ms frame. Unvoiced frames are signalled by a zero value in the pitch quantiser, removing the need for a voicing bit. Differential-in-time encoding is not used, to make the mode more robust to bit errors.
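The frame bit budget works out to the mode's name:

```python
bits_vq, bits_energy, bits_pitch = 18, 4, 6
frame_bits = bits_vq + bits_energy + bits_pitch  # 28 bits per frame
frame_ms = 40                                    # one frame every 40 ms
bit_rate = frame_bits * 1000 / frame_ms          # 700 bit/s, hence "700C"
```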
Days and days of very careful coding and checks at each development step. It’s so easy to make a mistake or declare victory early. I continually compared the output speech to a few Codec 2 1300 samples to make sure I was in the ball park. This reduced the subjective testing to a manageable load. I used automated testing to compare the reference Octave code to the C code, porting and testing one signal processing module at a time. Sometimes I would just printf rows of vectors from two versions and compare the two; old school but quite effective at spotting the step where the bug crept in.
The Octave simulation code can be driven by the scripts newamp1_batch.m and newamp1_fby.m, in combination with c2sim.
To try the C version of the new mode:

codec2-dev/build_linux/src$ ./c2enc 700C ../../raw/hts1a.raw - | ./c2dec 700C - - | play -t raw -r 8000 -s -2 -
Some thoughts on FEC. A (23,12) Golay code could protect the most significant bits of the 1st VQ index, pitch, and energy. The VQ could be organised to tolerate errors in a few of its bits, by sorting so that a bit error jumps to a ‘close’ entry. The extra 11 parity bits would cost 1.5dB in SNR, but might let us operate at a significantly lower SNR on an HF channel.
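The 1.5dB figure follows from spreading the same transmit power over more bits per frame:

```python
import math

frame_bits = 28    # 700C frame: 18 VQ + 4 energy + 6 pitch
parity_bits = 11   # (23,12) Golay: 12 data bits protected, 11 parity bits added

# Same power and frame duration, more bits -> less energy per bit
snr_cost_db = 10 * math.log10((frame_bits + parity_bits) / frame_bits)
# ~1.44 dB, the "1.5 dB" in the text
```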
Over the next few weeks we’ll hook up 700C to the FreeDV API, and get it running over the air. Release early and often – let’s find out if 700C works in the real world and provides a gain in performance on HF channels over FreeDV 1600. If it looks promising I’d like to do another lap around the 700C algorithm, investigating some of the issues mentioned above.
One of my favorite images below, just before impact with the ground. You can see the parachute and the tangled remains of the balloon in the background, the yellow fuzzy line is the nylon rope close to the lens.
Well done to the AREG club members (in particular Mark) for all your hard work in preparing the payloads and ground stations.
High Altitude Ballooning is a fun hobby. It’s a really nice day out driving in the country with nice people in a car packed full of technology. South Australia has some really nice bakeries that we stop at for meat pies and donuts on the way. Yum. It was very satisfying to see High Definition (HD) images immediately after take off as the balloon soared above us. Several ground stations were collecting packets that were re-assembled by a central server – we crowd sourced the image reception.
Open Source FSK modem
Surprisingly we were receiving images while mobile for much of the flight. I could see the Eb/No move up and down about 6dB over 3 second cycles, which we guess is due to rotation or swinging of the payload under the balloon. The antennas used are not omnidirectional so the change in orientation of tx and rx antennas would account for this signal variation. Perhaps this can be improved using different antennas or interleaving/FEC.
Our little modem is as good as the Universe will let us make it (near perfect performance against theory) and it lived up to the results predicted by our calculations and tested on the ground. Bill, VK5DSP, developed a rate 0.8 LDPC code that provides 6dB coding gain. We were receiving 115 kbit/s data on just 50mW of tx power at ranges of over 100km. Our secret is good engineering, open source software, $20 SDRs, and a LNA. We are outperforming commercial chipsets with open source.
The work on our wonderful little FSK modem continues. Brady O’Brien, KC9TPA has been refactoring the code for the past few weeks. It is now more compact, has a better command line interface, and most importantly runs faster, so we are getting close to running high speed telemetry on a Raspberry Pi and fully embedded platforms.
I think we can get another 4dB out of the system, bringing the MDS down to -116dBm – if we use 4FSK and lose the RS232 start/stop bits. What we really need next is custom tx hardware for open source telemetry. None of the chipsets out there are quite right, and our demod outperforms them all so why should we compromise?
The project has had some interesting spin offs. The members of AREG are getting really interested in SDR on Linux resulting in a run on recycled laptops from ASPItech, a local electronics recycler!
Today I was part of the AREG team that flew Horus 37 – a High Altitude Balloon flight. The payload included hardware sending Slow Scan TV (SSTV) images at 115 kbit/s, based on the work Mark and I documented in this blog post from earlier this year.
It worked! Using just 50mW of transmit power and open source software we managed to receive SSTV images at bit rates of up to 115 kbit/s:
More images here.
Here is a screen shot of the Python dashboard for the FSK demodulator that Mark and Brady have developed. It gives us some visibility into the demod state and signal quality:
(View Image in your browser to get a larger version)
The Eb/No plot shows the signal strength moving up and down over time, probably due to motion of our car. The Tone Frequency Estimate shows a solid lock on the two FSK frequencies. The centre of the Eye Diagram looks good in this snapshot.
Octave and C LDPC Library
There were some errors in received packets, which appear as stripes in the images:
On the next flight we plan to add a LDPC FEC code to protect against these errors and allow the system to operate at signal levels about 8dB lower (more than doubling our range).
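The range claim follows from free space path loss: received power falls as the square of distance, so an 8dB link improvement scales range by 10^(8/20). A sketch:

```python
coding_gain_db = 8.0
# Free space: power falls as 1/d^2, so 20*log10(range_factor) = gain in dB
range_factor = 10 ** (coding_gain_db / 20)
# ~2.5x, i.e. "more than doubling our range"
```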
Bill, VK5DSP, has developed a rate 0.8 LDPC code designed for the packet length of our SSTV software (2064 bits/packet including checksum). This runs with the CML library – C software designed to be called from Matlab via the MEX file interface. I previously showed how the CML library can be used in GNU Octave.
I like to develop modem algorithms in GNU Octave, then port to C for real time operation. So I have put some time into developing Octave/C software to simulate the LDPC encoded FSK modem in Octave, then easily port exactly the same LDPC code to C. For example the write_code_to_C_include_file() Octave function generates a C header file with the code matrices and test vectors. There are test functions that use an Octave encoder and C decoder and compare the results to an Octave decoder. It’s carefully tested and bit exact to 64-bit double precision! Still a work in progress, but has been checked into codec2-dev SVN:

ldpc_fsk_lib.m – Library of Octave functions to support LDPC over FSK modems
test_ldpc_fsk_lib.m – Test and demo functions for Octave and C library code
mpdecode_core.c – CML MpDecode.c LDPC decoder functions re-factored
H2064_516_sparse.h – Sample C include file that describes Bill’s rate 0.8 code
ldpc_enc.c – Command line LDPC encoder
ldpc_dec.c – Command line LDPC decoder
drs232_ldpc.c – Command line SSTV deframer and LDPC decoder
This software might be useful for others who want to use LDPC codes in their Matlab/Octave work, then run them in real time in C. With the (2064,512) code, the decoder runs at about 500 kbit/s on one core of my old laptop. I would also like to explore the use of these powerful codes in my HF Digital Voice work.
SSTV Hardware and Software
Mark did a fine job putting the system together and building the payload hardware and its enclosure:
It uses a Raspberry Pi, with a FSK modulator we drive from the Pi’s serial port. The camera aperture is just visible at the front. Mark has published the software here. The tx side is handled by a single Python script. Here is the impressive command line used to start the rx side running:

#!/bin/bash
#
# Start RX using a rtlsdr.
#
python rx_gui.py &
rtl_sdr -s 1000000 -f 441000000 -g 35 - | csdr convert_u8_f | \
  csdr bandpass_fir_fft_cc 0.1 0.4 0.05 | \
  csdr fractional_decimator_ff 1.08331 | \
  csdr realpart_cf | csdr convert_f_s16 | \
  ./fsk_demod 2XS 8 923096 115387 - - S 2> >(python fskdemodgui.py --wide) | \
  ./drs232_ldpc - - | \
  python rx_ssdv.py --partialupdate 16
We have piped together a bunch of command line utilities on the Linux command line. A hardware analogy is a bunch of electronic boards on a work bench connected via coaxial jumper leads. It works quite well and allows us to easily prototype SDR radio systems on Linux machines from a laptop to a RPi. However down the track we need to get it all “in one box” – a single, cross platform executable anyone can run.
We did some initial tests with the LDPC decoder today but hit integration issues that flat lined our CPU. Next steps will be to investigate these issues and try LDPC encoded SSTV on the next flight, which is currently scheduled for the end of October. We would love to have some help with this work, e.g. optimizing and testing the software. Please let us know if you would like to help!
Mark’s blog post on the flight
AREG blog post detailing the entire flight, including set up and recovery
High Speed Balloon Data Link – Development and Testing of the SSTV over FSK system
All your Modems are belong to Us – The origin of the “ideal” FSK demod used for this work.
FreeDV 2400A – The C version of this modem developed by Brady and used for VHF Digital Voice
LDPC using Octave and CML – using the CML library LDPC decoder in GNU Octave
A friend of mine is developing a commercial OQPSK modem and was a bit stuck. I’m not surprised as I’ve had problems with OQPSK in the past as well. He called to run a few ideas past me and I remembered I had developed a coherent GMSK modem simulation a few years ago. Turns out MSK and friends like GMSK can be interpreted as a form of OQPSK.
A few hours later I had a basic OQPSK modem simulation running. At that point we sat down for a bottle of Sparkling Shiraz and some curry to celebrate. The next morning, slightly hung over, I spent another day sorting out the diabolical phase and timing ambiguity issues to make sure it runs at all sorts of timing and phase offsets.
So oqsk.m is a reference implementation of an Offset QPSK (OQPSK) modem simulation, written in GNU Octave. It’s complete, including timing and phase offset estimation, and phase/timing ambiguity resolution. It handles phase, frequency, timing, and sample clock offsets. You could run it over real world channels.
Its performance is bang on ideal for QPSK:
I thought it would be useful to publish this blog post, as OQPSK modems are hard. I’ve had a few run-ins with these beasts over the years and had headaches every time. This business about the I and Q arms being offset by half a symbol makes phase synchronisation very hard and does your head in. Here is the Tx waveform; you can see the half symbol time offset in the instants where the I and Q symbols change:
As this is unfiltered OQPSK, the Tx waveform is just the Tx symbols passed through a zero-order hold. That’s a fancy way of saying we keep the symbol values constant for M=4 samples, then change them.
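A zero-order hold is trivial to sketch (illustrative Python, not the oqsk.m code):

```python
def zero_order_hold(symbols, m=4):
    """Hold each symbol value constant for M samples, then jump
    to the next symbol (unfiltered PSK transmit waveform)."""
    return [s for s in symbols for _ in range(m)]
```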
There are very few complete reference implementations of high quality modems on the Internet. Providing them has become a mission of mine. By “complete” I mean pushing past the textbook definitions to include real world synchronisation. By “high quality” I mean tested against theoretical performance curves with different channel impairments. Or even tested at all. OQPSK is a bit obscure and it’s even harder to find any details of how to build a real world modem. Plenty of information on the basics, but not the nitty gritty details like synchronisation.
The PLL and timing loop simultaneously provide phase and timing estimation. I derived it from a similar algorithm used for the GMSK modem simulation. Unusually for me, the operation of the timing and phase PLL loop is still a bit of a mystery. I don’t quite fully understand it. I would welcome more explanation from any readers who are familiar with it. Parts of it I understand (and indeed engineered) – the timing is estimated on blocks of samples using a non-linearity and DFT, and I worked through the PLL equations a few years ago. It’s also a bit old school; I’m more familiar with feed-forward type estimators, not something this “analog”. Oh well, it works.
Here is the phase estimator PLL loop doing its thing. You can see the Digitally Controlled Oscillator (DCO) phase tracking a small frequency offset in the lower subplot:
Phase and Timing Ambiguities
The phase/timing estimation works quite well (great scatter diagram and BER curve), but can sync up with some ambiguities. For example the PLL will lock on the actual phase offset plus integer multiples of 90 degrees. This is common with phase estimators for QPSK and it means your constellation has been rotated by some multiple of 90 degrees. I also discovered that combinations of phase and timing offsets can cause confusion. For example a 90 degree phase shift swaps I and Q. As the timing estimator can’t tell I from Q it might lock onto a sequence like …IQIQIQI… or …QIQIQIQ…. leading to lots of pain when you try to de-map the sequence back to bits.
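The 90 degree ambiguity is easy to see with complex symbols: rotating by 90 degrees multiplies by j, which swaps I and Q (with a sign flip), so the demapper can't tell the rotated constellation from the true one without a known sequence. A sketch, not taken from oqsk.m:

```python
def rotate90(sym):
    """Apply a 90 degree phase ambiguity: multiply the complex symbol by j."""
    return sym * 1j

s = 1 - 1j            # I=+1, Q=-1
r = rotate90(s)       # becomes 1+1j: new I = -old Q, new Q = old I
```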
So I spent a Thursday exploring these ambiguities. I ended up correlating the known test sequence with the I and Q arms separately, and worked out how to detect IQ swapping and the phase ambiguity. This was tough, but it’s now handling the different combinations of phase, frequency and timing offsets that I throw at it. In a real modem with unknown payload data a Unique Word (UW) of 10 or 20 bits at the start of each data frame could be used for ambiguity resolution.
The modem lacks an initial frequency offset estimator, but the PLL works OK with small freq offsets like 0.1% of the symbol rate. It would be useful to add an outer loop to track these frequency offsets out.
As it uses feedback loops, it’s not super fast to sync, and is best suited to continuous rather than burst operation.
The timing recovery might need some work for your application, as it just uses the nearest whole sample. So for a small over-sample rate M=4, a timing offset of 2.7 samples will mean it chooses sample 3, which is a bit coarse, although given our BER results it appears unfiltered PSK isn’t too sensitive to timing errors. Here is the timing estimator tracking a sample clock offset of 100ppm, you can see the coarse quantisation to the nearest sample in the lower subplot:
For small M, a linear interpolator would help. If M is large, say 10 or 20, then using the nearest sample will probably be good enough.
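A sketch of the two timing options (illustrative Python, not the oqsk.m code):

```python
def nearest_sample(timing_est):
    """Coarse timing: round the estimate to the nearest whole sample,
    as the current simulation does."""
    return round(timing_est)

def linear_interp(samples, timing_est):
    """Finer timing for small M: linearly interpolate between the
    two adjacent samples."""
    i = int(timing_est)
    frac = timing_est - i
    return (1 - frac) * samples[i] + frac * samples[i + 1]
```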
This modem is unfiltered PSK, so it has broad lobes in the transmit spectrum. Here is the Tx spectrum at Eb/No=4dB:
The transmit filter is just a zero-order hold and the receive filter an integrator. Raised cosine filtering could be added if you want a narrower bandwidth. This will probably make it more sensitive to timing errors.
Like everything with modems, test it by measuring the BER. Please.
oqsk.m GNU Octave OQPSK modem simulation
GMSK Modem Simulation blog post that was used as a starting point for the OQPSK modem. With lots more reference links.
Use #lcapapers to tell Linux.conf.au what you want to see in 2018
Michael Still and Michael Davies get the Rusty Wrench award
Karaoke – Jack Skinner
- Talk with random slides
- End to end encrypted communication system
- No entity owns your conversations
- Bridge between walled gardens (eg IRC and Slack)
- In Very late Beta, 450K user accounts
- Run or Write your own servers or services or client
Cooked – Pete the Pirate
- How to get into Sous Vide cooking
- Create home kit
- Beaglebone Black
- Rice cooker, fish tank air pump.
- Also use to germinate seeds
- Also use this system to brew beer
Emoji Archeology 101 – Russell Keith-Magee
- 1963 Happy face created
Continuously Delivering Security in the Cloud – Casey West
- This is a talk about operation excellence
- Why are system attacked? Because they exist
- Resisting Change to Mitigate Risk – It’s a trap!
- You have a choice
- Going fast with unbounded risk
- Going slow to mitigate risk
- Advanced Persistent Threat (APT) – The breach that lasts for months
- Successful attacks have
- Leaked or misused credentials
- Misconfigured or unpatched software
- Changing very little slowly helps all three of the above
- A moving target is harder to hit
- Cloud-native operability lets platforms move faster
- Composable architecture (serverless, microservices)
- Automated Processes (CD)
- Collaborative Culture (DevOps)
- Production Environment (Structured Platform)
- The 3 Rs
- Rotate credentials every few minutes or hours
- Credentials will leak, Humans are weak
- “If a human being generates a password for you then you should reject it”
- Computers should generate it, every few hours
- Repave every server and application every few minutes/hours
- Implies you have things like LBs that can handle servers adding and leaving
- Container lifecycle
- Note: No “change” step
- A server that doesn’t exist isn’t being compromised
- Regularly blow away running containers
- Repave ≠ Patch
- uptime <= 3600
- Repair vulnerable runtime environments every few minutes or hours
- What stuff will need repair?
- Runtime Environments (eg rails)
- Operating Systems
- The Future of security is build pipelines
- Try to put credential rotation and upstream imports into your builds
- Embracing Change to Mitigate Risk
- Less of a Trap (in the cloud)
- Lenovo Thinkpad X230T
- Bought Aug 2013
- Original capacity 62 Wh – 5 hours at 12W
- Capacity down to 1.9Wh – 10 minutes
- 45N1079 replacement bought
- DRM on laptop claimed it was not genuine and refused to recharge it.
- Batteries talk SBS protocol to laptop
- SMBus port and SMClock port
- Throw Away
- Replace Cells
- Easy to damage
- Might not work
- Hack firmware on battery
- Talk at DEFCON 19
- But this is different model from that
- Couldn’t work out how to get to firmware
- Added something in between
- Update the firmware on the machine
- Embeded Controller (EC)
- Looking though the firmware for Battery Authentication
- Found a routine that looked plausible
- But other stuff was encrypted
- EC Update process
- BIOS update puts EC update in spare flash memory area
- Afterwards the BIOS grabs that and applies the update
- Pulled apart the BIOS, found the EcFwUpdateDxe.efi routine that updates the EC
- Found that stuff sent to the EC was still encrypted
- Decryption done by the flasher program
- Flasher program
- Encrypted itself (decrypted by the current firmware)
- JTAG interface for flashing debug
- Physically difficult to get to
- Luckily Russian Hackers have already grabbed a copy
- The Decryption function in the Flasher program
- Appears to be blowfish
- Found the key (in expanded form) in the firmware
- Enough for the encryption and decryption
- Outer checksum checked by the BIOS
- Post-decryption checksum – checked by the flasher (bricks the EC if bad)
- Section checksums (also brick)
- NOP out the checks in the code
- NOP out another check that sometimes failed
- Different error message
- Found a second authentication process
- NOP out the 2nd challenge in the BIOS
- Posted writeup, posted to hacker news
- 1 million page views
- Uploaded code to github
- Other people doing stuff with the embedded controller
- No longer works on latest laptops, EC firmware appears to be signed
- Anything can be broken with physical access and significant determination
- Australian Elections use a lot of software
- Encoding and counting preferential votes
- For voting in polling places
- For voting over the internet
- How do we know this software is correct
- The paper ballot box is engineered around a series of problems
- In the past people brought their own voting paper
- The Australian Ballot is used in many places (eg NZ)
- The French use a different method with envelopes and glass boxes
- The US has had lots of problems and different ways
- Four cases studies in Aus
- vVote: Victoria
- Vic state election 2014
- 1121 votes for overseas Australians voting in Embassies etc
- Based on Pret a Voter
- You can verify that what you voted was what went through
- Source code on bitbucket
- Crypto signed, verified, open source, etc
- Not going forward
- Didn’t get the electoral commission’s input and buy-in.
- A little hard to use
- iVote: NSW and WA
- 280,000 votes over the Internet in the 2015 NSW state election (around 5-6% of total votes)
- Vote on a device of your choosing
- Vote is encrypted and sent over the Internet
- Get a receipt number
- Exports to a verification service. You can telephone them, give them your number, and they will read back your votes
- Website used 3rd-party analytics provider with export-grade crypto
- Vulnerable to injection of content, votes could be read or changed
- Fixed (after 66k votes cast)
- NSW iVote really wasn’t verifiable
- About 5000 people called into service and successfully verified
- How many tried to verify but failed?
- Commission said 1.7% of electors verified and none identified any anomalies with their vote (Mar 2015)
- How many tried and failed? “in the 10s” (Oct 2015)
- Parliamentary inquiry asked how many failed? Seven or five (Aug 2016)
- How many failed to retrieve any vote? 627 (Aug 2016)
- This is a failure rate of about 10%
- Later in 2016 it was put at around 200 unique failures
- Vote Counting software
- Errors in NSW counting
- In the NSW Legislative Council count, redistributed votes are selected at random
- No source code for this
- Use same source code for lots of other elections
- Re-ran some of the counts and found the randomness could change results. Found one case that most likely cost somebody a seat, but not until 4 years later.
- Generate the random key publicly
- Open up the source code
- The electoral people didn’t want to do this.
- In the 2016 local government count we found 2 more bugs
- One candidate should have won with 54% probability but didn’t
- The Australian Senate Count
- The AEC consistently refuses to reveal the source code
- The Senate data is released, so you can redo the count yourself and any bugs will become evident
- What about digitising the ballots?
- How would we know if that wasn’t working?
- Only by auditing the paper evidence
- The Americans have a history of auditing the paper ballots
- But the Australian vote is a lot more complex, so auditing isn't 100% solved yet
- Stuff is online
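The preferential counting mentioned above can be illustrated with a minimal instant-runoff count. This is a simplified sketch of the general idea, not the actual NSW or Senate algorithm (which adds quotas, fractional transfers and, as noted above, random sampling of redistributed votes).

```python
# Minimal instant-runoff (preferential) count: each ballot ranks
# candidates; each round the weakest candidate is eliminated and their
# ballots flow to the next surviving preference, until someone holds an
# absolute majority of continuing ballots.
from collections import Counter

def instant_runoff(ballots):
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate;
        # ballots with no surviving preference are exhausted.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader  # absolute majority of continuing ballots
        candidates.remove(min(tally, key=tally.get))  # eliminate weakest

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))
```

Even at this toy scale the elimination order matters, which is exactly why the tie-breaking and random-sampling rules in the real counting software can change results.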
It is with a little sadness, but a lot of pride, that I announce my retirement from GovHack, at least retirement from the organising team. It has been an incredible journey with a lot of amazing people along the way and I will continue to be its biggest fan and supporter. I look forward to actually competing in future GovHacks and just joining in the community a little more than is possible when you are running around organising things! I think GovHack has grown up and started to walk, so as any responsible parent, I want to give it space to grow and evolve with the incredible people at the helm, and the new people getting involved.
Just quickly, it might be worth reflecting on the history. The first “GovHack” event was a wonderfully run hackathon by John Allsopp and Web Directions as part of the Gov 2.0 Taskforce program in 2009. It was small with about 40 or so people, but extremely influential and groundbreaking in bringing government and community together in Australia, and I want to thank John for his work on this. You rock! I should also acknowledge the Gov 2.0 Taskforce for funding the initiative, Senator at the time Kate Lundy for participating and giving it some political imprimatur, and early public servants who took a risk to explore new models of openness and collaboration such as Aus Gov CTO John Sheridan. A lot of things came together to create an environment in which community and government could work together better.
Over the subsequent couple of years there were heaps of “apps” competitions run by government and industry. On the one hand it was great to see experimentation; however, unfortunately, several events did silly things like suing developers for copyright infringement, requiring NDAs for participation, or setting actual work for development rather than experimentation (which arguably amounts to just getting free labour). I could see the tech community, my people, starting to disengage and become entirely and understandably cynical of engaging with government. This would be a disastrous outcome because governments need geeks. The instincts, skills and energy of the tech community can help reinvent the future of government, so I wanted to right this wrong.
In 2012 I pulled together a small group of awesome people: some from that first GovHack event, some from BarCamp, some I just knew. We asked John if we could use the name (thank you again John!) and launched a voluntary, community-run, annual and fun hackathon, by hackers for hackers (and if you are concerned by that term, please check out what a hacker is). We knew if we did something awesome, it would build the community up, encourage governments to open data, show off our awesome technical community, and provide a way to explore tricky problems in new and interesting ways. But we had to make it an awesome event for people to participate in.
It has been wonderful to see GovHack grow from such humble origins to the behemoth it is today, whilst also staying true to the original purpose, and true to the community it serves. In 2016 (for which I was on maternity leave) there were over 3000 participants in 40 locations across two countries with active participation by Federal, State/Territory and Local Governments. There are always growing pains, but the integrity of the event and commitment to community continues to be a huge part of the success of the event.
In 2015 I stepped back from the lead role onto the general committee, and Geoff Mason did a brilliant job as Head Cat Herder! In 2016 I was on maternity leave and watched from a distance as the team and event continued to evolve and grow under the leadership of Richard Tubb. I feel now that it has its own momentum, strong leadership, an amazing community of volunteers and participation and can continue to blossom. This is a huge credit to all the people involved, to the dedicated national organisers over the years, to the local organisers across Australia and New Zealand, and of course, to all the community who have grown around it.
A few days ago, a woman came up to me at linux.conf.au and told me about how she had come to Australia not knowing anyone, and gone to GovHack after seeing it advertised at her university, and she made all her friends and relationships there and is so extremely happy. It made me teary, but also was a timely reminder. Our community is amazing. And initiatives like GovHack can be great enablers for our community, for new people to meet, build new communities, and be supported to rock. So we need to always remember that the projects are only as important as how much they help our community.
I continue to be one of GovHack’s biggest fans. I look forward to competing this year and seeing where current and future leadership takes the event and they have my full support and confidence. I will be looking for my next community startup after I finish writing my book (hopefully due mid year :)).
If you love GovHack and want to help, please volunteer for 2017, consider joining the leadership, or just come along for fun. If you don’t know what GovHack is, I’ll see you there!
Keeping Linux Great
- Previous keynotes have posed questions; I’ll pose answers
- What is the future of free and open source software? It has no future
- FLOSS is yesterday’s gravy
- Based on where the technology is today. How would FLOSS work with punch cards?
- Other people have said similar things
- Software, Linux and similar all going down in google trends
- But “app” is going up
- Small pieces loosely joined
- Linux used to be great because you could pipe stuff to little programs
- That is what is happening to software
- Example – share a page to another app in a mobile interface
- Apps no longer need to send mail themselves; they just have to talk to the mail app
- So What should you do?
- Vendor all your dependencies: just copy everyone else’s code into your repo (and list their names if it is BSD) so you can ship everything in one blob (eg Android)
- Components end up huge (>5 million or even >20 million LOC), and there are only a handful of them
- At the other end apps are smaller since they can depend on the OS or other Apps for lots of functionality so they don’t have to write it themselves.
- Example node with thousands of dependencies
- App Freedom
- “Advanced programming environments conflate the runtime with the devtime” – Bret Victor
- Open Source software rarely does that
- “It turns out that Object Orientation didn’t work out; it is another legacy we are stuck with”
- Having the source code is nice but it is not a requirement. Access to the runtime is what you want. You need to get it where people are using it.
- Liberal Software
- But not everybody wants to be a programmer
- 75% comes from 6 generic web applications (collection, storage, reservation, etc)
- A lot of functionality requires big data or huge amounts of machines or is centralised so open sourcing the software doesn’t do anything useful
- If it was useful it could be patented; if it was not useful but literary, then it was just copyright
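The "pipe stuff to little programs" point earlier in the talk can be shown in miniature: the classic Unix style of composing small single-purpose tools, here driven from Python. This sketch assumes a POSIX system with `sort` and `uniq` on the PATH.

```python
# Small pieces loosely joined: feed text through `sort`, then pipe the
# sorted output into `uniq -c` to count duplicate lines, exactly as a
# shell pipeline `sort | uniq -c` would.
import subprocess

words = "apple\nbanana\napple\ncherry\n"
sorted_out = subprocess.run(["sort"], input=words,
                            capture_output=True, text=True).stdout
counted = subprocess.run(["uniq", "-c"], input=sorted_out,
                         capture_output=True, text=True).stdout
print(counted)
```

Each program knows nothing about the other; the plain-text stream between them is the whole interface, which is the property the talk argues apps are now rediscovering.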