# Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

### sthbrx - a POWER technical blog: Learning From the Best

Wed, 2016-03-02 23:00

When I first started at IBM I knew how to alter JavaScript and compile it, thanks to my many years playing Minecraft (yes, I am a nerd). Now I have levelled up! I can understand and use Bash, assembly, Python, Ruby and C. Writing full programs in any of these languages is a very difficult prospect, but nonetheless achievable with what I know now, whereas two weeks ago it would have been impossible. Working here even for a short time has been an amazing learning experience for me, plus it looks great on a resume! Learning how to write C has been one of the most useful things I have learnt, and I have already written programs for use both in and out of IBM. The first program I wrote was the standard newbie 'hello world' exercise. I have since expanded on that program so that it now says, "Hello world! This is Callum Scarvell". This is done using strings that store my name as a set of characters. Then I used a header file called conio.h or curses.h to recognise 'cal' as the short form of my name, so now I can abbreviate my name more easily. Here's what the code looks like:

```c
#include <stdio.h>
#include <string.h>
#include <curses.h>

int main()
{
    printf("Hello, World! This Is cal");
    char first_name[] = "Callum";
    char last_name[] = "Scarvell";
    char name[100];

    /* testing code */
    if (strncmp(first_name, "Callum", 100) != 0)
        return 1;
    if (strncmp(last_name, "Scarvell", 100) != 0)
        return 1;

    last_name[0] = 'S';
    sprintf(name, "%s %s", first_name, last_name);
    if (strncmp(name, "Callum Scarvell", 100) == 0) {
        printf("This is %s\n", name);
    }
    /* printf("actual string is -%s-\n", name); */
    return 0;
}

void Name_Rec()
{
    int i, j, k;
    char a[30], b[30];

    clrscr();
    puts("Callum Scarvell : \n");
    gets(a);
    printf("\n\ncal : \n\n%c", a[0]);
    for (i = 0; a[i] != '\0'; i++)
```

The last two lines have been left out to make it a challenge to recreate. Feel free to test your own knowledge of C to finish the program! My ultimate goal for this program is to make it generate the text 'Hello World! This is Callum Scarvell's computer. Everybody else beware!' (which is easy), then import it into the Linux kernel at the profile login screen. Then I will have my own unique copy of the kernel, and I could call myself an LSD (Linux system developer). That's just a small pet project I have been working on in my time here.

Another pet project of mine is my own very altered copy of the open source game NetHack. It's written in C as well and is very easy to tinker with. I have been able to do things like set my character's starting hit points to 40, give my character awesome starting gear, and keep save files even after the death of a character. These are just a couple of small projects that made learning C so much easier and a lot more fun. And the whole time I was learning C, Ruby, or Python I had some of the best system developers in the world showing me the ropes. This made things even easier, and much more comprehensible. So really it's no surprise that in three short weeks I managed to learn almost four different languages and how to run a blog from the raw source code. The knowledge given to me by the OzLabs team is priceless and invaluable. I will forever remember all the new faces and what they taught me. And the Linux Gods will answer your prayers, whether by e-mail or in person, because they walk among us! So if you ever get an opportunity to do work experience, an internship or a graduate placement, take the chance to do it, because you will learn many things that are not taught in school.

If you would like to review the source code for the blog or my work in general you can find me at CallumScar.github.com or find me on Facebook, Callum Scarvell.

And a huge thank you to the OzLabs team for taking me on for the three weeks and for teaching me so much! I am forever indebted to everyone here.

### David Rowe: Project Whack a Mole Part 1

Wed, 2016-03-02 14:31

As a side project I’ve been working on a Direction Finding (DF) system. Although it relies on phase it’s very different to Doppler. It uses a mixer to frequency multiplex signals from two antennas into a SDR. Some maths works out the phase difference between the antennas, which can be used to compute a bearing.

The use case is tracking down a troll who is annoying us on our local repeater. He pops up for a few seconds at a time, like the game of Whack a Mole. It’s also fun to work on a new(ish) type of DF system, and play with RF.

I’ve got the system measuring phase angles between two antennas on the bench, so thought I better come up for air and blog on my progress so far.

Hardware

Here is a block diagram of the hardware:

The trick is to get signals from two antennas into the SDR, in such a way that the phase difference can be measured. One approach is to phase lock two or more SDRs. My approach is to frequency shift the a2 signal, which is then summed with a1 and sent to the SDR. I used a Minicircuits ADE-1 mixer (left) and home made hybrid combiner (centre):

For testing on the bench I use a sig-gen and a splitter (right) to generate the a1 and a2 signals. I can vary the phase by varying the cable lengths.

Here is a spec-an plot showing a1 in the centre and the a2 “sidebands”, at +/- 32kHz:

The LO frequency of 32kHz was chosen as it (i) is greater than the 16kHz bandwidth of FM signals, (ii) means we can use a modest sampling rate of 192kHz to capture the three signals, and (iii) lets us use a common "watch" crystal to generate it. The LO input on the mixer is rated down to 500kHz but works OK at 32kHz, with a conversion loss of 9dB.
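A quick sanity check of this frequency plan. The baseband positions below (a1 at 48kHz, a2 images at 16kHz and 80kHz) are taken from the off-air capture described later; everything else is arithmetic:

```python
# Sanity-check the 32 kHz LO / 192 kHz sample-rate frequency plan.
fm_bw_khz = 16.0      # FM signal bandwidth
lo_khz = 32.0         # local oscillator frequency
fs_khz = 192.0        # SDR complex sample rate, usable span +/- fs/2

a1 = 48.0                              # a1 at baseband (example capture)
a2_low, a2_high = a1 - lo_khz, a1 + lo_khz

# (i) the LO shift must exceed the FM bandwidth so the signals don't overlap
assert lo_khz > fm_bw_khz

# (ii) all three signals, plus half the FM bandwidth, fit inside +/- fs/2
for f in (a1, a2_low, a2_high):
    assert f + fm_bw_khz / 2 < fs_khz / 2

print(a2_low, a2_high)  # 16.0 80.0
```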

Signal Processing Design

OK so we have the two signals a1 and a2 present at each antenna. Theta is an arbitrary phase offset that both signals experience due to propagation time from the transmitter, and other phase shifts common to both signals, like the SDR's signal processing. Phi is the phase difference between a1 and a2; this is what we want to compute. Alpha is the phase offset of the local oscillator. Only a2 experiences this phase shift, as it passes through the mixer. Omega-l is the local oscillator frequency, and omega is the carrier frequency. The summed signal presented to the SDR input is called r, which we can derive:

Note we assume the two signals a1 and a2 are complex, but the mixer is real (double sided). So there are a total of three signals at the SDR input. Now let's mess about with the phase terms of the three signals that make up r:

So the output is 2 times phi, the phase difference between the two antennas. Yayyyyyy. The 2phi output also implies an ambiguity of 180 degrees, which is what we would expect with just 2 antennas. I’ll worry about that later, e.g. with a third channel or mounting the hardware on the edge of our city such that bearings are only expected from one hemisphere.
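The whole chain can be checked numerically. This is a toy simulation, not the real df_mixer.m: the frequencies and phase values below are made up for illustration, and the three components are extracted with an FFT rather than band pass filters. The common offsets theta and alpha should cancel, leaving 2 phi:

```python
import numpy as np

fs = 192000
n = 19200                       # 0.1 s; frequencies chosen to land on exact FFT bins
t = np.arange(n) / fs

w = 2 * np.pi * 48000           # carrier, already at baseband
wl = 2 * np.pi * 32000          # local oscillator
phi = 0.7                       # antenna phase difference we want to recover
theta = 1.1                     # common phase offset (should cancel)
alpha = 0.3                     # LO phase offset (should cancel)

a1 = np.exp(1j * (w * t + phi + theta))
a2 = np.exp(1j * (w * t + theta))
r = a1 + a2 * np.cos(wl * t + alpha)    # summed signal at the SDR input

R = np.fft.fft(r)

def bin_of(f):
    return int(round(f * n / fs))

p1 = np.angle(R[bin_of(48000)])         # a1
p2 = np.angle(R[bin_of(48000 + 32000)]) # upper a2 image
p3 = np.angle(R[bin_of(48000 - 32000)]) # lower a2 image

# 2*p1 - (p2 + p3) = 2(phi+theta) - (alpha+theta) - (theta-alpha) = 2*phi
two_phi = (2 * p1 - (p2 + p3)) % (2 * np.pi)
print(two_phi)  # 1.4, i.e. 2 * 0.7
```

Averaging this estimate over many samples (or a long FFT, as here) is where the processing gain discussed below comes from.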

There are several ways to implement the signal processing. I like the sample by sample approach:

It’s all implemented in df_mixer.m. This can run with a simulated signal or input from a HackRF SDR.

Walk Through and Results

Let's look at the algorithm in action, with a1 and a2 generated on the bench using a splitter and two lengths of coax to set the phase difference. The signal generator was set to 439.048MHz and -30dBm. We sample about 1 second using the HackRF SDR, then run the Octave script:

```
hackrf_transfer -r df1.iq -f 439000000 -n 10000000 -l 20 -g 40
octave:25> df_mixer.m
```

Here is the input signal; the wanted signals are at 48kHz (a1), and at 16kHz and 80kHz (a2). We pass that through Band Pass Filters (BPFs) to get the three signals. After the signal processing magic we can plot the output for each sample on the complex plane. It's like a scatter plot, and gives us a feel for how reliable the phase estimates are. We can also find the angle for each sample and plot a histogram. The tighter this histogram is, the more confidence we have.

Testing with Cables

So how to test? I ended up inserting short lengths of transmission line, using adapters and attenuators. I guessed the velocity as 2/3 the speed of light. This spreadsheet summarises the results.

When I insert adapters in the opposite antenna line the phase angle reduces. I inserted a 10dB attenuator and the phase angle changed roughly in proportion to the attenuator's length. It worked just fine despite the amplitude difference. So it's doing something sensible. Wow!

Discussion

The central carrier and two "sidebands" look a lot like an AM signal. I initially thought I could demodulate it using envelope detection. However that was a flop, so I got the paper and pencil out and worked out the math. This was challenging, but I do enjoy a good engineering puzzle. After a few goes over several days I came up with the math above, and tested it using a simulation.

Note we don't really care what sort of modulation the signal has. It could be a carrier, FM, or SSB. We just look at the phase, so it's insensitive to amplitude differences between the two signals. Any frequency and phase modulation is present on both a1 and a2 and is removed by the signal processing, leaving just the phase difference term. So the algorithm essentially strips modulation.

This means "processing gain" is possible. We can make phase estimates on every sample over, say, 1 second, then average the phase estimates.
This may lead to a good phase estimate at SNRs lower than we can demodulate the signal: plucking DF bearings out of the noise, just like the FFT of a weak sine wave in noise creates a nice sharp line if you sample the signal long enough.

This system is phase based, so it will be affected by multipath signals. Mounting the system with a direct line of sight to the transmitter is a good idea. The histogram gives us a confidence measure, and may be useful in detecting multipath or multiple bearings. Presenting this histogram information visually on a 3D or intensity map would be a useful area to explore.

The absolute phase estimates are sensitive to frequency offset, for reasons I haven't worked out yet. The HackRF is about 4kHz off my sig-gen at 439MHz, which shifts the phase estimates. So it might need tuning or re-calibration against a known bearing.

I haven't worked out where the "noise" in the scatter diagram comes from. The phase is the product of several non-linearities, so we expect it to jump around a bit. Given we are just interested in phase, perhaps a limiter or three could be included at some point in the processing.

Off Line Direction Finding

One neat possibility with this approach is off line DF. Imagine every time the squelch opens, we log the SDR baseband Fs=192kHz signal onto a hard disk. A 1 TByte disk would store 720 hours at Fs=192kHz (2 byte IQ samples). We can then use a sound editor to jump to the position where our Mole appears for a few seconds, and run the DF signal processing on that segment. We can tweak parameters, even run it a few times, to improve the bearing. We can compare this to the same signal received at different sites across town, to get a cross bearing.

We can do this off line DF-ing days later, or download the samples and process them at a location remote from the DF site. It also provides a documented record for the ACMA, should evidence be required for prosecution.
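The 720 hour figure checks out, assuming 2 bytes per complex sample (1 byte I plus 1 byte Q, as the HackRF's 8-bit ADC produces):

```python
# Verify the off-line DF storage estimate: hours of recording on a 1 TByte disk.
fs = 192_000            # samples per second
bytes_per_sample = 2    # assumed: 1 byte I + 1 byte Q (8-bit HackRF samples)
disk_bytes = 1e12       # 1 TByte

seconds = disk_bytes / (fs * bytes_per_sample)
hours = seconds / 3600
print(round(hours))     # ~723 hours, i.e. roughly the 720 quoted
```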
Further Work

My next step is to configure the HackRF for high gain so I can try some off-air signals. The repeater output is about -70dBm inside my home office, so that will do for a start. If that works I will try DF-ing repeater input signals, perhaps with the hardware mounted on a mast outside. I have a UHF BPF I will insert to prevent overload from out of band signals. I'm hoping it will be as accurate as Doppler systems, e.g. capable of resolving say 16 different bearings on a "ring of LEDs" or similar virtual display. I bet there are many issues I need to sort out, and perhaps a show stopper lurking somewhere. We shall see! It's good to experiment. Failure is an option.

We could simplify the hardware significantly. Other mixers could be tried. The circuit is insensitive to levels, so the combining could be very simple; we don't need a hybrid, just connect the two signals to the same node. If the mixer has poor RF-IF isolation (carrier feed-through) there could be a problem. This could be alleviated by ensuring a1 is > 10dB above the a2 carrier feed-through. A very simple approach would be using a UHF transistor for the 32kHz clock oscillator, and injecting a2 into the emitter or base. The 32kHz transistor clock oscillator I built was hard to start. Here is the saga of getting the 32kHz oscillator to run.

More: Project Whack a Mole Part 2

Latex Source

Have to put this somewhere in case I need it again. I used HostMath to build up the equations and Rogers Online Equations to render them to a PNG.
```latex
\begin{array}{lcl}
a_{1} & = & e^{j(\omega t+\phi +\theta)} \\
a_{2} & = & e^{j(\omega t+\theta)} \\
r & = & a_{1}+a_{2}\cos(\omega_{l}t+\alpha ) \\
& = & e^{j(\omega t+\phi +\theta)}+\frac{1}{2}e^{j((\omega+\omega_{l}) t+\alpha +\theta)}+\frac{1}{2}e^{j((\omega-\omega_{l}) t-\alpha +\theta)}
\end{array}

\begin{array}{lcl}
phase_{1} & = & \omega t + \phi +\theta \\
phase_{2} & = & (\omega+\omega_{l})t+\alpha +\theta \\
phase_{3} & = & (\omega-\omega_{l})t-\alpha +\theta \\
phase_{2}+phase_{3} & = & \omega t + \omega_{l}t+\alpha +\theta + \omega t - \omega_{l}t -\alpha + \theta \\
& = & 2\omega t + 2\theta \\
2phase_{1} - (phase_{2}+phase_{3}) & = & 2\omega t + 2\phi + 2\theta -2\omega t - 2\theta \\
& = & 2\phi
\end{array}
```

### Chris Neugebauer: Python in the Caribbean? More of this!

Wed, 2016-03-02 06:26

I don't often make a point of blogging about the conferences I end up at, but sometimes there are exceptions to be made. A couple of weekends ago, a happy set of coincidences meant that I was able to attend the first PyCaribbean, in Santo Domingo, capital city of the Dominican Republic. I was lucky enough to give a couple of talks there, too.

This was a superbly well-organised conference. Leonardo and Vivian were truly excellent hosts, and it showed that they were passionate about welcoming the world to their city. They made sure breakfast and lunch at the venue were well catered, and we weren't left wanting in the evenings either, thanks to organised outings to some great local bars and restaurants on each of the evenings. Better still, the organisers were properly attentive to issues that came up: when the westerners (including me) went up to Leo asking where the coffee was at breakfast ("we don't drink much of that here"), the situation was resolved within hours. This attitude of resolving mismatches between the expectations of locals and visitors was truly exceptional, and regional conference organisers can learn a lot from it.
The programme was, in my opinion, better than by rights any first-run conference's should be. Most of the speakers were from countries further afield than the Caribbean (though I don't believe anyone travelled further than me), and the keynotes were all of a standard that I'd expect from much more established conferences. Given that the audience was mostly from the DR, or Central America at a stretch, the organisers showed that they truly understood the importance of bringing the world's Python community to their local community. This is a value that it took us at PyCon Australia several years to grok, and PyCaribbean was doing it in their first year. A wonderful side-effect of this focus on quality is that someone could visit from nearby parts of the US and still enjoy a programme of a standard matching some of the best US regional Python conferences.

A bit about the city and venue: even though the DR has a reputation as a touristy island, Santo Domingo is by no means a tourist town. It's a working city in a developing nation: the harbour laps up very close to the waterfront roads (no beaches here), the traffic patterns help make crossing the road an extreme sport (skilled jaywalking ftw), and toilet paper and soap at the venue were mostly a BYO affair (sigh). Through learning and planning ahead, most of this culture shock subsided after my first day at the event, but it's very clear that PyCaribbean was no beachside junket.

In Santo Domingo, the language barrier was a lot more confronting than I'd expected, too. Whilst I lucked out on getting a cabbie at the airport who could speak a tiny bit of English, and a receptionist with fluent English at the hotel, that was about the extent of being able to communicate.
Especially funny was showing up at the venue and not being allowed in, until I realised that the problem was that shorts aren't allowed inside government buildings (it took a while to realise that was what the pointing at my legs meant). You need at least some Spanish to function in Santo Domingo, and whilst I wasn't the only speaker who was caught out by this, I'm still extremely grateful to the organisers for helping bridge the language barrier when we were all out and about during the evening events. This made the conference all the more enjoyable.

Will I be back for another PyCaribbean? Absolutely. This was one of the best regional Python conferences I've ever been to. The organisers had a solid vision for the event, far earlier than most conferences I've been to; the local community was grateful, eager to learn, and was rewarded with talks of a very high standard for a regional conference; finally, everyone who flew into Santo Domingo got what felt like a truly authentic introduction to Dominican culture, thanks to the solid efforts of the organisers.

Should you go to the next PyCaribbean? Yes. Should your company sponsor it? Yes. It's a truly legitimate Python conference that in a couple of years' time will be amongst the best in the world. In PyCaribbean, the Python community has gained a wonderful conference, and the Caribbean has gained a link with the global Python community, and one that it can be truly proud of at that. If you're anywhere near the area, PyCaribbean is worthy of serious consideration.

### Peter Lieverdink: Accidental Space Tourist - SocialSpaceWA

Tue, 2016-03-01 11:27

Like many people, I love the beautiful images we receive from space telescopes and spacecraft that orbit other worlds in the solar system. Also like many other people, I expect, I never really stop to think about how we get those images, just assuming they get sent to earth via some magic space internet.
However, there is no internet (magic or otherwise, yet) in space, and getting the data to create these pretty images (and to do science) is rather involved. Quite by accident, I got a chance to learn a lot more about that process.

SocialSpaceWA

Whilst not working, I stumbled across a retweet by the European Space Agency, asking for people to apply to visit their deep space tracking station in New Norcia, Western Australia (NNO) as part of their SocialSpace programme. I didn't really have anything on, I qualified to apply by way of having an ESA member nation passport, and I don't live more than 16 hours' flying away, so I thought "why not?".

Why not indeed. I applied a few days before the closing date and only a week later I got the happy news that I'd been selected to attend. I immediately grabbed some return tickets to Perth and then started fretting about doing this thing with 15 total strangers. Eep!

Time-lapse of landfall over southern WA, after crossing the Great Australian Bight.

Of course, fretting was totally unwarranted. ESA had organised a bus to drive us all to New Norcia from Perth, and a bunch of delegates organised to meet up with Daniel (the ESA chef-de-mission) before heading to the bus pick-up. Of course, my fellow delegates were all space geeks too and we all got on really well (especially once Daniel started handing out ESA swag :-) The trip to New Norcia was in a lovely airconditioned bus, which made coping with the heat wave rather easy.

Introductions

As an ice-breaker, we all shared a group dinner that evening at the New Norcia hotel. After a round of 140-character introductions, we split into groups and each group was joined by an ESA engineer, who talked a little about who they were and the work they did on the site. After dinner, John Goldsmith gave a talk about astrophotography and the sights of the night sky, in preparation for an observing session with some people from the Perth Observatory, who'd driven up with cars full of (rather lovely) telescopes.
Sadly I missed the talk, because I was volunteered to help out with the telescopes. On the up-side, that resulted in my first TV appearance ever, on Channel 10 in Perth. The seeing was excellent (New Norcia has proper dark skies) so it ended up being a fairly late night. Unfortunately, that meant the morning wasn't quite as early as I'd hoped it would be.

Because of the dark skies, and the three hour time difference with home, I had planned not to go back to Perth for the night. Instead, I wanted to stay in New Norcia and then get up early to catch the planetary alignment in action. I ended up seeing it just fine, but it was getting a little bit too light at that stage to easily capture all the planets on camera. Because most delegates elected to stay back in Perth overnight (where the hotels have airco) they wouldn't be back before 10am, which gave me time to have a nice and relaxing early morning at the hotel, with fresh coffee. Aaaah, the serenity.

Down to business

Once my partners in crime had arrived, we all moved to the ESA education room at the New Norcia monastery for some enlightening sessions about the ESA Tracking Network (ESTRACK) and NNO by ESA engineers.

ESTRACK

Yves Doat spoke about why the ESTRACK network is needed and what it currently consists of. He showed us highlights of some of the missions they've supported over the past decades, from the Giotto mission past Halley's Comet in 1986 through to the current Rosetta/Philae mission to comet 67P/Churyumov-Gerasimenko.

Deep Space Comms

Klaus-Jürgen Schulz dove into the details of deep space communications and paid particular attention to the difficulties of communicating with spacecraft that are close to the sun (which is an issue for the BepiColombo mission to Mercury, of course!). He finished his presentation by telling us about the future of deep space communications, using light rather than radio to obtain much higher rates of data transmission.
Ground Station Operations

Next, Marc Roubert explained the operational intricacies of running ground stations. Since they are generally located in relatively remote, radio-silent areas, getting construction materials and equipment to the site can pose a real problem. Bush fires, sand storms, snow and the occasional leopard (for the Argentinian site) can interfere with operations as well.

Their location can also pose problems for the power supply. The sites use a lot of power to cryo-cool the amplifiers. Fire can cut power lines, so generators are needed. All delegates became very excited when he said that, due to the cost of power in Australia, NNO was actually going solar. ESA have built a 250kW solar plant on the New Norcia site, which will pay for itself in only 7 years and save about 400 tons of CO2 per year. They're not yet allowed to feed power back into the grid, because the infrastructure wouldn't be able to cope. But they built the plant to produce only as much power as they need, so there isn't that much to feed back currently anyway.

The trouble with big antennas

Gunther Sessler then gave us the low-down on the new NNO-2 antenna: how it was constructed, and what it can do that the 35m NNO-1 antenna can't, which is mainly to obtain a signal from spacecraft even if they're slightly off-course (which can happen easily if a rocket slightly over- or underperforms at launch). As it turns out, the 35m NNO-1 antenna has a beam width of 60 millidegrees, and to acquire a signal from a spacecraft, it has to be somewhere within that beam. I did the maths on that, and 60 millidegrees equates to a circle with a diameter of only 200m at a distance of 1000km (eg: a spacecraft on its way to orbit, just clearing the horizon). Now 200m sounds like a lot, but when you realise a spacecraft is doing upwards of 5km/sec at that point, locking on to it becomes a much harder problem! That's where the wider beam width of the 4.5m NNO-2 antenna comes in.
It can see a larger part of the sky, so it can pick up spacecraft that are slightly off-course a lot more easily. And if the spacecraft is even more off-course, the 0.75m antenna has a wider beam width still. With some smarts, once the 0.75m antenna locks on to a spacecraft, it can be used to centre the 4.5m dish on it in turn. And once the 4.5m antenna is locked, its data can be used to in turn lock the 35m NNO-1 on the craft.

Putting it all together

The final presentation was by Peter Droll, who put it all together and gave us an overview of how ESTRACK was used to send the Lisa Pathfinder mission on its way to the L1 Lagrange point. That was done by boosting its orbit with several engine burns, after each of which the craft's position needed to be known exactly in order to calculate the next burn. LPF is trialling equipment for detecting gravitational waves in space and should have started science operations today. Fittingly, this presentation was on the morning of the LIGO announcement :-)

Tour

We had a quick lunch after the presentations and then hopped back on the bus to go see the NNO dishes. The Inmarsat Cricket Team had prepared well and gave us a tour of NNO-1, allowing us to stick our heads absolutely everywhere. The only spanner in the works in terms of social media was that the inside of the dish is really well shielded against radio interference, so all our phones stopped working! Luckily, the Nikon with borrowed fish-eye lens worked fine. You can see all of my SocialSpaceWA photos on Flickr.

We toured the NNO-1 dish, as well as the generator and battery buildings and the control room. Two lucky souls managed to score the chance to actually operate NNO-1, and I grabbed a bit of video whilst Matt took the dish for a joyride. I am assured that New Norcia doesn't do hayrides like Parkes, and that nobody plays cricket in the dish either (but they do play football!)

Taking NNO-1 for a joyride.
Inauguration

After the tour, VIPs started arriving for the formal inauguration ceremony. After a welcome to country, we heard talks from the WA deputy premier and the European Union ambassador to Australia, praising the virtues of scientific cooperation. I definitely hope there will be more of that in the future, if only to make more space infrastructure more readily accessible for visiting! :-)

Speeches over, we all hopped back on the bus to finally go and see the new NNO-2 antenna. It's located a few hundred meters away from the main complex and, since we were still enjoying the heat wave, the transport was most welcome. That is, until the smaller of the buses couldn't cope with the rather steep hill and we all had to do the last hundred meters or so on foot. The sun was setting as we arrived at the NNO-2 site, and with the thin crescent moon it made a rather lovely backdrop for the blessing of the new facility by three monks from the New Norcia Monastery, followed by the antenna doing a little dance. Good luck on your mission, NNO-2! Image: Vaughan Puddey.

Wrap-Up

The formal proceedings over, we were all bused back to the monastery, where ESA treated us to a delicious dinner as the stars came out. The monks at New Norcia turn out to make a rather decent drop of wine as well. I'm not a fan of beer, but I'm told their ale is pretty good too :-)

Finally it was time to hop on the bus and head back to Perth, and after a final farewell drink all delegates went their separate ways again. But one thing we did all agree on: if you ever get the chance to do some accidental space tourism, take that chance with both hands and don't let go! Thumbs up for New Norcia! Thank you, ESA, Inmarsat and New Norcia!

Tags: SocialSpaceWA, deep space, space, adventure, ESA

### Chris Smart: Configuring Postfix to forward emails via localhost to secure, authenticated GMail

Tue, 2016-03-01 10:30

It's pretty easy to configure postfix on a local Linux box to forward emails via an external mail server.
This way you can just send via localhost in your programs or any system daemons and the rest is automatically handled for you. Here's how to forward via GMail using authentication and encryption on Fedora (23 at the time of writing). You should consider enabling two-factor authentication on your GMail account, and generating a password specifically for postfix.

Install packages:

```
sudo dnf install cyrus-sasl-plain postfix mailx
```

Basic postfix configuration:

```
# Only listen on IPv4, not IPv6. Omit if you want IPv6.
sudo postconf inet_protocols=ipv4
# Relay all mail through TLS enabled gmail
sudo postconf relayhost=[smtp.gmail.com]:587
# Use TLS encryption for sending email through gmail
sudo postconf smtp_use_tls=yes
# Enable authentication for gmail
sudo postconf smtp_sasl_auth_enable=yes
# Use the credentials in this file
sudo postconf smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd
# This file has the certificate to trust gmail encryption
sudo postconf smtp_tls_CAfile=/etc/ssl/certs/ca-bundle.crt
# Require authentication to send mail
sudo postconf smtp_sasl_security_options=noanonymous
sudo postconf smtp_sasl_tls_security_options=noanonymous
```

By default postfix listens on localhost, which is probably what you want. If you don't, for some reason, you could change the inet_interfaces parameter in the config file, but be warned that then anyone on your network (or potentially the Internet, if it's a public address) could send mail through your system. You may also want to consider using TLS on your postfix server.

By default, postfix sets myhostname to your fully-qualified domain name (check with hostname -f), but you can change this if you need to. For our instance it's not really necessary, because we're forwarding email through a relay and not accepting mail locally.
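Once postfix is relaying, applications can hand mail to localhost without knowing any GMail credentials. A minimal Python sketch of "send via localhost in your programs" (the addresses are placeholders; it assumes postfix is listening on localhost:25 as configured above):

```python
import smtplib
from email.message import EmailMessage

# Build a test message; the From/To addresses below are placeholders.
msg = EmailMessage()
msg["From"] = "daemon@localhost"
msg["To"] = "username@gmail.com"
msg["Subject"] = "test message"
msg.set_content("This is a test.")

def send_via_localhost(message):
    # Hand the message to the local postfix instance, which relays it to GMail.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(message)

# send_via_localhost(msg)  # uncomment once postfix is running
```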
Check that our configuration looks good:

```
sudo postconf -n
sudo postfix check
```

Create a password file using a text editor:

```
sudoedit /etc/postfix/sasl_passwd
```

The content should be in this form (the brackets are required; just replace your username@gmail.com address and password):

```
[smtp.gmail.com]:587 username@gmail.com:password
```

Hash the password file for postfix:

```
sudo postmap /etc/postfix/sasl_passwd
```

Tail the postfix log:

```
sudo journalctl -f -u postfix.service &
```

Start the service (you should see it start up in the log):

```
sudo systemctl start postfix
```

Send a test email, replacing username@gmail.com with your real email address:

```
echo "This is a test." | mail -s "test message" username@gmail.com
```

You should see the email go through the journalctl log and be forwarded, something like:

```
Feb 29 04:32:51 hostname postfix/smtp[4115]: 87BE620221: to=, relay=smtp.gmail.com[209.85.146.108]:587, delay=1.9, delays=0.04/0.06/0.55/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1456720371 m32sm102235580ksj.52 - gsmtp)
```

### David Rowe: Codec 2 Masking Model Part 3

Mon, 2016-02-29 15:31

I've started working on this project again. It's important, as progress will feed into both the HF and VHF work. It's also kind of cool, as it's very unlike what anyone else is doing out there in speech coding land, where it's all LPC/LSP.

In Part 1 I described how the spectral amplitudes can be modelled by masking curves. The next step is to (i) decimate the model to a small number of samples and (ii) quantise those samples using a modest number of bits/frame. This post describes the progress I have made in decimating the masking model parameters, the top yellow box here:

Analysis By Synthesis

Back when I was just a wee slip of a speech coder, I worked on Code Excited Linear Prediction (CELP). These codecs use a technique called Analysis by Synthesis (AbyS). To choose the speech model parameters, a bunch of them are tried, the resulting speech synthesised, and the results evaluated.
The set of parameters that minimises the difference between the input speech and the synthesised output speech is transmitted to the decoder. Trying every possible set of parameters keeps the encoder DSPs rather busy, and just getting them to run in real time was quite a challenge at the time (late 1980s on 10 MIP DSPs). Time goes by, and it’s now 30 years later. After a few dead ends, I’ve worked out a way to use AbyS to select the best 4 amplitude/frequency pairs to describe the speech spectrum. It works like this:

1. In each frame there are L possible frequency positions, each position being the frequency of a harmonic. For each frequency there is a corresponding harmonic amplitude {Am}.
2. At each harmonic position, I generate a masking function and measure the error between that and the target spectral envelope.
3. After all possible masking functions are evaluated, I choose the one that minimises the error to the target.
4. The process is then repeated for the next stage, until we have used 4 masking functions in total. As each masking function is “fitted”, the total error gradually reduces.
5. The output is four frequencies and four amplitudes. These must be sent to the decoder, where they can be used to generate a spectral envelope that approximates the original.

The following plots show AbyS in action for frame 50 of hts1a: The red line is the spectral envelope defined by the harmonic amplitudes {Am}. Magenta is the model the decoder uses, based on 4 frequency/amplitude samples found using AbyS. The black crosses indicate the frequencies found using AbyS. Here is a plot of the error (actually Mean Square Error) for each mask position at each stage. As we add more samples to the model, the error compared to the target decreases. You can see a sharp dip in the first (blue top) curve around 2500Hz. That is the frequency chosen for the first mask sample.
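The greedy AbyS search described above can be sketched in a few lines of Python. This is a toy version: the masking function here is just a parabola in dB (Codec 2 uses proper psychoacoustic masking curves) and the target envelope is synthetic; only the selection loop mirrors the real algorithm:

```python
import numpy as np

def mask_fn(centre, amp, freqs, width=300.0):
    """Toy masking function: a parabola in dB centred on one harmonic.
    (A stand-in for Codec 2's psychoacoustic masking curves.)"""
    return amp - ((freqs - centre) / width) ** 2

def abys(freqs, target_db, n_samples=4):
    """Greedy Analysis-by-Synthesis: pick n_samples freq/amp pairs.
    At each stage, try a mask at every harmonic position and keep the
    one that minimises the mean square error against the target."""
    chosen, errors = [], []
    masks = np.full_like(target_db, -100.0)   # running max of fitted masks
    for _ in range(n_samples):
        best = None
        for f0, a0 in zip(freqs, target_db):
            cand = np.maximum(masks, mask_fn(f0, a0, freqs))
            err = float(np.mean((target_db - cand) ** 2))
            if best is None or err < best[0]:
                best = (err, f0, a0, cand)
        err, f0, a0, masks = best
        chosen.append((f0, a0))
        errors.append(err)
    return chosen, errors

# Synthetic spectral envelope: two formant-like peaks, 39 "harmonics".
f = np.linspace(100.0, 3900.0, 39)
target = 40 + 20 * np.exp(-((f - 600) / 400) ** 2) \
            + 15 * np.exp(-((f - 2500) / 500) ** 2)
pairs, errs = abys(f, target)
print(len(pairs), all(e2 <= e1 + 1e-9 for e1, e2 in zip(errs, errs[1:])))
```

Because each stage may re-select a previously chosen position (leaving the error unchanged), the stage-by-stage error can never increase, which matches the behaviour seen in the error plots.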
With the first sample fixed, we then search for the best position for the next sample (dark green), which occurs around 500Hz.

Samples

Here are some samples from the AbyS model compared to the Codec 2 700B and 1300 modes. The AbyS frequency/amplitude pairs are unquantised, but the other parameters (synthetic phase, pitch, voicing, energy, frame update rate) are the same as Codec 2 700B/1300.

Sample | 700B | 1300 | newamp AbyS
--- | --- | --- | ---
ve9qrp_10s | Listen | Listen | Listen
mmt1 | Listen | Listen | Listen
vk5qi | Listen | Listen | Listen

At 700 bit/s we have 28 bits/frame available. Assuming 7 bits for pitch, 1 for voicing, and 5 for frame energy, that leaves us a budget of 15 bits/frame for the AbyS freq/amp pairs. At 1300 bit/s we have 52 bits/frame total, with 39 bits/frame available for the AbyS freq/amp pairs. My goal is to get 1300 bit/s quality at 700 bit/s using the AbyS masking model technique. That would significantly boost the quality at 700 bit/s and let us use the COHPSK modem that works really well on HF channels.

Command Lines

newamp.m was configured with decimation_in_time on and set to 4 (40ms frame update rate with interpolation at 10ms intervals). This is the same frame update rate as the Codec 2 700B and 1300 modes. The phase0 model was enabled in c2sim to use synthetic phases and a single voicing bit, just like the Codec 2 modes. The synthetic phases were derived from an LPC model, but can also be synthesised from any amplitude spectra, such as the AbyS masking model.

octave:20> newamp_batch("../build_linux/src/vk5qi")
./c2sim ../../raw/vk5qi.raw --amread vk5qi_am.out --phase0 --postfilter -o - | sox -t raw -r 8000 -s -2 - ~/Desktop/abys/vk5qi.wav
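The bit budget arithmetic above can be sanity-checked quickly (both modes use a 40ms frame update):

```python
def bits_per_frame(rate_bps, frame_ms=40):
    """Bits available per frame at a given bit rate and frame update."""
    return rate_bps * frame_ms // 1000

overhead = 7 + 1 + 5    # pitch + voicing + frame energy bits
for rate in (700, 1300):
    total = bits_per_frame(rate)
    print(rate, total, total - overhead)
```

This reproduces the figures quoted: 28 bits/frame (15 left for freq/amp pairs) at 700 bit/s and 52 bits/frame (39 left) at 1300 bit/s.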

Happy Birthday to Me

This is my 300th blog post in 10 years! Yayyyyyy. That’s about one rather detailed post every two weeks. I started with this one in April 2006 just after I hung up my trousers and departed the corporate world.

This blog currently gets visited by 3500 unique IPs/day although it regularly hits 5000/day. I type posts up in Emacs, then paste them into WordPress for final editing. I draw figures in LibreOffice Impress, and plots using GNU Octave.

I quite like writing, it gives me a chance to exercise the teacher inside me. Reporting on what I have done helps get it straight in my head. If I solve a problem I figure the solution might be useful for others.

I hope this blog has been useful for you too.

### OpenSTEM: Leap Day Special: 50% off Family and Teacher Subscriptions!

Mon, 2016-02-29 11:31

To celebrate the quirkiness of the leap day, we’re doing a very special offer – just from 29 Feb 2016 until 1 Mar 2016!

Leap years are funny things. Did you know, for instance, that in Ireland and the United Kingdom, back when it was expected that men would always ask women to marry them and not the other way around, there was a tradition that it was acceptable for women to ask men to marry them on Leap Year’s Day?

An OpenSTEM subscription provides free access to all our base PDF Resources for an entire year! This is many megabytes of awesome materials for you to use, full of colourful text and images. New PDFs are added all the time.

To make use of this limited offer, simply go to the special Leap Day page, or use the LEAPDAY coupon code when checking out one of the aforementioned subscriptions in the store. You will need to specifically add either the Private Family or One Teacher subscription to your cart.

You can also take a peek at what our different resources look like on our Curriculum Samples page.

### Francois Marier: Extracting Album Covers from the iTunes Store

Mon, 2016-02-29 10:49

The iTunes store is a good source of high-quality album cover art. If you search for the album on Google Images, then visit the page and right-click on the cover image, you will get a 170 px by 170 px image. Change the 170x170 in the URL to one of the following values to get a higher resolution image:

• 170x170
• 340x340
• 600x600
• 1200x1200
• 1400x1400

Alternatively, use this handy webapp to query the iTunes search API and get to the source image directly.
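The size substitution is easy to script; here is a tiny sketch. The URL below is made up for illustration — only the WxH size token pattern matters:

```python
import re

def upscale_cover(url, size="1200x1200"):
    """Replace the WxH size token in an iTunes artwork URL."""
    return re.sub(r"\d+x\d+", size, url)

# Hypothetical artwork URL, for illustration only.
url = "https://example.mzstatic.com/image/thumb/cover.jpg/170x170bb.jpg"
print(upscale_cover(url))
```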

### Colin Charles: Amazon RDS updates February 2016

Mon, 2016-02-29 04:25

I think one of the big announcements that came out from the Amazon Web Services world in October 2015 was the fact that you could spin up instances of MariaDB Server on it. You would get MariaDB Server 10.0.17. As of this writing, you are still getting that (the MySQL shipping then was 5.6.23, and today you can create a 5.6.27 instance, but there were no .24/.25/.26 releases). I’m hoping that there’s active work going on to make MariaDB Server 10.1 available ASAP on the platform.

Just last week you would have noticed that Amazon has rolled out MySQL 5.7.10. The in-place upgrades are not available yet, so updating is via dump/reload or using read replicas. According to the forums, a lot of people have been wanting to use the JSON functionality.

Are you trying MySQL 5.7 on RDS? How about your usage of MariaDB Server 10.0 on RDS? I’d be interested in feedback either as a comment here, or via email.

### Colin Charles: SCALE14x trip report

Sun, 2016-02-28 13:25

SCALE14x was held in Pasadena, Los Angeles this year, from January 21-24 2016. I think it’s important to note that the venue changed from the Hilton LAX: this is a much bigger space, as the event is much bigger, and you’ll also notice that the expo hall has grown tremendously.

I had a talk in the MySQL track, and that was just one of over 180 talks. There were over 3,600 people attending, and it showed in the number of people coming by the MariaDB Corporation booth. I spent some time there with Rod Allen, Max Mether, and Kurt Pastore, and the qualified leads we received were pretty high. Of course it didn’t hurt that we were also giving away a Sphero BB-8 Droid.

The MySQL track room was generally always full. We learned some interesting tidbits, such as Percona Server 5.7 going GA in February 2016 (true!), saw a strong crowd at the MariaDB booth, and quite a bit more. People are definitely interested in MySQL 5.7’s JSON functionality.

The highlight of my talk, The MySQL Server Ecosystem in 2016, was that it brought along quite a good discussion on Twitter. It’s clear people are very interested in this, and there is much opportunity for writing about it!

The Mark Shuttleworth keynote

But there were other SCALE14x highlights, like the keynote by Mark Shuttleworth. It was generally a very moving keynote, and here are a few bits that I took as notes:

• Technology changes lives
• Society evolves because it becomes possible to live differently
• New software moves too fast for distributions (6 months is too long). Look at Github. Speed vs. integration/trust/maintenance (the work of a distro)
An overview of a next-gen filesystem

Another talk I found interesting was the talk about bitrot, and filesystems like btrfs and ZFS. Best to read the presentation, and the article that was referenced.

A talk by Facebook is usually quite full, and I was interested in how they were using GlusterFS and if anyone has managed to successfully run a database over it yet (no). This was a talk given by Richard Wareing who’s been at Facebook for over 5 years:

• GBs to many PBs, 100s of millions of files. QPS (FOPs) is in the 10s of billions per day; namespaces (volumes) are TBs to PBs, and bricks number in the 1000s. Version 3.3.x is when they started, and now they use 3.6.x (they trail mainline closely)
• Use cases: archival, backing data store for large scale applications, anything that doesn’t fit into other DBs
• Primarily using XFS, and are starting to use btrfs (about 20% of the fleet run on it)
• closed source AntFarm, JD, and their IPv6 support (they removed IPv4 support). They have JSON Statistic dumps which they contributed upstream.
• a good mantra, pragmatism over correctness
Some expo hall chatter

There was plenty to follow up on post-SCALE14x, with many having questions about MariaDB Server or wanting to buy services around it from MariaDB Corporation. I learned, for example, that Rackspace maintains their own IUS repository of packages they think their customers will find important to use. The idea behind it is that it’s Inline with Upstream Stable. Naturally you will find MariaDB Server, as well as packages for all the engines like CONNECT.

I also learned that Stacki uses MariaDB Server for provisioning, as was evidenced by their github issue.

It’s incredibly rewarding to note that pretty much everyone knew what MariaDB Server was. It’s been a long journey (six years!) but it sure feels sweet. Ilan and his team put on a great SCALE, so I can’t wait to be back again next year.

### Lev Lafayette: Batchholds, Leap Seconds, and PBS Restarts

Sat, 2016-02-27 23:31

It is not unusual for a few jobs to fall into a batchhold state when one is managing a cluster; users often write PBS submissions with errors in them (such as requesting more cores than are actually available). When sysadmins have the opportunity to do so, they should check such scripts and educate the users on what they have done wrong.

### Tridge on UAVs: A new chapter in ArduPilot development

Sat, 2016-02-27 16:43

The ArduPilot core development team is starting on a new phase in the project's development. We’ve been having a lot of discussions lately about how to become better organised and better meet the needs of both our great user community and the increasing number of organisations using ArduPilot professionally. The dev team is passionate about making the best autopilot software we can and we are putting the structures in place to make that happen.

Those of you who have been following developments over the years know that ArduPilot has enjoyed a very close relationship with 3DRobotics for a long time, including a lot of direct funding of ArduPilot developers by 3DR. As 3DR changes its focus, that relationship has changed: it is no longer one of direct financial support for developers; instead, 3DR will be one of many companies contributing to open source development, both in ArduPilot and the wider DroneCode community. The reduction in direct funding by 3DR is not really too surprising, as the level of financial support in the past was quite unusual by open source project standards.

Meanwhile the number of other individuals and companies directly supporting ArduPilot development has been increasing a lot recently, with over 130 separate people contributing to the code in the last year alone, and the range of companies making autopilot hardware and airframes aimed at ArduPilot users has also grown enormously.

We’re really delighted with how the developer community is working together, and we’re very confident that ArduPilot has a very bright future.

Creation of ArduPilot non-profit

The ArduPilot dev team is creating a non-profit entity to act as a focal point for ArduPilot development. It will take a while to get this set up, but the aim is to have a governance body that guides the direction the project takes and ensures the project meets the needs of the very diverse user community. Once the organisation is in place we will make another announcement, but you can expect it to be modelled on the many successful open source non-profits that exist across the free software community.

The non-profit organisation will oversee the management of the documentation, the auto-build and test servers and will help set priorities for future development.

We’re working with 3DR now to organise the transfer of the ardupilot.com domain to the development team leads, and will transfer it to the non-profit once that is established. The dev team has always led the administration of that site, so this is mostly a formality, but we are also planning on a re-work of the documentation to create an improved experience for the community and to make it easier to maintain.

In addition to the non-profit, we think there is a need for more consulting services around ArduPilot and DroneCode. We’ve recognised this need for a while as the developers have often received requests for commercial support and consulting services. That is why we created this commercial support list on the website last year:

http://planner.ardupilot.com/wiki/common-commercial-support/

It is time to take that to the next level by promoting a wider range of consulting services for ArduPilot. As part of that, a group of ArduPilot developers are in the process of creating a company that will provide a broad range of consulting services around ArduPilot. You will see some more announcements about this soon, and we think it will really help ArduPilot expand into places that are hard to reach now. We are delighted at this development, and hope the companies listed on the website will provide a vibrant commercial support ecosystem for the benefit of the entire ArduPilot community.

Best of both worlds

We think that having a non-profit to steer the project while having consulting businesses to support those who need commercial support provides the best of both worlds. The non-profit ArduPilot project and the consulting businesses will be separate entities, but the close personal and professional relationships that have built up in the family of ArduPilot developers will help both to support each other.

Note that ArduPilot is committed to open source and free software principles, and there will be no reduction in features or attempt to limit the open source project. ArduPilot is free and always will be. We care just as much about the hobbyist users as we do about supporting commercial use. We just want to make a great autopilot while providing good service to all users, whether commercial or hobbyist.

Thank you!

We’d also like to say a huge thank you to all the ArduPilot users and developers that have allowed ArduPilot to develop so much in recent years. We’ve come a very long way and we’re really proud of what we have built.

Finally we’d also like to thank all the hardware makers that support ArduPilot. The huge range of hardware available to our users from so many companies is fantastic, and we want to make it easier for our users to find the right hardware for their needs. We will continue working to improve the documentation to make that easier.

Happy flying!

The ArduPilot Dev Team

### OpenSTEM: Junior Primary Students Build Stonehenge Model

Fri, 2016-02-26 16:30

Just a few weeks ago junior primary students did the Building Stonehenge Activity, as part of our Integrated History/Geography Program for Primary.

Seville Road State School on Brisbane’s south-side kindly sent us a photo to show you. This class used wooden blocks they happened to have, other classes use collected cardboard boxes.

Year 1-3 Building Stonehenge Activity (Photo: Seville Rd State School)

Our materials are designed to provide a more engaging learning experience for students as well as teachers. Here, students are examining different types of calendars and ways of measuring time. Stonehenge is given as an example of a solar calendar. This leads naturally into a discussion of solstices, equinoxes and seasons.

### Stewart Smith: MySQL Contributions status

Fri, 2016-02-26 11:27

This post is an update on the status of various MySQL bugs (some with patches) that I’ve filed over the past couple of years (or that people around me have). I’m not looking at POWER-specific ones, although there are those too; each of the bugs here deals with the general correctness of the code base.

Firstly, let’s look at some points I’ve raised:

• Incorrect locking for global_query_id (bug #72544)

Raised on May 5th, 2014 on the internals list. As of today, no action (apart from Dimitri verifying the bug back in May 2014). There continues to be locking around query IDs that perhaps only works by accident. Soon, this bug will be two years old.
• Endian code based on CPU type rather than endian define (bug #72715)

About six hundred and fifty days ago I filed this bug – back in May 2014. It probably has a relatively trivial fix: using the correct #ifdef of BIG_ENDIAN/LITTLE_ENDIAN rather than keying specific behaviour off #ifdef __i386__.

What’s worse is that this looks like somebody being clever for a compiler in the 1990s, which is unlikely to produce the most optimal code today.
• mysql-test-run.pl –valgrind-all does not run all binaries under valgrind (bug #74830)

Yeah, this should be a trivial fix, but nothing has happened since November 2014.

I’m unlikely to go provide a patch simply because it seems to take sooooo damn long to get anything merged.
• MySQL 5.1 doesn’t build with Bison 3.0 (bug #77529)

Probably of little consequence, unless you’re trying to build MySQL 5.1 on a Linux distro released in the last couple of years. Fixed in MariaDB for a long time now.

Trivial patches:

• Incorrect function name in DBUG_ENTER (bug #78133)

Pull request number 25 on github – a trivial patch that is obviously correct, simply correcting some debug only string.

So far, over 191 days with no action. If you can’t get trivial and obvious patches merged in about 2/3rds of a year, you’re not going to grow contributions. Nearly everybody coming to a project starts out with trivial patches, and if a long time contributor who will complain loudly on the internet (like I am here) if his trivial patches aren’t merged can’t get it in, what chance on earth does a newcomer have?

In case you’re wondering, this is the patch:

--- a/sql/rpl_rli_pdb.cc
+++ b/sql/rpl_rli_pdb.cc
@@ -470,7 +470,7 @@ bool Slave_worker::read_info(Rpl_info_handler *from)
 bool Slave_worker::write_info(Rpl_info_handler *to)
 {
-  DBUG_ENTER("Master_info::write_info");
+  DBUG_ENTER("Slave_worker::write_info");
• InnoDB table flags in bitfield is non-optimal (bug #74831)

With a patch since I filed this back in November 2014, it’s managed to sit idle long enough for GCC 4.8 to practically disappear from anywhere I care about, and 4.9 makes better optimization decisions. There are other reasons why C bitfields are an awful idea too.

Actually complex issues:

• InnoDB mutex spin loop is missing GCC barrier (bug #72755)

Again, another bug filed back in May 2014, where InnoDB is doing a rather weird trick to attempt to get the compiler to not optimize away a spinloop. There’s a known good way of doing this, it’s called a compiler barrier. I’ve had a patch for nearly two years, not merged :(
• buf_block_align relies on random timeouts, volatile rather than memory barriers (bug #74775)

This bug was first filed in November 2014 and deals with a couple of incorrect assumptions about memory ordering and what volatile means.

While this may only exhibit a problem on ARM and POWER processors (as well as any other relaxed memory ordering architectures, x86 is the notable exception), it’s clearly incorrect and very non-portable.

Don’t expect MySQL 5.7 to work properly on ARM (or POWER). Try this: ./mysql-test-run.pl rpl.rpl_checksum_cache --repeat=10

You’ll likely find MySQL > 5.7.5 still explodes.

In fact, there’s also Bug #79378 which Alexey Kopytov filed with patch that’s been sitting idle since November 2015 which is likely related to this bug.

Not covered here: universal CRC32 hardware acceleration (rather than just for innodb data pages) and other locking issues (some only recently discovered). I also didn’t go into anything filed in December 2015… although in any other project I’d expect something filed in December 2015 to have been looked at by now.

Like it or not, MySQL is still upstream for all the MySQL derivatives active today. Maybe this will change as RocksDB and TokuDB gain users and if WebScaleSQL, MariaDB and Percona can foster a better development community.

### Chris Samuel: Eight years

Thu, 2016-02-25 19:26

This item originally posted here:

Eight years

### Tridge on UAVs: Building, flying and crashing a large QuadPlane

Thu, 2016-02-25 18:32

Not all of the adventures that CanberraUAV have with experimental aircraft go as well as we might hope. This is the story of our recent build of a large QuadPlane and how the flight ended in a crash.

As part of our efforts in developing aircraft for the Outback Challenge 2016 competition CanberraUAV has been building both large helicopters and large QuadPlanes. The competition calls for a fast, long range VTOL aircraft, and those two airframe types are the obvious contenders.

This particular QuadPlane was the first we've built of a size and endurance that we thought would easily handle the OBC mission. We based it on the aircraft we used to win the OBC'2014 competition, a 2.7m wingspan VQ Porter with a 35cc DLE35 petrol engine. This is the type of aircraft you commonly see at RC flying clubs for people who want to fly larger scale civilian aircraft. It flies well, and the fuselage is easy to work on, with plenty of room for additional payloads.

The base airframe has a typical takeoff weight of a bit over 7kg. In the configuration we used in the 2014 competition it weighed over 11kg as we had a lot of extra equipment onboard like long range radios, the bottle drop system and onboard computers, plus lots of fuel. When rebuilt as a QuadPlane it weighed around 15kg, which is really stretching the base airframe to close to its limits.

To convert the Porter to a QuadPlane we started by gluing 300x100 1mm thick carbon fibre sheets to the under-surface of the wings, and added 800x20x20 square-section carbon fibre tubes as motor arms. This basic design was inspired by what Sander did for his QuadRanger build.

In the above image you can see the CF sheet and the CF tubes being glued to the wing. We used silicone sealant between the sheet and the wing, and epoxy for gluing the two 800mm tubes together and attaching them to the wing. This worked really well and we will be using it again.

For the batteries of the quad part of the plane we initially thought we'd put them in the fuselage as that is the easiest way to do the build, but after some further thought we ended up putting them out on the wings:

They are held on using velcro and cup-hooks epoxied to the CF sheet and spars, with rubber bands for securing them. That works really well and we will also be using it again.

The reason for the change to wing mounted batteries is twofold. The first is concern that induction on the long wires needed in such a big plane could lead to the ESCs being damaged (see for example http://www.rcgroups.com/forums/showthread.php?t=952523&highlight=engin+wire). The second is that we think having the weight out on the wings will reduce the stress on the wing root when doing turns in fixed wing mode.

We used 4S 5Ah 65C batteries in a 2S 2P arrangement, giving us 10Ah of 8S in total to the ESCs. We didn't cross-couple the batteries between left and right side, although we may do so in future builds.

For quad motors we used NTM Prop Drive 50-60 motors at 380kV. That is overkill really, but we wanted this plane to stay steady while hovering in 25 knot winds, and for that you need a pretty high power to weight ratio to overcome the wind on the big wings. It certainly worked; flying this 15kg QuadPlane did not feel cumbersome at all. The plane responded very quickly to the sticks despite its size.

We wired it with 10AWG wire, which helped keep the voltage drop down, and tried to keep the battery cables short. Soldering lots of 10AWG connectors is a pain, but worth it. We mostly used 4mm bullets, with some HXT-4mm for the battery connectors. The Y connections needed to split the 8S across two ESCs was done with direct spliced solder connections.

For the ESCs we used RotorStar 120A HV. It seemed a good choice as it had plenty of headroom over the expected 30A hover current per motor, and 75A full throttle current. This ESC was our only major regret in the build, for reasons which will become clear later.

For props we used APC 18x5.5 propellers, largely because they are quite cheap and are big enough to be efficient, while not being too unwieldy in the build.

For fixed wing flight we didn't change anything over the setup we used for OBC'2014, apart from losing the ability to use the flaps due to the position of the quad arms. A VTOL aircraft doesn't really need flaps though, so it was no big loss. We expected the 35cc petrol engine would pull the plane along fine with our usual 20x10 prop.

We did reduce the maximum bank angle allowed in fixed wing flight, down from 55 degrees to 45 degrees. The aim was to keep the wing loading in reasonable limits in turns given we were pushing the airframe well beyond the normal flying weight. This worked out really well, with no signs of stress during fixed wing flight.

Test flights

The first test flight went fine. It was just a short hover test, with a nervous pilot (me!) at the sticks. I hadn't flown such a big quadcopter before (it is 15kg takeoff weight) and I muffed up the landing when I realised I should try and land off the runway to keep out of the way of other aircraft using the strip. I tried to reposition a few feet while landing and it landed heavier than it should have. No damage to anything except my pride.

The logs showed it was flying perfectly. Our initial guess of 0.5 for roll and pitch gains worked great, with the desired and achieved attitude matching far better than I ever expected to see in an aircraft of this type. The feel on the sticks was great too - it really responded well. That is what comes from having 10kW of power in an aircraft.

The build was meant to have a sustained hover time of around 4 minutes (using ecalc), and the battery we actually used for the flight showed we were doing a fair bit better than we predicted. A QuadPlane doesn't need much hover time. For a one hour mission for OBC'2016 we reckon we need less than 2 minutes of VTOL flight, so 4 minutes is lots of safety margin.
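A rough back-of-envelope check of that hover time, using the figures quoted in this post (10Ah of battery in the 2S2P pack, and roughly 30A hover current per lift motor):

```python
# Back-of-envelope hover endurance estimate.
# Figures from the post: 10 Ah pack (4S 5Ah in 2S2P), ~30 A/motor at hover.
capacity_ah = 10.0
hover_current_a = 4 * 30.0            # four lift motors
hover_minutes = capacity_ah / hover_current_a * 60.0
print(round(hover_minutes, 1))
```

That is the theoretical maximum at full discharge; keeping a sensible usable-capacity margin brings it down to about the 4 minutes that ecalc predicted.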

Unfortunately the second test flight didn't go so well. It started off perfectly, with a great vertical takeoff, and a perfect transition to forward flight as the petrol engine engaged.

The plane was then flown for a bit in FBWA mode, and it responded beautifully. After that we switched to full auto and it flew the mission without any problems. It did run the throttle on the petrol engine at almost full throttle the entire time, as we were aiming for 28m/s and it was struggling a bit with the drag of the quad motors and the extra weight, but the tuning was great and we were already celebrating as we started the landing run.

The transition back to hover mode also went really well, with none of the issues we thought we might have had. Then during the descent for landing the rear left motor stopped, and we once again proved that a quadcopter doesn't fly well on 3 motors.

Unfortunately there wasn't time to switch back to fixed wing flight and the plane came down hard nose first. Rather a sad moment for the CanberraUAV team as this was the aircraft that had won the OBC for us in 2014. It was hard to see it in so many pieces.

We looked at the logs to try to see what had happened and Peter immediately noticed the tell tale sign of motor failure (one PWM channel going to maximum and staying there). We then looked carefully at the motors and ESCs, and after initially suspecting a cabling issue we found the cause was a burnt out ESC:

The middle FET is dead and shows burn marks. Tests later showed the FETs on either side in the same row were also dead. This was a surprise to us as the ESC was so over spec for our setup. We did discover one possible contributing cause:

that red quality control sticker is placed over the FET on the other side of the board from the dead one, and the design of the ESC is such that the heat from the dead FET has to travel via that covered FET to the heatsink. The sticker was between the FET and the heatsink, preventing heat from getting out.

All we can say for certain is the ESC failed though, so of course we started to think about motor redundancy. We're building two more large QuadPlanes now, one of them based on an OctaQuad design, in an X8 configuration with the same base airframe (a spare VQ Porter 2.7m that we had built for OBC'2014). The ArduPilot QuadPlane code already supports octa configs (along with hexa and several others). For this build we're using T-Motor MT3520-11 400kV motors, and will probably use t-motor ESCs. We will also still use the 18x5.5 props, just more of them!

Strangely enough, the better power to weight ratio of the t-motor motors means the new octa X8 build will be a bit lighter than the quad build. We're hoping it will come in at around 13.7kg, which will help reduce the load on the forward motor for fixed wing flight.

Many thanks to everyone involved in building this plane, and especially to Grant Morphett for all his building work and Jack Pittar for lots of good advice.

Building and flying a large QuadPlane has been a lot of fun, and we've learnt a lot. I hope to do a blog post of a more successful flight of our next QuadPlane creation soon!

### David Rowe: Double Tuned Filter Notch Mystery

Thu, 2016-02-25 12:30

For the SM2000 I need a 146MHz Band Pass Filter (BPF). This led me to the Double Tuned Filter (DTF) or Double Tuned Circuit (DTC) – two air-cored coils coupled to each other, and resonated with trimmer capacitors. To get a narrow pass band, the Q of the resonators must be kept high, which means an impedance of a few thousand ohms at resonance. So I connect the low impedance 50 ohm input and output by tapping the coils at half a turn.

Here is the basic schematic (source Vasily Ivanenko on Twitter):

For 146MHz the inductors are about 150nH, and the capacitors about 8pF. For 435MHz the inductors are about 50nH, and the capacitors about 3pF. The “hot end” of the inductors, where the trimmer cap connects, is high impedance at resonance – several thousand ohms.
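Those component values can be sanity-checked against the resonant frequency of an LC tank, f = 1/(2π√(LC)). The 435MHz values come out nearer 410MHz, which is consistent with them being “about” figures:

```python
import math

def f_res(l_henries, c_farads):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

print(round(f_res(150e-9, 8e-12) / 1e6, 1))   # 146 MHz design values
print(round(f_res(50e-9, 3e-12) / 1e6, 1))    # 435 MHz design values
```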

These filters are sometimes called Helical Filters – however this is a little confusing to me. My understanding is that Helical filters – although similar in physical construction – operate on transmission line principles and have the “hot” end of the inductors left open – no capacitor to ground.

Here is a photo of a 146MHz DTC filter, and its frequency response:

I understand how the band pass response is formed, but have been mystified by the notches in the response. In particular, what causes the low frequency notch just under the centre frequency? Here is a similar DTC I built for 435MHz and its frequency response, also with the mystery notches:

After a few days of messing about with Spice, Octave simulations, RF books, and bugging my RF brains trust (thanks Yung, Neil, Jeff), I accidentally stumbled across the reason. A small capacitor (around 1pF) between the hot end of the inductors creates the low frequency notch. Physically, this is parasitic capacitance coupling across the air gap between the coils.

Here is a LTspice simulation of the UHF version of the circuit. Note how the tapped inductors are modelled by a small L in series with the main inductance. The “K” directive models the coupling. Air cored transformers have a low coupling coefficient, I guessed at values of 0.02 to 0.1. You can see the notch just before resonance caused by the 1pF parasitic coupling between the two inductors. Without this capacitor in the model, the notch goes away.

The tapped inductor is used for an impedance match. An equivalent circuit is simply driving and loading the circuit with a high impedance, say 1500 ohms:
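The equivalent circuit can also be swept numerically. This is a sketch of a nodal analysis in Python (my own reconstruction, not the LTspice netlist); the component values and k = 0.1 are the guessed figures from above, with the 1pF parasitic capacitor coupling the two hot ends:

```python
import numpy as np

# Equivalent UHF circuit: two coupled 50nH coils, each resonated with 3pF,
# driven and loaded by 1500 ohms, with 1pF of parasitic capacitance between
# the hot ends. k = 0.1 is the guessed coupling coefficient.
L, C, Cc, k, R = 50e-9, 3e-12, 1e-12, 0.1, 1500.0
M = k * L
Linv = np.linalg.inv(np.array([[L, M], [M, L]]))

freqs = np.linspace(100e6, 600e6, 5001)
H = np.empty_like(freqs)
for i, f in enumerate(freqs):
    w = 2 * np.pi * f
    YL = Linv / (1j * w)        # admittance matrix of the coupled coils
    Y = np.array([
        [YL[0, 0] + 1j * w * (C + Cc) + 1 / R, YL[0, 1] - 1j * w * Cc],
        [YL[1, 0] - 1j * w * Cc, YL[1, 1] + 1j * w * (C + Cc) + 1 / R],
    ])
    # 1V source behind the 1500 ohm source resistance, injected at node 1
    V = np.linalg.solve(Y, np.array([1.0 / R, 0.0]))
    H[i] = abs(V[1])            # voltage at the output resonator

f_notch = freqs[np.argmin(H)]
print(f"transmission zero near {f_notch/1e6:.0f} MHz")
```

With these guessed values the transmission zero lands in the mid 220s of MHz, just below the pass band – and removing Cc from the matrix makes the notch disappear, matching the LTspice result.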

After some head scratching I found this useful model for transformers. It’s valid for low-k transformers where the primary and secondary inductance is the same (Ref):

Note it doesn’t model DC isolation but that’s OK for this circuit. While I don’t understand the derivation of this model, it does make intuitive sense. A loosely coupled air cored transformer can be modelled as a high (inductive) impedance between the primary and secondary. We still get reasonable power transfer (a few dB insertion loss) as the impedance of the primary and secondary is also high at resonance.

Using the model in Fig 55 with k = 0.1, the top L in the PI arrangement is about 10L1 or 500nH. I also removed the 3pF capacitors in an attempt to isolate just the components responsible for the notch. So we get:

Finally! Now I can understand how the notch is created. We have 1pF in parallel with 500nH, which forms a parallel resonant circuit at 225MHz. Parallel resonant circuits have a very high impedance at resonance, which blocks the signal, causing the notch.
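The arithmetic checks out:

```python
import math

L1 = 50e-9            # main inductance of each UHF coil
k = 0.1               # assumed coupling coefficient
L_top = L1 / k        # about 10*L1 = 500nH, the top L of the PI model
C_par = 1e-12         # parasitic coupling capacitance

f_notch = 1.0 / (2 * math.pi * math.sqrt(L_top * C_par))
print(f"notch at {f_notch/1e6:.0f} MHz")   # 225 MHz
```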

It took me a while to spot the parallel resonance. I had assumed a series resonance shorting the signal to ground, and wasted a lot of time looking for that. Parasitic inductance in the capacitors is often the reason for notches above resonance.

This suggests we can position the notch by adjusting the capacitance between the coils, either by spacing or adding a real capacitor. Positioning the notch could be useful, e.g. deeply attenuating energy at an image frequency before a mixer.

### sthbrx - a POWER technical blog: Work Experience At Ozlabs

Wed, 2016-02-24 23:00

As a recent year twelve graduate my knowledge of computer science was very limited and my ability to write working programs all but nonexistent. So you can imagine my excitement when I heard of an opening for work experience with IBM's internationally renowned Ozlabs team, or as I knew them, the Linux Gods. On my first day of working at Ozlabs I learnt more about programming than in six years of secondary education. I met most of the Ozlabs team and made connections that will certainly help with my pursuit of a career in IT. In business it's who you know as much as what you know, and now that I know the guys at Ozlabs, I know how to write code and run it on my own Linux distro. On top of all the extremely valuable knowledge, I am on a first name basis with the Linux Gods at the LTC.

After my first week at Ozlabs I cloned this blog from Octopress and reformatted it for the Pelican static site generator. For those who don't know, Octopress is a Ruby-based static site generator, so converting the embedded Ruby gems to Pelican's Python code was no easy task for this newbie. Luckily I had a team of some of the best software developers in the world to help and teach me their ways. After we sorted the change from Ruby to Python and I was able to understand both languages, I presented my work to the team. They then decided to throw me a curve ball, as they did not like any of Pelican's default themes; instead they wanted the original Octopress theme on the new blog. This is how I learnt that GitHub is my bestest friend, because some kind soul had already converted the Ruby theme into Python and it ran perfectly!

Now it was a simple task of re-formatting the Ruby-gem text files into Markdown, which is Pelican-compatible (which is why we chose Pelican in the first place). So now we had a working Pelican blog with the Octopress theme; the one issue was that it was very annoying to navigate. Using my newly learned skills and understanding of Python I inserted tags, categories, web links and a navigation bar, and I started learning how to code C. And it all worked fine! That was what I, a newbie, could accomplish in one week. I still have two more weeks left here and plenty of really interesting work left to do. This has been one of the greatest learning experiences of my life and I would do it again if I could! So if you are looking for experience in IT or software development, look no further, because you could be learning to code from the people who wrote the language itself. The Linux Gods.

### James Purser: Some random thoughts on #senatereform

Tue, 2016-02-23 11:30

So Turnbull dropped the hammer on the senate yesterday with a set of changes to the way the senate is going to be elected. At the core of these changes will be the removal of the (and there really is no other way to describe them) anti-democratic preference deals that meant that voters lost control over where their preferences were directed if they voted above the line.

The changes have another benefit apart from returning full control over preferences to the voter. They also give all those groups that have sprung up to game the current system a good kick in the goolies.

This is a good thing.

Let me get one thing straight first up. I have no problem with the micro parties competing in elections, the more the merrier I say. It makes for a vibrant democracy when there are multiple, competing groups all representing their particular world views. Where the problem lies is that many of the micro parties that went to the last election were, in essence, empty shells: fronts set up to funnel preferences around until one of the "real" micro parties managed to scrape together enough to get a place in the senate.

The new system will mean that parties won't be able to rely on dodgy deals done with preference farmers, or in fact some of the viler participants in our democratic process. Instead, they will have to compete in the open for the voter's preference, which means they're going to have to do more than just rely on their tiny, tiny bases: they're going to have to work their butts off to engage with the wider electorate.

As can be expected, there has been a lot of wailing and gnashing of teeth since the announcement. Lots of declarations of the death of democracy (again) and so on and so forth. Some of the loudest complaining of all appears to be coming from one of the biggest beneficiaries of preference farming: the ALP. As soon as Turnbull finished his press conference, the ALP was on the hustings accusing the Government of everything from causing confusion and dismay to a dastardly scheme to gerrymander the Senate so that it would permanently be in a state of Coalition control.

You know what? I am personally looking forward to my vote going exactly where I want it to go, and I am especially looking forward to making sure that my preferences don't accidentally end up propping up some of the worst of the worst this country has to offer.

Blog Categories: Politics

### OpenSTEM: Learning New Skills Faster | Washington Post

Mon, 2016-02-22 11:31

https://www.washingtonpost.com/news/wonk/wp/2016/02/12/how-to-learn-new-skills-twice-as-fast/

How to level up twice as quickly.

The short version of the research outcome is that when you are learning new skills, just repeating the exact same task is not actually the most efficient approach. Making subtle changes to the task or routine speeds up the learning process.

Considering our brains work and learn like neural nets (actually, neural nets are modelled on our brains, but anyhow), I’m not surprised: repeating exactly the same thing will strengthen the related pathways. But the real world doesn’t keep things absolutely identical, so introducing small changes in the learning process will create a better pattern in our neural net.

Generally, people regard “dumb” repetition as boring, and I don’t blame them. It generally makes very little sense.

The researchers note that too much variation doesn’t work either – which again makes sense if you think in a neural net context: if your range of pathways is too broad, you’re not strengthening any particular pathway very much. Quantity rather than quality.

So, a bit of variety is good for learning!