Planet Linux Australia
I weighed just over 110 kilos (240 lbs). I decided something had to change -- I have a new daughter, and I want to be around to see her well into her life. So, I joined a gym and started bush walking. My first walk was documented here as a walk up Tuggeranong hill. That rapidly became an obsession with climbing hills to survey markers, which then started to include finding geocaches.
I can't give you a full list of the tangents that one photo from Paris has caused, because the list isn't complete yet and may never be. I now run, swim, ride my bike, and generally sweat on things. It's all fun, and has had the unexpected side effect that it's helped me cope with work stress much more than I expected.
I've lost about 15 kilos (30 lbs) so far. Weight loss isn't really the main goal now, but it's something I continue to track.
I thought it would be interesting to list all the places I've walked in Canberra this year, but a simple bullet point list is too long. So instead, here's an interactive map.
Interactive map for this route.
There are a lot more walks I want to do around here. It's just a case of finding the time.
Tags for this post: blog fitness health weight canberra bushwalk
Related posts: It hasn't been a very good week; A further update on Robyn's health; RIP Robyn Boland; Weekend update; Bigger improvements; An update on Catherine's health
- Ok, so I want to do Mount Tennent from the "wrong" side (Apollo Road up the fire trail). The reason for this is that there is a series of 20 something geocaches along the route that I'd like to tick off as well as walking to Tennent.
I think the total route should be about 13kms, with about 800m of ascent. I propose we leave at least one car at the Apollo Road car park, and then drive the rest to the start point at the Namadgi visitor centre. We can then do the walk, and send one car back to collect the others while we wait under a tree (or something like that).
Naismith's rule says this walk should take about 5 hours.
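Naismith's rule is simple enough to sketch in a few lines of Python. The textbook form allows one hour per 5km walked plus one hour per 600m climbed; that comes out under 4 hours for this route, and the usual allowances for breaks and group pace stretch it toward the 5 hour figure:

```python
def naismith_hours(distance_km, ascent_m, pace_kmh=5.0, climb_m_per_h=600.0):
    """Naismith's rule: time for the distance plus a penalty for ascent."""
    return distance_km / pace_kmh + ascent_m / climb_m_per_h

# 13km with 800m of ascent
print(round(naismith_hours(13, 800), 1))  # 3.9 hours at the textbook rates
```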
I've been meaning to do this one for ages, but finally got around to doing the walk with a few friends. We went up Tennent from the Namadgi visitor centre, but walked back down on the far side of Tennent, which seems less common. The far side is less scenic, but I think less steep as well. Along the way we collected 23 geocaches, and a lovely walk was had by all.
Just over 1,000 meters vertically, and around 16km horizontally.
Interactive map for this route.
Tags for this post: blog pictures 20151228 photo canberra bushwalk
So we rent, we always have. Not from any sort of ideological resistance to owning a house or anything, we just haven't been in the position where it's been a thing we could even consider until now.
A new job has put us into a position where we could start looking at the vague possibility of buying a house, committing to a mortgage and so on. So I've been looking around both at the houses available in the area (I like Dapto, I don't particularly want to move), and at the sort of money we would need to be paying out fortnightly.
So to give you a brief run down on what we pay a fortnight, I'll just put it at about $800. This gives us a three bedroom house with a garage, a single bathroom and toilet and a big back yard. Add in the money it costs to travel to the city to work and you could make the case we're paying around $920 a fortnight.
The average rental in Dapto is $420, so that takes the fortnightly amount up to $960.
Now let's have a look at the buyers market. According to that site, the median house price is $430,918. The current St George 2 year fixed interest rate is 4.19%. According to their loan repayment calculator, this means that our loan repayments would be $1592 for the first two years and $1492 for the rest of the term (depending on which way interest rates go of course).
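For anyone wanting to sanity-check repayment numbers like these, the standard amortising loan formula is easy to compute. This is a minimal sketch assuming a 30 year term and fortnightly repayments; real bank calculators layer fees, fixed-rate periods and rate-step assumptions on top, so it won't reproduce the St George figures exactly:

```python
def repayment(principal, annual_rate, years=30, periods_per_year=26):
    """Fixed repayment per period for a standard amortising loan."""
    r = annual_rate / periods_per_year   # interest rate per period
    n = years * periods_per_year         # total number of repayments
    return principal * r / (1 - (1 + r) ** -n)

# the median Dapto house price at the 4.19% fixed rate
print(round(repayment(430918, 0.0419)))
```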
So, if we do buy a house in this area, we're looking at adding almost half again on top of what we pay at the moment.
If we want to buy a house for close to what we're paying for rent now, we have to look at moving south to Bomaderry or Nowra, both of which have median house prices under $350,000. Unfortunately that also means that I would start travelling for six hours a day to get to work, and Miss 12 would have to pull out of her selective high school, and so on.
We still want to buy a house, but right now it's not going to happen. So renting is going to be our thing for a while yet.

Blog Categories: housing
About a year ago I posted a FreeDV 2015 Road Map. Time for a review and to make plans for 2016.
Summary of 2015
It was a really good year for me professionally, with many goals achieved or pushed forward, and a few unexpected digressions that have opened up interesting new horizons. Highlights include:
- The SM1000 FreeDV Adaptor entered production, is selling steadily, and we are now making the 3rd batch. Some encouraging contributions from the community (thanks Stuart for your firmware work).
- Some exciting test results using FreeDV on VHF channels (thanks Daniel) led to the SM2000 VHF DV radio project. I’ve spent the last few months prototyping hardware for this project and Rick, KA8BMA, is now working on the CAD for Rev A. Lots of exploration and development of VHF modems (thanks Brady).
- The FreeDV 700B mode was released, using a completely new, very robust, coherent PSK HF modem. This met the goal of open source DV down to 0dB SNR. The weakness of this mode is speech quality – it’s OK for brief exchanges at low SNRs but difficult to use conversationally.
- World wide efforts at promoting FreeDV, such as the QSO party (thanks AREG), conference and Hamfest presentations (especially in the US – thanks to Mel and Bruce and many others), the FreeDV Beacon (thanks John), and weekly WIA broadcasts (thanks Mark and Andy).
- The Masking and Trellis work (thanks Eric VK5HSE) shows great promise for improved quality and robustness to channel errors.
FreeDV 2016 Road Map
Here are my goals for 2016:
- Time for another iteration of FreeDV quality/robustness work. The goal is a FreeDV mode with the robustness of FreeDV 700B but the speech quality of FreeDV 1600. I’d also like to address robustness to different microphone/audio/acoustic conditions. This involves a lot of detail work and on air, real world tuning. I’ll call this mode 700C for now, although I have no idea what the final bit rate will be. A higher quality mode for VHF work would also be useful.
- Iteratively develop the SM1000, add a USB virtual comm port for configuration, progress the UI, port a 700-something mode, improve internal microphone quality.
- Release VHF FreeDV modes, one mode that runs through $60 HTs, another that outperforms closed source DV by 10dB. Integrate these modes into the FreeDV GUI program, FreeDV API, SM1000, and SM2000.
- Demonstrate advanced VHF DV features with the SM2000, such as a low cost TDMA repeater, diversity reception, and outperforming analog FM and closed source DV systems by 10dB, all on an open hardware/open software platform.
- Increase on-air FreeDV activity so it’s easy to find a daily QSO on 20m and 40m, through the FreeDV Beacon, conference and Hamfest promotion, QSO Party, broadcasts, test and tune meets where we help get SM1000s/FreeDV running.
Here’s a work flow diagram of how it all fits together:
Some other (open source) project ideas, not strictly connected with FreeDV:
- SDR Direction Finder: A phase based direction finding system that uses a fairly simple RF mixer board (LO+mixer+combiner) to frequency multiplex signals from two antennas into a SDR receiver. I have this partially working on the bench; it can measure the phase shift between the two input ports on the 70cm Ham band. Next step is to connect real antennas and try it with real off air signals. The phase shift can be used to determine a bearing. It’s not Doppler, there is no switching of antennas. The phase is estimated using some AM-like signal processing of the two signals. With two antennas there is a 180 degree ambiguity. If the system works I’ll look into resolving that ambiguity. Would be nice to work with someone on this, it’s kind of a side project.
- Urban Noise Cancellation: Like many HF radio users, I am experiencing S9 background noise levels on the lower HF bands. It’s got to be fixed. I have some ideas on how this can be partially canceled using DSP techniques and two or even one receiver(s). None of my friends and colleagues believe it can be done but that’s never stopped me before. Yes I am aware of analog products that attempt a similar approach (e.g. from MJF). This is very different.
- Open Telemetry Radio: The Near Space Balloon community are migrating to closed source chip sets for telemetry. This needs to be fixed. In October I showed just how powerful open source modems, as applied to telemetry, can be. I would like to develop an open source sub $10 UHF radio for telemetry. My modem experience suggests we can run most of the radio in software on a micro-controller. Many applications in IoT as well, so this project would lead to “open source” IoT radio hardware.
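For the direction finder idea above, the phase-to-bearing step is standard interferometry: a phase difference φ between antennas spaced d apart implies sin(θ) = φλ/(2πd). Here is a small sketch of that conversion (my own illustration of the maths, not the actual bench code), including the 180 degree ambiguity:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def bearings_deg(phase_rad, spacing_m, freq_hz):
    """Return the two bearings (degrees from broadside) consistent with a
    measured phase difference: a two-antenna array can't tell front from
    back, hence the 180 degree ambiguity."""
    wavelength = C / freq_hz
    s = phase_rad * wavelength / (2 * math.pi * spacing_m)
    s = max(-1.0, min(1.0, s))  # clamp small numerical/measurement noise
    theta = math.degrees(math.asin(s))
    return theta, 180.0 - theta

# e.g. a 90 degree phase shift with roughly half-wave spacing on 70cm (435MHz)
print(bearings_deg(math.pi / 2, 0.345, 435e6))  # roughly 30 or 150 degrees
```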
Note: asking me “when” for any of the above will get you an invitation to help “make it happen”!
So I ran into a bit of a hiccup a few weeks back.
An update managed to corrupt the database files that ran this blog and a few other sites. I tried to recover them, but it turns out that InnoDB files are notoriously finicky, so I had to turn to the backups.
Sigh. Last backup was in June.
Well, in the washup it wasn't a great loss; only five blog posts and an episode of Purser Explores The World needed to be replaced.
On the upside, I've switched from the vanilla mysql to MariaDB, so we'll see how that goes.
Oh, and I've finally fixed up the mail setup so that it does things like SPF and requires login for SMTP.

Blog Categories: web things
A few months ago John, VK5DM/3IC, approached me with the idea of setting up a FreeDV beacon. He offered to set up the radio, antenna, computer hardware, and Internet connectivity in Sunbury, a semi rural location NW of Melbourne. I have written the software, based on the FreeDV API and some code lifted from the FreeDV GUI program.
The beacon gives people testing FreeDV “someone to talk to”, and will encourage FreeDV activity. It’s also useful for me to have an automated receiver on an interstate (650kms/400miles) path for FreeDV development.
The beacon listens to a HF radio. When a FreeDV 1600 signal is detected, it starts parsing the text message. When it receives a “trigger” word, it will transmit a reply. A signal report consisting of the SNR and Bit Error Rate (BER) is sent in the text message of the reply. The beacon also records the incoming radio signal and decoded speech to wave files which are posted on a web site.
The first FreeBeacon (VK3CHN) is running on a Raspberry Pi with USB audio and RS232 dongles connected to a surplus Codan radio. VK3CHN is the callsign of a “silent key” friend of John’s, who was well known locally for his experimental work. His family asked that his callsign be kept in use for experimental activities.
Here’s an example of a decoded wave file logged by the beacon. The received modem signal is also recorded by the beacon and posted. These files were generated from a transmission at my location 650km away that was logged by the beacon. Conditions were marginal; an SSB contact over the same path was readability 4.
The beacon is up and running on 7.177MHz; you can trigger it using FreeDV 1600 with “VK3CHN” in your transmit text message. Please email John (john_at_bungama_dot_com) for the web site URL – it’s on a home ADSL connection so bandwidth is limited.
Here is the freebeacon software including installation instructions.
Thanks Richard Shaw for adding cmake support to the project.
FreeDV on a Raspberry Pi
I’m one of the few people on the planet who hasn’t used an RPi until now. I like the way it can drive a keyboard and HDMI monitor directly. It’s very nice to have an “embedded” system with the full apt-get repositories and the ability to compile on the machine itself, rather than cross compiling on a host. It’s so small it ends up dangling in the air on the end of USB and Ethernet cables.
Here is a test of decoding 120 seconds of FreeDV 1600 on my Lenovo X220 laptop (i5-2520M CPU @ 2.50GHz):
david@penetrator:~/freebeacon$ time ~/codec2-dev/build_linux/src/freedv_rx 1600 freebeacon_test.wav /dev/null
And the RPi model B (ARMv7 Processor rev 5 (v7l)):
pi@raspberrypi ~/freebeacon $ time ~/codec2-dev/build_linux/src/freedv_rx 1600 freebeacon_test.wav /dev/null
In both cases it’s running on a single core, and libcodec2 is compiled using -O2 optimisation. The RPi is 17.8/1.2 ≈ 15 times slower than my laptop, but a FreeDV 1600 decode still uses just 17.8/120 ≈ 15% of a single core – comfortably real time. Fast enough.
I developed Freebeacon on my laptop using a “Rig Blaster” USB sound device, then moved to the RPi. On the RPi I discovered clicks in the audio that took a few hours to debug. Eventually I fixed the problem by tweaking the framesPerBuffer argument in the Pa_OpenStream() call below (the n48 line):

err = Pa_OpenStream(
          ...
          n48,   /* framesPerBuffer: changed from 0 to n48 to get RPi audio to work without clicks */
          ...
          NULL,  /* no callback, use blocking API */
          ...);
I’ve hit similar issues on the FreeDV GUI program; the buffer size negotiated between the end user, PortAudio, and the underlying Linux sound system seems important.
The software can use RS232 DTR/RTS to key the transmitter PTT. My USB to RS232 serial cables worked on my laptop but not on the RPi. Not sure why, I could toggle the lines on the RPi with the stty command, so it must be a configuration or driver issue. Does anyone have any suggestions? Another USB RS232 cable John supplied worked just fine. Oh Well.
There is also a command line option to use the RPi GPIOs to key the transmitter.
Tips and Tricks
During set up it’s useful to play sound files out of your sound card. Sox “play” can be used on the non-default sound device:
$ AUDIODEV=hw:1,0 play freebeacon_test.wav
Use “aplay -l” to list the sound card/device
We used alsamixer to set the levels. You can change sound cards with F6. This didn’t work on all ssh clients, notably one used from Windows wouldn’t pass the function keys.
In the freebeacon repository there is a test file freebeacon_test.wav that can be used to test the beacon. As a first step play this file out of another laptop into the freebeacon sound card. Set the trigger string on freebeacon to “hello”. Check that your freebeacon gets sync, a reasonable SNR, and you see “Tx Triggered” when the text string is received.
It is recommended to use an 8kHz mono sound file for the transmit source audio wave file. You can convert any wave file to 8kHz mono using sox, for example:
sox vk3chn.wav -r 8000 -c 1 vk3chn_8kmono.wav
To pass your transmit audio through FreeDV 1600 and listen to the output quality on your default sound device:
~/codec2-dev/build_linux/src$ ./freedv_tx 1600 ~/Desktop/vk3chn_8kmono.wav - | ./freedv_rx 1600 - - | play -t raw -r 8000 -s -2 -
Or to save the result to a wave file:
~/codec2-dev/build_linux/src$ ./freedv_tx 1600 ~/Desktop/vk3chn_8kmono.wav - | ./freedv_rx 1600 - - | sox -t raw -r 8000 -s -2 - ~/Desktop/vk3chn_freedv1600.wav
This is useful to check sound quality before activating the beacon.
On receive watch out for the overload level, and reduce your radio’s AF gain or the sound card input gain if it’s hard up against 32767. Something like 5000-10000 is fine. It’s not critical, as long as you are not clipping.
The “-v” option is very useful for testing. Watch for the “sync” flag as a valid FreeDV signal is detected, and for “Tx Triggered” when a text message is received.
More info in the freebeacon README.
It would be great to have other people set up some more Freebeacons, in particular on 14.236MHz, to log traffic and measure propagation.
Freebeacon only builds on Linux; however, it could be modified to run on Windows. If you are interested in setting up a Freebeacon on Win32 please let me know.
Many other ideas are in the freebeacon README. I would love to see some C code patches submitted with your ideas!
Interactive map for this route.
Tags for this post: blog pictures 20151223 photo canberra bushwalk
I’ve just finished reading “The Bishop’s Boys” – a biography of the Wright Brothers, and the invention of the airplane. I bought this book while visiting the Smithsonian in Washington 25 years ago, and have read it a few times. I visited Dayton in 2012 and saw a few places mentioned in the book, such as Huffman Prairie and their bicycle shop.
It’s quite a good read, I especially enjoyed the story of how they “engineered” the aeroplane. For example systematic wind tunnel tests of various wing surfaces, appreciation of the need for control in the roll axis, and calculations of the thrust required for a powered craft. At the time everyone else was using guesswork.
The picture painted of nineteenth century suburban life was also interesting, quite similar to our own. One big difference was the number of people (indeed many of the Wright family) dying from infectious disease at relatively young ages. In the developed world we have made huge advances in public sanitation, antibiotics, and vaccinations.
However I am critical of the Wrights’ “patent wars”. They spent many years trying to sell their technology and fighting patent infringement, which slowed down development of the art, particularly in the pre-WW1 USA. The stress and fatigue of the legal battles contributed to the early death of Wilbur Wright. The Wrights themselves were slow to employ the useful technology of others (rear elevator, wheels, intuitive cockpit controls), as they considered those who developed it infringers. They weren’t motivated by money, more by principle, and had been brought up with a family history of courtroom drama.
I feel the “open source” approach is much better – share the IP, combine your contribution with that of others, and nudge the entire world forward. A useful lesson.
Interactive map for this route.
Tags for this post: blog orienteering
Related posts: Scout activity: orienteering at Mount Stranger
Six months ago in a previous post I showed that 45% of transactions have an output of less than $1, and estimated that they would get squeezed out first as blocks filled. It’s time to review that prediction, and also to look at several things:
- Are fees rising?
- Are fees detached from magic (default) numbers of satoshi?
- Are low value transactions getting squeezed out?
- Are transactions starting to shrink in response to fee pressure?
Here are some scenarios: low-value transactions might be vanishing even if nothing else changes, because people’s expectations (“free global microtransactions!”) are changing. Fees might be rising but still on magic numbers, because miners and nodes increased their relayfee due to spam attacks (most commonly, the rate was increased from 1000 satoshi per kb to 5000 satoshi per kb). Finally, we’d eventually expect wallets which produce large transactions (eg. using uncompressed signatures) to lose popularity, and wallets to get smarter about transaction generation (particularly once Segregated Witness makes it fairly easy).

Fees For The Last 2 Years
Conclusion: Too noisy to be conclusive: they seem to be rising recently, but some of that reflects the exchange rate changes.

Are Fees on Magic Boundaries?
Wallets should be estimating fees: in a real fee market they’d need to.
Dumb wallets pay a fixed fee per kb: eg. the bitcoin-core wallet pays 1,000 (now 5,000) satoshi per kb by default; even if the transaction is 300 bytes, it will pay 5,000 satoshi. Some wallets use (slightly more sensible) scaling-by-size, so they’d pay 1,500 satoshi. So if a transaction fee ends in “000”, or the scaled transaction fee does (+/- 2), we can categorize them as “fixed fee”. We assume others are using a variable fee (about 0.6% will be erroneously marked as fixed):
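That classification heuristic can be sketched in a few lines (my own restatement of the rule just described, not the actual analysis code):

```python
def ends_in_000(x, tolerance=2):
    """True if x is within +/- tolerance of a multiple of 1000 satoshi."""
    r = x % 1000
    return min(r, 1000 - r) <= tolerance

def is_fixed_fee(fee_satoshi, tx_bytes, tolerance=2):
    """A fee is classed 'fixed' if it ends in 000, or its per-kb scaled
    equivalent does; anything else is assumed to be a variable fee."""
    if ends_in_000(fee_satoshi, tolerance):
        return True
    return ends_in_000(fee_satoshi * 1000 / tx_bytes, tolerance)

print(is_fixed_fee(5000, 300))  # True: flat 5,000 satoshi payer
print(is_fixed_fee(1500, 300))  # True: 5,000/kb scaled by size
print(is_fixed_fee(1731, 300))  # False: looks like a variable fee
```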
This graph is a bit dense, so we thin it by grouping into weeks:
Conclusion: Wallets are starting to adapt to fee pressure, though the majority are still using a fixed fee.

Low Value Transactions For Last 4 Years
We categorize 4 distinct types of transactions: ones which have an output below 25c, ones which have an output between 25c and $1, ones which have an output between $1 and $5, and ones which have no output below $5, and graph the trends for each for the last four years:
Conclusion: 25c transactions are flat (ignoring those spam attack spikes). < $1 and < $5 are growing, but most growth is coming from transactions >= $5.

Transaction Size For Last 4 Years
Here are the transaction sizes for the last 4 years:
Conclusion: There seems to be a slight decline in transaction sizes, but the cause isn’t clear, and it might be just noise.

Conclusion
There are signs of a nascent fee market, but it’s still very early. I’d expect something conclusive in the next 6 months.
The majority of fees should be variable, and they’re not: wallets remain poor, but users will migrate as blocks fill and more transactions get stuck.
A fee rate of over 10c per kb (2.5c per median transaction) hasn’t suppressed 25c transactions: perhaps it’s not high enough yet, or perhaps wallets aren’t making the relative fees clear enough (eg. my Trezor gives fees in BTC, as well as only offering fixed fee rates).
The slight dip in mean transaction sizes and lack of growth in 25c transactions may point to early market pressure, however.
Six months ago I showed that 45% of transactions were less than a dollar. In the last six months that has declined to 38%. I previously estimated that we would want larger blocks within two years, and need them within three. That still seems a reasonable estimate.

Data: Blockstream.
On lightning. Not on drawing pretty graphs. But I wanted to see the data…
This is a terse guide on bootstrapping virtual-machine images for OpenStack infrastructure, with the goal of adding continuous-integration support for new platforms. It might also be handy if you are trying to replicate the upstream CI environment.
It covers deployment to Rackspace for testing; Rackspace is one of the major providers of capacity for the OpenStack Infrastructure project, so it makes a good place to start when building up your support story.
Firstly, get an Ubuntu Trusty environment to build the image in (other environments, like CentOS or Fedora, probably work -- but take this step to minimise differences from what the automated upstream machinery does). You want a fair bit of memory, and plenty of disk-space.
The tool used for building virtual-machine images is diskimage-builder. In short, it takes a series of elements, which are really just scripts run in different phases of the build process.
I'll describe building a Fedora image, since that's been my focus lately. We will use a fedora-minimal element -- this means the system is bootstrapped from nothing inside a chroot environment, before eventually being turned into a virtual-machine image (contrast this to the fedora element, which bases itself off the qcow2 images provided by the official Fedora cloud project). Thus you'll need a few things installed on the Ubuntu host to deal with bootstrapping a Fedora chroot:

apt-get install yum yum-utils python-lzma
You will hit stuff like that python-lzma dependency on this road-less-travelled -- technically it is a bug that yum packages on Ubuntu don't depend on it; without it you will get strange yum errors about unsupported compression.
At this point, you can bootstrap your diskimage-builder environment. You probably want diskimage-builder from git, and then build up a virtualenv for your support bits and pieces:

git clone git://git.openstack.org/openstack/diskimage-builder
virtualenv dib_env
. dib_env/bin/activate
pip install dib-utils
dib-utils is a small part of diskimage-builder that is split-out; don't think too much about it.
While diskimage-builder is responsible for the creation of the basic image, there are a number of elements provided by the OpenStack project-config repository that bootstrap the OpenStack environment. This does a lot of stuff, including caching all git trees (so CI jobs aren't cloning everything constantly) and running puppet setup:

git clone git://git.openstack.org/openstack-infra/project-config
There's one more trick required for building the VHD images that Rackspace requires; make sure you install the patched vhd-util as described in the script help.
At this point, you can probably build an image. Here's something approximating what you will want to do:

break=after-error \
TMP_DIR=~/tmp \
ELEMENTS_PATH=~/project-config/nodepool/elements \
DIB_DEV_USER_PASSWORD="password" DIB_DEV_USER_PWDLESS_SUDO=1 \
DISTRO=23 \
./bin/disk-image-create -x --no-tmpfs -t vhd \
   fedora-minimal vm devuser simple-init \
   openstack-repos puppet nodepool-base node-devstack
To break some of this down
- break= will help you debug failing builds by dropping you into a shell when one of the element parts fail.
- TMP_DIR should be set to somewhere with plenty of space; a default /tmp with tmpfs is probably restricted to a couple of gigabytes; not enough for a build.
- ELEMENTS_PATH will add the OpenStack specific elements
- DIB_DEV_USER_* flags will create a devuser login with a password and sudo access. This is really important as it is most likely you'll boot up fairly broken the first time, and you need a way to log-in (this is not used in "production").
- DISTRO in this case says to build a Fedora 23 based-image, but will vary depending on what you are trying to do.
- disk-image-create finally creates the image. We are telling it to create a vhd based image, using fedora-minimal in this case. For configuration we are using simple-init, which uses the glean project to configure networking from a configuration-drive.
- By default, this will install glean from pypi. However, adding DIB_INSTALL_TYPE_simple_init=repo would modify the install to use the git source. This is handy if you have changes in there that are not released to pypi yet.
This goes and does its thing; it will take about 20 minutes. Depending on how far your platform diverges from the existing support, it will require a lot of work to get everything working so you can get an image out the other side. To see a rough example of what should be happening, see the logs of the official image builds that happen for a variety of platforms.
At some point, you should get a file image.vhd which is now ready to be deployed. The only reasonable way to do this is with shade. You can quickly grab this into the virtualenv we created before:

pip install shade
Now you'll need to set up a clouds.yaml file to give yourself the permissions to upload the image. It should look something like:

clouds:
  rax:
    profile: rackspace
    auth:
      username: your_rax_username
      password: your_rax_password
      project_id: your_rax_accountnum
    regions:
      - IAD
You should know your user-name and password (whatever you log into the website with), and when you log in to Rackspace your project_id value is listed in the drop-down box labeled with your username as Account #. shade has no UI as such, so a simple script will do the upload:

import shade

shade.simple_logging(debug=True)
cloud = shade.openstack_cloud(cloud='rax')
image = cloud.create_image('image-name', filename='image.vhd', wait=True)
Now wait -- this will also take a while. Even after upload it takes a fair while to process (you will see the shade debug output looping around some glance calls seeing if it is ready). If everything works, the script should return and you should be able to see the new image in the output of nova list-images.
At this point, you're ready to try booting! One caveat is that the Rackspace web interface does not seem to give you the option to boot with a configuration drive available to the host, which is essential for simple-init to bring up the network. So boot via the API with something like:

nova boot --flavor=2 --image=image-uuid --config-drive 1 test-image
This should build and boot your image! This will allow you to open a whole new door of debugging to get your image booting correctly.
You can now iterate by rebuilding/uploading/booting as required. Note these are pretty big images, and uploaded in broken-up swift files, so I find swift delete --all helpful to reset between builds (obviously, I have nothing else in swift that I want to keep). The Rackspace java-based console UI is rather annoying; it cuts itself off every time the host reboots. This makes it quite difficult to catch the early bootup, or modify grub options via the boot-loader, etc. You might need to fiddle timeout options, etc in the image build.
If you've managed to get your image booting and listening on the network, you're a good-deal of the way towards having your platform supported in upstream OpenStack CI. At this point, you likely want to peruse the nodepool configuration and get an official build happening here. Once that is up, you can start the process of adding jobs that use your platform to test!
Don't worry, there's plenty more that can go wrong from here -- but you're on the way! OpenStack infra is a very dynamic environment, with many changes in progress; so in general, #openstack-infra on freenode is going to be a great place to start looking for help.
For the past eight years I've worked at the Victorian Partnership for Advanced Computing, also known as V3 Alliance, its trading name after merging with the Victorian eResearch Initiative. Today is my last official day, although I suspect I'll be doing "VPAC things" for a while yet.
I now have a prototype 1W PA working (based on a RD01MUS2 FET) and need to design a filter to remove harmonics. With a transmit signal around 150MHz, this filter needs to attenuate harmonics at 300 and 450MHz.
My first attempt was a Q=5 “Pi” filter (56pF, 33nH, 56pF), which can be viewed as two L networks back to back with a virtual 5 ohm resistance in the middle. However when tested it had poor stop band rejection, just 30dB maximum at 280MHz (yellow):
After a day of messing about, reading, and LTSpice simulations, I traced the issue to stray inductance in the capacitor leads. By reducing the lead length for the capacitors, stop band performance improved by over 20dB (purple)!!
Here is a photo of a 4.7pF cap with roughly the lead lengths I started with:
Not much but combined with 50pF or more of capacitance the inductance of these leads can have a big effect. To reduce the lead inductance I soldered the capacitors across the back of the SMA connectors, and clipped the leads very short:
A few mm of lead can make a big difference, around 1nH per mm. This becomes quite significant at UHF, e.g. 10mm is 10nH which is 31 ohms at 500MHz.
Here are two simulations with long (green) and short (blue) capacitor leads, modeled as 8nH and 2nH series inductance. The longer 8nH leads have a series resonant frequency f = 1/(2*pi*sqrt(LC)) = 259MHz, before our 2nd harmonic. This makes the initial slope steep and produces a notch in the frequency response; however, after resonance the capacitor is inductive, and the stop band attenuation starts to get worse. The notch is not visible on the real world sweep, perhaps due to the finite Q of the real world capacitor. However the same 30dB stop band “floor” can be seen in the simulation.
Also modeled is a guess of the 33nH conductor series resistance and parasitic parallel capacitance.
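Both figures above are easy to verify numerically: lead reactance is X_L = 2πfL, and series resonance is f = 1/(2π√(LC)). A quick check (assuming a 47pF capacitor, the value that reproduces the quoted 259MHz):

```python
import math

def reactance_ohms(inductance_h, freq_hz):
    """Inductive reactance X_L = 2*pi*f*L."""
    return 2 * math.pi * freq_hz * inductance_h

def series_resonance_hz(l_h, c_f):
    """Series resonant frequency f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_h * c_f))

print(reactance_ohms(10e-9, 500e6))             # ~31 ohms: 10mm of lead at 500MHz
print(series_resonance_hz(8e-9, 47e-12) / 1e6)  # ~259MHz: 8nH leads on a 47pF cap
```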
This sensitivity to component lead lengths was a nasty surprise! Although I’m developing a VHF radio, the behavior of components at UHF needs to be taken into account. Self resonance and parasitic effects are among those things I’ve “sort of known” for a while – but the experience of a screw up and having to solve it really drives the lesson home! So: best to use small value, surface mount capacitors and ensure the self resonant frequency is above 500MHz. Quite amazing what 5mm of component lead can do at 500MHz!
5th order Chebyshev
Armed with this graphical lesson in UHF construction I set about building a 0.5dB ripple, N=5 Chebyshev filter, with a 3dB cut off of 180MHz, using the tables in RF Circuit Design (by Chris Bowick).
Here is the circuit:
If you zoom in on the photo you can just see a 33pF SM capacitor soldered across the back of the SMA connectors. I used a 47pF through hole cap, however I soldered the entire 5mm lead of one end to ground and had virtually no lead at the hot end. I used air core inductors for high Q, mounted at right angles to minimise coupling. They are 4 turns loosely spaced on a 5mm ID drill bit. I adjusted them with my tracking generator/spec-an to series resonate at 98MHz with a 47pF capacitor.
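The inductance that adjustment procedure is aiming for can be solved from the same resonance formula, rearranged for L (the 98MHz and 47pF values are from the post; the resulting value is my own arithmetic):

```python
import math

def resonant_inductance_h(freq_hz, c_farads):
    """Inductance that series-resonates with C at f: L = 1 / ((2*pi*f)^2 * C)."""
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * c_farads)

# Target inductance for resonating at 98MHz with a 47pF capacitor
print(round(resonant_inductance_h(98e6, 47e-12) * 1e9, 1))  # -> 56.1 (nH)
```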
Here is the sweep of the filter:
I am very proud of this sweep! At least 70dB stop-band attenuation all the way out to 1.5GHz. Yayyyyyyyyy.
Here is the output spectrum of my prototype 1W PA after being cleaned up by the filter:
With an output power of 1W (+30dBm) the 2nd harmonic is 53dB down. This exceeds the ACMA/ITU Amateur Radio Spec of 43dB + 10log10(P) by 10dB. For comparison the 2nd harmonic of my FT-817 with 27dBm (0.5W) output is 56dB down. My Baofeng UV-5R on low power (+32dBm) has several rather interesting VHF spurious emissions, the worst being just 42dB down at 180MHz.
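The spec margin quoted above works out as follows, using the 43dB + 10log10(P) formula from the post with P in watts:

```python
import math

def spurious_limit_db(p_watts):
    """Required spurious attenuation below carrier: 43 + 10*log10(P) dB."""
    return 43 + 10 * math.log10(p_watts)

measured_db_down = 53                      # 2nd harmonic of the 1W PA, from the sweep
margin = measured_db_down - spurious_limit_db(1.0)
print(margin)  # -> 10.0 dB over the requirement
```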
I’m no Yaesu (my DV system is better), but these results are not bad for a VHF/UHF noob.
I stumbled across Construction Techniques for LC Highpass and Lowpass Filters used in the 1 MHz to 1 GHz Frequency Range. This is a really thorough treatment of how parasitic effects upset filters, with lots of experimental results and tips to improve real world filters.
Regional and rural technology advocate, Federation University Council Member, former lawyer and Chair of Internet Australia, George Fong, will be one of four outstanding keynote speakers for linux.conf.au in February 2016. Eminently respected within the region for his tireless efforts in advancing technology access, governance and inclusion, his advice and counsel are sought far and wide.
Most recently, Fong has been a driving force in the re-branding of the Internet Society of Australia into Internet Australia, transforming the organisation to be relevant and representative of a broad cross-section of Australians.
“Internet Australia is the peak body representing everyone who uses the Internet. Much of our work is dedicated to keeping the Internet as open and free from undue government interference as possible and seeing innovation thrive in the sector. It goes without saying that a significant part of that process is fuelled by the open source and Linux community.”
"There are new challenges in maintaining the integrity and utility of the Internet. Those challenges are redefining who we are in the technical community and what our roles and responsibilities are for communities and businesses nationally and internationally.”
Conference Director David Bell was thrilled to announce that Mr Fong would be keynoting.
“We’re delighted to be able to bring George’s considered wisdom, academic intellect and world view to linux.conf.au 2016 Geelong - LCA By the Bay. The need for well researched, inclusive and representative technology policies is more important than ever given the disruption the world faces in coming years”.
One of the most respected technical conferences in Australia, Linux Conference Australia (linux.conf.au) will make Geelong home between 1st-5th February 2016. The conference is expected to attract over 500 national and international professional and hobbyist developers, technicians and innovative hardware specialists, and will feature nearly 100 Speakers and presentations over five days. Deakin University’s stunning Waterfront Campus will host the conference, leveraging state of the art networking and audio visual facilities.
The conference delivers Delegates a range of presentations and tutorials on topics such as open source hardware, open source operating systems and open source software, storage, containers and related issues such as patents, copyright and technical community development.
Linux is a computer operating system, in the same way that MacOS, Windows, Android and iOS are operating systems. It can be used on desktop computers, servers, and increasingly on mobile devices such as smartphones and tablets.
Linux embodies the ‘open source’ paradigm of software development, which holds that source code – the code that is used to give computers and mobile devices functionality – should be ‘open’. That is, the source code should be viewable, modifiable and shareable by the entire community. There are a number of benefits to the open source paradigm, including facilitating innovation, sharing and re-use. The ‘open’ paradigm is increasingly extending to other areas such as open government, open culture, open health and open education.
Potential Delegates and Speakers are encouraged to remain up to date with conference news through one of the following channels:
- Website: http://lcabythebay.org.au
- Twitter: @linuxconfau, hashtag #lca2016
- Facebook: https://www.facebook.com/lcabythebay
- Google+: https://www.google.com/+LcabythebayOrgAu
- Lanyrd: http://lanyrd.com/2016/linuxconfau/
- IRC: #linux.conf.au on freenode.net
- Email: firstname.lastname@example.org
- Announce mailing list: http://lists.linux.org.au/mailman/listinfo/lca-announce
We warmly encourage you to forward this announcement to technical communities you may be involved in.
Back in the day, many people had a job for life. Today, the average number of jobs a person might have in their lifetime is six. In future, trends suggest we'll have 6 jobs at once.
I resemble that remark. This year I've found myself saying I have 5 jobs. 1 for each day of the week.
Mainly, I run my own business, Creative Contingencies, and have done so since 1997. But I'm also
- a member of the Open Invention Network's global licensing team,
- on the Drupal Watchdog team at Tag1 Consulting,
- on the board of the Drupal Association and also
- chair of the Drupal community working group.
Two of those jobs are voluntary. Being involved in the open source community is awesome. But sometimes it can be tricky. This morning I was reading an interview with Karen Sandler about "what we mean by we". I was struck by this bit in particular.
"There are blurry lines everywhere in FOSS: between what is personal and what is professional, between volunteers and paid contributors, between non-profit organizations and for-profit companies, and even between the ideological and commercial goals that motivate the work."
We are often asked to compartmentalise these things. I'm not always able to do so. I'm always me. Whichever hat I happen to be wearing.
Earlier on in benchmarking MySQL and MariaDB on POWER8, we noticed that on write workloads (or read workloads involving a lot of IO) we were spending a bunch of time computing InnoDB page checksums. This is a relatively well known MySQL problem that has existed for many years; Percona even added innodb_fast_checksum to Percona Server to help alleviate it.
In MySQL 5.6, we got the ability to use CRC32 checksums, which are great in that they’re a lot faster to compute than the old InnoDB “new” checksum. There’s code inside InnoDB to use the x86 SSE2 crc32q instruction to accelerate performing the checksum on compatible x86 CPUs (although oddly enough, the CRC32 checksum in the binlog does not use this acceleration).
However, on POWER, we’d end up using the software implementation of CRC32, which used a lot more CPU than we’d like. Luckily, CRC32 is really common code, and for POWER8 we got some handy instructions to help compute it. Unfortunately, this required brushing up on vector polynomial math in order to understand how to do it all quickly. The end result was Anton coming up with crc32-vpmsum code that we could drop into projects that embed a copy of crc32, and that was about 41 times faster than the best non-vpmsum implementation.
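As a rough illustration of the per-page checksum work being accelerated (not the vpmsum code itself; note also that InnoDB's CRC32 uses the CRC-32C polynomial, whereas Python's zlib uses the IEEE polynomial, so this is only a sketch of the shape of the computation):

```python
import zlib

# Software CRC32 over a 16KB buffer, the default InnoDB page size.
# This is the kind of per-page work that the POWER8 vpmsum-based
# implementation speeds up by replacing a byte-at-a-time software loop.
page = bytes(range(256)) * 64          # 16384 bytes of sample data
checksum = zlib.crc32(page) & 0xFFFFFFFF
print(hex(checksum))
```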
Recently, Daniel Black took the patch that had passed through both Daniel Axten‘s and my hands and worked on upstreaming it into MariaDB and MySQL. We did some pretty solid benchmarking on the improvement you’d get, and we pretty much cannot notice the difference between innodb_checksum=off and having it use the POWER8 accelerated CRC32 checksum, which frees up maybe 30% of CPU time to be used for things like query execution! My original benchmark showed a 30% improvement in sysbench read/write workloads.
The excellent news? Two days ago, MariaDB merged POWER8 accelerated crc32! This means that IO heavy workloads on MariaDB on POWER8 will get much faster in the next release.
MySQL bug 74776 is open, with patch attached, so hopefully MySQL will merge this soon too (hint hint).