Planet Linux Australia


Michael Still: Chet and I went on an adventure to LA-96

Tue, 2015-07-28 11:29
So, I've been fascinated with American nuclear history for ages, and Chet and I got talking about what nuclear launch facilities, if any, there were in LA. We found LA-96 online and set off on an expedition to explore. An interesting site; it's a pity there are no radars left there. Apparently SF-88 is the place to go for tours from vets and radars.



I also made a quick and dirty 360 degree video of the view of LA from the top of the nike control radar tower:

Interactive map for this route.

Tags for this post: blog pictures 20150727-nike_missile photo california

Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; Views from a lookout on Mulholland Drive, Bel Air


Michael Still: Geocaching with TheDevilDuck

Mon, 2015-07-27 08:29
In what amounts to possibly the longest LAX layover ever, I've been hanging out with Chet at his place in Altadena for a few days on the way home after the Nova mid-cycle meetup. We decided that, being the dorks that we are, we should do some geocaching. These are just some quick pics of some unexpected bush land -- I never thought LA would be so close to nature, but this part certainly is.


Interactive map for this route.

Tags for this post: blog pictures 20150727 photo california bushwalk

Related posts: A walk in the San Mateo historic red woods; First jog, and a walk to Los Altos; Goodwin trig; Did I mention it's hot here?; Big Monks; Summing up Santa Monica


Sridhar Dhanapalan: Twitter posts: 2015-07-20 to 2015-07-26

Mon, 2015-07-27 01:27

David Rowe: Microphone Placement and Speech Codecs

Sat, 2015-07-25 12:30

This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two way HF/VHF) we tend to use microphones placed close to our lips. In telephony, hands-free or more distant microphone placement has become common.

People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.

So why does microphone placement matter?

Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.

The Model

A microphone is a bit like a radio front end:

We assume linearity (the microphone signal isn’t clipping).

Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker’s lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.

Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.
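The size of that SNR hit is just the inverse square law — a quick sketch under a free-field assumption (real rooms add reflections, which is the next point):

```python
import math

def level_drop_db(d_near_m, d_far_m):
    """Free-field (inverse square) level difference between two mic distances."""
    return 20 * math.log10(d_far_m / d_near_m)

# Moving the same mic from 2cm to 50cm from the lips:
print(round(level_drop_db(0.02, 0.50), 1))  # 28.0 dB less signal, so ~28dB worse SNR
```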

When we are very close to a microphone, the low frequencies tend to be boosted; this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.

A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.

Science in my Lounge Room 1 – Proximity Effect

I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.

To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled using the microphone at different distances from a speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone power using Audacity (spectrum feature).
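The two-tone test file is easy to reproduce — a minimal sketch (the post doesn’t state the sample rate, duration, or levels, so those are my assumptions):

```python
import math
import struct
import wave

FS = 8000                # sample rate, Hz (assumed)
DURATION = 5.0           # seconds (assumed)
TONES = (100.0, 1000.0)  # the two test tones from the post, Hz

def two_tone(path):
    """Write a mono 16-bit wave file containing the sum of the two test tones."""
    n = int(FS * DURATION)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(FS)
        frames = bytearray()
        for i in range(n):
            t = i / FS
            # 0.45 amplitude each keeps the sum safely below full scale
            x = sum(0.45 * math.sin(2 * math.pi * f * t) for f in TONES)
            frames += struct.pack("<h", int(x * 32767))
        w.writeframes(bytes(frames))

two_tone("two_tone.wav")
```

Play that through the speaker, record at each distance, and compare the two tone powers in Audacity as described.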

This spreadsheet shows the results over a couple of runs (levels in dB).

So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small, could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!

Apparently this effect can be as large as 16dB for some microphones. Apparently radio announcers use this effect to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.

In my case it means unwanted extra low frequency energy messing with Codec 2 with some closely placed microphones.

Science in my Lounge Room 2 – Multipath

So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.

The idea is to get a reflection off the coffee table. The direct and reflected waves will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.

Let’s take a look at the frequency response close up and at 25cm:

Hmm, they are both a bit of a mess. Apparently I don’t live in an anechoic chamber. Hmmm, that might be handy for kids’ parties. Anyway I can observe:

  1. The signal falls off a cliff at about 1000Hz. Well that will teach me to use a speaker with an active cross over for these sorts of tests. It’s part of a system that normally has two other little speakers plugged into the back.
  2. They both have a resonance around 500Hz.
  3. The close sample is about 18dB stronger. Given both have the same noise level, that’s 18dB better SNR than the more distant sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won’t improve.

OK, let’s look at the reflections:

A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.

Now the microphone and speaker aperture are 16cm off the table, and the mic 25cm away. A couple of right angle triangles, a bit of Pythagoras, and I make the reflected path length 40.6cm. This means a path difference of 40.6 – 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v = f(wavelength), and v = 340m/s, so we expect a notch at f = 340/(2 × 0.156) ≈ 1090Hz.
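The geometry can be checked in a few lines — same Pythagoras, with the 16cm height and 25cm spacing from the post:

```python
import math

V_SOUND = 340.0   # speed of sound, m/s
HEIGHT = 0.16     # speaker aperture and mic height above the table, m
DIRECT = 0.25     # direct path length, m

# Reflection off the table: two right-angle triangles, each spanning half
# the direct distance horizontally and the full height vertically.
reflected = 2 * math.hypot(DIRECT / 2, HEIGHT)   # reflected path length, m
diff = reflected - DIRECT                        # path difference, m
notch = V_SOUND / (2 * diff)                     # wavelength/2 = diff  =>  f = v/(2*diff)

print(round(reflected, 3), round(diff, 3), round(notch))  # 0.406 0.156 1089
```

(Rounding the path difference to 15.6cm first, as in the text, gives ~1090Hz.)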

Looking at a zoomed version of the 25cm spectrum:

I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.

Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:

Note that notches will occur at any frequency where the path difference is an odd number of half wavelengths, i.e. wavelength/2, 3×wavelength/2, 5×wavelength/2, … hence we get a comb effect along the frequency axis.
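That comb is easy to tabulate. For the floor bounce the post measures a 1.3m reflected path against the 0.25m direct path, i.e. a 1.05m difference — a quick sketch using the same v = 340m/s:

```python
V_SOUND = 340.0  # m/s

def comb_notches(path_diff_m, f_max_hz=1500.0):
    """Notch frequencies where the path difference is an odd number of half wavelengths."""
    notches, n = [], 0
    while True:
        n += 1
        f = (2 * n - 1) * V_SOUND / (2 * path_diff_m)
        if f > f_max_hz:
            return notches
        notches.append(round(f))

print(comb_notches(1.05))  # floor bounce: [162, 486, 810, 1133, 1457]
```

The 486, 810, 1133, and ~1460Hz predictions discussed below all come out of this one formula.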

OK, I can see the predicted notches at 486Hz and 1133Hz, which means the 1050Hz notch is probably the one off the table. I can’t explain the 1300Hz notch, and there’s no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that’s not bad at all for a first go!

If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.

How to break a low bit rate speech codec

Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example the time varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency, and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.
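A small numerical illustration of why an all-pole synthesis filter can’t make a notch (the resonator coefficients here are invented for the demo, not Codec 2’s):

```python
import cmath
import math

def allpole_mag(a, w):
    """Magnitude response of H(z) = 1/A(z) at normalised frequency w (radians)."""
    z = cmath.exp(1j * w)
    A = sum(ak * z ** -k for k, ak in enumerate(a))
    return 1.0 / abs(A)

# A stable two-pole resonator: poles at radius 0.9, angle w0 (1000Hz at 8kHz),
# giving A(z) = 1 - 1.8*cos(w0)*z^-1 + 0.81*z^-2.
w0 = 2 * math.pi * 1000 / 8000
a = [1.0, -1.8 * math.cos(w0), 0.81]

mags = [allpole_mag(a, math.pi * i / 256) for i in range(1, 256)]
print(min(mags) > 0.0)  # True: an all-pole response can peak, but never reach zero
```

The numerator is a constant, so the magnitude response can resonate (a pole near the unit circle) but can never dip to zero — a notch needs a zero in the numerator, which the model doesn’t have.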

This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.

Now Skype and (mobile) phones do work quite well in “hands free” mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?

Well, the codecs used have a much higher bit rate, e.g. 10,000 bits/s rather than 1,000 bits/s. This gives them the luxury of employing algorithms like CELP that can, to some extent, code arbitrary waveforms as well as speech, using a hybrid of model based (like Codec 2) and waveform based (like PCM) coding. So they faithfully follow the crappy mic signal, and don’t fall over completely.


In Sep 2014 I had some interesting discussions around the effect of microphones, small speakers, and speech samples with Mike, OH2FCZ, who is an audio professional. Thanks Mike!

James Morris: Linux Security Summit 2015 Update: Free Registration

Fri, 2015-07-24 15:27

In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee.  This year, LSS has been upgraded to a hosted event.  I didn’t realize that this meant that LSS registration was available entirely standalone.  To quote an email thread:

If you are only planning on attending the The Linux Security Summit, there is no need to register for LinuxCon North America. That being said you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration.  You will only have access to The Linux Security Summit.

Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.

There may be a number of people who registered for LinuxCon but who only wanted to attend LSS.   In that case, please contact the program committee at

Apologies for any confusion.

Michael Davies: Virtualenv and library fun

Fri, 2015-07-24 10:44
Doing python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.

Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install, I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman).

For example:

mrda@host:~$ python -c 'import pywsman'
# Works

mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide:

(venv)mrda@host:~$ python -c 'import six'
# Works

Why does six work, and pywsman not?

(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/

The only thing that comes to mind is that pywsman wraps a .so
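One quick way to see what a given interpreter can actually find, and where from, is a short diagnostic — a sketch using Python 3's importlib (the Python 2 of the day had imp.find_module for the same job):

```python
import importlib.util

def module_origin(name):
    """Return the path Python would load a module from, or None if it can't be found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Run this both outside and inside the venv and compare:
for mod in ("six", "pywsman"):
    print(mod, "->", module_origin(mod))
```

Inside the venv, pywsman comes back as None while six resolves to the system dist-packages path, which narrows the problem down to how the venv exposes (or hides) extension modules.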

A work-around is to tell venv that it should use the system-wide install of pywsman, like this:

# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

# Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!

Binh Nguyen: Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation

Thu, 2015-07-23 21:45
A while back (several years ago) I wrote about self replacing code in my 'Cloud and Security' report (p.399-402), which I worked on, on and off, over an extended period of time, within the context of building more secure codebases. DARPA is currently funding projects within this space. Based on what I've seen it's early days. To be honest it's not that difficult to build if you think about it carefully and break it down. Much of the code that is required is already in widespread use and I already have much of the code ready to go. The problem is dealing with the sub-components. There are some aspects that are incredibly tedious to deal with, especially within the context of multiple languages.

If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all three of my reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts such as 'Active Defense' systems, which involve automated network response/attack, but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...

If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously, it can sometimes be easier to get information from countries under sanction than legitimately (even in a professional setting in a 'safe environment') for study. I find this perspective very difficult to understand, especially when search engines allow independent researchers easy access to adequate samples, and when you consider how you're supposed to defend against something if you (and many others around you) have little idea of how some attack system/code works.

It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive particularly with regards to economics and defense. Russia is viewed as a hybrid criminal state. When you think about it carefully the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise that we straddle such a fine line and much of what they do we also engage in. We're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash would they see things differently? If the world in general knew that much of southern Italy's economy was from crime would they view it in the same way as they saw Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security' would we think about the role of state security differently?

If you develop across multiple platforms you'll have discovered that it is just easier to have a copy of Mac OS X running in a Virtual Machine rather than having to shuffle back and forth between different machines. Copies of the ISO/DMG image (technically, Mac OS X is free for those who don't know) are widely available and as many have discovered most of the time setup is reasonably easy.

If you've ever lost your password to an archive, password recovery programs can save a lot of time. Most of the free password recovery tools deal only with a limited number of filetypes and passwords.

There are some Python bytecode obfuscation utilities out there but like standard obfuscators they are of limited utility against skilled programmers.

Glen Turner: Configuring Zotero PDF full text indexing in Debian Jessie

Wed, 2015-07-22 23:56

Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of researcher methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown; there are plenty of those on the web. This article shows how to configure Zotero's full text indexing on the Raspberry Pi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project you may one day have trouble with Firefox and someone will suggest clearing your Firefox profile, and Firefox once again works fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero on Firefox press the "Z" button, press the Gear icon, select "Preferences" from the dropdown menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.

After making the change quit from Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration but the page lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

$ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create a symbolic link to the installed programs. The printed kernel-name and machine is part of the link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O
$ chmod a+x

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version

Start Firefox; under Zotero's gear icon, "Preferences", "Search" should report something like:

PDF indexing
pdftotext version 0.26.5 is installed
pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.

Linux Users of Victoria (LUV) Announce: LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences

Wed, 2015-07-22 19:29
Start: Aug 4 2015 18:30 End: Aug 4 2015 20:30 Location:

200 Victoria St. Carlton VIC 3053



• Jon Oxer, Open Machines Building Open Hardware

• Chris Samuel, VLSCI: Supercomputing for Life Sciences

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Michael Davies: DHCP and NUCs

Wed, 2015-07-22 19:23
I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.

Each time I booted the network, a different IP address from the pool was being allocated (i.e. the next one in the DHCP address pool).

There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse because, on closer examination, a different Client UID was being presented as part of the DHCPDISCOVER for the node each time. (Fortunately the NUC's BMC doesn't do this as well.)

So I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option "ignore-client-uids true;" that ensures only the MAC address is used for DHCP lease matching, and not the Client UID. Turning this on means that on each deploy the NUC receives the same IP address - and not just for the node, but also for the BMC - it works around the aforementioned bug as well. Woohoo!
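In dhcpd.conf terms that's a one-line change — a minimal sketch (the subnet and range below are placeholders, not my actual test network):

```
# /etc/dhcp/dhcpd.conf
ignore-client-uids true;    # match leases on MAC address only, not the Client UID

subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.50 192.168.100.99;    # dynamic pool for the test network
}
```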

There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER, so I have to configure a dynamic pool instead (which is fine because this is a test network with limited nodes in it). One to resolve another day...

David Rowe: Self Driving Cars

Wed, 2015-07-22 11:30

I’m a believer in self driving car technology, and predict it will have enormous effects, for example:

  1. Our cars currently spend most of the time doing nothing. They could be out making money for us as taxis while we are at work.
  2. How much infrastructure and frustration (home garage, driveways, car parks, finding a park) do we devote to cars that are standing still? We could park them a few km away in a “car hive” and arrange to have them turn up only when we need them.
  3. I can make interstate trips lying down, sleeping or working.
  4. Electric cars can recharge themselves.
  5. It throws personal car ownership into question. I can just summon a car on my smart phone then send the thing away when I’m finished. No need for parking, central maintenance. If they are electric, and driverless, then very low running costs.
  6. It will decimate the major cause of accidental deaths, saving untold misery. Imagine if your car knew the GPS coordinates of every car within 1000m, even if outside of visual range, like around a corner. No more t-boning, or even car doors opening in the path of my bike.
  7. Speeding and traffic fines go away, which will present a revenue problem for governments like mine that depend on the statistical likelihood of people accidentally speeding.
  8. My red wine consumption can set impressive new records as the car can drive me home and pour me into bed.

I think the time will come when computers do a lot better than we can at driving. The record of these cars in the US is impressive. The record for humans in car accidents is dismal (a leading cause of accidental death).

We already have driverless planes (autopilot, anti-collision radar, autoland), that do a pretty good job with up to 500 lives at a time.

I can see a time (say 20 years) when there will be penalties (like a large insurance excess) if a human is at the wheel during an accident. Meat bags like me really shouldn’t be in control of 1000kg of steel hurtling along at 60 km/hr. Incidentally that’s about 139 kJ of kinetic energy. A 9mm bullet exits a pistol with 0.519 kJ of energy. No wonder cars hurt people.
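Checking those numbers — a quick back-of-envelope (the bullet mass and muzzle velocity are assumed figures for a typical 9mm load):

```python
def kinetic_energy_kj(mass_kg, v_m_s):
    """Kinetic energy, 0.5*m*v^2, in kilojoules."""
    return 0.5 * mass_kg * v_m_s ** 2 / 1000.0

car = kinetic_energy_kj(1000, 60 / 3.6)   # 1000kg car at 60 km/h
bullet = kinetic_energy_kj(0.0075, 372)   # ~7.5g 9mm round at ~372 m/s (assumed)
print(round(car, 1), round(bullet, 3))    # 138.9 0.519
```

So the car carries well over two hundred times the bullet’s energy.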

However many people are concerned about “blue screens of death”. I recently had an email exchange on a mailing list, here are some key points for and against:

  1. The cars might be hacked. My response is that computers and micro-controllers have been in cars for 30 years. Hacking of safety critical systems (ABS or EFI or cruise control) is unheard of. However unlike a 1980′s EFI system, self driving cars will have operating systems and connectivity, so this does need to be addressed. The technology will (initially at least) be closed source, increasing the security risk. Here is a recent example of a modern car being hacked.
  2. Planes are not really “driverless”, they have controls and pilots present. My response is that long distance commercial aircraft are under autonomous control for the majority of their flying hours, even if manual controls are present. Given the large number of people on board an aircraft it is of course prudent to have manual control/pilot back up, even if rarely used.
  3. The drivers of planes are sometimes a weak link. As we saw last year and on Sep 11 2001, there are issues when a malicious pilot gains control. Human error is also behind a large number of airplane incidents, and most car accidents. It was noted that software has been behind some airplane accidents too – a fair point.
  4. Compared to aircraft the scale is much different for cars (billions rather than 1000s). The passenger payload is also very different (1.5 people in a car on average?), and the safety record of cars much much worse – it’s crying out for improvement via automation. So I think automation of cars will eventually be a public safety issue (like vaccinations) and controls will disappear.
  5. Insurance companies may refuse a claim if the car is driverless. My response is that insurance companies will look at the actuarial data as that’s how they make money. So far all of the accidents involving Google driverless cars have been caused by meat bags, not silicon.

I have put my money where my mouth is and invested a modest amount in Google shares based on my belief in this technology. This is also an ethical buy for me. I’d rather have some involvement in an exciting future that saves lives and makes the world a better place than invest in banks and mining companies, which don’t.

Binh Nguyen: Joint Strike Fighter F-35 Notes

Wed, 2015-07-22 02:12
Below are a bunch of thoughts, collation of articles about the F-35 JSF, F-22 Raptor, and associated technologies...

- every single defense analyst knows that compromises had to be made in order to achieve a blend of cost effectiveness, stealth, agility, etc... in the F-22 and F-35. What's also clear is that once things get up close and personal things mightn't be as clear cut as we're being told. I was of the impression that the F-22 would basically outdo anything and everything in the sky all of the time. It's clear, based on training exercises, that unless the F-22s have been backing off it may not be as phenomenal as we're being led to believe (one possible reason to deliberately back off is to not provide intelligence on the maximum performance envelope, to provide less of a target for near peer threats with regards to research and engineering). There are actually a lot of low speed manoeuvres that I've seen a late model 3D-vectored Sukhoi perform that a 2D-vectored F-22 has not demonstrated. The F-35 is dead on arrival in many areas (at the moment, definitely from a WVR perspective) as many people have stated. My hope and expectation is that it will have significant upgrades throughout its lifetime

F22 vs Rafale dogfight video

Dogfight: Rafale vs F22 (Close combat)


- in the past, public information/intelligence regarding some defense programs/equipment has been limited to reduce the chances of setting off an arms race. That way the side which has disseminated the mis-information can be guaranteed an advantage should there be a conflict. Here's the problem though: while some of this may be so, I doubt that all of it is. My expectation is that some of the intelligence leaks (many terabytes; some details of the breach are available publicly) regarding designs of the ATF (F-22) and JSF (F-35) programs are also causing problems. They need to overcome technical problems as well as problems posed by previous intelligence leaks. Some of what is being said makes no sense as well. Most of what we're being sold on doesn't actually work (yet) (fusion, radar, passive sensors, identification friend-or-foe, etc...)...

- if production is really as problematic as they say it could be, without possible recourse, then the only thing left is to bluff. Deterrence is based on the notion that your opponent will not attack because you have a qualitative or quantitative advantage... Obviously, if there is an actual conflict we have a huge problem. We purportedly want to be able to defend ourselves should anything potentially bad occur. The irony is that our notion of self defense often incorporates force projection in far off, distant lands...

F22 Raptor Exposed - Why the F22 Was Cancelled

F-35 - a trillion dollar disaster


JSF 35 vs F18 superhornet

- we keep on giving Lockheed Martin a tough time regarding development and implementation but we keep on forgetting that they have delivered many successful platforms including the U-2, the Lockheed SR-71 Blackbird, the Lockheed F-117 Nighthawk, and the Lockheed Martin F-22 Raptor

f-22 raptor crash landing

- SIGINT/COMINT often produces a lot of false positives. Imagine overhearing every single conversation about you. Would you possibly be concerned about your security? Probably more than usual, despite whatever you might say? As I said previously in posts on this blog, it doesn't make sense that we would have such money invested in SIGINT/COMINT without a return on investment. I believe that we may be involved in far more 'economic intelligence' than we may be led to believe

- despite what is said about the US (and what they say about themselves), they do tell half-truths/falsehoods. They said that the Patriot missile defense systems were a complete success upon release with ~80% success rates when first released. Subsequent revisions of past performance have indicated actual success rate of about half that. It has been said that the US has enjoyed substantive qualitative and quantitative advantages over Soviet/Russian aircraft for a long time. Recently released data seems to indicate that it is closer to parity (not 100% sure about the validity of this data) when pilots are properly trained. There seems to be indications that Russian pilots may have been involved in conflicts where they shouldn't have been or were unknown to be involved...

- the irony between the Russians and the US is that each denies that the other's technology is worth pursuing and yet time seems to indicate otherwise. A long time ago Russian scientists didn't bother with stealth because they thought it was overly expensive without enough of a gain (especially in light of updated sensor technology) and yet the PAK-FA/T-50 is clearly a test bed for such technology. Previously, the US denied that thrust vectoring was worth pursuing and yet the F-22 clearly makes use of it

- based on some estimates that I've seen, the F-22 may be capable of close to Mach 3 (~2.5 in the more conservative estimates) under limited circumstances

- people keep saying that maintaining a larger, indigenous defense program is simply too expensive. I say otherwise. Based on what has been leaked regarding the bidding process, many people basically signed on without necessarily knowing everything about the JSF program. If we had known more, we might have proceeded a little differently

- a lot of people who would/should have classified knowledge of the program are basically implying that it will work and will give us a massive advantage given more development time. The problem is that so much core functionality is so problematic that this is difficult to believe...

- the fact that pilots are being briefed not to allow for particular circumstances tells us that there are genuine problems with the JSF

- judging by opinions within the US military, many people are guarded regarding the future performance of the aircraft. We just won't know until it's deployed and we see how others react from a technological perspective

- proponents of the ATF/JSF programs keep saying that if you can't see it, you can't shoot it. If that's the case, I just don't understand why we don't push ahead with development of 5.5/6th gen fighters (stealth drones, basically) and run a hybrid force composed of ATF, JSF, and armed drones (some countries, including France, are already doing this). Drones are somewhat of a better known quantity, and without life support issues to worry about they should be able to go head to head with any manned fighter, even with limited AI and computing power. Look at the following videos and you'll notice that the pilot is right on the physical limit in a 4.5 gen fighter during an exercise with an F-22. A lot of stories are floating around indicating that the F-22 enjoys a big advantage, but that under certain circumstances it can be mitigated. Imagine going up against a drone where you don't have to worry about the pilot blacking out or about pilot training (incredibly expensive; experience has also told us that pilots need genuine flight time, not just simulation time, to maintain their skills), which may have a hybrid propulsion system (for momentary speed changes/bursts, beyond those provided by afterburner systems, to avoid being hit by a weapon or acquired by a targeting system), and which has more space for weapons and sensors. I just don't understand how you would be better off with a mostly manned fleet as opposed to a hybrid fleet, unless there are technological/technical issues to worry about (I find this highly unlikely given some of the prototypes and deployments that are already out there)

F22 vs Rafale dogfight video

Dogfight: Rafale vs F22 (Close combat)


- if I were a near peer aggressor or looking to defend against 5th gen threats, I'd go straight to 5.5/6th gen armed drone fighter development. You wouldn't need to fulfil all the requirements, and with the additional lead time you might achieve not just parity but actual advantages, while possibly being cheaper with regards to TCO (Total Cost of Ownership). There are added benefits to going straight to 5.5/6th gen armed drone development: you don't have to compromise so much on design. The bubble shaped (or not) canopy that aids dogfighting hurts aerodynamic efficiency and is actually one of the main causes of increased RCS (Radar Cross Section) on a modern fighter jet. The pilot and additional equipment (ejector seat, user interface equipment, life support systems, etc...) add a large amount of weight which could now be removed. With the loss in weight and the increase in aerodynamic design flexibility you could save a huge amount of money. You also have a lot more flexibility in reducing RCS. For instance, some of the biggest reflectors of RADAR signals are the canopy (a film is used to deal with this) and the pilot's helmet, and one of the biggest supposed selling points of stealth aircraft is RAM coatings. They're incredibly expensive, though, and wear out (look up the history of the B-2 Spirit and the F-22 Raptor). If you have a smaller aircraft to begin with, though, you have less area to coat, leading to lower costs of ownership while retaining the advantages of low observable technology

- the fact that it has already been speculated that 6th gen fighters may focus less on stealth and speed and more on weapons capability means that the US is aware of increasingly effective defense systems against 5th gen fighters such as the F-22 Raptor and F-35 JSF which rely heavily on low observability 

- based on Wikileaks and other OSINT (Open Source Intelligence) everyone involved with the United States seems to acknowledge that they get a raw end of the deal to a certain extent but they also seem to acknowledge/imply that life is easier with them than without them. Read enough and you'll realise that even when classified as a closer partner rather than just a purchaser of their equipment you sometimes don't/won't receive much extra help

- if we had the ability I'd be looking to develop our own indigenous defense programs. At least when we make procurements we'd be in a better position to judge whether what was being presented to us was good or bad. We've been burnt on so many different programs with so many different countries... The only issue I can see is that the US may attempt to block us from this. It has happened in the past with other supposed allies...

- I just don't get it sometimes. Most of the operations and deployments that the US and allied countries engage in are counter-insurgency and CAS, with significant parts of our operations involving mostly un-manned drones (armed or not). 5th gen fighters help, but they're overkill. Based on some of what I've seen, the only two genuine near peer threats are China and Russia, both of whom have known limitations in their hardware (RAM coatings/films, engine performance/endurance, materials design and manufacturing, etc...). Sometimes it feels as though the US looks for enemies that mightn't even exist. Even a former Australian Prime-Ministerial adviser said that China doesn't want to lead the world: "China will get in the way or get out of the way." The only thing I can possibly think of is that the US has intelligence suggesting that China intends to project force further outwards (which it has done), or else they're overly paranoid. Russia is a slightly different story though... I'm guessing it would be interesting to read up more on how the US (overall) interprets Russian and Chinese actions behind the scenes (look up training manuals for allied intelligence officers for an idea of our interpretation of what their intelligence services are like)

- sometimes people say that the F-111 was a great plane but in reality there was no great use of it in combat. It could be the exact same circumstance with the F-35

- there is a chance the aircraft could become like the B-2 and the F-22: seldom used because the actual true cost of running them is horribly high. Also imagine the ramifications/blowback of losing such an expensive piece of machinery should there be a chance that the loss could have been avoided

- defending against 5th gen fighters isn't easy, but it isn't impossible. Sensor upgrades, sensor blinding/jamming technology, integrated networks, artificial manipulation of weather (increased condensation levels increase RCS), faster and more effective weapons, layered defense (with strategic use of disposable (and non-disposable) decoys so that you can hunt down departing, effectively unarmed fighters), experimentation with cloud seeding with substances that may help to speed up RAM coating removal or else reduce the effectiveness of stealth technology (the less you have to deal with, the easier your battles will be), forcing the battle into unfavourable conditions, etc... Interestingly, there have been some accounts/leaks of US stealth bombers (B-1) being detected lifting off from some US air bases, from Australia, using long range RADAR. Obviously, it's one thing to be able to detect and track a possible target versus achieving a weapons quality lock on it

RUSSIAN RADAR CAN NOW SEE F-22 AND F-35 Says top US Aircraft designer

- the following are rough estimates of the RCS of various modern defense aircraft. It's clear that while Chinese and Russian technology isn't entirely on par, it makes the contest uncomfortably close. Estimates on the PAK-FA/T-50 indicate an RCS somewhere between the F-35 and the F-22. Ultimately this comes back down to a sensor game. Rough estimates seem to indicate a slight edge to the F-22 in most areas. Part of me thinks that the claimed RCS of the PAK-FA/T-50 must be propaganda; the other part leads me to believe that there is no way countries would consider purchasing the aircraft if it didn't offer a competitive RCS

- it's somewhat bemusing that you can't take pictures/videos from certain angles of the JSF in some of the videos mentioned here, and yet there are heaps of pictures online of LOAN systems, including high resolution images of the back end of the F-35 and F-22

22 Raptor F 35 real shoot super clear

- people keep saying that if you can't see and can't lock on to stealth aircraft, they'll basically be gone by the time you can react. The converse is also true: without some form of targeting system, the stealth fighter can't lock on to its target either. Once you understand how AESA RADAR works, you also understand that, given sufficient computing power, good implementation skills, etc..., it is subject to the same issue that faces the other side. You can't shoot what you can't see, and by targeting you give away your position. My guess is that detection of tracking by RADAR is somewhat similar to a lot of de-cluttering/de-noising algorithms (while making use of wireless communication/encryption and information theories as well) but much more complex... which is why there has been such heavy investment and interest in more passive systems (infra-red, light, sound, etc...)

F-35 JSF Distributed Aperture System (EO DAS)

Lockheed Martin F-35 Lightning II- The Joint Strike Fighter- Full Documentary.

4195: The Final F-22 Raptor

Rafale beats F 35 & F 22 in Flight International

Eurofighter Typhoon fighter jet Full Documentary

Eurofighter Typhoon vs Dassault Rafale

DOCUMENTARY - SUKHOI Fighter Jet Aircrafts Family History - From Su-27 to PAK FA 50

Green Lantern : F35 v/s UCAVs

Erik de Castro Lopo: Building the LLVM Fuzzer on Debian.

Tue, 2015-07-21 21:08

I've been using the awesome American Fuzzy Lop fuzzer since late last year but had also heard good things about the LLVM Fuzzer. Getting the code for the LLVM Fuzzer is trivial, but when I tried to use it, I ran into all sorts of road blocks.

Firstly, the LLVM Fuzzer needs to be compiled with and used with Clang (GNU GCC won't work) and it needs to be Clang >= 3.7. Now Debian does ship a clang-3.7 in the Testing and Unstable releases, but that package has a bug (#779785) which means the Debian package is missing the static libraries required by the Address Sanitizer options. Use of the Address Sanitizers (and other sanitizers) increases the effectiveness of fuzzing tremendously.

This bug meant I had to build Clang from source, which, unfortunately, is rather poorly documented (I intend to submit a patch to improve this) and I only managed it with help from the #llvm IRC channel.

Building Clang from the git mirror can be done as follows:

mkdir LLVM
cd LLVM/
git clone http://llvm.org/git/llvm.git
(cd llvm/tools/ && git clone http://llvm.org/git/clang.git)
(cd llvm/projects/ && git clone http://llvm.org/git/compiler-rt.git)
(cd llvm/projects/ && git clone http://llvm.org/git/libcxx.git)
(cd llvm/projects/ && git clone http://llvm.org/git/libcxxabi.git)
mkdir -p llvm-build
(cd llvm-build/ && cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$HOME/Clang/3.8 ../llvm)
(cd llvm-build/ && make install)

If all the above works, you will now have working clang and clang++ compilers installed in $HOME/Clang/3.8/bin and you can then follow the examples in the LLVM Fuzzer documentation.

David Rowe: FreeDV Robustness Part 6 – Early Low SNR Results

Tue, 2015-07-21 20:30

Anyone who writes software should be sentenced to use it. So for the last few days I’ve been radiating FreeDV 700 signals from my home in Adelaide to this websdr in Melbourne, about 800km away. This has been very useful, as I can sample signals without having to bother other Hams. Thanks John!

I’ve also found a few bugs and improved the FreeDV diagnostics to get a feel for how the system is working over real world channels.

I am using a simple end fed dipole a few meters off the ground and my IC7200 at maximum power (100W I presume, I don’t have a power meter). A key goal is comparable performance to SSB at low SNRs on HF channels – that is where FreeDV has struggled so far. This has been a tough nut to crack. SSB is really, really good on HF.

Here is a sample taken this afternoon, in a marginal channel. It consists of analog/DV/analog/DV speech. You might need to listen to it a few times, it’s hard to understand first time around. I can only get a few words in analog or DV. It’s right at the lower limit of intelligibility, which is common in HF radio.

Take a look at the spectrogram of the off air signal. You can see the parallel digital carriers, the diagonal stripes is the frequency selective fading. In the analog segments every now and again some low frequency energy pops up above the noise (speech is dominated by low frequency energy).

This sample had a significant amount of frequency selective fading, which occasionally drops the whole signal down into the noise. The DV mutes in the middle of the 2nd digital section as the signal drops out completely.

There was no speech compressor on SSB. I am using the “analog” feature of FreeDV, which allows me to use the same microphone and quickly swap between SSB and DV to ensure the HF channel is roughly the same. I used my laptop’s built-in microphone, and haven’t tweaked the SSB or DV audio with filtering or level adjustment.

I did confirm the PEP power is about the same in both modes using my oscilloscope with a simple “loop” antenna formed by clipping the probe ground wire to the tip. It picked up a few volts of RF easily from the nearby antenna. The DV output audio level is a bit quiet for some reason, have to look into that.

I’m quite happy with these results. In a low SNR, barely usable SSB channel, the new coherent PSK modem is hanging on really well and we could get a message through on DV (e.g. phonetics, a signal report). When the modem locks it’s noise free, a big plus over SSB. All with open source software. Wow!

My experience is consistent with this FreeDV 700 report from Kurt KE7KUS over a 40m NVIS path.

Next step is to work on the DV speech quality to make it easy to use conversationally. I’d say the DV speech quality is currently readability 3 or 4/5. I’ll try a better microphone, filtering of the input speech, and see what can be done with the 700 bit/s Codec.

One option is a new mode where we use the 1300 bit/s codec (as used in FreeDV 1600) with the new, cohpsk modem. The 1300 bit/s codec sounds much better but would require about 3dB more SNR (half an s-point) with this modem. The problem is bandwidth. One reason the new modem works so well is that I use all of the SSB bandwidth. I actually send the 7 x 75 symbol/s carriers twice, to get 14 carriers total. These are then re-combined in the demodulator. This “diversity” approach makes a big difference in the performance on frequency selective fading channels. We don’t have room for that sort of diversity with a codec running much faster.
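The diversity gain described above can be sketched numerically. This is a toy model under stated assumptions (BPSK-like ±1 symbols, equal-variance Gaussian noise on each copy, simple averaging), not the actual cohpsk demodulator:

```python
import random
import statistics

random.seed(42)

# Transmitted symbols: ±1, as in a BPSK-like modem.
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]

# Two received copies of the same symbols, each with independent noise,
# as if carried on independently-fading sets of carriers.
copy_a = [s + random.gauss(0.0, 0.8) for s in symbols]
copy_b = [s + random.gauss(0.0, 0.8) for s in symbols]

def combine(a, b):
    """Diversity combining: average the two received copies."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def error_power(received):
    """Mean squared error between received and transmitted symbols."""
    return statistics.mean((r - s) ** 2 for r, s in zip(received, symbols))

combined = combine(copy_a, copy_b)
# Averaging two independent noise samples halves the noise variance,
# so the combined copy is cleaner than either single copy.
```

This is roughly where the fading-channel gain comes from: independent noise averages down, while the wanted symbol does not. A real combiner also has to weight for the per-carrier fading gains, which this sketch ignores.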

So time to put the thinking hat back on. I’d also like to try some nastier fading channels, like 20m around the world, or 40m NVIS. However I’m very pleased with this result. I feel the modem is “there”, however a little more work required on the Codec. We’re making progress!

Matt Palmer: Why DANE isn't going to win

Mon, 2015-07-20 14:43

In a comment to my previous post, Daniele asked the entirely reasonable question,

Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?

Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.

My understanding of the limitations of DANE, for large-scale deployment, are enumerated below.

DNS Is Awful

Quoting Google security engineer Adam Langley:

But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.

Consider that TXT records are far, far older than TLSA records, so it seems likely that TLSA lookups would fail even more than 4% of the time. Extrapolate to what the failure rate for TLSA lookups would likely be, and imagine what that would do to the reliability of DANE verification. It would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.
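To see why per-lookup reliability matters so much, here is how independent lookup failures compound across a page load. The 4% figure is from the quote above; the ten-origin page is an assumed example:

```python
def chance_of_any_failure(per_lookup_failure_rate, lookups):
    """Probability that at least one of `lookups` independent DNS
    lookups fails, given a fixed per-lookup failure rate."""
    return 1.0 - (1.0 - per_lookup_failure_rate) ** lookups

# One DANE-protected origin: ~4% of users hit a verification failure.
single_origin = chance_of_any_failure(0.04, 1)

# A page pulling resources from 10 DANE-protected origins (assumed
# figure): roughly a third of page loads would hit at least one failure.
ten_origins = chance_of_any_failure(0.04, 10)
```

Even modest per-lookup failure rates compound into failure rates no browser vendor could ship.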

This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.

Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.

Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.

DNSSEC Is Awful


Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage, there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.

1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.

DNS Providers are Awful

While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.
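
For concreteness, this is the kind of record at stake. The format follows RFC 6698; the owner name and digest below are illustrative placeholders, not a real deployment:

```
; TLSA record binding the certificate used on port 443 of www.example.com
; usage=3 (DANE-EE), selector=1 (SubjectPublicKeyInfo), matching=1 (SHA-256)
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    d2abde240d7cd3ee6b4b28c54df034b9
    7983a1d16e8a410e4561cb106618e971 )
```

An attacker who can edit that one record in a compromised control panel, then wait for the provider to re-sign the zone, controls which keys clients will accept.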

In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?


None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.

Sridhar Dhanapalan: Twitter posts: 2015-07-13 to 2015-07-19

Mon, 2015-07-20 01:27

Craige McWhirter: Craige McWhirter: How To Configure Debian to Use The Tiny Programmer ISP Board

Sun, 2015-07-19 18:29

So, you've gone and bought yourself a Tiny Programmer ISP, you've plugged it into your Debian system and excitedly run avrdude, only to be greeted with this:

% avrdude -c usbtiny -p m8
avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted
avrdude: initialization failed, rc=-1
         Double check connections and try again, or use -F to override this check.
avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted

avrdude done.  Thank you.

I resolved this permissions error by adding the following line to /etc/udev/rules.d/10-usbtinyisp.rules:

SUBSYSTEM=="usb", ATTR{idVendor}=="1781", ATTR{idProduct}=="0c9f", GROUP="plugdev", MODE="0660"

Then restarting udev:

% sudo systemctl restart udev

Plugged the Tiny Programmer ISP back in the laptop and ran avrdude again:

% avrdude -c usbtiny -p m8
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.00s

avrdude: Device signature = 0x1e9587
avrdude: Expected signature for ATmega8 is 1E 93 07
         Double check chip, or use -F to override this check.

avrdude done.  Thank you.

You should now have avrdude love.

Enjoy :-)

Michael Still: Casuarina Sands to Kambah Pool

Sun, 2015-07-19 10:29
I did a walk with the Canberra Bushwalking Club from Casuarina Sands (in the Cotter) to Kambah Pool (just near my house) yesterday. It was very enjoyable. I'm not going to pretend to be excellent at write ups for walks, but will note that the walk leader John Evans has a very detailed blog post about the walk up already. We found a bunch of geocaches along the way, with John doing most of the work and ChifleyGrrrl and I providing encouragement and scrambling skills. A very enjoyable day.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150718-casurina_sands_to_kambah_pool photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches