Planet Linux Australia


BlueHackers: The Legacy of Autism and the Future of Neurodiversity

Tue, 2015-08-25 10:50

The New York Times published an interesting review of a book entitled “NeuroTribes: The Legacy of Autism and the Future of Neurodiversity”, authored by Steve Silberman (534 pp. Avery/Penguin Random House).

Silberman describes how autism was discovered by a few different people around the same time, but in each case the publicity around their work was warped by their environment and political situation.

This means that we mainly know the angle that one of those people took, which in turn warps our view of Asperger’s and autism. Ironically, the lesser-known story is actually that of Hans Asperger.

I reckon it’s an interesting read.

James Purser: Mark got a booboo

Tue, 2015-08-25 00:31

Mark Latham losing his AFR column because an advertiser thought his abusive tweets and articles weren't worth being associated with isn't actually a freedom of speech issue.

Nope, not even close to it.

Do you know why?


No one is stopping Latho from spouting his particular down-home "outer suburban dad" brand of putrescence.

Hell, all he has to do to get back up and running is set up a WordPress account, and he can be back emptying his bile duct on the internet along with the rest of us who didn't get cushy newspaper jobs as a reward for spectacularly screwing over our political careers.

Hey, he could set up a Patreon account and everyone who wants to could support him directly, either with a monthly sub or at a per-flatulence rate.

This whole thing reeks of a massive sense of entitlement, both from Latho himself and from his media supporters. Bolt, Devine and others who have leapt to his defence all push this idea that any move to expose writers to consequences arising from their rantings is some sort of mortal offence against democracy and freedom. Of course, while they do this, they demand the scalps of anyone who dares to write abusive rants against their own positions.


Oh and as I've been reminded, Australia doesn't actually have Freedom of Speech as they do in the US.

Blog Categories: media

David Rowe: Dual Rav 4 SM1000 Installation

Mon, 2015-08-24 16:30

Andy VK5AKH and Mark VK5QI have mirror-image SM1000 mobile installations: same radio, even the same car! Some good lessons were learned on testing and debugging microphone levels that will be useful for other people installing their SM1000. Read all about it on Mark’s fine blog.

David Rowe: Codec 2 Masking Model Part 1

Mon, 2015-08-24 12:30

Many speech codecs use Linear Predictive Coding (LPC) to model the short term speech spectrum. For very low bit rate codecs, most of the bit rate is allocated to this information.

While working on the 700 bit/s version of Codec 2 I hit a few problems with LPC and started thinking about alternatives based on the masking properties of the human ear. I’ve written Octave code to prototype these ideas.

I’ve spent about 2 weeks on this so far, so I thought I’d better write it up. It helps me clarify my thoughts. This is hard work for me. Many of the steps below took several days of scratching on paper and procrastinating. The human mind can only hold so many pieces of information, so it’s like a puzzle with too many pieces missing. The trick is to find a way in, a simple step that gets you a working algorithm that is a little bit closer to your goal. Like evolution, each small change needs to be viable. You need to build a gentle ramp up Mount Improbable.

Problems with LPC

We perceive speech based on the position of peaks in the speech spectrum. These peaks are called formants. To clearly perceive speech the formants need to be distinct, e.g. two peaks with a low level (anti-formant) region between them.

LPC is not very good at modeling anti-formants, the space between formants. As it is an all pole model, it can only explicitly model peaks in the speech spectrum. This can lead to unwanted energy in the anti-formants which makes speech sound muffled and hard to understand. The Codec 2 LPC postfilter improves the quality of the decoded speech by suppressing inter-formant energy.

LPC attempts to model spectral slope and other features of the speech spectrum which are not important for speech perception. For example “flat”, high pass or low pass filtered speech is equally easy for us to understand. We can pass speech through a Q=1 bandpass or notch filter and it will still sound OK. However LPC wastes bits on these features, and gets into trouble with large spectral slope.

LPC has trouble with high pitched speakers where it tends to model individual pitch harmonics rather than formants.

LPC is based on “designing” a filter to minimise mean square error rather than the properties of the human ear. For example it works on a linear frequency axis rather than the log frequency axis of the human ear. This means it tends to allocate bits evenly across frequency, whereas an allocation weighted towards low frequencies would be more sensible. LPC often produces large errors near DC, an important area of human speech perception.

LPC puts significant information into the bandwidth of filters or width of formants, however due to masking the ear is not very sensitive to formant bandwidth. What is more important is sharp definition of the formant and anti-formant regions.

So I started thinking about a spectral envelope model with these properties:

  1. Specifies the location of formants with just 3 or 4 frequencies. Focuses on good formant definition, not the bandwidth of formants.
  2. Doesn’t care much about the relative amplitude of formants (spectral slope). This can be coarsely quantised or just hard coded using, e.g. voiced speech has a natural low pass spectral slope.
  3. Works in the log amplitude and log frequency domains.

Auditory Masking

Auditory masking refers to the “capture effect” of the human ear, a bit like an FM receiver. If you hear a strong tone, then you can’t hear slightly weaker tones nearby. The weaker ones are masked. If you can’t hear these masked tones, there is no point sending them to the decoder, so we can save some bits. Masking is often used in (relatively) high bit rate audio codecs like MP3.

I found some Octave code for generating masking curves (Thanks Jon!), and went to work applying masking to Codec 2 amplitude modelling.
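
To make the masking idea concrete, here is a rough Python/numpy sketch of the masking curve cast by a single tone. It is only my illustration of the concept, not the Octave code mentioned above; the triangular spreading function and its slopes are made-up placeholder values:

  import numpy as np

  def single_tone_mask(f_tone_hz, amp_db, freqs_hz, lo_slope=27.0, hi_slope=10.0):
      # Crude triangular spreading function on a log frequency (octave) axis:
      # the mask falls away steeply below the masking tone and slowly above it.
      octaves = np.log2(np.asarray(freqs_hz, dtype=float) / f_tone_hz)
      return np.where(octaves < 0.0,
                      amp_db + lo_slope * octaves,   # below the tone
                      amp_db - hi_slope * octaves)   # above the tone

  # Example: the mask cast by a 900 Hz harmonic at 60 dB, sampled every 100 Hz
  freqs = np.arange(100.0, 4000.0, 100.0)
  mask_db = single_tone_mask(900.0, 60.0, freqs)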

Masking in Action

Here are some plots to show how it works. Let’s take a look at frame 83 from hts2a, a female speaker. First, 40ms of the input speech:

Now the same frame in the frequency domain:

The blue line is the speech spectrum, the red the amplitude samples {Am}, one for each harmonic. It’s these samples we would like to send to the decoder. The goal is to encode them efficiently. They form a spectral envelope that describes the speech being articulated.

OK, so let’s look at the effect of masking. Here is the masking curve for a single harmonic (m=3, the highest one):

Masking theory says we can’t hear any harmonics beneath the level of this curve. This means we don’t need to send them over the channel and can save bits. Yayyyyyy.

Now let’s plot the masking curves for all harmonics:

Wow, that’s a bit busy and hard to understand. Instead, let’s just plot the top of all the masking curves (green):

Better. We can see that the entire masking curve is dominated by just a few harmonics. I’ve marked the frequencies of the harmonics that matter with black crosses. We can’t really hear the contribution from other harmonics. The two crosses near 1500Hz can probably be tossed away as they just describe the bottom of an anti-formant region. So that leaves us with just three samples to describe the entire speech spectrum. That’s very efficient, and worth investigating further.
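
Continuing the toy Python sketch from earlier (it reuses single_tone_mask), the green line is just the element-wise maximum of the individual masking curves, and the black crosses are the harmonics that sit on top of it, i.e. the ones not hidden under any other harmonic’s mask. Again, this only illustrates the selection idea; it is not the actual newamp Octave code:

  import numpy as np

  def dominant_harmonics(harm_freqs_hz, harm_amps_db):
      # Composite mask (green line): element-wise max of every harmonic's
      # individual masking curve, evaluated at the harmonic frequencies.
      amps = np.asarray(harm_amps_db, dtype=float)
      masks = np.array([single_tone_mask(f, a, harm_freqs_hz)
                        for f, a in zip(harm_freqs_hz, amps)])
      composite_db = masks.max(axis=0)
      # A harmonic "matters" (black cross) if its own amplitude reaches the
      # composite mask, i.e. no other harmonic masks it.
      matters = amps >= composite_db - 1e-6
      return composite_db, matters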

Spectral Slope and Coding Quality

Some speech signals have a strong “low pass filter” slope between 0 and 4000Hz. Others have a “flat” spectrum – the high frequencies are about the same level as the low frequencies.

Notice how the high frequency harmonics spread their masking down to lower frequencies? Now imagine we bumped up the level of the high frequency harmonics, e.g. with a first order high pass filter. Their masks would then rise, masking more low frequency harmonics, e.g. those near 1500Hz in the example above. Which means we could toss the masked harmonics away, and not send them to the decoder. Neat. Only down side is the speech would sound a bit high pass filtered. That’s no problem as long as it’s intelligible. This is an analog HF radio SSB replacement, not Hi-Fi.

This also explains why “flat” samples (hts1a, ve9qrp) with relatively less spectral slope code well, whereas others (kristoff, cq_ref) with a strong spectral slope are harder to code. Flat speech has improved masking, leaving less perceptually important information to model and code.

This is consistent with what I have heard about other low bit rate codecs. They often employ pre-processing such as equalisation to make the speech signal code better.
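
As an aside, a first order high pass (pre-emphasis) filter of the kind mentioned above is only a couple of lines. This is a generic textbook sketch, with 0.95 as a typical placeholder coefficient rather than anything taken from Codec 2:

  import numpy as np

  def pre_emphasis(speech, a=0.95):
      # y[n] = x[n] - a*x[n-1]: a first order high pass filter that boosts
      # the high frequencies, flattening speech with a strong low pass slope.
      y = np.copy(np.asarray(speech, dtype=float))
      y[1:] -= a * y[:-1]
      return y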

Putting Masking to work

Speech compression is the art of throwing stuff away. So how can we use this masking model to compress the speech? What can we throw away? Well, let’s start by assuming only the samples with the black crosses matter. This means we get to toss quite a bit of information away. This is good. We only have to transmit a subset of {Am}. How, I’m not sure yet. Never mind that for now. At the decoder, we need to synthesise the speech just from the black crosses. Hopefully it won’t sound like crap. Let’s work on that for now, and see if we are getting anywhere.

Attempt 1: Let’s toss away any harmonics that have a smaller amplitude than the mask (Listen). Hmm, that sounds interesting! Apart from not being very good, I can hear a tinkling sound, like trickling water. I suspect (but haven’t proved) this is because the masking model pushes harmonics above and below the mask from frame to frame, which makes them come and go quickly. Little packets of sine waves. I’ve heard similar sounds on other codecs when they are nearing their limits.

Attempt 2: OK, so how about we set the amplitude of all harmonics to exactly the mask level (Listen)? Hmmm, sounds a bit artificial and muffled. Now I’ve learned that muffled means the formants are not well formed. It needs more difference between the formant and anti-formant regions. I guess this makes sense if all samples are exactly on the masking curve – we can just hear ALL of them. The LPC post filter I developed a few years ago increased the definition of formants, which had a big impact on speech quality. So let’s try….

Attempt 3: Rather than deleting any harmonics beneath the mask, let’s reduce their level a bit. That way we won’t get tinkling – harmonics will always be there rather than coming and going. We can use the mask instead of the LPC post filter to know which harmonics we need to attenuate (Listen).

That’s better! Close enough to using the original {Am} (Listen), however with lots of information removed.

For comparison here is Codec 2 700B (Listen) and Codec 2 1300 (aka FreeDV 1600 when we add FEC) (Listen). This is the best I’ve done with LPC/LSP to date.

The post filter algorithm is very simple. I set the harmonic magnitudes to the mask (green line), then boost only the non-masked harmonics (black crosses) by 6dB. Here is a plot of the original harmonics (red), and the version (green) I mangle with my model and send to the decoder for synthesis:

Here is a spectrogram (thanks Audacity) for Attempt 1, 2, and 3 for the first 1.6 seconds (“The navy attacked the big”). You can see the clearer formant representation with Attempt 3, compared to Attempt 2 (lower inter-formant energy), and the effect of the post filter (dark line in center of formants).
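
In the same toy Python notation as the earlier sketches (an illustration only, not the real Octave implementation), the post filter described above is roughly the following, using the composite mask and the "matters" flags from the selection sketch:

  import numpy as np

  def mask_post_filter(composite_db, matters, boost_db=6.0):
      # Start with every harmonic sitting exactly on the mask (green line),
      # then lift only the non-masked harmonics (black crosses) by 6 dB
      # to sharpen the formants relative to the anti-formant regions.
      am_db = np.copy(composite_db)
      am_db[matters] += boost_db
      return am_db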

Command Line Kung Fu

If you want to play along:

~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --dump kristoff


octave:49> newamp_batch("../build_linux/src/kristoff");


~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --amread kristoff_am.out -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

The “newamp_fbf” script lets you single step through frames.


To synthesise the speech at the decoder I also need to come up with a phase for each harmonic. Phase and speech is still a bit of a mystery to me. Not sure what to do here. In the zero phase model, I sampled the phase of the LPC synthesis filter. However I don’t have an LPC filter any more.

Let’s think about what the LPC filter does with the phase. We know that at resonance the phase shifts rapidly:

The sharper the resonance the faster it swings. This has the effect of dispersing the energy in the pitch pulse exciting the filter.

So with the masking model I could just choose the center of each resonance, and swing the phase about madly. I know where the center of each resonance is, as we found that with the masking model.

Next Steps

The core idea is to apply a masking model to the set of harmonic magnitudes {Am} and select just 3-4 samples of that set that define the mask. At the decoder we use the masking model and a simple post filter to reconstruct a set of {Am_} that we use to synthesise the decoded speech.

Still a few problems to solve, however I think this masking model holds some promise for high quality speech at low bit rates. As it’s completely different to conventional LPC/LSP I’m flying blind. However the pieces are falling into place.

I’m currently working on i) how to reduce the number of samples to a low number; ii) how to determine which ones we really need (e.g. discarding inter-formant samples); and iii) how to represent the amplitude of each sample with a low or zero number of bits. There are also some artifacts with background noise and chunks of spectrum coming and going.

I’m pretty sure the frequencies of the samples can be quantised coarsely, say 3 bits each using scalar quantisation, or perhaps 8 bits/frame using VQ. There will also be quite a bit of correlation between the amplitudes and frequencies of each sample.
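
As a back-of-the-envelope illustration of what 3 bit scalar quantisation of a mask sample frequency might look like (the 200-3600 Hz range and the log spacing are my guesses, not values from Codec 2):

  import numpy as np

  def quantise_freq(f_hz, f_min=200.0, f_max=3600.0, bits=3):
      # Uniform quantiser on a log frequency axis with 2**bits levels.
      levels = 2 ** bits
      x = np.log2(f_hz / f_min) / np.log2(f_max / f_min)         # map to 0..1
      index = int(np.clip(np.round(x * (levels - 1)), 0, levels - 1))
      f_hat = f_min * (f_max / f_min) ** (index / (levels - 1))  # reconstruct
      return index, f_hat

  print(quantise_freq(1500.0))   # -> (5, roughly 1576 Hz)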

For voiced speech there will be a downwards (low pass) slope in the amplitudes, for unvoiced speech more energy at high frequencies. This suggests joint VQ of the sample frequencies and amplitudes might be useful.

The frequency and amplitude of the mask samples will be highly correlated in time (small frame to frame variations) so they will have good robustness to bit errors if we apply trellis decoding techniques. Compared to LPC/LSP the bandwidth of formants is “hard coded” by the masking curves, so the dreaded R2D2 noises from LSPs ending up too close together due to bit errors might be a thing of the past. I’ll explore robustness to bit errors when we get to the fully quantised stage.

Sridhar Dhanapalan: Twitter posts: 2015-08-17 to 2015-08-23

Mon, 2015-08-24 01:27

David Rowe: A Miserable Debt Free Life Part 2

Sun, 2015-08-23 10:31

The first post was very popular, and sparked debate all over the Internet. I’ve read many of the discussions, and would like to add a few points.

Firstly I don’t feel I did a very good job of building my assets – plenty of my friends have done much better in terms of net worth and/or early retirement. Many have done the Altruism thing better than I. Sites like Mr. Money Moustache do a better job at explaining the values I hold around money. Also I’ve lost interest in more accumulation, but my lifestyle seems interesting to people, hence these posts.

The Magical 10%

The spreadsheet I put up was not for you. It was just a simple example, showing how compound interest, savings and time can work for you. Or against you, if you like easy credit and debt. A lot of people seem hung up on the 10% figure I used.
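
For what it’s worth, the guts of that spreadsheet is one line of compound growth. Here is a trivial Python equivalent; the 10% return and the other numbers are, again, examples only:

  def future_value(annual_saving, rate=0.10, years=30):
      # Deposit annual_saving at the end of each year, compounding at `rate`.
      balance = 0.0
      for _ in range(years):
          balance = balance * (1.0 + rate) + annual_saving
      return balance

  # Saving $10k/year at 10% for 30 years gives roughly $1.64M;
  # the same exponential maths works against you on debt.
  print(round(future_value(10000)))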

I didn’t spell out exactly what my financial strategy is for good reason.

You need to figure out how to achieve your goals. Maybe it’s saving, maybe it’s getting educated to secure a high income, or maybe it’s nailing debt early. Some of my peers like real estate. I like shares, a good education, professional experience, and small business. I am mediocre at most of them. I looked at other people’s stories, then found something that worked for me.

But you need to work this out. It’s part of the deal, and you are not going to get the magic formula from a blog post by some guy sitting on a couch with too much spare time on his hands and an Internet connection.

The common threads are spending less than you earn, investment, and time. And yes, this is rocket science. The majority of the human race just can’t do it. Compound interest is based on exponential growth – which is completely under-appreciated by the human race. We just don’t get exponential growth.


Another issue around the 10% figure is risk. People want guarantees, zero risk, a cook book formula. Life doesn’t work like that. I had to deal with shares tumbling after 9/11 and the GFC, and a divorce. No one on a forum in the year 2000 told me about those future events when I was getting serious about saving and investing. Risk and return are a part of life. The risk is there anyway – you might lose your job tomorrow or get sick or divorced or have triplets. It’s up to you if you want to put that risk to work or shy away from it.

Risk can be managed; plan for it. For example you can say “what happens if my partner loses his job for 12 months”, or “what happens if the housing market dips 35% overnight”. Then plug those numbers in and come up with a strategy to manage that risk.

Let’s look at the downside. If the magical 10% is not achieved, or even if a financial catastrophe strikes, who is going to be in a better position? Someone who is frugal and can save, or someone maxed out on debt who can’t live without the next pay cheque?

There is a hell of a lot more risk in doing nothing.

Make a Plan and Know Thy Expenditure

Make your own plan. There is something really valuable in simply having a plan. Putting some serious thought into it. Curiously, I think this is more valuable than following the plan. I’m not sure why, but the process of planning has been more important to me than the actual plan. It can be a couple of pages of dot points and a single page spreadsheet. But write it down.

Some people commented that they know what they spend, for example they have a simple spreadsheet listing their expenses or a budget. Just the fact that they know their expenditure tells me they have their financial future sorted. There is something fundamental about this simple step. The converse is also true. If you can’t measure it, you can’t manage it.

No Magic Formula – It’s Hard Work

If parts of my experience don’t work for you, go and find something that does. Anything of value is 1% inspiration and 99% perspiration. Creating your own financial plan is part of the 99%. You need to provide that. Develop the habit of saving. Research investment options that work for you. Talk to your successful friends. Learn to stop wasting money on stuff you don’t need. Understand compound interest in your saving and in your debt. Whatever it takes to achieve your goals. These things are hard. No magic formula. This is what I’m teaching my kids.

Work your System

There is nothing unique about Australia, e.g. middle class welfare, socialised medicine, or high priced housing. Well it is quite nice here but we do speak funny and the drop bears are murderous. And don’t get me started on Tony Abbott. The point is that all countries have their risks and opportunities. Your system will be different to mine. Health care may suck where you live but maybe house prices are still reasonable, or the average wage in your profession is awesome, or the cost of living really low, or you are young without dependents and have time in front of you. Whatever your conditions are, learn to make them work for you.

BTW why did so few people comment on the Altruism section? And why so many on strategies for retiring early?

Binh Nguyen: Cracking a Combination Lock, Some Counter-Stealth Thoughts, and More Apple Information

Sun, 2015-08-23 00:55
Someone was recently trying to sell a safe but they didn't have the combination (they had proof of ownership, if you're wondering). Anybody who has faced this situation is often torn, because the item in question may be valuable but the safe can be of comparable value, so it's a lose-lose situation. If you remember the original combination then all is well and good (I first encountered this situation in a hotel when I locked something but forgot the combination; it took me an agonising amount of time to recall the unlock code). If not, you're left with physical destruction of the safe to get back in, etc...

Tips on getting back in:

- did you use mnemonics of some sort to get at the combination?

- is there a limitation on the string that can be entered (any side intelligence is useful)?

- is there a time lock involved?

- does changing particular variables make it easier to get back in non-destructively?

- keep a log of the combinations you have tried to ensure you don't cover the same territory twice

In this case, things were a bit odd. The safe had rubber buttons which, when removed, exposed membrane-type switches that could be interfaced via an environmental sensor acquisition and interface device (something like an Arduino). (If you're curious, the safe was designed and produced by a well known international security firm, proving that brand doesn't always equate to quality.) Once you program it and wire things up correctly, it's simply a case of letting your robot and program run until the safe opens. Another option is a more robust robot that physically pushes the buttons, but obviously this takes quite a bit more hardware to get working (which can make the project pretty expensive and potentially not worthwhile).

As I covered in my book on 'Cloud and Internet Security', please use proper locks with adequate countermeasures (time locks, variable string lengths, abnormal characters, shim-proof, relatively unbreakable, etc...) and have a backup in case something goes wrong.

I've been thinking about stealth design and countermeasures a bit more.

- when you look at the 2D thrust vectoring configuration of the F-22 Raptor you sometimes wonder why they didn't go 3D. One possible reason may be the 'letterbox effect'. It was designed predominantly as an air superiority fighter that relies heavily on BVR capabilities. From front on, the plume effect is diminished (think about particle/energy weapon implementation problems), making it more difficult to detect. Obviously, this potentially reduces sideward movement (particularly in comparison with 3D TVT options; pure turn is more difficult but combined bank and turn isn't). An obvious tactic is to force the F-22 into sideward movements if it is ever on your tail (unlikely, due to its apparently better sensor technology though)

- the above is a moot point if you factor in variable thrust (one engine fires at a higher rate of thrust relative to the other) but it may result in feedback issues. People who have experience with fly-by-wire systems or undertuned high performance race cars will better understand this

- people keep harping on about how 5th gen fighters can rely more heavily on BVR capabilities. Something which is often little spoken of is the relatively low performance of AAM (Air to Air Missile) systems (moreover, there is a difference between seeing, achieving RADAR lock, and achieving a kill). There must be upgrades along the way/in the pipeline to make 5th gen fighters a viable/economic option into the future

- the fact that several allied nations (Japan, Korea, and Turkey among them currently) (India, Indonesia, and Russia are among those developing their own based on non-Western designs) are developing their own indigenous 5th gen fighters which have characteristics more similar to the F-22 Raptor (the notable exception may be Israel, who are maintaining and upgrading their F-15 fleet) and have air superiority in mind tells us that the F-35 is a much poorer brother to the F-22 Raptor, in spite of what is being publicly said

Warplanes: No Tears For The T-50

- it's clear that the US and several allied nations believe that current stealth may have limited utility in the future. In fact, the Israelis have said that within 5-10 years the JSF may lose any significant advantage that it currently has without upgrades

- everyone knows of the limited utility of AAM (Air to Air Missile) systems. It will be interesting to see whether particle/energy weapons are retrofitted to the JSF or whether they will be reserved entirely for 6th gen fighters. I'd be curious to know how much progress they've made with regard to this, particularly with regard to energy consumption

- even if there have been/are intelligence breaches in the design of new fighter jets, there's still the problem of production. The Soviets basically had the complete blueprints for NASA's Space Shuttle but ultimately decided against using it on a regular basis/producing more because, like the Americans, they discovered that it was extremely uneconomical. For a long time the Soviets have trailed the West with regard to semiconductor technology, which means that their sensor technology may not have caught up. This mightn't be the case with the Chinese. Ironically, should the Chinese fund the Russians and the two work together, they may achieve greater progress than by working independently

- some of the passive IRST systems out there have current ranges around the 100-150km mark (that is publicly acknowledged)

- disorientation of gyroscopes has been used as a strategy against UCAVs/UAVs. I'd be curious about how such technology would work against modern fighters, which often go into failsafe mode when the pilot blacks out (nobody wants to lose a fighter jet worth 8 or more figures, hence the technology)... The other interesting thing would be how on-field technologies such as temporary sensory deprivation (blinding, deafening, disorientation, etc...) could be used in unison from longer range; all of these are technologies which have been tested and used against ground based troops before

- I've been thinking/theorising about some light-based detection technologies for aircraft in general. One option I've been considering is somewhat like a spherical ball. The spherical ball is composed of lenses which focus onto a centre composed of sensors, a hybrid technology based on the photoelectric effect and spectroscopic theory. The light would automatically trigger a voltage (much like a solar cell) while use of diffraction/spectroscopic theory would enable identification of aircraft from long range using light. The theory behind this is based on the way engine plumes work and the way jet fuels differ. Think about this carefully. Russian rocket fuel is very different from Western rocket fuel. I suspect it's much the same for jet fuel. We currently identify star/planet composition on roughly the same theory. Why not fighter aircraft? Moreover, there are other distinguishing aspects of jet fighter nozzle exhausts (see my previous post and the section on LOAN systems). Think about the length and shape of each plume based on the current flight mode (full afterburner, cruising, etc...) and the way most engine exhausts are unique (due to a number of different reasons including engine design, fuel, etc...). Clearly, the F-22, F-35, B-2, and other stealth aircraft have very unique nozzle shapes when compared to current 4th gen fighter options and to one another. The other thing is that, given sufficient research (and I suspect a lot of time), I believe that the benefits of night or day flight will/could be largely mitigated. Think about the way in which light and camera filters (and night vision) work. They basically screen out based on frequency/wavelength to make things more visible. You should be able to achieve the same thing during daylight. The other bonus of such technology is that it is entirely passive, giving the advantage back to the party in defense, and intelligence is relatively easy to collect. Just show up at a demonstration or near an airfield...

- such technology may be a moot point as we have already made progress on cloaking (effectively invisible to the naked eye) technology (though the exact details are classified, as are a lot of other details regarding particle/energy weapons and shielding technologies)... There's also the problem of straight lines. For practical purposes, light travels in straight lines... OTH type capabilities are beyond such technology (for the time being. Who knows what will happen in the future?)

- someone may contest that I seem to be focusing on exhaust only but, as you are aware, this style of detection should also work against standard objects as well (though its practicality would be somewhat limited). Just like RADAR, though, you give up on being able to power through weather and other physical anomalies because you can't use a conventional LASER. For me, this represents a balance between being detected from an attacker's perspective and being able to track them from afar... If you've ever been involved in a security/bug sweep you will know that a LASER even of modest power can be seen from quite a distance away

- everybody knows how dependent allied forces are upon integrated systems (sensors, re-fuelling, etc...)

- never fly straight and level against a 5th gen fighter. Weave up and down and side to side even on patrols to maximise the chances of detecting it earlier in the game, because not all of them have genuine all-aspect stealth

- I've been thinking of other ways of defending against low observability aircraft. The first is based on 'loitering' weapons, namely weapons which move at low velocity/loiter until they come within targeting range of aircraft. Then they 'activate' and chase their target much like a 'moving mine' (a technology often seen in cartoons?). Another is essentially turning off all of your sensors once they come within targeting range. Once they end up in passive detection range, you then fire in massive, independent volleys, knowing full well that low observability aircraft have low payload capability owing to compromises in their design

- as stated previously, I very much doubt that the JSF is as bad as some people are portraying it

- it's clear that defense has become more integrated with economics now by virtue of the fact that most of our current defense theory is based on the notion of deterrence. I believe that the only true way forward is reform of the United Nations, increased use of unmanned technologies, and perhaps people coming to terms with their circumstances differently (unlikely given how long humanity has been around), etc... There is a strong possibility that the defense establishment's belief that future defense programs could be unaffordable will become true within the context of deterrence and our need to control affairs around the world. We need cheaper options with the ability to 'push up' when required...

All of this is a moot point though, because genuine 5th gen fighters should be able to see you from a mile off, and most countries who have entered the stealth technology arena are struggling to build 5th gen options (including Russia, who have a long history in defense research and manufacturing). For the most part, they're opting for a combination of direct confrontation and damage limitation through reduction of defensive projection capability via long range weapons such as aircraft carrier-destroying missiles, targeting of AWACS/refuelling systems, etc... and like-for-like battle options...

I've been working on more Apple-based technology of late (I've been curious about the software development side for a while). It's been intriguing taking a closer look at their hardware. Most people I've come across have been impressed by the Apple ecosystem. To be honest, the more I look at the technology born of this company, the more 'generic' it seems. Much of the technology is simply repackaged, but in a better way. They've had more than their fair share of problems.

How to identify MacBook models

How to identify MacBook Pro models

A whole heap of companies, including graphics card, game console, and computer manufacturers, were caught out by BGA implementation problems (basically, people tried to save money by reducing the quality of solder; these problems have largely been fixed, much like the earlier capacitor saga). Apple weren't immune

Lines on a screen of an Apple iMac. Can be due to software settings, firmware, or hardware

Apparently, Macbooks get noisy headphone jacks from time to time. Can be due to software settings or hardware failure

One of the strangest things I've found is that, in spite of a failed primary storage device, people still try to sell hardware for almost the current market value of a perfectly functional machine. Some people still go for it, but I'm guessing they have spare hardware lying around

There are some interesting aspects to their MagSafe power adapters. Some aspects are similar to authentication protocols used by manufacturers such as HP to ensure that everything is safe and that only original OEM equipment is used. Something tells me they don't do enough testing though. They seem to have a continuous stream of anomalous problems. It could be similar to the Microsoft Windows security problem: do you want an OS delivered in a timely fashion, or one that is deprecated but secure at a later date (a point delivered in a lecture by a Microsoft spokesman a while back)? You can't predict everything that happens when things move into mass-scale production, but I would have thought that the 'torquing' problem would have been obvious from a consumer engineering/design perspective from the outset...

Macbook power adapter compatibility

Overheating problems on Macbooks quite common

Upgrading Apple laptop hard drives is similar in complexity to that of PC based laptops

One thing has to be said of Apple hardware construction: it's radically different to that of PC based systems. To be honest, I'd rather deal with a business class laptop that is designed to be upgraded and probably exhibits greater reliability. Opening a lot of their devices has told me that the balance between form and function tilts too far towards form

One frustrating aspect of the Apple ecosystem is that they gradually phase out support for old hardware by inserting pre-requisite checking. Thankfully, as others (and I) have discovered, bypassing some of their checks can be trivial at times

David Rowe: Hamburgers versus Oncology

Sat, 2015-08-22 09:30

On a similar but slightly lighter note, this blog was pointed out to me. The subject is high (saturated) fat versus carbohydrate based diets, which is an ongoing area of research, and may (may) be useful in managing diabetes. This gentleman is a citizen scientist (and engineer no less) like myself. Cool. I like the way he uses numbers and in particular the way the data is presented graphically.

However I tuned out when I saw claims of “using ketosis to fight cancer”, backed only by an anecdote. If you are interested, this claim is thoroughly debunked on

Bullshit detection 101 – if you find a claim of curing cancer, it’s pseudo-science. If the evidence cited is one person’s story (an anecdote) it’s rubbish. You can safely move along. It shows a dangerous leaning towards dogma, rather than science. Unfortunately, these magical claims can obscure useful research in the area, for example exploring a subtle, less sensational effect between a ketogenic diet and diabetes. That’s why people doing real science don’t make outrageous claims without very strong evidence – it kills their credibility.

We need short-circuit methods for spotting pseudo-science. Otherwise you can waste a lot of time and energy investigating spurious claims. People can get hurt or even killed. It takes a lot less effort to make a stupid claim than to prove it’s stupid. These days I can make a call by reading about one paragraph; the tricks used to apply a scientific veneer to magical claims are pretty consistent.

A hobby of mine is critical thinking, so I enjoy exploring magical claims from that perspective. I am scientifically trained and do R&D myself, in a field that I earned a PhD in. Even with that background, I know how hard it is to create new knowledge, and how easy it is to fool myself when I want to believe.

I’m not going to try bacon double cheeseburger (without the bun) therapy if I get cancer. I’ll be straight down to Oncology and take the best that modern, evidence based medicine can give, from lovely, dedicated people who have spent 20 years studying and treating it. Hit me with the radiation and chemotherapy, Doc! And don’t spare the Sieverts!

David Rowe: Is Alt-Med Responsible for 20% of Cancer Deaths?

Sat, 2015-08-22 09:30

In my meanderings on the InterWebs this caught my eye:

As a director of a cancer charity I work with patients everyday; my co-director has 40-yrs experience at the cancer coalface. We’re aware there are many cancer deaths that can be prevented if we could reduce the number of patients delaying or abandoning conventional treatment while experimenting with alt/med. It is ironic that when national cancer deaths are falling the numbers of patients embracing alt/med is increasing and that group get poor outcomes. If about 46,000 patients die from cancer in 2015, we suspect 10-20% will be caused by alt/med reliance. This figure dwarfs the road toll, deaths from domestic violence, homicide, suicide and terrorism in this country.

This comment was made by Pip Cornell in the comments on this article discussing declining cancer rates. OK, so Pip’s views are anecdotal. She works for a charity that assists cancer sufferers. I’m putting it forward as a theory, not a fact. More research is required.

The good news is that evidence based medicine is getting some traction with cancer. The bad news is that Alt-med views may be killing people. I guess this shouldn’t surprise me; Alt-med (non evidence-based medicine) has been killing people throughout history.

The Australian Government has recently introduced financial penalties for parents who do not vaccinate. Raw milk has been outlawed after it killed a toddler. I fully support these developments. Steps in the right direction. I hope they take a look at the effect of alt-med on serious illness like cancer.

Russell Coker: The Purpose of a Code of Conduct

Wed, 2015-08-19 21:26

On a private mailing list there have been some recent discussions about a Code of Conduct which demonstrate some great misunderstandings. The misunderstandings don’t seem particular to that list so it’s worthy of a blog post. Also people tend to think more about what they do when their actions will be exposed to a wider audience so hopefully people who read this post will think before they respond.


The first discussion concerned the issue of making “jokes”. When dealing with the treatment of other people (particularly minority groups) the issue of “jokes” is a common one. It’s fairly common for people in positions of power to make “jokes” about people with less power and then complain if someone disapproves. The more extreme examples of this concern hate words which are strongly associated with violence, one of the most common is a word used to describe gay men which has often been associated with significant violence and murder. Men who are straight and who conform to the stereotypes of straight men don’t have much to fear from that word while men who aren’t straight will associate it with a death threat and tend not to find any amusement in it.

Most minority groups have words that are known to be associated with hate crimes. When such words are used they usually send a signal that the minority groups in question aren’t welcome. The exception is when the words are used by other members of the group in question. For example if I was walking past a biker bar and heard someone call out “geek” or “nerd” I would be a little nervous (even though geeks/nerds have faced much less violence than most minority groups). But at a Linux conference my reaction would be very different. As a general rule you shouldn’t use any word that has a history of being used to attack any minority group other than one that you are a member of, so black rappers get to use a word that was historically used by white slave-owners but because I’m white I don’t get to sing along to their music. As an aside we had a discussion about such rap lyrics on the Linux Users of Victoria mailing list some time ago, hopefully most people think I’m stating the obvious here but some people need a clear explanation.

One thing that people should consider about “jokes” is the issue of punching-down vs punching-up [1] (there are many posts about this topic, I linked to the first Google hit which seems quite good). The basic concept is that making jokes about more powerful people or organisations is brave while making “jokes” about less powerful people is cowardly and serves to continue the exclusion of marginalised people. When I raised this issue in the mailing list discussion a group of men immediately complained that they might be bullied by lots of less powerful people making jokes about them. One problem here is that powerful people tend to be very thin skinned due to the fact that people are usually nice to them. While the imaginary scenario of less powerful people making jokes about rich white men might be unpleasant if it happened in person, it wouldn’t compare to the experience of less powerful people who are the target of repeated “jokes” in addition to all manner of other bad treatment. Another problem is that the impact of a joke depends on the power of the person who makes it, EG if your boss makes a “joke” about you then you have to work on your CV, but if a colleague or subordinate makes a joke then you can often ignore it.

Who does a Code of Conduct Protect

One member of the mailing list wrote a long and very earnest message about his belief that the CoC was designed to protect him from off-topic discussions. He analysed the results of a CoC on that basis and determined that it had failed due to the number of off-topic messages on the mailing lists he subscribes to. Being so self-centered is strongly correlated with being in a position of power, he seems to sincerely believe that everything should be about him, that he is entitled to all manner of protection and that any rule which doesn’t protect him is worthless.

I believe that the purpose of all laws and regulations should be to protect those who are less powerful, the more powerful people can usually protect themselves. The benefit that powerful people receive from being part of a system that is based on rules is that organisations (clubs, societies, companies, governments, etc) can become larger and achieve greater things if people can trust in the system. When minority groups are discouraged from contributing and when people need to be concerned about protecting themselves from attack the scope of an organisation is reduced. When there is a certain minimum standard of treatment that people can expect then they will be more willing to contribute and more able to concentrate on their contributions when they don’t expect to be attacked.

The Public Interest

When an organisation declares itself to be acting in the public interest (EG by including “Public Interest” in the name of the organisation) I think that we should expect even better treatment of minority groups. One might argue that a corporation should protect members of minority groups for the sole purpose of making more money (it has been proven that more diverse groups produce better quality work). But an organisation that’s in the “Public Interest” should be expected to go way beyond that and protect members of minority groups as a matter of principle.

When an organisation is declared to be operating in the “Public Interest” I believe that anyone who’s so unable to control their bigotry that they can’t refrain from being bigoted on the mailing lists should not be a member.


James Purser: The next step in the death of the regional networks

Wed, 2015-08-19 00:30

So we were flicking around YouTube this evening, as we are wont to do, and we came across this ad.

Now, an ad on YouTube is nothing special; however, what is special about this one is the fact that it's a local ad. That fishing shop is fifteen minutes from where I live, and it's not the first local ad that I've seen on YouTube lately.

This means two things. YouTube can tell that I'm from the area the ad is targeted at, and local businesses now have an alternative to the local TV networks for advertising – an alternative that is available across multiple platforms, has a constant source of new content and is deeply embedded in the internet-enabled culture that the networks have been ignoring for the past fifteen years.

Getting rid of the 2/3 rule, or removing the 75% reach rule won't save the networks. Embracing the internet and engaging with people in that space, just might.

Blog Categories: media, regional media

Francois Marier: Watching (some) Bluray movies on Ubuntu 14.04 using VLC

Tue, 2015-08-18 17:47

While the Bluray digital restrictions management system is a lot more crippling than the one preventing users from watching their legally purchased DVDs, it is possible to decode some Bluray discs on Linux using vlc.

First of all, install the required packages as root:

apt install vlc libaacs0 libbluray-bdj libbluray1
mkdir /usr/share/libbluray/
ln -s /usr/share/java/libbluray-0.5.0.jar /usr/share/libbluray/libbluray.jar

The last two lines are there to fix an error you might see on the console when opening a Bluray disc with vlc:

libbluray/bdj/bdj.c:249: libbluray.jar not found.
libbluray/bdj/bdj.c:349: BD-J check: Failed to load libbluray.jar

and is apparently due to a bug in libbluray.

Then, as a user, you must install some AACS decryption keys. The most interesting source at the moment seems to be

mkdir ~/.config/aacs
cd ~/.config/aacs
wget

but it is still limited in the range of discs it can decode.

David Rowe: OLPC and Measuring if Technology Helps

Tue, 2015-08-18 17:30

I have a penchant for dating teachers who have worked in Australia’s 3rd world. This has given me a deep, personal appreciation of just how hard developing world education can be.

So I was wondering: where has the OLPC project gone? And in particular, has it helped people? I have had some experience with this wonderful initiative, and it was the subject of much excitement in my geeky, open source community.

I started to question the educational outcomes of the OLPC project in 2011. Too much tech buzz, and I know from my own experiences (and those of friends in the developing world) that parachuting rich white guy technology into the developing world then walking away just doesn’t work. It just makes geeks and the media feel good, for a little while at least.

Turns out 2.5M units have been deployed worldwide, quite a number for any hardware project. One Education alone has an impressive 50k units in the field, and is seeking to deploy many more. Rangan Srikhanta from One Education Australia informed me (via a private email) that a 3 year study has just kicked off with 3 universities to evaluate the use of the XO and other IT technology in the classroom. Initial results are due in 2016. They have also tuned their deployment strategy to address better use of deployed XOs.

Other studies have questioned the educational outcomes of the OLPC project. Quite a vigorous debate in the comments there! I am not a teacher, so don’t profess to have the answers, but I did like this quote:

He added: “…the evidence shows that computers by themselves have no effect on learning and what really matters is the institutional environment that makes learning possible: the family, the teacher, the classroom, your peers.”

Measurement Matters

It’s really important to make sure the technology is effective. I have direct experience of developing world technology deployments that haven’t reached critical mass despite a lot of hard work by good people. With some initiatives like OLPC, even after 10 years (an eternity in IT, but not long in education) there isn’t any consensus. This means it’s unclear if the resources are being well spent.

I have also met some great people from other initiatives like AirJaldi and Inveneo who have done an excellent job of using geeky technology to consistently help people in the developing world.

This matters to me. These days I am developing technology building blocks (like HF Digital Voice), rather than working directly on deployments in the developing world. Not as sexy; I don’t get to sweat amongst the palm trees, or show videos of “unboxing” shiny technology in dusty locations. But for me at least, it’s a better chance to “improve the world a little bit” using my skills and resources.

Failure is an Option

When I started Googling for recent OLPC developments I discovered many posts declaring OLPC to be a failure. I’m not so sure. It innovated in many areas, such as robust, repairable, eco-friendly IT technology purpose designed for education in the developing world. They have shipped 2.5M units, which I have never done with any of my products. It excited and motivated a lot of people (including me).

When working on the Village Telco I experienced difficult problems with interference on mesh networks and frustration working with nasty closed source chip set vendors. I started asking fundamental questions about sending voice over radio, and that led me to my current HF Digital Voice work – which is 1000 times (60dB) more efficient than VOIP over Wifi and completely open source.

Pushing developing world education and telecommunications forward is a huge undertaking. Mistakes will be made, but without trying we learn nothing, and get no closer to solutions. So I say GO failure.

I have learned to push for failure early – get that shiny tech out in the field and watch how it breaks. Set binary pass/fail conditions. Build in ways to objectively measure its performance. Avoid gold plating and long development cycles before fundamental assumptions have been tested.

Measuring the Effectiveness of my Own Work

Let’s put the spotlight on me. Can I measure the efficacy of my own work in hard numbers? This blog gets visited by 5000 unique IPs a day (150k/month). Unique IPs is a reasonable measure for a blog, and it’s per day, so it shows some recurring utility.

OK, so how about my HF radio digital voice software? Like the OLPC project, that’s a bit harder to measure. Quite a few people are trying FreeDV, but an unknown number of them are walking away after an initial tinker. A few people are saying publicly it’s not as good as SSB. So “downloads”, like the number of XO laptops deployed, is not a reliable metric of the utility of my work.

However there is another measure. An end-user can directly compare the performance of FreeDV against analog SSB over HF radio. Your communication is either better or it is not. You don’t need any studies, you can determine the answer yourself in just a few minutes. So while I may not have reached my technical goals quite yet (I’m still tweaking FreeDV 700), I have a built-in way for anyone to determine if the technology I am developing is helping anyone.

Russell Coker: BTRFS Training

Tue, 2015-08-18 16:26

Some years ago Barwon South Water gave LUV 3 old 1RU Sun servers for any use related to free software. We gave one of those servers to the Canberra makerlab, another is used as the server for the LUV mailing lists and web site, and the 3rd server was put aside for training. The servers have hot-swap 15,000rpm SAS disks – i.e. disks that have a replacement cost greater than the budget we have for hardware. As we were given a spare 70G disk (and a 140G disk can replace a 70G disk) the LUV server has 2*70G disks, and the 140G disks (which can’t be replaced) are in the server for training.

On Saturday I ran a BTRFS and ZFS training session for the LUV Beginners’ SIG. This was inspired by the amount of discussion of those filesystems on the mailing list and the amount of interest when we have lectures on those topics.

The training went well, the meeting was better attended than most Beginners’ SIG meetings and the people who attended it seemed to enjoy it. One thing that I will do better in future is clearly documenting commands that are expected to fail and documenting how to login to the system. The users all logged in to accounts on a Xen server and then ssh’d to root at their DomU. I think that it would have saved a bit of time if I had aliased commands like “btrfs” to “echo you must login to your virtual server first” or made the shell prompt at the Dom0 include instructions to login to the DomU.

Each user or group had a virtual machine. The server has 32G of RAM and I ran 14 virtual servers that each had 2G of RAM. In retrospect I should have configured fewer servers and asked people to work in groups, that would allow more RAM for each virtual server and also more RAM for the Dom0. The Dom0 was running a BTRFS RAID-1 filesystem and each virtual machine had a snapshot of the block devices from my master image for the training. Performance was quite good initially as the OS image was shared and fit into cache. But when many users were corrupting and scrubbing filesystems performance became very poor. The disks performed well (sustaining over 100 writes per second) but that’s not much when shared between 14 active users.

The ZFS part of the tutorial was based on RAID-Z (I didn’t use RAID-5/6 in BTRFS because it’s not ready to use and didn’t use RAID-1 in ZFS because most people want RAID-Z). Each user had 5*4G virtual disks (2 for the OS and 3 for BTRFS and ZFS testing). By the end of the training session there was about 76G of storage used in the filesystem (including the space used by the OS for the Dom0), so each user had something like 5G of unique data.

We are now considering what other training we can run on that server. I’m thinking of running training on DNS and email. Suggestions for other topics would be appreciated. For training that’s not disk intensive we could run many more than 14 virtual machines, 60 or more should be possible.

Below are the notes from the BTRFS part of the training, anyone could do this on their own if they substitute 2 empty partitions for /dev/xvdd and /dev/xvde. On a Debian/Jessie system all that you need to do to get ready for this is to install the btrfs-tools package. Note that this does have some risk if you make a typo. An advantage of doing this sort of thing in a virtual machine is that there’s no possibility of breaking things that matter.

  1. Making the filesystem
    1. Make the filesystem; this makes a filesystem that spans 2 devices (note you must use the -f option if there was already a filesystem on those devices):

      mkfs.btrfs /dev/xvdd /dev/xvde
    2. Use file(1) to see basic data from the superblocks:

      file -s /dev/xvdd /dev/xvde
    3. Mount the filesystem (can mount either block device, the kernel knows they belong together):

      mount /dev/xvdd /mnt/tmp
    4. See a BTRFS df of the filesystem, shows what type of RAID is used:

      btrfs filesystem df /mnt/tmp
    5. See more information about FS device use:

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem to change it to RAID-1 and verify the change (note that some parts of the filesystem were single and RAID-0 before this change):

      btrfs balance start -dconvert=raid1 -mconvert=raid1 -sconvert=raid1 --force /mnt/tmp

      btrfs filesystem df /mnt/tmp
    7. See if there are any errors, shouldn’t be any (yet):

      btrfs device stats /mnt/tmp
    8. Copy some files to the filesystem:

      cp -r /usr /mnt/tmp
    9. Check the filesystem for basic consistency (only checks checksums):

      btrfs scrub start -B -d /mnt/tmp
  2. Online corruption
    1. Corrupt the filesystem:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    2. Scrub again, should give a warning about errors:

      btrfs scrub start -B /mnt/tmp
    3. Check error count:

      btrfs device stats /mnt/tmp
    4. Corrupt it again:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    5. Unmount it:

      umount /mnt/tmp
    6. In another terminal follow the kernel log:

      tail -f /var/log/kern.log
    7. Mount it again and observe it correcting errors on mount:

      mount /dev/xvdd /mnt/tmp
    8. Run a diff, observe kernel error messages and observe that diff reports no file differences:

      diff -ru /usr /mnt/tmp/usr/
    9. Run another scrub; this will probably correct some errors which weren't discovered by diff:

      btrfs scrub start -B -d /mnt/tmp
  3. Offline corruption
    1. Umount the filesystem, corrupt the start, then try mounting it again which will fail because the superblocks were wiped:

      umount /mnt/tmp

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=200

      mount /dev/xvdd /mnt/tmp

      mount /dev/xvde /mnt/tmp
    2. Note that the filesystem was not mountable due to a lack of a superblock. It might be possible to recover from this but that’s more advanced so we will restore the RAID.

      Mount the filesystem in degraded RAID mode, which allows full operation:

      mount /dev/xvde /mnt/tmp -o degraded
    3. Add /dev/xvdd back to the RAID:

      btrfs device add /dev/xvdd /mnt/tmp
    4. Show the filesystem devices, observe that xvdd is listed twice, the missing device and the one that was just added:

      btrfs filesystem show /mnt/tmp
    5. Remove the missing device and observe the change:

      btrfs device delete missing /mnt/tmp

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem; this may not be strictly necessary, but it's good practice to do it when in doubt:

      btrfs balance start /mnt/tmp
    7. Umount and mount it, note that the degraded option is not needed:

      umount /mnt/tmp

      mount /dev/xvdd /mnt/tmp
  4. Experiment
    1. Experiment with the “btrfs subvolume create” and “btrfs subvolume delete” commands (which act like mkdir and rmdir).
    2. Experiment with “btrfs subvolume snapshot SOURCE DEST” and “btrfs subvolume snapshot -r SOURCE DEST” for creating regular and read-only snapshots of other subvolumes (including the root).


Leon Brooks: Making good Canon LP-E6 battery-pack contacts

Tue, 2015-08-18 15:29
battery-pack contacts

Canon LP-E6 battery packs (such as those used in my 70D camera) have two fine connector wires used for charging them.  These seem to be a weak point, as (if left to themselves) they eventually fail to connect well, which means that they do not charge adequately, or (in the field) do not run the equipment at all.

One experimenter discovered that scrubbing them with the edge of a stiff business card helped to make them good. So I considered something more extensive.

with (non-Canon this time) charger contacts

Parts: squeeze-bottle of cleaner (I use a citrus-based cleaner from PlanetArk, which seems to be able to clean almost anything off without being excessively invasive); spray-can of WD-40; cheap tooth-brush; paper towels (or tissues, or bum-fodder).

equipment required

Method: lightly spray cleaner onto contacts. Gently but vigorously rub along the contacts with toothbrush. Paper-dry the contacts.

brush head

Lightly spray WD-40 onto contacts. Gently but vigorously rub along the contacts with toothbrush. Paper-dry the contacts.

wider view of brush on contacts

(optional) When thoroughly dry, add a touch of light machine oil. This wards off moisture.

This appears to be just as effective with 3rd-party battery packs.

James Purser: Rethreading the Beanie

Tue, 2015-08-18 00:30

So there hasn't been any activity over at (actual podcast wise) since October last year.

I keep meaning to reboot things but I never quite get around to it and I've been thinking about why.

I think it boils down to two problems:

Firstly, I suffer from "Been there done that"-itis; that is, once I've done something I tend to start looking around for the next challenge.

Secondly I suffer from a severe case of "creators block".

For instance, I have twelve different episode ideas for Purser Explores The World, a series I really enjoy making because it means I get to learn new things and talk to interesting people. I mean I've covered everything from Richard the Third's remains being discovered to what it means to be a geek and using crowd sourcing to deal with disasters.

Some of the ideas I want to cover in new episodes include a visit to HARS, a local air museum with some really fascinating exhibits, the recent controversy over Amnesty International's new approach to sex workers' rights, and the idea that we've already gone past the point of no return with regards to AI controlled weapons systems.

Then there's For Science. With Magdeline Lum and Maia Sauren, then Mel Thompson, we covered everything from Space Archeology to Salami made from infant bacteria and How not to science.

I want to start podcasting about science again. Since the last episode of For Science I've tried to keep things up with #lunchtimescience but I miss the audio production side of things and I really miss the back and forth that we had going. I'm planning on dipping my toes back into the water with a weekly Lunchtime For Science podcast. This will be a shorter format, coming at the end of each week to summarise the news and hopefully introduce a couple of new segments.

And finally there's WTF Australia. Not sure what to do about this one. Bernie and I had a lot of fun doing the weekly hangouts but at the end we just sort of drifted. 

So as you can see, all the plans, just a lack of the ability to get over the blockage.

Blog Catagories: angrybeanie

Sridhar Dhanapalan: Twitter posts: 2015-08-10 to 2015-08-16

Mon, 2015-08-17 01:27

Peter Lieverdink: Exploring the solar system

Sun, 2015-08-16 22:27

At last week's telescope driver training I found out that Melbourne contains a 1 to 1 billion scale model of the solar system. It's an artwork by Cameron Robbins and Christopher Lansell.

The Sun is located at St Kilda marina and the planets are spaced out along the beach and foreshore back towards the city.

Since the weather was lovely today, I thought ... why not? The guide says you can walk from the Sun to Pluto in about an hour and a half, which would make your speed approximately three times the speed of light, or warp 1.44, if you like.
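As a rough check of those figures, here is a sketch that assumes the Sun-to-Pluto walk is the 5.9km quoted below and takes about 90 minutes, and uses the common TNG-style warp conversion v/c = w^(10/3); it may not be exactly what was intended, but it lands close to the numbers above:

  # Back-of-the-envelope check of the "three times the speed of light" claim.
  scale = 1e9                           # the artwork's 1:1 billion scale
  walk_km = 5.9                         # Sun (St Kilda marina) to Pluto (Garden City)
  walk_hours = 1.5
  c_km_per_hour = 299_792.458 * 3600    # speed of light in km/h

  walk_speed = walk_km / walk_hours     # ~3.9 km/h
  scaled_speed = walk_speed * scale     # equivalent speed through the real solar system
  ratio = scaled_speed / c_km_per_hour  # ~3.6 times the speed of light
  warp = ratio ** 0.3                   # TNG-style warp: v/c = w**(10/3)
  print(f"{ratio:.1f} times the speed of light, roughly warp {warp:.2f}")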


The Sun. It's big.

Mercury, 4.9mm across and still at the car park, 58m from the Sun.

Venus, 1.2cm across on the beach at 108m from the Sun.


Earth (1.3cm) and Moon (3.5mm) are on the beach as well, at 42m from Venus (and 150m from the Sun).

Mars is 6.8mm across and at the far end of the beach, 228m from the Sun.

The walk now takes you past the millions of tiny grains of sand that make up the asteroid belt and the beach.

Jupiter is 14cm across and lives 778m from the Sun, near the Sea Baths.

The outer solar system is rather large, so it would be wise to purchase an ice cream at this point.

Saturn (12cm, rings 28cm) is nearly twice as far away from the Sun at 1.4km near St Kilda harbour as Jupiter was.

Uranus (5cm) is so far from the Sun (2.9km) that it's actually in the next suburb, near the end of Wright Street in Middle Park.

Neptune (4.9cm) is in the next suburb again (Port Melbourne) at 4.9km from the Sun.

No longer a planet, but still included. Pluto is a 2mm pellet on the beach at Garden City (yet another suburb along again) at 5.9km from the Sun.


The nearest star (Proxima Centauri, an unassuming red dwarf) is about 40 trillion kilometers away from the Sun, which on the one to a billion scale happens to be about the same as once around the Earth. 


Proxima Centauri is just on the other side of the Sun from the rest of the solar system, a cool 4.2 light years away.
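The same arithmetic for Proxima Centauri, again only a sketch (assuming 4.2 light years and about 9.46 trillion kilometers per light year):

  # How far away is the nearest star in a 1:1 billion scale model?
  km_per_light_year = 9.4607e12
  proxima_ly = 4.2
  scale = 1e9

  real_km = proxima_ly * km_per_light_year   # ~4.0e13 km ("about 40 trillion")
  model_km = real_km / scale                 # ~39,700 km
  earth_circumference_km = 40_075            # roughly once around the Earth
  print(f"Model distance: {model_km:,.0f}km vs Earth circumference {earth_circumference_km:,}km")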


Google map of the Melbourne solar system.

Tags: astronomy, melbourne

Linux Users of Victoria (LUV) Announce: LUV Main September 2015 Meeting: Cross-Compiling Code for the Web / Annual General Meeting

Sun, 2015-08-16 19:30
Start: Sep 1 2015 18:30 End: Sep 1 2015 20:30 Location:

6th Floor, 200 Victoria St. Carlton VIC 3053



• Ryan Kelly, Cross-Compiling Code for the Web

• Annual General Meeting and lightning talks

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Rusty Russell: Broadband Speeds, New Data

Sat, 2015-08-15 16:02

Thanks to edmundedgar on reddit I have some more accurate data to update my previous bandwidth growth estimation post: OFCOM UK, who released their November 2014 report on average broadband speeds.  Whereas Akamai numbers could be lowered by the increase in mobile connections, this directly measures actual broadband speeds.

Extracting the figures gives:

  1. Average download speed in November 2008 was 3.6Mbit
  2. Average download speed in November 2014 was 22.8Mbit
  3. Average upload speed in November 2014 was 2.9Mbit
  4. Average upload speed in November 2008 to April 2009 was 0.43Mbit/s

So in 6 years, downloads went up by 6.333 times, and uploads went up by 6.75 times.  That’s an annual increase of 36% for downloads and 37% for uploads; that’s good, as it implies we can use download speed factor increases as a proxy for upload speed increases (as upload speed is just as important for a peer-to-peer network).
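A quick check of that arithmetic, using only the OFCOM figures quoted above:

  # Compound annual growth implied by the OFCOM numbers above.
  def annual_growth(start_mbit, end_mbit, years):
      return (end_mbit / start_mbit) ** (1 / years) - 1

  downloads = annual_growth(3.6, 22.8, 6)    # Nov 2008 -> Nov 2014
  uploads = annual_growth(0.43, 2.9, 6)      # roughly the same 6 year span
  print(f"downloads: factor {22.8 / 3.6:.3f}, about {downloads:.0%} per annum")  # ~6.333, ~36%
  print(f"uploads:   factor {2.9 / 0.43:.3f}, about {uploads:.0%} per annum")    # ~6.744, ~37%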

This compares with the Akamai UK numbers from my previous post of 3.526Mbit in Q4 2008 and 10.874Mbit in Q4 2014: only a factor of 3.08 (about 21% per annum). Given how close Akamai's numbers were to OFCOM's in November 2008 (a year after the iPhone UK release, but probably too early for mobile to have a significant effect), it's reasonable to assume that mobile plays a large part of this difference.

If we assume Akamai's numbers reflected real broadband rates prior to November 2008, we can also use it to extend the OFCOM data back a year: this is important since there was almost no bandwidth growth according to Akamai from Q4 2007 to Q4 2008: ignoring that period gives a rosier picture than my last post, and smells of cherrypicking data.

So, let’s say the UK went from 3.265Mbit in Q4 2007 (Akamai numbers) to 22.8Mbit in Q4 2014 (OFCOM numbers).  That’s a factor of 6.98, or 32% increase per annum for the UK. If we assume that the US Akamai data is under-representing Q4 2014 speeds by the same factor (6.333 / 3.08 = 2.056) as the UK data, that implies the US went from 3.644Mbit in Q4 2007 to 11.061 * 2.056 = 22.74Mbit in Q4 2014, giving a factor of 6.24, or 30% increase per annum for the US.
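Spelling that out (a sketch that simply reproduces the arithmetic above, under the stated assumption that Akamai under-represents US speeds by the same factor as UK speeds):

  # Estimate US Q4 2014 broadband speed by scaling the Akamai figure up by the
  # UK's OFCOM-vs-Akamai ratio, then work out the implied annual growth.
  uk_ofcom_factor = 22.8 / 3.6                      # ~6.333 (OFCOM, 2008 -> 2014)
  uk_akamai_factor = 10.874 / 3.526                 # ~3.08  (Akamai, Q4 2008 -> Q4 2014)
  adjustment = uk_ofcom_factor / uk_akamai_factor   # ~2.056

  us_q4_2007 = 3.644                                # Akamai, Mbit
  us_q4_2014_est = 11.061 * adjustment              # ~22.7 Mbit (the post rounds to 22.74)

  factor = us_q4_2014_est / us_q4_2007              # ~6.24 over 7 years
  annual = factor ** (1 / 7) - 1                    # ~30% per annum
  print(f"estimated US Q4 2014: {us_q4_2014_est:.2f}Mbit, factor {factor:.2f}, about {annual:.0%} per annum")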

As stated previously, China is now where the US and UK were 7 years ago, suggesting they’re a reasonable model for future growth for that region.  Thus I revise my bandwidth estimates; instead of 17% per annum this suggests 30% per annum as a reasonable growth rate.