Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Trent Lloyd: The bird that really likes my carport..

Mon, 2014-08-25 02:25
So yesterday I came home to this bird in my carport:

[photo]

From some googling I *think* it is a pigeon, maybe a dove, but I didn't check very hard, so I'm far from even moderately sure :)  The tag on its foot says "AUST 2006" and some other stuff I couldn't make out.



It was there all yesterday afternoon. I managed to get quite close to take the above photo, but it did flinch when I tried to get any closer.



This morning it was still there, and this afternoon when I got home it was still there, although now it's finding its way around the floor.



I do wonder why it likes my carport so much. It doesn't *seem* hurt, and it managed to get up here:

[photo]

So I assume it can fly somewhat. I guess I'll wait and see if it's there tomorrow... If anyone knows anything, feel free to leave a comment :)



I turned on the sprinklers and it was right at the very edge of the big double door out to the world, drinking the water splashing in, so it's not stuck in there or anything.



Weird!

Trent Lloyd: Green Day - Boulevard of Broken Dreams

Mon, 2014-08-25 02:25
[for the freedom lovers out there, this is totally and completely unrelated to anything freedom :)]



How the hell did I just lose the following poker hand?



I pull pocket sixes, and the flop comes Ace-6-Ace, giving me a full house, sixes full of aces. After some serious betting and deceiving I go all in.



HAND

[screenshot]

(my hand, circled in black: 6H, 6C, 6S, AH, AS; opponent's hand, circled in red: AH, AD, AC, AS, 6S)

He beat me with 4 of a kind aces.



Opponent Before:

[photo]

Opponent After:

[photo]

Note the completely evil look on his face! A conspiracy I tell you...



This after a long string of really stupid pair and occasionally two-pair wins. Sheesh!

Trent Lloyd: GNOME in Jericho

Mon, 2014-08-25 02:25
Watching Jericho S01E14, I noticed this...

[screenshot]

Looks suspiciously like GNOME 1.4 to me :)

Trent Lloyd: Avahi Scalability: "Is it good or is it bad?"

Mon, 2014-08-25 02:25
Lennart rightfully pointed out that I didn't really make any conclusion as to the results of my little test. The reason for this is, really, "I'm not sure".



Certainly, it seems to be OK: the number of transmitted packets, by my rough calculation, makes sense. I would be interested to see what the realistic practical throughput of multicast on wireless is when you have many hosts transmitting at once. I know that in 802.11b multicast is transmitted at the "basic rate" of 1 or 2 Mbit (or so I believe); I'm not sure if 802.11g changes this.



My quick gut feeling is "I think this would work" (on wireless); I have no doubt this is fine on a wired network.



More testing to be done...

Trent Lloyd: Some random non-scientific Avahi "scaling" figures

Mon, 2014-08-25 02:25
Talking to sjoerd and others on IRC, (for the benefit of the OLPC project), I decided to attempt to get some kind of an idea of the amount of traffic Avahi generates on a large network.



I booted up 80 UMLs (User Mode Linux instances), running 2.6.20.2, on my AMD Athlon64 X2 4200+ (overclocked to 2.5GHz per core) with 2GB of RAM.



Each was running with 16MB of RAM and a base Debian etch install with Avahi 0.6.16.



Interestingly with 80 VMs running my memory usage looked like this:

Mem: 2076124k total, 2012064k used, 64060k free, 18436k buffers

Swap: 996020k total, 8k used, 996012k free, 1476504k cached






I configured a 'UML Switch' with a tap device on the host attached (tun1) and told each VM to come up and use avahi-autoipd to obtain a link-local IP.
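
For anyone wanting to reproduce this, the plumbing looked roughly like the following (a sketch from memory: the control socket path and root filesystem image name are illustrative, not what I actually typed):

# switch process on the host, bridged to the tun1 tap device
uml_switch -tap tun1 -unix /tmp/uml.ctl &

# boot one of the 80 instances, attached to that switch
linux ubd0=etch_rootfs.img mem=16M eth0=daemon,,unix,/tmp/uml.ctl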



I had each VM set to advertise 3 services via static service advertisement files:



  • _olpc_presence._tcp
  • _activity._tcp (subtype _RSSActivity._sub._activity._tcp)
  • _activity._tcp (subtype _WebActivity._sub._activity._tcp)


plus it was configured with the Avahi defaults, so it would announce a workstation service (the default 'ssh' service was, however, NOT present) and the magic services that indicate what kinds of services are being announced.
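
For reference, a static service file along these lines lives in /etc/avahi/services/ on each VM (a sketch only: the port numbers here are made up, and it's worth checking that your Avahi version supports <subtype> in static service files):

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the host name -->
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_olpc_presence._tcp</type>
    <port>6881</port>
  </service>
  <service>
    <type>_activity._tcp</type>
    <subtype>_RSSActivity._sub._activity._tcp</subtype>
    <port>6882</port>
  </service>
  <service>
    <type>_activity._tcp</type>
    <subtype>_WebActivity._sub._activity._tcp</subtype>
    <port>6883</port>
  </service>
</service-group>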



So I started Wireshark and iptraf and began booting the 80 VMs at a pace of one every 10 seconds. After roughly 10-15 minutes, the following numbers of packets had been seen on the host tun1 interface:



704 UDP (56.3%)

390 ARP (21.2%)

156 OTHER (12.5%)




The ARPs are from avahi-autoipd, and the UDP packets are avahi-daemon speaking mDNS. iptraf reported:



Incoming Bytes: 417,391



I then gave my local machine an IP, which bumped the packet counts to 712, 395 and 157.



I then started 'avahi-browse _activity._tcp', which would result in 2 services from each machine being returned. After that had tidied up, the packet count was at:



935 UDP

Incoming Bytes: 496,901

Outgoing Bytes: 28,787 (30 packets according to iptraf)




Now this *really* gave my machine a heart attack: many 'linux' processes were eating 20% CPU apiece, and it took a good 10+ seconds for my machine to start responding again. I suspect if I was running the SKAS3 patch it might be a little less harsh.



After cancelling that, I ran 'avahi-browse -r _activity._tcp', which causes Avahi to resolve each of the services. Following that run:



UDP: 1287

Incoming: 570,000 bytes (1384 packets)

Outgoing: 185,000 bytes (227 packets)




In this case most of the services were cached and I just had to resolve each one.



I forgot to watch the traffic counts, so I re-ran the above test; iptraf claimed 165kbit/s at peak for one 5-second interval. During this time I noticed a bunch of the service resolution queries timed out. I suspect this may have to do with it causing my machine to lock hard for a bit while it does its magic... ;)



So that's the end of my very simple, basic run: some real (rather than theoretical) tests of the number of packets seen flying around with 80 hosts running Avahi on one network with a few services each, and of the impact of someone running a browse/resolve on a popular service type.



I'm going to try to commandeer some more hardware so I can run some faster tests and collect some more useful data.

Trent Lloyd: Video ringtones and DRM

Mon, 2014-08-25 02:25
So I was fiddling around with my new phone (Sony Ericsson K800i) and I noticed I could play videos as a ringtone; I was interested in how that worked...



So I downloaded a music video of "The Veronicas - When it all falls apart" from the Three Music store, at a cost of $3.00 (which, BTW, came down at 30K/s, not bad for mobile data...)



Once my phone had downloaded it, I had two options: "View" (which worked fine) and "Ringtone". On selecting the latter, the phone stated "This video is restricted against that kind of use".



Sigh.



I wonder what Three would say if I asked for a refund ;)

Trent Lloyd: DOA - Dead or Alive (The Movie): Featuring: Partial linux source

Mon, 2014-08-25 02:25
I was watching the movie "Dead or Alive" this afternoon, and was curious to see that the source code scrolling past was from the Linux kernel:

[screenshot]

Interestingly, they have blotted out bits and pieces, notably the copyright declaration. They also appear to lack the ability to render tabs.



You can compare it to arch/alpha/kernel/err_impl.h (taken from Ubuntu's linux-source-2.6.19); I'll include the excerpt pictured above here:





/*
 * linux/arch/alpha/kernel/err_impl.h
 *
 * Copyright (C) 2000 Jeff Wiedemeier (Compaq Computer Corporation)
 *
 * Contains declarations and macros to support Alpha error handling
 * implementations.
 */

union el_timestamp;
struct el_subpacket;
struct ev7_lf_subpackets;

struct el_subpacket_annotation {
        struct el_subpacket_annotation *next;
        u16 class;
        u16 type;
        u16 revision;
        char *description;
        char **annotation;
};





I'm not sure how legal or anything this is, but interesting nonetheless...

Sridhar Dhanapalan: Twitter posts: 2014-08-18 to 2014-08-24

Mon, 2014-08-25 01:27

Chris Smart: Creating certs and keys for services using FreeIPA (Dogtag)

Sun, 2014-08-24 20:28

The default installation of FreeIPA includes the Dogtag certificate management system, a Certificate Authority for your network. It manages expiration of certificates and can automatically renew them. Any client machines on your network will trust the services you provide (you may need to import the IPA CA cert).
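
If your client machines don't already trust the IPA CA, importing the cert on a Fedora/RHEL-style machine looks something like this (a sketch: FreeIPA publishes its CA certificate at /ipa/config/ca.crt on the server):

[root@client ~]# curl -o /etc/pki/ca-trust/source/anchors/ipa-ca.crt http://ipa-server.test.lan/ipa/config/ca.crt

[root@client ~]# update-ca-trust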

There are a number of ways to make certificates. You can generate a certificate signing request or you can have Dogtag manage the whole process for you. You can also create individual cert and key files or put them into an NSS database. My preferred method is to use individual files and have Dogtag do the work for me.

If you so desire, you can join your servers to the realm in just the same manner as a desktop client. However, even if they are not joined to the realm you can still create certs for them! You will need to run a few additional steps though, namely creating DNS records and adding the machine manually.

Let’s create a certificate for a web server on www.test.lan (192.168.0.100) which has not joined our realm.

SSH onto your IPA server and get a kerberos ticket.

[user@machine ~]# ssh root@ipa-server.test.lan

[root@ipa-server ~]# kinit admin

If the host is not already in the realm, create DNS entries and add the host.

[root@ipa-server ~]# ipa dnsrecord-add test.lan www --a-rec 192.168.0.100

[root@ipa-server ~]# ipa dnsrecord-add 0.168.192.in-addr.arpa. 100 --ptr-rec www.test.lan.

[root@ipa-server ~]# ipa host-add www.test.lan

Add a web service for the www machine.

[root@ipa-server ~]# ipa service-add HTTP/www.test.lan

Only the target machine can create a certificate (IPA uses the host kerberos ticket) by default, so to be able to create the certificate on your IPA server you need to allow it to manage the web service for the www host.

[root@ipa-server ~]# ipa service-add-host --hosts=ipa-server.test.lan HTTP/www.test.lan

Now create the cert and key.

[root@ipa-server ~]# ipa-getcert request -r -f /etc/pki/tls/certs/www.test.lan.crt -k /etc/pki/tls/private/www.test.lan.key -N CN=www.test.lan -D www.test.lan -K HTTP/www.test.lan

Now copy that key and certificate to your web server host and configure apache as required.

[root@ipa-server ~]# rsync -P /etc/pki/tls/certs/www.test.lan.crt /etc/pki/tls/private/www.test.lan.key root@www.test.lan:
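
If the web server is Apache with mod_ssl, the relevant directives look something like this (a minimal sketch; note that the rsync above drops the files into root’s home directory on www.test.lan, so move them into the paths below, or adjust to taste):

<VirtualHost *:443>
    ServerName www.test.lan
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/www.test.lan.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.test.lan.key
</VirtualHost>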

You can also easily stop tracking certificates so that they aren’t renewed any more; first get the request id.

[root@ipa-server ~]# ipa-getcert list

Take note of the id for the certificate you want to delete.

[root@ipa-server ~]# getcert stop-tracking -i [request id]

A CRL (certificate revocation list) is automatically maintained and published on the IPA server at https://ipa-server.test.lan/ipa/crl/MasterCRL.bin
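
You can fetch and inspect the CRL with openssl to confirm it is being maintained (a quick sketch; add --cacert or similar to curl if the IPA CA isn’t yet in your local trust store):

[root@ipa-server ~]# curl -sO https://ipa-server.test.lan/ipa/crl/MasterCRL.bin

[root@ipa-server ~]# openssl crl -inform DER -in MasterCRL.bin -noout -text | head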

Andrew McDonnell: Raspberry Pi Virtual Machine Automation

Sat, 2014-08-23 21:27
Several months ago now I was doing some development for Raspberry Pi. I guess that shows how busy with life things have been. (I have a backlog of lots of things I would like to blog about that didn’t get blogged yet, pardon my grammar!) Now the Pi runs on an SD card and it […]

linux.conf.au News: Papers Committee weekend - who will be presenting at LCAuckland

Sat, 2014-08-23 20:28

This weekend is the Papers Committee weekend, and Steven (Ellis) is now on his way over to Sydney to join our revered Papers Committee for a fun-packed weekend deciding which of the many submitted presentations to choose for our conference next year.

It’s a very important job, crucial, even! I don't envy them, trying to foresee what is going to be at the top of everyone’s must-see list, predicting what will be trending in 6 months’ time, and what will have died a sad, lonely death or sputtered out after a brief burst of glory in the meantime.

Then there’s the programme... Who fits together? Who shouldn’t be opposite whom? And on it goes. It will be hard work! After speaking with the Chairs of the committee (Michael Davies and Michael Still) we've learned that this is traditionally a passionately fought process with each and every person focussed intently on ensuring that our delegates have access to the best presentations currently and soon-to-be available.

“The Michaels” know the conference and its audience, and the rest of the committee is made up of past organisers, some FOSS celebrities and past presenters - most of whom have done this job many times now. Steve has been sent with some strict instructions about the presentations our team wants to see, and about the format of the conference itself, which has some new, exciting ideas.

To those in the Papers Committee gathering together this weekend to make these important decisions - we wish you all a safe journey there and back again, and we say Stand Your Ground!

To those of you who have submitted a presentation we say "Good Luck - you are all wonderful in our eyes!"

All the best

The LCA 2015 team

David Rowe: Do Anti-Depressants work?

Sat, 2014-08-23 17:29

In the middle of 2013 I had a nasty bout of depression and was prescribed anti-depressant drugs. Although undiagnosed, I think I may have suffered low level depression for a few years, but had avoided anti-depressants and indeed other treatment for a few reasons:

  • I am a man, and men are bad at looking after their own health.
  • The stigma around mental health. It’s tough to face it and do something about it. Consider how you react to these two statements: “I broke my leg and took 6 months to recover” and “I broke my mind and took 6 months to recover”.
  • The opinion of people influential in my life at that time. My GP friend Michael presented a statistic that anti-depressants were only marginally better than placebos (75% versus 70%) in treating depression. I was also in a close relationship with a person who held an “all drugs are bad”, anti-western medicine mentality. At the time I lacked the confidence to make health choices that were right for me.

Combined, these factors cost me 18 months of rocky mental health.

When my health collapsed, the mental health care professionals recommended the combination of anti-depressants and counselling with a psychologist or psychiatrist. The good news is that this treatment, combined with a lot of hard work, and putting positive, supportive, relationships around me, is working. I came off the bottom quite quickly (a few months), and have continued to improve. I am currently weaning myself off the anti-depressants, and life is good, and getting better, as I “re-wire” my thought process.

That’s the difficult, personal bit out of the way. Let’s talk about anti-depressants and science.

Did Anti-deps help me?

Due to Michael’s statistic above (anti-deps only 5% better than placebo) I was left with lingering doubts about anti-depressants. Could I be fooling myself, using something that didn’t work? This was too much for the scientist in me, so I felt compelled to check the evidence myself!

Now, the fact that I “got better” is not good enough. I may have improved from the counselling alone. Or through the “natural history” of disease, just like we automatically heal in 1-2 weeks from a common cold.

The health care professionals I worked with are confident anti-depressants function as advertised, based on their training and years of experience. This has some weight, but the causes and effects in mental health are complex. Professionals can hold mistaken beliefs. Indeed a wise professional will adapt as medical science advances and old therapies are replaced by new ones. They are not immune to unconscious bias. So the views of professionals, even based on years of experience, are not proof.

Trust Me. I’m a Doctor

I am a “Dr”, but not a medical one. I have a PhD in Electronic Engineering. I don’t know much about medicine, but I do know something about research. In a PhD you create a tiny piece of new knowledge, something humankind didn’t know before. It’s hard, and takes years, and even then the “contribution” you make is usually minor and left to gather dust on a shelf in a university library.

But you do learn how to find out what is real and what is not. How to separate facts from bullshit. You learn about scientific rigour. You do that by performing “research and disappointment” for four years, finding out just how wrong you can be so many times before finally you get to the core of something real. You learn that what you want to believe, your opinion, means nothing when it gets tested against the laws of nature.

So with the help of Michael and a great (and very funny) book on how medical trials work called Snake Oil Science, I did a little research of my own.

Drilling into a few studies

What I was looking for were “quality” studies, which have been carefully designed to sort out what’s true from what’s not. So my approach was to look into a few studies that supported the negative hypothesis. Get beyond the headlines.

One high quality study with the widely presented conclusion “anti-deps useless for mild and moderate depression” was (JAMA 2010). This paper and its conclusion have been debunked here. Briefly, they used the results from 3 studies of just one SSRI (Paxil) and used that under-representation to draw impossibly broad conclusions.

Ben Goldacre is campaigning against publication bias: the tendency for journals to publish only positive results. This is a real problem and I support Ben’s work. Unfortunately, it also feeds alt-med conspiracy theories about big pharma.

Ben has a great TED Talk on the problem of publication bias in drug trials. To lend credibility he cites a journal paper (NEJM 358 Turner). Ben presents numbers from this paper that suggest anti-depressants don’t work, due to selective publishing of only positive trials.

Here are a couple of frames from Ben’s TED talk (at the 7:30 mark). Big pharma supplied the FDA with these results to get their nasty western meds approved:

[slide]

However here are the real results with all trials included:

[slide]

Looks like a damning case against anti-deps, and big pharma. Nope. I took the simple step of reading the paper, rather than accepting the argument from authority that comes from a physician quoting a journal paper in a TED talk. Here is a direct quote from the paper Ben cited:

“We wish to clarify that non-significance in a single trial does not necessarily indicate lack of efficacy. Each drug, when subjected to meta-analysis, was shown to be superior to placebo. On the other hand, the true magnitude of each drug’s superiority to placebo was less than a diligent literature review would indicate.”

Just to summarise: Every drug. Superior to a placebo. This means they work.

The paper continues. By averaging all the data the overall mean effect size over all studies (published and not, all drugs) was 32% over a placebo. That’s actually quite positive.

So while Ben’s argument of publication bias is valid, his dramatic implication that anti-deps don’t work is wrong, at least from this study.

Yes publication bias is a big problem and needs to be addressed. However science is at work, self correcting, and it’s good to see guys like Ben working on it. It’s a classic trick used by alt-med as well: just quote good results, and ignore the results that show the alt-med therapies to be ineffective. This is Bad Science.

However this doesn’t discredit science, and shouldn’t make us abandon high quality trials and fall back on even poorer science like anecdotes and personal experience.

Breathless Headlines

Take this article from CBC News: no references to clinical studies, some leading questions, and a few personal opinions. So it’s just a hypothesis, no more than that. A lack of understanding of the chemical functionality of a drug doesn’t invalidate its use; this isn’t the first time an effective drug’s function wasn’t well understood. For example, Paracetamol isn’t completely understood even today.

As usual, a little digging reveals a very different slant that makes the CBC article look misleading. The author of the book is quoted in Wikipedia:

“Whitaker acknowledges that psychiatric medications do sometimes work but believes that they must be used in a ‘selective, cautious manner’. It should be understood that they’re not fixing any chemical imbalances. And honestly, they should be used on a short-term basis.”

I am attracted to the short term approach, and it is the approach suggested by the mental health care professionals that has helped me. Like a bandage or cast, anti-deps can support one while other mental health repairs are going on.

In contrast, the CBC article (first para):

“But people are questioning whether these drugs are the appropriate treatment for depression, and if they could even be causing harm.”

Poor journalism and cherry picking.

My Conclusions

My little investigation is by no means comprehensive. However the high quality journal papers I’ve studied so far support the hypothesis that anti-deps work and debunk the “anti-depressants are not effective compared to placebo” argument to my satisfaction.

I would like to read more studies of the combination of psycho-therapy and SSRIs – if anyone has any references to high quality journal papers on these subjects please let me know. The mental health nurse that treated me last year suggested recovery was about “40% SSRIs + 60% therapy”. I can visualise this treatment as a couple of normal distribution curves overlapping, with the means added together to be your mental health.

Medicine and Engineering

I was initially aghast at some of the crappy science even I can pick up in these “journal” papers. “This would never happen in engineering” I thought. However, I bet some similar tricks are at play. There are pressures to “publish, patent” etc. that would encourage bad science there too. For example, signal processing papers rarely publish their source code, so it’s very hard to reproduce a competing algorithm; all you have is a few of the core equations. If I make a bug while simulating a competitor’s algorithm, it gives me the “right” answer: oh look, mine is better!

In my research: Some people using Codec 2 say it sounds bad and doesn’t work well for HF comms. Other people are saying it’s great and much better than the legacy analog technology. Huh? Well, I could average them out in a meta study and say “it’s about the same as analog”. Or use my internal bias and self esteem to simply conclude Codec 2 is awesome.

But what I am actually doing is saying “Hmm, that’s interesting – why can two groups of sensible people have the opposite results? Lets look into that”. Turns out different microphones make Codec 2 behave in different ways. This is leading me to investigate the effect of the input speech filtering. So through this apparent conflict we are learning more and improving Codec 2. What an awesome result!

I suspect it’s the same with anti-deps. Other factors are at play and we need better study design. Frustrating – we all want definitive answers. But no one said Science was easy. Just that it’s self correcting.

That’s why IFL Science.

Glen Turner: Raspberry Pi and 802.11 wireless (WiFi) networks

Fri, 2014-08-22 23:02

A note to readers

There are a many ways to configure wireless networking on Debian. Far too many. What is described here is the simplest option which uses the programs and configurations which ship in an unaltered Raspbian distribution. This lets people bring up wireless networking to their home access point with a minimum of fuss. More advanced configurations may be more easily done with other tools, such as NetworkManager. Now back to your originally programmed channel…

The RaspberryPi does not come with wireless onboard, but it's simple enough to buy a small USB wireless dongle. Element14 sell them for A$9.31. It's unlikely you'll see them in shops for such a low price, so it is well worth ordering a WiFi dongle with your RPi.

Raspbian already comes with the necessary software installed. Let's say our home wireless network has an SSID of example and a pre-shared key (aka password) of TGAB…Klsh. Edit /etc/wpa_supplicant/wpa_supplicant.conf. You will see some existing lines:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

Now add some lines describing your wireless network:

network={
    ssid="example"
    psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
    scan_ssid=1
}

The parameter scan_ssid=1 allows the WiFi dongle to connect with a wireless access point which does not do SSID broadcasts.

Now plug the dongle in. Check dmesg that udev installed the dongle's device driver:

$ dmesg
[    3.873335] usb 1-1.4: new high-speed USB device number 5 using dwc_otg
[    4.005018] usb 1-1.4: New USB device found, idVendor=0bda, idProduct=8176
[    4.030075] usb 1-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    4.050034] usb 1-1.4: Product: 802.11n WLAN Adapter
[    4.060398] usb 1-1.4: Manufacturer: Realtek
[    4.069904] usb 1-1.4: SerialNumber: 000000000001
[    8.586604] usbcore: registered new interface driver rtl8192cu

A new interface will have appeared:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

IPv4's DHCP should run and your interface should be populated with addresses:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

If you use multiple wireless networks, then add additional network={…} stanzas to wpa_supplicant.conf. wpa_supplicant will choose the correct stanza based on the SSIDs present on the wireless network.
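
For example, a second stanza for a work network, with the standard priority option used to prefer it when both networks are visible (the SSID and key here are illustrative; higher priority wins):

network={
    ssid="work"
    psk="another-long-random-pre-shared-key"
    scan_ssid=1
    priority=10
}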

IPv6

If you are using IPv6 (enabled by deleting /etc/modprobe.d/ipv6.conf) then IPv6's zeroconf and SLAAC will run and you will also get an IPv6 link-local address, and maybe a global address if your network has IPv6 connectivity off the subnet.

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
          inet6 addr: 2001:db8:abcd:1234:211:22ff:fe33:4455/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

Commonly occurring issues

If the interface is not populated with addresses then try to restart the interface. You will need to do this if you plugged the dongle in prior to editing wpa_supplicant.conf.

$ sudo ifdown wlan0
$ sudo ifup wlan0

If you still have trouble then look at the messages in /var/log/daemon.log, especially those from wpa_supplicant. Also check dmesg, ensuring that the device driver isn't printing messages indicating misbehaviour.
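
wpa_cli, which ships in the same wpasupplicant package, can also report the supplicant's state directly. The output below is abridged and illustrative; the line to look for is wpa_state=COMPLETED, meaning association and key negotiation succeeded:

$ wpa_cli -i wlan0 status
wpa_state=COMPLETED
ssid=example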

Also check that the default route points to where you expect; that is, the default route line says default via … dev wlan0.

$ ip route show
default via 192.168.255.254 dev wlan0
192.168.255.0/24 dev wlan0  proto kernel  scope link  src 192.168.255.1
$ ip -6 route show
2001:db8:abcd:1234::/64 dev wlan0  proto kernel  metric 256  expires 10000sec
fe80::/64 dev wlan0  proto kernel  metric 256
default via fe80::1 dev wlan0  proto ra  metric 1024  expires 1000sec

If you have edited /etc/network/interfaces then you may need to restore these lines to that file:

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

Security

As this example shows, the pre-shared key should be long — up to 63 characters — and very random. The entire strength of WPA2 relies on the length and randomness of the key. If your current key is neither of these then you might want to generate a new key and configure it into the access point.

An easy way to generate a key is:

$ sudo apt-get install pwgen
$ pwgen -s 63 1
TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh

This works even better if you use the RaspberryPi's hardware random number generator.
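
For example (a sketch: bcm2708-rng is the module name on current Raspbian kernels, but check yours; 48 random bytes base64-encode to 64 characters, trimmed here to the 63-character maximum):

$ sudo modprobe bcm2708-rng
$ sudo dd if=/dev/hwrng bs=48 count=1 2>/dev/null | base64 | cut -c1-63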

There is only one secure wireless protocol which you can use at home: Wi-Fi Protected Access version two with pre-shared key, known as “WPA2-PSK” or “WPA2 Personal”. The only secure encryption is CCMP -- this uses the Advanced Encryption Standard and is sometimes named “AES” in access point configurations. The only secure authentication algorithm for use with WPA2-PSK is OPEN: this doesn't mean “open access point for use by all, so no authentication” but the reverse: “Open Systems Authentication”.

You can configure wpa_supplicant.conf to insist on these secure options as the only technology it will use with your home network.

network={
    ssid="example"
    psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
    scan_ssid=1
    # Prevent backsliding into insecure protocols
    key_mgmt=WPA-PSK
    auth_alg=OPEN
    proto=WPA2
    group=CCMP
    pairwise=CCMP
}

Andrew Pollock: [life] Day 205: Rainy day play, a Brazilian Jiu-Jitsu refresher

Fri, 2014-08-22 21:25

I had grand plans of doing a 10 km run in the Minnippi Parklands, pushing Zoe in the stroller, followed by some bike riding practice for Zoe and a picnic lunch. Instead, it rained. We had a really nice day, nevertheless.

Zoe slept well again, and I woke up pretty early and was already well and truly awake when she got out of bed, so as a result we were ready to hit the road reasonably early. Since it was raining, I thought a visit to Lollipops Play Cafe would be a fun treat.

We got there about 10 minutes before the play cafe opened, so after some puddle stomping, we popped into Bunnings to get a few things, and then went to Lollipops.

Unfortunately Jason was tied up, so Megan couldn't join us. I did run into Mel, a mother from Kindergarten, who was there with her son, Matthew, and daughter. So instead of practising my knots or doing my real estate licence assessment, I ended up having a chat with her, which was nice. She mentioned that she had some stuff to try and do in the afternoon, so I asked if Matthew wanted to come over for a play date for a couple of hours. He was keen for that.

So we went home, and I made some lunch for us, and then Mel dropped Matthew off at around 1pm, and they had a great time playing. I think first up they played a game of hide and seek, and then my practice rope got used for quite a bit of tug-o-war, and then we did some craft. After that I busted out the kinetic sand, and that kept them occupied for ages. They also just had a bit of a play with all the boxes on the balcony. It was a really nice play session. I like it when boys come over for a play date, as the dynamic is totally different, and Zoe and Matthew played really well together.

I dropped Matthew back home on the way to Zoe's Brazilian Jiu Jitsu class. Infinity Martial Arts was running a "please come back" promotion, where you could have two free lessons and a new uniform, so I figured, why not? I'd like to give Zoe the choice of Brazilian Jiu Jitsu again or gymnastics for Term 4, and this seemed like a good way of refreshing her memory as to what Brazilian Jiu Jitsu was. I'm hoping that Tumbletastics will do a free lesson in the school holidays as well, so Zoe will be able to make a reasonably informed choice.

Zoe's now in the "4 to 7" age group for BJJ classes, and there was just one other boy in the class today. She did really well, and the new black Gi looks really good on her. She also had the same teacher, Patrick, who she's really fond of, so it was a good afternoon all round. We stayed and watched a little bit of the 7 to 11 age group class that followed before heading back home.

We'd barely gotten home and Sarah arrived to pick up Zoe, so the day went quite quickly really, without being too hectic.

Michael Still: Juno nova mid-cycle meetup summary: conclusion

Fri, 2014-08-22 18:27
There's been a lot of content in this series about the Juno Nova mid-cycle meetup, so thanks to those who followed along with me. I've also received a lot of positive feedback about the posts, so I am thinking the exercise is worthwhile, and will try to be more organized for the next mid-cycle (and therefore get these posts out earlier). To recap quickly, here's what was covered in the series:



The first post in the series covered social issues: things like how we organized the mid-cycle meetup, how we should address core reviewer burnout, and the current state of play of the Juno release. Bug management has been an ongoing issue for Nova for a while, so we talked about bug management. We are making progress on this issue, but more needs to be done and it's going to take a lot of help for everyone to get there. There was also discussion about proposals on how to handle review workload in the Kilo release, although nothing has been finalized yet.



The second post covered the current state of play for containers in Nova, as well as our future direction. Unexpectedly, this was by far the most read post in the series if Google Analytics is to be believed. There is clear interest in support for containers in Nova. I expect this to be a hot topic at the Paris summit as well. Another new feature we're working on is the Ironic driver merge into Nova. This is progressing well, and we hope to have it fully merged by the end of the Juno release cycle.



At a superficial level the post about DB2 support in Nova is a simple tale of IBM's desire to have people use their database. However, to the skilled observer it's deeper than that -- it's a tale of love and loss, as well as a discussion of how to safely move our schema forward without causing undue pain for our large deployments. We also covered the state of cells support in Nova, with the main issue being that we really need cells to be feature complete. Hopefully people are working on a plan for this now. Another internal refactoring is the current scheduler work, which is important because it positions us for the future.



We also discussed the next gen Nova API, and talked through the proposed upgrade path for the transition from nova-network to neutron.



For those who are curious, there are 8,259 words (not that I am counting or anything) in this post series including this summary post. I estimate it took me about four working days to write (ED: and about two days for his trained team of technical writers to edit into mostly coherent English). I would love to get your feedback on whether you found the series useful, as it's a pretty big investment in time.



Tags for this post: openstack juno nova mid-cycle summary

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Russell Coker: Men Commenting on Women’s Issues

Fri, 2014-08-22 13:26

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts’o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it’s insensitive to needlessly debate the statistics.

Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn’t end when the conference ends (it can continue on mailing lists etc), that good people shouldn’t do nothing when bad things happen, and that free speech doesn’t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction.

Craig Sanders wrote a very misguided post about the Ted Ts’o situation [3]. One of the many things wrong with his post is his statement “I’m particularly disgusted by the men who intervene way too early – without an explicit invitation or request for help or a clear need such as an immediate threat of violence – in womens’ issues“.

I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing.

Asking for Help

I don’t believe that members of minority groups should have to ask for help. Asking isn’t easy, having someone spontaneously offer help because it’s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named “Women Don’t Ask” which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named “Women Don’t Ask” is an indication that we shouldn’t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as “complaining”, so I think we should consider a “complaint” from a woman as a direct request to do something.

The geek feminism blog has an article titled “How To Exclude Women Without Really Trying” which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on “Allies” which includes “Even a blog post helps” [6]. It seems clear from public web sites run by women that women really want men to be involved.

Finally when I get blog comments and private email from women who thank me for my posts I take it as an implied request to do more of the same.

One thing that we really don’t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan, one is that being saved from a violent situation isn’t a fun experience, the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene.

Men Don’t Listen to Women

Rebecca Solnit wrote an article about being ignored by men titled “Men Explain Things to Me” [7]. When discussing women’s issues the term “Mansplaining” is often used for that sort of thing, the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women’s issues are the ones who are least likely to listen to women. This implies that other men have to teach them.

Craig says that women need “space to discover and practice their own strength and their own voices“. I think that the best way to achieve that goal is to listen when women speak. Of course that doesn’t preclude speaking as well, just listen first, listen carefully, and listen more than you speak.

Craig claims that when men like me and Matthew Garrett comment on such issues we are making “women’s spaces more comfortable, more palatable, for men“. From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don’t like. Matthew has covered both of those so it seems that he’s doing a good job of making men uncomfortable – I think that this is a good thing, a discussion that is “comfortable and palatable” for the people in power is not going to be any good for the people who aren’t in power.

The Voting Aspect

It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try and preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA2011 most men who comment regard it as one person (either Valerie Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference David Zanetti was the first person to use the term “apologist for rapists” in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already.

These same patterns always happen so making a comment in support makes a difference. It doesn’t have to be insightful, long, or well written, merely “I agree” and a link to a web page will help. Note that a blog post is much better than a comment in this regard, comments are much like conversation while a blog post is a stronger commitment to a position.

I don’t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn’t going to be considered much by most people.

The Cost of Commenting

The Internet is a hostile environment, when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things and the threats are less credible. I don’t believe that any of the men who have threatened me have the ability to carry out their threats but I expect that many women who receive such threats will consider them to be credible.

The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men that isn’t helping women in any way. It’s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost.

It’s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones.

Not Being Patronising

Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I’ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I’m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don’t).

It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it’s likely to be taken badly. It’s not just that women don’t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also there’s a common trend of men offering simplistic advice on how to solve problems, one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago.

Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can’t easily understand even when I spend some time considering the matter. So there will be things which don’t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn’t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me.

I don’t think it’s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. I suggest a pre-requisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I’ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won’t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn’t enough, you need to have conversations with multiple people so that they can point out the things you don’t understand.

This isn’t any sort of comprehensive list of ways to avoid being patronising, but it’s a few things which seem like common mistakes.

Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women’s issues.

Related posts:

  1. A Lack of Understanding of Nuclear Issues Ben Fowler writes about the issues related to nuclear power...

Michael Still: Juno nova mid-cycle meetup summary: the next generation Nova API

Fri, 2014-08-22 11:27
This is the final post in my series covering the highlights from the Juno Nova mid-cycle meetup. In this post I will cover our next generation API, which used to be called the v3 API but is largely now referred to as the v2.1 API. Getting to this point has been one of the more painful processes I think I've ever seen in Nova's development history, and I think we've learnt some important things about how large distributed projects operate along the way. My hope is that we remember these lessons next time we hit something as contentious as our API re-write has been.



Now on to the API itself. It started out as an attempt to improve our current API to be more maintainable and less confusing to our users. We deliberately decided that we would not focus on adding features, but instead attempt to reduce as much technical debt as possible. This development effort went on for about a year before we realized we'd made a mistake. The mistake we made is that we assumed that our users would agree it was trivial to move to a new API, and that they'd do that even if there weren't compelling new features, which it turned out was entirely incorrect.



I want to make it clear that this wasn't a mistake on the part of the v3 API team. They implemented what the technical leadership of Nova at the time asked for, and were very surprised when we discovered our mistake. We've now spent over a release cycle trying to recover from that mistake as gracefully as possible, but the upside is that the API we will be delivering is significantly more future proof than what we have in the current v2 API.



At the Atlanta Juno summit, it was agreed that the v3 API would never ship in its current form, and that what we would instead do is provide a v2.1 API. This API would be 99% compatible with the current v2 API, with the incompatible things being stuff like if you pass a malformed parameter to the API we will now tell you instead of silently ignoring it, which we call 'input validation'. The other thing we are going to add in the v2.1 API is a system of 'micro-versions', which allow a client to specify what version of the API it understands, and for the server to gracefully degrade to older versions if required.



This micro-version system is important, because the next step is to then start adding the v3 cleanups and fixes into the v2.1 API, but as a series of micro-versions. That way we can drag the majority of our users with us into a better future, without abandoning users of older API versions. I should note at this point that the mechanics for deciding what the minimum micro-version a version of Nova will support are largely undefined at the moment. My instinct is that we will tie to stable release versions in some way; if your client dates back to a release of Nova that we no longer support, then we might expect you to upgrade. However, that hasn't been debated yet, so don't take my thoughts on that as rigid truth.



Frustratingly, the intent of the v2.1 API has been agreed and unchanged since the Atlanta summit, yet we're late in the Juno release and most of the work isn't done yet. This is because we got bogged down in the mechanics of how micro-versions will work, and how the translation for older API versions will work inside the Nova code later on. We finally unblocked this at the mid-cycle meetup, which means this work can finally progress again.



The main concern that we needed to resolve at the mid-cycle was the belief that if the v2.1 API was implemented as a series of translations on top of the v3 code, then the translation layer would be quite thick and complicated. This raises issues of maintainability, as well as the amount of code we need to understand. The API team has now agreed to produce an API implementation that is just the v2.1 functionality, and will then layer things on top of that. This is actually invisible to users of the API, but it leaves us with an implementation where changes after v2.1 are additive, which should be easier to maintain.



One of the other changes in the original v3 code is that we stopped proxying functionality for Neutron, Cinder and Glance. With the decision to implement a v2.1 API instead, we will need to rebuild that proxying implementation. To unblock v2.1, and based on advice from the HP and Rackspace public cloud teams, we have decided to delay implementing these proxies. So, the first version of the v2.1 API we ship will not have proxies, but later versions will add them in. The current v2 API implementation will not be removed until all the proxies have been added to v2.1. This is prompted by the belief that many advanced API users don't use the Nova API proxies, and therefore could move to v2.1 without them being implemented.



Finally, I want to thank the Nova API team, especially Chris Yeoh and Kenichi Oomichi for their patience with us while we have worked through these complicated issues. It's much appreciated, and I find them a consistent pleasure to work with.



That brings us to the end of my summary of the Nova Juno mid-cycle meetup. I'll write up a quick summary post that ties all of the posts together, but apart from that this series is now finished. Thanks for following along.



Tags for this post: openstack juno nova mid-cycle summary api v3 v2.1

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues




Michael Still: Don't Tell Mum I Work On The Rigs

Fri, 2014-08-22 08:27

[book cover]

ISBN: 1741146984

LibraryThing

I read this book while on a flight a few weeks ago. It's surprisingly readable and relatively short -- you can knock it over in a single long haul flight. The book covers the memoirs of an oil rig worker, from childhood right through to middle age. That's probably the biggest weakness of the book: it just kind of stops when the writer reaches the present day. I felt there wasn't really a conclusion, which was disappointing.

An interesting fun read however.



Tags for this post: book paul_carter oil rig memoir

Related posts: Extreme Machines: Eirik Raude; New Orleans and sea level; Kern County oil wells on I-5; What is the point that people's morals evaporate?

Andrew Pollock: [life] Day 204: Workshops Rail Museum

Thu, 2014-08-21 22:25

Zoe had a fabulous night's sleep and so did I. Despite that, I felt a bit tired today. Might have been all the unusual exercise yesterday.

After a leisurely start, we headed off in the direction of the Workshops Rail Museum for the day. We dropped past OfficeWorks on the way to return something I got months ago and didn't like, and I used the opportunity to grab a couple of cute little A5-sized clipboards. I'm going to keep one in the car and one in my Dad bag, so Zoe can doodle when we're on the go. I also discovered that one can buy reams of A5 paper.

We arrived at the Workshops, which were pretty quiet, except for a school excursion. Apparently they're also filming a movie there somewhere at the moment too (not in the museum part).

Despite Zoe's uninterrupted night's sleep, she fell asleep in the car on the way there, which was highly unusual. I let her sleep for a while in the car once we got there, before I woke her up. She woke up a bit grumpy, but once she realised where we were, she was very excited.

We had a good time doing the usual things, and then had a late lunch, and a brief return to the museum before heading over to Kim's place before she had to leave to pick up Sarah from school. Zoe and I looked after Tom and played with his massive pile of glitter play dough until Kim got back with Sarah.

Zoe and Sarah had their usual fabulous time together for about an hour before we had to head home. I'd had dinner going in the slow cooker, so it was nice and easy to get dinner on the table once we got home.

Despite her nap, Zoe went to bed easily. Now I have to try and convince Linux to properly print two-up on A4 paper. The expected methods aren't working for me.
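
For the record, the sort of invocation that should do it under CUPS looks like the following; number-up and media are standard CUPS options, though whether a given printer driver honours them is apparently another matter.

$ lp -o number-up=2 -o media=A4 document.pdf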

David Rowe: SM1000 Part 3 – Rx Working

Thu, 2014-08-21 14:29

After an hour of messing about it turns out a bad solder joint meant U6 wasn’t connected to the ADC1 pin on the STM32F4 (schematic). This was probably the source of “noise” in some of my earlier unit tests. I found it useful to write a program to connect the ADC1 input to the DAC2 output (loudspeaker) and “listen” to the noise. Software signal tracer. Note to self: I must add that sort of analog loopback as a SM1000 menu option. I “cooked” the bad joint for 10 seconds with the soldering iron and some fresh flux and the rx side burst into life.

Here’s a video walk through of the FreeDV Rx demo:

I am really excited by the “analog” feel to the SM1000. Power up, and “off air” speech is coming out of the speaker a few hundred milliseconds later! Benefits of no operating system (so no boot delay) and the low latency, fast sync FreeDV design that veterans like Mel Whitten K0PFX have designed after years of pioneering HF DV.

The SM1000 latency is significantly lower than the PC version of FreeDV. It’s easy to get “hard” real time performance without an operating system, so it’s safe to use nice small audio buffers. Although, to be fair, optimising latency in x86 FreeDV is not something I have explored to date.

The top level of the receive code is pretty simple:



/* ADC1 is the demod in signal from the radio rx, DAC2 is the SM1000 speaker */

nin = freedv_nin(f);
nout = nin;
f->total_bit_errors = 0;

if (adc1_read(&adc16k[FDMDV_OS_TAPS_16K], 2*nin) == 0) {
    GPIOE->ODR = (1 << 3);
    fdmdv_16_to_8_short(adc8k, &adc16k[FDMDV_OS_TAPS_16K], nin);
    nout = freedv_rx(f, &dac8k[FDMDV_OS_TAPS_8K], adc8k);
    //for(i=0; i<FREEDV_NSAMPLES; i++)
    //   dac8k[FDMDV_OS_TAPS_8K+i] = adc8k[i];
    fdmdv_8_to_16_short(dac16k, &dac8k[FDMDV_OS_TAPS_8K], nout);
    dac2_write(dac16k, 2*nout);
    //led_ptt(0); led_rt(f->fdmdv_stats.sync); led_err(f->total_bit_errors);
    GPIOE->ODR &= ~(1 << 3);
}



We read “nin” modem samples from the ADC, convert the sample rate from 16 to 8 kHz, then call freedv_rx(). We then re-sample the “nout” output decoded speech samples to 16 kHz and send them to the DAC, where they are played out of the loudspeaker.

The commented out “for” loop is the analog loopback code I used to “listen” to the ADC1 noise. There is also some commented out code for blinking LEDs (e.g. if we have sync, bit errors) that I haven’t tested yet (indeed the LEDs haven’t been loaded onto the PCB). I like to hit the highest risk tasks on the check list first.

The “GPIOE->ODR” is the GPIO Port E output data register, that’s the code to take the TP8 line high and low for measuring the real time CPU load on the oscilloscope.

Running the ADC and DAC at 16 kHz means I can get away without analog anti-aliasing or reconstruction filters. I figure the SSB radio’s filtering can take care of that.

OK. Time to load up the switches and LEDs and get the SM1000 switching between Tx and Rx via the PTT button.

I used this line to compress the 250MB monster 1080p video from my phone to an 8MB file that was fast to upload to YouTube:



david@bear:~/Desktop$ ffmpeg -i VID_20140821_113318.mp4 -ab 56k -ar 22050 -b 300k -r 15 -s 480x360 VID_20140821_113318.flv