Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

linux.conf.au News: Papers Committee weekend - who will be presenting at LCAuckland

Sat, 2014-08-23 19:28

This weekend is the Papers Committee weekend, and Steven (Ellis) is now on his way over to Sydney to join our revered Papers Committee for a fun-packed weekend deciding which of the many submitted presentations to choose for our conference next year.

It’s a very important job, crucial, even! I don't envy them, trying to foresee what is going to be at the top of everyone’s must-see list, predicting what will be trending in six months’ time, and what will have died a sad, lonely death or sputtered out after a brief burst of glory in the meantime.

Then there’s the programme... Who fits together? Who shouldn’t be opposite whom? And on it goes. It will be hard work! After speaking with the Chairs of the committee (Michael Davies and Michael Still) we've learned that this is traditionally a passionately fought process with each and every person focussed intently on ensuring that our delegates have access to the best presentations currently and soon-to-be available.

“The Michaels” know the conference and its audience, and the rest of the committee is made up of past organisers, some FOSS celebrities and past presenters - most of whom have done this job many times now. Steve has been sent with some strict instructions about the presentations our team wants to see, and about the format of the conference itself, which has some new, exciting ideas.

To those in the Papers Committee gathering together this weekend to make these important decisions - we wish you all a safe journey there and back again, and we say Stand Your Ground!

To those of you who have submitted a presentation we say "Good Luck - you are all wonderful in our eyes!"

All the best

The LCA 2015 team

David Rowe: Do Anti-Depressants work?

Sat, 2014-08-23 16:29

In the middle of 2013 I had a nasty bout of depression and was prescribed anti-depressant drugs. Although undiagnosed, I think I may have suffered low-level depression for a few years, but had avoided anti-depressants and indeed other treatment for a few reasons:

  • I am a man, and men are bad at looking after their own health.
  • The stigma around mental health. It’s tough to face it and do something about it. Consider how you react to these two statements: “I broke my leg and took 6 months to recover” and “I broke my mind and took 6 months to recover”.
  • The opinion of people influential in my life at that time. My GP friend Michael presented a statistic that anti-depressants were only marginally better than placebos (75% versus 70%) in treating depression. I was also in a close relationship with a person who held an “all drugs are bad”, anti-western medicine mentality. At the time I lacked the confidence to make health choices that were right for me.

Combined, these factors cost me 18 months of rocky mental health.

When my health collapsed the mental health care professionals recommended the combination of anti-depressants and counselling with a psychologist or psychiatrist. The good news is that this treatment, combined with a lot of hard work, and putting positive, supportive relationships around me, is working. I came off the bottom quite quickly (a few months), and have continued to improve. I am currently weaning myself off the anti-depressants, and life is good, and getting better, as I “re-wire” my thought process.

That’s the difficult, personal bit out of the way. Let’s talk about anti-depressants and science.

Did Anti-deps help me?

Due to Michael’s statistic above (anti-deps only 5% better than placebo) I was left with lingering doubts about anti-depressants. Could I be fooling myself, using something that didn’t work? This was too much for the scientist in me, so I felt compelled to check the evidence myself!

Now, the fact that I “got better” is not good enough. I may have improved from the counselling alone. Or through the “natural history” of disease, just like we automatically heal in 1-2 weeks from a common cold.

The health care professionals I worked with are confident anti-depressants function as advertised, based on their training and years of experience. This has some weight, but the causes and effects in mental health are complex. Professionals can hold mistaken beliefs. Indeed a wise professional will adapt as medical science advances and old therapies are replaced by new ones. They are not immune to unconscious bias. So the views of professionals, even based on years of experience, are not proof.

Trust Me. I’m a Doctor

I am a “Dr”, but not a medical one. I have a PhD in Electronic Engineering. I don’t know much about medicine, but I do know something about research. In a PhD you create a tiny piece of new knowledge, something humankind didn’t know before. It’s hard, and takes years, and even then the “contribution” you make is usually minor and left to gather dust on a shelf in a university library.

But you do learn how to find out what is real and what is not. How to separate facts from bullshit. You learn about scientific rigour. You do that by performing “research and disappointment” for four years, and finding out just how wrong you can be so many times before finally you get to the core of something real. You learn that what you want to believe, your opinion, means nothing when it gets tested against the laws of nature.

So with the help of Michael and a great (and very funny) book on how medical trials work called Snake Oil Science, I did a little research of my own.

Drilling into a few studies

What I was looking for were “quality” studies, which have been carefully designed to sort out what’s true from what’s not. So my approach was to look into a few studies that supported the negative hypothesis. Get beyond the headlines.

One high quality study with the widely presented conclusion “anti-deps useless for mild and moderate depression” was (JAMA 2010). This paper and its conclusion have been debunked here. Briefly, the authors took the results from 3 studies of just one SSRI (Paxil) and used that under-representation to draw impossibly broad conclusions.

Ben Goldacre is campaigning against publication bias. This is the tendency for journals only to publish positive results. This is a real problem and I support Ben’s work. Unfortunately, it also feeds alt-med conspiracy theories about big pharma.

Ben has a great TED Talk on the problem of publication bias in drug trials. To lend credibility he cites a journal paper (NEJM 358 Turner). Ben presents numbers from this paper that suggest anti-depressants don’t work, due to selective publishing of only positive trials.

Here are a couple of frames from Ben’s TED talk (at the 7:30 mark). Big pharma supplied the FDA with these results to get their nasty western meds approved:

However here are the real results with all trials included:

Looks like a damning case against anti-deps, and big pharma. Nope. I took the simple step of reading the paper, rather than accepting the argument from authority that comes from a physician quoting a journal paper in a TED talk. Here is a direct quote from the paper Ben cited:

“We wish to clarify that non-significance in a single trial does not necessarily indicate lack of efficacy. Each drug, when subjected to meta-analysis, was shown to be superior to placebo. On the other hand, the true magnitude of each drug’s superiority to placebo was less than a diligent literature review would indicate.”

Just to summarise: Every drug. Superior to a placebo. This means they work.

The paper continues. Averaging all the data, the overall mean effect size across all studies (published and not, all drugs) was 32% over a placebo. That’s actually quite positive.
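For the record, here is how such an average is typically formed: a weighted mean of the per-trial effect sizes, with the weights given by inverse variance so that large, precise trials count for more. This is the generic fixed-effect meta-analysis formula, not necessarily the exact method used in the paper:

\[
\bar{g} = \frac{\sum_i w_i \, g_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(g_i)}
\]

where \(g_i\) is the effect size measured in trial \(i\).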

So while Ben’s argument of publication bias is valid, his dramatic implication that anti-deps don’t work is wrong, at least from this study.

Yes publication bias is a big problem and needs to be addressed. However science is at work, self-correcting, and it’s good to see guys like Ben working on it. Selective reporting is a classic trick used by alt-med as well: just quote the good results, and ignore the results that show the alt-med therapies to be ineffective. This is Bad Science.

However this doesn’t discredit science, and shouldn’t make us abandon high quality trials and fall back on even poorer science like anecdotes and personal experience.

Breathless Headlines

Consider this article from CBC News: no references to clinical studies, some leading questions, and a few personal opinions. So it’s just a hypothesis – but no more than that. A lack of understanding of the chemical functionality of a drug doesn’t invalidate its use. This isn’t the first time an effective drug’s function wasn’t well understood. For example, Paracetamol isn’t completely understood even today.

As usual, a little digging reveals a very different slant that makes the CBC article look misleading. The book’s author, Robert Whitaker, is quoted in Wikipedia:

“Whitaker acknowledges that psychiatric medications do sometimes work but believes that they must be used in a ‘selective, cautious manner’. It should be understood that they’re not fixing any chemical imbalances. And honestly, they should be used on a short-term basis.”

I am attracted to the short term approach, and it is the approach suggested by the mental health care professionals that has helped me. Like a bandage or cast, anti-deps can support one while other mental health repairs are going on.

In contrast, the CBC article (first para):

“But people are questioning whether these drugs are the appropriate treatment for depression, and if they could even be causing harm.”

Poor journalism and cherry picking.

My Conclusions

My little investigation is by no means comprehensive. However the high quality journal papers I’ve studied so far support the hypothesis that anti-deps work and debunk the “anti-depressants are not effective compared to placebo” argument to my satisfaction.

I would like to read more studies of the combination of psycho-therapy and SSRIs – if anyone has any references to high quality journal papers on these subjects please let me know. The mental health nurse who treated me last year suggested recovery was about “40% SSRIs + 60% therapy”. I can visualise this treatment as a couple of overlapping normal distribution curves, with the means adding together to give your mental health.
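One way to formalise that picture (my sketch only, not a model the nurse gave me): if the two contributions were independent and roughly normal, they would combine as

\[
X_{\mathrm{SSRI}} \sim \mathcal{N}(\mu_1, \sigma_1^2), \quad
X_{\mathrm{therapy}} \sim \mathcal{N}(\mu_2, \sigma_2^2), \quad
X_{\mathrm{combined}} \sim \mathcal{N}(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2)
\]

with \(\mu_1 : \mu_2\) in roughly the 40:60 ratio suggested above.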

Medicine and Engineering

I was initially aghast at some of the crappy science even I can pick up in these “journal” papers. “This would never happen in engineering” I thought. However I bet some similar tricks are at play. There are pressures to “publish, patent” etc. that would encourage bad science there too. For example signal processing papers rarely publish their source code, so it’s very hard to reproduce a competing algorithm. All you have is a few of the core equations. If I make a bug while simulating a competitor’s algorithm, it gives me the “right” answer – oh look, mine is better!

In my research: Some people using Codec 2 say it sounds bad and doesn’t work well for HF comms. Other people are saying it’s great and much better than the legacy analog technology. Huh? Well, I could average them out in a meta study and say “it’s about the same as analog”. Or use my internal bias and self esteem to simply conclude Codec 2 is awesome.

But what I am actually doing is saying “Hmm, that’s interesting – why can two groups of sensible people have opposite results? Let’s look into that”. It turns out different microphones make Codec 2 behave in different ways. This is leading me to investigate the effect of the input speech filtering. So through this apparent conflict we are learning more and improving Codec 2. What an awesome result!

I suspect it’s the same with anti-deps. Other factors are at play and we need better study design. Frustrating – we all want definitive answers. But no one said Science was easy. Just that it’s self correcting.

That’s why IFL Science.

Glen Turner: Raspberry Pi and 802.11 wireless (WiFi) networks

Fri, 2014-08-22 22:02

A note to readers

There are many ways to configure wireless networking on Debian. Far too many. What is described here is the simplest option, using the programs and configurations which ship in an unaltered Raspbian distribution. This lets people bring up wireless networking to their home access point with a minimum of fuss. More advanced configurations may be more easily done with other tools, such as NetworkManager. Now back to your originally programmed channel…

The RaspberryPi does not come with wireless onboard. But it's simple enough to buy a small USB wireless dongle. Element14 sell them for A$9.31. It's unlikely you'll see them in shops for such a low price, so it is well worth ordering a WiFi dongle with your RPi.

Raspbian already comes with the necessary software installed. Let's say our home wireless network has a SSID of example and a pre-shared key (aka password) of TGAB…Klsh. Edit /etc/wpa_supplicant/wpa_supplicant.conf. You will see some existing lines:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

Now add some lines describing your wireless network:

network={
    ssid="example"
    psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
    scan_ssid=1
}

The parameter scan_ssid=1 allows the WiFi dongle to connect with a wireless access point which does not do SSID broadcasts.

Now plug the dongle in. Check dmesg that udev installed the dongle's device driver:

$ dmesg
[    3.873335] usb 1-1.4: new high-speed USB device number 5 using dwc_otg
[    4.005018] usb 1-1.4: New USB device found, idVendor=0bda, idProduct=8176
[    4.030075] usb 1-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    4.050034] usb 1-1.4: Product: 802.11n WLAN Adapter
[    4.060398] usb 1-1.4: Manufacturer: Realtek
[    4.069904] usb 1-1.4: SerialNumber: 000000000001
[    8.586604] usbcore: registered new interface driver rtl8192cu

A new interface will have appeared:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

IPv4's DHCP should run and your interface should be populated with addresses:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

If you use multiple wireless networks, then add additional network={…} stanzas to wpa_supplicant.conf. wpa_supplicant will choose the correct stanza based on the SSIDs present on the wireless network.
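For example, here is a minimal sketch of a two-network configuration (the second SSID and key are made up for illustration). The optional priority parameter tells wpa_supplicant which network to prefer when more than one is in range; higher values win:

network={
    ssid="example"
    psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
    scan_ssid=1
    priority=2
}
network={
    ssid="office-example"
    psk="another-long-random-pre-shared-key-goes-here"
    priority=1
}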

IPv6

If you are using IPv6 (by deleting /etc/modprobe.d/ipv6.conf) then IPv6's zeroconf and SLAAC will run and you will also get an IPv6 link-local address, and maybe a global address if your network has IPv6 connectivity off the subnet.

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
          inet6 addr: 2001:db8:abcd:1234:211:22ff:fe33:4455/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

Commonly occurring issues

If the interface is not populated with addresses then try to restart the interface. You will need to do this if you plugged the dongle in prior to editing wpa_supplicant.conf.

$ sudo ifdown wlan0
$ sudo ifup wlan0

If you still have trouble then look at the messages in /var/log/daemon.log, especially those from wpa_supplicant. Also check dmesg, ensuring that the device driver isn't printing messages indicating misbehaviour.

Also check that the default route points to where you expect; that is, the default route line says default via … dev wlan0.

$ ip route show
default via 192.168.255.254 dev wlan0
192.168.255.0/24 dev wlan0  proto kernel  scope link  src 192.168.255.1
$ ip -6 route show
2001:db8:abcd:1234::/64 dev wlan0  proto kernel  metric 256  expires 10000sec
fe80::/64 dev wlan0  proto kernel  metric 256
default via fe80::1 dev wlan0  proto ra  metric 1024  expires 1000sec

If you have edited /etc/network/interfaces then you may need to restore these lines to that file:

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

Security

As this example shows, the pre-shared key should be long — up to 63 characters — and very random. The entire strength of WPA2 relies on the length and randomness of the key. If your current key is neither of these then you might want to generate a new key and configure it into the access point.

An easy way to generate a key is:

$ sudo apt-get install pwgen
$ pwgen -s 63 1
TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh

This works even better if you use the RaspberryPi's hardware random number generator.
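A sketch of how to do that, assuming Raspbian's bcm2708-rng kernel module and the rng-tools package (check the exact module name for your kernel): loading the module makes /dev/hwrng appear, and rngd then feeds hardware entropy into the kernel pool that pwgen draws from.

$ sudo modprobe bcm2708-rng
$ sudo apt-get install rng-tools
$ pwgen -s 63 1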

There is only one secure wireless protocol which you can use at home: Wi-Fi Protected Access version two with pre-shared key, known as “WPA2-PSK” or “WPA2 Personal”. The only secure encryption is CCMP -- this uses the Advanced Encryption Standard and is sometimes named “AES” in access point configurations. The only secure authentication algorithm for use with WPA2-PSK is OPEN: this doesn't mean “open access point for use by all, so no authentication” but the reverse: “Open Systems Authentication”.

You can configure wpa_supplicant.conf to insist on these secure options as the only technology it will use with your home network.

network={
    ssid="example"
    psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
    scan_ssid=1
    # Prevent backsliding into insecure protocols
    key_mgmt=WPA-PSK
    auth_alg=OPEN
    proto=WPA2
    group=CCMP
    pairwise=CCMP
}

Andrew Pollock: [life] Day 205: Rainy day play, a Brazilian Jiu-Jitsu refresher

Fri, 2014-08-22 20:25

I had grand plans of doing a 10 km run in the Minnippi Parklands, pushing Zoe in the stroller, followed by some bike riding practice for Zoe and a picnic lunch. Instead, it rained. We had a really nice day, nevertheless.

Zoe slept well again, and I woke up pretty early and was already well and truly awake when she got out of bed, so as a result we were ready to hit the road reasonably early. Since it was raining, I thought a visit to Lollipops Play Cafe would be a fun treat.

We got there about 10 minutes before the play cafe opened, so after some puddle stomping, we popped into Bunnings to get a few things, and then went to Lollipops.

Unfortunately Jason was tied up, so Megan couldn't join us. I did run into Mel, a mother from Kindergarten, who was there with her son, Matthew, and daughter. So instead of practising my knots or doing my real estate license assessment, I ended up having a chat with her, which was nice. She mentioned that she had some stuff to try and do in the afternoon, so I asked if Matthew wanted to come over for a play date for a couple of hours. He was keen for that.

So we went home, and I made some lunch for us, and then Mel dropped Matthew off at around 1pm, and they had a great time playing. I think first up they played a game of hide and seek, and then my practice rope got used for quite a bit of tug-o-war, and then we did some craft. After that I busted out the kinetic sand, and that kept them occupied for ages. They also just had a bit of a play with all the boxes on the balcony. It was a really nice play session. I like it when boys come over for a play date, as the dynamic is totally different, and Zoe and Matthew played really well together.

I dropped Matthew back home on the way to Zoe's Brazilian Jiu Jitsu class. Infinity Martial Arts was running a "please come back" promotion, where you could have two free lessons and a new uniform, so I figured, why not? I'd like to give Zoe the choice of Brazilian Jiu Jitsu again or gymnastics for Term 4, and this seemed like a good way of refreshing her memory as to what Brazilian Jiu Jitsu was. I'm hoping that Tumbletastics will do a free lesson in the school holidays as well, so Zoe will be able to make a reasonably informed choice.

Zoe's now in the "4 to 7" age group for BJJ classes, and there was just one other boy in the class today. She did really well, and the new black Gi looks really good on her. She also had the same teacher, Patrick, who she's really fond of, so it was a good afternoon all round. We stayed and watched a little bit of the 7 to 11 age group class that followed before heading back home.

We'd barely gotten home and Sarah arrived to pick up Zoe, so the day went quite quickly really, without being too hectic.

Michael Still: Juno nova mid-cycle meetup summary: conclusion

Fri, 2014-08-22 17:27
There's been a lot of content in this series about the Juno Nova mid-cycle meetup, so thanks to those who followed along with me. I've also received a lot of positive feedback about the posts, so I am thinking the exercise is worthwhile, and will try to be more organized for the next mid-cycle (and therefore get these posts out earlier). To recap quickly, here's what was covered in the series:



The first post in the series covered social issues: things like how we organized the mid-cycle meetup, how we should address core reviewer burnout, and the current state of play of the Juno release. Bug management has been an ongoing issue for Nova for a while, so we talked about bug management. We are making progress on this issue, but more needs to be done and it's going to take a lot of help for everyone to get there. There was also discussion about proposals on how to handle review workload in the Kilo release, although nothing has been finalized yet.



The second post covered the current state of play for containers in Nova, as well as our future direction. Unexpectedly, this was by far the most read post in the series if Google Analytics is to be believed. There is clear interest in support for containers in Nova. I expect this to be a hot topic at the Paris summit as well. Another new feature we're working on is the Ironic driver merge into Nova. This is progressing well, and we hope to have it fully merged by the end of the Juno release cycle.



At a superficial level the post about DB2 support in Nova is a simple tale of IBM's desire to have people use their database. However, to the skilled observer it's deeper than that -- it's a tale of love and loss, as well as a discussion of how to safely move our schema forward without causing undue pain for our large deployments. We also covered the state of cells support in Nova, with the main issue being that we really need cells to be feature complete. Hopefully people are working on a plan for this now. Another internal refactoring is the current scheduler work, which is important because it positions us for the future.



We also discussed the next gen Nova API, and talked through the proposed upgrade path for the transition from nova-network to neutron.



For those who are curious, there are 8,259 words (not that I am counting or anything) in this post series including this summary post. I estimate it took me about four working days to write (ED: and about two days for his trained team of technical writers to edit into mostly coherent English). I would love to get your feedback on whether you found the series useful, as it's a pretty big investment in time.



Tags for this post: openstack juno nova mid-cycle summary

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Russell Coker: Men Commenting on Women’s Issues

Fri, 2014-08-22 12:26

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts’o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it’s insensitive to needlessly debate the statistics.

Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn’t end when the conference ends (it can continue on mailing lists etc), that good people shouldn’t do nothing when bad things happen, and that free speech doesn’t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction.

Craig Sanders wrote a very misguided post about the Ted Ts’o situation [3]. One of the many things wrong with his post is his statement “I’m particularly disgusted by the men who intervene way too early – without an explicit invitation or request for help or a clear need such as an immediate threat of violence – in womens’ issues“.

I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing.

Asking for Help

I don’t believe that members of minority groups should have to ask for help. Asking isn’t easy; having someone spontaneously offer help because it’s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named “Women Don’t Ask” which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named “Women Don’t Ask” is an indication that we shouldn’t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as “complaining”, so I think we should consider a “complaint” from a woman as a direct request to do something.

The geek feminism blog has an article titled “How To Exclude Women Without Really Trying” which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on “Allies” which includes “Even a blog post helps” [6]. It seems clear from public web sites run by women that women really want men to be involved.

Finally when I get blog comments and private email from women who thank me for my posts I take it as an implied request to do more of the same.

One thing that we really don’t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan: one is that being saved from a violent situation isn’t a fun experience; the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene.

Men Don’t Listen to Women

Rebecca Solnit wrote an article about being ignored by men titled “Men Explain Things to Me” [7]. When discussing women’s issues the term “Mansplaining” is often used for that sort of thing; the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women’s issues are the ones who are least likely to listen to women. This implies that other men have to teach them.

Craig says that women need “space to discover and practice their own strength and their own voices“. I think that the best way to achieve that goal is to listen when women speak. Of course that doesn’t preclude speaking as well, just listen first, listen carefully, and listen more than you speak.

Craig claims that when men like me and Matthew Garrett comment on such issues we are making “women’s spaces more comfortable, more palatable, for men“. From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don’t like. Matthew has covered both of those so it seems that he’s doing a good job of making men uncomfortable – I think that this is a good thing, a discussion that is “comfortable and palatable” for the people in power is not going to be any good for the people who aren’t in power.

The Voting Aspect

It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try and preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA2011 most men who comment regard it as one person (either Valerie Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference David Zanetti was the first person to use the term “apologist for rapists” in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already.

These same patterns always happen so making a comment in support makes a difference. It doesn’t have to be insightful, long, or well written, merely “I agree” and a link to a web page will help. Note that a blog post is much better than a comment in this regard, comments are much like conversation while a blog post is a stronger commitment to a position.

I don’t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn’t going to be considered much by most people.

The Cost of Commenting

The Internet is a hostile environment: when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things and the threats are less credible. I don’t believe that any of the men who have threatened me have the ability to carry out their threats but I expect that many women who receive such threats will consider them to be credible.

The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men that isn’t helping women in any way. It’s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost.

It’s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones.

Not Being Patronising

Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I’ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I’m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don’t).

It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it’s likely to be taken badly. It’s not just that women don’t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also there’s a common trend of men offering simplistic advice on how to solve problems, one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago.

Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can’t easily understand even when I spend some time considering the matter. So there will be things which don’t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn’t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me.

I don’t think it’s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. I suggest a pre-requisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I’ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won’t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn’t enough, you need to have conversations with multiple people so that they can point out the things you don’t understand.

This isn’t any sort of comprehensive list of ways to avoid being patronising, but it’s a few things which seem like common mistakes.

Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women’s issues.

Related posts:

  1. A Lack of Understanding of Nuclear Issues Ben Fowler writes about the issues related to nuclear power...

Michael Still: Juno nova mid-cycle meetup summary: the next generation Nova API

Fri, 2014-08-22 10:27
This is the final post in my series covering the highlights from the Juno Nova mid-cycle meetup. In this post I will cover our next generation API, which used to be called the v3 API but is largely now referred to as the v2.1 API. Getting to this point has been one of the more painful processes I think I've ever seen in Nova's development history, and I think we've learnt some important things about how large distributed projects operate along the way. My hope is that we remember these lessons next time we hit something as contentious as our API re-write has been.



Now on to the API itself. It started out as an attempt to improve our current API to be more maintainable and less confusing to our users. We deliberately decided that we would not focus on adding features, but instead attempt to reduce as much technical debt as possible. This development effort went on for about a year before we realized we'd made a mistake. The mistake we made is that we assumed that our users would agree it was trivial to move to a new API, and that they'd do that even if there weren't compelling new features, which it turned out was entirely incorrect.



I want to make it clear that this wasn't a mistake on the part of the v3 API team. They implemented what the technical leadership of Nova at the time asked for, and were very surprised when we discovered our mistake. We've now spent over a release cycle trying to recover from that mistake as gracefully as possible, but the upside is that the API we will be delivering is significantly more future proof than what we have in the current v2 API.



At the Atlanta Juno summit, it was agreed that the v3 API would never ship in its current form, and that what we would instead do is provide a v2.1 API. This API would be 99% compatible with the current v2 API, with the incompatible things being what we call 'input validation': if you pass a malformed parameter to the API we will now tell you, instead of silently ignoring it. The other thing we are going to add in the v2.1 API is a system of 'micro-versions', which allows a client to specify what version of the API it understands, and the server to gracefully degrade to older versions if required.
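Purely as an illustration of the idea (the exact header name and negotiation semantics had not been settled when this was written, so everything below is hypothetical), a client might announce the micro-version it speaks in a request header, with the server echoing back the version it actually used:

GET /v2.1/servers HTTP/1.1
X-OpenStack-Nova-API-Version: 2.3

HTTP/1.1 200 OK
X-OpenStack-Nova-API-Version: 2.3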



This micro-version system is important, because the next step is to then start adding the v3 cleanups and fixes into the v2.1 API, but as a series of micro-versions. That way we can drag the majority of our users with us into a better future, without abandoning users of older API versions. I should note at this point that the mechanics for deciding what the minimum micro-version a version of Nova will support are largely undefined at the moment. My instinct is that we will tie to stable release versions in some way; if your client dates back to a release of Nova that we no longer support, then we might expect you to upgrade. However, that hasn't been debated yet, so don't take my thoughts on that as rigid truth.



Frustratingly, the intent of the v2.1 API has been agreed and unchanged since the Atlanta summit, yet we're late in the Juno release and most of the work isn't done yet. This is because we got bogged down in the mechanics of how micro-versions will work, and how the translation for older API versions will work inside the Nova code later on. We finally unblocked this at the mid-cycle meetup, which means this work can finally progress again.



The main concern that we needed to resolve at the mid-cycle was the belief that if the v2.1 API was implemented as a series of translations on top of the v3 code, then the translation layer would be quite thick and complicated. This raises issues of maintainability, as well as the amount of code we need to understand. The API team has now agreed to produce an API implementation that is just the v2.1 functionality, and will then layer things on top of that. This is actually invisible to users of the API, but it leaves us with an implementation where changes after v2.1 are additive, which should be easier to maintain.



One of the other changes in the original v3 code is that we stopped proxying functionality for Neutron, Cinder and Glance. With the decision to implement a v2.1 API instead, we will need to rebuild that proxying implementation. To unblock v2.1, and based on advice from the HP and Rackspace public cloud teams, we have decided to delay implementing these proxies. So, the first version of the v2.1 API we ship will not have proxies, but later versions will add them in. The current v2 API implementation will not be removed until all the proxies have been added to v2.1. This is prompted by the belief that many advanced API users don't use the Nova API proxies, and therefore could move to v2.1 without them being implemented.



Finally, I want to thank the Nova API team, especially Chris Yeoh and Kenichi Oomichi for their patience with us while we have worked through these complicated issues. It's much appreciated, and I find them a consistent pleasure to work with.



That brings us to the end of my summary of the Nova Juno mid-cycle meetup. I'll write up a quick summary post that ties all of the posts together, but apart from that this series is now finished. Thanks for following along.



Tags for this post: openstack juno nova mid-cycle summary api v3 v2.1

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues




Michael Still: Don't Tell Mum I Work On The Rigs

Fri, 2014-08-22 07:27






ISBN: 1741146984

LibraryThing

I read this book while on a flight a few weeks ago. It's surprisingly readable and relatively short -- you can knock it over in a single long haul flight. The book covers the memoirs of an oil rig worker, from childhood right through to middle age. That's probably the biggest weakness of the book: it just kind of stops when the writer reaches the present day. I felt there wasn't really a conclusion, which was disappointing.

An interesting, fun read however.



Tags for this post: book paul_carter oil rig memoir

Related posts: Extreme Machines: Eirik Raude; New Orleans and sea level; Kern County oil wells on I-5; What is the point that people's morals evaporate?

Andrew Pollock: [life] Day 204: Workshops Rail Museum

Thu, 2014-08-21 21:25

Zoe had a fabulous night's sleep and so did I. Despite that, I felt a bit tired today. Might have been all the unusual exercise yesterday.

After a leisurely start, we headed off in the direction of the Workshops Rail Museum for the day. We dropped past OfficeWorks on the way to return something I got months ago and didn't like, and I used the opportunity to grab a couple of cute little A5-sized clipboards. I'm going to keep one in the car and one in my Dad bag, so Zoe can doodle when we're on the go. I also discovered that one can buy reams of A5 paper.

We arrived at the Workshops, which were pretty quiet, except for a school excursion. Apparently they're also filming a movie there somewhere at the moment too (not in the museum part).

Despite Zoe's uninterrupted night's sleep, she fell asleep in the car on the way there, which was highly unusual. I let her sleep for a while in the car once we got there, before I woke her up. She woke up a bit grumpy, but once she realised where we were, she was very excited.

We had a good time doing the usual things, and then had a late lunch, and a brief return to the museum before heading over to Kim's place before she had to leave to pick up Sarah from school. Zoe and I looked after Tom and played with his massive pile of glitter play dough until Kim got back with Sarah.

Zoe and Sarah had their usual fabulous time together for about an hour before we had to head home. I'd had dinner going in the slow cooker, so it was nice and easy to get dinner on the table once we got home.

Despite her nap, Zoe went to bed easily. Now I have to try and convince Linux to properly print two-up on A4 paper. The expected methods aren't working for me.

David Rowe: SM1000 Part 3 – Rx Working

Thu, 2014-08-21 13:29

After an hour of messing about it turns out a bad solder joint meant U6 wasn’t connected to the ADC1 pin on the STM32F4 (schematic). This was probably the source of “noise” in some of my earlier unit tests. I found it useful to write a program to connect the ADC1 input to the DAC2 output (loudspeaker) and “listen” to the noise: a software signal tracer. Note to self: I must add that sort of analog loopback as a SM1000 menu option. I “cooked” the bad joint for 10 seconds with the soldering iron and some fresh flux, and the rx side burst into life.

Here’s a video walk through of the FreeDV Rx demo:

I am really excited by the “analog” feel of the SM1000. Power up, and “off air” speech is coming out of the speaker a few hundred milliseconds later! These are the benefits of having no operating system (so no boot delay) and of the low latency, fast sync FreeDV design that veterans like Mel Whitten K0PFX have honed over years of pioneering HF DV.

The SM1000 latency is significantly lower than the PC version of FreeDV. It’s easy to get “hard” real time performance without an operating system, so it’s safe to use nice small audio buffers. Although to be fair, optimising latency in x86 FreeDV is not something I have explored to date.

The top level of the receive code is pretty simple:



/* ADC1 is the demod in signal from the radio rx, DAC2 is the SM1000 speaker */

nin = freedv_nin(f);
nout = nin;
f->total_bit_errors = 0;

if (adc1_read(&adc16k[FDMDV_OS_TAPS_16K], 2*nin) == 0) {
    GPIOE->ODR = (1 << 3);
    fdmdv_16_to_8_short(adc8k, &adc16k[FDMDV_OS_TAPS_16K], nin);
    nout = freedv_rx(f, &dac8k[FDMDV_OS_TAPS_8K], adc8k);
    //for(i=0; i<FREEDV_NSAMPLES; i++)
    //   dac8k[FDMDV_OS_TAPS_8K+i] = adc8k[i];
    fdmdv_8_to_16_short(dac16k, &dac8k[FDMDV_OS_TAPS_8K], nout);
    dac2_write(dac16k, 2*nout);
    //led_ptt(0); led_rt(f->fdmdv_stats.sync); led_err(f->total_bit_errors);
    GPIOE->ODR &= ~(1 << 3);
}



We read “nin” modem samples from the ADC, change the sample rate from 16 to 8 kHz, then call freedv_rx(). We then re-sample the “nout” output decoded speech samples to 16 kHz, and send them to the DAC, where they are played out of the loudspeaker.

The commented out “for” loop is the analog loopback code I used to “listen” to the ADC1 noise. There is also some commented out code for blinking LEDs (e.g. if we have sync, bit errors) that I haven’t tested yet (indeed the LEDs haven’t been loaded onto the PCB). I like to hit the highest risk tasks on the check list first.

The “GPIOE->ODR” is the GPIO Port E output data register, that’s the code to take the TP8 line high and low for measuring the real time CPU load on the oscilloscope.

Running the ADC and DAC at 16 kHz means I can get away without analog anti-aliasing or reconstruction filters. I figure the SSB radio’s filtering can take care of that.
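That works because the anti-aliasing for the rate change happens digitally instead: a rate changer like fdmdv_16_to_8_short low-pass filters before discarding every second sample. Here is an illustrative sketch of a generic 2:1 FIR decimator; it is not the actual FreeDV implementation, and NTAPS is a made-up filter length:

/* Generic 2:1 FIR decimator sketch (not the actual fdmdv_16_to_8_short).
   Low-pass filter the 16 kHz input below 4 kHz, then keep every second
   sample.  The caller must supply NTAPS-1 samples of history before
   in16k[0], much as the FDMDV_OS_TAPS offsets do in the code above. */
#include <stdint.h>

#define NTAPS 48

void decimate_16k_to_8k(int16_t out8k[], const int16_t in16k[],
                        const float lpf[NTAPS], int nout)
{
    for (int i = 0; i < nout; i++) {
        float acc = 0.0f;
        /* convolve filter taps with the input, advancing 2 input
           samples per output sample */
        for (int j = 0; j < NTAPS; j++)
            acc += lpf[j] * in16k[2*i + NTAPS - 1 - j];
        out8k[i] = (int16_t)acc;
    }
}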

OK. Time to load up the switches and LEDs and get the SM1000 switching between Tx and Rx via the PTT button.

I used this line to compress the 250MB monster 1080p video from my phone to an 8MB file that was fast to upload to YouTube:



david@bear:~/Desktop$ ffmpeg -i VID_20140821_113318.mp4 -ab 56k -ar 22050 -b 300k -r 15 -s 480x360 VID_20140821_113318.flv

David Rowe: SM1000 Part 2 – Embedded FreeDV Tx Working

Thu, 2014-08-21 08:29

Just now I fired up the full, embedded FreeDV “tx side”. So speech is sampled from the SM1000 microphone, processed by the Codec 2 encoder, then sent to the FDMDV modulator, then out of the DAC as modem tones. It worked, and used only about 25% of the STM32F4 CPU! A laptop running the PC version of FreeDV is the receiver.

Here is the decoded speech from a test “transmission”, which to my ear sounds about the same as FreeDV running on a PC. I am relieved that there aren’t too many funny noises apart from artefacts of the Codec itself (which are funny enough).

The scatter plot is really good – better than I expected. Nice tight points and a SNR of 25 dB. This shows that the DAC and line interface hardware is working well:

For the past few weeks I have been gradually building up the software for the SM1000. Codec 2 and the FDMDV modem needed a little tweaking to reduce the memory and CPU load required. It’s really handy that I am the author of both!

The hardware seems to be OK although there is some noise in the analog side (e.g. microphone amplifier, switching power supply) that I am still looking into. Thanks Rick Barnich KA8BMA for an excellent job on the hardware design.

I have also been working on various drivers (ADC, DAC, switches and LEDs), and getting my head around developing on a “bare metal” platform (no operating system). For example if I run out of memory it just hangs, and when I Ctrl-C in gdb the stack is corrupted and it’s in an infinite loop. Anyway, it’s all starting to make sense now, and I’m nearing the finish line.

The STM32F4 is a curious beast: a “router” class CPU that doesn’t have an operating system. By “router” class I mean a CPU found inside a DSL router, like a WRT54G, that runs embedded Linux. The STM32F4 is much faster (168MHz) and more capable than the smaller chips we usually call a “uC” (e.g. a PIC or AVR). Much to my surprise I’m not missing embedded Linux. In some ways an operating system complicates life, for example random context switches, i-cache thrashing, needing lots of RAM and Flash, large and complex build systems, and on the hardware side an external address and data bus which means high speed digital signals and PCB area.

I am now working on the Rx side. I need to work out a way to extract demod information so I can determine that the analog line in, ADC, and demod are working correctly. At this stage nothing is coming out of U6, the line interface op-amp (schematic here). Oh well, I will take a look at that tomorrow.

Andrew Pollock: [life] Day 203: Kindergarten, a run and cleaning

Wed, 2014-08-20 21:25

I started the day off with a run. It was just a very crappy 5 km run, but it was nice to be feeling well enough to go for a run, and have the weather cooperate as well. I look forward to getting back to 10 km in my near future.

I had my chiropractic adjustment and then got stuck into cleaning the house.

Due to some sort of scheduling SNAFU, I didn't have a massage today. I'm still not quite sure what happened there, but I biked over and everything. The upside was it gave me some more time to clean.

It also worked out well, because I'd booked a doctor's appointment pretty close after my massage, so it was going to be tight to get from one place to the other.

With my rediscovered enthusiasm for exercise, and cooperative weather, I decided to bike to Kindergarten for pick up. Zoe was very excited. I'd also forgotten that Zoe had a swim class this afternoon, so we only had about 30 minutes at home before we had to head out again (again by bike) to go to swim class. I used the time to finish cleaning, and Zoe helped mop her bathroom.

Zoe wanted to hang around and watch Megan do her swim class, so we didn't get away straight away, which made for a slightly late dinner.

Zoe was pretty tired by bath time. Hopefully she'll have a good sleep tonight.

Michael Still: Juno nova mid-cycle meetup summary: nova-network to Neutron migration

Wed, 2014-08-20 14:27
This will be my second last post about the Juno Nova mid-cycle meetup, which covers the state of play for work on the nova-network to Neutron upgrade.



First off, some background information. Neutron (formerly Quantum) was developed over a long period of time to replace nova-network, and added to the OpenStack Folsom release. The development of new features for nova-network was frozen in the Nova code base, so that users would transition to Neutron. Unfortunately the transition period took longer than expected. We ended up having to unfreeze development of nova-network, in order to fix reliability problems that were affecting our CI gating and the reliability of deployments for existing nova-network users. Also, at least two OpenStack companies were carrying significant feature patches for nova-network, which we wanted to merge into the main code base.



You can see the announcement at http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html. The main enhancements post-freeze were a conversion to use our new objects infrastructure (and therefore conductor), as well as features that were being developed by Nebula. I can't find any contributions from the other OpenStack company in the code base at this time, so I assume they haven't been proposed.



The nova-network to Neutron migration path has come to the attention of the OpenStack Technical Committee, who have asked for a more formal plan to address Neutron feature gaps and deprecate nova-network. That plan is tracked at https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage. As you can see, there are still some things to be merged which are targeted for juno-3. At the time of writing this includes grenade testing; Neutron being the devstack default; a replacement for nova-network multi-host; a migration plan; and some documentation. They are all making good progress, but until these action items are completed, Nova can't start the process of deprecating nova-network.



The discussion at the Nova mid-cycle meetup was around the migration planning item in the plan. There is a Nova specification that outlines one possible plan for live upgrading instances (i.e., no instance downtime) at https://review.openstack.org/#/c/101921/, but this will probably now be replaced with a simpler migration path involving cold migrations. This is prompted by not being able to find a user that absolutely has to have live upgrade. There was some confusion, because of a belief that the TC was requiring a live upgrade plan. But as Russell Bryant says in the meetup etherpad:



"Note that the TC has made no such statement on migration expectations other than a migration path must exist, both projects must agree on the plan, and that plan must be submitted to the TC as a part of the project's graduation review (or project gap review in this case). I wouldn't expect the TC to make much of a fuss about the plan if both Nova and Neutron teams are in agreement."



The current plan is to go forward with a cold upgrade path, unless a user comes forward with an absolute hard requirement for a live upgrade, and a plan to fund developers to work on it.



At this point, it looks like we are on track to get all of the functionality we need from Neutron in the Juno release. If that happens, we will start the nova-network deprecation timer in Kilo, with my expectation being that nova-network would be removed in the "M" release. There is also an option to change the default networking implementation to Neutron before the deprecation of nova-network is complete, which will mean that new deployments are defaulting to the long term supported option.



In the next (and probably final) post in this series, I'll talk about the API formerly known as Nova API v3.



Tags for this post: openstack juno nova mid-cycle summary nova-network neutron migration

Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Tridge on UAVs: First flight of ArduPilot on Linux

Wed, 2014-08-20 14:24

I'm delighted to announce that the effort to port ArduPilot to Linux reached an important milestone yesterday with the first flight of a fixed wing aircraft on a Linux based autopilot running ArduPilot.

As I mentioned in a previous blog post, we've been working on porting ArduPilot to Linux for a while now. There are lots of reasons for wanting to do this port, not least of which is that it is an interesting challenge!

For the test flight yesterday I used a PXF cape, an add-on board for the BeagleBoneBlack designed by Philip Rowse from 3DRobotics. The PXF cape was designed as a development test platform for Linux based autopilots, and includes a rich array of sensors. It has 3 IMUs (a MPU6000, a MPU9250 and a LSM9DSO), plus two barometers (MS5611 on SPI and I2C), 3 I2C connectors for things like magnetometers and airspeed sensors, plus a pile of UARTs, analog ports etc.

All of this sits on top of a BeagleBoneBlack, which is a widely used embedded Linux board with 512M of RAM, a 1GHz ARM CPU and 2 GBytes of eMMC for storage. We're running the Debian Linux distribution on the BeagleBoneBlack, with a 3.8.13-PREEMPT kernel. The BBB also has two nice co-processors called PRUs (programmable realtime units) which are ideal for timing critical tasks. In the flight yesterday we used one PRU for capturing PPM-SUM input from a R/C receiver, and the other PRU for PWM output to the aircraft's servos.
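As an aside, PPM-SUM is simple enough to decode that it fits comfortably on a PRU. The sketch below is illustrative only (it is not the ArduPilot PRU firmware): successive pulse-to-pulse gaps encode the channel values, and an extra-long gap marks the start of a new frame.

/* Illustrative PPM-SUM decoder (not the actual ArduPilot PRU code).
   Channel values arrive as successive pulse gaps of roughly 1000-2000 us;
   a gap longer than the sync threshold marks the start of a new frame. */
#include <stdint.h>

#define MAX_CHANNELS 8
#define SYNC_GAP_US  2700

static uint16_t channels[MAX_CHANNELS];
static int chan;

/* called with the measured gap (microseconds) between successive pulses */
void ppm_sum_pulse(uint16_t gap_us)
{
    if (gap_us > SYNC_GAP_US) {
        chan = 0;                      /* sync gap: start a new frame */
    } else if (chan < MAX_CHANNELS) {
        channels[chan++] = gap_us;     /* store the next channel value */
    }
}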

Summer of Code project

The effort to port ArduPilot to Linux got a big boost a few months ago when we teamed up with Victor, Sid and Anuj from the BeaglePilot project as part of a Google Summer of Code project. Victor was sponsored by GSoC, while Sid and Anuj were sponsored as summer students at 3DRobotics. Together they have put a huge amount of effort in over the last few months, which culminated in the flight yesterday. The timing was perfect, as yesterday was also the day that student evaluations were due for GSoC!

PXF on a SkyWalker

For the flight yesterday I used a 3DR SkyWalker, with the BBB+PXF replacing the usual Pixhawk. Because the port of ArduPilot to Linux uses the AP_HAL hardware abstraction layer, all of the hardware-specific code is abstracted below the flight code, which meant I was able to fly the SkyWalker with exactly the same parameters loaded as I have previously used with the Pixhawk on the same plane.
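ArduPilot's real AP_HAL is a family of C++ classes, but the idea is easy to sketch. The following toy C program (all names hypothetical, not ArduPilot's API) shows how flight code that only calls a common interface can be pointed at a board-specific backend, which is what lets the same flight code and parameters fly on a Pixhawk or a BBB+PXF:

    /* Toy illustration of a hardware abstraction layer; not ArduPilot's
     * actual AP_HAL API. Flight code calls a common servo-output
     * interface and never touches the board directly. */
    #include <stdint.h>
    #include <stdio.h>

    struct rcout_ops {
        void (*write)(uint8_t channel, uint16_t pulse_us);
    };

    /* Hypothetical BBB+PXF backend: would hand the pulse to a PRU. */
    static void pru_write(uint8_t channel, uint16_t pulse_us)
    {
        printf("PRU PWM: channel %d -> %d us\n", channel, pulse_us);
    }
    static const struct rcout_ops pru_rcout = { pru_write };

    /* "Flight code": knows nothing about the board underneath. */
    static void set_servo(const struct rcout_ops *hal,
                          uint8_t channel, uint16_t pulse_us)
    {
        hal->write(channel, pulse_us);
    }

    int main(void)
    {
        set_servo(&pru_rcout, 1, 1500);  /* centre channel 1 */
        return 0;
    }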

We didn't use all of the sensors on the PXF for this flight, however. Some issues with the build of the initial test boards meant that only the MPU9250 was fully functional, but that was quite sufficient. Future revisions of the PXF will fix up the other two IMUs, allowing us to gain the advantages of multiple IMUs (specifically, considerable robustness to accelerometer aliasing).

I also had a digital airspeed sensor (on I2C) and an external GPS/Compass combo to give the full set of sensors needed for good fixed wing flight.

Debugging at the field

As with any experimental hardware you have to expect some issues, and the PXF indeed showed up a problem when I arrived at the flying field. At home I don't get GPS lock due to my metal roof, so I hadn't done much testing of the GPS, and during pre-flight ground testing yesterday I found that I frequently lost the GPS. With a bit of work using valgrind and gdb I found the bug, and the GPS started to work correctly. It was an interesting bug in the UART layer in AP_HAL_Linux which may also affect the AP_HAL_PX4 code used on a Pixhawk (although with much more subtle effect), so it was an important fix, and it really shows the development benefit of testing on multiple platforms.

After that issue was fixed the SkyWalker was launched, and as expected it flew perfectly, exactly as it would fly with any other ArduPilot based autopilot. There was quite a strong wind (about 15 knots, gusting to 20) which was a challenge for such a small foam plane, but it handled it nicely.

Lots more photos of the first flight are available here. Thanks to Darrell Burkey for braving a cold Canberra morning to come out and take some photos!

Next Steps

Now that we have ArduPilot on PXF flying nicely the next step is a test flight with a multi-copter (I'll probably use an Iris). I'm also looking forward to hearing about first flight reports from other groups working on porting ArduPilot to other Linux based boards, such as the NavIO.

This project follows in the footsteps of quite a few existing autopilots that run on Linux, both open source and proprietary, including such well known projects as Paparazzi, the AR-Drone and many research autopilots at universities around the world. Having the ability to run ArduPilot on Linux opens up some interesting possibilities for the ArduPilot project, including things like ROS integration, tightly integrated SLAM and lots of computationally intensive vision algorithms. I'm really looking forward to ArduPilot on Linux being widely available for everyone to try.

All of the code needed to fly ArduPilot on Linux is in the standard ArduPilot git repository.

Thanks

Many thanks to 3DRobotics for sponsoring the development of the PXF cape, and to Victor, Sid and Anuj for their efforts over the last few months! Special thanks to Philip Rowse for designing the board, and for putting up with lots of questions as we worked on the port, and to Craig Elder and Jeff Wurzbach for providing engineering support from the 3DR US offices.

Andrew Pollock: [life] Day 202: Kindergarten, a lot of administrative running around, tennis

Wed, 2014-08-20 12:25

Yesterday was a pretty busy day. I hardly stopped, and on top of a poor night's sleep, I was pretty exhausted by the end of the day.

I started the day with a yoga class, because a few extraordinary things had popped up on my schedule, meaning this was the only time I could get to a class this week. It was a beginner's class, but it was nice to have a slower pace for a change, and an excellent way to start the day off.

I drove to Sarah's place to pick up Zoe and take her to Kindergarten, but I made a bad choice of route, the traffic was particularly bad, and we got to Kindergarten a bit later than normal.

After I dropped Zoe off, I headed straight to the post office to get some passport photos for the application for my certificate of registration. I also noticed that they now had some post office boxes available (I was a bit miffed, because I'd been actively discouraged from putting my name down for one earlier in the year because of the purported length of the wait list). I discovered that one does not simply open a PO box in the name of a business; one needs letters of authority and printouts of ABNs and whatnot. So after I got my passport photos and made a few other impulse purchases (USB speakers for $1.99?!), I headed back home to gather the other documentation I needed.

By the time I'd done that and a few other bits and pieces at home, it was time to pop back to get my yoga teacher to certify my photos. Then I headed into the city to lodge the application in person. I should get the piece of paper in 6 weeks or so.

Then I swung past the post office to complete my PO box application (successfully this time) and grab some lunch, and update my mailing address with the bank. By the time I'd done all that, I had enough time to swing past home to grab Zoe's tennis racquet and a snack for her and head to Kindergarten to pick her up.

Today's tennis class went much better. Giving her a snack before the class started was definitely the way to go. She'd also eaten a good lunch, which would have helped. I just need to remember to get her to go to the toilet, then she should be all good for an interruption-free class.

I dropped Zoe directly back to Sarah after tennis class today, and then swung by OfficeWorks to pick up some stationery on the way home.

Arjen Lentz: Two Spaces After a Period: Why You Should Never, Ever Do It | Slate.com

Wed, 2014-08-20 11:25

The double-space habit probably comes from manual typewriters and their monospaced fonts. But we all use proportional fonts these days.


Rusty Russell: POLLOUT doesn’t mean write(2) won’t block: Part II

Wed, 2014-08-20 00:27

My previous discovery that poll() indicating an fd was writable didn't mean write() wouldn't block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn't have sockets, but it had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.
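In other words, read() can legitimately return fewer bytes than you asked for, so a caller that needs exactly n bytes has to loop. A minimal sketch of that pattern:

    /* Minimal sketch: read() may return less than requested (pipes,
     * sockets, signals), so a caller that wants exactly n bytes loops. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t read_full(int fd, void *buf, size_t n)
    {
        size_t done = 0;
        while (done < n) {
            ssize_t r = read(fd, (char *)buf + done, n - done);
            if (r == 0)                /* EOF before n bytes */
                break;
            if (r < 0) {
                if (errno == EINTR)    /* interrupted: retry */
                    continue;
                return -1;
            }
            done += (size_t)r;
        }
        return (ssize_t)done;
    }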

But write() is different. Presumably so that simple UNIX filters didn't have to check the return value and loop (they'd just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule. Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.
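The fix for code like this is to stop treating POLLOUT as a guarantee: mark the fd non-blocking and handle short writes and EAGAIN. A minimal sketch of that pattern (error handling kept to the essentials):

    /* Sketch of the safe pattern: poll() for POLLOUT only on a
     * non-blocking fd, and handle short writes and EAGAIN. */
    #include <errno.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <unistd.h>

    int write_when_ready(int fd, const char *buf, size_t len)
    {
        /* Without O_NONBLOCK, POLLOUT only promises *some* buffer
         * space; a large write() can still block. */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

        size_t sent = 0;
        while (sent < len) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            ssize_t w = write(fd, buf + sent, len - sent);
            if (w < 0) {
                if (errno == EAGAIN || errno == EINTR)
                    continue;          /* not actually writable yet */
                return -1;
            }
            sent += (size_t)w;
        }
        return 0;
    }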

What's worse is the Linux select() man page:

A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking. ... those in writefds will be watched to see if a write will not block...

And poll():

POLLOUT Writing now will not block.

Man page patches have been submitted…

Andrew McDonnell: Unleashed GovHack – an Adelaide Adventure in Open Data

Tue, 2014-08-19 22:27
Last month I attended Unleashed GovHack, our local contribution to the Australian GovHack hackathon. Essentially GovHack is a chance for makers, hackers, designers, artists, and researchers to team up with government ‘data custodians’ and build proof of concept applications (web or mobile), software tools, video productions or presentations (data journalism) in a way that […]

Lev Lafayette: A Source Installation of gzip

Tue, 2014-08-19 21:28

GNU zip is a compression utility free from patented algorithms. Software patents are stupid, and patented compression algorithms are especially stupid.


Linux Users of Victoria (LUV) Announce: LUV Main September 2014 Meeting: AGM + lightning talks

Tue, 2014-08-19 18:28
Start: Sep 2 2014 19:00
End: Sep 2 2014 21:00
Location:

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Link:  http://luv.asn.au/meetings/map

AGM + lightning talks

Notice of LUV Annual General Meeting, 2nd September 2014, 19:00.

Linux Users of Victoria, Inc., registration number A0040056C, will be holding its Annual General Meeting at 7pm on Tuesday, 2nd September 2014, in the Buzzard Lecture Theatre, Trinity College.

The AGM will be held in conjunction with our usual September Main Meeting. As is customary, after the AGM business we will have a series of lightning talks by members on a recent Linux experience or project.

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5.

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via public transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue, VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.
