Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Andrew Pollock: [life] Day 243: Day care for a day

Mon, 2014-09-29 22:25

I had to resort to using Zoe's old day care today so I could do some more Thermomix Consultant training. Zoe's asked me on and off if she could go back to her old day care to visit her friends and her old teachers, so she wasn't at all disappointed when she could today. Megan was even there as well, so it was a super easy drop off. She practically hugged me and sent me on my way.

When I came back at 3pm to pick her up, she wanted to stay longer, but wavered a bit when I offered to let her stay for another hour and ended up coming home with me.

We made a side trip to the Valley to check my post office box, and then came home.

Zoe watched a bit of TV, and then Sarah arrived to pick her up. After some navel gazing, I finished off the day with a very strenuous yoga class.

Sonia Hamilton: Git and mercurial abort: revision cannot be pushed

Mon, 2014-09-29 12:29

I’ve been migrating some repositories from Mercurial to Git; as part of this migration process some users want to keep using Mercurial locally until they have time to learn git.

First install the hg-git tools; for example on Ubuntu:

sudo aptitude install python-setuptools python-dev
sudo easy_install hg-git

Make sure the following is in your ~/.hgrc:

[extensions]
hgext.bookmarks =
hggit =

Then, in your existing mercurial repository, add a new remote that points to the git repository. For example for a BitBucket repository:

cd <mercurial repository>
cat .hg/hgrc

[paths]
# the original hg repository
default = https://username@abcde.org/foo/barhg
# the git version (on BitBucket in this case)
bbgit = git+ssh://git@bitbucket.org:foo/bar.git

Then you can run hg push bbgit to push from your local hg repository to the remote git repository.

mercurial abort: revision cannot be pushed

You may get the error mercurial abort: revision cannot be pushed since it doesn’t have a ref when pushing from hg to git, or you might notice that your hg work isn’t being pushed. The solution here is to reset the hg bookmark for git’s master branch:

hg book -f -r tip master
hg push bbgit

If you find yourself doing this regularly, this small shell function (in your ~/.bashrc) will help:

hggitpush () {
    # $1 is hg remote name in hgrc for repo
    # $2 is branch (defaults to master)
    hg book -f -r tip ${2:-master}
    hg push $1
}

Then from your shell you can run commands like:

hggitpush bbgit dev
hggitpush foogit    # defaults to pushing to master

Sridhar Dhanapalan: Twitter posts: 2014-09-22 to 2014-09-28

Mon, 2014-09-29 01:26

David Rowe: SM1000 Part 6 – Noise and Radio Tests

Sun, 2014-09-28 15:29

For the last few weeks I have been debugging some noise issues in “analog mode”, and testing the SM1000 between a couple of HF radios.

The SM1000 needs to operate in “analog” mode as well as support FreeDV Digital Voice (DV mode). In analog mode, the ADC samples the mic signal, and sends it straight to the DAC where it is sent to the mic input of the radio. This lets you use the SM1000 for SSB as well as DV, without unplugging the SM1000 and changing microphones. Analog mode is a bit more challenging as electrical noise in the SM1000, if not controlled, makes it through to the transmit audio. DV mode is less sensitive, as the modem doesn’t care about low level noise.

Tracking down noise sources involves a lot of detail work, not very exciting but time consuming. For example I can hear a noise in the received audio, is it from the DAC or ADC side? Write software so I can press a button to send 0 samples to the DAC so I can separate the DAC and ADC at run time. OK it’s the ADC side, is it the ADC itself or the microphone amplifier? Break net and terminate ADC with 1k resistor to ground (thanks Matt VK5ZM for this suggestion). OK it’s the microphone amplifier, so is it on the input side or the op-amp itself? Does the noise level change with the mic gain control? No, then it must not be from the input. And so it goes.

I found noise due to the ADC, the mic amp, the mic bias circuit, and the 5V switcher. Various capacitors and RC filters helped reduce it to acceptable levels. The switcher caused high-frequency hiss; this was improved with a 100nF cap across R40, and a 1500 ohm/1nF RC filter between U9 and the ADC input on U1 (prototype schematic). The mic amp and mic bias circuit were picking up 50Hz noise at the frame rate of the DSP software, which was fixed with a 220uF cap across R40 and a 100 ohm/220uF RC filter in series with R39, the condenser mic bias network.

To further improve noise, Rick and I are also working on changes to the PCB layout. My analog skills are growing and I am now working methodically. It’s nice to learn some new skills, useful for other radio projects as well. Satisfying.

Testing Between Two Radios

Next step is to see how the SM1000 performs over real radios. In particular how does it go with nearby RF energy? Does the uC reset itself, is there RF noise getting into the sensitive microphone amplifier and causing runaway feedback in analog mode? Also user set up issues: how easy is it to interface to the mic input of a radio? Is the level reaching the radio mic input OK?

The first step was to connect the SM1000 to a FT817 as the transmit radio, then to an IC7200 via 100dB of attenuation. The IC7200 receive audio was connected to a laptop running FreeDV. The FT817 was set to 0.5W output so I wouldn’t let the smoke out of my little in-line attenuators. This worked pretty well, and I obtained SNRs of up to 20dB from FreeDV. It’s always a little lower through real radios, but that’s acceptable. The PTT control from the SM1000 worked well. It was at this point that I heard some noises using the SM1000 in “analog” mode that I chased down as described above.

At the IC7200 output I recorded this file demonstrating audio using the stock FT817 MH31 microphone, the SM1000 used in analog mode, and the SM1000 in DV mode. The audio levels are unequal (MH31 is louder), but I am satisfied there are no strange noises in the SM1000 audio (especially in analog mode) when compared to the MH31 microphone. The levels can be easily tweaked.

Then I swapped the configuration to use the IC7200 as the transmitter. This has up to 100W PEP output, so I connected it to an end fed dipole, and used the FT817 with the (non-resonant) VHF antenna as the receiver. It took me a while to get the basic radio configuration working. Even with the stock IC7200 mic I could hear all sorts of strange noises in the receive audio due to the proximity of the two radios. Separating them (walking up the street with the FT817) or winding the RF gain all the way down helped.

However the FreeDV SNR was quite low, a maximum of 15dB. I spent some time trying to work out why but didn’t get to the bottom of it. I suspect there is some transmit pass-band filtering in the IC7200, making some FDMDV carriers a few dB lower than others. Note x-shaped scatter diagram and sloped spectrum below:

However the main purpose of these tests was to see how the SM1000 handled high RF fields. So I decided to move on.

I tested a bunch of different combinations, all with good results:

  • IC7200 with stock HM36 mic, SM1000 in analog mode, SM1000 in DV mode (high and low drive)
  • Radios tuned to 7.05, 14.235 and 28.5 MHz.
  • Tested with IC7200 and SM1000 running from the same 12V battery (breaking transformer isolation).
  • Had a 1m headphone cable plugged into the SM1000 act as an additional “antenna”.
  • Rigged up an adaptor to plug the FT817 MH31 mic into the CN5 “ext mic” connector on the SM1000. Total of 1.5m in mic lead, so plenty of opportunity for RF pick up.
  • Running full power into low and 3:1 SWR loads. (Matt, VK5ZM, suggested high SWR loads are a harsh RF environment).

Here are some samples, SM1000 analog, stock IC7200 mic, SM1000 DV low drive, SM1000 high drive. There are some funny noises on the analog and stock mic samples due to the proximity of the rx to the tx, but they are consistent across both samples. No evidence of runaway RF feedback or obvious strange noises. Once again the DV level is a bit lower. All the nasty HF channel noise is gone too!

Change Control

Rick and I are coordinating our work with a change log text file that is under SVN version control. As I perform tests and make changes to the SM1000, I record them in the change log. Rick then works from this document to modify the schematic and PCB, making notes on the change log. I can then review his notes against the latest schematic and PCB files. The change log, combined with email and occasional Skype calls, is working really well, despite us being half way around the planet from each other.

SM1000 Enclosure

One open issue for me is what enclosure we provide for the Beta units. I’ve spoken to a few people about this, and am open to suggestions from you, dear reader. Please comment below on your needs or ideas for a SM1000 enclosure. My requirements are:

  1. Holes for loudspeaker, PTT switch, many connectors.
  2. Support operation in “hand held” or “small box next to the radio” form factor.
  3. Be reasonably priced, quick to produce for the Qty 100 beta run.

It’s a little over two months since I started working on the SM1000 prototype, and just 6 months since Rick and I started the project. I’m very pleased with progress. We are on track to meet our goal of having Betas available in 2014. I’ve kicked off the manufacturing process with my good friend Edwin from Dragino in China, ordering parts and working together with Rick on the BOM.

Glen Turner: Ubiquitous survelliance, VPNs, and metadata

Sat, 2014-09-27 11:28

My apologies for the lack of diagrams accompanying this post. I had not realised when I selected LiveJournal to host my blog that it did not host images.

There have been a lot of remarks, not the least by a minister, about the use of VPNs to avoid metadata collection. Unfortunately VPNs cannot be presumed to be effective in avoiding metadata collection, because of the sheer ubiquity of surveillance and the traffic analysis opportunities that ubiquity makes possible.

By ‘metadata’ I mean the production of flow records, one record per flow, with no sampling or aggregation.

By ‘ubiquitous surveillance’ I mean the ability to completely tap and record the ingress and egress data of a computer. Furthermore, the sharing of that data with other nations, such as via the Five Eyes programme. It is a legal quirk in the US and in Australia that a national spy agency may not, without a warrant or reasonable cause, be able to analyse the data of its own citizens directly, but can obtain that same information via a Five Eyes partner without a warrant or reasonable cause.

By ‘VPN service’ I mean an overseas service which sells subscriber-based access to an OpenVPN or similar gateway. The subscriber runs an OpenVPN client, the service runs an OpenVPN server. The traffic from within that encrypted VPN tunnel is then NATed and sent out the Internet-facing interface of the OpenVPN server. The traffic from the subscriber appears to have the IP address of the VPN server; this makes VPN services popular for avoiding geo-locked Internet content from Hulu, Netflix and BBC iPlayer.

The theory is that this IP address misdirection also defeats ubiquitous surveillance. An agency producing metadata from the subscriber's traffic sees only communication with the VPN service. An agency tapping the subscriber's traffic sees only the IP address of the subscriber exchanging encrypted content with the IP address of the VPN service.

Unfortunately ubiquitous surveillance is ubiquitous: if a national spy agency cannot tap the traffic itself then it can ask its Five Eyes partner to do the tap. This means that the traffic of the VPN service is also tapped. One interface contains traffic with the VPN subscribers; the other interface contains unencrypted traffic from all subscribers to the Internet. Recall that the content of the traffic with the VPN subscribers is encrypted.

Can a national spy agency relate the unencrypted Internet traffic back to the subscriber's connections? If so then it can tap content and metadata as if the VPN service were not being used.

Unfortunately it is trivial for a national spy agency to do this. ‘Traffic analysis’ is the examination of patterns of traffic. TCP traffic is very vulnerable to traffic analysis:

  • Examining TCP traffic we see a very prominent pattern at the start of every connection. This ‘TCP three-way handshake’ sends one small packet all by itself for the entire round-trip time, receives one small packet all by itself for the entire round trip time, then sends one large packet. Within a small time window we will see the same pattern in VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

  • Examining TCP traffic we see a very prominent pattern when a connection encounters congestion. This ‘TCP multiplicative decrease’ halves the rate of transmission when the sender has not received an Acknowledgement packet within the expected time. Within a small time window we will see the same pattern in VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

These are only the gross features. It doesn't take much imagination to see that the interval between Acks can be used to group connections with the same round-trip time. Or that the HTTP GET and response is also prominent. Or that jittering in web streaming connections is prominent.

In short, by using traffic analysis a national spy agency can — with a high probability — assign the unencrypted traffic on the Internet interface to the encrypted traffic from the VPN subscriber. That is, given traffic with (Internet site IP address, VPN service Internet-facing IP address) and (VPN service subscriber-facing IP address, Subscriber IP address) then traffic analysis allows a national spy agency to reduce that to (Internet site IP address, Subscriber IP address). That is, the same result as if the VPN service was not used.
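To make the correlation step concrete, here is a minimal sketch of how the timing patterns above could be extracted with ordinary capture tools. It is purely illustrative: the capture file names, the OpenVPN port (1194) and the 50 ms window are assumptions, not a description of any actual agency tooling.

# Clear side: timestamps of new TCP connections leaving the VPN server.
tshark -r internet-side.pcap \
    -Y 'tcp.flags.syn == 1 and tcp.flags.ack == 0' \
    -T fields -e frame.time_epoch -e ip.dst > clear-starts.txt

# Encrypted side: timestamps and sizes of tunnel packets to and from the
# subscriber. The small/small/large burst of a TCP handshake shows up here
# as a size pattern even though the payload is encrypted.
tshark -r subscriber-side.pcap -Y 'udp.port == 1194' \
    -T fields -e frame.time_epoch -e frame.len > tunnel-packets.txt

# Any clear-side connection start within ~50 ms of a matching burst on the
# encrypted side ties the subscriber to the destination IP address.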

The only question remaining is whether the premier national spy agencies are actually exchanging tables of (datetime, VPN service subscriber-facing IP address, Internet site IP address, Subscriber IP address) to allow national taps of (datetime, VPN server IP address, Subscriber IP address) to be transformed into (datetime, Internet site IP address, Subscriber IP address). There is nothing technical to prevent them from doing so. Based upon the revealed behaviour of the Five Eyes agencies it is reasonable to expect that this is being done.

Tim Serong: Dear ASIO

Sat, 2014-09-27 11:27

Since the Senate passed legislation expanding your surveillance powers on Thursday night, you’ve copped an awful lot of flack on Twitter. Part of the problem, I think – aside from the legislation being far too broad – is that we don’t actually know who you are, or what exactly it is you get up to. You could be part of a spy novel, a movie or a decades-long series of cock ups. You could be script kiddies with a budget. Or you could be something else entirely.

At times like this I try to remind myself to assume good faith; to remember that most people are basically decent and are trying to live a good life. Some people are even trying to make the world a better place, whatever that might mean.

For those of you then who are decent people, and who are trying to keep Australia safe from whatever mysterious threats are out there that we don’t know about – all without wishing to impinge on or risk destroying the freedoms that we enjoy here – you have my thanks.

For those of you involved in the formulation of The National Security Legislation Amendment Bill 2014 (No 1) – you who might be reading this post as I type it, rather than after I publish it – I have tried very, very hard to imagine that you honestly believe you are making the world a better place. And maybe you do actually think that, but for my part I cannot see the powers granted as anything other than a direct assault on our democracy. As Glenn Greenwald pointed out, I should be more worried about bathroom accidents, restaurant meals and lightning strikes than terrorism. As a careful bath user with a strong stomach and a sturdy house to hide in, I think I’m fairly safe on that front. Frankly I’m more worried about climate change. Do you have anyone on staff who can investigate that threat to our national security?

Anyway, thanks for reading, and I’ll take it as a kindness if you don’t edit this post without asking first.

Regards,

Tim Serong

Linux Users of Victoria (LUV) Announce: LUV Main October 2014 Meeting: MySQL + CCNx

Sat, 2014-09-27 00:29
Start: Oct 7 2014 19:00
End: Oct 7 2014 21:00
Location:

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Link:  http://luv.asn.au/meetings/map

Stewart Smith, A History of MySQL

Hank, Content-Centric Networking

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Andrew Pollock: [life] Day 240: A day of perfect scheduling

Fri, 2014-09-26 22:25

Today was a perfectly lovely day, the schedule just flowed so nicely.

I started the day making a second batch of pizza sauce for the Riverfire party I'm hosting tomorrow night. Once that was finished, we walked around the corner to my dentist for a check up.

Zoe was perfect during the check up, she just sat in the corner of the room and watched and also played on her phone. The dentist commented on how well behaved she was. It blew my mind to run into Tanya there for the second time in a row. We're obviously on the same schedules, but it's just crazy to always wind up with back to back appointments.

After the appointment, we pretty much walked onto a bus to the city, so we could meet Nana for lunch. While we were on the bus, I called up and managed to get haircut appointments for both of us at 3pm. I figured we could make the return trip via CityCat, and the walk home would take us right past the hairdresser.

The bus got us in about 45 minutes early, so we headed up to the Museum of Brisbane in City Hall to see if we could get into the clock tower. We got really lucky, and managed to get onto the 11:45am tour.

Things have changed since I was a kid and my Nana used to take me up the tower. They no longer let you be up there when the bells chime, which is a shame, but apparently it's very detrimental to your hearing.

Zoe liked the view, and then we went back down to King George Square to wait for Nana.

We went to Jo Jo's for lunch, and they somehow managed to lose Zoe's and my lunch order, and after about 40 minutes of waiting, I chased it up, and it still took a while to sort out. Zoe was very patient waiting the whole time, despite being starving.

After lunch, she wanted to see Nana's work, so we went up there. On the way back out, she wanted to play with the Drovers statues on Ann Street for a bit. After that, we made our way to North Quay and got on a CityCat, which nicely got us to the hairdresser in time for our appointment.

After that, we walked home, and drove around to check out a few bulk food places that I've learned about from my Thermomix Consultant training. We checked out a couple in Woolloongabba, and they had some great stuff available to the public.

It was getting late, so after a failed attempt at finding one in West End, we returned home so I could put dinner on.

It was a smoothly flowing day today, and Zoe handled it so well.

Michael Still: The Decline and Fall of IBM: End of an American Icon?

Fri, 2014-09-26 19:27

ISBN: 0990444422

LibraryThing

This book is quite readable, which surprises me given the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long term success in a long time and there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.

Tags for this post: book robert_cringely ibm corporate decline

Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks

Andrew Pollock: [life] Day 239: Cousin catch up, ice skating and a TM5 pickup

Fri, 2014-09-26 10:25

My sister, brother-in-law and niece are in town for a wedding on the weekend, so after collecting Zoe from the train station, we headed out to Mum and Dad's for the morning to see them all. My niece, Emma, has grown heaps since I last saw her. She and Zoe had some nice cuddles and played together really well.

I'd also promised Zoe that I'd take her ice skating, so that dovetailed pretty well with the visit, as instead of going to Acacia Ridge, we went to Boondall after lunch and skated there.

Zoe was very confident this time on the ice. She spent more time without her penguin than with it, so I think next time she'll be fine without one at all. She only had a couple of falls, the first one I think was a bit painful for her and a shock, but after that she was skating around really well. I think she was quite proud of herself.

My new Thermomix had been delivered to my Group Leader's house, so after that, we drove over there so I could collect it and get walked through how I should handle deliveries for customers. Zoe napped in the car on the way, and woke up without incident, despite it being a short nap. She had a nice time playing with Maria's youngest daughter while Maria walked me through everything, which was really lovely.

Time got away on me a bit, and we hurried home so that Sarah could pick Zoe up. I then got stuck into making some pizza sauce for our Riverfire pizza party on Saturday night.

Craige McWhirter: Enabling OpenStack Roles To Resize Volumes Via Policy

Thu, 2014-09-25 15:28

If you have volume-backed OpenStack instances, you may need to resize them. In most cases you'll want unprivileged users to be able to resize their own instances. This documents how to modify the Cinder policy so that tenant members assigned to a particular role have permission to resize volumes.

Assumptions:
  • You've already created your OpenStack tenant.
  • You've already created your OpenStack user.
  • You know how to allocate roles to users in tenants.

Select the Role

You will need to create or identify a suitable role. In this example I'll use "Support".

Modify policy.json

Once the role has been created or identified, add these lines to the /etc/cinder/policy.json on the Cinder API server(s):

"context_is_support": [["role:Support"]], "admin_or_support": [["is_admin:True"], ["rule:context_is_support"]],

Modify "volume_extension:volume_admin_actions:reset_status" to use the new context:

"volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]], Add users to the role

Add users who need privileges to resize volumes to the "Support" role in their tenant.
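For completeness, role allocation (assumed knowledge above) looks roughly like this with the keystone CLI of that era; the user and tenant names below are placeholders, and flag spellings varied slightly between client versions:

# Create the role if it doesn't already exist, then grant it (placeholder names).
keystone role-create --name Support
keystone user-role-add --user alice --role Support --tenant acme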

The users you have added to the "Support" role should now be able to resize volumes.

Gabriel Noronha: EVSE for Sun Valley Tourist Park

Wed, 2014-09-24 22:26

You might have seen a couple of posts about Sun Valley Tourist Park; that is because we go there a lot to visit grandma and grandpa (my wife’s parents). Because it's outside our return range, we have to charge there to get home if we take the I-MIEV, but the Electric Vehicle Supply Equipment (EVSE) that comes with the car limits the charge rate to 10 amps max. So we convinced the park to install a 32 amp EVSE. This allows us to charge at the I-MIEV's full rate of 13 amps, so 30% faster.

AeroVironment EVSE-RS at Sun Valley

If you want to know more about the EVSE, it's an AeroVironment EVSE-RS. It should work fine with the Holden Volt, Mitsubishi Outlander PHEV, I-MIEV 2012 or later (it may not work with 2010 models) and the Nissan LEAF.

If you are on the Central Coast and want somewhere to charge, you can find the details on how to contact the park on PlugShare. It's available for public use during office hours, depending on how busy the park is, provided the driver phones ahead and pays a nominal fee.


Andrew Pollock: [life] Day 238: Picnic play date in Roma Street Parklands with a side trip to the museum

Wed, 2014-09-24 22:25

School holidays are a good time for Zoe to have a weekday play date with my friend Kim's daughter Sarah, and we'd lined up a picnic in Roma Street Parklands today.

Zoe had woken up at about 1:30am with a nightmare, and subsequently slept in. It had taken me forever to get back to sleep, so I was pretty tired and slept a bit late too.

We got going eventually, and narrowly missed a train, so had to wait for the next one. We got into the Parklands pretty much on time, and despite the drizzly weather, had a nice morning making our way around the gardens.

The weather progressively improved by lunchtime, and after an early lunch, Kim and kids headed home, and we headed into the museum.

Unfortunately I was wrong about which station we had to get off to go to the museum, and we got off at Southbank rather than South Brisbane and had a long, slow walk of shame to get to the museum.

We used the freebie tickets I'd gotten to see the Deep Oceans exhibit, before heading home. I love the museum's free cloaking service, as it allowed me to divest myself of picnic blankets, my backpack and the Esky while we were at the museum.

While we were making the long walk of shame to the museum, I got a call from the car repairer to say that my car was ready, so after we returned to the rental car at the train station we drove directly to the repairer and collected the car, which involved a lot of shuffling of car contents and car seats. I then thought I'd lost my car key, and that involved an unnecessary second visit back to the car rental place on foot before I discovered it was in my pocket all along.

When we got home, Zoe wanted to play pirates again with our chocolate gold coins. What we wound up playing was a variant of "hide the thimble" in her bedroom, where she hid the chocolate gold coins all over the place, and then proceeded to show me where she'd hidden them all. It was very cute.

There was a tiny bit of TV before Sarah arrived to pick up Zoe.

Andrew Pollock: [life] Day 237: A day with the grandparents and a lot of cooking

Wed, 2014-09-24 22:25

Yesterday was a pretty full on day. I had to drop the car off to get the rear bumper replaced, and I also had to get to my Thermomix Consultant practical training by 9:30am.

I'd arranged to drop the car off at 8am and then pick up a rental car, and Mum was coming to collect Zoe at 8:30am. Zoe woke up at a good time, and we managed to get going extra early, so I dropped the car off early and was picking up the rental car before 8am.

Mum also arrived extra early, so I used the additional time to swing by the Valley to check my PO box, as I had a suspicion my Thermomix Consultant kit might have arrived, and it had.

I then had to get over to my Group Leader's house to do the practical training, which consisted of watching and giving a demo, with a whole bunch of advice and feedback along the way. It was a long day of much cooking, but it was good to get all of the behind the scenes tricks on how to prepare for a demo, give the demo and have it all run smoothly and to schedule.

I then headed over to Mum and Dad's for dinner. Zoe had had a great day, and my Aunty Peggy was also down from Toowoomba. We stayed for dinner and then headed home. I managed to get Zoe to bed more or less on time.

Tim Serong: Something Like a Public Consultation

Wed, 2014-09-24 20:27

The Australian government often engages in public consultation on a variety of matters. This is a good thing, because it provides an opportunity for us to participate in our governance. One such recent consultation was from the Attorney-General’s Department on Online Copyright Infringement. I quote:

On 30 July 2014, the Attorney-General, Senator the Hon George Brandis, and the Minister for Communications Malcolm Turnbull MP released a discussion paper on online copyright infringement.

Submissions were sought from interested organisations and individuals on the questions outlined in the discussion paper and on other possible approaches to address this issue.

Submissions were accepted via email, and there was even a handy online form where you could just punch in your answers to the questions provided. The original statement on publishing submissions read:

Submissions received may be made public on this website unless otherwise specified. Submitters should indicate whether any part of the content should not be disclosed to the public. Where confidentiality is requested, submitters are encouraged to provide a public version that can be made available.

This has since been changed to:

Submissions received from peak industry groups, companies, academics and non-government organisations that have not requested confidentiality are being progressively published on the Online copyright infringement—submissions page.

As someone who, in a fit of inspiration late one night (well, a fit of some sort, but I’ll call it inspiration), put in an individual submission, I am deeply disappointed that submissions from individuals are apparently not being published. Geordie Guy has since put in a Freedom of Information request for all individual submissions, but honestly the AGD should be publishing these. It was after all a public consultation.

For the record then, here’s my submission:

Question 1: What could constitute ‘reasonable steps’ for ISPs to prevent or avoid copyright infringement?

In our society, internet access has become a necessary public utility.  We communicate with our friends and families, we do our banking, we purchase and sell goods and services, we participate in the democratic process; we do all these things online.  It is not the role of gas, power or water companies to determine what their customers do with the gas, power or water they pay for.  Similarly, it is not the role of ISPs to police internet usage.

Question 2: How should the costs of any ‘reasonable steps’ be shared between industry participants?

Bearing in mind my answer to question 1, any costs incurred should rest squarely with the copyright owners.

Question 3: Should the legislation provide further guidance on what would constitute ‘reasonable steps’?

The legislation should explicitly state that:

  1. Disconnection is not a reasonable step given that internet access is a necessary public utility.
  2. Deep packet inspection, or any other technological means of determining the content, or type of content being accessed by a customer, is not a reasonable step as this would constitute a gross invasion of privacy.

Question 4: Should different ISPs be able to adopt different ‘reasonable steps’ and, if so, what would be required within a legislative framework to accommodate this?

Given that it is not the role of ISPs to police internet usage (see answer to question 1), there are no reasonable steps for ISPs to adopt.

Question 5: What rights should consumers have in response to any scheme or ‘reasonable steps’ taken by ISPs or rights holders? Does the legislative framework need to provide for these rights?

Consumers need the ability to freely challenge any infringement notice, and there must be a guarantee they will not be disconnected.  The fact that an IP address does not uniquely identify a specific person should be enshrined in legislation.  The customer’s right to privacy must not be violated (see point 2 of answer to question 3).

Question 6: What matters should the Court consider when determining whether to grant an injunction to block access to a particular website?

As we have seen with ASIC’s spectacularly inept use of section 313 of Australia’s Telecommunications Act to inadvertently block access to 250,000 web sites, such measures can and will result in wild and embarrassing unintended consequences.  In any case, any means employed in Australia to block access to overseas web sites is exceedingly trivial to circumvent using freely available proxy servers and virtual private networks.  Consequently the Court should not waste its time granting injunctions to block access to web sites.

Question 7: Would the proposed definition adequately and appropriately expand the safe harbour scheme?

The proposed definition would seem to adequately and appropriately expand the safe harbour scheme, assuming the definition of “service provider” extends to any person or entity who provides internet access of any kind to any other person or entity.  For example, if my personal internet connection is also being used by a friend, a family member or a random passerby who has hacked my wifi, I should be considered a service provider to them under the safe harbour scheme.

Question 8: How can the impact of any measures to address online copyright infringement best be measured?

I am deeply dubious of the efficacy and accuracy of any attempt to measure the volume and impact of copyright infringement.  Short of actively surveilling the communications of the entire population, there is no way to accurately measure the volume of copyright infringement at any point in time, hence there is no way to effectively quantify the impact of any measures designed to address online copyright infringement.

Even if the volume of online copyright infringement could be accurately measured, one cannot assume that an infringing copy equates to a lost sale.  At one end of the spectrum, a single infringing copy could have been made by someone who would never have been willing or able to pay for access to that work.  At the other end of the spectrum, a single infringing copy could expose a consumer to a whole range of new media, resulting in many purchases that never would have occurred otherwise.

Question 9: Are there alternative measures to reduce online copyright infringement that may be more effective?

There are several alternative measures that may be more effective, including:

  1. Content distributors should ensure that their content is made available to the Australian public at a reasonable price, at the same time as releases in other countries, and absent any Digital Restrictions Management technology (DRM, also sometimes erroneously termed Digital Rights Management, which does more to inconvenience legitimate purchasers than it does to curb copyright infringement).
  2. Content creators and distributors should be encouraged to update their business models to accommodate and take advantage of the realities of ubiquitous digital communications.  For example, works can be made freely available online under liberal licenses (such as Creative Commons Attribution Share-Alike) which massively increases exposure, whilst also being offered for sale, perhaps in higher quality on physical media, or with additional bonus content in the for-purchase versions.  Public screenings, performances, displays, commissions and so forth (depending on the media in question) will contribute further income streams all while reducing copyright infringement.
  3. Australian copyright law could be amended such that individuals making copies of works (e.g. downloading works, or sharing works with each other online) on a noncommercial basis does not constitute copyright infringement.  Changing the law in this way would immediately reduce online copyright infringement, because a large amount of activity currently termed infringement would no longer be seen as such.

Finally, as a member of Pirate Party Australia it would be remiss of me not to provide a link to the party’s rather more detailed and well-referenced submission, which thankfully was published by the AGD. We’ve also got a Pozible campaign running to raise funds for an English translation of the Dutch Pirate Bay blocking appeal trial ruling, which will help add to the body of evidence demonstrating that web site blocking is ineffective.

Craige McWhirter: Resizing a Root Volume for an Openstack Instance

Wed, 2014-09-24 18:28

This documents how to resize an OpenStack instance that has its root partition backed by a volume. In this circumstance "nova resize" will not resize the disk space as expected.

Assumptions:

Shut down the instance you wish to resize

Check the status of the source VM and stop it if it's not already stopped:

$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks               |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | ACTIVE | -          | Running     | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+

$ nova stop ResizeMe0

$ nova list
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks               |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | SHUTOFF | -          | Running     | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+

Identify and extend the volume

Obtain the ID of the volume attached to the instance:

$ nova show ResizeMe0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "616dbaa6-f5a5-4f06-9855-fdf222847f3e"}] |

Set the volume's state to "available" so we can resize it:

$ cinder reset-state --state available 616dbaa6-f5a5-4f06-9855-fdf222847f3e
$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " status "
|   status   | available |

Extend the volume to the desired size:

$ cinder extend 616dbaa6-f5a5-4f06-9855-fdf222847f3e 4

Set the status back to being in use:

$ cinder reset-state --state in-use 616dbaa6-f5a5-4f06-9855-fdf222847f3e

Start the instance back up again

Start the instance again:

$ nova start ResizeMe0

Voila! Your old instance is now running with an increased disk size as requested.
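If you find yourself doing this regularly, the steps above collapse into a small shell sketch. The script name, the UUID extraction and the complete lack of error handling are illustrative assumptions; adapt it to your environment:

#!/bin/bash
# resize-root-volume.sh (hypothetical): grow the root volume of a volume-backed instance.
# Usage: ./resize-root-volume.sh <instance-name> <new-size-in-GB>
set -e
INSTANCE=$1
NEW_SIZE=$2

# Stop the instance, then find the UUID of its attached volume
nova stop "$INSTANCE"
VOLUME=$(nova show "$INSTANCE" | grep volumes_attached | grep -o '[0-9a-f-]\{36\}')

# Temporarily mark the volume available, extend it, then mark it in-use again
cinder reset-state --state available "$VOLUME"
cinder extend "$VOLUME" "$NEW_SIZE"
cinder reset-state --state in-use "$VOLUME"

# Boot the instance back up
nova start "$INSTANCE"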

Russell Coker: Cheap 3G Data in Australia

Wed, 2014-09-24 18:26
The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access; the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VOIP. It’s also not very useful for Youtube and other online video. There is a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house; it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40, which have to be used in a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers; the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month. It’s a really good deal if you average 3.3G per month as you can expect it to be faster and have better coverage.

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Android app and web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G AKA LTE connection. But given that 3G is already fast enough to use an expensive amount of data, 4G doesn’t seem necessary to me. I’ve done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven’t felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access and most people have access to old smart phones.

Related posts:

  1. Changing Phone Prices in Australia 18 months ago when I signed up with Virgin Mobile...
  2. Cheap Net Access in Australia The cheapest ADSL or Cable net access in Australia seems...
  3. Aldi Changes, Cheap Telcos, and Estimating Costs I’ve been using Aldi as my mobile phone provider for...

Robert Collins: what-poles-for-the-tent

Wed, 2014-09-24 16:28

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That's the only explanation I can come up with for needing a Ring 0 – because it's still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don't think that's true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we'll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that. “

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it's the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it's happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered, etc. This team wouldn’t be the TC: they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then suggest:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that we have large-N, N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.

Lev Lafayette: Opportunities and Issues in Free Software

Tue, 2014-09-23 23:29

Presentation to Software Freedom Day (Melbourne), September 2014

Andrew McDonnell: Evaluating the security of OpenWRT (part 2) – bugfix

Tue, 2014-09-23 23:26

I had a bug applying the RELRO flag to busybox; this is fixed in GitHub now.

For some reason the build links the busybox binary a second time and I missed the flag.

Also an omission from my prior blog entry: uClibc has RELRO turned on in its configuration already in OpenWRT, so it does not need the flags passed through to it. However, it is failing to build its libraries with RELRO in all cases, in spite of the flag. This problem doesn’t happen in a standalone uClibc build from the latest uClibc trunk, but I haven’t scoped how to get uClibc trunk into OpenWRT. This may have been unclear in the way I described it originally.
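For anyone wanting to verify the result themselves, a generic way to check whether a given binary or library actually ended up with RELRO is to inspect its ELF headers; the path below is a placeholder, not the actual OpenWRT build path:

# Partial RELRO shows up as a GNU_RELRO program header
readelf -l <path-to>/busybox | grep GNU_RELRO

# Full RELRO additionally requires BIND_NOW in the dynamic section
readelf -d <path-to>/busybox | grep BIND_NOW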