Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Simon Horms: kexec-tools 2.0.8 Released

Wed, 2014-10-08 17:26

I have released version 2.0.8 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility for Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

Simon Horms: kexec-tools 2.0.7 Released

Wed, 2014-10-08 17:26

I have released version 2.0.7 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility for Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

Stewart Smith: Quick MySQL 5.7.5 thoughts

Wed, 2014-10-08 10:26

It was great to see the recent announcement of MySQL 5.7.5 over at the MySQL Server Team blog. I’m looking forward to throwing this release at some of the POWER8 systems we have for a couple of really good reasons: 1) Does it work better than previous MySQL 5.7 releases “out of the box” on POWER? 2) What do the scalability improvements in 5.7.5 mean for peak QPS on POWER (and can I set a new record?).

Looking through the list of changes, I’m (casually not) surprised as to the number of features and the amount of work that echoes what we were working on in Drizzle a few years ago.

A closer look at the source for 5.7.5 may also prove enlightening; I wonder how the MySQL team is coping with a lot of the legacy code rot and the absolutely atrocious internal APIs they inherited…

Andrew Pollock: [life] Day 251: Kindergarten, some more training and other general running around

Wed, 2014-10-08 09:25

Yesterday was the first day of Term 4. I can't believe we're at Term 4 already. This year has flown by.

I had a letter from the PAG to all of the Kindergarten parents to put into the notice pockets at Kindergarten first up, so I drove to the Kindergarten not long after opening time, and quickly put them in all the pockets.

Then I headed over to Beaurepaires for a free tyre checkup.

After that, I headed over to my Group Leader's house for some more practical training. I'm feeling pretty confident about my ability to conduct a Thermomix demonstration now, especially after having done my first "real" one on Sunday night.

After that, it was time to pick up Zoe from Kindergarten. It was wonderful to see her after a week away. She wanted to go to Megan's house for a play date, but Megan had tennis, so we went and did the grocery shopping first.

After the grocery shopping, we popped around to Megan's for a bit, and then went home.

I made dinner, and Zoe seemed pretty tired, so I got her to bed a little bit early.

Craige McWhirter: Post Receive Git Hook to Push to Github

Tue, 2014-10-07 22:27

I self-host my own git repositories. I also use github as a backup for many of them. I have a use case for a number of them where the data changes can be quite large, and pushing to both my own and github's remote services doubles my bandwidth usage in my mostly bandwidth constrained environments.

To get around those constraints, I wanted to push only to my git repository and have that service then push to github. This is how I went about it, courtesy of Matt Palmer.

Assumptions:
  • You have your own git server
  • You have a github account
  • You're fairly familiar with git
Authentication

It's likely you have not used your remote server to connect to github. To make sure everything happens smoothly, you need to:

  • Add the SSH key for your user account on your server to the authorised keys on your github account
  • SSH just once from your server to github, so that github's host key is accepted.
Add a Remote for github

In the bare git repository on your server, you need to add the remote configuration. On a Debian server using gitweb, this file would be located at /var/cache/git/MYREPO/config. Add the lines below to it:

[remote "github"] url = git@github.com:MYACCOUNT/MYREPO.git fetch = +refs/heads/*:refs/remotes/github/* autopush = true Add a post-receive Hook

Now we need to create a post-receive hook to process the push to github. Going with the previous example, edit /var/cache/git/MYREPO/hooks/post-receive

#!/bin/bash
for remote in $(git remote); do
    if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
        git push "$remote"
    fi
done
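
Two small practical notes, using the example repository path from above: git will only run the hook if it is executable, and the autopush flag can also be set from the command line rather than by editing the config file directly.

# git will only run the hook if it is executable:
chmod +x /var/cache/git/MYREPO/hooks/post-receive

# Optional: set the autopush flag without editing the config file by hand:
git --git-dir=/var/cache/git/MYREPO config remote.github.autopush true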

Happy automated pushing to github.

Stewart Smith: Things that are not news

Tue, 2014-10-07 09:26

The following are not news:

  • Human has genitals (which may/may not have been exposed)
  • Absolutely anything about The Bachelor
  • Anything any celebrity is wearing, anyone they’re dating or if they are currently wearing underwear.
  • any list of “top 10” things.
  • TEENAGERS DO THINGS THAT TEENAGERS DO!

(feel free to extend the list)

linux.conf.au News: Miniconf Call for Presentations - Linux.conf.au 2015 Systems Administration Miniconf

Tue, 2014-10-07 07:28

As part of the linux.conf.au conference in Auckland, New Zealand in January 2015 we will be holding a one day mini conference oriented to Linux Systems Administration.

The organisers of the Systems Administration Miniconf would like to invite proposals for presentations to be delivered at the Miniconf. Please forward this CFP to your colleagues, social networks, and other relevant mailing lists.

This is our 9th year at Linux.conf.au. To see presentations from our previous years, see the miniconf website: http://sysadmin.miniconf.org/.

Presentation Topics

Topics for presentations could include (but are not limited to):

Systems Administration Best Practice, Infrastructure as a Service (IAAS), Platform as a Service (PAAS), Docker, Containerisation, Software as a Service (SAAS), Virtualisation, "Cloud" Computing, Service Migration, Dealing with Extranets, Coping with the shortage of IPv4 addresses, Software Defined Networking (SDN), DevOps, Configuration Management, Bootstrapping systems, Lifecycle Management, Monitoring, Dealing with BYOD, Backups in a virtual/distributed world, Security in a world where you cannot even trust your shell, Troubleshooting, Log Management, Buying Decisions, Identity Management, Multi-Factor Authentication, Web and Email management, Keeping legacy systems functioning, and other War stories from the Real World.

We strongly welcome topics on best practice, new developments in systems administration and cutting edge techniques to better manage Linux environments both virtual and physical.

Presentations should be of a technical nature and speakers should assume that members of the audience have at least a couple of years' experience in Unix/Linux administration.

Format of Presentations

We are now seeking proposals for presentations at the mini-conference.

We have openings for:

  • 45 minute double-length presentations
  • 20 minute full presentations
  • 10-20 minute "Life in the Real World" presentations
  • 5-10 minute "lightning talks"

Please note, due to the single day available (and whole-LCA keynote before morning tea), we expect the majority of available timeslots to be 20 minutes long or less.

The 10-20 minute "Life the Real World" presentations are intended for people to talk about their sites or real world projects/problems. Information could include: Servers, talks, tools, people and roles, experiences, software, operating systems, monitoring, alerting, provisioning etc. The intent is to give attendees a picture of what is happening in the "real world" and some understanding of how other sites work, as well as offer you a chance to get suggestions on other tools and practices that might help your site. Discussion of "lessons learned" through really trying to use some of these things are especially welcomed.

Submitting talks

Please note that in order to give a presentation or attend the miniconf you must be registered (and paid up) for the main linux.conf.au conference. Presenting at the Miniconf does not entitle you to discounted or free registration at the main conference nor priority with registration. Unfortunately the Miniconf has no budget for sponsorship of speakers.

Submissions should be made via: http://sysadmin.miniconf.org/proposal15.html

Questions should be sent to: lca2015 at sysadmin.miniconf.org

Dates and Deadlines

To encourage early submissions, priority (both of inclusion and scheduling) will be given to presentations submitted before the 19th of October 2014.

  • 2014-10-19 - Deadline for early submissions
  • 2014-10-26 - Early submissions confirmation
  • 2014-11-16 - Deadline for submissions
  • 2014-11-30 - Confirmation of all presentations
  • 2015-01-13 - Start of Miniconf and 2nd day of linux.conf.au 2015
Contact and Questions

Please see our website for more information on the miniconf, past presentations and presenting at it. If you have any questions please feel free to email the organisers at: lca2015 at sysadmin.miniconf.org

Ewen McNeill

LCA2015 Sysadmin Miniconf Convener

Stewart Smith: MariaDB 10.0 on POWER

Mon, 2014-10-06 18:26

Good news for those wanting to run MariaDB on POWER systems: the latest 10.0 bzr tree (as of a couple of weeks ago) builds and runs well!

I recently pulled the latest MariaDB 10.0 from BZR and built it on a POWER8 system in the lab to run some quick tests. The MariaDB team has done some work on getting MariaDB to run on POWER recently, a bunch of which is based off my work on MySQL on POWER.

There’s obviously still some work in progress going on, but my initial results show performance within around 10% of MySQL, so with a bit of work we will hopefully see MariaDB reach performance parity.

One interesting find was that the code accounting for thread memory usage uses a single atomic variable: this does not scale, and it does end up showing up in profiles.

I’ll comment more on the code in a future post, but it looks like we will have MariaDB being functional on POWER in an upcoming release.

Stewart Smith: MariaDB & Trademarks, and advice for your project

Mon, 2014-10-06 11:26

I want to emphasize this for those who have not spent time near trademarks: trademarks are trouble and another one of those things where no matter what, the lawyers always win. If you are starting a company or an open source project, you are going to have to spend a whole bunch of time with lawyers on trademarks or you are going to get properly, properly screwed.

MySQL AB always held the trademark for MySQL. There’s this strange thing with trademarks and free software, where while you can easily say “use and modify this code however you want” and retain copyright on it (for, say, selling your own version of it), this does not translate too well to trademarks as there’s a whole “if you don’t defend it, you lose it” thing.

The law is, in effect, telling you that at some point you have to be an arsehole to not lose your trademark. (You can be various degrees of arsehole about it when you have to, and whenever you do, you should assume that people are acting in good faith and just have not spent the last 40,000 years of their life talking to trademark lawyers like you have.) Basically, you get to spend time telling people that they have to rename their product from “MySQL Headbut” to “Headbut for MySQL” and that this is, in fact, a really important difference.

You also, at some point, get to spend a lot of time talking about when the modifications made by a Linux distribution to package your software constitute sufficient changes that it shouldn’t be using your trademark (basically so that you’re never stuck if some arse comes along, forks it, makes it awful and keeps using your name, to the detriment of your project and business).

If you’re wondering why Firefox isn’t called Firefox in Debian, you can read the Mozilla trademark policy and probably some giant thread on debian-legal I won’t point to.

Of course, there’s the MySQL trademark policy, and when I was at Percona, I spent some non-trivial amount of time attempting to ensure we had a trademark policy that would work from a legal angle, a corporate angle, and a get-our-software-into-linux-distros-happily angle.

So, back in 2010, Monty started talking about a draft MariaDB trademark policy (see also, Ubuntu trademark policy, WordPress trademark policy). If you are aiming to create a development community around an open source project, this is something you need to get right. There is a big difference between contributing to a corporate open source product and an open source project – both for individuals and corporations. If you are going to spend some of your spare time contributing to something, the motivation goes down when somebody else is going to directly profit off it (corporate project) versus a community of contributors and companies who will all profit off it (open source project). The most successful hybrid of these two is likely Ubuntu, and I am struggling to think of another (maybe Fedora?).

Linux is an open source project, RedHat Enterprise Linux is an open source product and in case it wasn’t obvious when OpenSolaris was no longer Open, OpenSolaris was an open source product (and some open source projects have sprung up around the code base, which is great to see!). When a corporation controls the destiny of the name and the entire source code and project infrastructure – it’s a product of that corporation, it’s not a community around a project.

From the start, it seemed that one of the purposes of MariaDB was to create a developer community around a database server that was compatible with MySQL, and eventually, to replace it. MySQL AB was not very good at having an external developer community; it was very much an open source product and not an open source project (one of the downsides to hiring just about anyone who ever submitted a patch). Things struggled further at Sun and (I think) have actually gotten better for MySQL at Oracle – not perfect, I could pick holes in it all day if I wanted, but certainly better.

When we were doing Drizzle, we were really careful about making sure there was a development community. Ultimately, with Drizzle we made a different fatal error, and one that we knew had happened to another open source project and nearly killed it: all the key developers went to work for a single company. Looking back, this is easily my biggest professional regret and one day I’ll talk about it more.

Brian Aker observed (way back in 2010) that MariaDB was, essentially, just Monty Program. In 2013, I did my own analysis on the source tree of MariaDB 5.5.31 and MariaDB 10.0.3-ish to see if indeed there was a development community (tl;dr; there wasn’t, and I had the numbers to prove it). If you look back at the idea of the Open Database Alliance and the MariaDB Foundation, actually, I’m just going to quote Henrik here from his blog post about leaving MariaDB/Monty Program:

When I joined the company over a year ago I was immediately involved in drafting a project plan for the Open Database Alliance and its relation to MariaDB. We wanted to imitate the model of the Linux Foundation and Linux project, where the MariaDB project would be hosted by a non-profit organization where multiple vendors would collaborate and contribute. We wanted MariaDB to be a true community project, like most successful open source projects are – such as all other parts of the LAMP stack.

….

The reality today, confirmed to me during last week, is that:

Those in charge at Monty Program have decided to keep ownership of the MariaDB trademark, logo and mariadb.org domain, since this will make the company more valuable to investors and eventually to potential buyers.

Now, with Monty Program being sold to/merged into (I’m really not sure) SkySQL, it was SkySQL who had those things. So instead of having Monty Program being (at least in theory) one of the companies working on MariaDB and following the Hacker Business Model, you now have a single corporation with all the developers, all of the trademarks, that is, essentially a startup with VC looking to be valuable to potential buyers (whatever their motives).

Again, I’m going to just quote Henrik on the us-vs-them on community here:

Some may already have observed that the 5.2 release was not announced at all on mariadb.org, rather on the Monty Program blog. It is even intact with the “us vs them” attitude also MySQL AB had of its community, where the company is one entity and “outside community contributors” is another. This is repeated in other communication, such as the recent Recently in MariaDB newsletter.

This was, again, back in 2010.

More recently, Jeremy Cole, someone who has pumped a fair bit of personal and professional effort into MySQL and MariaDB over the past (many) years, asked what seemed to be a really simple question on the maria-discuss mailing list. Basically, “What’s going on with the MariaDB trademark? Isn’t this something that should be under the MariaDB foundation?”

The subsequent email thread was as confusing as ever and should be held up as a perfect example about what not to do. Some of us had by now, for years, smelt something fishy going on around the talk of a community project versus the reality. At the time (October 2013), Rasmus Johansson (VP of Engineering at SkySQL and Board Member of MariaDB foundation) said this:

The MariaDB Foundation and SkySQL are currently working on the trademark issue to come up with a solution on what rights to the trademark each entity should have. Expect to hear more about this in a fairly near future.

 

MariaDB has from its beginning been a very community friendly project and much of the success of MariaDB relies in that fact. SkySQL of course respects that.

(and at the same time, there were pages that were “Copyright MariaDB” which, as it was pointed out, was not an actual entity… so somebody just wasn’t paying attention). Also, just to make things even less clear about where SkySQL the corporation, Monty Program the corporation and the MariaDB Foundation all fit together, Mark Callaghan noticed this text up on mariadb.com:

The MariaDB Foundation also holds the trademark of the MariaDB server and owns mariadb.org. This ensures that the official MariaDB development tree <https://code.launchpad.net/maria> will always be open for the MariaDB developer community.

So…. there’s no actual clarity here. I can imagine attempting to get involved with MariaDB inside a corporation and spending literally weeks talking to a legal department – which thrills significantly less than standing in lines at security in an airport does.

So, if you started off as yay! MariaDB is going to be a developer community around an open source project that’s all about participation, you may have even gotten code into MariaDB at various times… and then started to notice a bit of a shift… there may have been some intent to make that happen, to correct what some saw as some of the failings of MySQL, but the reality has shown something different.

Most recently, SkySQL has renamed themselves to MariaDB. Good luck to anyone who isn’t directly involved with the legal processes around all this in differentiating between MariaDB the project, the MariaDB Foundation and MariaDB the company, and in working out who owns what. Urgh. This is, in no way, like the Linux Foundation and Linux.

Personally, I prefer to spend my personal time contributing to open source projects rather than products. I have spent the vast majority of my professional life closer to the corporate side of open source, some of which you could better describe as closer to the open source product end of the spectrum. I think it is completely and totally valid to produce an open source product. Making successful companies, products and a butt-ton of money from open source software is an absolutely awesome thing to do and I, personally, have benefited greatly from it.

MariaDB is a corporate open source product. It is no different to Oracle MySQL in that way. Oracle has been up front and honest about it the entire time MySQL has been part of Oracle, everybody knew where they stood (even if you sometimes didn’t like it). The whole MariaDB/Monty Program/SkySQL/MariaDB Foundation/Open Database Alliance/MariaDB Corporation thing has left me with a really bitter taste in my mouth – where the opportunity to create a foundation around a true community project with successful business based on it has been completely squandered and mismanaged.

I’d much rather deal with those who are honest and true about their intentions than those who aren’t.

My guess is that this factored heavily into Henrik’s decision to leave in 2010 and (more recently) Simon Phipps’s decision to leave in August of this year. These are two people who I both highly respect, never have enough time to hang out with and I would completely trust to do the right thing and be honest when running anything in relation to free and open source software.

Maybe WebScaleSQL will succeed here – it’s a community with a purpose and several corporate contributors. A branch rather than a fork may be the best way to do this (Percona is rather successful with their branch too).

Sridhar Dhanapalan: Twitter posts: 2014-09-29 to 2014-10-05

Mon, 2014-10-06 01:26

Tridge on UAVs: CanberraUAV Outback Challenge 2014 Debrief

Sat, 2014-10-04 13:47

We've now had a few days to get back into a normal routine after the excitement of the Outback Challenge last week, so I thought this was a good time to write up a report on how things went for the team I was part of, CanberraUAV. There have been plenty of posts already about the competition results, so I will concentrate on the technical details of our OBC entry, what went right and what went badly wrong.



For comparison, you might like to have a look at the debrief I wrote two years ago for the 2012 OBC. In 2012 there were far fewer teams, and nobody won the grand prize, although CanberraUAV went quite close. This year there were more than 3 times as many teams and 4 teams met the criteria for the challenge, with the winner coming down to points. That means the Outback Challenge is well and truly "done", which reflects the huge advances in amateur UAV technology over the last few years.



The drama for CanberraUAV really started several weeks before the challenge. Our primary competition aircraft was the "Bushmaster", a custom built tail dragger with 50cc petrol motor designed by our chief pilot, Jack Pittar. Jack had designed the aircraft to have both plenty of room inside for whatever payload the rest of the team came up with, and to easily fit in the back of a station wagon. To achieve this he had designed it with a novel folding fuselage. Jack (along with several other team members) had put hundreds of hours into building the Bushmaster and had done a great job. It was beautifully laid out inside, and really was a purpose built Outback Challenge aircraft.



Just a couple of days after our D3 deliverable was sent in we had an unfortunate accident. A member of our local flying club was flying his CAP232 aerobatic plane at the same time as we were doing a test mission, and he misjudged a loop. The CAP232 loop went right through the rear of the Bushmaster fuselage, slicing it off. The Bushmaster hit the ground at 180 km/h, with predictable results.



Jack and Greg set to work on a huge effort to build a new Bushmaster, but we didn't manage to get it done in time. Luckily the OBC organisers allowed us to switch to our backup aircraft, a VQ Porter 2.7m RTF with some customisations. We had included the Porter in our D2 deliverable just in case it was needed, and already had plenty of autonomous hours up on it as we had been using it as a test aircraft for all our systems. It was the same basic style as the Bushmaster (a petrol powered tail dragger), but was a bit more cramped inside for fitting our equipment and used a bit smaller engine (a DLE35).



Strategy



Our basic strategy this year was the same as in 2012. We would search at a relatively low altitude (100m AGL this year, compared to 90m AGL in 2012), with an interleaved "mow the lawn" pattern. This year we set up the search with 60% overlap compared with 50% last year, and we had a longer turn around area at the end of each search leg to ensure the aircraft got back on track fully before it entered the search area. With that search pattern and a target airspeed of 28m/s we expected to cover the whole search area in 20 minutes, then we would cover it again in the reverse direction (with a 50m offset) if we didn't find Joe on the first pass.



As with 2012 we used an on-board computer to autonomously find Joe. This year we used an Odroid-XU, which is a quad core system running at 1.6GHz. That gave us a lot more CPU power than in 2012 (when we used a pandaboard), which allowed us to use more CPU intensive image recognition code. We did the first level histogram scanning at the full camera resolution this year (1280x960), whereas in 2012 we had run the first level scan at 640x480 to save CPU. That is why we were happy to fly a bit higher this year.



While the basic approach to image recognition was the same, we had improved the details of the implementation a lot in the last two years, with hundreds of small changes to the image scoring, communications and user interface. Using our 2012 image data as a guide (along with numerous test flights at our local flying field) we had refined the cuav code to provide much better object discrimination, and also to cope better with communication failures. We were pretty confident it could find Joe very reliably.



The takeoff



When the running order was drawn from a hat we were the 2nd last team on the list, so we ended up flying on the Friday. We'd been up early each morning anyway in case the order was changed (which does happen sometimes), and we'd actually been called out to the airfield on the Thursday afternoon, but didn't end up flying then due to high wind.



Our time to takeoff finally came just before 8am Friday morning. As with 2012 we did an auto takeoff, using the new tail-dragger takeoff code added to APM:Plane just a couple of months before.



Unfortunately the takeoff did not go as planned. In fact, we were darn lucky the plane got into the air at all! As soon as Jack flicked the switch on the transmitter to put the plane into AUTO it started veering left on the runway, and nearly scraped a wing as it limped its way into the air. A couple of seconds later it came quite close to the tents where the OBC organisers and our GCS were located.



It did get off the ground though, and missed the tents while it climbed, finally switching to normal navigation to the 2nd waypoint once it got to an altitude of 40m. Jack had his finger on the switch on the transmitter which would have taken manual control and very nearly aborted the takeoff. This would have to go down as one of the worst takeoffs in OBC history.



So why did it go so badly wrong? My initial analysis when I looked at the logs later was that the wind had pushed the plane sideways. After examining the logs more carefully though I discovered that while the wind did play a part, the biggest issue was the compass. For some reason the compass offsets we had loaded were quite different from the ones we had been testing with in all our flights in Canberra. I still don't know why the offsets changed, although the fact that we had compass learning enabled almost certainly played a part. We'd done dozens of auto takeoffs in Canberra with no issues, with the plane tracking beautifully down the center line of the runway. To have it go so badly wrong for the flight that matters was a disappointment.



I've decided that the best way to fix the issue for future flights is to make the auto takeoff code completely independent of the compass. Normally the compass is needed to get an initial yaw for the aircraft while it is not moving (as our hobby-grade GPS sensors can't give yaw when not moving), and that initial yaw is used to set the ground heading for the takeoff. That means that with the current code, any compass error at the time you change into AUTO will directly impact the takeoff.



Because takeoff is quite a short process (usually 20s or less), we can take an alternative approach. The gyros won't drift much in 20s, so what we can do is just keep a constant gyro heading until the aircraft is moving fast enough to guarantee a good GPS ground track. At that point we can add whatever gyro based yaw error has built up to the GPS ground course and use that as our navigation heading for the rest of the takeoff. That should make us completely independent of compass for takeoff, which should solve the problem for everyone, rather than just fixing it for our aircraft. I'm working on a set of patches to implement this, and expect it to be in the next release.



Note that once the initial takeoff is complete the compass plays almost no role in the flight of a fixed wing plane if you have the EKF enabled, unless you lose GPS lock. The EKF rejected our compass as being inconsistent, and happily got correct yaw from the fusion of GPS velocity and other sensors for the rest of the flight.



The search



After the takeoff things went much more smoothly. The plane headed off to the search area as planned, and tracked the mission extremely well. It is interesting to compare the navigation accuracy of this year's mission with the 2012 mission. In 2012 we were using a simple vector field navigation algorithm, whereas we now use the L1 navigation code. This year we also used Paul's EKF for attitude and position estimation, and the TECS controller for speed/height control. The differences are really remarkable. We were quite pleased with how our Mugin flew in 2012, but this year it was massively better. The tracking along each leg of the search was right down the line, despite the 15 knot winds.



Another big difference from 2012 is that we were using the new terrain following capability that we had added this year. In 2012 we used a python script to generate our waypoints and that script automatically added intermediate waypoints to follow the terrain of the search area. This year we just set all waypoints as 100 meters AGL and let the autopilot do its job. That made things a lot simpler and also resulted in better geo-referencing and search area coverage.



On our GCS imaging display we had 4 main windows up. One is a "live" view from the camera. That is set up to only update if there is plenty of spare bandwidth on our radio links, and is really just there to give us something to watch while the imaging code does its job, plus to give us some situational awareness of what the aircraft is tracking over.

The second window is the map, which shows the mission flight path, the geo-fence, two plane icons (AHRS position estimate and GPS position estimate) plus overlays of thumbnails from the image recognition system of any "interesting" objects the Odroid has found.



The 3rd window is the "Mosaic" window. That shows a grid of thumbnails from the image recognition system, and has menus and hot-keys to allow us to control the image recognition process and sort the thumbnails in various ways. We expected Joe to have a image score of over 1000, but we set the threshold for thumbnails to display in the mosaic as 500. Setting a lower threshold means we get shown a lot of not very interesting thumbnails (things like oddly shaped tree stumps and patches of dirt that are a bit out of the ordinary), but at least it means we can see the system is working, and it stops the search being quite so boring.



The final window is a still image window. That is filled with a full resolution image of whatever image we have selected for download from the aircraft, allowing us to get some context around a thumbnail if we want it. Often it is much easier to work out what an object is if you can see the surroundings.



On top of those imaging windows we also have the usual set of flight control windows. We had 3 laptops setup in our GCS tent, along with a 40" LCD TV for one of the laptops (my thinkpad). Stephen ran the flight monitoring on his laptop, and Matt helped with the imaging and the antenna tracker from his laptop. Splitting the task up in this way helped prevent overload of any one person, which made the whole experience more manageable.



On Stephen's laptop he had the main MAVProxy console showing the status of the key parts of the autopilot, plus graphs showing the consistency of the various subsystems. The ardupilot code on Pixhawk has a lot of redundancy at both the sensor level and the higher algorithm level, and it is useful to plot graphs showing if we get any significant discrepancies, giving us a warning of a potential failure. For this flight everything went smoothly (apart from the takeoff) but it is nice to have enough screens to show this sort of information anyway. It also makes the whole setup look a bit more professional to have a few fancy graphs going :-)



About 20 minutes after takeoff the imaging system spotted Joe lying in his favourite position in a peanut field. We had covered nearly all of the first pass across the search area so we were glad to finally spot him. The images were very clear, so we didn't need to spend any time discussing if it really was Joe or not.



We clicked the "Show Position" menu item we'd added to the MAVProxy map, which pops up a window giving the position in various coordinate systems. Greg passed this to the judges who quickly confirmed it was within the required 100 meters of Joe.



The bottle drop



That initial position was only a rough estimate though. To refine the position we used a new trick we'd added to MAVProxy. The "wp movemulti" command allowed us to move and rotate a pre-defined confirmation flight pattern over the top of Joe. That set up the plane to fly a butterfly pattern over Joe at an altitude of 80 meters, passing over him with wings level. That gives an optimal set of images for our image recognition system to work with, and the tight turns allow the wind estimation algorithm in ardupilot to get an accurate idea of wind direction and speed.



In the weeks leading up to the competition we had spent a lot of time refining our bottle drop system. We realised the key to a good drop is timing consistency and accurate wind drift estimation.



To improve the timing consistency we had changed our bottle release mechanism to be in two stages. The bottle was held to the bottom of the Porter by two servos. One servo held the main weight of the bottle inside a plywood harness, while the second servo was attached to the top of the parachute by a wire, using a glider release mechanism.



The idea is that when approaching Joe for the drop the main servo releases first, which leaves the bottle hanging beneath the plane held by the glider release servo with the parachute unfurled. The wind drags the bottle at an angle behind the plane for 2 seconds before the 2nd servo is released. The result is much more consistent timing of the release, as there is no uncertainty in how long the bottle tumbles before the chute unfolds.



The second part of the bottle drop problem is good wind drift estimation. We had decided to use a small parachute as we were not certain the bottle would withstand the impact with the ground without a chute. That meant significant wind drift, which meant we really had to know the wind direction and speed quite accurately, and also needed some data on how fast the bottle would drift in the wind.



In the weeks before the competition we did a lot of bottle drops, but the really key one was a drop just a week before we left for Kingaroy. That was the first drop in completely still conditions, which meant it finally gave us a "zero wind" data point, which was important in calculating the wind drift. Combining that drop with some previous drop results we came up with a very simple formula giving the wind drift distance for a drop from 80 meters as 5 times the wind speed in meters/second, as long as we dropped directly into the wind.



We had added a new feature to APM:Plane to allow an exact "acceptance radius" for an individual waypoint to be set, overriding the global WP_RADIUS parameter. So once we had the wind speed and direction it was just a matter of asking MAVProxy to rotate the drop mission waypoints to match the wind (which was coming from -121 degrees) and to set the acceptance radius for the drop to 35 meters (70 meters for zero wind, minus 7 times 5 for the 7m/s wind the EKF had estimated).
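
Written out as a quick calculation (just a sketch of the numbers described above, nothing more):

# drift is roughly 5 m per 1 m/s of wind, for a drop from 80 m directly into the wind
wind_speed=7          # m/s, as estimated by the EKF
zero_wind_radius=70   # m, acceptance radius for a still-air drop from 80 m
drift=$((5 * wind_speed))
echo "waypoint acceptance radius: $((zero_wind_radius - drift)) m"   # prints 35 m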



The plane then slowed to 20m/s for the drop, and did a 350m approach to ensure the wings were nice and level at the drop point. As the drop happened we could see the parachute unfurl in the camera view from the aircraft, which was a nice confirmation that the bottle had been successfully dropped.



The mission was setup for the aircraft to go back to the butterfly confirmation pattern after the drop. We had done that to allow us to see where the bottle had actually landed relative to Joe, in case we wanted to do a 2nd drop. We had 3 bottles ready (one on the plane, two back at the airfield), and were ready to fit a new bottle and adjust the drop point if we'd been too far off.



As soon as we did the confirmation pass it became clear we didn't need to drop a 2nd bottle. We measured the distance of the bottle to Joe as under 3 meters using the imaging system (the judges measured it officially as 2.6m), so we asked the plane to come back and land.



The landing



This year we used a laser rangefinder (an SF/02 from LightWare) to help with the landing. Using a rangefinder really helps ensure the flare happens at the right altitude and produces much more consistent landings.



The only real drama we had with the landing was that we came in a bit fast, and the plane ballooned more than it should have on the flare. The issue was that we hadn't used a shallow enough approach. Combined with the quite strong (14 knot) cross-wind it was an "interesting" landing.



We also should have set the landing point a bit closer to the end of the runway. We had put it quite a way along the runway as we weren't sure if the laser rangefinder would pick up anything strange as it crossed over the road and airport boundary fence, but in hindsight we'd put the touchdown point a bit too close to the geofence. Full size glider operations running at the same time meant only part of the runway was available for OBC teams to use.



The landing was entirely successful, and was probably better than a manual landing would have been by me in the same wind conditions (I'm only a mediocre pilot, and landings are my weakest point), but we certainly can do better. Paul Riseborough is already looking at ways to improve the autoland to hopefully produce something that produces a round of applause from spectators in future landings.



Radio performance



Another part of the mission that is worth looking at is the radio performance. We had two radio links to the aircraft - one was a RFD900 900MHz radio, and that performed absolutely flawlessly as usual. We had around 40 dB of fade margin at a range of over 6km, which is absolutely huge. Every team that flew in the OBC this year used a RFD900, which is a great credit to Seppo and the team at RFDesign.



The 2nd radio was a Ubiquity Rocket M5, which is a 5.8GHz ethernet bridge. We used an active antenna tracker this year for the 5.8GHz link, with a 28dBi MIMO antenna on the ground, and a 10dBi MIMO omni antenna in the aircraft (the protrusion from the top of the fuselage is for the antenna). The 5.8GHz link gave us lots of bandwidth for the images, but was not nearly as reliable as the RFD900 link. It dropped out 6 times over the course of the mission, with the longest dropout lasting just over a minute. The dropouts were primarily caused by magnetometer calibration on the antenna tracker - during the mission we had to add some manual trim to the tracker to improve the alignment. That worked, but really we should have used a ground antenna with a bit less gain (maybe around 24dBi) to give us a wider beam width.



Another alternative would have been to use a lower frequency. The 5.8GHz Rocket gives fantastic bandwidth, but we don't really need that much bandwidth for our system. The Robota team used 2.4GHz Ubiquity radios and much simpler antennas and ended up with a much better link than we had. The difference in path loss between 2.4GHz and 5.8GHz is quite significant.
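
For a rough sense of scale (assuming free-space conditions and identical antennas), path loss grows with 20*log10(f), so at the same range the 5.8GHz link gives away close to 8dB before antenna gains are taken into account:

FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
FSPL(5800 MHz) - FSPL(2400 MHz) = 20*log10(5800/2400) ≈ 7.7 dB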



The reason we didn't use the 2.4GHz gear is that we do most of our testing at a local MAAA flying club, and we know that if someone crashes their expensive model while we have a powerful 2.4GHz radio running then there will always be the thought that our radio may have caused interference with their 2.4GHz RC link.



So we're now looking into the possibility of using a 900MHz ethernet bridge. The Ubiquity M900 looks like a real possibility. It doesn't offer nearly as much bandwidth as the 5.8GHz or 2.4GHz radios as Australia only has 13MHz of spectrum available in the 900MHz band for ISM use, but that should still be enough for our application. We have heard that the spread spectrum M900 doesn't significantly interfere with the RFD900 in the same band (as the RFD900 is a TDM frequency hopping radio), but we have yet to test that theory.



Alternatively we may use two RFD900s in the same aircraft, with different numbers of hopping channels and different air data rates to minimise interference. One would be dedicated to telemetry and the other to image data. A RFD900 at 128kbps should be plenty for our cuav imaging system as long as the "live camera view" window is set to quite a low resolution and update rate.



Team cooperation



One of the most notable things about this year's competition was just how friendly the discussions between the teams were. The competition has a great spirit of cooperation and it really is a fantastic experience to work closely with so many UAV developers from all over the world.



I don't really have time to go through all the teams that attended, but I do want to mention some of the highlights for me. Top of the list would have to be meeting Ben and Daniel Dyer from team SFWA. They put in an absolutely incredible effort to build their own autopilot from scratch. Their build log at http://au.tono.my/log/index.html is incredible to read, and shows just what can be achieved in a short time with enough talent. It was fantastic that they completed the challenge (the first team to ever do so) and I look forward to seeing how they take their system forward.



I'd also like to offer a tip of the hat to Team Swiss Fang. They used the PX4 native stack on a Pixhawk and it was fantastic to see how far they pushed that autopilot stack in the lead up to the competition. That is one of the things that competitions like the OBC can do for an autopilot - push it to much higher levels.



Team OpenUAS also deserves a mention, and I was especially pleased to meet Christophe, who is one of the key people behind the Paparazzi autopilot. Paparazzi is a real pioneer in the field of amateur autopilots. Many of the posts we make on "ardupilot has just achieved X" on diydrones could reasonably be responded to by saying "Paparazzi did that 3 years ago". The OpenUAS team had bad luck in both the 2012 competition and again this year. This time round it was an airspeed sensor failure which led to a crash soon after takeoff, which is really tragic given the effort they have put in and the pedigree of their autopilot stack.



The Robota team also did very well, coming in second behind our team. Particularly impressive was the performance of the Goose autopilot on a quite small foam plane in the wind over Kingaroy. The automatic landing was fantastic. The Robota team used a much simpler approach, just using a 2.4GHz Ubiquity link to send a digital video stream to 3 video monitors and having 3 people staring at those screens to find Joe. Extremely simple, but it worked. They were let down a bit by the drop accuracy in the wind, but a great effort done with style.



I was absolutely delighted when Team Thunder, who were also running APM:Plane, completed the challenge, coming in 4th place. They built their system partly on the image recognition code we had released, which is exactly what we hoped would happen. We want to see UAV developers building on each others work to make better and better systems, so having Team Thunder complete the mission was great.



Overall ardupilot really shone in the competition. Over half the teams that flew in the competition were running ardupilot. Our community has shown that we can put together systems that compete with the best in the world. We've come a long way in the last few years and I'm sure there is a bright future for more developments in the UAV S&R space that will see ardupilot saving lives on a regular basis.



Thank you



On behalf of CanberraUAV I'd like to offer a huge thank you to the OBC organisers for the massive effort they have put in over so many years to run the competition. Back in 2007 when the competition started it took real insight for Rod Walker and Jon Roberts to see that a competition of this nature could push amateur UAV technology ahead so much, and it took a massive amount of perseverance to keep it going to the point that teams were able to finally complete the competition. The OBC has had a huge impact on amateur UAV technology.



We'd also like to thank our sponsors, 3DRobotics, who have been a huge help for CanberraUAV. We really couldn't have done it without you. Working with 3DR on this sort of technology is a great pleasure.



Next steps



Completing the Outback Challenge isn't the end for CanberraUAV and we are already starting discussions on what we want to do next. I've posted some ideas on our mailing list and we would welcome suggestions from anyone who wants to take part. We've come a long way, but we're not yet at the point where putting together an effective S&R aircraft is easy.

Andrew McDonnell: (UPDATE) Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Sat, 2014-10-04 11:26

Of course, there are more things I had known but not fully internalised yet. Of course.

Many  MIPS architectures, and specifically, most common router architectures, don’t have hardware support for NX.

 

Yet. It surely won't be long.

 

My own feeling in this age of Heartbleed and Shellshock is that we should get ahead of the curve if we can – if a distribution supports NX in the toolchain, then when a newer SOC arrives there is one less thing to forget about.

If I had bothered to keep reading instead of hacking I might have rediscovered this sooner. But then I would know significantly less about embedded toolchains and how to debug and patch them. Anyway, a determined user could also cherry-pick emulated NX protection from PaX.

When they Google this problem they will at least find my work.

 

How else to  learn?  :-)

Andrew McDonnell: Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Sat, 2014-10-04 02:27

To recap our experiments to date: out of the box, OpenWRT may appear to have only sporadic coverage of various Linux hardening measures unless you do a bit of extra work. This can in fact be a false impression – see the update – but for the uninitiated it can take a bit of digging to check!

One metric of interest not closely examined to date is the NOEXECSTACK attribute on executable binaries and libraries. When coupled with kernel support, this disallows execution of code in the stack memory area of a program, thus preventing an entire class of vulnerabilities from working. I mentioned NOEXECSTACK in passing previously; from the checksec report we saw that the x86 build has 100% coverage of NOEXECSTACK, whereas the MIPS build was almost completely lacking.

For a quick introduction to NOEXECSTACK, see http://wiki.gentoo.org/wiki/Hardened/GNU_stack_quickstart.
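
As a quick way of spot-checking a binary without running the full checksec script (a small sketch; the paths are just examples, and readelf on an x86 host happily reads MIPS binaries), the PT_GNU_STACK program header tells you whether an executable stack was requested:

# "RW " in the GNU_STACK line means a non-executable stack was requested,
# "RWE" means NOEXECSTACK is missing.
readelf -lW rootfs/bin/busybox | grep GNU_STACK

# Scan a whole extracted root filesystem for offenders:
find rootfs/ -type f -exec sh -c \
  'readelf -lW "$1" 2>/dev/null | grep -q "GNU_STACK.*RWE" && echo "$1"' _ {} \;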

Down the Toolchain Rabbit Hole

As challenging detective exercises go, this one was a bit of a doozy, for me at least. Essentially I had to turn the OpenWRT build system inside out to understand how it worked, and then do the same with uClibc, just to work out where to start. After rather a few false starts, the culprit turned out to be somewhere completely different.

First, OpenWRT by default uses uClibc as the C library, which is the bedrock upon which the rest of the user space is built. The C library is not just a standard package, however. OpenWRT, like the majority of embedded Linux systems, employs a "toolchain" or "buildroot" architecture. Simply put, this packages together the C/C++ compiler, the assembler, the linker, the C library and various other core components in a way that lets the remainder of the firmware be built without any knowledge of how this layer is put together.

This is a Good Thing as it turns out, especially when cross-compiling, i.e. when building the OpenWRT firmware for a CPU or platform (the TARGET) that is different from the one where the firmware build happens (the HOST), and especially where everything is bootstrapped from source, as OpenWRT is.

The toolchain is combined from the following components:

  • binutils — provides the linker (ld) and the assembler and code for manipulating ELF files (Linux binaries) and object libraries
  • gcc — provides the C/C++ compiler and, often overlooked, libgcc, a library of various "intrinsic" functions such as optimisations for certain C library functions, amongst others
  • A C library — in this case, uClibc
  • gdb — a debugger

Now all these elements need to be built in concert, and installed to the correct locations, and to complicate matters, the toolchain actually has multiple categories of output:

  • Programs that run on the HOST that produce programs and libraries that run on the TARGET (such as the cross compiler)
  • Programs that run on the TARGET (e.g. ldd, used for scanning dependencies)
  • Programs and libraries that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build other programs that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build programs and libraries that run on the TARGET
  • Even programs that run on the TARGET to produce programs and libraries that run on the TARGET (a target-hosted C compiler!)

Confused yet?

All this magic is managed by the OpenWRT build system in the following way:

  • The toolchain programs are unpacked and built individually under the build_dir/toolchain directory
  • The results of the toolchain build that are designed to run on the host are installed under staging_dir/toolchain
  • The partial toolchain under staging_dir is used to build the remaining items under build_dir which are finally installed to staging_dir/target/blah-rootfs
  • (this is an approximation, maybe build OpenWRT for yourself to find out all the accurate naming conventions )
  • The kernel headers are an intrinsic part of this because of the C library, so along the way a pass over the target Linux kernel source is required as well.

OpenWRT is flexible enough to allow the C compiler to be changed (e.g. between stock gcc 4.6 and Linaro gcc 4.8), and the binutils version, and even to switch the C library between different project implementations (uClibc vs eglibc vs MUSL).

OpenWRT fetches the sources for all these things, then applies a number of local patches, before building.

We will need to refer to this later.

Confirming the Problem and Fishing for Red Herrings.

The first thing to note is that x86 has no problem, but MIPS does, and I want to run OpenWRT on various embedded devices with MIPS SOCs. Without that I may never have bothered digging deeper!

Of course I dived in initially and took the naive brute force approach.  I patched OpenWRT to apply the override flag to the linker: -Wl,-z,noexecstack. This was a bit unthinking, after all x86 did not need this.

Doing this gave partial success. In fact most programs gained NOEXECSTACK, except for a chunk of the uClibc components, busybox, and tellingly as it turned out, libgcc_s.so. That is, core components used by nearly everything. Of course.

(Spoiler: modern Linux toolchain implementations actually enable NOEXECSTACK by DEFAULT for C code! Which was an important fact I forgot at this point! Silly me.)

At this point, I managed to overlook libgcc_s.so and decided to focus on uClibc. This decision would greatly expand my knowledge of OpenWRT, uClibc and embedded build systems, and do nothing to solve the problem the proper way!

OpenWRT builds uClibc as a host package, which basically means it abuses Makefiles to generate a uClibc configuration file partly derived from the OpenWRT config file settings, and eventually calls the uClibc top level makefile to build uClibc. This can only be done after building binutils and two of the three stages of gcc.

At this point I still did not fully understand how NOEXECSTACK really should be employed, which is probably an artefact of working on this stuff late at night and not reading everything as carefully as I might have. So I did the obvious and incorrect thing and worked out how to patch uClibc further to push the forced -Wl,-z,noexecstack through it. What I had to do to achieve that could almost fill another blog article, so I'll skip it for brevity. Anyway, this did not solve the problem.

Finally I turned on all the debug and examined the build:

make V=csw toolchain/uClibc/compile

(Aside: the documentation for OpenWRT mentions using V=s to turn on some verboseness, but to get the actual compiler and linker commands of the toolchain build you need the extra flags. I should probably try and update the OpenWRT wiki but I have invested so much time in this that I might have to leave that as an exercise for the reader)

All the libraries were being linked using the -Wl,-z,noexecstack flag, yet some still failed checksec. Argh!

I should also note that repeating this process over and over gets tedious, taking about 20 minutes to build the toolchain plus a minimal target firmware on my quad core AMD Phenom. Don't delete the build_dir/host and staging_dir/host directories or it doubles!

So something else was going on.

Upgrades and trampolines, or not.

I sought help from the uClibc developers mailing list, who suggested I first try using up to date software. This was a fair point, as OpenWRT is using a 2 year old release of uClibc and 1 year old release of binutils, etc.

This of course entailed having to learn how to patch OpenWRT to give me that choice.

So another week later, around work and family and life, I found some time to do this, and discovered that the problem persisted.

At this point I revisited the Gentoo hardening guide.  After some detective work I discovered that several MIPS assembler files inside of uClibc did not actually have the recommended code fragments. Aha! I thought. Incorrectly again, as I should have realised; uClibc has already thought of this and when NOEXECSTACK is configured, as it is for OpenWRT, uClibc passes a different flag to the assembler that has the effect of fixing NOEXECSTACK for assembler files. And of course after I patched about 17 .S files and waited another half hour, the checksec scan was still unsuccessful. Red herring again!
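
For reference, there are two usual ways of fixing a hand-written assembly file, sketched below with a hypothetical foo.S and an illustrative cross-compiler name (the exact section directive syntax can vary slightly between architectures):

# 1. Pass the assembler flag through the compiler driver when building the file:
mips-openwrt-linux-gcc -c foo.S -Wa,--noexecstack -o foo.o

# 2. Or add the GNU-stack note section to the .S file itself, which is the
#    fragment the Gentoo guide recommends:
#        .section .note.GNU-stack,"",%progbits

# Either way, the resulting object carries the note and no longer requests
# an executable stack:
readelf -S foo.o | grep GNU-stack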

I started to get desperate when I read about some compiler systems that use something called a 'trampoline'. So I went back to the uClibc mailing list.

At this point I would like to thank the uClibc developers for being so patient with me, as the solution was now near at hand.

Cutting edge patches and a wrinkle in time.

One of the uClibc developers pointed me to a patch posted on the gcc mailing list. As fate would have it, it was dated 10 September 2014, which was _after_ I started on these investigations. This actually went full circle back to libgcc_s.so, the small library I had passed over in order to focus on uClibc. This target library itself has some assembly files, which were neither built with the noexecstack option nor carried the Gentoo-suggested assembly pragma. The patch also applies to gcc, not to uClibc, and of course was outside of binutils, which was the other component I had to upgrade. The fact that libgcc_s.so was not clean should maybe have pointed me to look at gcc, and it did cross my mind. But we all have to learn somehow. Without all the above my knowledge of the internals of all these systems would be the poorer.

So I applied this patch, and finally, bliss: a sea of green NX enabled flags, without, in the end, any modifications actually required to the older version of uClibc used inside OpenWRT.

This even fixed the busybox NX status without modification. So empirically this confirms what I had read previously and overlooked to my detriment: NOEXECSTACK is aggregated from all linked libraries, and if one of them is missing it, it pollutes the lot.

Postscript

Now I have to package the patch so that it can be submitted to OpenWRT, initially against Barrier Breaker given that it has just been released!

Then I will probably need to repeat it against all the supported gcc versions and submit separate patches for those. That will get a bit tedious; thankfully I can start a test build and then go away…

Lev Lafayette: Simple PRINCE2 Governance

Sat, 2014-10-04 01:29

Pre-Project and Start-Up (SU)

A project is defined as a temporary organisation created for the purpose of delivering business products with a degree of uniqueness according to an agreed Business Case.

A project mandate must come from those managers and those authorised to allocate duties and funds, subject to delegated authority ("corporate/programme management").

Authorised individuals must raise a Project Mandate. This should state the basic objectives, scope, quality, and constraints and identify the customer, user, and other interested parties.


David Rowe: Degrowth Economy

Fri, 2014-10-03 14:29

Just read this article: Life in a de-growth economy and why you might actually enjoy it.

I like the idea of a steady state economy. Simple maths shows how stupid endless growth is. And yet our politicians cling to it. We will get a steady state, energy neutral economy one day. It’s just a question of if we are forced, or if it’s managed.

Some thoughts on the article above:

  • I don’t agree that steady state implies localisation. Trade and specialisation are wonderful inventions. It’s more efficient if I write your speech coding software than you working it out yourself. It’s far more efficient for a farmer to grow food than me messing about in my back yard. What is missing is a fossil fuel free means of transport to sustain trade and move goods from where they are efficiently produced to where they are consumed.
  • Likewise local food production like they do in Cuba. Better to grow lots of food on a Cuban farm, they just lack an efficient way to transport it.
  • I have some problems with “organic” food production in the backyard, or my neighbour’s backyard. To me it’s paying more for food chemically identical to what I buy in the supermarket. Modern, scientific food production has its issues, but these can be solved by science. On a small scale, sure, gardening is fun, and it would be great to meet people in communal gardens. However it’s no way to feed a hungry world.
  • Likewise this article’s vision of us repairing/recycling clothing. New is still fine, as long as it’s resource-neutral, e.g. cotton manufactured into jeans using solar powered factories, and transported to my shopping mall in an electric vehicle. Or synthetic fibres from bio-fuels or GM bacteria.
  • Software costs zero to upgrade but can improve our standard of living. So there can be “growth” in some sense at no expense in resources. You can use my speech codec and conserve resources (energy for transmission and radio spectrum). I can send you that software over the Internet, so we don’t need an aircraft to ship you a black box or even a CD.

I live by some anti-growth, anti-consumer principles. I drive an electric car that is based on a 25 year old recycled petrol car chassis. I don’t have a fossil fuel intensive commute. I use my bike more than my car.

I work part time from home mainly on volunteer work. My work is developing software that I can give away to help people. This software (for telecommunications) will in turn remove the need for expensive radio hardware, save power, and yet improve telecommunications.

I live inexpensively compared to my peers who are paying large mortgages due to the arbitrarily high price of land here, and other costs I have managed to avoid or simply say no to. No great luck or financial acumen at work here, although my parents taught me the useful habit of spending less than I earn. I’m not a very good consumer!

I don’t aspire to a larger home in a nice area or more gadgets. That would just mean more house work and maintenance and expense and less time on helping people with my work. In fact I aspire to a smaller home, and less gadgets (I keep throwing stuff out). I am renting at the moment as the real estate prices here are spiralling upwards and I don’t want to play that game. Renting will allow me to down-shift even further when my children are a little older. I have no debt, and no real desire to make more money, a living wage is fine. Although I do have investments and savings which I like tracking on spreadsheets.

I am typing this on a laptop made in 2008. I bought a second, identical one a few years later for $300 and swap parts between them so I always have a back up.

I do however burn a lot of fossil fuel in air travel. My home uses 11 kWhr/day of electricity, which, considering this includes my electric car and hence all my “fuel” costs, is not bad.

More

In the past I have written about why I think economic growth is evil. There is a lot of great information on this topic such as this physics based argument on why we will cook (literally!) in a few hundred years if we keep increasing energy use. The Albert Bartlett lectures on exponential growth are also awesome.

Gabriel Noronha: Recharging NSW

Fri, 2014-10-03 12:26

So those who have been following this blog know that I’ve been a keen EV enthusiast, working to grow the number of EVs in Australia and the related charging network.

Some of my frustration at how slowly it has been growing has turned into “why don’t I do something about it?”

So I have. I’m now a director of a new company, Recharging NSW Pty Ltd. The main aim is to encourage and support EV uptake in Australia, by increasing both the number of cars on the road and the amount of public charging.

So there isn’t much I can share at present, as everything is still in the planning phase, but stay tuned.

Andrew Pollock: [opinion] On Islamaphobia

Fri, 2014-10-03 11:25

It's taken me a while to get sufficiently riled up about Australia's current Islamaphobia outbreak, but it's been brewing in me for a couple of weeks.

For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripe though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.

I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.

I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.

We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.

I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.

Michael Still: On layers

Wed, 2014-10-01 13:28
There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler to examine chunks, and I think that makes them useful in their own right.



I would address what I believe the scope of the OpenStack project should be, but I feel that would make this post so long that no one would ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.



What are layers?



Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):



  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar




Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower numbered layers, but can use services from higher numbered layers as optional add-ins. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits).
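
To make that dependency rule concrete, here is a toy sketch in Python. The layer assignments follow Dean's list above; the example dependency lists are invented purely for illustration and are not the projects' real requirement graphs.

# A toy illustration of the layering rule described above.
LAYERS = {
    'oslo': 0,
    'keystone': 1, 'glance': 1, 'nova': 1,
    'neutron': 2, 'cinder': 2, 'swift': 2, 'ironic': 2,
    'horizon': 3, 'ceilometer': 3,
    'heat': 4, 'trove': 4, 'designate': 4, 'zaqar': 4,
}

def hard_dependency_violations(service, requires):
    # A hard ("must have") dependency may not sit in a higher layer than
    # the service that requires it; optional integrations with higher
    # layers are fine.
    return [dep for dep in requires if LAYERS[dep] > LAYERS[service]]

if __name__ == '__main__':
    # Nova requiring Keystone and Glance is allowed (same or lower layer)...
    print(hard_dependency_violations('nova', ['keystone', 'glance']))  # []
    # ...but a hard requirement on Neutron breaks the rule while Neutron
    # stays in layer 2.
    print(hard_dependency_violations('nova', ['neutron']))  # ['neutron']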



Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer number increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.



Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.



What do layers give us?



Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users and we need to reboot the conversation at community conferences until it's something which meets their needs.





We intend this diagram to amaze and confuse our victims



Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependencies between projects better, and that helps deployers work out what order to deploy things in.



Do layers help us work out what OpenStack should focus on?



Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.



Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like everyone involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.



A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was a SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.



Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?



Is there consensus on what sits in each layer?



Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that adds Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong to be honest. I can see that stateless cloud isn't super useful by itself, but we are assuming that OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that's probably later once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.



The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into debating whether existing in a given layer means you're voted off the OpenStack island, we'll never even come up with a set of agreed layers.



Let's ignore tents for now



The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.



Conclusion



Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1 should be as small as possible for example. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is because each layer represents the smallest possible increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron for example.



Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite sized morsels to consume one at a time.



So here are the layers as I see them for now:



  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar




I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.



I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.



In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.



Tags for this post: openstack kilo technical committee tc layers

Related posts: One week of Nova Kilo specifications; Compute Kilo specs are open; My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy




Andrew Pollock: [life] Day 244: TumbleTastics, photo viewing, bulk goods and a play in the park

Wed, 2014-10-01 12:26

Yesterday was another really lovely day. Zoe had her free trial class with TumbleTastics at 10am, and she was asking for a leotard for it, because that's what she was used to wearing when she went to Gold Star gymnastics in Mountain View. After Sarah dropped her off, we dashed out to Cannon Hill to go to K Mart in search of a leotard.

We found a leotard, and got home with enough time to scooter over to TumbleTastics for the class. Zoe had an absolute ball again, and the teacher was really impressed with her physical ability, and suggested for her regular classes that start next term, that she be slotted up a level. It sounds like her regularly scheduled class will have older kids and more boys, so that should be good. I just love that Zoe's so physically confident.

We scootered back home, and after lunch, drove back to see Hannah to view our photos from last week's photo shoot. There were some really beautiful photos in the set, so now I need to decide which one I want on a canvas.

Since we were already out, I thought we could go and check out the food wholesaler at West End that we'd failed to get to last week. I'm glad we did, because it was awesome. There was a coffee shop attached to the place, so we grabbed a coffee and a babyccino together after doing some shopping there.

While we were out, I thought it was a good opportunity to check out a new park, so we drove down to what I guess was Orleigh Park and had an absolutely fantastic afternoon down there by the river; the shade levels in the mid-afternoon were fantastic. I'd like to make an all day outing one day on the CityCat and CityGlider, and bus one way and take the ferry back the other way with a picnic lunch in the middle.

We headed home after about an hour playing in the park, and Zoe watched a little bit of TV before Sarah came to pick her up.

Zoe's spending the rest of the school holidays with Sarah, so I'm going to use the extra spare time to try and catch up on my taxes and my real estate licence training, which I've been neglecting.

Colin Charles: Oracle Linux ships MariaDB

Wed, 2014-10-01 07:25

I can’t remember why I was installing Oracle Enterprise Linux 7 on Oracle VirtualBox a while back, but I did notice something interesting. Just like CentOS 7, it ships MariaDB Server 5.5. Presumably this means that MariaDB is now supported by Oracle, too ;-) [jokes aside, it’s likely because OEL7 is meant to be 100% compatible with RHEL7]

The only reason I mention this now is that Vadim Tkachenko probably got his most retweeted tweet recently, stating just that. If you want to upgrade to MariaDB 10, don’t forget that the repository download tool provides CentOS 7 binaries, which should “just work”.
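
For the curious, the repository configuration that tool generates is a simple yum .repo file along these lines (a sketch from memory -- check the tool’s actual output for the correct baseurl for your architecture and MariaDB series):

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1

Drop something like that into /etc/yum.repos.d/MariaDB.repo and a yum install of the MariaDB-server and MariaDB-client packages should do the rest.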

If you want to switch to MySQL, there is a Public Yum repository that MySQL provides (and also don’t forget to check the Extras directory of the full installation – from OEL7 docs sub-titled: MySQL Community and MariaDB Packages). Be sure to read the MySQL docs about using the Yum repository. I also just noticed that the docs now have information on replacing a third-party distribution of MySQL using the MySQL yum repository.

Related posts:

  1. MariaDB 10.0.5 storage engines – check the Linux packages
  2. Using MariaDB on CentOS 6
  3. MariaDB 10.0.3: installing the additional engines