Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Jeremy Visser: Turning out the lights

Tue, 2018-09-11 20:04
We put an unbelievable amount of data in the hands of third parties. In plain text. Traceable right back to you with minimal effort. For me, giving my data to the likes of Google, Apple, Microsoft, and the rest of the crowd has always been a tradeoff: convenience vs. privacy. Sometimes, privacy wins. Most of the time, convenience wins. My iPhone reports in to Apple. My Android phone reports in to Google.

Jeremy Visser: We do not tolerate bugs; they are of the devil

Tue, 2018-09-11 20:04
I was just reading an article entitled “Nine traits of the veteran network admin”, and this point really struck a chord with me:

Veteran network admin trait No. 7: We do not tolerate bugs; they are of the devil

On occasion, conventional troubleshooting or building new networks run into an unexplainable blocking issue. After poring over configurations, sketching out connections, routes, and forwarding tables, and running debugs, one is brought no closer to solving the problem.

Jeremy Visser: SPA525G with ASA 9.1.x

Tue, 2018-09-11 20:04
At work, we have a staff member who has a Cisco SPA525G phone at his home that has built-in AnyConnect VPN support. Over the weekend, I updated our Cisco ASA firewall (which sits in front of our UC500 phone system) from version 8.4.7 to 9.1.3 and the phone broke with the odd error “Failed to obtain WebVPN cookie”. Turns out the fix was very simple. Just update the firmware on the SPA525G to the latest version.

Jeremy Visser: Floppy drive music

Tue, 2018-09-11 20:04
Some time in 2013 I set up a rig to play music with a set of floppy drives. At linux.conf.au 2015 in Auckland I gave a brief lightning talk about this, and here is a set of photos and some demo music to accompany it. The hardware consists of six 3.5″ floppy drives connected to a LeoStick (Arduino) via a custom vero board that connects the direction and step pins (18 and 20, respectively) as well as permanently grounding the select pin A (14).

Jeremy Visser: Configuring Windows for stable IPv6 addressing

Tue, 2018-09-11 20:04
By default, Windows will use randomised IPv6 addresses, rather than using stable EUI-64 addresses derived from the MAC address. This is great for privacy, but not so great for servers that need a stable address. If you run an Exchange mail server, or need to be able to access the server remotely, you will want a stable IPv6 address assigned. You may think this is possible simply by editing the NIC and setting a manual IPv6 address.
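For reference, the usual fix is to disable the randomised identifiers from an elevated command prompt. A minimal sketch of the common approach (not necessarily the exact steps the full post walks through):

rem Derive stable EUI-64 identifiers from the MAC instead of random ones
netsh interface ipv6 set global randomizeidentifiers=disabled store=persistent
rem Also disable temporary privacy addresses
netsh interface ipv6 set privacy state=disabled store=persistent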

Jeremy Visser: One week with the Nexus 5

Tue, 2018-09-11 20:04
My ageing Motorola Milestone finally received a kick to the bucket last week when my shiny new Nexus 5 phone arrived. Though fantastic by 2009 standards, the Milestone could only officially run Android 2.2, and 2.3 with the help of an unofficial CyanogenMod port. It had been end-of-lifed for some time, and could barely render a complex web page without running out of memory, so it was time for me to move on.

Jeremy Visser: Restore ASA 5500 configuration from blank slate

Tue, 2018-09-11 20:04
The Cisco ASA 5500 series (e.g. 5505, 5510) firewalls have a fairly nice GUI interface called ASDM. It can sometimes be a pain, but it could be a lot worse than it is. One of the nice things ASDM does is let you save a .zip file backup of your entire ASA configuration. It includes your startup-configuration, VPN secrets, AnyConnect image bundles, and all those other little niceties. But when you set up an ASA from scratch to restore from said .

Russell Coker: Fail2ban

Sun, 2018-09-09 18:03

I’ve recently set up fail2ban [1] on a bunch of my servers. Its purpose is to ban IP addresses associated with password guessing – or whatever other criteria for badness you configure. It supports Linux, OpenBSD [2] and probably most Unix type OSs too. I run Debian so I’ve been using the Debian packages of fail2ban.

The first thing to note is that it is very easy to install and configure (for the common cases at least). For a long time installing it had been on my todo list, but I didn’t make the time to do it. After installing it I realised that I should have done it years ago; it was so easy.

Generally to configure it you just create a file under /etc/fail2ban/jail.d with the settings you want; any settings that differ from the defaults will override them. For example, if you have a system running dovecot on the default ports and sshd on port 999, you could put the following in /etc/fail2ban/jail.d/local.conf:

[dovecot]
enabled = true

[sshd]
port = 999

By default the Debian package of fail2ban only protects sshd.

When fail2ban is running on Linux the command “iptables -L -n -v|grep f2b” will show the rules that match inbound traffic and the names of the chains they direct traffic to. To see if fail2ban has acted to protect a service you can run a command like “iptables -L f2b-sshd -n” to see the iptables rules.
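fail2ban also ships a command-line client that can answer the same questions without poking at iptables directly. A minimal sketch (the jail name matches the sshd jail above; the IP is a placeholder):

# list the active jails
fail2ban-client status
# show the currently banned IPs for the sshd jail
fail2ban-client status sshd
# manually unban an address
fail2ban-client set sshd unbanip 192.0.2.1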

The fail2ban entries in the INPUT chain go before other rules, so it should work with any custom iptables rules you have configured, as long as either fail2ban is the last thing to be started or your custom rules don’t flush its entries.

There are hooks for sending email notifications etc. That seems excessive to me, but it’s always good to have options to extend a program.

In the past I’ve tried using kernel rate limiting to minimise hostile activity. That didn’t work well as there are legitimate end users who do strange things (like a user who set up their web-cam to email them every time it took a photo).

Conclusion

Fail2ban has some good features. I don’t think it will do much good at stopping account compromise: anything that is easily guessed could be guessed using many IP addresses, and anything that has a good password can’t be guessed without many years of brute-force attacks, which would also cause enough noise in the logs to be noticed. What it does do is get rid of some of the noise in log files, which makes it easier to find and fix problems. To me the main benefit is to improve the signal-to-noise ratio of my log files.


Russell Coker: Google and Certbot (Letsencrypt)

Sat, 2018-09-08 22:03

Like most people I use Certbot AKA Letsencrypt to create SSL certificates for my sites. It’s a great service, very easy to use and it generally works well.

Recently the server running www.coker.com.au among other domains couldn’t get a certbot certificate renewed; here’s the error message:

Failed authorization procedure. mail.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "mail.gw90.de" was considered an unsafe domain by a third-party API, listen.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "listen.gw90.de" was considered an unsafe domain by a third-party API

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: mail.gw90.de
   Type: unauthorized
   Detail: "mail.gw90.de" was considered an unsafe domain by a third-party API

   Domain: listen.gw90.de
   Type: unauthorized
   Detail: "listen.gw90.de" was considered an unsafe domain by a third-party API

It turns out that Google Safebrowsing had listed those two sites. Visit https://listen.gw90.de/ or https://mail.gw90.de/ today (and maybe for some weeks or months in the future) using Google Chrome (or any other browser that uses the Google Safebrowsing database) and it will tell you the site is “Dangerous” and probably refuse to let you in.

One thing to note is that neither of those sites has any real content; I only set them up in Apache to get SSL certificates that are used for other purposes (like mail transfer, as the names suggest). If Google had listed my blog as a “Dangerous” site I wouldn’t be so surprised: WordPress has had more than a few security issues in the past and it’s not implausible that someone could have compromised it and made it serve up hostile content without me noticing. But the two sites in question have a DocumentRoot that is owned by root and was (until a few days ago) entirely empty; now they have an index.html that just says “This site is empty”. It’s theoretically possible that someone could have exploited a RCE bug in Apache to make it serve up content that isn’t in the DocumentRoot, but that seems unlikely (why waste an Apache 0day on one of the less important of my personal sites?). It is possible that the virtual machine in question was compromised (a VM on that server has been compromised before [1]) but it seems unlikely that they would host bad things on those web sites if they did.

Now it could be that some other hostname under that domain had something inappropriate (I haven’t yet investigated all possibilities). But if so, Google’s algorithm has a couple of significant problems. Firstly, if they are blacklisting sites related to one that had an issue, then it would probably make more sense to blacklist by IP address (which means including some coker.com.au entries on the same IP). In the case of a compromised server it seems more likely to have multiple bad sites on one IP than multiple bad subdomains on different IPs (given that none of the hostnames in question have changed IP address recently, and Google of course knows this). The next issue is that extending blacklisting doesn’t make sense unless there is evidence of hostile intent. I’m pretty sure that Google won’t blacklist all of ibm.com when (not if) a server in that domain gets compromised. I guess they have different policies for sites of different scale.

Both I and a friend have reported the sites in question to Google as not being harmful, but that hasn’t changed anything yet. I’m very disappointed in Google, listing sites, not providing any reason why (it could be a hostname under that domain was compromised and if so it’s not fixed yet BECAUSE GOOGLE DIDN’T REPORT A PROBLEM), and not removing the listing when it’s totally obvious there’s no basis for it.

While it makes sense for certbot to not issue SSL certificates to bad sites, it seems that they haven’t chosen a great service for determining which sites are bad.

Anyway the end result was that some of my sites had an expired SSL certificate for a day. I decided not to renew certificates before they expired to give Google a better chance of noticing their mistake and then I was busy at the time they expired. Now presumably as the sites in question have an invalid SSL certificate it will be even harder to convince anyone that they are not hostile.


Ben Martin: A floating shelf for tablets

Fri, 2018-09-07 19:45
The choice was between replacing the small marble table entirely or trying to “work around” it with walnut. The lower walnut tabletop is about 44cm by 55cm and is just low enough to give easy access to slide laptop(s) under the main table top. The top floating shelf is wide enough to happily accommodate two iPad-sized tablets. The top shelf and lower tabletop are attached to the backing by steel brackets which pass through to the back via four CNC-created mortises.


Cutting the mortises was interesting: I had to drop back to using a 1/2 inch cutting bit in order to handle the 45mm depth of the timber. The back panel was held down with machining clamps, but toggles would have done the trick; it was just what was on hand at the time. I cut the mortises through from the back using an upcut bit, and the front turned out very clean without any blowout. You could probably cut yourself on the finish, it was so clean.

The upcut doesn't make a difference in this job, but it is always good to plan and see the outcomes for the next time, when the cut will be exposed. The fine grain of walnut is great to work with on the CNC, though most of my bits are upcut bits intended for metal work.

I will likely move on to adding a head rest to the Eames chair next. But that is a story for another day.

Anthony Towns: Money Matters

Thu, 2018-09-06 18:00

I have a few things I need to write, but am still a bit too sick with the flu to put together something novel, so instead I’m going to counter-blog Rob Collins’ recent claim that Money doesn’t matter. Rob’s thoughts are similar to ones I’ve had before, but I think they’re ultimately badly mistaken.

There are three related, but very different, ways of thinking about money: as a store of value, as a medium of exchange, and as a unit of account. In normal times, dollars (or pounds or euros) work for all three things, so it’s easy to confuse them, but when you’re comparing different moneys some are better at one than another, and when a money starts failing, it will generally fail at each purpose at different rates.

Rob says “Money isn’t wealth” — but that’s wrong. In so far as money serves as a store of value, it is wealth. That’s why having a million dollars in your bank account makes you feel wealthy. The obvious failure mode for store of value is runaway inflation, and that quickly becomes a humanitarian disaster. Money can be one way to store value, but it isn’t the only way: you can store value by investing in artwork, buying property, building a company, or anything else that you expect to be able to sell at some later date. The main difference between those forms of investment versus money is that, ideally, monetary investments have low risk (perhaps the art you bought goes out of fashion and becomes worthless, or the company goes bankrupt, but your million dollars remains a million dollars), and low variance (you won’t make any huge profits, but you won’t make huge losses either). Unlike other assets, money also tends to be very fungible — if you earn $1000, you can spend $100 and have $900 left over; but if you have an artwork worth $1000 it’s a lot harder to sell one tenth of it.

Rob follows up by saying that money is “a thing you can exchange for other things”, which is true — money is a medium of exchange. Ideally it’s cheap and efficient, hard to counterfeit, and easy to verify. This is mostly a matter of technology: pretty gems are good at these things in some ways, coins and paper notes are good in others, cheques kind of work though they’re a bit too easy to counterfeit and a bit too hard to verify, and these days computer networks make credit card systems pretty effective. Ultimately a lot of modern systems have ended up as walled gardens though, and while they’re efficient, they aren’t cheap: whether you consider the 1% fees credit card companies charge, or the 2%-4% fees paypal charges, or the 30% fees from the Apple App Store or Google Play Stores, those are all a lot larger than how much you’d lose accepting a $50 note from someone directly. I have a lot of hope that Bitcoin’s Lightning Network will eventually have a huge impact here. Note that if money isn’t wealth — that is, it doesn’t manage to be a good store of value even in the short term — it’s not a good medium of exchange either: you can’t buy things with it because the people selling will have to immediately get rid of it or they’ll be making a loss; which is why currencies undergoing hyperinflation result in black markets where trade happens in stable currencies.

With modern technology and electronic derivatives, you could (in theory) probably avoid ever holding money. If you’re a potato farmer and someone wants to buy a potato from you, but you want to receive fertilizer for next season’s crop rather than paper money, the exchange could probably be fully automated by an online exchange so that you end up with an extra hundred grams of fertilizer in your next order, with all the details automatically worked out. If you did have such a system, you’d entirely avoid using money as a store of value (though you’d probably be using a credit account with your fertilizer supplier as a store of value), and you’d at least mostly avoid using money as a medium of exchange, but you’d probably still end up using money as a unit of account — that is, you’d still be listing the price of potatoes in dollars.

A widely accepted unit of account is pretty important — you need it in order to make contracts work, and it makes comparing different trades much easier. Compare the question “should I sell four apples for three oranges, or two apples for ten strawberries?” with “should I sell four apples for $5, or two apples for $3” and “should I buy three oranges for $5 or ten strawberries for $3?” While I suppose it’s theoretically possible to do finance and economics without a common unit of account, it would be pretty difficult.

This is a pretty key part and it’s where money matters a lot. If you have an employment contract saying you’ll be paid $5000 a month, then it’s pretty important what “$5000” is actually worth. If a few months down the track there’s a severe inflation event and it’s suddenly worth significantly less, then you’ve just had a severe pay cut (eg, the Argentinian Peso dropped from 5c USD in April to 2.5c USD in September). If you’ve got a well managed currency, that usually means low but positive inflation, so you’ll instead get a 2%-5% pay cut every year — which is considered desirable by economists as it provides an automatic way to devote fewer resources to less valuable jobs, without managers having to deliberately fire people, or directly cut peoples’ pay. Of course, people tend to be as smart as economists, and many workers expect automatic pay rises in line with inflation anyway.

Rob’s next bit is basically summarising the concept of sticky prices: if there’s suddenly more money to go around, the economy goes weird because people aren’t able to fix prices to match the new reality quickly, causing shortages if there’s more money before there’s higher prices, or gluts (and probably a recession) if there’s less money and people can’t afford to buy all the stuff that’s around — this is what happened in the global financial crisis in 2008/9, though I don’t think there’s really a consensus on whether the blame for less money going around should be put on the government via the Federal Reserve, or the banks, or some other combination of actors.

To summarise so far: money does matter a lot. Having a common unit of account so you can give things meaningful prices is essential; having a convenient store of value that you can use for large and small amounts, and being able to easily trade it for goods and services, is a really big deal. Screwing it up hurts people directly, and can be really massively harmful. You could probably use something different for medium of exchange than unit of account (eg, a lot of places accepting cryptocurrencies use the cryptocurrency as medium of exchange, but use regular dollars for both store of value and pricing); but without a store of value you don’t have a medium of exchange, and once you’ve got a unit of account, having it also work as a store of value is probably too convenient to skip.

But all that said, money is just a tool — generally money isn’t what anyone wants, people want the things they can get with money. Rob phrases that as “resources and productivity”, which is fine; I think the economics jargon would be “real GDP” — ie, the actual stuff that goes into GDP, as opposed to the dollar figure you put on it. Things start going wonky quickly though, in particular with the phrase “If, given the people currently in our country, and what they are being paid to do today, we have enough resources, and enough labour-and-productivity to …” — this starts mixing up nominal and real terms: people expect to be paid in dollars, but resources and labour are real units. If you’re talking about allocating real resources rather than dollars, you need to balance that against paying people in real resources rather than dollars as well, because that’s what they’re going to buy with their nominal dollars.

Why does that matter? Ultimately, because it’s very easy to get the maths wrong and not have your model of the national economy balanced: you allocate some resources here, pay some money there, then forget that the people you paid will use that money to reallocate some resources. If the error’s large enough and systemic enough, you’ll get runaway inflation and all the problems that go with it.

Rob has a specific example here: an unemployed (but skilled) builder, and a homeless family (who need a house built). Why not put the two together, magic up some money to prime the system and build a house? Voila the builder has a job, and the family has a home and everyone is presumably better off. But you can do the same thing without money: give the homeless family a loaded gun and introduce them to the builder: the builder has a job, and the family get a home, and with any luck the bullet doesn’t even get used! The key problem was that we didn’t inspect the magic sufficiently: the builder doesn’t want a job, or even money, he wants the rewards that the job and the money obtain. But where do those rewards come from? Maybe we think the family will contribute to the economy once they have a roof over their heads — if so, we could commit to that: forget the gun, the family goes to a bank, demonstrates they’ll be able to earn an income in future, and takes out a loan, then goes to the builder and pays for their house, and then they get jobs and pay off their mortgage. But if the house doesn’t let the family get jobs and pay for the home, the things the builder buys with his pay have to come from somewhere, and the only way that can happen is by making everyone else in the country a little bit poorer. Do that enough, and everyone who can will move to a different country that doesn’t have that problem.

Loans are a serious answer to the problem in general: if the family is going to be able to work and pay for the house eventually, the problem isn’t one of money, it’s one of risk: whoever currently owns the land, or the building supplies, or whatever, doesn’t want to take the risk they’ll never see anything for letting the house get built. But once you have someone with funds who is willing to take the risk, things can start happening without any change in government policies. Loaning directly to the family isn’t the only way; you could build a set of units on spec, and run a charity that finds disadvantaged families, and sets them up, and maybe provides them with training or administrative support to help them get into the workforce, at which point they can pay you back and you can either turn a profit, or help the next disadvantaged family; or maybe both.

Rob then asks himself a bunch of questions, which I’ll answer too:

  • What about the foreign account deficit? (It doesn’t matter in the first place, unless perhaps you’re anti-immigrant, and don’t want foreigners buying property)
  • What about the fact that lots of land is already owned by someone? (There’s enough land in Australia outside of Sydney/Melbourne that this isn’t an issue; I don’t have any idea what it’s like in NZ, but see Tokyo for ways of housing people on very little land if it is a problem)
  • How do we fairly get the family the house they deserve? (They don’t deserve a house; if they want a nice house, they should work and save for it. If they’re going through hard times, and just need a roof over their heads, that’s easily and cheaply done, and doesn’t need a lot of space)
  • Won’t some people just ride on the coat-tails of others? (Yes, of course they will. That’s why you target the assistance to help them survive and get back on their feet, and if they want to get whatever it is they think they deserve, they can work for it, like everyone else)
  • Isn’t this going to require taking things other people have already earnt? (Generally, no: people almost always buy houses with loans, for instance, rather than being given them for free, or buying them outright; there might be a need to raise taxes, but not to fundamentally change them, though there might be other reasons why larger reform is worthwhile)

This brings us back to the claim Rob makes at the start of his blog: that the whole “government cannot pay for healthcare” thing is nonsense. It’s not nonsense: at the extreme, government can’t pay for enough healthcare for everyone to live to 120 while feeling like they’re 30. Even paying enough for everyone to have the best possible medical care isn’t feasible: even if NZ has a uniform health care system with 100% of its economy devoted to caring for the sick and disabled, there’s going to be a specialist facility somewhere overseas that does a better job. If there isn’t a uniform healthcare system (and there won’t be, even if only due to some doctors/nurses being individually more talented), there’ll also be better and worse places to go in NZ. The reason we have worrying fiscal crises in healthcare and aged support isn’t just a matter of money that can be changed with inflation, it’s that the real economic resources we’re expecting to have don’t align with the promises we’re already making. Those resources are usually expressed in dollar terms, but that’s because having a unit of account makes talking about these things easier: we don’t have to explicitly say “we’ll need x surgeons and y administrators and z MRI machines and w beds” but instead can just collect it all and say “we’ll need x billion dollars”, and leave out a whole mass of complexity, while still being reasonably accurate.

(Similar with “education” — there are limits to how well you can educate everyone, and there’s a trade off between how many resources you might want to put into educating people versus how many resources other people would prefer. In a democracy, that’s just something that’s going to get debated. As far as land goes, on the other hand, I don’t think there’s a fundamental limit to the government taking control over land it controls, though at least in Australia I believe that’s generally considered to be against the vibe of the constitution. If you want to fairly compensate land holders for taking their land, that goes back to budget negotiations and government priorities, and doesn’t seem very interesting in the abstract)

Probably the worst part of Rob’s blog is this though: “We get 10% less things done. Big deal.” Getting 10% less things done is a disaster; for comparison, the Great Recession in the US had a GDP drop of less than half that, at -4.2% between 2007Q4 and 2009Q2, and the Great Depression was supposedly about -15% between 1929 and 1932. Also, saying “we’d want 90% of folk not working” is pretty much saying “90% of folk have nothing of value to contribute to anyone else”, because if they did, they could do that, be paid for it, and voila, they’re working. That simply doesn’t seem plausible to me, and I think things would get pretty ugly if it ended up that way despite its implausibility.

(Aside: for someone who’s against carbs, “potato farmer” as the go to example seems an interesting choice… )

Lev Lafayette: New Developments in Supercomputing

Tue, 2018-09-04 20:08

Over the past 33 years the International Supercomputing Conference (ISC) in Germany has become one of the world's major computing events, with the twice-yearly announcement of the Top500 systems, which continues to be dominated entirely by Linux systems. In June this year over 3,500 people attended ISC with a programme of tutorials, workshops and miniconferences, poster sessions, student competitions, a vast vendor hall, and numerous other events.

This presentation gives an overview of ISC and makes an attempt to cover many of the new developments and directions in supercomputing, including new systems, metrics and measurement, machine learning, and HPC education. In addition, the presentation will also feature material from the HPC Advisory Council conference in Fremantle held in August.

Michael Still: Kubernetes Fundamentals: Setting up nginx ingress

Tue, 2018-09-04 16:00

I’m doing the Linux Foundation Kubernetes Fundamentals course at the moment, and I was very disappointed in the chapter on Ingress Controllers. To be honest it feels like an afterthought — there is no lab, and the provided examples don’t work if you re-type them into Kubernetes (you can’t cut and paste, of course, just to add to the fun).

I found this super annoying, so I thought I’d write up my own notes on how to get nginx working as an Ingress Controller on Kubernetes.

First off, the nginx project has excellent installation resources online at github. The only wart in their instructions is that they changed the labels used on the pods for the ingress controller, which means the validation steps in the document don’t work until that is fixed. That is reported in a github issue, along with a proposed fix that pre-dates the creation of the issue and didn’t have an issue associated with it.

The basic process, assuming a baremetal Kubernetes install, is this:

$ NGINX_GITHUB="https://raw.githubusercontent.com/kubernetes/ingress-nginx"
$ kubectl apply -f $NGINX_GITHUB/master/deploy/mandatory.yaml
$ kubectl apply -f $NGINX_GITHUB/master/deploy/provider/baremetal/service-nodeport.yaml

Wait for the pods to fetch their images, and then check if the pods are healthy:

$ kubectl get pods -n ingress-nginx
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-6586bc58b6-tn7l5      1/1       Running   0          20h
nginx-ingress-controller-79b7c66ff-m8nxc   1/1       Running   0          20h

That bit is mostly explained by the Linux Foundation course. Well, it links to the github page at least, and then you just read the docs. The bit that isn’t well explained is how to set up ingress for a pod. This is partially because kubectl doesn’t have a command to do this yet — you have to POST an API request to get it done instead.

First, let’s create a target deployment and service:

$ kubectl run ghost --image=ghost
deployment.apps/ghost created
$ kubectl expose deployments ghost --port=2368
service/ghost exposed

The YAML to create an ingress for this new “ghost” service looks like this:

$ cat sample_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost
spec:
  rules:
  - host: ghost.10.244.2.13.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: ghost
          servicePort: 2368

Where 10.244.2.13 is the IP that my CNI assigned to the nginx ingress controller. You can look that up with a describe of the nginx ingress controller pod:

$ kubectl describe pod nginx-ingress-controller-79b7c66ff-m8nxc -n ingress-nginx | grep IP
IP:           10.244.2.13

Now we can create the ingress entry for this ghost deployment:

$ kubectl apply -f sample_ingress.yaml
ingress.extensions/ghost created

This causes the nginx configuration to get re-created inside the nginx pod by magic pixies. Now, assuming we have a route from our desktop to 10.244.2.13, we can just go to http://ghost.10.244.2.13.nip.io in a browser and be greeted by the default front page for the ghost installation (which turns out to be a publishing platform, who knew?).

To cleanup the ingress, you can use the normal “get”, “describe”, and “delete” verbs that you use for other things in kubectl, with the object type of “ingress”.
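For example, a quick sketch of that cleanup for the “ghost” ingress created above:

$ kubectl get ingress
$ kubectl describe ingress ghost
$ kubectl delete ingress ghost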

Simon Lyall: Audiobooks – August 2018

Mon, 2018-09-03 12:04

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari

An interesting listen. Covers both history of humanity and then extrapolates ways things might go in the future. Many plausible ideas (although no doubt some huge misses). 8/10

Higher: A Historic Race to the Sky and the Making of a City by Neal Bascomb

The architects, owners & workers behind the Manhattan Trust Building, the Chrysler Building and the Empire State Building, all being built in New York at the end of the roaring 20s. Fascinating and well done. 9/10

The Invention Of Childhood by Hugh Cunningham

The story of British childhood from the year 1000 to the present. Lots of quotes from primary sources such as letters, read by actors (which is less distracting than it sometimes is). 8/10

The Sign of Four by Arthur Conan Doyle – Read by Stephen Fry

Very well done reading by Fry. Story excellent of course. 8/10

My Happy Days in Hollywood: A Memoir by Garry Marshall

Memoir by the writer, producer (Happy Days, etc.) and director (Pretty Woman, The Princess Diaries, etc.). Great stories, mostly about the positive side of the business. Very inspiring. 8/10

Napoleon by J. Christopher Herold

A biography of Napoleon with a fair amount about the history of the rest of Europe during the period thrown in. A fairly short (11 hour) book, with some but not exhaustive detail. 7/10

Storm in a Teacup: The Physics of Everyday Life by Helen Czerski

A good popular science book linking everyday situations and objects with bigger concepts (eg coffee stains to blood tests). A fun listen. 7/10

All These Worlds Are Yours: The Scientific Search for Alien Life by Jon Willis

The author reviews recent developments in the search for life and suggests places it might be found and how missions to search them (he gives himself a $4 billion budget) should be prioritised. 8/10

Ready Player One by Ernest Cline

I’m right in the middle of the demographic for most of the references here so I really enjoyed it. Good voicing by Wil Wheaton too. Story is straightforward but all pretty fun. 8/10

Russell Coker: Suggestions for Trump Supporters

Sat, 2018-09-01 22:03

I’ve had some discussions with Trump supporters recently. Here are some suggestions for anyone who wants to have an actual debate about political issues. Note that this may seem harsh to Trump supporters. But it seems harsh to me when Trump supporters use a social event to try and push their beliefs without knowing any of the things I list in this post. If you are a Trump supporter who doesn’t do these things then please try to educate your fellow travellers, they are more likely to listen to you than to me.

Facts

For a discussion to be useful there has to be a basis in facts. When one party rejects facts there isn’t much point. Anyone who only takes their news from an ideological echo chamber is going to end up rejecting facts. The best thing to do is use fact checking sites of which Snopes [1] is the best known. If you are involved in political discussions you should regularly correct people who agree with you when you see them sharing news that is false or even merely unsupported by facts. If you aren’t correcting mistaken people on your own side then you do your own cause a disservice by allowing your people to discredit their own arguments. If you aren’t regularly seeking verification of news you read then you are going to be misled. I correct people on my side regularly, at least once a week. How often do you correct your side?

The next thing is that some background knowledge of politics is necessary. Politics is not something that you can just discover by yourself from first principles. If you aren’t aware of things like Dog Whistle Politics [2] then you aren’t prepared to have a political debate. Note that I’m not suggesting that you should just learn about Dog Whistle Politics and think you are ready to have a debate, it’s one of many things that you need to know.

Dog whistle politics is nothing new or hidden, if you don’t know about such basics you can’t really participate in a discussion of politics. If you don’t know such basics and think you can discuss politics then you are demonstrating the Dunning-Kruger effect [3].

The Southern Strategy [4] is well known by everyone who knows anything about US politics. You can think it’s a good thing if you wish and you can debate the extent to which it still operates, but you can’t deny it happened. If you are unaware of such things then you can’t debate US politics.

The Civil Rights Act of 1964 [5] is one of the most historic pieces of legislation ever passed in the US. If you don’t know about it then you just don’t know much about US politics. You may think that it is a bad thing, but you can’t deny that it happened, or that it happened because of the Democratic party. This was the time in US politics when the Republicans became the party of the South and the Democrats became the centrist (possibly left) party that they are today. It is ridiculous to claim that Republicans are against racism because Abraham Lincoln was a Republican. Ridiculous claims might work in an ideological echo chamber but they won’t convince anyone else.

Words Have Meanings

To communicate we need to have similar ideas of what words mean. If you use words in vastly different ways to other people then you can’t communicate with them. Some people in the extreme right claim that because the Nazi party in Germany was the “Nationalsozialistische Deutsche Arbeiterpartei” (“NSDAP”), which translates to English as “National Socialist German Workers Party”, they were “socialists”. Then they claim that since “socialists” are “leftist”, people on the left are Nazis. That claim requires using words like “left” and “socialism” in vastly different ways to most people.

Snopes has a great article about this issue [6], I recommend that everyone read it, even those who already know that Nazis weren’t (and aren’t) on the left side of politics.

The Wikipedia page of the Unite the Right rally [7] (referenced in the Snopes article) has a photo of people carrying Nazi flags and Confederate flags. Those people are definitely convinced that Nazis were not left wing! They are also definitely convinced that people on the right side of politics (which in the US means the Republican party) support the Confederacy and oppose equal rights for Afro-American people. If you want to argue that the Republican party is the one opposed to racism then you need to come up with an explanation for how so many people who declare themselves on the right of politics got it wrong.

Here’s a local US news article about the neo-Nazi who had “commie killer” written on his helmet while beating a black man almost to death [8]. Another data point showing that Nazis don’t like people on the left.

In other news, East Germany (the German Democratic Republic) was not a democracy. North Korea (the Democratic People’s Republic of Korea) is not a democracy either. The use of “socialism” by the original Nazis shouldn’t be taken any more seriously than the recent claims by the governments of East Germany and North Korea.

Left vs right is a poor summary of political positions; the Political Compass [9] is better. While Hitler and Stalin had different positions on economics, I think that citizens of those countries didn’t have very different experiences: one extremely authoritarian government is much like another. I recommend that you do the quiz on the Political Compass site and see if the people it places in similar graph positions to you are ones who you admire.

Sources of Information

If you are only using news sources that only have material you agree with then you are in an ideological echo chamber. When I recommend that someone look for other news sources what I don’t expect in response is an email analysing a single article as justification for rejecting that entire news site. I recommend sites like the New York Times as having good articles, but they don’t only have articles I agree with and they sometimes publish things I think are silly.

A news source that makes ridiculous claims such as that Nazis are “leftist” is ridiculous and should be disregarded. A news source that merely has some articles you disagree with might be worth using.

Also if you want to convince people outside your group of anything related to politics then it’s worth reading sites that might convince them. I often read The National Review [10], not because I agree with their articles (that is a rare occurrence) but because they write for rational conservatives and I hope that some of the extreme right wing people will find their ideas appealing and come back to a place where we can have useful discussions.

When evaluating news articles and news sources one thing to consider is Occam’s Razor [11]. If an article has a complex and implausible theory when a simpler theory can explain it then you should be sceptical of that article. There are conspiracies but they aren’t as common as some people believe and they are generally of limited complexity due to the difficulty people have in keeping secrets. An example of this is some of the various conspiracy theories about storage of politicians’ email. The simplest explanation (for politicians of all parties) is that they tell someone like me to “just make the email work” and if their IT staff doesn’t push back and refuse to do it without all issues being considered then it’s the IT staff at fault. Stupidity explains many things better than conspiracies. Regardless of the party affiliation, any time a politician is accused of poor computer security I’ll ask whether someone like me did their job properly.

Covering for Nazis

Decent people have to oppose Nazis. The Nazi belief system is based on the mass murder of people based on race and the murder of people who disagree with them. In Germany in the 1930s there were some people who could claim not to know about the bad things that Nazis were doing, and they could claim to only support Nazis for other reasons. Neo-Nazis are not about creating car companies like Volkswagen; all they are about is hatred. The crimes of the original Nazis are well known and well documented; it’s not plausible that anyone could be unaware of them.

Mitch McConnell has clearly stated “There are no good neo-Nazis” [12] in clear opposition to Trump. While I disagree with Mitch on many issues, this is one thing we can agree on. This is what decent people do, they work together with people they usually disagree with to oppose evil. Anyone who will support Nazis out of tribal loyalty has demonstrated the type of person they are.

Here is an article about the alt-right meeting to celebrate Trump’s victory where Richard Spencer said “hail Trump, hail our people, hail victory” while many audience members give the Nazi salute [13]. You can skip to 42 seconds in if you just want to see that part. Trump supporters try to claim it’s the “Roman salute”, but that’s not plausible given that there’s no evidence of Romans using such a salute and it was first popularised in Fascist Italy [14]. The Wikipedia page for the Nazi Salute [15] notes that saying “hail Hitler” or “hail victory” was standard practice while giving the salute. I think that it’s ridiculous to claim that a group of people offering the Hitler salute while someone says “hail Trump” and “hail victory” are anything but Nazis. I also think it’s ridiculous to claim to not know of any correlation between the alt-right and Nazis and then immediately know about the “Roman Salute” defence.

The Americans used to have a salute that was essentially the same as the Nazi salute; the Bellamy Salute was officially replaced by the hand-over-heart salute in 1942 [16]. They don’t want anything close to a Nazi salute, and no-one did until very recently when neo-Nazis stopped wearing Klan outfits in the US.

Every time someone makes claims about a supposed “Roman salute” explanation for Richard Spencer’s fans I wonder if they are a closet Nazi.

Anti-Semitism

One final note: I don’t debate people who are open about neo-Nazi beliefs. When someone starts talking about a “Jewish Conspiracy” or uses other Nazi phrases, the conversation is over. Nazis should be shunned. One recent conversation with a Trump supporter ended quickly after he started talking about a “Jewish conspiracy”. He tried to get me back into the debate by claiming “there are non-Jews in the conspiracy too”, but I was already done with him.

Decent Trump Supporters

If you want me to believe that you are one of the decent Trump supporters a good way to start is to disclaim the horrible ideas that other Trump supporters endorse. If you can say “I believe that black people and Jews are my equal and I will not stand next to or be friends with anyone who carries a Nazi flag” then we can have a friendly discussion about politics. I’m happy to declare “I have never supported a Bolshevik revolution or the USSR and will never support such things” if there is any confusion about my ideas in that regard. While I don’t think any reasonable person would think that I supported the USSR I’m happy to make my position clear.

I’ve had people refuse to disclaim racism when asked. If you can’t clearly say that you consider people of other races to be your equal then everyone will think that you are racist.


Lev Lafayette: Exploring Issues in Event-Based HPC Cloudbursting

Sat, 2018-09-01 00:06

The use of cloud compute, especially in proportion to single-node tasks, provides a more effective allocation of financial resources. The introduction of cloud-bursting to scheduling systems could ideally provide on-demand compute resources for High Performance Computing (HPC) systems, where queue wait-times are a source of user consternation.

Using experiential examples of Slurm's cloudbursting capability (an extension of the scheduler's power management features), initial successes and bug discoveries highlight the problems of replication and latency that limit the scope of cloudbursting. Nevertheless, under such circumstances wrapper scripts for particular subsets of jobs are still considered viable; an example of this approach is indicated by MOAB/NODUS.

A presentation to HPC:AI 2018 Perth Conference

Matthew Oliver: Keystone Federated Swift – Multi-region cluster, multiple federation, access same account

Fri, 2018-08-31 20:05

Welcome to the final post in the series; it has been a long time coming. If required/requested I’m happy to delve into any of these topics deeper, but I’ll attempt to explain the situation, the best approach to take, and how I got a POC working, which I am calling the brittle method. It definitely isn’t the best approach, but as it was done solely on the Swift side, and as I am an OpenStack Swift dev, it was the quickest and easiest for me when preparing for the presentation.

To first understand how we can build a federated environment where we have access to our account no matter where we go, we need to learn about how keystone authentication works from a Swift perspective. Then we can look at how we can solve the problem.

Swift’s Keystoneauth middleware

As mentioned in earlier posts, there isn’t any magic in the way Swift authentication works. Swift is an end-to-end storage solution, and authentication is handled via authentication middlewares. Further, a single Swift cluster can talk to multiple auth backends, which is where the `reseller_prefix` comes into play. This was the first approach I blogged about in this series.


There is nothing magical about how authentication works; keystoneauth has its own idiosyncrasies, but in general it simply makes a decision whether a request should be allowed. That makes writing your own simple, and maybe an easy way around the problem, i.e. write an auth middleware to auth directly to your existing company LDAP server or authentication system.


To set up keystone authentication, you use keystone’s authtoken middleware, and directly afterwards in the pipeline place the Swift keystoneauth middleware, configuring each of them in the proxy configuration:

pipeline = ... authtoken keystoneauth ... proxy-server

The authtoken middleware

Generally every request to Swift will include a token, unless it’s using tempurl or container-sync, or is to a container that has global read enabled, but you get the point.

As the swift-proxy is a python wsgi app, the request first hits the first middleware in the pipeline (left most) and works its way to the right. When it hits the authtoken middleware, the token in the request will be sent to keystone to be authenticated.

The resulting metadata, i.e. the user, storage_url, groups, roles etc., is dumped into the request environment, which is then passed to the next middleware: the keystoneauth middleware.

The keystoneauth middleware

The keystoneauth middleware checks the request environment for the metadata dumped by the authtoken middleware and makes a decision based on that. Things like:

  • If the token was one for one of the reseller_admin roles, then they have access.
  • If the user isn’t a swift user of the account/project the request is for, is there an ACL that will allow it.
  • If the user has a role that identifies them as a swift user/operator of this Swift account then great.


When checking to see if the user has access to the given account (Swift account), it needs to know what account the request is for. This is easily determined, as it’s defined by the path of the URL you’re hitting. The URL you send to the Swift proxy is what we call the storage url, and it is in the form of:

http(s)://<url of proxy or proxy vip>/v1/<account>/<container>/<object>

The container and object elements are optional, as it depends on what you’re trying to do in Swift. When the keystoneauth middleware is authenticating, it’ll check that the project_id (or tenant_id) metadata dumped by authtoken, when concatenated with the reseller_prefix, matches the account in the given storage_url. For example, let’s say the following metadata was dumped by authtoken:

{
  "X_PROJECT_ID": "abcdefg12345678",
  "X_ROLES": "swiftoperator",
  ...
}

And say the reseller_prefix for keystone auth was AUTH_, and we make any member of the swiftoperator role (in keystone) a swift operator (a swift user on the account). Then keystoneauth would allow access if the account in the storage URL matched AUTH_abcdefg12345678.
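To make that concrete, a container listing against that account would look something like this (the token value and container name here are made up):

$ curl -i -H "X-Auth-Token: <token from keystone>" http://swiftproxy:8080/v1/AUTH_abcdefg12345678/mycontainer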


When you authenticate to keystone, the object storage endpoint will point not only to the Swift endpoint (the swift proxy or swift proxy load balancer), but will also include your account, based on your project_id. More on this soon.


Does that make sense? Simply put, to use keystoneauth in a multi-federated environment, we just need to make sure that no matter which keystone we end up using, asking for the swift endpoint always returns the same Swift account name.

And there lies our problem: the keystone object storage endpoint and the metadata authtoken dumps both use the project_id/tenant_id. This isn’t something that is synced or can be passed via federation metadata.

NOTE: This also means that you’d need to use the same reseller_prefix on all keystones in every federated environment. Otherwise the accounts won’t match.


Keystone Endpoint and Federation Side

When you add an object storage endpoint in keystone, for swift, the url looks something like:

http://swiftproxy:8080/v1/AUTH_$(tenant_id)s


Notice the $(tenant_id)s at the end? This is a placeholder that keystone internally will replace with the tenant_id of the project you authenticated as. $(project_id)s can also be used and maps to the same thing. And this is our problem.
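As an aside, such an endpoint is normally registered with something like the following (a sketch; the region is an assumption, and the URL is quoted so the shell doesn’t eat the placeholder):

$ openstack endpoint create --region RegionOne object-store public 'http://swiftproxy:8080/v1/AUTH_$(tenant_id)s'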

When setting up federation between keystones (assuming keystone-to-keystone federation) you generate a mapping. This mapping can include the project name, but not the project_id. These ids are auto-generated, not deterministic by name, so creating the same project on different federated keystone servers will result in different project_ids. When a keystone service provider (SP) federates with a keystone identity provider (IdP), the mapping they share shows how the provider should map federated users locally. This includes creating a shadow project if a project doesn’t already exist for the federated user to be part of.

Because there is no way to sync project_ids in the mapping, the SP will create the project, which will have a unique project_id. This means that when the federated user has authenticated, their Swift storage endpoint from keystone will be different; in essence, as far as Swift is concerned, they will have access, but to a completely different Swift account. Let’s use an example: say there is a project on the IdP called ProjectA.

     project_name  project_id
IdP  ProjectA      75294565521b4d4e8dc7ce77a25fa14b
SP   ProjectA      cb0d5805d72a4f2a89ff260b15629799

Here we have a ProjectA on both the IdP and the SP. The one on the SP would be considered a shadow project, to map the federated user to. However the project_ids are different, because they are uniquely generated when the project is created in each keystone environment. Taking the Object Storage endpoint in keystone as in our example before, we get:


     Object Storage Endpoint
IdP  http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b
SP   http://swiftproxy:8080/v1/AUTH_cb0d5805d72a4f2a89ff260b15629799

So when talking to Swift you’ll be accessing different accounts, AUTH_75294565521b4d4e8dc7ce77a25fa14b and AUTH_cb0d5805d72a4f2a89ff260b15629799 respectively. This means objects you write in one federated environment will be placed in a completely different account, so you won’t be able to access them from elsewhere.


Interesting ways to approach the problem

Like I stated earlier, the solution would simply be to always return the same storage URL no matter which federated environment you authenticate to. But how?

  1. Make sure the same project_id/tenant_id is used for _every_ project with the same name, or at least the same name in the domains that the federation mapping maps to. This means direct DB hacking, so not a good solution; we should solve this in code, not make OPs go hack databases.
  2. Have a unique id for projects/tenants that can be synced in federation mapping, and also make this available in the keystone endpoint template mapping, so there is a consistent Swift account to use. Hey, we already have project_id, which meets all the criteria except mapping, so that would be easiest and best.
  3. Use something that _can_ be synced in a federation mapping, like domain and project name. Except these don’t map to endpoint template mappings. But with a bit of hacking that should be fine.

Of the above approaches, 2 would be the best. 3 is good, except that if you pick something mutable like the project name and ever change it, you’d now authenticate to a completely different swift account, meaning you’d have just lost access to all your old objects! You may also find yourself with grumpy Swift Ops who now need to do a potentially large data migration, or you’d be forced to never change your project name.

Option 2, being unique, won’t change, though it doesn’t look like a very memorable name if you’re using the project id. Maybe you could offer people a more memorable immutable project property to use. But to keep the change simple, being able to simply sync the project_id should get us everything we need.


When I was playing with this it was for a presentation, so I had a time limit, a very strict one. Being a Swift developer and knowing the Swift code base, I hacked together a variant on option 3 that didn’t involve hacking keystone at all. Why? Because I needed a POC and didn’t want to spend most of my time figuring out the inner workings of Keystone when I could just do a few hacks to have a complete Swift-only version. And it worked. Though I wouldn’t recommend it; option 3 is very brittle.


The brittle method – Swift only side – Option 3b

Because I didn’t have time to simply hack keystone, I took a different approach. The basic idea was to let authtoken authenticate, and then finish building the storage URL on the Swift side using the metadata authtoken dumps into the wsgi request environment, thereby modifying the way keystoneauth authenticates slightly.

Step 1 – Give the keystoneauth middleware the ability to complete the storage url

By default we assume the incoming request will point to a complete account, meaning the object storage endpoint in keystone will end with something like:

'<uri>/v1/AUTH_%(tenant_id)s'

So let’s enhance keystoneauth with the ability to complete the account itself when given only the reseller_prefix. To do this I added a use_dynamic_reseller option.

If you enable use_dynamic_reseller, then the keystoneauth middleware will pull the project_id from authtoken‘s metadata dumped in the wsgi environment. This allows a simplified keystone endpoint in the form:

'<uri>/v1/AUTH_'

This shortcut makes configuration easier, but can only be reliably used when you are accessing your own account and providing a token. API elements like tempurl and public containers need the full account in the path.
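
As a rough illustration of the behaviour (again a sketch under the same assumptions as above, not the actual patch):

# Sketch of use_dynamic_reseller: if the path carries only the bare
# reseller prefix, complete the account from the project ID that
# authtoken dumped into the WSGI environment.
def resolve_account(environ, reseller_prefix='AUTH_',
                    use_dynamic_reseller=False):
    parts = environ.get('PATH_INFO', '').lstrip('/').split('/')
    account = parts[1] if len(parts) > 1 else ''
    if use_dynamic_reseller and account == reseller_prefix:
        account += environ.get('HTTP_X_PROJECT_ID', '')
    return account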

This still uses project_id, so it doesn’t solve our problem, but it meant I could get rid of the $(tenant_id)s from the endpoints. Here is the commit in my github fork.

Step 2 – Extend the dynamic reseller to complete the storage URL with names

Next, we extend the keystoneauth middleware a little further. Give it another option, use_dynamic_reseller_name, to complete the account with either project_name, or domain_name and project_name, but only if you’re using keystone authentication version 3.

If you are, and you want an account based on the name of the project, then you can enable use_dynamic_reseller_name in conjunction with use_dynamic_reseller to do so. The form used for the account would be:

<reseller_prefix><project_domain_name>_<project_name>

So using our earlier example, with a reseller_prefix of AUTH_, a project_domain_name of Domain and a project name of ProjectA, this would generate the account:

AUTH_Domain_ProjectA
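
The account construction itself is trivial. Here is a sketch, assuming the X-Project-Name and X-Project-Domain-Name headers that authtoken sets when validating a v3 token:

# Sketch of the use_dynamic_reseller_name account construction.
def account_from_names(environ, reseller_prefix='AUTH_'):
    domain = environ.get('HTTP_X_PROJECT_DOMAIN_NAME', '')
    project = environ.get('HTTP_X_PROJECT_NAME', '')
    return '%s%s_%s' % (reseller_prefix, domain, project)

# e.g. domain 'Domain' + project 'ProjectA' -> 'AUTH_Domain_ProjectA'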

This patch is also in my github fork.
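
For completeness, wiring both options into the proxy would look something like the following. This is a sketch only: the option names come from the patches above, and the rest is a standard keystone-enabled proxy-server.conf filter section.

[filter:keystoneauth]
use = egg:swift#keystoneauth
reseller_prefix = AUTH_
use_dynamic_reseller = true
use_dynamic_reseller_name = true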

Does this work? Yes! But as I’ve already mentioned, it is _very_ brittle. It also makes it confusing to know when you need to provide only the reseller_prefix and when you need your full account name. It would be so much easier to just extend keystone to sync and create shadow projects with the same project_id. Then everything would just work, without hacking.

Clinton Roy: Moving to Melbourne

Fri, 2018-08-31 18:00

Now that the paperwork has finally all been dealt with, I can announce that I’ll be moving down to Melbourne to take up a position with the Australian Synchrotron, basically a super duper x-ray machine used for research of all types. My official position is Senior Scientific Software Engineer. I’ll be moving down to Melbourne shortly, staying with friends (you remember that offer you made, months ago?) until I find a rental near Monash Uni, Clayton.

I will be leaving behind Humbug, the computer group that basically opened up my entire career, and The Edge, SLQ, my home-away-from-home study. I do hope to be able to find replacements for these down south.

I’m looking at having a small farewell nearby soon.

A shout out to Netbox Blue for supplying all my packing boxes. Allll of them.

OpenSTEM: This Week in Australian History

Fri, 2018-08-31 16:06
The end of August and beginning of September is traditionally linked to the beginning of Spring in Australia, although the change in seasons is experienced in different ways in different parts of the country and was marked in locally appropriate ways by Aboriginal people. As a uniquely Australian celebration of Spring, National Wattle Day, celebrated […]

Pia Waugh: Mā te wā, Aotearoa

Fri, 2018-08-31 12:01

Today I have some good news and sad news. The good news is that I’ve been offered a unique chance to drive “Digital Government” Policy and Innovation for all of government, an agenda including open government, digital transformation, technology, open and shared data, information policy, gov as a platform, public innovation, service innovation and policy innovation. For those who know me, these are a few of my favourite things.

The sad news, for some folk anyway, is I need to leave New Zealand Aotearoa to do it.

Over the past 18 months (has it only been that long!) I have been helping create a revolutionary new way of doing government. We have established a uniquely cross-agency funded and governed all-of-government function, a “Service Innovation Lab”, for collaborating on the design and development of better public services for New Zealand. By taking a “life journey” approach, government agencies have a reason to work together to improve the full experience of people rather than the usual (and natural) focus on a single product, service or portfolio. The Service Innovation Lab has a unique value in providing an independent place and way to explore design-led and evidence-based approaches to service innovation, in collaboration with service providers across public, private and non-profit sectors. You can see everything we’ve done over the past year here, and from the first 10-week experiment here. I have particularly enjoyed working with and exploring the role of the Citizen Advice Bureau in New Zealand as a critical and trusted service delivery organisation in New Zealand. I’m also particularly proud of both our work in exploring optimistic futures as a way to explore what is possible, rather than just iterate away from pain, and our exploration of better rules for government including legislation as code. The next stage for the Lab is very exciting! As you can see in the 2017-18 Final Report, there is an ambitious work programme to accelerate the delivery of more integrated and more proactive services, and the team is growing with new positions opening up for recruitment in the coming weeks!

Please see the New Zealand blog (which includes my news) here.

Professionally, I get most excited about system transformation. Everything we do in the Lab is focused on systemic change, and it is doing a great job at having an impact on the NZ (and global) system around it, especially for its size. But a lot more needs to be done to scale both innovation and transformation. Personally, I have a vision for a better world where all people have what they need to thrive, and I feel a constant sense of urgency in transitioning our public institutions into the 21st century, from an industrial age to the information age, so they can more effectively support society as the speed of change and complexity exponentially grows. This is going to take a rethink of how the entire system functions, especially at the policy and legislative levels.

With this in mind, I have been offered an extraordinary opportunity to explore and demonstrate systemic transformation of government. The New South Wales Department of Finance, Services and Innovation (NSW DFSI) has offered me the role of Executive Director for Digital Government, a role responsible for the all-of-government policy and innovation for ICT, digital, open, information, data, emerging tech and transformation, including a service innovation lab (DNA). This is a huge opportunity to drive systemic transformation as part of a visionary senior leadership team with Martin Hoffman (DFSI Secretary) and Greg Wells (GCIDO). I am excited to be joining NSW DFSI, and the many talented people working in the department, to make a real difference for the people of NSW. I hope our work and example will help raise the bar internationally for the digital transformation of governments for the benefit of the communities we serve.

Please see the NSW Government media release here.

One of the valuable lessons from New Zealand that I will be taking forward in this work has been in how public services can (and should) engage constructively and respectfully with Indigenous communities, not just because they are part of society or because it is the right thing to do, but to integrate important principles and context into the work of serving society. Our First Australians are the oldest cluster of cultures in the world, and we have a lot to learn from them in how we live and work today.

I want to briefly thank the Service Innovation team, each of whom is utterly brilliant and inspiring, as well as the wonderful Darryl Carpenter and Karl McDiarmid for taking that first leap into the unknown to hire me and see what we could do. I think we did well. I’m delighted that Nadia Webster will be taking over leading the Lab work and has an extraordinary team to take it forward. I look forward to collaborating between New Zealand and New South Wales, and a race to the top for more inclusive, human-centred, digitally enabled and values-driven public services.

My last day at the NZ Government Service Innovation Lab is the 14th September and I start at NSW DFSI on the 24th September. We’ll be doing some last celebratory drinks on the evening of the 13th September so hold the date for those in Wellington. For those in Sydney, I can’t wait to get started and will see you soon!