Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Colin Charles: (tweet) Summary of Percona Live 2015

Fri, 2016-04-08 20:25

The problem with Twitter is that we talk about something and before you know it, people forget. (e.g. does WebScaleSQL have an async client library?) How many blog posts are there about Percona Live Santa Clara 2015? This time (2016), I’m going to endeavour to write more than just tweet – I want to remember this stuff, and search the archives (and also note the changes that happen in this ecosystem). And maybe you do too. So look forward to more blogs from the Percona Live Data Performance Conference 2016. In the meantime, here are the tweets in chronological order from my Twitter search.

  • crowd filling up the keynote room for #perconalive
  • beginning shortly, we’ll see @peterzaitsev at #perconalive doing his keynote
  • #perconalive has over 1,200 attendees – oracle has 20 folk, with 22 folk from facebook
  • #perconalive is going to be in Amsterdam sept 21-22 2015 (not in London this year). And in 2016, April 18-21!
  • We have @PeterZaitsev on stage now at #perconalive
  • 5 of the 5 top websites are powered by MySQL – an Oracle ad – alexa rankings? http://www.alexa.com/topsites #perconalive
  • now we have Harrison Fisk on polyglot persistence at facebook #perconalive
  • make it work / make it fast / make it efficient – the facebook hacker way #perconalive
  • a lot of FB innovation goes into having large data sizes with short query time response #perconalive
  • “small data” to facebook? 10’s of petabytes with <5ms response times. and yes, this all sits in mysql #perconalive
  • messages eventually lands in hbase for long term storage for disk #perconalive they like it for LSM
  • Harrison introduces @RocksDB to be fast for memory/flash/disk, and its also LSM based. Goto choice for 100’s of services @ FB #perconalive
  • Facebook Newsfeed is pulled from RocksDB. 9 billion QPS at peak! #perconalive
  • Presto works all in memory on a streaming basis, whereas Hive uses map/reduce. Queries are much faster in Presto #perconalive
  • Scuba isn’t opensource – real time analysis tool to debug/understand whats going on @ FB. https://research.facebook.com/publications/456106467831449/scuba-diving-into-data-at-facebook/ … #perconalive
  • InnoDB as a read-optimized store and RocksDB as a write-optimized store — so RocksDB as storage engine for MySQL #perconalive
  • Presto + MySQL shards is something else FB is focused on – in production @ FB #perconalive
  • loving the woz keynote @ #perconalive – wondering if like apple keynotes, we’ll see a “one more thing” after this ;)
  • “i’m only a genius at one thing: that’s making people think i’m a genius” — steve wozniak #perconalive
  • Happiness = Smiles – Frowns (H=S-F) & Happiness = Food, Fun, Friends (H=F³) Woz’s philosophy on being happy + having fun daily #perconalive
  • .@Percona has acquired @Tokutek in a move that provides some consolidation in the MySQL database market and takes..
  • MySQL Percona snaps up Tokutek to move onto MongoDB and NoSQL turf http://zd.net/1ct6PEI by @wolpe
  • One more thing – congrats @percona @peterzaitsev #perconalive Percona has acquired Tokutek with storage engines for MySQL & MongoDB – @PeterZaitsev #perconalive
  • Percona is now a player in the MongoDB space with TokuMX! #perconalive
  • The tokumx mongodb logo is a mongoose… #perconalive Percona will continue to support TokuDB/TokuMX to customers + new investments in it
  • @Percona “the company driving MySQL today” and “the brains behind MySQL”. New marketing angle? http://www.datanami.com/2015/04/14/mysql-leader-percona-takes-aim-at-mongodb/ …
  • We have Steaphan Greene from @facebook talk about @WebScaleSQL at #perconalive
  • what is @webscalesql? its a collaboration between Alibaba, Facebook, Google, LinkedIn, and Twitter to hack on mysql #perconalive
  • close collaboration with @mariadb @mysql @percona teams on @webscalesql. today? upstream 5.6.24 today #perconalive
  • whats new in @WebScaleSQL ? asynchronous mysql client, with support from within HHVM, from FB & LinkedIn #perconalive
  • smaller @webscalesql change (w/big difference) – lower innodb buffer pool memory footprint from FB & Google #perconalive
  • reduce double-write mode while still preserving safety. query throttling, server side statement timeouts, threadpooling #perconalive
  • logical readahead to make full table scans as much as 10x faster. @WebScaleSQL #perconalive
  • whats coming to @WebScaleSQL – online innodb defragmentation, DocStore (JSON style document database using mysql) #perconalive
  • MySQL & RocksDB coming to @WebScaleSQL thanks to facebook & @MariaDB #perconalive
  • So, @webscalesql will skip 5.7 – they will backport interesting features into the 5.6 branch! #perconalive
  • likely what will be next to @webscalesql ? will be mysql-5.8, but can’t push major changes upstream. so might not be an option #perconalive
  • Why only minor changes from @WebScaleSQL to @MySQL upstream? #perconalive
  • Only thing not solved with @webscalesql & upstream @mysql – the Contributor license agreement #perconalive
  • All @WebScaleSQL features under Apache CCLA if oracle can accept it. Same with @MariaDB @percona #perconalive
  • Steaphan Greene says tell Oracle you want @webscalesql features in @mysql. Pressure in public to use the Apache CLA! #perconalive
  • We now have Patrik Sallner CEO from @MariaDB doing the #perconalive keynote ==> 1+1 > 2 (the power of collaboration)
  • “contributors make mariadb” – patrik sallner #perconalive
  • Patrik Sallner tells the story about the CONNECT storage engine and how the retired Olivier Bertrand writes it #perconalive
  • Google contributes table/tablespace encryption to @MariaDB 10.1 #perconalive
  • Patrik talks about the threadpool – how #MariaDB made it, #Percona improved it, and all benefit from opensource development #perconalive
  • and now we have Tomas Ulin from @mysql @oracle for his #perconalive keynote
  • 20 years of MySQL. 10 years of Oracle stewardship of InnoDB. 5 years of Oracle stewardship of @MySQL #perconalive
  • Tomas Ulin on the @mysql 5.7 release candidate. It’s gonna be a great release. Congrats Team #MySQL #perconalive
  • MySQL 5.7 has new optimizer hint frameworks. New cost based optimiser. Generated (virtual) columns. EXPLAIN for running thread #perconalive
  • MySQL 5.7 comes with the query rewrite plugin (pre/post parse). Good for ORMs. “Eliminates many legacy use cases for proxies” #perconalive
  • MySQL 5.7 – native JSON datatypes, built-in JSON functions, JSON comparator, indexing of documents using generated columns #perconalive
  • InnoDB has native full-text search including full CJK support. Does anyone know how FTS compares to MyISAM in speed? #perconalive
  • MySQL 5.7 group replication is unlikely to make it into 5.7 GA. Designed as a plugin #perconalive
  • Robert Hodges believes more enterprises will use MySQL thanks to the encryption features (great news for @mariadb) #perconalive
  • Domas on FB Messenger powered by MySQL. Goals: response time, reliability, and consistency for mobile messaging #perconalive
  • FB Messenger: Iris (in-memory pub-sub service – like a queue with cache semantics). And MySQL as persistence layer #perconalive
  • FB focuses on tiered storage: minutes (in memory), days (flash) and longterm (on disks). #perconalive
  • Gotta keep I/O devices for 4-5 years, so don’t waste endurance capacity of device (so you don’t write as fast as a benchmark) #perconalive
  • Why MySQL+InnoDB? B-Tree: cheap overwrites, I/O has high perf on flash, its also quick and proven @ FB #perconalive
  • What did FB face as issues to address with MySQL? Write throughput. Asynchronous replication. and Failover time. #perconalive
  • HA at Facebook: <30s failover, <1s switchover, > 99.999% query success rate
  • Learning a lot about LSM databases at Facebook from Yoshinori Matsunobu – check out @rocksdb + MyRocks https://github.com/MySQLOnRocksDB/mysql-5.6 …
  • The #mysqlawards 2015 winners #PerconaLive
  • Percona has a Customer Advisory Board now – Rob Young #perconalive
  • craigslist: mysql for active, mongodb for archives. online alter took long. that’s why @mariadb has https://mariadb.com/kb/en/mariadb/progress-reporting/ … #perconalive
  • can’t quite believe @percona is using db-engines rankings in a keynote… le sigh #perconalive
  • “Innovation distinguishes between a leader and a follower” – Steve Jobs #perconalive
  • Percona TokuDB: “only alternative to MySQL + InnoDB” #perconalive
  • “Now that we have the rights to TokuDB, we can add all the cool features ontop of Percona XtraDB Cluster (PXC)” – Rob Young #perconalive
  • New Percona Cloud Tools. Try it out. Helps remote DBA/support too. Wonder what the folk at VividCortex are thinking about now #perconalive
  • So @MariaDB isn’t production ready FOSS? I guess 3/6 top sites on Alexa rank must disagree #perconalive
  • Enjoying Encrypting MySQL data at Google by @jeremycole & Jonas — you can try this in @mariadb 10.1.4 https://mariadb.com/kb/en/mariadb/mariadb-1014-release-notes/ … #perconalive
  • google encryption: mariadb uses the api to have a plugin to store the keys locally; but you really need a key management server #perconalive
  • Google encryption: temporary tables during query execution for the Aria storage engine in #MariaDB #perconalive
  • find out more about google mysql encryption — https://code.google.com/p/google-mysql/ or just use it at 10.1.4! https://downloads.mariadb.org/mariadb/10.1.4/ #perconalive
  • Encrypting MySQL data at Google – Percona Live 2015 #perconalive http://wp.me/p5WPkh-5F
  • The @WebScaleSQL goals are still just to provide access to the code, as opposed to supporting it or making releases #perconalive
  • There is a reason DocStore & Oracle/MySQL JSON 5.7 – they were designed together. But @WebScaleSQL goes forward with DocStore #perconalive
  • So @WebScaleSQL will skip 5.7, and backport things like live resize of the InnoDB buffer pool #perconalive
  • How to view @WebScaleSQL? Default GitHub branch is the active one. Ignore -clean branches, just reference for rebase #perconalive
  • All info you need should be in the commit messages @WebScaleSQL #perconalive
  • Phabricator is what @WebScaleSQL uses as a code review system. All diffs are public, anyone can follow reviews #perconalive
  • automated testing with jenkins/phabricator for @WebScaleSQL – run mtr on ever commit, proposed diffs, & every night #perconalive
  • There is feature documentation, and its a work in progress for @WebScaleSQL. Tells you where its included, etc. #perconalive
  • Checked out the new ANALYZE statement feature in #MariaDB to analyze JOINs? Sergei Petrunia tells all #perconalive https://mariadb.com/kb/en/mariadb/analyze-statement/ …

Rusty Russell: Bitcoin Generic Address Format Proposal

Fri, 2016-04-08 12:29

I’ve been implementing segregated witness support for c-lightning; it’s interesting that there’s no address format for the new form of addresses.  There’s a segregated-witness-inside-p2sh which uses the existing p2sh format, but if you want raw segregated witness (which is simply a “0” followed by a 20-byte or 32-byte hash), the only proposal is BIP142 which has been deferred.

If we’re going to have a new address format, I’d like to make the case for shifting away from bitcoin’s base58 (eg. 1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2):

  1. base58 is not trivial to parse.  I used the bignum library to do it, though you can open-code it as bitcoin-core does (see the sketch after this list).
  2. base58 addresses are variable-length.  That makes web forms and software mildly harder to write, and also eliminates a simple sanity check.
  3. base58 addresses are hard to read over the phone.  Greg Maxwell points out that the upper and lower case mix is particularly annoying.
  4. The 4-byte SHA check is not guaranteed to catch the most common forms of error (transposed or single incorrect letters), though it’s pretty good (a 1 in 4 billion chance of a random error passing).
  5. At around 34 letters, it’s fairly compact (36 for the BIP141 P2WPKH).
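
To illustrate point 1, here is a minimal sketch of base58 decoding which leans on Python’s native big integers in place of a bignum library; it is illustrative only, not a drop-in for bitcoin-core’s open-coded version, and it does not verify the checksum.

    # Minimal base58 decode using Python's arbitrary-precision integers in place
    # of an explicit bignum library (illustrative sketch only, no checksum check).
    B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58_decode(addr: str) -> bytes:
        n = 0
        for ch in addr:
            n = n * 58 + B58_ALPHABET.index(ch)   # raises ValueError on an invalid letter
        pad = len(addr) - len(addr.lstrip("1"))   # each leading '1' encodes a leading zero byte
        body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
        return b"\x00" * pad + body

    # A well-known P2PKH address: decodes to version byte + 20-byte hash + 4-byte check.
    print(base58_decode("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa").hex())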

This is my proposal for a generic replacement (thanks to CodeShark for generalizing my previous proposal) which covers all possible future address types (as well as being usable for current ones):

  1. Prefix for type, followed by colon.  Currently “btc:” or “testnet:”.
  2. The full scriptPubkey using base 32 encoding as per http://philzimmermann.com/docs/human-oriented-base-32-encoding.txt.
  3. At least 30 bits of crc64-ecma, rounded up to a multiple of 5 bits to reach a letter boundary.  This covers the prefix (as ascii) plus the scriptPubKey.
  4. The final letter is the Damm algorithm check digit of the entire previous string, using this 32-way quasigroup. This protects against single-letter errors as well as single transpositions.

These addresses look like btc:ybndrfg8ejkmcpqxot1uwisza345h769ybndrrfg (41 digits for a P2WPKH) or btc:yybndrfg8ejkmcpqxot1uwisza345h769ybndrfg8ejkmcpqxot1uwisza34 (60 digits for a P2WSH) (note: neither of these has the correct CRC or check letter, I just made them up).  A classic P2PKH would be 45 digits, like btc:ybndrfg8ejkmcpqxot1uwisza345h769wiszybndrrfg, and a P2SH would be 42 digits.
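
As a rough sketch of steps 1 and 2 only (the crc64-ecma letters and the Damm check letter are left out), the snippet below encodes a scriptPubkey with the human-oriented base-32 alphabet; the scriptPubkey layout used here (version byte, push opcode, all-zero 20-byte hash) is an assumption purely for demonstration.

    # Steps 1-2 of the proposal: prefix plus base-32 encoding of the scriptPubkey.
    # The crc64-ecma letters (step 3) and the Damm check letter (step 4) are not
    # implemented here.
    ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"   # human-oriented base-32 alphabet

    def zb32_encode(data: bytes) -> str:
        """Pack bytes into 5-bit groups, most significant bits first."""
        acc, nbits, out = 0, 0, []
        for byte in data:
            acc = (acc << 8) | byte
            nbits += 8
            while nbits >= 5:
                nbits -= 5
                out.append(ZB32[(acc >> nbits) & 0x1F])
        if nbits:                                # pad the final partial group
            out.append(ZB32[(acc << (5 - nbits)) & 0x1F])
        return "".join(out)

    # Hypothetical P2WPKH scriptPubkey: a "0", a 20-byte push, then an all-zero hash.
    spk = bytes([0x00, 0x14]) + bytes(20)
    print("btc:" + zb32_encode(spk))   # CRC letters and Damm check digit still to append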

While manually copying addresses is something which should be avoided, it does happen, and the cost of making them robust against common typographic errors is small.  The CRC is a good idea even for machine-based systems: it will let through fewer than 1 in a billion mistakes.  Distinguishing which blockchain the address belongs to is a nice catch-all for mistakes, too.

We can, of course, bikeshed this forever, but I wanted to anchor the discussion with something I consider fairly sane.

Kristy Wagner: Panama Papers – what does it mean for me?

Thu, 2016-04-07 11:26

It is just a little insane that, in the process of setting up this web site, my motivation to launch it with a ‘real’ blog post simply had not transpired.  Then along came the Panama Papers, which took me past the point of wanting to write to simply wanting to bury myself in research and never resurface.  Data and I have a very loving relationship: I love to surround myself in it and it loves to share its secrets with me.

So, I buried myself and looked at all the tattles on big business and large personalities, and woke up this morning with two questions:

  • How did they manage to mine what they have and how much did they just hand over on platters to government agencies?
  • What does this mean to me?

The former is already being unveiled in the media.  So, I thought I would address the latter.  What do you think this means for you?

Well, the times are changing and, if it is successful in protecting its source, The International Consortium of Investigative Journalists has opened up a brand spanking new precedent for whistle-blowers.  The survival of this whistle-blower is now completely dependent on not having given away their identity in their day-to-day interactions; dare a bead of sweat even think about forming near an eyebrow and their fate shall likely be sealed, along with that of anyone else with a vaguely guilty conscience.  Some of the ‘powers’ named in the documents have the power to make you disappear and have no issue reaching onto foreign soil to achieve that goal.

So whilst they are on the chase, we innocents stand by and watch the ‘crazy’ unfold.  In other countries with greater question marks over political ethics there is going to be political change; in some countries there shall be protest and the destabilisation of leaders, in others solidarity as the political spin continues.  Here in Australia, though, it comes down to a bunch of corporations and about 800 individuals.  In the scheme of things, without impact figures, this seems relatively small.  Only time will reveal the size of the tax pie that got carved up by offshore money movements.

Spineless ATO settlements

As the Australian Tax Office (ATO) tracks down these individuals and hits them up for tax evasion, we are going to see many, many quiet settlements.  To avoid the difficulties and costs of due Court process, the ATO is known to take settlements at as little as one tenth of the known debt, including historic debts by organisations now named as being associated with the Panama Papers.  We are going to see a lot of these, but don’t hold your breath: it will be some time before they start building cases.  And because they settle, these people are rarely seen to face non-financial punishments such as actual gaol time.  I hope someone in political oversight gives the ATO a giant kick up the pants and expects every one of these people to be prosecuted to the full extent of the law.  But hope rarely equates to action in political spheres.

More Leaks

Once one person gets away with whistleblowing, others follow suit, somewhat like media attention on suicides.  Being free of offshore banking set-ups doesn’t mean you are off the hook.  If your actions are shady then know that even a sniff of it from others may lead to you being caught.  Are you shuffling money, paying family members who are not actual employees, or claiming a raft of false invoices in your tax deductions?  Yep, if you are then you are now more likely to have someone pull the rug out from under you, and it is going to be someone you know.

A company that holds nearly nothing on the public web has lost over two terabytes of data; what can someone with system access do to you?  My first suggestion is to clean up your act and confess any sins before you get caught, lest you be left at the mercy of others….

Sorry, I trailed off topic, reminiscing about the scene in Ocean’s 11 where Andy Garcia’s character, Terry Benedict, throws a tantrum demanding that staff find out how Ocean’s 11 hacked into his system.  Whistle-blowers are going to do it if they are given strong enough encouragement and they suspect they know what they might find.

Dark Web Understanding

Disclosure of how the journalists went about mining this giant slab of data and communicating about it securely across international boundaries is going to give way to a better understanding of the Dark Web.  Suddenly the fear associated with secure anonymised communication is going to make sense as a safe haven for those pursuing truth (and not just those peddling unsavoury wares).  I think somewhere in the last 3-5 years we forgot about those proxies set up so that people in foreign countries could share what is ‘really’ going on with the rest of the world.  I believe that as this story unfolds people will come to understand the differences between the dark web, the deep web and the internet and, rather than fear them, will simply accept each for what it is, knowing that all three are destined to evolve over time.

Closing Loopholes

The Australian Government can’t change international law or the laws governing other sovereign nations, so what will they do to close loopholes?  The easiest way is to tax the shit out of every dollar that heads offshore; some of that can be controlled in the first instance, but other, shadier means will be found around it.  The way around that is to make moving money offshore more expensive than keeping it here.  Some of the obvious tax avoidance avenues that will need to be closed through taxation are:

  • dividends payments to foreign shareholders,
  • payments to benefactors of trusts who are not paying tax in Australia on the amount,
  • revenues paid back into foreign parent/holding companies,
  • fees paid on goods and services, possibly inflated or non-substantiable items, also through tariffs and duties, and
  • taxation on loans taken by Australian companies from overseas sources, or the limitation of interest deductibility from the same, including from parent/holding companies.

These are just the ones that spring to mind and the list is in no way comprehensive.  The key thing, though, is that such measures would make our economy quite protectionist.  (Not that this is necessarily always a bad thing.)  It means that we will see the cost of imports rise, making way for local business to become a preferred provider of products for our citizens, which is awesome, other than for the industries that we have allowed to collapse over the last 30 years.  Implementing protectionist tax reform ought to be worth it too; I actually figure that now would be the time to remove exemptions for mining whilst we are at it.  Everyone knows that the international buy prices charged from Australia to the sister company that sells the product are just another loophole to be closed.  Let’s face it, people still need the commodities that are being ripped from our land.  If we are going to let our resources be removed from our own use, then it should benefit our nation far more than it does now.

In reality, though, implementing the closure of loopholes like those above has some serious economic impacts on international trade and our supply chains.  It is going to take a long time for Cabinet parties to get their heads around the impact, let alone to debate the costs and benefits of such a decision.  Don’t hold your breath for every loophole to be closed, but be sure that we are going to see some changes.  Maybe, though, we can move the pressure off Mum and Dad investors who are using negative gearing to finance their retirement and instead focus on catching the real tax that is being let slide for the benefit of large multinationals and the one-percenters who are prepared to dabble in the potentially unlawful.

What about me?

Well, watch the tax space, and have a chat to people who understand the economy and international trade about what is happening politically.  The means of gap closure will provide more opportunities in our country for us to build economies anew.  It also means that we may see somewhat of a market crash as foreign investors are taxed for taking their money home, but we will find our way through, and the crash could open up the opportunities that Generation Y and I are missing in respect of owning our own homes and securing personal investment in a meaningful way for the first time.

If we closed these gaps quickly enough that businesses did not have the opportunity to jump loopholes, then we would also see a significant revenue impact, which one would hope would be passed on to citizens as investment in education and health.  The potential, though, is quite amazing, because this eye opener is also a good opportunity to revisit the short-sightedness of the sale of public assets.  If we temporarily crashed the market by restricting foreign investment and cutting the ability for foreign ownership by sale, the Commonwealth as well as the States and Territories would have the opportunity to change the way they manage economics and asset allocation.

Also, for business owners, this is potentially a great time to consider your capability to be a provider of goods and services to Government.  There is going to be a counterbalance from this event which may just put your tender into the preference pool, because as the owner you have a visible Australian face that pays tax right here in Australia.  Right now, that is a really good thing for you.

I am not a bad guy, but I do have offshore funds, what about me?

All I can say is come clean.  Make sure you declare your offshore funds (whether held through Mossack Fonseca or not) to the Australian Tax Office, and make sure you take advice directly from them on the tax liabilities you face from your choice of tax minimisation so that you do not cross into the realms of tax avoidance.  If an infraction has already occurred then upfront disclosure is your best option, especially before the ATO pulls your file together.  In respect of this whistle-blower incident, Deputy Commissioner Michael Cranston detailed in an ATO press statement:

“The information we have includes some taxpayers who we have previously investigated, as well as a small number who disclosed their arrangements with us under the Project DO IT initiative. It also includes a large number of taxpayers who haven’t previously come forward, including high wealth individuals, and we are already taking action on those cases.”

So, just come clean.  (By the way, the ATO have already offered to treat you nicely if you do.)

Political Ethics has changed

Just before you start thinking that this is all that is going to happen…ponder this statement, also from Mr Cranston:

“The message is clear – taxpayers can’t rely on these secret arrangements being kept secret and we will act on any information that is provided to us.”

Does anyone else here notice a giant ethical shift from the events of WikiLeaks, where the Government condemned such actions?  In the era of the WikiLeaks controversies, Governments were scared of the capability of insiders to release sensitive data, and they worked on logging and permissioning tools to deter anyone who might be tempted.  But now, what is going on?

Okay, granted, in times of cutbacks it could be questioned whether our political leadership have any ethics at all in respect of where tax dollars are given and taken.  However, the statement above flies in the face of our deontological political comfort zone, where even if a political leader messed up we have been encouraged to judge them on the intent of their actions rather than the outcome.  Now, because the leak is corporate and to the Government’s benefit, the tables have turned to cast the whistle-blower as hero.  The fact that this act would, in Australia, likely be illegal white-collar crime, given the role that Mossack Fonseca played, shows a giant swing of political view that our Government has permitted to come through the taxation office: that now is a time when our Government will embrace consequentialism – that the ‘rightness’ of an act shall be determined by the consequences it produces rather than the morality of the act itself.

How does that play out into politics and society?  That is a whole new conversation for another day.  But if you are keen then share your thoughts on what it will look like in the comments below, along with your thoughts and opinions on how the Panama Papers will affect you.

____

Featured image source: CC-By only: https://www.flickr.com/photos/famzoo/

Kristy Wagner: Hello world!

Thu, 2016-04-07 11:26

Sorry interwebs, I know it was peaceful without me around blogging for the last 5 years, but it is too late for you.  I am back!

New domain, new site, new content.  I look forward to sharing some thoughts with you.

Ian Wienand: Image building in OpenStack CI

Tue, 2016-04-05 15:26

Also titled minimal images - maximal effort!

The OpenStack Infrastructure Team manages a large continuous-integration system that provides the broad range of testing the OpenStack project requires. Tests are run thousands of times a day across every project, on multiple platforms and on multiple cloud-providers. There are essentially no manual steps in any part of the process, with every component being automated via scripting, a few home-grown tools and liberal doses of Puppet and Ansible. More importantly, every component resides in the public git trees right alongside every other OpenStack project, with contributions actively encouraged.

As with any large system, technical debt can build up and start to affect stability and long-term maintainability. OpenStack Infrastructure can see some of this debt accumulating as more testing environments across more cloud-providers are being added to support ever-growing testing demands. Thus a strong focus of recent work has been consolidating testing platforms to be smaller, better defined and more maintainable. This post illustrates some of the background to the issues and describes how these new platforms are more reliable and maintainable.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins master to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Jenkins masters are subscribed to gearman as workers. It is these Jenkins masters that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) into job configurations for Jenkins. Each Jenkins master gets these definitions pushed to it constantly by Puppet, thus each Jenkins master instance knows about all the jobs it can run automatically. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins master instances.

  6. At this point, the Jenkins master has what it needs to actually get jobs started. When nodepool registers a host to a Jenkins master as a slave, the Jenkins master can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins master instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty slave. Jenkins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. The Jenkins master will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
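
As a toy model of steps 3 and 5 above (not the real gear, Zuul or nodepool interfaces), the sketch below shows (job-name, node-type) tuples going onto a queue and a nodepool-style pass tallying how many nodes of each type are needed; the job and node names are just representative examples.

    # Toy model only: Zuul-style code enqueues (job-name, node-type) tuples and a
    # nodepool-style function tallies demand per node type.  Names are illustrative.
    from collections import Counter, deque

    job_queue = deque()

    def zuul_enqueue(job_name: str, node_type: str) -> None:
        """Stand-in for Zuul submitting a job request into gearman."""
        job_queue.append((job_name, node_type))

    def nodepool_demand(queue) -> Counter:
        """Stand-in for nodepool inspecting the queue to decide what to launch."""
        return Counter(node_type for _job, node_type in queue)

    zuul_enqueue("gate-tempest-dsvm-full", "devstack-trusty")
    zuul_enqueue("gate-nova-python27", "bare-trusty")
    zuul_enqueue("gate-neutron-python27", "bare-trusty")

    print(nodepool_demand(job_queue))   # Counter({'bare-trusty': 2, 'devstack-trusty': 1})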

Image builds

So far we have glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, unit-testing or whatever else someone might want to test.

The main goal is to provide a stable and consistent environment in which to run a wide range of tests. A full OpenStack deployment results in hundreds of libraries and millions of lines of code all being exercised at once. The testing-images are right at the bottom of all this, so any instability or inconsistency affects everyone, leading to constant fire-fighting and major inconvenience as all forward-progress stops when CI fails. We want to support a wide number of platforms interesting to developers such as Ubuntu, Debian, CentOS and Fedora, and we also want to make it easy to handle new releases and add other platforms. We want to ensure this can be maintained without too much day-to-day hands-on.

Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug failures. We want jobs to rely on as few external resources as possible so tests are consistent and stable. This means caching things like the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images, packages and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

Snapshot images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, a number of issues emerged:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.

So the original incarnations of OpenStack CI images were based on these public images. Nodepool would start one of these provider images and then run a series of scripts on it — these scripts would firstly try to work around any quirks to make the images look as similar as possible across providers, and then do the caching, set up things like authorized keys and finish other configuration tasks. Nodepool would then snapshot this prepared image and start instantiating VMs from these snapshots into the pool for testing. If you hear someone talking about a "snapshot image" in an OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily (you can see how ridiculous it was getting in the logging configuration used at the time). It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to an inconsistent and heterogeneous testing environment.

Naturally there was a desire for something more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads and final things like conversion to qcow2 or vhd.

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can both conflict with parts of OpenStack or end up tacitly hiding real test requirements (the test doesn't specify it, but the package is there as part of another base dependency. Things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their various different networking schemes or configuration methods which need to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And things like iSCSI kernel drivers that containers don't support well. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the mean time, there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that — systems with essentially nothing on them. For Debian and Ubuntu this is achieved via debootstrap, for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool, see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and set up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.

So now, each day at 14:14 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own) at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder will get you pretty close (let us know of any issues!).
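
If you just want to poke at the idea locally rather than run the full build-image.sh flow, something along these lines drives diskimage-builder directly; the element list here is a guessed minimal set rather than the exact one project-config feeds to nodepool, and extra environment variables (for example the target distribution release) may be needed.

    # Rough sketch of kicking off a minimal image build with diskimage-builder,
    # assuming it is installed in the current virtualenv.  The element list is a
    # guessed minimal set, not the exact set OpenStack Infra uses, and some
    # elements may expect environment variables (e.g. the target release) to be set.
    import subprocess

    elements = ["ubuntu-minimal", "vm", "simple-init"]   # minimal base, bootloader, networking
    cmd = ["disk-image-create", "-o", "test-ubuntu-minimal"] + elements

    subprocess.run(cmd, check=True)   # should leave test-ubuntu-minimal.qcow2 on success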

DevStack and bare nodes

There are two major ways OpenStack projects test their changes:

  1. Running with DevStack, which brings up a small, but fully-functional, OpenStack cloud with the change-under-test applied. Generally tempest is then used to ensure the big-picture things like creating VM's, networks and storage are all working.
  2. Unit-testing within the project; i.e. what you do when you type tox -e py27 in basically any OpenStack project.

To support this testing, OpenStack CI ended up with the concept of bare nodes and devstack nodes.

  • A bare node was made for unit-testing. While tox has plenty of information about installing required Python packages into the virtualenv for testing, it doesn't know anything about the system packages required to build those Python packages. This means things like gcc and library -devel packages which many Python packages use to build bindings. Thus the bare nodes had an ever-growing and not well-defined list of packages that were pre-installed during the image-build to support unit-testing. Worse still, projects didn't really know their dependencies but just relied on their testing working with this global list that was pre-installed on the image.
  • In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment by ensuring it has the right dependencies installed. We don't want any packages pre-installed here because it hides actual dependencies that we want explicitly defined within DevStack — otherwise, when a user goes to deploy DevStack for their development work, things break because their environment differs slightly to the CI one. If you look at all the job definitions in OpenStack, by convention any job running DevStack has a dsvm in the job name — this referred to running on a "DevStack Virtual Machine" or a devstack node. As the CI environment has grown, we have more and more testing that isn't DevStack based (puppet apply tests, for example) that rather confusingly want to run on a devstack node because they do not want dependencies installed. While it's just a name, it can be difficult to explain!

Thus we ended up maintaining two node-types, where the difference between them is what was pre-installed on the host — and yes, the bare node had more installed than a devstack node, so it wasn't that bare at all!

Specifying Dependencies

Clearly it is useful to unify these node types, but we still need to provide a way for the unit-test environments to have their dependencies installed. This is where a tool called bindep comes in. This tool gives project authors a way to specify their system requirements in a similar manner to the way their Python requirements are kept. For example, OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This project now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the global-requirements list.

bindep knows how to look at these lists provided by projects and get the right packages for the platform it is running on. As part of the image-build, we have a cache-bindep element that can go through every project and build a list of the packages it requires. We can thus pre-cache all of these packages onto the images, knowing that they are required by jobs. This both reduces the dependency on external mirrors and improves job performance (as the packages are locally cached) but doesn't pollute the system by having everything pre-installed.
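
To make the idea concrete, here is a toy version of what bindep does; this is not bindep's real parser, file syntax handling or API, just an illustration of selecting packages by platform profile.

    # Toy illustration of the bindep idea: each entry names a system package and
    # the platform profiles it applies to, and we select the ones matching this
    # host.  This mimics the spirit of an other-requirements.txt list, not its
    # real syntax or bindep's implementation.
    PACKAGES = [
        ("gcc",          {"platform:dpkg", "platform:rpm"}),
        ("libffi-dev",   {"platform:dpkg"}),
        ("libffi-devel", {"platform:rpm"}),
        ("mysql-server", {"platform:dpkg"}),
    ]

    def packages_for(profile: str) -> list:
        """Return the package names whose tags include the given platform profile."""
        return [name for name, tags in PACKAGES if profile in tags]

    print(packages_for("platform:dpkg"))   # ['gcc', 'libffi-dev', 'mysql-server']
    print(packages_for("platform:rpm"))    # ['gcc', 'libffi-devel']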

Package installation can now happen via the way we really should be doing it — as part of the CI job. There is a job-macro called install-distro-packages which a test can use to call bindep to install the packages specified by the project before the run. You might notice the script has a "fallback" list of packages if the project does not specify its own dependencies — this essentially replicates the environment of a bare node as we transition to projects more strictly specifying their system requirements.

We can now start with a blank image and all the dependencies to run the job can be expressed by and within the project — leading to a consistent and reproducible environment without any hidden dependencies. Several things have broken as part of removing bare nodes — this is actually a good thing because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides. There are a few other job-macros that can do things like provide MySQL/Postgres instances for testing or set up other common job requirements. By splitting these types of things out from the base images we also improve the performance of jobs that don't need them, since they no longer waste time doing things like setting up databases.

As of this writing, the bindep work is new and still a work-in-progress. But the end result is that we have no more need for a separate bare node type to run unit-tests. This essentially halves the number of image-builds required and brings us to the goal of a single image for each platform running all CI.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers and runs all testing environments equally. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Ian Wienand: Image building in OpenStack CI

Mon, 2016-04-04 15:26

Also titled minimal images - maximal effort!

A large part of OpenStack Infrastructure teams recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins master to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Jenkins masters are subscribed to gearman as workers. It is these Jenkins masters that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) in to job configurations for Jenkins. Each Jenkins master gets these definitions pushed to it constantly by Puppet, thus each Jenkins master instance knows about all the jobs it can run automatically. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what type of nodes need to start in what clouds to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins master instances.

  6. At this point, the Jenkins master has what it needs to actually get jobs started. When nodepool registers a host to a Jenkins master as a slave, the Jenkins master can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins master instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty slave. Jekins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. The Jenkins master will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.

Image builds

So far we have glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, unit-testing or whatever else someone might want to test.

The main goal is to provide a stable and consistent environment in which to run a wide-range of tests. A full OpenStack deployment results in hundreds of libraries and millions of lines of code all being exercised at once. The testing-images are right at the bottom of all this, so any instability or inconsistency affects everyone; leading to constant fire-firefighting and major inconvenience as all forward-progress stops when CI fails. We want to support a wide number of platforms interesting to developers such as Ubuntu, Debian, CentOS and Fedora, and we also want to and make it easy to handle new releases and add other platforms. We want to ensure this can be maintained without too much day-to-day hands-on.

Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug failures. We want jobs to rely on as few external resources as possible so tests are consistent and stable. This means caching things like the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images, packages and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

Snapshot images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, there are a number of issues that emerge:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.

So the original incarnations of OpenStack CI images were based on these public images. Nodepool would start one of these provider images and then run a series of scripts on it — these scripts would firstly try to work-around any quirks to make the images look as similar as possible across providers, and then do the caching, setup things like authorized keys and finish other configuration tasks. Nodepool would then snapshot this prepared image and start instantiating VM's based on these images into the pool for testing. If you hear someone talking about a "snapshot image" in OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily (you can see how ridiculous it was getting in the logging configuration used at the time). It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to a inconsistent and heterogeneous testing environment.

Naturally there was a desire for something more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads and final things like conversion to qcow2 or vhd.

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can both conflict with parts of OpenStack or end up tacitly hiding real test requirements (the test doesn't specify it, but the package is there as part of another base dependency. Things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their own networking schemes and configuration methods, which need to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And things like iSCSI kernel drivers that containers don't support well. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the mean time, there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that: systems with essentially nothing on them. For Debian and Ubuntu this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.
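
As a rough illustration of how these pieces fit together, a nodepool diskimage definition looks something like the sketch below; the element list, names and options are simplified stand-ins rather than the exact production configuration, so check the project-config nodepool config for the real thing.

    diskimages:
      - name: devstack-trusty
        elements:
          - ubuntu-minimal
          - vm
          - simple-init
          - openstack-repos    # project-config element: cache git trees
          - cache-devstack     # project-config element: cache DevStack bits
        release: trusty
        env-vars:
          DIB_IMAGE_CACHE: /opt/dib_cache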

So now, each day at 14:14 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own), at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder, will get you pretty close (let us know of any issues!).
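
If you do want to try reproducing a build locally, the steps look roughly like the following; the repository URL and script path are best guesses at the current layout, so treat them as assumptions and consult project-config for the authoritative details.

    # Rough sketch of a local image build (paths/URLs are assumptions)
    virtualenv dib-env && . dib-env/bin/activate
    pip install diskimage-builder
    git clone https://git.openstack.org/openstack-infra/project-config
    ./project-config/tools/build-image.sh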

DevStack and bare nodes

There are two major ways OpenStack projects test their changes:

  1. Running with DevStack, which brings up a small, but fully-functional, OpenStack cloud with the change-under-test applied. Generally tempest is then used to ensure the big-picture things like creating VMs, networks and storage are all working.
  2. Unit-testing within the project; i.e. what you do when you type tox -e py27 in basically any OpenStack project.

To support this testing, OpenStack CI ended up with the concept of bare nodes and devstack nodes.

  • A bare node was made for unit-testing. While tox has plenty of information about installing required Python packages into the virtualenv for testing, it doesn't know anything about the system packages required to build those Python packages. This means things like gcc and the -devel packages for libraries that many Python packages need to build their bindings. Thus the bare nodes had an ever-growing and not well-defined list of packages that were pre-installed during the image build to support unit-testing. Worse still, projects didn't really know their dependencies, but just relied on their testing working with this global list that was pre-installed on the image.
  • In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment by ensuring it has the right dependencies installed. We don't want any packages pre-installed here because that hides actual dependencies that we want explicitly defined within DevStack; otherwise, when a user goes to deploy DevStack for their development work, things break because their environment differs slightly from the CI one. If you look at all the job definitions in OpenStack, by convention any job running DevStack has dsvm in the job name; this referred to running on a "DevStack Virtual Machine", or a devstack node. As the CI environment has grown, we have more and more testing that isn't DevStack based (puppet apply tests, for example) that rather confusingly wants to run on a devstack node because it does not want dependencies installed. While it's just a name, it can be difficult to explain!

Thus we ended up maintaining two node types, where the difference between them was what had been pre-installed on the host. And yes, the bare node had more installed than a devstack node, so it wasn't that bare at all!

Specifying Dependencies

Clearly it is useful to unify these node types, but we still need to provide a way for the unit-test environments to have their dependencies installed. This is where a tool called bindep comes in. This tool gives project authors a way to specify their system requirements in a similar manner to the way their Python requirements are kept. For example, OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This project now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the global-requirements list.
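
To give a flavour of the format, entries in such a file look roughly like the example below, with bracketed selectors choosing the right package name for each distribution family; the specific packages shown are only illustrative.

    gcc
    libffi-dev [platform:dpkg]
    libffi-devel [platform:rpm]
    libssl-dev [platform:dpkg]
    openssl-devel [platform:rpm]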

bindep knows how to look at these lists provided by projects and get the right packages for the platform it is running on. As part of the image-build, we have a cache-bindep element that can go through every project and build a list of the packages it requires. We can thus pre-cache all of these packages onto the images, knowing that they are required by jobs. This both reduces the dependency on external mirrors and improves job performance (as the packages are locally cached) but doesn't pollute the system by having everything pre-installed.

Package installation can now happen the way we really should be doing it: as part of the CI job. There is a job-macro called install-distro-packages which a test can use to call bindep to install the packages specified by the project before the run. You might notice the script has a "fallback" list of packages for when the project does not specify its own dependencies; this essentially replicates the environment of a bare node as we transition to projects more strictly specifying their system requirements.
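
Conceptually, what that macro boils down to on a Debian or Ubuntu node is something like the sketch below; this is a simplified illustration of the idea, not the actual macro script.

    # List packages bindep thinks are missing (brief output), then install them.
    missing=$(bindep -b || true)
    if [ -n "$missing" ]; then
        sudo apt-get update
        sudo apt-get install -y $missing
    fi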

We can now start with a blank image, and all the dependencies to run the job can be expressed by and within the project, leading to a consistent and reproducible environment without any hidden dependencies. Several things have broken as part of removing bare nodes; this is actually a good thing, because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides. There are a few other job-macros that can do things like provide MySQL/Postgres instances for testing or set up other common job requirements. By splitting these types of things out from the base images we also improve job performance: jobs no longer waste time doing things like setting up databases they don't need.

As of this writing, the bindep work is new and still a work-in-progress. But the end result is that we have no more need for a separate bare node type to run unit-tests. This essentially halves the number of image-builds required and brings us to the goal of a single image for each platform running all CI.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers and runs all testing environments equally. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Ian Wienand: Image building for OpenStack CI -- Minimal images, big effort

Mon, 2016-04-04 13:26

A large part of the OpenStack Infrastructure team's recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates, etc. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins host to consume. gearman is a job-server, and as they explain it "provides a generic application framework to farm out work to other machines or processes that are better suited to do the work".

  4. A group of Jenkins hosts are subscribed to gearman as workers. It is these hosts that will consume the job requests from the queue and actually get the tests running. Jenkins needs two things to be able to run a job -- a job definition (something to do) and a slave node (somewhere to do it).

    The first part -- what to do -- is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) into job configurations for Jenkins. Thus each Jenkins instance knows about all the jobs it might need to run (a rough sketch of such a definition appears after this list). Zuul also knows about these job definitions, so you can see how we now have a mapping where Zuul can put a job into gearman saying "run test foo-bar-baz" and a Jenkins host can consume that request and know what to do.

    The second part -- somewhere to run the test -- takes some more explaining. Let's skip to the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by nodepool -- a customised orchestration tool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul, and decides what type of capacity to provide, and in which clouds, to satisfy the outstanding job queue. Nodepool will start up virtual machines as required and register those nodes to the Jenkins instances.

  6. At this point, Jenkins has what it needs to actually get jobs started. When nodepool registers a host to Jenkins as a slave, the Jenkins host can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins instance by nodepool, Jenkins can now consume a job from the queue intended to run on an ubuntu-trusty host. It will run the job as defined in the job-definition -- doing what Jenkins does by ssh-ing into the host, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but Jenkins is pretty much a glorified ssh/scp wrapper as far as OpenStack CI is concerned. Zuul version 3, under development, is working to remove the need for Jenkins to be involved at all.)

  7. Eventually, the test will finish. Jenkins will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused, and also have no important details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs and post the result back to Gerrit and give either a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).
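
To make step 4 a little more concrete, a Jenkins Job Builder definition looks roughly like the sketch below; the job name, node label and command are hypothetical examples rather than a real job from project-config.

    - job:
        name: gate-myproject-python27
        node: bare-trusty
        builders:
          - shell: "tox -e py27"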

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.

Image builds

There is, however, another more asynchronous part of the process that hides a lot of details the rest of the system relies on. Illustrated in step 8 above, this is the management of the images that tests are being run upon. Above we said that a test runs on an ubuntu-trusty, centos, fedora or some other type of node, but glossed over where these images come from.

Firstly, what are these images, and why build them at all? These images are where the "rubber hits the road" -- where DevStack, functional testing or whatever else someone might want to test is actually run for real. Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug CI failures. We want the images that CI runs on to rely on as few external resources as possible so test runs are as stable as possible. This means caching all the git trees tests might use, things like images consumed during tests and other bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time a number of issues emerged:

  1. Providers rarely leave these images alone. One day you would boot the image and find a bunch of Python libraries pip-installed, a mount-point moved, or base packages removed (all of which have happened).
  2. Providers don't always have the images you want (like the latest Fedora), or they offer different versions or point releases, all updated asynchronously whenever they get around to it.
  3. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  4. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.

So the original incarnation of building images was that nodepool would start one of these provider images, run a bunch of scripts on it to make a base image (do the caching, set up keys, etc.), snapshot it and then start putting VMs based on these images into the pool for testing. The first problem you hit here is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. With Rackspace and the (now defunct) HP Cloud there was a situation where we were building 4 or 5 images across a total of about 8 regions -- meaning anywhere up to 40 separate image builds happening. It was almost inevitable that some images would fail every day -- nodepool can deal with this by reusing old snapshots; but this leads to an inconsistent and heterogeneous testing environment.

OpenStack is like a gigantic Jenga tower, with a full DevStack deployment resulting in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, and it doesn't take much to make the whole thing fall over (see the points above about providers not leaving images alone). This leads to constant fire-fighting and everyone getting annoyed as all CI stops. Naturally there was a desire for something much more consistent -- a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide their "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads, and takes care of final steps such as conversion to qcow2, vhd, or whatever your cloud requires. So nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still have a fair amount of "stuff" on them. For example, cloud-init is a rather large Python program with a fair few dependencies. These dependencies can conflict with parts of OpenStack, or inversely end up hiding requirements because we end up with a dependency tacitly provided by some part of the base image. The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests -- you want everything explicitly included. (If you were starting this whole thing again, things like Docker might come into play. Indeed they may be in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron etc. going on.)
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers seem to like providing different networking schemes or configuration methods.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that -- systems with essentially nothing on them. For Debian and Ubuntu, this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important "elements" that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.

So now, each day at 14:00 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own) at which point it will start being used for CI jobs.

Dependencies

But, as they say, there's more! Personally I'm not sure how it started, but OpenStack CI ended up with the concept of bare nodes and devstack nodes. A bare node was one that was used for functional testing; i.e. what you do when you type tox -e py27 in basically any OpenStack project. The problem with this is that tox has plenty of information about installing required Python packages into the virtualenv for testing; but it doesn't know anything about the system packages required to build the Python libraries. This means things like gcc and -devel packages which many Python libraries use to build library bindings. In contrast to this, DevStack has always been able to bootstrap itself from a blank system, ensuring it has the right libraries, etc, installed to be able to get a functional OpenStack environment up and running.

If you remember the previous comments, we don't want things pre-installed for DevStack, because it hides actual DevStack dependencies that we want explicitly defined. But the bare nodes, used for functional testing, were different -- we had an ever-growing and not well-defined list of packages that were installed on those nodes to make sure functional testing worked. You don't want jobs relying on this; we want to be sure that if jobs have a dependency, they require it explicitly.

So this is where a tool called bindep comes in. OpenStack has the concept of global requirements -- those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages. bindep knows how to look at this and get the right packages for the platform it is running on. Indeed -- remember how it was previously mentioned we want to minimise dependencies on external resources at runtime? Well, we can pre-cache all of these packages onto the images, knowing that they are likely to be required by jobs. How do we get the packages installed? The way we really should be doing it -- as part of the CI job. There is a macro called install-distro-packages which uses bindep to install those packages as required by the global-requirements list. The result -- no more need for this bare node type! In all cases we can start with essentially a blank image, and all the dependencies to run the job are expressed by and within the job -- leading to a consistent and reproducible environment. Several things have broken as part of removing bare nodes -- this is actually a good thing, because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides; issues that get fixed by thinking about, and explicitly declaring, the dependencies jobs need.

There are a few other macros there that do things like provide MySQL/Postgres instances or set up other common job requirements. By splitting these out we also improve the performance of jobs, which now only bring in the dependencies they need; we don't waste time doing things like setting up databases for jobs that don't need them.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burdens, better consistency and better performance; all great things to bring to OpenStack CI!