Planet Linux Australia
Steven Hanley: [mtb/events] The Surf Coast Century 2014 - Anglesea, Victoria - Surf Coast track and Otway ranges
Near Bells Beach, around 18 km into the 100 km run
Earlier this year I competed in The North Face 100 km run in the Blue Mountains. It has a fairly large amount of climbing (over 4000 metres) and though it is remarkably pretty it is a tough day out. When a friend suggested doing the Surf Coast Century I thought why the heck not: less than half the climbing, and I have never spent much time in this part of Victoria.
Report and photos are online. I was happy with my time, had a good day out and definitely have to hand it to the locals: they live in a beautiful place. I also got a PB time for the distance by a fairly large margin (almost 3 hours faster than my TNF100 time).
Steven Hanley: [mtb/events] The Coastal Classic 2014 - Otford to Bundeena in the Royal National Park
Running along a beach 10 km into the event
I had the opportunity to run in the Coastal Classic two weeks ago. Max Adventure do a great event here by getting access to the coastal walking track for this run, and I recommend a visit to this track to anyone looking for a great coastal walk or run.
Report and photos are online. It was a really pretty run and I had never been along the track so it was a great day out.
So, I’ve been looking around for a while (and a few times now) for any good resources that cover a bunch of MySQL architecture and technical details, aimed at an audience that is technically proficient but not MySQL literate. I haven’t really found anything. I mean, there’s the (huge and very detailed) MySQL manual, there’s the MySQL Internals manual (which is sometimes only 10 years out of date) and there are various blog entries around the place. So I thought I’d write something explaining roughly how it all fits together and what it does to your system (processes, threads, IO etc.). (Basically, I’ve found myself explaining this enough times in the past few years that I should really write it down and just point people to my blog.) I’ve linked to things for more reading. You should probably read them at some point.
Years ago, there were many presentations on MySQL Architecture. I went to try and find some on YouTube and couldn’t. We were probably not cool enough for YouTube and the conferences mostly weren’t recorded. So, instead, I’ll just link to Brian on NoSQL – because it’s important to cover NoSQL as well.
So, here is a quick overview of executing a query inside a MySQL Server and all the things that can affect it. This isn’t meant to be complete… just a “brief” overview (of a few thousand words).
MySQL is an open source relational database server, the origins of which date back to 1979 with MySQL 1.0 coming into existence in 1995. It’s code that has some history and sometimes this really does show. For a more complete history, see my talk from linux.conf.au 2014: Past, Present and Future of MySQL (YouTube, Download).
The MySQL Server runs as a daemon (mysqld). Users typically interact with it over a TCP or UNIX domain socket through the MySQL network protocol (of which multiple implementations exist under various licenses). Each connection causes the MySQL Server (mysqld) to spawn a thread to handle the client connection.
There are now several different thread-pool plugins that, instead of using one thread per connection, multiplex connections over a set of threads. However, these plugins are not commonly deployed and we can largely ignore them. For all intents and purposes, the MySQL Server spawns one thread per connection and that thread alone performs everything needed to service that connection. Thus, parallelism in the MySQL Server comes from executing many concurrent queries rather than one query concurrently.
The MySQL Server will cache threads (the amount is configurable) so that it doesn’t have to have the overhead of pthread_create() for each new connection. This is controlled by the thread_cache_size configuration option. It turns out that although creating threads may be a relatively cheap operation, it’s actually quite time consuming in the scope of many typical MySQL Server connections.
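The thread cache is just a my.cnf setting (the value below is purely illustrative, not a recommendation):

```ini
# my.cnf -- illustrative value only; tune for your connection churn rate
[mysqld]
thread_cache_size = 64   # idle threads kept around after a client disconnects
```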
Because the MySQL Server is a collection of threads, there’s going to be thread local data (e.g. connection specific) and shared data (e.g. cache of on disk data). This means mutexes and atomic variables. Most of the more advanced ways of doing concurrency haven’t yet made it into MySQL (e.g. RCU hasn’t yet and is pretty much needed to get 1 million TPS), so you’re most likely going to see mutex contention and contention on cache lines for atomic operations and shared data structures.
There are also various worker threads inside the MySQL Server that perform various functions (e.g. replication).
Until sometime in the 2000s, more than one CPU core was really uncommon, so the fact that there were many global mutexes in MySQL wasn’t really an issue. These days we have more reliable async networking and disk IO system calls, but because MySQL has a long history, there are still global mutexes and there’s no hard and fast rule about how it does IO.
Over the past 10 years of MySQL development, it’s been a fight to remove the reliance on global mutexes and data structures controlled by them to attempt to increase the number of CPU cores a single mysqld could realistically use. The good news is that it’s no longer going to max out on the number of CPU cores you have in your phone.
So, you have a MySQL Client (e.g. the mysql client or something) connecting to the MySQL Server. Now, you want to enter a query. So you do that, say “SELECT 1;”. The query is sent to the server where it is parsed, optimized, executed and the result returns to the client.
Now, you’d expect this whole process to be incredibly clean and modular, like you were taught things happened back in university with several black boxes that take clean input and produce clean output that’s all independent data structures. At least in the case of MySQL, this isn’t really the case. For over a decade there’s been lovely architecture diagrams with clean boxes – the code is not like this at all. But this probably only worries you once you’re delving into the source.
The parser is a standard yacc one – there have been attempts to replace it over the years, none of which have stuck – so we have the butchered yacc one still. With MySQL 5.0, it exploded in size due to the addition of things like SQL2003 stored procedures, and common opinion holds that it’s rather bloated and was better in 4.1 and before for the majority of queries that large scale web peeps execute.
There is also this thing called the Query Cache – protected by a single global mutex. It made sense in 2001 for a single benchmark. It is a simple hash of the SQL statement coming over the wire to the exact result to send(2) over the socket back to a client. On a single CPU system where you ask the exact same query again and again without modifying the data it’s the best thing ever. If this is your production setup, you probably want to think about where you went wrong in your life. On modern systems, enabling the query cache can drop server performance by an order of magnitude. A single global lock is a really bad idea. The query cache should be killed with fire – but at least in the mean time, it can be disabled.
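Disabling it is a config change (a sketch using the stock option names; check your version’s manual before copying):

```ini
# my.cnf -- switch the query cache off entirely
[mysqld]
query_cache_type = 0   # don't cache results or look them up
query_cache_size = 0   # don't reserve any memory for it either
```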
Normally, you just have the SQL progress through the whole process of parse, optimize, execute, results but the MySQL Server also supports prepared statements. A prepared statement is simply this: “Hi server, please prepare this statement for execution leaving the following values blank: X, Y and Z” followed by “now execute that query with X=foo, Y=bar and Z=42″. You can call execute many times with different values. Due to the previously mentioned not-quite-well-separated parse, optimize, execute steps, prepared statements in MySQL aren’t as awesome as in other relational databases. You basically end up saving parsing the query again when you execute it with new parameters. More on prepared statements (from 2006) here. Unless you’re executing the same query many times in a single connection, server side prepared statements aren’t worth the network round trips.
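For illustration, the same two-step dance is also exposed through SQL syntax (connectors normally do this via the binary protocol; the table and values here are made up):

```sql
-- Prepare once, execute many times with different parameter values
PREPARE stmt FROM 'SELECT * FROM t WHERE x = ? AND y = ?';
SET @x = 'foo', @y = 42;
EXECUTE stmt USING @x, @y;   -- re-run with new values by changing @x/@y
DEALLOCATE PREPARE stmt;     -- forget this and the server keeps the memory
```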
The absolute worst thing in the entire world is MySQL server side prepared statements. It moves server memory allocation to be the responsibility of the clients. This is just brain dead stupid and a reason enough to disable prepared statements. In fact, just about every MySQL client library for every programming language ever actually fakes prepared statements in the client rather than trust every $language programmer to remember to close their prepared statements. Open many client connections to a MySQL Server and prepare a lot of statements and watch the OOM killer help you with your DoS attack.
So now that we’ve connected to the server, parsed the query (or done a prepared statement), we’re into the optimizer. The optimizer looks at a data structure describing the query and works out how to execute it. Remember: SQL is declarative, not procedural. The optimizer will access various table and index statistics in order to work out an execution plan. It may not be the best execution plan, but it’s one that can be found within reasonable time. You can find out the query plan for a SELECT statement by prepending it with EXPLAIN.
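For example (hypothetical table; the exact EXPLAIN output columns vary between versions):

```sql
EXPLAIN SELECT name FROM users WHERE id = 42;
-- with id as the primary key, this would typically show type=const,
-- key=PRIMARY: a single-row primary key lookup, the cheapest plan there is
```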
The MySQL optimizer is not the be all and end all of SQL optimizers (far from it). A lot of MySQL performance problems are due to complex SQL queries that don’t play well with the optimizer, and there’s been various tricks over the years to work around deficiencies in it. If there’s one thing the MySQL optimizer does well it’s making quick, pretty good decisions about simple queries. This is why MySQL is so popular – fast execution of simple queries.
To get table and index statistics, the optimizer has to ask the Storage Engine(s). In MySQL, the actual storage of tables (and thus the rows in tables) is (mostly) abstracted away from the upper layers. Much like a VFS layer in an OS kernel, there is (for some definition) an API abstracting away the specifics of storage from the rest of the server. The API is not clean and there are a million and one layering violations and exceptions to every rule. Sorry, not my fault.
Table definitions are in FRM files on disk, entirely managed by MySQL (not the storage engines) and for your own sanity you should not ever look into the actual file format. Table definitions are also cached by MySQL to save having to open and parse a file.
Originally, there was MyISAM (well, and ISAM before it, but that’s irrelevant now). MyISAM was non-transactional but relatively fast, especially for read heavy workloads. It only allowed one writer although there could be many concurrent readers. MyISAM is still there and used for system tables. The current default storage engine is called InnoDB. It’s all the buzzwords like ACID and MVCC. Just about every production environment is going to be using InnoDB. MyISAM is effectively deprecated.
InnoDB originally was its own independent thing and has (to some degree) been maintained as if it kind of was. It is, however, not buildable outside a MySQL Server anymore. It also has its own scalability issues. A recent victory was splitting the kernel_mutex, which was a mutex that protected far too much internal InnoDB state and could be a real bottleneck where NRCPUs > 4.
So, back to query execution. Once the optimizer has worked out how to execute the query, MySQL will start executing it. This probably involves accessing some database tables. These are probably going to be InnoDB tables. So, MySQL (server side) will open the tables, looking up the MySQL Server table definition cache and creating a MySQL Server side table share object which is shared amongst the open table instances for that table. See here for scalability hints on these (from 2009). The opened table objects are also cached – table_open_cache. In MySQL 5.6, there is table_open_cache_instances, which splits the table_open_cache mutex into table_open_cache_instances mutexes to help reduce lock contention on machines with many CPU cores (> 8 or >16 cores, depending on workload).
Once tables are opened, there are various access methods that can be employed. Table scans are the worst (start at the start and examine every row). There’s also index scans (often seeking to part of the index first) and key lookups. If your query involves multiple tables, the server (not the storage engine) will have to do a join. Typically, in MySQL, this is a nested loop join. In an ideal world, this would all be really easy to spot when profiling the MySQL server, but in reality, everything has funny names like rnd_next.
As an aside, any memory allocated during query execution is likely done as part of a MEM_ROOT – essentially a pool allocator, likely optimized for some ancient libc on some ancient Linux/Solaris, and it just so happens to still kinda work okay. There are some odd server configuration options for (some of the) MEM_ROOTs that get exactly no airtime on what they mean or what changing them will do.
InnoDB has its own data dictionary (separate to FRM files) which can also be limited in current MySQL (important when you have tens of thousands of tables) – which is separate to the MySQL Server table definitions and table definition cache.
But anyway, you have a number of shared data structures about tables and then a data structure for each open table. To actually read/write things to/from tables, you’re going to have to get some data to/from disk.
InnoDB tables can be stored either in one giant tablespace or file-per-table. InnoDB database pages are 16KB (though this is now configurable). Database pages are cached in the InnoDB Buffer Pool, and innodb_buffer_pool_size should typically be about 80% of system memory. InnoDB will use a (configurable) method to flush. Typically, it will all be O_DIRECT – which is why “just use XFS” is step 1 in IO optimization – the per inode mutex in ext3/ext4 just doesn’t make IO scale.
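A typical configuration along those lines might look like this (a sketch only – the buffer pool figure assumes a dedicated 16G box, not a general recommendation):

```ini
# my.cnf -- illustrative InnoDB IO settings
[mysqld]
innodb_file_per_table   = 1         # one .ibd file per table
innodb_buffer_pool_size = 12G       # ~80% of RAM on a dedicated 16G machine
innodb_flush_method     = O_DIRECT  # bypass the OS page cache
```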
InnoDB will do some of its IO in the thread that is performing the query and some of it in helper threads using native linux async IO (again, that’s configurable). With luck, all of the data you need to access is in the InnoDB buffer pool – where database pages are cached. There exists innodb_buffer_pool_instances configuration option which will split the buffer pool into several instances to help reduce lock contention on the InnoDB buffer pool mutex.
All InnoDB tables have a clustered index. This is the index by which the rows are physically sorted. If you have an INT PRIMARY KEY on your InnoDB table, then a row with that primary key value of 1 will be physically close to the row with primary key value 2 (and so on). Due to the intricacies of InnoDB page allocation, there may still be disk seeks involved in scanning a table in primary key order.
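In other words (hypothetical table for illustration):

```sql
-- Rows in an InnoDB table are physically ordered by the primary key
CREATE TABLE t (
  id      INT NOT NULL PRIMARY KEY,  -- the clustered index
  payload VARCHAR(255)
) ENGINE=InnoDB;
-- a range scan in id order is (mostly) sequential IO; secondary indexes
-- store the primary key value and do an extra lookup into the clustered index
```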
Every page in InnoDB has a checksum. There was an original algorithm, then there was a “fast” algorithm in some forks and now we’re converging on crc32, mainly because Intel implemented CPU instructions to make that fast. In write heavy workloads, this used to show up pretty heavily in profiles.
InnoDB has both REDO and UNDO logging to keep both crash consistency and provide consistent read views to transactions. These are also stored on disk, the redo logs being in their own files (size and number are configurable). The larger the redo logs, the longer it may take to run recovery after a crash. The smaller the redo logs, the more trouble you’re likely to run into with large or many concurrent transactions.
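That trade-off is exposed directly in configuration (illustrative numbers only):

```ini
# my.cnf -- redo log sizing: bigger logs mean slower crash recovery but
# more headroom for write bursts and large/concurrent transactions
[mysqld]
innodb_log_file_size      = 512M
innodb_log_files_in_group = 2
```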
If your query performs writes to database tables, those changes are written to the REDO log and then, in the background, written back into the tablespace files. There are configuration parameters for how much of the InnoDB buffer pool can be filled with dirty pages before they have to be flushed out to the tablespace files.
In order to maintain Isolation (I in ACID), InnoDB needs to assign a consistent read view to a new transaction. Transactions are either started explicitly (e.g. with BEGIN) or implicitly (e.g. just running a SELECT statement). There has been a lot of work recently in improving the scalability of creating read views inside InnoDB. A bit further in the past there was a lot of work in scaling InnoDB for greater than 1024 concurrent transactions (limitations in UNDO logging).
Fancy things that make InnoDB generally faster than you’d expect are the Adaptive Hash Index and change buffering. There are, of course, scalability challenges with these too. It’s good to understand the basics of them however and (of course), they are configurable.
If you end up reading or writing rows (quite likely) there will also be a translation between the InnoDB row format(s) and the MySQL Server row format(s). The details of which are not particularly interesting unless you’re delving deep into code or wish to buy me beer to hear about them.
Query execution may need to get many rows from many tables, join them together, sum things together or even sort things. If there’s an index with the sort order, it’s better to use that. MySQL may also need to do a filesort (sort rows, possibly using files on disk) or construct a temporary table in order to execute the query. Temporary tables are either using the MEMORY (formerly HEAP) storage engine or the MyISAM storage engine. Generally, you want to avoid having to use temporary tables – doing IO is never good.
Once you have the results of a query coming through, you may think that’s it. However, you may also be part of a replication hierarchy. If so, any changes made as part of that transaction will be written to the binary log. This is a log file maintained by the MySQL Server (not the storage engines) of all the changes to tables that have occurred. This log can then be pulled by other MySQL servers and applied, making them replication slaves of the master MySQL Server.
We’ll ignore the differences between statement based replication and row based replication as they’re better discussed elsewhere. Being part of replication means you get a few extra locks and an additional file or two being written. The binary log (binlog for short) is a file on disk that is appended to until it reaches a certain size and is then rotated. Writes to this file vary in size (along with the size of transactions being executed). The writes to the binlog occur as part of committing the transaction (and the crash safety between writing to the binlog and writing to the storage engine are covered elsewhere – basically: you don’t want to know).
If your MySQL Server is a replication slave, then you have a thread reading the binary log files from another MySQL Server and then another thread (or, in newer versions, threads) applying the changes.
If the slow query log or general query log is enabled, they’ll also be written to at various points – and the current code for this is not optimal; there be (yes, you guessed it) global mutexes.
Once the results of a query have been sent back to the client, the MySQL Server cleans things up (frees some memory) and prepares itself for the next query. You probably have many queries being executed simultaneously, and this is (naturally) a good thing.
There… I think that’s a mostly complete overview of all the things that can go on during query execution inside MySQL.
Linux.conf.au is pleased to announce that an Open Hardware Miniconf will be run at the linux.conf.au 2015 conference in Auckland, New Zealand during January 2015.
The concept of Free / Open Source Software, already well understood by LCA attendees, is complemented by a rapidly growing community focused around Open Hardware and "maker culture". One of the drivers of the popularity of the Open Hardware community is easy access to cheap devices such as Arduino, which is a microcontroller development board originally intended for classroom use but now a popular building block in all sorts of weird and wonderful hobbyist and professional projects.
Interest in Open Hardware is high among FOSS enthusiasts but there is also a barrier to entry with the perceived difficulty and dangers of dealing with hot soldering irons, unknown components and unfamiliar naming schemes. The Miniconf will use the Arduino microcontroller board as a stepping stone to help ease software developers into dealing with Open Hardware. Topics will cover both software and hardware issues, starting with simpler sessions suitable for Open Hardware beginners and progressing through to more advanced topics.
The day will run in two distinct halves. The first part of the day will be a hands-on assembly session where participants will have the chance to solder together a special hardware project developed for the miniconf. Instructors will be on hand to assist with soldering and the other mysteries of hardware assembly. The second part of the day will be presentations about Open Hardware topics, including information on software to run on the hardware project built earlier in the day.
Please see www.openhardwareconf.org for more info.

Miniconf organiser: Jon Oxer

Jon Oxer has been hacking on both hardware and software since he was a little tacker. Most recently he's been focusing more on the Open Hardware side, co-founding Freetronics as a result of organising the first Arduino Miniconf at LCA2010 and designing the Arduino-based payloads that were sent into orbit in 2013 on board satellites ArduSat-X and ArduSat-1. His books include "Ubuntu Hacks" and "Practical Arduino".
It’s been a little while since I blogged on MySQL on POWER (last time was thinking that new releases would be much better for running on POWER). Well, I recently grabbed the MySQL 5.6.20 source tarball and had a go with it on a POWER8 system in the lab. There is good news: I now only need one patch to have it function pretty flawlessly (no crashes). Unfortunately, there’s still a bit of an odd thing with some of the InnoDB mutex code (bug filed at some point soon).
But, with this one patch applied, I was getting okay sysbench results and things are looking good.
Now just to hope the MySQL team applies my other patches that improve things on POWER. To be honest, I’m a bit disappointed many of them have sat there for this long… it doesn’t help build a development community when patches can sit for months without either “applied” or “fix these things first”. That being said, just as I write this, Bug 72809 which I filed has been closed as the fix has been merged into 5.6.22 and 5.7.6, so there is hope… it’s just that things can just be silent for a while. It seems I go back and forth on how various MySQL variants are going with fostering an external development community.
linux.conf.au News: Developer, Testing, Release and Continuous Integration Automation Miniconf at Linux.conf.au 2015
Linux.conf.au is pleased to announce that the Developer, Testing, Release and Continuous Integration Automation Miniconf will be part of Linux.conf.au in Auckland, New Zealand during January 2015.
This miniconf is all about improving the way we produce, collaborate, test and release software.
We want to cover tools and techniques to improve the way we work together to produce higher quality software:
- code review tools and techniques (e.g. gerrit)
- continuous integration tools (e.g. jenkins)
- CI techniques (e.g. gated trunk, zuul)
- testing tools and techniques (e.g. subunit, fuzz testing tools)
- release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.
- applying CI in your workplace/project
Organiser: Stewart Smith
Stewart currently works for IBM in the Linux Technology Center on KVM on POWER, giving him a job that is even harder to explain to non-Linux geek people than ever before. Previously he worked for Percona as Director of Server Development where he oversaw development of many of Percona’s software products. He comes from many years experience in databases and free and open source software development. He’s often found hacking on the Drizzle database server, taking photos, running, brewing beer and cycling (yes, all at the same time).
I have recently spent some time pondering the state of embedded system security.
I have been a long time user of OpenWRT, partly based on the premise that it should be “more secure” out of the box (provided attention is paid to routine hardening); but as the Infosec world keeps on evolving, I thought I would take a closer look at the state of play.
OpenWRT is a Linux distribution targeted as a replacement firmware for consumer home routers, having been named after the venerable Linksys WRT54g, but it is gaining a growing user base in the so-called Internet of Things space, as evidenced by devices such as the Carambola2, VoCore and WRTnode.
Now there are many, many areas that could be the focus of security related attention. One useful reference is the Arch Linux security guide, which covers a broad array of mitigations, but for some reason I felt like diving into the deeper insides of the operating system. I decided it would be worthwhile to compare the security performance of the core of OpenWRT, specifically the Linux kernel plus the userspace toolchain, against other “mainstream” distributions.
The theory is that ensuring a baseline level of protection in the toolchain and the kernel will cut off a large number of potential root escalation vulnerabilities before they can get started. As a starting point, take a look at the Gentoo Linux toolchain hardening guide. This lists mitigations such as: Stack Smashing Protection (SSP), used to make it much harder for malware to take advantage of stack overrun situations; position independent code (PIC or PIE) that randomises the address of a program in memory when it is loaded; and so on to other features such as Address Space Layout Randomisation (ASLR) in the kernel, all of these designed to halt entire classes of exploits in their tracks.
I won't repeat all the pros and cons of the various methods here. It should be noted that these methods are not foolproof; new techniques such as Return Oriented Programming (ROP) can be used to circumvent these mitigations. But defense-in-depth would appear to be an increasingly important strategy. If a device can be made secure using these techniques it is probably ahead of 95% of the embedded market and more likely to avoid the larger number of automated attacks; perhaps this is a bit like the old joke:
The first zebra said to the second zebra: “Why are you bothering to run? The cheetah is faster than us!”
To which the second zebra replied, “I only have to run faster than you!”

Analysis of just a few aspects
There is an excellent tool “checksec.sh”, that can scan binaries in a Linux system and report their status against a variety of security criteria.
The original author has retained his website but the script was last updated in November 2011. I found a more recently maintained version on GitHub featuring various enhancements (the maintainer even accepted a patch from me addressing a cosmetic bugfix.)
The basic procedure for using the tool against an OpenWRT build is straightforward enough:
- Configure the OpenWRT build and ensure the tar.gz target is enabled ( CONFIG_TARGET_ROOTFS_TARGZ=y )
- To expedite the testing I built OpenWRT with a pretty minimal set of packages
- Build the image as a tar.gz
- Unpack the tar.gz image to a temporary directory
- Run the tool
When repeating a test I made sure I cleaned out the toolchain and the target binaries, because I have been bitten by spurious results caused by changes to the configuration not propagating through all components. This includes manually removing the staging_dir, which may be missed by make clean. This made each build take around 20+ minutes on my quad core Phenom machine.
I used the following base configuration, only changing the target:

```
CONFIG_DEVEL=y
CONFIG_DOWNLOAD_FOLDER="/path/to/downloads"
CONFIG_BUILD_LOG=y
CONFIG_TARGET_ROOTFS_TARGZ=y
```
I repeated the test for three platform configurations initially (note, these are mutually exclusive choices) – the Carambola2 and rt305x are MIPS platforms:

```
CONFIG_TARGET_ramips_rt305x_MPRA1=y
CONFIG_TARGET_x86_kvm_guest=y
CONFIG_TARGET_ar71xx_generic_CARAMBOLA2=y
```
There are several directories most likely to have binaries of interest, so of course I scanned them using a script, essentially consisting of:

```shell
for p in lib usr/lib sbin usr/sbin bin usr/bin ; do
    checksec.sh --dir $p
done
```
The results were interesting.
(to be continued)

Further reading
Mum and Dad had gotten Zoe a family pass to Australia Zoo for her birthday, but we hadn't gotten around to being able to use it until today.
As the zoo opened at 9am, and we had to pick up Mum and Dad along the way, it was an exercise in getting going early. Much to my pleasant surprise, we managed to do it, and arrived at the zoo as they were throwing open the front doors.
Patronage levels were quite low, so it made for pretty easy getting around. Even so, the shuttle that runs around the zoo was horribly backed up. It can't scale well at all when the zoo is actually busy.
We caught most of the timed shows, except for the koala one (because of the aforementioned shuttle, mostly). Zoe enjoyed the Wildlife Warriors show in the Crocoseum. It's been quite a few years since I've been to Australia Zoo, and that show in particular has only gotten better.
It's really funny the stuff that Zoe is brave about and afraid of. She'll happily climb all over crocodile sculptures, and touch any animal she can, and even ask questions of the zoo keepers during Q&A, but a fake cave with a dinosaur head baring teeth had her freaking out. To her credit, she wanted to come back later on to face her fears and see what it was she'd seen the first time.
It was a really good day out. The weather was about as good as yesterday, and Zoe handled all the walking around very well. As I predicted, she fell asleep in the car about 8 minutes after we departed, and stayed asleep while we dropped off Mum and Dad and until we arrived back at home. She was still pretty wrecked after that, and bedtime was a bit messy, but we got there in the end.
We decided to upgrade our tickets to an annual pass, as we completely failed to properly look at all the newer stuff on the other side of Zoo Road, so we'll come back again another time.
I've always been appalled at the price tag on entry to Australia Zoo. Zoos in the US are so cheap compared to zoos in Australia. I'll have to write a blog post some time comparing the prices. That said, Australia Zoo is a pretty good quality zoo, even if the range of animals isn't comparable. The Crocoseum is an amazing stadium. I'd hate to be there with all 5000 seats full. The Feeding Frenzy Food Court is also pretty impressive in its capacity. I'd just hate to be in the zoo when either of these were working to capacity, because the rest of the zoo isn't that big, really.
I have to wonder what it's like for the Irwin family, particularly the kids, growing up there with so much of Steve still front and centre. They had Bindi's songs, like My Daddy The Crocodile Hunter piped all over the place. My personal opinion of the man changed from disdain for someone who seemed so ocker and over the top, to admiration for his passion after watching his interview with Andrew Denton on Enough Rope. I remember where I was when I heard of his passing. He was a big man, who's left a big legacy. I think that's why I'm always happy to go to Australia Zoo, despite the price tag.
Inspired by the Community Leadership Summit run each year before OSCON, Donna Benjamin will be running an event to bring together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.
The event pulls together the leading minds in community management, relations and online collaboration to discuss, debate and continue to refine the art of building an effective and capable community.
The event will be run in a similar manner to the parent event:
as an open unconference-style event in which everyone who attends is welcome to lead and contribute sessions on any topic that is relevant. These sessions are very much discussion sessions: the participants can interact directly, offer thoughts and experience, and share ideas and questions. These unconference sessions are also augmented with a series of presentations from leaders in the field, panel debates and networking opportunities.
Donna Benjamin currently chairs the Drupal community working group, sits on the board of the Drupal Association, and works as community engagement director with PreviousNext. She's also been an advisor to councils of Linux Australia, and was conference director for LCA2008 in Melbourne. Donna has also served as President of Linux Users of Victoria, and as a Director of Open Source Industry Australia.
At the receiver, we need to “sample” the PSK modem signal at just the right time. A timing estimator looks at the received signal and works out the best time to sample. Timing estimation is a major part of a PSK demodulator. If you get the timing estimate wrong, the scatter diagram looks worse and you get more bit errors.
The basic idea is we pass the modem signal through a non-linearity. The non-linearity could be the absolute function (i.e. a rectifier), a square law, or in analog-land a diode. This strips the phase information from the signal leaving the amplitude (envelope of the original signal) bobbing up and down at the symbol rate. Turns out that the phase of this envelope signal is related to the timing offset of the PSK signal. The phase can be extracted using a single point Discrete Fourier Transform (DFT).
Here’s an example using some high school trig functions. Consider a simple BPSK signal that consists of alternating symbols ….,-1,+1,-1,+1….. Once filtered, this will look something like a sine wave at half the symbol rate, r(t)=cos(Rs(t-T)/2), where T is the timing offset, and Rs is the symbol rate. So if Rs=50 symbols/s (50 baud), r(t) would be a sine wave at 25 Hz, with some time offset T.
If we square r(t) we get s(t)=r(t)*r(t) = 0.5 + 0.5*cos(Rs(t-T)), using the trig identity cos(a)cos(b)=(cos(a-b)+cos(a+b))/2. The second term of s(t) is a sine wave of frequency Rs, with phase -RsT. So if we perform a single point DFT at frequency Rs on s(t), the phase will be related to the timing offset.
That’s pretty much what happens in rx_est_timing(). We use the parallel QPSK signals, and a nice long window of modem samples to get a good estimate in the presence of noise and frequency selective fading.
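The squaring estimator above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the rx_est_timing() implementation: the sample rate Fs and the true offset T are values I've chosen purely for demonstration.

```python
import cmath, math

# Assumed parameters for illustration (not from the real modem):
Fs = 8000     # sample rate, Hz
Rs = 50       # symbol rate, symbols/s (50 baud, as in the post)
T = 0.004     # true timing offset, seconds

# One second of a filtered alternating BPSK signal: a tone at Rs/2.
# Square it (the non-linearity), then take a single-point DFT at Rs.
X = 0 + 0j
for n in range(Fs):
    t = n / Fs
    r = math.cos(2 * math.pi * (Rs / 2) * (t - T))  # r(t)
    s = r * r                                        # square-law non-linearity
    X += s * cmath.exp(-2j * math.pi * Rs * t)       # single-point DFT at Rs

# The DFT bin's phase is -2*pi*Rs*T, so the timing offset falls out directly.
T_est = -cmath.phase(X) / (2 * math.pi * Rs)
print(T_est)  # ≈ 0.004
```

Note the estimate is only unique modulo the symbol period 1/Rs, which is fine: we only need to know where to sample within a symbol.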
Linux.conf.au is pleased to announce that the Open Source for Humanitarian Tech Miniconf will be part of Linux.conf.au for the first time in Auckland, New Zealand during January 2015.
Technology is increasingly important in humanitarian response. Responders are now better connected to digital volunteers, more advanced tools such as unmanned aerial vehicles give a better view of post-disaster situations, and great quantities of data can be collected and analyzed. Often, though, these solutions are not extensible and are based on expensive proprietary software.
The Humanitarian Tech Miniconf will focus on two main audiences:
- Existing technologists who are interested in ways they can assist with technology in humanitarian response.
- Existing projects and participants who want to share what they are working on and look for ways to integrate.
Technologists who work on UAVs, mesh networks, data collection platforms and content management will be invited to speak, along with humanitarians who can give those not familiar with the field a background in humanitarian response.
Kate Chapman is the Executive Director at the Humanitarian OpenStreetMap Team. Her most recent work has been in Indonesia on a pilot program over the past year analyzing the feasibility of utilizing OpenStreetMap for the collection of exposure data. This project has hosted an OpenStreetMap mapping competition, a month-long event to map critical infrastructure in Jakarta, and assisted community facilitators in moving from hand-drawn maps to digital maps. Prior to working at HOT, Kate was involved in the development of multiple web-GIS applications, including GeoCommons and iMapData.
Today was a glorious day, and the universe was very kind to me.
I couldn't quite get myself motivated to go for a run before my chiropractic adjustment, so I had a nice relaxed start to the day instead.
I decided to get the cleaners back for a regular clean, and it was a much better use of my time. I used the time I recouped to clean out my email inbox, and sort out a whole bunch of things that had been piling up on my to do list.
I biked over for my massage, and my massage therapist said he'd book a Thermomix demo with me, which was pretty awesome of him. Biking home, I decided to phone in an order for a burger from Grill'D and just continue biking over to pick it up. On the way down Oxford Street, I found a $20 bill on the road, so that paid for lunch for me. I was pretty stoked.
On the way home, I ran into my yoga teacher. I love my little village.
I had a little bit of time before I had to head to Kindergarten early to chair the monthly PAG meeting. Because the weather was so nice, I decided to bike in for it. It's hard to believe that Term 3 has ended now. The year is flying by.
We biked home, and only had about 20 minutes before we had to turn around and bike to swim class. After swim class, on a whim, I decided we should go out for dinner, so we biked down to Oxford Street and had some gyoza for dinner and biked home.
Tomorrow's going to be an early start and a big day, and apparently Zoe woke up early this morning, so I got her to bed a little early tonight. It's pretty hot tonight, so hopefully she'll sleep okay.
Yesterday was a busy day, and it left me quite drained by the end of the day.
I had my first day of Thermomix Consultant training. It went fine. I learned that the TM31 has a retractable cord. Who knew?! All this time I've been wrapping it all over the place when I take it out of the house.
I had to leave a bit early to pick up Zoe from Kindergarten and take her to her final tennis lesson of the term. It was a pretty hot day, but it went well.
After that, we dropped into the local coffee shop for a milkshake and cookie with Megan and her Mum, before we headed home.
We didn't have a huge amount of time at home before we had to leave again to meet Sarah at the train station to drop Zoe off. I then jumped on a train to meet Anshu in the city for dinner, which was a nice way to unwind.
The linux.conf.au 2015 conference is pleased to announce the Kernel Miniconf will be part of the programme in Auckland, New Zealand.
The Linux kernel is the world's largest open source project and is at the heart of every Linux distribution. The miniconf will be about all things kernel and is targeted mainly at experienced developers. Any interested parties will be welcome as always.
Key areas of discussion will be:
- New developments and APIs
- Pain points for users
- Process/Community issues
Tony Breeds is organising this year’s Kernel miniconf. He is a kernel developer focusing on powerpc. In his spare time Tony also maintains the kernel cross compilers on kernel.org.
With Juno now closed to new features we’ve started looking at what we plan to do in the Kilo development cycle. As mentioned in a previous post, most of the V2 API has been implemented in V2.1, with the exception of the networking APIs. The first and highest priority will be to complete and verify that the V2.1 API is equivalent to the V2 API:
- Finish porting the missing parts of the V2 API to V2.1. This is primarily networking support and a handful of patches which did not merge in the Juno cycle before feature freeze.
- Continue tempest testing of the V2.1 API using the V2 API tempest tests. Testing so far has already found some bugs and there will be some work to ensure we have adequate coverage of the V2 API.
- Encourage operators to start testing the V2.1 API so we have further verification that V2.1 is equivalent to the V2 API. It should also give us a better feeling for how much impact strong input validation will have on current users of the V2 API.
Support for making both backwards compatible and non-backwards compatible changes using microversions is probably the second most important Nova API feature to be developed in Kilo. Microversions work by allowing a client to request a specific version of the Nova API. Each time a change is made to the Nova API that is visible to a client, the version of the API is incremented. This allows the client to detect when new API features are available, and control when they want to adapt their programs to backwards incompatible changes. By default, when a client makes a request of the V2.1 API, it will behave as the V2 API. However if it supplies a header like:
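As a minimal sketch of what such a client request might look like (the header name shown here is an assumption, since the exact details were still to be agreed by the community at the time):

```python
# Illustrative sketch only: the exact microversion header name was
# still under discussion, so "X-OpenStack-Nova-API-Version" is an
# assumption rather than the settled interface.
def microversion_headers(auth_token, version="2.214"):
    """Build request headers asking the V2.1 endpoint to behave as a
    specific microversion of the API."""
    return {
        "X-Auth-Token": auth_token,
        "X-OpenStack-Nova-API-Version": version,
    }

headers = microversion_headers("my-token")
```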
then the V2.1 API code will behave as the 2.214 version of the API. There will also be the ability to query the server for which versions are supported. Although there was broad support for using a microversions technique, the community was unable to come to a consensus on the details of how microversions would be implemented. A high priority early in the Kilo cycle will be to get agreement on the implementation details of microversions. In addition to the development work required in the Nova API to support microversions, we will also need to add microversion functionality to novaclient.

API Policy Cleanup
The policy authorisation checks are currently spread between the API, compute and db layers. The deeper into the Nova internals the policy checks are made, the more work there is to unwind in the case of an authorisation failure. Since the Icehouse development cycle we have been progressively moving the policy checks from the lower layers of Nova up into the Nova API. The draft nova specification for this work is here.
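The benefit of doing the check at the API layer can be sketched as below. This is a hypothetical authorize() helper with made-up policy names, not Nova's actual policy engine; it only illustrates failing fast at the top layer instead of deep inside compute or db code.

```python
# Hedged sketch: per-action policy enforcement at the API layer.
# Policy names and rules below are invented for illustration.
POLICIES = {
    "compute:servers:index": "rule:admin_or_owner",
    "compute:servers:delete": "rule:admin_api",
}

class PolicyNotAuthorized(Exception):
    pass

def authorize(roles, action):
    """Reject the request in the API layer, before any compute or db
    work has been started that would need unwinding."""
    rule = POLICIES.get(action, "rule:admin_api")  # default-closed
    if rule == "rule:admin_or_owner":
        allowed = "admin" in roles or "owner" in roles
    else:
        allowed = "admin" in roles
    if not allowed:
        raise PolicyNotAuthorized(action)

authorize({"owner"}, "compute:servers:index")  # permitted, returns None
```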
The second part of the policy changes is to have a policy for each method. Currently the API gives operators fairly inconsistent control over how resources can be accessed. Sometimes they are able to set permissions on a per-plugin basis, and at other times at an individual method granularity. Rather than add authorisation flexibility to plugins on an ad hoc basis, we will be adding it to all methods on all plugins. The draft nova specification for this work is here.

Reducing technical debt in the V2/V2.1 API Code
Like many other areas of Nova, the API code has over time accumulated a reasonable amount of technical debt. The major areas we will look at addressing in Kilo are:
- Maximise sharing of unittest code
- General unittest code cleanup. Poorly structured unit tests make it more difficult to add new tests and harder to debug test failures.
- API samples test infrastructure improvements. The API sample tests are currently very slow to execute and there is significant overhead when updating them due to API changes. There are also gaps in test coverage, both in terms of APIs covered and full verification of the responses for those that do have existing tests.
- Documentation generation for the Nova API is currently a very manual and error prone process. We need to automate as much of this process as possible and can use the new jsonschema input validation to help do this.
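One way this could work is to render parameter documentation straight from the jsonschema bodies used for input validation, so docs can no longer drift from what the API actually accepts. A minimal sketch, with an invented schema shape rather than Nova's real one:

```python
# Hedged sketch: generating parameter docs from a jsonschema-style
# body schema. The schema below is illustrative, not Nova's actual
# create-server schema.
create_server_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Server display name"},
        "flavorRef": {"type": "string", "description": "Flavor id or URL"},
    },
    "required": ["name", "flavorRef"],
}

def document_params(schema):
    """Render one documentation line per parameter in the schema."""
    lines = []
    for name, spec in schema["properties"].items():
        req = "required" if name in schema.get("required", []) else "optional"
        lines.append(f"{name} ({spec['type']}, {req}): {spec['description']}")
    return lines
```

Because the same schema both validates requests and feeds the docs, a new or changed parameter is documented automatically rather than by hand.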
The planning for the work that will be done in Kilo is still ongoing and the API team welcomes any feedback from users, operators and other developers. The etherpad where work items for Kilo can be proposed is here. Note that the focus of this etherpad is on infrastructure improvements to the API code, not new API features.
The Nova API team also holds meetings every Friday at 00:00 UTC in the #openstack-meeting channel on freenode. Anyone interested in the future development direction of the Nova API is welcome to join.
After already having trouble with the Electoral Commission banning photography in polling places, I now get a threatening email from them.
Yesterday I made this Tweet:
— Simon Lyall (@slyall) September 16, 2014
and today I get the following email:
Subject: Electoral Commission complaint – London exit poll posted on Twitter account
The Electoral Commission has received a complaint with regard to an exit poll being taken and then published on the Twitter account of @slyall. We understand that this is your Twitter account.
Under section 197(1)(d) of the Electoral Act 1993, it is an offence to conduct a public opinion poll of persons who have voted (exit polls). Section 197(1)(d) states:
197 Interfering with or influencing voters
(1) Every person commits an offence and shall be liable on conviction to a fine not exceeding $20,000 who at an election—
(d) at any time before the close of the poll, conducts in relation to the election a public opinion poll of persons voting before polling day
In order to assist the Commission in considering this complaint, could you please provide the following information:
1. Who conducted the exit poll and when was it conducted?
2. How did you receive this information?
3. Any other information you believe to be of relevance to the Commission’s consideration.
4. How you might remedy this matter.
Can you please provide the above information by 5pm, Friday 19 September 2014. In the first instance, to avoid further complaints, you may wish to remove the Twitter post.
Please telephone me if you wish to discuss this further.
I’ll update as things develop.
Why is it that mental illness and smoking go hand in hand? I’m looking closer to getting a foot in the door with my dream job doing something I love. I also plan to quit smoking again with my fiance and two friends. It’s not going to be easy, but I went seven months not smoking. I really don’t enjoy it and it costs too much. That and every cigarette takes eight minutes off my life. I’ll keep you readers of bluehackers posted about the job. I’m excited.
Apparently Microsoft has bought Mojang, the game studio that brought us Minecraft. I find it hard to think of a worse cultural match than this one – Microsoft has spent the last twenty years, for example, trying to move its customers off a once-off license payment and into a subscription model. It reminds me of the last Microsoft game I ever played (Mechwarrior Vengeance – no, seriously). Microsoft bought out the Mechwarrior franchise and (IMO) killed the magic. My guess is that Minecraft’s days (or at least Minecraft’s days of magic) are numbered.
Getting close to Beta now.
I have just spent a week tracking down a mysterious 256 Hz low level (30mVpp) tone that ended up being a software configuration error in the DAC initialisation code! However, before that, I thought it was a noise issue caused by PCB layout, or the wrong sort of bypass capacitor. I also popped another uC looking for it, costing a few days while I had the dud uC removed (thanks Matt) and I loaded a fresh one. Think I better stick to software…..
To get to the bottom of the problem I partially loaded a “minimal” 3rd SM1000. Just enough parts to make the uC run the dac_ut unit test program: power supply bypass, STLINK, crystal oscillator, and boot mode resistors. Only about 1 hour’s soldering, quite remarkable really.
Here is the mysterious triangle wave:
Michael Wild DL2F2 in Germany has built his own SM1000, using a PCB I sent him and the parts off our Digikey BOM. That’s a pretty exciting confirmation of the design. Having another unit running has been very useful when making comparative tests. Here is Michael’s SM1000:
Michael, Rick KA8BMA, and Matt VK5ZM have also been very helpful in suggestions when I have been debugging. Thanks guys! Matt has also been explaining to us how ceramic caps can lose up to 80% of their C as their working voltage is approached. So best to use tantalum or electrolytic capacitors for noise sensitive applications like power supply and analog rail filtering (e.g. C25, C28, C29, C10, C12 on the SM1000). I could hear the noise drop when I soldered an electrolytic capacitor across C25 (input to the SM1000 switching power supply).
Right now I’m trying to reduce some remaining noise sources and soon hope to test over the air to see how the SM1000 works with large RF fields nearby, and also ensure it works OK with a couple of HF radio models. Rick is busy updating the schematic and PCB with a bunch of small changes we have picked up working on the prototypes.
We have a quote for the Qty 100 beta run, and are still on target to ship SM1000s in 2014. As soon as we have completed testing the prototypes we will kick off the Beta run and I’ll start taking pre-orders. Lots of work required on the software, but I figure that can wait until the Beta hardware is getting made. I want to take the open source approach of release early and release often, and that means getting betas into your hands ASAP.
"Software defined everything," DevOps, and cloud are driving open source further and faster than we might have imagined possible just a decade ago. Most recently, Docker containers and orchestration have opened up all kinds of new opportunities to develop, deploy, and manage software from the developer's desktop well into production.
The linux.conf.au 2015 organisers are pleased to announce the Clouds, Containers, and Orchestration miniconf at LCA in Auckland, New Zealand during January 2015. The miniconf will focus on the open source tools and best practices for working with cloud tools, containers, and orchestration software (e.g., Kubernetes, Apache Mesos, geard, and others). We'll have the leading developers working on those tools, as well as users who are deploying them in real production environments to share their knowledge and show where tools will be going in 2015.
Joe Brockmeier, the miniconf organiser, has a long history of involvement with Linux and open source. Currently he works on Red Hat's Open Source and Standards (OSAS) team, is involved with Project Atomic and the Fedora Project's Cloud Working Group, and is a member of the Apache Software Foundation. He is a technology journalist and has written for ReadWriteWeb, LWN, Linux.com, Linux Magazine, Linux Pro Magazine, ZDNet, and many others.