Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Andrew Pollock: [life] Day 212: A trip to the pool

Mon, 2014-09-01 18:25

This is what I get for not blogging on the day of, I can't clearly remember what we did on Friday now...

I had the plumber out in the morning, and then some cleaners to give a quote. I can't remember what we did after that.

After lunch we biked to Colmslie Pool and swam for a bit, I remember that much, and then I had some friends join Anshu and us for dinner, but the rest is coming up blank.


Brendan Scott: brendanscott

Mon, 2014-09-01 15:28

AGs is seeking public submissions on Online Copyright Infringement.

Some thoughts are:

The cover letter to the inquiry cites a PWC report prepared for the Australian Copyright Council.  The letter fails to note that gains are offset by the role of intellectual property in transfer pricing by multinationals.  There is strong evidence to suggest that intellectual property regimes have the effect of substantially reducing Australian taxation revenue through the use of transfer pricing mechanisms.

Page 3 of the discussion paper states that  the High Court in Roadshow said “that there were no reasonable steps that could have been taken by iiNet to reduce its subscribers’ infringements.”  The discussion paper goes on to enquire about what reasonable steps a network operator could take to reduce subscribers’ infringements. The whole of the debate about copyright infringement on the internet is infected by this sort of double speak.

The discussion paper does not specifically ask about a three strikes regime. However, it invites discussion of a three strikes regime by raising it in the cover matter and then inviting proposals as to what might be a “reasonable step”. Where noted, my responses on a particular question relate to a three strikes regime.

Question 1:

To compel an innocent person to assist a third party is to deprive that person of their liberty. The only reasonable steps that come to mind are for network operators to respond to subpoenas validly issued to them – at least that way the question is determined on a case-by-case basis under the supervision of a court.

Question 2:

Innocent third parties should not be required to assist in the enforcement of someone else’s rights. Any assistance that an innocent third party is required to give should be at the rights holder’s cost. To do otherwise is effectively to require (in the case of a network) all customers to subsidise the enforcement of the rights holders’ private rights. This is an inefficient and inequitable equivalent to a taxation scheme for public services. The Government may as well compulsorily acquire the rights in question and equitably spread the cost through a levy.

Question 3:

No.  The existing section 36/101 was specifically inserted to provide exactly the clarity proposed here.  Rights holders were satisfied at the time.

Question 4:

Presumably reasonable is an objective test.

Question 5:

This response assumes the proposed implementation of a “three strikes” regime.

There is a Federal Magistrates court which is able to hear copyright infringement cases.  Defendants should have the right to have the case against them heard in a judicial forum. Under a three strikes regime an individual is required to justify their actions based on an accusation of infringement.  In the absence of a justification they suffer a sanction.  Our legal system should not be driven by mere accusations.  Defendants also have the right to know that the case against them is particular to them and not a cookie cutter accusation.

Question 6:

The court should have regard to what aims a block is intended to achieve, whether a block will be effective in achieving those aims and what impact a block will have on innocent third parties which may be affected by it.  For example, when Megaupload was taken down many innocent people lost their data with no warning.  This is more likely to be the case in the future as computing resources are increasingly shared in load balanced cloud storage implementations. These third parties are not represented in court and have no opportunity to put their case before a block is implemented.

A practice should be established whereby the court requires an undertaking from any person seeking a block to indemnify any innocent third party affected by the block against any damage suffered by them. Alternatively, the Government could establish a victims’ compensation scheme to run alongside such a block. These third parties will be collateral damage from such a scheme. Indeed, if the test for a site is only a “dominant purpose” test then collateral damage is necessarily a consequence of the block. An indemnity will serve the purpose of guiding incentives to reduce damage to innocent third parties.

Question 7

If the Government implements proposals which extend the applicability of authorisation infringements to smaller and smaller entities (eg a cafe providing wifi) then the safe harbour provisions need to be sufficiently simple and certain as to allow those entities to rely on them. At the moment they are complex and convoluted. If a cafe is forced to pay hundreds or thousands of dollars for legal advice about their wifi service, they will simply not provide it.

Question 8

Before the impact of measures can be measured [sic] a baseline first needs to be established for the purpose the Copyright Act is intended to serve.   In particular, the purpose of the Copyright Act is not to reduce infringement.  Rather, its titular purpose is to promote the creation of works and other subject matter.  This receives no mention in the discussion paper.  Historically, the Copyright Act has been promoted as necessary to maintain distribution networks (pre 1980s), as a means of providing creators with an income (last 2 centuries, but repeatedly contradicted empirically – most recently in the Don’t Give Up Your Day Job report),  as a natural right of authors (00s – contrary to judicial pronouncements on the issue) and now, apparently, as a means of stimulating the economy.  An Act which has so mutable a purpose ought to be considered with a jaundiced eye.

The reference to the PWC document suggests that the Hargreaves report would be a good starting point for further policy making.

Question 9

The retail price of downloadable copies of copyright works in Australia (exclusive of GST) should not exceed the price in their country of origin by more than 5% when sold directly.  The 5% figure is intended to allow for some additional costs of selling into Australia.

Implement the Productivity Commission’s recommendations on parallel importation.

Question 10, 11

The next two paragraphs of the response to this question deal primarily with a possible three strikes regime, although the final observations are of a general character.

“Three strikes” regulation will effectively shift the burden of enforcement further away from rights holders to people who are the least equipped to implement it. What will parents who receive warning letters do? Will they implement a sophisticated filtering system on their home router? Will they send their children off to a reeducation camp run by the rights holders? More likely they will impose a blanket ban on internet access. How will cafes manage their risk? More likely they will simply not provide wifi access. This has already been the death knell of community wifi networks in the US. The collateral damage from these proposals is difficult to quantify but there is every reason to believe it will be widespread. This damage is routinely ignored in policy making.

Will rights holders use such a system against everyone? That is unlikely. Rather, it will be used against some individuals unlucky enough to be first on the list. Those individuals will be used as examples for others. This will be a law which is enforced in an arbitrary and discriminatory fashion. As such it will undermine respect for the law more generally.

The comments on the proposals above assume that they are acted on bona fide. Once network operators are conditioned to a Pavlovian response to requests, the system will be abused – the Get Up! organisation already believes it has been the subject of misuse: https://www.getup.org.au/campaigns/great-barrier-reef–3/adani-video/someone-wants-to-silence-us-dont-let-them

Evasion technologies have previously been a niche interest.  The size of the market limited their growth.  These provisions will sheet home to all citizens the need to implement evasion technologies, thereby greatly increasing the market and therefore the economic incentive for their evolution.  The long run effect of implementing proposals which effect this form of general surveillance of the population is to weaken national security.

By insulating rights holders from the costs of enforcement the proposals disconnect rights holders from the very externalities that enforcement creates.  If there were ever a recipe for poor policy, such a disconnection would be a key element of it.
Russell Coker: Links August 2014

Mon, 2014-09-01 02:26

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that’s designed to address some of the failings of SimCity type games [3]. I haven’t played it yet but the page describing it is interesting.

Carlos Buento’s article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.

Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There’s a lot of work in keeping up with that.

Racialicious.com has an interesting article about “Moff’s Law” about discussion of media in which someone says “why do you have to analyze it” [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US, it’s a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.


Sridhar Dhanapalan: Twitter posts: 2014-08-25 to 2014-08-31

Mon, 2014-09-01 01:27

linux.conf.au News: Big announcement about upcoming announcements...

Sun, 2014-08-31 15:27

The Papers Committee weekend went extremely well and without bloodshed according to Steve, although there was some very strong discussion from time to time! The upshot is that we have a fantastic program now, with excellent presentations all across the board.

We have already begun contacting Miniconf organisers to let them know who has been successful and who hasn’t, and over the next couple of weeks we will be sending emails out to everyone who submitted a presentation to let them know how they fared.

If you have been accepted to run a Miniconf then your contact will be Simon Lyall (miniconfs@lca2015.linux.org.au) and if you have been accepted as a speaker then your contact will be Lisa Sands (speakers@lca2015.linux.org.au). We will be asking for a photo of you and your twitter name, as we will be running a Speaker Feature about a different presenter each day - don’t worry - you will be notified on your day!

We want to give great thanks to everyone who submitted papers - you are all still winners in our eyes, and we hope that even if you weren’t selected this time that won’t put you off attending the conference and having a great time. Please note that due to the large volume of submissions, we are unable to provide feedback on why any particular submission was unsuccessful.

Our earlybird registration will be opening soon, so watch this space!

Francois Marier: Outsourcing your webapp maintenance to Debian

Sun, 2014-08-31 08:46

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack:

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
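
For the curious, the client side of this is tiny. Below is a minimal Python sketch, assuming the Gravatar-style convention of an MD5 digest over the lowercased, trimmed address; the SRV lookup that would supply a domain-specific base URL is left out and the base URL is simply passed in.

import hashlib

def avatar_url(email, base_url="http://cdn.libravatar.org/avatar/"):
    # Hash the lowercased, trimmed email and append it to the base URL.
    # A real client would first look for a domain-specific base URL via
    # the SRV record mechanism described above, falling back to the
    # cdn.libravatar.org default used here.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return base_url + digest

print(avatar_url("francois@debian.org"))
# expected (per the example above):
# http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070
print(avatar_url("francois@fmarier.org", "http://fmarier.org/avatar/"))
# expected (per the example above):
# http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9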

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)
Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh (a minimal sketch of such a task follows after this list).
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
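
To give an idea of what the fabric step mentioned above might look like, here is a minimal sketch of a fabric 1.x task; the host names and package name are hypothetical placeholders rather than the project's actual configuration.

# fabfile.py -- a minimal sketch of the "run commands over ssh" step.
# Host names and the package name below are hypothetical placeholders.
from fabric.api import env, sudo

env.hosts = ["master.example.org", "mirror1.example.org"]

def upgrade():
    # Refresh the apt metadata and pull in the latest custom packages.
    sudo("apt-get update")
    sudo("apt-get install -y libravatar-example")  # hypothetical package name

Invoked as "fab upgrade", this runs the same commands on every host listed in env.hosts.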
Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent Fedora users (Fedora being a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

Maxim Zakharov: Australian Singing Competition

Sun, 2014-08-31 03:25

The Finals concert of the 2014 Australian Singing Competition was an amazing experience, and it was the first time I listened to opera singers live.

Congratulations to the winner, Isabella Moore from New Zealand!

Matt Palmer: Chromium tabs crashing and not rendering correctly?

Sat, 2014-08-30 15:26

If you’ve noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to workaround. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you’re like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)] Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect next time you login, or perhaps even only when you restart X (or, at worst, your entire machine).

This doesn’t help you if you’ve got Chromium already open and you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record). In that case, you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
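
If you have Python 3.4 or later handy, the same syscall is also exposed as resource.prlimit, so a small script can stand in for the compiled manpage example. This is just a sketch of that alternative; raising the hard limit of a process generally requires root (CAP_SYS_RESOURCE).

#!/usr/bin/env python3
# Sketch: raise another process's open-file limit via the prlimit syscall
# (Python 3.4+, Linux only). Run as root to raise the hard limit.
import resource
import sys

pid = int(sys.argv[1])  # the first number in the log lines, e.g. 22161

print("old:", resource.prlimit(pid, resource.RLIMIT_NOFILE))
resource.prlimit(pid, resource.RLIMIT_NOFILE, (65535, 65535))
print("new:", resource.prlimit(pid, resource.RLIMIT_NOFILE))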

And now, you can go back to using your tabs as ersatz bookmarks, like I do.

David Rowe: SM1000 Part 4 – Killing a PCB and PTT Working

Fri, 2014-08-29 21:30

Last Sunday the ADC1 net on the first SM1000 prototype went open circuit all of a sudden. After messing about for a few hours I lifted the uC pin for that net and soldered a fine wire to the other end of the net. That lasted a few days then fell off. I then broke the uC pin trying to put it all back together. So then I tried to use some Chip Quick I had laying about from the Mesh Potato days to remove the uC. I royally screwed that up, breaking several pads.

It’s been 7 years since my last surface mount assembly project and it shows!

However when the uC came off the reason for the open circuit became apparent. The photo below was taken through the microscope I use for surface mount assembly:

At the top is the bottom part of a square pad that is part of the ADC1 net. The track is broken just before the lower left corner of the pad. Many of the pads under the uC were in various stages of decomposition, e.g. solder mask and tinning gone, down to bare copper. Turns out I used too much flux and it wasn’t cleaned out from under the chip when I washed the PCB. For the past few weeks it was busy eating away the PCB.

Oh well, one step back! So this week I built another SM1000, and today I brought it to life. After fixing a few small assembly bugs I debugged the “switches and leds” driver and sm1000_main.c, which means I now have PTT operation working. So it’s normally in receive mode, but press PTT and it swaps to tx mode. The sync, PTT, and error LEDs work too. Cool.

Here is a picture of prototype number 2:

The three trimmers along the bottom set the internal mic amp, and line levels to the “mic” and “speaker” ports of the radio. The pot on the RHS is the internal speaker volume control. The two switches upper RHS are PTT and power. On the left is a RJ45 for the audio connections to the radio and under the PCB (not visible) are a bunch of 3.5mm sockets that provide alternate audio connections to the radio.

What next? Well the speaker audio is a bit distorted at high volume so I might look into that and see if the LM386 is behaving as specified. Then hook it up to a real radio and test it over the air. That will shake down the interfaces some more and see if it’s affected by strong nearby RF. Oh, and I need to test USB and a few other minor interfaces.

I’m very happy with progress and we are on track to release the SM1000 in beta form commercially in late 2014.

Tridge on UAVs: Lidar landing with APM:Plane

Fri, 2014-08-29 20:47

Over the last couple of days I have been testing the Lidar based auto-landing code that will be in the upcoming 3.1.1 release of APM:Plane. I'm delighted to say that it has gone very well!

Testing has been done on two planes - one is a Meridian sports plane with a OS46 Nitro motor. That is a tricycle undercarriage, so has very easy ground steering. The tests today were with the larger VQ Porter 2.7m tail-dragger with a DLE-35 petrol motor. That has a lot of equipment on board for the CanberraUAV OBC entry, so it weighs 14kg at takeoff making it a much more difficult plane to land well.

The Lidar is a SF/02 from LightWare, a really nice laser rangefinder that works nicely with Pixhawk. It has a 40m range, which is great for landing, allowing the plane plenty of time to lock onto the glide slope in the landing approach.

APM:Plane has supported these Lidars and other rangefinders for a while, but until now has not been able to use them for landing. Instead they were just being logged to the microSD card, but not actively used. After some very useful discussions with Paul Riseborough we now have the Lidar properly integrated into the landing code.

The test flights today were an auto-takeoff (with automatic ground steering), a quick auto-circuit then an automatic landing. The first landing went long as I'd forgotten to drop THR_MIN down to zero (I normally have it at 20% to ensure the engine stays at a reasonable level during auto flight). After fixing that we got a series of good auto flights.

The flight was flown with a 4 second flare time, which is probably a bit long as it allowed the plane to lose too much speed on the final part of the flare. That is why it bounces a bit as it starts losing height. I'll try with a bit shorter flare time tomorrow.

Here is the video of one of the Meridian flights yesterday. Sorry for missing part of the flight, the video was shot with a cell phone by a friend at the field.

Here is another video of the Porter flying today, but taken from the north of the runway

I'd like to thank Charles Wannop from Flying Finish Productions for the video of the Porter today with help from Darrell Burkey.

Gary Pendergast: The Next Adventure

Fri, 2014-08-29 16:26

Over my past few years at Automattic, I’ve worked on a bunch of different teams and projects – VideoPress, the WordPress iOS app, various Social projects, and most recently, o2. I even took a few months to work on WordPress core, helping build the auto-update functionality that we now see rolling out security updates within hours of their release.

The few months I spent working on WordPress core made me realise something – there’s a lot more I have to contribute. So, with the WordPress 4.0 RC out the door, I’m super excited to be moving to my next project – working on WordPress core full time!

Automattic naturally puts a lot of people-hours into WordPress, with over 30 of us contributing to WordPress 3.9. I’m looking forward to being a bigger part of that, and giving more back to the WordPress community!

Robert Collins: Test processes as servers

Fri, 2014-08-29 15:29

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive, particularly in test suites with tens of thousands of tests.

Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery directory walking and importing)?

Secondly, testr has an inconsistent interface – if testr is letting a user debug through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex.

Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let’s stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be stdin – a command providing a packet of stdin, used for interacting with debuggers (a rough sketch of such a command loop follows below)
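
To make the shape of that concrete, here is a very rough Python sketch of such a command loop; the command names and the load_tests/run_tests hooks are placeholders for whatever subunit.run would actually wire in, and the stdin-forwarding command is omitted.

# A rough sketch of the proposed server mode: read commands from stdin,
# write test ids (and, in a real implementation, subunit results) to stdout.
# Command names and the load_tests/run_tests hooks are placeholders.
import sys

def serve(load_tests, run_tests):
    test_ids = load_tests()  # eager-load the available tests on startup
    for line in sys.stdin:
        command, _, argument = line.strip().partition(" ")
        if command == "list-tests":
            for test_id in sorted(test_ids):
                sys.stdout.write(test_id + "\n")
            sys.stdout.flush()
        elif command == "run-tests":
            wanted = set(argument.split())  # possibly a large batch of ids
            run_tests([t for t in test_ids if t in wanted])
        elif command == "quit":
            break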

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.
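
As a sketch of that idea – using the third-party watchdog library as a stand-in for raw inotify, and only reloading at a synchronisation point between runs – something like the following could sit in front of list-tests and run-tests; the wiring and paths are purely illustrative.

# Sketch: collect changed .py files as events arrive, but only reload the
# corresponding modules at a safe point between test runs, never mid-run.
# Uses the third-party watchdog library as a stand-in for raw inotify.
import importlib
import sys

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

changed_paths = set()

class ChangeCollector(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith(".py"):
            changed_paths.add(event.src_path)

def reload_changed_modules():
    # Call this only when synchronised with test completion.
    for path in list(changed_paths):
        for module in list(sys.modules.values()):
            if getattr(module, "__file__", None) == path:
                importlib.reload(module)
        changed_paths.discard(path)

observer = Observer()
observer.schedule(ChangeCollector(), path=".", recursive=True)
observer.start()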

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…