Planet Linux Australia

Maxim Zakharov: dpsearch-4.54-2015-07-06

Fri, 2015-07-10 09:25

A new snapshot version of DataparkSearch Engine has been released. You can get it on Google Drive.

Here is the list of changes since previous snapshot:

  • The Crossword section now includes the value of the TITLE attribute of IMG tags, and the values of the ALT and TITLE attributes of A and LINK tags, found on documents pointing to the document being indexed
  • Meta PROPERTY tags are now indexed
  • URL info data is now stored for all documents with an HTTP status code < 400
  • configure now understands the --without-libextractor switch to build dpsearch without libextractor support even if it has been installed
  • robots.txt support is enabled for sites crawled using the HTTPS scheme
  • AuthPing command has been added to send authorisation request before getting documents from a web-site. See details below.
  • Cookie command has been added.
  • Added support for SOCKS5 proxies, with no authorisation or with username authorisation. See details below.
  • A number of minor fixes

AuthPing command

Some web-sites may serve different content to a logged-in user. In most cases the login process consists of sending a POST or GET HTTP request to a specific URL before you start to receive the targeted content. You can use the AuthPing command to send such an authentication request before requesting any document from the web-site.


AuthPing "POST"

This command specifies a POST request to be sent to the given URL address along with its CGI payload.

The AuthPing command should be specified before each Server/Realm/Subnet command it affects. The specified request is sent each time an indexing thread accesses a web-server for the first time in a run session.
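A hypothetical indexer.conf fragment showing that placement (the URL, form fields, and exact argument layout here are assumptions for illustration; check the DataparkSearch documentation for the authoritative syntax):

```
# Hypothetical example: argument layout is assumed, values are illustrative
AuthPing "POST http://www.example.com/login.php login=myname&passwd=mypass"
Server http://www.example.com/
```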

Using SOCKS5 proxy

The Proxy command now accepts a proxy type option with a value of either http or socks5. If you need to use username authentication with a SOCKS5 proxy, please use the ProxyAuthBasic command to specify the username and password.


Proxy socks5 localhost:9050

In this example, a SOCKS5 proxy connection to a local Tor system is specified, using no authentication for the connection.
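And a sketch of the username-authenticated case (the host, port, and credentials are illustrative; ProxyAuthBasic is the command named above):

```
# Illustrative values only
Proxy socks5 proxy.example.com:1080
ProxyAuthBasic myuser:mypass
```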

Binh Nguyen: Ableton and Ableton Push Hacking

Fri, 2015-07-10 00:47
For those who have been tracking this blog, it's been obvious that I've recently been spending more and more time with the Ableton Push.

If you don't know what this is please see the following...  

Easy to miss Push features

Helpful Push information.

Jeremy Ellis meets Ableton Push

Mad Zach Push Performance Walkthrough

Decap Push Performance

It's basically an advanced, modern musical instrument/MIDI controller.

Others have attempted to de-compile and extend/modify the behaviour of the device, but while the information and extensions provided have been interesting and useful, they have been somewhat limited.

ableton: just release the py midi remote scripts

Live 9 MIDI Remote Scripts revealed...

I'm beginning to understand why. The following link provides an update to automatically generated documentation (via epydoc) of decompiled Ableton Remote Script code (my scripts for decompilation and automated documentation have been included in the package).

If you want to make any additional modifications of behaviour you'll need to be aware of the following:

- you'll need to catch up on your Python coding

- you'll need knowledge of how the device works, music and mathematical theory, Ableton, and core computing knowledge. It is not sufficient to know how these work separately. You need to know how everything fits together.

- sounds obvious, but start small and move up. This is critical, particularly with reference to the awkward style of programming that they can sometimes resort to. More on this below

- the code can vary in quality and style quite significantly at times. At times it seems incredibly clean, elegant, and well documented. At other times there is no documentation at all, and it doesn't seem to be well designed or engineered, or to have been written with maintenance in mind. For instance, a commonly used design pattern is MVC; this code doesn't seem to follow it. They use a heap of sentinels throughout their code. Moreover, the characters that are used can be a bit confusing. They don't use preprocessor directives/constants where they may be better suited. If you break certain aspects of the code you can end up breaking a whole lot of other parts. This may be deliberate (to reduce the chances of third-party modification, which is likely, particularly as there seem to be some authentication/handshake mechanisms in the code to stop it from working with 'uncertified devices') or not (they lack resources or just have difficult timelines to deal with)

- be prepared to read through a lot of code just to understand/make a change to something very small. As stated previously strictly speaking at times they don't adhere to good practice. That said, other aspects can be changed extremely easily without breaking other components

- due to the previous two points it should seem obvious that it can be very difficult to debug things sometimes. Here's the other thing you should note:

- the reason why Ableton suffers from strange crashes and hangs from time to time becomes much more obvious when you look at the way they code. In the past, I've built programs (ones which rely on automated code generation in particular) that relied on a lot of consecutive steps that required proper completion/sequencing for things to work properly. When things work well, things are great. When things break, you feel and look incredibly silly

- you may need to figure out a structure for ensuring and maintaining a clean coding environment. I try to have two screens with one for clean code and another for modified code. Be prepared to restart from scratch by reverting to a clean pyc code and only one or a small number of modified py files.

- caching occurs in situations where you may not entirely expect it. If you cannot explain what is happening and suspect caching, just restart the system. Better yet, maintain your development environment in a virtual machine to reduce the hardware stress caused by continual restarts.

- you will need patience. As stated previously, due to the way code has been structured (sometimes) you'll need to understand it properly to allow you to make changes without breaking other parts. Be prepared to modify, delete, or add code just to help you understand it

- if you've ever dealt with firmware or embedded devices on a regular basis you would be entirely familiar with some of what I'm talking about. Like a lot of embedded devices you'll have limited feedback if something goes wrong and you'll be scratching your head with regards to how to work the problem

You may require a lot of Linux/UNIX based tools and other debugging utilities such as IDA Pro, Process Explorer and Process Monitor from the Sysinternals Suite. Once you examine Ableton using such utilities, it becomes much clearer how the program has been structured, engineered, and designed. One thing that can cause mayhem in particular is the Ableton Indexer which when it kicks in at the wrong time can make it feel as though the entire system has frozen.

Ableton indexing Crashes

(42474187) Disable "Ableton Index" possible?

The actual index file/s are located at

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Database

- the most relevant log file is located at,

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Preferences\Log.txt

The timestamps work on the basis of the amount of time since program startup. The time of startup is clearly outlined.

Ableton takes 130 - 2 mins to start up ?

Delete it if you need to, or if you get confused about how it works.

- be aware that there are some things that you can't do anything about. The original Novation Launchpad was considered somewhat sluggish in terms of refresh rate and latency. The electronics were subsequently updated in the Novation Launchpad S to deal with it. You may encounter similar circumstances here.

Push browser - slow, freezing, sluggish :(

- they have a strong utility/systems engineering mentality. A lot of files are archives which include relatively unobfuscated content. For instance, look in

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Live Reports

and you'll find a lot of 'alp' files. These are 'Crash Reports' which are sent to Ableton to help debug problems. Rename them with a gz file extension and run them through hexedit. The same goes for 'adg' audio device group files: rename to gz and gunzip to see a flat XML file containing some encoded information but mostly free/human-readable content. It will be interesting to see how much of this can be manually altered, achieving flexibility in programming without having to understand the underlying file format.
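The rename-and-gunzip trick can also be scripted. Here's a minimal Python sketch (the file path in the comment is illustrative) that decompresses an .adg/.alp archive directly, without renaming it first:

```python
import gzip
from pathlib import Path

def extract_ableton_archive(path):
    """Decompress a gzip-packed Ableton file (.adg/.alp) and return its bytes.

    For .adg files the result is a flat XML document; .alp crash reports
    contain a mix of readable and encoded content.
    """
    return gzip.decompress(Path(path).read_bytes())

# e.g. print(extract_ableton_archive("My Rack.adg").decode("utf-8", "replace"))
```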

- each version of Ableton seems to have a small version of Python included. To make certain advanced extensions, others have suggested installing libraries separately or a different version of Python...

- be prepared to learn multiple protocols and languages in order to make the changes that you want

How to control the Push LCD text with sysex messages
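As a rough illustration of the technique in that link, the sketch below builds a sysex message that writes one line of text to the Push 1 LCD. The header and command bytes (F0 47 7F 15, line commands 0x18-0x1B, 68-character lines) are assumptions based on community reverse engineering, not official documentation; you would send the resulting bytes with whatever MIDI library you prefer.

```python
def push_lcd_sysex(line: int, text: str) -> bytes:
    """Build a sysex message for one Push 1 LCD line (line 0-3).

    Byte layout assumed from community reverse engineering:
    F0 47 7F 15 <0x18+line> 00 45 00 <68 ASCII chars> F7
    """
    if not 0 <= line <= 3:
        raise ValueError("Push 1 has 4 LCD lines")
    # Pad/truncate to the assumed 68-character line width
    padded = text.ljust(68)[:68].encode("ascii", "replace")
    header = bytes([0xF0, 0x47, 0x7F, 0x15, 0x18 + line, 0x00, 0x45, 0x00])
    return header + padded + bytes([0xF7])
```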

For a lot of people, the device seems incredibly expensive for what amounts to a MIDI controller. It was much the same with me. The difference is that it's becoming increasingly clear how flexible the device can be with adequate knowledge of the platform. feature requests

The Ableton Push is a good platform but it will never realise its full potential if the software isn't upgraded.

If you are interested in signing up to test the latest Beta version of Ableton please see the following...

Simon Lyall: NetHui 2015 – Thursday Afternoon

Thu, 2015-07-09 15:28

Domains: growth, change, transition

  • Transition of .nz to second level domains
  • Some stuff re moving root zone control away from the US
  • Problem with non-ASCII domains (IDNs). They work okay, but not in 3rd-party apps or apps within organisations. E.g. can’t register on Facebook or other websites.
  • 60% of Government Depts don’t accept IDNs as email addresses; nor do lots of other orgs
  • 1/3 of all new .nz domains created at second level
  • Around 95k of 600k .nz domains now at second level (about 2/3 of these from rights as the existing 3LD holder)
  • Some people when you give them your change it into
  • 1st principles of .nz whois public policy.
  • People are in danger if their address is published
  • But want the ability to contact the real owner of a domain
  • 4 people in room with signed domains
  • 300 signed .nz domains. 150 with DS record
  • Around 3 people in room with new TLDs. See for current stats

Internet of Things

  • Where does the data from your house appliances go?
  • Forwarded to other companies
  • Issues need to be understandable by ordinary citizens especially terms and conditions
  • Choose the data that you choose to share with the company rather than company choosing what it shares with you (and others)
  • In health care area people worried about sharing data if it will affect their insurance premiums or coverage
  • Many people don’t understand what their data is, they don’t understand that if every time they do something (on a device) it is stored and can be used later. How to educate people without sounding paranoid?
  • “IoT is connecting things whose primary purpose is not connecting to the Internet”
  • “The cost of sharing is bearable, because the sharing is valuable.”
  • More granularities of trust. No current standards or experience or feeling for this since such a new area and rapidly evolving
  • NZ law should override overly aggressive agreements (by overseas companies)
  • Some discussion about standards, lots of them, full stack, piecemeal, rapidly changing
  • Will the IoT make everything useless after the zombie apocalypse?
  • “Denial of Service attack on your IoT pill bottle would be bad!”
  • Concern that something like a pill bottle failing can put life in danger. Very high level of reliability needed which is rare and hard in software

Panel: Parliamentary Internet Forum

  •  With Gareth Hughes (Green Party), Clare Curran (Labour Party), Brett Hudson (National Party), Ria Bond (NZ First), Karen Melhuish Spencer (Core Education), Nigel Robertson (University of Waikato)
  • What roles does the Education system play in the Internet
    • National guy mostly talked about UFB and RBI programmes, computers in homes
    • Gareth Hughes adopts the “I went out to XYZ School” story. Pushes Teachers not trained and 1 in 4 homes don’t have Internet access.
    • Clare – Got distracted by a discussion re her pants. But she said 40% of jobs at risk over the next 10-15 years due to the impact of technology
    • Karen – I got distracted by another clothing-related discussion on Twitter
    • Nigel – 1. Use the Internet to do what we already do better. Help people to use the Internet better (digital literacy)
  • Lots of discussion about retraining older people to handle jobs in the future as their present jobs go away
  • How much should government be leading vs getting out of the way and just funding it?
    • Nigel – Government should provide direction. Different in tertiary and other sectors
    • Karen – Collaborative and connected but not mandating
  • “We need to prepare people not just for the jobs of the future, but also to create the companies of the future” – Martin Danner
  • Lots of other stuff but I got distracted.


Colin Charles: #PerconaLive Amsterdam – schedule now out

Thu, 2015-07-09 14:25

The schedule is out for Percona Live Europe: Amsterdam (September 21-23 2015), and you can see it at:

From MariaDB Corporation/Foundation, we have 1 tutorial: Best Practices for MySQL High Availability – Colin Charles (MariaDB)

And 5 talks:

  1. Using Docker for Fast and Easy Testing of MariaDB and MaxScale – Andrea Tosatto (Colt Engine s.r.l.) (I expect Maria Luisa is giving this talk together – she’s a wonderful colleague from Italy)
  2. Databases in the Hosted Cloud – Colin Charles (MariaDB)
  3. Database Encryption on MariaDB 10.1 – Jan Lindström (MariaDB Corporation), Sergei Golubchik (Monty Program Ab)
  4. Meet MariaDB 10.1 – Colin Charles (MariaDB), Monty Widenius (MariaDB Foundation)
  5. Anatomy of a Proxy Server: MaxScale Internals – Ivan Zoratti (ScaleDB Inc.)

OK, Ivan is from ScaleDB now, but he was the SkySQL Ab ex-CTO, and one of the primary architects behind MaxScale! We may have more talks as there are some TBD holes to be filled up, but the current schedule looks pretty amazing already.

What are you waiting for, register now!

Simon Lyall: NetHui 2015 – Thursday Morning

Thu, 2015-07-09 09:28

Ministerial address: Hon. Amy Adams, Minister for Communications

  • Mentions she was at community group meeting where people were “shocked” when it was suggested that minutes be sent via email
  • Talk up of the UFB rollout. Various stats about how it is going
  • Also mentioned that Mobile build is part of UFB, better cellular connectivity in rural regions
  • Notes that this will never be 100% complete. The bar keeps moving
  • Very different takeup in different regions: 2% in some, 19% in others. Local organisations pushing
  • Good Internet is especially important for remote countries like New Zealand
  • Talk about getting better access in common areas (eg shared driveways) for network builds
  • Notes how Broadcasting and Communications as well as other areas are converging. Previously they were separate silos. Similar for other areas.
  • Harmful Digital Communications Act.
    • Says it is a new framework; adjustment may be needed as it beds down in the courts.
    • Says that majority of cases will go to mediation
    • Similar Act in Australia very few things going to courts
    • Gave similar silly literal readings of other acts (the RMA requires a permit to sneeze)
  • 5 “Questions” to the minister: 2 on TPP, 1 on captions, 1 pushing some project, and one actual question that she got to answer.
  • Maybe they should look at this idea for the Questions

Keynote: Kathy Brown, ISOC CEO

  • GDP of a nation is highly correlated with the growth of the Internet
  • 75% of the benefit of the Internet goes to existing businesses
  • ISOC Global Internet Report 2015
  • Huge growth in Mobile Internet
  • “94% of the global population is covered by mobile networks. Mobile broadband covers 48% of global population”
  • Huge gap between developed and developing countries
  • Report is Online and “Interactional”
  • Challenges
    • Openness of the Internet means information is out there, exposed and gettable by the wrong people sometimes
    • Generational divide in attitude to privacy
  • Privacy is a matter of personal choice. The tools should be available should you wish to use them

Govt 2.0: Digital by default

  • Rachel Prosser and David Farrar facilitating.
  • Room full
  • Result 10 programme background
  • NZ Government Web toolkit
  • 50,000 registered with NZ Realme site
  • Shared rules between local governments; problems with same rules everywhere. Some limitations. Perhaps at least similar technical standards
  • People don’t care about government structure, they just want a service; they don’t care how depts are arranged.


Simon Lyall: NetHui 2015 – InTac afternoon

Wed, 2015-07-08 14:28

Building an access network for demand and scale – new challenges – Kurt Rogers, Chorus

  • Over 1 million broadband connections on access network
  • 70-80% of BB connections
  • Average connection speed now near 20Mb/s due to VDSL and Fibre
  • Busiest 15 minute period (around 9pm Thursday) of week averaging 0.5Mb/s per user ( up from 100kb/s just 3 years ago )
  • Jump in mid-2013 when Netflix and Lightbox launched
  • Average bandwidth per user growing 50%/year. Grown that much in 1st half of 2015
  • Quite a few people still on ADSL1 modems when ADSL2 would work
  • Similarly, a lot of people can get VDSL but don’t realize it
  • Lots of people on 30Meg fibre plan at the start, now most going for 100Mb/s
  • Rural broadband (RBI)
    • 85k lines upgraded to FTTN
    • Average speed jumped from 5.6Mb/s to 15Mb/s after a single rural cabinet was upgraded because everybody could now use ADSL2 and a faster uplink. One fibre guy got 48Mb/s on VDSL, another 37Mb/s
    • More speed out there than some people realize
  • VDSL bandplan moving from 997 to 998. Trial average speed increases were from 32 to 46Mb/s downstream. Minimal change on upstream speed.
  • Capacity
    • Aggregation link bandwidth. Alert threshold at 70%, Max threshold at 90%
  • Technology down the road to speed up aggregation links with Next Generation PON technology

The new smart ISP – Colin Brown, GM of Networks at Spark

  • Working on caching infrastructure, bigger and closer to their edge
  • Big traffic growth this year
  • Big growth in mobile traffic especially upload
  • 60% of phones in stores are 4G capable
  • Providers investing a lot of money , profits lower. Less like banks, more like airlines
  • Technology refresh every 5 years rather than every 10


Rusty Russell: The Megatransaction: Why Does It Take 25 Seconds?

Wed, 2015-07-08 14:28

Last night f2pool mined a 1MB block containing a single 1MB transaction.  This scooped up some of the spam which has been going to various weakly-passworded “brainwallets”, gaining them 0.5569 bitcoins (on top of the normal 25 BTC subsidy).  You can see the megatransaction on

It was widely reported to take about 25 seconds for bitcoin core to process this block: this is far worse than my “2 seconds per MB” result in my last post, which was considered a pretty bad case.  Let’s look at why.

How Signatures Are Verified

The algorithm to check a transaction input (of this form) looks like this:

  1. Strip the other inputs from the transaction.
  2. Replace the input script we’re checking with the script of the output it’s trying to spend.
  3. Hash the resulting transaction with SHA256, then hash the result with SHA256 again.
  4. Check the signature correctly signed that hash result.

Now, for a transaction with 5570 inputs, we have to do this 5570 times.  And the bitcoin core code does this by making a copy of the transaction each time, and using the marshalling code to hash that; it’s not a huge surprise that we end up spending 20 seconds on it.
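The per-input cost can be sketched in Python. `double_sha256` is bitcoin's standard transaction hash, and the loop mirrors the naive re-hash-everything approach described above (the sizes are illustrative):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's transaction hash: SHA256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def naive_sighash_bytes(stripped_tx: bytes, n_inputs: int) -> int:
    """Hash the whole (modified) transaction once per input, as bitcoin
    core effectively does; returns the total number of bytes hashed."""
    total = 0
    for _ in range(n_inputs):
        double_sha256(stripped_tx)  # one full re-hash per input
        total += len(stripped_tx)
    return total
```

For the megatransaction that's roughly 6 kB × 5570 ≈ 33 MB of hashing, before counting the per-input copying and marshalling overhead.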

How Fast Could Bitcoin Core Be If Optimized?

Once we strip the inputs, the result is only about 6k long; hashing 6k 5570 times takes about 265 milliseconds (on my modern i3 laptop).  We have to do some work to change the transaction each time, but we should end up under half a second without any major backflips.

Problem solved?  Not quite….

This Block Isn’t The Worst Case (For An Optimized Implementation)

As I said above, the amount we have to hash is about 6k; if a transaction has larger outputs, that number changes.  We can fit in fewer inputs though.  A simple simulation shows the worst case for 1MB transaction has 3300 inputs, and 406000 byte output(s): simply doing the hashing for input signatures takes about 10.9 seconds.  That’s only about two or three times faster than the bitcoind naive implementation.

This problem is far worse if blocks were 8MB: an 8MB transaction with 22,500 inputs and 3.95MB of outputs takes over 11 minutes to hash.  If you can mine one of those, you can keep competitors off your heels forever, and own the bitcoin network… Well, probably not.  But there’d be a lot of emergency patching, forking and screaming…
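A back-of-envelope check of those numbers, using the hashing rate implied by the 265 millisecond measurement above (the rate constant is derived from the post's own figures, not measured here):

```python
# Rate implied by "hashing 6k 5570 times takes about 265 milliseconds"
HASH_RATE = (6_000 * 5_570) / 0.265   # bytes/sec, roughly 126 MB/s

def sighash_seconds(n_inputs: int, stripped_tx_bytes: int) -> float:
    """Estimated time to hash the stripped transaction once per input."""
    return n_inputs * stripped_tx_bytes / HASH_RATE

# 1MB worst case: ~3300 inputs x ~406k of outputs -> about 10.6 seconds
# 8MB worst case: 22,500 inputs x ~3.95MB of outputs -> about 12 minutes
```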

Short Term Steps

An optimized implementation in bitcoind is a good idea anyway, and there are three obvious paths:

  1. Optimize the signature hash path to avoid the copy, and hash in place as much as possible.
  2. Use the Intel and ARM optimized SHA256 routines, which increase SHA256 speed by about 80%.
  3. Parallelize the input checking for large numbers of inputs.

Longer Term Steps

A soft fork could introduce an OP_CHECKSIG2, which hashes the transaction in a different order.  In particular, it should hash the input script replacement at the end, so the “midstate” of the hash can be trivially reused.  This doesn’t entirely eliminate the problem, since the sighash flags can require other permutations of the transaction; these would have to be carefully explored (or only allowed with OP_CHECKSIG).
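The midstate-reuse idea maps directly onto how incremental hash APIs work. In Python's hashlib, for example, copying a partially-fed hash object lets you pay for the common prefix once; the transaction fields below are placeholders, not real serialization:

```python
import hashlib

# Feed the part of the transaction that is identical for every input once...
common = hashlib.sha256()
common.update(b"version|outputs|locktime")  # placeholder for the shared fields

# ...then, per input, copy the midstate and append only the varying script.
digests = []
for script in (b"input-script-1", b"input-script-2"):
    h = common.copy()          # cheap: duplicates the internal hash state
    h.update(script)
    digests.append(h.digest())
```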

This soft fork could also place limits on how big an OP_CHECKSIG-using transaction could be.

Such a change will take a while: there are other things which would be nice to change for OP_CHECKSIG2, such as new sighash flags for the Lightning Network, and removing the silly DER encoding of signatures.

Stewart Smith: The sad state of MySQL and NUMA

Wed, 2015-07-08 13:27

Way back in 2010, MySQL Bug 57241 was filed, pointing out that the “swap insanity” problem was getting serious on x86 systems – with NUMA being more and more common back then.

The swapping problem is due to running out of memory on a NUMA node and having to swap things to other nodes (see Jeremy Cole‘s blog entry also from 2010 on the topic of swap insanity). This was back when 64GB and dual quad core CPUs was big – in the past five years big systems have gotten bigger.

Back then there were two things you could do to have your system be usable: 1) numa=off as a kernel boot parameter (this likely has other implications though) and 2) "numactl --interleave=all" in the mysqld_safe script (I think MariaDB currently has this built in if you set an option, but I don't think MySQL does, otherwise perhaps the bug would have been closed).

Anyway, it’s now about 5 years since this bug was opened, and even though there’s been a patch in the Twitter MySQL branch for a while (years?) and my Oracle Contributor Agreement-signed patch has been attached to bug 72811 since May 2014 (over a year), we still haven’t seen any action.

My patch takes the approach that things allocated at server startup (e.g. the buffer pool) should be interleaved across nodes, while runtime allocations are probably per connection and are thus fine (in fact, better) to do as node-local allocations.

Without a patch like this, or without running mysqld with the right numactl incantation, you end up either having all your memory on one NUMA node (potentially not utilising full memory bandwidth of the hardware), or you end up with swap insanity, or you end up with some other not exactly what you’d expect situation.

While we could have MySQL be more NUMA aware and perhaps do a buffer pool instance per NUMA node or some such thing, it’s kind of disappointing that for dedicated database servers bought in the past 7+ years (according to one comment on one of the bugs) this crippling issue hasn’t been addressed upstream.

Just to make it even more annoying, on certain workloads you end up with a lot of mutex contention, which can end up meaning that binding MySQL to fewer NUMA nodes (memory and CPU) ends up increasing performance (cachelines don’t have as far to travel) – this is a different problem than swap insanity though, and one that is being addressed.

Update: My patch as part of has been merged! MySQL on NUMA machines just got a whole lot better. I just hope it’s enabled by default…

Simon Lyall: NetHui 2015 – InTac morning

Wed, 2015-07-08 10:28

Introduction – Dean Pemberton, InternetNZ

Dean was going to do an intro but got cock-blocked by some guy in a High-Vis vest.

The People Factor: what users want – Paul Brislen, ex-CEO of TUANZ

  • Working from home since 1999, 30kb/s at first. Made it work
  • Currently has 10Mb/s shared with busy family, often congested, not using much TV yet
  • Television driving demand.
  • Some infrastructure showing the strain
  • Southern cross replacement will be via Sydney. A couple of thousand km in the wrong direction when going to the US
  • Rural broadband still to deliver on the promise, no uptake stats, not great service level
  • Internet access critical path for economic development. lack of political will
  • Dean got to do his intro talk now.
  • Will Internet be priced on peak usage? A: Already offpeak discounts, some ISPs manage home/biz customer ratio to keep traffic balanced
  • Average usage per customer is 5Mb/s for a streaming-orientated ISP (acct sold with device).
  • 60% of International traffic going to Aus (to CDNS)
  • Consumers don’t accept buffering, high quality video (bitrate and production quality). Want TV to just-work.
  • NZ doesn’t want to be a “rural” level of internet access, equiv to a farm in more connected countries
  • Could multicast work for live events like sport?
  • Hard to get overage to work when people leave the TV on all day
  • Plenty of people in Auckland not getting UFB till 2017 (or later)

The connected home and the Internet of Things – Amber Craig, ANZ

  • At top of Hype cycle
  • Has home Switches on Wemo (have to get upgraded)
  • Lots of devices generating a lot of data
  • Video Blogging – 10GB of raw data, 1GB of finished for just 5 minutes. Uploading to shared drives, sending back and forth through multiple edits
  • Network capacity is probably not much of an issue for IoT compared to video, but the home will be a source of a lot more uploads
  • With IPv6 maybe less NAT, harder to manage (since people are not used to it).
  • Whose responsibility is it to ensure that Internet works in every room
  • Building standards, what are customers, government, ISP each prepared to pay for?
  • What about medically dependent people who need the Internet? A lot of this goes over GSM since that is more “reliable”

Lightbox – content delivery in New Zealand – Kym Nyblock, Chief Executive of Lightbox

  • Lightbox is part of Spark ventures, morepork, skinny, bigpipe
  • Lightbox – online TV service, $12.99/month, thousands of hours of online content
  • 40% of US household have SVOD, but pay-TV only down 25%
  • Many providers around the world, multiple providers in many countries. Youtube also bit player in the corner
  • SVOD have some impact on piracy, especially those who only pirate cause they want content same day as programme airs in the US
  • Lots of screens now in the house, TV not only viewed on TVs
  • Lightbox challenges
    • Rights issues, lots of competition with other providers, some with fuzzy launch dates
    • NZ Internet not too bad
    • Had to work within an existing company
  • Existing providers
    • Sky – 850k homes, announced own product, has most sports
    • Netflix – approx 30k homes, coming to NZ soon
  • From Biz plan to launch in 12 months
  • Marketing job to be very simple – “Grandma Rule” ( can be explained to Grandma, used by her)
  • Express service delivers content right after it airs in the US. Lots of views for episodes that are brand new. One new episode can be 10% of a day’s total views
  • Very agile company, plans changed a lot.
  • Future
    • Customers will have several providers and change often
    • Multiple providers in the market, more to come
    • Premium and exclusive content will drive, simple interface will keep it
    • Rights issues are a problem but locked into the studio system
    • Try to “grow the category”, majority on consumers still using linear, scheduled TV
    • Try to address local rights ownership. This is the bit where they dug at US based providers and people using them.
    • Working on a Sports offering
    • and then she showed a Lightbox ad
    • Question about the costs to other ISPs of delivering good Lightbox performance due to charges from Spark-Wholesale for bandwidth exchanged. Not really answered

Quickflix – another view of content delivery in New Zealand – Paddy Buckley, MD of Quickflix NZ

  • 1st service to launch in March 2012
  • Subscription service for movies and TV shows and Standalone pay-per-view service for new-release movies and some TV shows
  • Across lots of devices, Smart TVs, phones, computers, games consoles, tablets, tivo, chromecast. No Linux Client
  • Just 15% of views via the website now
  • Content: New release movies, subscriptions content movies, TV shows
  • Uses Akamai for delivery. Hosting Centers in Sydney and Perth. AWS/Azure
  • Unwritten 5 second rule. Content should play within 5 seconds of pressing play
  • The future
    • Multiple Models, Not just SVOD, eg TVOD, AVOD, EVOD, EST
    • More fibre, fast home wifi and better hardware
    • VOD content getting nearer to the viewer. HbbTV combines broadcast and on-demand being done by freeview
    • Android TV
    • Viewing levels to increase (volume and frequency), people will pick and mix between providers
    • Aiming at 50% of households; 1 million is quite a lot for any scale.
  • Coming soon
    • 1080p/4K , 5.1 surround sound
    • Fewer device limits. All services and all devices
    • More streams
    • Changing release windows
    • Live streaming
    • PPV options to complement
    • Download now, view later
  • What we need from ISPs
    • Significant bandwidth
    • Mooorrreee bandwidth
    • People will change ISPs if the ISP can’t provide the level of service
    • Netflix is naming and shaming. Netflix best/worst list
  • Prediction that NZ could hit 50% SVOD within a couple of years
  • Asked if they will be going broke in the next few months. Says he’s done a deal with Presto in Aus which will ease funding problems, but business as normal in NZ
  • SVOD has evolved from back-catalog TV shows a few years ago to first-run now. Will probably keep going forward with individual shows being provider-exclusive for now, especially since services are fairly low cost per month
  • A few questions about subtitles. Usually available (although they can cost extra) but support on end devices for turning them on/off is not good.


James Morris: Linux Security Summit 2015 Schedule Published

Wed, 2015-07-08 02:27

The schedule for the 2015 Linux Security Summit is now published!

The refereed talks are:

  • CC3: An Identity Attested Linux Security Supervisor Architecture – Greg Wettstein, IDfusion
  • SELinux in Android Lollipop and Android M – Stephen Smalley, NSA
  • Linux Incident Response – Mike Scutt and Tim Stiller, Rapid7
  • Assembling Secure OS Images – Elena Reshetova, Intel
  • Linux and Mobile Device Encryption – Paul Lawrence and Mike Halcrow, Google
  • Security Framework for Constraining Application Privileges – Lukasz Wojciechowski, Samsung
  • IMA/EVM: Real Applications for Embedded Networking Systems – Petko Manolov, Konsulko Group, and Mark Baushke, Juniper Networks
  • Ioctl Command Whitelisting in SELinux – Jeffrey Vander Stoep, Google
  • IMA/EVM on Android Device – Dmitry Kasatkin, Huawei Technologies

There will be several discussion sessions:

  • Core Infrastructure Initiative – Emily Ratliff, Linux Foundation
  • Linux Security Module Stacking Next Steps – Casey Schaufler, Intel
  • Discussion: Rethinking Audit – Paul Moore, Red Hat

Also featured are brief updates on kernel security subsystems, including SELinux, Smack, AppArmor, Integrity, Capabilities, and Seccomp.

The keynote speaker will be Konstantin Ryabitsev, sysadmin for kernel.org. Check out his Reddit AMA!

See the schedule for full details, and any updates.

This year’s summit will take place on the 20th and 21st of August, in Seattle, USA, as a LinuxCon co-located event.  As such, all Linux Security Summit attendees must be registered for LinuxCon. Attendees are welcome to attend the Weds 19th August reception.

Hope to see you there!

Matt Palmer: It's 10pm, do you know where your SSL certificates are?

Tue, 2015-07-07 13:46

The Internet is going encrypted. Revelations of mass surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.

The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.

However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.

This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue, in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee to prevent misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.

Much of Certificate Transparency’s power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched SSL Aware, a site for searching the database of logged certificates. At present, it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you’ll get an e-mail about it) and more advanced searching capabilities.
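Anyone can poll a CT log themselves: the read side of the protocol is plain HTTPS and JSON, per RFC 6962. A minimal sketch of fetching and summarising a log’s Signed Tree Head (the helper names are mine; the `/ct/v1/get-sth` endpoint is the standard one, but check your log’s base URL):

```python
import json
import urllib.request

def fetch_sth(log_url):
    """Fetch a log's Signed Tree Head (RFC 6962 get-sth endpoint)."""
    with urllib.request.urlopen(log_url.rstrip("/") + "/ct/v1/get-sth") as resp:
        return json.loads(resp.read())

def summarise_sth(sth):
    """Pull out the fields a monitor cares about."""
    return {
        "tree_size": sth["tree_size"],        # total certificates logged so far
        "timestamp_ms": sth["timestamp"],     # when this tree head was signed
        "root_hash": sth["sha256_root_hash"], # Merkle root over all entries
    }
```

A monitor just records `tree_size` on each poll and fetches the new entries whenever it grows.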

If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.

Rusty Russell: Bitcoin Core CPU Usage With Larger Blocks

Tue, 2015-07-07 09:28

Since I was creating large blocks (41662 transactions), I added a little code to time how long they take once received (on my laptop, which is only an i3).

The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear.  (A 17MB block took 171 milliseconds).

Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block.  At least it’s scaling linearly, but it’s just slow.
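The linearity claim is easy to check by normalising the reported times to a per-megabyte rate (using midpoints of the quoted ranges):

```python
def ms_per_mb(total_ms, size_mb):
    """Normalise a validation time to milliseconds per megabyte."""
    return total_ms / size_mb

# CheckBlock timings quoted above (size in MB -> milliseconds)
checkblock = {1: 10.0, 8: 79.5, 17: 171.0}
rates = {mb: ms_per_mb(ms, mb) for mb, ms in checkblock.items()}
# roughly 10 ms/MB across the board, i.e. linear

# ActivateBestChain, midpoints of the quoted ranges
activate = {1: 1950.0, 8: 15500.0}
activate_rates = {mb: ms_per_mb(ms, mb) for mb, ms in activate.items()}
# ~1.9 seconds/MB at both sizes -- also linear, just ~200x slower
```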

So, 16 Seconds Per 8MB Block?

I did some digging.  Just invalidating and revalidating the 8MB block only took 1 second, so something about receiving a fresh block makes it worse. I spent a day or so wrestling with benchmarking[1]…

Indeed, ConnectTip does the actual script evaluation: CheckBlock() only does a cursory examination of each transaction.  I’m guessing bitcoin core is not smart enough to parallelize a chain of transactions like mine, hence the 2 seconds per MB.  On normal transaction patterns even my laptop should be about 4 times faster than that (but I haven’t actually tested it yet!).

So, 4 Seconds Per 8MB Block?

But things are going to get better: I hacked in the currently-disabled libsecp256k1, and the time for the 8MB ConnectTip dropped from 18.6 seconds to 6.5 seconds.

So, 1.6 Seconds Per 8MB Block?

I re-enabled optimization after my benchmarking, and the result was 4.4 seconds; that’s with libsecp256k1, for an 8MB block.

Let’s Say 1.1 Seconds for an 8MB Block

This is with some assumptions about parallelism; and remember this is on my laptop which has a fairly low-end CPU.  While you may not be able to run a competitive mining operation on a Raspberry Pi, you can pretty much ignore normal verification times in the blocksize debate.


[1] I turned on -debug=bench, which produced impenetrable and seemingly useless results in the log.

So I added a print with a sleep, so I could run perf.  Then I disabled optimization, so I’d get understandable backtraces with perf.  Then I rebuilt perf, which is part of the kernel source package, because Ubuntu’s perf doesn’t demangle C++ symbols. (Are we having fun yet?)  I even hacked up a small program to help run perf on just that part of bitcoind.   Finally, after perf failed me (it doesn’t show 100% CPU, no idea why; I’d expect to see main in there somewhere…) I added stderr prints and ran strace on the thing to get timings.

Sridhar Dhanapalan: Twitter posts: 2015-06-29 to 2015-07-05

Mon, 2015-07-06 00:27

Donna Benjamin: CCR at OSCON

Sun, 2015-07-05 12:27

I've given a "Constructive Conflict Resolution" talk twice now. First at DrupalCon Amsterdam, and again at DrupalCon Los Angeles. It's something I've been thinking about since joining the Drupal community working group a couple of years ago. I'm giving the talk again at OSCON in a couple of weeks. But this time, it will be different. Very different. Here's why.

After seeing tweets about Gina Likins’ keynote at ApacheCon earlier this year, I reached out to her to ask if she'd be willing to collaborate with me on conflict resolution in open source, and ended up inviting her to co-present with me at OSCON. We've been working together over the past couple of weeks. It's been a joy, and a learning experience! I'm really excited about where the talk is heading now. If you're going to be at OSCON, please come along. If you're interested, please follow our tweets tagged #osconCCR.

Jen Krieger from opensource.com interviewed Gina and me about our talk - here's the article: Teaching open source communities about conflict resolution

In the meantime, do you have stories of conflict in Open Source Communities to share?

  • How were they resolved?
  • Were they intractable?
  • Do the wounds still fester?
  • Was positive change an end result?
  • Do you have resources for dealing with conflict?

Tweet your thoughts to me @kattekrab

Here's the slides

Binh Nguyen: Python Decompilation, Max4Live Programming, Ableton Push Colour Calibration, Automated DJ'ing and More

Sat, 2015-07-04 04:46
I was recently discussing with someone how Ableton programming/scripting works. This was particularly within the context of the Ableton Push device and possible hacking of other devices to allow for more sophisticated functionality. Apparently, many of the core scripts use Python. They need to be decompiled to allow you to have a proper look at them though. Obviously, some of the scripts are non-trivial and will require a sufficient understanding of both music and programming to be useful.

A decompilation of all files in the following directory,

C:\ProgramData\Ableton\Live9Suite\Resources\MIDI Remote Scripts\

is available here. The reason why I've done it is because others who have previously done it have removed it from their websites.

The decompilation was achieved using two small scripts I created (available here), which use uncompyle2 at their core. Since the current uncompyle2 code contains an error that prevents a successful RPM build, I had to make a small modification.

For those who want to know, uncompyle2 currently only works with Python 2.7. To get it running in a Debian-based environment I had to change a symlink so that /usr/bin/python -> python2.7 as opposed to /usr/bin/python -> python2.6
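The two helper scripts essentially just walk the MIDI Remote Scripts tree and feed every compiled file to uncompyle2. A rough sketch of that loop (the helper names are mine, and invoking uncompyle2 as a command-line tool is an assumption; check how your install exposes it):

```python
import os
import subprocess

def find_pyc_files(root):
    """Collect every compiled Python file under root."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".pyc"):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)

def decompile_all(root):
    """Run uncompyle2 over each .pyc, writing the source alongside it."""
    for pyc in find_pyc_files(root):
        out = pyc[:-len(".pyc")] + ".py"
        with open(out, "w") as fh:
            subprocess.call(["uncompyle2", pyc], stdout=fh)
```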

To get the RPM build working I had to copy README.rst to README. Running 'python setup.py bdist_rpm' would give me an RPM package. Running 'alien' allows conversion of the RPM to a DEB package for easy installation on a Debian-based platform.

Successful RPM and DEB packages are available from my website,

One ZIP archive contains the updated code plus the RPM and DEB packages; the other contains the decompiled code and the scripts to automate decompilation of the Ableton code.

For those who are interested, Max4Live programming looks rather interesting for building devices and effects. It also looks like a perfect choice for those who may be on a limited budget and looking to extend Ableton's capabilities.

There have been some grumbles regarding Ableton Push quality control with regards to inconsistent colouring of LEDs. (Novation has sort of had similar problems with their Launchpad series, but it hasn't been as obvious because most current models have only relied on a limited set of colours. Note to others: this issue isn't actually covered by warranty either, and it's a difficult problem to fix from a manufacturing perspective. Hence the need for this particular solution.) There was a small application that was created but wasn't publicly released. It's called '' and basically allows for calibration of white on the device by altering the internal colour balance of the primary colours. It's available on some file-sharing websites. You'll require firmware version 1.7 for it to run.

Someone recently asked me about automated DJ options. I've seen a few but they seem to be becoming increasingly sophisticated.

How To DJ - Phil K (Intermediate Level)

Apparently, some of my ideas and perspectives regarding the modern world and capitalism are similar to that of Thomas Piketty. However, the way in which we would set about rebalancing global economics to ensure a more fair and just global economic system for all is somewhat different. More on this in time...

Some options for purchasing used music equipment locally.

In case you've ever wanted to download videos from various websites, there are quite a few options out there.

If you've had minor scratches on your optical discs you know that they can be extraordinarily frustrating. There are quite a few solutions out there for it though.

If you ever have to use automated imaging/partitioning software sometimes things don't turn out perfectly. Hidden partitions appear when they shouldn't wreaking havoc with links throughout your system. Changing the partition type is the solution though the actual 'type/code/number' may vary depending on the circumstances.

Options for locking down a device in case it is lost or stolen are increasingly popular nowadays even in consumer class devices. It's interesting how far, some companies are willing to take this and what their implementation is like.

Help evaluate, test, and design Windows 10.

David Rowe: WTF Internal Combustion?

Fri, 2015-07-03 14:30

At the moment I’m teaching my son to drive in my Electric Car. Like my daughter before him, it’s his first driving experience. Recently, he has started to drive his grandfather’s pollution generator, which has a manual transmission. So I was trying to explain why the clutch is needed, and it occurred to me just how stupid internal combustion engines are.

Dad: So if you dump the clutch too early the engine stops.

Son: Why?

Dad: Well, a petrol engine needs a certain amount of energy to keep it running, for like compression for the next cycle. If you put too big a load on the engine, it doesn’t have enough power to move the car and keep the engine running.

Dad: Oh yeah and that involves a complex clutch that can be burnt out if you don’t use it right. Or an automatic transmission that requires a complex cooling system and means you use even more (irreplaceable) fossil fuel as it’s less efficient.

Dad: Oh, and petrol motors only work well in a very narrow range of RPM so we need complex gearboxes.

Dad thinks to himself: WTF internal combustion?

Electric motors aren’t like that. Mine works better at 0 RPM (more torque), not worse. When the car stops my electric motor stops. It’s got one moving part and one gear ratio. Why on earth would you keep using irreplaceable fossil fuels when stopped at the traffic lights? It just doesn’t make sense.

The reason of course is energy density. We need to store a couple of hundred km worth of energy in a reasonable amount of weight. Petrol has about 44 MJ/kg. Let’s see, one of my Lithium cells weighs 3.3kg, and is rated at 100AH at 3.2V. So that’s (100AH)(3600 seconds/H)(3.2V)/(3.3kg) ≈ 0.35MJ/kg, or about 125 times worse than petrol. However that’s not the whole story: an EV is about 85% efficient in converting that energy into movement while a dinosaur juice combuster is only about 15% efficient.
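That back-of-envelope sum, written out using the cell figures quoted in the post:

```python
# LiFePO4 cell from the post: 100 Ah at 3.2 V nominal, 3.3 kg
cell_joules = 100 * 3600 * 3.2            # Ah -> coulombs, times volts: 1.152 MJ
cell_mj_per_kg = cell_joules / 1e6 / 3.3  # ~0.35 MJ/kg

petrol_mj_per_kg = 44.0
raw_ratio = petrol_mj_per_kg / cell_mj_per_kg  # ~126x in petrol's favour

# Fold in drivetrain efficiency: EV ~85%, internal combustion ~15%
useful_ratio = (petrol_mj_per_kg * 0.15) / (cell_mj_per_kg * 0.85)  # ~22x
```

So once efficiency is accounted for, the real gap in usable energy per kilogram is closer to twenty-fold than a hundred-fold.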

Anyhoo it’s now possible to make EVs with 500 km range (hello Tesla) so energy density has been nailed. The rest is a business problem, like establishing a market for smart phones. We’re quite good at solving business problems, as someone tends to get rich.

I mean, if we can make billions of internal combustion engines with 1000′s of moving parts, cooling systems, gearboxes, anti-pollution, fuel injection, engine management, controlled detonation of an explosive (they also make napalm out of petrol) and countless other ancillary systems I am sure human kind can make a usable battery!

Internal combustion is just a bad hack.

History is going to judge us as very stupid. We are chewing through every last drop of fossil fuel to keep driving to and from homes in the suburbs that we can’t afford, to buy stuff we don’t need, making plastic for gadgets we throw away, and flying 1000′s of km to exotic locations for holidays, and overheating the planet using our grandchildren’s legacy of hydrocarbons that took 75 million years to form.

Oh that’s right. It’s for the economy.

Rusty Russell: Wrapper for running perf on part of a program.

Fri, 2015-07-03 14:28

Linux’s perf competes with early git for the title of least-friendly Linux tool.  Because it’s tied to kernel versions, and the interfaces change fairly randomly, you can never figure out how to use the version you need to use (hint: always use -g).

But when it works, it’s very useful.  Recently I wanted to figure out where bitcoind was spending its time processing a block; because I’m a cool kid, I didn’t use gprof, I used perf.  The problem is that I only want information on that part of bitcoind.  To start with, I put a sleep(30) and a big printf in the source, but that got old fast.

Thus, I wrote “perfme.c“.  Compile it (requires some trivial CCAN headers) and link perfme-start and perfme-stop to the binary.  By default it runs/stops perf record on its parent, but an optional pid arg can be used for other things (eg. if your program is calling it via system(), the shell will be the parent).
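The idea behind perfme-start/perfme-stop can be sketched in a few lines; this is a rough Python analogue, not the actual C implementation, and the function names are mine:

```python
import os
import signal
import subprocess

def perf_record_cmd(pid):
    """Build the perf invocation; -g for call graphs, as recommended above."""
    return ["perf", "record", "-g", "-p", str(pid)]

def perfme_start(pid=None):
    """Start 'perf record' against a target process (default: our parent)."""
    target = pid if pid is not None else os.getppid()
    return subprocess.Popen(perf_record_cmd(target))

def perfme_stop(perf_proc):
    """Stop recording; perf writes out perf.data when interrupted."""
    perf_proc.send_signal(signal.SIGINT)
    perf_proc.wait()
```

Called at the start and end of the code region of interest, this confines the profile to just that span of the program’s life, instead of the whole run.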

Linux Users of Victoria (LUV) Announce: LUV Main July 2015 Meeting: Ansible / BTRFS / Educating People to become Linux Users

Thu, 2015-07-02 20:29
Start: Jul 7 2015 18:30 End: Jul 7 2015 20:30 Location: 

200 Victoria St. Carlton VIC 3053



• Andrew Pam, An introduction to Ansible

• Russell Coker, BTRFS update

• Lev Lafayette, Educating People to become Linux Users: Some Key Insights from Adult Education

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Donna Benjamin: Certification: Necessary Evil?

Thu, 2015-07-02 15:27

I wrote this as a comment in response to Dries' post about the Acquia certification program - I thought I'd share it here too. I've commented there before.

I've also been conflicted about certifications. I still am. And this is because I fully appreciate the pros and cons. The more I've followed the issue, the more conflicted I've become about it.

My current stand, is this. Certifications are a necessary evil. Let me say a little on why that is.

I know many in the Drupal community are not in favour of certification, mostly because it can't possibly adequately validate their experience.

It also feels like an insult to be expected to submit to external assessment after years of service contributing to the code-base, and to the broader landscape of documentation, training, and professional service delivery.

Those in the know, know how to evaluate a fellow Drupalist. We know what to look for, and more importantly where to look. We know how to decode the secret signs. We can mutter the right incantations. We can ask people smart questions that uncover their deeper knowledge, and reveal their relevant experience.

That's our massive head start. Or privilege. 

Drupal is now a mature platform for web and digital communications. The new challenge that comes with that maturity, is that non-Drupalists are using Drupal. And non specialists are tasked with ensuring sites are built by competent people. These people don't have time to learn what we know. The best way we can help them, is to support some form of certification.

But there's a flip side. We've all laughed at the learning curve cartoon about Drupal. Because it's true. It is hard. And many people don't know where to start. Whilst a certification isn't going to solve this completely, it will help to solve it, because it begins to codify the knowledge many of us take for granted.

Once that knowledge is codified, it can be studied. Formally in classes, or informally through self-directed exploration and discovery.

It's a starting point.

I empathise with the nay-sayers. I really do. I feel it too. But on balance, I think we have to do this. But even more, I hope we can embrace it with more enthusiasm.

I really wish the Drupal Association had the resources to run and champion the certification system, but the truth is, as Dries outlines above, it's a very time-consuming and expensive proposition to do this work.

So, Acquia - you have my deep, albeit somewhat reluctant, gratitude!


Thanks Dries - great post.



(Drupal Association board member)

James Purser: Tell your MP you support Same Sex Marriage

Thu, 2015-07-02 11:30

If you support the right for two people to get married regardless of gender, then please respectfully and politely contact your local federal member and let them know.

Those who oppose this have already started up their very effective networks, and we will need to work very hard to counter them.

If you're not sure who your local MP or Senators are, I recommend you use to find out. Just punch in your post code and it will let you know, as well as give you a run down of their voting history.

Do it, DO IT NOW!

This message brought to you by the realisation that I'm going to be rainbow haired soon.

Blog Categories: Politics, same sex marriage