
Russell Coker: Forking Mon and DKIM with Mailing Lists

Planet Linux Australia - 15 hours 37 min ago

I have forked the “Mon” network/server monitoring system. Here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and use other best practices.

The first release of etbe-mon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not a common practice for someone to fork upstream of a package soon after becoming a co-maintainer of the Debian package. But I believe that this is in the best interests of the users. I presume that there are other collections of patches out there and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been kept separate due to a lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. This explains how to set up Mailman for correct operation with DKIM and also why that seems to be the only viable option.

Related posts:

  1. DKIM and Mailing Lists Currently we have a problem with the Debian list server...
  2. An Update on DKIM Signing and SE Linux Policy In my previous post about DKIM [1] I forgot to...
  3. Installing DKIM and Postfix in Debian I have just installed Domain Key Identified Mail (DKIM) [1]...

OpenSTEM: This Week in HASS – term 3, week 3

Planet Linux Australia - Mon, 2017-07-24 09:04

This week our youngest students are playing games from different places around the world, in the past. Slightly older students are completing the Timeline Activity. Students in Years 4, 5 and 6 are starting to sink their teeth into their research project for the term, using the Scientific Process.

Foundation/Prep/Kindy to Year 3

This week students in stand-alone Foundation/Prep/Kindy classes (Unit F.3) and those integrated with Year 1 (Unit F-1.3) are examining games from the past. The teacher can choose to match these to the stories from Week 1 of the unit, as games are listed matching each of the places and time periods included in those stories. However, some games are more practical to play than others, and some require running around, so the teacher may wish to choose games which suit the circumstances of each class. Teachers can discuss how different places have different types of games and why these games might be chosen in those places (e.g. dragons in China and lions in Africa).

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) have this week to finish off the Timeline Activity. The Timeline Activity requires some investment of time, which can be done as two half-hour sessions or one longer session. Some flexible timing is built into the unit for teachers who want to match this activity to the number line in Maths, or to revise or cover the number line in more depth as a complement to this activity.

Years 3 to 6

Arthur Phillip

Last week students in Years 3 to 6 chose a research topic, related to a theme in Australian History. Different themes are studied by different year levels. Students in Year 3 (Unit 3.7) study a topic in the history of their capital city or local community. Students in Year 4 (Unit 4.3) study a topic from Australian history in the precolonial or early colonial periods. Students in Year 5 (Unit 5.3) study a topic from Australian colonial history and students in Year 6 (Unit 6.3) study a topic related to Federation or 20th century Australian history. These research topics are undertaken as a Scientific Investigation. This week the focus is on defining a Research Question and undertaking Background Research. Student workbooks will guide students through the process of choosing a research question within their chosen topic, and then how to start the Background Research. These sections will be included in the Scientific Report each student produces at the end of this unit. OpenSTEM resources available with each unit provide a starting point for this Background Research.


Gabriel Noronha: test post

Planet Linux Australia - Sun, 2017-07-23 21:03

test posting from

01 – [Jul-24 13:35 API] Volley error on – exception: null
02 – [Jul-24 13:35 API] StackTrace:

03 – [Jul-24 13:35 API] Dispatching action: PostAction-PUSHED_POST
04 – [Jul-24 13:35 POSTS] Post upload failed. GENERIC_ERROR: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
05 – [Jul-24 13:35 POSTS] updateNotificationError: Error while uploading the post: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
06 – [Jul-24 13:35 EDITOR] Focus out callback received

OpenSTEM: New Dates for Earliest Archaeological Site in Aus!

Planet Linux Australia - Thu, 2017-07-20 11:04
Thylacine or Tasmanian Tiger.

This morning news was released of a date of 65,000 years for archaeological material at the site of Madjedbebe rock shelter in the Jabiluka mineral lease area, surrounded by Kakadu National Park. The site is on the land of the Mirarr people, who have partnered with archaeologists from the University of Queensland for this investigation. It has also produced evidence of the earliest use of ground-stone tool technology, the oldest seed-grinding tools in Australia and stone points, which may have been used as spears. Most fascinating of all, there is the jawbone of a Tasmanian Tiger or Thylacine (which was found across continental Australia during the Ice Age) coated in a red pigment, thought to be ochre, a reddish rock. There is much evidence of the use of ochre at the site, with chunks and ground ochre found throughout the site. Ochre is often used for rock art and the area has much beautiful rock art, so we can deduce that these rock art traditions are as old as the occupation of people in Australia, i.e. at least 65,000 years old! The decoration of the jawbone hints at a complex realm of abstract thought, and possibly belief, amongst our distant ancestors – the direct forebears of modern Aboriginal people.

Kakadu view, NT Tourism.

Placing the finds from Madjedbebe rock shelter within the larger context, the dating, undertaken by Professor Zenobia Jacobs from the University of Wollongong, shows that people were living at the site during the Ice Age, a time when many, now-extinct, giant animals roamed Australia; and the tiny Homo floresiensis was living in Indonesia. These finds show that the ancestors of Aboriginal people came to Australia with much of the toolkit of their rich, complex lives already in place. This technology, extremely advanced for the time, allowed them to populate the entire continent of Australia, first managing to survive in the harsh Ice Age environment and then also managing to adapt to the enormous changes in sea level, climate and vegetation at the end of the Ice Age.

The team of archaeologists working at Madjedbebe rock shelter, in conjunction with Mirarr traditional owners, are finding all sorts of wonderful archaeological material, from which they can deduce much rich, detailed information about the lives of the earliest people in Australia. We look forward to hearing more from them in the future. Students who are interested, especially those in Years 4, 5 and 6, can read more about these sites and the animals and lives of people in Ice Age Australia in our resources People Reach Australia, Early Australian Sites, Ice Age Animals and the Last Ice Age, which are covered in Units 4.1, 5.1 and 6.1.

sthbrx - a POWER technical blog: XDP on Power

Planet Linux Australia - Mon, 2017-07-17 10:08

This post is a bit of a break from the standard IBM fare of this blog, as I now work for Canonical. But I have a soft spot for Power from my time at IBM - and Canonical officially supports 64-bit, little-endian Power - so when I get a spare moment I try to make sure that cool, officially-supported technologies work on Power before we end up with a customer emergency! So, without further ado, this is the story of XDP on Power.


eXpress Data Path (XDP) is a cool Linux technology to allow really fast processing of network packets.

Normally in Linux, a packet is received by the network card, an SKB (socket buffer) is allocated, and the packet is passed up through the networking stack.

This introduces an inescapable latency penalty: we have to allocate some memory and copy stuff around. XDP allows some network cards and drivers to process packets early - even before the allocation of the SKB. This is much faster, and so has applications in DDOS mitigation and other high-speed networking use-cases. The IOVisor project has much more information if you want to learn more.


XDP processing is done by an eBPF program. eBPF - the extended Berkeley Packet Filter - is an in-kernel virtual machine with a limited set of instructions. The kernel can statically validate eBPF programs to ensure that they terminate and are memory safe. From this it follows that the programs cannot be Turing-complete: they do not have backward branches, so they cannot do fancy things like loops. Nonetheless, they're surprisingly powerful for packet processing and tracing. eBPF programs are translated into efficient machine code using in-kernel JIT compilers on many platforms, and interpreted on platforms that do not have a JIT. (Yes, there are multiple JIT implementations in the kernel. I find this a terrifying thought.)

Rather than requiring people to write raw eBPF programs, you can write them in a somewhat-restricted subset of C, and use Clang's eBPF target to translate them. This is super handy, as it gives you access to the kernel headers - which define a number of useful data structures like headers for various network protocols.

Trying it

There are a few really interesting projects already up and running that allow you to explore XDP without learning the innards of both eBPF and the kernel networking stack. I explored the samples in the bcc (BPF Compiler Collection) project and also the samples from the netoptimizer/prototype-kernel repository.

The easiest way to get started with these is with a virtual machine, as recent virtio network drivers support XDP. If you are using Ubuntu, you can use the uvt-kvm tooling to trivially set up a VM running Ubuntu Zesty on your local machine.

Once your VM is installed, you need to shut it down and edit the virsh XML.

You need 2 vCPUs (or more) and a virtio+vhost network card. You also need to edit the 'interface' section and add the following snippet (with thanks to the xdp-newbies list):

<driver name='vhost' queues='4'>
  <host tso4='off' tso6='off' ecn='off' ufo='off'/>
  <guest tso4='off' tso6='off' ecn='off' ufo='off'/>
</driver>

(If you have more than 2 vCPUs, set the queues parameter to 2x the number of vCPUs.)

Then, install a modern clang (we've had issues with 3.8 - I recommend v4+), and the usual build tools.

I recommend testing with the prototype-kernel tools - the DDOS prevention tool is a good demo. Then - on x86 - you just follow their instructions. I'm not going to repeat that here.


What happens when you try this on Power? Regular readers of my posts will know to expect some minor hitches.

XDP does not disappoint.

Firstly, the prototype-kernel repository hard codes x86 as the architecture for kernel headers. You need to change it for powerpc.

Then, once you get the stuff compiled, and try to run it on a current-at-time-of-writing Zesty kernel, you'll hit a massive debug splat ending in:

32: (61) r1 = *(u32 *)(r8 +12)
misaligned packet access off 0+18+12 size 4
load_bpf_file: Permission denied

It turns out this is because in Ubuntu’s Zesty kernel, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set on ppc64el. Because of that, the eBPF verifier will check that all loads are aligned - and this load (part of checking some packet header) is not, so the verifier rejects the program. Unaligned access is not enabled because the Zesty kernel is being compiled for POWER7_CPU instead of POWER8_CPU, and we don't have efficient unaligned access on POWER7.

As it turns out, IBM never released any officially supported POWER7 LE systems - LE was only ever supported on POWER8. So, I filed a bug and sent a patch to build Zesty kernels for POWER8 instead, and that has been accepted and will be part of the next stable update due real soon now.

Sure enough, if you install a kernel with that config change, you can verify the XDP program and load it into the kernel!

If you have real powerpc hardware, that's enough to use XDP on Power! Thanks to Michael Ellerman, maintainer extraordinaire, for verifying this for me.

If - like me - you don't have ready access to Power hardware, you're stuffed. You can't use qemu in TCG mode: to use XDP with a VM, you need multi-queue support, which only exists in the vhost driver, which is only available for KVM guests. Maybe IBM should release a developer workstation. (Hint, hint!)

Overall, I was pleasantly surprised by how easy things were for people with real ppc hardware - it's encouraging to see something not require kernel changes!

eBPF and XDP are definitely growing technologies - as Brendan Gregg notes, now is a good time to learn them! (And those on Power have no excuse either!)

OpenSTEM: This Week in HASS – term 3, week 2

Planet Linux Australia - Mon, 2017-07-17 09:04

This week older students start their research projects for the term, whilst younger students are doing the Timeline Activity. Our youngest students are thinking about the places where people live and can join together with older students as buddies to Build A Humpy together.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), or those in classes integrated with Year 1 (Unit F-1.3), are considering different types of homes this week. They will think about where the people in the stories from last week live and compare that to their own houses. They can consider how homes were different in the past and how our homes help us meet our basic needs. There is an option this week for these students to buddy with older students, especially those in Years 4, 5 and 6, to undertake the Building a Humpy activity together. In this activity students collect materials to build a replica Aboriginal humpy or shelter outside. Many teachers find that both senior primary and the younger students get a lot of benefit from helping each other with activities, enriching the learning experience. The Building a Humpy activity is one where the older students can assist the younger students with the physical requirements of building a humpy, whilst each group considers aspects of the activity relevant to their own studies and compares past ways of life to their own.

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking the Timeline Activity this week. This activity is designed to complement the concept of the number line from the Mathematics curriculum, whilst helping students to learn to visualise the abstract concepts of the past and different lengths of time between historical events and the present. In this activity students walk out a timeline, preferably across a large open space such as the school Oval, whilst attaching pieces of paper at intervals to a string. The pieces of paper refer to specific events in history (starting with their own birth years) and cover a wide range of events from the material covered this year. Teachers can choose from events in Australian and world history, covering 100s, 1000s and even millions of years, back to the dinosaurs. Teachers can also add their own events. Thus the details of the activity are able to be altered in different years to maintain student interest. Depending on the class, the issue of scale can be addressed in various ways. By physically moving their bodies, students will start to understand the lengths of time involved in examinations of History. This activity is repeated in increasing detail in higher years, to make sure that the fundamental concepts are absorbed by students over time.

Years 3 to 6

Students in Years 3 to 6 are starting their term research projects on Australian history this week. Students in Year 3 (Unit 3.7) concentrate on topics from the history of their capital city or local community. Suggested topics are included for Brisbane, Melbourne, Sydney, Adelaide, Darwin, Hobart, Perth and Canberra. Teachers can substitute their own topics for a local community study. Students will undertake a Scientific Investigation into an aspect of their chosen research project and will produce a Scientific Report. It is recommended that teachers supplement the resources provided with old photographs, books, newspapers etc, many of which can be accessed online, to provide the students with extra material for their investigation.

First Fleet

Students in Year 4 (Unit 4.3) will be focusing on Australia in the period up to and including the arrival of the First Fleet and the early colonial period. OpenSTEM’s Understanding Our World® program encompasses the whole Australian curriculum for HASS and thus does not simply rely on “flogging the First Fleet to death”! There are 7 research themes for Year 4 students: “Australia Before 1788”; “The First Fleet”; “Convicts and Settlers”; “Aboriginal People in Colonial Australia”; “Australia and Other Nations in the 17th, 18th and 19th centuries”; “Colonial Children”; “Colonial Animals and their Impact”. These themes are allocated to groups of students and each student chooses an individual research topic within their group’s themes. Suggested topics are given in the Teacher Handbook, as well as suggested resources.

19th century china dolls

Year 5 (Unit 5.3) students focus on the colonial period in Australia. There are 9 research themes for Year 5 students. These are: “The First Fleet”; “Convicts and Settlers”; “The 6 Colonies”; “Aboriginal People in Colonial Australia”; “Resistance to Colonial Authorities”; “Sugar in Queensland”; “Colonial Children”; “Colonial Explorers” and “Colonial Animals and their Impact”. As well as themes unique to Year 5, some overlap is provided to facilitate teaching in multi-year classes. The range of themes also allows for the possibility of teachers choosing different themes in different years. Once again individual topics and resources are suggested in the Teacher Handbook.

Year 6 (Unit 6.3) students will examine research themes around Federation and the early 20th century. There are 8 research themes for Year 6 students: “Federation and Sport”; “Women’s Suffrage”; “Aboriginal Rights in Australia”; “Henry Parkes and Federation”; “Edmund Barton and Federation”; “Federation and the Boer War”; “Samuel Griffith and the Constitution”; “Children in Australian History”. Individual research topics and resources are suggested in the Teacher Handbook. It is expected that students in Year 6 will be able to research largely independently, with weekly guidance from their teacher. OpenSTEM’s Understanding Our World® program is aimed at developing research skills in students progressively, especially over the upper primary years. If the program is followed throughout the primary years, students are well prepared for high school by the end of Year 6, having practised individual research skills for several years.


Anthony Towns: Bitcoin: ASICBoost – Plausible or not?

Planet Linux Australia - Thu, 2017-07-13 21:00

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of Spondoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it is disabled by default, a switch to hide it is obviously possible, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — which Bitmain confirms having done on testnet. Even Guy Corem’s claim that they only amount to $2,000,000 in savings per year rather than $100,000,000 seems like a reason to expect ASICBoost to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmarks results they got when testing on testnet, and why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet, however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them give a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there is obviously pretty important.
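As a rough sanity check on those figures, two of the methods above can be sketched in a few lines of Python (all inputs are figures quoted in the text; the function names are purely illustrative):

```python
# Sketch of two of the estimation methods above, using the figures from the text.

BLOCKS_PER_YEAR = 365 * 24 * 6            # one block roughly every ten minutes
BTC_PER_BLOCK = 12.5
PRICE_APRIL = 1100                        # USD price used in the April estimates
ANNUAL_REWARD = BLOCKS_PER_YEAR * BTC_PER_BLOCK * PRICE_APRIL  # ~ $722.7M

def power_cost_saving(ph_per_s, j_per_gh, usd_per_kwh, advantage):
    """Guy Corem's method: ASICBoost saves a fraction of the power bill."""
    watts = ph_per_s * 1e6 * j_per_gh     # 1 PH/s = 1e6 GH/s
    annual_cost = watts / 1000 * usd_per_kwh * 24 * 365
    return annual_cost * advantage

def revenue_loss(share, advantage):
    """Third method: keep power consumption constant, turn ASICBoost off,
    let difficulty retarget, and see how much hashpower share you lose."""
    mine_without = share / (1 + advantage)   # e.g. 500 PH/s -> ~384 PH/s at 30%
    others = 1 - share
    new_share = mine_without / (mine_without + others)
    return (share - new_share) * ANNUAL_REWARD

print(power_cost_saving(500, 0.1, 0.03, 0.15))   # just under $2M per annum
print(revenue_loss(0.132, 0.15))                  # just under $11M per annum
print(revenue_loss(0.5, 0.30))                    # about $47M per annum
```

These reproduce the low-end power-cost saving and both revenue-loss scenarios quoted above; the price-doubled figures follow by scaling ANNUAL_REWARD with the bitcoin price.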

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so if you can find a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2, then you can do steps (1,2) four times, once for each different Chunk1, then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, getting four times the work while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the savings come to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1s you have, and c and e are the relative weights of the Compress and Expand steps. If you take the weight of Expand and Compress steps as about equal, this simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
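That savings fraction is simple enough to write out directly (a sketch; `c_over_e` is the relative weight of a Compress step versus an Expand step, with equal weights giving the 25%*(n-1)/n simplification):

```python
# Fraction of hashing work saved by ASICBoost, per the derivation above:
# with n distinct Chunk1s sharing a common Chunk2, step (3) is done once
# instead of n times, saving (n-1)/n Expand steps out of a per-attempt
# cost of two Expands plus two Compresses, i.e. (n-1)/n / 2 / (1 + c/e).

def asicboost_saving(n, c_over_e=1.0):
    return (n - 1) / n / 2 / (1 + c_over_e)

print(asicboost_saving(4))   # equal Expand/Compress weights, n=4: 0.1875
print(asicboost_saving(2))   # n=2: 0.125
```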

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower-end estimate has risen from $2M to $3.3M per year, and since Bitmain has acknowledged that AntMiners already support ASICBoost in hardware, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).


It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, suppose your costs are determined by an electricity (and cooling, etc) price E, in dollars per kWh, and performance r, in joules per gigahash. Given your hashrate h in terahash/second, your power usage in watts is (h*1e3*r); you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g: on average you receive (p*R*h/(h+g)) every 600 seconds, where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify), which gives a result of 6gpR/Er = (g+h)^2.
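The optimisation step can be checked numerically (a sketch under the formulas above; the value chosen for g is purely illustrative, not a figure from the post):

```python
import math

# Profit per 600-second block interval, in dollars: revenue p*R*h/(h+g)
# minus electricity cost h*r*E/6, per the derivation above.
def profit(h, g, p=2400.0, R=12.5, r=0.1, E=0.03):
    return h * (p * R / (h + g) - r * E / 6)

# Setting d(profit)/dh = 0 gives 6*g*p*R/(E*r) = (g+h)**2, so:
def optimal_h(g, p=2400.0, R=12.5, r=0.1, E=0.03):
    return math.sqrt(6 * g * p * R / (E * r)) - g

g = 6e5                      # external hashrate in TH/s (illustrative)
h_star = optimal_h(g)
print(h_star)                # ~5.4e6 TH/s for this choice of g
# the stationary point really is a maximum: profit falls off either side
assert profit(h_star, g) > profit(h_star * 0.99, g)
assert profit(h_star, g) > profit(h_star * 1.01, g)
```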

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t and divide both sides by it, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits; alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 — so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
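Those last two figures can be reproduced directly (a sketch using only the paragraph's own numbers):

```python
# Profit-maximising share of total hashrate for the dominant manufacturer:
# h/t = 1 - (E*r*t)/(6*p*R), per the rearrangement above.
def optimal_share(E, r=0.1, t=6e6, p=2400.0, R=12.5):
    return 1 - (E * r * t) / (6 * p * R)

print(optimal_share(0.03))   # Guy Corem's cost estimate: ~0.9
print(optimal_share(0.08))   # a more conservative $0.08/kWh: ~0.73
```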

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to expand quickly. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense, though. For one thing, it would come at the cost of not being able to mine with them while they are state of the art and then sell them on to customers once a more efficient model has been developed, which seems like a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete.)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5 means 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if a fall in the inflation subsidy is not compensated by a corresponding increase in fee income, for example), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula: assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
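The threshold works out as follows (a sketch with the paragraph's figures of E=$0.08/kWh and r=0.1 J/GH):

```python
# Minimum total hashrate (in TH/s) above which a hashrate majority is no
# longer the manufacturer's profit-maximising target: t > 3*p*R/(E*r).
def majority_threshold(p, R=12.5, E=0.08, r=0.1):
    return 3 * p * R / (E * r)

t_min = majority_threshold(2400.0)
print(t_min)           # ~11.25M TH/s, almost double the current ~6M TH/s
print(t_min / 2400.0)  # hashrate-to-price ratio: ~4700, near enough to 5000
```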

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.
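At that threshold hashrate, the relationship between a miner's electricity cost and their optimal share becomes very simple (a sketch; `E0` stands for the cheapest available hosting cost, a label I'm introducing here):

```python
# Substituting t = 3*p*R/(E0*r) into h/t = 1 - E*r*t/(6*p*R) gives a miner
# paying E per kWh an optimal share of 1 - E/(2*E0), floored at zero.
def optimal_share_at_threshold(E, E0):
    return max(0.0, 1 - E / (2 * E0))

print(optimal_share_at_threshold(0.08, 0.08))  # the cheapest miner: 0.5
print(optimal_share_at_threshold(0.16, 0.08))  # twice the cost: 0.0 (don't mine)
```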

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)
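Those batch economics reduce to a couple of one-liners (a sketch; the $500 manufacturing and $1200 retail figures are Jimmy Song's estimates as quoted above):

```python
# If a fraction s = cost/retail of each batch is sold to cover the cost of
# manufacturing the whole batch, the manufacturer keeps 1 - s for itself.
def kept_share(cost, retail):
    return 1 - cost / retail

# Hosting cost at which that share is the profit-maximising target,
# inverting h/t = 1 - E*r*t/(6*p*R) for E:
def implied_hosting_cost(share, r=0.1, t=6e6, p=2400.0, R=12.5):
    return (1 - share) * 6 * p * R / (r * t)

s = kept_share(500, 1200)
print(s)                        # ~0.58 of each batch's hashpower
print(implied_hosting_cost(s))  # ~$0.125/kWh
print(kept_share(400, 1200))    # ~0.67 at a $400 manufacturing cost
```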

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hashrate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h; the partial derivative and optimisation are in turn invalid, and the final result is not applicable.


Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower, at about 20%. As I understand it, Bitmain is also known to control at least and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (ViaBTC among them) account for about 22% of hashpower, however, which brings the total to just under 50% — roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like Slush or F2Pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and beyond supplier and customer. However, John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely that ViaBTC, Canoe and the other pools mentioned above have strong ties to Bitmain, and that Bitmain likely has a high level of influence over them — whether due to contracts, business relationships or simply loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; or perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool — which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme, and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.


So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain is using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum.
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Colin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

Planet Linux Australia - Thu, 2017-07-13 13:01

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem, including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.

Francois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

Planet Linux Australia - Wed, 2017-07-12 15:07

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
        name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
        driver: <module-alsa-card.c>
        ...
        ports:
                analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
                        properties:
                analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
                        properties:
                                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb > docked
lsusb > undocked
colordiff -u docked undocked

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked 2017-07-07 19:10:51.875405241 -0700
+++ undocked 2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp.
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash
if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

James Morris: Linux Security Summit 2017 Schedule Published

Planet Linux Australia - Tue, 2017-07-11 23:01

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There’ll also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

OpenSTEM: This Week in HASS – term 3, week 1

Planet Linux Australia - Mon, 2017-07-10 15:04

Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with winter cold and the usual round of flus and sniffles. OpenSTEM’s 3rd term units branch into new areas to provide some fresh material and a new direction for the new semester. This term younger students are studying the lives of children in the past from a narrative context, whilst older students are delving into aspects of Australian history.

Foundation/Prep/Kindy to Year 3

The main resource for our youngest students for Unit F.3 is Children in the Past – a collection of stories of children from a range of different historical situations. This resource contains 6 stories of children from Aboriginal Australia more than 1,000 years ago, Ancient Egypt, Ancient Rome, Ancient China, Aztec Mexico and Zulu Southern Africa several hundred years ago. Teachers can choose one or two stories from this resource to study in depth with the students this term. The range of stories allows teachers to tailor the material to their class and ensure that there is no need to repeat the same stories in consecutive years. Students will compare the lives of children in the stories with their own lives – focusing on different aspects in different weeks of the term. In this first week teachers will read the stories to the class and help them find the places described on the OpenSTEM “Our World” map and/or a globe.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), will also be examining stories from the Children in the Past resource. Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) will also be comparing their own lives with those of children in the past; however, they will use a collection of stories called Living in the Past, which covers the same areas and time periods as Children in the Past, but provides more in-depth information about a broader range of subject areas and includes the story of the young Tom Petrie, growing up in Brisbane in the 1840s. Students in Year 1 will be considering family structures and the differences and similarities between their own families and the families described in the stories. Students in Year 2 are starting to understand the differences which technology makes to peoples’ lives, especially the technology behind different modes of transport. Students in Year 3 retain a focus on local history. In fact, the Understanding Our World® units for Year 3, term 3 are tailored to match the capital city of the state or territory in which the student lives. Currently units are available for Brisbane and Perth, other capital cities are in preparation. Additional resources are available describing the foundation and growth of Brisbane and Perth, with other cities to follow. Teachers may also prefer to focus on the local community in a smaller town and substitute their own resources for those of the capital city.

Years 3 to 6

Older students are focusing on Australian history this term – Year 3 students (Unit 3.7) will be considering the history of their capital city (or local community) within the broader context of Australian history. Students in Year 4 (Unit 4.3) will be examining Australia in the period up to and including the first half of the 19th century. Students in Year 5 (Unit 5.3) examine the colonial period in Australian history; whilst students in Year 6 (Unit 6.3) are investigating Federation and Australia in the 20th century. In this first week of term, students in Years 3 to 6 will be compiling a timeline of Australian history and filling in important events which they already know about or have learnt about in previous units. Students will revisit this timeline in later weeks to add additional information. The main resources for this week are The History of Australia, a broad overview of Australian history from the Ice Age to the 20th century; and the History of Australian Democracy, an overview of the development of the democratic process in Australia.

The rest of the 3rd term will be spent compiling a scientific report on an investigation into an aspect of Australian history. Students in Year 3 will choose a research topic from a list of themes concerning the history of their capital city. Students in Year 4 will choose from themes on Australia before 1788, the First Fleet, experiences of convicts and settlers, including children, as well as the impact of different animals brought to Australia during the colonial period. Students in Year 5 will choose from themes on the Australian colonies and people including explorers, convicts and settlers, massacres and resistance, colonial animals and industries such as sugar in Queensland. Students in Year 6 will choose from themes on Federation, including personalities such as Henry Parkes and Edmund Barton, Sport, Women’s Suffrage, Children, the Boer War and Aboriginal experiences. This research topic will be undertaken as a guided investigation throughout the term.

Anthony Towns: Bitcoin: ASICBoost and segwit2x – Background

Planet Linux Australia - Mon, 2017-07-10 15:00

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100“, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list, but never formally proposed, BIP103). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution later (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group opposes it, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, for example).

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness, (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1“, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, it seems like a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases that have arisen by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version, reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation, reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost, means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees, increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.
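To make the “work” concrete, here is a minimal sketch of the proof-of-work loop that ASICBoost optimises: repeatedly double-SHA256 a candidate block header, varying a nonce, until the hash falls below a difficulty target. This is a toy (no real Bitcoin header serialization, and a deliberately easy target); the point is only to show why anything that changes how those hashes are prepared matters to miners.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Grind nonces until the block hash falls below the target.

    Real miners exhaust the 32-bit nonce and then vary the extranonce
    in the coinbase transaction, rebuilding the transaction tree to get
    fresh headers -- the preparation work the covert ASICBoost approach
    avoids, and which segwit's commitment structure interferes with.
    """
    for nonce in range(max_nonce):
        h = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce, h
    return None

# Toy difficulty: require ~4 leading zero bits (about 16 tries on average).
toy_target = 2 ** 252
result = mine(b"toy block header", toy_target)
```

Real difficulty targets are astronomically smaller, which is why shaving even a few percent off each hash attempt is worth real money at scale.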

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons put only half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. On this line of thinking, the emphasis on lack of communication from core devs and the desire for a hard fork block size increase aren’t the actual goals, so the lack of effort put into resolving them over the past 18 months by the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

Lev Lafayette: One Million Jobs for Spartan

Planet Linux Australia - Sat, 2017-07-08 23:04

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its one-millionth job today, almost exactly a year after launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharynx samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this are why I do HPC.

Congratulations to the rest of the team and to the user community.

read more

Danielle Madeley: Using the Nitrokey HSM with GPG in macOS

Planet Linux Australia - Fri, 2017-07-07 19:01

Getting yourself set up in macOS to sign keys using a Nitrokey HSM with gpg is non-trivial. Allegedly (at least some) Nitrokeys are supported by scdaemon (GnuPG’s stand-in abstraction for cryptographic tokens) but it seems that the version of scdaemon in brew doesn’t have support.

However, there is gnupg-pkcs11-scd, a replacement for scdaemon which uses PKCS #11. Unfortunately it’s a bit of a hassle to set up.

There’s a bunch of things you’ll want to install from brew: opensc, gnupg, gnupg-pkcs11-scd, pinentry-mac, openssl and engine_pkcs11.

brew install opensc gnupg gnupg-pkcs11-scd pinentry-mac \
    openssl engine-pkcs11

gnupg-pkcs11-scd won’t create keys, so if you’ve not made one already, you need to generate yourself a keypair. Which you can do with pkcs11-tool:

pkcs11-tool --module /usr/local/lib/ -l \
    --keypairgen --key-type rsa:2048 \
    --id 10 --label 'Danielle Madeley'

The --id can be any hexadecimal id you want. It’s up to you to avoid collisions.

Then you’ll need to generate and sign a self-signed X.509 certificate for this keypair (you’ll need both the PEM form and the DER form):

/usr/local/opt/openssl/bin/openssl << EOF
engine -t dynamic \
    -pre SO_PATH:/usr/local/lib/engines/ \
    -pre ID:pkcs11 \
    -pre LIST_ADD:1 \
    -pre LOAD \
    -pre MODULE_PATH:/usr/local/lib/
req -engine pkcs11 -new -key 0:10 -keyform engine \
    -out cert.pem -text -x509 -days 3640 -subj '/CN=Danielle Madeley/'
x509 -in cert.pem -out cert.der -outform der
EOF

The flag -key 0:10 identifies the token and key id (see above when you created the key) you’re using. If you want to refer to a different token or key id, you can change these.

And import it back into your HSM:

pkcs11-tool --module /usr/local/lib/ -l \
    --write-object cert.der --type cert \
    --id 10 --label 'Danielle Madeley'

You can then configure gnupg-agent to use gnupg-pkcs11-scd. Edit the file ~/.gnupg/gpg-agent.conf:

scdaemon-program /usr/local/bin/gnupg-pkcs11-scd
pinentry-program /usr/local/bin/pinentry-mac

And the file ~/.gnupg/gnupg-pkcs11-scd.conf:

providers nitrokey
provider-nitrokey-library /usr/local/lib/

gnupg-pkcs11-scd is pretty nifty in that it will throw up a (pin entry) dialog if your token is not available, and is capable of supporting multiple tokens and providers.

Reload gpg-agent:

gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF

Check your new agent is working:

gpg --card-status

Get your key handle (grip), which is the 40-character hex string after the phrase KEY-FRIEDNLY (sic):

gpg-agent --server gpg-connect-agent << EOF
SCD LEARN
EOF

Import this key into gpg as an ‘Existing key’, giving the key grip above:

gpg --expert --full-generate-key

You can now use this key as normal, create sub-keys, etc:

gpg -K
/Users/danni/.gnupg/pubring.kbx
-------------------------------
sec>  rsa2048 2017-07-07 [SCE]
      1172FC7B4B5755750C65F9A544B80C280F80807C
      Card serial no. = 4B43 53233131
uid           [ultimate] Danielle Madeley <>

echo -n "Hello World" | gpg --armor --clearsign --textmode

Side note: the curses-based pinentry doesn’t deal with piping content into stdin, which is why you want pinentry-mac.

You can also import your certificate into gpgsm:

gpgsm --import < ca-certificate
gpgsm --learn-card

And that’s it, now you can sign your git tags with your super-secret private key, or whatever it is you do. Remember that you can’t exfiltrate the secret keys from your HSM in the clear, so if you need a backup you can create a DKEK backup (see the SmartcardHSM docs), or make sure you’ve generated that revocation certificate, or just decided disaster recovery is for dweebs.

Rusty Russell: Broadband Speeds, 2 Years Later

Planet Linux Australia - Thu, 2017-07-06 21:01

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit
  • Average download speed in November 2014 was 22.8Mbit
  • Average download speed in November 2016 was 36.2Mbit
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit
  • Average upload speed in November 2016 was 4.3Mbit

So in the last two years, we’ve seen a 26% annual increase in download speed, and a 22% annual increase in upload, bringing us down from 36/37% to 33% per annum over the 8 years. The divergence of download and upload improvements is concerning (I previously assumed they were the same, but we have to design for the lesser of the two for a peer-to-peer system).
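Those per-annum figures are compound annual growth rates; a quick sketch of the arithmetic, using the OFCOM numbers from the bullet points above:

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

# Download: 22.8Mbit (Nov 2014) -> 36.2Mbit (Nov 2016)
download_2yr = cagr(22.8, 36.2, 2)   # ~26% per annum
# Upload: 2.9Mbit (Nov 2014) -> 4.3Mbit (Nov 2016)
upload_2yr = cagr(2.9, 4.3, 2)       # ~22% per annum
# Full period: 3.6Mbit (Nov 2008) -> 36.2Mbit (Nov 2016)
download_8yr = cagr(3.6, 36.2, 8)    # ~33% per annum
```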

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1-2017 report.

  • Annual global average speed in Q1 2015 – Q1 2016: 23%
  • Annual global average speed in Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: in the last 9 years the US Akamai numbers suggest the US has increased by 19% per annum, the UK by almost 21%.  The gloss seems to be coming off the UK fixed-broadband numbers, but they’re still 22% upload increase for the last two years.  Even Australia and the Philippines have managed almost 21%.

Danielle Madeley: python-pkcs11 with the Nitrokey HSM

Planet Linux Australia - Tue, 2017-07-04 23:01

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

Because the OpenSC PKCS #11 module is a little more lightweight than some of the other vendors’ modules, which often implement mechanisms that are not actually supported by the hardware (e.g. the Opencryptoki TPM module), I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, as the Nitrokey doesn’t implement any of the public key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more of the import/extraction functions for the diverse key formats, including getting very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer is as SubjectPublicKeyInfo from X.509), although I think there might be some operating system specific glitches with encoding some DER structures. I think I need to move from pyasn1 to asn1crypto.

OpenSTEM: The Science of Cats

Planet Linux Australia - Mon, 2017-07-03 09:04

Ah, the comfortable cat! Most people agree that cats are experts at being comfortable and getting the best out of life, with the assistance of their human friends – but how did this come about? Geneticists and historians are continuing to study how cats and people came to live together and how cats came to organise themselves into such a good deal in their relationship with humans. Cats are often allowed liberties that few other animals, even domestic animals, can get away with – they are fed and usually pampered with comfortable beds (including human furniture), are kept warm, cuddled on demand; and, very often, are not even asked to provide anything except affection (on their terms!) in return. Often thought of as solitary animals, cats’ social behaviour is actually a lot more complex and recently further insights have been gained about how cats and humans came to enjoy the relationship that they have today.

Many people know that the Ancient Egyptians came to certain agreements with cats – cats are depicted in some of their art and mummified cats have been found. It is believed that cats may have been worshipped as representatives of the Cat Goddess, Bastet – interestingly enough, a goddess of war! Statues of cats from Ancient Egypt emphasise their regal bearing and tendency towards supercilious expressions. Cats were present in Egyptian art by 1950 B.C. and it was long thought that Egyptians were the first to domesticate the cat. However, in 2004 a cat was found buried with a human on the island of Cyprus in the Mediterranean, dated to 9,500 years ago, making it the earliest known cat associated with humans. This date is many thousands of years earlier than that of the Egyptian cats. In 2008 a site in the Nile Valley was found which contained the remains of 6 cats – a male, a female and 4 kittens, which seemed to have been cared for by people about 6,000 years ago.

African Wild Cat, photo by Sonelle, CC-BY-SA

It is now fairly well accepted that cats domesticated people, rather than the other way round! Papers refer to cats as having “self-domesticated”, which sounds in line with cat behaviour. Genetically all modern cats are related to African (also called Near Eastern) wild cats 8,000 years ago. There was an attempt to domesticate leopard cats about 7,500 years ago in China, but none of these animals contributed to the genetic material of the world’s modern cat populations. As humans in the Near East developed agriculture and started to live in settled villages, after 10,000 years ago, cats were attracted to these ready sources of food and more. The steady supply of food from agriculture allowed people to live in permanent villages. Unfortunately, these villages, stocked with food, also attracted other animals, such as rats and mice, not as welcome and potential carriers of disease. The rats and mice were a source of food for the cats who probably arrived in the villages as independent, nocturnal hunters, rather than as deliberately being encouraged by people.

Detail of cat from tomb of Nebamun

Once cats were living in close proximity to people, trust developed and soon cats were helping humans in the hunt, as is shown in this detail from an Egyptian tomb painting on the right. Over time, cats became pets and part of the family and followed farmers from Turkey across into Europe, as well as being painted sitting under dining tables in Egypt. People started to interfere with the breeding of cats and it is now thought that the Egyptians selected more social, rather than more territorial cats. Contrary to the popular belief that cats are innately solitary, in fact African Wild Cats have complex social behaviour, much of which has been inherited by the domestic cat. African wild cats live in loosely affiliated groups made up mostly of female cats who raise kittens together. There are some males associated with the group, but they tend to visit infrequently and have a larger range, visiting several of the groups of females and kittens. The female cats take turns at nursing, looking after the kittens and hunting. The adult females share food only with their own kittens and not with the other adults. Cats recognise who belongs to their group and who doesn’t and tend to be aggressive to those outside the group. Younger cats are more tolerant of strangers, until they form their own groups. Males are not usually social towards each other, but occasionally tolerate each other in loose ‘brotherhoods’.

In our homes we form the social group, which may include one or more cats. If there is more than one cat these may subdivide themselves into cliques or factions. Pairs of cats raised together often remain closely bonded and affectionate for life. Other cats (especially males) may isolate themselves from the group and do not want to interact with other cats. Cats that are happy on their own do not need other cats for company. It is more common to find stressed cats in multi-cat households. Cats will tolerate other cats best if they are introduced when young. After 2 years of age cats are less tolerant of newcomers to the group. Humans take the place of parents in their cats’ lives. Cats who grow up with humans retain some psychological traits from kittenhood and never achieve full psychological maturity.

At the time that humans were learning to manipulate the environment to their own advantage by domesticating plants and animals, cats started learning to manipulate us. They have now managed to achieve very comfortable and prosperous lives with humans and have followed humans around the planet. Cats arrived in Australia with the First Fleet, having found a very comfortable niche on sailing ships helping to control vermin. Matthew Flinders‘ cat, Trim, became famous as a result of the book Flinders wrote about him. However, cats have had a devastating effect on the native wildlife of Australia. They kill millions of native animals every year, possibly even millions each night. It is thought that they have been responsible for the extinction of numbers of native mice and small marsupial species. Cats are very efficient and deadly nocturnal hunters. It is recommended that all cats are kept restrained indoors or in runs, especially at night. We must not forget that our cuddly companions are still carnivorous predators.

David Rowe: Codec 2 Wideband

Planet Linux Australia - Tue, 2017-06-27 09:04

I’m spending a month or so improving the speech quality of a couple of Codec 2 modes. I have two aims:

  1. Make the 700 bit/s codec sound better, to improve speech quality on low SNR HF channels (beneath 0dB).
  2. Develop a higher quality mode in the 2000 to 3000 bit/s range, that can be used on HF channels with modest SNRs (around 10dB)

I ran some numbers on the new OFDM modem and LDPC codes, and it turns out we can get 3000 bit/s of codec data through a 2000 Hz channel at down to 7dB SNR.

Now 3000 bit/s is broadband for me – I’ve spent years being very frugal with my bits while I play in low SNR HF land. However it’s still a bit low for Opus which kicks in at 6000 bit/s. I can’t squeeze 6000 bit/s through a 2000 Hz RF channel without higher order QAM constellations which means SNRs approaching 20dB.
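As a rough sanity check on those numbers, the Shannon capacity bound C = B·log2(1 + SNR) puts theoretical limits on what a 2000 Hz channel can carry. This is only the information-theoretic bound (a sketch; a practical modem, even with LDPC coding, needs several dB of margin above it):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity in bit/s of an AWGN channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

def min_snr_db(bit_rate: float, bandwidth_hz: float) -> float:
    """Minimum SNR (dB) at which a bit rate is theoretically possible."""
    spectral_efficiency = bit_rate / bandwidth_hz  # bit/s/Hz
    return 10 * math.log10(2 ** spectral_efficiency - 1)

# 2000 Hz at 7dB SNR: capacity comfortably above 3000 bit/s
capacity_7db = shannon_capacity(2000, 7)   # roughly 5200 bit/s

# 6000 bit/s in 2000 Hz is 3 bit/s/Hz, needing ~8.5dB even in theory;
# real higher-order QAM constellations need far more margin.
snr_for_6000 = min_snr_db(6000, 2000)
```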

So – what can I do with 3000 bit/s and Codec 2? I decided to try wideband(-ish) audio – the sort of audio bandwidth you get from Skype or AM broadcast radio. So I spent a few weeks modifying Codec 2 to work at 16 kHz sample rate, and Jean Marc gave me a few tips on using DCTs to code the bits.

It’s early days but here are a few samples:

   Description                                                  Sample
1  Original Speech                                              Listen
2  Codec 2 Model, original amplitudes and phases                Listen
3  Synthetic phase, one bit voicing, original amplitudes        Listen
4  Synthetic phase, one bit voicing, amplitudes at 1800 bit/s   Listen
5  Simulated analog SSB, 300-2600Hz BPF, 10dB SNR               Listen

Couple of interesting points:

  • Sample (2) is as good as Codec 2 can do: it’s the unquantised model parameters (harmonic phases and amplitudes). It’s all downhill from here as we quantise or toss away parameters.
  • In (3) I’m using a one bit voicing model; this is very “vocoder” and shouldn’t work this well. MBE/MELP all say you need mixed excitation. Exploring that conundrum would be a good Masters degree topic.
  • In (3) I can hear the pitch estimator making a few mistakes, e.g. around “sheet” on the female.
  • The extra 4kHz of audio bandwidth doesn’t take many more bits to encode, as the ear has a log frequency response. It’s maybe 20% more bits than 4kHz audio.
  • You can hear that some words like “well” are muddy and indistinct in the 1800 bit/s sample (4). This usually means the formant (spectral) peaks are not well defined, so we might be tossing away a little too much information.
  • The clipping on the SSB sample (5) around the words “depth” and “hours” is an artifact of the PathSim AGC. But dat noise. It gets really fatiguing after a while.

Wideband audio is a big paradigm shift for Push To Talk (PTT) radio. You can’t do this with analog radio: 2000 Hz of RF bandwidth, 8000 Hz of audio bandwidth. I’m not aware of any wideband PTT radio systems – they all work at best 4000 Hz audio bandwidth. DVSI has a wideband codec, but at a much higher bit rate (8000 bits/s).

Current wideband codecs shoot for artifact-free speech (and indeed general audio signals like music). Codec 2 wideband will still have noticeable artifacts, and probably won’t like music. Big question is will end users prefer this over SSB, or say analog FM – at the same SNR? What will 8kHz audio sound like on your HT?

We shall see. I need to spend some time cleaning up the algorithms, chasing down a few bugs, and getting it all into C, but I plan to be testing over the air later this year.

Let me know if you want to help.

Play Along

Unquantised Codec 2 with 16 kHz sample rate:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 -o - | play -t raw -r 16000 -s -2 -

With “Phase 0” synthetic phase and 1 bit voicing:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 --phase0 --postfilter -o - | play -t raw -r 16000 -s -2 -


FreeDV 2017 Road Map – this work is part of the “Codec 2 Quality” work package.

Codec 2 page – has an explanation of the way Codec 2 models speech with harmonic amplitudes and phases.

OpenSTEM: Guess the Artefact! – #2

Planet Linux Australia - Mon, 2017-06-26 15:05

Today’s Guess the Artefact! covers one of a set of artefacts which are often found confusing to recognise. We often get questions about these artefacts, from students and teachers alike, so here’s a chance to test your skills of observation. Remember – all heritage and archaeological material is covered by State or Federal legislation and should never be removed from its context. If possible, photograph the find in its context and then report it to your local museum or State Heritage body (the Dept of Environment and Heritage Protection in Qld; the Office of Environment and Heritage in NSW; the Dept of Environment, Planning and Sustainable Development in ACT; Heritage Victoria; the Dept of Environment, Water and Natural Resources in South Australia; the State Heritage Office in WA and the Heritage Council – Dept of Tourism and Culture in NT).

This artefact is made of stone. It measures about 12 x 8 x 3 cm. It fits easily and comfortably into an adult’s hand. The surface of the stone is mostly smooth and rounded, it looks a little like a river cobble. However, one side – the right-hand side in the photo above – is shaped so that 2 smooth sides meet in a straight, sharpish edge. Such formations do not occur on naturally rounded stones, which tells us that this was shaped by people and not just rounded in a river. The smoothed edges meeting in a sharp edge tell us that this is ground-stone technology. Ground stone technology is a technique used by people to create smooth, sharp edges on stones. People grind the stone against other rocks, occasionally using sand and water to facilitate the process, usually in a single direction. This forms a smooth surface which ends in a sharp edge.

Neolithic Axe

Ground stone technology is usually associated with the Neolithic period in Europe and Asia. In the northern hemisphere, this technology was primarily used by people who were learning to domesticate plants and animals. These early farmers learned to grind grains, such as wheat and barley, between two stones to make flour – thus breaking down the structure of the plant and making it easier to digest. Our modern mortar and pestle is a descendant of this process. Early farmers would have noticed that these actions produced smooth and sharp edges on the stones. These observations would have led them to apply this technique to other tools which they used and thus develop the ground-stone technology. Here (picture on right) we can see an Egyptian ground stone axe from the Neolithic period. The toolmaker has chosen an attractive red and white stone to make this axe-head.

In Japan this technology is much older than elsewhere in the northern hemisphere, and ground-stone axes have been found dating to 30,000 years ago during the Japanese Palaeolithic period. Until recently these were thought to be the oldest examples of ground-stone technology in the world. However, in 2016, Australian archaeologists Peter Hiscock, Sue O’Connor, Jane Balme and Tim Maloney reported, in an article in the journal Australian Archaeology, the finding of a tiny flake of stone (just over 1 cm long and 1/2 cm wide) from a ground stone axe in layers dated to 44,000 to 49,000 years ago at the site of Carpenter’s Gap in the Kimberley region of north-west Australia. This tiny flake of stone – easily missed by anyone not paying close attention – is an excellent example of the extreme importance of ‘archaeological context’. Archaeological material that remains in its original context (known as in situ) can be dated accurately and associated with other material from the same layers, thus allowing us to understand more about the material. Anything removed from the context usually can not be dated and only very limited information can be learnt.

The find from the Kimberley makes Australia the oldest place in the world to have ground-stone technology. The tiny chip of stone, broken off a larger ground-stone artefact, probably an axe, was made by the ancestors of Aboriginal people in the millennia after they arrived on this continent. These early Australians did not practise agriculture, but they did eat various grains, which they learned to grind between stones to make flour. It is possible that whilst processing these grains they learned to grind stone tools as well. Our artefact, shown above, is undated. It was found, totally removed from its original context, stored under an old house in Brisbane. The artefact is useful as a teaching aid, allowing students to touch and hold a ground-stone axe made by Aboriginal people in Australia’s past. However, since it was removed from its original context at some point, we do not know how old it is, or even where it came from exactly.

Our artefact is a stone tool. Specifically, it is a ground stone axe, made using technology that dates back almost 50,000 years in Australia! These axes were usually made by rubbing a hard stone cobble against rocks by the side of a creek. Water from the creek was used as a lubricant, and often sand was added as an extra abrasive. The making of ground-stone axes often left long grooves in these rocks. These are called ‘grinding grooves’ and can still be found near some creeks in the landscape today, such as in Kuringai Chase National Park in Sydney. The ground-stone axes were usually hafted using sticks and lashings of plant fibre, to produce a tool that could be used for cutting vegetation or other uses. Other stone tools look different to the one shown above, especially those made by flaking stone; however, smooth stones should always be carefully examined in case they are also ground-stone artefacts and not just simple stones!

Linux Users of Victoria (LUV) Announce: LUV Beginners July Meeting: Teaching programming using video games

Planet Linux Australia - Mon, 2017-06-26 15:03
Start: Jul 15 2017 12:30
End: Jul 15 2017 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond

Andrew Pam will be demonstrating a range of video games that run natively on Linux and explicitly include programming skills as part of the game, including SpaceChem, InfiniFactory, TIS-100, Shenzen I/O, Else Heart.Break(), Hack 'n' Slash and Human Resource Machine. He will seek feedback on the suitability of these games for teaching programming skills to non-programmers and the possibility of group play in a classroom or workshop setting.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 15, 2017 - 12:30

read more