Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au
Updated: 1 hour 3 min ago

Linux Users of Victoria (LUV) Announce: LUV Main August 2017 Meeting

Fri, 2017-07-28 00:02
Start: Aug 1 2017 18:30 End: Aug 1 2017 20:30 Location:  The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053 Link:  http://luv.asn.au/meetings/map

Tuesday, August 1, 2017

6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

  • Tony Cree, CEO Aboriginal Literacy Foundation (to be confirmed)
  • Russell Coker, QEMU and ARM on AMD64

Russell Coker will demonstrate how to use QEMU to run software for ARM CPUs on an x86 family CPU.

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

August 1, 2017 - 18:30

Linux Users of Victoria (LUV) Announce: LUV Beginners August Meeting: Secure Shell (SSH)

Fri, 2017-07-28 00:02
Start: Aug 26 2017 12:30 End: Aug 26 2017 16:30 Location:  Infoxchange, 33 Elizabeth St. Richmond Link:  http://luv.asn.au/meetings/map

Secure Shell (SSH)

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

August 26, 2017 - 12:30


Russell Coker: Forking Mon and DKIM with Mailing Lists

Wed, 2017-07-26 02:02

I have forked the “Mon” network/server monitoring system. Here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and use other best practices.

The first release of etbe-mon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not a common practice for someone to fork upstream of a package soon after becoming a co-maintainer of the Debian package. But I believe that this is in the best interests of the users. I presume that there are other collections of patches out there and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been kept separate due to a lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. This explains how to set up Mailman for correct operation with DKIM and also why that seems to be the only viable option.

Related posts:

  1. DKIM and Mailing Lists Currently we have a problem with the Debian list server...
  2. An Update on DKIM Signing and SE Linux Policy In my previous post about DKIM [1] I forgot to...
  3. Installing DKIM and Postfix in Debian I have just installed Domain Key Identified Mail (DKIM) [1]...

OpenSTEM: This Week in HASS – term 3, week 3

Mon, 2017-07-24 10:04
This week our youngest students are playing games from different places around the world, in the past. Slightly older students are completing the Timeline Activity. Students in Years 4, 5 and 6 are starting to sink their teeth into their research project for the term, using the Scientific Process. Foundation/Prep/Kindy to Year 3 This week […]

Gabriel Noronha: test post

Sun, 2017-07-23 22:03

test posting from wordpress.com

01 – [Jul-24 13:35 API] Volley error on https://public-api.wordpress.com/rest/v1.1/sites/4046490/posts/366/?context=edit&locale=en_AU – exception: null
02 – [Jul-24 13:35 API] StackTrace: com.android.volley.ServerError
    at com.android.volley.toolbox.BasicNetwork.performRequest(BasicNetwork.java:179)
    at com.android.volley.NetworkDispatcher.run(NetworkDispatcher.java:114)

03 – [Jul-24 13:35 API] Dispatching action: PostAction-PUSHED_POST
04 – [Jul-24 13:35 POSTS] Post upload failed. GENERIC_ERROR: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
05 – [Jul-24 13:35 POSTS] updateNotificationError: Error while uploading the post: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
06 – [Jul-24 13:35 EDITOR] Focus out callback received

OpenSTEM: New Dates for Earliest Archaeological Site in Aus!

Thu, 2017-07-20 12:04
This morning news was released of a date of 65,000 years for archaeological material at the site of Madjedbebe rock shelter in the Jabiluka mineral lease area, surrounded by Kakadu National Park. The site is on the land of the Mirarr people, who have partnered with archaeologists from the University of Queensland for this investigation. […]

sthbrx - a POWER technical blog: XDP on Power

Mon, 2017-07-17 11:08

This post is a bit of a break from the standard IBM fare of this blog, as I now work for Canonical. But I have a soft spot for Power from my time at IBM - and Canonical officially supports 64-bit, little-endian Power - so when I get a spare moment I try to make sure that cool, officially-supported technologies work on Power before we end up with a customer emergency! So, without further ado, this is the story of XDP on Power.

XDP

eXpress Data Path (XDP) is a cool Linux technology to allow really fast processing of network packets.

Normally in Linux, a packet is received by the network card, an SKB (socket buffer) is allocated, and the packet is passed up through the networking stack.

This introduces an inescapable latency penalty: we have to allocate some memory and copy stuff around. XDP allows some network cards and drivers to process packets early - even before the allocation of the SKB. This is much faster, and so has applications in DDOS mitigation and other high-speed networking use-cases. The IOVisor project has much more information if you want to learn more.

eBPF

XDP processing is done by an eBPF program. eBPF - the extended Berkeley Packet Filter - is an in-kernel virtual machine with a limited set of instructions. The kernel can statically validate eBPF programs to ensure that they terminate and are memory safe. From this it follows that the programs cannot be Turing-complete: they do not have backward branches, so they cannot do fancy things like loops. Nonetheless, they're surprisingly powerful for packet processing and tracing. eBPF programs are translated into efficient machine code using in-kernel JIT compilers on many platforms, and interpreted on platforms that do not have a JIT. (Yes, there are multiple JIT implementations in the kernel. I find this a terrifying thought.)

Rather than requiring people to write raw eBPF programs, you can write them in a somewhat-restricted subset of C, and use Clang's eBPF target to translate them. This is super handy, as it gives you access to the kernel headers - which define a number of useful data structures like headers for various network protocols.

Trying it

There are a few really interesting projects that are already up and running that allow you to explore XDP without learning the innards of both eBPF and the kernel networking stack. I explored the samples in bcc (the BPF Compiler Collection) and also the samples from the netoptimizer/prototype-kernel repository.

The easiest way to get started with these is with a virtual machine, as recent virtio network drivers support XDP. If you are using Ubuntu, you can use the uvt-kvm tooling to trivially set up a VM running Ubuntu Zesty on your local machine.

Once your VM is installed, you need to shut it down and edit the virsh XML.

You need 2 vCPUs (or more) and a virtio+vhost network card. You also need to edit the 'interface' section and add the following snippet (with thanks to the xdp-newbies list):

<driver name='vhost' queues='4'>
  <host tso4='off' tso6='off' ecn='off' ufo='off'/>
  <guest tso4='off' tso6='off' ecn='off' ufo='off'/>
</driver>

(If you have more than 2 vCPUs, set the queues parameter to 2x the number of vCPUs.)

Then, install a modern clang (we've had issues with 3.8 - I recommend v4+), and the usual build tools.

I recommend testing with the prototype-kernel tools - the DDOS prevention tool is a good demo. Then - on x86 - you just follow their instructions. I'm not going to repeat that here.

POWERful XDP

What happens when you try this on Power? Regular readers of my posts will know to expect some minor hitches.

XDP does not disappoint.

Firstly, the prototype-kernel repository hard codes x86 as the architecture for kernel headers. You need to change it for powerpc.

Then, once you get the stuff compiled, and try to run it on a current-at-time-of-writing Zesty kernel, you'll hit a massive debug splat ending in:

32: (61) r1 = *(u32 *)(r8 +12)
misaligned packet access off 0+18+12 size 4
load_bpf_file: Permission denied

It turns out this is because in Ubuntu's Zesty kernel, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set on ppc64el. Because of that, the eBPF verifier will check that all loads are aligned - and this load (part of checking some packet header) is not, and so the verifier rejects the program. Unaligned access is not enabled because the Zesty kernel is being compiled for CPU_POWER7 instead of CPU_POWER8, and we don't have efficient unaligned access on POWER7.

As it turns out, IBM never released any officially supported Power7 LE systems - LE was only ever supported on Power8. So, I filed a bug and sent a patch to build Zesty kernels for POWER8 instead, and that has been accepted and will be part of the next stable update due real soon now.

Sure enough, if you install a kernel with that config change, you can verify the XDP program and load it into the kernel!

If you have real powerpc hardware, that's enough to use XDP on Power! Thanks to Michael Ellerman, maintainer extraordinaire, for verifying this for me.

If - like me - you don't have ready access to Power hardware, you're stuffed. You can't use qemu in TCG mode: to use XDP with a VM, you need multi-queue support, which only exists in the vhost driver, which is only available for KVM guests. Maybe IBM should release a developer workstation. (Hint, hint!)

Overall, I was pleasantly surprised by how easy things were for people with real ppc hardware - it's encouraging to see something not require kernel changes!

eBPF and XDP are definitely growing technologies - as Brendan Gregg notes, now is a good time to learn them! (And those on Power have no excuse either!)

OpenSTEM: This Week in HASS – term 3, week 2

Mon, 2017-07-17 10:04
This week older students start their research projects for the term, whilst younger students are doing the Timeline Activity. Our youngest students are thinking about the places where people live and can join together with older students as buddies to Build A Humpy together. Foundation/Prep/Kindy to Year 3 Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), […]

Anthony Towns: Bitcoin: ASICBoost – Plausible or not?

Thu, 2017-07-13 22:00

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of SpoonDoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it is disabled by default, a switch to hide it obviously exists, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — and Bitmain confirms having done exactly that on testnet. Even Guy Corem’s claim that they only amount to $2,000,000 in savings per year rather than $100,000,000 seems like a reason to expect it to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmarks results they got when testing on testnet, and why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet; however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them give a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there is obviously pretty important.
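
To make the comparison concrete, here is a minimal Python sketch of the second and third calculations above (the electricity-cost and constant-power approaches), assuming the April figures used above: $1100/BTC, 12.5 BTC per block, 0.1 J/GH and $0.03 per kWh. The outputs should land close to the $2M/$11M and $15M/$47M figures, give or take rounding:

PRICE = 1100.0                     # USD per BTC (April)
REWARD = 12.5                      # BTC per block
BLOCKS_PER_YEAR = 6 * 24 * 365     # one block roughly every ten minutes
ANNUAL_REVENUE = BLOCKS_PER_YEAR * REWARD * PRICE   # ~$723M across all miners

def electricity_saving(ph_per_s, advantage, j_per_gh=0.1, usd_per_kwh=0.03):
    """Guy Corem's approach: ASICBoost saves `advantage` of the power bill."""
    watts = ph_per_s * 1e6 * j_per_gh              # PH/s -> GH/s, J/GH -> W
    annual_usd = watts / 1000 * 24 * 365 * usd_per_kwh
    return annual_usd * advantage

def revenue_loss(ph_per_s, share, advantage):
    """Constant-power approach: turning ASICBoost off cuts hashrate by a
    factor of 1/(1+advantage); after a retarget the revenue share drops."""
    others = ph_per_s / share - ph_per_s           # everyone else's hashrate
    reduced = ph_per_s / (1 + advantage)
    new_share = reduced / (reduced + others)
    return ANNUAL_REVENUE * (share - new_share)

for ph, share, adv in [(500, 0.132, 0.15), (1900, 0.5, 0.30)]:
    print("%4d PH/s, %4.1f%% hash, %2.0f%% boost: electricity $%.1fM, revenue $%.1fM"
          % (ph, share * 100, adv * 100,
             electricity_saving(ph, adv) / 1e6, revenue_loss(ph, share, adv) / 1e6))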

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year.)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so if you can find a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2, the work of step (3) can be shared between them. In that case, you do steps (1,2) once for each different Chunk1 (four times in total), then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the saving comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have, and c and e are the relative costs of a Compress and an Expand step. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
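
As a rough check on those numbers, a few lines of Python (using the same constant-power approach as earlier, and assuming the April price of $1100 and roughly $2400 today) reproduce both the 18.75% figure and the $33M/$72M estimates:

BLOCKS_PER_YEAR = 6 * 24 * 365
REWARD = 12.5                      # BTC per block

def boost_advantage(n, c_over_e=1.0):
    """Fraction of work saved with n compatible Chunk1s; c_over_e is the cost
    of a Compress step relative to an Expand step."""
    return (n - 1) / n / 2 / (1 + c_over_e)

def revenue_loss(price, share, advantage):
    """Constant-power estimate: revenue lost if ASICBoost is switched off
    and the difficulty retargets (everyone else's hashrate unchanged)."""
    new_share = (share / (1 + advantage)) / (share / (1 + advantage) + (1 - share))
    return BLOCKS_PER_YEAR * REWARD * price * (share - new_share)

print(boost_advantage(4))                    # 0.1875, ie 18.75%
print(revenue_loss(1100, 0.5, 0.20) / 1e6)   # ~33 -> about $33M in April
print(revenue_loss(2400, 0.5, 0.20) / 1e6)   # ~72 -> about $72M today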

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that AntMiners support ASICBoost in hardware already, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds, where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) - r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify) which gives a result of 6gpR/Er = (g+h)^2.

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits, or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR) which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
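
Plugging the numbers in directly (a small Python sketch, using the same assumptions as the paragraph above: r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC):

def optimal_share(E, r=0.1, t=6e6, p=2400.0, R=12.5):
    """Profit-maximising fraction of total hashrate, h/t = 1 - E*r*t/(6*p*R),
    for a miner whose electricity (plus cooling) cost is E dollars per kWh."""
    return 1 - (E * r * t) / (6 * p * R)

print(optimal_share(0.03))   # 0.9   -> ~90% of hashrate at $0.03/kWh
print(optimal_share(0.08))   # 0.733 -> ~73% of hashrate at $0.08/kWh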

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, this would come at a cost of not being able to mine with them while they are state of the art, then sell them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if, eg, a fall in the inflation subsidy is not compensated by a corresponding increase in fee income), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
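
The same threshold can be computed directly (again just a sketch, under the assumptions above):

def breakeven_hashrate(p, R=12.5, E=0.08, r=0.1):
    """Total network hashrate (TH/s), t > 3*p*R/(E*r), above which controlling
    a majority of hashpower stops being the profit-maximising choice."""
    return 3 * p * R / (E * r)

print(breakeven_hashrate(2400))          # 11.25e6 TH/s, about double today's
print(breakeven_hashrate(2400) / 2400)   # 4687.5 TH/s per dollar of price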

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)
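
As a quick check on those percentages, the fraction of a batch you can keep while the sold remainder covers the whole batch's manufacturing cost is just 1 - cost/retail (assuming the $500 and $400 manufacturing estimates quoted above):

def self_mine_fraction(manufacture_cost, retail_price):
    """Share of a batch you can keep if selling the rest at retail covers
    the manufacturing cost of the entire batch."""
    return 1 - manufacture_cost / retail_price

print(self_mine_fraction(500, 1200))   # ~0.58 -> up to ~58% of new hashpower
print(self_mine_fraction(400, 1200))   # ~0.67 -> up to ~67%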

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation is in turn invalid, and the final result is not applicable.

 

Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist, however Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that Bitcoin.com, BTC.top, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply due to loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.

 

So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum.
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Colin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

Thu, 2017-07-13 14:01

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.

Francois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

Wed, 2017-07-12 16:07

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
        name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
        driver: <module-alsa-card.c>
        ...
        ports:
                analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
                        properties:
                analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
                        properties:
                                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb > docked
lsusb > undocked
colordiff -u docked undocked

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked   2017-07-07 19:10:51.875405241 -0700
+++ undocked 2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp.
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash
if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

James Morris: Linux Security Summit 2017 Schedule Published

Wed, 2017-07-12 00:01

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There will also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

OpenSTEM: This Week in HASS – term 3, week 1

Mon, 2017-07-10 16:04
Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with winter cold and the usual round of flus and sniffles. OpenSTEM’s 3rd term units […]

Anthony Towns: Bitcoin: ASICBoost and segwit2x – Background

Mon, 2017-07-10 16:00

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100”, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list as BIP103, but never formally proposed). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution down the road (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group does not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, eg)

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness, (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1“, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, it seems like a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core dev’s quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases that have arisen by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons only give half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. To this line of thinking the emphasis on lack of communication from core devs or the desire for a hard fork block size increase aren’t the actual goal, so the lack of effort being put into resolving them over the past 18 months from the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

Lev Lafayette: One Million Jobs for Spartan

Sun, 2017-07-09 00:04

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its 1 millionth job today, almost exactly a year after launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharyn samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this are why I do HPC.

Congratulations to the rest of the team and to the user community.


Danielle Madeley: Using the Nitrokey HSM with GPG in macOS

Fri, 2017-07-07 20:01

Getting yourself set up in macOS to sign keys using a Nitrokey HSM with gpg is non-trivial. Allegedly (at least some) Nitrokeys are supported by scdaemon (GnuPG’s stand-in abstraction for cryptographic tokens), but it seems that the version of scdaemon in brew doesn’t have support.

However, there is gnupg-pkcs11-scd, a replacement for scdaemon that uses PKCS #11. Unfortunately it’s a bit of a hassle to set up.

There’s a bunch of things you’ll want to install from brew: opensc, gnupg, gnupg-pkcs11-scd, pinentry-mac, openssl and engine_pkcs11.

brew install opensc gnupg gnupg-pkcs11-scd pinentry-mac \
    openssl engine-pkcs11

gnupg-pkcs11-scd won’t create keys, so if you’ve not made one already, you need to generate yourself a keypair, which you can do with pkcs11-tool:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --keypairgen --key-type rsa:2048 \
    --id 10 --label 'Danielle Madeley'

The --id can be any hexadecimal id you want. It’s up to you to avoid collisions.

Then you’ll need to generate a self-signed X.509 certificate for this keypair (you’ll need both the PEM form and the DER form):

/usr/local/opt/openssl/bin/openssl << EOF
engine -t dynamic \
    -pre SO_PATH:/usr/local/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 \
    -pre LIST_ADD:1 \
    -pre LOAD \
    -pre MODULE_PATH:/usr/local/lib/opensc-pkcs11.so
req -engine pkcs11 -new -key 0:10 -keyform engine \
    -out cert.pem -text -x509 -days 3640 -subj '/CN=Danielle Madeley/'
x509 -in cert.pem -out cert.der -outform der
EOF

The flag -key 0:10 identifies the token and key id you’re using (see above, where you created the key). If you want to refer to a different token or key id, you can change these.

And import it back into your HSM:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --write-object cert.der --type cert \
    --id 10 --label 'Danielle Madeley'

You can then configure gnupg-agent to use gnupg-pkcs11-scd. Edit the file ~/.gnupg/gpg-agent.conf:

scdaemon-program /usr/local/bin/gnupg-pkcs11-scd
pinentry-program /usr/local/bin/pinentry-mac

And the file ~/.gnupg/gnupg-pkcs11-scd.conf:

providers nitrokey
provider-nitrokey-library /usr/local/lib/opensc-pkcs11.so

gnupg-pkcs11-scd is pretty nifty in that it will throw up a (pin entry) dialog if your token is not available, and is capable of supporting multiple tokens and providers.

Reload gpg-agent:

gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF

Check your new agent is working:

gpg --card-status

Get your key handle (grip), which is the 40-character hex string after the phrase KEY-FRIEDNLY (sic):

gpg-agent --server gpg-connect-agent << EOF
SCD LEARN
EOF

Import this key into gpg as an ‘Existing key’, giving the key grip above:

gpg --expert --full-generate-key

You can now use this key as normal, create sub-keys, etc:

gpg -K
/Users/danni/.gnupg/pubring.kbx
-------------------------------
sec>  rsa2048 2017-07-07 [SCE]
      1172FC7B4B5755750C65F9A544B80C280F80807C
      Card serial no. = 4B43 53233131
uid           [ultimate] Danielle Madeley <danielle@madeley.id.au>

echo -n "Hello World" | gpg --armor --clearsign --textmode

Side note: the curses-based pinentry doesn’t deal with piping content into stdin, which is why you want pinentry-mac.

You can also import your certificate into gpgsm:

gpgsm --import < ca-certificate
gpgsm --learn-card

And that’s it: now you can sign your git tags with your super-secret private key, or whatever it is you do. Remember that you can’t exfiltrate the secret keys from your HSM in the clear, so if you need a backup you can create a DKEK backup (see the SmartcardHSM docs), or make sure you’ve generated that revocation certificate, or just decide disaster recovery is for dweebs.

Rusty Russell: Broadband Speeds, 2 Years Later

Thu, 2017-07-06 22:01

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit/s
  • Average download speed in November 2014 was 22.8Mbit/s
  • Average download speed in November 2016 was 36.2Mbit/s
  • Average upload speed from November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit/s
  • Average upload speed in November 2016 was 4.3Mbit/s

So over the last two years we’ve seen an annualised increase of 26% in download speed and 22% in upload speed, bringing the eight-year figures down from 36%/37% to 33% per annum. The divergence between download and upload improvements is concerning: I previously assumed they were the same, but for a peer-to-peer system we have to design for the lesser of the two.
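
If you want to check the arithmetic, the annualised figures fall out of a simple compound-growth calculation. A quick sketch in Python, using only the OFCOM numbers quoted above:

# Compound annual growth rate from the OFCOM figures above.
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1

print(f"download, 2 years: {cagr(22.8, 36.2, 2):.0%}")   # ~26%
print(f"upload,   2 years: {cagr(2.9, 4.3, 2):.0%}")     # ~22%
print(f"download, 8 years: {cagr(3.6, 36.2, 8):.0%}")    # ~33%
print(f"upload,   8 years: {cagr(0.43, 4.3, 8):.0%}")    # ~33%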

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 and Q1 2017 reports.

  • Annual global average speed increase, Q1 2015 – Q1 2016: 23%
  • Annual global average speed increase, Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.
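
For completeness, the combined per-annum figure is just the geometric mean of the two annual increases; a one-line sketch in Python using the percentages above:

rate = ((1 + 0.23) * (1 + 0.15)) ** 0.5 - 1
print(f"combined annual growth: {rate:.0%}")  # ~19%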

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: over the last 9 years the Akamai numbers suggest the US has increased by 19% per annum and the UK by almost 21%. The gloss seems to be coming off the UK fixed-broadband numbers, but upload speeds there have still increased by 22% per annum over the last two years. Even Australia and the Philippines have managed almost 21%.

Danielle Madeley: python-pkcs11 with the Nitrokey HSM

Wed, 2017-07-05 00:01

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

Because the OpenSC PKCS #11 module is a little more lightweight than some of the other vendors’ modules, which often implement mechanisms that are not actually supported by the hardware (e.g. the Opencryptoki TPM module), I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, as the Nitrokey doesn’t implement any of the public key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more import/extraction functions for the various key formats. Along the way I got very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer is as SubjectPublicKeyInfo from X.509). There might also be some operating-system-specific glitches with encoding some DER structures; I think I need to move from pyasn1 to asn1crypto.
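
To illustrate the encoding mentioned above, here’s a minimal sketch (using the Python cryptography library rather than python-pkcs11, purely to show the format) of an EC public key serialised the way OpenSSL stores it, as a SubjectPublicKeyInfo structure:

# Illustration only: serialise an EC public key as X.509 SubjectPublicKeyInfo,
# the same encoding OpenSSL uses for "BEGIN PUBLIC KEY" blocks.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1(), default_backend())
spki_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(spki_pem.decode())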

OpenSTEM: The Science of Cats

Mon, 2017-07-03 10:04
Ah, the comfortable cat! Most people agree that cats are experts at being comfortable and getting the best out of life, with the assistance of their human friends – but how did this come about? Geneticists and historians are continuing to study how cats and people came to live together and how cats came to […]