Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Lev Lafayette: Batchholds, Leap Seconds, and PBS Restarts

Sat, 2016-02-27 23:31

It is not unusual for a few jobs to fall into a batchhold state when one is managing a cluster; users often write PBS submissions with errors in them (such as requesting more cores than are actually available). When a sysadmin has the opportunity to do so, they should check such scripts and educate the users on what they have done wrong.


Tridge on UAVs: A new chapter in ArduPilot development

Sat, 2016-02-27 16:43

The ArduPilot core development team is starting on a new phase in the project's development. We’ve been having a lot of discussions lately about how to become better organised and better meet the needs of both our great user community and the increasing number of organisations using ArduPilot professionally. The dev team is passionate about making the best autopilot software we can and we are putting the structures in place to make that happen.

Those of you who have been following the developments over the years know that ArduPilot has enjoyed a very close relationship with 3DRobotics for a long time, including a lot of direct funding of ArduPilot developers by 3DR. As 3DR changes its focus that relationship has changed: rather than providing direct financial support for developers, 3DR will be one of many companies contributing to open source development, both in ArduPilot and in the wider DroneCode community. The reduction in direct funding by 3DR is not really too surprising, as the level of financial support in the past was quite unusual by open source project standards.

Meanwhile the number of other individuals and companies directly supporting ArduPilot development has been increasing a lot recently, with over 130 separate people contributing to the code in the last year alone, and the range of companies making autopilot hardware and airframes aimed at ArduPilot users has also grown enormously.

We’re really delighted with how the developer community is working together, and we’re very confident that ArduPilot has a very bright future.

Creation of ArduPilot non-profit

The ArduPilot dev team is creating a non-profit entity to act as a focal point for ArduPilot development. It will take a while to get this set up, but the aim is to have a governance body that guides the direction the project takes and ensures the project meets the needs of the very diverse user community. Once the organisation is in place we will make another announcement, but you can expect it to be modelled on the many successful open source non-profits that exist across the free software community.

The non-profit organisation will oversee the management of the documentation, the auto-build and test servers and will help set priorities for future development.

We’re working with 3DR now to organise the transfer of the ardupilot.com domain to the development team leads, and will transfer it to the non-profit once that is established. The dev team has always led the administration of that site, so this is mostly a formality, but we are also planning on a re-work of the documentation to create an improved experience for the community and to make it easier to maintain.

Expansion of ArduPilot consulting businesses

In addition to the non-profit, we think there is a need for more consulting services around ArduPilot and DroneCode. We’ve recognised this need for a while as the developers have often received requests for commercial support and consulting services. That is why we created this commercial support list on the website last year:

http://planner.ardupilot.com/wiki/common-commercial-support/

It is time to take that to the next level by promoting a wider range of consulting services for ArduPilot. As part of that, a group of the ArduPilot developers are in the process of creating a company that will provide a broad range of consulting services around ArduPilot. You will see some more announcements about this soon and we think this will really help ArduPilot expand into places that are hard to get to now. We are delighted at this development, and hope these companies listed on the website will provide a vibrant commercial support ecosystem for the benefit of the entire ArduPilot community.

Best of both worlds

We think that having a non-profit to steer the project while having consulting businesses to support those who need commercial support provides the best of both worlds. The non-profit ArduPilot project and the consulting businesses will be separate entities, but the close personal and professional relationships that have built up in the family of ArduPilot developers will help both to support each other.

Note that ArduPilot is committed to open source and free software principles, and there will be no reduction in features or attempt to limit the open source project. ArduPilot is free and always will be. We care just as much about the hobbyist users as we do about supporting commercial use. We just want to make a great autopilot while providing good service to all users, whether commercial or hobbyist.

Thank you!

We’d also like to say a huge thank you to all the ArduPilot users and developers that have allowed ArduPilot to develop so much in recent years. We’ve come a very long way and we’re really proud of what we have built.

Finally we’d also like to thank all the hardware makers that support ArduPilot. The huge range of hardware available to our users from so many companies is fantastic, and we want to make it easier for our users to find the right hardware for their needs. We will continue working to improve the documentation to make that easier.

Happy flying!

The ArduPilot Dev Team

OpenSTEM: Junior Primary Students Build Stonehenge Model

Fri, 2016-02-26 16:30

Just a few weeks ago junior primary students did the Building Stonehenge Activity, as part of our Integrated History/Geography Program for Primary.

Seville Road State School on Brisbane’s south-side kindly sent us a photo to show you. This class used wooden blocks they happened to have; other classes use collected cardboard boxes.

Year 1-3 Building Stonehenge Activity (Photo: Seville Rd State School)

Our materials are designed to provide a more engaging learning experience for students as well as teachers. Here, students are examining different types of calendars and ways of measuring time. Stonehenge is given as an example of a solar calendar. This leads naturally into a discussion of solstices, equinoxes and seasons.

Stewart Smith: MySQL Contributions status

Fri, 2016-02-26 11:27

This post is an update to the status of various MySQL bugs (some with patches) that I’ve filed over the past couple of years (or that people around me have). I’m not looking at POWER-specific ones, although there are those too; each of the bugs here deals with general correctness of the code base.

Firstly, let’s look at some points I’ve raised:

  • Incorrect locking for global_query_id (bug #72544)

    Raised on May 5th, 2014 on the internals list. As of today, no action (apart from Dimitri verifying the bug back in May 2014). There continues to be locking around query IDs that perhaps only works by accident. Soon, this bug will be two years old.
  • Endian code based on CPU type rather than endian define (bug #72715)

    About six hundred and fifty days ago, back in May 2014, I filed this bug, which probably has a relatively trivial fix: use the correct #ifdef of BIG_ENDIAN/LITTLE_ENDIAN rather than doing specific behaviour based on #ifdef __i386__.

    What’s worse is that this looks like somebody being clever for a compiler in the 1990s, which is unlikely to produce the most optimal code today.
  • mysql-test-run.pl --valgrind-all does not run all binaries under valgrind (bug #74830)

    Yeah, this should be a trivial fix, but nothing has happened since November 2014.

    I’m unlikely to go provide a patch simply because it seems to take sooooo damn long to get anything merged.
  • MySQL 5.1 doesn’t build with Bison 3.0 (bug #77529)

    Probably of little consequence, unless you’re trying to build MySQL 5.1 on a linux distro released in the last couple of years. Fixed in Maria for a long time now.

Trivial patches:

  • Incorrect function name in DBUG_ENTER (bug #78133)

    Pull request number 25 on github – a trivial patch that is obviously correct, simply correcting some debug only string.

    So far, over 191 days with no action. If you can’t get trivial and obvious patches merged in about 2/3rds of a year, you’re not going to grow contributions. Nearly everybody coming to a project starts out with trivial patches, and if a long time contributor who will complain loudly on the internet (like I am here) can’t get his trivial patches merged, what chance on earth does a newcomer have?

    In case you’re wondering, this is the patch:

        --- a/sql/rpl_rli_pdb.cc
        +++ b/sql/rpl_rli_pdb.cc
        @@ -470,7 +470,7 @@ bool Slave_worker::read_info(Rpl_info_handler *from)
         bool Slave_worker::write_info(Rpl_info_handler *to)
         {
        -  DBUG_ENTER("Master_info::write_info");
        +  DBUG_ENTER("Slave_worker::write_info");
  • InnoDB table flags in bitfield is non-optimal (bug #74831)

    This has had a patch since I filed it back in November 2014, and it’s managed to sit idle long enough for GCC 4.8 to practically disappear from anywhere I care about, while 4.9 makes better optimization decisions. There are other reasons why C bitfields are an awful idea too.

Actually complex issues:

  • InnoDB mutex spin loop is missing GCC barrier (bug #72755)

    Again, another bug filed back in May 2014, where InnoDB is doing a rather weird trick to attempt to get the compiler to not optimize away a spinloop. There’s a known good way of doing this, it’s called a compiler barrier. I’ve had a patch for nearly two years, not merged :(
  • buf_block_align relies on random timeouts, volatile rather than memory barriers (bug #74775)

    This bug was first filed in November 2014 and deals with a couple of incorrect assumptions about memory ordering and what volatile means.

    While this may only exhibit a problem on ARM and POWER processors (as well as any other relaxed memory ordering architectures, x86 is the notable exception), it’s clearly incorrect and very non-portable.

    Don’t expect MySQL 5.7 to work properly on ARM (or POWER). Try this: ./mysql-test-run.pl rpl.rpl_checksum_cache --repeat=10

    You’ll likely find MySQL > 5.7.5 still explodes.

    In fact, there’s also Bug #79378 which Alexey Kopytov filed with patch that’s been sitting idle since November 2015 which is likely related to this bug.

Not covered here: universal CRC32 hardware acceleration (rather than just for innodb data pages) and other locking issues (some only recently discovered). I also didn’t go into anything filed in December 2015… although in any other project I’d expect something filed in December 2015 to have been looked at by now.

Like it or not, MySQL is still upstream for all the MySQL derivatives active today. Maybe this will change as RocksDB and TokuDB gain users and if WebScaleSQL, MariaDB and Percona can foster a better development community.

Chris Samuel: Eight years

Thu, 2016-02-25 19:26

Thanks Dad. Thinking of you.

Tridge on UAVs: Building, flying and crashing a large QuadPlane

Thu, 2016-02-25 18:32

Not all of the adventures that CanberraUAV have with experimental aircraft go as well as we might hope. This is the story of our recent build of a large QuadPlane and how the flight ended in a crash.

As part of our efforts in developing aircraft for the Outback Challenge 2016 competition CanberraUAV has been building both large helicopters and large QuadPlanes. The competition calls for a fast, long range VTOL aircraft, and those two airframe types are the obvious contenders.

This particular QuadPlane was the first we've built that is of the size and endurance we thought would easily handle the OBC mission. We based it on the aircraft we used to win the OBC'2014 competition, a 2.7m wingspan VQ Porter with a 35cc DLE35 petrol engine. This is the type of aircraft you commonly see at RC flying clubs for people who want to fly larger scale civilian aircraft. It flies well, and the fuselage is easy to work on, with plenty of room for additional payloads.

VQ Porter QuadPlane Build

The base airframe has a typical takeoff weight of a bit over 7kg. In the configuration we used in the 2014 competition it weighed over 11kg as we had a lot of extra equipment onboard like long range radios, the bottle drop system and onboard computers, plus lots of fuel. When rebuilt as a QuadPlane it weighed around 15kg, which is really stretching the base airframe to close to its limits.

To convert the Porter to a QuadPlane we started by gluing 300x100, 1mm thick carbon fibre sheets to the under-surface of the wings, and added 800x20x20mm square section carbon fibre tubes as motor arms. This basic design was inspired by what Sander did for his QuadRanger build.

In the above image you can see the CF sheet and the CF tubes being glued to the wing. We used silicone sealant between the sheet and the wing, and epoxy for gluing the two 800mm tubes together and attaching them to the wing. This worked really well and we will be using it again.

For the batteries of the quad part of the plane we initially thought we'd put them in the fuselage as that is the easiest way to do the build, but after some further thought we ended up putting them out on the wings:

They are held on using velcro and cup-hooks epoxied to the CF sheet and spars, with rubber bands for securing them. That works really well and we will also be using it again.

The reason for the change to wing mounted batteries is twofold. The first is concern that the inductance of the long wires needed in such a big plane could lead to the ESCs being damaged (see for example http://www.rcgroups.com/forums/showthread.php?t=952523&highlight=engin+wire). The second is that we think the weight being out on the wings will reduce the stress on the wing root when doing turns in fixed wing mode.

We used 4S 5Ah 65C batteries in a 2S 2P arrangement, giving us 10Ah of 8S in total to the ESCs. We didn't cross-couple the batteries between left and right side, although we may do so in future builds.

For quad motors we used NTM Prop Drive 50-60 motors at 380kV. That is overkill really, but we wanted this plane to stay steady while hovering in 25 knot winds, and for that you need a pretty high power to weight ratio to overcome the wind on the big wings. It certainly worked; flying this 15kg QuadPlane did not feel cumbersome at all. The plane responded very quickly to the sticks despite its size.

We wired it with 10AWG wire, which helped keep the voltage drop down, and tried to keep the battery cables short. Soldering lots of 10AWG connectors is a pain, but worth it. We mostly used 4mm bullets, with some HXT-4mm for the battery connectors. The Y connections needed to split the 8S across two ESCs was done with direct spliced solder connections.

For the ESCs we used RotorStar 120A HV. It seemed a good choice as it had plenty of headroom over the expected 30A hover current per motor, and 75A full throttle current. This ESC was our only major regret in the build, for reasons which will become clear later.

For props we used APC 18x5.5 propellers, largely because they are quite cheap and are big enough to be efficient, while not being too unwieldy in the build.

For fixed wing flight we didn't change anything over the setup we used for OBC'2014, apart from losing the ability to use the flaps due to the position of the quad arms. A VTOL aircraft doesn't really need flaps though, so it was no big loss. We expected the 35cc petrol engine would pull the plane along fine with our usual 20x10 prop.

We did reduce the maximum bank angle allowed in fixed wing flight, down from 55 degrees to 45 degrees. The aim was to keep the wing loading in reasonable limits in turns given we were pushing the airframe well beyond the normal flying weight. This worked out really well, with no signs of stress during fixed wing flight.

Test flights

The first test flight went fine. It was just a short hover test, with a nervous pilot (me!) at the sticks. I hadn't flown such a big quadcopter before (it is 15kg takeoff weight) and I muffed up the landing when I realised I should try and land off the runway to keep out of the way of other aircraft using the strip. I tried to reposition a few feet while landing and it landed heavier than it should have. No damage to anything except my pride.

The logs showed it was flying perfectly. Our initial guess of 0.5 for roll and pitch gains worked great, with the desired and achieved attitude matching far better than I ever expected to see in an aircraft of this type. The feel on the sticks was great too - it really responded well. That is what comes from having 10kW of power in an aircraft.

The build was meant to have a sustained hover time of around 4 minutes (using ecalc), and the battery we actually used for the flight showed we were doing a fair bit better than we predicted. A QuadPlane doesn't need much hover time.  For a one hour mission for OBC'2016 we reckon we need less than 2 minutes of VTOL flight, so 4 minutes is lots of safety margin.

Unfortunately the second test flight didn't go so well. It started off perfectly, with a great vertical takeoff, and a perfect transition to forward flight as the petrol engine engaged.

The plane was then flown for a bit in FBWA mode, and it responded beautifully. After that we switched to full auto and it flew the mission without any problems. It did run the throttle on the petrol engine at almost full throttle the entire time, as we were aiming for 28m/s and it was struggling a bit with the drag of the quad motors and the extra weight, but the tuning was great and we were already celebrating as we started the landing run.

The transition back to hover mode also went really well, with none of the issues we thought we might have had. Then during the descent for landing the rear left motor stopped, and we once again proved that a quadcopter doesn't fly well on 3 motors.

Unfortunately there wasn't time to switch back to fixed wing flight and the plane came down hard nose first. Rather a sad moment for the CanberraUAV team as this was the aircraft that had won the OBC for us in 2014. It was hard to see it in so many pieces.

We looked at the logs to try to see what had happened and Peter immediately noticed the tell tale sign of motor failure (one PWM channel going to maximum and staying there). We then looked carefully at the motors and ESCs, and after initially suspecting a cabling issue we found the cause was a burnt out ESC:

The middle FET is dead and shows burn marks. Tests later showed the FETs on either side in the same row were also dead. This was a surprise to us as the ESC was so over spec for our setup. We did discover one possible contributing cause:

That red quality control sticker is placed over the FET on the other side of the board from the dead one, and the design of the ESC is such that the heat from the dead FET has to travel via that covered FET to the heatsink. The sticker was between the FET and the heatsink, preventing heat from getting out.

All we can say for certain is the ESC failed though, so of course we started to think about motor redundancy. We're building two more large QuadPlanes now, one of them based on an OctaQuad design, in an X8 configuration with the same base airframe (a spare VQ Porter 2.7m that we had built for OBC'2014). The ArduPilot QuadPlane code already supports octa configs (along with hexa and several others). For this build we're using T-Motor MT3520-11 400kV motors, and will probably use t-motor ESCs. We will also still use the 18x5.5 props, just more of them!

Strangely enough, the better power to weight ratio of the t-motor motors means the new octa X8 build will be a bit lighter than the quad build. We're hoping it will come in at around 13.7kg, which will help reduce the load on the forward motor for fixed wing flight.

Many thanks to everyone involved in building this plane, and especially to Grant Morphett for all his building work and Jack Pittar for lots of good advice.

Building and flying a large QuadPlane has been a lot of fun, and we've learnt a lot. I hope to do a blog post of a more successful flight of our next QuadPlane creation soon!

David Rowe: Double Tuned Filter Notch Mystery

Thu, 2016-02-25 12:30

For the SM2000 I need a 146MHz Band Pass Filter (BPF). This led me to the Double Tuned Filter (DTF) or Double Tuned Circuit (DTC) – two air cored coils coupled to each other, and resonated with trimmer capacitors. To get a narrow pass band, the Q of the resonators must be kept high, which means an impedance of a few thousand ohms at resonance. So I connect the low impedance 50 ohm input and output by tapping the coils at half a turn.

Here is the basic schematic (source Vasily Ivanenko on Twitter):

For 146MHz the inductors are about 150nH, and the capacitors about 8pF. For 435MHz the inductors are about 50nH, and the capacitors about 3pF. The “hot end” of the inductors where the trimmer cap connects is high impedance at resonance, several thousand ohms.
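
As a rough sanity check (my own arithmetic, not from the original post), the usual LC resonance formula f = 1/(2π√(LC)) puts those component values close to the intended pass bands:

    import math

    def resonant_freq_mhz(inductance_h, capacitance_f):
        # f = 1 / (2 * pi * sqrt(L * C)), returned in MHz
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f)) / 1e6

    print(resonant_freq_mhz(150e-9, 8e-12))  # ~145 MHz for the 2m filter
    print(resonant_freq_mhz(50e-9, 3e-12))   # ~411 MHz, in the ballpark of the 435MHz design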

These filters are sometimes called Helical Filters – however this is a little confusing to me. My understanding is that Helical filters – although similar in physical construction – operate on transmission line principles and have the “hot” end of the inductors left open – no capacitor to ground.

Here is a photo of a 146MHz DTC filter, and its frequency response:

I understand how the band pass response is formed, but have been mystified by the notches in the response. In particular, what causes the low frequency notch just under the centre frequency? Here is a similar DTC I built for 435MHz and its frequency response, also with the mystery notches:

After a few days of messing about with Spice, Octave simulations, RF books, and bugging my RF brains trust (thanks Yung, Neil, Jeff), I accidentally stumbled across the reason. A small capacitor (around 1pF) between the hot end of the inductors creates the low frequency notch. Physically, this is parasitic capacitance coupling across the air gap between the coils.

Here is a LTspice simulation of the UHF version of the circuit. Note how the tapped inductors are modelled by a small L in series with the main inductance. The “K” directive models the coupling. Air cored transformers have a low coupling coefficient, I guessed at values of 0.02 to 0.1. You can see the notch just before resonance caused by the 1pF parasitic coupling between the two inductors. Without this capacitor in the model, the notch goes away.

The tapped inductor is used for an impedance match. An equivalent circuit is simply driving and loading the circuit with a high impedance, say 1500 ohms:

After some head scratching I found this useful model for transformers. It’s valid for low-k transformers where the primary and secondary inductance is the same (Ref):

Note it doesn’t model DC isolation but that’s OK for this circuit. While I don’t understand the derivation of this model, it does make intuitive sense. A loosely coupled air core transformer can be modeled as a high (inductive) impedance between the primary and secondary. We still get reasonable power transfer (a few dB insertion loss) as the impedance of the primary and secondary is also high at resonance.

Using the model in Fig 55 with k = 0.1, the top L in the PI arrangement is about 10L1 or 500nH. I also removed the 3pF capacitors in an attempt to isolate just the components responsible for the notch. So we get:

Finally! Now I can understand how the notch is created. We have 1pF in parallel with 500nH, which forms a parallel resonant circuit at 225MHz. Parallel resonance circuits have very high impedance at resonance, which blocks the signal, causing the notch.
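
A quick check of that explanation (again my own arithmetic, not from the post): the roughly 500nH top element of the PI model in parallel with roughly 1pF of parasitic capacitance resonates right where the notch appears.

    import math
    # parallel resonance of ~500nH with ~1pF of parasitic coupling capacitance
    print(1.0 / (2.0 * math.pi * math.sqrt(500e-9 * 1e-12)) / 1e6)  # ~225 MHz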

It took me a while to spot the parallel resonance. I had assumed a series resonance shorting the signal to ground, and wasted a lot of time looking for that. Parasitic inductance in the capacitors is often the reason for notches above resonance.

This suggests we can position the notch by adjusting the capacitance between the coils, either by spacing or adding a real capacitor. Positioning the notch could be useful, e.g. deeply attenuating energy at an image frequency before a mixer.

sthbrx - a POWER technical blog: Work Experience At Ozlabs

Wed, 2016-02-24 23:00

As a recent year twelve graduate my knowledge of computer science was very limited and my ability to write working programs was all but nonexistent. So you can imagine my excitement when I heard of an opening for work experience with IBM's internationally renowned Ozlabs team, or as I knew them, the Linux Gods. On my first day of working at Ozlabs I learnt more about programming than in six years of secondary education. I met most of the Ozlabs team and made connections that will certainly help with my pursuit of a career in IT. Because in business it's who you know more than what you know, and now that I know the guys at Ozlabs, I know how to write code and run it on my own Linux distro. And on top of all the extremely valuable knowledge, I am on a first-name basis with the Linux Gods at the LTC.

After my first week at Ozlabs I cloned this blog from Octopress and reformatted it for the Pelican static site generator. For those who don't know, Octopress is a Ruby based static site generator, so converting the embedded Ruby to Pelican's Python code was no easy task for this newbie. Luckily I had a team of some of the best software developers in the world to help and teach me their ways. After we sorted the change from Ruby to Python and I was able to understand both languages, I presented my work to the team. They then decided to throw me a curve ball, as they did not like any of Pelican's default themes; instead they wanted the original Octopress theme on the new blog. This is how I learnt that GitHub is my bestest friend, because some kind soul had already converted the Ruby theme into Python and it ran perfectly!

Now it was a simple task of re-formatting the Ruby-based text files into Markdown, which is Pelican compatible (which is why we chose Pelican in the first place). So now we had a working Pelican blog with the Octopress theme, with one issue: it was very annoying to navigate. Using my newly learned skills and understanding of Python I added tags, categories, web links and a navigation bar, and I started learning how to code in C. And it all worked fine! That is what I, a newbie, could accomplish in one week. I still have two more weeks left here and I have plenty of really interesting work left to do. This has been one of the greatest learning experiences of my life and I would do it again if I could! So if you are looking for experience in IT or software development, look no further, because you could be learning to code from the people who wrote the language itself. The Linux Gods.

James Purser: Some random thoughts on #senatereform

Tue, 2016-02-23 11:30

So Turnbull dropped the hammer on the senate yesterday with a set of changes to the way the senate is going to be elected. At the core of these changes will be the removal of the (and there really is no other way to describe them) anti-democratic preference deals that meant that voters lost control over where their preferences were directed if they voted above the line.

The changes have another benefit apart from returning full control over preferences to the voter. They also give all those groups that have sprung up to game the current system a good kick in the goolies.

This is a good thing.

Let me get one thing straight first up. I have no problem with the micro parties competing in elections, the more the merrier I say. It makes for a vibrant democracy when there are multiple, competing groups all representing their particular world views. Where the problem lies is that many of the micro parties that went to the last election were, in essence, empty shells: fronts set up to funnel preferences around until one of the "real" micro parties managed to scrape enough to get a place in the senate.

The new system will mean that parties won't be able to rely on dodgy deals done with preference farmers, or in fact some of the viler participants in our democratic process. Instead, they will have to compete in the open for the voters' preferences, which means that they're going to have to do more than just rely on their tiny, tiny bases. They're going to have to work their butts off to engage with the wider electorate.

As can be expected, there has been a lot of wailing and gnashing of teeth since the announcement. Lots of declarations of the death of democracy (again) and so on and so forth. Some of the loudest complaining of all appears to be coming from one of the biggest beneficiaries of preference farming: the ALP. As soon as Turnbull finished his press conference, the ALP was on the hustings accusing the Government of everything from causing confusion and dismay to a dastardly scheme to gerrymander the Senate so that it would permanently be in a state of Coalition control.

You know what? I am personally looking forward to my vote going exactly where I want it to go, and I am especially looking forward to making sure that my preferences don't accidentally end up propping up some of the worst of the worst this country has to offer.


OpenSTEM: Learning New Skills Faster | Washington Post

Mon, 2016-02-22 11:31

https://www.washingtonpost.com/news/wonk/wp/2016/02/12/how-to-learn-new-skills-twice-as-fast/

How to level up twice as quickly.

The short version of the research outcome is that when you are learning new skills, just repeating the exact same task is not actually the most efficient approach. Making subtle changes in the task/routine speeds up the learning process.

Considering our brains work and learn like neural nets (actually neural nets are modelled off our brains, but anyhow), I’m not surprised: repeating exactly the same thing will strengthen the related pathways. But the real world doesn’t keep things absolutely identical, so introducing small changes in the learning process will create a better pattern in our neural net.

Generally, people regard “dumb” repetition as boring, and I don’t blame them. It generally makes very little sense.

The researchers note that too much variation doesn’t work either – which again makes sense, if you think in neural net context: if your range of pathways is too broad, you’re not strengthening pathways very much. Quantity rather than quality.

So, a bit of variety is good for learning!

Chris Smart: Permanently setting SELinux context on files

Mon, 2016-02-22 09:29

I’m sure there are lots of howtos on the Internet for this, but…

Say you are running a web server like nginx and your log files are in a non-standard location, you may have problems starting the service because SELinux is blocking nginx from reading or writing to the files.

You can set the context of these files so that nginx will be happy:

[user@server ~]$ sudo chcon -Rv --type=httpd_log_t /srv/mydomain.com/logs/

That’s only temporary however, and the original context will be restored if you run restorecon or relabel your filesystem.

So you can do this permanently using the semanage command, like so:

[user@server ~]$ sudo semanage fcontext -a -t httpd_log_t "/srv/mydomain.com/logs(/.*)?"

Now you can use the standard SELinux command to restore the correct label, and it will use the new one you set above.

[user@server ~]$ sudo restorecon -rv /srv/

Pia Waugh: Finding the natural motivation for change

Mon, 2016-02-22 07:27

Update: I added a section on how competition can be motivating

I’ve had a lot of people and ideas in my life that have been useful to me so I wanted to share a theory I have applied in my work that might be useful to others. The concept of finding the ‘natural motivation’ of players involved is a key component when I’m planning any type of systemic change. This isn’t a particularly unique or new idea, but I am constantly surprised how rarely I see it adopted in practice, and how often things fail by not taking it into consideration. It is critical if you want to take a new idea from the domain of evangelists and into ‘business as usual’ because if you can’t embed something into the normal way people act and think, then whatever you are trying to do will be done reluctantly and at best, tacked on to normal processes as an afterthought.

In recent years I’ve been doing a lot of work to try to change systems, thinking and culture around open government, technology in government and open data, with some success. This is in part because I purposefully take an approach that tries to identify and tap into the natural motivation of all players involved. This means understanding how what I’m trying to do could benefit the very people who need to change their behaviours, and helping them want to do something new of their own volition. Why does this matter? If I asked you to spend an extra couple of hours a week at work, for no extra pay, doing something you don’t understand that seems completely unrelated to your job or life, you’d tell me to sod off. And understandably so! And yet we expect people and behaviours to simply comply if we change the rules. If I talked to you about how a new way of doing something would save you time, get a better outcome, save money or made life better in any way, you would be more interested. Then it simply becomes a matter of whether the effort is worth the benefit.

Die hard policy wonks will argue that you can always punish non compliance or create incentives if you are serious enough about the change you want to make. I would argue that you can force certain behaviour changes through punishment or reward, but if people aren’t naturally motivated to make the behaviour change themselves then the change will be both unsustainable and minimally implemented.

I’m going to use open data in government as my example of this in practice. Now I can hear a lot of people saying “well public servants should do open data by default because it is good for the community!” but remember the question above. In the first instance, if I’m asking someone to publish data without understanding why, they will see it as just extra work for no benefit – merely a compliance activity that gets in the way of their real work. People ask the understandable question of why would anyone want to divert resources and money into open data when it could be used to do something ‘real’ like build a road, deliver a better service, pay a salary, etc? Every day public servants are being asked to do more with less, so open data appears at first glance like a low priority. If the community and economy were to benefit from open data, then we had to figure out how to create a systemic change in government to publish open data naturally, or it would never scale or be sustainable.

When I took over data.gov.au, there was a reasonable number of datasets published but they weren’t being updated and nothing new was being added. It was a good first attempt, but open data had not really been normalised in agencies, so data publishing was sporadic. I quickly realised if open data was just seen as a policy and compliance issue, then this would never really change and we would hit a scaling issue of how much we could do ourselves. Through research, experimenting and experience, we did find that open data can help agencies be more effective, more efficient and more able to support an ecosystem of information and service delivery rather than all the pressure being on agencies to do everything. This was a relief because if there was no benefit to the public service itself, then realistically open data would always be prioritised lower than other activities, regardless of the political or policy whims of the few.

So we started working with agencies on the basis that although open data was the policy position that agencies were expected to adopt, there were real benefits to agencies if they adopted an open data approach. We would start an agency on the open data journey by helping them identify datasets that save them time and money, looking at resource intensive requests for data they regularly get and how to automate the publishing of that data. This then frees up resources of which a proportion can often be justified to start a small open data team. Whatever the agency motivations, there is always an opportunity for open data to support that goal if integrated properly. We focused on automation, building open data into existing processes (rather than creating a new process), supporting and promoting public reuse of data (GovHack was particularly helpful for this), identifying community priority datasets, raising public confidence in using government open data and removing barriers for publishing data. We knew centralised publishing would never scale, so we focused our efforts on a distributed publishing model where the central data.gov.au team provided technical support and a free platform for publishing data, but agencies did their own publishing with our help. Again this meant we had to help agencies understand how useful open data was to them so they could justify putting resources towards their own data publishing capacity. We knew agencies would need to report on their own success and progress with open data, so we also ensured they could access their own data utilisation analytics, which is also publicly available for a little extra motivation.

We collected examples from agencies on the benefits to help inform and encourage other agencies, and found the key agency benefits of open data were broadly:

  1. Efficiency – proactively publishing data that is commonly asked for in an automated way frees up resources.
  2. Innovation – once data is published, so long as it is published well and kept up to date, other people and organisations will use the data to create new information, analysis and services. This innovation can be adopted by the agency, but it also takes the pressure off the agency to deliver all things to all people, by enabling others to scratch their own itch.
  3. Improved services – by publishing data in a programmatically accessible way, agencies found cheaper and more modular service delivery was possible through reusable data sources. Open data is often the first step for agencies on the path to more modular and API driven way of doing things (which the private sector embraced a decade ago). I believe if we could get government data, content and services API enabled by default, we would see dramatically cheaper and better services across all governments, with the opportunity for a public ecosystem of cross jurisdictional service and information delivery to emerge.

To extend the natural motivation consideration further, we realised that unless data was published in a way that people in the community could actually find and use, then all the publishing in the world would not help. We had to ensure the way data was published supported the natural motivation of people who want to use data, and this would in turn create a feedback loop to encourage greater publishing of data. We adopted a “civic hacker empathy” approach (with credit to Chris Gough for the concept) so that we always put ourselves in the shoes of those wanting to use data to prioritise how to publish it, and to inform and support agencies to publish data in a way that could be easily consumed. This meant agencies starting on the open data journey were not only encouraged to adopt good technical practices from day 1, but were clearly educated on the fact they wouldn’t yield the benefits from open data without publishing data well.

I should also mention that motivation doesn’t need to always come from within the individual person or the organisation. Sometimes motivation can come from a little healthy competition! I have had people in agencies utterly uninterested in open data that I’ve decided to not push (why spend effort on a closed door when there are partially open or open doors available!) who have become interested when other agencies have had some success. Don’t underestimate the power of public successes! Be as loud as you can about successes you have as this will build interest and demand, and help bring more people on your journey.

So to wrap up, I’ve been amazed how many people I meet, particularly in the federal government, who think they can change behaviour by simply having a policy, or law, or a financial incentive. The fact is, people will generally only do something because they want to, and this applies as much in the work place as anywhere else. If you try to force people to do something that they don’t want to, they will find myriad ways to avoid it or do the bare minimum they have to, which will never yield the best results. Every single barrier to open data we came across would magically disappear if the agency and people involved were naturally motivated to do open data.

If you want to make real change, I encourage you to take an empathetic approach, think about all the players in the system, and how to ensure they are naturally motivated to change. I always tell the data.gov.au team that we always need to ensure the path of technical integrity is the path of least resistance, because this ensures an approach which is good for both the data publishers and data consumers. It goes without saying that a change is easiest to encourage when it has integrity and provides genuine benefits. In the case of open data, we simply needed to help others come on the journey for the idea to flourish. I’m proud to say the data.gov.au team have managed to dramatically increase the amount of open data available in Australia as well as support a rapidly growing capacity and appetite for open data throughout the public service. Huge kudos to the team! With the data.gov.au team now moved to the Department of Prime Minister and Cabinet, and merged with the spatial data branch from the Department of Communications, we have a stronger than ever central team to continue the journey.

Note: I should say I’m currently on maternity leave till the end of 2016, hence the time to publish some of these ideas. They are my own thoughts and not representative of anyone else. I hope they are useful.

Clinton Roy: clintonroy

Sun, 2016-02-21 14:28

I’m planning on doing this walk before linux.conf.au 2017. I’m interested in having a couple of experienced hikers join me. It starts at Melaleuca and finishes at Cockle Creek.

Important Points

This is a reasonably serious hike, you need to have plenty of multi day hike experience to join me.

  • Estimated dates: 8th Jan to 17th Jan 2017
  • Eighty five kilometre hike done over seven days.
  • Tides and wet weather may add a day or three.
  • Fly in to the starting point (yeah, it’s remote)
  • Carrying your own food and sterilising your own water.
  • Everything you take in, you take out.
  • Taking care of your own poop along the way (think shovel).
  • Camping overnight, no cabins
  • Bus back to Hobart (three hours)
Todo:
  • Organise Park pass.
  • Organise flights.
  • Organise bus.
  • Organise cooking fuel.
  • Try not to scare everyone from joining me.
  • Order guide book.
  • Beg/borrow/steal/hire an EPIRB.

Lev Lafayette: Multicore World 2016 : A Summary

Sun, 2016-02-21 11:31

Multicore World is a small annual international conference held in New Zealand/Aotearoa sponsored by OpenParallel. I have been fortunate enough to act as MC for all but one of the five conferences since its inception, this year also presenting a short paper on the introduction of the new HPC/Cloud hybrid at the University of Melbourne.


Binh Nguyen: A New Cold War?, Economic Crisis, and More

Sun, 2016-02-21 03:04

Most people in the defense/intelligence community view China/Russia as more of a threat from a defensive perspective than a political, financial, etc... threat. However, if you look deeper it's clearer that there are greater strategies/tactics at play. While the world appears to be somewhat stable there is clearly a power struggle going on behind the scenes. If we look deeper there are a few

Anthony Towns: Bitcoin Fees vs Supply and Demand

Thu, 2016-02-18 20:27

Continuing from my previous post on historical Bitcoin fees… Obviously history is fun and all, but it’s safe to say that working out what’s going on now is usually far more interesting and useful. But what’s going on now is… complicated.

First, as was established in the previous post, most transactions are still paying 0.1 mBTC in fees (or 0.1 mBTC per kilobyte, rounded up to the next kilobyte).

Again, as established in the previous post, that’s a fairly naive approach: miners will fill blocks with the smallest transactions that pay the highest fees, so if you pay 0.1 mBTC for a small transaction, that will go in quickly, but if you pay 0.1 mBTC for a large transaction, it might not be included in the blockchain at all.

It’s essentially like going to a petrol station and trying to pay a flat $30 to fill up, rather than per litre (or per gallon); if you’re riding a scooter, you’re probably overpaying; if you’re driving an SUV, nobody will want anything to do with you. Pay per litre, however, and you’ll happily get your tank filled, no matter what gets you around.

But back in the bitcoin world, while miners have been using the per-byte approach since around July 2012, as far as I can tell, users haven’t really even had the option of calculating fees in the same way prior to early 2015, with the release of Bitcoin Core 0.10.0. Further, that release didn’t just change the default fees to be per-byte rather than (essentially) per-transaction; it also dynamically adjusted the per-byte rate based on market conditions — providing an estimate of what fee is likely to be necessary to get a confirmation within a few blocks (under an hour), or within ten or twenty blocks (two to four hours).
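
To make the difference between the two approaches concrete, here is a minimal sketch (my own illustration, not any wallet's actual code; the 0.27 mBTC/kB market rate is just an example figure):

    import math

    def legacy_fee_mbtc(tx_size_bytes, rate_mbtc_per_kb=0.1):
        # flat 0.1 mBTC per kilobyte, rounded up to the next whole kilobyte
        return rate_mbtc_per_kb * math.ceil(tx_size_bytes / 1000.0)

    def per_byte_fee_mbtc(tx_size_bytes, market_rate_mbtc_per_kb):
        # fee proportional to transaction size, at an estimated market rate
        return market_rate_mbtc_per_kb * tx_size_bytes / 1000.0

    print(legacy_fee_mbtc(226))            # 0.1 mBTC for a small transaction
    print(legacy_fee_mbtc(1100))           # 0.2 mBTC once it ticks over one kilobyte
    print(per_byte_fee_mbtc(226, 0.27))    # ~0.06 mBTC at a 0.27 mBTC/kB market rate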

There are a few sites around that make these estimates available without having to run Bitcoin Core yourself, such as bitcoinfees.21.co, or bitcoinfees.github.io. The latter has a nice graph of recent fee rates:

You can see from this graph that the estimated fee rates vary over time, both in the peak fee to get a transaction confirmed as quickly as possible, and in how much cheaper it might be if you’re willing to wait.

Of course, that just indicates what you “should” be paying, not what people actually are paying. But since the blockchain is a public ledger, it’s at least possible to sift through the historical record. Rusty already did this, of course, but I think there’s a bit more to discover. There’s three ways in which I’m doing things differently to Rusty’s approach: (a) I’m using quantiles instead of an average, (b) I’m separating out transactions that pay a flat 0.1 mBTC, (c) I’m analysing a few different transaction sizes separately.

To go into that in a little more detail:

  • Looking at just the average values doesn’t seem really enlightening to me, because it can be massively distorted by a few large values. Instead, I think looking at the median value, or even better a few percentiles, is likely to work better. In particular I’ve chosen to work with “sextiles”, ie the five midpoints you get when splitting each day’s transactions into sixths, which gives me the median (50%), the tertiles (33% and 66%), and two additional points showing me slightly more extreme values (16.7% and 83.3%). There’s a small sketch of this calculation just after this list.
  • Transactions whose fees don’t reflect market conditions at all, aren’t really interesting to analyse — if there are enough 0.1 mBTC, 200-byte transactions to fill a block, then a revenue maximising miner won’t mine any 400-byte transactions that only pay 0.1 mBTC, because they could fit two 200-byte transactions in the same space and get 0.2 mBTC; and similarly for transactions of any size larger than 200-bytes. There’s really nothing more to it than that. Further, because there are a lot of transactions that are essentially paying a flat 0.1 mBTC fee, they make it fairly hard to see what the remaining transactions are doing — but at least it’s easy to separate them out.
  • Because the 0.10 release essentially made two changes at once (namely, switching from a hardcoded default fee to a fee that varies on market conditions, and calculating the fee based on a per-byte rate rather than essentially a per-transaction rate) it can be hard to see which of these effects are taking place. By examining the effect on transactions of a particular size, we can distinguish the effects however: using a per-transaction fee will result in different transaction sizes paying different per-byte rates, while using a per-byte fee will result in the transactions of different sizes harmonising at a particular rate. Similarly, using fee estimation will result in the fees for a particular transaction size varying over time; whereas the average fee rate might vary over time simply due to using per-transaction fees while the average size of transactions varies. I’ve chosen four sizes: 220-230 bytes, which is the size of a transaction spending a single, standard, pay-to-public-key-hash (P2PKH) input (with a compressed public key) to two P2PKH outputs; 370-380 bytes, which matches a transaction spending two P2PKH inputs to two P2PKH outputs; 520-530 bytes, which matches a transaction spending three P2PKH inputs to two P2PKH outputs; and 870-1130 bytes, which catches transactions around 1kB.
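
Here is a small sketch of the sextile calculation mentioned above (my own illustration of the approach; the sample fee rates are made up):

    import numpy as np

    def sextiles(fee_rates_mbtc_per_kb):
        # the five cut points that split one day's fee rates into sixths:
        # 16.7%, 33.3%, 50% (the median), 66.7% and 83.3%
        return np.percentile(fee_rates_mbtc_per_kb, [100.0 * i / 6 for i in range(1, 6)])

    print(sextiles([0.10, 0.12, 0.19, 0.27, 0.27, 0.45, 0.45, 0.90]))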

The following set of graphs take this approach, with each transaction size presented as a separate graph. Each graph breaks the relevant transactions into sixths, selecting the sextiles separating each sixth — each sextile is then smoothed over a 2 week period to make it a bit easier to see.

We can make a few observations from this (click the graph to see it at full size):

  • We can see that prior to June 2015, fees were fairly reliably set at 0.1 mBTC per kilobyte or part thereof — so 220B transactions paid 0.45 mBTC/kB, 370B transactions paid 0.27 mBTC/kB, 520B transactions paid 0.19 mBTC/kB, and transactions slightly under 1kB paid 0.1 mBTC/kB while transactions slightly over 1kB paid 0.2 mBTC/kB (the 50% median line in between 0.1 mBTC/kB and 0.2 mBTC/kB is likely an artifact of the smoothing). These fees didn’t take transaction size into account, and did not vary depending on market conditions — so they did not reflect changes in demand, how full blocks were, the price of Bitcoin in USD, the hashpower used to secure the blockchain, or any similar factors that might be relevant.
  • We can very clearly see that there was a dramatic response to market conditions in late June 2015 — and not coincidentally this was when the “stress tests” or “flood attack” occurred.
  • It’s also pretty apparent the market response here wasn’t actually very rational or consistent — eg 220B transactions spiked to paying over 0.8 mBTC/kB, while 1000B transactions only spiked to a little over 0.4 mBTC/kB — barely as much as 220B transactions were paying prior to the stress attack. Furthermore, even while some transactions were paying significantly higher fees, transactions paying standard fees were still going through largely unhindered, making it questionable whether paying higher fees actually achieved anything.
  • However, looking more closely at the transactions with a size of around 1000 bytes, we can also see there was a brief period in early July (possibly a very brief period that’s been smeared out due to averaging) where all of the sextiles were above the 0.1 mBTC/kB line — indicating that there were some standard fee paying transactions that were being hindered. That is to say that it’s very likely that during that period, any wallet that (a) wasn’t programmed to calculate fees dynamically, and (b) was used to build a transaction about 1kB in size, would have produced a transaction that would not actually get included in the blockchain. While it doesn’t meet the definition laid out by Jeff Garzik, I think it’s fair to call this a “fee event”, in that it’s an event, precipitated by fee rates, that likely caused detectable failure of bitcoin software to work as intended.
  • On the other hand, it is interesting to notice that a similar event has not yet reoccurred since; even during later stress attacks, or Black Friday or Christmas shopping rushes.

As foreshadowed, we can redo those graphs with transactions paying one of the standard fees (ie exactly 0.1 mBTC, 0.01 mBTC, 0.2 mBTC, 0.5 mBTC, 1 mBTC, or 10 mBTC) removed:

As before, we can make a few observations from these graphs:

  • First, they’re very messy! That is, even amongst the transactions that pay variable fees, there’s no obvious consensus on what the right fee to pay is, and some users are paying substantially more than others.
  • In early February, which matches the release of Bitcoin Core 0.10.0, there was a dramatic decline in the lowest fees paid — which is what you would predict if a moderate number of users started calculating fees rather than using the defaults, and found that paying very low fees still resulted in reasonable confirmation times. That is to say, wallets that dynamically calculate fees have substantially cheaper transactions.
  • However, those fees did not stay low, but have instead risen over time — roughly linearly. The blue dotted trend line is provided as a rough guide; it rises from 0 mBTC/kB on 1st March 2015, to 0.27 mBTC/kB on 1st March 2016. That is, market driven fees have roughly risen to the same cost per-byte as a 2-input, 2-output transaction, paying a flat 0.1 mBTC.

At this point, it’s probably a good idea to check that we’re not looking at just a handful of transactions when we remove those paying standard 0.1 mBTC fees. Graphing the number of transactions per day of each type (ie, total transactions, 220 byte transactions (1-input, 2-output), 370 byte transactions (2-input, 2-output), 520 byte transactions (3-input, 2-output), and 1kB transactions) shows that they all increased over the course of the year, and that there are far more small transactions than large ones. Note that the top-left graph has a linear y-axis, while the other graphs use a logarithmic y-axis — so that each step in the vertical indicates a ten-times increase in number of transactions per day. No smoothing/averaging has been applied.

We can see from this that by and large the number of transactions of each type have been increasing, and that the proportion of transactions paying something other than the standard fees has been increasing. However it’s also worth noting that the proportion of 3-input transactions using non-standard fees actually decreased in November — which likely indicates that many users (or the maintainers of wallet software used by many users) had simply increased the default fee temporarily while concerned about the stress test, and reverted to defaults when the concern subsided, rather than using a wallet that estimates fees dynamically. In any event, by November 2015, we have at least about a thousand transactions per day at each size, even after excluding standard fees.

If we focus on the sextiles that roughly converge to the trend line we used earlier, we can, in fact make a very interesting observation: after November 2015, there is significant harmonisation on fee levels across different transaction sizes, and that harmonisation remains fairly steady even as the fee level changes dynamically over time:

Observations this time?

  • After November 2015, a large bunch of transactions of different sizes were calculating fees on a per-byte basis, and tracking a common fee-per-byte level, which has both increased and decreased since then. That is to say, a significant number of transactions are using market-based fees!
  • The current market rate is slightly lower than what a 0.1 mBTC, 2-input, 2-output transaction is paying (ie, 0.27 mBTC/kB).
  • The recent observed market rates correspond roughly to the 12-minute or 20-minute fee rates in the bitcoinfees graph provided earlier. That is, paying higher rates than the observed market rates is unlikely to result in quicker confirmation.
  • There are many transactions paying significantly higher rates (eg, 1-input 2-output transactions paying a flat 0.1 mBTC).
  • There are also many transactions paying lower rates (eg, 3-input 2-output transactions paying a flat 0.1 mBTC) that can expect delayed confirmation.

Along with the trend line, I’ve added four grey horizontal guide lines on those graphs: one at each of the standard fee rates for the transaction sizes we’re considering (0.1 mBTC/kB for 1000 byte transactions, 0.19 mBTC/kB for 520 byte transactions, 0.27 mBTC/kB for 370 byte transactions, and 0.45 mBTC/kB for 220 byte transactions).

An interesting fact to observe is that when the market rate goes above any of the grey dashed lines, transactions of the corresponding size that just pay the standard 0.1 mBTC fee become less profitable to mine than transactions paying fees at the market rate. In a very particular sense this induces a “fee event” of the type mentioned earlier. That is, with the market rate above 0.1 mBTC/kB, transactions of around 1000 bytes that pay 0.1 mBTC will generally suffer delays. Following the graph, for the transactions we’re looking at there have already been two such events: a fee event in July 2015, where 1000 byte transactions paying standard fees began getting delayed regularly because market fees began exceeding 0.1 mBTC/kB (ie, the 0.1 mBTC fee divided by the 1 kB transaction size); and a second fee event during November impacting 3-input, 2-output transactions, because market fees exceeded 0.19 mBTC/kB (ie, 0.1 mBTC divided by 0.52 kB). Per the graph, a few of the trend lines are lingering around 0.27 mBTC/kB, indicating that a third fee event is approaching, where 370 byte transactions (ie, 2-input, 2-output) paying standard fees will start to suffer delayed confirmations.
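
To make the arithmetic behind those guide lines and fee events concrete, here is a minimal sketch in TypeScript; the sizes and the flat 0.1 mBTC fee come from the figures above, while the function and variable names are purely illustrative:

    // Effective fee rate of a flat 0.1 mBTC fee at each transaction size, and
    // a check for whether a given market rate has overtaken it -- that is, a
    // "fee event" for transactions of that size.
    const FLAT_FEE_MBTC = 0.1;

    function flatFeeRate(sizeKb: number): number {
      return FLAT_FEE_MBTC / sizeKb; // mBTC per kB
    }

    function feeEventReached(sizeKb: number, marketRateMbtcPerKb: number): boolean {
      // Once the market rate exceeds the flat-fee rate, standard-fee
      // transactions of this size are outbid and start to be delayed.
      return marketRateMbtcPerKb > flatFeeRate(sizeKb);
    }

    // Sizes from the post: ~1 kB, 3-in/2-out (~0.52 kB), 2-in/2-out (~0.37 kB)
    // and 1-in/2-out (~0.22 kB), checked against a 0.25 mBTC/kB market rate.
    // The flat-fee rates come out at 0.10, 0.19, 0.27 and 0.45 mBTC/kB, and
    // only the first two are overtaken -- matching the two fee events so far.
    for (const sizeKb of [1.0, 0.52, 0.37, 0.22]) {
      console.log(sizeKb, flatFeeRate(sizeKb).toFixed(2), feeEventReached(sizeKb, 0.25));
    }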

However, the grey lines can also be considered as providing “resistance” to fee increases: for the market rate to go above 0.27 mBTC/kB, there must be more transactions attempting to pay the market rate than there were 2-input, 2-output transactions paying 0.1 mBTC. And there were a lot of those (tens of thousands), which means market fees will only be able to increase with wider adoption of software that calculates fees using dynamic estimates.

It’s not clear to me why fees harmonised so effectively as of November; my best guess is that it’s just the result of gradually increasing adoption, accentuated by my choice of quantiles to look at, along with averaging those results over a fortnight. At any rate, most of the interesting activity seems to have taken place around August:

  • Bitcoin Core 0.11.0 came out in July with some minor fee estimation improvements (a minimal sketch of querying a node’s fee estimator follows this list).
  • Electrum added dynamic fees in 2.4.1 in August.
  • Copay (by BitPay) added dynamic fees in 1.1.3 in August.
  • Mycelium added per-byte fees in 2.5.8 in December.
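
For concreteness, here is a minimal sketch of the sort of dynamic fee query a wallet backed by a full node can make, using Bitcoin Core’s estimatefee RPC (available since 0.10). It assumes TypeScript running on a recent Node with a global fetch, and the URL and credentials are placeholders for a typical local node rather than anything taken from this post:

    // Ask a local Bitcoin Core node for a dynamic fee estimate. estimatefee
    // returns the estimated fee in BTC per kB needed for confirmation within
    // the given number of blocks, or -1 if the node has too little data.
    async function estimateFeePerKb(blocks: number): Promise<number> {
      const res = await fetch("http://127.0.0.1:8332/", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": "Basic " + btoa("rpcuser:rpcpassword"), // placeholder credentials
        },
        body: JSON.stringify({ jsonrpc: "1.0", id: "fees", method: "estimatefee", params: [blocks] }),
      });
      const { result } = await res.json();
      return result; // BTC/kB; multiply by 1000 for mBTC/kB
    }

    // e.g. target confirmation within about six blocks (roughly an hour)
    estimateFeePerKb(6).then(fee => console.log(`estimated fee: ${fee * 1000} mBTC/kB`));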

Of course, many wallets still don’t do per-byte, dynamic fees as far as I can tell:

  • Blockchain.info appears to just default to 0.1 mBTC, and its API seems to require a minimum fee of 0.1 mBTC
  • coinbase.com pays 0.3 mBTC per transaction (from what I’ve seen, they tend to use 3-input, 3-output transactions, which presumably means about 600 bytes per transaction for a rate of perhaps 0.5 mBTC/kB)
  • Airbitz seems to choose a fee based on transaction amount rather than transaction size
  • myTrezor seems to have a default 0.1 mBTC fee, which can optionally be raised to 0.5 mBTC
  • bitcoinj does not do per-byte fees, or calculate fees dynamically (although an app based on bitcoinj might do so)

Summary

  • Many wallets still don’t calculate fees dynamically, or even calculate fees at a per-byte level.
  • A significant number of wallets are dynamically calculating fees at a per-byte granularity.
  • Wallets that dynamically calculate fees pay substantially lower fees than those that don’t.
  • Paying higher than the dynamically calculated market rate generally will not get your transaction confirmed any quicker.
  • Market-driven fees have risen to about the same fee level that wallets used for 2-input, 2-output transactions at the start of 2015.
  • Market-driven fees will only be able to rise further with increased adoption of wallets that support them.
  • There have already been two fee events for wallets that don’t do market-based fees and instead pay a flat 0.1 mBTC. For those wallets, since about July 2015 fees have been high enough to cause transactions near 1000 bytes to suffer delayed confirmations, and since about November 2015 fees have been high enough to cause transactions above 520 bytes (ie, 3-input, 2-output) to be delayed. A third fee event is very close, affecting transactions above 370 bytes (ie, 2-input, 2-output).

Lev Lafayette: Foreword to Sequential and Parallel Programming with C and Fortran by Dr. John L. Gustafson

Thu, 2016-02-18 10:31

It is finally time for a book like this one.

When parallel programming was just getting off the ground in the late 1960s, it started as a battle between starry-eyed academics who envisioned how fast and wonderful it could be, and cynical hard-nosed executives of computer companies who joked that “parallel computing is the wave of the future, and always will be.”

read more

Silvia Pfeiffer: WebRTC predictions for 2016

Thu, 2016-02-18 04:51

I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.

WebRTC Browser support

I’m quite optimistic when it comes to browser support for WebRTC. We have seen Edge bring in initial support last year and Apple looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:

  • Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
  • Firefox of course continues to support both VP8/VP9 and H.264/H.265
  • Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
  • Safari will enter the WebRTC space but only with H.264/H.265 support

Codec Observations

With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.

However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.

Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.

I’m going to go out on a limb and say that at this stage, Google is going to strongly consider improving the case for VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will prove only temporary, overshadowed by vastly better video quality when using VP9.

The Enterprise Boundary

Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary: firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.

The telco world has come up with the concept of the SBC (session border controller). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services like transcoding.

SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature: probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest group conference call.

We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.

Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.

We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls, and connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.

What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and lets WebRTC media packets through, perhaps with some configuration in the browser or the Web app to add this service as another type of TURN server.
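
As a rough illustration of why that could stay simple for Web developers, this is how an application already tells the browser about TURN servers today via the standard RTCPeerConnection API; the hostnames and credentials are placeholders, and the idea of exposing an enterprise forwarding service this way is hypothetical:

    // Standard WebRTC configuration: the app hands the browser a list of ICE
    // servers, and a TURN relay reachable on port 443 over TLS is the usual
    // way through restrictive firewalls today.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: "stun:stun.example.com:3478" },
        {
          urls: "turns:turn.example.com:443?transport=tcp",
          username: "webrtc-user",
          credential: "secret",
        },
        // A hypothetical enterprise media-forwarding service could be added
        // here as just another TURN entry, with no other changes to the app.
      ],
    });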

I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.

Summary

So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress: interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.

It’s worth mentioning Philipp Hancke’s tweet reply to my post:

— we saw some clever people come up with a solution already. Now it needs to be implemented

James Purser: Catalyst and pseudoscience

Tue, 2016-02-16 22:31

Hrmm, I think I'll let Upulie lead on tonight's episode of Catalyst.

"It misrepresents scientific risk assessment; cherry picks data and is carefully constructed incitement"

For those of you who didn't watch or haven't yet caught it on iView, tonight's episode of Catalyst was called "Wi-fried" and was billed as exploring the potential health effects of the surge in wifi-enabled devices.

The first warning sign that this episode was going to be problematic was the presenter involved. Dr Maryanne Demasi presented the controversial 2013 double episode on statins, a story that claimed to expose the truth about statins but really pushed a line of pseudoscience and misinformation from people who had a vested interest in selling "alternative" therapies.

Leaving aside the title, the second warning sign came with the intro to the episode.

NARRATION

You can't see it or hear it, but wi-fi blankets our homes, our cities and our schools.

Dr Devra Davis

Children today are growing up in a sea of radiofrequency microwave radiation that did not exist five years ago.

NARRATION

Our safety agencies dispute that wireless devices like mobile phones cause harm.

Dr Ken Karipidis

Don't think it's good enough to say at the moment that mobile phone use does cause cancer.

Dr Devra Davis

Cell phones emit pulsed radiation...

NARRATION

But some of the world's leading scientists and industry insiders are breaking ranks to warn us of the risks.

Let's have a look at that, shall we?

We start with a slightly scary statement: wifi is blanketing our homes and schools.

Then we lead in with the "expert" reinforcing the previous scary statement with another scary statement.

The Narrator gives us the "but the authorities say we shouldn't be worried" line.

Then we return to the "expert" with another scary-sounding statement, and the Narrator breathlessly reveals that some brave scientists are Breaking Ranks! to tell us the truth.

Ye gods. This rubbish belongs on A Current Affair, not the ABC.

Ketan Joshi wrote up a guide to tonight's episode; in fact, he did it yesterday, and it matches up remarkably well with what actually happened. If you haven't seen the episode, I suggest you sit down with the post open and tick off the elements as the episode progresses.

This sort of thing really annoys me. It's hard enough to combat pseudoscience like this without what should be the most respected science program in the country joining the ranks of the kind of people who push products like anti-radiation patches for mobile phones. There are already reports of people, worried by tonight's episode, trying to contact their schools to express concerns about their kids being swamped by evil cancer-causing wifi.

Yup, parents are trying to organise an urgent meeting on Wifi with the principal. Woohoo @MaryanneDemasi, you did it. #catalyst

— ❤️ 그렝이론 (@glengyron) February 16, 2016

The crazy thing is that of all the mainstream media orgs in the country, the ABC is generally the best one around. It even has a dedicated science section that I use to grab stories for #lunchtimescience, because it's a source I can trust. Things like tonight's episode of Catalyst can only damage the reputation that has been built up over the years by good, solid science reporting.

So where does that leave us?

Well, in my opinion, Catalyst as a flagship program is useless. I cannot trust that it will deal with scientific controversies with rigour if it can't even handle something as simple as the wifi thing.

Bring back Beyond 2000.

Blog Categories: sciencecommunication

Craige McWhirter: LCA2016 Revisited - Copyleft For the Next Decade

Tue, 2016-02-16 16:11

Bradley Kuhn presented his thoughts on a comprehensive plan for the next decade of Copyleft.

  • Copyleft is a strategy to defend Free Software.
  • Gave an example of OpenStack as a project where proprietary updates are not being released back upstream.
  • Quoted Martin Fink from Hewlett Packard Enterprise as stating that we should start forking non-Copyleft projects as AGPL projects. Taunted HP by stating he looked forward to HP's AGPL fork of OpenStack. I could not find a reference for this quote but did find this strongly worded post: Change Your Default from Hewlett Packard Enterprise.
  • Proportionate reduction in Copyleft software the reason it's "not working".
  • Copyleft is complementary, not contradictory, to other strategies.
  • Referred to his LCA2015 talk on attacks to Copyleft.
  • Rhetoric attacks replaced with co-option and astroturfing.

"If Copyleft is not enforced, there is no observable difference between Copyleft and non-Copyleft." -- Bradley Kuhn

  • Doesn't expect companies will ever do enforcement on behalf of the community.
  • Opposition's political strategy:
    • Convince developers that Copyleft is "old-hat" or harms adoption.
      • Fund them to write non-Copyleft code.
    • For projects that are Copyleft:
      • Don't let developers keep their own copyrights.
      • Don't enforce Copyleft on behalf of the community.
      • Vilify community enforcers.
      • If enforcing, sell out for other goals (ie, money).
  • Spoke about some of the issues around corporate lawyers.
  • Anti-Copyleft lawyers are amazingly organised.
  • Alleges there are secret meetings where they actively workshop anti-Copyleft strategies.
  • Provided two examples of this.

Always Have a Plan

  • Copyleft opponents are more savvy than they were previously.
  • Demand from your employer the right to own the copyright on your work.
  • Copyleft is better enforced by multiple copyright holders.
  • Return to volunteer coding.
  • Write AGPL'd software, Free Software, on your own time.
  • Be willing to fork non-Copyleft projects.
  • Listed examples of companies replacing Copyleft projects with their own.
  • Believes that the Linux kernel modules will be the next battle ground. Perhaps the definitive battleground.
  • More and more GPL violators are deliberately violating.
  • Their goal is to test and extend the limits of violation.
  • Enforcement is a zero sum game.
    • If Copyleft wins, software that should be free is liberated.
    • If they win, software that should be free remains proprietary.
  • Enforcement done for profit is a path to corruption.

How to Help

  • Developers join the enforcement coalitions
  • Financially support the Software Freedom Conservancy.
  • Complete Corresponding Source has to work.
  • Software Freedom is the underdog.
  • Called for a community of individuals to stand up for the GPL.
  • A community of cooperating individuals is our strength
  • Which is how Free Software started.
  • Conservancy can't act alone.