Planet Linux Australia


Linux Users of Victoria (LUV) Announce: Software Freedom Day Meeting 2015

Thu, 2016-01-28 23:30
Start: Sep 19 2015 11:00
End: Sep 19 2015 16:00
Location: Electron Workshop, 31 Arden Street, North Melbourne


There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!

This month, Free Software Melbourne[1], Linux Users of Victoria[2] and Electron Workshop[3] are joining forces to bring you the local Software Freedom Day event for Melbourne.

The event will take place on Saturday 19th September between 11am and 4pm at:

Electron Workshop

31 Arden Street, North Melbourne.


Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Ben Martin: CNC Control with MQTT

Thu, 2016-01-28 20:22
I recently upgraded a 3040 CNC machine by replacing the parallel port driven driver board with a smoothieboard. The new board runs a 100MHz Cortex-M MCU and has USB and Ethernet interfaces, so it is much more modern. This led me to come up with a new controller to move the cutting head, all without needing to update the controller box or recompile or touch the smoothieboard firmware.

I built a small controller box with 12 buttons on it and shoved an esp8266 into that box along with an MCP23017 chip to allow access to 16 GPIO lines over TWI from the esp MCU. The firmware on the esp is fairly simple: it enables the internal pull-ups on all GPIO pins of the 23017 chip and sends an MQTT message when each button is pressed and released. The time since MCU boot in milliseconds is sent as the MQTT payload. This way one can work out whether it was a short or longer button press and move the cutting head a proportional distance.

The web interface for smoothie provides a pronterface-like interface for manipulating where the cutting head is on the board and the height it is at. Luckily the firmware is open source, so I could read the unobfuscated JavaScript that the web interface uses and work out the correct POST request for sending gcode commands directly to the smoothieboard on the CNC.

The interesting design here is using software on the server to make the controller box and the smoothieboard meet. On the server, MQTT messages are turned into POST requests using mqtt-launcher. The massive benefit is that I can change what each button does on the CNC without needing to reprogram the controller or modify the CNC firmware: just change the mqtt-launcher config file and all is well. So far MQTT is the best "IoT" tech I've had the privilege to use.
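As a sketch of that server-side glue, the mapping might look like the following. Everything here (the topic names, the press-duration thresholds, the POST endpoint URL) is an illustrative assumption, not the actual config used in this build:

```shell
# Map a button topic plus a press-duration payload (milliseconds) to a
# relative gcode jog, longer presses moving the head further.
gcode_for() {
    topic=$1 held_ms=$2
    case $topic in
        cnc/buttons/x-plus)  axis=X sign=  ;;
        cnc/buttons/x-minus) axis=X sign=- ;;
        cnc/buttons/z-plus)  axis=Z sign=  ;;
        cnc/buttons/z-minus) axis=Z sign=- ;;
        *) return 1 ;;
    esac
    if   [ "$held_ms" -lt 200 ];  then dist=0.1
    elif [ "$held_ms" -lt 1000 ]; then dist=1
    else                               dist=4
    fi
    echo "G91 G0 ${axis}${sign}${dist} G90"   # G91 = relative moves, G90 = back to absolute
}

send_gcode() {
    # Smoothieware's web interface accepts gcode over HTTP POST; the exact
    # path here is an assumption to verify against the firmware source.
    curl -s -X POST --data "$1" "http://smoothie.local/command"
}
```

Something like `send_gcode "$(gcode_for cnc/buttons/z-minus 150)"` would then nudge the head down by 0.1mm.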

I'll probably build another controller for controlling 3D printers. Although most 3D printers just home each axis, there are sometimes pesky commands that must be run at startup, for example to help home the z-axis. Having physical buttons to move the z-axis down by 4mm, 1mm and 0.1mm makes it much less likely to fat-finger the web interface and accidentally crash the bed by initiating a larger z-axis movement than one had hoped for.

Russell Coker: Using LetsEncrypt

Wed, 2016-01-27 23:26

Lets Encrypt is a new service to provide free SSL keys [1]. I’ve just set it up on a few servers that I run.


The first thing to note is that the client is designed to manage your keys for you and to treat all DNS names on a server as belonging to a single certificate. It shouldn’t be THAT difficult to do things in other ways, but it would involve extra effort. The next issue that can make things difficult is that the design assumes the web server will have a module to negotiate new keys automatically. Automatically negotiating new keys will be really great when we get that all going, but as I didn’t feel like installing a slightly experimental Apache module on my servers, I had to stop Apache while I got the keys, and I’ll have to do that again every 3 months as the keys have a short expiry time.

There are some other ways of managing keys, but the web servers I’m using Lets Encrypt with at the moment aren’t that important and a couple of minutes of downtime is acceptable.

When you request multiple keys (DNS names) for one server, to make it work without needless effort you have to get them all in one operation. That gives you a single key file for all DNS names, which is very convenient for services that don’t support getting the hostname before negotiating SSL. But it could be difficult if you wanted one of the less common configurations, such as having a mail server and a web server on the same IP address but using different keys.

How To Get Keys

deb testing main

The letsencrypt client is packaged for Debian in Testing but not in Jessie. Adding the above to the /etc/apt/sources.list file on a Jessie system allows installing it and a few dependencies from Testing. Note that there are problems with doing this: you can’t be certain that all the other apps installed will be compatible with the newer versions of libraries that get installed, and you won’t get security updates.

letsencrypt certonly --standalone-supported-challenges tls-sni-01

The above command makes the letsencrypt client listen on port 443 to talk to the Lets Encrypt server. It prompts you for server names so if you want to minimise the downtime for your web server you could specify the DNS names on the command-line.

If you run it on a SE Linux system you need to run “setsebool allow_execmem 1” before running it and “setsebool allow_execmem 0” afterwards, as it needs execmem access. I don’t think it’s a problem to temporarily allow execmem access for the duration of running this program; if you use KDE then you will be forced to allow such access all the time for the desktop to operate correctly anyway.

How to Install Keys

[ssl:emerg] [pid 9361] AH02564: Failed to configure encrypted (?) private key, check /etc/letsencrypt/live/

The letsencrypt client suggests using the file fullchain.pem which has the key and the full chain of certificates. When I tried doing that I got errors such as the above in my Apache error.log. So I gave up on that and used the separate files. The only benefit of using the fullchain.pem file is to have a single line in a configuration file instead of 3. Trying to debug issues with fullchain.pem took me a lot longer than copy/paste for the 3 lines.

Under /etc/letsencrypt/live/$NAME there are symlinks to the real files. So when you get new keys the old keys will be stored but the same file names can be used.

SSLCertificateFile "/etc/letsencrypt/live/"

SSLCertificateChainFile "/etc/letsencrypt/live/"

SSLCertificateKeyFile "/etc/letsencrypt/live/"

The above commands are an example for configuring Apache 2.

smtpd_tls_cert_file = /etc/letsencrypt/live/

smtpd_tls_key_file = /etc/letsencrypt/live/

smtpd_tls_CAfile = /etc/letsencrypt/live/

Above is an example of Postfix configuration.

ssl_cert = </etc/letsencrypt/live/

ssl_key = </etc/letsencrypt/live/

ssl_ca = </etc/letsencrypt/live/

Above is an example for Dovecot, it goes in /etc/dovecot/conf.d/10-ssl.conf in a recent Debian version.


At this stage using letsencrypt is a little fiddly so for some commercial use (where getting the latest versions of software in production is difficult) it might be a better option to just pay for keys. However some companies I’ve worked for have had issues with getting approval for purchases which would make letsencrypt a good option to avoid red tape.

When Debian/Stretch is released with letsencrypt I think it will work really well for all uses.


Lev Lafayette: Can processes survive after shutdown?

Wed, 2016-01-27 21:29

I had a process in an "uninterruptible sleep" state. Trying to kill it is, unsurprisingly, unhelpful. All the literature on the subject will say that it cannot be killed, and they're right; it's called "uninterruptible" for a reason. An uninterruptible process is in a system call that cannot be interrupted by a signal (such as SIGKILL or SIGTERM).
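A quick way to spot such processes is to look for state "D" in the ps output; a sketch:

```shell
# List processes in uninterruptible sleep. "D" in the STAT column marks a
# process blocked inside a kernel system call (often disk or NFS IO); the
# wchan column hints at which kernel function it is waiting in.
ps -eo pid,stat,wchan,cmd | awk 'NR == 1 || $2 ~ /^D/'
```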


OpenSTEM: History and Geography for Primary program

Wed, 2016-01-27 13:30

OpenSTEM’s History and Geography for Primary program provides an integrated curriculum implementation that aims to deliver holistic learning for students in both Key Learning Areas. By integrating History and Geography, engagement is higher, students gain a more rounded understanding of processes in places through time, and the time needed for teaching is optimised.

With complete lesson plans!

Australian Curriculum

The program is tailored exactly to the requirements of the Australian Curriculum so that all curriculum strands in both curricula are addressed efficiently. The focus is on providing a broad overview of global events and then focussing in on specific issues. A particular focal point, as determined by the Australian Curriculum, is Australian History, with Aboriginal History, sustainability and the environment as important foci as well.

Our Approach

Student engagement is the primary aim of this curriculum implementation and a range of activities ensure that learning takes place in a very hands-on and multimodal way.

Scientific research has identified that children are more engaged, with better retention of information, when a range of input stimuli are provided. In particular, visual and kinaesthetic methods of input produce the broadest uptake of information in pre-puberty age groups. OpenSTEM’s blend of activities and resources addresses these methods directly.

The material is designed so as to provide for flexibility in use. Teachers can choose to utilise the individual resources within their own teaching framework, or they can choose to use the detailed weekly lesson plans as laid out in the Teacher Handbook. A Student Workbook is also provided, with a continual assessment option, to complete the package.

OpenSTEM uses particular techniques (such as coloured words within the text) which address a range of learning styles and have been shown to increase focus for students with concentration challenges.


The Term 1 teacher units and supporting resources are now available, and are already in use by some schools. Additional units and resources will be made available progressively during this first year of the program, and updated thereafter.

You may purchase individual teacher units and resource PDFs, or subscribe (from an individual teacher or family to an entire school) and get the teacher units at half price and the resource PDFs for free!

You can also download some sample PDFs (at no cost, no login/details required) so that you are able to see and assess the quality of our materials. 

If you need more information or have any questions, please contact us.

Cross-curricular options

OpenSTEM’s History and Geography program provides a range of cross-curricular options. In particular Science extensions are provided to address the Science curriculum. Some aspects of the Mathematics curriculum also follow naturally from this material.

These cross-curricular components help students apply newly learnt concepts and skills in a broader context.


OpenSTEM materials are designed to be adapted for use in multi-year level classrooms. Suggested implementations for multi-year level classes are provided in the Teacher Handbook for each unit.

In some cases the same resources and topics are used by different year levels, and only the depth of understanding and analysis required changes between the year levels. The Student Workbook for each unit reflects the differing requirements for different year levels. Using this structure, the teacher is not trying to teach different material simultaneously in order to meet the requirements of the National Curriculum.


Homeschooling parents also have great flexibility in their use of this material. The program is designed to be easily adaptable for the homeschooling situation. Parents can choose to use the resources within their own program, or allow the student to explore the material as their interest leads them. Alternatively, the parent can use the Teacher Handbook and Student Workbook to provide a series of lessons, knowing they will thus match all the curriculum requirements.

Non-linear learners can approach the student workbook in a non-linear fashion, referring to the matching resources as required in order to engage with the material. Using this material, the parent can tailor the learning to match the speed, abilities and particular challenges of each student.

The potential for extension and acceleration will suit students with those particular needs, whilst the shift between broad and narrow focus in the resources will provide consolidation for those students who need more time to work through learning material.

James Purser: A call out for People who science!

Tue, 2016-01-26 16:30

Right, there is just over two weeks to go until the inaugural episode of Lunchtime Science and I am looking for People Who Science to interview for the show.

What I am looking to do is a ten minute segment where we introduce the Person Who Sciences and the project they are currently working on. We'll record it using Skype, either Skype-to-Skype or Skype-to-phone, or on rare occasions in real life.

So if you know anyone who's doing science and you think they would be worth talking to, please let me know. Ping me via the following:

  • @purserj on Twitter
  • James Purser on Facebook
  • james AT angrybeanie DOT com

Bring on the science :)

Blog Categories: forscience! lunchtimescience podcasting

Dave Hall: Per Environment Config in Drupal 8

Mon, 2016-01-25 19:30

One of the biggest improvements in Drupal 8 is the new configuration management system. Config is now decoupled from code and the database. Unlike Drupal 6 and 7, developers no longer have to rely on the features module for moving configuration around.

Most large Drupal sites, and some smaller ones, require per-environment configuration. Prior to Drupal 8 this was usually achieved using a combination of hard-coded config variables and features. Drupal 8 still allows users to put config variables in the settings.php file, but putting config in code feels like a backward step given Drupal 8's emphasis on separation of concerns.

For example we may have a custom module which calls a RESTful API of a backend service. There are dev, stage and production endpoints that we need to configure. We also keep our config out of docroot and use drush to import the config at deployment time. We have the following structure in our git repo:

/
+- .git/
+- .gitignore
+- config/
|  +- base/
|  +- dev/
|  +- prod/
|  +- stage/
+- docroot/
+- scripts/
+- and-so-on/

When a developer needs to export the config for the site they run drush config-export --destination=/path/to/project/config/base. This exports all of the configuration to the specified path. To override the API endpoint for the dev environment, the developer would make the config change and then export just that piece of configuration. That can be done by running drush config-get mymodule.endpoint > /path/to/project/config/dev/mymodule.endpoint.yml.

Drupal 8 and drush don't allow you to import the 2 config sets at the same time, so we need to run 2 drush commands to import our config. drush config-import --partial --source=/path/to/project/config/base && drush config-import --partial --source=/path/to/project/config/dev. The first command imports the base config and the second applies any per environment overrides. The --partial flag prevents drush deleting any missing config. In most cases this is ok, but watch out if you delete a view or block placement.
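As a sketch, those two imports could be wrapped in a small deployment script. DEPLOY_ENV, CONFIG_DIR and the -y flag are illustrative assumptions, and the echo wrapper makes it a dry run until you trust the commands:

```shell
# Dry-run sketch of the two-step config import described above.
# DEPLOY_ENV and CONFIG_DIR are placeholders for your own project layout.
DEPLOY_ENV="${DEPLOY_ENV:-dev}"
CONFIG_DIR="${CONFIG_DIR:-/path/to/project/config}"

run() {
    # Dry run: print the command instead of executing it. Call drush
    # directly (drop the echo) once the output looks right.
    echo "would run: $*"
}

run drush config-import --partial --source="$CONFIG_DIR/base" -y
run drush config-import --partial --source="$CONFIG_DIR/$DEPLOY_ENV" -y
```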

Best practices are still emerging for managing configuration in Drupal 8. While I have this method working, I'm sure others have different approaches. Please leave a comment if you have an alternative method.

OpenSTEM: Introducing Aunt Madge’s Suitcase Activity

Mon, 2016-01-25 18:30

Oh no! Aunt Madge has gone off on her holiday around the world and left one of her suitcases behind!

She has sent you a note asking you to bring her the suitcase, along with a ticket so that you can fly after her to deliver it.

School students need to help Aunt Madge by taking her suitcase to her. She has left a clue to where she is.

This Activity Resource consists of 3 PDFs with instructions, colour photos of locations around the world, a custom map for each location, and detailed descriptions.

Suitable for all school age year levels. We recommend using a globe in the classroom in addition to the OpenSTEM blackline world map so children get used to different projections. For home use, you can also use an atlas or other wall map, of course.

Also used in OpenSTEM’s Integrated History & Geography Program, this is a practical and fun activity to get kids relating to geography and learning about the world. There’s unlimited scope for building on, with (for instance) the child’s own friends and family members travelling or living overseas.

The Aunt Madge’s Suitcase Activity resource is available FREE for OpenSTEM Subscribers ($25+GST for non-subscribers). You can also download samples of some of our resources, to see and assess the quality of OpenSTEM materials for yourself before subscribing.

BlueHackers: Science on High IQ, Empathy and Social Anxiety

Sat, 2016-01-23 11:13

Although Western medicine has radically transformed our world for the better, and given rise to some of the most remarkable breakthroughs in human history, in some ways it is still scratching at the lower slopes of the bigger picture. Only recently have our health systems begun to embrace the healing power of some ancient Eastern traditions such as meditation, for example. But overall, nowhere across the human health spectrum is Western medicine more unknowledgeable than in the realm of mental health. The human brain is the most complex biological machine in the known Universe, and our understanding of its inner workings is made all the more challenging when we factor in the symbiotic relationship of the mind-body connection.

When it comes to the wide range of diagnoses in the mental health spectrum, anxiety is the most common — affecting 40 million adults in the United States age 18 and older (18% of the U.S. population). And although anxiety can manifest in extreme and sometimes crippling degrees of intensity, Western doctors are warming up to the understanding that a little bit of anxiety could be incredibly beneficial in the most unexpected ways. One research study out of Lakehead University discovered that people with anxiety scored higher on verbal intelligence tests. Another study conducted by the Interdisciplinary Center Herzliya in Israel found that people with anxiety were superior to other participants at maintaining laser-focus while overcoming a primary threat as they were being bombarded by numerous other smaller threats, thereby significantly increasing their chances of survival. The same research team also discovered that people with anxiety showed signs of “sentinel intelligence”, meaning they were able to detect real threats that were invisible to others (i.e. test participants with anxiety were able to detect the smell of smoke long before others in the group).

Another research study, from the SUNY Downstate Medical Center in New York, involved participants with generalized anxiety disorder (GAD). The findings revealed that people with severe cases of GAD had much higher IQs than those with milder cases. The theory is that “an anxious mind is a searching mind,” meaning children with GAD develop higher levels of cognitive ability and diligence because their minds are constantly examining ideas, information, and experiences from multiple angles simultaneously.

But perhaps most fascinating of all is a research study published by the National Institutes of Health and the National Center for Biotechnology Information involving participants with social anxiety disorder (i.e. social phobia). The researchers embarked on their study with the following thesis: “Individuals with social phobia (SP) show sensitivity and attentiveness to other people’s states of mind. Although cognitive processes in SP have been extensively studied, these individuals’ social cognition characteristics have never been examined before. We hypothesized that high-socially-anxious individuals (HSA) may exhibit elevated mentalizing and empathic abilities.” The research methods were as follows: “Empathy was assessed using self-rating scales in HSA individuals (n=21) and low-socially-anxious (LSA) individuals (n=22), based on their score on the Liebowitz social anxiety scale. A computerized task was used to assess the ability to judge first and second order affective vs. cognitive mental state attributions.”

Remarkably, the scientists found that a large portion of people with social anxiety disorder are gifted empaths: people whose right brains are operating significantly above normal levels and who are able to perceive the physical sensitivities, spiritual urges, motivations, and intentions of other people around them (see Dr. Jill Bolte Taylor’s TED Talk for a powerful explanation of this ability). The team’s conclusion reads: “Results support the hypothesis that high-socially-anxious individuals demonstrate a unique profile of social-cognitive abilities with elevated cognitive empathy tendencies and high accuracy in affective mental state attributions.”

Empaths who have fully embraced their abilities are able to function on a purely intuition-based level. As Steve Jobs once said, “[Intuition] is more powerful than intellect,” and in keeping with this appreciation, writer Carolyn Gregoire recently penned a fascinating feature entitled “10 Things Highly Intuitive People Do Differently” and you can read it in full by visiting And to learn why Western medicine may be misinterpreting mental illness at large, be sure to read the fascinating account of Malidoma Patrice Somé, Ph.D. — a shaman and a Western-trained doctor. “In the shamanic view, mental illness signals the birth of a healer, explains Malidoma Patrice Somé. Thus, mental disorders are spiritual emergencies, spiritual crises, and need to be regarded as such to aid the healer in being born.” You can read the full story by reading “What A Shaman Sees In A Mental Hospital”. For more great stories about the human brain be sure to visit The Human Brain on FEELguide. (Sources: Business Insider, The Mind Unleashed, Huffington Post, photo courtesy of My Science Academy).

Lev Lafayette: Deleting "Stuck" Compute Jobs

Fri, 2016-01-22 14:30

Often on a cluster a user launches a compute job only to discover that they have some need to delete it (e.g., the data file is corrupt, there was an error in their application commands or PBS script). In TORQUE/PBSPro/OpenPBS etc this can be carried out by the standard PBS command, qdel.

[compute-login ~] qdel job_id

Sometimes, however, that simply doesn't work. An error message like the following is typical: "qdel: Server could not connect to MOM". I think I've seen this around a hundred times in the past few years.
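In TORQUE the documented escalation is the purge flag. The sketch below just prints the commands as a dry run; the job id is a placeholder, and -p needs manager privileges, so check your PBS flavour's qdel man page first:

```shell
# Dry-run sketch: escalate from a normal delete to a forced purge.
# "12345" is a placeholder job id. In TORQUE, "qdel -p" purges the job
# from the server side even when the MOM on the compute node is
# unreachable; it can leave processes behind on the node, so use it last.
jobid=12345
for cmd in "qdel $jobid" "qdel -p $jobid"; do
    echo "would run: $cmd"   # change echo to: eval "$cmd" on a real cluster
done
```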


Chris Samuel: Mount Burnett Observatory Open Day – 23rd January 2016 – noon until late!

Thu, 2016-01-21 20:26

If you’re around Melbourne, interested in astronomy and fancy visiting a community powered astronomical observatory that has a very active outreach and amateur astronomy focus then can I interest you in the Mount Burnett Observatory open day this Saturday (January 23rd) from noon onwards?

We’re going to have all sorts of things going on – talks, telescopes, radio astronomy, tours of the observatory dome (originally built by Monash University), lots of enthusiastic volunteers!

We’re fundraising to build a new accessible modern dome to complement the existing facilities so please come and help us out.

This item was originally posted here:

Mount Burnett Observatory Open Day – 23rd January 2016 – noon until late!

Russell Coker: Finding Storage Performance Problems

Thu, 2016-01-21 13:26

Here are some basic things to do when debugging storage performance problems on Linux. It’s deliberately not an advanced guide, I might write about more advanced things in a later post.

Disk Errors

When a hard drive is failing it often has to read sectors several times to get the right data, which can dramatically reduce performance. As most hard drives aren’t monitored properly (email or SMS alerts on errors), it’s quite common for the first notification of an impending failure to be user complaints about performance.

View your kernel message log with the dmesg command and look in /var/log/kern.log (or wherever your system is configured to store kernel logs) for messages about disk read errors, bus resetting, and anything else unusual related to the drives.

If you use an advanced filesystem like BTRFS or ZFS there are system commands to get filesystem information about errors. For BTRFS you can run “btrfs device stats MOUNTPOINT” and for ZFS you can run “zpool status”.

Most performance problems aren’t caused by failing drives, but it’s a good idea to eliminate that possibility before you continue your investigation.

One other thing to look out for is a RAID array where one disk is noticeably slower than the others. For example, in a RAID-5 or RAID-6 array every drive should have almost the same number of reads and writes; if one disk in the array is at 99% of its performance capacity while the other disks are at 5% then that’s an indication of a failing disk. This can happen even if SMART etc don’t report errors.

Monitoring IO

The iostat program in the Debian sysstat package tells you how much IO is going to each disk. If you have physical hard drives sda, sdb, and sdc you could run the command “iostat -x 10 sda sdb sdc” to tell you how much IO is going to each disk over 10 second periods. You can choose various durations but I find that 10 seconds is long enough to give results that are useful.

By default iostat will give stats on all block devices including LVM volumes, but that usually gives too much data to analyse easily.

The most useful things that iostat tells you are the %util (the percentage utilisation – anything over 90% is a serious problem), the reads per second “r/s”, and the writes per second “w/s”.

The parameters to iostat for block devices can be hard drives, partitions, LVM volumes, encrypted devices, or any other type of block device. After you have discovered which block devices are nearing their maximum load you can discover which of the partitions, RAID arrays, or swap devices on that disk are causing the load in question.

The iotop program in Debian (package iotop) gives a display that’s similar to that of top but for disk IO. It generally isn’t essential (you can run “ps ax|grep D” to get most of that information), but it is handy. It will tell you which programs are causing IO on a busy filesystem. This can be good when you have a busy system and don’t know why. It isn’t very useful if you have a system that is used for one task, e.g. a database server that is known to be busy doing database stuff.

It’s generally a good idea to have sysstat and iotop installed on all systems. If a system is experiencing severe performance problems you might not want to wait for new packages to be installed.

In Debian the sysstat package includes the sar utility, which can give historical information on system load. One benefit of using sar for diagnosing performance problems is that it shows you the time of day with the most load, which is the best time to look for the cause of performance problems.

Swap Use

Swap use sometimes confuses people. In many cases swap use decreases overall disk use, this is the design of the Linux paging algorithms. So if you have a server that accesses a lot of data it might swap out some unused programs to make more space for cache.

When you have multiple virtual machines on one system sharing the same disks it can be difficult to determine the best allocation for RAM. If one VM has some applications allocating a lot of RAM but not using it much then it might be best to give it less RAM and force those applications into swap so that another VM can cache all the data it accesses a lot.

The important thing is not the amount of swap that is allocated but the amount of IO that goes to the swap partition. Any significant amount of disk IO going to a swap device is a serious problem that can be solved by adding more RAM.
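One low-tech way to check swap activity rather than allocation is the kernel's cumulative counters; a sketch (the values are pages since boot, so what matters is whether they grow between samples):

```shell
# Cumulative pages swapped in (pswpin) and out (pswpout) since boot,
# from /proc/vmstat. Sample this twice a few seconds apart: if the
# numbers are climbing, swap IO is happening right now and more RAM
# would probably help.
grep -E '^pswp(in|out) ' /proc/vmstat
```

This is the same data that "iostat" reports for the swap device, just without needing sysstat installed.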

Reads vs Writes

The ratio of reads to writes depends on the applications and the amount of RAM. Some applications can have most of their reads satisfied from cache. For example an ideal configuration of a mail server will have writes significantly outnumber reads (I’ve seen ratios of 5:1 for writes to reads on real mail servers). Ideally a mail server will cache all new mail for at least an hour and as the most prolific users check their mail more frequently than that most mail will be downloaded before it leaves the cache. If you have a mail server with reads outnumbering writes then it needs more RAM. RAM is cheap nowadays so if you don’t want to compete with Gmail it should be cheap to buy enough RAM to cache all recent mail.

The ratio of reads to writes is important because it’s one way of quickly determining if you have enough RAM and adding RAM is often the cheapest way of improving performance.
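If iostat isn't installed yet, the raw counters it reads are available directly; a sketch against /proc/diskstats (cumulative since boot):

```shell
# Per-device reads/writes completed since boot, from /proc/diskstats
# (field 4 = reads completed, field 8 = writes completed). Comparing
# the two columns gives a rough read:write ratio for each device.
awk '{printf "%-12s reads=%-12s writes=%s\n", $3, $4, $8}' /proc/diskstats
```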

Unbalanced IO

One common performance problem on systems with multiple disks is having more load going to some disks than to others. This might not be a problem (e.g. having cron jobs run on disks that are under heavy load while the web server accesses data from lightly loaded disks). But you need to consider whether it’s desirable to have some disks under more load than others.

The simplest solution to this problem is to just have a single RAID array for all data storage. This is also the solution that gives you the maximum available disk space if you use RAID-5 or RAID-6.

A more complex option is to use some SSDs for things that require performance and disks for things that don’t. This can be done with the ZIL and L2ARC features of ZFS or by just creating a filesystem on SSD for the data that is most frequently accessed.

What Did I Miss?

I’m sure that I missed something, please let me know of any other basic things to do – or suggestions for a post on more advanced things.


Craig Sanders: lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0

Wed, 2016-01-20 22:26

I had to replace a motherboard and CPU a few days ago (bought an Asus M5A97 R2.0), and wanted to get lm-sensors working properly on it. I got it working eventually, which was harder than it should have been because the lm-sensors site is MIA; it seems to have been rm -rf -ed.

For anyone else with this motherboard, the config is included below.

This inspired me to fix the config for my Asus Sabertooth 990FX motherboard. Also included below.

To install, copy-paste to a file under /etc/sensors.d/ and run sensors -s to make sensors evaluate all of the set statements.

# Asus M5A97 R2.0
# based on Asus M5A97 PRO from

chip "k10temp-pci-00c3"

    label temp1 "CPU Temp (rel)"

chip "it8721-*"

    label in0 "+12V"
    label in1 "+5V"
    label in2 "Vcore"
    label in3 "+3.3V"
    ignore in4
    ignore in5
    ignore in6
    ignore in7
    ignore fan3

    compute in0 @ * (515/120), @ / (515/120)
    compute in1 @ * (215/120), @ / (215/120)

    label temp1 "CPU Temp"
    label temp2 "M/B Temp"
    set temp1_min 30
    set temp1_max 70
    set temp2_min 30
    set temp2_max 60

    label fan1 "CPU Fan"
    label fan2 "Chassis Fan"
    label fan3 "Power Fan"
    ignore temp3

    set in0_min 12 * 0.95
    set in0_max 12 * 1.05
    set in1_min 5 * 0.95
    set in1_max 5 * 1.05
    set in3_min 3.3 * 0.95
    set in3_max 3.3 * 1.05
    ignore intrusion0

# Asus Sabertooth 990FX
# modified from the version at

chip "it8721-isa-0290"

    # Temperatures
    label temp1 "CPU Temp"
    label temp2 "M/B Temp"
    label temp3 "VCORE-1"
    label temp4 "VCORE-2"
    label temp5 "Northbridge"  # I put all these here as a reference since the
    label temp6 "DRAM"         # Asus Thermal Radar tool on my Windows box displays
    label temp7 "USB3.0-1"     # all of them.
    label temp8 "USB3.0-2"     # lm-sensors ignores all but the CPU and M/B temps.
    label temp9 "PCIE-1"       # If that is really what they are.
    label temp10 "PCIE-2"

    set temp1_min 0
    set temp1_max 70
    set temp2_min 0
    set temp2_max 60
    ignore temp3

    # Fans
    label fan1 "CPU Fan"
    label fan2 "Chassis Fan 1"
    label fan3 "Chassis Fan 2"
    label fan4 "Chassis Fan 3"
    # label fan5 "Chassis Fan 4"  # lm-sensors complains about this
    ignore fan2
    ignore fan3
    set fan1_min 600
    set fan2_min 600
    set fan3_min 600

    # Voltages
    label in0 "+12V"
    label in1 "+5V"
    label in2 "Vcore"
    label in3 "+3.3V"
    label in5 "VDDA"
    compute in0 @ * (50/12), @ / (50/12)
    compute in1 @ * (205/120), @ / (205/120)
    set in0_min 12 * 0.95
    set in0_max 12 * 1.05
    set in1_min 5 * 0.95
    set in1_max 5 * 1.05
    set in2_min 0.80
    set in2_max 1.6
    set in3_min 3.20
    set in3_max 3.6
    set in5_min 2.2
    set in5_max 2.8
    ignore in4
    ignore in6
    ignore in7
    ignore intrusion0

chip "k10temp-pci-00c3"

    label temp1 "CPU Temp"

lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0 is a post from: Errata

Dave Hall: Internal Applications: When Semantic Versioning Doesn't Make Sense

Tue, 2016-01-19 23:30

Semantic Versioning (or SemVer) is great for libraries and open source applications. It allows development teams to communicate to users and downstream developers the scope of changes in a release. One of the most important indicators in versioning is backwards compatibility (BC). SemVer makes any BC break clear. Web services and public APIs are another excellent use case for SemVer.

As much as I love semantic versioning for public projects, I am not convinced about using it for internal projects.

Libraries, be they internal or publicly released, should follow the semantic version spec. Not only does this encourage reuse and sharing of libraries, it also displays good manners towards your fellow developers. Internal API endpoints should also comply with SemVer, but versioned endpoints should only expose major.minor in the version string. Again, this will help maintain good relations with other devs.

End users of your application probably won't care if you drop jQuery in favour of straight JS, swap Apache for nginx or even if you upgrade to PHP 7/Python 3.x. Some users will care if you move a button. They won't care that the underlying data model and classes remain unchanged and so this is only a patch release. In the mind of those users, the button move is a BC break, because you changed their UI. At the end of the day, users don't care about version numbers, they care about features and bug fixes.

Regardless of the versioning system used, changes should be documented in a changelog. The changelog shouldn't just be a list of git commits streamed into a text file. It should be something a non-technical user can get their head around. This allows all stakeholders to clearly see the incremental improvement in the system.

In my mind SemVer breaks down when it comes to these types of (web) applications. Without a doubt any of the external integration points of the app should use semantic versioning. However the actual app versioning should just increment. I recommend using date based version numbering, such as 201512010 for the first release on 1 December 2015. The next release on that same day, if required, would be 201512011 and so on. If you release many times per day then you might want to consider a 2 or 3 digit counter component.

The components that make up your application such as the libraries, docker base images, ansible playbooks and so on should be using SemVer. Your build and test infrastructure should be able to create reproducible builds of the full stack if you reference specific versions or dependencies.

Instead of conditioning end users to understand semantic versioning, energy should be expended on teaching users to understand continuous delivery and deployment. The app is going to grow and evolve, tagging a release should be something a script can do. It shouldn't need 5 people in the tea room trying to work out if a feature can go in or not because it might be considered a BC break.

When building an application, get the code written and have people using it. If they're not happy, continue to iterate.

Binh Nguyen: Anonymous Group, Random Thoughts, and More

Mon, 2016-01-18 00:43
One point leads to another. The latest group/movement to come into the limelight is 'Anonymous'. As you'll see, their background is varied and at times it's hard to distinguish their exact intent... Despite what they say, there's a lot of difficulty distinguishing whether they are activists or merely troublemakers at times. Part of the time you feel as though if they focused more on

Chris Smart: Resurrect Nexus 4 with red light of death by using wireless charger

Sun, 2016-01-17 21:29

I should have posted this ages ago. There is a well known problem which may affect Nexus 4 devices where it powers off and won’t power on again. When you plug it into the USB charger you get a solid red light and it never recovers.

If you search for the problem the main advice is to return the phone for a factory fix, however there’s an easier trick that’s worked for me; using a wireless charger.

Every time I’ve had this problem (on a few different Nexus 4 phones) I’ve been able to bring the phone back by sitting it on my Nexus wireless charger for a few minutes, then pressing the power button and it springs to life.

After that, you can charge and use the phone as normal. I am writing this now because I was reminded after using this trick to fix a friend’s phone tonight (he’d been googling the problem for a while with no luck).

Maybe someone still has one in their bottom drawer and can make use of it again by using this trick!

Jonathan Adamczewski: Another another C++11 ‘countof’

Sat, 2016-01-16 03:27

My earlier post received this comment which is a pretty neat little improvement over the one from

Here it is, with one further tweak:

template<typename T, std::size_t N>
constexpr std::integral_constant<std::size_t, N> countof(T const (&)[N]) noexcept
{
  return {};
}

#define COUNTOF(...) decltype(countof(__VA_ARGS__))::value

The change I’ve made to pfultz2’s version is to use ::value rather than {} after decltype in the macro.

This makes the type of the result std::size_t not std::integral_constant, so it can be used in va_arg settings without triggering compiler or static analysis warnings.

It also has the advantage of not triggering extra warnings in VS2015U1 (this issue).

Ian Wienand: Australia, ipv6 and dd-wrt

Fri, 2016-01-15 15:26

It seems that other than Internode, no Australian ISP has any details at all about native IPv6 deployment. Locally I am on Optus HFC, which I believe has been sold to the NBN, who I believe have since discovered that it is not quite what they thought it was. In other words, I think they have bigger problems than rolling out IPv6, and I won't hold my breath.

So the only other option is to use a tunnel of some sort, and it seems there is really only one option with local presence via SixXS. There are other options, notably, but they do not have Australian tunnel-servers. SixXS is the only one I could find with a tunnel in Sydney.

So first sign up for an account there. The process was rather painless and my tunnel was provided quickly.

After getting this, I got dd-wrt configured and working on my Netgear WNDR3700 V4. Here's my terse guide, cobbled together from other bits and pieces I found. I'm presuming you have a recent dd-wrt build that includes the aiccu tool to create the tunnel, and are pretty familiar with logging into it, etc.

Firstly, on dd-wrt make sure you have JFFS2 turned on for somewhere to install scripts. Go Administration, JFFS2 Support, Internal Flash Storage, Enabled.

Next, add the aiccu config file to /jffs/etc/aiccu.conf

# AICCU Configuration

# Login information
username USERNAME
password PASSWORD

# Protocol and server listed on your tunnel
protocol tic
server

# Interface names to use
ipv6_interface sixxs

# The tunnel_id to use
# (only required when there are multiple tunnels in the list)
#tunnel_id <your tunnel id>

# Be verbose?
verbose false

# Daemonize?
daemonize true

# Require TLS?
requiretls true

# Set default route?
defaultroute true

Now you can add a script to bring up the tunnel and interface to /jffs/config/sixxs.ipup (make sure you make it executable) where you replace your tunnel address in the ip commands.

# wait until time is synced
while [ `date +%Y` -eq 1970 ]; do
    sleep 5
done

# check if aiccu is already running
if [ -n "`ps|grep etc/aiccu|grep -v grep`" ]; then
    aiccu stop
    sleep 1
    killall aiccu
fi

# start aiccu
sleep 3
aiccu start /jffs/etc/aiccu.conf
sleep 3

ip -6 addr add 2001:....:....:....::/64 dev br0
ip -6 route add 2001:....:....:....::/64 dev br0
sleep 5

#### BEGIN FIREWALL RULES ####

WAN_IF=sixxs
LAN_IF=br0

# flush tables
ip6tables -F

# define policy
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT ACCEPT

# Input to the router

# Allow all loopback traffic
ip6tables -A INPUT -i lo -j ACCEPT

# Allow unrestricted access on internal network
ip6tables -A INPUT -i $LAN_IF -j ACCEPT

# Allow traffic related to outgoing connections
ip6tables -A INPUT -i $WAN_IF -m state --state RELATED,ESTABLISHED -j ACCEPT

# for multicast ping replies from link-local addresses (these don't have an
# associated connection and would otherwise be marked INVALID)
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply -s fe80::/10 -j ACCEPT

# Allow some useful ICMPv6 messages
ip6tables -A INPUT -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Forwarding through from the internal network

# Allow unrestricted access out from the internal network
ip6tables -A FORWARD -i $LAN_IF -j ACCEPT

# Allow some useful ICMPv6 messages
ip6tables -A FORWARD -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Allow traffic related to outgoing connections
ip6tables -A FORWARD -i $WAN_IF -m state --state RELATED,ESTABLISHED -j ACCEPT

Now you can reboot, or run the script, and it should bring the tunnel up; you should be correctly firewalled such that packets get out, but nobody can get in.

Back to the web interface, you can now enable IPv6 with Setup, IPV6, Enable. Leave "IPv6 Type" as Native IPv6 from ISP. Then I enabled Radvd and added a custom config in the text box to get DNS working (using Google DNS) on hosts:

interface br0
{
    AdvSendAdvert on;
    prefix 2001:....:....:....::/64
    {
    };
    RDNSS 2001:4860:4860::8888 2001:4860:4860::8844
    {
    };
};

(again, replace the prefix with your own)

That is pretty much it; at this point, you should have an IPv6 network and it's most likely that all your network devices will "just work" with it. I got full scores on the IPv6 test sites on a range of devices.

Unfortunately, even a geographically close tunnel still really kills latency; compare these two traceroutes:

$ mtr -r -c 1
Start: Fri Jan 15 14:51:18 2016
HOST: jj                       Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 2001:....:....:....::   0.0%     1    1.4    1.4   1.4   1.4   0.0
  2.|--                         0.0%     1   12.0   12.0  12.0  12.0   0.0
  3.|--                         0.0%     1   13.5   13.5  13.5  13.5   0.0
  4.|--                         0.0%     1   13.7   13.7  13.7  13.7   0.0
  5.|--                         0.0%     1   11.5   11.5  11.5  11.5   0.0
  6.|-- 2001:4860::1:0:8613     0.0%     1   14.1   14.1  14.1  14.1   0.0
  7.|-- 2001:4860::8:0:79a0     0.0%     1  115.1  115.1 115.1 115.1   0.0
  8.|-- 2001:4860::8:0:8877     0.0%     1  183.6  183.6 183.6 183.6   0.0
  9.|-- 2001:4860::1:0:66d6     0.0%     1  196.6  196.6 196.6 196.6   0.0
 10.|-- 2001:4860:0:1::72d      0.0%     1  189.7  189.7 189.7 189.7   0.0
 11.|--                         0.0%     1  194.9  194.9 194.9 194.9   0.0

$ mtr -4 -r -c 1
Start: Fri Jan 15 14:51:46 2016
HOST: jj                       Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- gateway                 0.0%     1    1.3    1.3   1.3   1.3   0.0
  2.|--                         0.0%     1   11.0   11.0  11.0  11.0   0.0
  3.|-- ???                    100.0     1    0.0    0.0   0.0   0.0   0.0
  4.|-- ???                    100.0     1    0.0    0.0   0.0   0.0   0.0
  5.|-- ???                    100.0     1    0.0    0.0   0.0   0.0   0.0
  6.|--                         0.0%     1   12.1   12.1  12.1  12.1   0.0
  7.|--                         0.0%     1   10.4   10.4  10.4  10.4   0.0

When you watch what is actually using ipv6 (the ipvfoo plugin for Chrome is pretty cool, it shows you what requests are going where), it's mostly all just traffic to really big sites (Google/Google Analytics, Facebook, Youtube, etc) who have figured out IPv6.

Since these are exactly the places that have made efforts to get caching as close as possible to you (Google's mirror servers are within Optus' network, for example), you're really shooting yourself in the foot by going around them through an external tunnel. The other thing is that I'm often hitting IPv6 mirrors and downloading larger things for work (distro updates, git clones, image downloads, etc.), which is slower and wastes someone else's bandwidth for really no benefit.

So while it's pretty cool to have an IPv6 address (and a fun experiment) I think I'm going to turn it off. One positive was that after running with it for about a month, nothing has broken -- which suggests that most consumer level gear in a typical house (phones, laptops, TVs, smart-watches, etc) is either ready or ignores it gracefully. Bring on native IPv6!

Jonathan Adamczewski: Another C++11 ‘countof’

Thu, 2016-01-14 19:27

Note: There’s an update here.

Read “Better array ‘countof’ implementation with C++ 11” for context. Specifically, it presents Listing 5 as an implementation of countof() using C++11 constexpr:

template<typename T, std::size_t N>
constexpr std::size_t countof(T const (&)[N]) noexcept
{
  return N;
}

But this falls short. Just a little.

There are arguments that could be passed to a naive sizeof(a)/sizeof(a[0]) macro that will cause the above to fail to compile.


struct S
{
  int a[4];
};

void f(S* s)
{
  constexpr size_t s_a_count = countof(s->a);
  int b[s_a_count];
  // do things...
}

This does not compile. s is not constant, and countof() is a constexpr function whose result is needed at compile time, so it expects a constexpr-friendly argument, even though that argument is never actually used.

Errors from this kind of thing can look like this from clang-3.7.0:

error: constexpr variable 's_a_count' must be initialized by a constant expression
note: read of non-constexpr variable 's' is not allowed in a constant expression

or this from Visual Studio 2015 Update 1:

error C2131: expression did not evaluate to a constant

(Aside: At the time of writing, the error C2131 seems to be undocumented for VS2015. But Visual Studio 6.0 had an error with the same number)

Here’s a C++11 version of countof() that will give the correct result for countof(s->a) above:

#include <type_traits>

template<typename Tin>
constexpr std::size_t countof()
{
  using T = typename std::remove_reference<Tin>::type;
  static_assert(std::is_array<T>::value,
                "countof() requires an array argument");
  static_assert(std::extent<T>::value > 0,  // [0]
                "zero- or unknown-size array");
  return std::extent<T>::value;
}

#define countof(a) countof<decltype(a)>()

Some of the details:

Adding a countof() macro allows use of decltype() in the caller’s context, which provides the type of the member array of a non-const object at compile time.

std::remove_reference is needed to get the array type from the result of decltype(). Without it, std::is_array and std::extent produce false and zero, respectively.

The first static assert ensures that countof() is being called on an actual array. The upside over a failed template instantiation or specialization is that you can write your own human-readable, slightly more context-aware error message (better than mine).

The second static assert validates that the array size is known, and is greater than zero. Without it, countof<int[]>() will return zero (which will be wrong) without error. And zero-sized arrays will also result in zero — in practice they rarely actually contain zero elements. This isn’t a function for finding the size of those arrays.

And then std::extent<T>::value produces the actual count of the elements of the array.


If replacing an existing sizeof-based macro with a constexpr countof() alternate, Visual Studio 2015 Update 1 will trigger warnings in certain cases where there previously were no warnings.

warning C4267: conversion from 'size_t' to 'int', possible loss of data

It is unfortunate to have to add explicit casts when the safety of such operations is able to be determined by the compiler. I have optimistically submitted this as an issue at

[0] Typo fix thanks to this commenter

Jeremy Kerr: Kernel testing for OpenPOWER platforms

Thu, 2016-01-14 16:27

Last week, Michael and I were discussing long-term Linux support for OpenPOWER platforms, particularly the concern about testing for non-IBM hardware. We'd like to ensure that the increasing range of OpenPOWER platforms, from different manufacturers, don't lose compatibility with the upstream Linux kernel.

Previously, there were only a few POWER vendors producing a fairly limited range of hardware, so it was reasonable to get a decent amount of test coverage by booting the kernel on a small number of machines with diverse-enough components. Now, with OpenPOWER, machines are being built by different manufacturers, so it's becoming less feasible to do that coverage testing in a single lab.

To solve this, Chris, who has been running the jenkins setup on, has added some kernel builds for the latest mainline kernel. We're using a .config that should be suitable for all OpenPOWER platforms. The idea here is to get as much of the OpenPOWER hardware community as possible to test the latest Linux kernel.

If you're an OpenPOWER vendor, I'd strongly suggest setting up some regular testing on this kernel. This means you'll catch any breakages before they affect users of your platform.

Setting up a test process

The jenkins build jobs expose a last-successful-build URL, allowing you to grab the bootable kernel image easily:

[Note that these are HTTPS links, and you should be ensuring that the certificates are correct for anything you download and boot!]

To help with testing, we've also produced a little root-filesystem image that can be booted with this kernel as an initramfs image. That's produced via a similar jenkins job.

To set up an automated test process for this kernel:

If you find a build that fails on your machine, please send us an email at

Alternatively, if there's something extra you need in the kernel configuration or initramfs setup, let me know at

Future work

To improve the testing coverage, we'd like to add some automated tests to the initramfs image, rather than just booting to a shell. Stay tuned for updates!