Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Lev Lafayette: Not The Best Customer Service (laptop.com.au)

Mon, 2018-10-08 23:05

You would think that with a website like laptop.com.au you would be sitting on a gold mine of opportunity. It would take real effort not to turn such a domain into a genuine advantage, to become the country's specialist and expert provider of laptops. But alas, some effort is required in this regard, and it involves what, in my considered opinion, is not doing the right thing. I leave you, gentle reader, to form your own opinion on the matter from the facts provided.

In mid-August 2018 I purchased a laptop from said provider. I didn't require anything fancy, but it did need to be light and small. The Lenovo Yoga 710-11ISK for $699 seemed to fit the bill. The dispatch notice was sent on August 14, and on August 21st I received the item and noticed that there were a few things wrong. Firstly, the processor was nowhere near as powerful as advertised (and no surprise there - they're advertising the burst speed of a water-cooled processor, not of an air-cooled small laptop). Further, the system came with only half of the advertised 8GB of RAM.

When the discrepancy was pointed out they offered what I considered a paltry sum of $100 - which would be quite insufficient for the loss of performance, and it was not the kind of system that could be upgraded with ease. Remarkably they made the claim "We would offer to swap over, however, it's expensive to ship back and forth and we don't have another in stock at this time". I asked why, if this was the case, they were still advertising the supposedly absent model on their website (at the time of writing, October 8, it is apparently still available). I pointed out that their own terms and conditions stated: "A refund, repair, exchange or credit is available if on arrival the goods are advised faulty or the model or the specifications are incorrect", which was certainly the case here.

Receiving no reply after several days, I contacted them again. The screen on the system had completely ceased to function. I demanded that they refund the cost of the laptop plus postage, as per their own terms and conditions: the system was faulty and the specifications were incorrect. They offered to replace the machine. I told them I preferred a refund, as I now had reasonable doubts about their quality control, as per Victorian consumer law.

I sent the laptop back, express post with signature, and waited. A week later I had to contact them again, providing the Australia Post tracking record to show that it had been delivered (but not collected). It was at that point that, instead of providing the refund I had requested, they sent a second laptop, completely contrary to my wishes. They responded that they had "replaced machine with original spec that u ordered. Like new condition" and that "We are obliged under consumer law to provide a refund within 30 days of purchase" (any delays were due to their inaction). At that point a case was opened with the Commonwealth Bank (it was paid via credit card), and with Consumer Affairs Victoria.

But it gets better. They sent the wrong laptop again, this time with a completely different processor, and significantly heavier and larger. It was pointed out to them that they had sent the wrong machine, twice, the second time contrary to my requests. It was pointed out to them that all they had to do was provide the refund I had requested for the machine and my postage costs. It was pointed out that it was not my fault that they sent the wrong machine, and that this was their responsibility. It was pointed out that it was not my fault that they sent a second, wrong, machine, contrary to my request, and that, again, this was their responsibility. Indeed, they could benefit by having someone look at their business processes and quality assurance - because there have been many years of this retailer showing less than optimal customer service.

At this point, they buckled and agreed to provide a full refund if I sent the second laptop back - which I have done and will update this 'blog post as the story unfolds.

Now, some of you gentle readers may think that surely it couldn't have been that bad, and that surely there's another side to this story. So it is in the public interest, and in the principles of disclosure and transparency, that I provide a full set of the correspondence as an attached text file. You can make up your own mind.

Gary Pendergast: WordPress 5.0 Needs You!

Thu, 2018-10-04 10:04

Yesterday, we started the WordPress 5.0 release cycle with an announcement post.

A Plan for 5.0

Usually during major releases of WordPress, the dedicated release lead chooses a few folks to help them through the time-consuming work of managing an excellent release cycle. We are blessed with s…


It’s a very exciting time to be involved in WordPress, and if you want to help make it the best, now’s an excellent opportunity to jump right in.

A critical goal of this release cycle is transparency.

As a member of the WordPress 5.0 leadership team, the best way for me to do my job is to get feedback from the wider WordPress community as early, and as quickly as possible. I think I speak for everyone on the leadership team when I say that we all feel the same on this. We want everyone to be able to participate, which will require some cooperation from everyone in the wider WordPress community.

The release post was published as soon as it was written: we wanted to get it out quickly, so everyone could be aware of what’s going on. Publishing quickly does mean that we’re still writing the more detailed posts about scope, timeline, and processes. Instead of publishing a completed plan all at once, we intentionally want to include everyone from the start, and evolve plans as we get feedback.

With no other context, the WordPress 5.0 timeline of “release candidate in about a month” would be very short, which is why we’ve waited until Gutenberg had proved itself before setting a timeline. As we mentioned in the post, WordPress 5.0 will be “WordPress 4.9.8 + Gutenberg”. The Gutenberg plugin is running on nearly 500k sites, and WordPress 4.9.8 is running on millions of sites. For comparison, it’s considered a well tested major version if we see 20k installs before the final release date. Gutenberg is a bigger change than we’ve done in the past, so should be held to a higher standard, and I think we can agree that 500k sites is a pretty good test base: it arguably meets, or even exceeds that standard.

We can have a release candidate ready in a month.

Try Gutenberg

The Gutenberg core team are currently focussed on finishing off the last few features. The Gutenberg plugin has evolved exceedingly quickly thanks to their work, it’s moved so much faster than anything we’ve done in WordPress previously. As we transition to bug fixing, you should expect to see the same rapid improvement.

The block editor’s backwards compatibility with the classic editor is important, of course, and the Classic Editor plugin is a part of that: if you have a site that doesn’t yet work with the block editor, please go ahead and install the plugin. I’d be happy to see the Classic Editor plugin getting 10 million or more installs, if people need it. That would show a clear need for the classic interface to be maintained for a long time, and because it’s the official WordPress plugin for doing it, we can ensure that it’s maintained for as long as it’s needed. This isn’t a new scenario for the WordPress core team: we’ve been backporting security fixes to WordPress 3.7 for years. We’re never going to leave site owners out in the cold there, and exactly the same attitude applies to the Classic Editor plugin.

The broader Gutenberg project is a massive change, and WordPress is a big ship to turn.

It’s going to take years to make this transition, and it’s okay if WordPress 5.0 isn’t everything for everyone. There’ll be a WordPress 5.1, and 5.2, and 5.3, and so on, the block editor will continue to evolve to work for more and more people.

My role in WordPress 5.0 is to “generally shepherd the merge”. I’ve built or guided some of the most complex changes we’ve made in Core in recent years, and they’ve all been successful. I don’t intend to change that record, WordPress 5.0 will only be released when I’m as confident in it as I was for all of those previous projects.

Right now, I’m asking everyone in the WordPress community for a little bit of trust, that we’re all working with the best interests of WordPress at heart. I’m also asking for a little bit of patience, we’re only human, we can only type so fast, and we do need to sleep every now and then.

Dave Hall: AWS Parameter Store

Wed, 2018-10-03 10:04

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The down side of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have two services to choose from: Secrets Manager and Systems Manager Parameter Store. As the name suggests, Secrets Manager provides some secrets management tools on top of the store, including automatic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

As with most Amazon services, IAM controls access to Parameter Store.

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, a key per service per environment becomes difficult to manage. In this case, share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadParams",
      "Effect": "Allow",
      "Action": [
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:us-east-1:1234567890:parameter/my-app/dev/*"
    },
    {
      "Sid": "Decrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:1234567890:key/20180823-7311-4ced-bad5-653587846973"
    }
  ]
}

To allow your developers to manage the parameters in dev you will need a policy that looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageParams",
      "Effect": "Allow",
      "Action": [
        "ssm:DeleteParameter",
        "ssm:DeleteParameters",
        "ssm:GetParameter",
        "ssm:GetParameterHistory",
        "ssm:GetParametersByPath",
        "ssm:GetParameters",
        "ssm:PutParameter"
      ],
      "Resource": "arn:aws:ssm:us-east-1:1234567890:parameter/my-app/dev/*"
    },
    {
      "Sid": "ListParams",
      "Effect": "Allow",
      "Action": "ssm:DescribeParameters",
      "Resource": "*"
    },
    {
      "Sid": "DecryptEncrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:1234567890:key/20180823-7311-4ced-bad5-653587846973"
    }
  ]
}

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies, but you will find these become annoying to manage. Use hierarchies to group your values by application and environment. Within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters.

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your different types, but I found it easiest to encode everything as JSON. When pulling values from the store I JSON decode them. The downside is that strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.
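
As a quick illustration of that convention (a minimal sketch of my own, not code from the post; the parameter names are made up), encoding a few value types and decoding them again looks like this:

import json

# Following the "encode everything as JSON" convention before writing to Parameter Store:
raw_values = {
    "MY_STR": json.dumps("value"),            # stored as '"value"' - note the double quotes
    "MY_INT": json.dumps(1234),               # stored as '1234'
    "MY_OBJ": json.dumps({"name": "value"}),  # stored as '{"name": "value"}'
}

# On the way back out, a single json.loads() recovers the original types.
decoded = {key: json.loads(value) for key, value in raw_values.items()}
print(decoded)  # {'MY_STR': 'value', 'MY_INT': 1234, 'MY_OBJ': {'name': 'value'}}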

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straight forward. Once again it is well documented by Amazon.

Finally, you can maintain parameters with a little bit of code. In this example I do it with Python.

import boto3

namespace = "my-app"
env = "dev"
kms_uuid = "20180823-7311-4ced-bad5-653587846973"

# Objects must be json encoded then wrapped in quotes because they're stored as strings.
parameters = {"key": '"value"', "MY_INT": 1234, "MY_OBJ": '{"name": "value"}'}

ssm = boto3.client("ssm")

for parameter in parameters:
    ssm.put_parameter(
        Name=f"/{namespace}/{env}/{parameter.upper()}",
        # Everything must go in as a string.
        Value=str(parameters[parameter]),
        Type="SecureString",
        KeyId=kms_uuid,
        # Use with caution.
        Overwrite=True,
    )

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:

import json

import boto3


def load_params(namespace: str, env: str) -> dict:
    """Load parameters from SSM Parameter Store.

    :namespace: The application namespace.
    :env: The current application environment.
    :return: The config loaded from Parameter Store.
    """
    config = {}
    path = f"/{namespace}/{env}/"
    ssm = boto3.client("ssm", region_name="us-east-1")
    more = None
    args = {"Path": path, "Recursive": True, "WithDecryption": True}
    while more is not False:
        if more:
            args["NextToken"] = more
        params = ssm.get_parameters_by_path(**args)
        for param in params["Parameters"]:
            key = param["Name"].split("/")[3]
            config[key] = json.loads(param["Value"])
        more = params.get("NextToken", False)
    return config

If you want to avoid loading your config each time your Lambda function is called you can store the results in a global variable. This leverages Amazon's feature that doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.
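
As a rough sketch of that caching approach (illustrative only, not from the original post: it reuses the load_params() function above, and the APP_NAMESPACE and APP_ENV environment variables are my own invention), it could look something like this:

import os

# Module-level cache: this global survives between invocations of a warm Lambda
# container, so Parameter Store is only hit on a cold start.
_CONFIG = None

def get_config() -> dict:
    """Return the app config, loading it from Parameter Store at most once per container."""
    global _CONFIG
    if _CONFIG is None:
        # load_params() is the function from the example above.
        _CONFIG = load_params(os.environ["APP_NAMESPACE"], os.environ["APP_ENV"])
    return _CONFIG

def handler(event, context):
    config = get_config()
    # ... use config to do the real work ...
    return {"statusCode": 200}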

On the command line, things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:

$(aws ssm get-parameters-by-path --with-decryption --path /my-app/dev/ | jq -r '.Parameters[] | "export " + (.Name | split("/")[3] | ascii_upcase | gsub("-"; "_")) + "=" + .Value + ";"')

Make sure you have jq and the AWS CLI installed and configured.

Conclusion

Amazon's Systems Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.

Ben Martin: CNC made close up lens filter holder

Tue, 2018-10-02 12:20
Close up filters attach to the end of a camera lens and allow you to take photos closer to the subject than you normally would have been able to do. This is very handy for electronics and other work, as you can get clear images of circuit boards and other small detail. I recently got a collection of 3 such filters which didn't come with any sort of real holder; the container they shipped in was not really designed for longer term use.


The above is the starting design for a filter holder, cut in layers from walnut and stacked together to create the enclosure. The inside is shown below, where the outer diameter can hold the 80mm black ring and the inner circles are 70mm and are there to keep the filters from touching each other. Close up filters can be quite fish-eyed looking, with a substantial curve to the lens on the filter, so a gap is needed to keep each filter away from the next one. A little felt is used to cushion the filter from the walnut itself, which adds roughly 1.5mm to the design so the felt layers all have space to live as well.



The bottom has little feet which extend slightly beyond the tangent of the circle so they both make good contact with the ground and there is no rocking. Using two very cheap hinges works well in this design to try to minimize the sideways movement (slop) in the hinges themselves. A small leather strap will finish the enclosure off allowing it to be secured closed.

It is wonderful to be able to turn something like this around. I can only imagine what the world looks like from the perspective of somebody who is used to machining with 5 axis CNC.



OpenSTEM: Helping Migrants to Australia

Fri, 2018-09-28 20:05
The end of the school year is fast approaching with the third term either over or about to end and the start of the fourth term looming ahead. There never seems to be enough time in the last term with making sure students have met all their learning outcomes for the year and with final […]

David Rowe: Codec 2 2200 Candidate D

Fri, 2018-09-28 20:04

Every time I start working on Deep Learning and Codec 2 I get side tracked! This time, I started developing a reference codec that could be used to explore machine learning, however the reference codec was sounding pretty good early in its development so I have pushed it through to a fully quantised state. For lack of a better name it’s called candidate D, as that’s where I am up to in a series of codec prototypes.

The previous Codec 2 2200 post described candidate C. That also evolved from a “quick effort” to develop a reference codec to explore my deep learning ideas.

Learning about Vector Quantisation

This time, I explored Vector Quantisation (VQ) of spectral magnitude samples. I feel my VQ skills are weak, so I did a bit of reading. I really enjoy learning, especially in areas I have been fooling around in for a while but never really understood. It’s a special feeling when the theory clicks into place with the practical.

So I have these vectors of K=40 spectral magnitude samples that I want to quantise. Forty dimensions is a bit much to handle, so to get a feel for the data I started out by plotting smaller 2 and 3 dimensional slices. Here are 2D and 3D scatter plots of adjacent samples in the vector:


The data is highly correlated, almost a straight line relationship. An example of a 2-bit, 2D vector quantiser for this data might be the points (0,0) (20,20) (30,30) (40,40). Consider representing the same data with two 1D (scalar) quantisers over the same 2 bit range (0,20,30,40). This would take 4 bits in total, and be wasteful as it would represent points that would never occur, such as (60,0).
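
To make that comparison concrete, here is a minimal sketch of my own (Python, not the codec's Octave) of matching a 2D sample against that 4-point codebook versus quantising each dimension independently with a 2-bit scalar quantiser:

import numpy as np

# The 2-bit, 2D vector quantiser from the example above: 4 codebook points on the diagonal.
vq_codebook = np.array([[0, 0], [20, 20], [30, 30], [40, 40]], dtype=float)

# Two 1D (scalar) quantisers over the same range: 2 bits each, 4 bits in total.
scalar_levels = np.array([0, 20, 30, 40], dtype=float)

def vq_quantise(x):
    """Return the codebook entry closest to x in the minimum MSE sense."""
    errors = np.sum((vq_codebook - x) ** 2, axis=1)
    return vq_codebook[np.argmin(errors)]

def scalar_quantise(x):
    """Quantise each dimension on its own, ignoring the correlation between them."""
    return np.array([scalar_levels[np.argmin((scalar_levels - xi) ** 2)] for xi in x])

x = np.array([22.0, 19.0])    # correlated data sits close to the diagonal
print(vq_quantise(x))         # [20. 20.] using 2 bits
print(scalar_quantise(x))     # also [20. 20.], but spending 4 bits on 16 combinations, most of which (e.g. (40, 0)) never occur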

[1] helped me understand the relationship between covariance and VQ, using 2D vectors. For Candidate D I extended this to K=40 dimensions, the number of samples I am using for the spectral magnitudes. Then [2] (a thirty year old paper!) describes how the DCT relates to vector quantisation and the eigenvector/value rotations described in [1]. I vaguely remember snoring my way through eigen-thingies at math lectures in University!

My VQ work to date has used minimum Mean Square Error (MSE) to train and match vectors. I have been uncomfortable with MSE matching for a while, as I have observed poor choices in matching vectors to speech. For example, if the target vector falls off sharply at high frequencies (say an LPF at 3500 Hz), the VQ will try to select a vector that matches that fall off, and ignore smaller, more perceptually important features like formants.

VQs are often trained to minimise the average error. They tend to cluster VQ points closer to those samples that are more likely to occur. However I have found that beneath a certain threshold, we can’t hear the quantisation error. In Codec 2 it’s hard to hear any distortion when spectral magnitudes are quantised to 6 dB steps. This suggests that we are wasting bits with fine quantiser steps, and that there may be better ways to design VQs, for example a uniform grid of points that covers a few standard deviations of data on the scatter plots above.

I like the idea of uniform quantisation across vector dimensions and the concepts I learnt during this work allowed me to do just that. The DCT effectively lets me use scalar quantisation of each vector element, so I can easily choose any quantiser shape I like.

Spectral Quantiser

Candidate D uses a similar design and bit allocation to Candidate C. Candidate D uses K=40 resampling of the spectral magnitudes, to help preserve narrow high frequency formants that are present for low pitch speakers like hts1a. The DCT of the rate K vectors is computed, and quantised using a Huffman code.

There are not enough bits to quantise all of the coefficients, so we stop when we run out of bits, typically after 15 or 20 (out of a total of 40) DCTs. On each frame the algorithm tries direct or differential quantisation, and chooses the method with the lowest error.
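
As a rough sketch of that structure (my own Python, not the codec's Octave; the step size is arbitrary and the Huffman coding of the quantised coefficients is omitted), the per-frame direct-versus-differential decision might look like this:

import numpy as np
from scipy.fftpack import dct, idct

NKEPT = 20   # only the first 15-20 of the 40 DCT coefficients get quantised
STEP = 6.0   # illustrative quantiser step size, in dB

def quantise_frame(rate_k_db, prev_dct_hat):
    """Quantise one K=40 frame of spectral magnitudes (in dB).

    rate_k_db: the current frame's K=40 magnitude vector.
    prev_dct_hat: the quantised DCT coefficients carried forward from the previous frame.
    """
    d = dct(rate_k_db, norm="ortho")

    # Direct quantisation of the first NKEPT coefficients.
    direct = np.round(d[:NKEPT] / STEP) * STEP

    # Differential quantisation against the previous frame.
    diff = prev_dct_hat[:NKEPT] + np.round((d[:NKEPT] - prev_dct_hat[:NKEPT]) / STEP) * STEP

    # Choose whichever method gives the lowest error on this frame.
    err_direct = np.sum((d[:NKEPT] - direct) ** 2)
    err_diff = np.sum((d[:NKEPT] - diff) ** 2)
    chosen = direct if err_direct <= err_diff else diff

    d_hat = np.zeros_like(d)
    d_hat[:NKEPT] = chosen
    return idct(d_hat, norm="ortho"), d_hat   # reconstructed magnitudes, coefficients to carry forward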

Results

I have a couple of small databases that I use for listening tests (about 15 samples in total). I feel Candidate D is better than Codec 2 1300, and also Codec 2 2400 for most (but not all) samples.

In particular, Candidate D handles samples with lots of low frequency energy better, e.g. cq_ref and kristoff in the table below.

Sample      1300    2400    2200 D
cq_ref      Listen  Listen  Listen
kristoff    Listen  Listen  Listen
me          Listen  Listen  Listen
vk5local_1  Listen  Listen  Listen
ebs         Listen  Listen  Listen

For a high quality FreeDV mode I want to improve speech quality over FreeDV 1600 (which uses Codec 2 1300 plus some FEC bits), and provide better robustness to different speakers and recording conditions. As you can hear – there is a significant jump in quality between the 1300 bit/s codec and candidate D. Implemented as a FreeDV mode, it would compare well with SSB at high SNRs.

Next Steps

There are many aspects of Candidate D that could be explored:

  • Wideband audio, like the work from last year.
  • Back to my original aim of exploring deep learning with Codec 2.
  • Computing the DCT coefficients from the rate L (time varying) magnitude samples.
  • Better time/freq quantisation using a 2D DCT rather than the simple difference in time scheme used for Candidate D.
  • Porting to C and developing a real time FreeDV 2200 mode.

The current candidate D 2200 codec is implemented in Octave, so porting to C is required before it is usable for real world applications, plus some more C to integrate with FreeDV.

If anyone would like to help, please let me know. It’s fairly straightforward C coding, and I have already done the DSP. You’ll learn a lot, and be part of the open source future of digital radio.

Reading Further

[1] A geometric interpretation of the covariance matrix, really helped me understand what was going on with VQ in 2 dimensions, which can then be extended to larger dimensions.

[2] Vector Quantization in Speech Coding, Makhoul et al.

[3] Codec 2 Wideband, previous DCT based Codec 2 work.

James Morris: 2018 Linux Security Summit North America: Wrapup

Fri, 2018-09-28 08:02

The 2018 Linux Security Summit North America (LSS-NA) was held last month in Vancouver, BC.

Attendance continued to grow this year, with a record of 220+ attendees.  Our room was upgraded as a result, with spectacular views.

Linux Security Summit NA 2018, Vancouver, BC

We also had many great proposals and the schedule ended up being a very tight fit.  We’ve asked for an extra day for LSS-NA next year — here’s hoping.

Slides of all presentations are available here: https://events.linuxfoundation.org/events/linux-security-summit-north-america-2018/program/slides/

Videos may be found in this youtube playlist.

Once again, as is typical, the conference was focused around development, somewhat uniquely in the world of security conferences.  It’s interesting to see more attention seemingly being paid to the lower parts of the stack: secure booting, firmware, and hardware roots of trust, as well as the continued efforts in hardening the kernel.

LWN provided some excellent coverage of LSS-NA:

Paul Moore has a brief writeup here.

Thanks to everyone involved in the event for 2018: the speakers, attendees, the program committee, the sponsors, and the organizing team at the Linux Foundation.  LSS-NA would not be possible without all of you!

OpenSTEM: Our interwoven ancestry

Thu, 2018-09-27 16:05
In 2008 a new group of human ancestors – the Denisovans – was defined on the basis of a single finger knuckle (phalanx) bone discovered in Denisova cave in the Altai mountains of Siberia. A molar tooth, found at Denisova cave earlier (in 2000), was determined to be of the same group. Since then extensive work […]

Gary Pendergast: Straight White Guy Discovers Diversity and Inclusion Problem in Open Source

Tue, 2018-09-25 00:04

This is a bit of a strange post for me to write, as it’s a topic I’m quite inexperienced in. I’ll warn you straight up: there’s going to be a lot of talking about my thought processes, going off on tangents, and a bit of over-explaining myself for good measure. Think of it as something like high school math, where you had to “show your work”, demonstrating how you arrived at the answer. 20 years later, it turns out there really is a practical use for high school math.

Michael Still: Scared Weird Frozen Guy

Sat, 2018-09-22 12:00

The true life story of a kid from Bribie Island (I’ve been there!) running a marathon in Antarctica, via being a touring musical comedian, doing things like this:

This book is an interesting and light read, and came kindly recommended by Michael Carden, who pretty much insisted I take the book off him at a cafe. I don’t regret reading it and would recommend it to people looking for a light autobiography for a rainy (and perhaps cold) evening or two.

Oh, and the Scared Weird Little Guys of course are responsible for this gem…

This book is highly recommended and now I really want to go for a run.

Title: Scared Weird Frozen Guy
Author: Rusty Berther
Genre: Comedians
Release Date: 2012
Pages: 325

After 20 incredible years as part of a musical comedy duo, Scared Weird Little Guy, Rusty Berther found himself running a marathon in Antarctica. What drove him to this? In this hilarious and honest account of his life as a Scared Weird Little Guy, and his long journey attempting an extreme physical and mental challenge at the bottom of the world, Rusty examines where he started from, and where he just might be going to.

Russell Coker: Words Have Meanings

Fri, 2018-09-21 02:03

As a follow-up to my post with Suggestions for Trump Supporters [1] I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try and reclaim the word for its original use or whether we should just accept that it’s a lost cause. But generally, based on context, it’s clear which meaning is intended. There is also some overlap between the definitions: some people who like to experiment with computers conduct experiments with computers they aren’t permitted to use. Some people who are career computer criminals started out experimenting with computers for fun.

But sometimes words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle; they should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn’t add any value to any discussion apart from demonstrating the fact that the person who makes such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist” which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone was to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist, they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they said “Russia has always been shit” then it would be a clear statement, people can agree or disagree with that but everyone knows what is meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different to most of the world (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and Americans should reject the Republican party because of their Russian connection it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

Related posts:

  1. TED – Defining Words I recently joined the community based around the TED conference...
  2. political compass It appears that some people don’t understand what right-wing means...
  3. Terms of Abuse for Minority Groups Due to the comments on my blog post about Divisive...

Linux Users of Victoria (LUV) Announce: LUV October 2018 Workshop: CoCalc and Nectar

Tue, 2018-09-18 02:03
Start: Oct 20 2018 12:30
End: Oct 20 2018 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond
Link: http://luv.asn.au/meetings/map

CoCalc and Nectar

Paul Leopardi will give a talk and live demo of the cloud services that he is currently using: CoCalc, to host Jupyter notebooks, Python code, and similar, and the Nectar research cloud service, to host a PostgreSQL database and a Plotly Dash dashboard to disseminate his research results.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



Linux Users of Victoria (LUV) Announce: LUV October 2018 Main Meeting: Non-alphabetic languages / Haskell

Tue, 2018-09-18 02:03
Start: Oct 2 2018 18:30
End: Oct 2 2018 20:30
Location: Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053
Link: http://www.melbourne.vic.gov.au/community/hubs-bookable-spaces/kathleen-syme-lib...

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, October 2, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Wen Lin, Using Linux/FOSS in non-alphabetic languages like Chinese
  • Shannon Pace, Haskell
Using Linux/FOSS in non-alphabetic languages like Chinese

Wen will discuss different input methods on a keyboard first designed only for alphabet-based languages.

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.



Gary Pendergast: The Mission: Democratise Publishing

Sun, 2018-09-16 16:04

It’s exciting to see the Drupal Gutenberg project getting under way, it makes me proud of the work we’ve done ensuring the flexibility of the underlying Gutenberg architecture. One of the primary philosophies of Gutenberg’s technical architecture is platform agnosticism, and we can see the practical effects of this practice coming to fruition across a variety of projects.

Yoast are creating new features for the block editor, as well as porting existing features, which they’re able to reuse in the classic editor.

Outside of WordPress Core, the Automattic teams who work on Calypso have been busy adding Gutenberg support, in order to make the block editor interface available on WordPress.com. Gutenberg and Calypso are large JavaScript applications, built with strong opinions on design direction and technical architecture, and with significant component overlap. That these two projects can function together at all is something of an unsung engineering feat, one that is difficult to fully appreciate.

If we reached the limit of Gutenberg’s platform agnosticism here, it would still be a successful project.

But that’s not where the ultimate goals of the Gutenberg project stand. From early experiments in running the block editor as a standalone application, to being able to compile it into a native mobile component, and now seeing it running on Drupal, Gutenberg’s technical goals have always included a radical level of platform agnosticism.

Better Together

Inside the WordPress world, significant effort and focus has been on ensuring backwards compatibility with existing WordPress sites, plugins, and practices. Given that WordPress is such a hugely popular platform, it’s exceedingly important to ensure this is done right. With Gutenberg expanding outside of the WordPress world, however, we’re seeing different focuses and priorities arise.

The Gutenberg Cloud service is a fascinating extension being built as part of the Drupal Gutenberg project, for example. It provides a method for new blocks to be shared and discovered, and the sample hero block sets a clear tone of providing practical components that can be rapidly put together into a full site. While we’ve certainly seen similar services appear for the various site builder plugins, this is the first one (that I’m aware of, at least) built specifically for Gutenberg.

By making the Gutenberg experience available for everyone, regardless of their technical proficiency, experience, or even preferred platform, we pave the way for a better future for all.

Democratising Publishing

You might be able to guess where this is going.

David Rowe: Porting a LDPC Decoder to a STM32 Microcontroller

Sat, 2018-09-15 10:04

A few months ago, FreeDV 700D was released. In that post, I asked for volunteers to help port 700D to the STM32 microcontroller used for the SM1000. Don Reid, W7DMR stepped up – and has been doing a fantastic job porting modules of C code from the x86 to the STM32.

Here is a guest post from Don, explaining how he has managed to get a powerful LDPC decoder running on the STM32.

LDPC for the STM32

The 700D mode and its LDPC function were developed and used on desktop (x86) platforms. The LDPC decoder is implemented in the mpdecode_core.c source file.

We’d like to run the decoder on the SM1000 platform which has an STM32F4 processor. This requires the following changes:

  • The code used doubles in several places, while the stm32 has only single precision floating point hardware.
  • It was thought that the memory used might be too much for a system with just 192k bytes of RAM.
  • There are 2 LDPC codes currently supported: HRA_112_112, used in 700D, and H2064_516_sparse, used for Balloon Telemetry. While only the 700D configuration needed to work on the STM32 platform, any changes made to the mainstream code needed to keep working with the H2064_516_sparse code.

Testing

Before making changes it was important to have a well defined test process to validate new versions. This allowed each change to be validated as it was made. Without this the final debugging would likely have been very difficult.

The ldpc_enc utility can generate standard test frames and the ldpc_dec utility receives the frames and measures bit errors, so errors can be detected directly and the BER computed. ldpc_enc can also output soft decision symbols to emulate what the modem would receive and pass into the LDPC decoder. A new utility, ldpc_noise, was written to add AWGN to the sample values between the above utilities. Here is a sample run:

$ ./ldpc_enc /dev/zero - --sd --code HRA_112_112 --testframes 100 | ./ldpc_noise - - 1 | ./ldpc_dec - /dev/null --code HRA_112_112 --sd --testframes

single sided NodB = 1.000000, No = 1.258925
code: HRA_112_112
code: HRA_112_112
Nframes: 100
CodeLength: 224 offset: 0
measured double sided (real) noise power: 0.640595
total iters 3934
Raw Tbits..: 22400 Terr: 2405 BER: 0.107
Coded Tbits: 11200 Terr: 134 BER: 0.012

ldpc_noise is passed a “No” (N-zero) level of 1dB. With Eb = 0dB, Eb/No = -1dB, and we get a 10% raw BER, and 1% after LDPC decoding. This is a typical operating point for 700D.

A shell script (ldpc_check) combines several runs of these utilities, checks the results, and provides a final pass/fail indication.

All changes were made to new copies of the source files (named *_test*) so that current users of codec2-dev were not disrupted, and so that the behaviour could be compared to the “released” version.

Unused Functions

The code contained several functions which are not used anywhere in the FreeDV/Codec2 system. Removing these made it easier to see the code that was used and allowed the removal of some variables and record elements to reduce the memory used.

First Compiles

The first attempt at compiling for the stm32 platform showed that the code required more memory than was available on the processor. The STM32F405 used in the SM1000 system has 128k bytes of main RAM.

The largest single item was the DecodedBits array, which was used to save the results for each iteration, using 32 bit integers, one per decoded bit.

int *DecodedBits = calloc( max_iter*CodeLength, sizeof( int ) );

With 100 iterations and a CodeLength of 224, that is 100 x 224 x 4 bytes, almost 90k bytes!

The decode function used for FreeDV (SumProducts) used only the last decoded set. So the code was changed to save only one pass of values, using 8 bit integers. This reduced the ~90k bytes to just 224 bytes!

The FreeDV 700D mode requires one LDPC decode every 160ms. At this point the code compiled and ran, but was too slow – using around 25ms per iteration, or 300 – 2500ms per frame!

C/V Nodes

The two main data structures of the LDPC decoder are c_nodes and v_nodes. Each is an array where each node contains additional arrays. In the original code these structures used over 17k bytes for the HRA_112_112 code.

Some of the elements of the c and v nodes (index, socket) are indexes into these arrays. Changing these from 32 bit to 16 bit integers and changing the sign element into an 8 bit char saved about 6k bytes.

The next problem was the run time. Each 700D frame must be fully processed in 160 ms and the decoder was taking several times this long. The CPU load was traced to the phi0() function, which was calling two maths library functions. After optimising the phi0 function (see below) the largest use of time was the index computations of the nested loops which accessed these c and v node structures.

With each node having separate arrays for index, socket, sign, and message these indexes had to be computed separately. By changing the node structures to hold an array of sub-nodes instead this index computation time was significantly reduced. An additional benefit was about a 4x reduction in the number of memory blocks allocated. Each allocation block includes additional memory used by malloc() and free() so reducing the number of blocks reduces memory use and possible heap fragmentation.

Additional time was saved by only calculating the degree elements of the c and v nodes at start-up rather than for every frame. That data is kept in memory that is statically allocated when the decoder is initialized. This costs some memory but saves time.

This still left the code calling malloc several hundred times for each frame and then freeing that memory later. This sort of memory allocation activity has been known to cause troubles in some embedded systems and is usually avoided. However the LDPC decoder needed too much memory to allow it to be statically allocated at startup and not shared with other parts of the code.

Instead of allocating an array of sub-nodes for each c or v node, a single array of bytes is passed in from the parent. The initialization function which calculates the degree elements of the nodes also counts up the memory space needed and reports this to its caller. When the decoder is called for a frame, the node’s pointers are set to use the space of this array.

Other arrays that the decoder needs were added to this to further reduce the number of separate allocation blocks.

This leaves the decisions of how to allocate and share this memory up to a higher level of the code. The plan is to continue to use malloc() and free() at a higher level initially. Further testing can be done to look for memory leakage and optimise overall memory usage on the STM32.

PHI0

There is a non linear function named “phi0” which is called inside several levels of nested loops within the decoder. The basic operation is:

phi0(x) = ln( (e^x + 1) / (e^x - 1) )

The original code used double precision exp() and log(), even though the input, output, and intermediate values are all floats. This was probably an oversight. Changing to the single precision versions expf() and logf() provided some improvements, but not enough to meet our CPU load goal.

The original code used a piecewise approximation for some input values. This was extended to cover the full range of inputs. The code was also structured differently to make it faster. The original code had a sequence of if () else if () else if () … This can take a long time when there are many steps in the approximation. Instead, two ranges of input values are covered with linear steps implemented as table lookups.

The third range of inputs is non linear and is handled by a binary tree of comparisons to reduce the number of levels. All of this code is implemented in a separate file to allow either the original or the optimised version of phi0 to be used.

The ranges of inputs are:

x >= 10             result always 0
10 > x >= 5         steps of 1/2
5 > x >= 1/16       steps of 1/16
1/16 > x >= 1/4096  use 1/32, 1/64, 1/128, ..., 1/4096
1/4096 > x          result always 10

The range of values that will appear as inputs to phi0() can be represented as a fixed point value stored in a 32 bit integer. By converting to this format at the beginning of the function, the code for all of the comparisons and lookups is reduced to shifts and integer operations. The step levels use powers of 2, which lets the table index computations use shifts and makes the fraction constants of the comparisons simple ones that the ARM instruction set can create efficiently.
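
As a sketch of the idea (written in Python for clarity rather than the actual STM32 C, and only covering the 5 > x >= 1/16 range with a table; the other ranges fall back to the exact function here), the exact phi0 and a table-lookup approximation might look like this:

import numpy as np

def phi0_exact(x):
    """phi0(x) = ln((e^x + 1) / (e^x - 1))."""
    return np.log((np.exp(x) + 1.0) / (np.exp(x) - 1.0))

# Lookup table for the 5 > x >= 1/16 range, in steps of 1/16 as described above.
STEP = 1.0 / 16.0
TABLE = np.array([phi0_exact(STEP + i * STEP) for i in range(int((5.0 - STEP) / STEP))],
                 dtype=np.float32)

def phi0_approx(x):
    """Piecewise approximation: clamp the extremes, table lookup in the middle."""
    if x >= 10.0:
        return 0.0
    if x < 1.0 / 4096.0:
        return 10.0
    if STEP <= x < 5.0:
        return float(TABLE[int((x - STEP) / STEP)])
    # The 10 > x >= 5 and 1/16 > x >= 1/4096 ranges would use their own coarser
    # power-of-two step tables; fall back to the exact value in this sketch.
    return float(phi0_exact(x))

print(phi0_exact(1.0), phi0_approx(1.0))   # both around 0.77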

Misc

Two of the configuration values are scale factors that get multiplied inside the nested loops. These values are 1.0 in both of the current configurations so that floating point multiply was removed.

Results

The optimised LDPC decoder produces the same output BER as the original.

The optimised decoder uses 12k of heap at init time and needs another 12k of heap at run time. The original decoder just used heap at run time, that was returned after each call. We have traded off the use of static heap to clean up the many small heap allocations and reduce execution time. It is probably possible to reduce the static space further perhaps at the cost of longer run times.

The maximum time to decode a frame using 100 iterations is 60.3 ms and the median time is 8.8 ms, far below our budget of 160ms!

Future Possibilities

The remaining floating point computations in the decoder are addition and subtraction, so the values could be represented with fixed point values to eliminate the floating point operations.

Some values which are computed from the configuration (degree, index, socket) are constants and could be generated at compile time using a utility called by cmake. However this might actually slow down the operation as the index computations might become slower.

The index and socket elements of C and V nodes could be pointers instead of indexes into arrays.

Experiments would be required to ensure these changes actually speed up the decoder.

Bio

Don got his first amateur license in high school but was soon distracted with getting an engineering degree (BSEE, Univ. of Washington), then family and life. He started his IC design career with the CPU for the HP-41C calculator. Then came ICs for printers and cameras, work on IC design tools, and some firmware for embedded systems. Exposure to ARES public service led to a new amateur license – W7DMR – and active involvement with ARES. He recently retired after 42 years and wanted to find an open project that combined radio, embedded systems and DSP.

Don lives in Corvallis, Oregon, USA a small city with the state technical university and several high tech companies.

Open Source Projects and Volunteers

Hi it’s David back again ….

Open source projects like FreeDV and Codec 2 rely on volunteers to make them happen. The typical pattern is people get excited, start some work, then drift away after a few weeks. Gold is the volunteer that consistently works week in, week out until their particular project is done. The number of hours/week doesn’t matter – it’s the consistency that is really helpful to the projects. I have a few contributors/testers/users of FreeDV in this category and I appreciate you all deeply – thank you.

If you would like to help out, please contact me. You’ll learn a lot and get to work towards an open source future for HF radio.

If you can’t help out technically, but would like to support this work, please consider Patreon or PayPal.

Reading Further

LDPC using Octave and the CML library. Our LDPC decoder comes from Coded Modulation Library (CML), which was originally used to support Matlab/Octave simulations.

Horus 37 – High Speed SSTV Images. The CML LDPC decoder was converted to a regular C library, and used for sending images from High Altitude Balloons.

Steve Ports an OFDM modem from Octave to C. Steve is another volunteer who put in a fine effort on the C coding of the OFDM modem. He recently modified the modem to handle high bit rates for voice and HF data applications.

Rick Barnich KA8BMA did a fantastic job of designing the SM1000 hardware. Leading edge, HF digital voice hardware, designed by volunteers.

David Rowe: Tony K2MO Tests FreeDV

Thu, 2018-09-13 10:04

Tony, K2MO, has recently published some fine videos of FreeDV 1600, 700C, and 700D passing through simulated HF channels. The results are quite interesting.

This video shows the 700C mode having the ability to decode with 50% of its carriers removed:

The 700C modem sends two copies of the tx signal at high and low frequencies, a form of diversity to help overcome selective fading. These are then combined at the receiver.

Tony’s next video shows three FreeDV modes passing through a selective fading HF channel simulation:

This particular channel has slow fading, a notch gradually creeps across the spectrum.

Tony originally started testing to determine which FreeDV mode worked best on NVIS paths. He used path parameters based on VOACAP prediction models, which show the relative time delay and signal power for each propagation mode, i.e., F1, F2:

Note the long delay paths (5ms). The CCIR NVIS path model also suggests a path delay of 7ms. That much delay puts the F-layer at 1000 km (well out into space), which is a bit of a puzzle.

This video shows the results of the VOACAP NVIS path:

In this case 700C does better than 700D. The 700C modem (COHPSK) is a parallel tone design, which is more robust to long multipath delays. The OFDM modem used for 700D is configured for multipath delays of up to 2ms, but tends to fall over after that as the “O” for Orthogonal assumption breaks down. It can be configured for longer delays, at a small cost in low SNR performance.

The OFDM modem gives much tighter packing for carriers, which allows us to include enough bits for powerful FEC, and have a very narrow RF bandwidth compared to 700C. FreeDV 700D has the ability to perform interleaving (Tools-Options “FreeDV 700 Options”), which is a form of time diversity. This feature is not widely used at present, but simulations suggest it is worth up to 4dB.

It would be interesting to combine frequency diversity, LDPC, and OFDM in a wider bandwidth signal. If anyone is interested in doing a little C coding to try this let me know.

I’ve actually seen long delay on NVIS paths in the “real world”. Here is a 40M 700D contact between myself and Mark, VK5QI, who is about 40km away from me. Note at times there are notches on the waterfall 200Hz apart, indicating a round trip path delay of 1500km:

Reading Further

Modems for HF Digital Voice Part 1, explaining the frequency diversity used in 700C
Testing FreeDV 700C, shows how to use some built in test features like noise insertion and interfering carriers.
FreeDV 700D
FreeDV User Guide, including new 700D features like interleaving

Russell Coker: Thinkpad X1 Carbon Gen 6

Tue, 2018-09-11 22:03

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when the battery is full – maybe due to my history of buying refurbished laptops, I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying: there are 2 devices for mouse type input, so I need to configure Xorg to only read from the Trackpoint.

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resuming from suspend requires holding down the power button. I’m not sure if it’s a hardware or software issue. But suspend on lid close works correctly, and so does suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down, that said I got a thermal warning during the Debian install process which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

Related posts:

  1. More About the Thinkpad X301 Last month I blogged about the Thinkpad X301 I got...
  2. Thinkpad T420 I’ve owned a Thinkpad T61 since February 2010 [1]. In...
  3. Thinkpad X1 Carbon I just bought a Thinkpad X1 Carbon to replace my...

Jeremy Visser: ABC iview and the ‘Australia tax’

Tue, 2018-09-11 20:04
Unless you have been living in a cave, it is probable that you heard about a federal parliamentary inquiry into IT pricing (somewhat aptly entitled “At what cost? — IT pricing and the Australia tax”) which reported that, amongst other things, online geo-blocking can as much as double pricing for IT products in what is blatant price discrimination. Not only do Australians pay, on average, 42% more than US’ians for Adobe products, and 66% more for Microsoft products, but music (such as the iTunes Store), video games, and e-books (e.

Jeremy Visser: iPads as in-flight entertainment

Tue, 2018-09-11 20:04
I’m writing this whilst sitting on a Qantas flight from Perth to Sydney, heading home after attending the fantastic linux.conf.au 2014. The plane is a Boeing 767, and unlike most flights I have been on in the last decade, this one has no in-flight entertainment system built into the backs of seats. Instead, every passenger is issued with an Apple iPad (located in the back seat pocket), fitted with what appears to be a fairly robust leather jacket emblazoned with the words “SECURITY DEVICE ATTACHED” (presumably to discourage theft).

Jeremy Visser: You reap what you sow

Tue, 2018-09-11 20:04
So the ABC has broken iview for all non–Chrome Linux users. How so? Because the ABC moved iview to use a streaming format supported only by the latest versions of Adobe Flash (e.g. version 11.7, which is available on Windows and OS X), but Adobe have ceased Linux support for Flash as of version 11.2 (for reasons I don’t yet understand, some users report that the older Flash 10.3 continues to work with iview).