Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Stewart Smith: Workaround for opal-prd using 100% CPU

Thu, 2016-10-20 19:00

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that is responsible for some of the runtime diagnostics on OpenPower systems. Although it is a userspace process, it memory-maps (as in mmap) code loaded by early firmware (Hostboot), called the HostBoot RunTime (HBRT), and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in user space gives us benefits such as being able to attach gdb, recover from segfaults, etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. There’s https://github.com/open-power/hostboot/issues/67 for this.

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)
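If you want to confirm the symptom before touching the flash, something like this should do it (a sketch, assuming a systemd-based distro; on other setups check syslog instead):

# look for opal-prd pinning a CPU
top -b -n 1 | grep opal-prd

# look for the HBRT ECC messages shown above
journalctl -u opal-prd.service -n 50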

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. pflash is primarily meant for developers and those who know what they’re doing: you can turn your computer into a brick with it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone https://github.com/open-power/skiboot.git
cd skiboot/external/pflash
make

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel

Note: you cannot just erase the partition or use the pflash option to do an ECC erase; you may render your system unbootable if you get it wrong.

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.

Binh Nguyen: Common Russian Media Themes, Has Western Liberal Capitalist Democracy Failed?, and More

Tue, 2016-10-18 03:22
After watching international media for a while (particularly those who aren't part of the standard 'Western Alliance') you'll realise that there are common themes: they are clearly against the current international order, believe that things will be better if changed, and want the rules changed (especially as they seem to have favoured some countries who went through the World Wars relatively

Tridge on UAVs: CanberraUAV Outback Challenge 2016 Debrief

Mon, 2016-10-17 18:17

I have finally written up an article on our successful Outback Challenge 2016 entry

The members of CanberraUAV are home from the Outback Challenge and life is starting to return to normal after an extremely hectic (and fun!) time preparing our aircraft for this year's challenge. It is time to write up our usual debrief article to give those of you who weren't able to be there some idea of what happened.

For reference here are the articles from the 2012 and 2014 challenges:

http://diydrones.com/profiles/blogs/canberrauav-outback-challenge-2012-debrief
http://diydrones.com/profiles/blogs/canberrauav-outback-challenge-2014-debrief

Medical Express

The Outback Challenge is held every two years in Queensland, Australia. As the challenge was completed by multiple teams in 2014 the organisers needed to come up with a new challenge. The new challenge for 2016 was called "Medical Express" and the challenge was to retrieve a blood sample from Joe at a remote landing site.

outback-joe.jpg

The back-story is that poor Outback Joe is trapped behind flood waters on his remote property in Queensland. Unfortunately he is ill, and doctors at a nearby hospital need a blood sample to diagnose his illness. A UAV is called in to fly a 23km path to a place where Joe is waiting. We only know Joe's approximate position (within 100 meters), so first off the UAV needs to find Joe using an on-board camera. After finding Joe the aircraft needs to find a good landing site in an area littered with obstacles. The landing site needs to be more than 30 meters from Joe (to meet CASA safety requirements) but less than 80 meters (so Joe doesn't have to walk too far).

The aircraft then needs to perform an automatic landing, and then wait for Joe to load the blood sample into an easily accessible carrier. Joe then presses a button to indicate he is done loading the blood sample. The aircraft needs to wait for one minute for Joe to get clear, and then perform an automatic takeoff and flight back to the home location to deliver the blood sample to waiting hospital staff.

That story hides a lot of very challenging detail. For example, the UAV must maintain continuous telemetry contact with the operators back at the base. That needs to be done despite not knowing exactly where the landing site will be until the day before the challenge starts.

Also, the landing area has trees around it and no landing strip, so a normal fixed wing landing and takeoff is very problematic. The organisers wanted teams to come up with a VTOL solution and in this they were very successful, kickstarting a huge effort to develop the VTOL capabilities of multiple open source autopilot systems.

The organisers also provided a strict flight path that teams had to follow to reach the search area where Joe is located. The winding path over the rural terrain of Dalby is strictly enforced, with any aircraft breaching the geofence required to immediately and automatically terminate by crashing into the ground.

The organisers also gave quite a wide range of flight distance and weather conditions that the teams had to be able to cope with. The distance to the search area could be up to 30km, meaning a round trip distance of 60km without taking into account all the time spent above the search area trying to find Joe. The teams had to be able to fly in up to 25 knots average wind on the ground, which could mean well over 30 knots in the air.

The mission also needed to be completed in one hour, including the time spent loading the blood sample and circling above Joe.

Russell Coker: Improving Memory

Mon, 2016-10-17 17:02

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things, many of which won’t be needed when they are younger, as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google maps tends to give the more confusing routes (IE routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?


Clinton Roy: In Memory of Gary Curtis

Sun, 2016-10-16 17:00

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One  of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for linux.conf.au keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand, we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.

 



Glen Turner: Activating IPv6 stable privacy addressing from RFC7217

Thu, 2016-10-13 11:34
Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217.* This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme in a Linux which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:

[connection]
ipv6.ip6-privacy=0
ipv6.addr-gen-mode=stable-privacy

Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.
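If you would rather change a single connection instead of setting a global default, nmcli can set the same properties per connection. This is just a sketch; the connection name eth0 is an assumption, check nmcli connection show for yours:

# set stable-privacy addressing on one connection only
nmcli connection modify eth0 ipv6.addr-gen-mode stable-privacy
nmcli connection modify eth0 ipv6.ip6-privacy 0
nmcli connection up eth0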

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the final 64 bits, b03a:86e8:e163:2714 in the example above) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address (the Network Prefix, Subnet Identifier and Prefix Length) should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.

Maxim Zakharov: One more fix for AMP WordPress plugin

Thu, 2016-10-13 11:05

With the recent AMP update at Google you may notice an increased number of AMP parsing errors in your search console. They look like:

The mandatory tag 'html ⚡ for top-level html' is missing or incorrect.

Some plugins, e.g. Add Meta Tags, may alter language_attributes() using the 'language_attributes' filter, adding XML-related attributes which are disallowed (see www.ampproject.org/docs/reference/spec#required-markup), and that causes the error mentioned above.

I have made a fix solving this problem and opened a pull request for the WordPress AMP plugin; you may see it here:
github.com/Automattic/amp-wp/pull/531

Linux Users of Victoria (LUV) Announce: LUV Main November 2016 Meeting: The Internet of Toys / Special General Meeting / Functional Programming

Tue, 2016-10-11 03:02
Start: Nov 2 2016 18:30
End: Nov 2 2016 20:30
Location:

6th Floor, 200 Victoria St. Carlton VIC 3053

Link:  http://luv.asn.au/meetings/map

Speakers:

• Nick Moore, The Internet of Toys: ESP8266 and MicroPython
• Special General Meeting
• Les Kitchen, Functional Programming

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 627 326.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

November 2, 2016 - 18:30


Linux Users of Victoria (LUV) Announce: LUV Beginners October Meeting: Build a Simple RC Bot!

Sun, 2016-10-09 21:03
Start: Oct 15 2016 12:30
End: Oct 15 2016 16:30
Location:

Infoxchange, 33 Elizabeth St. Richmond

Link:  http://luv.asn.au/meetings/map

Build a Simple RC Bot! Getting started with Arduino and Android

In this introductory talk, Ivan Lim Siu Kee will take you through the process of building a simple remote controlled bot. Find out how you can get started on building simple remote controlled bots of your own. While effort has been made to keep the presentation as beginner friendly as possible, some programming experience is still recommended to get the most out of this talk.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

October 15, 2016 - 12:30


Craig Sanders: Converting to a ZFS rootfs

Sun, 2016-10-09 17:03

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.
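To double-check which by-id names the new disks picked up before building the pool (the whole-disk links are the ones without a -partN suffix):

ls -l /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*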

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1MB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3
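For reference only (not part of the original procedure), roughly the same layout could be created non-interactively with sgdisk, using the sector boundaries from the table above; a sketch, untested:

# recreate the layout on /dev/sdp without the interactive gdisk session
sgdisk --zap-all /dev/sdp
sgdisk -n 1:40:2047             -t 1:EF02 -c 1:"BIOS boot partition" /dev/sdp
sgdisk -n 2:2048:2099199        -t 2:EF00 -c 2:"EFI System"          /dev/sdp
sgdisk -n 3:2099200:6293503     -t 3:8300 -c 3:"Linux filesystem"    /dev/sdp
sgdisk -n 4:6293504:14682111    -t 4:8200 -c 4:"Linux swap"          /dev/sdp
sgdisk -n 5:14682112:455084031  -t 5:BF07 -c 5:"Solaris Reserved 1"  /dev/sdp
sgdisk -n 6:455084032:459278335 -t 6:BF08 -c 6:"Solaris Reserved 2"  /dev/sdp
sgdisk -n 7:459278336:537234734 -t 7:BF09 -c 7:"Solaris Reserved 3"  /dev/sdp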

I then cloned the partition table to the other three SSDs with this little script:

clone-partitions.sh

#! /bin/bash

src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
    sgdisk --replicate="/dev/$tgt" /dev/"$src"
    sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm for /boot, the zpool, and the root filesystem.

Most rootfs-on-ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1", and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of the hostname and the rootfs dataset, giving rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs. i.e. ganesh/root

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

create.sh

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'

md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5
# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
# zpool set needs the pool name as well as the property
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root (the mountpoint must exist first)
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
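For example, a bit-torrent download dataset along those lines could be created like this (a sketch; the pool name ganesh is from this machine and the mountpoint is arbitrary):

# small recordsize limits COW fragmentation from bit-torrent's random writes
zfs create -o recordsize=16K -o mountpoint=/data/torrents ganesh/torrents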

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
    -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
    -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).

Then:

hn="$(hostname -s)" time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s but that’s good enough… the Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.
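As an illustration only (not from the original post), a basic sequential read job with fio against the new pool might look like this; the directory and size are arbitrary:

# lay out a 2G file in the new pool and read it back sequentially
mkdir -p /ganesh/root/tmp/fio
fio --name=seqread --directory=/ganesh/root/tmp/fio --rw=read \
    --bs=1M --size=2G --numjobs=1 --group_reporting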

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

chroot.sh

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
    mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and ext4 on raid-1 /boot:

/ganesh/root  /      zfs   defaults                                         0  0
/dev/md0      /boot  ext4  defaults,relatime,nodiratime,errors=remount-ro   0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
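For completeness, setting up the swap later would be along these lines; a sketch only (not from the original post), assuming the -part4 swap partitions created above and matching entries added to /etc/fstab afterwards:

# initialise and enable swap on each of the four swap partitions
for d in /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*-part4 ; do
    mkswap "$d"
    swapon "$d"
done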

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest copying them to /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type ef02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:

fix-ata-links.sh

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot otherwise you’ll get that error every time you run update-grub in future.
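One way to run it on every boot is a small systemd one-shot unit; this is a sketch (the unit name and the script path are assumptions, not from the original post):

cat << EOF > /etc/systemd/system/fix-ata-links.service
[Unit]
Description=Symlink /dev/disk/by-id/ata-* into /dev for grub-probe
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/fix-ata-links.sh

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable fix-ata-links.service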

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /

umount-zfs-root.sh

#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
    umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes
  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
        mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
        ata-Crucial_CT275MX300SSD1_163313AB002C-part6

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.


Maxim Zakharov: Data structure for word relative cooccurence frequencies, counts and prefix tree

Sat, 2016-10-08 19:05

Trying to solve the task of calculating word cooccurrence relative frequencies fast, I have created an interesting data structure, which also allows calculating counts for the first word in each pair being checked; and it creates a word prefix tree during text processing, which can be used for further text analysis.

The source code is available on GitHub: github.com/Maxime2/cooccurrences

When you execute the make command you should see the following output:

cc -O3 -funsigned-char cooccur.c -o cooccur -lm
Example 1
./cooccur a.txt 2 < a.in | tee a.out
Checking pair d e
Count:3 cocount:3 Relative frequency: 1.00
Checking pair a b
Count:3 cocount:1 Relative frequency: 0.33
Example 2
./cooccur b.txt 3 < b.in | tee b.out
Checking pair a penny
Count:3 cocount:3 Relative frequency: 1.00
Checking pair penny earned
Count:4 cocount:1 Relative frequency: 0.25

The cooccur program takes two arguments: the filename of a text file to process and the size of the word window within which to calculate relative frequencies. The program then takes pairs of words from its standard input, one pair per line, and calculates the count of appearances of the first word in the processed text and the cooccurrence count for the pair in that text. If the second word appears more than once in the window, only one appearance is counted.
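So, reusing b.txt and the window size from Example 2 above, an ad-hoc query for a single pair could look like this (a sketch):

echo "a penny" | ./cooccur b.txt 3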

Examples were taken here:

Michael Davies: Fixing broken Debian packages

Fri, 2016-10-07 11:06
In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue, doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you get the dreaded unmet dependencies roadblock, which prevents future package manager updates and operations, i.e.:

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulicly fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (and thanks to tonyb for pointing me in the right direction!)

So what we want to do is to create a dummy Debian package that will satisfy the libqt4-gui requirement.  So first off, let's uninstall vidyodesktop, and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd  ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <michael@the-davies.net>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF
And now, let's build and install the dummy package:
equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb
And now vidyodesktop installs cleanly!
sudo dpkg -i VidyoDesktopInstaller-*.deb

James Morris: LinuxCon Europe Kernel Security Slides

Fri, 2016-10-07 01:03

Yesterday I gave an update on the Linux kernel security subsystem at LinuxCon Europe, in Berlin.

The slides are available here: http://namei.org/presentations/linux_kernel_security_linuxconeu2016.pdf

The talk began with a brief overview and history of the Linux kernel security subsystem, and then I provided an update on significant changes in the v4 kernel series, up to v4.8.  Some expected upcoming features were also covered.  Skip to slide 31 if you just want to see the changes.  There are quite a few!

It’s my first visit to Berlin, and it’s been fascinating to see the remnants of the Cold War, which dominated life in 1980s when I was at school, but which also seemed so impossibly far from Australia.

Brandenburg Gate, Berlin. Unity Day 2016.

I hope to visit again with more time to explore.

Russell Coker: 10 Years of Glasses

Mon, 2016-10-03 23:02

10 years ago I first blogged about getting glasses [1]. I’ve just ordered my 4th pair of glasses. When you buy new glasses the first step is to scan your old glasses to use as a base point for assessing your eyes: instead of going in cold and trying lots of different lenses they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses, though they may be hesitant to do so if you don’t buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I’m wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

        New    4 Years Old  Really Old
R-SPH   0.00   0.00         -0.25
R-CYL  -1.50  -1.50         -1.50
R-AXS   180    179           180
L-SPH   0.00  -0.25         -0.25
L-CYL  -1.00  -1.00         -1.00
L-AXS     5     10           179

The Specsavers website has a good description of what this means [2]. In summary SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism, which is where the focal lengths for horizontal and vertical aren’t equal. AXS is the angle for astigmatism. There are other fields which you can read about on the Specsavers page, but they aren’t relevant for me.

The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.

Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this [3]). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with a 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read them with my left eye, but it is fuzzy, and with my right eye reading is like reading the last line of an eye test, something I can do if I concentrate a lot for test purposes but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse not better) then my ability to read the text with my left eye is worse than with my right eye without glasses; this is as expected, as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.

The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.


Colin Charles: Speaking in October 2016

Sun, 2016-10-02 21:02
  • I’m thrilled to naturally be at Percona Live Europe Amsterdam from Oct 3-5 2016. I have previously talked about some of my sessions but I think there’s another one on the schedule already.
  • LinuxCon Europe – Oct 4-6 2016. I won’t be there for the whole conference, but hope to make the most of my day on Oct 6th.
  • MariaDB Developer’s meeting – Oct 6-8 2016 – skipping the first day, but will be there all day 2 and 3. I even have a session on day 3, focused on compatibility with MySQL, a topic I deeply care about (session schedule)
  • OSCON London – Oct 17-20 2016 – a bit of a late entrant. I do have a talk titled “Forking successfully”, which wonders if a branch makes more sense, how to fork, and what happens when parity comes.
  • October MySQL London Meetup – Oct 17 2016 – I’m already in London, I wouldn’t miss this meetup for the world! There’s no agenda yet, but I think the discussion should be fun.

Russell Coker: Hostile Web Sites

Sun, 2016-10-02 17:02

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.

Wget Overview

Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (EG MS Office), or a web browser. Wget is a very simple command-line program to download web pages; it doesn’t attempt to interpret or display them.

As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget.

In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as flash, rendering the page for display on a GUI system, and javascript – features that wget lacks.

The Profit Motive

An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.

However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading [2].

Privilege Separation

If you run wget in a regular Xterm in the same session you use for reading email etc then if there is an exploitable bug in wget then it can be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command so that it can’t attack you. Don’t run “su – otheraccount” as it is possible for a compromised program to escape from that.
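As a concrete illustration (a sketch; the URL is obviously made up, and the account name comes from the paragraph above):

# run wget as a throwaway user so a wget exploit can't reach your own files
ssh otheraccount@localhost 'wget --output-document=/tmp/page.html "http://example.com/suspect-link"'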

I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.

It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.

Browser Features

Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet but prevents random web sites from using Flash and Java which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google black list before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam, if a user sent out a URL that’s on Google’s blacklist I would lock their account without doing any further checks.

Conclusion

I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.

Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient, it also often won’t download the content that you want (EG in the case of HTML frames).
