Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Craige McWhirter: Teaching Python with Minecraft - Digital K - PyConAu 2016

Fri, 2016-08-12 12:19

by Digital K.

The video of the talk is here.

  • Recommended for ages 10 - 16
  • Why Minecraft?
    • Kids’ familiarity with it makes it highly engaging.
    • Relatively low cost.
    • Code their own creations.
    • Kids already use the command line in Minecraft.
  • Use the Minecraft API to receive commands from Python (see the sketch after this list):
    • Place blocks
    • Move players
    • Build faster
    • Build larger structures and shapes
    • Easy duplication
    • Animate blocks (e.g. colour change)
    • Create games
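
For example, a minimal sketch using the mcpi library (an assumption on my part; the talk doesn’t pin down an exact setup, and this presumes a server exposing the Minecraft Pi API on its default port 4711):

from mcpi.minecraft import Minecraft
from mcpi import block

# Connect to a Minecraft server exposing the Pi API
mc = Minecraft.create("127.0.0.1", 4711)

# Build a 5x5 stone platform under the player
pos = mc.player.getTilePos()
for dx in range(-2, 3):
    for dz in range(-2, 3):
        mc.setBlock(pos.x + dx, pos.y - 1, pos.z + dz, block.STONE.id)

mc.postToChat("Platform built!")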
Option 1:

How it works:

  • Import Minecraft API libraries to your code.
  • Push it to the server.
  • Run the Minecraft client.

What you can Teach:

  • Co-ordinates
  • Time
  • Multiplication
  • Data
  • Creating art with maths
  • Trigonometry
  • Geo fencing
  • Design
  • Geography

Connect to External Devices:

  • Connect to Raspberry Pi or Arduino.
  • Connect the game to events in the real world.


Craige McWhirter: Scripting the Internet of Things - Damien George - PyConAu 2016

Fri, 2016-08-12 12:19

Damien George

Damien gave an excellent overview of using MicroPython with microcontrollers, particularly the ESP8266 board. The talk covered a broad and interesting history of the project and its current efforts.

Craige McWhirter: ESP8266 and MicroPython - Nick Moore - PyConAu 2016

Fri, 2016-08-12 12:19

Nick Moore

Slides.

  • Price and feature set are a game changer for hobbyists.
  • Makes for a more playful platform.
  • Uses serial programming mode to flash memory
  • Strict power requirements
  • The easy way to use them is with a NodeMCU board, for only a little more money.
  • Tool kits:
    • Lua: NodeMCU.
    • JavaScript: Espruino.
    • Forth, Lisp, BASIC (?!).
  • MicroPython works on the ESP8266 (see the sketch after this list):
    • Drives microcontrollers.
    • Drives the onboard WiFi.
    • Can run a small webserver to view and control devices.
    • WebREPL can be used to copy files, as can mpy-utils.
    • Lacks:
      • an operating system.
      • multiprocessing.
      • a debugger / profiler.
  • Flobot:
    • Compiles via MicroPython.
    • A visual dataflow language for robots.
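
A rough sketch of what that looks like in practice (my assumptions, not the talk’s exact code: MicroPython already flashed, and the onboard LED on GPIO2 as on most ESP8266 boards):

import machine
import socket

# GPIO2 drives the onboard LED on most ESP8266 boards
led = machine.Pin(2, machine.Pin.OUT)

# Tiny web server: toggle the LED and report its state on each request
s = socket.socket()
s.bind(('0.0.0.0', 80))
s.listen(1)
while True:
    conn, addr = s.accept()
    led.value(not led.value())
    conn.send(('HTTP/1.0 200 OK\r\n\r\nLED is now %d\n' % led.value()).encode())
    conn.close()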

The ESP8266 and MicroPython provide an accessible entry into working with microcontrollers.

Chris Smart: Command line password management with pass

Wed, 2016-08-10 21:03

Why use a password manager in the first place? Well, they make it easy to have strong, unique passwords for each of your accounts on every system you use (and that’s a good thing).

For years I’ve stored my passwords in Firefox, because it’s convenient, and I never bothered with all those other fancy password managers. The problem is that it locked me into Firefox, and I found myself still needing to remember passwords for servers and other things.

So a few months ago I decided to give the command line tool Pass a try. It’s essentially a shell script wrapper for GnuPG and stores your passwords (with any notes) in individually encrypted files.

I love it.

Pass is less convenient in terms of web browsing, but it’s more convenient for everything else that I do (which is often on the command line). For example, I have painlessly integrated Pass into Mutt (my email client) so that passwords are not stored in the configuration files.

As a side-note, I installed the Password Exporter Firefox Add-on and exported my passwords. I then added this whole file to Pass so that I can start copying old passwords as needed (I didn’t want them all).

About Pass

Pass uses public-key cryptography to encrypt each password that you want to store as an individual file. To access the password you need the private key and passphrase.

So, some nice things about it are:

  • Short and simple shell script
  • Uses standard GnuPG to encrypt each password into individual files
  • Password files are stored on disk in a hierarchy of your own choosing
  • Stored in Git repo (if desired)
  • Can also store notes
  • Can copy the password temporarily to copy/paste buffer
  • Can show, edit, or copy password
  • Can also generate a password
  • Integrates with anything that can call it
  • Tab completion!

So it’s nothing super fancy, “just” a great little wrapper for good old GnuPG and text files, backed by git. Perfect!

Install Pass

Installation of Pass (and Git) is easy:
sudo dnf -y install git pass

Prepare keys

You’ll need a pair of keys, so generate these if you haven’t already (this creates the keys under ~/.gnupg). I’d probably recommend RSA and RSA, 4096 bits long, using a decent passphrase and setting a valid email address (you can also separately use these keys to send signed emails and receive encrypted emails).
gpg2 --full-gen-key

We will need the key’s fingerprint to give to pass. It should be a string of 40 characters, something like 16CA211ACF6DC8586D6747417407C4045DF7E9A2.
gpg2 --list-secret-keys

Note: Your fingerprint (and public keys) can be public, but please make sure that you keep your private keys secure! For example, don’t copy the ~/.gnupg directory to a public place (even though they are protected by a nice long passphrase, right? Right?).

Initialise pass

Before we can use Pass, we need to initialise it. Pass in the fingerprint you got from the output of gpg2 --list-secret-keys above; the last 8 characters are enough (e.g. 5DF7E9A2).
pass init 5DF7E9A2

This creates the basic directory structure in the .password-store directory in your home directory. At this point it just has a plain text file (.password-store/.gpg-id) with the fingerprint of the public key that it should use.

Adding git backing

If you haven’t already, you’ll need to tell Git who you are. Using the email address that you used when creating the GPG key is probably good.
git config --global user.email "you@example.com"
git config --global user.name "Your Name"

Now, go into the password-store directory and initialise it as a Git repository.
cd ~/.password-store
git init
git add .
git commit -m "initial commit"
cd -

Pass will now automatically commit changes for you!

Hierarchy

As mentioned, you can create any hierarchy you like. I quite like to use subdirectories and sort by function first (like mail, web, server), then domains (like gmail.com, twitter.com) and then server or username. This seems to work quite nicely with tab completion, too.

You can rearrange this at any time, so don’t worry too much!

Storing a password

Adding a password is simple and you can create any hierarchy that you want; you just tell pass to add a new password and where to store it. Pass will prompt you to enter the password.

For example, you might want to store your password for a machine at server1.example.com – you could do that like so:
pass add servers/example.com/server1

This creates the directory structure on disk and your first encrypted file!
~/.password-store/
└── servers
    └── example.com
        └── server1.gpg
 
2 directories, 1 file

Run the file command on that file and it should tell you that it’s encrypted.
file ~/.password-store/servers/example.com/server1.gpg

But is it really? Go ahead, cat that gpg file, you’ll see it’s encrypted (your terminal will probably go crazy – you can blindly enter the reset command to get it back).
cat ~/.password-store/servers/example.com/server1.gpg

So this file is encrypted – you can safely copy it anywhere (again, please just keep your private key secure).

Git history

Browse to the .password-store dir and run some git commands; you’ll see your history, and showing a commit will prompt for your GPG passphrase to decrypt the files stored in Git.

cd ~/.password-store
git log
git show
cd -

If you wanted to, you could push this to another computer as a backup (perhaps even via a git-hook!).
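
For example, a post-commit hook could push every change to a backup remote. A minimal sketch (the ‘backup’ remote name is my assumption; add it first with git remote add, then save this as .git/hooks/post-commit and mark it executable):

#!/usr/bin/env python3
# Hypothetical .git/hooks/post-commit for ~/.password-store:
# push each new commit to a remote named 'backup'.
import subprocess

subprocess.run(['git', 'push', 'backup', 'master'], check=False)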

Storing a password, with notes

By default Pass just prompts for the password, but if you want to add notes at the same time you can do that also. Note that the password should still be on its own on the first line, however.
pass add -m mail/gmail.com/username

If you use two-factor authentication (which you should be using), this is useful for also storing the account password and recovery codes.

Generating and storing a password

As I mentioned, one of the benefits of using a password manager is to have strong, unique passwords. Pass makes this easy by including the ability to generate one for you and store it in the hierarchy of your choosing. For example, you could generate a 32 character password (without special characters) for a website you often log into, like so:
pass generate -n web/twitter.com/username 32

Getting a password out

Getting a password out is easy; just tell Pass which one you want. It will prompt you for your passphrase, decrypt the file for you, read the first line and print it to the screen. This can be useful for scripting (more on that below).

pass web/twitter.com/username

Most of the time though, you’ll probably want to copy the password to the copy/paste buffer; this is also easy, just add the -c option. Passwords are automatically cleared from the buffer after 45 seconds.
pass -c web/twitter.com/username

Now you can log into Twitter by entering your username and pasting the password.

Editing a password

Similarly you can edit an existing password to change it, or add as many notes as you like. Just tell Pass which password to edit!
pass edit web/twitter.com/username

Copying and moving a password

It’s easy to copy an existing password to a new one, just specify both the original and new file.
pass copy servers/example.com/server1 servers/example.com/server2

If the hierarchy you created is not to your liking, it’s easy to move passwords around.
pass mv servers/example.com/server1 computers/server1.example.com

Of course, you could script this!

Listing all passwords

Pass will list all your passwords in a tree nicely for you.
pass list

Interacting with Pass

As pass is a nice standard shell program, you can interact with it easily. For example, to get a password from a script you could do something like this.
#!/usr/bin/env bash

echo "Getting password.."
# Ask pass for the entry; it prints the password (the file's first line) on stdout
PASSWORD="$(pass servers/testing.com/server2)"
if [[ $? -ne 0 ]]; then
    echo "Sorry, failed to get the password"
    exit 1
fi
echo "..and we got it, ${PASSWORD}"

Try it!

There’s lots more you can do with Pass; why not check it out yourself!

Chris Smart: Setting up OpenStack Ansible All-in-one behind a proxy

Tue, 2016-08-09 09:03

Setting up OpenStack Ansible (OSA) All-in-one (AIO) behind a proxy requires a couple of settings, but it should work fine (we’ll also configure the wider system). There are two types of git repos that we should configure for (unless you’re an OpenStack developer): those that use http (or https) and those that use the git protocol.

Firstly, this assumes an Ubuntu 14.04 server install (with at least 60GB of free space on / partition).

All commands are run as the root user, so switch to root first.

sudo -i

Export variables for ease of setup

Setting these variables here means that you can copy and paste the relevant commands from the rest of this blog post.

Note: Make sure that your proxy is fully resolvable and then replace the settings below with your actual proxy details (leave out user:password if you don’t use one).

export PROXY_PROTO="http"
export PROXY_HOST="user:password@proxy"
export PROXY_PORT="3128"
export PROXY="${PROXY_PROTO}://${PROXY_HOST}:${PROXY_PORT}"

First, install some essentials (reboot after upgrade if you like).
echo "Acquire::http::Proxy \"${PROXY}\";" \
> /etc/apt/apt.conf.d/90proxy
apt-get update && apt-get upgrade
apt-get install git openssh-server rsync socat screen vim

Configure global proxies

For any http:// or https:// repositories we can just set a shell environment variable. We’ll set this in /etc/environment so that all future shells have it automatically.

cat >> /etc/environment << EOF
export http_proxy="${PROXY}"
export https_proxy="${PROXY}"
export HTTP_PROXY="${PROXY}"
export HTTPS_PROXY="${PROXY}"
export ftp_proxy="${PROXY}"
export FTP_PROXY="${PROXY}"
export no_proxy=localhost
export NO_PROXY=localhost
EOF

Source this to set the proxy variables in your current shell.
source /etc/environment

Tell sudo to keep these environment variables
echo 'Defaults env_keep = "http_proxy https_proxy ftp_proxy \
no_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY NO_PROXY"' \
> /etc/sudoers.d/01_proxy

Configure Git

For any git:// repositories we need to make a script that uses socat (you could use netcat) and tell Git to use this as the proxy.

cat > /usr/local/bin/git-proxy.sh << EOF
#!/bin/bash
# \$1 = hostname, \$2 = port
exec socat STDIO PROXY:${PROXY_HOST}:\${1}:\${2},proxyport=${PROXY_PORT}
EOF

Make it executable.
chmod a+x /usr/local/bin/git-proxy.sh

Tell Git to proxy connections through this script.
git config --global core.gitProxy /usr/local/bin/git-proxy.sh

Clone OpenStack Ansible

OK, let’s clone the OpenStack Ansible repository! We’re living on the edge and so will build from the tip of the master branch.
git clone git://git.openstack.org/openstack/openstack-ansible \
/opt/openstack-ansible
cd /opt/openstack-ansible/

If you would prefer to build from a specific release, such as the latest stable, feel free to now check out the appropriate tag. For example, at the time of writing this is tag 13.3.1. You can get a list of tags by running the git tag command.

# Only run this if you want to build the 13.3.1 release
git checkout -b tag-13.3.1 13.3.1

Or if you prefer, you can checkout the tip of the stable branch which prepares for the upcoming stable minor release.

# Only run this if you want to build the latest stable code
git checkout -b stable/mitaka origin/stable/mitaka

Prepare log location

If something goes wrong, it’s handy to be able to have the log available.

export ANSIBLE_LOG_PATH=/root/ansible-log

Bootstrap Ansible

Now we can kick off the ansible bootstrap. This prepares the system with all of the Ansible roles that make up an OpenStack environment.
./scripts/bootstrap-ansible.sh

Upon success, you should see:

System is bootstrapped and ready for use.

Bootstrap OpenStack Ansible All In One

Now let’s bootstrap the all in one system. This configures the host with appropriate disks and network configuration, etc., ready to run the OpenStack environment in containers.
./scripts/bootstrap-aio.sh

Run the Ansible playbooks

The final task is to run the playbooks, which sets up all of the OpenStack components on the host and containers. Before we proceed, however, this requires some additional configuration for the proxy.

The user_variables.yml file under the root filesystem at /etc/openstack_deploy/user_variables.yml is where we configure environment variables for OSA to export and set some other options (again, note the leading / before etc – do not modify the template file at /opt/openstack-ansible/etc/openstack_deploy by mistake).

cat >> /etc/openstack_deploy/user_variables.yml << EOF
#
## Proxy settings
proxy_env_url: "\"${PROXY}\""
no_proxy_env: "\"localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}\""
global_environment_variables:
  HTTP_PROXY: "{{ proxy_env_url }}"
  HTTPS_PROXY: "{{ proxy_env_url }}"
  NO_PROXY: "{{ no_proxy_env }}"
  http_proxy: "{{ proxy_env_url }}"
  https_proxy: "{{ proxy_env_url }}"
  no_proxy: "{{ no_proxy_env }}"
EOF

Secondly, if you’re running the latest stable, 13.3.x, you will need to make a small change to the pip package list for the keystone (authentication component) container. Currently it pulls in httplib2 version 0.8, however this does not appear to respect the NO_PROXY variable and so keystone provisioning fails. Version 0.9 seems to fix this problem.

sed -i 's/state: present/state: latest/' \
/etc/ansible/roles/os_keystone/tasks/keystone_install.yml

Now run the playbooks!

Note: This will take a long time, perhaps a few hours, so run it in a screen or tmux session.

screen
time ./scripts/run-playbooks.sh

Verify containers

Once the playbooks complete, you should be able to list your running containers and see their status (there will be a couple of dozen).
lxc-ls -f

Log into OpenStack

Now that the system is complete, we can start using OpenStack!

You should be able to use your web browser to log into Horizon, the OpenStack Dashboard, at your AIO host’s IP address.

If you’re not sure what IP that is, you can find out by looking at which address port 443 is running on.

netstat -ltnp |grep 443

The admin user’s password is available in the user_secrets.yml file on the AIO host.
grep keystone_auth_admin_password \
/etc/openstack_deploy/user_secrets.yml

A successful login should reveal the admin dashboard.

Enjoy your OpenStack Ansible All-in-one!

Stewart Smith: Windows 3.11 nostalgia

Mon, 2016-08-08 21:00

Because OS/2 didn’t go so well… let’s try something I’m a lot more familiar with. To be honest, the last time I used Windows in earnest on the desktop was around 3.11, so I kind of know it back to front (fun fact: I’ve read the entire Windows 3.0 manual).

It turns out that once you have MS-DOS installed in qemu, installing Windows 3.11 is trivial. I didn’t even change any settings for qemu, I just basically specced everything to be very minimal (50MB RAM, 512MB disk).

Windows 3.11 was not a fun time as soon as you had to do anything… nightmares of drivers, CONFIG.SYS and AUTOEXEC.BAT plague my mind. But hey, it’s damn fast on a modern processor.

Matthew Oliver: Swift + Xena would make the perfect digital preservation solution

Mon, 2016-08-08 15:03

Those of you might not know, but for some years I worked at the National Archives of Australia on what was, at the time, their leading digital preservation platform. It was awesome, opensource, and they paid me to hack on it.
The most important parts of the platform were Xena and Digital Preservation Recorder (DPR). Xena was, and hopefully still is, amazing. It takes in a file and guesses the format. If it’s a closed proprietary format and there is the right Xena plugin, it converts the file to an open standard and optionally turns it into a .xena file ready to be ingested into the digital repository for long term storage.

We did this knowing that proprietary formats change so quickly that if you want to store a file long term (20, 40, 100 years) you won’t be able to open it. An open format, on the other hand, is openly documented, so even if there is no software that can read it any more you can still get your data back.

Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other opensource daemons we wrote which ensured we didn’t lose things to bitrot, we’d keep things duplicated and separated. It was a lot of work, and the size of the space required kept growing.

Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift was around back then, because it’s exactly what is required for the DPR side. It duplicates, infinitely scales, checks checksums, quarantines and corrects. It keeps everything replicated and separated, and does it all automatically. Swift is also highly customisable. You can create your own middleware and insert it into the proxy pipeline, or into any of the storage nodes’ pipelines, and have it do whatever you need: add metadata, do something to the object on ingest or whenever the object is read, update some other system… really, you can do whatever you want. Maybe even wrap Xena into some middleware.
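
To make that concrete: Swift middleware is plain Python WSGI, wired in via a paste filter factory. A minimal sketch of the shape it takes (hypothetical names, not a real Xena wrapper, and the real thing also needs an entry in the proxy server’s pipeline config):

# Sketch of Swift-style WSGI middleware (illustrative only)
class IngestMarker(object):
    def __init__(self, app, conf):
        self.app = app
        self.conf = conf

    def __call__(self, env, start_response):
        # Tag every object PUT so a downstream preservation system could react
        if env.get('REQUEST_METHOD') == 'PUT':
            env['HTTP_X_OBJECT_META_INGESTED_BY'] = 'xena-sketch'
        return self.app(env, start_response)

def filter_factory(global_conf, **local_conf):
    # Entry point referenced from the pipeline configuration
    def factory(app):
        return IngestMarker(app, local_conf)
    return factory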

Going one step further, IBM have been working on a thing called storlets, which uses Swift and Docker to do some work on objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena, so this might also be a perfect fit.

Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at NAA, and it got my mind thinking about all this, so I thought I’d place my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and need some ideas. Best part: the software is open source and also free!

Happy preserving.

David Rowe: SM2000 – Part 8 – Gippstech 2016 Presentation

Mon, 2016-08-08 07:03

Justin, VK7TW, has published a video of my SM2000 presentation at Gippstech, which was held in July 2016.

Brady O’Brien, KC9TPA, visited me in June. Together we brought the SM2000 up to the point where it is decoding FreeDV 2400A waveforms at 10.7MHz IF, which we demonstrate in this video. I’m currently busy with another project but will get back to the SM2000 (and other FreeDV projects) later this year.

Thanks Justin and Brady!

FreeDV and this video were also mentioned in an interesting Reddit post/debate from Gary KN4AQ on VHF/UHF Digital Voice: a peek into the future.

Stewart Smith: OS/2 Warp Nostalgia

Sun, 2016-08-07 21:00

Thanks to the joys of abandonware websites, you can play with some interesting things from the 1990s and before. One of those things is OS/2 Warp. Now, I had a go at OS/2 sometime in the 1990s after being warned by a friend that it was “pretty much impossible” to get networking going. My experience of OS/2 then was not revolutionary… It was, well, something else on a PC that wasn’t that exciting and didn’t really add a huge amount over Windows.

Now, I’m nowhere near insane enough to try this on my actual computer, and I’ve managed to not accumulate any ancient PCs….

Luckily, qemu helps with an emulator! If you don’t set your CPU to Pentium (or possibly something one or two generations newer) then things don’t go well. Nor do they go well without a disk that by today’s standards would be considered beyond tiny. Also, if you dare to try to use an unpartitioned hard disk – OH MY are you in trouble.

Also, try to boot off “Disk 1” and you get this:
Possibly the most friendly error message ever! But, once you get going (by booting the Installation floppy)… you get to see this:

and indeed, you are doing the time warp of Operating Systems right here. After a bit of fun, you end up in FDISK:

Why I can’t create a partition… WHO KNOWS. But, I tried again with a 750MB disk that already had a partition on it and…. FAIL. I think this one was due to partition type, so I tried again with partition type of 6 – plain FAT16, and not W95 FAT16 (LBA). Some memory is coming back to me of larger drives and LBA and nightmares…

But that worked!

Then, the OS/2 WARP boot screen… which seems to stick around for a long time…..

and maybe I could get networking….

Ladies and Gentlemen, the wonders of having to select DHCP:

It still asked me for some config, but I gleefully ignored it (because that must be safe, right!?) and then I needed to select a network adapter! Due to a poor choice on my part, I started with a rtl8139, which is conspicuously absent from this fine list of Token Ring adapters:

and then, more installing……

before finally rebooting into….

and that, is where I realized there was beer in the fridge and that was going to be a lot more fun.

OpenSTEM: Remembering Seymour Papert

Thu, 2016-08-04 17:11

Today we’re remembering Seymour Papert, as we’ve received news that he died a few days ago (31st July 2016) at the age of 88.  Throughout his life, Papert did so much for computing and education; he even worked with the famous Jean Piaget, who helped Papert further develop his views on children and learning.

For us at OpenSTEM, Papert is also special because in the late 1960s (yep that far back) he invented the Logo programming language, used to control drawing “turtles”.  The Mirobot drawing turtle we use in our Robotics Program is a modern descendant of those early (then costly) adventures.

I sadly never met him, but what a wonderful person he was.

For more information, see the media release at MIT’s Media Lab (which he co-founded) or search for his name online.

 

Lev Lafayette: Supercomputers: Current Status and Future Trends

Thu, 2016-08-04 17:11

The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important initial contribution was the CDC 6600 in the 1960s, due to its advanced performance over competitors. Over time, major technological advancements included vector processing, cluster architecture, massive processor counts, GPGPU technologies, and multidimensional torus interconnect architectures.


Simon Lyall: Putting Prometheus node_exporter behind apache proxy

Tue, 2016-08-02 09:02

I’ve been playing with Prometheus monitoring lately. It is fairly new software that is getting popular. Prometheus works using a pull architecture. A central server connects to each thing you want to monitor every few seconds and grabs stats from it.

In the simplest case you run the node_exporter on each machine, which gathers about 600-800 (!) metrics such as load, disk space and interface stats. This exporter listens on port 9100 and effectively works as an http server that responds to “GET /metrics HTTP/1.1” and spits out several hundred lines like:

node_forks 7916
node_intr 3.8090539e+07
node_load1 0.47
node_load15 0.21
node_load5 0.31
node_memory_Active 6.23935488e+08

Other exporters listen on different ports and export stats for apache or mysql while more complicated ones will act as proxies for outgoing tests (via snmp, icmp, http). The full list of them is on the Prometheus website.

So my problem was that I wanted to check my virtual machine that is on Linode. The machine only has a public IP and I didn’t want to:

  1. Allow random people to check my servers stats
  2. Have to setup some sort of VPN.

So I decided that the best way was to just put a user/password on the exporter.

However the node_exporter does not implement authentication itself, since the authors wanted to avoid maintaining lots of security code. So I decided to put it behind a reverse proxy using apache mod_proxy.

Step 1 – Install node_exporter

Node_exporter is a single binary that I started via an upstart script. As part of the upstart script I told it to listen on localhost port 19100 instead of port 9100 on all interfaces.

# cat /etc/init/prometheus_node_exporter.conf
description "Prometheus Node Exporter"
start on startup
chdir /home/prometheus/
script
    /home/prometheus/node_exporter -web.listen-address 127.0.0.1:19100
end script

Once I start the exporter a simple “curl 127.0.0.1:19100/metrics” makes sure it is working and returning data.

Step 2 – Add Apache proxy entry

First make sure apache is listening on port 9100. On Ubuntu edit the /etc/apache2/ports.conf file and add the line:

Listen 9100

Next create a simple apache proxy without authentication (don’t forget to enable mod_proxy too):

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Allow from all
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://127.0.0.1:19100/
    ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

This simply takes requests on port 9100 and forwards them to localhost port 19100. Now reload apache and test via curl to port 9100. You can also use netstat to see what is listening on which ports:

Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.1:19100  0.0.0.0:*        LISTEN  8416/node_exporter
tcp6       0      0 :::9100          :::*             LISTEN  8725/apache2

 

Step 3 – Get Prometheus working

I’ll assume at this point you have other servers working. What you need to do now is add the following entries for your server in your prometheus.yml file.

First add basic_auth into your scrape config for “node” and then add your servers, e.g.:

- job_name: 'node'
  scrape_interval: 15s
  basic_auth:
    username: prom
    password: mypassword
  static_configs:
    - targets: ['myserver.example.com:9100']
      labels:
        group: 'servers'
        alias: 'myserver'

Now restart Prometheus and make sure it is working. You should see the following lines in your apache logs, and stats for the server should start appearing:

10.212.62.207 - - [31/Jul/2016:11:31:38 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:31:53 +0000] "GET /metrics HTTP/1.1" 200 11398 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:32:08 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"

Notice that connections are 15 seconds apart, get http code 200 and are around 11k in size. The Prometheus server is already sending authentication, but apache doesn’t require it yet.

Step 4 – Enable Authentication.

Now create an apache password file:

htpasswd -cb /home/prometheus/passwd prom mypassword

and update your apache entry to the following to enable authentication:

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
        #
        AuthType Basic
        AuthName "Password Required"
        AuthBasicProvider file
        AuthUserFile "/home/prometheus/passwd"
        Require valid-user
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://127.0.0.1:19100/
    ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

After you reload apache you should see the following:

10.212.56.135 - prom [01/Aug/2016:04:42:08 +0000] "GET /metrics HTTP/1.1" 200 11394 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:23 +0000] "GET /metrics HTTP/1.1" 200 11392 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:38 +0000] "GET /metrics HTTP/1.1" 200 11391 "-" "Go-http-client/1.1"

Note that the “prom” in field 3 indicates that we are logging in for each connection. If you try to connect to the port without authentication you will get:

Unauthorized
This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.
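
If you want to verify the authenticated endpoint from a script rather than a browser, a quick sketch (the hostname is hypothetical; the credentials are the ones created with htpasswd above):

#!/usr/bin/env python3
# Fetch the proxied metrics endpoint with basic auth and count the metrics.
import requests

r = requests.get('http://myserver.example.com:9100/metrics',
                 auth=('prom', 'mypassword'), timeout=10)
r.raise_for_status()
metrics = [line for line in r.text.splitlines()
           if line and not line.startswith('#')]
print('scraped %d metrics' % len(metrics))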

That is pretty much it. Note that you will need to add additional VirtualHost entries for more ports if you run other exporters on the server.

 


Chris Samuel: Playing with Shifter – NERSC’s tool to use Docker containers in HPC

Tue, 2016-08-02 01:01

Early days yet, but playing with NERSC’s Shifter to let us use Docker containers safely on our test RHEL6 cluster is looking really interesting (given you can’t use Docker itself under RHEL6, and if you could the security concerns would cancel it out anyway).

To use a pre-built Ubuntu Xenial image, for instance, you tell it to pull the image:

[samuel@bruce ~]$ shifterimg pull ubuntu:16.04

There’s a number of steps it goes through, first retrieving the container from the Docker Hub:

2016-08-01T18:19:57 Pulling Image: docker:ubuntu:16.04, status: PULLING

Then disarming the Docker container by removing any setuid/setgid bits, etc, and repacking as a Shifter image:

2016-08-01T18:20:41 Pulling Image: docker:ubuntu:16.04, status: CONVERSION

…and then it’s ready to go:

2016-08-01T18:21:04 Pulling Image: docker:ubuntu:16.04, status: READY

Using the image from the command line is pretty easy:

[samuel@bruce ~]$ cat /etc/lsb-release
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
[samuel@bruce ~]$ shifter --image=ubuntu:16.04 cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

and the shifter runtime will copy in site-specified /etc/passwd, /etc/group and /etc/nsswitch.conf files so that you can do user/group lookups easily, as well as map in site-specified filesystems, so your home directory is just where it would normally be on the cluster.

[samuel@bruce ~]$ shifter --image=debian:wheezy bash --login
samuel@bruce:~$ pwd
/vlsci/VLSCI/samuel

I’ve not yet got to the point of configuring the Slurm plugin so you can queue up a Slurm job that will execute inside a Docker container, but very promising so far!

Correction: a misconception on my part – Shifter doesn’t put a Slurm batch job inside the container. It could, but there are good reasons why it’s better to leave that to the user (soon to be documented on the Shifter wiki page for Slurm integration).

This item was originally posted here:

Playing with Shifter – NERSC’s tool to use Docker containers in HPC

OpenSTEM: Australia moves fast: North-West actually

Fri, 2016-07-29 15:03

This story is about the tectonic plate on which we reside.  Tectonic plates move, and so continents shift over time.  They generally go pretty slow though.

What about Australia?  It appears that every year, we move 11 centimetres West and 7 centimetres North.  For a tectonic plate, that’s very fast.

The last time scientists marked our location on the globe was in 1994, with the Geocentric Datum of Australia 1994 (GDA1994) – generally called GDA94 in geo-spatial tools (such as QGIS).  So that datum came into force 22 years ago.  Since then, we’ve moved an astonishing 1.5 metres!  You may not think much of this, but right now it actually means that if you use a GPS in Australia to get coordinates, and plot it onto a map that doesn’t correct for this, you’re currently going to be off by 1.5 metres.  Depending on what you’re measuring/marking, you’ll appreciate this can be very significant and cause problems.

Bear in mind that, within Australia, GDA94 is not wrong as such, as its coordinates are relative to points within Australia. However, the positioning of Australia in relation to the rest of the globe is now outdated.  Positioning technologies have also improved.  So there’s a new datum planned for Australia, GDA2020.  By the time it comes into force, we’ll have shifted by 1.8 metres relative to GDA94.

We can have some fun with all this:

  • If you stand and stretch both your arms out, the tips of your fingers are about 1.5 metres apart – of course this depends a bit on the length of your arms, but it’ll give you a rough idea.  Now imagine a pipe or cable in the ground at a particular GPS position, then move 1.5 metres.  You could clean miss that pipe or cable… oops!  Unless your GPS is configured to use a datum that gets updated, such as WGS84.  However, if you had the pipe or cable plotted on a map that’s in GDA94, it becomes messy again.
  • If you use a tool such as Google Earth, where is Australia actually?  That is, will a point be plotted accurately, or be 1.5 metres out, or somewhere in between?
    Well, that would depend on when the most recent broad scale photos were taken, and what corrections the Google Earth team possibly applies during processing of its data (for example, Google Earth uses a different datum – WGS 84 for its calculations).
    Interesting question, isn’t it…
  • Now for a little science/maths challenge.  The northernmost tip of Australia, Cape York, is just 150 km south of Papua New Guinea (PNG).  Presuming our plate maintains its present course and speed, roughly how many years until the visible bits (above sea level) of Australia and PNG collide?  Post your answer with working/reasoning in a comment to this post!  Think about this carefully and do your research.  Good luck!

Binh Nguyen: Neuroscience in PSYOPS, World Order, and More

Fri, 2016-07-29 00:16
One of the funny things that I've heard is that people from one side believe that people from another side are somehow 'brainwashed' into believing what they do. As we saw in our last post, there is a lot of manipulation and social engineering going on if you think about it: http://dtbnguyen.blogspot.com/2016/07/social-engineeringmanipulation-rigging.html We're going to examine just exactly why

Josh Stewart: License quibbles (aka Hiro & linux pt 2)

Thu, 2016-07-28 23:03

Since my last post regarding the conversion of media from Channel 9’s Catch Up service, I have been in discussion with the company behind this technology, Hiro-Media. My concerns were primarily around their use of the open source xvid media codec, and whilst I am not a contributor to xvid (and hence do not have any ownership under copyright), I believe it is still my right under the GPL to request a copy of the source code.

First off I want to thank Hiro-Media for their prompt and polite responses. It is clear that they take the issue of license violations very seriously. Granted, it would be somewhat hypocritical for a company specialising in DRM to not take copyright violations within their own company seriously, but it would not be the first time.

I initially asserted that, due to Hiro’s use (and presumed modification) of xvid code, this software was considered a derivative and therefore bound in its entirety by the GPL. Hiro-Media denied this, stating they use xvid in its original, unmodified state and hence Hiro is simply a user of, rather than a derivative of, xvid. This is a reasonable statement, albeit one that is difficult to verify. I want to stress at this point that in my playing with the Hiro software I have NOT in any way reverse engineered it, nor have I attempted to decompile their binaries in any way.

In the end, the following points were revealed:

  • The Mac version of Hiro uses a (claimed) unmodified version of the Perian Quicktime component
  • The Windows version of Hiro currently on Channel 9’s website IS indeed modified, what Hiro-Media terms an ‘accidental internal QA’ version. They state that they have sent a new version to Channel 9 that corrects this. The xvid code they are using can be found at http://www.koepi.info/xvid.html
  • Neither version has included a GPL preamble within their EULA as required. Again, I am assured this is to be corrected ASAP.

I want to reiterate that Hiro-Media have been very cooperative about this and appear to have genuine concern. I am impressed by the Hiro system itself and whilst I am still not a fan of DRM in general, this is by far the best compromise I have seen to date. They just didn’t have a linux version.

This brings me to my final, slightly more negative point. On my last correspondence with Hiro-Media, they concluded with the following:

Finally, please note our deepest concerns as to any attempt to misuse our system, including the content incorporated into it, as seems to be evidenced in your website. Prima facia, such behavior consists a gross and fundamental breach of our license (which you have already reviewed). Any such misuse may cause our company, as well as any of our partners, vast damages.

I do not wish to label this a threat (though I admit to feeling somewhat threatened by it), but I do want to clear up a few things about what I have done. The statement alleges I have violated Hiro’s license (pot? kettle? black?) however this is something I vehemently disagree with. I have read the license very carefully (obviously, as I went looking for the GPL) and the only relevant part is:

You agree that you will not modify, adapt or translate, or disassemble, decompile, reverse engineer or otherwise attempt to discover the source code of the Software.

Now I admit to being completely guilty of a single part of this: I have attempted to discover the source code. BUT (and this is a really big BUT), I have attempted this by emailing Hiro-Media and asking them for it, NOT by decompiling (or in any other way inspecting) the software. In my opinion, the inclusion of that specific part in their license also goes against the GPL, as such restrictions are strictly forbidden by it.
But back to the point, I have not modified, translated, disassembled, decompiled or reverse engineered the Hiro software. Additionally, I do not believe I have adapted it either. It is still doing exactly the same thing as it was originally, that is taking an incoming video stream, modifying it and decoding it. Importantly, I do not modify any files in any way. What I have altered is how Quicktime uses the data returned by Hiro. All my solution does is (using official OSX/Quicktime APIs) divert the output to a file rather than to the screen. In essence I have not done anything different to the ‘Save As’ option found in Quicktime Pro, however not owning Quicktime Pro, I merely found another way of doing this.

So that’s my conclusion. I will reply to Hiro-Media with a link to this post asking whether they still take issue with what I have done and take things from there.
To the guys from Hiro if you are reading this, I didn’t do any of this to start trouble. All I wanted was a way to play these files on my linux HTPC, with or without ads. Thank you.

Josh Stewart: Channel 9, Catch Up, Hiro and linux

Thu, 2016-07-28 23:03

About 6 months ago, Channel 9 launched their ‘Catch Up’ service. Basically this is their way of fighting piracy and allowing people to download Australian made TV shows to watch on their PC. Now, of course, no ‘old media’ service would possibly do this without the wonders of DRM. Channel 9 though, are taking a slightly different approach.

Instead of the normal style of DRM that prevents you copying the file, Channel 9 employs technology from a company called Hiro. Essentially you install the Hiro player, download the file and watch it. The player will insert unskippable ads throughout the video, supposedly even targeted at your demographic. Now this is actually a fairly neat system, Channel 9 actually encourage you to share the video files over bittorrent etc! The problem, as I’m sure you can guess, is that there’s no player for linux.

So, just to skip to the punchline, yes it IS possible to get these files working on free software (completely legally & without the watermark)! If you just want to know how to do it, jump to the end as I’m going to explain a bit of background first.

Hiro

The Hiro technology is interesting in that it isn’t simply some custom player. The files you download from Channel 9 are actually xvid encoded, albeit a bastard in-bred cousin of what xvid should be. If you simply download the file and play it with vlc or mplayer, it will run, however you will get a nasty watermark over the top of the video and it will almost certainly crash about 30s in when it hits the first advertising blob. There is also some trickiness going on with the audio as, even if you can get the video to keep playing, the audio will jump back to the beginning at this point. Of course, the watermark isn’t just something that’s placed over the top in post-processing like a subtitle; it’s in the video data itself. To remove it you actually need to filter the video to modify the area covered by the watermark and darken/lighten the affected pixels. Sounds crazy and a tremendous amount of work, right? Well thankfully it’s already been done, by Hiro themselves.

When you install Hiro, you don’t actually install a media player, you install either a DirectShow filter or a Quicktime component depending on your platform. This has the advantage that you can use reasonably standard software to play the files. It’s still not much help for linux though.

Before I get onto how to create a ‘normal’ xvid file, I just want to mention something I think should be a concern for free software advocates. As you might know, xvid is an open codec, both for encoding and decoding. Due to the limitations of Quicktime and Windows Media Player, Hiro needs to include an xvid decoder as part of their filter. I’m sure it’s no surprise to anyone, though, that they have failed to release any code for this filter, despite it being based off a GPL’d work. IA(definitely)NAL, but I suspect there’s probably some dodginess going on here.

Using Catchup with free software

Very basically, the trick to getting the video working is that it needs to be passed through the filter provided by Hiro. I tried a number of methods to get the files converted for use with mplayer or vlc and in the end, unfortunately, I found that I needed to be using either Windows or OSX to get it done. Smarter minds than mine might be able to get the DirectShow filter (HiroTransform.ax) working with mplayer in a similar manner to how CoreAVC on linux works, but I had no luck.

But, if you have access to OSX, here’s how to do it:

  1. Download and install the Hiro software for Mac. You don’t need to register or anything, in fact, you can delete the application the moment you finish the install. All you need is the Quicktime component it added.
  2. Grab any file from the Catch Up Service (http://video.ninemsn.com.au/catchuptv). I’ve tested this with Underbelly, but all videos should work.
  3. Install ffmpegx (http://ffmpegx.com)
  4. Grab the following little script: CleanCatch.sh
  5. Run:
    chmod +x CleanCatch.sh
    ./CleanCatch.sh <filename>
  6. Voila. Output will be a file called ‘<filename>.clean.MP4’ and should be playable in both VLC and mplayer.

Distribution

So, I’m the first to admit that the above is a right royal pain to do, particularly the whole requiring-OSX part. To save everyone the hassle though, I believe it’s possible to simply distribute the modified file. Now again, IANAL, but I’ve gone over the Channel 9 website with a fine-tooth comb and can see nothing that forbids me from distributing this newly encoded file. I agreed to no EULA when I downloaded the original video and their site even has the following on it:

You can share the episode with your friends and watch it as many times as you like – online or offline – with no limitations

That whole ‘no limitations’ part is the bit I like. Not only have Channel 9 given me permission to distribute the file, they’ve given it to me unrestricted. I’ve not broken any locks and in fact have really only used the software provided by Channel 9 and a standard transcoding package.

This being the case, I am considering releasing modified versions of Channel 9’s files over bittorrent. I’d love to hear people’s opinions about this before doing so though in case they know more than I (not a hard thing) about such matters.

Josh Stewart: The rallyduino lives

Thu, 2016-07-28 23:03

[Update: The code for the rallyduino can be found at: http://code.google.com/p/rallyduino/]

Amidst a million other things (today is T-1 days until our little bub’s technical due date! No news yet though) I finally managed to get the rallyduino into the car over the weekend. It actually went in a couple of weeks ago, but had to come out again after a few problems were found.

So, the good news: it all works! I wrote an automagic calibration routine that does all the clever number crunching and comes up with the calibration number, so after a quick drive down the road, everything was up and running. My back of the envelope calculation of what the calibration number for our car would be also turned out pretty accurate, only about 4mm per revolution out. The unit, once calibrated, is at least as accurate as the commercial devices and displays considerably more information. It’s also more flexible, as it can switch between modes dynamically and has full control from the remote handset. All in all I was pretty happy; even the instantaneous speed function worked first time.

To give a little background, the box uses a hall effect sensor mounted next to a brake caliper that pulses each time a wheel stud rotates past. This is a fairly common method for rally computers to use and to make life simpler, the rallyduino is pin compatible with another common commercial product, the VDO Minicockpit. As we already had a Minicockpit in the car, all we did was ‘double adapt’ the existing plug and pop in the rallyduino. This means there’s 2 computers running off the 1 sensor, in turn making it much simpler (assuming 1 of them is known to be accurate) to determine if the other is right.
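
The arithmetic behind that calibration number is straightforward; a sketch with made-up figures (the rallyduino derives its number automatically, so this is purely illustrative):

# One pulse per wheel stud, so distance per pulse is
# tyre circumference divided by the number of studs.
tyre_circumference_mm = 1870.0  # made-up figure
studs_per_wheel = 5             # made-up figure

mm_per_pulse = tyre_circumference_mm / studs_per_wheel  # the calibration number

def distance_metres(pulse_count):
    return pulse_count * mm_per_pulse / 1000.0

print(distance_metres(8000))  # metres covered after 8000 pulses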

After taking the car for a bit of a drive though, a few problems became apparent. To explain them, a pic is required:

The 4 devices in the pic are:

  1. Wayfinder electronic compass
  2. Terratrip (The black box)
  3. Rallyduino (Big silver box with the blue screen)
  4. VDO Minicockpit (Sitting on top of the rallyduino)

The major problem should be immediately obvious. The screen is completely unsuitable. It’s both too small and has poor readability in daylight. I’m currently looking at alternatives and it seems like the simplest thing to do is get a 2×20 screen the same physical size as the existing 4×20. This, however, means that there would only be room for a single distance tracker rather than the 2 currently in place. The changeover is fairly trivial as the code, thankfully, is nice and flexible and the screen dimensions can be configured at compile time (from 2×16 to 4×20). Daylight readable screens are also fairly easily obtainable (http://www.crystalfontz.com is the ultimate small screen resource) so it’s just a matter of ordering one. In the long term I’d like to look at simply using a larger 4×20 screen but, as you can see, real estate on the dash is fairly tight.

Speaking of screens, I also found the most amazing little LCD controller from web4robot.com. It has both serial and I2C interfaces, a keypad decoder (with inbuilt debounce), and someone has gone to all the hard work of writing a common arduino interface library for it and other I2C LCD controllers (http://www.wentztech.com/radio/arduino/projects.html). If you’re looking for such an LCD controller and you are working with an arduino, I cannot recommend these units enough. They’re available on eBay for about AU$20 delivered too. As much as I loved the unit from Phil Anderson, it simply doesn’t have the same featureset as this one, nor is it as robust.

So that’s where things are at. Apologies for the brain dump nature of this post, I just needed to get everything down while I could remember it all.

Josh Stewart: 1 + 1 = 3

Thu, 2016-07-28 23:03

No updates to this blog in a while I’m afraid, things have just been far too busy to have had anything interesting (read geeky) to write about. That said, I am indulging and making this post purely to show off.

On Monday 25th May at 8:37am, after a rather long day/night, Mel and I welcomed Spencer Bailey Stewart into the world. There were a few little issues throughout the night (particularly towards the end) and he had some small hurdles to get over in his first hour, but since then he has gone from strength to strength and both he and Mum are now doing amazingly well.
Obligatory Pic:

Spencer Bailey Stewart

He’s a very placid little man and would quite happily sleep through an earthquake, much to our delight. And yes, that is a little penguin he’s holding on to in that pic.

So that’s all really. I am very conscious of not becoming a complete baby bore so unless something actually worth writing about happens, this will hopefully be my only baby post for the sake of a baby post.

Josh Stewart: Boxee iView back online

Thu, 2016-07-28 23:03

Just a quick post to say the ABC iView Boxee app has been updated to version 0.7 and should now be fully functional once again. I apologise to anyone who has been using the app for how long this update has taken, and I wish I could say I’ve been off solving world hunger or something, but in reality I’ve just been flat out with work and family. I’ve also got a few other projects on the go that have been keeping me busy. These may or may not ever reach a stage of public consumption, but if they do it’ll be cool stuff.

For anyone using Boxee, you may need to remove the app from My Apps and wait for Boxee to refresh its repository index, but eventually version 0.7 should pop up. It’s a bit rough in places so I hope to do another cleanup within the next few weeks, but at least everything is functional again.