Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Russell Coker: SSD and M.2

Sun, 2016-08-14 01:02
The Need for Speed

One of my clients has an important server running ZFS. They need a filesystem that detects corruption: while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (something I’ve witnessed on BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array, but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well because the checksums on data and metadata require multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS; it’s just an engineering trade-off for the data integrity features.

ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.
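
For reference, attaching these to an existing pool is just a couple of zpool commands. Something like the following sketch, where the pool name tank and the partition layout are placeholders for whatever your system actually uses:

# Mirror the ZIL (SLOG) across a partition on each SSD, and use a second
# partition on each as the L2ARC read cache (pool and device names are placeholders).
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2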

For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet. SATA 3.2 was released in 2013 and supports 1969MB/s but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.

M.2

M.2 is a new standard for expansion cards. It supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting background reading but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, but I was then unable to find a local company that could sell me a SATA M.2 host adapter. So I bought an M.2 to SATA adapter, which makes it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have an M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place, so that you are entitled to a refund if they don’t work together.

The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express); see the Wikipedia page for NVMe for details. NVMe not only gives higher throughput, it also gives more command queues and more commands per queue, which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

Eventually I got an M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s, which should permit saturating a 10 gigabit Ethernet link in some situations.

One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1; apparently the extra level is there to support multiple storage devices (namespaces) on one NVMe controller. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.
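
A quick way to see how the kernel has named things (the device names below are just the ones mentioned above – yours will depend on your hardware):

# List NVMe block devices and their partitions
ls /dev/nvme*
# Show the partitions under the first namespace of the first controller
lsblk /dev/nvme0n1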

Power Use

I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The Z7K320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write; maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with an SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.

I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues on desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

The Future

M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with even better performance and fewer cables inside the PC.

For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.

Related posts:

  1. Breaking SATA Connectors I’ve just broken my second SATA connector. This isn’t a...
  2. How I Partition Disks Having had a number of hard drives fail over the...
  3. Swap Space and SSD In 2007 I wrote a blog post about swap space...

Chris Samuel: Playing with Shifter Part 2 – converted Docker containers inside Slurm

Sun, 2016-08-14 01:02

This is continuing on from my previous blog about NERSC’s Shifter which lets you safely use Docker containers in an HPC environment.

Getting Shifter to work in Slurm is pretty easy; it includes a plugin that you must install and tell Slurm about. My test config was just:

required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf

as I was installing by building RPMs (our preferred method is to install the plugin into our shared filesystem for the cluster so we don’t need to have it in the RAM disk of our diskless nodes). Once that is done you can add the shifter arguments (such as --image) to your Slurm batch script and then just call shifter inside it to run a process, for instance:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=debian:wheezy

shifter cat /etc/issue

results in the following on our RHEL compute nodes:

[samuel@bruce Shifter]$ cat slurm-1734069.out
Debian GNU/Linux 7 \n \l

simply demonstrating that it works. The advantage of using the plugin and this way of specifying the images is that the plugin will prep the container for us at the start of the batch job and keep it around until it ends so you can keep running commands in your script inside the container without the overhead of having to create/destroy it each time. If you need to run something in a different image you just pass the --image option to shifter and then it will need to set up & tear down that container, but the one you specified for your batch job is still there.
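
For example, a one-off command in a different image from inside the same batch job might look like this (the centos:7 image name is just an illustration – use whatever you have pulled with shifterimg):

shifter --image=centos:7 cat /etc/redhat-release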

That’s great for single CPU jobs, but what about parallel applications? Well, it turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=dispel4py/docker.openmpi
#SBATCH --ntasks=3
#SBATCH --tasks-per-node=1

shifter cat /etc/issue
srun shifter python /home/tutorial/mpi4py_benchmarks/helloworld.py

This prints the MPI rank to demonstrate that the MPI wire-up was successful. I forced it to run the tasks on separate nodes and print the hostnames to show it’s communicating over a network, not via shared memory on the same node. But the output bemused me a little:

[samuel@bruce Python]$ cat slurm-1734135.out
Ubuntu 14.04.4 LTS \n \l

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],0]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
  Host: bruce001
Another transport will be used instead, although this may result in lower performance.
--------------------------------------------------------------------------
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
Hello, World! I am process 0 of 3 on bruce001.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],1]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
  Host: bruce002
Another transport will be used instead, although this may result in lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 1 of 3 on bruce002.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],2]: A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
  Host: bruce003
Another transport will be used instead, although this may result in lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 2 of 3 on bruce003.

It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open-MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the system's /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards, and so you get warnings that they aren’t working and that it will fall back to a different mechanism (in this case TCP/IP over gigabit Ethernet).

Open-MPI allows you to specify what transports to use, so adding one line to my batch script:

export OMPI_MCA_btl=tcp,self,sm

cleans up the output a lot:

Ubuntu 14.04.4 LTS \n \l

Hello, World! I am process 0 of 3 on bruce001.
Hello, World! I am process 2 of 3 on bruce003.
Hello, World! I am process 1 of 3 on bruce002.

This then begs the question: what does this do for latency? The image contains a Python version of the OSU latency testing program, which uses different message sizes between 2 MPI ranks to provide a histogram of performance. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of dispel4py/docker.openmpi has published their implementation on Github, so I forked their repo, signed up for Docker and pushed a version which simply adds the libmlx4-1 package I needed.

Running the test over TCP/IP is simply a matter of submitting this batch script which forces it onto 2 separate nodes:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=tcp,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

giving these latency results:

[samuel@bruce MPI]$ cat slurm-1734137.out
# MPI Latency Test
# Size [B]    Latency [us]
0             16.19
1             16.47
2             16.48
4             16.55
8             16.61
16            16.65
32            16.80
64            17.19
128           17.90
256           19.28
512           22.04
1024          27.36
2048          64.47
4096          117.28
8192          120.06
16384         145.21
32768         215.76
65536         465.22
131072        926.08
262144        1509.51
524288        2563.54
1048576       5081.11
2097152       9604.10
4194304       18651.98

To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=openib,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

which then gave these latency numbers:

[samuel@bruce MPI]$ cat slurm-1734138.out
# MPI Latency Test
# Size [B]    Latency [us]
0             2.52
1             2.71
2             2.72
4             2.72
8             2.74
16            2.76
32            2.73
64            2.90
128           4.03
256           4.23
512           4.53
1024          5.11
2048          6.30
4096          7.29
8192          9.43
16384         19.73
32768         29.15
65536         49.08
131072        75.19
262144        123.94
524288        218.21
1048576       565.15
2097152       811.88
4194304       1619.22

So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).

Because there’s no virtualisation going on here there is no extra penalty to pay when doing this: no need to configure any fancy device pass-through, no loss of any CPU MSR access. So I’d argue that Shifter makes Docker containers way more useful for HPC than virtualisation, or even Docker itself, for the majority of use cases.

Am I excited about Shifter – yup! The potential to let users build an application stack themselves, right down to the OS libraries, and (with a little careful thought) have something that could get native interconnect performance, is fantastic. Throw in the complexities of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc., and the need to provide simple methods for handling these, and the advantages seem clear.

So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed!

Stewart Smith: Microsoft Chicago – retro in qemu!

Sat, 2016-08-13 15:00

So, way back when (sometime in the early 1990s) there was Windows 3.11 and times were… for Workgroups. There was this Windows NT thing, this OS/2 thing and something brewing at Microsoft to attempt to make the PC less… well, bloody awful for a user.

Again, thanks to abandonware sites, it’s possible now to try out very early builds of Microsoft Chicago – what would become Windows 95. With the earliest build I could find (build 56), I set to work. The installer worked from an existing Windows 3.11 install.

I ended up using full system emulation rather than normal qemu later on, as things, well, booted in full emulation and didn’t otherwise (I was building from qemu master… so it could have actually been a bug fix).

Mmmm… Windows 3.11 File Manager. The fact that I can still use this is a testament to something – possibly too much time with Windows 3.11.

Unfortunately, I didn’t have the Plus Pack components (remember Microsoft Plus!? Yes, the exclamation mark was part of the product name; it was the 1990s) and I’m not sure if they even would have existed back then (but the installer did ask).

Obviously if you were testing Chicago, you probably did not want to upgrade your working Windows install if this was a computer you at all cared about. I installed into C:\CHICAGO because, well – how could I not!

The installation went fairly quickly – after all, this isn’t a real 386 PC and it doesn’t have of-the-era disks – everything was likely just in the linux page cache.

I didn’t really try to get networking going; it may not have been fully baked in this build, or maybe just not really baked in this copy of it. The installer there looks a bit familiar, but not like the Windows 95 one – maybe more like NT 3.1/3.51?

But at the end… it installed and it was time to reboot into Chicago:
So… this is what Windows 95 looked like during development back in July 1993 – nearly exactly two years before release. There’s some Windows logos that appear/disappear around the place, which are arguably much cooler than the eventual Windows 95 boot screen animation. The first boot experience was kind of interesting too:
Luckily, there was nothing restricting the beta site ID or anything. I just entered the number 1, and was then told it needed to be 6 digits – so beta site ID 123456 it is! The desktop is obviously different both from Windows 3.x and what ended up in Windows 95.

Those who remember Windows 3.1 may remember Dr Watson as an actual thing you could run, but it was part of the whole diagnostics infrastructure in Windows, and here (as you can see), it runs by default. More odd is the “Switch To Chicago” task (which does nothing if opened) and “Tracker”. My guess is that “Switch to Chicago” is the product of some internal thing for launching the new UI. I have no idea what the “Tracker” is, but I think I found a clue in the “Find File” app:

Not only can you search with regular expressions, but there’s “Containing text”. Could it be indexing? No, it totally isn’t. It’s all about tracking/reporting problems:

Well, that wasn’t as exciting as I was hoping for (after all, weren’t there interesting database like file systems being researched at Microsoft in the early 1990s?). It’s about here I should show the obligatory About box:
It’s… not polished, and there’s certainly that feel throughout the OS – but two years from release, that’s likely fair enough. Speaking of not perfect:

When something does crash, it asks you to describe what went wrong, i.e. provide a Clue for Dr. Watson:

But, most importantly, Solitaire is present! You can browse the Programs folder and head into Games and play it! One odd thing is that applications have two >> at the end, and there’s a “Parent Folder” entry too.

Solitaire itself? Just as I remember.

Notably, what is missing is anything like the Start menu, which is probably the key UI element introduced in Windows 95 that’s still with us today. Instead, you have this:

That’s about the least exciting Windows menu possible. There’s the eye menu too, which is this:

More unfinished things are found in the “File cabinet”, such as properties for anything:
But let’s jump into Control Panels, which I managed to get to by heading to C:\CHICAGO\Control.sys – which isn’t exactly obvious, but I think you can find it through Programs as well. The “Window Metrics” application is really interesting! It’s obvious that the UI was not solidified yet, and that there was a lot of experimenting to do. This application lets you change all sorts of things about the UI:

My guess is that this was used a lot internally to twiddle things to see what worked well.

Another unfinished thing? That familiar Properties for My Computer, which is actually “Advanced System Features” in the control panel, and from the [Sample Information] at the bottom left, it looks like we may not be getting information about the machine it’s running on.

You do get some information in the System control panel, but a lot of it is unfinished. It seems as if Microsoft was experimenting with a few ways to express information and modify settings.

But check out this awesome picture of a hard disk for Virtual Memory:

The presence of the 386 Enhanced control panel shows how close this build still was to Windows 3.1:

At the same time, we see hints of things going 32 bit – check out the fact that we have both Clock and Clock32! Notepad, in its transition to 32bit, even dropped the pad and is just Note32!

Well, that’s enough for today, time to shut down the machine:

Craige McWhirter: Python for science, side projects and stuff! - PyConAu 2016

Sat, 2016-08-13 11:40

By Andrew Lonsdale.

  • Talked about using python-ppt for collaborating on PowerPoint presentations.
  • Covered his journey so far and the lessons he learned.
  • Gave some great examples of re-creating XKCD comics in Python (matplotlib_venn).
  • Claimed the diversion into Python and Matplotlib has helped his actual research.
  • Spoke about how using Python is great for Scientific research.
  • Summarised that side projects are good for Science and Python.
  • Recommended Elegant SciPy
  • Demoed using Emoji to represent bioinformatics data with FASTQE (FASTQ as Emoji).

Craige McWhirter: MicroPython: a journey from Kickstarter to Space by Damien George - PyConAu 2016

Sat, 2016-08-13 10:47

Damien George.

Motivations for MicroPython:
  • To provide a high level language to control sophisticated micro-controllers.
  • Approached it as an intellectually stimulating research problem.
  • Wasn't even sure it was possible.
  • Chose Python because:
    • It was a high level language with powerful features.
    • Large existing community.
    • Naively thought it would be easy.
    • Found Python easy to learn.
    • Shallow but long learning curve of python makes it good for beginners and advanced programmers.
    • Bitwise operations make it useful for micro-controllers.
Why Not Use CPython?
  • CPython pre-allocates memory, resulting in inefficient memory usage which is problematic for low RAM devices like micro controllers.
Usage:
  • If you know Python, you know MicroPython - it's implemented the same
Kickstarter:

Damien covered his experiences with Kickstarter.

Internals of MicroPython:
  • Damien covered the parser, lexer, compiler and runtime.
  • Walked us through the workflows of the internals.
  • Spoke about object representation and the three machine word object forms:
    • Integers.
    • Strings.
    • Objects.
  • Covered the emitters:
    • Bytecode.
    • Native (machine code).
    • Inline assembler.
Coding Style:

Coding was more based on a physicist trying to make things work, than a computer engineer.

  • There's a code dashboard
  • Hosted on GitHub
  • Noted that he could not have done this without the support of the community.
Hardware:

Listed some of the micro-controller boards that it runs on and larger computers that currently run OpenWRT.

Spoke about the BBC micro:bit project. Demo'd speech synthesis and image display running on it.

MicroPython in Space:

Spoke about the port to LEON / SPARC / RTEMS for the European Space agency for satellite control, particularly the application layer.

Damien closed with an overview of current applications and ongoing software and hardware development.

Links:

  • micropython.org
  • forum.micropython.org
  • github.com/micropython

Craige McWhirter: Doing Math with Python - Amit Saha - PyConAu 2016

Fri, 2016-08-12 15:20

Amit Saha.

Slides and demos.

Why Math with Python?
  • Provides an interactive learning experience.
  • Provides a great base for future programming (ie: data science, machine learning).
Tools:
  • Python 3
  • SymPy
  • matplotlib

Amit's book: Doing Math with Python

Craige McWhirter: The Internet of Not Great Things - Nick Moore - PyConAu 2016

Fri, 2016-08-12 14:38

Nick Moore.

aka "The Internet of (Better) Things".

  • Abuse of IoT is not a technical issue.
  • The problem is who controls the data.
  • Need better analysis of the ways it is used that are bad.
  • "If you're not the customer, you're the product."
    • by accepting advertising.
    • by having your privacy sold.
  • Led to a conflation of IoT and Big Data.
  • Product end of life by vendors ceasing support.
  • Very little cross vendor compatibility.
  • Many devices useless if the Internet is not available.
  • Consumer grade devices often fail.
  • Weak crypto support.
  • Often due to lack of entropy, RAM, CPU.
  • Poorly thought out update cycles.
Turning Complaints into Requirements:

We need:

  • Internet independence.
  • Generic interfaces.
  • Simplified Cryptography.
  • Easier Development.
Some Solutions:
  • Peer to peer services.
  • Standards based hardware description language.
  • Shared secrets, initialised by QR code.
  • Simpler development with MicroPython.

Craige McWhirter: OpenBMC - Boot your server with Python - Joel Stanley - PyConAu 2016

Fri, 2016-08-12 14:38

Joel Stanley.

  • OpenBMC is a Free Software BMC
  • Running embedded Linux.
  • Developed an API before developing other interfaces.
Goals:
  • A modern kernel.
  • Up to date userspace.
  • Security patches.
  • Better interfaces.
  • Reliable performance.
    • REST interface.
    • SSH instead of strange tools.
The Future:
  • Support more home devices.
  • Add a web interface.
  • Secure boot, trusted boot, more security features.
  • Upstream all of the things.
  • Support more hardware.

Craige McWhirter: Teaching Python with Minecraft - Digital K - PyConAu 2016

Fri, 2016-08-12 12:19

by Digital K.

The video of the talk is here.

  • Recommended for ages 10 - 16
  • Why Minecraft?
    • Kids' familiarity is highly engaging.
    • Relatively low cost.
    • Code their own creations.
    • Kids already use the command line in Minecraft
  • Use the Minecraft API to receive commands from Python.
    • Place blocks
    • Move players
    • Build faster
    • Build larger structures and shapes
    • Easy duplication
    • Animate blocks (ie: colour change)
    • Create games
Option 1:

How it works:

  • Import Minecraft API libraries to your code.
  • Push it to the server.
  • Run the Minecraft client.

What you can Teach:

  • Co-ordinates
  • Time
  • Multiplications
  • Data
  • Art works with maths
  • Trigonometry
  • Geo fencing
  • Design
  • Geography

Connect to External Devices:

  • Connect to Raspberry Pi or Arduino.
  • Connect the game to events in the real world.

Other Resources:

Craige McWhirter: Scripting the Internet of Things - Damien George - PyConAu 2016

Fri, 2016-08-12 12:19

Damien George

Damien gave an excellent overview of using MicroPython with microcontrollers, particularly the ESP8266 board.

Damien's talk was excellent and covered a broad and interesting history of the project and its current efforts.

Craige McWhirter: ESP8266 and MicroPython - Nick Moore - PyConAu 2016

Fri, 2016-08-12 12:19

Nick Moore

Slides.

  • Price and feature set are a game changer for hobbyists.
  • Makes for a more playful platform.
  • Uses serial programming mode to flash memory
  • Strict power requirements
  • The easy way to use them is with a NodeMCU for only a little more.
  • Tool kits:
  • Lua: (Node Lua).
  • Javascript: Espruino.
  • Forth, Lisp, Basic(?!).
  • MicroPython works on the ESP8266:
    • Drives micro controllers.
    • The onboard Wifi.
    • Can run a small webserver to view and control devices.
    • WebRepl can be used to copy files, as can mpy-utils.
    • Lacks:
      • an operating system.
      • multiprocessing.
      • Debugger / profiler.
  • Flobot:
    • Compiles via MicroPython.
    • A visual dataflow language for robots.

ESP8266 and MicroPython provide an accessible entry into working with micro-controllers.

Chris Smart: Command line password management with pass

Wed, 2016-08-10 21:03

Why use a password manager in the first place? Well, they make it easy to have strong, unique passwords for each of your accounts on every system you use (and that’s a good thing).

For years I’ve stored my passwords in Firefox, because it’s convenient, and I never bothered with all those other fancy password managers. The problem is that it locked me into Firefox and I found myself still needing to remember passwords for servers and other things.

So a few months ago I decided to give the command line tool Pass a try. It’s essentially a shell script wrapper for GnuPG and stores your passwords (with any notes) in individually encrypted files.

I love it.

Pass is less convenient in terms of web browsing, but it’s more convenient for everything else that I do (which is often on the command line). For example, I have painlessly integrated Pass into Mutt (my email client) so that passwords are not stored in the configuration files.

As a side-note, I installed the Password Exporter Firefox Add-on and exported my passwords. I then added this whole file to Pass so that I can start copying old passwords as needed (I didn’t want them all).

About Pass

Pass uses public-key cryptography to encrypt each password that you want to store as an individual file. To access the password you need the private key and passphrase.

So, some nice things about it are:

  • Short and simple shell script
  • Uses standard GnuPG to encrypt each password into individual files
  • Password files are stored on disk in a hierarchy of your own choosing
  • Stored in Git repo (if desired)
  • Can also store notes
  • Can copy the password temporarily to copy/paste buffer
  • Can show, edit, or copy password
  • Can also generate a password
  • Integrates with anything that can call it
  • Tab completion!

So it’s nothing super fancy, “just” a great little wrapper for good old GnuPG and text files, backed by git. Perfect!

Install Pass

Installation of Pass (and Git) is easy:
sudo dnf -y install git pass

Prepare keys

You’ll need a pair of keys, so generate these if you haven’t already (this creates the keys under ~/.gnupg). I’d probably recommend RSA and RSA, 4096 bits long, using a decent passphrase and setting a valid email address (you can also separately use these keys to send signed emails and receive encrypted emails).
gpg2 --full-gen-key

We will need the key’s fingerprint to give to pass. It should be a string of 40 characters, something like 16CA211ACF6DC8586D6747417407C4045DF7E9A2.
gpg2 --list-keys

Note: Your fingerprint (and public keys) can be public, but please make sure that you keep your private keys secure! For example, don’t copy the ~/.gnupg directory to a public place (even though they are protected by a nice long passphrase, right? Right?).

Initialise pass

Before we can use Pass, we need to initialise it with the fingerprint of the key it should use (the one below is just the example fingerprint from above – substitute your own).
pass init 16CA211ACF6DC8586D6747417407C4045DF7E9A2

This creates the basic directory structure in the .password-store directory in your home directory. At this point it just has a plain text file with the fingerprint of the public key that it should use.

Adding git backing

If you haven’t already, you’ll need to tell Git who you are. Using the email address that you used when creating the GPG key is probably good.
git config --global user.email "you@example.com"
git config --global user.name "Your Name"

Now, go into the password-store directory and initialise it as a Git repository.
cd ~/.password-store
git init
git add .
git commit -m "initial commit"
cd -

Pass will now automatically commit changes for you!

Hierarchy

As mentioned, you can create any hierarchy you like. I quite like to use subdirectories and sort by function first (like mail, web, server), then domains (like gmail.com, twitter.com) and then server or username. This seems to work quite nicely with tab completion, too.

You can rearrange this at any time, so don’t worry too much!

Storing a password

Adding a password is simple and you can create any hierarchy that you want; you just tell pass to add a new password and where to store it. Pass will prompt you to enter the password.

For example, you might want to store your password for a machine at server1.example.com – you could do that like so:
pass add servers/example.com/server1

This creates the directory structure on disk and your first encrypted file!
~/.password-store/
└── servers
    └── example.com
        └── server1.gpg
 
2 directories, 1 file

Run the file command on that file and it should tell you that it’s encrypted.
file ~/.password-store/servers/example.com/server1.gpg

But is it really? Go ahead, cat that gpg file, you’ll see it’s encrypted (your terminal will probably go crazy – you can blindly enter the reset command to get it back).
cat ~/.password-store/servers/example.com/server1.gpg

So this file is encrypted – you can safely copy it anywhere (again, please just keep your private key secure).
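
If you are curious, you can also decrypt one of these files by hand with GnuPG – this is essentially what Pass does for you under the hood (the path below is the example entry created earlier):

gpg2 --decrypt ~/.password-store/servers/example.com/server1.gpg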

Git history

Browse to the .password-store dir and run some git commands; you’ll see your history, and git show will prompt for your GPG passphrase to decrypt the files stored in Git.

cd ~/.password-store
git log
git show
cd -

If you wanted to, you could push this to another computer as a backup (perhaps even via a git-hook!).
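
Since the store is just a Git repository, one way to do that is via the pass git wrapper (the remote name and URL here are placeholders – point it at whatever backup host you use):

pass git remote add backup user@backuphost:password-store.git
pass git push -u backup master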

Storing a password, with notes

By default Pass just prompts for the password, but if you want to add notes at the same time you can do that also. Note that the password should still be on its own on the first line, however.
pass add -m mail/gmail.com/username

If you use two-factor authentication (which you should), this is useful for also storing the account password and recovery codes.

Generating and storing a password

As I mentioned, one of the benefits of using a password manager is to have strong, unique passwords. Pass makes this easy by including the ability to generate one for you and store it in the hierarchy of your choosing. For example, you could generate a 32 character password (without special characters) for a website you often log into, like so:
pass generate -n web/twitter.com/username 32

Getting a password out

Getting a password out is easy; just tell Pass which one you want. It will prompt you for your passphrase, decrypt the file for you, read the first line and print it to the screen. This can be useful for scripting (more on that below).

pass web/twitter.com/username

Most of the time though, you’ll probably want to copy the password to the copy/paste buffer; this is also easy, just add the -c option. Passwords are automatically cleared from the buffer after 45 seconds.
pass -c web/twitter.com/username

Now you can log into Twitter by entering your username and pasting the password.

Editing a password

Similarly you can edit an existing password to change it, or add as many notes as you like. Just tell Pass which password to edit!
pass edit web/twitter.com/username

Copying and moving a password

It’s easy to copy an existing password to a new one, just specify both the original and new file.
pass copy servers/example.com/server1 servers/example.com/server2

If the hierarchy you created is not to your liking, it’s easy to move passwords around.
pass mv servers/example.com/server1 computers/server1.example.com

Of course, you could script this!

Listing all passwords

Pass will list all your passwords in a tree nicely for you.
pass list

Interacting with Pass

As pass is a nice standard shell program, you can interact with it easily. For example, to get a password from a script you could do something like this.
#!/usr/bin/env bash
 
echo "Getting password.."
PASSWORD="$(pass servers/testing.com/server2)"
if [[ $? -ne 0 ]]; then
    echo "Sorry, failed to get the password"
    exit 1
fi
echo "..and we got it, ${PASSWORD}"

Try it!

There’s lots more you can do with Pass, why not check it out yourself!

Chris Smart: Setting up OpenStack Ansible All-in-one behind a proxy

Tue, 2016-08-09 09:03

Setting up OpenStack Ansible (OSA) All-in-one (AIO) behind a proxy requires a couple of settings, but it should work fine (we’ll also configure the wider system). There are two types of git repos that we should configure for (unless you’re an OpenStack developer): those that use http (or https) and those that use the git protocol.

Firstly, this assumes an Ubuntu 14.04 server install (with at least 60GB of free space on / partition).

All commands are run as the root user, so switch to root first.

sudo -i

Export variables for ease of setup

Setting these variables here means that you can copy and paste the relevant commands from the rest of this blog post.

Note: Make sure that your proxy is fully resolvable and then replace the settings below with your actual proxy details (leave out user:password if you don’t use one).

export PROXY_PROTO="http"
export PROXY_HOST="user:password@proxy"
export PROXY_PORT="3128"
export PROXY="${PROXY_PROTO}://${PROXY_HOST}:${PROXY_PORT}"

First, install some essentials (reboot after upgrade if you like).
echo "Acquire::http::Proxy \"${PROXY}\";" \
> /etc/apt/apt.conf.d/90proxy
apt-get update && apt-get upgrade
apt-get install git openssh-server rsync socat screen vim

Configure global proxies

For any http:// or https:// repositories we can just set a shell environment variable. We’ll set this in /etc/environment so that all future shells have it automatically.

cat >> /etc/environment << EOF
export http_proxy="${PROXY}"
export https_proxy="${PROXY}"
export HTTP_PROXY="${PROXY}"
export HTTPS_PROXY="${PROXY}"
export ftp_proxy="${PROXY}"
export FTP_PROXY="${PROXY}"
export no_proxy=localhost
export NO_PROXY=localhost
EOF

Source this to set the proxy variables in your current shell.
source /etc/environment

Tell sudo to keep these environment variables
echo 'Defaults env_keep = "http_proxy https_proxy ftp_proxy \
no_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY NO_PROXY"' \
> /etc/sudoers.d/01_proxy

Configure Git

For any git:// repositories we need to make a script that uses socat (you could use netcat) and tell Git to use this as the proxy.

cat > /usr/local/bin/git-proxy.sh << EOF
#!/bin/bash
# \$1 = hostname, \$2 = port
exec socat STDIO PROXY:${PROXY_HOST}:\${1}:\${2},proxyport=${PROXY_PORT}
EOF

Make it executable.
chmod a+x /usr/local/bin/git-proxy.sh

Tell Git to proxy connections through this script.
git config --global core.gitProxy /usr/local/bin/git-proxy.sh

Clone OpenStack Ansible

OK, let’s clone the OpenStack Ansible repository! We’re living on the edge and so will build from the tip of the master branch.
git clone git://git.openstack.org/openstack/openstack-ansible \
/opt/openstack-ansible
cd /opt/openstack-ansible/

If you would prefer to build from a specific release, such as the latest stable, feel free to now check out the appropriate tag. For example, at the time of writing this is tag 13.3.1. You can get a list of tags by running the git tag command.

# Only run this if you want to build the 13.3.1 release
git checkout -b tag-13.3.1 13.3.1

Or if you prefer, you can checkout the tip of the stable branch which prepares for the upcoming stable minor release.

# Only run this if you want to build the latest stable code
git checkout -b stable/mitaka origin/stable/mitaka

Prepare log location

If something goes wrong, it’s handy to be able to have the log available.

export ANSIBLE_LOG_PATH=/root/ansible-log

Bootstrap Ansible

Now we can kick off the ansible bootstrap. This prepares the system with all of the Ansible roles that make up an OpenStack environment.
./scripts/bootstrap-ansible.sh

Upon success, you should see:

System is bootstrapped and ready for use.

Bootstrap OpenStack Ansible All In One

Now let’s bootstrap the all in one system. This configures the host with appropriate disks and network configuration, etc ready to run the OpenStack environment in containers.
./scripts/bootstrap-aio.sh

Run the Ansible playbooks

The final task is to run the playbooks, which sets up all of the OpenStack components on the host and containers. Before we proceed, however, this requires some additional configuration for the proxy.

The user_variables.yml file under the root filesystem at /etc/openstack_deploy/user_variables.yml is where we configure environment variables for OSA to export and set some other options (again, note the leading / before etc – do not modify the template file at /opt/openstack-ansible/etc/openstack_deploy by mistake).

cat >> /etc/openstack_deploy/user_variables.yml << EOF
#
## Proxy settings
proxy_env_url: "\"${PROXY}\""
no_proxy_env: "\"localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}\""
global_environment_variables:
  HTTP_PROXY: "{{ proxy_env_url }}"
  HTTPS_PROXY: "{{ proxy_env_url }}"
  NO_PROXY: "{{ no_proxy_env }}"
  http_proxy: "{{ proxy_env_url }}"
  https_proxy: "{{ proxy_env_url }}"
  no_proxy: "{{ no_proxy_env }}"
EOF

Secondly, if you’re running the latest stable, 13.3.x, you will need to make a small change to the pip package list for the keystone (authentication component) container. Currently it pulls in httplib2 version 0.8; however, this does not appear to respect the NO_PROXY variable and so keystone provisioning fails. Version 0.9 seems to fix this problem.

sed -i 's/state: present/state: latest/' \
/etc/ansible/roles/os_keystone/tasks/keystone_install.yml

Now run the playbooks!

Note: This will take a long time, perhaps a few hours, so run it in a screen or tmux session.

screen
time ./scripts/run-playbooks.sh

Verify containers

Once the playbooks complete, you should be able to list your running containers and see their status (there will be a couple of dozen).
lxc-ls -f

Log into OpenStack

Now that the system is complete, we can start using OpenStack!

You should be able to use your web browser to log into Horizon, the OpenStack Dashboard, at your AIO host’s IP address.

If you’re not sure what IP that is, you can find out by looking at which address port 443 is running on.

netstat -ltnp |grep 443

The admin user’s password is available in the user_secrets.yml file on the AIO host.
grep keystone_auth_admin_password \
/etc/openstack_deploy/user_secrets.yml

A successful login should reveal the admin dashboard.

Enjoy your OpenStack Ansible All-in-one!

Stewart Smith: Windows 3.11 nostalgia

Mon, 2016-08-08 21:00

Because OS/2 didn’t go so well… let’s try something I’m a lot more familiar with. To be honest, the last time I in earnest used Windows on the desktop was around 3.11, so I kind of know it back to front (fun fact: I’ve read the entire Windows 3.0 manual).

It turns out that once you have MS-DOS installed in qemu, installing Windows 3.11 is trivial. I didn’t even change any settings for Qemu, I just basically specced everything up to be very minimal (50MB RAM, 512MB disk).

Windows 3.11 was not a fun time as soon as you had to do anything… nightmares of drivers, CONFIG.SYS and AUTOEXEC.BAT plague my mind. But hey, it’s damn fast on a modern processor.

Matthew Oliver: Swift + Xena would make the perfect digital preservation solution

Mon, 2016-08-08 15:03

Some of you might not know, but for some years I worked at the National Archives of Australia on what was, at the time, their leading digital preservation platform. It was awesome, opensource, and they paid me to hack on it.
The most important parts of the platform were Xena and Digital Preservation Recorder (DPR). Xena was, and hopefully still is, amazing. It takes in a file and guesses the format. If it’s a closed proprietary format and Xena has the right plugin, it converts it to an open standard and optionally turns it into a .xena file ready to be ingested into the digital repository for long term storage.

We did this knowing that proprietary formats change so quickly that if you want to store a file long term (20, 40, 100 years) you won’t be able to open it. An open format, on the other hand, is still open even if there is no software left that can read it, so you can get your data back.

Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other opensource daemons we wrote which ensured we didn’t lose things to bitrot, and we’d keep things duplicated and separated. It was a lot of work, and the size of the space required kept growing.

Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift was around back then, because it’s exactly what is required for the DPR side. It duplicates, infinitely scales, checks checksums, quarantines and corrects. It keeps everything replicated and separated and does it all automatically. Swift is also highly customisable. You can create your own middleware and insert it in the proxy pipeline or in any of the storage node pipelines, and do whatever you need it to do: add metadata, do something to the object on ingest or whenever the object is read, update some other system – really you can do whatever you want. Maybe even wrap Xena into some middleware.

Going one step further, IBM have been working on a thing called storlets, which uses Swift and Docker to do some work on objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena, so this might also be a perfect fit.

Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at NAA, which got my mind thinking about all this, so I thought I’d place my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and need some ideas. Best part: the software is open source and also free!

Happy preserving.

David Rowe: SM2000 – Part 8 – Gippstech 2016 Presentation

Mon, 2016-08-08 07:03

Justin, VK7TW, has published a video of my SM2000 presentation at Gippstech, which was held in July 2016.

Brady O’Brien, KC9TPA, visited me in June. Together we brought the SM2000 up to the point where it is decoding FreeDV 2400A waveforms at 10.7MHz IF, which we demonstrate in this video. I’m currently busy with another project but will get back to the SM2000 (and other FreeDV projects) later this year.

Thanks Justin and Brady!

FreeDV and this video were also mentioned in this interesting Reddit post/debate from Gary KN4AQ on VHF/UHF Digital Voice – a peek into the future.

Stewart Smith: OS/2 Warp Nostalgia

Sun, 2016-08-07 21:00

Thanks to the joys of abandonware websites, you can play with some interesting things from the 1990s and before. One of those things is OS/2 Warp. Now, I had a go at OS/2 sometime in the 1990s after being warned by a friend that it was “pretty much impossible” to get networking going. My experience of OS/2 then was not revolutionary… It was, well, something else on a PC that wasn’t that exciting and didn’t really add a huge amount over Windows.

Now, I’m nowhere near insane enough to try this on my actual computer, and I’ve managed to not accumulate any ancient PCs….

Luckily, qemu helps with an emulator! If you don’t set your CPU to Pentium (or possibly something one or two generations newer) then things don’t go well. Nor do they unless the disk is, by today’s standards, beyond tiny. Also, if you dare to try to use an unpartitioned hard disk – OH MY are you in trouble.

Also, try to boot off “Disk 1” and you get this:
Possibly the most friendly error message ever! But, once you get going (by booting the Installation floppy)… you get to see this:

and indeed, you are doing the time warp of Operating Systems right here. After a bit of fun, you end up in FDISK:

Why I can’t create a partition… WHO KNOWS. But, I tried again with a 750MB disk that already had a partition on it and…. FAIL. I think this one was due to partition type, so I tried again with partition type of 6 – plain FAT16, and not W95 FAT16 (LBA). Some memory is coming back to me of larger drives and LBA and nightmares…

But that worked!

Then, the OS/2 WARP boot screen… which seems to stick around for a long time…..

and maybe I could get networking….

Ladies and Gentlemen, the wonders of having to select DHCP:

It still asked me for some config, but I gleefully ignored it (because that must be safe, right!?) and then I needed to select a network adapter! Due to a poor choice on my part, I started with a rtl8139, which is conspicuously absent from this fine list of Token Ring adapters:

and then, more installing……

before finally rebooting into….

and that, is where I realized there was beer in the fridge and that was going to be a lot more fun.

OpenSTEM: Remembering Seymour Papert

Thu, 2016-08-04 17:11

Today we’re remembering Seymour Papert, as we’ve received news that he died a few days ago (31st July 2016) at the age of 88.  Throughout his life, Papert did so much for computing and education; he even worked with the famous Jean Piaget, who helped Papert further develop his views on children and learning.

For us at OpenSTEM, Papert is also special because in the late 1960s (yep that far back) he invented the Logo programming language, used to control drawing “turtles”.  The Mirobot drawing turtle we use in our Robotics Program is a modern descendant of those early (then costly) adventures.

I sadly never met him, but what a wonderful person he was.

For more information, see the media release at MIT’s Media Lab (which he co-founded) or search for his name online.

 

Lev Lafayette: Supercomputers: Current Status and Future Trends

Thu, 2016-08-04 17:11

The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important initial contribution was the CDC6600 in the 1960s, due to its advanced performance over competitors. Over time, major technological advancements have included vector processing, cluster architecture, massive processor counts, GPGPU technologies and multidimensional torus architectures for interconnect.


Simon Lyall: Putting Prometheus node_exporter behind apache proxy

Tue, 2016-08-02 09:02

I’ve been playing with Prometheus monitoring lately. It is fairly new software that is getting popular. Prometheus works using a pull architecture. A central server connects to each thing you want to monitor every few seconds and grabs stats from it.

In the simplest case you run the node_exporter on each machine, which gathers about 600-800 (!) metrics such as load, disk space and interface stats. This exporter listens on port 9100 and effectively works as an http server that responds to “GET /metrics HTTP/1.1” and spits out several hundred lines of:

node_forks 7916
node_intr 3.8090539e+07
node_load1 0.47
node_load15 0.21
node_load5 0.31
node_memory_Active 6.23935488e+08

Other exporters listen on different ports and export stats for apache or mysql while more complicated ones will act as proxies for outgoing tests (via snmp, icmp, http). The full list of them is on the Prometheus website.

So my problem was that I wanted to check my virtual machine that is on Linode. The machine only has a public IP and I didn’t want to:

  1. Allow random people to check my servers stats
  2. Have to setup some sort of VPN.

So I decided that the best way was to just put a user/password on the exporter.

However the node_exporter does not implement authentication itself, since the authors wanted to avoid maintaining lots of security code. So I decided to put it behind a reverse proxy using apache mod_proxy.

Step 1 – Install node_exporter

Node_exporter is a single binary that I started via an upstart script. As part of the upstart script I told it to listen on localhost port 19100 instead of port 9100 on all interfaces:

# cat /etc/init/prometheus_node_exporter.conf
description "Prometheus Node Exporter"

start on startup

chdir /home/prometheus/

script
  /home/prometheus/node_exporter -web.listen-address 127.0.0.1:19100
end script

Once I start the exporter a simple “curl 127.0.0.1:19100/metrics” makes sure it is working and returning data.

Step 2 – Add Apache proxy entry

First make sure apache is listening on port 9100. On Ubuntu edit the /etc/apache2/ports.conf file and add the line:

Listen 9100

Next create a simple apache proxy without authentication (don’t forget to enable mod_proxy too):

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Allow from all
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://127.0.0.1:19100/
    ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

This simply takes requests on port 9100 and forwards them to localhost port 19100. Now reload apache and test via curl to port 9100. You can also use netstat to see what is listening on which ports:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:19100         0.0.0.0:*               LISTEN      8416/node_exporter
tcp6       0      0 :::9100                 :::*                    LISTEN      8725/apache2
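
For reference, on Ubuntu enabling the proxy modules and reloading apache looks something like this (assuming the stock apache2 packaging – adjust to suit your setup):

a2enmod proxy proxy_http
service apache2 reload
# then confirm the proxied endpoint responds
curl http://localhost:9100/metrics | head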

 

Step 3 – Get Prometheus working

I’ll assume at this point you have other servers working. What you need to do now is add the following entries for your server in your prometheus.yml file.

First add basic_auth into your scrape config for “node” and then add your servers, eg:

- job_name: 'node'
  scrape_interval: 15s
  basic_auth:
    username: prom
    password: mypassword
  static_configs:
    - targets: ['myserver.example.com:9100']
      labels:
        group: 'servers'
        alias: 'myserver'

Now restart Prometheus and make sure it is working. You should see the following lines in your apache logs, and stats for the server should start appearing:

10.212.62.207 - - [31/Jul/2016:11:31:38 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:31:53 +0000] "GET /metrics HTTP/1.1" 200 11398 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:32:08 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"

Notice that connections are 15 seconds apart, get http code 200 and are 11k in size. The Prometheus server is using Authentication but apache doesn’t need it yet.

Step 4 – Enable Authentication.

Now create an apache password file:

htpasswd -cb /home/prometheus/passwd prom mypassword

and update your apache entry to the following to enable authentication:

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
        #
        AuthType Basic
        AuthName "Password Required"
        AuthBasicProvider file
        AuthUserFile "/home/prometheus/passwd"
        Require valid-user
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://127.0.0.1:19100/
    ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

After you reload apache you should see the following:

10.212.56.135 - prom [01/Aug/2016:04:42:08 +0000] "GET /metrics HTTP/1.1" 200 11394 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:23 +0000] "GET /metrics HTTP/1.1" 200 11392 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:38 +0000] "GET /metrics HTTP/1.1" 200 11391 "-" "Go-http-client/1.1"

Note that the “prom” in field 3 indicates that we are logging in for each connection. If you try to connect to the port without authentication you will get:

Unauthorized

This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.

That is pretty much it. Note that you will need to add additional VirtualHost entries for more ports if you run other exporters on the server.

 
