Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Linux Users of Victoria (LUV) Announce: LUV Main September 2014 Meeting: AGM + lightning talks

Tue, 2014-08-19 18:28
Start: Sep 2 2014 19:00 End: Sep 2 2014 21:00 Location:

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Link:  http://luv.asn.au/meetings/map

AGM + lightning talks

Notice of LUV Annual General Meeting, 2nd September 2014, 19:00.

Linux Users of Victoria, Inc., registration number A0040056C, will be holding its Annual General Meeting at 7pm on Tuesday, 2nd September 2014, in the Buzzard Lecture Theatre, Trinity College.

The AGM will be held in conjunction with our usual September Main Meeting. As is customary, after the AGM business we will have a series of lightning talks by members on a recent Linux experience or project.

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via public transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Michael Still: Juno nova mid-cycle meetup summary: slots

Tue, 2014-08-19 18:27
If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I'm told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.



If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.



If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent, it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.



The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).



The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores' focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMware refactor was in a slot for example, we might find that there were also ten code reviews associated with that single slot.



How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.



What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words there is no penalty for being bumped, you just need to wait for a slot to reappear when you're available again.



We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.



We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge amount of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, then it would help us to get people to work on those aspects, because they wouldn't spend time on a less interesting problem and then discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases; we could have a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.



Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.



This proposal is obviously complicated, and everyone will have an opinion. We haven't fully thought through all the mechanics yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.



This series is nearly done, but in the next post I'll cover the current status of the nova-network to neutron upgrade path.



Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management

Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support




Andrew Pollock: [life] Day 201: Kindergarten, some startup stuff, car wash and a trip to the vet ophthalmologist

Mon, 2014-08-18 22:25

Zoe woke up at some point in the night and ended up in bed with me. I don't even remember when it was.

We got going reasonably quickly this morning, and Zoe wanted porridge for breakfast, so I made a batch of that in the Thermomix.

She was a little bit clingy at Kindergarten for drop off. She didn't feel like practising writing her name in her sign-in book. Fortunately Miss Sarah was back from having done her prac elsewhere, and Zoe had been talking about missing her on the way to Kindergarten, so that was a good distraction, and she was happy to hang out with her while I left.

I used the morning to do some knot practice for the rock climbing course I've signed up for next month. It was actually really satisfying doing some knots that previously I'd found to be mysterious.

I had a lunch meeting over at the Royal Brisbane Women's Hospital to bounce a startup idea off a couple of people, so I headed over there for a very useful lunch discussion and then briefly stopped off at home before picking up Zoe from Kindergarten.

Zoe had just woken up from a nap before I arrived, and was a bit out of sorts. She perked up a bit when we went to the car wash and had a babyccino while the car got cleaned. Sarah was available early, so I dropped Zoe around to her straight after that.

I'd booked Smudge in for a consult with a vet ophthalmologist to get her eyes looked at, so I got back home again, and crated her and drove to Underwood to see the vet. He said that she had the most impressive case of eyelid agenesis he'd ever seen. She also had persistent pupillary membranes in each eye. He said that the eyelid agenesis is a pretty common birth defect in what he called "dumpster cats", which, for all we know, is exactly what Smudge (or more importantly, her mother) was. He also said that other eye defects, like the membranes she had, were common in cases where there was eyelid agenesis.

The surgical fix was going to come in at something in the order of $2,000 an eye, be a pretty long surgery, and involve some crazy transplanting of lip tissue. Cost aside, it didn't sound like a lot of fun for Smudge, and given her age and that she's been surviving the way she is, I can't see myself wanting to spend the money to put her through it. The significantly cheaper option is to just religiously use some lubricating eye gel.

After that, I got home with enough time to eat some dinner and then head out to crash my Thermomix Group Leader's monthly team meeting to see what it was like. I got a good vibe from that, so I'm happy to continue with my consultant application.

Craige McWhirter: Introduction to Managing OpenStack Via the CLI

Mon, 2014-08-18 14:28
Introduction:

There's a great deal of ugliness in OpenStack, but what I enjoy the most is the relative elegance of driving an OpenStack deployment from the comfort of my own workstation.

Once you've configured your workstation as an OpenStack Management Client, these are some of the commands you can run from your workstation against an OpenStack deployment.

There are client commands for each of the projects, which makes it rather simple to relate the commands you want to run to the service you need to work with, e.g.:

$ PROJECT --version

$ cinder --version
1.0.8
$ glance --version
0.12.0
$ heat --version
0.2.9
$ keystone --version
0.9.0
$ neutron --version
2.3.5
$ nova --version
2.17.0

Getting by With a Little Help From Your Friends

The first slice of CLI joy when using these OpenStack clients is the help that is available for each of them. When a client is called with --help, a comprehensive list of options and subcommands is dumped to STDOUT, which is useful but not unexpected.

The question you usually find yourself asking is, "How do I use those subcommands?", which is answered by using the following syntax:

$ PROJECT help subcommand

This will dump all the arguments for the specified subcommand to STDOUT. I've used the below example for its brevity:

$ keystone help user-create
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]

Create new user

Arguments:
  --name <user-name>    New user name (must be unique).
  --tenant <tenant>, --tenant-id <tenant>
                        New user default tenant.
  --pass [<pass>]       New user password; required for some auth backends.
  --email <email>       New user email address.
  --enabled <true|false>
                        Initial user enabled status. Default is true.

Getting Behind the Wheel

Before you can use these commands, you will need to set some appropriate environment variables. When you configured your workstation as an OpenStack Management Client, you will have created a short file that sets the username, password, tenant name and authentication URL for your OpenStack clients. Now is the time to source that file:

$ source <username-tenant>.sh

I have one of these for each OpenStack deployment, user account and tenant that I wish to work with, and I source the relevant one before I commence a body of work.
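If you haven't created one of these files yet, it's nothing more than a handful of environment variable exports. Here is a minimal sketch -- the variable names are the standard ones read by the OpenStack clients, while the values are placeholders you'd replace for your own deployment:

export OS_USERNAME=AdminUser
export OS_PASSWORD=eiVai1ohcaiY
export OS_TENANT_NAME=AdminTenant
export OS_AUTH_URL=http://horizon.my.domain.tld:35357/v2.0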

Turning the Keystone

Keystone provides the authentication service. Assuming you have appropriate privileges, you will need to:

Create a Tenant (referred to as a Project via the Web UI).

$ keystone tenant-create --name DemoTenant --description "Don't forget to \
delete this tenant"
+-------------+------------------------------------+
| Property    | Value                              |
+-------------+------------------------------------+
| description | Don't forget to delete this tenant |
| enabled     | True                               |
| id          | painguPhahchoh2oh7Oeth2jeh4ahMie   |
| name        | DemoTenant                         |
+-------------+------------------------------------+

$ keystone tenant-list
+----------------------------------+------------+---------+
| id                               | name       | enabled |
+----------------------------------+------------+---------+
| painguPhahchoh2oh7Oeth2jeh4ahMie | DemoTenant | True    |
+----------------------------------+------------+---------+

Create / add a user to that tenant.

$ keystone user-create --name DemoUser --tenant DemoTenant --pass \
Tahh9teih3To --email demo.tenant@example.tld
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | demo.tenant@example.tld          |
| enabled  | True                             |
| id       | ji5wuVaTouD0ohshoChoohien3Thaibu |
| name     | DemoUser                         |
| tenantId | painguPhahchoh2oh7Oeth2jeh4ahMie |
| username | DemoUser                         |
+----------+----------------------------------+

$ keystone user-role-list --user DemoUser --tenant DemoTenant
+----------------------------------+----------+----------------------------------+----------------------------------+
| id                               | name     | user_id                          | tenant_id                        |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+

Provide that user with an appropriate role (defaults to member). If you're unsure which roles are available, see the listing command below.
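As a quick aside, the available roles can be enumerated with the following command (output omitted here, as it varies by deployment):

$ keystone role-list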

$ keystone user-role-add --user DemoUser --role admin --tenant DemoTenant
$ keystone user-role-list --user DemoUser --tenant DemoTenant
+----------------------------------+----------+----------------------------------+----------------------------------+
| id                               | name     | user_id                          | tenant_id                        |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
| ieDieph0iteidahjuxaifi6BaeTh2Joh | admin    | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+

Taking a Glance at Images

Glance provides the service for discovering, registering and retrieving virtual machine images. It is via Glance that you will be uploading VM images to OpenStack. Here's how you can upload a pre-existing image to Glance:

Note: If your back end is Ceph then the images must be in RAW format.
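If the image you have on hand is qcow2, qemu-img can convert it for you first. A minimal sketch, with the file names as placeholders:

$ qemu-img convert -f qcow2 -O raw debian-7-amd64-vm.qcow2 debian-7-amd64-vm.raw

If you do this, remember to pass --disk-format raw to glance image-create when uploading the converted file.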

$ glance image-create --name DemoImage --file /tmp/debian-7-amd64-vm.qcow2 \
--progress --disk-format qcow2 --container-format bare \
--checksum 05a0b9904ba491346a39e18789414724
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 05a0b9904ba491346a39e18789414724     |
| container_format | bare                                 |
| created_at       | 2014-08-13T06:00:01                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | mie1iegauchaeGohghayooghie3Zaichd1e5 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | DemoImage                            |
| owner            | gei6chiC3hei8oochoquieDai9voo0ve     |
| protected        | False                                |
| size             | 2040856576                           |
| status           | active                               |
| updated_at       | 2014-08-13T06:00:28                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

$ glance image-list
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| ID                                   | Name      | Disk Format | Container Format | Size       | Status |
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| mie1iegauchaeGohghayooghie3Zaichd1e5 | DemoImage | qcow2       | bare             | 2040856576 | active |
+--------------------------------------+-----------+-------------+------------------+------------+--------+

Starting Something New With Nova

Build yourself an environment file with the new credentials:

export OS_USERNAME=DemoUser
export OS_PASSWORD=Tahh9teih3To
export OS_TENANT_NAME=DemoTenant
export OS_AUTH_URL=http://horizon.my.domain.tld:35357/v2.0

Then source it.

By now you just want a VM, so let's knock one up for the user and tenant you just created:

$ nova boot --flavor m1.small --image DemoImage DemoVM
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000009f                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | W3kd5WzYD2tE                                     |
| config_drive                         |                                                  |
| created                              | 2014-08-13T06:41:19Z                             |
| flavor                               | m1.small (2)                                     |
| hostId                               |                                                  |
| id                                   | 248a247d-83ff-4a52-b9b4-4b3961050e94             |
| image                                | DemoImage (d51001c2-bfe3-4e8a-86d8-e2e35898c0f3) |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | DemoVM                                           |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | c6c88f8dbff34b60b4c8e7fad1bda869                 |
| updated                              | 2014-08-13T06:41:19Z                             |
| user_id                              | cb040e80138c4374b46f4d31da38be68                 |
+--------------------------------------+--------------------------------------------------+

Now you can use nova show DemoVM to find the IP address of the instance, or access its console via the Horizon dashboard.
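For example, you can watch the instance build and then inspect its details with the following commands (output omitted for brevity):

$ nova list
$ nova show DemoVM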

Michael Still: Juno nova mid-cycle meetup summary: scheduler

Mon, 2014-08-18 13:27
This post is in a series covering the discussions at the Juno Nova mid-cycle meetup. This post will cover the current state of play of our scheduler refactoring efforts. The scheduler refactor has been running for a fair while now, dating back to at least the Hong Kong summit (so about 1.5 release cycles ago).



The original intent of the scheduler sub-team's effort was to pull the scheduling code out of Nova so that it could be rapidly iterated on its own, with the eventual goal being to support a single scheduler across the various OpenStack services. For example, the scheduler that makes placement decisions about your instances could also be making decisions about the placement of your storage resources and could therefore ensure that they are co-located as much as possible.



During this process we realized that a big bang replacement is actually much harder than we thought, and the plan has morphed into being a multi-phase effort. The first step is to make the interface for the scheduler more clearly defined inside the Nova code base. For example, in previous releases, it was the scheduler that launched instances: the API would ask the scheduler to find available hypervisor nodes, and then the scheduler would instruct those nodes to boot the instances. We need to refactor this so that the scheduler picks a set of nodes, but then the API is the one which actually does the instance launch. That way, when the scheduler does move out it's not trusted to perform actions that change hypervisor state, and the Nova code does that for it. This refactoring work is under way, along with work to isolate the SQL database accesses inside the scheduler.



I would like to set expectations that this work is what will land in Juno. It has little visible impact for users, but positions us to better solve these problems in Kilo.



We discussed the need to ensure that any new scheduler is at least as fast and accurate as the current one. Jay Pipes has volunteered to work with the scheduler sub-team to build a testing framework to validate this work. Jay also has some concerns about the resource tracker work that is being done at the moment that he is going to discuss with the scheduler sub-team. Since the mid-cycle meetup there has been a thread on the openstack-dev mailing list about similar resource tracker concerns (here), which might be of interest to people interested in scheduler work.



We also need to test our assumption at some point that other OpenStack services such as Neutron and Cinder would even be willing to share a scheduler service if a central one was implemented. We believe that Neutron is interested, but we shouldn't be surprising our fellow OpenStack projects by just appearing with a complete solution. There is a plan to propose a cross-project session at the Paris summit to cover this work.



In the next post in this series we'll discuss possibly the most controversial part of the mid-cycle meetup: the proposal for "slots" for landing blueprints during Kilo.



Tags for this post: openstack juno nova mid-cycle summary scheduler

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Michael Still: Juno nova mid-cycle meetup summary: bug management

Mon, 2014-08-18 13:27
Welcome to the next exciting installment of the Nova Juno mid-cycle meetup summary. In the previous chapter, our hero battled a partially complete cells implementation, by using his +2 smile of good intentions. In this next exciting chapter, watch him battle our seemingly never ending pile of bugs! Sorry, now that I'm on to my sixth post in this series I feel like it's time to get more adventurous in the introductions.



For at least the last cycle, and probably longer, Nova has been struggling with the number of bugs filed in Launchpad. I don't think the problem is that Nova has terrible code, it is instead that we have a lot of users filing bugs, and the team working on triaging and closing bugs is small. The complexity of the deployment options with Nova makes this problem worse, and that complexity increases as we allow new drivers for things like different storage engines to land in the code base.



The increasing number of permutations possible with Nova configurations is a problem for our CI systems as well, as we don't cover all of these options and this sometimes leads us to discover that they don't work as expected in the field. CI is a tangent from the main intent of this post though, so I will reserve further discussion of our CI system until a later post.



Tracy Jones and Joe Gordon have been doing good work in this cycle trying to get a grip on the state of the bugs filed against Nova. For example, a very large number of bugs (hundreds) were for problems we'd fixed, but where the bug bot had failed to close the bug when the fix merged. Many other bugs were waiting for feedback from users, but had been waiting for longer than six months. In both those cases the response was to close the bug, with the understanding that the user can always reopen it if they come back to talk to us again. Doing "quick hit" things like this has reduced our open bug count to about one thousand bugs. You can see a dashboard that Tracy has produced that shows the state of our bugs at http://54.201.139.117/nova-bugs.html. I believe that Joe has been moving towards moving this onto OpenStack hosted infrastructure, but this hasn't happened yet.



At the mid-cycle meetup, the goal of the conversation was to try and find other ways to get our bug queue further under control. Some of the suggestions were largely mechanical, like tightening up our definitions of the confirmed (we agree this is a bug) and triaged (and we know how to fix it) bug states. Others were things like auto-abandoning bugs which are marked incomplete for more than 60 days without a reply from the person who filed the bug, or unassigning bugs when the review that proposed a fix is abandoned in Gerrit.



Unfortunately, we have more ideas for how to automate dealing with bugs than we have people writing automation. If there's someone out there who wants to have a big impact on Nova, but isn't sure where to get started, helping us out with this automation would be a super helpful way to get started. Let Tracy or me know if you're interested.



We also talked about having more targeted bug days. This was prompted by our last bug day being largely unsuccessful. Instead we're proposing that the next bug day have a really well defined theme, such as moving things from the "undecided" to the "confirmed" state, or similar. I believe the current plan is to run a bug day like this after J-3 when we're winding down from feature development and starting to focus on stabilization.



Finally, I would encourage people fixing bugs in Nova to do a quick search for duplicate bugs when they are closing a bug. I wouldn't be at all surprised to discover that there are many bugs where you can close duplicates at the same time with minimal effort.



In the next post I'll cover our discussions of the state of the current scheduler work in Nova.



Tags for this post: openstack juno nova mid-cycle summary bugs

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno nova mid-cycle meetup summary: DB2 support




Andrew Pollock: [tech] Solar follow up

Mon, 2014-08-18 11:25

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter, things are less exciting. It measures how much power I'm buying from the grid, and how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh of excess energy. Ideally I want to minimise the excess, because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to try and shift as much of my consumption as possible to the daylight hours, so that I'm using the energy rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

Sridhar Dhanapalan: Twitter posts: 2014-08-11 to 2014-08-17

Mon, 2014-08-18 09:27

Andrew Pollock: [life] Day 198: Dentist, play date and some Science Friday

Sun, 2014-08-17 21:25

First up on Friday morning was Zoe's dentist appointment. When Sarah dropped her off, we jumped in the car straight away and headed out. It's a long way to go for a 10 minute appointment, but it's worth it for the "Yay! I can't wait!" reaction I got when I told her she was going a few days prior.

Having a positive view of dental care is something that's very important to me, as teeth are too permanent to muck around with. The dentist was very happy with her teeth, and sealed her back molars. Apparently it's all the rage now, as these are the ones that hang around until she's 12.

Despite this being her third appointment with this dentist, Zoe was feeling a bit shy this time, so she spent the whole time reclining on me in the chair. She otherwise handled the appointment like a trooper.

After we got home, I had a bit of a clean up before Zoe's friend from Kindergarten, Vaeda, and her Mum came over for lunch. The girls played together really nicely for a couple of hours afterwards.

I was flicking through 365 Science Experiments, looking for something physics-related for a change, when I happened on the perfect thing. The girls were already playing with a bunch of balloons that I'd blown up for them, so I just charged one up with static electricity, made their hair stand on end, and also used it to pick up some torn-up paper. Easy.

After Vaeda left, we did the weekend grocery shop early, since we were going away for the bulk of the weekend.

It was getting close to time to start preparing dinner after that. Anshu came over and we all had a nice dinner together.

Michael Still: Juno nova mid-cycle meetup summary: cells

Fri, 2014-08-15 14:29
This is the next post summarizing the Juno Nova mid-cycle meetup. This post covers the cells functionality used by some deployments to scale Nova.



For those unfamiliar with cells, it's a way of combining smaller Nova installations into a thing which feels like a single large Nova install. So for example, Rackspace deploys Nova in cells of hundreds of machines, and these cells form a Nova availability zone which might contain thousands of machines. The cells in one of these deployments form a tree: users talk to the top level of the tree, which might only contain API services. That cell then routes requests to child cells which can actually perform the operation requested.



There are a few reasons why Rackspace does this. Firstly, it keeps the MySQL databases smaller, which can improve the performance of database operations and backups. Additionally, cells can contain different types of hardware, which are then partitioned logically. For example, OnMetal (Rackspace's Ironic-based baremetal product) instances come from a cell which contains OnMetal machines and only publishes OnMetal flavors to the parent cell.



Cells was originally written by Rackspace to meet its deployment needs, but is now used by other sites as well. However, I think it would be a stretch to say that cells is commonly used, and it is certainly not the deployment default. In fact, most deployments don't run any of the cells code, so you can't really call them even a "single cell install". One of the reasons cells isn't more widely deployed is that it doesn't implement the entire Nova API, which means some features are missing. As a simple example, you can't live-migrate an instance between two child cells.



At the meetup, the first thing we discussed regarding cells was a general desire to see cells finished and become the default deployment method for Nova. Perhaps most people end up running a single cell, but in that case at least the cells code paths are well used. The first step to get there is improving the Tempest coverage for cells. There was a recent openstack-dev mailing list thread on this topic, which was discussed at the meetup. There was commitment from several Nova developers to work on this, and notably not all of them are from Rackspace.



It's important that we improve the Tempest coverage for cells, because it positions us for the next step in the process, which is bringing feature parity to cells compared with a non-cells deployment. There is some level of frustration that the work on cells hasn't really progressed in Juno, and that it is currently incomplete. At the meetup, we made a commitment to bringing a well-researched plan to the Kilo summit for implementing feature parity for a single cell deployment compared with a current default deployment. We also made a commitment to make cells the default deployment model when this work is complete. If this doesn't happen in time for Kilo, then we will be forced to seriously consider removing cells from Nova. A half-done cells deployment has so far stopped other development teams from trying to solve the problems that cells addresses, so we either need to finish cells, or get out of the way so that someone else can have a go. I am confident that the cells team will take this feedback on board and come to the summit with a good plan. Once we have a plan we can ask the whole community to rally around and help finish this effort, which I think will benefit all of us.



In the next blog post I will cover something we've been struggling with for the last few releases: how we get our bug count down to a reasonable level.



Tags for this post: openstack juno nova mid-cycle summary cells

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues




Michael Still: More bowls and pens

Fri, 2014-08-15 13:27
The pens are quite hard to make by the way -- the wood is only a millimeter or so thick, so it tends to split very easily.






Tags for this post: wood turning 20140805-woodturning photo




Michael Still: Juno nova mid-cycle meetup summary: DB2 support

Fri, 2014-08-15 12:27
This post is one part of a series discussing the OpenStack Nova Juno mid-cycle meetup. It's a bit shorter than most of the others, because the next thing on my list to talk about is DB2, and that's relatively contained.



IBM is interested in adding DB2 support as a SQL database for Nova. Theoretically, this is a relatively simple thing to do because we use SQLAlchemy to abstract away the specifics of the SQL engine. However, in reality, the abstraction is leaky. The obvious example in this case is that DB2 has different rules for foreign keys than other SQL engines we've used. So, in order to be able to make this change, we need to tighten up our schema for the database.



The change that was discussed is the requirement that the UUID column on the instances table be not null. This seems like a relatively obvious thing to require, given that UUID is the official way to identify an instance, and has been for a really long time. However, there are a few things which make this complicated: we need to understand the state of databases that might have been through a long chain of upgrades from previous Nova releases, and we need to ensure that the schema alterations don't cause significant performance problems for existing large deployments.



As an aside, people sometimes complain that Nova development is too slow these days, and they're probably right, because things like this slow us down. A relatively simple change to our database schema requires a whole bunch of performance testing and negotiation with operators to ensure that it's not going to be a problem for people. It's good that we do these things, but sometimes it's hard to explain to people why forward progress is slow in these situations.



Matt Riedemann from IBM has been doing a good job of handling this change. He's written a tool that operators can run before the change lands in Juno that checks if they have instance rows with null UUIDs. Additionally, the upgrade process has been well planned, and is documented in the specification available on the fancy pants new specs website.
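The tool does more than this, but the core of the check is conceptually simple. As an illustrative sketch only (assuming a MySQL backend and the default database name of nova, and not Matt's actual tool), the offending rows can be counted like so:

$ mysql -u root -p nova -e "SELECT COUNT(*) FROM instances WHERE uuid IS NULL;"

If that returns anything other than zero, you'll want to remediate those rows before upgrading.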



We had a long discussion about this change at the meetup, and how it would impact on large deployments. Both Rackspace and HP were asked if they could run performance tests to see if the schema change would be a problem for them. Unfortunately HP's testing hardware was tied up with another project, so we only got numbers from Rackspace. For them, the schema change took 42 minutes for a large database. Almost all of that was altering the column to be non-nullable; creating the new index was only 29 seconds of runtime. However, the Rackspace database is large because they don't currently purge deleted rows; if they can get that done before running this schema upgrade, then the impact will be much smaller.



So the recommendation here for operators is that it is best practice to purge deleted rows from your databases before an upgrade, especially when schema migrations need to occur at the same time. There are some other takeaways for operators as well: if we know that operators have a large deployment, then we can ask if an upgrade will be a problem. This is why being active on the openstack-operators mailing list is important. Additionally, if operators are willing to donate a dataset to Turbo-Hipster for DB CI testing, then we can use that in our automation to try and make sure these upgrades don't cause you pain in the future.



In the next post in this series I'll talk about the future of cells, and the work that needs to be done there to make it a first class citizen.



Tags for this post: openstack juno nova mid-cycle summary sql database sqlalchemy db2

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Michael Still: Review priorities as we approach juno-3

Fri, 2014-08-15 06:27
I just sent this email out to openstack-dev, but I am posting it here in case it makes it more discoverable to people drowning in email:



To: openstack-dev
Subject: [nova] Review priorities as we approach juno-3

Hi.

We're rapidly approaching j-3, so I want to remind people of the current reviews that are high priority. The definition of high priority I am using here is blueprints that are marked high priority in launchpad that have outstanding code for review -- I am sure there are other reviews that are important as well, but I want us to try to land more blueprints than we have so far. These are listed in the order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review points they're very easy to review and help push the project forward in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael



The actual email thread is at http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html.



Tags for this post: openstack juno review nova ptl

Related posts: Juno Nova PTL Candidacy; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Andrew Pollock: [life] Day 197: Ekka outing

Thu, 2014-08-14 21:26

I started the day with a yoga class. It was good to be back. I missed last week's class because I was under the weather, and was really missing yoga. It was just me and one other student this morning, so it was nice.

I picked up Zoe from Sarah's place this morning. She was a bit wrecked from an outing to the Ekka yesterday with Sarah, but adamant that she wasn't tired, and still wanted to go to the Ekka with me today.

I decided that given she was up for it, and tomorrow's schedule didn't really permit going, it had to be today or bust, so I figured we'd just go and take it gently, and go home at the first sign of trouble.

The day worked out perfectly fine. We caught the train in, and stopped at the animal nursery first. Zoe got to hand feed some lambs, goats and calves, as well as hold a baby chicken.

The main goal for the day was the rides, so we headed over there, after I'd gotten propositioned by the Surf Life Savers for a raffle ticket, and located some fairy floss for Zoe. Suitably sugared up, we hit the kids rides area.

I'd prepurchased a $40 ride card, at a $5 discount, and tried to impress upon Zoe that once it was exhausted, we were done with the rides. She seemed to get that. I did less well with convincing her to check out everything on offer before we started blowing money on rides.

The first thing she wanted to go on was the Magic Circus, which mercifully they only charged us one entry for. It was a pretty cool multi-level physical sensory sort of thing. It was fun to go on it with her.

After that we waited in line for an eternity for bungy trampoline. This was where I wished we'd scouted around a bit first, because we waited in line for ages for a single trampoline, where there were another four standing idle a bit further down. I used the wait to grab a bit of food and share it with Zoe.

Next, we went on the dodgem cars. That was heaps of fun. Zoe couldn't reach the pedal, but she could steer (with a little bit of help occasionally). She seems to really enjoy rides where she gets thrown around. She's going to be a total adrenaline junkie when she's bigger I think.

What I thought was going to be her last ride was the Big Bubble Bump, those big air-inflated balls on a wading pool. She was very keen for that one, and the line was short. She had lots of fun tumbling all over the place.

With a little bit of extra assistance, we managed to squeeze one more go on the Magic Circus out of the ride card, which made her very happy.

After the obligatory strawberry sundae, she was pretty much done, and we'd managed to avoid the rain, so we headed home. I thought she was going to fall asleep on the train on the way home, but she didn't, and perked up by the time we got home. I tried to convince her to nap in my bed while I read a book, but she ended up just playing with Smudge while I read for a bit.

After the quiet time, we went for a scooter ride around the block, via the Hawthorne Garage, to collect some produce for making a fresh batch of vegetable stock concentrate, and then Sarah arrived to pick Zoe up.

It was a really good day, and Zoe went really well. I bet she crashes tonight.

Michael Still: Juno nova mid-cycle meetup summary: ironic

Thu, 2014-08-14 19:28
Welcome to the third in my set of posts covering discussion topics at the nova juno mid-cycle meetup. The series might never end to be honest.



This post will cover the progress of the ironic nova driver. This driver is interesting as an example of a large contribution to the nova code base for a few reasons -- it's an official OpenStack project instead of a vendor driver, which means we should already have well aligned goals. The driver has been written entirely using our development process, so it's already been reviewed to OpenStack standards, instead of being a large code dump from a separate development process. Finally, it's forced us to think through what merging a non-trivial code contribution should look like, and I think that formula will be useful for later similar efforts, the Docker driver for example.



One of the sticking points with getting the ironic driver landed is exactly how upgrade for baremetal driver users will work. The nova team has been unwilling to just remove the baremetal driver, as we know that it has been deployed by at least a few OpenStack users -- the largest deployment I am aware of is over 1,000 machines. Now, this is unfortunate because the baremetal driver was always intended to be experimental. I think what we've learnt from this is that any driver which merges into the nova code base has to be supported for a reasonable period of time -- nova isn't the right place for experiments. Now that we have the stackforge driver model I don't think that's too terrible, because people can iterate quickly in stackforge, and when they have something stable and supportable they can merge it into nova. This gives us the best of both worlds, while providing a strong signal to deployers about what the nova team is willing to support for long periods of time.



The solution we came up with for upgrades from baremetal to ironic is that the deployer will upgrade to juno, and then run a script which converts their baremetal nodes to ironic nodes. This script is "off line" in the sense that we do not expect new baremetal nodes to be launchable during this process, nor after it is completed. All further launches would be via the ironic driver.



These nodes that are upgraded to ironic will exist in a degraded state. We are not requiring ironic to support their full set of functionality on these nodes, just the bare minimum that baremetal did, which is listing instances, rebooting them, and deleting them. Launch is excluded for the reasoning described above.



We have also asked the ironic team to help us provide a baremetal API extension which knows how to talk to ironic, but this was identified as a need fairly late in the cycle and I expect it to be a request for a feature freeze exception when the time comes.



The current plan is to remove the baremetal driver in the Kilo release.



Previously in this post I alluded to the review mechanism we're using for the ironic driver. What does that actually look like? Well, what we've done is ask the ironic team to propose the driver as a series of smallish (500 line) changes. These changes are broken up by functionality, for example the code to boot an instance might be in one of these changes. However, because of the complexity of splitting existing code up, we're not requiring a tempest pass on each step in the chain of reviews. We're instead only requiring this for the final member in the chain. This means that we're not compromising our CI requirements, while maximizing the readability of what would otherwise be a very large review. To stop the reviews from merging before we're comfortable with them, there's a marker review at the beginning of the chain which is currently -2'ed. When all the code is ready to go, I remove the -2 and approve that first review and they should all merge together.



In the next post I'll cover the state of adding DB2 support to nova.



Tags for this post: openstack juno nova mid-cycle summary ironic

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots




Michael Still: Juno nova mid-cycle meetup summary: containers

Thu, 2014-08-14 18:27
This is the second in my set of posts discussing the outcomes from the OpenStack nova juno mid-cycle meetup. I want to focus in this post on things related to container technologies.



Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn't feature complete and needs more testing, it's certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find on the way. There are no plans to remove libvirt LXC from nova at this time.



The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using "check experimental". This hasn't been implemented yet, but will be advertised when it is ready. Once we've seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.



We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven't started talking about specifics for when this driver will return to the nova code base, but I think at this stage we're talking about Kilo at the earliest. The driver has CI now (although it's still working through stability issues to my understanding) and progresses well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we'll decide whether to merge it back into nova then.



There was also representation from the containers sub-team at the meetup, and they spent most of their time in a break out room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:



Nova will continue to support "lowest common denominator containers": by this I mean that things like the libvirt LXC and docker driver will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container, it should be opaque to them as much as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.



The containers sub-team will also write a separate service which exposes a more full featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an in-operating-system agent, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn't already have one, or being able to declare an existing instance to be "full" and then create a new one for the next incremental container. These are interesting design issues, and I'd like to see them explored more in a specification.



This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don't think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.



Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.



Tags for this post: openstack juno nova mid-cycle summary containers docker lxc

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues




Michael Still: Juno nova mid-cycle meetup summary: social issues

Thu, 2014-08-14 17:27
Summarizing three days of the Nova Juno mid-cycle meetup is a pretty hard thing to do - I'm going to give it a go, but just in case I miss things, there is an etherpad with notes from the meetup at https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup. I'm also going to do it in the form of a series of posts, so as to not hold up any content at all in the wait for perfection. This post covers the mechanics of each day at the meetup, reviewer burnout, and the Juno release.



First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event -- it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should "always" be in the US because that's where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we've run two of these I think they're clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.



I understand that sometimes life gets in the way, but that's the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.



I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can't co-locate with every other team working on OpenStack, but I'd like to see us pick a couple of teams -- who we might be blocking -- each cycle and invite them to co-locate with us. It's easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.



The process for each of the three days: we met at Intel at 9am, and started each day by trying to cherry pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.



Reviewer burnout



We started off talking about core reviewer burnout, and what we expect from core. We've previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores "on the same page". The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.



The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.



We expect cores to use their judgement about what is a trivial change.



I have an action item to remind cores that this is acceptable behavior. I'm going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don't want to drown people in email all at once.



Juno release



We also took a look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high priority code often slips away because we currently let reviewers review whatever seems important to them.



There was talk about picking project sponsored "themes" for each release -- with the obvious examples being "stability" and "features". One problem here is that we haven't had a lot of luck convincing developers and reviewers to actually work on things we've specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn't been a glowing success.



One solution we're going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we'll have less time for the current sub-team updates, but people seem ok with that.



A few people have also suggested various interpretations of a "review day". One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.



Conclusion



As I mentioned earlier, this is the first in a series of posts. In this post I've tried to cover the social aspects of Nova -- the mechanics of the Nova Juno mid-cycle meetup and reviewer burnout -- and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I'll leave that for another post. It's already been alluded to on the openstack-dev mailing list, in this post and the subsequent proposal in gerrit. If you're dying to know more about what we talked about, don't forget the relatively comprehensive notes in our etherpad.



Tags for this post: openstack juno nova mid-cycle summary core review social

Related posts: Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: conclusion; Juno nova mid-cycle meetup summary: DB2 support




Linux Users of Victoria (LUV) Announce: LUV Beginners August Meeting : MythTV

Wed, 2014-08-13 21:29
Start: Aug 16 2014 12:30 End: Aug 16 2014 16:30 Location:

RMIT Building 91, 110 Victoria Street, Carlton South

Link:  http://luv.asn.au/meetings/map

MythTV is a free and open source home entertainment application with a simplified "10-foot user interface" design for the living-room TV, and turns a computer with the necessary hardware into a network streaming digital video recorder, a digital multimedia home entertainment system, or home theatre personal computer. It runs on various operating systems, primarily Linux, Mac OS X and FreeBSD.

This introduction to MythTV, with live examples, will be presented by LUV Committee member Deb Henry.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Michael Still: More turning

Wed, 2014-08-13 16:27
Some more pens, and then I went back to bowls for a bit.



The attraction of pens is that I can churn out a pen in about 30 minutes, whereas a bowl can take twice that. Therefore when I have a small chance to play in the garage I'll do a pen, whereas when I have more time I might do a bowl.






Tags for this post: wood turning 20140718-woodturning photo




Michael Still: I've been making pens

Wed, 2014-08-13 15:27
So, I've been making pens for the last few months. Here are some examples. The first two are my first "commission", in that my brother-in-law asked for some pens for a friend's farewell gift.






Tags for this post: wood turning 20140628-woodturning photo


