Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Ian Wienand: Durable photo workflow

Wed, 2016-03-30 15:25

Ever since my kids were born I have accumulated thousands of digital happy-snaps and I have finally gotten to a point where I'm quite happy with my work-flow. I have always been extremely dubious of using any sort of external all-in-one solution to managing my photos; so many things seem to shut down, cease development or disappear, leaving you to figure out how to migrate to the next latest thing (e.g. Picasa shutting down). So while there is nothing complicated or even generic about them, there are a few things in my photo-scripts repo that might help others who like to keep a self-contained archive.

Firstly I have a simple script to copy the latest photos from the SD card (i.e. those new since the last copy -- this is obviously very camera specific). I then split by date so I have a simple flat directory layout with each week's photos in it. With the price of SD cards and my rate of filling them up, I don't even bother wiping them at this point, but just keep them in the safe as a backup.
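
At its heart that script is just a copy-new-files-then-file-by-week loop. A minimal sketch of the idea (the mount point, file pattern and destination here are assumptions; the real, camera-specific version lives in the photo-scripts repo):

#!/bin/bash
# Minimal sketch only -- paths and naming are assumptions, the real
# (camera-specific) logic lives in the photo-scripts repo.
SRC=/media/sdcard/DCIM
DEST=~/photos
[ -e "$DEST/.last-copy" ] || touch -t 197001010000 "$DEST/.last-copy"
find "$SRC" -name '*.JPG' -newer "$DEST/.last-copy" | while read -r f; do
    # file each photo into a flat per-week directory, e.g. ~/photos/2016-13
    week=$(date -r "$f" +%Y-%U)
    mkdir -p "$DEST/$week"
    cp -n "$f" "$DEST/$week/"
done
touch "$DEST/.last-copy"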

For some reason I have a bit of a thing about geotagging all the photos so I know where I took them. Certainly some cameras do this today, but mine does not. I have a two-pronged approach; I have a geotag script and then a small website easygeotag.info which quickly lets me translate a point on Google maps to exiv2 command-line syntax. Since I take a lot of photos in the same places, points can be stored by name in a small file sourced by the script.
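
The exiv2 syntax it spits out looks roughly like this (the coordinates below are made up for illustration):

exiv2 -M"set Exif.GPSInfo.GPSLatitudeRef S" \
      -M"set Exif.GPSInfo.GPSLatitude 34/1 55/1 3000/100" \
      -M"set Exif.GPSInfo.GPSLongitudeRef E" \
      -M"set Exif.GPSInfo.GPSLongitude 138/1 36/1 1200/100" \
      photo.jpg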

Adding comments to the photos is done with perhaps the lesser-known cousin of EXIF -- IPTC. Some time ago I wrote python bindings for libiptcdata and it has been working just fine ever since. Debian's python-iptcdata comes with an inbuilt script to set title and caption, which is easily wrapped.

What I like about this is that my photos are in a simple directory layout, with all metadata embedded within the actual image files in very standardised formats that should be readable anywhere I choose to host them.

For sharing, I then upload to Flickr. I used to have a command-line script for this, but have found the web uploader works even better these days. It reads the IPTC data for titles and comments, and gets the geotag info for nice map displays. I manually corral them into albums, and the Flickr "guest pass" is perfect for sharing albums with friends and family without making them jump through hoops to register on a site to get access to the photos, or worse, me having to host them myself. I consider Flickr a cache, because (even though I pay) I expect it to shut down or turn evil at any time. Interestingly, their AI tagging is often quite accurate, and I imagine will only get better. This is nice extra metadata that you don't have to spend time on yourself.

The last piece has always been the "hit by a bus" component of all this. Can anyone figure out access to all these photos if I suddenly disappear? I've tried many things here -- at one point I was using rdiff-backup to sync encrypted bundles up to AWS, for example; but I found the problem with that very clearly when I failed to keep the key safe and couldn't decrypt any of my backups (let alone anyone else figuring all this out).

Finally Google Nearline seems to be just what I want. It's off-site, redundant and the price is right; but more importantly I can very easily give access to the backup bucket to anyone with a Google address, who can then just hit a website to download the originals from the bucket (I left the link with my other "hit by a bus" bits and pieces). Of course what they then do with this data is their problem, but at least I feel like they have a chance. This even has an rsync-like interface in the client, so I can quickly upload the new stuff from my home NAS (where I keep the photos in a RAID0).
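
With the rsync-like interface the actual upload is a one-liner; roughly something like this (the bucket name is made up):

# sync new photos from the NAS up to the Nearline bucket
gsutil -m rsync -r /nas/photos gs://my-photo-archive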

I've been doing this now for 350 weeks and have worked through some 25,000 photos. I used to get an album up every week, but as the kids get older and we're closer to family I now do it in batches about once a month. I do wonder if my kids will ever be interested in tagged and commented photos with pretty much their exact location from their childhood ... I doubt it, but it's nice to feel like I have a good chance of still having them if they do.

James Morris: Linux Security Summit 2016 – CFP Announced!

Tue, 2016-03-29 21:27

The 2016 Linux Security Summit (LSS) will be held in Toronto, Canada, on August 25th and 26th, co-located with LinuxCon North America. See the full announcement.

The Call for Participation (CFP) is now open, and submissions must be made by June 10th.  As with recent years, the committee is looking for refereed presentations and discussion topics.

This year, there will be a $100 registration fee for LSS, and you do not need to be registered for LinuxCon to attend LSS.

There’s also now an event twitter feed, for updates and announcements.

If you’ve been doing any interesting development, or deployment, of Linux security systems, please consider submitting a proposal!

Simon Lyall: Wellington Open 2016

Tue, 2016-03-29 08:29

Over Easter 2016 (March 25th – 27th) I played in the Wellington Open Chess Tournament. I play in the tournament about half of the time. This year it was again played at the CQ Hotel in Cuba Street, so I was able to stay at the venue and also visit my favorite Wellington cafes.

There were 43 players entered (the highest for several years) with around 9 coming down from Auckland. I was ranked 16th with a rating of 1988, and the top 4 Wellington players (Dive, Wastney, Ker & Croad), who are all ranked in the top 10 in NZ, were playing.

See the Tournament’s page for details and downloads for the games. Photos by Lin Nah and me are also up on Flickr for Days one, two and three.

Round 1 – White vs Dominic Leman (unrated) – Result win

This game was over fairly quickly after my opponent's 5th move (Nf6), which let me win a free Bishop after (5.. Nf6 6.Nxc6 bxc6 7.Bxc5). They then played (7.. Nxe4) to take the pawn, which loses the Knight since I just pin it against the King with Qe2 and pick it up a move or two later.


Round 2 – Black vs Michael Steadman (2338) – Result loss

Mike plays at my club and is rated well above me. However I put on a pretty poor show and made a mistake early in the Opening (which was one of my lines rather than something Mike usually plays). An error on move 5 lost me a pawn and left my position poor. I failed to improve and resigned on move 21.

Round 3 – White vs Kate Song (1701) – Result win

After 6. ..a5

I was very keen on beating Kate. While she is rated almost 200 points lower than me, she is improving faster and beat me in the last round of the Major Open at the NZ Champs at the start of this year.

We were the same colours as our game in January so I spent some time prepping the opening to avoid my previous mistakes.

In that game Black played 6.. a5 (see diagram) and I replied with the inaccurate Be2 and got tied into knots on the Queenside. This time I played 7. Bd3 which is a better line. However after 7. ..Nh6 8. dxc5 Bxc5 9. O-O Black plays Ng4, which gives me some problems. After some back and forth Black ended up with a bit of a mid-game advantage, with a developed bishop pair and control of the open c-file.

 

27. Bg5 and I offer a draw

However on move 27, after the rooks had been swapped, I was able to play Bg5, which threatened to swap off Black's good Bishop or push it backwards. I offered a draw.

Luckily for me Kate chose to swap the Bishops and Queens with 27. ..Bxg5 28.Nxg5 Qd1+ 29.Qxd1 Bxd1, which left me with almost all my pawns on black squares and pretty safe from her white-squared bishop. I was then able to march my King over to the Queenside while my Kingside was safe from the Bishop. After picking up the a-pawn when the Knight and Bishops were swapped off, I was left with King plus a- and b-pawns vs King and b-pawn, with around 3 tempi in reserve for pushing back the Black King.

Round 3 – Michael Nyberg vs Leighton Nicholls

Position after 71. Kxg4

Another game during round 3 went very long. This was the position after move 71, when White had just taken Black's last pawn. The game kept going till move 125! White kept trying to force Black to the edge of the board while Black kept his King close to the centre and the Knight nearby (keeping the opposing King away with checks and fork threats).

At move 125 Black (Nicholls) claimed a draw under the 50-move rule, at which point Michael Nyberg asked "are you sure?" and "are you prepared for any penalties?". After Leighton confirmed he wanted to go ahead with the claim, Michael claimed that the draw rules had been changed a couple of years ago, that King+Rook vs King+Knight was allowed 75 moves, and that since the draw claim was incorrect Leighton should lose.

However a check of the official FIDE rules online showed that there is no such special limit for that material; the rule is always 50 moves (Rule 9.3). The penalty for incorrectly claiming a draw would also have been 2 minutes added to Michael's time, not Leighton losing the game (Rule 9.5b).

The Arbiter checked the rules and declared the game a draw, while Michael grumbled about appealing it (which did not happen). Not a good way to end the game, since I thought Leighton defended very well, especially given how aggressive Michael was while being completely in the wrong.

There have been exceptions to the 50-move draw rule in the past, but it has been a flat 50 moves since at least 2001: while some positions take longer to win in theory, no human would actually be able to play them perfectly.

Round 4 – Black vs David Paul – Result win

Another game against somebody close to my rating but a little below, so while I should win it could be hard. I didn't play the opening right, however, and ended up in a slightly poor position a couple of tempi down.

After 32. Re4, draw offered

After some manoeuvring (and the odd missed move by both sides) White offered a draw after move 32. I decided to press on with f6 and was rewarded when, after 32. ..f6 33.Kf2 Kf7, White played 34.b4? which allowed me to play Nc3, bounce my Knight to b5 and then take the Bishop on d6 along with an extra pawn.

 

After 44. ..Kd6

A few moves later I'm a pawn up with a clear path to the win; although I made a mistake at the end, it wasn't bad enough to be fatal.


Round 5 – White vs Russell Dive – Game lost

After getting to 3 points after 4 rounds I was rewarded with playing the top seed. As often happens with stronger players, he just seemed to make 2 threats with every move and my position slowly (well, not that slowly) got worse and worse as I couldn't counter them all (let alone make my own threats).

Eventually I resigned 3 pawns down with no play (the computer assessed my position as -5.0).

Round 6 – Black vs Brian Nijman – Game Lost

The last round was once again against a higher-rated player, but one I had a reasonable chance against.

After 10. ..Bg6

I prepped a bit of the opening but he played something different and we ended up in a messy position with White better developed but without a huge advantage.

We both had Bishops cutting through the position and Queens stuck on the side, but it would be hard for me to develop my pieces. I was going to have to work hard at getting them out into good positions.

 

After 23. d5

After some swaps White ended up charging through my centre with lots of threats. I spent a lot of time looking at this position working out what to do.

White has the Bishop ready to take the pawn on b5 and give check, possibly grabbing the Knight or pinning the Rook, while the Knight can also attack the Rook, and the pawns can even promote.

I ended up giving up the exchange for a pawn, but promptly lost a pawn when White castled and took on f7.

After 32. Ne2

I decided to push forward hoping to generate some threats, and managed to when I threatened to mate with two Knights or win a Rook after 32. Ne2.

34.Rxc5+ Kxc5 35.Be1 Rd8 36.Rc7+ followed, but I played 36. ..Kd4 and blocked my Rook, rather than Kb6 which would have given me a tempo to move my Rook to d1. That would probably have picked up another exchange and should have been enough for the win.

 

After 47. g6

And then I found another win. All I had to do was push the pawn. On move 47 I just have to put a piece on f2 to block the bishop from taking my pawn on g1. If 47. ..Nf2 48. Bxf2 Rxf2 49. g1=Q leaves me a Queen vs a rook and I can take the pawn on g6 straight away.

But instead I got Chess Blindness and just swapped the pawn for the Bishop. I then tried to mate (or perpetually check) the King instead of trying to stop the pawns (the computer says 50. ..Nf4 is just in time). A few moves later I ran out of King-chasing moves and resigned, at which point everybody told me the move I had missed.

Pia Waugh: My personal OGPau submission

Mon, 2016-03-28 11:27

I have been fascinated and passionate about good government since I started exploring the role of government in society about 15 years ago. I decided to go work in both the political and public service arenas specifically to get a better understanding of how government and democracy work in Australia, and it has been an incredible journey, learning a lot with a lot of good mentors and experiences along the way.

When I learned about the Open Government Partnership I was extremely excited about the opportunity it presented to have genuine public collaboration on the future of open government in Australia, and to collaborate with other governments on important initiatives like transparency, democracy and citizen rights. Once the government gave the go ahead, I felt privileged to be part of kicking the process off, and secure in my confidence in the team left to run the consultation as I left to be on maternity leave (returning to work in 2017). Amelia, Toby and the whole team are doing a great job, as are the various groups and individuals contributing to the consultation. I think it can be very tempting to be cynical about such things, but it is so important we take the steering wheel offered, to drive this where we want to go. Otherwise it is a wasted opportunity.

So now, as a citizen who cares about this topic, and completely independently of my work, I'd like to contribute some additional ideas to the Australian OGP consultation and I encourage you all to contribute ideas too. There have already been a lot of great ideas I support, so these are just a few I think deserve a little extra attention. I've laid out some problems and then some actions for each problem. I've also got a 9-week-old baby so this has been a bit tricky to write in between baby duties. I'm keen to explore these and other ideas in more detail throughout the process, but these are just the high-level ideas to start.

Problem 1: democratic engagement. I think it is hard for a lot of citizens to engage in the range of activities of our democracy. Voting is usually considered the extent of the average person's participation, but there are so many ways to be involved in the decisions and actions of governments, which affect us in our everyday lives! These actions are about making the business of government easier for the people served to get involved in.

Action (theme: public participation): Establish a single place to discover all consultations, publications, policies – it is currently difficult for people to contribute meaningfully to government because it is hard to find what is going on, what has already been decided, what the priorities of the government of the day are, and what research has been conducted to date.

Action: (theme: public participation): Establish a participatory budget approach. Each year there should be a way for the public to give ideas and feedback to the budget process, to help identify community priorities and potential savings.

Action: (theme: public participation): Establish a regular Community Estimates session. Senate Estimates is a way for the Senate to hold the government and departments to account however, often the politics of the individuals involved dominates the approach. What if we implemented an opportunity for the public to do the same? There would need to be a rigorous way to collect and prioritise questions from the public that was fair and representative, but it could be an excellent way to provide greater accountability which is not (or should not be) politicised.

Problem 2: analogue government. Because so much of the reporting, information, decisions and outcomes of government are published (or not published) in an analogue format (not digital or machine readable), it is very hard to discover and analyse, and thus very hard to monitor. If government was more digitally accessible, more mashable, then it would be easier to monitor the work of government.

Action: (theme: open data) XML feeds for all parliamentary data including Hansard, comlaw, annual reports, PBSs, MP expenses and declarations of interests in data form, with notifications of changes. This would make this important democratic content more accessible, easier to analyse and easier to monitor.

Action: (theme: open data) Publishing of all the federal budget information in data format on budget night, including the tables throughout the budget papers, the data from the Portfolio Budget Statements (PBSs) and anything else of relevance. This would make analysing the budget easier. There have been some efforts in this space but it has not been fully implemented.

Action: (Freedom of Information): Adoption of the righttoknow platform for whole of gov with a central FOI register and publications, and a central FOI team to work across all departments consistently for responding to requests. Currently doing an FOI request can be tricky to figure out (unless you can find community initiatives like righttoknow which has automated the process externally) and the approach to FOI requests varies quite dramatically across departments. A single official way to submit requests, track them, and see reports published, as well as a single mechanism to respond to requests, would be better for the citizen experience and far more efficient for government.

Action: (theme: government integrity): Retrospective open calendars of all Parliamentarians business calendars. Constituents deserve to know how their representatives are using their time and, in particular, who they are meeting with. This helps improve transparency around potential influencers of public policy, and helps encourage Parliamentarians to consider how they spend their time in office.

Problem 3: limits for reporting transparency. A lot of the rules about reporting of expenditure in Australia are better than most other countries in the world however, we can do better. We could lower the thresholds for reporting expenditure for instance, and others have covered expanding the reporting around political donations so I’ll stick to what I know and consider useful from direct experience.

Action: (theme: fiscal transparency): Regular publishing of government expenditure records down to $1000. Currently federal government contracts over $10k are reported in Australia through the AusTender website and on data.gov.au; however, there are a lot of expenses below $10k that arguably would be useful to know about. In the UK they introduced monthly expenditure reporting per department at https://data.gov.uk/data/openspending-report/index

Action: (theme: fiscal transparency): A public register of all gov funded major projects (all types) along with status, project manager and regular reporting. This would make it easier to track major projects and to intervene when they are not delivering.

Action: (theme: fiscal transparency): Update of PBS and Annual Report templates for comparative budget and program information with common key performance indicators and reporting for programs and departmental functions. Right now agencies do their reporting in PDF documents that provide no easy way to compare outcomes, programs, expenditure, etc. If common XML templates were used for common reports, comparative assessment would be easier and information about government as a whole much more available for assessment.

Problem 4: stovepipe and siloed government impedes citizen centric service delivery. Right now each agency is motivated to deliver their specific mandate with a limited (and ever restricted) budget and so we end up with systems (human, technology, etc) for service delivery that are siloed from other systems and departments. If departments took a more modular approach, it would be more possible to mash up government data, content and services for dramatically improved service delivery across government, and indeed across different jurisdictions.

Action: (theme: public service delivery): Mandated open Application Programming Interfaces (APIs) for all citizen and business facing services delivered or commissioned by government, complying with appropriately defined standards and security. This would enable different data, content and services to be mashed up by agencies for better service delivery, but also enables an ecosystem of service delivery beyond government.

Action: (theme: government integrity): A consistent reporting approach and public access to details of outsourced contract work, with greater consistency of confidentiality rules in procurement. A lot of work is outsourced by government to third parties. This can be a good way to deliver some things (and there are many arguments as to how much outsourcing is too much); however, it introduces a serious transparency issue when the information about contracted work cannot be monitored, with the excuse of "commercial in confidence". All contracts should have minimum reporting requirements and should make publicly available the details of what exactly is contracted, with the exception of contracts with national security implications where such disclosure creates a significant risk. This would also help in creating a motivation for contractors to deliver on their contractual obligations. Finally, if procurement officers across government had enhanced training to correctly apply the existing confidentiality test from the Commonwealth Procurement Rules, it would be reasonable to expect that there would be less information hidden behind commercial in confidence.

I also wholeheartedly support the recommendations of the Independent Parliamentary Entitlements System Report (https://www.dpmc.gov.au/taskforces/review-parliamentary-entitlements), in particular:

  • Recommendation 24: publish all key documents online;
  • Recommendation 25: more frequent reporting (of work expenses of parliamentarians and their staff) on data.gov.au as a dataset;
  • Recommendation 26: improved travel reporting by Parliamentarians.

I hope this feedback is useful and I look forward to participating in the rest of the consultation. I'm adding the ideas to the ogpau wiki and look forward to feedback and discussion. Just to be crystal clear, these are my own thoughts, based on my own passion and experience, and are not in any way representative of my employer or the government. I have nothing to do with the running of the consultation now and expect my ideas to hold no more weight than the ideas of any other contributor.

Good luck everyone, let’s do this

Tridge on UAVs: APM:Plane 3.5.2 released

Sat, 2016-03-26 14:39

The ArduPilot development team is proud to announce the release of version 3.5.2 of APM:Plane. This is a minor release with small changes.



The main reason for this release over the recent 3.5.1 release is a fix for a bug where the px4io co-processor on a Pixhawk can run out of memory while booting. This causes the board to be unresponsive on boot. It only happens if you have a more complex servo setup and is caused by too much memory used by the IO failsafe mixer.



The second motivation for this release is to fix an issue where, during a geofence altitude failsafe that happens at low speed, an aircraft may dive much further than it should to gain speed. This only happened if the thrust line of the aircraft combined with low pitch integrator gain led to the aircraft not compensating sufficiently with elevator at full throttle in a TECS underspeed state. To fix this two changes have been made:



  • a minimum level of integrator in the pitch controller has been added. This level has a sufficiently small time constant to avoid the problem with the TECS controller in an underspeed state.
  • the underspeed state in TECS has been modified so that underspeed can end before the full target altitude has been reached, as long as the airspeed has risen sufficiently past the minimum airspeed for a sufficient period of time (by 15% above minimum airspeed for 3 seconds).

Many thanks to Marc Merlin for reporting this bug!

The default P gains for both roll and pitch have also been raised from 0.4 to 0.6. This is to help users that fly with the default parameters. A value of 0.6 is safe for all aircraft that I have analysed logs for.



The default gains and filter frequencies of the QuadPlane code have also been adjusted to better reflect the types of aircraft users have been building.



Other changes include:

  • improved QuadPlane logging for better analysis and tuning (adding RATE and QTUN messages)
  • fixed a bug introduced in 3.5.1 in rangefinder landing
  • added TECS logging of speed_weight and flags
  • improvements to the lsm303d driver for Linux
  • improvements to the waf build system



Happy flying!

James Purser: Changes, they are a happening

Fri, 2016-03-25 14:31

So, if you've been following my social media then you'll have noticed that I have very rapidly (in the space of maybe two weeks?) gone through the process of being retrenched, looking for work and acquiring a new job. In this I have actually been very, very lucky, and unlike some others I'm not going to claim that being unemployed is some sort of "freedom" or relaxing time. Instead it's a period where you immediately go "Okay, well shit, I have a family to support, so no time for relaxing, get back out there".

As I said, I've been EXTREMELY lucky in that I have managed to snag a good job so soon after being retrenched. I won't say who yet, but will say it's back in the media industry (an industry I haven't worked in since leaving WIN TV back in 2004). I will be leaving the moodle space and returning to Drupalland with forays into new areas (I really do like forays into new areas).

With any luck this won't affect my plans for rebooting Purser Explores The World and my other plans for Angry Beanie. I've already started working on a new episode of PETW (actually interviewed someone the other day, it felt awesome), and am in the process of organising more. Also have sekrit podcast project to get going as well.

On the tech side, I'm going to be moving this blog to Drupal 8 (because, well while the contrib modules aren't there yet for something as complex as Angry Beanie, it's certainly there for a blog like this), I'm also going to be delving more into the MVC side of things. I've played around with django and the like, but it's probably about time I get it knocked over.

Well that's it for the moment, hopefully will blog a bit more in the future, we'll see.

Oh, and if you're reading this on medium, I am thinking about a module that allows you to actually publish from Drupal to medium, but that's at the "It's an idea I had on the train" stage.

David Rowe: Project Whack a Mole Part 2

Fri, 2016-03-25 09:30

I’ve been steadily working on this project so here is an update. You might like to review Part 1 which describes how this direction finding system works.

The good news is it works with real off-air radio signals! I could detect repeatable phase angles using two antennas with an RF signal, first in my office using a signal generator, then with a real signal from a local repeater. However the experimental set up was delicate and the software slow and cumbersome. So I’ve put some time into making the system easier to use and more robust.

New RF Head

I’ve built a new RF Head based on a NE602 active mixer:



The 32 kHz LO is on the RHS of the photo. Here is the saga of getting the 32kHz oscillator to run.

The mixer has an impedance of about 3000 ohms across its balanced inputs and outputs, so I've coupled the 50 ohm signals with a single turn loop to make some sort of impedance match. The tuned circuits also give some selectivity. This is important as I am afraid the untuned HackRF front end will collapse with overload when I poke a real antenna up above the Adelaide Plains and it can see every signal on the VHF and UHF spectrum.

Antenna 1 (A1) is coupled using a tapped tuned circuit, and with the mixer output forms a 3 winding transformer. Overall gain for the A1 and A2 signals is about -6dB which is OK. The carrier feed through from the A2 mixer is 14dB down. Need to make sure this carrier feed through stays well down on A1 which is on the same frequency. Otherwise the DSP breaks – it assumes there is no carrier feed through. In practice the levels of A1 and A2 will bob about due to multipath, so some attenuation of A2 relative to A1 is a good idea.

Real Time-ish Software

I refactored the df_mixer.m Octave code to make it run faster and make repeated system calls to hackrf_transfer. So now it runs real time (ish); grabs a second of samples, does the DSP foo, plots, then repeats about once every 2 seconds. Much easier to see what's going on now; here it is working with an FM signal:

You can "view image" on your browser for a larger image. I really like my "propeller plot". It's a polar histogram of the angles the DSP comes up with. It has two "blades" due to the 180 degree ambiguity of the system. The propeller gets fatter with low SNR as there is more uncertainty, and thinner with higher SNR. It simultaneously tells me the angle and the quality of the angle. I think that's a neat innovation.

Note the “Rx signal at SDR Input” plot. The signals we want are centered on 48kHz (A1), 16 and 80kHz (A2 mixer products). Above 80kHz you can see the higher order mixer products, more on that below.

Reflections

As per Part 1 the first step is a bench test. I used my sig gen to supply a test signal which I split and fed into A1 and A2. By adding a small length of transmission line (38mm of SMA adapters screwed together), I could induce known amounts of phase shift.

Only I was getting dud results, 10 degrees one way then 30 the other when I swapped the 38mm segment from A1 to A2. It should be symmetrical, same phase difference but opposite.

I thought about the A1 and A2 ports. It’s unlikely they are 50 ohms with my crude matching system. Maybe this is causing some mysterious reflections that are messing up the phase at each port? Wild guess but I inserted some 10dB SMA attenuators into A1 and A2 and it started working! I measured +/- 30 +/-1 degrees as I swapped the 38mm segment. Plugging 38mm into my spreadsheet the expected phase shift is 30.03 degrees. Yayyyyyyy…..

So I need to add some built-in termination impedance for each port, like a 6dB “pad”. Why are they called “pads” BTW?

The near-real time software and propeller plot made it really easy to see what was going on and I could see and avoid any silly errors. Visualisation helps.

Potential Problems

I can see some potential problems with this mixer based method for direction finding:

  1. If the spectrum is “busy” and other nearby channels are in use the mixer will plonk them right on top of our signals. Oh dear.
  2. The mixer has high order output products – at multiples of the LO (32, 64, 96 ….. kHz) away from the input frequency. So any strong signal some distance away could potentially be mixed into our pass band. A strong BPF and resonant antennas might help. Yet to see if this is a real problem.

Next Steps

Anyway, onward and upwards. I’ll add some “pads” to A1 and A2, then assemble the RF head with a couple of antennas so I can mount the whole thing outdoors on a mast.

Mark has given me a small beacon transmitter that I will use for local testing, before trying it on a repeater. If I get repeatable repeater-bearings (lol) I will take the system to a mountain overlooking the city and see if it blows up with strong signals. Gold star if I can pull bearings off the repeater input as that's where our elusive mole lives.

Tridge on UAVs: New ArduPilot documentation site

Thu, 2016-03-24 18:18

If you have visited ardupilot.com recently you may have noticed you are redirected to our new documentation system on ardupilot.org. This is part of our on-going transformation of the ardupilot project that we announced in a previous post.

The new documentation system is based on sphinx and was designed by Hamish Willee. I think Hamish has done a fantastic job with the new site, creating something that will be easier to manage and update, while using less server resources which should make it more responsive for users.

Updates to the documentation will now be done via github pull requests, using the new ardupilot_wiki repository. That repository will also host documentation issues, and includes all the issues imported from the old tracking repository.

Many thanks to everyone who has helped with this conversion, including Hamish, Buzz, Jani and Peter.

We have endeavoured to make as many existing URLs as possible auto-redirect to the correct URL on the new site, but there are bound to be some errors, for which we apologise. If you find issues with the new site please either respond here or open an issue on the repository.

Happy flying!

David Rowe: Organic Potato Chips Scam

Thu, 2016-03-24 07:30

I don’t keep much junk food in my pantry, as I don’t like my kids eating too much high calorie food. Also if I know it’s there I will invariably eat it and get fat. Fortunately, I’m generally too lazy to go shopping when an urge to eat junk food hits. So if it’s not here at home I won’t do anything about it.

Instead, every Tuesday at my house is "Junk Food Night". My kids get to have anything they want, and I will go out and buy it. My 17-year-old will choose something like a family size meat-lovers pizza with BBQ sauce. My 10-year-old usually wants a "slushie", a frozen-coke sugar-laden thing, so last Tuesday off we went to the local all-night petrol (gas) station.

It was there I spied some “Organic” potato chips. My skeptical “spidey senses” started to tingle…….

Let's break it down from the information on the pack:



OK so they are made from organic grains. This means they are chemically and nutritionally equivalent to scientifically farmed grains but we need to cut down twice as much rain forest to grow them and they cost more. There is no scientifically proven health advantage to organic food. Just a profit advantage if you happen to sell it.

There is nothing wrong with Gluten. Nothing at all. It makes our bread have a nice texture. Humans have been consuming it from the dawn of agriculture. Like most marketing, the Gluten fad is just a way to make us feel bad and choose more expensive options.

And soy is suddenly evil? Please. Likewise dairy is a choice, not a question of nutrition. I’ve never met a cow I didn’t like. Especially served medium rare.

Whole grain is good, if the micro-nutrients survive deep frying in boiling oil.

There is nothing wrong with GMO. Another scam where scientifically proven benefits are being held back by fear, uncertainty, and doubt. We have been modifying the genetic material in everything we eat for centuries through selection.

Kosher is a religious choice and has nothing to do with nutrition.

Speaking of nutrition, let's compare the nutritional content per 100g to a Big Mac:

Item            Big Mac    Organic Chips
Energy          1030 kJ    1996 kJ
Protein         12.5 g     12.5 g
Carbohydrates   17.6 g     66 g
Fat             13.5 g     22.4 g
Sodium          427 mg     343 mg

This is very high energy food. It is exactly this sort of food that is responsible for first world health problems like cardio-vascular disease and diabetes. The link between high calorie snack food and harm is proven – unlike the perceived benefits of organic food. The organic label on these chips is dangerous, irresponsible marketing hype to make us pay more and encourage consumption of food that will hurt us.

Links

Give Us Our Daily Bread – A visit to a modern wheat farm.

Energy Equivalents of a Krispy Kreme Factory – How many homes can you run on a donut?

OpenSTEM: Soldering: if it smells like chicken, you’re holding it wrong!

Wed, 2016-03-23 09:30

This T-shirt sums up soldering basics quite well. Funny too. But I hear you say, surely you don’t need to really explain that?

I'd agree, and in our experience soldering with primary school students in classrooms, we've never had any such fuss.

However, in stock photography, we find the following “examples”…

This stock photo model (they appear in many other photos) is holding a hot air gun of a soldering rework station, by the metal part! If the station were turned on, there’d be third degree burns and a distinct nasty smell…

The open hard disk assembly near the front is also quite shiny…..

As if one isn’t enough, here’s another stock photo sample, again held by the metal part:

On a practical level, it's very unlikely you'd be dealing with a modern computer main board using a regular soldering iron, on the component side.

But what actually annoyed me most about this photo is something else: the original title goes something like “beautiful woman … soldering …”. Relevance? The other photo doesn’t say “hot spunk soldering”, and although that would be just as irrelevant, fact is that with articles and photos of professional women, their appearance is more often than not made a key part of their description. Which is just sexist garbage, bad journalism and bad copy-writing.

Which brings us to this final soldering stock photo sample. Just What The?

Female body selling soldering iron? Come on now. “Bad taste” doesn’t even remotely sum up the wrongness of it all.

Note: the low-res stock photo samples in this article are shown in a satirical fair-use context.

Binh Nguyen: Psychological Warfare/Mind Control, More Economic Warfare, and More

Tue, 2016-03-22 23:01

Before we start, a lot of the following seems absolutely crazy but there are reasons for it and there is a history behind it. Moreover, all of these programs have been publicly acknowledged or de-classified... One of the things I've been curious about is the interplay between broadcast media, some commonly distributed substances and their relation to population control as well as mind control.

sthbrx - a POWER technical blog: Getting logs out of things

Tue, 2016-03-22 17:00

Here at OzLabs, we have an unfortunate habit of making our shiny Power computers very sad, which is a common problem in systems programming and kernel hacking. When this happens, we like having logs. In particular, we like to have the kernel log and the OPAL firmware log, which are, very surprisingly, rather helpful when debugging kernel and firmware issues.

Here's how to get them.

From userspace

You're lucky enough that your machine is still up, yay! As every Linux sysadmin knows, you can just grab the kernel log using dmesg.

As for the OPAL log: we can simply ask OPAL to tell us where its log is located in memory, copy it from there, and hand it over to userspace. In Linux, as per standard Unix conventions, we do this by exposing the log as a file, which can be found in /sys/firmware/opal/msglog.

Annoyingly, the msglog file reports itself as size 0 (I'm not sure exactly why, but I think it's due to limitations in sysfs), so if you try to copy the file with cp, you end up with just a blank file. However, you can read it with cat or less.
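
For example, to grab a copy to keep or attach to a bug report:

# cp gives you an empty file here; cat reads the whole log fine
cat /sys/firmware/opal/msglog > opal-msglog.txt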

From xmon

xmon is a really handy in-kernel debugger for PowerPC that allows you to do basic debugging over the console without hooking up a second machine to use with kgdb. On our development systems, we often configure xmon to automatically begin debugging whenever we hit an oops or panic (using xmon=on on the kernel command line, or the XMON_DEFAULT Kconfig option). It can also be manually triggered:

root@p86:~# echo x > /proc/sysrq-trigger
sysrq: SysRq : Entering xmon
cpu 0x7: Vector: 0  at [c000000fcd717a80]
    pc: c000000000085ad8: sysrq_handle_xmon+0x68/0x80
    lr: c000000000085ad8: sysrq_handle_xmon+0x68/0x80
    sp: c000000fcd717be0
   msr: 9000000000009033
  current = 0xc000000fcd689200
  paca    = 0xc00000000fe01c00
   softe: 0        irq_happened: 0x01
    pid   = 7127, comm = bash
Linux version 4.5.0-ajd-11118-g968f3e3 (ajd@ka1) (gcc version 5.2.1 20150930 (GCC) ) #1 SMP Tue Mar 22 17:01:58 AEDT 2016
enter ? for help
7:mon>

From xmon, simply type dl to dump out the kernel log. If you'd like to page through the log rather than dump the entire thing at once, use #<n> to split it into groups of n lines.

Until recently, there wasn't an easy way to extract the OPAL log from xmon without knowing magic offsets. A couple of months ago, I was debugging a nasty CAPI issue and got rather frustrated by this, so one day when I had a couple of hours free I refactored the existing sysfs interface and added the do command to xmon. These patches will be included from kernel 4.6-rc1 onwards.

When you're done, x will attempt to recover the machine and continue, zr will reboot, and zh will halt.

From the FSP

Sometimes, not even xmon will help you. In production environments, you're not generally going to start a debugger every time you have an incident. Additionally, a serious hardware error can cause a 'checkstop', which completely halts the system. (Thankfully, end users don't see this very often, but kernel developers, on the other hand...)

This is where the Flexible Service Processor, or FSP, comes in. The FSP is an IBM-developed baseboard management controller used on most IBM-branded Power Systems machines, and is responsible for a whole range of things, including monitoring system health. Among its many capabilities, the FSP can automatically take "system dumps" when fatal errors occur, capturing designated regions of memory for later debugging. System dumps can be configured and triggered via the FSP's web interface, which is beyond the scope of this post but is documented in IBM Power Systems user manuals.

How does the FSP know what to capture? As it turns out, skiboot (the firmware which implements OPAL) maintains a Memory Dump Source Table (MDST) which tells the FSP which memory regions to dump. MDST updates are recorded in the OPAL log:

[2690088026,5] MDST: Max entries in MDST table : 256
[2690090666,5] MDST: Addr = 0x31000000 [size : 0x100000 bytes] added to MDST table.
[2690093767,5] MDST: Addr = 0x31100000 [size : 0x100000 bytes] added to MDST table.
[2750378890,5] MDST: Table updated.
[11199672771,5] MDST: Addr = 0x1fff772780 [size : 0x200000 bytes] added to MDST table.
[11215193760,5] MDST: Table updated.
[28031311971,5] MDST: Table updated.
[28411709421,5] MDST: Addr = 0x1fff830000 [size : 0x100000 bytes] added to MDST table.
[28417251110,5] MDST: Table updated.

In the above log, we see four entries: the skiboot/OPAL log, the hostboot runtime log, the petitboot Linux kernel log (which doesn't make it into the final dump) and the real Linux kernel log. skiboot obviously adds the OPAL and hostboot logs to the MDST early in boot, but it also exposes the OPAL_REGISTER_DUMP_REGION call which can be used by the operating system to register additional regions. Linux uses this to register the kernel log buffer. If you're a kernel developer, you could potentially use the OPAL call to register your own interesting bits of memory.

So, the MDST is all set up, we go about doing our business, and suddenly we checkstop. The FSP does its sysdump magic and a few minutes later it reboots the system. What now?

  • After we come back up, the FSP notifies OPAL that a new dump is available. Linux exposes the dump to userspace under /sys/firmware/opal/dump/.

  • ppc64-diag is a suite of utilities that assist in manipulating FSP dumps, including the opal_errd daemon. opal_errd monitors new dumps and saves them in /var/log/dump/ for later analysis.

  • opal-dump-parse (also in the ppc64-diag suite) can be used to extract the sections we care about from the dump:

    root@p86:/var/log/dump# opal-dump-parse -l SYSDUMP.842EA8A.00000001.20160322063051
    |---------------------------------------------------------|
    |ID              SECTION                               SIZE|
    |---------------------------------------------------------|
    |1               Opal-log                           1048576|
    |2               HostBoot-Runtime-log               1048576|
    |128             printk                             1048576|
    |---------------------------------------------------------|
    List completed
    root@p86:/var/log/dump# opal-dump-parse -s 1 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file Opal-log.842EA8A.00000001.20160322063051
    root@p86:/var/log/dump# opal-dump-parse -s 2 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file HostBoot-Runtime-log.842EA8A.00000001.20160322063051
    root@p86:/var/log/dump# opal-dump-parse -s 128 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file printk.842EA8A.00000001.20160322063051

There are various other types of dumps and logs that I won't get into here. I'm probably obliged to say that if you're having problems out in the wild, you should probably contact your friendly local IBM Service Representative...

Acknowledgements

Thanks to Stewart Smith for pointing me in the right direction regarding FSP sysdumps and related tools.

sthbrx - a POWER technical blog: The Elegance of the Plaintext Patch

Tue, 2016-03-22 12:53

I've only been working on the Linux kernel for a few months. Before that, I worked with proprietary source control at work and common tools like GitHub at home. The concept of the mailing list seemed obtuse to me. If I noticed a problem with some program, I'd be willing to open an issue on GitHub but not to send an email to a mailing list. Who still uses those, anyway?

Starting out with the kernel meant I had to figure this email thing out. git format-patch and git send-email take most of the pain out of formatting and submitting a patch, which is nice. The patch files generated by format-patch open nicely in Emacs by default, showing all whitespace and letting you pick up any irregularities. send-email means you can send it to yourself or a friend first, finding anything that looks stupid before being exposed to the public.

And then what? You've sent an email. It gets sent to hundreds or thousands of people. Nowhere near that many will read it. Some might miss it due to their mail server going down, or the list flagging your post as spam, or requiring moderation. Some recipients will be bots that archive mail on the list, or publish information about the patch. If you haven't formatted it correctly, someone will let you know quickly. If your patch is important or controversial, you'll have all sorts of responses. If your patch is small or niche, you might not ever hear anything back.

I remember when I sent my first patch. I was talking to a former colleague who didn't understand the patch/mailing list workflow at all. I sent him a link to my patch on a mail archive. I explained it like a pull request - here's my code, you can find the responses. What's missing from a GitHub-esque pull request? We don't know what tests it passed. We don't know if it's been merged yet, or if the maintainer has looked at it. It takes a bit of digging around to find out who's commented on it. If it's part of a series, that's awkward to find out as well. What about revisions of a series? That's another pain point.

Luckily, these problems do have solutions. Patchwork, written by fellow OzLabs member Jeremy Kerr, changes the way we work with patches. Project maintainers rely on Patchwork instances, such as https://patchwork.ozlabs.org, for their day-to-day workflow: tagging reviewers, marking the status of patches, keeping track of tests, acks, reviews and comments in one place. Missing from this picture is support for series and revisions, which is a feature that's being developed by the freedesktop project. You can check out their changes in action here.

So, Patchwork helps patches and email catch up to what GitHub has in terms of ease of information. We're still missing testing and other hooks. What about review? What can we do with email, compared to GitHub and the like?

In my opinion, the biggest feature of email is the ease of review. Just reply inline and you're done. There's inline commenting on GitHub and GitLab, which works well but is a bit tacky: people commenting on the same thing overlap and conflict, and each comment generates a notification (which can be an email, until you turn that off). Plus, since it's email, it's really easy to bring in additional people to the conversation as necessary. If there's a super lengthy technical discussion in the kernel, it might just take Linus to resolve.

There are alternatives to just replying to email, too, such as Gerrit. Gerrit's pretty popular, and has a huge amount of features. I understand why people use it, though I'm not much of a fan. Reason being, it doesn't add to the email workflow, it replaces it. Plaintext email is supported on pretty much any device, with a bunch of different programs. From the goals of Patchwork: "patchwork should supplement mailing lists, not replace them".

Linus Torvalds famously explained why he prefers email over GitHub pull requests here, using this pull request from Ben Herrenschmidt as an example of why git's own pull request format is superior to that of GitHub. Damien Lespiau, who is working on the freedesktop Patchwork fork, outlines on his blog all the issues he has with mailing list workflows and why he thinks mailing lists are a relic of the past. His work on Patchwork has gone a long way to help fix those problems, however I don't think mailing lists are outdated and superseded; I think they are timeless. They are a technology-agnostic, simple and free system that will still be around if GitHub dies or alienates its community.

That said, there's still the case of the missing features. What about automated testing? What about developer feedback? What about making a maintainer's life easier? We've been working on improving these issues, and I'll outline how we're approaching them in a future post.

sthbrx - a POWER technical blog: No Network For You

Mon, 2016-03-21 14:23

In POWER land IPMI is mostly known as the method to access the machine's console and start interacting with Petitboot. However it also has a plethora of other features, handily described in the 600ish page IPMI specification (which you can go read yourself).

One especially relevant feature to Petitboot however is the 'chassis bootdev' command, which you can use to tell Petitboot to ignore any existing boot order, and only consider boot options of the type you specify (eg. 'network', 'disk', or 'setup' to not boot at all). Support for this has been in Petitboot for a while and should work on just about any machine you can get your hands on.
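
For example, to force the next boot to only consider network boot options (note that ipmitool spells this option 'pxe'; the BMC hostname and credentials below are placeholders):

ipmitool -I lanplus -H bmc-hostname -U admin -P admin chassis bootdev pxe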

Network Overrides

Over in OpenPOWER[1] land however, someone took this idea and pushed it further - why not allow the network configuration to be overridden too? This isn't in the IPMI spec, but if you cast your gaze down to page 398 where the spec lays out the entire format of the IPMI request, there is a certain field named "OEM Parameters". This is an optional amount of space set aside for whatever you like, which in this case is going to be data describing an override of the network config.

This allows a user to tell Petitboot over IPMI to either;

  • Disable the network completely,
  • Set a particular interface to use DHCP, or
  • Set a particular interface to use a specific static configuration.

Any of these options will cause any existing network configurations to be ignored.

Building the Request

Since this is an OEM-specific command, your average ipmitool package isn't going to have a nice way of making this request, such as 'chassis bootdev network'. Rather you need to do something like this:

ipmitool -I lanplus -H $yourbmc -U $user -P $pass raw 0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x06 0x04 0xf4 0x52 0x14 0xf3 0x01 0xdf 0x00 0x01 0x0a 0x3d 0xa1 0x42 0x10 0x0a 0x3d 0x2 0x1

Horrific right? In the near future the Petitboot tree will include a helper program to format this request for you, but in the meantime (and for future reference), lets lay out how to put this together:

Specify the "chassis bootdev" command, field 96, data field 1:    0x00 0x08 0x61 0x80
Unique value that Petitboot recognises:                           0x21 0x70 0x62 0x21
Version field (1):                                                0x00 0x01 .. ..
Size of the hardware address (6):                                 .. .. 0x06 ..
Size of the IP address (IPv4/IPv6):                               .. .. .. 0x04
Hardware (MAC) address:                                           0xf4 0x52 0x14 0xf3 0x01 0xdf .. ..
'Ignore flag' and DHCP/Static flag (DHCP is 0):                   .. .. 0x00 0x01
(Below fields only required if setting a static IP)
IP Address:                                                       0x0a 0x3d 0xa1 0x42
Subnet Mask (eg, /16):                                            0x10 .. .. ..
Gateway IP Address:                                               .. 0x0a 0x3d 0x02 0x01
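
Until that helper lands, here's a rough shell sketch of assembling the static-configuration variant of the request from the layout above (the BMC details, MAC, IP, prefix and gateway are all illustrative values):

#!/bin/bash
# Rough sketch of a request builder -- substitute your own values.
BMC=bmc-hostname
IPMI_USER=admin
IPMI_PASS=admin
MAC="f4:52:14:f3:01:df"
IP="10.61.161.66"
PREFIX=16
GW="10.61.2.1"
mac_bytes=$(echo "$MAC" | sed -e 's/^/0x/' -e 's/:/ 0x/g')
ip_bytes=$(echo "$IP" | tr '.' '\n' | xargs printf '0x%02x ')
gw_bytes=$(echo "$GW" | tr '.' '\n' | xargs printf '0x%02x ')
# command + Petitboot cookie + version, then the MAC/IP size fields
req="0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x06 0x04"
# MAC, then the ignore/static flags (0x00 0x01 = use a static config)
req="$req $mac_bytes 0x00 0x01"
# IP address, prefix length and gateway
req="$req $ip_bytes $(printf '0x%02x' $PREFIX) $gw_bytes"
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" raw $req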

Clearing a network override is as simple as making a request empty aside from the header:

0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x00 0x00

You can also read back the request over IPMI with this request:

0x00 0x09 0x61 0x00 0x00

That's it! Ideally this is something you would be scripting rather than bashing out on the keyboard - the main use case at the moment is as a way to force a machine to netboot against a known good source, rather than whatever may be available on its other interfaces.

[1] The reason this is only available on OpenPOWER machines at the moment is that support for the IPMI command itself depends on the BMC firmware, and non-OpenPOWER machines use an FSP which is a different platform.

Chris Smart: Providing git:// (protocol) access to repos using GitLab

Mon, 2016-03-21 11:30

I mirror a bunch of open source projects in a local GitLab instance which works well.

However, by default, GitLab only provides https and ssh access to repositories, which can be a pain for continuous integration (especially if you were to use self-signed certificates).

Fortunately, it's relatively easy to configure your GitLab server to run a git daemon and provide read-only access to anyone on any repos that you choose.

On my CentOS box, I just installed git-daemon and then edited the startup script at /usr/lib/systemd/system/git@.service like so:

[Unit]

Description=Git Repositories Server Daemon

Documentation=man:git-daemon(1)

 

[Service]

User=git

ExecStart=-/usr/libexec/git-core/git-daemon \

--base-path=/var/opt/gitlab/git-data/repositories/ \

--syslog --inetd --verbose

StandardInput=socket

The important part here is the base path /var/opt/gitlab/git-data/repositories/, which is specified at the default location that git repos are stored when using the GitLab omnibus package.

Now start and enable the service:

[root@gitlab ~]# systemctl start git.socket && systemctl enable git.socket

As per the git.service systemd file, you should now have git-daemon listening on port 9418, however you may need to open the port through the firewall:

[root@gitlab ~]# firewall-cmd --permanent --zone=public --add-port=9418/tcp

[root@gitlab ~]# systemctl reload firewalld

Now, to enable git:// access to any given repository, you need to touch a file called git-daemon-export-ok in that repo’s git dir (it should be owned by your gitlab user, which is probably git). For example, a mirror of the Linux kernel:



-sh-4.2$ touch /var/opt/gitlab/git-data/repositories/mirror/linux.git/git-daemon-export-ok

From your local machine, test your git:// access!



[12:15 chris ~]$ git ls-remote git://gitlab/mirror/linux.git |head -1

46e595a17dcf11404f713845ecb5b06b92a94e43 HEAD

Success!

Chris Smart: Mirroring git repositories (to GitLab)

Mon, 2016-03-21 11:30

There are several open source git repos that I mirror in order to provide local speedy access to. Pushing those to a local GitLab server also means people can easily fork them and carry on.

On the GitLab server I have a local posix mrmirror user who also owns a group called mirror in GitLab (this user cannot be called "mirror" as the user and group would conflict in GitLab).

In mrmirror’s home directory there’s a ~/git/mirror directory which stores all the repos that I want to mirror. The mrmirror user also has a cronjob that runs every few hours to pull down any updates and push them to the appropriate project in the GitLab mirror group.

So for example, to mirror Linux, I first create a new project in the GitLab mirror group called linux (this would be accessed at something like https://gitlab/mirror/linux.git).

Then as the mrmirror user on GitLab I run a mirror clone:

[mrmirror@gitlab ~]$ cd ~/git/mirror

[mrmirror@gitlab mirror]$ git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Then the script takes care of future updates and pushes them directly to GitLab via localhost:

#!/bin/bash

 

# Setup proxy for any https remotes

export http_proxy=http://proxy:3128

export https_proxy=http://proxy:3128

 

cd ~/git/mirror

 

for x in $(ls -d *.git) ; do

    pushd ${x}

    git remote prune origin

    git remote update -p

    git push --mirror git@localhost:mirror/${x}

    popd

done

 

echo $(date) > /tmp/git_mirror_update.timestamp

That’s managed by a simple cronjob that the mrmirror user has on the GitLab server:

[mrmirror@gitlab mirror]$ crontab -l

0 */4 * * * /usr/local/bin/git_mirror_update.sh

And that seems to be working really well.

Tridge on UAVs: APM:Plane 3.5.1 released

Mon, 2016-03-21 08:52

The ArduPilot development team is proud to announce the release of version 3.5.1 of APM:Plane. This is a minor release with primarily small changes.



The changes in this release are:

  • update uavcan to new protocol
  • always exit loiter in AUTO towards next waypoint
  • support more multicopter types in quadplane
  • added support for reverse thrust landings
  • added LAND_THR_SLEW parameter
  • added LAND_THEN_NEUTRL parameter
  • fixed reporting of armed state with safety switch
  • added optional arming check for minimum voltage
  • support motor test for quadplanes
  • added QLAND flight mode (quadplane land mode)
  • added TECS_LAND_SRC (land sink rate change)
  • use throttle slew in quadplane transition
  • added PID tuning for quadplane
  • improved text message queueing to ground stations
  • re-organisation of HAL_Linux bus API
  • improved NMEA parsing in GPS driver
  • changed TECS_LAND_SPDWGT default to -1
  • improved autoconfig of uBlox GPS driver
  • support a wider range of Lightware serial Lidars
  • improved non-GPS performance of EKF2
  • allow for indoor flight of quadplanes
  • improved compass fusion in EKF2
  • improved support for Pixracer board
  • improved NavIO2 support
  • added BATT_WATT_MAX parameter



The reverse thrust landing is particularly exciting as that adds a whole new range of possibilities for landing in restricted areas. Many thanks to Tom for the great work on getting this done.



The uavcan change to the new protocol has been a long time coming, and I'd like to thank Holger for both his great work on this and his patience given how long it has taken to be in a release. This adds support for automatic canbus node assignment which makes setup much easier, and also supports the latest versions of the Zubax canbus GPS.



My apologies if your favourite feature didn't make it into this release! There are a lot more changes pending but we needed to call a halt for the release eventually. This release has had a lot of flight testing and I'm confident it will be a great release.



Happy flying!



Lev Lafayette: The constellation is changed, the disposition is the same

Sat, 2016-03-19 22:30

Ars Technica has reported on a relatively small GPU Linux cluster which can crack standard eight-character MS-Windows passwords by brute force in under six hours. There are, of course, reasons and caveats. Firstly, as online servers will typically block repeated password attempts, this system is most effective against offline password hashes, which can then of course be used for online exploits.


Leon Brooks

Sat, 2016-03-19 21:50

“What do you actually need it to do?” That one question can simplify a process so very much.



The outcome as a whole can become simpler, when features which are not necessary for the task at hand are discarded.



The outcome can become cheaper, as less resources are required to perform fewer functions.



The outcome can become untangled from ‘political’ factors such as who might have a vested interest in things happening a certain way, or who might expect to derive consequential benefits of various kinds.



The outcome can arrive sooner, as less needs to be done — in simpler ways — with fewer dependencies — to make it happen.



The final result is likely to be more flexible, as it is less burdened by specific (unnecessary) features — and so by implied limitations — than a poorly-targeted or very generalised solution.



For a very simplistic result, rather than buy a new PC, this 3rd-hand desktop box over here, with this video card plugged into it, plus these two 3rd-hand screens, this mouse, this keyboard, these two hard-disk drives (all free) and this Linux distribution will do absolutely everything required to source (and edit) words and images to reliably make a newsletter at regular intervals.



It will also do other things (flexibility as a kind of a bonus), however by staying true to purpose it does not need expensive hardware, expensive software, a virus scanner, constant maintenance, or any one of a dozen other complex and/or pricey components to continue operating indefinitely.



As another bonus, some computer hardware which may have partially become scrap metal but mostly land-fill, continues to provide utility without any additional input in terms of energy, finance or transport.

Russell Coker: Ethernet Interface Naming With Systemd

Sat, 2016-03-19 13:27

Systemd has a new way of specifying names for Ethernet interfaces as documented in systemd.link(5). The Debian package should keep working with the old 70-persistent-net.rules file, but I had a problem with this that forced me to learn about systemd.link(5).

Below is a little shell script I wrote to convert a basic 70-persistent-net.rules (that only matches on MAC address) to systemd.link files.

#!/bin/bash

RULES=/etc/udev/rules.d/70-persistent-net.rules

for n in $(grep ^SUB $RULES|sed -e s/^.*NAME..// -e s/.$//) ; do

  NAME=/etc/systemd/network/10-$n.link

  LINE=$(grep $n $RULES)

  MAC=$(echo $LINE|sed -e s/^.*address....// -e s/...ATTR.*$//)

  echo "[Match]" > $NAME

  echo "MACAddress=$MAC" >> $NAME

  echo "[Link]" >> $NAME

  echo "Name=$n" >> $NAME

done
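
For example, for a rules entry naming an interface eth0 (the MAC address below is made up), the script writes /etc/systemd/network/10-eth0.link containing:

[Match]
MACAddress=00:11:22:33:44:55

[Link]
Name=eth0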
