Planet Linux Australia
I got a telescope a few years back and though it works well for looking through with human eyes, it's been close to impossible to use with a digital SLR camera mounted at the eyepiece. The problem is that the camera body can't move close enough to the tube to obtain focus on objects further away than about 20 metres. Of course, that's not very useful for a telescope (unless you're into bird-watching).
The camera can be made to focus with the addition of a barlow lens, but the only one of those I have magnifies by a factor of two and adds some blurring, so that's not really an ideal solution either. What I really want is to put the camera at prime focus using only the primary and secondary mirror.
On one of my bi-annual google searches for a solution I stumbled across the suggestion of a Hubble style operation to mount the telescope's primary mirror a bit closer to the secondary mirror, thus making the focal plane move a bit further away from the tube. However, the original post is rather low on detail.
From the images added to the original post, it looked like the poster had used book binding screws known as "chicago screws" or "sex bolts" (not to be confused with Andrew) to replace the thumb screws on the end of the telescope, to give the mirror assembly more inward travel, and longer springs to prevent vibration.
Components
I found chicago screws at a craft store, but they turned out to be a bit short and offered close to no purchase (compared to the thumb screws), which made it nigh impossible to collimate the mirror.
On my quest to find a longer match for the chicago screws, I ended up at a hardware store where one of the clerks actually found some used 5mm × 45mm machine screws that appeared perfect for my needs, but unfortunately he couldn't find any springs to match.
A bit more googling on the tram home, though, led me to the RS Components website, which lists a plethora of springs in varied sizes and strengths, including one that appeared to fit :-) I ordered a set and a few days later I had everything I needed for my Hubble style telescope surgery.
Disassembly
To remove the primary mirror assembly, unscrew the small black screws from the bottom of the telescope (you can stand it on its front for this) and carefully lift the entire assembly out of the tube. Unfortunately, the screws that keep the assembly attached to the backing plate are half obscured by the primary mirror, so that will need to be removed too. You need a small screwdriver to carefully undo the six screws that keep the mirror in place. Carefully lift the mirror off, put it in a safe place and cover it to keep dust off.
Remove the thumb screws and the backing plate, then turn the assembly over. You can now remove the screws that attach the mirror assembly to the backing plate and replace them with your longer ones.
Provided you got the correct springs (mine are 11mm diameter, 56mm long, 1mm piano wire), they should fit perfectly and push the mirror away from the backing plate with a fair bit of force. Add the backing plate and put the thumb screws back on. You may need to compress the springs quite firmly to accomplish this.
When that's done, all that remains is to reinstall the mirror and gently reinsert the whole assembly back into the tube.
Insert error
When I performed this last step I found that it was close to impossible to insert the mirror back into the tube. On closer inspection, some scratches on the mirror assembly implied it was catching on the small screws that keep the end cap in place. These would appear to be just the slightest bit too long. D'oh!
I wasn't about to go off again and find some more screws, so instead I simply reversed these. The screw head is now on the inside of the tube and the hex nut is on the outside, allowing the mirror assembly free travel. If you do this, just be sure to not have any clothing catch on that screw when you're out in the field.
I've not yet had the time to properly try this new setup to see if it makes any difference to focusing, fingers crossed!
Tags: telescope, hubble, mirror, DIY
Just after DrupalCon Sydney at the start of February of this year, I overheard some people wondering why they should sponsor a DrupalCon. Considering the people who attend, there's not a lot of product selling you can do if you're a Drupal shop and unless you're looking to hire delegates as new staff, there's not a lot of direct benefit from having a sponsor booth or table.
Obviously, helping to fund a DrupalCon and the Drupal Association via a sponsorship are good things to be doing for the community, but the payoff isn't necessarily immediately apparent. However, there definitely is one. There just hasn't been a metric for it, let alone a testable metric.
Most Drupal development is done by people in disparate locations around the world. They communicate via irc, email and the issue queue, but don't necessarily meet face to face.
We all know that face to face communication is far more efficient. The use of intonation, facial expressions and body language all make it much harder to misunderstand each other. Additionally, meeting face to face allows for social non-code interactions such as having breakfast, lunch or dinners, parties or just "hanging out", which all help build the team.
With a better team spirit and less misunderstanding between team members, I propose that the more often developers meet face to face, the fewer arguments (hissy-fits, if you will) there are between them. And the more sponsorship, the easier it is to run events, meet face to face and have a good time.
My sponsorship metric then would be a decrease in hissy-fits per core release.
Since it's nice to measure something and have a larger number be better, I think a little bit of elementary algebra would give us the inverse release hissy-fit (RH⁻¹).
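As a sketch (with entirely made-up numbers, of course), the metric is simple enough to compute:

```python
def inverse_release_hissyfit(hissyfits: int, releases: int = 1) -> float:
    """Return RH^-1: core releases per hissy-fit (higher is better)."""
    if hissyfits == 0:
        return float("inf")  # a perfectly harmonious release cycle
    return releases / hissyfits

print(inverse_release_hissyfit(4))  # 0.25 -- a grumpy release
print(inverse_release_hissyfit(2))  # 0.5 -- sponsorship is working!
```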
Now there are only three things left to do: find a common name for the unit, give it a symbol and actually measure it over time :-)
Tags: drupal, drupalcon, sponsorship
Like many people, I love the beautiful images we receive from space telescopes and spacecraft that orbit other worlds in the solar system. Also like many other people, I expect, I never really stop to think how we get those images, just assuming they get sent to earth via some magic space internet.
However, there is no internet (magic or otherwise, yet) in space and getting the data to create these pretty images (and to do science) is rather involved.
Quite by accident I got a chance to learn a lot more about that process.
SocialSpaceWA
Whilst not working, I stumbled across a retweet by the European Space Agency, asking for people to apply to visit their deep space tracking station in New Norcia, Western Australia (NNO) as part of their SocialSpace programme. I didn't really have anything on, qualified to apply by way of holding an ESA member nation passport, and don't live more than 16 hours' flying away, so I thought "why not?".
Why not indeed. I applied a few days before the closing date and only a week later I got the happy news I'd been selected to attend. I immediately grabbed some return tickets to Perth and then started fretting about doing this thing with 15 total strangers. Eep!
Time-lapse of land-fall over southern WA, after crossing the Great Australian Bight.
Of course, fretting was totally unwarranted. ESA had organised a bus to drive us all to New Norcia from Perth, and a bunch of delegates organised to meet up with Daniel (the ESA chef-de-mission) before heading to the bus pick-up. Of course, my fellow delegates were all space geeks too and we all got on really well (especially once Daniel started handing out ESA swag :-)
The trip to New Norcia was in a lovely airconditioned bus, which made coping with the heat wave rather easy.
Introductions
As an ice-breaker, we all shared a group dinner that evening at the New Norcia hotel. After a round of 140 character introductions, we split into groups and each group was joined by an ESA engineer, who talked a little about who they were and the work they did on the site.
After dinner, John Goldsmith gave a talk about astrophotography and the sights of the night sky in preparation for an observing session with some people from the Perth Observatory, who'd driven up with cars full of (rather lovely) telescopes. Sadly I missed the talk because I was volunteered to help out with the telescopes. On the up-side, that resulted in my first TV appearance ever on Channel 10 in Perth.
The seeing was excellent (New Norcia has proper dark skies) so it ended up being a fairly late night.
Unfortunately, that meant the morning wasn't quite as early as I'd hoped it would be. Because of the dark skies, and three hour time difference with home, I had planned to not go back to Perth for the night. Instead, I wanted to stay in New Norcia and then get up early to catch the planetary alignment in action. I ended up seeing it just fine, but it was getting a little bit too light at that stage to easily capture all planets on camera.
Because most delegates elected to stay back in Perth overnight (where the hotels have airco) they wouldn't be back before 10am, which gave me time to have a nice and relaxing early morning at the hotel, with fresh coffee.
Once my partners in crime had arrived, we all moved to the ESA education room at the New Norcia monastery for some enlightening sessions about the ESA Tracking Network (ESTRACK) and NNO by ESA engineers.
ESTRACK
Yves Doat spoke about why the ESTRACK network is needed and what it currently consists of. He showed us highlights of some of the missions they've supported over the past decades, from the Giotto mission past Halley's Comet in 1986 through to the current Rosetta/Philae mission to comet 67P Churyumov-Gerasimenko.
Deep Space Comms
Klaus-Jürgen Schulz dove into the details of deep space communications and paid particular attention to the difficulties of communicating with spacecraft that are close to the sun (an issue for the BepiColombo mission to Mercury, of course!). He finished his presentation by telling us about the future of deep space communications, using light rather than radio to obtain much higher rates of data transmission.
Ground Station Operations
Next, Marc Roubert explained the operational intricacies of running ground stations. Since they are generally located in relatively remote radio-silent areas, getting construction materials and equipment to the site can pose a real problem. Bush fires, sand storms, snow and the occasional leopard (for the Argentinian site) can interfere with operations as well.
Their location can also pose problems for the power supply. The sites use a lot of power to cryo-cool the amplifiers. Fire can cut power lines, so generators are needed.
All delegates became very excited when he said that, due to the cost of power in Australia, NNO was actually going solar. ESA have built a 250kW solar plant on the New Norcia site, which will pay for itself in only 7 years and save about 400 tons of CO2 per year.
They're not yet allowed to feed power back into the grid, because the infrastructure wouldn't be able to cope. But they built the plant to produce only as much power as they need, so there isn't that much to feed back currently anyway.
The trouble with big antennas
Gunther Sessler then gave us the low-down on the new NNO-2 antenna: how it was constructed, and what it can do that the 35m NNO-1 antenna can't, which is mainly to acquire a signal from spacecraft even if they're slightly off-course (which can happen easily if a rocket slightly over- or underperforms at launch).
As it turns out, the 35m NNO-1 antenna has a beam width of 60 millidegrees and to acquire a signal from a spacecraft, it has to be somewhere within that beam. I did the maths on that, and 60 millidegrees equates to a circle with a diameter of only 200m at a distance of 1000km (e.g. a spacecraft on its way to orbit, just clearing the horizon). Now 200m sounds like a lot, but when you realise a spacecraft is doing upwards of 5km/sec at that point, locking on to it becomes a much harder problem!
That's where the wider beam width of the 4.5m NNO-2 antenna comes in. It can see a larger part of the sky, so it can pick up spacecraft that are slightly off-course a lot more easily. And if the spacecraft is even more off-course, the 0.75m antenna has a wider beam width still.
With some smarts, once the 0.75m antenna locks on to a spacecraft, it can be used to centre the 4.5m dish on it. And once the 4.5m antenna is locked, its data can in turn be used to lock the 35m NNO-1 on the craft.
Putting it all together
The final presentation was by Peter Droll, who put it all together and gave us an overview of how ESTRACK was used to send the Lisa Pathfinder mission on its way to the L1 Lagrange point. That was done by boosting its orbit with several engine burns, after each of which the craft's position needed to be known exactly in order to calculate the next burn.
LPF is trialling equipment for detecting gravitational waves in space and should have started science operations today. Fittingly, this presentation was on the morning of the LIGO announcement :-)
Tour
We had a quick lunch after the presentations and then hopped back on the bus to go see the NNO dishes. The Inmarsat Cricket Team had prepared well and gave us a tour of NNO-1, allowing us to stick our heads absolutely everywhere.
The only spanner in the works in terms of social media was that the inside of the dish is really well shielded against radio interference, so all our phones stopped working! Luckily, the Nikon with borrowed fish-eye lens worked fine.
You can see all of my SocialSpaceWA photos on Flickr.
We toured the NNO-1 dish, as well as the generator and battery buildings and the control room. Two lucky souls managed to score the chance to actually operate NNO-1 and I grabbed a bit of video whilst Matt took the dish for a joyride. I am assured that New Norcia doesn't do hayrides like Parkes, and that nobody plays cricket in the dish either (but they do play football!)
Taking NNO-1 for a joyride.
After the tour, VIPs started arriving for the formal inauguration ceremony. After a welcome to country, we heard talks from the WA deputy premier and the European Union ambassador to Australia, praising the virtues of scientific cooperation. I definitely hope there will be more of that in the future, if only to make space infrastructure more readily accessible for visiting! :-)
Speeches over, we all hopped back on the bus to finally go and see the new NNO-2 antenna. It's located a few hundred metres away from the main complex and since we were still enjoying the heat wave, the transport was most welcome. That is, until the smaller of the buses couldn't cope with the rather steep hill and we all had to do the last hundred metres or so on foot.
The sun was setting as we arrived at the NNO-2 site and with the thin crescent moon it made a rather lovely backdrop for the blessing of the new facility by three monks from the New Norcia Monastery, followed by the antenna doing a little dance.
Good luck on your mission, NNO-2!
Image: Vaughan Puddey.
The formal proceedings over, we were all bused back to the monastery where ESA treated us to a delicious dinner as the stars came out. The monks at New Norcia turn out to make a rather decent drop of wine as well. I'm not a fan of beer, but I'm told their ale is pretty good too :-)
Finally it was time to hop on the bus and head back to Perth and after a final farewell drink, all delegates went their separate ways again.
But one thing we did all agree on: if you ever get the chance to do some accidental space tourism, take that chance with both hands and don't let go!
Thank you, ESA, Inmarsat and New Norcia!
Tags: SocialSpaceWA, deep space, space, adventure, ESA
A few weeks ago I noticed a retweet by ESA, asking for expressions of interest from space enthusiasts to attend and social-media (verb) the inauguration of a new antenna at their New Norcia deep space tracking site in Western Australia.
After some um-ing and ah-ing, I decided to apply. After all, when I'm on holiday elsewhere I try to visit observatories and other space related things and am always a bit disappointed when a fence keeps me at a distance.
Last week I got an email with the happy news that I was one of the fifteen lucky people selected to attend!
So, over the next week you'll probably see a lot of space tweets from me with impressive radio hardware, behind the scenes looks at things, and a lot of excited people.
Tags: space, SocialSpaceWA, ESA, deep space, astronomy
At last week's telescope driver training I found out that Melbourne contains a 1 to 1 billion scale model of the solar system. It's an artwork by Cameron Robbins and Christopher Lansell.
The Sun is located at St Kilda marina and the planets are spaced out along the beach and foreshore back towards the city.
Since the weather was lovely today, I thought ... why not? The guide says you can walk from the Sun to Pluto in about an hour and a half, which would make your speed approximately three times the speed of light, or warp 1.44, if you like.
The Sun. It's big.
Mercury, 4.9mm across and still at the car park, 58m from the Sun.
Venus, 1.2cm across on the beach at 108m from the Sun.
Mars is 6.8mm across and at the far end of the beach, 228m from the Sun.
The walk now takes you past the millions of tiny grains of sand that make up the asteroid belt and the beach.
Jupiter is 14cm across and lives 778m from the Sun, near the Sea Baths.
The outer solar system is rather large, so it would be wise to purchase an ice cream at this point.
Saturn (12cm, rings 28cm) sits near St Kilda harbour at 1.4km, nearly twice as far from the Sun as Jupiter was.
Uranus (5cm) is so far from the Sun (2.9km) that it's actually in the next suburb, near the end of Wright Street in Middle Park.
Neptune (4.9cm) is in the next suburb again (Port Melbourne) at 4.9km from the Sun.
No longer a planet, but still included. Pluto is a 2mm pellet on the beach at Garden City (yet another suburb along again) at 5.9km from the Sun.
On this scale, the nearest star (Proxima Centauri, an unassuming red dwarf) is about 40 trillion kilometres away from the sun. Which, on a one to a billion scale, happens to be about the same as once around the Earth.
Proxima Centauri is just on the other side of the Sun from the rest of the solar system, a cool 4.2 light years away.
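The scale-model arithmetic above is easy to sanity-check (a quick sketch; the walk distance and time are taken from the guide):

```python
SCALE = 1_000_000_000        # the model is 1:1 billion

# Sun to Pluto along the foreshore, per the guide.
walk_m = 5900                # 5.9 km at model scale
walk_s = 1.5 * 3600          # about an hour and a half of walking

real_speed = (walk_m / walk_s) * SCALE   # metres/second at full scale
c = 299_792_458                          # speed of light, m/s
print(round(real_speed / c, 1))          # a bit over three times light speed

# Proxima Centauri: ~40 trillion km shrinks to roughly Earth's circumference.
proxima_km = 40e12
print(round(proxima_km / SCALE))         # ~40,000 km, about once around Earth
```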
I trek out to a fairly dark sky site on the odd Friday evening to partake of some amateur astronomy at an observatory in the Dandenong ranges.
A few weeks ago, we decided to have a go at locating what was then a fairly dim object, C/2014 Q2. We found it with binoculars and in a relatively small (10") telescope. I'd just gotten a tripod-mounted motor drive for my DSLR, so of course we decided to have a go at imaging the comet.
I don't really have a great zoom lens, so instead I decided on the 50mm f/1.8 lens. That lets in a lot of light, which is handy when you try to photograph dim objects in a dark night sky and need to find them in the viewfinder :-) Even with its wide angle though, you still need a motor drive to avoid stars turning into lines on any exposure longer than about 10 seconds.
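That 10 second figure matches the common "rule of 500" rule of thumb for untracked astrophotography: divide 500 by the 35mm-equivalent focal length. A minimal sketch (the function name and crop-factor handling are mine, and it's only a rough guide, not a guarantee):

```python
def max_untracked_exposure(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Rule-of-500 estimate of the longest exposure (in seconds) before
    stars visibly trail, for a given focal length and sensor crop factor."""
    return 500.0 / (focal_length_mm * crop_factor)

print(max_untracked_exposure(50))        # 10.0 seconds at 50mm, full frame
print(max_untracked_exposure(50, 1.6))   # 6.25 seconds on an APS-C body
```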
This past weekend it was nice and clear and so I went back to the observatory, where I was able to take another photo of the comet.
And with two photos of an object that moves across the sky fairly fast, it's of course time for an animation! I've annotated the brightest stars and a few open clusters down the bottom of the image.
I used GIMP to process the levels and colour balance for both images, so they approximately matched (the moon was out when the first image was taken on Dec 12) and then to rotate and stretch the second image to align with the first one. It's not perfect, but it'll do I think.
In fact, there are a few multi-pixel size blotches that seem to be present in one image and not the other, as well as some that move by more than the half-pixel or so that image alignment is out by.
Some are likely artefacts and noise, but I can't help but wonder if some are asteroids. I've checked against the ten or so thousand largest and/or most visible asteroids, but none of them are anywhere near.
I suppose I should get a third image in another week or two... to be continued?
Tags: comet, astronomy, photography, processing
A while ago, I was contacted by MobileZap, a reseller of mobile phone accessories and asked if I was interested in reviewing an iPhone zoom lens attachment. Unfortunately the widget only attached to the iPhone 5S - mine's a 5c - so I wasn't able to. I also mostly do wide angle photography (landscapes) so a zoom lens would be sort of wasted on me anyway.
When browsing the website, I did stumble across the olloclip wide-angle/fisheye/macro lens kit, which piqued my interest. When I mentioned this, they sent me one and asked me to write a blog about my experiences using it. Happily, I was about to go on a road trip past some very large holes in the ground where it would come in very handy indeed!
The (to give it its full and proper name) olloclip iPhone 5S / 5 Fisheye, Wide-angle, Macro Lens Kit comes in a fully recyclable plastic and paper package. It includes the phone adapter with lenses, an insert to make the adapter fit iPods and a fabric pouch to keep the lenses free of scratches when not in use. The pouch also doubles as a lens cloth.
First Light: Royal Park
I've been taking an image of Melbourne's CBD once a day (when I'm in the country) from the same spot in Royal Park for close to a year, so I thought I'd start by using the olloclip for the same image:
iPhone 5c standard.
iPhone 5c with olloclip wide angle lens.
iPhone 5c with the olloclip fisheye lens.
Oops, it turns out the lens kit doesn't really fit the iPhone 5c! The adapter is made for the 5s model and the rounded edge on the 5c means it doesn't slide all the way on, so the lens and the camera don't quite align. Mind you, a little bit of image editing to trim this image still results in something useable for blogs and twitter :-)
To give you an idea of the field of view of each of the lens adapters, I've stacked the three images on top of each other at the approximate same size:
Relative sizes of the fields of view of the olloclip wide angle and fish-eye lenses.
Big Things: Road Trip
The main reason I agreed to review this lens kit was to play with it on the road trip, which was through the south eastern USA, from Los Angeles to Austin. Happily, that included a few choice "large hole in the ground" subjects for wide angle photography, as well as a friend with an iPhone 5S, on which the olloclip fit just fine.
Cathedral, Sedona, AZ. iPhone 5S with olloclip fish-eye lens.
Barringer Crater, Flagstaff, AZ. iPhone 5S with olloclip fish-eye lens.
Grand Canyon South Rim, Desert View, AZ. iPhone 5c with olloclip wide angle lens.
As you can see, the olloclip fits just fine on the iPhone 5S - there is no asymmetric distortion like there was in the iPhone 5c fish-eye image.
Small Things: Macro
The wide angle lens consists of two lenses, the top one of which you can unscrew and remove to make a macro lens. I didn't really have anything to take photos of, until a house move left me with a large pile of small change to sort through.
It turns out that some old Australian coins have minting errors or oversights, which make them sought after by collectors. Specifically, some of the 2 cent coins are missing the designer's initials (S.D.).
Here was a lovely way to try out the macro lens. It works fine as a magnifying glass, too!
Australian 2 cent piece with 'SD' initials (just left of the lizard's toe), iPhone 5c with olloclip macro lens.
Australian 2 cent piece without initials, iPhone 5c with olloclip macro lens.
Australian 1917 penny, iPhone 5c with olloclip macro lens.
The macro attachment has come in incredibly handy for close-up images of items to put on eBay. The fact that it doesn't quite fit the iPhone 5c has not been a hindrance, in the way it was for the fish-eye lens.
All in all, I've found the olloclip to be a nifty little attachment and nice to have handy.
I am not affiliated with olloclip or MobileZap. MobileZap provided me with a free olloclip to review.
Tags: Advertorial, photography, iPhone, Olloclip, iPhotography
Did you know that the pi symbol (π) was only introduced in 1706? William Jones (1675-1749) used it in his Synopsis palmariorum matheseos, most likely after the initial letter of the Ancient Greek περιφέρεια (periphéreia), meaning periphery (the line around the circle).
Before then, instead of π, the long Latin phrase “quantitas, in quam cum multiplicetur diameter, provenient circumferentia” had been used, meaning “the quantity which, when the diameter is multiplied by it, gives the circumference”. Clear, but quite a mouthful! In any case they had worked out that there was a mathematical constant there, which I reckon is already pretty cool.
Anyhow, π approximations! So who got close, and when? The answer may surprise you.
- The Babylonians got to π = 25/8 = 3.125 (sun-baked clay tablet found in 1936 at Susa).
- The Rhind Papyrus of Egypt (c. 1650 BCE) contains a solved problem that states that “the area of a circle of nine length units in diameter is the same as the area of a square whose side is eight units of length”, which (skipping some maths that is difficult to represent in a blogpost) comes to π = 256/81 = 3.160 49…
- The Greeks began with π = 3 for every day use (eek!); for more serious stuff they developed other values which were not much better, e.g. π = √10 = 3.162 2…
- In the 2nd century BCE, Hipparchus (c. 147 – after 127 BCE) did some extensive computations and proposed the value π = 377/120 = 3.141 66…, which is not bad at all.
- Archimedes (c. 287 – 212 BCE), regarded as the greatest scientist-mathematician of antiquity, worked out
3.140 8… < π < 3.142 8…
Archimedes got to that value by applying his method for calculation of arc length to determine π. Beginning with regular hexagons – inscribed in, and circumscribed to a circle – and doubling the number of sides four times until he had a pair of regular 96-gons, he calculated the length of the perimeters of the successive polygons.
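Archimedes' doubling scheme is easy to replay. Starting from the perimeters of the circumscribed and inscribed hexagons of a unit circle, each side-doubling step is a harmonic mean followed by a geometric mean (a sketch using the standard form of his iteration):

```python
import math

# Perimeters of the circumscribed (a) and inscribed (b) regular n-gons
# of a unit circle, starting from hexagons.
a = 4 * math.sqrt(3)   # circumscribed hexagon: 6 sides of length 2/sqrt(3)
b = 6.0                # inscribed hexagon: 6 sides of length 1
n = 6

for _ in range(4):     # double the sides four times: 6 -> 12 -> 24 -> 48 -> 96
    a = 2 * a * b / (a + b)   # harmonic mean gives the new circumscribed perimeter
    b = math.sqrt(a * b)      # geometric mean gives the new inscribed perimeter
    n *= 2

# The circumference 2*pi is bracketed by the two 96-gon perimeters.
print(n, b / 2, a / 2)   # bounds agreeing with 3.140 8... < pi < 3.142 8...
```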
After that, no essentially new ideas for the calculation of π were suggested until the development of calculus towards the end of the 17th century! Of course, this had everything to do with the fall of the Western Roman Empire in 476, which triggered the “Dark Ages” in Europe for a thousand years. Mathematics and other sciences progressed very slowly in Europe during that time. But outside of Europe, things were actually moving along.
- Aryabhata, in 499, published π = 3.141 6…;
- Bhaskara (born in 1114) held that π = 3.141 56…;
unfortunately, neither bothered telling us how they got to those figures.
- Liu Hui in 263 CE published the limits
3.141 024… < π < 3.142 704… obtained for a pair of 96-gons (the same method as Archimedes, but could he have been aware of that?); for a 3072-gon, he found π = 3.141 59… which is absolutely brilliant!
- Astronomer Tsu Chung-chih (430-501) suggested π = 355/113 = 3.141 592 9… which is correct to six decimal places, and was not to be bettered in Europe until the 16th century, more than a thousand years later.
- Jamshid Masud al-Kashi in 1424 published Risala al-muhitiyya (“Treatise on the Circumference”) with the results of his calculations on an inscribed polygon of a substantial 3 × 2²⁸ sides, arriving at π = 3.141 592 653 589 793 25… which is correct to sixteen decimal places, thereby surpassing all earlier determinations of π.
Back to Europe (after the end of the Dark Ages):
Ludolph van Ceulen (1540-1610), a fencing master teaching arithmetic, surveying and fortification at the engineering school at Leiden in Holland, got to 20 decimals of π in Van den Circkel (“About the Circle”, 1596), followed by 32 decimals in Arithmetische en Geometrische fondamenten (“Arithmetic and Geometric fundamentals”), published posthumously in 1615, and 35 decimals published in 1621 by his pupil Willebrord Snel,
π = 3.141 592 653 589 793 238 462 643 383 279 502 88
the last three digits of that were engraved as an epitaph on van Ceulen’s tombstone! Van Ceulen’s accomplishment so impressed his contemporaries that π was often called the Ludolphine constant.
In conclusion… van Ceulen’s later accomplishments notwithstanding, it’s important to recognise the very significant achievements of the early Chinese and Persians over a thousand years earlier! (I’d include the Indians but they really should have shown their working)
Many awesome facts sourced from: Mathematics, from the Birth of Numbers (Jan Gulberg).
I'd seen the Reacher movie (it was ok, but not amazing), but was trapped in an airport with a book too close to the end for comfort. So I bought the first Jack Reacher novel. I'm impressed to be honest -- it's well written, readable, and not trying to be Tom Clancy. Where Clancy would get lost in the blow by blow details of how military hardware works, this story is instead about how the main character feels and where their intuition is up to at that point. Sure, he explains that the shotgun pointed at him is dangerous, but doesn't get too lost in the detail.
I enjoyed this book, and it's a well written mystery tale. I'll read more from this series I am sure.
Tags for this post: book lee_child jack_reacher murder mystery
Related posts: Lock In; A Talent for War; Winchester Mystery House Comment Recommend a book
As I write up comments on books I've read in the last little while but left lying around my desk instead of blogging and filing, I find this book sitting there taunting me. I really wanted to like this book; I was quite excited when I bought it. However, it's Cherryh at her worst -- wordy, and it kind of goes nowhere. There's an interesting idea here, but the book needs to be half its current length. I got halfway through and gave up. A disappointment.
Tags for this post: book c_j_cherryh civil_war colonization space_travel
Related posts: Cyteen: The Vindication; The Martian; The Moon Is A Harsh Mistress; Cyteen: The Betrayal; Marsbound; Red Mars Comment Recommend a book
At the end of the previous Spike Milligan war memoir, Spike and his comrades had just been packed up into a ship to start travelling to Africa to engage the Nazis. This book picks up straight from there and follows them from first arrival in Africa to their first experiences of combat. Spike fought in the Battle of Longstop Hill, where his artillery unit played a part in victory. Along the way Spike loses his first close friend to enemy fire.
Spike has an amazing talent for taking a tough subject and making it interesting and light hearted. It's not disrespectful, but shows that there were moments of levity in difficult times. Much like the previous book this one was very readable and I enjoyed it.
Tags for this post: book spike_milligan combat ww2 biography
Related posts: Adolf Hitler: My Part in His Downfall; Monty: His Part in My Victory; Cryptonomicon; The Man in the Rubber Mask; Skimpy; The Crossroad Comment Recommend a book
This is the third book in Spike Milligan's war memoirs (volume 1; volume 2). Combat has now died down in Africa, and no one is ready to be shipped to a new field of combat yet. The troops are therefore getting bored. Suddenly the establishment recalls that Milligan can play the trumpet and the band reforms. Most of this book is spent being shuffled between army staging areas, and performing music. Regardless of little "happening", still an engaging read.
Tags for this post: book spike_milligan combat ww2 biography
Related posts: Adolf Hitler: My Part in His Downfall; Rommel? Gunner Who?; Cryptonomicon; The Man in the Rubber Mask; Skimpy; The Crossroad Comment Recommend a book
Most people experience anxiety in their lives. For some, it is just a bad, passing feeling, but, for many, anxiety rules their day-to-day lives, even to the point of taking over the decisions they make.
Scientists at the University of Pittsburgh have discovered a mechanism for how anxiety may disrupt decision making. In a study published in The Journal of Neuroscience, they report that anxiety disengages a region of the brain called the prefrontal cortex (PFC), which is critical for flexible decision making. By monitoring the activity of neurons in the PFC while anxious rats had to make decisions about how to get a reward, the scientists made two observations. First, anxiety leads to bad decisions when there are conflicting distractors present. Second, bad decisions under anxiety involve numbing of PFC neurons.
Brady O’Brien, KC9TPA, has been working hard on two new FreeDV modes for VHF/UHF radio. To the existing Codec 2 1300 bit/s mode, he has added framing/sync logic and our high performance 4FSK modem. This mode is designed to be “readability 5” at -132dBm, which is 10dB beyond the point where analog FM and 1st generation DV systems stop working.
Brady tested the system by setting up a low power transmitter using a HackRF connected directly to an antenna (tx power about 20mW). A GNU Radio system was used to play FreeDV 2400A and analog FM signals at the same transmit power:
He then went for a drive and found a spot 2.5km away where the signal was weak, but still decodable.
Here is a spectrogram of the two signals, FM/2400A/FM/2400A.
SDR radios are required to reach the performance goals for this mode. FreeDV 2400A is not designed to be run on legacy FM radios, even those with data ports. The RF bandwidth is 5kHz, too wide for SSB radios. This represents a complete departure from “FM” friendly VHF DV modes – DStar/C4FM/DMR which pass through an analog FM modem, and suffer performance degradation because of it. The mode has been designed without compromise in the modem and to explore new ground. It is also completely open source – especially the codec.
However we are also developing FreeDV 2400B – which is designed to run through any FM radio, even a $40 HT. Some test results on that soon.
FreeDV 2400A is available now in the FreeDV API and can be tested using the FreeDV command line utilities, for example:

./freedv_tx 2400A ../../raw/ve9qrp_10s.raw - | ./freedv_rx 2400A - - | play -t raw -r 8000 -s -2 -
It requires a 48kHz interface to the SDR.
Some information on the FreeDV 2400A mode:

Bit Rate: 2400 bit/s
RF Bandwidth: 5 kHz
Suggested Channel Spacing: 6.25 kHz
Modulation: 4FSK with non coherent demodulation
Symbol Rate: 1200 symbols/s
Tone Spacing: 1200 Hz
Frame Period: 40ms
Bits/Frame: 96
Unique Word: 16 bits/frame
Codec 2 1300: 52 bits/frame
Spare Bits: 28 bits/frame
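As a quick consistency check (my own arithmetic, not project code), the parameters above all line up: a 40 ms frame at 2400 bit/s carries exactly 96 bits, which the unique word, Codec 2 and spare bits fully account for, and 4FSK's 2 bits/symbol gives the quoted symbol rate:

```python
# Sanity-check the FreeDV 2400A frame budget quoted above.
bit_rate = 2400            # bit/s
frame_period_ms = 40       # ms

bits_per_frame = bit_rate * frame_period_ms // 1000
print(bits_per_frame)      # 96

# The listed frame contents account for every bit:
unique_word = 16
codec2_1300 = 52
spare = 28
assert unique_word + codec2_1300 + spare == bits_per_frame

# 4FSK carries 2 bits per symbol, which gives the quoted symbol rate:
symbol_rate = bit_rate // 2
print(symbol_rate)         # 1200 symbols/s
```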
The spare bits are currently undefined but could be used for data, routing information, or FEC. It’s early days but this is an important first step – well done Brady!
Since the last post in this series Rick, KA8BMA, has been working steadily on the CAD work for the SM2000 VHF Radio. We now have the SM2000 schematic and 80% of the PCB layout is complete. Rick has taken a modular approach, laying out each building block that I prototyped last year.
Here is the current state of the PCB layout, which is 160mm x 160mm
On the waveform side, Brady, KC9TPA, has done a fine job porting a 4FSK modem to C and developing two new VHF FreeDV modes. Mode A is an “optimal” 4FSK mode that runs at 2400 bit/s, has a 5kHz RF bandwidth and an MDS of -132dBm. Mode B uses Manchester-encoded 2FSK at 2400 bit/s and will run over any FM radio, even $40 HTs.
Brady’s modem is also being used for our high speed balloon telemetry work.
There is plenty of software work (e.g. STM32F4 micro-controller code) to be done for the SM2000. Help wanted!
SM2000 Part 1 – Introducing the project
SM2000 SVN – CAD Files for the project
Recently most of us attended LCA2016. This is one set of reflections on what we heard and what we've thought since. (Hopefully not the only set of reflections that will be posted on this blog either!)
LCA was 2 days of miniconferences plus 3 days of talks. Here, I've picked some of the more interesting talks I attended, and I've written down some thoughts. If you find the thoughts interesting, you can click through and watch the whole talk video, because LCA is awesome like that.

Life is better with Rust's community automation
This talk is probably the one that's had the biggest impact on our team so far. We were really impressed by the community automation that Rust has: the way they can respond to pull requests from new community members in a way that lets them keep their code quality high and be nice to everyone at the same time.
The system that they've developed is fascinating (and seems fantastic). However, their system uses pull requests, while we use mailing lists. Pull requests are easy, because github has good hook support, but how do we link mailing lists to an automatic test system?
I liked this talk, as I have a soft spot for formal methods (as I have a soft spot for maths). It covers applying a bunch of static analysis and some of the less intrusive formal methods (in particular cbmc) to an operating system kernel. They were looking at eChronos rather than Linux, but it's still quite an interesting set of results.
We've also tried to increase our use of static analysis, which has already found a real bug. We're hoping to scale this up, especially the use of sparse and cppcheck, but we're a bit short on developer cycles for it at the moment.

Adventures in OpenPower Firmware
Stewart Smith - another OzLabber - gave this talk about, well, OpenPOWER firmware. This is a large part of our lives in OzLabs, so it's a great way to get a picture of what we do each day. It's also a really good explanation of the open source stack we have: a POWER8 CPU runs open-source from the first cycle.

What Happens When 4096 Cores All Do synchronize_rcu_expedited()?
Paul McKenney is a parallel programming genius - he literally 'wrote the book' (or at least, wrote a book!) on it. His talk is - as always - a brain-stretching look at parallel programming within the RCU subsystem of the Linux kernel. In particular, the tree structure for locking that he presents is really interesting and quite a clever way of scaling what at first seems to be a necessarily global lock.
I'd also really recommend RCU Mutation Testing, from the kernel miniconf, also by Paul.

What I've learned as the kernel docs maintainer
In early February I had the opportunity to go to the NICTA Systems Summer School, where Cyril and I were invited to represent IBM. There were a number of excellent talks across a huge range of systems related subjects, but the one that has stuck with me the most was a talk given by Luis Ceze on a topic called approximate computing. So here, in hopes that you too find it interesting, is a brief run-down on what I learned.
Approximate computing is fundamentally about trading off accuracy for something else - often speed or power consumption. Initially this sounded like a very weird proposition: computers do things like 'running your operating system' and 'reading from and writing to disks': things you need to always be absolutely correct if you want anything vaguely resembling reliability. It turns out that this is actually not as big a roadblock as I had assumed - you can work around it fairly easily.
The model proposed for approximate computing is as follows. You divide your computation up into two classes: 'precise', and 'approximate'. You use 'precise' computations when you need to get exact answers: so for example if you are constructing a JPEG file, you want the JPEG header to be exact. Then you have approximate computations: so for example the contents of your image can be approximate.
For correctness, you have to establish some boundaries: you say that precise data can be used in approximate calculations, but that approximate data isn't allowed to cross back over and pollute precise calculations. This, while intuitively correct, poses some problems in practice: when you want to write out your approximate JPEG data, you need an operation that allows you to 'bless' (or in their terms 'endorse') some approximate data so it can be used in the precise file system operations.
In the talk we were shown an implementation of this model in Java, called EnerJ. EnerJ allows you to label variables with either @Precise if you're dealing with precise data, or @Approx if you're dealing with approximate data. The compiler was modified so that it would do all sorts of weird things when it knew it was dealing with approximate data: for example, drop loop iterations entirely, or do things in entirely non-deterministic ways - all sorts of fun stuff. It turns out this works surprisingly well.
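EnerJ itself is a Java extension with compiler and hardware support, but the type discipline is easy to sketch. Here is a toy Python rendering of the model (the names `Approx`, `endorse` and the simulated perturbation are my own illustration, not EnerJ code): approximate data is wrapped, precise code refuses to consume it, and crossing back requires an explicit endorse.

```python
import random

class Approx:
    """Wrapper marking a value as approximate data."""
    def __init__(self, value):
        self.value = value

def approx_add(a, b):
    # Toy stand-in for approximate hardware: occasionally perturb the result.
    result = a.value + b.value
    if random.random() < 0.1:
        result += random.choice([-1, 1])
    return Approx(result)

def endorse(a):
    """Explicitly 'bless' approximate data for use in precise code."""
    return a.value

def precise_add(a, b):
    # Precise code must not silently consume Approx values.
    assert not isinstance(a, Approx) and not isinstance(b, Approx)
    return a + b

pixel = approx_add(Approx(100), Approx(20))   # approximate image data
header_len = precise_add(2, 2)                # precise JPEG header field
blessed = endorse(pixel)                      # crossing back needs endorse()
```

In real EnerJ the boundary is enforced statically by the type checker rather than at runtime, and the "weirdness" comes from the compiler and hardware, not a random number generator.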
However, the approximate computing really shines when you can bring it all the way down to the hardware level. The first thing they tried was a CPU with both 'approximate' and precise execution engines, but this turned out not to have the power savings hoped for. What seemed to work really well was a model where some approximate calculations could be identified ahead of time, and then replaced with neural networks in hardware. These neural networks approximated the calculations, but did so at significantly lower power levels. This sounded like a really promising concept, and it will be interesting to see if this goes anywhere over the next few years.
There's a lot of work evaluating the quality of the approximate result, both for cases where the set of inputs is known and for cases where it is not. This is largely beyond my understanding, so I'll simply refer you to some of the papers listed on the website.
The final thing covered in the talk was bringing approximate computing into current paradigms by just being willing to accept higher user-visible error rates. For example, they hacked up a network stack to accept packets with invalid checksums. This has had mixed results so far. A question I had (but didn't get around to asking!) would be whether the mathematical properties of checksums (i.e. that they can correct a certain number of bit errors) could be used to correct some of the errors, rather than just accepting/rejecting them blindly. Perhaps by first attempting to correct errors using the checksums, we will be able to fix the simpler errors, reducing the error rate visible to the user.
Overall, I found the NICTA Systems Summer School to be a really interesting experience (and I hope to blog more about it soon). If you're a university student in Australia, or an academic, see if you can make it in 2017!
After the unfortunate crash of our last QuadPlane build with a failed ESC, CanberraUAV wanted to build a new version with more redundancy in the VTOL part of the build. The result is the above OctaQuadPlane, which we successfully flew for the first time yesterday.
When we were first designing a large QuadPlane we did consider an octa design, but rejected it due to what we thought would be a high degree of complexity and unnecessary weight. Between the 8 vertical lift motors, 4 control outputs for fixed wing flight, and extra channels for ignition cut and auxiliary functions like remote engine start, choke and payload control, we worked out we'd need 15 PWM outputs. The Pixhawk only has 14 outputs.
After the failed ESC on the previous plane lost us the aircraft we looked again at an octa design and found that it would not only be possible, but could potentially be simpler for wiring than our last aircraft and be lighter as well, while having more lift.
To start with we looked for motors with a better power to weight ratio than the NTM Prop Drive 50-60 motors we used on the last build. We found them in the t-motor 3520-11 400kV motors. These very well regarded motors have a considerably better power to weight ratio, and back-to-back mounting of them was extremely simple and light with the above clamping arrangement.
To manage the complexity of the wiring we added support in ArduPilot for high speed SBUS output, and used one of these SBUS to PWM adapters embedded in each wing:
that allowed us to have a single wire from a Y-lead on the SBUS output port of the Pixhawk going to each wing. We modified the BRD_SBUS_OUT parameter in ArduPilot to allow setting of the SBUS frame rate, with up to 300Hz SBUS output. For a large QuadPlane 300Hz is plenty for multi-rotor control.
The arm mounting system we used was the same as for the previous plane, with two 20x20x800 CF square section tubes per wing, mounted on a 300x100x1 CF flat plate. The flat plate is glued to the wing with silicone sealant, and the CF tubes are glued to that plate with epoxy.
For ESCs we used the HobbyWing 40A, which has a burst rating of 60A. That is well above our expected hover current of 15A per motor. Combined with the redundancy of the OctaQuad and the better reputation of HobbyWing ESCs we were confident we wouldn't have a repeat of our previous crash.
For batteries we switched to 6S, using one 5Ah battery per wing. That gives us over 4 minutes of hover flying time while keeping the weight well below our last build (thanks largely to the lighter motors and ESCs).
For this initial test flight we had the fuel tank mounted horizontally, unlike the previous vertical arrangement. This was OK as we only filled it a small amount for these test flights. We will be converting to a vertical arrangement again for future flights to reduce the impact of fuel slosh causing CoG oscillations. We may also fill it with fuel anti-slosh foam as we have done on some other aircraft (particularly the helicopters).
Overall the build came out about 1.5kg lighter than the previous build, coming in at 12.5kg dry weight. With a full load of fuel we'd expect to be about 13.5kg.
We did three test flights yesterday. The first was just a quick hover test to confirm everything was working as expected. After that we did the first transition test under manual control, which worked very nicely.
The copter part of the tuning could definitely do with some work, but we thought it was stable enough to do a full auto mission.
It was a short mission, with just two full circuits before landing, but it nicely demonstrated autonomous VTOL takeoff, transition to fixed wing flight, transition back to hover and auto landing. The landing came in within a meter of the desired landing point.
We had set the distance between the transition point and landing point a bit short, and we hadn't included a mission item for the plane to slow down before it transitioned which led to a more abrupt transition than is really good for the airframe. Going from 100km/hr to zero over a distance of 77 meters really puts a lot of stress on the wings. We'll fix that for future missions. It is nice to know it can handle it though.
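To put a rough number on that stress (a back-of-envelope figure of my own, ignoring the climb and aerodynamic details): 100 km/hr is about 27.8 m/s, and stopping that in 77 metres at constant deceleration works out to about 5 m/s², or roughly half a g of longitudinal load:

```python
# Back-of-envelope deceleration for the abrupt transition described above.
v = 100 / 3.6            # 100 km/hr in m/s, about 27.8
d = 77.0                 # stopping distance in metres

a = v ** 2 / (2 * d)     # constant-deceleration kinematics: v^2 = 2*a*d
print(round(a, 1))       # 5.0 m/s^2
print(round(a / 9.81, 2))  # 0.51 g
```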
The transition to hover also caused it to climb a fair bit, which meant it spent more time in VTOL landing than we would have liked, chewing through the battery. That was caused by the aircraft having to pitch up to slow down enough to reach the stopping point in the mission; with the wings still generating a lot of lift at a high angle of attack, it climbed. It didn't help that we had a slow slew rate on the petrol motor, reducing the motor from full throttle to zero over a 3 second period. We chose a slow slew rate to reduce the chance of the engine cutting due to fast throttle changes. We can fix that with a bit of engine tuning and a longer transition distance - probably 130 meters would work better for this aircraft.
The climb on transition also took the aircraft into the sun from the pilot's point of view, which isn't ideal but did result in a quite picturesque video of the plane silhouetted against the sun.
The full flight log is available here if anyone wants to see it. The new features (new SBUS output support and OctaQuad support for quadplanes) will be in the 3.5.1 plane release.
We're not done yet with QuadPlanes. Jack is building another one based on the Valiant, which you can see here next to the Porter:
Combined with the GX9 tradheli build that Greg has put together:
Many thanks to everyone who helped with the build and as always to CMAC for providing a great flying field!
Did you know you can bring down an entire HPC cluster with an old script? Well, this week I had such an experience. As the systems administrator for a seriously aging cluster serving over 800 post-graduate and post-doctoral researchers, "stress" is a normal part of daily life (for future reference: it's probably killing me).
Today Mark and I spent an afternoon working on a 115 kbit/s FSK data system for high altitude balloons. Here is a video of Mark demonstrating the system:
In our previous tests, we needed -75dBm to get jpeg images through the system, much higher than the calculated MDS of -108dBm + NF. So we devised a series of tests – “divide and conquer” – to check various parts of the system in isolation.
First, some SDR noise figure tests. We added to these measurements today by trying a few SDRs with a low noise pre-amplifier. First, we measured the pre-amp NF. This was quoted as 0.6dB; we measured 2dB with the spec-an (which has 1.5dB uncertainty). We then tested combinations of the pre-amp with various SDR gains:

RTLSDR G=20.......: Pin: -100.0 Pout: 15.5 G: 115.5 NF: 19.4 dB
RTLSDR G=50.......: Pin: -100.0 Pout: 38.8 G: 138.8 NF: 5.6 dB
RTLSDR G=50 Preamp: Pin: -100.0 Pout: 59.0 G: 159.0 NF: 2.0 dB
AirSpy G=10.......: Pin: -100.0 Pout: -1.0 G: 99.0 NF: 19.4 dB
AirSpy G=21.......: Pin: -100.0 Pout: 33.9 G: 133.9 NF: 6.7 dB
AirSpy G=21 Preamp: Pin: -100.0 Pout: 53.1 G: 153.1 NF: 2.8 dB
The RTLSDR was an R820T and the pre-amp model number was PSA4-5043.
When the pre-amp was used, it boosted the overall gain of the system, and set the system NF to 2dB. It was great to see system NFs close to our measured pre-amp NF – gave us some confidence that our measurements were OK.
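The measurements also agree with the Friis cascade formula. Taking the pre-amp gain as the difference between the two RTLSDR gain figures (159.0 - 138.8, about 20.2 dB; that inference is mine, not stated in the measurements), the predicted cascade NF lands right on the measured 2dB:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

# Figures from the measurements above.
nf_preamp_db = 2.0              # measured pre-amp NF
nf_sdr_db = 5.6                 # RTLSDR at G=50, no pre-amp
g_preamp_db = 159.0 - 138.8     # pre-amp gain inferred from the two gain figures

# Friis cascade formula: F_total = F1 + (F2 - 1) / G1 (linear terms)
f_total = db_to_lin(nf_preamp_db) + (db_to_lin(nf_sdr_db) - 1) / db_to_lin(g_preamp_db)
nf_total_db = 10 * math.log10(f_total)
print(round(nf_total_db, 1))    # 2.1 dB, right on the measured 2.0 dB
```

With ~20dB of pre-amp gain in front, the SDR's 5.6dB NF contributes almost nothing to the cascade, which is why the system NF collapses to the pre-amp's own figure.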
The SDRs require high RF gain levels to achieve a low NF. We did notice that at high RF gain levels, birdies appeared in the SDR output spectrum, and there were some signs of compression. We will look into SDR gain distribution more in future.
Testing The Modem Off Line
I wanted to test the modem over the RF link, and the best way to do that is with a BER test. Mark configured the Rpi Tx to send fixed frames of known test data. We used a HackRF to down-convert the received FSK test frames and store them to file.
The data rate of the link is Rs = 115.2 kbit/s, which is sampled by the demodulator at Fs = 115200*8 = 921.6 kHz. However, to the modem it just looks like an 8 times oversampled signal. So here is 10 seconds of the modem signal replayed at Fs = 9600 Hz. You can hear each packet start with the sound of its header. When replayed at this low sample rate, the bit rate is 1200 baud and the packets are a few seconds long. At the full sample rate they are just 23ms long.
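The replay arithmetic is easy to verify (my numbers, derived from the rates above): slowing Fs from 921.6 kHz down to 9600 Hz is a factor of 96, which turns 115.2 kbit/s into 1200 baud and stretches each 23 ms packet out to a couple of seconds:

```python
rs = 115200              # link bit rate, bit/s (2FSK, so also the symbol rate)
fs_full = rs * 8         # demodulator sample rate
print(fs_full)           # 921600 Hz

fs_replay = 9600
slowdown = fs_full / fs_replay
print(slowdown)          # 96.0

print(rs / slowdown)         # 1200.0 baud when replayed
print(0.023 * slowdown)      # ~2.2 s per packet at the low sample rate
```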
The non-real time reference Octave demodulator was used to demodulate the FSK sample files, pop up some plots, and measure the BER. Much easier to see what’s going on with the off-line simulation version of the modem. After a few hours of wrangling with fsk_horus.m, I managed to decode Mark’s test frames.
The tx signal we sampled was noise free, so I added calibrated AWGN noise in the Octave simulation to test BER performance of the real-time FSK modulator and transmitter. It was spot on – 1% BER at an Eb/No of 9dB. Great! I do like FSK – real world implementations (like the FSK tx chip) work quite close to theory. This verified that the hardware tx side was all OK in terms of modem performance.
I did however discover a fairly large baud rate error (sample clock offset), of around 1700ppm. This suggested the tx was actually sending at 115,200*1.0017 = 115.396 kbit/s.
Simultaneously – Mark worked out how to measure the baud rate of the RPi serial port. He used the clever idea of sending 0x55 bytes, which when combined with the RS232 start and stop bits, leads to a …010101010… sequence on the RS232 tx line – a square wave at half the baud rate. We connected a frequency counter, and measured the actual baud rate as 115.387 kbit/s, right in line with my numbers above.
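It's easy to see why 0x55 works. With standard 8N1 framing (a start bit of 0, eight data bits sent LSB first, a stop bit of 1), the byte 0x55 = 0b01010101 puts alternating bits on the line, i.e. a square wave at exactly half the baud rate. A quick sketch (my own illustration of the trick):

```python
def uart_frame(byte):
    """8N1 RS232 framing: start bit (0), 8 data bits LSB first, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]

bits = uart_frame(0x55)
print(bits)   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

# Every bit differs from its neighbour, so repeated 0x55 bytes give a
# continuous square wave at half the baud rate.
assert all(a != b for a, b in zip(bits, bits[1:]))
```

Note the stop bit (1) is followed by the next frame's start bit (0), so the alternation continues seamlessly across back-to-back bytes.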
The C version of the demod doesn’t like such a large clock offset (baud rate error), so we tweaked the resampling code to adjust for the error. Could this baud rate error be our “smoking gun” – the reason such a high rx level was required to push images through?
You need to test receivers at very low signal levels. The problem was a relatively high power transmitter nearby generating the tx signal. In our case, the tx power of 25mW (14dBm) is attenuated down to -110dBm, a total of 124dB attenuation. The tx signal tends to radiate around the attenuator (it is a radio transmitter after all). When you reduce the step attenuator 10dB but the signal on the spec-an doesn’t drop 10dB, you have an RF radiation problem.
We solved this by putting the tx in a metal box in another room, then (at Matt, VK5ZM’s suggestion), connecting the tx to the attenuator and rx using coax with an intentionally high loss at UHF. This keeps the high level tx signals well away from the rx.
Mark fired up the real time code again, adjusted for the baud rate error, and suddenly we were getting much better results – good images at -94dBm. He added the pre-amp, and now we could receive images at -106dBm. This is exactly, almost suspiciously close to what we calculated:

MDS = Eb/No + 10*log10(B) - 174 + NF
MDS = 15 + 10*log10(115E3) - 174 + 2
MDS = -106.4 dBm
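The same sum in a few lines of Python (the Eb/No of 15dB, bit rate B of 115 kbit/s and NF of 2dB are taken from the calculation above; -174 dBm/Hz is thermal noise density at room temperature):

```python
import math

eb_no = 15.0             # dB, required Eb/No for usable images
bit_rate = 115e3         # bit/s (B in the formula above)
nf = 2.0                 # dB, measured system noise figure

# MDS = Eb/No + 10*log10(B) - 174 + NF
mds = eb_no + 10 * math.log10(bit_rate) - 174 + nf
print(round(mds, 1))     # -106.4 dBm
```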
It doesn’t usually work out this well ….. guess we are getting our head around this radio caper!
Dropping the signal level to -111dBm meant just a few SSTV packets were making it through. Most of them were bombing on a CRC error. At -111dBm, our Eb/No = 10dB, or a BER of 3E-3. Now our packets are a few thousand bits long, so with BER = 3E-3, we are very likely to cop a few bit errors per packet, which are then discarded due to a CRC fail. So this fits exactly with what was observed at this signal level. Check.
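The packet-loss arithmetic supports that observation. Assuming independent bit errors at BER = 3E-3 and a nominal 2000-bit packet (my stand-in for "a few thousand bits"; the exact length isn't given), the chance of a packet arriving with no bit errors at all is only a fraction of a percent:

```python
ber = 3e-3               # bit error rate at -111dBm, per the text
packet_bits = 2000       # assumed nominal packet length

# Probability of zero bit errors in the whole packet (independent errors).
p_clean = (1 - ber) ** packet_bits
print(f"{p_clean:.4f}")  # ~0.0025, so almost every packet fails its CRC
```

So at this signal level nearly all packets cop at least one bit error and are discarded on CRC, matching what was observed.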
Here is a video of Mark running through the experimental set up:
RTLSDR samples captured using:

rtl_sdr -s 1000000 -f 440000000 -g 20 - | csdr convert_u8_f > rtlsdr_gain20_sig.bin

AirSpy samples captured using:

airspy_rx -f440.0 -r /dev/stdout -a 1 -h 21 | csdr convert_s16_f > airspy_gain21_sig.bin

Commands for complete end-to-end decoding (assuming csdr is installed):

rtl_sdr -s 1000000 -f 441000000 -g 35 - | csdr convert_u8_f | csdr bandpass_fir_fft_cc 0 0.4 0.1 | csdr fractional_decimator_ff 1.08331 | csdr realpart_cf | csdr convert_f_s16 | ./fsk_demod 2X 8 923096 115387 - - | ./drs232 - - | python rx_ssdv.py --partialupdate 8
The python code is here.
Visualising our Packets
Brady, KC9TPA, the author of the C FSK modems, sent us a neat visualization of our test packet:
You can see the 0x55 bytes in the header, the unique word, then a sequence of 0x0 to 0xff, sent LSB first, with the RS232 start (0) and stop (1) bits.
Brady created that image using a screen capture of this:

xxd -g 10 -c 10 -s 2 115.bin | tr '1' '*' | tr '0' ' '