Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Andre Pang: iPhone: Currency Converter

Tue, 2014-07-08 05:26

A small tip for l’iPhone digerati (aw haw haw haw!): if you, like me, like to look up currency rates, forget about all this Web browser and Web application malarkey. Use the Stocks application instead:

  1. go to the Stocks application,
  2. add a new stock,
  3. use a stock name of AUDUSD=X for the Australian dollar to the US dollar, USDGBP=X for the US dollar to the British pound, etc. (Use the Yahoo! finance page if you don’t know the three-letter currency codes.)

Since a picture is worth a thousand words, here you go:

Pretty!

Andre Pang: Neverwinter Nights 2 Hints and Tips

Tue, 2014-07-08 05:26

I’ve added a games section to the site. There’s some small cheat… uhh, tricks, that you may find handy, some information on crafting items, and also a small patch to remove the XP penalty for multiclassing if you think it’s stupid (which I do). Have fun!

Andre Pang: Interview with Marshall Kirk McKusick

Tue, 2014-07-08 05:26

A website named Neat Little Mac Apps is not the kind of place you’d expect to find an interview with an operating systems and filesystems hacker. Nevertheless, one of their podcasts was just that: an interview with UNIX and BSD legend Marshall Kirk McKusick. (He has his own Wikipedia page; he must be famous!)

There’s some great stuff in there, including the origin of the BSD daemon (Pixar, would you believe? Or, well, Lucasarts at the time…), and a great story about how a bug was introduced into the 4.2 BSD version of the pervasive UNIX diff utility. Marshall’s full of energy, and it’s a great interview; it’s a little amusing to see the stark contrast between the interviewer and McKusick, both of whom have rather different definitions of what constitutes an operating system.

Andre Pang: In-ear Headphones Adventures

Tue, 2014-07-08 05:26

I’ve been on a bit of a headphones splurge lately: anyone who knows me will know that I like my music, and I love my gadgets. I like in-ear headphones for travelling since they’re small, unobtrusive, yet can still give excellent sound quality. While nothing beats a nice pair of full-head cans for comfortable listening, they’re a little bit awkward to tote around. Interestingly, I’ve found that my experiences with in-ear headphones have been quite different from most of the reviews that I’ve read on the ‘net about them, so I thought I’d post my experiences here to bore everyone. The executive summary: whatever in-ear phones you buy, and wherever you buy them, be prepared to bear the cost if you don’t like them, or make sure you can return them.

Note: For comparative purposes, my preferred non-in-ear headphones are the Sennheiser PX-100s (USD$60), IMHO one of the best open earphones in its price range. I don’t consider myself to be an audiophile at all: I think I know sound reasonably well, but I’m not fussy as long as the general reproduction is good. A general guideline is that if I state that something lacks bass compared to something else, you’ll be able to hear the difference unless you’re totally deaf, but I’m not fussy about whether the bass is ‘soft and warm’ and all that other audiophile crap. For serious listening/monitoring, I currently use Beyerdynamic DT 531s, and previously used AKG-141s. Neither the Sennheiser PX-100 nor any of the headphones below are any real comparison to these beasts: the 531s and 141s are meant for professional monitoring with serious comfort; it’s a whole different target market.

So, here are some in-ear headphones that I’ve tried over the years:

  • Apple iPod earbuds: Mediocre sound quality, trendy white, comes with iPod. Stays in the ear reasonably well. Not enough bass for my liking, but that’s no surprise considering they come for free. That said, they’re a hell of a lot better than the earbuds that come with most MP3/CD players; the Sony PSP also came with trendy-white headphones, and they sucked a lot more than the iPod earbuds.

  • Griffin Earjams (USD$15): Designed as an attachment to the iPod earbuds. This doobie significantly changes the output of the earbuds so that you now get a good amount of bass, with very little treble. Not good enough: I like bass in my doof-doof music, but I don’t want to hear just the doof-doof-doof, you know? If you’re tempted to get these, I’d suggest trying Koss’s The Plug or Spark Plug in-ear phones instead, which are just as cheap, have just as much (if not more) bass reproduction, and don’t suffer so badly on the treble end.

  • Apple in-ear Headphones (USD$40): I really liked these. About the only thing I don’t like about them is that they don’t fit in my ear well enough: they’re fine for a couple of minutes, but after about 10 minutes I realise that they’re falling out a bit and I have to wiggle them in again. I don’t find this too much of an annoyance, but no doubt other people will. If you plan to do exercise with your in-ear phones, you will definitely want to give these a very good break in and find out whether they come loose during your gymming/running/skiing/breakdancing/whatnot.

    I’m fairly sure the loose fitting is due to a different design they use: the actual earpiece/driver seems to be much bigger than the other in-ear headphones I’ve tried here, and I guess they’re designed to be used further away from the ear rather than being stuck deep into your ear canal, as with the Sony and Shures I tried (and no doubt like the Etymotics in-ear phones too). The nice thing about this design is that you don’t have to stick them really, really deep, which is the main reason I like them. I find the Sony and the Shures want to be too deep for my comfort level; see below.

  • Sony MDR-EX71SL (USD$49): Available in white or black. After much internal debate and agony, I got the white model. (Trendy or street-wise? Trendy or street-wise? Trendy it is!) This was the first model I bought after the Apple in-ear headphones, expecting to have seriously better sound. After all, everyone knows that for audio, more expensive always means better—especially those gold-plated, deoxygenated, shiny-looking, made-from-unobtanium Monster Cables. Anyway, I was fairly disappointed with their sound quality: less bass than Apple’s iPod earbuds. Well, OK, let’s be honest: I’m sure they do sound really good if not far better, but I’ll never find out, because I really don’t want to shove them in my ear that far. Unlike the Koss Plug series, these Sonys don’t come with foam earpieces that can be inserted at a comfortable distance, so I think you have to put the earpiece far deeper into your ear than I did. In the very unlikely scenario that I did insert them as far as they were meant to go, they sounded pretty crap indeed.

  • Koss Spark Plug (USD$20): The sequel to Koss’s suspiciously cheap older model, which was simply named “The Plug”. The Spark Plug’s just as cheap as The Plug, but supposedly better (or at least different). For $20, I don’t think you can complain a whole lot: they give you a pretty good amount of bass (comparable to the Griffin Earjams), but they’re definitely lacking at the high end, though not as much as the Earjams are. I find the foam earpieces that come with these fit my ear pretty well, don’t have to be put in that deep to deliver that doof-doof bass I’m looking for, and they’re also reasonably comfortable. They’re not quite as comfortable as Apple’s in-ear headphones, but still, better than I’d expect for a piece of foam stuck into my ear. If your sound-playing device has an equaliser, you can likely correct for the Spark Plug’s lack of treble response by putting it on an EQ setting that boosts the treble a bit: I found the Spark Plug to sound pretty good on the iPod’s Treble Boost or Electronic EQs. A good, safe (and pretty cheap) buy.

  • Shure E3c/E4c (USD$179 and USD$299): So, I found the Apple Store stocking one of the most acclaimed in-ear phones I’ve ever read about, which would be the Shure E3c. (Note: the ‘c’ in E3c stands for consumer, and basically translates to “trendy iPod white”. The Shure E3 is exactly the same, but is black, and therefore faster.) Additionally, they also had the Shure E4c in stock, which are far more pricey than the E3c (and therefore better). Unfortunately, I put both of these headphones into the same category as the Sony MDR-EX71SLs: I’m sure they’re as absolutely awesome as all the glowing reviews say, but I’m just a wimp and refuse to shove those damn things that far into my brain. Oh yeah, on a not-very-relevant note, Shure changed the packaging between the E3c and the E4c. The E3c had a (gasp) easy-to-open plastic pack, but apparently Shure have since been attending the American School of Plastic Packaging, and used them evil blister packs instead.

  • Griffin EarThumps (USD$20): Released at the start of 2006, I managed to find a pair of EarThumps calling me at my local AppleCentre, and I like them a lot. They’re pretty cheap (they’re sure as hell not in the Shure E3c/E4c price range), and they sound great. The bass production is nearly as good as Koss’s Plug series, and they actually manage to produce pretty decent treble too, so you won’t have to play around with your graphic equaliser too much to make these sound decent. Additionally, they appear to have the larger drivers that Apple used for their in-ear headphones: this means that you don’t have to shove them so far in your ear. In fact, they’re even easier to get into your ear than Apple’s in-ear headphones, and much less annoying to put in than the Koss ‘phones. They’re also reasonably comfortable and stay in your ear quite well, and come in both black and white models if colour-matching your iPod is an important thing to you.

For on-the-road listening, my current preferred earphones are the Griffin EarThumps: small enough to fit next to an iPod nano in one of those evilly cute iPod socks, with good enough sound and a good comfort level. I was previously using Koss’s Spark Plug, and before that, Apple’s in-ear headphones. The EarThumps are a set of earphones that I can heartily recommend to friends, since they’re reasonably cheap, and more importantly, I know that they won’t have to shove some piece of equipment halfway inside their brain to get any decent sound out of it.

Andre Pang: Immutability and Blocks, Lambdas and Closures [UPDATE x2]

Tue, 2014-07-08 05:26

I recently ran into some “interesting” behaviour when using lambda in Python. Perhaps it’s only interesting because I learnt lambdas from a functional language background, where I expect that they work a particular way, and the rest of the world that learnt lambdas through Python, Ruby or JavaScript disagrees. (Shouts out to you Objective-C developers who are discovering the wonders of blocks, too.) Nevertheless, I thought this would be blog-worthy. Here’s some Python code that shows the behaviour that I found on Stack Overflow:
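
Roughly, it went something like the following; this is a minimal sketch of the kind of code in question (assuming Python 3’s print function), not the exact listing from the Stack Overflow question:

    # Build a list of functions in a loop, then call them afterwards.
    functions = []
    for m in ["do", "re", "mi"]:
        functions.append(lambda: print(m))   # each lambda closes over m

    for f in functions:
        f()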

Since I succumb to reading source code in blog posts by interpreting them as “blah” [1], a high-level overview of what that code does is:

  1. iterate over a list of strings,
  2. create a new list of functions that prints out the strings, and then
  3. call those functions, which prints the strings.

Simple, eh? Prints “do”, then “re”, then “mi”, eh? Wrong. It prints out “mi”, then “mi”, then “mi”. Ka-what?

(I’d like to stress that this isn’t a theoretical example. I hit this problem in production code, and boy, it was lots of fun to debug. I hit the solution right away thanks to the wonders of Google and Stack Overflow, but it took me a long time to figure out that something was going wrong at that particular point in the code, and not somewhere else in my logic.)

The second answer to the Stack Overflow question is the clearest exposition of the problem, with a rather clever solution too. I won’t repeat it here since you all know how to follow links. However, while that answer explains the problem, there’s a deeper issue. The inconceivable Manuel Chakravarty provides a far more insightful answer when I emailed him to express my surprise at Python’s lambda semantics:

This is a very awkward semantics for lambdas. It is also probably almost impossible to have a reasonable semantics for lambdas in a language, such as Python.

The behaviour that the person on SO, and I guess you, found surprising is that the contents of the free variables of the lambda’s body could change between the point in time where the closure for the lambda was created and when that closure was finally executed. The obvious solution is to put a copy of the value of the variable (instead of a pointer to the original variable) into the closure.

But how about a lambda where a free variable refers to a 100MB object graph? Do you want that to be deep copied by default? If not, you can get the same problem again.

So, the real issue here is the interaction between mutable storage and closures. Sometimes you want the free variables to be copied (so you get their value at closure creation time) and sometimes you don’t want them copied (so you get their value at closure execution time or simply because the value is big and you don’t want to copy it).

And, indeed, since I love being categorised as a massive Apple fanboy, I found the same surprising behaviour with Apple’s blocks semantics in C, too:

You can see the Gist page for this sample code to see how to work around the problem in Objective-C (basically: copy the block), and also to see what it’d look like in Haskell (with the correct behaviour).

In Python, the Stack Overflow solution that I linked to has an extremely concise way of giving the programmer the option to either copy the value or simply maintain a reference to it, and the syntax is clear enough—once you understand what on Earth the problem is, that is. I don’t understand Ruby or JavaScript well enough to comment on how they’d capture the immediate value for lambdas or whether they considered this design problem. C++0x will, unsurprisingly, give programmers full control over lambda behaviour that will no doubt confuse the hell out of people. (See the C++0x language draft, section 5.1.2 on page 91.)
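
For the curious, the usual Python idiom for opting into the “copy the value” behaviour is the default-argument trick; here’s a minimal sketch (which may or may not be exactly what the linked answer does):

    # Default arguments are evaluated when the lambda is created, so each
    # lambda gets its own copy of m's value at that point in the loop.
    functions = []
    for m in ["do", "re", "mi"]:
        functions.append(lambda m=m: print(m))

    for f in functions:
        f()   # prints "do", "re", "mi"

Drop the default argument and you’re back to a plain reference, so the same syntax gives you both options.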

In his usual incredibly didactic manner, Manuel then went on to explain something else insightful:

I believe there is a deeper issue here. Copying features of FP languages is the hip thing in language design these days. That’s fine, but many of the high-powered FP language features derive their convenience from being unspecific, or at least unconventional, about the execution time of a piece of code. Lambdas delay code execution, laziness provides demand-dependent code execution plus memoisation, continuations capture closures including their environment (ie, the stack), etc. Another instance of that problem was highlighted by Joe Duffy in his STM retrospective.

I would say, mutability and flexible control flow are fundamentally at odds in language design.

Indeed, I’ve been doing some language exploration again lately as the lack of static typing in Python is really beginning to bug me, and almost all the modern languages that attempt to pull functional programming concepts into object-oriented land seem like a complete Frankenstein, partially due to mutability. Language designers, please, this is 2011: multicore computing is the norm now, whether we like it or not. If you’re going to make an imperative language—and that includes all your OO languages—I’ll paraphrase Tim Sweeney: in a concurrent world, mutable is the wrong default! I’d love a C++ or Objective-C where all variables are const by default.

One take-away point from all this is to try to keep your language semantics simple. I love Dan Ingalls’s quote from Design Principles Behind Smalltalk: “if a system is to serve the creative spirit, it must be entirely comprehensible to a single individual”. I love Objective-C partially because its message-passing semantics are straightforward, and its runtime has an amazingly compact API and implementation considering how powerful it is. I’ve been using Python for a while now, and I still don’t really know the truly nitty-gritty details about subtle object behaviours (e.g. class variables, multiple inheritance). And I mostly agree with Guido’s assertion that Python should not have included lambda or reduce, given what Python’s goals are. After discovering this quirk about them, I’m still using the lambda in production code because the code savings do justify the complexity, but you bet your ass there’s a big comment there saying “warning, pretentious code trickery be here!”

[1] See point 13 of Knuth et al.’s Mathematical Writing report.

UPDATE: There’s a lot more subtlety at play here than I first realised, and a couple of statements I’ve made above are incorrect. Please see the comments if you want to really figure out what’s going on: I’d summarise the issues, but the interaction between various language semantics are extremely subtle and I fear I’d simply get it wrong again. Thank you to all the commenters for both correcting me and adding a lot of value to this post. (I like this Internet thing! Other people do my work for me!)

Update #2

I’ve been overwhelmed by the comments, in both the workload sense and in the pleasantly-surprised-that-this-provoked-some-discussion sense. Boy, did I get skooled in a couple of areas. I’ve had a couple of requests to try to summarise the issues here, so I’ll do my best to do so.

Retrospective: Python

It’s clear that my misunderstanding of Python’s scoping/namespace rules is the underlying cause of the problem: in Python, variables declared in for/while/if statements will be declared in the compound block’s existing scope, and not create a new scope. So in my example above, using a lambda inside the for loop creates a closure that references the variable m, but m’s value has changed by the end of the for loop to “mi”, which is why it prints “mi, mi, mi”. I’d prefer to link to the official Python documentation about this here rather than writing my own words that may be incorrect, but I can’t actually find anything in the official documentation that authoritatively defines this. I can find a lot of blog posts warning about it—just Google for “Python for while if scoping” to see a few—and I’ve perused the entire chapter on Python’s compound statements, but I just can’t find it. Please let me know in the comments if you do find a link, in which case I’ll retract half this paragraph and stand corrected, and also a little shorter.
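
To make the scoping point concrete, here’s a tiny sketch of the behaviour I’m describing (nothing more than an illustration):

    # for and if blocks don't introduce a new scope in Python: names bound
    # inside them live on in the enclosing scope.
    for m in ["do", "re", "mi"]:
        pass
    print(m)   # prints "mi": m is still in scope after the loop

    if True:
        x = 42
    print(x)   # prints 42: same story for if blocks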

I stand by my assertion that Python’s for/while/if scoping is slightly surprising, and for some particular scenarios—like this—it can cause some problems that are very difficult to debug. You may call me a dumbass for bringing assumptions about one language to another, and I will accept my dumbassery award. I will happily admit that this semantics has advantages, such as being able to access the last value assigned in a for loop, or not requiring definitions of variables before executing an if statement that assigns to those variables and then using them later in the same scope. All language design decisions have advantages and disadvantages, and I respect Python’s choice here. However, I’ve been using Python for a few years, consider myself to be at least a somewhat competent programmer, and only just found out about this behaviour. I’m surprised 90% of my code actually works as intended given these semantics. In my defence, this behaviour was not mentioned at all in the excellent Python tutorials, and, as mentioned above, I can’t find a reference for it in the official Python documentation. I’d expect that this behaviour is enough of a difference vs other languages to at least be mentioned. You may disagree with me and regard this as a minor issue that only shows up when you do crazy foo like use lambda inside a for loop, in which case I’ll shrug my shoulders and go drink another beer.

I’d be interested to see if anyone can come up with an equivalent for the “Closures and lexical closures” example at http://c2.com/cgi/wiki?ScopeAndClosures, given another Python scoping rule that assignment to a variable automatically makes it a local variable. (Thus, the necessity for Python’s global keyword.) I’m guessing that you can create the createAdder closure example there with Python’s lambdas, but my brain is pretty bugged out today so I can’t find an equivalent for it right now. You can simply write a callable class to do that and instantiate an object, of course, which I do think is about 1000x clearer. There’s no point using closures when the culture understands objects a ton better, and the resulting code is more maintainable.
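
For what it’s worth, here’s my guess at a Python equivalent of the createAdder example, along with the callable-class version I mentioned; the names just follow the c2.com page, so treat this as a sketch rather than anything definitive:

    # createAdder returns a closure that captures amount.
    def createAdder(amount):
        return lambda x: x + amount

    add_two = createAdder(2)
    print(add_two(3))   # 5

    # The callable-class version, which I find clearer anyway.
    class Adder:
        def __init__(self, amount):
            self.amount = amount

        def __call__(self, x):
            return x + self.amount

    add_three = Adder(3)
    print(add_three(4))   # 7

This one behaves as you’d expect precisely because nothing ever reassigns amount after the closure is created.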

Python summary: understand how scoping in for/while/if blocks works, otherwise you’ll run into problems that can cost you hours, and get skooled publicly on the Internet for all your comrades to laugh at. Even with all the language design decisions that I consider weird, I still respect and like Python, and I feel that Guido’s answer to the stuff I was attempting would be “don’t do that”. Writing a callable class in Python is far less tricky than using closures, because a billion times more people understand their semantics. It’s always a design question of whether the extra trickiness is more maintainable or not.

Retrospective: Blocks in C

My C code with blocks failed for a completely different reason unrelated to the Python version, and this was a total beginner’s error with blocks, for which I’m slightly embarrassed. The block was being stack-allocated, so upon exit of the for loop that assigns the function list, the pointers to the blocks are effectively invalid. I was a little unlucky that the program didn’t crash. The correct solution is to perform a Block_copy, in which case things work as expected.

Retrospective: Closures

Not all closures are the same; or, rather, closures are closures, but their semantics can differ from language to language due to many different language design decisions—such as how one chooses to define the lexical environment. Wikipedia’s article on closures has an excellent section on differences in closure semantics.

Retrospective: Mutability

I stand by all my assertions about mutability. This is where the Haskell tribe will nod their collective heads, and all the anti-Haskell tribes think I’m an idiot. Look, I use a lot of languages, and I love and hate many things about each of them, Haskell included. I fought against Haskell for years and hated it until I finally realised that one of its massive benefits is that things bloody well work an unbelievable amount of the time once your code compiles. Don’t underestimate how much of a revelation this is, because that’s the point where the language’s beauty, elegance and crazy type system fade into the background and, for the first time, you see one gigantic pragmatic advantage of Haskell.

One of the things that Haskell does to achieve this is its severe restriction on mutability. Apart from the lovely checkbox reason that you can write concurrent-safe algorithms with far less fear, I truly believe that this makes for generally more maintainable code. You can read code and think once about what value a variable holds, rather than keep it in the back of your mind all the time. The human mind is better at keeping track of multiple names, rather than a single name with different states.

The interaction of state and control flow is perhaps the most complex thing to reason about in programming—think concurrency, re-entrancy, disruptive control flow such as longjmp, exceptions, co-routines—and mutability complicates that by an order of magnitude. The subtle differences in behaviour between all the languages discussed in the comments show that “well-understood” concepts such as lexical scoping, for loops and closures can produce a result that many people still don’t expect; at least for this simple example, these issues would have been avoided altogether if mutability were disallowed. Of course mutability has its place. I’m just advocating that we should restrict it where possible, and at least a smattering of other languages—and hopefully everyone who has to deal with thread-safe code—agrees with me.

Closing

I’d truly like to thank everyone who added their voice and spent the time to comment on this post. It’s been highly enlightening, humbling, and has renewed my interest in discussing programming languages again after a long time away from it. And hey, I’m blogging again. (Though perhaps after this post, you may not think that two of those things are good things.) It’s always nice when you learn something new, which I wouldn’t have if not for the excellent peer review. Science: it works, bitches!

Andre Pang: Google Talk and Jabber vs the Rest

Tue, 2014-07-08 05:26

I have a lot of friends on the MSN Messenger chat network that I quite enjoy talking to, but unfortunately my favourite instant messaging program — iChat on Mac OS X — can’t talk to MSN. (Linux people, keep reading, I promise this post will get more relevant to you despite my reference to iChat.) iChat can, however, talk to Jabber servers. I’ve known for a while that many Jabber servers have the capability to bridge networks, so that you can talk to people on all the other chatting networks (ICQ, Yahoo!, AIM, MSN, and Google Talk) by simply logging into a single Jabber server. However, I never bothered configuring iChat to do this until today.

I’m very happy to say that one of the first guides I found on Google about how to set up iChat to talk to MSN was on the Jabber Australia site, which is also one of the most professional Web sites I’ve seen for a free, open service. The process was quite painless and nearly completely Web-based: the Jabber Australia site imports all your current MSN contacts into your new Jabber contacts list, so you don’t have to re-type them all in. The Jabber<->MSN bridge even sends across all your buddies’ icons, so I can see iChat proudly displaying pictures of all my mates. Very schmick.

Step the second: I also have friends on Google Talk, and since Google opened up their servers to Jabber federation in January 2006, that means I also get to chat with all my friends who have Gmail accounts and never bother logging into the IM networks. (Muahaha, and they thought they could avoid me!) Now I’ve finally gotten into the 2000 era and have one chat client that can talk to any of MSN, ICQ, Yahoo!, Google Talk, and AIM; woohoo, clever me.

However, the story’s much bigger than just simple chat network interoperability. Google’s move into the IM market by unleashing Google Talk might have seemed rather underwhelming when it was first announced: after all, it was just like Skype… but Google voice chat was Windows-only when Skype is Windows/Mac/Linux, and oh yeah, it also had, like, none of Skype’s user base. However, Google Talk has one massive weapon behind it: open standards. In the past few months, Google has done two very significant things:

  • They’ve opened up their servers so that they can interoperate with other Jabber servers.
  • They’ve published their voice-call protocol as an open standard, and even provided third-party developers with a library (libjingle) that can be integrated into any IM client.

The second point is huge: in one move, Google has brought voice capability to the entire Jabber federation chat network. (And, if you haven’t used Google Talk, the voice quality is damn good: better than Skype, and possibly on par or better than iChat AV.) The implication of this is that there’s going to be a big split in the short-term between the official ICQ/Yahoo!/AIM/MSN clients and everyone else (i.e. Trillian, Gaim, AdiumX, etc). The official clients will, of course, only work with their own network since they want to lock you in, but every other IM client that doesn’t currently support voice chatting — which means everybody except for Skype and iChat AV — is very, very likely to be putting Google’s voice protocols into their own chat clients. Look a couple of years ahead, and I think you’ll find that every IM chat client is going to have voice support, and that they’ll be divided into two camps: the ones that support Google’s voice protocol because it’s an open standard, and everybody else.

The thing is, right now, that “everybody else” is really only one other group: Skype. There’s also iChat AV, but that’s small fry compared to Skype, and since Apple piggybacks off the AIM network right now, they don’t have a large interest in locking customers into one particular network. (It’d be relatively easy for Apple to migrate all their .Mac accounts over to a Jabber-federated network just by throwing a couple of Jabber servers up for their .Mac users and publishing a new version of iChat that talked to that.) This means that it’s more-or-less going to be Skype vs Google Talk in the coming years, and while Skype has absolutely huge mindshare right now, I don’t think they have a hope of holding out, because they’re the only damn network right now that absolutely requires you to use their own client. The one killer advantage that Skype has compared to Google Talk is that you can use Skype call-out and call-in to do phone calls, but once Google gets SIP support into Jingle, Google Talk will have that capability as well. Unless Skype do something radical, they’re going to be extinct in a few years as developers start pushing Jingle support into every single IM client.

Heh, not a bad situation at all: in one move, Google’s not only guaranteed some measure of success for themselves in the IM market, but they’ve also made the world a slightly better place by giving users client choice and software choice thanks to open standards. One voice protocol to rule them all, and in the darkness bind them…

Andre Pang: Thoughts to Kathy Sierra

Tue, 2014-07-08 05:26

For those of you who were fortunate enough to see the magnificent Kathy Sierra keynote at Linux.conf.au this year but don’t read her blog, she’s received death threats and sex threats from some anonymous bloggers and commenters. It was serious enough that she cancelled a presentation at an upcoming conference, and the police have been informed.

Wikipedia has some updated information on her harassment. Dave Winer, in a remarkably objective post, reckons it’s just a bunch of trolls and that those kind of death threats are nothing new. I think it’s a little too early to tell yet exactly what the hell is going on, but even if it is “just some trolls”, it’s still outrageous behaviour. Be sure to also read her update on the situation if you’re checking out the other links.

Lesson learned: don’t start a Web site that encourages abusive behaviour unless you’re prepared to deal with the consequences. In fact, just don’t start a Web site that encourages abusive behaviour at all. As Kathy herself says, angry and negative people can be bad for you. I wonder whether it was that article that triggered off some power-hungry kid’s frontal lobe.

Godspeed, Kathy. The world needs more people like you. Hopefully whoever made those comments will be punished—and redeem themselves.

Andre Pang: Gmail Losing Emails?

Tue, 2014-07-08 05:26

Like a lot of other folks, I’d switched my primary email account over to Gmail. Their Web interface is great, and Gmail’s spam filtering is possibly the best I’d ever seen. However, there was one rather small problem, where by “rather small problem” I mean “kinda huge problem”: I was losing emails. As in, I know the person had sent me emails, and I never received them. Yep, I checked the spam folder. The emails never showed up [1].

I was willing to forgive this once or twice, seeing as how complicated and fragile all these SMTP shenanigans are. (Seriously, SMTP has to contend with FTP for the Most Stupid Protocol Ever Award.) However, after Gmail lost emails three or four times and I received the emails successfully at my other, non-Gmail accounts, I couldn’t ignore the problem any longer. The last straw was when one of my friends forwarded me an email three times from another Gmail account and none of the mails came through.

I’ve since switched to Pobox.com and have been a happy chappy ever since. Pobox’s spam filtering isn’t quite as good as Gmail’s, but it’s good enough, and their once-per-day spam report where you can simply click on a false positive spam to whitelist and retrieve it is just brilliant. That feature alone is worth the $20/year. If they had an email forwarding address where I could send them the rest of my spams to improve their overall spam training, Pobox would be perfect. If you’re looking for a powerful, reliable email forwarding service, I can recommend Pobox without hesitation.

So, here’s a question: has anybody else out there lost emails with Gmail? Surely it must’ve happened to somebody besides me. (Oh, and if you read this, work for Google and would like to figure out what’s going on, drop me a mail: I have the Message-IDs of at least a couple of the emails that were lost. Maybe we can all work this out.)

[1] The first time I experienced any email lossage with Gmail was, very unfortunately, a business-related mail: one of our RapidWeaver customers had a Gmail address, and despite me sending two or three emails to him, he claimed to have never received them. Of course, said customer thought that our customer support was rather lacking when he never received an email within a month, despite me sending one off within 14 hours of seeing the problem. A small drama ensued, threats were being made to post about our lack of customer care to the front page of Digg, etc etc. Thankfully everything was sorted out at the very last minute and unhappy people were made happy again, but geez, it’s a nice reminder of how things can go wrong very fast when communication channels break down.

Andre Pang: git-svn & svn:externals

Tue, 2014-07-08 05:26

I’ve written before about git-svn and why I use it, but a major stumbling block with git-svn has been a lack of support for svn:externals. If your project’s small and you have full control over the repository, you may be fortunate enough to not have any svn:externals definitions, or perhaps you can restructure your repository so you don’t need them anymore and live in git and Subversion interoperability bliss.

However, many projects absolutely require svn:externals, and once you start having common libraries and frameworks that are shared amongst multiple projects, it becomes very difficult to avoid svn:externals. What to do for the git-svn user?

If you Google around, it’s easy enough to find solutions out there, such as git-me-up, step-by-step tutorials, explanations about using git submodules, and an overview of all the different ways you can integrate the two things nicely. However, I didn’t like any of those solutions: either they required too much effort, were too fragile and could break easily if you did something wrong with your git configuration, or were simply too complex for such a seemingly simple problem. (Ah, I do like dismissing entire classes of solutions by hand-waving them away as over-engineering.)

So, in the great spirit of scratching your own itch, here’s my own damn solution:

git-svn-clone-externals

This is a very simple shell script to make git-svn clone your svn:externals definitions. Place the script in a directory where you have one or more svn:externals definitions, run it, and it will:

  • git svn clone each external into a .git_externals/ directory.
  • symlink the cloned repository in .git_externals/ to the proper directory name.
  • add the symlink and .git_externals/ to the .git/info/exclude file, so that you’re not pestered about it when performing a git status.

That’s pretty much about it. Low-tech and cheap and cheery, but I couldn’t find anything else like it after extensive Googling, so hopefully some other people out there with low-tech minds like mine will find this useful.

You could certainly make the script a lot more complex and do things such as share svn:externals repositories between different git repositories, traverse through the entire git repository to detect svn:externals definitions instead of having to place the script in the correct directory, etc… but this works, it’s simple, and it does just the one thing, unlike a lot of other git/svn integration scripts that I’ve found. I absolutely do welcome those features, but I figured I’d push this out since it works for me and is probably useful for others.

The source is on github.com at http://github.com/andrep/git-svn-clone-externals/tree/master. Have fun subverting your Subversion overlords!

Andre Pang: git-svn, and thoughts on Subversion

Tue, 2014-07-08 05:26

We use Subversion for our revision control system, and it’s great. It’s certainly not the most advanced thing out there, but it has perhaps the best client support on every platform out there, and when you need to work with non-coders on Windows, Linux and Mac OS X, there are a lot of better things to do than explain how to use the command-line to people who’ve never heard of it before.

However, I also really need to work offline. My usual modus operandi is working at a café without Internet access (thanks for still being in the stone-age when it comes to data access, Australia), which pretty much rules out using Subversion, because I can’t do commits when I do the majority of my work. So, I used svk for quite a long time, and everything was good.

Then, about a month ago, I got annoyed with svk screwing up relatively simple pushes and pulls for the last time. svk seems to work fine if you only track one branch and all you ever do is use its capabilities to commit offline, but the moment you start doing anything vaguely complicated like merges, or track both the trunk and a branch or two, it’ll explode. Workmates generally don’t like it when they see 20 commits arrive the next morning that totally FUBAR your repository.

So, I started using git-svn instead. People who know me will understand that I have a hatred of crap user interfaces, and I have a special hatred of UIs that are different “just because”, which applies to git rather well. I absolutely refused to use tla for that reason—which thankfully never seems to be mentioned in distributed revision control circles anymore—and I stayed away from git for a long time because of its refusal to use conventional revision control terminology. git-svn in particular suffered much more from (ab)using different terminology than git, because you were intermixing Subversion jargon with git jargon. Sorry, you use checkout to revert a commit? And checkout also switches between branches? revert is like a merge? WTF? The five or ten tutorials that I found on the ‘net helped quite a lot, but since a lot of them told me to do things in different ways and I didn’t know what the subtle differences between the commands were, I went back to tolerating svk until it screwed up a commit for the very last time. I also tried really hard to use bzr-svn since I really like Bazaar (and the guys behind Bazaar), but it was clear that git-svn was stable and ready to use right now, whereas bzr-svn still had some very rough edges around it and wasn’t quite ready for production yet.

However, now that I’ve got my head wrapped around git’s jargon, I’m very happy to say that it was well worth the time for my brain to learn it. Linus elevated himself from “bloody good” to “true genius” in my eyes for writing that thing in a week, and I now have a very happy workflow using git to integrate with svn.

So, just to pollute the Intertubes more, here’s my own git-svn cheatsheet. I don’t know if this is the most correct way to do things (is there any “correct” way to do things in git?), but it certainly works for me:

  • initial repository import (svk sync): git-svn init https://foo.com/svn -T trunk -b branches -t tags, then git checkout -b work trunk
  • pulling from upstream (svk pull): git-svn rebase
  • pulling from upstream when there are new branches/tags/etc. added: git-svn fetch
  • switching between branches: git checkout branchname
  • svk/svn diff -c NNNN: git diff NNNN^!
  • committing a change: git add, then git commit
  • reverting a change: git checkout path
  • pushing changes upstream (svk push): git-svn dcommit
  • importing svn:ignore: (echo; git-svn show-ignore) >> .git/info/exclude
  • uncommit: git reset <SHA1 to reset to>

Drop me an email if you have suggestions to improve those. About the only thing that I miss from svk was the great feature of being able to delete a filename from the commit message, which would unstage it from the commit. That was tremendously useful; it meant that you could git commit -a all your changes except one little file, which was simply deleting one line. It’s much easier than tediously trying to git add thirteen files in different directories just so you can omit one file.

One tip for git: if your repository has top-level trunk/branches/tags directories, like this:

trunk/
    foo/
    bar/
branches/
    foo-experimental/
    bar-experimental/
tags/
    foo-1.0/
    bar-0.5/

That layout makes switching between the trunk and a branch of a project quite annoying, because while you can “switch” to (checkout) branches/foo-experimental/, git won’t let you checkout trunk/foo; it’ll only let you checkout trunk. This isn’t a big problem, but it does mean that your overall directory structure keeps changing because switching to trunk means that you have foo/ and bar/ directories, while switching to a foo-experimental or bar-experimental omits those directories. This ruins your git excludes and tends to cause general confusion with old files being left behind when switching branches.

Since many of us will only want to track one particular project in a Subversion repository rather than an entire tree (i.e. switch between trunk/foo and branches/foo-experimental), change your .git/config file from this:

[svn-remote "svn"] url = https://mysillyserver.com/svn fetch = trunk:refs/remotes/trunk branches = branches/*:refs/remotes/* tags = tags/*:refs/remotes/tags/*

to this:

[svn-remote "svn"] url = https://mysillyserver.com/svn fetch = trunk/foo:refs/remotes/trunk ; ^ change "trunk" to "trunk/foo" as the first part of the fetch branches = branches/*:refs/remotes/* tags = tags/*:refs/remotes/tags/*

Doing that will make git’s “trunk” branch track trunk/foo/ on your server rather than just trunk/, which is probably what you want. If you want to track other projects in the tree, it’s probably better to git-svn init another directory. Update: Oops, I forgot to thank Mark Rowe for help with this. Thanks Mark!

As an aside, while I believe that distributed version control systems look like a great future for open-source projects, it’s interesting that DVCS clients are now starting to support Subversion, which now forms some form of lowest common denominator. (I’d call it the FAT32 of revision control systems, but that’d be a bit unkind… worse-is-better, perhaps?) Apart from the more “official” clients such as command-line svn and TortoiseSVN, it’s also supported by svk, Git, Bazaar, Mercurial, and some great other GUI clients on Mac OS X and Windows. Perhaps Subversion will become a de-facto repository format that everyone else can push and pull between, since it has the widest range of client choice.

Andre Pang: git & less

Tue, 2014-07-08 05:26

For the UNIX users out there who use the git revision control system with the oldskool less pager, try adding the following to your ~/.gitconfig file:

[core]
    # search for core.pager in
    # <http://www.kernel.org/pub/software/scm/git/docs/git-config.html>
    # to see why we use this convoluted syntax
    pager = less -$LESS -SFRX -SR +'/^---'

That’ll launch less with three options set:

  • -S: chops long lines rather than folding them (personal preference),
  • -R: permits ANSI colour escape sequences so that git’s diff colouring still works, and
  • +'/^---': sets the default search regex to ^--- (find --- at the beginning of the line), so that you can easily skip to the next file in your pager with the n key.

The last one’s the handy tip. I browse commits using git diff before committing them, and like being able to jump quickly back and forth between files. Alas, since less is a dumb pager and doesn’t understand the semantics of diff patches, we simply set the find regex to ^---, which does what we want.

Of course, feel free to change the options to your heart’s content. See the less(1) manpage for the gory details.

As the comment in the configuration file says, you’ll need to use the convoluted less -$LESS -SFRX prefix due to interesting git behaviour with the LESS environment variable:

Note that git sets the LESS environment variable to FRSX if it is unset when it runs the pager. One can change these settings by setting the LESS variable to some other value. Alternately, these settings can be overridden on a project or global basis by setting the core.pager option. Setting core.pager has no effect on the LESS environment variable behaviour above, so if you want to override git’s default settings this way, you need to be explicit. For example, to disable the S option in a backward compatible manner, set core.pager to "less -+$LESS -FRX". This will be passed to the shell by git, which will translate the final command to "LESS=FRSX less -+FRSX -FRX".

(And sure, I could switch to using a different pager, but I’ve been using less for more than a decade. Yep, I know all about Emacs & Vim’s diff-mode and Changes.app. It’s hard to break old habits.)

Andre Pang: Learning Photography with the Panasonic GF1

Tue, 2014-07-08 05:26

Thanks to several evil friends of mine, I started to take an interest in photography at the end of last year. I’ve always wanted to have a “real” camera instead of a point and shoot, so at the start of 2010, I bit the bullet and bought myself a Panasonic Lumix DMC-GF1, usually just called The GF1 amongst the camera geeks.

I tossed up between the GF1 and the then-just-released Canon EOS 550D (a.k.a. the Rebel T2i in the USA) for a long time. I figured that getting a compact camera would make me tote it around a lot more, and after ten months of using it, I think I was right. I recently went to a wedding in Sydney, and I literally kept the camera in my suit pocket instead of having to lug it around strapped to my body or neck. I definitely wouldn’t be able to do that with a Canon or Nikon DSLR. The camera’s so small with the kit 20mm f/1.7 lens that I stopped using the UV filter with it, because I didn’t like the 2-3mm that the filter added to the camera depth. Here’s a size comparison of the Nikon D3000 vs the GF1.





(Image stolen from dpreview.com’s review.)

I won’t write up a comprehensive review of the GF1 here; other sites have done that, in much more depth than I can be bothered to go into. If you’re after a good review, the three articles that swayed me to the GF1 in the end were DPreview’s review, and Craig Mod’s GF1 photo field test article and video tests. What follows is my own impressions and experiences of using the camera. The one-sentence summary: the GF1 is perfect for a DSLR newbie like me, the micro four-thirds lens system it uses looks like it has enough legs that your lens investments will be good for the future, and learning photography with the GF1 is great and deserves a Unicode snowman: ☃.

The reason you want the camera is to use the 20mm f/1.7 lens. For the non-photography geeks, that means that it’s not a zoom lens, i.e. you can’t zoom in and out with it, and the f/1.7 means that it can take pretty good shots in low light without a flash. All the reviews of it are right: that lens is what makes the camera so fantastic. Do not even bother with the 14-45mm kit lens. The 20mm lens is fast enough that you can shoot with it at night, and the focal length’s versatile enough to take both close-up/portrait shots (whee food porn), and swing out a bit wider for landscape photos or group photos. It’s no wide-angle nor zoom and it’s not a super-fast f/1.4, but it’s versatile enough and so tiny that you will end up using it almost all the time. It feels weird to get a DSLR and only have one lens for it, but the pancake 20mm lens is so damn good that it’s all you really need. The only thing it really can’t do at all is go super-zoomalicious, for wildlife/distance shots.

The 20mm non-zoom (a.k.a. “prime”) lens has another advantage: it teaches you to compose. Despite all the technology and all the geek speak, photography is ultimately about your composition skills. Prime lenses force you to move around to find the perfect framing for the shot you’re trying to do; I think learning with a prime lens moves your composition skills along much faster than it would if you were using a standard zoom lens. If you’re a photography beginner, like me, shoot with a prime. It’s a totally different experience than shooting with a zoom, and you learn a lot more. Plus, primes are cheap: the Canon 50mm f/1.8 is USD$100, and Canon’s near top-of-the-line 50mm f/1.4 is USD$350. The Canon 35mm f/2, for something that’s similar in focal length to the Panasonic 20mm prime, is USD$300. (You need to multiply the 20mm by 2 to convert between micro four-thirds and 35mm framing, so the 35mm-equivalent focal length of the 20mm in traditional camera speak is 20mm*2=40mm.)

After playing with it for a few months, you realise that the GF1 is a fun camera to use. The combination of the 20mm prime lens, the super-fast focus, the size, and the great UI design just begs you to take pictures with it. You feel like you’re wielding a real camera rather than a toy: one that wants you to shoot with it. It’s not imposing like a bigger DSLR so it doesn’t feel like the camera is with you all the time, but it’s not so small that you feel like you’re just snipping super-casually with something that’s cheap. And did I mention the excellent UI? It’s excellent. The better controls are a good reason to get the GF1 over its rivals, the Olympus EP series.

One big bonus: I’ve found that the full-auto mode (“iAuto” as Panasonic brands it) very rarely gets stuff wrong. This is useful if you hand the camera over to someone else who doesn’t know how to use DSLRs so that they can take a picture of you… or, like me, if you just don’t know quite what aperture/shutter speeds to use for the particular shot you’re taking. The full-auto just adds to the joy of using it. I usually shoot in full-auto or aperture priority mode, but honestly, I could probably shoot on full-auto mode all the time. I can’t recall a single occasion where it didn’t guess f/1.7 or a landscape correctly.

Do follow DPreview and Craig Mod’s advice and shoot RAW, not JPEG. Honestly, I’d prefer to shoot JPEG if I could, but RAW lets you turn some bad shots into good shots. I use it because it gives you a second chance, not because I want to maximise picture quality. Here’s one photo that was accidentally taken with the wrong settings: I had the camera on full-manual mode, and didn’t realise that the shutter speed and ISO settings were totally incorrect.

However, since I shot it in RAW, I could lift the exposure up two stops, pull up the shadows and pull down the highlights, and here’s the result:

Seriously, that’s just frickin’ amazing. Sure, that photo might not be super-awesome: it’s a little grainy, and it looks a bit post-processed if you squint right, but it’s still a photo of a precious memory that I wouldn’t have otherwise had, and you know what? That photo’s just fine. If I shot JPEG, I would’ve had no choice but to throw it away. RAW’s a small pain in the arse since the file sizes are far bigger and you need to wait a long time for your computer to do the RAW processing if you’ve taken hundreds of photos, but boy, it’s worth it.

I did finally buy a wide-angle lens for the GF1—the Olympus 9-18mm f/4-5.6—and have been using it a lot for landscape shots. I bought the Olympus 9-18mm over the Panasonic 7-14 f/4.0 because it was cheaper, and also smaller. I figured that if I was getting a GF1, it was because I wanted something compact, so I wanted to keep the lenses as small as possible. (Otherwise, if you don’t care about size, then a full-blown Canon or Nikon DSLR would probably serve you much better.) I’ve always wanted a wide-angle lens from the first day that I asked “how do those real estate agents make those rooms look so bloody large?”, so now I have one, woohoo. The next lens on my shopping list will probably be the Panasonic 45-200mm. (Never mind the quality, feel the price!) Here’s a shot taken with the Olympus 9-18mm; click through to see the original picture on Flickr.

The main thing that I wish for in a future version of the camera would be image stabilisation. Panasonic follow the Canon path and put image stabilisation in the lens, rather than in the body. I think Olympus made the right decision by putting image stabilisation in the body for their compact DSLRs; you can keep the lenses smaller that way, and you then get image stabilisation with all your lenses instead of only the ones that support it explicitly, e.g. the 20mm f/1.7 prime doesn’t have image stabilisation, boo. In-body image stabilisation just seems more in-line with the size reduction goal for micro four-thirds cameras. I’d love to get my hands on an Olympus EP for a week and shoot with the 20mm to see if image stabilisation makes a difference when it’s dark and the environment is starting to challenge the f/1.7 speeds.

The only other thing I wish for would be a better sensor. The GF1’s great up to ISO 800: at ISO 1600+, it starts getting grainy. 1600 is acceptable, and you can do wondrous things with the modern noise reduction algorithms that are in Lightroom if you really need to save a shot. Shoot at ISO 3200+ though, and it’s just too grainy. This is the main advantage that more traditional DSLRs have: their larger sensors are simply better than the GF1’s. I’ve seen shots taken with a Nikon D50 at ISO 6400 in the dark because a slower f/4 lens was being used, and bam, the shot comes out fine. Don’t even try to compare this thing to a Canon 5D Mk II. The GF1 just can’t do high ISO. Here’s an ISO 3200 shot, which is just starting to get a little too grainy. It’s fine for Facebook-sized images, but if you click through to the original, you’ll see it’s noisy.

But y’know, despite the two nitpicks above, the GF1 is a fine camera, and the 20mm f/1.7 lens is an amazing do-it-all lens that’s absolutely perfect to learn with. There’s really nothing else out there like it except for the Olympus EP range (the EP-1, EP-2 and EPL-1), which you should definitely consider, but get it with the 20mm f/1.7 lens if you do. I’ve had a total blast learning photography with the GF1, and I’ve captured hundreds of memories in the past year that made the investment completely worthwhile. I don’t think I’m at the point where I feel like I need another camera yet, but it feels good knowing that the micro four-thirds format will be around for a while so that I can use my existing lenses with future cameras I buy. If you’re interested in learning photography, the GF1 is a fantastic starting point.

Update: Thom Hogan did a comparison between the most popular mirrorless cameras: the Olympus E-PL1, the Panasonic GF1, Samsung NX100, and Sony NEX-5. It’s written for people who know photography rather than for novices, but basically, the GF1 came out on top, with the E-PL1 being recommended if you can live with the worse screen and the far worse UI. That’s pretty much exactly my opinion, too.

Andre Pang: Geek Culture and Criticism

Tue, 2014-07-08 05:26

What’s happened to Kathy Sierra, and what she wrote about angry and negative people, inspired me to write a bit, so let me indulge myself a little. I live in the computing community, with other like-minded geeks. Computing geeks have a (deserved) reputation for being a little negative. This is not without cause: there’s a lot of things wrong in our world. A lot of the technology we use and rely on every day is brittle and breaks often, and as Simon Peyton-Jones says, we’re quite often trying to build buildings out of bananas. Sure, you can do it, but it’s painful, and it’s downright depressing when the bricks are just over there, just out of reach. Our efforts at releasing software are often met with never-ending bug reports and crash reports, and it’s quite sobering looking at our task trackers.

It’s impossible to resist ragging on something or abusing something. This is part of geek computing culture. We have to work with a lot of crap, so it’s easy to be critical and complain about everything around you. However, from this day forward, I’m going to try to at least make any criticism not totally destructive. (I don’t think I’m vitriolic, mind you, but I’ll make a conscious effort to be more constructive now.) Wrap it up in some humour; offer some suggestions or alternatives. Resist using inflammatory language as much as you can when you’re personally attacked, or simply walk away from it. Re-read everything you write and think “Is what I’m writing simply making people more bitter? Is it actually worth somebody’s time to read this?”

Be more gentle with your language and kinder to your fellow netizens. Don’t participate in flamewars. Don’t give in to the mob mentality and rail on Microsoft or C++ or Ruby or Apple or Linux just because everyone else does. (You’re meant to be a scientist after all, aren’t you?) Break away from that self-reinforcing sub-culture that often comes with being a geek.

Now that I’ve got that off my chest, back to work!

Andre Pang: Oldskool!

Tue, 2014-07-08 05:26
I was cleaning up my room the other day, and lo and behold, look what I found...

That, sir, would be the entire 147-page printed manual for FrontDoor 1.99b, an endearing piece of software for all of us who used to run BBSs¹. FidoNet, SIGNet and AlterNet indeed. (For all you BinkleyTerm chumps, yeah, I ran that as well, with the holy trinity of BinkleyTerm, X00 and Maximus all under OS/2. Don’t even get me started on ViSiON-X, Oblivion/2, Echo forums and all that stuff… oh, I dread to think the number of hours I must’ve spent looking at every single BBS package under the sun.)

Ah, but the BBS as we know it is dead, Jim. Long live the Internet!

¹ And hey Mister Ryan Verner: yes, I ran it with the BNU FOSSIL driver for a while ;-).

Andre Pang: FOMS and Linux.conf.au 2007

Tue, 2014-07-08 05:26

Wow.

FOMS and Linux.conf.au 2007 absolutely rocked the house. I go to my fair share of conferences each year, and even though I’m mostly a Mac OS X user these days, I can heartily say that there really is nothing that matches the flair, co-operation and vibrancy of the Linux community.

The Foundations of Open Media Software mini-conference and workshop took place the week before Linux.conf.au to discuss problem areas in open-source media software and how to tackle them, and a number of worthy goals came out of it. One really important project is the advancement of totally free multimedia codecs that sites such as Wikipedia can use for their video: we’re gunning for Theora (video) and Vorbis (audio) support out-of-the-box for Firefox 3, which means that everyone will finally be able to watch video in a Web browser in a non-patented, totally open format without installing plugins or any other nonsense. Putting faces to all the names and seeing all the different projects co-operating to improve open-source multimedia and hit common goals was fantastic.

Linux.conf.au was just as stellar: the atmosphere was vibrant, talks were casual and informative, the organisation was the best I’ve ever seen for any conference, the parties were great, and the community just wonderful. Kathy Sierra’s keynote about Creating Passionate Users was the best keynote I’d ever seen at a conference, even rivalling Steve Jobs’s famous reality distortion field (and Kathy’s was arguably better, since she was actually delivering a ton of information rather than just unveiling “ooo shiny iPhone!”). As jdub would say, awesome. Thanks so much to the amazing Seven Team for organising the best conference I’ve ever been to, to all the volunteers and helpers that made it go so smoothly, and to the A/V team that preserved the talks for all eternity (plug: including mine, of course :-).

Now to catch up on these 700+ RSS articles that I’ve abandoned reading for the week and await the return of life to normality. See you all at FOMS and Linux.conf.au next year!

Andre Pang: FOMS 2008

Tue, 2014-07-08 05:26

The Foundations of Open Media Software (FOMS) workshop took place last month, from the 24th to 25th of January. FOMS is a rare opportunity for open-source multimedia developers and industry folks to get together all in one place, and the result is two days of intense discussion about issues such as encapsulation formats, codecs, video and audio output APIs, media commons, and metadata—not to mention sharing a common hatred of Flash. The first FOMS was held last year in 2007, and was a great melting pot for people from very different open-source multimedia projects, such as xine, xiph.org, GStreamer and Nokia, to get together. This year’s FOMS proved to be just as successful, this time with folks from Sun, Opera and the BBC joining the fray.

One wonderful thing about this FOMS was that a large number of the xiph.org folks (Monty, Derf, Rillian, Jean-Marc and MikeS) were all there. xiph.org are one of the main providers of freely available multimedia standards, and it’s rare that their members have an opportunity to meet in person. It’s a little strange that they met in Melbourne rather than in the USA, where the majority of their members are, but hey, I’m sure they won’t be complaining about that!

For me, there was a bit of an ominous atmosphere leading up to FOMS due to the recent outbreaks of “discussion” in December 2007 about the HTML5 recommended video codec. (I use the word “discussion” lightly here, since it was a lot more like hearing one’s angry neighbour trying to break down a brick wall with their head, for about a week.) It seemed obvious that the HTML5 video codec problem would be discussed at length at FOMS, but I hoped that it wouldn’t dominate discussion, since there were far better things to do with the combined intellectual might of all the attendees than talk about issues that were mostly political and arguably largely out of their hands to solve.

I’m glad to say that the HTML5 video codec problem was definitely discussed, but with a great focus on finding a solution rather than wailing on about the problem. Ogg Theora and Dirac, represented by xiph.org and the BBC at FOMS, are two of the contenders for the HTML5 baseline video codec recommendation, and it was excellent to see that people were discussing technical aspects that may be hindering their adoption by the W3C, always keeping the bigger picture in mind.

There were also breakout groups that threw down some short-term and long-term goals for the FOMS attendees: I personally took part in a discussion about metadata, text markup of video (subtitling, closed captions, and transcriptions), and video composition and aggregation (“video mashups”). Shane Stephens would present a great talk at Linux.conf.au a few days later about Web 2.0-style community-based video remixing; if you’re interested at all in video mashups and video mixups, be sure to check out his talk!

If you’re interested at all in the open-standards multimedia space, the proceedings of FOMS are available online thanks to the FOMS A/V team, with a big thanks to Michael Dale for bringing his incredible metavid video content management system to the humble FOMS site. (You may also be interested in the W3C Video on the Web Workshop Report that was very recently released.) In an area that’s as complicated as multimedia, FOMS is tremendously valuable as a place for open-source developers to meet. It was a great complement to Linux.conf.au, and here’s hoping it’ll be on again next year!

Andre Pang: Firewire and Robust Connectors

Tue, 2014-07-08 05:26

I read somewhere a long time ago that the cable connectors for IEEE 1394 (better known as Firewire) were inspired by the connectors that Nintendo used for their game consoles. This assertion now seems to be on Wikipedia, so it must be true:

The standard connectors used for FireWire are related to the connectors on the venerable Nintendo Game Boy. While not especially glamorous, the Game Boy connectors have proven reliable, solid, easy to use and immune to assault by small children.

Clever. The 6-pin design for Firewire cables is great: absolutely impossible to plug in the wrong way, and you won’t damage a thing even if you try really hard to plug it in the wrong way. There are no pins exposed on the connector that can be broken, unlike, say, those damn 9-pin serial or VGA cables (or even worse, SCSI cables, ugh). It’s like the Firewire connector was designed by Jonathan Ive. (I dunno if Ive designed the iPod dock connector, but that’s definitely not as robust as a Firewire one.)

Yay for good industrial design.

Andre Pang: A Call For A Filesystem Abstraction Layer

Tue, 2014-07-08 05:26

Filesystems are fundamental things for computer systems: after all, you need to store your data somewhere, somehow. Modern operating systems largely use the same concepts for filesystems: a file’s just a bucket that holds some bytes, and files are organised into directories, which can be hierarchical. Some fancier filesystems keep track of file versions and have record-based I/O, and many filesystems now have multiple streams and extended attributes. However, filesystem organisation ideas have stayed largely the same over the past few decades.
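To make the “multiple streams and extended attributes” point concrete, here’s a minimal sketch of my own (not taken from any particular project) of attaching and reading an extended attribute from Python; the file path and attribute name are made up for illustration, the standard-library calls are Linux-only, and on Mac OS X you’d reach for the xattr tool or library instead:

```python
import os

# Create a throwaway file and attach an extended attribute to it.
# os.setxattr/listxattr/getxattr exist only on Linux in the standard library,
# and user-namespace attributes need a filesystem that supports them (e.g. ext4).
path = "example.txt"
with open(path, "w") as f:
    f.write("hello")

os.setxattr(path, "user.comment", b"metadata that lives outside the byte bucket")

for name in os.listxattr(path):        # e.g. ['user.comment']
    print(name, os.getxattr(path, name))
```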

I’d argue that the most important stuff on your machine is your data. There are designated places to put this data. On Windows, all the most important stuff on my computer should really live in C:\Documents and Settings\Andre Pang\My Documents. In practice, because of lax permissions, almost nobody I know stores their documents there: they splatter them over a bunch of random directories hanging off C:\, resulting in a giant lovely mess where some directories are owned by applications and the system, and some are owned by the user.

Mac OS X is better, but only because I have discipline: /Users/andrep is where I put my stuff. However, I’ve seen plenty of Mac users have data and personal folders hanging off their root directory. I’m pretty sure that my parents have no idea that /Users/your-name-here is where you are meant to put your stuff, and to this day, I’m not quite sure where my dad keeps all his important documents on his Mac. I hope it’s in ~/Documents, but if not, can I blame him? (UNIX only gets this right because it enforces permissions on you. Try saving to / and it won’t work. If you argue that this is a good thing, you’re missing the point of this entire article.)

One OS that actually got this pretty much correct was classic Mac OS: all system stuff went into the System folder (which all the Cool Kids named “System ƒ”, of course). The system essentials were contained in just two files, System and Finder, and you could even copy those files to any floppy disk and make a bootable disk (wow, imagine that). The entire rest of the filesystem was yours: with the exception of the System folder, you organised the filesystem as you pleased, rather than the filesystem enforcing a hierarchy on you. The stark difference in filesystem organisation between classic Mac OS and Mac OS X is largely due to a user-centric approach for Mac OS from the ground up, whereas Mac OS X had to carry all its UNIX weight with it, so it had to compromise and use a more traditionally organised computer filesystem.

As an example, in Mac OS X, if you want to delete Photoshop’s preferences, you delete the ~/Library/Preferences/com.adobe.Photoshop.plist file. Or maybe you should call it the Bibliothèque folder in France (because that’s what the Library folder is displayed as in the Finder if you switch to French)… and why isn’t the Preferences folder name localised too, and what mere mortal is going to understand why it’s called com.adobe.Photoshop.plist? On a technical level, I completely understand why the Photoshop preferences file is in the ~/Library/Preferences/ directory. But at a user experience level, this is a giant step backwards from Mac OS, where you simply went to the System folder and trashed the Adobe Photoshop Preferences file there. How is this progress?

I think the fundamental problem is that Windows Explorer, Finder, Nautilus and all the other file managers in the world are designed, by definition, to browse the filesystem. However, what we really want is an abstraction level for users that hides the filesystem from them, and only shows them relevant material, organised in a way that’s sensible for them. The main “file managers” on desktop OSs (Finder and Windows Explorer) should be operating at an abstraction level above the filesystem. The operating system should figure out where to put files on a technical (i.e. filesystem) level, but the filesystem hierarchy should be completely abstracted so that a user doesn’t even realise their stuff is going into /Users/joebloggs.

iTunes and iPhoto are an example of what I’m advocating, because they deal with all the file management for you. You don’t need to worry where your music is or how your photos are organised on the filesystem: you just know about songs and photos. There’s no reason why this can’t work for other types of documents too, and there’s no reason why such an abstracted view of the filesystem can’t work on a systemwide basis. It’s time for the operating system to completely abstract out the filesystem from the user experience, and to turn our main interaction with our documents—i.e. the Finder, Windows Explorer et al—into something that abstracts away the geek details to a sensible view of the documents that are important to us.
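As a rough illustration of that “the application manages the files” idea—this is purely a hypothetical sketch, not how iTunes or iPhoto actually lay out their libraries—the library owns the directory layout and the caller only ever deals in photos and albums:

```python
import shutil
from pathlib import Path

class PhotoLibrary:
    """A toy photo 'library': callers never see paths, only photos and albums."""

    def __init__(self, root: Path = Path.home() / "toy-photo-library"):
        self.root = root   # where files actually end up is an internal detail

    def import_photo(self, source: Path, album: str) -> None:
        dest = self.root / album
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest / source.name)

    def photos(self, album: str) -> list[Path]:
        return sorted((self.root / album).glob("*"))

# Usage sketch: the caller thinks in albums, not directories.
Path("IMG_0042.JPG").write_bytes(b"pretend this is a JPEG")   # stand-in photo
library = PhotoLibrary()
library.import_photo(Path("IMG_0042.JPG"), album="Holiday 2010")
print(library.photos("Holiday 2010"))
```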

One modern operating system has already done this: iOS. iOS is remarkable for being an OS that I often forget is a full-fledged UNIX at its heart. In the iOS user experience, the notion of files is completely gone: the only filenames you ever see are usually email attachments. You think about things as photos, notes, voice memos, mail, and media; not files. I’d argue that this is a huge reason that users find an iPhone and iPad much simpler than a “real” computer: the OS organises the files for them, so they don’t have to think that a computer deals with files. A computer deals with photos and music instead.

There are problems with the iOS approach: the enforced sandboxing per app means that you can’t share files between apps, which is one of the most powerful (and useful) aspects of desktop operating systems. This is a surmountable problem, though, and I don’t think it’d be a difficult task to store documents that can be shared between apps. After all, it’s what desktop OSs do today: the main challenge is in presenting a view of the files that’s sensible for the user. I don’t think we can—nor should—banish files, since we still need to serialise all of a document’s data into a form that’s easily transportable. However, a file manager should be metadata-centric and display document titles, keywords, and tags rather than filenames. For many documents, you can derive a filename from its metadata that you can then use to transport the file around; a rough sketch of the idea follows.
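Here’s one way a metadata-derived filename might look; the metadata fields and slug rules below are just assumptions made for the sake of illustration:

```python
import re

def filename_from_metadata(title: str, tags: list[str], extension: str) -> str:
    # Turn the title into a filesystem-safe slug, e.g. "FOMS 2008 Notes" -> "foms-2008-notes".
    slug = re.sub(r"[^\w]+", "-", title.lower()).strip("-")
    tag_part = "_".join(tags[:2])    # keep the name short: first two tags only
    return f"{slug}__{tag_part}.{extension}" if tag_part else f"{slug}.{extension}"

print(filename_from_metadata("FOMS 2008 Notes", ["conference", "multimedia"], "txt"))
# -> foms-2008-notes__conference_multimedia.txt
```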

We’ve tried making the filesystem more amenable to a better user experience by adding features such as extended attributes (think Mac OS type/creator information), and querying and indexing features, à la BFS. However, with the additional complexity of issues such as display names (i.e. localisation), directory hierarchies that should remain invisible to users, and the simple but massive baggage of supporting traditional filesystem structures (/bin/ and /lib/ aren’t going away anytime soon, and make good technical sense), I don’t think we can shoehorn a filesystem browser into something that’s friendly for users any more. We need a filesystem abstraction layer that’s system-wide. iOS has proven that it can be done. With Apple’s relentless march of progress and willingness to change system APIs, Linux’s innovation in the filesystem arena and experimentation with the desktop computing metaphor, and Microsoft’s ambitious plans for Windows 8, maybe we can achieve this sometime in the next decade.

Andre Pang: An Optical Mouse of a Mouse

Tue, 2014-07-08 05:26

The optical LED for Apple’s Mighty Mouse emits a cute little picture of a mouse if you look hard enough, and apparently this has always been the case for all of Apple’s optical mice. Kudos to them for the little design touches! Now if I could only find another Logitech MX-300, which is the best-weighted and most comfortable mouse I’ve ever used.

(Man, you know you’re a total nerd when you start talking about the weight of your frigging mouse.)

Andre Pang: Tom Bradley, Terminal 8, Terminal 5 ...

Tue, 2014-07-08 05:26

Well, my impression of the American airline industry just gets better and better: my United flight to Salt Lake City at 1:19pm was cancelled, so they shoved me on a Delta Airlines flight at 3pm instead. Oy vey, another 2 hours of killing time at Los Angeles airport! (Though I admit I’ve actually been productive and have been hacking on TextExtras as I said I would in my last blog post, so it wasn’t much of a waste of time.) At least the staff are pretty friendly: they do seem quite sympathetic to the numerous screw-ups that keep happening. I don’t think I’ve had a single trip to the USA in the past 2 or 3 years where something didn’t go wrong.

The good part of the story is that I managed to book myself on an earlier Delta flight which left Los Angeles at 1pm, so oddly enough I ended up catching an earlier flight than the one which was cancelled. The bad part of the story is that means I had to move terminals yet again. My Qantas flight into Los Angeles arrived at the Tom Bradley International terminal, from which point I walked with Don to Terminal 5 since I was killing time and didn’t have anything better to do. Then, onward to Terminal 8, where my cancelled United flight was, and then back to Terminal 5 for my final Delta flight. In conjunction with the Auckland stopover, that meant I’d gone through security screening four times. Fun, fun, fun.

The really bad part of the story is that my baggage didn’t travel with me on the flight and got misdirected somewhere between Los Angeles and Salt Lake City, although the baggage assistant dude who helped me claimed that it’s very likely coming on the next United flight, which lands in Salt Lake City at 6pm. Since there’s not much else I can do about that, I’ll just pray it does come, otherwise I’ll be wearing my Afterglow long-sleeve shirt again tomorrow …