Planet Linux Australia
[Sorry if you get this post twice—let’s say that our internal builds of RapidWeaver 4.0 are still a little buggy, and I needed to re-post this ;)]
Xcode, Apple’s IDE for Mac OS X, has this neat ability to perform distributed compilations across multiple computers. The goal, of course, is to cut down on the build time. If you’re sitting at a desktop on a local network and have a Mac or two to spare, distributed builds obviously make a lot of sense: there’s a lot of untapped power that could be harnessed to speed up your build. However, there’s another scenario where distributed builds can help, and that’s if you work mainly off a laptop and occasionally join a network that has a few other Macs around. When your laptop’s offline, you can perform a distributed build with just your laptop; when your laptop’s connected to a few other Macs, they can join in the build and speed it up.
There’s one problem with this idea, though, which is that distributed builds add overhead. I had a strong suspicion that a distributed build with only the local machine participating was significantly slower than a plain individual build. Since it’s all talk unless you have benchmarks, lo and behold, a few benchmarks later, I proved my suspicion right.
- Individual build: 4:50.6 (first run), 4:51.7 (second run)
- Shared network build with local machine only: 6:16.3 (first run), 6:16.3 (second run)
This was a realistic benchmark: it was a full build of RapidWeaver including all its sub-project dependencies and core plugins. The host machine is a 2GHz MacBook with 2GB of RAM. The build process includes a typical number of non-compilation phases, such as running a shell script or two (which takes a few seconds), copying files to the final application bundle, etc. So, for a typical Mac desktop application project like RapidWeaver, turning on shared network builds without any extra hosts incurs a pretty hefty speed penalty: ~30% in my case. Ouch. You don’t want to leave shared network builds on when your laptop disconnects from the network. To add to the punishment, Xcode will recompile everything from scratch if you switch from individual builds to distributed builds (and vice versa), so flipping the switch whenever you disconnect from or reconnect to a network is going to require a full rebuild.
Of course, there’s no point to using distributed builds if there’s only one machine participating. So, what happens when we add a 2.4GHz 20” Aluminium Intel iMac with 2GB of RAM, via Gigabit Ethernet? Unfortunately, not much:
- Individual build: 4:50.6 (first run), 4:51.7 (second run)
- Shared network build with local machine + 2.4GHz iMac: 4:46.6 (first run), 4:46.6 (second run)
You shave an entire four seconds off the build time by getting a 2.4GHz iMac to help out a 2GHz MacBook. A 1% speed increase isn’t anywhere near the ~40% build time reduction that you’re probably hoping for. Sure, a 2.4GHz iMac is not exactly a build farm, but you’d hope for something a little better than a 1% speed improvement from doubling the horsepower, no? Amdahl’s Law strikes again: parallelism is hard, news at 11.
I also timed Xcode’s dedicated network builds (which are a little different from its shared network builds), but buggered if I know where I put the results for that. I vaguely remember that dedicated network builds were very similar to shared network builds with my two hosts, but my memory’s hazy.
So, lesson #1: there’s no point using distributed builds unless there’s usually at least one machine available to help out, otherwise your builds are just going to slow down. Lesson #2: you need to add significantly more CPU power to save a significant amount of time with distributed builds. A single 2.4GHz iMac doesn’t appear to help much. I’m guessing that adding a quad-core or eight-core Mac Pro to the build will help. Maybe 10 × 2GHz Intel Mac minis will help, but I’d run some benchmarks on that setup before buying a cute Mac mini build farm — perhaps the overhead of distributing the build to ten other machines is going to nullify any timing advantage you’d get from throwing another 20GHz of processors into the mix.
- Meet new people (✓),
- Catch up with fellow Aussies I haven’t seen in years (✓),
- Go to parties (✓),
- Behave appropriately at said parties (✓),
- Use the phrase “Inconceivable!” inappropriately (✓),
- Work on inspiring new code (✓),
- Keep up with Interblag news (✗),
- Keep up with RSS feeds (✗),
- Keep up with personal email (✗),
- Keep up with work email (✗),
- Install Leopard beta (✓),
- Port code to work on Leopard (✗),
- Successfully avoid Apple Store, Virgin, Banana Republic et al in downtown San Francisco (✓),
- Keep family and friends at home updated (✓),
- Mention the words “Erlang”, “Haskell” and “higher-order messaging” to ~~puny humans~~ fellow Objective-C programmers (✓),
- Write up HoPL III report (✗),
- Find and beat whoever wrote NSTokenField with a large dildo (✗),
- Get food poisoning again (✗),
- Sleep (✗),
- Actually attend sessions at the conference (✓ ✗).
- Me: “Can I have myself an André discount at all?”
- Manager: “Hmmm… well, normally I would, but I can’t do that today. How about I throw in a free copy of World of Warcraft? Yes, that sounds like an excellent idea…”
Nooooooooooooooooooooo! Tom, I officially hate you. Do you know how long I’ve been trying to avoid playing this frigtard game? Goodbye sunshine, it’s been nice knowing you. If I don’t reply to any emails from now on, I’m either dead, or I’m playing this bloody MMORPG that I’ve been avoiding so successfully up until now. Bye all!
If you have a VPN at your workplace, chances are good that it’s one of those Cisco 3000 VPN Concentrator things, which seem to be an industry standard for VPN equipment. Chances are also good that you’ve likely been forced to use the evil, evil proprietary Cisco VPN client, which has been known to be a source of angsta majora for Mac OS X and Linux users. (And if you think Windows users have it good, think again: the Cisco VPN client completely hosed a friend’s 64-bit Windows XP system to the point where it wouldn’t even boot.)
Enter vpnc, an open-source VPN client that works just fine on both Mac OS X and Linux. Linux people, I assume that you know what you’re doing — all you should need is a TUN/TAP kernel driver and you’re good to go. Mac OS X folks, you’ll need to be comfortable with the UNIX terminal to use it; unfortunately no GUI has been written for it yet. If you’re a Terminal geek, here’s a small guide for you:
- Download and install a tun/tap driver for Mac OS X.
- Download and install libgcrypt. If you have MacPorts (née DarwinPorts) installed, simply do “port install libgcrypt”. Otherwise, grab it from the libgcrypt FTP site and install it manually.
- You’ll need to check out the latest version of the vpnc code from its Subversion repository: “svn checkout http://svn.unix-ag.uni-kl.de/vpnc/”. The latest official release (0.3.3, as of this writing) will not compile properly on Mac OS X, which is why you need the code from the Subversion trunk.
- After doing the standard “make && make install” mantra, run your Cisco VPN .pcf profile through the pcf2vpnc tool and save the resulting .vpnc file in /etc/vpnc.
- ./vpnc YourProfile.vpnc, and that should be it. While you’re debugging it, the --nodetach and --debug 1 options may be useful.
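Strung together, the whole dance looks roughly like this (a sketch only: the profile name is illustrative, and the exact pcf2vpnc invocation may differ slightly on your setup):

```shell
# Grab and build vpnc from trunk (the 0.3.3 release won't build on OS X).
svn checkout http://svn.unix-ag.uni-kl.de/vpnc/ vpnc
cd vpnc
make && sudo make install

# Convert the Cisco .pcf profile and drop the result into /etc/vpnc.
pcf2vpnc YourProfile.pcf > YourProfile.vpnc
sudo mv YourProfile.vpnc /etc/vpnc/

# Connect; --nodetach and --debug keep it chatty in the foreground
# while you're still shaking out problems.
sudo vpnc --nodetach --debug 1 YourProfile.vpnc
```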
Muchas gracias to Mario Maceratini at Rising Sun Pictures for hunting down vpnc for me.
This is a Public Service Announcement for Australians: if you’re looking for mobile broadband access for your laptop (and what geek isn’t?), Vodafone are doing a pretty spectacular deal at the moment for ‘net access via their 3G/HSDPA network.
For $39/month, you get 5GB of data; no time limits; no speed caps; and fallback from 3G to GPRS in regional areas where HSDPA isn’t available yet. It’s a fantastic deal for people who live in metropolitan areas and work on the road a lot.
The main catch is that it’s a 24-month contract, so is a somewhat long time to be locked in to a plan. However, I have a feeling that no other mobile Internet offering is going to be competitive with 5GB for $40/month within the next two years. (Hell, $39/month for decent mobile Internet access is competitive with even some fixed-line ADSL2 providers.) One other small catch is that you also can’t use multiple devices on the plan: it’s tied to the single SIM card that you purchase with the plan. So, all you cool kids with 3G/GPRS-capable mobile phones, you can’t include that device on part of the bundle (looks sadly at iPhone). Other than that, it’s really a pretty bloody good deal.
To compare this with other plans:
- Vodafone themselves offer a craptacular 100MB for $29/month, which is barely enough to just check email these days. (And that doesn’t include the modem, which is another $200). A mere 1GB of data is $59/month, or $99 per month with no contract!
- Telstra are even worse (this is my surprised face): $59 for 200MB. I’ll say that again: $59 per month for 200MB. 1GB is $89.
- Bigpond (who are different from Telstra1) offer vaguely competitive plans if you’re OK with a 10-hour-per-month time limit: that goes for $35/month. (This translates to around 30 minutes per business day, which may be OK if you just hop online occasionally to check email.) The $35 plan is the only timed plan, though: other than that, it’s $55 for 200MB (puke), or $85 for 1GB.
- I can’t even find out whether Optus have mobile broadband plans available. Comments?
- Virgin Mobile Broadband used to be pretty spectacular at $10/month for 1GB, and is still somewhat OK at $80/month for the same 1GB if it’s bundled with a phone plan. Given that Vodafone charge $39/month for 5GB, though, you could instead pair their deal with a phone plan of your choice and get 5GB instead of 1GB.
- Three (or 3, or whatever) just launched the next best alternative with their new X-Series plans. Their Gold plan is $30/month for 1GB, and their Platinum plan is $40/month for 2GB. Interestingly, the X-Series plans give you a ton of free Skype minutes (2000 minutes on the 1GB plan and 4000 minutes on the 2GB plan), so if you’re a really heavy Skype person and chat about 130 hours per month, the Three deal may be better than Vodafone’s.
The 3G modem they use is a Huawei E220, which looks like it’s the same modem used by Virgin and Three. There appears to be Linux support for it, and I can confirm that it works fine on Mac OS X 10.5 (Leopard) thanks to an alternative driver.
So, if you’re interested, visit the Vodafone 5GB webpage. You can sign up through the Internet on the spot. However, you can also sign up over the phone, and if you do, you have a 30-day “cooling off” period where you can opt out of your contract if you’re not happy with the service. (Stupidly enough, you can’t get the 30-day cooling off period if you pop into a Vodafone store, because phone service has different conditions to face-to-face service. Ja, whatever man.) Hurry though: the deal expires on December 31, 2007. Get it as a late Christmas present for yourself, I guess!
1 Telstra Mobility Broadband is a completely separate service from Bigpond Broadband, and Telstra and Bigpond are separate entities. I found this out the hard way, when I was on a 10-hour-per-month CDMA/EVDO plan with Telstra, and couldn’t upgrade to the 10-hour-per-month 3G plan with Bigpond, because Telstra and Bigpond are separate things. Ahuh. (I couldn’t upgrade to a 10h plan on Telstra, because Telstra doesn’t even offer hourly plans anymore.) Way to go for rewarding all your mobile Internet early adopters that braved EVDO, you frigtards.
Parallels Desktop for Mac was the first kid on the block to support virtualisation of other PC operating systems on Mac OS X. However, in the past fortnight, I’ve found out that:
- Parallels allocates just a tad too many unnecessary Quartz windows1, which causes the Mac OS X WindowServer to start going bonkers on larger monitors. I’ve personally seen the right half of a TextEdit window disappear, and Safari not being able to create a new window while Parallels is running, even with no VM running. (I’ve started a discussion about this on the Parallels forums.)
- Parallels does evil things with your Windows XP Boot Camp partition, such as replacing your ntoskrnl.exe and hal.dll files and rewriting the crucial boot.ini file. This causes some rather hard-to-diagnose problems with some low-level software, such as MacDrive, a fine product that’s pretty much essential for my Boot Camp use. Personally, I’d rather not use a virtualisation program that decides to screw around with my operating system kernel, hardware abstraction layer, and boot settings, thank you very much.
VMware Fusion does none of these dumbass things, and provides the same, simple drag’n’drop support and shared folders to share files between Windows XP and Mac OS X. I concur with stuffonfire about VMware Fusion Beta 3: even in beta, it’s a lot better than Parallels so far. Far better host operating system performance, better network support, hard disk snapshots (albeit not with Boot Camp), and DirectX 8.1 support to boot. (A good friend o’ mine reckons that 3D Studio runs faster in VMware Fusion on his Core 2 Duo MacBook Pro than it does natively on his dedicated Athlon 64 Windows machine. Nice.) The only major feature missing from VMware Fusion is Coherence, and I can live without that. It’s very cool, but hardly necessary.
Oh yeah, and since VMware Fusion is in beta right now, it’s free as well. Go get it.
1 Strictly speaking, allocating a ton of Quartz windows is Qt’s fault, not Parallels’s fault. Google Earth has the same problem. However, I don’t really care if it’s Qt’s fault, considering that it simply means running Parallels at all (even with no VM open) renders my machine unstable.
You know that Which Operating System Are You quiz?
Well, they’re gonna have to expand it to include all six versions of Windows Vista, whenever that decides to be unleashed unto the world. Hello, six versions? With the starter edition only being able to use 256MB of RAM and run three applications at once? Even eWeek says that “you would be better off running Windows 98”. You know what, instead of choosing between Vista Starter, Vista Home Basic, Vista Home Premium, Vista Business, Vista Enterprise or Vista Ultimate, how about I just run Mac OS X or Linux instead, you stupid tossers?
Jesus, the excellent lads over at Microsoft Research (who produce some truly world-class work) must just cringe when they hear their big brother company do totally insane stuff like this.
Whoops, those of you who had problems downloading Vimacs will find that the download links work properly now. (What the hell, people besides me actually use Vimacs?)
One of the features of the new video iPod (the “Generation 5.5” one) is that it handles videos bigger than 640×480 just fine. This shouldn’t be surprising for geeks who own the older video iPod that plays 320×240 video, since the alpha geeks will know that the older video iPods could play some videos bigger than 320×240 just fine.
A nice side-effect of this is that if you are ripping DVDs to MPEG-4, you can very likely rip them at native resolution: I had zero problems playing Season 2 of Battlestar Galactica on a new video iPod, and it had a native resolution of 704×400. (Note: This is with a standard MPEG-4 video profile, not H.264 baseline low-complexity.) This is pretty cool, since you can now hook up a little video iPod direct to a big-ass TV and know that video resolution is no longer a differentiating factor between DVDs and MPEG-4 video. Now if only the iPod had component video connectors available…
Where’s André in June?
- June 7-10: San Diego (for the History of Programming Languages III conference.)
- June 11-20: San Francisco (for Apple’s WWDC 2007 bash)
- June 20-28: Las Vegas
- June 24-28: Los Angeles
If you’ll be in town on any of those dates or going to HoPL or WWDC, drop me an email!
As an aside, HoPL III looks incredible: Waldemar Celes (Lua), Joe Armstrong (Erlang), Bjarne (C++), David Ungar (Self), and the awesome foursome from Haskell: Paul Hudak, John Hughes, Simon Peyton Jones and Phil Wadler. (Not to mention William Cook’s great paper on AppleScript, which I’ve blogged about before.) Soooo looking forward to it.
I always used to get confused between UCS-2 and UTF-16. Which one’s the fixed-width encoding and which one’s the variable-length encoding that supports surrogate pairs?
Then, I learnt this simple little mnemonic: you know that UTF-8 is variable-length encoded1. UTF = variable-length. Therefore UTF-16 is variable-length encoded, and therefore UCS-2 is fixed-length encoded. (Just don’t extend this mnemonic to UTF-32.)
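To see the mnemonic in action, here’s a quick Python check (any language with proper Unicode strings would do): a character outside the Basic Multilingual Plane forces UTF-16 to spend two 16-bit code units, the infamous surrogate pair:

```python
import struct

# BMP characters fit in a single 16-bit code unit...
assert len("A".encode("utf-16-le")) == 2

# ...but U+1F600 lies beyond the BMP, so UTF-16 needs a
# surrogate pair: two 16-bit code units, i.e. four bytes.
emoji = "\U0001F600"
assert len(emoji.encode("utf-16-le")) == 4

# The pair itself: a high surrogate followed by a low surrogate.
high, low = struct.unpack("<2H", emoji.encode("utf-16-le"))
assert (high, low) == (0xD83D, 0xDE00)
```

A fixed-width UCS-2 decoder would see those two surrogates as two separate (and meaningless) characters, which is exactly the trap the mnemonic helps you avoid.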
Just thought I’d pass that trick on.
1 I’m assuming you know what UTF-8 is, anyway. If you don’t, and you’re a programmer, you should probably learn sometime…
I’m not too sure that I can go much farther
I’m really not sure things are even getting better
I’m so tired of the me that has to disagree
I’m so tired of the me that’s in control
I woke up to see the…
Sun shining all around me
How could it shine down on me?
You think that it would notice that I can’t take any more
Had to ask myself,
… what’s it really for?
Everything I tried to do, it didn’t matter
Now I might be better off just rolling over
‘cos you know I try so hard but couldn’t change a thing
And it hurts so much I might as well let go
I can’t really take the…
Sun shining all around me
Why would it shine down on me?
You think that it would notice that I no longer believe
Can’t help telling myself
… it don’t mean a thing.
I woke up to see the…
Sun shining all around me
How could it shine down on me?
Sun shining all its beauty
Why would it shine down on me?
You think that it would notice that I can’t take any more
Just had to ask myself,
… what’s it really for?
—Yoko Kanno and Emily Curtis, What’s It For
Trust in love to save, baby. Bring on 2007!
First, fuel costs are down:
Second, I actually finished an entire tube of Blistex before I lost the stupid thing. I believe this is the second time in my life that this has happened:
Fourth, my personal inbox looks like this right now:
Zero messages, baby. Yeah! (Well, OK, my work inboxes still have a ton of messages… but zero personal mails left is really pretty nice.)
Plus, this is being published from Auckland airport, on the way to San Francisco. Not a bad day at all.
If you haven’t had much experience with the wonderful world of multithreading and don’t yet believe that threads are evil1, Edward A. Lee has an excellent essay named “The Problem with Threads”, which challenges you to solve a simple problem: write a thread-safe Observer design pattern in Java. Good luck. (Non-Java users who scoff at Java will often fare even worse, since Java is one of the few languages with some measure of in-built concurrency control primitives—even if those primitives still suck.)
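To make the challenge concrete, here’s a minimal Python sketch (names are mine, not Lee’s) of the “obvious” locked Observer. Even this has to be careful to notify from a snapshot, outside the lock, to dodge deadlocks when a listener re-enters the subject:

```python
import threading

class Subject:
    """A minimally thread-safe Observer subject (a sketch only).

    A lock guards the listener list, and notify() iterates over a
    snapshot taken under the lock, so listeners may subscribe or
    unsubscribe during a notification without corrupting the loop.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._listeners = []

    def subscribe(self, listener):
        with self._lock:
            self._listeners.append(listener)

    def unsubscribe(self, listener):
        with self._lock:
            self._listeners.remove(listener)

    def notify(self, event):
        with self._lock:
            snapshot = list(self._listeners)
        # Call listeners *outside* the lock: a listener that re-enters
        # subscribe() or unsubscribe() would otherwise deadlock on the
        # non-reentrant lock.
        for listener in snapshot:
            listener(event)
```

And even this isn’t fully correct: a listener can still be called after another thread has unsubscribed it, because a notification in flight holds an older snapshot. That residue of nondeterminism is precisely Lee’s point.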
His paper’s one of the best introductory essays I’ve read about the problems with shared state concurrency. (I call it an essay since it really reads a lot more like an essay than a research paper. If you’re afraid of academia and its usual jargon and formal style, don’t be: this paper’s an easy read.) For those who aren’t afraid of a bit of formal theory and maths, he presents a simple, convincing explanation of why multithreading is an inherently complex problem, using the good ol’ explanation of computational interleavings of sets of states.
His essay covers far more than just the problem of inherent complexity, however: Lee then discusses how bad threading actually is in practice, along with some software engineering improvements such as OpenMP, Tony Hoare’s idea of Communicating Sequential Processes2, Software Transactional Memory, and Actor-style languages such as Erlang. Most interestingly, he discusses why programming languages aimed at concurrency, such as Erlang, won’t succeed in the main marketplace.
Of course, how can you refuse to read a paper that has quotes such as these?
- “… a folk definition of insanity is to do the same thing over and over again and to expect the results to be different. By this definition, we in fact require that programmers of multithreaded systems be insane. Were they sane, they could not understand their programs.”
- “I conjecture that most multi-threaded general-purpose applications are, in fact, so full of concurrency bugs that as multi-core architectures become commonplace, these bugs will begin to show up as system failures. This scenario is bleak for computer vendors: their next generation of machines will become widely known as the ones on which many programs crash.”
- “Syntactically, threads are either a minor extension to these languages (as in Java) or just an external library. Semantically, of course, they thoroughly disrupt the essential determinism of the languages. Regrettably, programmers seem to be more guided by syntax than semantics.”
- “… non-trivial multi-threaded programs are incomprehensible to humans. It is true that the programming model can be improved through the use of design patterns, better granularity of atomicity (e.g. transactions), improved languages, and formal methods. However, these techniques merely chip away at the unnecessarily enormous non-determinism of the threading model. The model remains intrinsically intractable.” (Does that “intractable” word remind you of anyone else?)
- “… adherents to… [a programming] language are viewed as traitors if they succumb to the use of another language. Language wars are religious wars, and few of these religions are polytheistic.”
If you’re a programmer and aren’t convinced yet that shared-state concurrency is evil, please, read the paper. Please? Think of the future. Think of your children.
1 Of course, any non-trivial exposure to multithreading automatically implies that you understand they are evil, so the latter part of that expression is somewhat superfluous.
2 Yep, that Tony Hoare—you know, the guy who invented Quicksort?
Two years ago, I had a wonderful job working on a truly excellent piece of software named cineSync. It had the somewhat simple but cheery job of playing back movies in sync across different computers, letting people write notes about particular movie frames and scribble drawings on them. (As you can imagine, many of the drawings that we produced when testing cineSync weren’t really fit for public consumption.) While it sounds like a simple idea, oh boy did it make some people’s lives a lot easier and a lot less stressful. People used to do crazy things like fly from city to city just to be in the same room with another guy for 30 minutes to talk about a video that they were producing; sometimes they’d be flying two or three times per week just to do this. Now, they just fire up cineSync instead and get stuff done in 30 minutes, instead of 30 minutes and an extra eight hours of travelling. cineSync made the time, cost and stress savings probably an order of magnitude or two better. As a result, I have immense pride and joy in saying that it’s being used on virtually every single Hollywood movie out there today (yep, even Iron Man). So, hell of a cool project to work on? Tick ✓.
Plus, it was practically a dream coding job when it came to programming languages and technologies. My day job consisted of programming with Mac OS X’s Cocoa, the most elegant framework I’ve ever had the pleasure of using, and working with one of the best C++ cross-platform code bases I’ve seen. I also did extensive hacking in Erlang for the server code, so I got paid to play with one of my favourite functional programming languages, which some people spend their entire life wishing for. And I got schooled in just so much stuff: wielding C++ right, designing network protocols, learning about software process, business practices… so, geek nirvana? Tick ✓.
The ticks go on: great workplace ✓; fantastic people to work with ✓; being privy to the latest movie gossip because we were co-located with one of Australia’s premier visual effects companies ✓; sane working hours ✓; being located in Surry Hills and sampling Crown St for lunch nearly every day ✓; having the luxury of working at home and at cafés far too often ✓. So, since it was all going so well, I decided that it was obviously time to make my life a lot harder, so I resigned, set up my own little software consulting company, and started working on Mac shareware full-time.
Outside of the day job on cineSync, I was doing some coding on a cute little program to build websites named RapidWeaver. RapidWeaver’s kinda like Dreamweaver, but a lot more simple (and hopefully just as powerful), and it’s not stupidly priced. Or, it’s kinda like iWeb, but a lot more powerful, with hopefully most of the simplicity. I first encountered RapidWeaver as a normal customer and paid my $40 for it since I thought it was a great little program, but after writing a little plugin for it, I took on some coding tasks.
And you know what? The code base sucked. The process sucked. Every task I had to do was a chore. When I started, there wasn’t even a revision control system in place: developers would commit their changes by emailing entire source code files or zip archives to each other. There was no formal bug tracker. Not a day went by when I shook my fist, lo, with great anger, and thunder and lightning appeared. RapidWeaver’s code base had evolved since version 1.0 from nearly a decade before, written by multiple contractors with nobody being an overall custodian of the code, and it showed. I saw methods that were over a thousand lines long, multithreaded bugs that would make Baby Jesus cry, method names that were prefixed with Java-style global package namespacing (yes, we have method names called com_rwrp_currentlySelectedPage), block nesting that got so bad that I once counted thirteen tabs before the actual line of code started, dozens of lines of commented-out code, classes that had more than a hundred and twenty instance variables, etc, etc. Definitely no tick ✗.
But the code—just like PHP—didn’t matter, because the product just plain rocked. (Hey, I did pay $40 for it, which surprised me quite a lot because I moved to the Mac from the Linux world, and sneered off most things at the time that cost more than $0.) Despite being a tangled maze of twisty paths, the code worked. I was determined to make the product rock more. After meeting the RapidWeaver folks at WWDC 2007, I decided to take the plunge and see how it’d go full-time. So, we worked, and we worked hard. RapidWeaver 3.5 was released two years ago, in June 2006, followed by 3.5.1. 3.6 followed in May 2007, followed by a slew of upgrades: 3.6.1, 3.6.2, 3.6.3… all the way up to 3.6.7. Slowly but surely, the product improved. On the 3rd of August 2007, we created the branch for RapidWeaver 3.7, which we didn’t realise yet was going to be such a major release that it eventually became 4.0.
And over time, it slowly dawned on me just how many users we had. A product that I initially thought had a few thousand users was much closer to about 100,000 users. I realised I was working on something that was going to affect a lot of people, so when we decided to call it version 4.0, I was a little nervous. I stared at the code base and it stared back at me; was it really possible to ship a major new revision of a product, add features to it, and maintain my sanity?
I decided in my naïvety to refactor a huge bunch of things. I held conference calls with other developers to talk about what needed to change in our plugin API, and how I was going to redo half of the internals so it wouldn’t suck anymore. Heads nodded; I was happy. After about two weeks of being pleased with myself and ripping up many of our central classes, reality set in as I realised that I was very far behind on implementing all the new features, because those two weeks were spent on nothing else but refactoring. After doing time estimation on all the tasks we had planned out for 4.0 and realising that we were within about one day of the target date, I realised we were completely screwed, because nobody sane does time estimation for software without multiplying the total estimate by about 1.5-2x. 4.0 was going to take twice as long as we thought it would, and since the feature list was not fixed, it was going to take even longer than that.
So, the refactoring work was dropped, and we concentrated on adding the new required features and porting the bugfixes from the 3.6 versions to 4.0. So, now we ended up with half-refactored code, which is arguably just as bad as unrefactored code. All the best-laid plans that I had to clean up the code base went south as we soldiered on towards feature completion for 4.0, because we simply didn’t have the time. I ended up working literally up until the last hour to get 4.0 to code completion, and made some executive decisions to pull some features that were just too unstable in their current state. Quick Look support was pulled an hour and a half before the release as we kept finding and fixing bugs with it that crashed RapidWeaver while saving a document, which was a sure-fire way to lose customers. Ultimately, pulling Quick Look was the correct decision. (Don’t worry guys, it’ll be back in 4.0.1, without any of that crashing-on-save shenanigans.)
So, last Thursday, it became reality: RapidWeaver 4.0 shipped out the door. While I was fighting against the code, Dan, Aron, Nik and Ben were revamping the website, which is now absolutely bloody gorgeous, all the while handling the litany of support requests and being their usual easygoing sociable selves on the Realmac forums. I was rather nervous about the release: did we, and our brave beta testers, catch all the show-stopper bugs? The good news is that it seems to be mostly OK so far, although no software is ever perfect, so there’s no doubt we’ll be releasing 4.0.1 soon (if only to re-add Quick Look support).
A day after the release, it slowly dawned on me that the code for 4.0 was basically my baby. Sure, I’d worked on RapidWeaver 3.5 and 3.6 and was the lead coder for that, but the 3.5 and 3.6 goals were much more modest than 4.0. We certainly had other developers work on 4.0 (kudos to Kevin and Josh), but if I had a bad coding day, the code basically didn’t move. So all the blood, sweat and tears that went into making 4.0 was more-or-less my pride and my responsibility. (Code-wise, at least.)
If there’s a point to this story, I guess that’d be it: take pride and responsibility in what you do, and love your work. The 4.0 code base still sucks, sitting there sniggering at me in its half-refactored state, but we’ve finally suffered the consequences of its legacy design for long enough that we have no choice but to give it a makeover with a vengeance for the next major release. Sooner or later, everyone pays the bad code debt.
So, it’s going to be a lot more hard work on the road to 4.1, as 4.1 becomes the release that we all really wanted 4.0 to be. But I wouldn’t trade this job for pretty much anything else in this world right now, because it’s a great product loved by a lot of customers, and making RapidWeaver better isn’t just a job anymore, it’s a need. We love this program, and we wanna make it so good that you’ll just have to buy the thing if you own a Mac. One day, I’m sure I’ll move on from RapidWeaver to other hopefully great things, but right now, I can’t imagine doing anything else. We’ve come a long way from RapidWeaver 3.5 in the past two years, and I look forward to the long road ahead for RapidWeaver 5. Tick ✓.
svk—a distributed Subversion client by Chia Liang Kao and company—is now an essential part of my daily workflow. I’ve been using it almost exclusively for the past year on the main projects that I work with, and it’s fantastic being able to code when you’re on the road and do offline commits, syncing back to the main tree when you’re back online. Users of other distributed revision control systems do, of course, get these benefits, but svk’s ability to work with existing Subversion repositories is the killer reason to use it. (I’m aware that Bazaar has some Subversion integration now, but it’s still considered alpha, whereas svk has been very solid for a long time now.)
The ability to do local checkins with a distributed revision control client has a nice side-effect: commits are fast. They typically take around two seconds with svk. A checkin from a non-distributed revision control client such as Subversion requires a round-trip to the server. This isn’t too bad on a LAN, but even for a small commit, it can take more than 10 or 15 seconds to a server on the Internet. The key point is that these fast commits have a psychological effect: having short commit times encourages you to commit very regularly. I’ve found that since I’ve switched to svk, not only can I commit offline, but I commit much more often: sometimes half a dozen times inside of 10 minutes. (svk’s other cool feature of dropping files from the commit by deleting them from the commit message also helps a lot here.) Regular commits are always better than irregular commits, because either (1) you’re committing small patches that are easily reversible, and/or (2) you’re working very prolifically. Both of these are a win!
So, if you’re still using Subversion, try svk out just to get the benefits of this and its other nifty features. The svk documentation is quite sparse, but there are some excellent tutorials that are floating around the ‘net.
My dad’s been on a Steven Seagal action movie rampage recently. How many friggin’ movies do you think this guy has made? A half-dozen? A dozen? Nope: thirty-two. And they’re all exactly the damn same, although some of them have hilarious titles (such as Today You Die, Half Past Dead and Out for a Kill) with equally hilarious taglines (“Whoever set him up is definitely going down”).
Please add Steven Seagal to the list of heroes who I want to be when I grow up. Life just can’t be that bad when you keep starring in action movies with hot Asian chicks in half of them.
It’s rare that I find a good, balanced article on the (dis)advantages of static vs dynamic typing, mostly because people on each side are too religious (or perhaps just stubborn) to see the benefits of the other. Stevey’s blog rant comparing static vs dynamic typing is one of the most balanced ones that I’ve seen, even if I think half his other blog posts are on crack.
I lean pretty far toward the static typing end of the spectrum, but I also think that dynamic typing not only has its uses, but is absolutely required in some applications. One of my favourite programming languages is Objective-C, which seems to be quite unique in its approach: the runtime system is dynamically typed, but you get a reasonable amount of static checking at compile time by using type annotations on variables. (Survey question: do you know of any Objective-C programmers who simply use id types everywhere, rather than the more specific types such as NSWindow* and NSArray*? Yeah, I didn’t think so.) Note that I think Objective-C could do with a more powerful type system: some sort of parameterised type system similar in syntax to C++ templates/Java generics/C# generics would be really useful just for the purposes of compile-time checking, even though it’s all dynamically typed at runtime.
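Objective-C’s split (a dynamically typed runtime, with optional static annotations for the compiler) has a close analogue in Python’s gradual typing, if that helps make the idea concrete. A minimal sketch, with a hypothetical function name; the `list[str]` annotation syntax assumes Python 3.9 or newer:

```python
# The annotation plays the role of Objective-C's NSArray*/NSString*
# pointer types: a static tool can check calls against it, but the
# runtime itself ignores it and stays fully dynamic (like using id).

def headline(titles: list[str]) -> str:
    """Return the longest title."""
    return max(titles, key=len)

print(headline(["tick", "release day"]))  # picks "release day"

# At runtime the annotations are just metadata; nothing is enforced:
print(headline.__annotations__)
```

The design point is the same one the paragraph above makes: the annotations buy you compile-time (well, check-time) safety without giving up the dynamic runtime.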
One common thread in both Stevey’s rant and what I’ve personally experienced is that dynamic typing is the way to go when your program really needs to be extensible: if you have any sort of plugin architecture or long-lived servers with network protocols that evolve (hello Erlang), it’s really a lot more productive to use a dynamic typing system. However, I get annoyed every time I do some coding in Python or Erlang: it seems that 50% of the errors I make are type errors. While I certainly don’t believe that static type systems guarantee that “if it compiles, it works”, it’s foolish to say that they don’t help catch a large class of errors (especially if your type system’s as powerful as Haskell’s or OCaml’s), and it’s also foolish to say that unit tests are a replacement for a type system.
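To make that “50% of my errors are type errors” complaint concrete, here’s a tiny Python sketch (the function name is hypothetical): the wrong call is perfectly legal to write, and the mistake only surfaces when the code runs, which is precisely the class of error a static checker would flag before execution.

```python
def total_length(words):
    # Intended contract (nowhere enforced): words is a sequence of strings.
    return sum(len(w) for w in words)

print(total_length(["static", "dynamic"]))  # 13: the happy path

# Passing the wrong type is accepted when you write the call; the
# failure is deferred until the generator is actually consumed.
try:
    total_length(42)
except TypeError as err:
    print("runtime type error:", err)
```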
So, the question I want to ask is: why are programming languages today so polarised into either the static or the dynamic camp? The only languages I know of that strive to accommodate the benefits of both are Objective-C, Perl (though I’d say that writing Perl without use strict is an exercise in pain, since its only three types are scalars, arrays and hashes), and (gasp) Visual Basic. Programming languages and programming language research should’ve looked at integrating static and dynamic typing a long time ago. C’mon guys, it’s obvious to me that both approaches have good things to offer, and I ain’t that smart. I suspect a big reason they haven’t is religion: people on both sides are too blinded to even attempt to see each other’s point of view. How many academic papers have there been that address this question?
I hope that in five years, we’ll at least have one mainstream programming language that we can write production desktop and server applications in, one that offers the benefits of both static and dynamic typing. (Somebody shoot me, now I’m actually agreeing with Erik Meijer.) Perhaps a good start is for the current generation of programmers to actually admit that both approaches have their merit, rather than simply getting defensive whenever one system is critiqued. It was proved a long time ago that dynamic typing is simply staged type inference and can be subsumed as part of a good-enough static type system. Point to static typing. However, dynamic typing is also essential for distributed programming and extensibility. Point to dynamic typing. Get over it, type zealots.
P.S. Google Fight reckons that dynamic typing beats static typing. C’mon Haskell and C++ guys, unite! You’re on the same side! Down with those Pythonistas and Rubymongers! And, uhh, down with Smalltalk and LISP too, even though they rule! (Now I’m just confusing myself.)