Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Linux Users of Victoria (LUV) Announce: LUV July Meeting: La Trobe Valley Linux Miniconf and installfest

Mon, 2014-07-07 15:29

Saturday, July 19, 2014, 9:45 AM to 5:15 PM
St. Mary's Anglican Church, 6-8 La Trobe Road, Morwell

Linux Australia, in partnership with Linux Users of Victoria, is pleased to invite you to the inaugural La Trobe Valley Linux Miniconf and installfest, which will also see the formation of a new regional chapter of Linux users specifically for the La Trobe Valley region.

The lecture program includes "Why Linux Is The Future of Computing", "Moving from MS-Windows to Linux", "The Variety of Linux Distributions and Desktops", "LibreOffice and Other Office Applications", "Computer Security" and more.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Andrew Pollock: [life] Day 158: Piedmont Park

Mon, 2014-07-07 11:25

I went to bed ridiculously late last night, and because we had to get up relatively early to try and get away to Piedmont Park for the morning and be back in time for lunch and James' nap, this morning was a bit tough.

Piedmont Park was gorgeous. It's the Central Park of Atlanta apparently. It had a really huge dog park, and some lakes and a playground. Most importantly, it had a big fountain the kids could run through.

Everyone had a great time playing in the fountain, which was the last stop at the park before we turned around and headed back to the car.

I'm really loving how green and leafy Atlanta is. I realised today what I really love about Chris and Briana's house is it feels like it's in the middle of a forest. Looking out the window from the upstairs bathroom, all you can really see is trees. Lovely, straight and tall trees with big green leaves. Not quite as nice as a redwood forest, but pretty close.

The girls played "makeup over" after lunch, and then Briana took them out for frozen yogurt while James napped. After James woke up, Chris and James and I ran some errands. The girls had all gotten back by the time we got back, and were playing in the front yard.

I called a bunch of my friends while I wasn't having to entertain Zoe and caught up with them. It's good to be in a more conducive timezone for catching up with people.

I managed to get Zoe to bed at a more normal hour tonight, because we need to be up pretty early to get to the aquarium in the morning.

Lev Lafayette: The Innovation Patent Review and Free Software

Mon, 2014-07-07 10:29

Presentation to Linux Users of Victoria, 1st July, 2014

1. About Patents

A definition from the World Intellectual Property Organisation "A patent is an exclusive right granted for an invention, which is a product or a process that provides, in general, a new way of doing something, or offers a new technical solution to a problem. To get a patent, technical information about the invention must be disclosed to the public in a patent application." [1]


Arjen Lentz: Probabilistic vs Pilot-Wave view of Quantum Physics | WIRED

Mon, 2014-07-07 10:25

http://www.wired.com/2014/06/the-new-quantum-reality/

Interesting. I’ve always had issues with the probabilistic view, and I like the pilot wave (Bohmian) view. It makes more sense in my head. That doesn’t mean it’s right, of course, but just saying – I find it more elegant and satisfactory. It doesn’t require magic.

It’s important to realise that both views are of the same quantum physics. Feynman said “no one truly understands quantum mechanics”.

I reckon it’s worthwhile putting way more research into the pilot-wave view again, as it may well help resolve other open problems such as a unified theory. Different perspectives often help to do that (and that’s even the case if they’re wrong!)

Also consider this tidbit:

In a groundbreaking experiment, the Paris researchers used the droplet setup to demonstrate single- and double-slit interference. They discovered that when a droplet bounces toward a pair of openings in a damlike barrier, it passes through only one slit or the other, while the pilot wave passes through both. Repeated trials show that the overlapping wavefronts of the pilot wave steer the droplets to certain places and never to locations in between — an apparent replication of the interference pattern in the quantum double-slit experiment that Feynman described as “impossible … to explain in any classical way.” And just as measuring the trajectories of particles seems to “collapse” their simultaneous realities, disturbing the pilot wave in the bouncing-droplet experiment destroys the interference pattern.


Sridhar Dhanapalan: Twitter posts: 2014-06-30 to 2014-07-06

Mon, 2014-07-07 00:27

Matt Palmer: Witness the security of this fully DNSSEC-enabled zone!

Sun, 2014-07-06 20:25

After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of.

One thing that made my job a little less ordinary is that I use and love tinydns. It’s an amazingly small and simple authoritative DNS server, strong in the Unix tradition of “do one thing and do it well”. Unfortunately, DNSSEC is anything but “small and simple” and so tinydns doesn’t support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely.

A brief aside about tinydns and DNSSEC, if I may… Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story.

Once I’d patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps (a command-level sketch follows the list):

  1. Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute.

    One thing to be wary of if, like me, you don’t want or need separate “Key Signing” and “Zone Signing” keys: you must generate a “Key Signing” key – that is, a key with a “flags” value of 257. Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a “Zone Signing” key, which has a “flags” value of 256. Big mistake.

    Also, the DS record is a hash of everything in the DNSKEY record, so don’t just think you can change the 256 to a 257 and everything will still work. It won’t.

  2. Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it’s all covered in the tinydnssec howto).

  3. Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).

  4. Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it’s looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you’ll want to fix those – because broken DNSSEC will take your domain off the Internet in no time.

  5. Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain’s parent. Wherever you’d normally go to edit the nameservers or contact details for your domain, you probably want to go to the same place and look for something about “DS” or “Domain Signer” records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.

  6. Test again. Before you pat yourself on the back, make sure you’ve got a full board of green ticks in the DNSSEC Debugger. If anything’s wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.
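
To make the list concrete, here’s a rough command-line sketch of the key and publishing steps. It’s only a sketch: I’m showing BIND’s dnssec-keygen and dnssec-dsfromkey for key generation (the tinydnssec howto ships its own helpers, whose invocations may differ), and the zone name, key file names and server paths are all made up for illustration.

# Step 1: generate a Key Signing Key (the -f KSK flag is what sets flags=257)
dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.org
# writes Kexample.org.+008+12345.key (the DNSKEY record) and
# Kexample.org.+008+12345.private (the part you keep to yourself)

# Step 5: derive the DS record to paste into your registrar's control panel
dnssec-dsfromkey Kexample.org.+008+12345.key

# Step 3: sign the zone data, rebuild data.cdb, push it out
cd /etc/tinydns/root
tinydns-sign < data.plain > data    # illustrative; see the tinydnssec howto for the exact invocation
tinydns-data
rsync -a data.cdb dns1.example.org:/etc/tinydns/root/

# Steps 4 and 6: a validating resolver should set the "ad" flag for the zone
dig +dnssec example.org SOA @8.8.8.8 | grep flags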

That’s it! There’s a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.

Russell Coker: Desktop Publishing is Wrong

Sun, 2014-07-06 17:26

When I first started using computers a “word processor” was a program that edited text. The most common and affordable printers were dot-matrix and people who wanted good quality printing used daisy wheel printers. Text from a word processor was sent to a printer a letter at a time. The options for fancy printing were bold and italic (for dot-matrix), underlines, and the use of spaces to justify text.

It really wasn’t much good if you wanted to include pictures, graphs, or tables. But if you just wanted to write some text it worked really well.

When you were editing text it was typical that the entire screen (25 rows of 80 columns) would be filled with the text you were writing. Some word processors used 2 or 3 lines at the top or bottom of the screen to display status information.

Some time after that, desktop publishing (DTP) programs became available. Initially most people had no interest in them because of the lack of suitable printers: the early LASER printers were very expensive, and the graphics mode of dot matrix printers was slow to print and gave fairly low quality. Printing graphics on a cheap dot matrix printer using the thin continuous paper usually resulted in damaging the paper – a bad result that wasn’t worth the effort.

When LASER and Inkjet printers started to become common, word processing programs started getting many more features and basically took over from desktop publishing programs. This made them slower and more cumbersome to use. For example Star Office/OpenOffice/LibreOffice has distinguished itself by remaining equally slow as it transitioned from running on an OS/2 system with 16M of RAM in the early ’90s to a Linux system with 256M of RAM in the late ’90s to a Linux system with 1G of RAM in more recent times. It’s nice that with the development of PCs that have AMD64 CPUs and 4G+ of RAM we have finally managed to increase PC power faster than LibreOffice can consume it. But it would be nicer if they could optimise for the common cases. LibreOffice isn’t the only culprit; it seems that every word processor that has been in continual development for that period of time has had the same feature bloat.

The DTP features that made word processing programs so much slower also required more menus to control them. So instead of just having text on the screen with maybe a couple of lines for status we have a menu bar at the top followed by a couple of lines of “toolbars”, then a line showing how much width of the screen is used for margins. At the bottom of the screen there’s a search bar and a status bar.

Screen Layout

By definition the operation of a DTP program will be based around the size of the paper to be used. The default for this is A4 (or “Letter” in the US) in a “portrait” layout (higher than it is wide). The cheapest (and therefore most common) monitors in use are designed for displaying wide-screen 16:9 ratio movies. So we have images of A4 paper with a width:height ratio of 0.707:1 displayed on a wide-screen monitor with a 1.777:1 ratio. This means that only about 40% of the screen space would be used if you don’t zoom in (but if you zoom in then you can’t see many rows of text on the screen). One of the stupid ways this is used is by companies that send around word processing documents when plain text files would do, so everyone who reads the document uses a small portion of the screen space and a large portion of the email bandwidth.
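
(The arithmetic: fitting an A4 page to the full height of a 16:9 screen, the page occupies 0.707 / 1.777 ≈ 0.4 of the screen width, so only about 40% of the screen area actually shows the page.)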

Note that this problem of wasted screen space isn’t specific to DTP programs. When I use the Google Keep website [1] to edit notes on my PC they take up a small fraction of the screen space (about 1/3 screen width and 80% screen height) for no good reason. Keep displays about 70 characters per line and 36 lines per page. Really every program that allows editing moderate amounts of text should allow more than 80 characters per line if the screen is large enough and as many lines as fit on the screen.

One way to alleviate the screen waste on DTP programs is to use a “landscape” layout for the paper. This is something that all modern printers support (AFAIK the only printers you can buy nowadays are LASER and ink-jet and it’s just a big image that gets sent to the printer). I tried to do this with LibreOffice but couldn’t figure out how. I’m sure that someone will comment and tell me I’m stupid for missing it, but I think that when someone with my experience of computers can’t easily figure out how to perform what should be a simple task then it’s unreasonably difficult for the vast majority of computer users who just want to print a document.

When trying to work out how to use landscape layout in LibreOffice I discovered the “Web Layout” option in the “View” menu which allows all the screen space to be used for text (apart from the menu bar, tool bars, etc). That also means that there are no page breaks! That means I can use LibreOffice to just write text, take advantage of the spelling and grammar correcting features, and only have screen space wasted by the tool bars and menus etc.

I never worked out how to get Google Docs to use a landscape document or a single webpage view. That’s especially disappointing given that the proportion of documents that are printed from Google Docs is probably much lower than most word processing or DTP programs.

What I Want

What I’d like to have is a word processing program that’s suitable for writing draft blog posts and magazine articles. For blog posts most of the formatting is done by the blog software and for magazine articles the editorial policy demands plain text in most situations, so there’s no possible benefit of DTP features.

The ability to edit a document on an Android phone and on a Linux PC is a good feature. While the size of a phone screen limits what can be done it does allow jotting down ideas and correcting mistakes. I previously wrote about using Google Keep on a phone for lecture notes [2]. In practice, editing notes in Keep on a PC seems to top out at about the notes for a 45 minute lecture. So while Keep works well for that task it won’t do well for anything bigger unless Google make some changes.

Google Docs is quite good for editing medium size documents on a phone if you use the Android app. Given the limitations of the device size and input capabilities it works really well. But it’s not much good for use on a PC.

I’ve seen a positive review of One Note from Microsoft [3]. But apart from the fact that it’s from Microsoft (with all the issues that involves) there’s the issue of requiring another account. Using an Android phone requires a Gmail account (in practice for almost all possible uses if not in theory) so there’s no need to get an extra account for Google Keep or Docs.

What would be ideal is an Android editor that could talk to a cloud service that I run (maybe using WebDAV) and which could use the same data as a Linux-X11 application.

Any suggestions?


Ben Martin: Ending up Small

Sun, 2014-07-06 14:27
It's easy to get swept up in trying to build a robot that has autonomy and the proximity, environment detection and inference, and feedback mechanisms that go along with that. There's something to be said for the fun of a direct-drive robot with just power control and no feedback. So Tiny Tim was born! For reference, his wheels are 4 inches in diameter.

An Uno is used with an analog joystick shield to drive Tim. He is not intended to move autonomously, and probably never will be. Direct command only. Packets are sent over a wireless link from the controller (5v) to Tim (3v3). Onboard Tim is an 8MHz/3v3 Pro Micro which I got back on Arduino day :) The motors are driven by a 1A Dual TB6612FNG Motor Driver which is operated very much like an L298 dual H-bridge (2 direction pins and a PWM). Tim's Pro Micro also talks to an OLED screen and his wireless board is put out behind the battery to try to isolate it a little from the metal. The OLED screen needed 3v3 signals, so Tim became a 3v3 logic robot.

He is missing a hub mount, so the wheel on the left is just sitting on the mini gearmotor's shaft. At the other end of the channel is a Tamiya Omni Wheel which tilts the body slightly forward. I've put the battery at the back to try to make sure he doesn't flip over during hard braking.

A custom PCB would remove most of the wires on Tim and be more robust. But most of Tim is currently put together from random bits that were available. The channel and beams should have a warning letting you know what can happen once you bolt a few of them together ;)

Andrew Pollock: [life] Day 157: Pottering around the house and more time in the pool

Sun, 2014-07-06 13:25

I managed to sleep all night last night! I felt so much better today as a result.

It was a beautiful day today, with unseasonably low temperature and humidity.

The girls played around in the back yard for a bit in the morning once everyone got up and got going for the day, and after lunch we were going to go to a local neighbourhood park, but only got as far as the pool instead.

All of this time in the pool is certainly doing wonders for Zoe's confidence in deep water. She did lots of deep diving and swimming today.

I realised how much I'm enjoying the limited downtime of this holiday. Just the break from having to cook is massive. Skipping out on cooking but still having heaps of quality time with Zoe has been fantastic. I'm also really loving the leafy green neighbourhood.

I took the Peikerts out for dinner tonight to say thanks for having us. We had a nice dinner at a local bar. After we got home and all the kids were bathed, Zoe wanted to watch Clara practice the piano, which devolved into an impromptu piano lesson for Zoe, so I didn't end up getting her to bed until quite late. Hopefully we can all sleep in a bit tomorrow morning.

Michael Davies: LCA2015 CFP Closing Real Soon Now

Sun, 2014-07-06 11:54
It's July, which means the LCA2015 CFP is open... but not for much longer.

I've been reading through what's been submitted so far, and it looks like linux.conf.au will again have an excellent program.  But, as Co-Chair of the Papers Committee, I want the program to be even better! :-)

So if you're working on an open-source or open-hardware project, and you're doing cool stuff, why not come to Auckland in January and speak at one of the best community-driven open-source conferences in the world? We've got some great information on how to get your proposal accepted (also in video) to help you put your proposal together.

But to be a speaker at LCA2015 you need to submit a proposal via the CFP (which closes next Friday, July 13). So hurry up, and submit your proposal today!

David Rowe: Democratising HF Radio Part 1

Sun, 2014-07-06 08:29

I recently submitted a Shuttleworth Fellowship grant application. I had planned to use the funding to employ people and accelerate the rollout of the project described below. I just heard that my application was unsuccessful (they wanted something more experimental). Never mind, the idea lives on!

I’m exploring some novel ideas for messaging over 100 km ranges in unconnected parts of the developing world. Radios are migrating from hardware to software, making the remaining hardware component very simple. Software can be free, so radio communication can be built at very low cost, and possibly by local people in the developing world. Even from e-waste.

I have a theory that this can address the huge problem of “distribution”. I’ve been involved in a few projects where well-meaning geeks have tried to help people using technology. However, we get wound up in our own technology. If you have a hammer, every problem is a nail. I think we have the technology – it’s physically getting it into people’s hands at the right cost, and in a way that they can control and maintain it, that is the problem. I also hit this problem in my small business career – it’s called “distribution”, and it was really tough in that field as well.

Here is the video part of a Shuttleworth Fellowship grant application:

And here are the slides.

I’ll be moving this project forward in 2015. The world needs to get connected.

Andrew Pollock: [life] Day 156: Independence Day

Sun, 2014-07-06 02:25

Zoe woke up briefly at about 1:30am wanting her blankets pulled up. I managed to get back to sleep after that, but woke up at 4:30am for the day. A slight improvement.

I'd wanted to time our visit so that we could be here for the 4th of July. We pottered around at home in the morning and then walked down the street to the neighbourhood pool for a 4th of July ice cream social. It was a bit more than ice cream though; we had sandwiches as well.

After some time in the pool, we had a nap so we'd last until after the fireworks, and after an early dinner, headed into downtown Decatur in preparation for the fireworks.

The girls had a great time playing in a local park for a while, and then we parked over in a multi-story parking garage downtown, and got into position on the roof to watch the fireworks.

We joined a friend of Chris' (also named Chris) and his two daughters, while we waited for it to get dark. Chris, Chris and I played Carcassonne on Chris' phone (it looks like a cool game, I'll have to try and get better at it) while the girls played cards and generally fooled around.

The fireworks started a bit after 9pm. There's one thing about America: it knows how to put on a good fireworks show, and this was just Decatur. Zoe was very impressed. So I'm glad I got to show her a 4th of July fireworks show.

We got home by about 10:30pm, and crashed.

Peter Miller: Phasmatodea: Ripe Cages

Sat, 2014-07-05 01:25
I reared phasmids for 14 years, and along the way I noticed a few things. My first phasmids were adult females of Extatosoma tiaratum tiaratum and Eurycnema goliath; well, doesn’t everyone? This was 1997. I kept a journal and notes, and I was able to pull some population studies from the data. I just […]

Brad Hards: OpenChange at Exchange RPC Plugfest (24-27 Jan 2011)

Fri, 2014-07-04 15:26

Late last month, I was fortunate to be invited to Microsoft for the Exchange RPC Plugfest as part of the OpenChange team.

I decided to arrive into Redmond a few days early to get over the worst of the jetlag, and to spend some pre-plugfest hacking time with Julien Kerihuel and Jelmer Vernooij. That produced some excellent planning work, and a bit of new code. I also caught up with Tom Devey (a Microsoft consultant, who is basically the customer representative for Exchange RPC protocols) and enjoyed a nice wood-fired pizza lunch, and some night skiing at Stevens Pass. My skiing was never really good, and lack of practice and some pretty unusual conditions didn't help. It was still a lot of fun though.

The Exchange RPC plugfest kicked off on Monday (24 January 2011), with breakfast and getting oriented in the lab environment. The labs we used were in the "Platform Adoption Center" (Building 20 on the Microsoft campus at Redmond), which is a nice facility for this kind of thing. Monday afternoon had presentations from Simon Xiao and Xianming Xu, who were part of the team from Microsoft (well, a contract / outsource organisation paid by Microsoft) on the test suite we'd be using to test OpenChange with for the rest of the week. In this context, the testing is aimed at protocol specification compliance. Microsoft has similar tests for some of the windows protocols, but the suite we used was specific to the Exchange RPC ("MAPI") wire-protocol.

Over the course of the week, the test suites showed us quite a number of issues (from minor non-compliances through to crashing the server, and occasionally causing problems with the test suite). We fixed some of it during the plugfest, and we also did some refactoring of the backend store that will allow us to address other issues and build a more stable and reliable server.

We typically ran tests from 0830 through to about 1900 (sometimes a bit earlier / later) on Tuesday, Wednesday and Thursday. Microsoft had also arranged Exchange developers (Darrell Brunsch, Joe Warren and Juan Pablo Muraira) to give presentations, which helped enormously with understanding the protocol. Although it took time away from the testing, it was still well worth while to both attend the talks and to discuss things before and after the talks.

We also "shared" talks with the Active Directory plugfest that was going on at the same time; presentations by Nick Meier (one of only a few Linux guys inside Microsoft) and Paul Long (on Netmon), amongst other talks, helped a lot. Seeing the inside of the "Enterprise Engineering Center" was also illuminating - thanks to Darryl Welch for organising the tour and the EEC team for hosting us.

It was a real shame to miss LCA 2011 (since I'd already committed to the Plugfest when the LCA dates changed), but given what we learned about the OpenChange server, then I'd make the same decision again.

Thanks to everyone who attended and contributed, but particularly to Virginia Bing and Tom Devey for all the organisational work and putting on the event.

Brad Hards: dealing with Microsoft Exchange, when you really want to use SMTP

Fri, 2014-07-04 15:26

I've been doing some work on OpenChange, including parsing RFC2822 format messages into Exchange RPC properties. One of the test tools I have can parse up some kinds of RFC2822 / MIME messages (plain text, HTML, some mime/alternative and text/calendar) and upload the results to a Microsoft Exchange server as a particular user.

With a little bit of unix-style tool combination, you can plug this into something like Postfix. So if you're in a situation where you want to integrate a tool that wants to send mail via SMTP, but your network is pretty much "Exchange" (and you can't send SMTP directly), then this might be useful.

When using Postfix, you'd add a transport entry, either a wildcard:

* oc:localhost

or a restricted list if you like.
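
For example, a restricted list that routes only one domain (example.com is just a placeholder here) through the OpenChange transport would look like:

example.com oc:localhost
.example.com oc:localhost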

Enable the transport mapping in main.cf:

transport_maps = hash:/etc/postfix/transport
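
Since the transport table is a hash: map, the usual Postfix step applies - rebuild the lookup file and reload after editing:

postmap /etc/postfix/transport
postfix reload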

Then add an entry to master.cf to match the required transport:

oc        unix  -       n       n       -       1       pipe
  flags= user=bradh argv=/path/to/script.sh

where the script receives the mail, and invokes the oxcmail_test application:

#!/bin/sh
SCRIPT_TMP_DIR=/var/spool/myscript
OC_SEND="/home/bradh/openchange/branches/oxcmail/liboxcmail/test/oxcmail_test --dump-data -d9 --database=/home/bradh/.openchange/profiles.ldb"
# you could also choose to remove $SCRIPT_TMP_DIR/oc_send.$$.log if you don't need the records
trap "rm -f $SCRIPT_TMP_DIR/$$.msg" 0 1 2 3 15
cat > $SCRIPT_TMP_DIR/$$.msg
$OC_SEND $SCRIPT_TMP_DIR/$$.msg > $SCRIPT_TMP_DIR/oc_send.$$.log
exit $?
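
To sanity-check the script outside Postfix, you can feed it a saved RFC2822 message on stdin (test.msg is just a placeholder name) and check the exit status:

/path/to/script.sh < test.msg
echo $?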

I would caution that the code is pretty experimental - it's still in a branch for a reason! However, I would appreciate some external testers if you're feeling particularly brave, and have the right kind of environment for testing this. You can pick up the code from svn (http://svnmirror.openchange.org/openchange/branches/oxcmail) and build it using the instructions on the OpenChange wiki. In addition, note that you can't usually set the From address, so you'll probably be restricted to sending from one pre-configured user (or perhaps a small set of users).

If you have any questions, try to catch me on IRC (#openchange on freenode), leave blog comments or use the devel mailing list.

Future plans involve a sendmail-like application for those applications that want to use sendmail instead of SMTP. Hopefully that will allow me to send from KMail without needing to go via the SMTP path.

Brad Hards: Openchange goes to Redmond, and a shout out to Inverse / SoGo

Fri, 2014-07-04 15:26

Julien and I met up in Redmond last week, just before the Exchange Open Specifications event at Microsoft. It was a productive time, where we did some serious planning and a little coding, and learned quite a lot more about the protocols from some of the main developers of Exchange and Outlook. Thanks to Microsoft for hosting it.

I also wanted to highlight some impressive work from the people at Inverse (in particular, Wolfgang Sourdeau) in building a backend for the SoGo groupware suite that uses OpenChange to provide native Outlook connectivity (using Exchange RPC) to the SoGo server. There is a screencast video that shows access to the SoGo server via Outlook and Firefox (web UI). The video goes by quite quickly, but you can see new folder creation, messages moved between folders and creation of a new contact.

There is still a way to go, but this is a very promising start. It also gives us confidence in the OpenChange server architecture, and shows where we need to focus our attention.

Brad Hards: OpenChange team meeting

Fri, 2014-07-04 15:26

The OpenChange team had a short online (IRC) meeting on Friday. The meeting record is at http://tracker.openchange.org/projects/openchange/wiki/Meeting_of_2010-07-30

We're considering holding an "open session" meeting (again on IRC), possibly in a couple of weeks. If you'd be interested in attending, please leave a comment on the best days and times (relative to UTC) so we can accommodate as many people as possible.

Brad Hards: Trying OpenChange server, easy way

Fri, 2014-07-04 15:26

OpenChange is an important project, but it does require quite a lot of work to get it all to build. We're working on the process, but in the meantime, we (ok, Julien Kerihuel, with nothing from me except encouragement) have built a VirtualBox image that provides OpenChange all built, configured, set up and ready to try.

See http://tracker.openchange.org/projects/openchange/wiki/OpenChange_Appliance for the download (ftp or rsync) location and setup procedures.

Have fun, and let us know how it goes!

Brad Hards: Recent happenings in OpenChange

Fri, 2014-07-04 15:26

I haven't been doing a lot of KDE stuff recently (happy user, although I'd be happier if I could find some extra time for development...). Instead, I've been doing some "real" work, and also some OpenChange work. [For those that tuned in late, OpenChange is an implementation of the Exchange RPC protocols on both the client (i.e. "Outlook") and server (i.e. "Microsoft Exchange") sides of the network protocol.]

Some recent changes:

- we've arranged some new infrastructure servers, with the help of the Software Freedom Conservancy and FSF France (especially thanks to Loic Dachary for his work on this)

- we've migrated from Trac to Redmine, which is working out pretty well so far

- got the buildbot back up (on http://buildbot.openchange.org:8010 for those who'd like to check it out)

- significant progress on the server side (including getting Outlook to show a message), although there is still a lot to do here

- better handling of some data types (especially unicode strings)

- a lot of little code cleanups

Thanks to the Novell (Evolution) team for their work in identifying issues and suggesting fixes, and to the Microsoft Open Specification team for following up on our obscure questions.

Future (near-term) plans mainly focus on server-side functionality, and fixing up some of the current bugs that are biting client users.

Brad Hards: Interoperability with Microsoft File Formats.

Fri, 2014-07-04 15:26

I recently realised that much of the code I find interesting is about interoperability. That is, I'm interested in making sure we can get at data in a range of formats. Work on libtiff, poppler, okular generators and openchange are all examples of that. I also like Qt as a very nice cross-platform API. The convergence of those interests is having Qt-style libraries and tools that can get access to data, especially data in widely used proprietary formats (e.g. those produced by Microsoft products).

I've set up a gitorious repository (http://gitorious.org/microsoft-qt-interop/microsoft-qt-interop) for some of that stuff.

At the moment, it mainly has a Compound File Binary Format (aka "OLE") parser, written from the MS-CFB specification.

I plan to add an EMF ("Enhanced Metafile") format parser / renderer (already written and currently used in KOffice) at some point too - just need to find some more time.

There are a lot more things that could go in there (e.g. converting the various things in MS-DTYP into Qt equivalents), but I've only implemented those things I actually need.

Contributions are welcome - I'm pretty flexible on format. If you have some suggestions, please add them to the project wiki on gitorious.