linux.conf.au 2002
February 6-9
University of Queensland
Linux.conf.au 2002 Abstracts
Erik will provide an introduction to digital signal processing of audio signals, ranging from a brief introduction to the physics of sound to the workings and implementation of audio effects such as chorus and reverb, using GNU Octave and other Free Software packages.
Attendees are encouraged to bring along a Linux laptop with the following packages and tools installed:
It would help if the laptop has a CDROM or floppy drive and working sound output.
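The tutorial itself uses GNU Octave, but the core idea behind effects like chorus and reverb can be sketched in a few lines of Python. This is an illustrative sketch only; the function and parameter names are our own, not the tutorial's.

```python
# Minimal sketch of a feedback-delay (echo) effect, the basic building
# block behind chorus and reverb. Illustrative only; the tutorial
# itself works in GNU Octave.

def echo(samples, delay, feedback=0.5):
    """Mix each input sample with a delayed, attenuated copy of the output."""
    out = []
    for i, s in enumerate(samples):
        delayed = out[i - delay] if i >= delay else 0.0
        out.append(s + feedback * delayed)
    return out

# A unit impulse produces a decaying train of echoes.
impulse = [1.0] + [0.0] * 9
print(echo(impulse, delay=3, feedback=0.5))
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

A chorus effect replaces the fixed `delay` with one that varies slowly over time; reverb sums many such delay lines with different lengths and gains.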
Linux makes a great platform for deploying diskless workstations that boot from a network server. The Linux Terminal Server Project (http://www.LTSP.org) is an open source project whose focus is to provide everything you need to run thin client computers in a GNU/Linux environment.
The LTSP has been deployed in tens of thousands of locations throughout the world, in homes, small offices, large offices, schools, universities, libraries, government agencies, cyber cafes and more.
The LTSP is an excellent way to deploy large numbers of workstations very inexpensively. By eliminating the moving parts from a workstation, the reliability is greatly increased. Also, there are huge cost savings, both in terms of initial cost of deployment and total cost of ownership (TCO).
The proposed tutorial covers all aspects of setting up diskless thin clients using LTSP.
Starting with a quick introduction and demonstration of the capabilities of an LTSP workstation, we then move right into installing LTSP on a fresh Linux system. We'll go through the entire installation process, including configuration and troubleshooting. We will cover various methods of loading the kernel into the memory of the workstation, including Etherboot, PXE, MBA, Flash and Floppy disk.
Attendees will learn about installing LTSP, burning boot ROMs, configuring DHCP, TFTP, NFS and X Windows, compiling kernels, security, troubleshooting the boot process and tuning the server for many users. Sizing and scalability will also be addressed.
We will also show how to configure the thin client to run applications locally, utilizing the hardware on each desk.
Application programs, such as StarOffice and Mozilla will be discussed and demonstrated.
In the past, this has been a very interactive and exciting tutorial, resulting in the audience gaining an excellent understanding of running thin clients on Linux.
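To give a feel for the DHCP step the tutorial covers, a dhcpd.conf fragment for booting a diskless client might look something like the following. All addresses, paths and filenames here are illustrative examples, not LTSP defaults.

```conf
# Illustrative dhcpd.conf fragment for a diskless thin-client boot:
# the workstation gets an address, fetches a kernel over TFTP and
# mounts its root filesystem over NFS.
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.199;
    option root-path "192.168.0.1:/opt/ltsp/i386";
    filename "/lts/vmlinuz.ltsp";
    next-server 192.168.0.1;
}
```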
PHP is a popular scripting language designed specifically for creating web-based applications. The first half of this fast-paced tutorial, taught by the original developer of the language, will quickly run through the basic characteristics of the language and demonstrate how to use PHP to create images, PDFs and Flash on the fly. In the second half things will get more complex with topics such as SQL, LDAP, XML, XSL/XSLT, plus a peek into the internals of PHP's extension API. Attendees should have a basic understanding of HTML and the basic HTTP client/server model. For the latter topics a bit of C programming experience would be helpful.
PostgreSQL and Berkeley DB3 offer developers two sophisticated alternatives to proprietary database technologies. They are feature-rich, reliable and widely used in industry.
Yet these two databases differ greatly in design. Most apparently: PostgreSQL is an object-relational database with an SQL interface, while Berkeley DB is an embedded database.
While lightly addressing this basic difference in my tutorial, I will take a more in-depth look at usage of these two database systems in open source projects. Firstly, why have the masses swarmed on MySQL and not PostgreSQL? What is the trend now and what should it be? Secondly -- and more importantly -- why do developers tend to use an SQL database as opposed to an embedded database? I will look at implementation issues surrounding this choice. This will focus on a computational cost analysis of the two databases with particular reference to the parsing overhead of SQL. Complementing this will be a discussion of the problems surrounding the deployment of an application which uses a TCP/UNIX Domain Socket driven database (as is the case with PostgreSQL) as opposed to an embedded database (Berkeley DB) which does not suffer the same problems for simple applications.
This completed, the tutorial will look extensively at the C language APIs for both databases.
For PostgreSQL this will include coverage of: database connection functions, query execution (including the asynchronous processing interface), data retrieval, large objects and debugging.
For the Berkeley DB, the tutorial will cover: access methodology (including cursors), the database environment, transactional processing and distributed databases.
Each analysis will be accompanied by relevant code examples.
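The SQL-versus-embedded distinction at the heart of the tutorial can be illustrated without the C APIs themselves. Python's standard-library `dbm` module (historically backed by Berkeley-style hash databases) shows the embedded access pattern: the database is a library call inside the application's own process, with no server, no socket round-trip, and no SQL parsing overhead. This is a generic sketch, not the tutorial's material.

```python
import dbm

# An embedded database runs in-process: opening it is a function call,
# not a connection to a server. Python's dbm module illustrates the
# key/value access pattern that Berkeley DB provides.
with dbm.open('demo.db', 'c') as db:      # 'c' = create if missing
    db[b'conference'] = b'linux.conf.au'
    db[b'year'] = b'2002'

with dbm.open('demo.db', 'r') as db:      # reopen read-only
    print(db[b'year'].decode())           # -> 2002
```

Compare this with PostgreSQL, where every query crosses a TCP or Unix-domain socket and passes through the SQL parser before any data is touched.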
Finally, I will discuss the future of these two projects.
This will be a practical demonstration of setting up LDAP in a networked environment. Participants are welcome to bring along a laptop or other computer running Linux and join in the fun.
This tutorial will cover the following topics:
In January 1992 as a procrastinating student at the Australian National University in Canberra I released "Server 1.0", now better known as Samba. This talk will dredge up some of the stranger events that have marked the history of Samba and perhaps inadvertently give some insight into the software development process.
The major focus for many Linux developers has been in those parts of the kernel that support server type applications. However a rapidly expanding area of Linux development has been in the desktop applications area. A key aspect of this has been in the use of USB peripherals - especially keyboards, mice and mass storage devices. The hot-plug nature of USB devices has caused some of the underlying assumptions about Unix type devices to be re-assessed, and new or modified approaches to be adopted.
Linux USB development started in 1997, and has had a somewhat tortured history, with splintered development and several re-writes. It captures many of the issues associated with open source development - difficulty in obtaining device programming information, differences in opinion between developers concerning architecture, devices that are "almost but not quite" compliant to published specifications, and the difficulty in finding enough time to get all that coding, testing and documentation done.
The introduction of USB 2.0, with significantly higher speeds, has required some new drivers, but has also identified speed and efficiency problems with some of the existing drivers. Future enhancements, including new drivers and improvements to existing ones, seem likely. As Linux emerges as an important part of the desktop market, more manufacturer support is expected.
In this presentation, Chris DiBona will go over what it takes to install, configure and customize the "Slashdot Like Automated Storytelling Homepage" software. SLASH is the software which slashdot.org uses to serve upwards of 2 million page views a day.
SLASH is exceptionally powerful software designed to stand up to amazing amounts of abuse and contains extensive facilities for defeating abusive scripts, trolls and crackers. Being very powerful and complex software means that SLASH can be daunting for some during install and operation. By attending this session you will find that it does not have to be this way. As part of this talk you will learn the kinds of things to look out for and how to avoid pitfalls one can encounter during installation and use.
As an adjunct to this presentation Chris will talk about how the slashdot.org site works from an editorial and systems management perspective, as well as the demands a site like Slashdot puts on hardware, machines and people.
In the open source community, programs are assumed to be bug free and secure by default, when often this is not the case. It is rare to find "make check", the consistent use of defect tracking software, and secure design methodologies and techniques in most open source projects.
As more critical functions are being hosted upon open source platforms, it is essential that both local and remote exploits are minimised.
A comprehensive security audit of setuid/setgid tools in a Red Hat 7.1 default server installation reveals many basic security flaws in tool design and implementation.
The paper presents three case studies, and provides developers with a set of guidelines to use when developing secure software. The presentation will go through common faults at a high level, and provide an action list for developers to take away and use in their own open source projects.
In recent years, the success of Linux has obscured the fact that many components of the original UNIX were available in source code without charge, and that one of the leading versions of traditional UNIX, BSD UNIX, has been available as Open Source for longer than Linux.
This paper investigates the freely available derivatives of BSD, also known as Berkeley UNIX, and compares them with Linux.
One of the shortcomings of NFS service in Linux is the handling of client authentication. The current implementation requires the system to keep complete knowledge of which clients have the filesystem mounted. Further, there is no provision for adding cryptographic authentication mechanisms such as Kerberos and RPCSEC/GSS.
This paper will discuss the changes necessary to the kernel NFS server to increase flexibility with authentication and, in particular, to overcome the above problems.
Hacking isn't just about writing software, it is also about using a soldering iron and power tools to modify your hardware. This paper presents some of the latest research work being performed at the Wearable Computer Lab, the Tinmith-evo5 system, which is an outdoor backpack computer designed for augmented reality environments. Using a head mounted display and high accuracy tracking devices, it is possible to allow a user to experience an artificial reality which combines both virtual and real worlds simultaneously.
We have designed new 3D user interface technology to use with the system, and the user controls the system using a custom built set of pinch gloves, with motion tracking from a video camera mounted on the head, with real time image processing. The gloves allow us to interact with the environment using natural hand gestures and metaphors, and do not require a keyboard or mouse for control. This system is one of the first in the world to create new technology to allow complex user interaction while walking around outdoors.
This paper discusses the software architecture which was used to implement the system, as well as the use of image processing libraries for target tracking, the construction and design of the Tinmith-Glove device, and the various other hardware components which are required. I have written much of the software myself and use freely available code for the rest of the system. Users will be able to use the information from the talk to construct their own virtual environments on the desktop using free software components.
An in-depth look at the ideas and code that hold Debian's "testing" suite together, from its initial genesis, through basic prototypes, to the "final" implementation and the couple of rewrites it's had since. The numerous optimisations used to make the ideas actually operate in an even vaguely acceptable amount of time will be examined, along with the various tricks and tools used in development and debugging (including malloc debugging, writing C extensions to perl and python, and libapt versus libdpkg).
Micro-Controller Linux is a set of patches for Linux that enable it to run on microprocessors without Memory Management Units (MMU). These types of processors have traditionally made up the bulk of processors used in embedded systems.
This session will cover the basic architecture of micro-controller Linux (uClinux), and in particular the changes required to deal with not having any memory management. The kernel, library and application level changes will be detailed and explained.
The process of developing and building embedded systems around uClinux will be covered. Particular emphasis will be given to the implications of running and developing without memory protection.
The focus will be on the practical issues of development, and will include coverage of tools required, porting Linux to different hardware platforms (including different CPU architectures), porting and developing device drivers, porting and developing network applications and user interfaces within the uClinux environment.
This session is aimed at giving not only an introduction to uClinux, but also a reasonable working knowledge to begin building systems based around it. It will be ideal for embedded systems engineers working with 32-bit MMU-less processors (for example embedded 68k, ColdFire, ARM, i960, Sparc, etc).
In the current environment, with an ever-growing push towards automation and redundancy, the size of networks is beginning to increase at an alarming rate. While most of us don't have problems at the scale of companies like Akamai or Google, who need to manage 8000+ servers, day-to-day system administration is becoming a daunting task.
In this talk I will cover some open source tools which can be used to automate both the building and day-to-day management of a large server farm.
This system is currently in use, managing the clients of Bulletproof Networks across Australia.
Details will be given on the current state of the system, in which going from machine delivery to deployment involves editing some config files on the main config server and then inserting a floppy disk into the virgin machine. After about 10 minutes the machine is ready to be shipped.
User-mode Linux (UML) is the port of Linux to its own system call interface resulting in a virtual Linux machine running in a set of Linux processes. This paper will briefly describe the design and implementation of UML and cover in somewhat more detail the work done in the year since LCA 2001. The rest of the paper will focus on the future development of UML and how it will straddle normally inviolable boundaries.
The work over the past year has concentrated on stabilization, but there have also been some notable new features. These include a port to Linux/ppc, the management console, and the copy-on-write (COW) block driver. The management console provides an interface to the kernel, giving access to the SysRq driver and its functions, and allowing devices to be plugged and unplugged from the virtual machine at run time. The COW block driver allows a UML block device to consist of a private writable device layered on a shared readonly device.
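The copy-on-write layering idea can be sketched in a few lines: reads fall through to the shared read-only backing device unless the block has been written, in which case the private copy wins. The class and names below are illustrative, not UML's actual driver structure.

```python
# Sketch of the copy-on-write idea behind UML's COW block driver.
# Reads fall through to a shared read-only backing device; writes go
# to a private per-instance layer, and the backing store is never touched.

class CowDevice:
    def __init__(self, backing):
        self.backing = backing      # shared, read-only blocks
        self.private = {}           # this instance's written blocks

    def read(self, n):
        return self.private.get(n, self.backing[n])

    def write(self, n, data):
        self.private[n] = data

base = ['blockA', 'blockB']         # one backing file, many UMLs
dev = CowDevice(base)
dev.write(1, 'modified')
print(dev.read(0), dev.read(1))     # blockA modified
print(base)                         # unchanged: ['blockA', 'blockB']
```

This is why many UML instances can share one large root filesystem image while each keeps only its own small set of changed blocks.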
Future UML work is varied, but there is a common theme, which is that it blurs boundaries between things that are normally considered quite separate. For example, UML can be packaged as a normal userspace library. Applications that link against it gain access to kernel facilities such as threading, virtual memory, and filesystems for their internal use. Such an application can be considered a process which has the Linux kernel available internally or a Linux virtual machine with a kernel thread that implements the application.
A UML instance may be spread across multiple hosts. This is a type of clustering, since the virtual machine has access to the combined resources of those hosts. If the host OSes are different, then an application inside the UML has access to all of the facilities of those OSes simultaneously.
The Australian Army currently fields two main information technology and telecommunication systems to commanders in the field, a telephone system (Parakeet) and an IT system (BCSS). Currently, these two systems are disparate with interconnection only at the WAN. In the commercial world there is a trend to integrate voice and data systems to provide a common transport mechanism. This convergence can be seen with applications such as VoIP.
This paper details the design and construction of the Parakeet Virtual Cable (PVC), a device that enables Parakeet telephone traffic to be carried across the BCSS local area infrastructure. The device consists of several custom printed circuit boards and an embedded Linux PC in the form of a uCSimm from Lineo. The paper will cover the various sub-systems comprising the device and a discussion of some novel solutions to the problems encountered due to the low processing power of the embedded Linux PC.
iproxy comprises a client-side proxy and a server-side proxy that allow arbitrary TCP/IP services to run over broadcast, multicast or unicast UDP. It was originally conceived as a method to configure servers that had not been given an IP address on the LAN, using a web-based interface.
This paper will focus on the implementation issues in sending and receiving UDP traffic when nothing is known about the network the machine is attached to. It will illustrate how to create simple multicast, broadcast and unicast UDP clients and servers. It will also discuss the problems and solutions realised when trying to carry TCP connections over UDP.
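As a taste of the kind of client/server pairs the paper illustrates, here is a minimal unicast UDP sender and receiver over loopback, sketched in Python rather than iproxy's own code. The commented socket options indicate, as assumptions about the variants the paper covers, where the broadcast and multicast cases would differ.

```python
import socket

# Minimal unicast UDP receiver/sender pair over loopback.
# For a broadcast sender, set SO_BROADCAST; for multicast, set
# IP_MULTICAST_TTL on the sender and join the group with
# IP_ADD_MEMBERSHIP on the receiver. This sketch is ours, not iproxy's.

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(('127.0.0.1', 0))                  # let the kernel pick a port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # broadcast variant
send.sendto(b'hello', ('127.0.0.1', port))

data, addr = recv.recvfrom(1500)
print(data)                                  # b'hello'
send.close(); recv.close()
```

UDP is connectionless and unreliable, which is exactly why carrying TCP over it (as the paper discusses) requires the proxies to handle loss and ordering themselves.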
802.11b is an IEEE standard for wireless local area networks. It has become quite popular due to the convenience of wireless networking, and the reasonable flexibility and performance of the protocol. It supports both "ad-hoc" and infrastructure modes of operation. In ad-hoc (or IBSS) mode, several nearby stations communicate without designated central co-ordination. In infrastructure mode, stations communicate via a co-ordinating Wireless Access Point (AP). This is frequently, although not necessarily, also attached to a wired network. This mode also supports roaming, where a station can be moved at will between several APs without reconfiguration of its network parameters.
Most of the common 802.11b cards are supported under Linux, but the support is not entirely mature and consolidated. The two most common types of card (both based on Intersil's Prism II chipset, but using different firmwares) are each supported by several different drivers. However, each driver supports a different subset of any particular card's functionality. Only a few drivers support some of the more specialised functionality of the wireless devices, such as acting as an AP, or passively monitoring a wireless network. Progress is slowed by the fact that the chipset and firmware vendors have provided little information to open source developers about the specifications of the MAC controller and its firmware.
This paper will cover the state of support for the 802.11 protocol and 802.11b devices under Linux. In particular, the internals of the existing kernel code will be discussed, with a particular focus on the "orinoco" device driver.
Samba 3.0, which is now in alpha release, is the first version of Samba to support membership in a Microsoft Active Directory realm. In this talk I will give some technical background on Active Directory from an implementor's perspective and describe the capability of Samba 3.0. I will also make some wild predictions about the future directions of Samba.
This paper will discuss existing features in the ext2 and ext3 filesystems, and planned improvements to these filesystems. In some cases (directory indexing, tail merging, relaxed metadata layout, extended attributes/access control lists) the enhancements exist in patches that need to be integrated into the mainline sources. In other cases (extent maps, V2 inode structure, high watermarking/contiguous block allocation), the design of the enhancements will be laid out.
There will also be discussion on how the dependencies between the various enhancements allow development in parallel and where the dependencies require serialization in the development process, and how the codebase can be factored to allow for easier development of new filesystem features.
This work aims to show how open source software, especially Linux, is the solution for a real digital democracy.
JFFS, and the shiny new JFFS2, are file systems designed specifically for use in embedded devices, on flash memory. Flash is an 'interesting' storage medium because it has unusual limitations - the smallest size of block which can be erased and rewritten is typically 64KiB.
The traditional way to use flash has been to use a 'translation layer', which is a kind of pseudo-filesystem, to emulate a block device with a normal 512-byte sector size - then to use a normal file system on top of that. This effectively layers two file systems on top of each other, and is horribly inefficient, especially as each layer is required to perform journalling independently to achieve reliability.
JFFS is a log-structured file system, originally developed by Axis Communications AB, which addresses this problem by storing data directly on the flash chips. JFFS2 is the second generation of this design, written from scratch by Red Hat to address some of the limitations of the original JFFS, adding compression and support for hard links and improving performance.
The presentation will give an overview of the problems and restrictions imposed by flash memory storage and how these shaped the design of JFFS, and will explain the implementation of JFFS and the improvements made in JFFS2.
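The log-structured principle at the core of JFFS can be sketched briefly: data is never overwritten in place (flash can only be erased in large blocks), so each change is appended to a log, and on mount the log is replayed with the latest entry for a name winning. The format and names below are illustrative, not JFFS's on-flash layout.

```python
# Sketch of a log-structured store, the principle behind JFFS: writes
# are append-only, and the current filesystem state is reconstructed
# by scanning the log. Stale entries linger until garbage collection.

log = []

def write(name, data):
    log.append((name, data))        # append-only, never update in place

def mount():
    """Rebuild the filesystem view by replaying the whole log."""
    fs = {}
    for name, data in log:
        fs[name] = data             # later entries shadow earlier ones
    return fs

write('motd', b'hello')
write('motd', b'hello, world')      # supersedes, does not overwrite
print(mount()['motd'])              # b'hello, world'
print(len(log))                     # 2 - the stale entry awaits GC
```

This also hints at JFFS's main costs, which the presentation covers: mount time scales with log size, and obsolete entries must eventually be garbage-collected by erasing whole flash blocks.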
Starting from a very fast and reliable database engine -- but from a limited feature set -- MySQL Server has grown to the world's largest Open Source database with over two million installations.
Today, MySQL is a full-fledged database server with a feature set that leaves little to be desired. Nonetheless, we often get questions about when MySQL will get features that have long since been implemented.
The aim of this presentation is to, against the background history of the development of MySQL, describe some of the lesser-known features in the stable release 3.23, such as
and also to give an overview of the new features in the alpha release 4.0, such as
including a look towards the timetable for implementing Subselects, Foreign Keys and Stored Procedures.
David Axmark is co-founder of MySQL AB and responsible for the Open Source relations of the company.
Machine Automation Tools is a project to write an open source Soft-PLC, that is, a suite of software for machine control in industrial automation. The project is currently alpha, with the library written and several modules in a ready or near-ready state, others in progress or planned.
I'm not sure to what extent this corresponds with the spirit of l.c.a, but if you think it would interest other attendees, I can give a talk on it, its architecture and the milieu where it fits.
The industrial automation world is sometimes somewhat alien to those used to desktops and servers. In part this is simply due to separate tradition and separate history, in part because with control of physical machinery, failures can have destructive effects on the machinery, workers' health, environment etc.
Unlike the commercial desktop OS market, the commercial industrial control market is balkanized, with several manufacturers making mostly incompatible products. Obviously, this complicates interoperating with the commercial alternative, but may position the project as a bridge between systems which are otherwise difficult (and expensive) to connect up.
The project URL is http://mat.sourceforge.net/.
FlightGear is an open source project under the GPL to create a high quality flight simulation. It runs on multiple platforms such as Linux and Windows, using the OpenGL Graphics API. It features accurate terrain, network support, multiple views running across multiple computers, and has scenery for the entire world. Work is currently underway for certification as Personal Computer Aviation Training Device with the FAA, but it is also suitable for entertainment/gaming. Several alternative flight dynamics models can be used, allowing FlightGear to be connected to large scale professional supercomputer simulations as a graphical output. The simulator also allows the creation of "add-ons" such as new aircraft models, panels and scenery areas. The project is currently at version 0.7.8, and is working towards improving the accuracy and realism of the simulation, and adding such features as buildings, trees and three dimensional clouds. Session would include explanations of the history of the project, information on the use of the simulator's various functions, configuration and install information.
This talk will reflect on the past ten years of Linux, starting with a retrospective of Linux history, so we know from where we've come, leading to an examination to where the Linux and Open Source communities are today, and leading to an examination of challenges which we need to face in the future.
Some of the topics discussed will include Open Source and Legal Issues, Open Source and Business Models, the need to make Open Source Software easier to use, and the relationship between Open Source and Proprietary Software.
Awarinet is a set of standard protocols and APIs for networked, "real-time" applications -- with a difference.
Awarinet applications create a "world" of interconnected peers. The world may consist of any number of peers, potentially an infinite number.
Such scalability is possible because the peers only communicate with those peers closest to them in virtual space.
Just as a person is aware of others nearby, but not those at a distance, so Awarinet peers use locality of reference to prevent unnecessarily propagating data.
Thus Awarinet is not routed (except for the routing performed by the IP stack, of course).
This approach makes servers unnecessary, although they can be used to make the system easier to use and more efficient. The server infrastructure is not infinitely scalable, but more scalable than DNS, hence "good enough".
A program using the Awarinet APIs is always in contact with the most proximal -- hence most relevant -- peers in its world.
(Recall that this is not physical distance of computers, but a distance derived from the positions of the clients in virtual space. Any number of user-defined position vectors can be used to define this position.)
This infrastructure provides programmers with a simple, consistent interface. The protocols are open and API implementations are all cross-platform. All relevant code is released under the GNU LGPL or GPL, as appropriate.
Potential applications include collaboration, simulation, games, mobile phones, other position-aware appliances, and other as-yet unforeseen uses.
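The locality-of-reference idea above can be sketched concretely: each peer computes which peers are nearest in virtual space and exchanges state only with them, so per-peer traffic stays bounded however large the world grows. The positions, names and cutoff below are illustrative; Awarinet itself allows arbitrary user-defined position vectors.

```python
import math

# Sketch of locality of reference in a virtual world: a peer only
# talks to its k nearest neighbours, so total traffic does not grow
# with the size of the world. Distance is in virtual space, not
# physical network distance.

def nearest(me, peers, k=2):
    """Return the k peers closest to position `me` in virtual space."""
    by_distance = sorted(peers.items(), key=lambda p: math.dist(me, p[1]))
    return [name for name, pos in by_distance[:k]]

world = {'anna': (0, 1), 'bob': (0, 2), 'carol': (50, 50), 'dave': (90, 3)}
print(nearest((0, 0), world))       # ['anna', 'bob'] - distant peers ignored
```

A real implementation must of course also discover which peers are nearby without asking all of them, which is where the optional server infrastructure helps.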
The Circle is an implementation of a decentralized hash-table written in Python, with a number of interesting uses. The most obvious is file sharing; another is instant messaging. Because The Circle is decentralized, it has no single point of failure, and should be largely immune to hardware failures and legal attacks. If we add a trust-network layer on top of this architecture, most client-server type internet services could be implemented reliably in a decentralized way, with peers proxying for each other as they go on and off line. In this context, the hash-table provides a way to locate these services irrespective of their physical location.
(See also http://yoyo.cc.monash.edu.au/~pfh/circle/)
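The core trick of a decentralized hash table is that keys and node identifiers share one hash space, so any peer can compute where a key lives without consulting a central server. The sketch below is a generic consistent-hashing illustration, not The Circle's actual routing protocol.

```python
import hashlib

# Generic sketch of decentralized hash-table lookup: a key is owned by
# the node whose identifier is the next one at or after the key's hash
# on a ring. Every peer computes the same answer locally - no server.

def h(s):
    """Hash a string into a small shared identifier space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**16

nodes = sorted(h(f'node{i}') for i in range(8))

def owner(key):
    k = h(key)
    for n in nodes:
        if n >= k:
            return n
    return nodes[0]                  # wrap around the ring

# Deterministic: every peer agrees on where 'song.ogg' lives.
print(owner('song.ogg') in nodes)    # True
```

Real systems add routing tables so each node needs to know only a logarithmic number of other nodes, plus replication so data survives peers leaving.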
IBM has embraced Linux. IBM has embraced a group of Linux and Open Source Hackers known as "OzLabs". How has OzLabs embraced IBM?
This presentation includes both anecdotal and technical content. The target audience is not necessarily hard-core Linux kernel hackers, but probably the rest of the people attending the conference: Linux users and admins will hear about what it is like to be a Linux developer. What's more, curious students and after-hours Linux hackers with MCSEs will get some insight into what it is like "doing Linux" inside a really, really big company that has Windows on nearly every desktop. If I can get it past the lawyers then you will hear about it!
Topics of discussion include:
Look out! There's some technical stuff here, including a 6.5 minute description of how our router automatically SOCKSifies certain outgoing TCP connections so that we can (almost) forget that we work behind a really gross firewall. We'll look at some iptables rules, a line from an inetd.conf file, a shell script and some C code. Less technical members of the audience (and journalists) may choose to slip out for a latte at this point.
There's also some discussion about things like how to handle the fact that Lotus Notes is the corporate mail system.
There may be some "before and after" photos to let the audience decide if the experience has visibly changed some member of the OzLabs team.
"Here we are now, entertain us..."
LyX is built on LaTeX, but it requires Perl for reLyX (to import LaTeX); it exploits ispell/aspell/libpspell for spell checking; it uses kdvi/xdvi/ghostview for previews; it will use imagemagick and other tools to convert graphics files; it can use chktex for typographical checking; nroff for Ascii renderings of tables, and so on. Most of this is optional: If you have it, you can exploit it. If you don't, you can still work. A configure script detects available software and adapts LyX correspondingly.
LyX also supports an external material inset which allows you to plug basically any existing program into LyX, without requiring that this software supports KParts, Bonobo or other such component standards. As a result, LyX is the only word processor with integrated support for chess diagrams through xboard.
The main lesson is this: If you leverage existing quality software, you can get very far. LyX does NOT require any specific interface for the external programs. It just uses what they provide out of the box. This aspect of LyX is fairly unique. This is the traditional Unix philosophy: a lot of good, small tools that work together - but we have never really seen how it should be done in a GUI program before.
Basically, it's a question of trading development resources for consistency and ease of use: Yes, it would be better if the chess diagrams were inline in the document, and that the chess editor used the same GUI buttons and so on as LyX, but such an effort would take years. It's more economical to just reuse existing software as it is.
Through the use of various LaTeX, SGMLtools and other formatting tools LyX is able to support exported document formats of DocBook, PDF, PostScript, DVI, Latex or ASCII text as standard. Exports and imports from other formats can be achieved by defining convertor pipelines.
LyX supports multi-lingual and multi-part documents. Languages such as Hebrew, Arabic, Chinese, Japanese and Korean are all supported. (CJK requires a third-party patch which will be incorporated in the coming months)
Several third-party tools incorporate facilities to interact with LyX through the LyX-Server pipe. Many of these programs are bibliography database tools.
Qt2 and GNOME ports of LyX are under development as part of a push for GUI independence. Once LyX is GUI independent we will move on to system independence and thereby break our Unix/Linux/X-windows platform dependencies, making native ports to other established or new platforms and window systems easy (e.g. Berlin, BeOS, Aqua or even MS-Windows).
This session will describe the progress of the Portable.NET project, a free software alternative to Microsoft's .NET initiative distributed under the GNU General Public License, in particular the Common Language Runtime (CLR) and C# language parts of .NET.
The session will discuss the progress to date, and describe the benefits of bringing the CLR and C# to GNU/Linux. It will also dive into the internal workings of the system, and detail the challenges involved in replicating a significant, not to mention constantly changing, Microsoft system.
Portable.NET is also part of the DotGNU project (www.dotgnu.org), an initiative of the GNU Project and FreeDevelopers.net. The session will describe how Portable.NET fits into the DotGNU picture, and also touch on where DotGNU is at.
More information on Portable.NET can be found at the following site: http://www.southern-storm.com.au/portable_net.html
Ferris presents a C++ interface that makes heavy use of the STL and IOStreams. Currently ferris has two main internal abstractions: Context and Attribute. A context is much like a traditional file or directory in a file system, the major differences being that a context can have both byte content (like a file) and subcontexts (like a directory). An attribute is a chunk of metadata about a context. Contexts can have many attributes. Some attributes may be large, for example a base 64 encoded version of the context's content (133% context size). On the other hand an attribute can be small, for example the file size is exposed as an attribute.
Access to all contexts and attributes is performed by first requesting either an IStream or IOStream for that context or attribute. In this way the same context/attribute can be open many times at the same time, just like normal kernel based IO.
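Ferris itself is C++, but its two abstractions are easy to sketch: a context carries byte content (like a file), subcontexts (like a directory), and any number of metadata attributes, all at once. The class and attribute names below are our own illustration, not Ferris's API.

```python
# Sketch of Ferris's two core abstractions. Unlike a traditional
# filesystem, one Context can be file-like and directory-like at the
# same time, and carries arbitrary metadata attributes.

class Context:
    def __init__(self, content=b''):
        self.content = content       # byte content, like a file
        self.subcontexts = {}        # children, like a directory
        self.attributes = {}         # metadata chunks about this context

root = Context(b'readme bytes')
root.attributes['size'] = len(root.content)       # a small attribute
root.subcontexts['notes'] = Context(b'nested')    # file and directory at once
print(root.attributes['size'])       # 12
print(sorted(root.subcontexts))      # ['notes']
```

In Ferris proper, access goes one step further: a context or attribute is read by requesting an IStream or IOStream for it, so the same object can be open many times concurrently, as the abstract describes.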
While auditing the running processes on my laptop, I came to the realisation that knowing when particular files are updated is a common problem for many daemons. In all the cases I noticed, the solution used is to poll the files and/or directories containing them using the (f)stat(2) system call. This results in several processes on an otherwise idle system waking up at intervals varying from a few minutes right down to 1 second and stat(2)ing files.
In the past this has seemed a reasonable solution as using up idle time would not have appeared to be a problem and there was no other way to do the job anyway. Today, however, with laptops proliferating and battery technology not keeping up with the power requirements of modern CPUs (and all the gadgets we "need" installed), maximising the idle time on a machine can be very important. Also, we now have the tools necessary to signal asynchronous events when files and directories are modified.
This paper will discuss alternate methods, including directory notification and file leases, that can be used to eliminate the polling loops in some common programs and the effects that these can have on the power consumption of laptops (and other computers for the benefit of the Californians in the audience :-)).
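The polling pattern the paper aims to eliminate looks like this: wake up on a timer, stat(2) the file, compare timestamps, sleep again. Every daemon doing this keeps an otherwise idle machine from sleeping. This sketch is our illustration of the problem, not code from the paper; the proposed alternatives (directory notification and leases) instead leave the process blocked until the kernel signals a real change.

```python
import os
import tempfile

# The stat-polling pattern this paper aims to eliminate: each poll
# compares the file's metadata against the last-seen values. With
# directory notification (fcntl F_NOTIFY on Linux) or file leases,
# the process would block and be signalled only on a real change.

fd, path = tempfile.mkstemp()
os.close(fd)
before = os.stat(path)               # what a polling daemon remembers

with open(path, 'w') as f:           # simulate another process updating it
    f.write('changed')

after = os.stat(path)                # the next poll
changed = (after.st_size, after.st_mtime_ns) != (before.st_size, before.st_mtime_ns)
print(changed)                       # True
os.remove(path)
```

The cost is not the stat(2) call itself but the periodic wakeup: on battery power, dozens of daemons each waking every second adds up.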