Thursday at linux.conf.au 2008 kicked off with Stormy Peters’ keynote Would you do it again for free?. Stormy discussed the effect of financial remuneration on open-source developers who have previously worked on a project for love, glory or ideology. For me, the most interesting part of the talk was the presentation of related research- check out Stormy’s blog for links. Incentives are often thought of as quite simple- if an activity X becomes more profitable, then more people will start doing X, or people already doing X will do it more, and vice versa. But the research Stormy presented shows that financial incentives sometimes work in a more complex way, especially when they interact with other motivations such as social norms.
While I understand that this issue is of interest to Stormy’s employer, I’m not sure it has much relevance to the wider community. The involvement of companies in the development of free and open source software has been a huge success, and projects seem to move fairly easily from being developed mostly by volunteers to being developed mostly by people paid to do it. The only example given of a project damaged by commercial involvement was Eazel, so it doesn’t seem to be a widespread problem.
Making money selling OSS is not necessarily easy, but I’ll make a hand-waving argument that using an open source development model actually makes finding and keeping good people easier. For example, while I was at LCA I met Giuseppe Maxia from MySQL. He told me that he was a regular contributor to MySQL before being hired by MySQL AB, and that’s pretty much how the company does recruitment- by cherry-picking the best people out of the wider developer community. For a company looking for talented people, it must be great to have a steady source of potential hires who have both interest and expertise in your product. Open source development also allows much wider recognition of ability and contribution- a major attraction for talented people that costs the company paying them nothing.
So overall this keynote was interesting, but I didn’t find the topic particularly compelling.
Last year I attended a SLUG meeting where Erik de Castro Lopo gave an introductory talk on flex and bison. With the knowledge gained, I used these venerable Unix tools to implement a configuration file parser for my rroller project, and while they got the job done, they also left much to be desired. First implemented (as lex and yacc) in the early ’70s, they don’t quite meet modern standards for flexibility and ease of use. In particular, getting bison to produce good error messages is more work than it should be.
The ANTLR Parser Generator promises to change all this, and late last year I spent some time checking it out for use on a personal project. While I wouldn’t describe it as difficult to use, I did think the online documentation could be better. For this reason I picked up a copy of The Definitive ANTLR Reference. I haven’t actually read it yet, since I decided to postpone the project before the book arrived from Amazon, but I’ll get around to it eventually.
This is a long-winded way of explaining why I decided to attend Clinton Roy’s tutorial An Introduction to ANTLR: A parser toolkit for problems large and small. The tutorial combined Clinton’s amusingly deadpan outlook on life with a very lucid introduction to the basics of ANTLR. Things started to make sense very quickly, and if you’re looking to get started with ANTLR you could do a lot worse than check out the video. The first hour or so is the best part; towards the end the pace began to flag. During the tutorial Clinton made extensive use of ANTLRWorks, a Java-based visual development environment for ANTLR grammars. This tool looked incredibly useful, even though Clinton cautioned that it’s not quite ready for prime time in terms of stability.
Another great (but non LCA-related) ANTLR resource is a 5-part tutorial by Jason Sankey over at A Little Madness.
After lunch I attended a talk on NUMA pagecache replication presented by the remarkably unassuming Nick Piggin from SUSE. Kernel hackers are generally- how can I put this- quite assertive, but Nick seems more chilled out. In any case he knows how to give a good talk, with adequate time spent at the beginning bringing the less NUMA-aware members of the audience up to speed on NUMA architectures and the particular performance challenges they present for the kernel.
The other cool thing about this talk was that the pagecache replication patch is small, only about 700 lines, so it’s the sort of thing that can be explained adequately in less than an hour.
Once Nick started discussing the new data structures introduced in the patch there was a bit of a pile-on from the audience with the many kernel hackers in the room observing that Nick’s current implementation is sub-optimal in several ways. This continued until Dave Miller pointed out that it’s probably best Nick gets things working correctly first before people go crazy with micro-optimizations. There are also no performance benchmarks available yet, so it’s not clear what the performance benefit will be- and whether it will justify the increase in complexity of the page cache.
In contrast, the Parrot VM is most definitely not the sort of thing that can be explained in less than an hour, but Allison Randal had a go anyway in her talk Parrot: a VM for Dynamic Languages. Before the talk I was under the totally mistaken assumption that Parrot is just the VM for Perl 6, but it’s much more than that- it’s designed to support any number of dynamic languages and provides a bunch of powerful tools for creating new ones. By creating more powerful tools, the Parrot crew hope to accelerate the pace of dynamic language development.
Allison gave an overview of how Pynie (Python on Parrot) is implemented using the Parrot compiler construction tools. This was interesting but moved a bit fast for me. At the end of the presentation she said that “The amount of knowledge that you have now is actually enough that you could write a compiler”, but I think I’d need a little more time to get up to speed.
One nugget I found particularly interesting is that Parrot is a register-based virtual machine. I’ve spent some time recently looking at the very cool LLVM project; it is of a similar vintage to Parrot and is also register-based. The most widely used VMs today, the Java VM and the .NET CLR, are both stack-based, but the trend seems to be towards register-based designs. Allison cited a paper titled The case for virtual register machines which demonstrates considerable performance advantages for the register approach.
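To make the distinction concrete, here’s a toy sketch in C (my own illustration- it has nothing to do with Parrot’s or LLVM’s actual instruction sets) of how a = b + c executes on each kind of machine. The stack machine needs four implicit-operand instructions; the register machine needs one three-address instruction.

    #include <stdio.h>

    int main(void)
    {
        int b = 2, c = 3;

        /* Stack machine: operands live on an implicit evaluation stack,
         * so a single addition takes four instructions. */
        int stack[4];
        int sp = 0;
        stack[sp++] = b;                            /* PUSH b */
        stack[sp++] = c;                            /* PUSH c */
        sp--;
        stack[sp - 1] = stack[sp - 1] + stack[sp];  /* ADD    */
        int a_stack = stack[--sp];                  /* POP a  */

        /* Register machine: one three-address instruction. */
        int r[3];
        r[1] = b;
        r[2] = c;
        r[0] = r[1] + r[2];                         /* add r0, r1, r2 */

        printf("stack: %d, register: %d\n", a_stack, r[0]);
        return 0;
    }

As I understand it, the paper’s argument is essentially this: a register VM executes fewer (if larger) instructions, and each instruction dispatched by an interpreter carries a fixed overhead, so fewer dispatches wins.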
This year at work I’ve been spending much of my time doing multimedia stuff, mostly audio processing, so I thought I’d get some useful information from Michael Smith’s presentation GStreamer: More than just playback. I was intrigued to find out that GStreamer’s design is heavily influenced by Microsoft’s DirectShow, which I’ve been getting intimately familiar with recently. Just as DirectShow is built on top of COM, GStreamer is built on the GLib 2.0 object model (GObject). The basic architectural components such as elements, pads and caps also have direct counterparts in DirectShow.
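For the uninitiated, here’s roughly what those concepts look like in C. This is a minimal sketch assuming the stock GStreamer plugins (audiotestsrc, audioconvert and autoaudiosink): each name in the launch string is an element, the ! links their pads together, and the caps describing the audio format flowing between them are negotiated automatically.

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GError *error = NULL;

        gst_init(&argc, &argv);

        /* Build a pipeline of three elements linked via their pads. */
        GstElement *pipeline = gst_parse_launch(
            "audiotestsrc ! audioconvert ! autoaudiosink", &error);
        if (pipeline == NULL) {
            g_printerr("Failed to build pipeline: %s\n", error->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error or end-of-stream message appears on the bus. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        if (msg != NULL)
            gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }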
Michael gave a gentle introduction to GStreamer and had a cool demo where he streamed audio from his laptop to a laptop belonging to an audience member. Because of the funky over-the-network synchronization features in GStreamer, the audio playback on the audience member’s laptop was perfectly synchronized with the video playing on Michael’s laptop. Nice.
I’ve always taken an interest in automated construction, probably because I’ve read Kim Stanley Robinson’s Mars Trilogy a few times. Automated machines that build machines are essential if the human race is going to pull off stunts like building a space elevator or terraforming Mars. In Robinson’s trilogy, a space elevator is constructed by robots that capture an asteroid, move it into orbit and then mine it for the raw material used to make the elevator cable. Very cool stuff.
With these lofty ambitions in mind, I wandered into The Replicators Are Coming! given by Viktor Olliver. Viktor is a primary contributor to the open source RepRap (Replicating Rapid-prototyper) project, and he somehow managed to get his RepRap through customs for our edification. The RepRap can produce some of its own parts from extruded plastic (see photos), and considering the low cost of the device (under AUD 1000), the strength and precision of the extruded parts is really impressive.
Slightly disappointing is the number of its own parts the RepRap can’t yet make, including metal rods, electronics, wires and fasteners. So there is quite a way to go. Still, I loved Viktor’s grand vision of what’s possible, and the long-term perspective required to start moving towards a difficult goal. Very inspiring work.
linux.conf.au 2008 is now old news. But I’ve got all these notes lying around, and I’m not letting them go to waste.
Wednesday at linux.conf.au started off with Bruce Schneier’s keynote on Reconceptualising Security. Since the talk hit Slashdot about 3 hours later, I won’t rehash the contents- you can now watch the talk online. As an avid reader of Bruce’s blog, I was already familiar with some of the ideas he mentioned, such as the very cool lemon-market explanation for poor-quality security products.
The main theme of the talk was that there are conceptually two parts to security - there’s the feeling of security, and then there’s the reality of security. Although Bruce has spent plenty of time denigrating security products that provide the feeling and not the reality, he presented a more balanced view this time. He made a good case that just making people feel more secure is often important, mostly because humans can be pretty poor at evaluating the seriousness of threats, and overestimating dangers can cause as many problems as underestimating them.
The first session after the keynote on both Wednesday and Thursday was a tutorial. I’ve seen Shane Stephens present at SLUG before, and the stuff that he and others have been doing on Annodex is very cool, so I fronted up to his tute on “Building a video remixing web-site using Annodex”.
On a slight tangent, towards the end of the tutorial this very cool SVG demo from Chris Double was mentioned. You’ll need to download a developer release of Firefox from here if you want to check it out.
After lunch it was back to kernel-land with Jonathan Corbet of LWN presenting the now-traditional Kernel Report. A lot of the material would be familiar to regular LWN readers, including the work done by Greg Kroah-Hartman and Jonathan looking at the people and companies contributing patches to the kernel. A good summary of this can be found here.
What blew me away were the numbers describing the pace of kernel development over 2007. In that one year, 30,100 changesets were merged into the kernel, changing over 2 million lines of code; 750,000 lines of code were added, a rate of over 2,000 lines per day. It would be interesting to know how those numbers stack up against other large open source codebases (and certain proprietary operating systems), but I doubt there are many projects sustaining such a rapid pace.
Jonathan gave a brief summary of the experimentation with the kernel development process that has occurred in recent years, and I don’t think there’s much argument that process improvements are one of the main factors allowing such a high rate of change. From time to time it is still claimed that for Linux to become a real contender it needs a stable binary module API, and I think Linux pays a price for not having one- but not having one is another reason why things can move so fast.
That, and the total 1337ness of the core kernel developers.
During question time, someone asked whether the rate of patches meant that another “Linus doesn’t scale” problem might be imminent. Jonathan thought not: “Linus at this point seems to be living a pretty easy life to the point where he will go off and flame people on the Git list instead.” However, he thinks it’s possible that there may be “Andrew Morton doesn’t scale” issues, because Andrew does a lot of the integration work. Apparently some people are researching cloning to deal with this.
After Andrew Tanenbaum’s fantastic keynote on Minix 3 last year, it was good to see that microkernels did not slip from the agenda in 2008. Gernot Heiser from Open Kernel Labs took the opportunity to present a talk entitled Do Microkernels Suck?. This was a response to a paper [pdf] presented by Christoph Lameter at the Ottawa Linux Symposium in 2007: Extreme High Performance Computing or Why Microkernels Suck.
The title of Christoph’s paper is quite glib, but the paper is not a general denunciation of microkernels. Its most important claim is that a microkernel architecture would prevent an OS from scaling to the very large systems of up to 4096 processors on which Linux now runs. Christoph was intimately involved in the work done at SGI to make Linux scale to these machines, and in the conclusion of his paper says that:
A monolithic operating system such as Linux has no restrictions on how locking schemes can be developed. A unified address space exists that can be accessed by all kernel components. It is therefore possible to develop a rich multitude of synchronization methods in order to make best use of the processor resources. The freedom to do so has been widely used in the Linux operating system to scale to high processor counts…
It seems that microkernel based designs are fundamentally inferior performance-wise because the strong isolation of the components in other process contexts limits the synchronization methods that can be employed…
Linux would never have been able to scale to these extremes with a microkernel based approach because of the rigid constraints that strict microkernel designs place on the architecture of operating system structures and locking algorithms.
To me this seems like a pretty strong claim to make in a paper based on work done on Linux alone. Gernot rebutted the claim first with a series of graphs showing the Tornado microkernel scaling very well up to 16 processors. These results were presented in a paper published in 1999, so they’re somewhat dated now, and I don’t think that showing a microkernel scaling to 16 processors necessarily proves anything about whether it could scale to 4096. YMMV. Gernot also argued that:
1. Synchronization in a well-designed system is local to subsystems
2. There is no reason why subsystems can’t share memory, even if microkernel-based
Although I guess a system that employed (2) would no longer be a “strict microkernel design”.
My take on this dustup is basically this: Christoph describes the process of making an operating system scale as one of incremental improvement- find a bottleneck, fix it, repeat. Since the most recent paper Gernot had on this was from 1999, it seems that no-one has put a whole lot of effort into doing this with a microkernel. Once somebody has, we’ll know the answer either way. Right now the arguments are just a bit too hand-wavy on both sides.
Still, it was a fun presentation, and although there was quite a bit of giggling from the back of the theatre throughout the talk (some people apparently find microkernels vastly amusing), the applause at the beginning and end was very enthusiastic. Although it’s a Linux conference, people seem to appreciate talks on more general OS topics as well.
For the last session of the day I really wanted to see Timothy Terriberry explain the inner workings of the Ogg Theora video codec, but a fire alarm went off about 5 minutes in. After evacuation I wandered over to hear Carl Worth and his unscheduled co-presenter Eric Anholt talk on X Acceleration that finally works.
The first part of the talk outlined work done over the past few years to allow the X Render extension to take advantage of the features provided by graphics hardware for accelerating 2D operations such as fills, alpha blending and scaling. 2D acceleration was originally provided in X by XAA, the XFree86 Acceleration Architecture, but this didn’t provide the operations that modern desktop applications require. Later, the X Render Extension was written by Keith Packard in an afternoon to provide image compositing operations but did not allow for hardware acceleration.
Acceleration designed to support X Render was first implemented in KDrive, an experimental X server, as KAA (Keith’s Acceleration Architecture). This was ported to the standard (then XFree86) X server by Eric to become EXA, Eric’s Acceleration Architecture. Carl complained that he was spending “Way too much time talking about history”, but I found it interesting, and definitely required to put the later parts of the talk in context.
After the background material, the talk covered recent work to improve the performance of the Intel i965 graphics chipset driver. Intel has really come to the party on this one, releasing the required technical documents to Redhat under an NDA.
This was very fortunate, as a ton of work has been done on the driver since. It was based on earlier Intel graphics drivers, but apparently the differences are large enough that good performance requires a quite different approach. To this end, Dave Airlie converted the i965 driver’s composite operation to use TTM, an in-kernel memory manager for graphics device memory, and batch buffers, a method of queuing multiple commands for efficient execution by the graphics device. This involved major modifications to the driver, but unfortunately resulted in a major decrease in performance. However, Eric recently realized that this was caused by overly enthusiastic cache-flushing, so we should see major performance increases soon.
Other operations apart from composite are already performing well. Carl showed a simple demo application, written by Keith in his very own Nickle language, that showed a big speed increase when rescaling an image of a penguin.
Overall this talk was entertaining and useful, although I found it difficult to keep up at times since my knowledge of graphics hardware and X server architecture is limited. At least I now know what a GART is.
The lightning talks gave 60/n minutes to each of the n people who volunteered to speak. Any subject was allowed, but the talk had to be sans slideware. On this particular occasion n = 6, and all six had some very interesting stuff to say.
Grant Grundler from Google kicked off with the intention of countering perceptions about the “Google black hole” - the idea that free software goes into Google but none comes out. He described a number of the contributions that Google is making to the kernel and talked about which areas of development are important for Google and which are not.
First up are containers- Google is interested in these so they don’t have to use a full virtualization solution such as Xen or KVM. Apparently full virtualization doesn’t provide any benefit to them, though exactly why wasn’t made clear.
Kernel filesystems are also a priority. Google use ext2, but they would like to move to something else. Unfortunately ext3 performance isn’t good enough, and the journalling features of ext3 are redundant in Google’s environment because everything is mirrored anyway. Google have backported a few changes from ext3 to ext2, but Grant didn’t drop any hints about Google sponsoring a new filesystem anytime soon.
Grant mentioned Google’s role in fighting the good fight for Linux drivers - the company is constantly evaluating new technologies and pushing vendors for Linux support. Because of the volume of hardware Google buy, they have more leverage than most.
Google are also interested in CPU performance tools and are sponsoring perfmon2 development because oprofile is not adequate for their needs.
Matthew Wilcox then described a very interesting project he’s working on to eliminate un-killable tasks from Linux. This annoying situation is quite common and is caused when a task calls down() to take a semaphore and then goes to sleep waiting for some event to occur. In this state the task cannot receive a signal; if the expected event does not occur the task cannot be killed. For this reason it’s preferable to call down_interruptible() rather than down()- in that state a signal can be received- but there are quite a few situations where the task just can’t be interrupted.
Matthew’s patch adds a third variant of the function called down_killable(). A task sleeping in down_killable() will be interrupted only by fatal signals. After receiving such a signal, the task will die as soon as it returns to user-space, so it will never see the effect of the terminated system call.
The somewhat tedious task of implementing down_killable() for 22 architectures is now complete, but there is still the larger task of changing all the calls to down() (430, according to a helpful audience member) to down_killable(). In each case, the call has to be changed, the return code checked, and if a signal was received the task must unwind whatever it was doing gracefully. There are also the 449 calls to lock_kernel() that should be changed to lock_kernel_killable(). Although this adds up to a pile of work, it can be done incrementally, as with the move away from the Big Kernel Lock.
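To make the conversion pattern concrete, here’s a sketch of what one such change might look like. The foo_dev device is entirely made up, and this is not an actual kernel patch- just the shape of the change as I understood it from the talk.

    #include <linux/errno.h>
    #include <linux/semaphore.h>    /* <asm/semaphore.h> on older kernels */

    struct foo_dev {
        struct semaphore sem;
    };

    /* Before: down() sleeps uninterruptibly, so a task stuck here can
     * never be killed if the semaphore is never released. */
    static int foo_op_before(struct foo_dev *dev)
    {
        down(&dev->sem);
        /* ... do work ... */
        up(&dev->sem);
        return 0;
    }

    /* After: a fatal signal aborts the wait. The caller unwinds and
     * returns -EINTR; the task dies on return to user-space, so it
     * never observes the failed system call. */
    static int foo_op_after(struct foo_dev *dev)
    {
        if (down_killable(&dev->sem))
            return -EINTR;
        /* ... do work ... */
        up(&dev->sem);
        return 0;
    }

The unwinding is the fiddly part: every function between the semaphore and the system call boundary now has to propagate and cope with the error return.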
Matthew mentioned that both Ingo Molnar and Nick Piggin are in favour of the patch because they’re responsible for the OOM killer. The patch should allow the OOM killer to work more effectively, because currently a task chosen for termination in an OOM situation may not actually be killable.
Next up was Zach Brown from Oracle who gave a brief teaser for his talk on Friday on the Coherent Remote Filesystem (CRFS). Zach described it as a new network filesystem that can be used in place of NFS "if you want it to be reliable and perform well", which drew a few weary laughs from the audience.
Zach is trying to drum up interest in, and contributions to, the filesystem, which is still under heavy development. Cool tricks such as a cache coherency protocol are being used, and it has groovy features such as checksums, snapshots and a unique way of handling filesystem metadata that gives big performance gains over NFS. And it doesn’t use the BKL! Zach has some preliminary performance data available in this blog post.
I usually enjoy filesystem talks- disks are such ornery beasts that the solutions people come up with are invariably interesting- so I think I’ll be attending Zach’s talk on Friday.
Val Henson took the mike next to muse on a pet theory of hers about disk IO scheduling: the kernel could have much more information than is currently available about how best to submit IO requests to a given storage device. With such information, instead of needing multiple schedulers, it would be possible to have a single generic scheduler with tuning parameters that could be tweaked for a specific device. Val pointed out that while the things people know about disk operating parameters- e.g. assumptions like “sequential IO is fast”- have been true for a long time, they are changing very quickly as large-capacity solid-state storage becomes more common.
Val suggested a few parameters that might be interesting (a speculative sketch of how these might look follows the list):
- The number of IOs the device prefers to have outstanding
- The maximum possible IOs per second
- Preferred size for writes/reads
- The exact tradeoff for sequential vs random IO. Random IO still incurs a penalty with SSDs, but it’s not as severe as with magnetic drives
- The time taken to switch between IO at two different addresses
- The device’s preferred alignment.
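Here’s a purely speculative sketch, in C, of what such a device profile might look like. None of these fields exist in the kernel; the struct and all the names are my own invention, just to make the list above concrete.

    /* Hypothetical per-device IO profile, along the lines Val described. */
    struct io_device_profile {
        unsigned int preferred_queue_depth; /* IOs the device likes in flight */
        unsigned int max_iops;              /* upper bound on IOs per second */
        unsigned int preferred_io_bytes;    /* ideal size for reads/writes */
        unsigned int random_io_penalty;     /* cost of random IO relative to
                                               sequential, in percent */
        unsigned int switch_time_us;        /* time to switch between IO at
                                               two different addresses */
        unsigned int preferred_alignment;   /* preferred alignment in bytes */
    };

A single generic scheduler could then consult a profile like this instead of hard-coding rotating-disk assumptions.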
Val also speculated on how this information could be obtained – it could be specified in a configuration file or the kernel might determine the device parameters experimentally by profiling the device. Either way, the kernel currently has a very simple model for IO which could be improved greatly. Hopefully there will be some interesting developments around these ideas in the near future.
Paul McKenney next posed a question for architecture maintainers about a possible problem with RCU in situations where the system is returning from a low-power state and an NMI or SMI handler performs a specific type of RCU operation. That’s what I managed to get, anyway; Paul talks fast and was not catering for the uninitiated. Most of the talk consisted of a high-bandwidth data exchange between Paul and Dave Miller, who thought the problem might occur on SPARC. I wish I’d been able to follow more of what was said, but I don’t have the background knowledge.
Last up was Dave Miller himself, who gave us an overview of what’s been going on in networking. He has just made a pull request to Linus for 2.6.25 which contains just under 1500 patches, 700 of them non-driver changes. A large number of these are for the network namespaces feature, which is required to support containers.
Dave outlined the recent changes to the data structures for NAPI (as described in this LWN article) that sever the one-to-one relationship between network devices and interrupt lines. Modern network devices have multiple transmit and receive channels and multiple interrupts, and drivers must support this for best performance. Dave mentioned a new Neptune device (presumably this) that allows 32 RX channels and 24 TX channels! It has a hardware packet classifier so that RX interrupts for certain packet types can be routed to specific CPUs.
Unfortunately, handling multiple transmit channels is not so easy because of the presence of the packet scheduler layer- load balancing on transmit can break the prioritisation done by the scheduler. Fixing the problem may involve a change in the default queueing discipline.
Zack Brown asked if there were any automated mechanisms for assigning a process to the CPU where the packets destined for it are being received. Dave has queried Ingo Molnar about this, and apparently the scheduler will push processes to CPUs where their wakeup events occur. However, this is not a panacea as a process will then lose locality- it will no longer be enjoying the benefits of a hot CPU cache.
A question was also asked about that hardy LCA perennial, netchannels. Dave described the idea as "not dead but it is smoldering." Netchannels introduce some difficult problems with packet filtering, and it’s a very big change with no evolutionary path. From Dave’s response it seems unlikely we’ll see movement on this soon, notwithstanding the work done by Evgeniy Polyakov.
linux.conf.au: where too much kernel is barely enough!
Full on. Today felt like about three conference days in one. Between the Distro Roundup, the Kernel Mini-conf lightning talks, the Kernel Panel discussion and other sessions, I must have heard close to 20 people speak.
A lifetime ago at 8:30 this morning, I sat down at breakfast across the table from Paul McKenney. To me he seemed like just a J. Random Bearded Hacker, but he’s actually the main guy behind the RCU implementation in the Linux kernel. Val Henson introduced him this afternoon at the kernel lightning talks as “one of the best computer science researchers I know”. Apparently he’s already done too many talks on RCU, so to get on the conference schedule this year he’s talking about his involvement in adding concurrency to the terribly exciting C++0x standard. Now that’s one talk I will be attending. Don’t you wish you were at LCA now?
I was thus slightly late for the first session of the day as I lost track of time chanting “We are not worthy” while Mr McKenney was trying to eat his cereal. I wandered into the Distro Roundup where community members representing various distros gave an overview of the history and current status of their distribution. Representatives from Oracle, Mandriva and Gentoo gave useful reports in the time that I was present. Mr Debian spent some time talking about the difficult political/ideological issues that have caused friction within the Debian community - how to deal with firmware “binary blobs” and the status of documentation covered by the GNU Free Documentation License. Binary blobs are not just an issue for Debian, but because of the project’s strict adherence to the Debian Free Software Guidelines, they have taken the problem very seriously and now will not ship such non-free firmware. Similarly, Debian regards the GNU FDL as a non-free license. It was clear from the talk that not all members of the community agree with these decisions, so the controversy could continue in spite of the current policies.
After morning tea I stuck with the Distro Summit to hear Shane Owenby, Senior Director for Linux and Open Source at Oracle, talk on “Why would a large corporate create their own distro?”. I should probably have migrated to the Kernel Mini-conf at this point, but Shane was an engaging speaker and it was interesting to hear about Oracle’s goals for their Linux products apart from making money. Oracle wants to promote the adoption of Linux in the data centre by lowering the barriers to entry, which given the size and scope of their customer base they’re uniquely positioned to do. Shane engaged in some lively discussion with Bdale Garbee about Oracle’s Premier Backporting service. Bdale’s question, I think, concerned how Oracle can backport fixes to stable releases when ISVs will only certify their applications on specific (unpatched) Oracle Enterprise Linux versions. No clear answer was given.
These and other discussions made Shane’s talk go overtime, so Jonathan Oxer didn’t have time for the full version of his very useful talk on Release Monkey. Simplifying, this is a set of scripts to help build packages for more than one distribution. This is a very common problem for small ISVs who want to distribute their products for Linux, as the time and cost of building for multiple distros can be prohibitive. I’d stumbled over Release Monkey before, when I was looking for a solution to just this problem for one of my previous employers. We were attempting to distribute a single product for Suse, Redhat 9, Debian 3.0, etc, and it was not a pleasant experience. James cooked up a system that worked pretty well, but I think there is a real need for a ready-made, full-featured tool for this task.
Jonathan emphasised that one of the main problems when packaging for multiple distros is that there’s no good way to capture the metadata required- stuff like package dependencies, version numbers, build instructions, etc. Release Monkey has adopted the (hackish) solution of using the Debian metadata and munging it for other distros. In our case, we maintained separate files for each type of package - .spec files for building RPMs and control/rules files for building Debian packages. This obviously introduced some maintenance overhead. Jonathan suggested that the ideal solution would be to define a distro-agnostic metadata format, but little progress has been made on this so far.
At this point I’d had my fill of distro-talk, so I wandered over to the Kernel Mini-conf hoping to hear Arnd Bergmann talk on “How not to invent kernel interfaces”- but his talk had been moved to 9:15, so I missed out. Instead I listened to Jörn Engel speak on “Cache-efficient Data Structures”. This is a very interesting topic, but since I missed the start of the talk I couldn’t quite follow the comparative performance numbers he had on his slides. There were a few interesting comments from the audience, including from Dave Miller and Linus (no link required). Dave is the kernel networking maintainer and knows a few things about hash tables, as they are used extensively in the network subsystem for things like holding socket descriptors. Discussion followed on the problems involved in resizing hash tables. Currently several (large) hash tables are allocated at kernel boot time in one of two sizes, depending on the memory installed in the system. Some thought has been given to making these resizeable at runtime to allow for both minimal memory usage and best performance, but synchronization issues make this very difficult. It sounds like there’s a fun project here for anyone who’s game enough.
After lunch I stuck with the Kernel Mini-conf to hear Jesse Barnes from Intel’s Open Source Technology Center talk on “Enhancing Linux Graphics”, or alternatively “Why Graphics on Linux suck and what we are doing about it”. Jesse described some of the major enhancements that are taking place to rationalize the motley assortment of software components involved in graphics on a Linux system- the kernel fb layer, DRM, X, Mesa, DirectFB, etc. This work (described here) will enable graphics without X, since things like modesetting will be handled by the kernel. From comments made by Dave Airlie, this is something of a holy grail for the graphics guys. Perhaps more importantly, Jesse’s work will finally allow displaying a “Blue Penguin of Death” when a kernel oops occurs, the absence of which has long hampered Linux’s ability to compete with rival operating systems.
Next up was Joshua Root from Gelato@UNSW talking on “The State of the Elevator: I/O scheduling in Linux”. The Gelato guys want to create documentation to help system administrators choose and tune an IO scheduler. Obviously, the performance of the four different schedulers in the kernel varies greatly with different load profiles. In particular, Gelato have been looking at IO scheduler performance when software and hardware RAID are in use. Along the way they have found (and fixed) a number of bugs in the schedulers.
One thing I didn’t realize is the number of tools available for doing this kind of performance analysis on Linux. The blktrace tool (with support built into the kernel) can record everything that is happening in the block layer for later analysis using btt, the block trace timeline tool. btreplay can replay an event trace recorded with blktrace, and iomkc can be used to generate a Markov chain model of the trace so that workloads can be reproduced (or emailed) in kB rather than GB. Joshua showed some graphs (Yay!) of his performance results. Interestingly, while the more complex schedulers (anticipatory and CFQ) give better throughput in most situations, the simpler schedulers can give much lower average latency in some tests. As with much performance analysis, “it depends”.
This blog post has now dragged on far too long, and I still haven’t covered the very interesting kernel lightning talks or the kernel developer’s panel. I’ve got extensive notes on both, but they’ll have to wait.
I arrived at linux.conf.au 2008 at the University of Melbourne last night, but didn’t manage to register until this morning. Everything went smoothly as usual except for the friendly (female) registration person addressing me as “Madam”. No doubt it had been a stressful morning.
The conference swag was pretty good this year- the bag is a good size, and I can never have too many Redhat caps or Trolltech beer coolers. The t-shirt is also a great design, easily the best of the LCA shirts I have lying around. This one can actually be worn in public without looking too uncool, a considerable achievement. It makes sense, what with Melbourne being Australia’s fashion capital.
I kicked off with a presentation by Stuart Middleton as part of the Embedded Mini-conf. Stuart is a type of geek previously unknown to me- a “robotics artist”. Hexapod creations are his speciality. He told us a great story about convincing the Wellcome Trust to give him $2 million to build a giant hexapod walking platform for Stelarc. The first version, costing $1 million, twice tore itself apart as soon as it was started because the design “wasn’t quite right”. Such expensive failures can be embarrassing, but apparently this is not too much of a problem because, according to Stuart, “being artists we can usually come up with some bullshit to explain it”. Very entertaining.
In the second session I stayed with the Embedded Mini-conf for Ben Leslie explaining how to port the OKL4 operating system to a new platform - in this case the Goldfish simulator provided with the Android SDK. I’ve seen Ben present before at SLUG and he always pulls off a slick talk. But he moves fast! This talk was a good introduction to both OKL4 and embedded programming in general.
I then jumped ship to the Security Mini-conf to hear Enno Davids talk on “Self Healing networking”. After a general introduction to network security threats and countermeasures he started talking about the most severe current threat to modern networks- DoS and DDoS attacks. There are currently few effective countermeasures available to deal with the huge botnets that are now being created for profit by well-organized criminal groups. Enno claimed that large botnets can now create aggregate data rates of up to 24Gbps, which is more than the total bandwidth connecting Australia to the rest of the ‘net!
Enno presented some defensive strategies that use ICMP redirect packets to force botnet zombies to redirect their traffic somewhere else (say 127.0.0.1), but this is not trivial to do, and in any case not effective against the largest botnets. He also proposed some small extensions to ICMP that, if implemented, could help mitigate such attacks in the future. There was some discussion with the audience about the possibility of distributed responses to DDoS attacks, i.e. calling on friendly networks to help repel an attack. At some point this boils down to “my botnet versus your botnet”, which some wit announced is “coming soon to a Fox channel near you”. All up, a very interesting talk.
After lunch I headed to the Fedora Mini-conf to see Eugene Teo talk on “Writing SystemTap Scripts”. The talk was a good basic introduction to this very useful tool. I attended a similar talk last year at LCA in Sydney, and I’m sorry to say I haven’t actually used SystemTap in the intervening time. But I still think it’s way cool. Eugene also showed us some of the SystemTap scripts he’s been writing, which was fine, but I would have liked it better if he had used the scripts to generate some data suitable for munging into pretty graphs. But that’s just me- I really like graphs.
Next up I checked in on the Community Wireless Mini-conf to hear James Cameron speak on wireless design and testing for the One Laptop Per Child project. James lives somewhere in rural and regional Australia, and was sent some XO units for wireless testing because of his quiet radio environment, similar to the areas in the developing world where the XO will be deployed. He also tested an antenna extension gadget that seems to be still in development. James presented some numbers on the achievable range using the XO: with two machines 1.5m above the ground, they can communicate at distances up to 1.6km 95% of the time, which sounds pretty impressive. Unfortunately, for reasons known only to RF gurus, the range drops off significantly when the XOs are closer to the ground. Jim Gettys was in the audience, which made for a great Q&A session as he could fill in any gaps in James’ knowledge of the project.
Following afternoon tea I saw Mikko Leppanen talk on “Adventures in Consumer Electronics with GStreamer” as part of the Multimedia Mini-conf. I should probably have spent more time in this mini-conf since I do multimedia stuff for a living now, but that’s just how it worked out. Mikko works for Nokia, specifically writing media playback software for the N810 Internet Tablet. GStreamer is used extensively in the product, and Mikko is obviously a big fan, praising GStreamer for being popular, scalable, pluggable and hackable. During question time I asked Mikko how he would compare GStreamer to other multimedia frameworks he’s used. He commented that the key to a good multimedia framework is a good codec abstraction, and that compared to others he’s used, such as Helix and the Symbian multimedia framework, GStreamer is clearly superior. He also claimed that OpenMAX has taken quite a few ideas from GStreamer, which he considers a strong endorsement of GStreamer’s design.
Last up in today’s open-source onslaught was Richard Keech from Redhat talking on “Provisioning Red Hat/Fedora systems using custom builds and Kickstart” as part of the Fedora Mini-conf. Frankly this is not the sort of thing I do on a daily basis, but I like automation and packaging so I had to go. Richard laid out the considerable benefits of his approach: it becomes very easy to reproduce the same machine configuration for testing, development, disaster recovery, etc, but you still get much more flexibility than when creating HD images. During the talk he built, installed (on VMware) and booted a custom build of RHEL- this can be done quickly with a reduced number of packages in the installation.
All up the day was a strong start to what should be another fantastic LCA.
After my last post I thought I should look a little deeper into code metrics. Unsurprisingly, a lot has been done in this area- researchers have been investigating metrics since at least the mid-70s. I’m not sure how active the field is today.
There are numerous commercial tools that will generate metrics for a codebase, but relatively few open source ones, at least for C and C++. Presumably this is because of the difficulty of developing a parser for the tortured syntax of C++. The best open-source tool I found was cccc, which unfortunately is no longer under active development.
cccc was written by Tim Littlefair for his PhD at Edith Cowan University in Perth, making it home-grown open source. Cool! It uses PCCTS (the Purdue Compiler Construction Tool Set) to generate its parser, and produces XML and HTML files containing the calculated metrics.
The range of metrics calculated is good, although the HTML output is fairly basic (sorry Tim) and there are no graphs. I ran cccc over my pet project Springysim; the resulting output is here.
The metrics produced by cccc are divided into three groups: procedural, object-oriented and structural.
Procedural metrics include Lines of Code (LOC), Lines of Comment (COM), McCabe’s cyclomatic complexity measure and various ratios of these numbers. The concept of cyclomatic complexity was introduced by McCabe in his 1976 paper, and the cccc documentation has this to say about it:
The formal definition of cyclomatic complexity is that it is the count of linearly independent paths through a flow of control graph derived from a subprogram. A pragmatic approximation to this can be found by counting language keywords and operators which introduce extra decision outcomes. This can be shown to be quite accurate in most cases. In the case of C++, the count is incremented for each of the following tokens: 'if', 'while', 'for', 'switch', 'break', '&&', '||'
This intuitively seems like a useful metric, although I’d like to read some studies validating it in practice.
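As a quick worked example (mine, not from the cccc documentation), here’s a small C function with its complexity counted the same way:

    /* Start the count at 1 and add one for each decision token below. */
    int classify(int x, int y)
    {
        int i;

        if (x < 0 && y < 0)        /* 'if' +1, '&&' +1 */
            return -1;

        for (i = 0; i < x; i++) {  /* 'for' +1 */
            if (i == y)            /* 'if' +1 */
                return i;
        }

        return 0;
    }

    /* Cyclomatic complexity = 1 + 4 = 5: five linearly independent
     * paths through the function. */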
Object-oriented metrics produced by cccc for each class include:
- Weighted methods per class (WMC). In the simplest case the weighting of each method is just one; cccc also provides WMCv, which only counts public and protected methods.
- Depth of inheritance tree (DIT)
- Number of children (NOC)
- Coupling between objects (CBO). This is the number of other classes that are coupled to a class, either as clients or suppliers.
All these metrics were originally proposed by Chidamber and Kemerer in their 1994 paper A Metrics Suite for Object Oriented Design. It’s not a bad read, but it does spend quite some time proving that the proposed metrics satisfy various formal properties proposed by Weyuker in her 1988 paper Evaluating Software Complexity Measures; these parts might be a little dry for some. But it’s not all ivory-tower stuff: they also evaluated the metrics by collecting empirical samples at two different software development organisations. However, no attempt was made to correlate the code metrics with project outcomes such as defect rates or maintenance costs.
cccc does not calculate the 5th and 6th metrics suggested by Chidamber and Kemerer. The 6th metric, Lack of Cohesion in Methods (LCOM), examines which instance variables are used by which methods of a class. A class with a single instance variable that is used by all methods has high cohesion, while a class with many instance variables each used by few methods has low cohesion. This seems like an interesting metric for OO designers to know.
The structural metrics calculated by cccc are:
- Fan-in: The number of other modules that pass information into a module.
- Fan-out: The number of other modules that a module passes information to.
- An “Information Flow measure” calculated as the square of the product of the fan-in and fan-out of a single module.
These metrics were proposed by Henry and Kafura in their 1981 paper Software Structure Metrics Based on Information Flow, which unfortunately does not seem to be freely available. This paper is super-cool, as the code base they use for evaluating the metrics is UNIX Version 6. The Lions book is cited as a reference- even cooler!
Tragic fawning over old-school UNIX aside, the paper shows that the information flow measure described above is strongly correlated with the occurrence of changes in the UNIX sources. That is, modules with a high value of the metric also had many changes made to them. The number of changes in a module is used as a proxy for the number of errors in a module, on the assumption that these two measures are strongly correlated.
cccc looks like an interesting tool, or at least the beginning of one. To be useful during development, it would be nice to see how these metrics are changing over time, and cccc doesn’t provide any facilities for that.
Sometimes I think “I really need an app that does x”, and then ten minutes later I’ve found an app that does x. And it’s free!
That’s not what happened with xvidcap, but it was pretty close. I needed something that would create a screen capture video, and xvidcap does just that. It only took a few minutes to find, but the 1.1.4 RPM package I installed from the Fedora livna repository segfaulted as soon as I tried to do a capture. I built the more recent 1.1.5 release from source, and that worked fine.
xvidcap seems to be under intensive development - I just checked back at Sourceforge and 1.1.6 was released yesterday.
And yeah, that means there will be some videos up soon.
Chris Blizzard has a very useful post up summarizing the announcements made at the recent Redhat Summit. I’ve been following this but still missed some cool stuff like Redhat Exchange.
The basic idea behind RH Exchange seems to be that you download RHEL with third-party applications already installed and ready to go. I notice that Alfresco are already a partner; this is interesting since a colleague of mine just spent a couple of days trying to get Alfresco up and running on Fedora Core 6. Alfresco uses OpenOffice for document format conversions, and OOo was the component that put up the most resistance during the installation process. I think it would have been better to just give Redhat money and avoid the hassle.
No doubt there are very good reasons for it, but it’s mildly amusing that Redhat are going to write some Windows drivers to take advantage of RHEL’s paravirtualization features. Hopefully the guys writing these won’t get teased too much by the other Redhat developers.
I use free software every day at home and at work, but I’ve never been really big on converting non-technical people to use it. For my purposes it’s clearly superior, but I’m not sure that’s true for my family and friends. Possibly the only exception is Firefox. I do a bit of Windows tech support for friends, and without fail I install the latest Firefox release and issue stern instructions that for security reasons, it’s the only browser they should be using.
This has had mixed results- my little brother is still a dedicated IE user. However, it looks like I might get him to start using LyX, the world’s only What You See Is What You Mean document processor. This year he is doing a thesis for his B.E. in Civil Engineering, and I suggested that he use LyX rather than the ubiquitous Microsoft Word. Once I showed him a printed version of my B.E. thesis he was really keen, as it looks very schmick- more like a journal article than an undergraduate essay.
I find that for a given level of graphic design ability (i.e. none), LaTeX documents look much better.
The other big selling point of LyX (and LaTeX) is stability. In my experience, Word is great for relatively short, simple documents, but a 100-page document with many images, footnotes, sections and other features can go pear-shaped. This has happened to my brother, so a solution that doesn’t suffer from this problem appealed to him. And, of course, LyX runs on Windows- I don’t think he’s interested in a Ubuntu install any time soon.
I’m interested to see what his experience will be like: how easy LyX is to use, and how well it meets his needs. Stay tuned for updates on this riveting usability study.
So I finally finished installing FC6 on my dual-Athlon box (hostname kj) at home. This was more work than it sounds, as I first did a test install on an old Celeron 500 I have lying around (thanks mum!) to make sure that none of my mission-critical *cough* apps were broken on FC6. Amazingly, the Fedora graphical installer requires 256MB of RAM- with less than this you can only run the text-mode installer! Getting around this hurdle required some hasty PC133 reorganization among the other relics cluttering up my room.
Since I have WordPress installed on kj, I had to figure out how to install WordPress and phpMyAdmin again. It turns out that the whole export MySQL database -> import database -> upgrade WordPress process is pretty easy.
I also tested to make sure that my Canon LiDE 60 scanner and EDIROL UA-25 USB audio interface worked OK. Not that I expected problems. Although I had to do some hairy config file editing when I originally set up the LiDE 60 with Fedora Core 4, this time everything worked out of the box.
The only significant problem I had was getting my monitor recognized. It’s a Dell P991 and has happily been doing 1280 x 1024 at 85Hz for years. I tried using system-config-display, but the modifications it made to /etc/X11/xorg.conf would only give a refresh rate of 60Hz at that resolution. Copying the Monitor section from my old FC4 xorg.conf fixed it.
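For the record, the fix boils down to telling X the monitor’s real sync ranges so it can pick the higher-refresh modes. The section looks something like this- the identifier is arbitrary, and the exact ranges should be checked against the monitor’s manual rather than trusted from my memory:

    Section "Monitor"
        Identifier  "Monitor0"
        VendorName  "Dell"
        ModelName   "P991"
        HorizSync   30.0 - 107.0    # kHz
        VertRefresh 48.0 - 120.0    # Hz
    EndSection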
If you are installing FC6 I strongly recommend having a look at Mauriat Miranda’s guide, it makes things much easier.
It would probably have been quicker to just do an upgrade rather than a full install, but I have never really believed in the whole upgrade thing. Too risky. Much better to get a new drive and install on that. Then, if it turns out the Fedora folks have done something heinous like omitting my fave console font from the release, it’s easy to stage a strategic retreat.
Unfortunately, my joy will be short-lived- Fedora 7 is due for release on the 24th of May.