Sometimes I think “I really need an app that does x”, and then ten minutes later I’ve found an app that does x. And it’s free!
That’s not what happened with xvidcap, but it was pretty close. I needed something that would create a screen capture video, and xvidcap does just that. It only took a few minutes to find, but the 1.1.4 RPM package I installed from the Fedora livna repository segfaulted as soon as I tried to do a capture. I built the more recent 1.1.5 release from source, and that worked fine.
xvidcap seems to be under intensive development - I just checked back at Sourceforge and 1.1.6 was released yesterday.
And yeah, that means there will be some videos up soon.
I was going to write a post describing various dodgy methods for tying together version control and issue tracking systems, but I’ve found a better way! Read on for the thrilling details.
In his post on bug tracking techniques, James mentions one reason to refer to issue-tracker Issues in version control commit messages:
The solution I came up with was to ask everyone to put a specially formatted string in the commit log that noted the bug number that was resolved in that commit. The version control system will keep track of the code that gets integrated between branches, and carries the commit messages with it. That means getting a reliable list of bugs fixed in any given version is as simple as enumerating the changesets integrated into that branch.
As described, this is a method for generating a list of changes in a given software release. Being able to look at a source code change and know what Issue the change was made for is another benefit. This helps with those “Why the hell did he do that?” questions that come up when looking at Other People’s Code.
After adding these strings to commit messages for a while, I realized that it’s just as useful to have a mapping the other way - from the issue tracker to the version control system. This allows viewing the changes committed to fix a given bug or implement a feature. I did this by adding a comment to an Issue, e.g.
Fixed on 5.1 branch in Perforce change 12345.
This is really easy with Perforce since the changelist number uniquely identifies a set of changes to multiple files that were atomically committed to the repository. Doing it with CVS is trickier.
This manual method was useful but very primitive. A better method is to use the post-commit hook provided by the version control system to append a message to an Issue. In the example above, the changelist number (and perhaps the commit message) would be appended to the Issue page by a script when a commit is done. This is possible because the Issue number is (manually) included in the commit message.
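As a sketch of that hook idea, the interesting part is just pulling the Issue numbers out of the commit message. This is Python with a made-up message convention ("issue 123"), and append_comment is a stand-in for whatever comment-adding API or email gateway your tracker actually provides:

```python
import re

# Assumed convention: commit messages mention issues as "issue 123" or
# "issue #123". The exact format is whatever your team agrees on.
ISSUE_RE = re.compile(r"issue\s+#?(\d+)", re.IGNORECASE)

def extract_issue_ids(commit_message):
    """Return all issue numbers mentioned in a commit message."""
    return [int(n) for n in ISSUE_RE.findall(commit_message)]

def post_commit_hook(change_number, commit_message):
    """Called by the version control system after each commit."""
    for issue_id in extract_issue_ids(commit_message):
        # append_comment is hypothetical; substitute your issue
        # tracker's real API for adding a comment to an Issue.
        append_comment(issue_id,
                       "Fixed in change %d: %s" % (change_number, commit_message))
```

With Perforce, the change number passed to the hook identifies the whole atomic changelist, so one comment per Issue is enough.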
The best method is to use one of the pre-existing solutions to tie the issue tracker and version control system together. Until recently, I wasn’t aware that there are a bunch of these for Perforce, as listed here. There are plugins for Jira, FogBugz and Bugzilla among others.
For the JIRA/Perforce combination I’ve used previously, a Jira plugin is available from Atlassian and it looks really cool. For each Jira Issue you can see a list of Perforce changes, including the commit message and a list of changed files. Each Perforce change entry can be hyperlinked to the change itself as displayed by p4web.
I think this would also help with James’ original problem of compiling a change list for a particular release. Jira and Perforce are integrated using the Perforce jobs feature, and as described here, jobs are preserved across integrations. So it’s possible to ask Perforce which Issues (jobs) have been fixed on any branch without having to add a specially formatted string to commit messages.
As someone who spends much of their life at a bash prompt, I should really use GNU readline commands more often. The only one I ever use is Ctrl-R, to search backwards through the history. The full list of bindable readline commands is here.
A few that look particularly useful:
beginning-of-history (M-&lt;)
    Move to the first line in the history.
end-of-history (M-&gt;)
    Move to the end of the input history, i.e., the line currently being entered.
unix-line-discard (C-u)
    Kill backward from the cursor to the beginning of the current line.
kill-line (C-k)
    Kill the text from point to the end of the line.
forward-word (M-f)
    Move forward to the end of the next word. Words are composed of letters and digits.
backward-word (M-b)
    Move back to the start of the current or previous word. Words are composed of letters and digits.
One of the many bad habits I’m yet to break is slumming it through Slashdot comment threads. They’re mostly dross, of course, but occasionally there’s something useful. Today there was a post on Optimizing PHP and Apache. Like any good slashdotter I didn’t bother to read the article, but I did find a comment that claimed:
We have recently ported Sugar CRM PHP/Apache to NetKernel and lost over 95% of the code and subsecond response times … For me performance is important but maintainability is equal. The less code the easier to maintain. There is a great white paper from the NK guys here.
As a firm believer in the maxim that the way to programming nirvana is to write the absolute minimum amount of code, this sounded interesting. I had a read of the whitepaper. It’s overly wordy and has a generally breathless tone, e.g.:
To demonstrate that these statements are in fact simple facts, the paper introduces and builds upon a foundation of fundamental principles. It is likely that these principles will challenge your understanding of the nature of computation.
A foundation of fundamental principles? Eww.
But there was some interesting stuff in there. The white paper introduces a model termed Resource-Oriented Computing (ROC), which in a sentence is like the web crossed with shell pipes. Yahoo pipes is used as an example of a Resource-Oriented System. The idea also seemed to have something in common with Service-oriented architecture, particularly the emphasis on combining loosely coupled, interoperable services. NetKernel is the framework built by 1060 Research to support the implementation of ROC applications.
The white paper didn’t really give me a picture of what a full scale Resource-Oriented application would look like, but I’m looking forward to the promised next installments. Very excellent blogger Jon Udell took a look at NetKernel a few years ago and seemed to be quite impressed.
I’ve done a fair bit of performance optimization on server applications and device drivers, but not much with web applications, since I don’t normally do this type of development. Today was different.
The problem was that Roundup, our issue tracking system, was slow. Really slow. Loading a simple page could take up to 10 seconds. When it’s a tool you use all day, this starts to be annoying very quickly.
One of the many cool things about Roundup is that it can use a number of different storage backends - sqlite, MySQL, PostgreSQL, etc. We are using sqlite, and I suspected that since the number of issues is growing rapidly, we had outgrown it and needed to move to a “proper” database.
I decided to migrate to the MySQL backend, as it’s the database I’m most familiar with. I duly exported all the data from sqlite, set up a test installation of MySQL, and then imported everything into MySQL as described in the Roundup documentation.
I rushed to my browser to check the speed improvement, but it was pretty much the same. More thought was required.
I hypothesized that since Roundup is running on same server as our version control system, the slow page loads were being caused by the heavy disk IO associated with checkouts, updates, tagging and the like. I begged my friendly neighborhood system administrator for a spare machine and installed Roundup and MySQL on that.
The spare machine has dual 2.80GHz Xeons with Hyperthreading, 512KB cache and 2GB of memory, so it should have plenty of grunt for a little issue tracker like ours. Unfortunately, it didn’t improve the page load times either.
I installed the wonderful htop on the spare machine to get a better picture of what was going on. Requesting a page in my browser, I could watch the Roundup server process and mysqld using up massive gobs of CPU over the full 5-10 seconds that the request took to run. I’m thinking: what the hell is it doing?
I spent some time wandering around the Roundup source tree, but nothing seemed to leap out at me. I checked out the database using phpMyAdmin, but all the tables seemed to have indexes where they should be. Not that I would really know…
I decided it would help if I knew what queries were running on the database. A few minutes looking at the MySQL manual led me to this page that describes how to activate the general query log. When enabled, all queries run by mysqld will be logged to a specified file. For some reason this must be done with a command-line parameter rather than using the configuration file, so I edited /etc/init.d/mysqld to pass the -l flag to mysqld when it starts.
This was a good trick. I requested one page. The resulting query log file was 1.4 MB in size and full of select statements. Lots of them:

$ grep -i select query.log | wc -l
So rendering a single page required the database to run 15,000 queries! That might, just might, explain the slow page loads.
Once I had the log, it was fairly easy to figure out what was generating all the queries. On the “issue” page, there is a drop-down menu that allows specifying the issue’s parent issue. Creating a parent/child relationship between issues is often useful to break down large units of work into smaller, more manageable tasks. The problem here was that the drop-down menu was populated with the “title” of every issue in the system - all 2,500 of them. Due to the way the database abstraction works in Roundup, around 6 queries are necessary to retrieve each issue.
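The effect is easy to reproduce. Here is a toy sketch using Python’s sqlite3 module, with a made-up single-table schema (nothing to do with Roundup’s real one), comparing per-issue fetches against a single query:

```python
import sqlite3

# Toy schema: 2,500 issues, as in the drop-down menu above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issue (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO issue VALUES (?, ?)",
                 [(i, "Issue %d" % i) for i in range(1, 2501)])

query_count = 0

def run(sql, args=()):
    """Execute a statement, counting how many queries we issue."""
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# One query per issue: 2,501 queries just to fill a drop-down menu.
ids = [row[0] for row in run("SELECT id FROM issue")]
titles = [run("SELECT title FROM issue WHERE id = ?", (i,))[0][0] for i in ids]
print(query_count)  # 2501

# The same data fetched in a single query.
query_count = 0
titles = [row[0] for row in run("SELECT title FROM issue ORDER BY id")]
print(query_count)  # 1
```

Multiply the per-issue pattern by Roundup’s ~6 queries per object and the 15,000 queries per page stop being mysterious.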
Commenting out the offending menu in the page template (Roundup uses the Template Attribute Language from Zope) reduced the number of queries required to display a page from 15,000 to 200. Page load times were correspondingly improved.
If I had more experience with web applications I would have been able to sort this problem out much faster. Using the MySQL query log was the key step that allowed me to understand the problem. It’s my own fault though - I disobeyed the first rule of performance optimization: measure first, then make changes. Don’t assume you know where the problem is.
UPDATE: Looks like this is a known issue.
When I installed my laptop at work, I divided the hard drive for a dual boot setup with Windows XP and Redhat EL4. I never actually dual booted since, well, it sucks and I have another RHEL machine I can SSH into anyway.
A couple of weeks ago I ran out of space on the NTFS partition, mostly because 25GB is used storing my MP3 collection. So I needed to wipe the multiple ext3 partitions I created for the RHEL installation and combine them into a single NTFS partition.
I’ve never had to do this under Windows before, but a small amount of Googling led me to a Microsoft support page that describes how to use the XP disk management utility. This utility is really pretty good. Go Redmond! It was very easy to make the changes I needed, and within 10 minutes I had a shiny new NTFS partition as my E: drive, providing a much-needed 40GB of additional storage.
The only problem was that GRUB was still installed on the MBR, so next time I booted the machine, nothing happened. Disappointing.
I fished out my XP installation CD and tried to do a recovery. I tried various commands, but none of them worked. More Googling led to this comment that describes the command sequence required:
> fixmbr
> fixboot c:

Using either of these commands separately doesn’t work, at least not in my situation.
This was one of the strangest gigs I’ve been to in quite a while. I met my friend Jeremy at the one and only Lansdowne Hotel before the show, where he informed me that:
- As reported in Drum Media and at PyroMusic, the ticket price had been reduced to $30, and ticket holders could bring a friend for free, presumably because of poor ticket sales;
- It wasn’t clear what support bands were playing since ads placed by the Manning Bar listed two different support acts from ads placed by the promoter; and
- As reported on blabbermouth.net, Obituary would be playing one man short since lead guitarist Allen West is in jail.
I particularly liked the phrasing of West’s statement on the matter: “I have gotten into trouble that can only be resolved by my incarceration.” All the best to him of course, and I hope to see him play next time Obituary tour Australia.
Undaunted by these setbacks, we set off across Victoria Park to the gig. Appropriately, the flower beds had just enjoyed a liberal application of fertilizer, making the whole place smell totally rank.
We missed the first support but saw most of The Day Everything Became Nothing, who put on a decent show.
Nothing was gonna prepare me for the aural assault delivered by Obituary. To their credit, they delivered a killer show without a lead guitarist. I thought rhythm player Trevor Peres would bust out a few leads to fill the gap, but he didn’t - making this probably the only metal gig in human history completed without a guitar solo. Peres kicked off the set with a slow, grinding riff - I’m not sure of the song - that went on for ages and had me quivering in my Mack boots.
Back when I was a live sound engineer, I always preferred mixing bands with one guitarist, mostly because it’s much easier to get a good sound. Both guitars usually occupy the same frequency range, so doing a mix such that it’s possible to hear each guitar clearly can be difficult. Live, less is usually more - a simpler sound with fewer instruments often communicates better to the audience, and this was certainly true at this gig. Just having a rhythm guitar made the constant riffing brutal. Although the sound and the feel were totally metal, the single guitar gave the band more of a punk or hardcore edge to my ears, which ain’t a bad thing at all.
Drummer Donald Tardy was also enormously impressive. His drum solo intro to the encore was a great piece of showmanship and drumming skill.
Although ticket sales were presumably slow at first, prompting the ticket discount, by the time Obituary took the stage there was a very good sized crowd. It wasn’t packed like at Testament a few months ago, but I think everyone went home happy. I was well satisfied with a great gig, a new Cause of Death t-shirt, and a loud ringing in my ears that lasted for several days.
James over at Code Lore has a post up with some good tips on using bug trackers. This is great news because I have been planning to write a very similar post but have not gotten around to it. Prior to my departure from my previous employer, James was the Program Manager for the product I was working on. There were a few things we were doing that were kinda cool (or at least new to me) and worth blogging about.
First off, I don’t like calling them “bug trackers”. I prefer “issue tracker”. Everything should go in the issue tracker. This is how our team worked at Sensory and I thought it was great. New features, bugs, and any other tasks that have to be done- including things that will not result in code modifications- are all added to the issue tracker.
Having everything recorded in one place is vastly preferable to other methods that I’ve used- say where bugs are in the bug tracker but new features are “tracked” in a Word document lying around on a shared drive in a folder that’s rarely touched except by the guy who updates the document on the odd occasion when he’s had enough coding for the day. Such techniques are variously referred to as “lightweight” and “totally lame”.
And some random comments on what James wrote:
Logging - i.e., using the “comment” feature of the bug tracker to keep track of progress (or lack thereof). Yes. Yes. Yes.
Bug Triage - I didn’t do any of this since I was a mere developer, but I was present when James was on some of his bug triage rampages. They were brutal.
Scheduling - that is, keeping track of the estimated and actual amount of time spent working on an issue. I was quite skeptical when we started doing this, as I thought it would be too much work and too inaccurate to be very worthwhile. Most developers’ days are filled with many different activities, some done concurrently, so it’s often difficult to know how to allocate time to different issues.
I eventually realized that these numbers do not need atomic clock precision to be very useful. I use and recommend Joel Spolsky’s method for working out what time went where:
You do not really have to watch your stopwatch while you code. Right before you go home, or go to sleep under the desk if you’re one of those geeks, pretend you’ve worked for 8 hours (ha!), figure out which tasks you’ve worked on, and sprinkle about 8 hours in the elapsed column accordingly.
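The sprinkling itself is just proportional scaling. A minimal sketch, with made-up task names and rough relative-effort guesses:

```python
# Guess rough relative effort for each task worked on today, then scale
# the guesses so they fill an 8-hour day. Precision is not the point.
def sprinkle_hours(weights, day_hours=8.0):
    total = sum(weights.values())
    return {task: day_hours * w / total for task, w in weights.items()}

# Hypothetical day: weights are gut-feel ratios, not measurements.
guesses = {"issue 101": 3, "issue 205": 1, "code review": 1, "meetings": 3}
for task, hours in sprinkle_hours(guesses).items():
    print("%-12s %.1f h" % (task, hours))
```

The per-task numbers then go into the “actual time” field of the corresponding Issues.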
Joel is talking about using Excel for scheduling, but the advice applies just as well when using an issue tracker.
James also discussed integrating an issue tracker with a version control system, and I’ve got another rant about that in the pipeline.
Chris Blizzard has a very useful post up summarizing the announcements made at the recent Redhat Summit. I’ve been following this but still missed some cool stuff like Redhat Exchange.
The basic idea behind RH Exchange seems to be that you download RHEL with third-party applications already installed and ready to go. I notice that Alfresco are already a partner; this is interesting since a colleague of mine just spent a couple of days trying to get Alfresco up and running on Fedora Core 6. Alfresco uses OpenOffice for document format conversions, and OpenOffice was the component that put up the most resistance during the installation process. I think it would have been better to just give Redhat money and avoid the hassle.
No doubt there are very good reasons for it, but it’s mildly amusing that Redhat are going to write some Windows drivers to take advantage of RHEL’s paravirtualization features. Hopefully the guys writing these won’t get teased too much by the other Redhat developers.
On Friday night I took De to see the Howling Bells at the Metro. I’m not familiar with their stuff but it was a great show. The Metro was pretty much full and the crowd gave them a very warm response.
In a previous life (prior to 2004, I think) Howling Bells were known as Waikiki. I saw them under this name at Newtown RSL back in 1999 or 2000, when I was working as the in-house sound engineer. I’ll admit I don’t remember them that well, but I remember well enough to know they’ve come a long way since then. The Bells have done a lot of touring over the last couple of years with some big names (including The Killers and Placebo) and it really shows in the quality of their performance. I will be picking up the album for sure.
The Metro itself was also in fine form on the night. Back in the days when I used to go more often, it had a battered old Martin Audio system which had probably been deafening punters since the early eighties. It sounded passable on good nights. These days it’s a Nexo Alpha system, a vast improvement. Kudos to the Bells’ front of house engineer as the gig was a pleasure to listen to.
If you are after a more comprehensive review of the night check out this one by Chock.