Tuesday, February 26, 2008

Lots of GNOME/Mono FUD Lately

It would seem that lately there are a lot of FUD-spreading trolls crawling out of the woodwork trying to frighten people into thinking that GNOME somehow depends on Mono.

Let's take a look at their most widely repeated claims:

GNOME depends on Mono

This is simply untrue. To see for yourself, try removing Mono from your Linux system using yum, zypper, apt, or whatever package manager you use: you will plainly see that it does not remove GNOME. It may remove Tomboy, F-Spot, Banshee, and/or Beagle (if you have any of them installed), but it will not remove any core components of your GNOME system.

Now that we've proven that GNOME does not depend on Mono, let's move on to their next claim:

GNOME depends on libbeagle, a Mono program

Again, false. GNOME's help browser, Yelp, has an optional dependency on a C library called libbeagle which can be used to make search queries via IPC (inter-process communication) to the Beagle daemon if, and only if, Beagle is installed. In other words, Yelp does not depend on Beagle; it is simply able to take advantage of Beagle if it is available. As far as I'm aware, there are plans to replace libbeagle with a more generic library able to communicate with either Beagle or Tracker via IPC, depending on which one the user has installed.

NDesk-DBus is replacing DBus in GNOME

There was a big stink about this a while ago by a very angry person who didn't understand how libraries or software development in general works.

GNOME continues to depend on the C-implementation of DBus (called libdbus) and that is unlikely to change unless DBus itself (the technology, not the C-library) gets replaced.

NDesk-DBus, written by the famous Alp Toker, is a replacement for the current DBus-Sharp .NET binding around libdbus, the C-implementation. The main difference between NDesk-DBus and DBus-Sharp is that DBus-Sharp wraps libdbus while NDesk-DBus is a fully managed implementation of the DBus wire protocol.

GNOME not only does not, but cannot, replace DBus with NDesk-DBus, and any halfway competent programmer would know and understand why this is so: native C programs cannot easily call into managed code.
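To make that concrete, here is a minimal sketch (illustrative only, not code from any actual GNOME module) of the kind of direct libdbus call that native C components make; there is simply no way to swap a managed implementation like NDesk-DBus in underneath calls like these.

/* illustrative sketch: a native C program connecting to the session
 * bus through libdbus, the C implementation */
#include <dbus/dbus.h>
#include <stdio.h>

int
main (void)
{
    DBusConnection *connection;
    DBusError error;

    dbus_error_init (&error);

    /* this is a direct call into the C library; it cannot be serviced
     * by a managed (.NET) implementation of the wire protocol */
    connection = dbus_bus_get (DBUS_BUS_SESSION, &error);
    if (connection == NULL) {
        fprintf (stderr, "failed to connect to the session bus: %s\n", error.message);
        dbus_error_free (&error);
        return 1;
    }

    printf ("connected as %s\n", dbus_bus_get_unique_name (connection));

    return 0;
}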

Someday soon it will be practically impossible to write any app for GNOME without being forced to use MONO

Considering that in order for this to happen, core GNOME libraries would have to be rewritten in a .NET language, this is unlikely to ever happen, never mind anytime soon.

As you can plainly see, these types of claims are being made by people who do not understand the most basic concepts of software development.

GNOME is riddled with Mono applications

If by "riddled" they mean "not", then yes. ;-)

There is currently only one official GNOME application that uses Mono, and that application is Tomboy. Tomboy, as useful a tool as it is, is hardly a core component of the GNOME desktop: removing it will not somehow make your GNOME desktop useless.

Novell is forcing GNOME to include Mono

Ask any core GNOME developer and he or she will tell you that this is absurd. To the best of my knowledge, Novell has not even once requested that any of its Mono-based applications be accepted as core GNOME applications, never mind forced one to be accepted. I should also note that Tomboy, the only Mono application in GNOME currently, was not a Novell project at the time it was accepted for inclusion in GNOME.

This post has been brought to you by the letters G and M and the number 2.

Saturday, February 23, 2008

Speaking of Hack Week Projects...

I didn't really have any ideas for Hack Week this year, so I had started off working with Paolo and Zoltan on optimizing Mono's RegEx engine. Unfortunately, one week wasn't enough to accomplish what I had hoped to accomplish. Luckily, Paolo and Zoltan were able to improve Mono's RegEx performance significantly without the need for the hacks I had planned to implement (and in fact did a terrific job; I'm not sure my ideas would have made much of a difference).

That said, I'd like to jot down a few ideas I just had for the next Hack Week:

  • Implement an "Evolution Plugin" project type for MonoDevelop which would setup everything you need to start writing plugins for Evolution in C#. This would include things like templating the E-Plug xml files, pulling in appropriate Evolution# bindings and the like.
  • Implement an Evolution Plugin in C# that would filter spam messages based on charset information. Most of the spam I get seems to use a russian or asian charset, so a filter like this would be extremely valuable to me.

My biggest interest is in templating out an Evolution Plugin project type for MonoDevelop because I'd really like to see more outside developers writing plugins for Evolution and I think that this would be a great way to lower the bar, so to speak.

Wednesday, February 6, 2008

Optimizing GMime's UUEncoder

This past weekend I was talking with Andreia about how Pan is built on top of GMime and takes advantage of my awesomely speedy uuencode/uudecode routines, which reminded me that I had done some performance comparisons of GMime's uuencode program vs. the one in the GNU sharutils package a number of years ago.

I had compared GMime 1.90.0 (which was a pre-release of GMime 2.0) and GNU sharutils 4.2.0 and the results were pretty impressive... GMime's uuencoder was on the order of 3 times faster than the one in sharutils and produced exactly the same results.

The uudecoder and the base64 encoder/decoder were all roughly on the order of 7 times faster than those in GNU sharutils, so all around GMime outperformed GNU sharutils by quite a bit.

Anyways, re-reading my test results got me thinking that my uuencode routines could probably be optimized a bit more, as they were lagging a bit behind the base64 encoder routine and there's really no reason they should be that far off.

Well, tonight I finally got off my butt and decided to take a look and figure out why. Upon scrolling down to my uuencode_step() routine, I immediately saw why:

Each loop would collect up to 3 bytes from the input and bit-shift them into a 32-bit 'saved' variable (a state variable used for incrementally uuencoding an input stream). Then, if I had successfully extracted 3 bytes from the input, I would extract them out of 'saved' into 3 unsigned char variables. At that point I would encode them into a temporary output buffer. When this output buffer ('uubuf') grew to 60 bytes, I'd flush it to the real output buffer with a memcpy().

All of this extra copying of data around adds up after a while and really begins to impact performance.
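Conceptually, the fix boils down to encoding each 3-byte group straight into the real output buffer instead of bouncing it through 'saved', three temporary chars, and 'uubuf'. Something along these lines (a simplified sketch of the idea, not the actual uuencode_step() code: the uuencode_line() name is made up, and incremental state and partial trailing groups are ignored):

/* simplified sketch of direct-to-output uuencoding: each 3 input bytes
 * become 4 output characters written straight into the output buffer,
 * with no intermediate uubuf and no memcpy() flush */
#include <stddef.h>

/* classic uuencode mapping: a 6-bit value becomes a printable char;
 * 0 is traditionally mapped to '`' rather than ' ' */
#define UU_ENC(c) ((unsigned char) ((((c) & 0x3f) != 0) ? (((c) & 0x3f) + 0x20) : 0x60))

static size_t
uuencode_line (const unsigned char *in, size_t inlen, unsigned char *out)
{
    unsigned char *outptr = out;
    size_t i;

    /* line-length byte: the number of raw bytes encoded on this line (max 45) */
    *outptr++ = UU_ENC (inlen);

    for (i = 0; i + 3 <= inlen; i += 3) {
        /* pack 3 bytes into 4 six-bit groups and emit them directly */
        *outptr++ = UU_ENC (in[i] >> 2);
        *outptr++ = UU_ENC ((in[i] << 4) | (in[i + 1] >> 4));
        *outptr++ = UU_ENC ((in[i + 1] << 2) | (in[i + 2] >> 6));
        *outptr++ = UU_ENC (in[i + 2]);
    }

    *outptr++ = '\n';

    return (size_t) (outptr - out);
}

(The real code still has to carry the incremental state across calls and handle a trailing 1- or 2-byte group, but the point is that each encoded character lands in the output buffer exactly once.)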

Before making any changes, I timed how long it took the original version of my uuencode_step() function to encode linux-2.6.24.tar.gz on my system[1]. An average result over numerous runs was as follows:

[fejj@localhost ~]$ time `gmime-uuencode linux-2.6.24.tar.gz linux-2.6.24.tar.gz > /dev/null`
real    0m0.470s
user    0m0.412s
sys     0m0.052s

After my rewrite, my new results were closer to:

[fejj@localhost ~]$ time `gmime-uuencode linux-2.6.24.tar.gz linux-2.6.24.tar.gz > /dev/null`
real    0m0.291s
user    0m0.252s
sys     0m0.024s

For the sake of comparison, the best time I could manage to get from GNU sharutils 4.6.2 was as follows:

[fejj@localhost ~]$ time `uuencode linux-2.6.24.tar.gz linux-2.6.24.tar.gz > /dev/null`
real    0m1.386s
user    0m1.276s
sys     0m0.092s

The new implementation of uuencode_step() in gmime/gmime-utils.c has been committed to the gmime svn module on GNOME's subversion server, revision 1216 - this change should appear in the next release of GMime which will likely be 2.2.17.

Notes:

1. The system I tested this on was my Lenovo T61 laptop w/ a 7200 RPM harddrive running OpenSuSE 10.3 with various updates. The kernel was version 2.6.22.13-0.3-bigsmp.

From /proc/cpuinfo:

model name : Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
cpu MHz : 800.000

(i.e., my CPU was scaled down at the time of testing)

2. The GMime uuencode implementation uses a GMimeStreamFs for input as well as output. This stream class is a wrapper around the POSIX I/O functions and, unfortunately, it has the sub-optimal need to perform an lseek() before each read() or write() call in order to make sure that the underlying file descriptor is in the expected position, because multiple streams may re-use the same fd.
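In other words, before each read the stream does something roughly like this (a hypothetical sketch of the pattern, not the actual GMimeStreamFs code):

/* hypothetical sketch of the seek-before-I/O pattern: because several
 * stream objects may share one file descriptor, each stream re-seeks
 * to its own saved position before reading */
#include <unistd.h>
#include <sys/types.h>

static ssize_t
stream_fs_read (int fd, off_t *position, void *buf, size_t len)
{
    ssize_t nread;

    /* make sure the fd is where this stream last left off */
    if (lseek (fd, *position, SEEK_SET) == (off_t) -1)
        return -1;

    if ((nread = read (fd, buf, len)) > 0)
        *position += nread;

    return nread;
}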

I mention this because an obvious rebuttal to GMime's superior performance might be to suspect that GMime's uuencode implementation "cheated" by using an mmap()'d input buffer where the GNU sharutils implementation might not.

Monday, February 4, 2008

Re: DBus OOM handling

I didn't find a way to comment on Havoc's insights on OOM error handling, so I'm posting here.

Havoc, thanks for your first-hand experience on the topic! A 30-40% increase in lines of code is insane, but not far from what I expected. The fact that you discovered that most of your original OOM handling code was broken was also not a surprise (and I don't mean to suggest that your code is sub-par, but rather that this is not easy to get right unless you have test cases for each code path).

I'll have to check out the code when my eyes are a bit less strained (they're at the point where they're beginning to tear up); I'm now very interested!

Sunday, February 3, 2008

Worse is Better in the form of Autosave

There's recently been some talk about how GLib is poorly designed software because g_malloc() abort()s when the underlying malloc implementation returns NULL (suggesting an OOM condition), and therefore should never be used to write real-world applications because your calling code doesn't have a chance to do proper error checking. (Although it was brought up that you can actually use g_try_malloc() and/or plug in your own malloc implementation underneath g_malloc(), which could trivially notify the application that an OOM condition was hit before returning to g_malloc().)
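For reference, the difference that parenthetical refers to looks roughly like this (a trivial sketch; allocate_buffer() is a made-up name):

#include <glib.h>

gpointer
allocate_buffer (gsize len)
{
    gpointer buf;

    /* g_try_malloc() returns NULL on failure instead of abort()ing
     * the way g_malloc() does, leaving the decision to the caller */
    if ((buf = g_try_malloc (len)) == NULL)
        g_warning ("could not allocate %" G_GSIZE_FORMAT " bytes", len);

    return buf;
}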

While at first this argument seems correct and you begin to think "oh my god, the sky is falling", it's important to actually stop and think about the issue a bit first.

GLib was originally a utility library written as part of the Gtk+ widget toolkit in order to make the toolkit developers' lives easier. When designing a widget toolkit (Gtk+ in this case) for real-world programmers to use, simplicity is key. If your widget toolkit is hard to use because it offers a way to notify the application developer of every conceivable error condition, then no one will use it because it is "too hard".

What good is a library that is so hard to use that nobody uses it? It's no good, that's what.

The problem is that the idealists complaining about GLib's g_malloc() have only considered checking g_strdup()'s return value for NULL, or maybe gone as far as gtk_foo_new(). They have not considered that the rendering pipeline may need to allocate memory as it renders a widget, with no ideal way to pass the OOM error back up to the application, because the top of the call stack may in fact be gtk_main() and not some callback function implemented in the application's code itself.

The idealists argue that without the ability to check every malloc() for a NULL return and chain the failure back up to a high enough level in the call stack to handle it properly, users could lose their unsaved documents if the application is, for example, a word processor. They argue that a properly designed application will always properly handle all error conditions and pass errors back up the stack, where an emergency buffer can be used to show the user an "Out of memory" error dialog and/or save all of the user's unsaved work.

The problem with this school of thought is that the simple act of rendering your error dialog may require memory that you do not have available (if we are truly OOM as opposed to simply being unable to allocate that ~4GB buffer that the application tried to allocate due to poor arithmetic).

There is one thing that they are correct on, however, and that is that losing a user's document is a Bad Thing(tm).

What they haven't considered, however, is that it's possible to prevent data loss without the need to implement their really complex OOM handling code:

I dub thee, auto-save.

Yes, I will assert that auto-save is our savior and that it is, in fact, the only feasible solution in what I affectionately refer to as: The Real World.

In the Real World, applications are built on top of other people's code that you do not control and have neither the time nor the luxury to audit; you simply have to trust that it works as advertised.

Let's imagine, for a minute, that you write a word processor application using some toolkit (other than Gtk+, obviously) that upholds your idealist design principles in that it is designed in such a way as to be able to notify your word processor application about OOM conditions that it experienced way down in the deep dark places of the rendering pipeline. And let's, for argument's sake, assume that your application is flawlessly written - because you are an idealist and thus your code is perfectly implemented in all aspects, obviously.

Now imagine that a user is using your word processor application and the version of the widget toolkit (s)he's using has a bug in some error handling case that is unexpectedly hit and the error doesn't properly get passed up the call stack, but instead crashes the application because the toolkit's bug corrupts some memory.

Oops.

All the hard work you did making sure that every possible error condition in your code is handled properly never even comes into play, because the application crashed in a library you trusted to be implemented flawlessly, and the user loses the document they were writing.

Your effort was all for naught in this particular case.

What's the solution? Auto-save.

What have we learned from this? Auto-save is needed even if your toolkit is sufficiently designed to pass all errors (including OOM errors) back up the stack.
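And auto-save really is simple to do. Here's a rough sketch using a GLib timeout (Document, document_is_dirty(), and document_autosave() are made-up stand-ins for whatever your application actually has):

/* minimal, hypothetical sketch of periodic auto-save using a GLib timeout */
#include <glib.h>

typedef struct _Document Document;

gboolean document_is_dirty (Document *document);
void     document_autosave (Document *document);

static gboolean
autosave_cb (gpointer user_data)
{
    Document *document = user_data;

    /* only touch the disk if there is unsaved state */
    if (document_is_dirty (document))
        document_autosave (document);

    /* returning TRUE keeps the timeout firing */
    return TRUE;
}

void
document_enable_autosave (Document *document)
{
    /* auto-save once a minute (interval is in milliseconds) */
    g_timeout_add (60 * 1000, autosave_cb, document);
}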

Once you've implemented auto-save, though, what are all those custom OOM-checks for each and every malloc() call in your application really worth?

Zilch.

So why not use something like g_malloc() at this point?

Once your system is OOM, the only reasonable thing you can do is save any state you don't already have saved and then abort the application (not necessarily using the abort() call). But if you already have all your important state pre-saved, then all you have left to do is shut down the application (because you don't have enough memory resources to continue running).

Where does Worse is Better come in, you ask?

Well, arguably, the auto-save approach isn't as ideal as implementing proper fallback code for every possible error condition.

Auto-save is, however, Better because it works in the Real World and is Good Enough in that it achieves the goal of preventing the loss of your user's document(s) and because it is far easier to implement with a lot fewer points of failure.

Fewer points of failure means that it is a lot more likely to work properly. By using the auto-save approach, you can focus on making that code robust against every conceivable error condition with far less developer time and resources, meaning you are able to keep both cost and data loss down, which makes everyone happy.

Code Snippet Licensing

All code posted to this blog is licensed under the MIT/X11 license unless otherwise stated in the post itself.