It's actually quite disappointing to see that Linux kernel performance has essentially not increased over 5 years. A lot of nice functionality was added in that time, though, so the fact that there's no net loss is at least something.
I wonder if this report might give the kernel folks the boost they need to be truly innovative moving forward.
the choice of some of these benchmarks is weird. they have a bunch of cpu-bound benchmarks. why exactly would you expect those to change as the kernel changed? (i would expect changing the compiler to have a bigger influence.)
the ones that do change (and improve mostly) are ones where the kernel actually plays a role--file system, network, etc.
the iozone benchmark is a bit surprising: it shows no improvement. but they are hitting 200MB/s, so i wonder if that's simply because linux is hitting the max speed of the disk.
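(not from the article, just a rough sketch of how one could sanity-check that: read a big file sequentially and see what throughput the raw hardware gives you, then compare against the iozone number. the file path and chunk size below are placeholders, and you'd want the file to be out of the page cache for an honest result.)

import sys
import time

CHUNK = 4 * 1024 * 1024  # 4 MiB reads, large enough to approximate sequential streaming

def sequential_read_mb_per_s(path: str) -> float:
    """Read the whole file in big chunks and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. a multi-GB test file created beforehand (placeholder)
    print(f"{sequential_read_mb_per_s(path):.0f} MB/s sequential read")

if that prints roughly 200MB/s too, the benchmark is just measuring the disk, not the kernel.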
In general Moronix doesn't know what they're doing, but such benchmarks can be useful as regression tests; if the kernel isn't giving you full hardware performance then you have a problem.
the choice of some of these benchmarks is weird. they have a bunch of cpu-bound benchmarks. why exactly would you expect these to change as the kernel changed?
Scheduling is a non-trivial task, and the kernel scheduler underwent significant work during 2.4 and early 2.6. I think the reason those graphs are so boring is that the tests involve single-process/single-thread benchmarking.
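To illustrate, here's a rough sketch (mine, not from the article) of the kind of two-process ping-pong microbenchmark where scheduler and context-switch changes between kernel versions would actually show up; the round count is arbitrary, and it assumes a Linux/Unix system where os.fork is available.

import os
import time

ROUNDS = 100_000  # arbitrary; enough iterations to average out noise

def pingpong(rounds: int) -> float:
    """Bounce one byte between parent and child over two pipes,
    forcing a context switch on every hop; return seconds per round trip."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child  -> parent

    pid = os.fork()
    if pid == 0:                       # child: echo whatever it receives
        os.close(p2c_w)
        os.close(c2p_r)
        for _ in range(rounds):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)

    os.close(p2c_r)                    # parent side
    os.close(c2p_w)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / rounds

if __name__ == "__main__":
    per_round = pingpong(ROUNDS)
    print(f"~{per_round * 1e6:.1f} us per round trip (two context switches each)")

Each round trip forces two context switches, so differences in the scheduler's wakeup path land directly in the per-round time; a single-threaded CPU-bound test never exercises that path.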
Simple way to get me to NOT read your article: give it a flash-based popup-ad and split the article into 8 separate pages. I just want to see the data. Fail.