Programs written for clusters -- including those that use MPI -- are almost always communication bound. The limiting factor here isn't these little processors but the interconnect speed. If I remember correctly, Raspberry Pi has 10/100 megabit Ethernet. [Edit: just checked, this is the case.] So while this looks like a lot of fun, it's not very useful for anything meaningful yet.
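To make the "communication bound" point concrete, here's a rough back-of-envelope sketch in Python. The payload size is made up for illustration; it just shows the ideal wire time on 10/100 vs. gigabit Ethernet, ignoring protocol overhead and latency:

```python
def transfer_seconds(payload_bytes, link_mbit_per_s):
    """Ideal wire time for a payload: bytes divided by link bandwidth.
    Ignores protocol overhead, latency, and contention."""
    link_bytes_per_s = link_mbit_per_s * 1e6 / 8
    return payload_bytes / link_bytes_per_s

# Hypothetical example: exchanging a 100 MB partition between two nodes.
payload = 100e6  # 100 MB, an illustrative number

t_fast_ethernet = transfer_seconds(payload, 100)   # 10/100 Ethernet: 8 s
t_gigabit = transfer_seconds(payload, 1000)        # gigabit Ethernet: 0.8 s
```

If each compute step on a node takes well under those transfer times, the cluster spends most of its wall-clock time waiting on the network, which is exactly the situation a 100 megabit link puts you in.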
Of course it's not fair to compare this to an infiniband cluster (that's not the point of this exercise), but I'd really be interested to see a cluster built on $0.50 ARM chips with at least a gigabit ethernet interconnect. A couple of years from now -- given the low entry cost and lower infrastructure costs (cooling/power consumption/etc) -- that could be a game changer.
There are a few different companies that have ARM + custom interconnect systems out there or in development. They're not necessarily cost-competitive yet, but they're an interesting start.
Oddly enough, Cray recently sold their interconnect tech to Intel [0]. Intel seems to be planning to integrate it on-chip down the road [1], which seems to leave Cray serving as a somewhat quirky system integrator longer-term.
It's even worse than that: the Pi's Ethernet is connected via USB. What I'd love to see are 16 or 32 ARM cores on a single card connected via a high-speed interconnect such as InfiniBand, with 4 or 8 of these cards packed into a chassis.