Fair enough. I've read that using CUDA (or another GPU-based language) you can get at least 10x the GFLOPS of a 4-CPU Xeon [1], though, and RSA cracking should parallelize easily, if I'm understanding the process correctly. And the high-end NVIDIA cards in the G2 instances have 1,536 CUDA cores each. No, I'm not kidding. The one benchmarked in the link above has about 1/3 the GFLOPS of the one in the G2 instances.
And it looks like an on-demand G2 instance is $0.65/hour (though it can be lower on the spot market and in the reserved instance marketplace). So if there's a 120x speed improvement over the "single core 2.2GHz AMD Opteron" (and that's assuming each Opteron core is as fast as the Xeon core above), for only 11x the cost... well, it gets a lot cheaper.
In fact, if I haven't done my math wrong, it ends up at about $94,900 of full-price instance time (less if you get spot or reserved instances). [2] To win the $200k prize. Hmm....
[1] http://archive.benchmarkreviews.com/index.php?option=com_con...
[2] "the equivalent of almost 2000 years of computing on a single core 2.2GHz AMD Opteron": that's 17,520,000 hours. If the G2 instance gets you a 120x performance improvement, that's 146,000 hours. At $0.65/hour, that's $94,900.
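For anyone who wants to check the footnote's arithmetic, here's a quick back-of-envelope sketch. The 2000-year figure, the 120x speedup, and the $0.65/hour rate are all assumptions carried over from the estimate above, not measured numbers:

```python
# Back-of-envelope cost estimate (all inputs are the assumed figures
# from the comment above, not benchmarks).
opteron_hours = 2000 * 365 * 24      # ~2000 years of single-core time = 17,520,000 hours
speedup = 120                        # assumed G2-vs-single-Opteron-core factor
rate_per_hour = 0.65                 # assumed on-demand G2 price, $/hour

g2_hours = opteron_hours / speedup   # hours of G2 instance time needed
cost = g2_hours * rate_per_hour      # total spend at the on-demand rate

print(f"{g2_hours:,.0f} hours -> ${cost:,.0f}")  # 146,000 hours -> $94,900
```

Note that 146,000 hours is still ~16.7 years of wall-clock time on one instance, so you'd want to spread it across many instances in parallel, which keeps the total cost the same but makes the timeline plausible.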