http://www.scottaaronson.com/blog/?p=1400
https://news.mit.edu/2015/3q-scott-aaronson-google-quantum-c...
In the second link, this jumps out at me: "The first issue is that the problem instances where the comparison is being done are basically for the problem of simulating the D-Wave machine itself. There were $150 million dollars that went into designing this special-purpose hardware for this D-Wave machine and making it as fast as possible. So in some sense, it’s no surprise that this special-purpose hardware could get a constant-factor speedup over a classical computer for the problem of simulating itself."
That actually gives me an idea. Instead of comparing against the BF competitors, I could compare a massively-parallel BF CPU to 256 BF interpreters communicating with each other through IPC on a general-purpose computer. I'd show the CPU performs many times better. It's the closest thing I can think of to how D-Wave does its benchmarking. The difference is that the $150 million is in neither my bank account nor my transaction history.
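The software baseline in that comparison could be mocked up along these lines. Everything here is illustrative, not a real benchmark: a toy BF interpreter, a made-up test program, and a handful of worker processes receiving their programs over IPC pipes (swap in 256 for the real run).

```python
import multiprocessing as mp

def bf_run(code, tape_len=30000):
    """Minimal Brainfuck interpreter (no I/O): returns the cell under the
    data pointer when the program halts."""
    tape = [0] * tape_len
    ptr = ip = 0
    # Precompute matching-bracket jump targets.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while ip < len(code):
        c = code[ip]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '[' and tape[ptr] == 0: ip = jumps[ip]
        elif c == ']' and tape[ptr] != 0: ip = jumps[ip]
        ip += 1
    return tape[ptr]

def worker(conn):
    # Receive a program over the pipe, run it, send the result back.
    conn.send(bf_run(conn.recv()))

if __name__ == '__main__':
    prog = '++[>+++<-]>'  # toy program: computes 2 * 3 = 6
    procs, parents = [], []
    for _ in range(8):  # would be 256 in the actual comparison
        parent, child = mp.Pipe()
        p = mp.Process(target=worker, args=(child,))
        p.start()
        parent.send(prog)
        procs.append(p)
        parents.append(parent)
    results = [c.recv() for c in parents]
    for p in procs:
        p.join()
    print(results)  # one result per interpreter process
```

Wrap the parallel section in a timer, run the same workload on the BF CPU, and you have roughly the D-Wave-style "simulate the special-purpose machine on general-purpose hardware" benchmark.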
"One interesting factoid is that they were able to translate Wikipedia in its entirety from English to Russian using 90% of the currently deployed FPGAs in around 100 ms. Insane stuff."
Didn't know about that project. Pretty cool. Yeah, the FPGA projects have been doing all kinds of stuff like that going back to at least the '90s, from my reading. The speedups could be over fifty-fold; some claimed three digits. Other programs that are harder to parallelize and reduce... which is basically what an FPGA does... might see under a 100% speedup, a tiny speedup, or even a loss if it was a sequential algorithm up against an ultra-optimized, sequential CPU like Intel's.

The latest work, which I believe started with '90s projects, is software that automatically synthesizes FPGA logic from the fast path of an application written in a high-level language, then glues it back into the regular application running on a regular CPU. You can't get the speedup of an actual hardware design, but it makes the boosts easier to get if the problem supports good synthesis. Tensilica is another example: its Xtensa CPU is customized... from the CPU itself to the compilation toolchain... to fit your application. Container people compile and deliver containers; Tensilica compiles and delivers apps with a custom CPU.
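The "glue" half of that synthesis flow is just a dispatch pattern: route the hot function to the accelerator when one is present, fall back to the plain software path otherwise. A sketch, with a hypothetical stub standing in for the synthesized FPGA kernel (nothing here is a real HLS tool's API):

```python
def popcount_sw(xs):
    """Software hot path: total set bits across a list of ints."""
    return sum(bin(x).count('1') for x in xs)

def popcount_fpga(xs):
    """Stand-in for the synthesized FPGA kernel; here it just mirrors
    the software result so the glue code can be exercised."""
    return popcount_sw(xs)

HAVE_FPGA = False  # a real flow would probe for the device at runtime

def popcount(xs):
    # The glue: the rest of the application calls this one entry point
    # and never knows whether hardware or software did the work.
    return popcount_fpga(xs) if HAVE_FPGA else popcount_sw(xs)

print(popcount([3, 7, 8]))  # 2 + 3 + 1 = 6
```

The synthesis tool's job is generating `popcount_fpga` from the high-level source; the application keeps its original structure either way.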