At first blush, it sounds like this kind of hardware would be extremely useful in data compression, specifically entropy coding. LZMA, for example, applies a probability prediction to each bit. Almost any compressor could probably make use of a "probability engine," which would be a huge boon for anything that deals with data. If every device could encode/decode data an order of magnitude faster, more aggressive compression could be added to everything: movies, music, HTML, voice communication.
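To make the "probability prediction per bit" idea concrete, here's a toy sketch of binary arithmetic coding, the kind of coder LZMA's range coder is built on. This is not LZMA's actual implementation (which uses integer range arithmetic and adaptive contexts); it's a minimal float-based illustration of how each bit narrows an interval according to a predicted probability, which is the per-bit work a hardware "probability engine" might accelerate:

```python
# Toy binary arithmetic coder: each bit is coded against a predicted
# probability p1 of the bit being 1. Real coders (LZMA's range coder)
# use integer arithmetic and adaptive per-context probabilities; this
# static-model float version just shows the core idea.

def encode(bits, p1):
    """Narrow [low, high) once per bit; return a number in the final interval."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * (1.0 - p1)  # boundary between 0- and 1-regions
        if b:
            low = split   # bit 1: keep the upper sub-interval
        else:
            high = split  # bit 0: keep the lower sub-interval
    return (low + high) / 2  # any value inside the interval identifies the sequence

def decode(code, p1, n):
    """Replay the same interval-narrowing to recover n bits from the code value."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * (1.0 - p1)
        if code >= split:
            out.append(1)
            low = split
        else:
            out.append(0)
            high = split
    return out
```

The more skewed the predicted probabilities are (and the more accurate the predictor), the smaller the final interval's encoding cost; that prediction step is exactly where probabilistic hardware could plausibly help.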
"...has received more than $20 million in government funding from DARPA and other agencies..."
I'd also speculate that cryptographic implementations could benefit from this sort of technology, although I'm not very familiar with the inner workings of AES, etc. Maybe that explains their DARPA funding.
Further, it's my understanding that one of the primary limitations of GPGPU processing (and parallelism in general) is the large performance penalty involved in branching, which severely limits the use of flow control. Perhaps it would be possible to offload these instructions to a probability processor, alleviating some of the flow-control slowdown.
The one example I could find on their website used Sudoku, which really requires only logic, possibly with backtracking, to solve. I wonder what probability calculations they have put into hardware. Bayes' theorem? Expectation? That one is just an inner product.
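For what it's worth, both candidates the comment mentions are tiny computations in the discrete case, which is presumably why one wonders what needs dedicated hardware. A minimal sketch (my own illustration, nothing from their product):

```python
# Expectation over a discrete distribution really is "just inner product":
# E[X] = sum_i p_i * x_i, the dot product of the probability vector with
# the value vector.

def expectation(values, probs):
    return sum(v * p for v, p in zip(values, probs))

# A discrete Bayes update is barely more: multiply the prior pointwise by
# the likelihood, then renormalize.

def bayes_update(prior, likelihood):
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)  # evidence; normalizing constant
    return [u / z for u in unnorm]
```

The hardware-worthy part is presumably not these formulas themselves but doing huge numbers of such multiply-normalize steps cheaply, e.g. across large graphical models.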
I suppose the potential uses would be more clear if a language spec was available. It's all very foggy to me at the moment.
Basically, they use a lower supply voltage than is traditionally needed. Lowering the CMOS supply voltage makes the logic probabilistic due to thermal noise, trading some probability of error for huge power savings.
Now, they can design their logic to tolerate the errors, or use the probabilistic behavior to represent random variables directly, so that the output of the logic is a sample from a distribution.
Doubtful; more likely analog circuits, in my opinion. Memristors just haven't been used to build anything substantial in reality yet, and I'd imagine the first people to do so will be at HP or similar rather than a startup.
Or maybe I've just read too much science fiction.