
I think your question is excellently phrased. The answer for anything data science-y is "no." The bottleneck will be transferring the input data onto the quantum CPU.

For algorithms like HHL that have superclassical performance, a complex superposition encoding the data needs to be created first. This state is subsequently "consumed" by the algorithm. The no-cloning theorem forbids creating copies of the encoded state, and hence the encoding step needs to be repeated every time the algorithm is run.
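To make the cost concrete, here is a minimal NumPy sketch of amplitude encoding (the data values and the amplitude_encode helper are illustrative, not from the original discussion): N classical values become the amplitudes of a log2(N)-qubit state, and preparing such a state on hardware generically requires a circuit with O(N) gates.

    import numpy as np

    # Amplitude encoding: N classical values become the amplitudes of a
    # log2(N)-qubit state |x> = sum_i (x_i / ||x||) |i>.
    def amplitude_encode(data):
        x = np.asarray(data, dtype=float)
        return x / np.linalg.norm(x)  # squared amplitudes now sum to 1

    data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]  # 8 values -> 3 qubits
    psi = amplitude_encode(data)

    # Running HHL consumes |x>, and no-cloning forbids copying it, so this
    # preparation (generically an O(N)-gate circuit on hardware) must be
    # repeated for every single run of the algorithm.
    print(psi, psi @ psi)  # normalized state; inner product ~ 1.0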

As another example, consider Grover's search, which is sub-linear in the number of calls to an oracle function. If the oracle references a linear array of data, it needs to evaluate on superpositions of array indices. In other words, the entire dataset needs to fit in "quantum" memory.
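Here is a small statevector simulation (plain NumPy; grover_search and its parameters are made up for illustration) showing both points: the marked item is found in about pi/4 * sqrt(N) oracle calls, but each call applies the oracle to a superposition of all N indices at once.

    import numpy as np

    def grover_search(n_items, marked):
        # Uniform superposition over all n_items array indices.
        state = np.full(n_items, 1.0 / np.sqrt(n_items))
        for _ in range(int(np.pi / 4 * np.sqrt(n_items))):
            # Oracle: flip the sign of the marked index. Answering this
            # query in superposition means the oracle must be able to
            # address every array slot -- the whole dataset sits in
            # quantum-accessible memory.
            state[marked] *= -1.0
            # Diffusion: reflect all amplitudes about their mean.
            state = 2.0 * state.mean() - state
        return state

    state = grover_search(n_items=64, marked=42)
    print(np.argmax(state**2))  # -> 42, after only ~6 oracle calls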

Using a quantum CPU can therefore only be sensible for computationally hard problems whose difficult instances can be specified by a relatively small number of bits. Integer factoring is the canonical example: a few thousand bits specify an instance believed to be classically intractable.

