
So, an inherently error-prone computation is being corrected by another very error-prone computation?

I feel like this is basically how humanity operates as a whole, and that seems to produce usable results, so why the heck not?

No problem, said von Neumann. https://www.scottaaronson.com/qclec/27.pdf

What he actually said: "as long as the physical error probability ε is small enough", you can build a reliable system from unreliable parts.

So it remains for you to show that AI.ε ~= QC.ε, since von Neumann proved the case for a system built from similar parts (vacuum tubes) that all share the same error probability.
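To make the threshold idea concrete, here's a toy sketch in Python (my own illustration, not von Neumann's actual multiplexing construction): a 3-way majority vote over parts that each fail independently with probability ε fails with probability about 3ε², so redundancy only helps once ε is below a threshold.

    import random

    def part_fails(eps):
        """One unreliable part: returns True if it produces the wrong output."""
        return random.random() < eps

    def voted_fails(eps, copies=3):
        """True if a majority of `copies` independent parts fail together."""
        return sum(part_fails(eps) for _ in range(copies)) > copies // 2

    def estimate(eps, trials=200_000):
        return sum(voted_fails(eps) for _ in range(trials)) / trials

    for eps in (0.4, 0.1, 0.01):
        print(f"eps={eps}: raw error {eps:.4f} -> voted error {estimate(eps):.4f}")
    # Exact value is 3*eps**2 - 2*eps**3, which is below eps only for eps < 1/2;
    # try eps = 0.6 and the vote does worse than a single part.

In this toy model you can iterate the construction to drive the effective error rate as low as you like below the threshold; above it, adding redundancy only makes things worse.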

(p.s. thanks for the link)


A quick, careless Google search didn't yield Scott Aaronson's take on this; as a layperson, his is the one take I'd regard seriously.

Has he remarked on it and my search-fu failed?


I've never seen so much money spent on a fundamentally flawed tech since maybe Theranos. I'm really starting to doubt the viability of the current crop of quantum computing attempts. I think there probably is some way to harness quantum effects, but I'm not sure that computing with an inherently high margin of error is the right way to do it.

I feel like these are extremely different things being compared.

For a lot of technology, most really, the best way to study how to improve it is to build the best thing you know how to and then work on making it better. That's what's been done with all the current quantum computing attempts. Pretty much all of the industry labs with general-purpose quantum computers can in fact run programs on them; they just haven't reached the point where they're running programs that are useful beyond proving out and testing the system.


I'm optimistic about current quantum computers because they are a tool for studying wave function collapse. I hope they will help us understand the relation between the number of particles and how long a system can stay in an entangled state, which would point to a physical interpretation of quantum mechanics (different from the "we don't talk about wave function collapse" Copenhagen interpretation).

The non-experts here might be interested in why you’d want to do that. Do you have explanations or links about it?

In short, quantum mechanics has a major issue at its core: quantum states evolve by purely deterministic, fully time-reversible, evolutions of the wave function. But, once a classical apparatus measures a quantum system, the wave function collapses to a single point corresponding to the measurement result. This collapse is non-deterministic and not time-reversible.

It is also completely undefined in the theory: the theory doesn't say anything at all about which interactions count as "quantum interactions" that keep you in the deterministic time-evolution regime, and which interactions count as "a measurement" and collapse the wave function.

So, this is a major gap in the core of quantum mechanics. Quantum computers are all about keeping the qubits in the deterministic evolution regime while running the program, and performing a measurement only at the end to get a classical result out of it (and then repeating that measurement a bunch of times, because this is a statistical computation). So, the hope is that they might shed some light on how to precisely separate quantum interactions from measurements.
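As a concrete (if trivial) picture of those two regimes, here's a single-qubit sketch in Python with numpy; the gate and the sampling step are my own toy illustration, not any particular machine's interface. The unitary part is deterministic and reversible; the measurement step is a probabilistic sample from |amplitude|², which is why a quantum program measures only at the end and gets rerun many times to build up statistics.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate, a unitary matrix

    state = np.array([1.0, 0.0], dtype=complex)    # start in |0>
    state = H @ state                              # deterministic evolution: (|0> + |1>)/sqrt(2)

    # Reversible: applying H again undoes the step exactly.
    assert np.allclose(H @ state, [1.0, 0.0])

    state = H @ np.array([1.0, 0.0], dtype=complex)  # re-prepare the superposition

    def measure(state, rng, shots):
        """Born rule: sample 0/1 with probability |amplitude|**2 (the collapse step)."""
        probs = np.abs(state) ** 2
        return rng.choice([0, 1], size=shots, p=probs)

    rng = np.random.default_rng(0)
    samples = measure(state, rng, shots=1000)
    print("fraction of 1s:", samples.mean())   # ~0.5, but each individual shot is random

Nothing in the simulation says where the boundary between the unitary step and the sampling step has to sit; in the code it's wherever we choose to call measure(), which is exactly the gap in the theory described above.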


I think quantum computing research makes a lot more sense through the lens of “real scientists had to do something for funding while string theory was going on”.

Quantum computing may or may not produce industrial results in the next N years, but those folks do theory and they often, if not usually, (in)validate it by experiment: it's science.
