He's not wrong about the narrow algorithmic use cases (so far), but he's completely missing the utility for simulating quantum phenomena (chemistry, microbiology, materials science). That use case alone justifies investing in them even if you don't care about advancing science.
For chemistry, you usually care about the quantum mechanics of an ensemble. Simulating a single molecule is less useful, and chemistry already has very good heuristics for that. We would care about something like: we have a 2-dimensional membrane with lipids and cholesterols, and, say, two proteins interacting in this membrane. Now, give me a molecule that either enhances or inhibits the interaction, and please constrain your search to molecules that can cross the blood-brain barrier and which we can synthesize in under five "undergrad ochem" steps[0] using a set of feeder molecules with low supply-chain risk (let's not depend on glutamate from China, e.g.).
I'm not sure this is something that QC can easily solve.
[0] undergrad ochem is actually a pretty good heuristic for which reactions one can perform at industrial scale, though high scale reactions might require catalysts you don't learn about in ochem
> That use case alone completely justifies investing into them
Not at the very massive scale of investment required.
Take GPGPU (general-purpose graphics processing units), for example. The vast investment needed to get there was funded by the gaming industry. Supercomputers as an investment target were tiny compared to gaming and general-purpose computing.
The AI boom was created on the coattails of the gaming industry, and the benefits spilled into scientific computing as well.
To start, we'll neglect the bosonic degrees of freedom (the displacement of the atoms themselves) and look at only the fermionic DoF (the electrons).
Water has 8 electrons (which a QC can treat exactly without any extra work) in a number of orbitals. In general we need 2 qubits per orbital.
Most QC demonstrations so far were performed using so-called minimal basis sets, which have a small number of orbitals and thus give inaccurate results. A better approach would be to take a large orbital basis, do a relatively expensive classical Hartree-Fock calculation, then use the orbitals from that to do the QC. This technique, when done on classical computers, is called MRCI (multi-reference configuration interaction) and is the gold standard in quantum chemistry.
So, provided we can pay the cost of doing a large-orbital HF calculation (and we can do that for fairly large molecules), we can get pretty good results using n-electrons-in-n-orbitals MRCI. So production electronic calculations of water molecules would take about 16 qubits per molecule.
The more frustrating problem is that the number of electronic interaction terms scales as N^4 in the number of orbitals, so we would very rapidly need extremely deep circuits, which are not feasible without error correction (which involves using something like 8 physical qubits for every logical qubit). There are proposals to use plane-wave basis sets (N^2 interactions), but then we need many more orbitals and thus many more qubits.
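To make the scaling concrete, here is a back-of-the-envelope sketch using only the figures from the comments above (2 qubits per orbital, N^4 interaction terms); the 100-orbital case is an illustrative basis size, not a specific calculation:

```python
# Back-of-the-envelope resource counts using the figures above:
# 2 qubits per orbital, and N^4 two-electron interaction terms
# for N orbitals.

def qubits_needed(n_orbitals):
    """Qubits for an n-electrons-in-n-orbitals calculation (2 per orbital)."""
    return 2 * n_orbitals

def interaction_terms(n_orbitals):
    """Naive count of electronic interaction terms, scaling as N^4."""
    return n_orbitals ** 4

# Water: 8 electrons in 8 active orbitals -> 16 qubits, as stated above.
print(qubits_needed(8))        # 16
print(interaction_terms(8))    # 4096
# Even a modest 100-orbital basis already implies 10^8 interaction terms,
# which is where the "extremely deep circuits" problem comes from.
print(interaction_terms(100))  # 100000000
```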
We are in practice very far from QC having a significant impact on real-life quantum chemistry. It's not at all clear that we'll ever be able to do QC on a molecule the size of a typical drug, let alone a protein.
Because qubits are just random numbers. They want to make you believe that qubits represent a range of values, which is true in theory, but in reality it's just a plain old single random number. So all QC calculations are basically random + random * random - random = ... random, of course. I think Google's only QC application was a program that generates some random hash. Other than that, QC hasn't done anything else, nor do I believe it can.
Really? How come reality keeps agreeing with me then? Where are these amazing quantum mechanics applications? You really believe in superposition? Really? How long have they been working on QC? 50 years now? It never seems to work or be ready... I wonder why.
I have no idea why you think reality agrees with you. I “believe” in superposition, entanglement, and the like because I’ve tested it myself in a lab with a six qubit quantum computer I built.
Yeah, the problem is in the name and the marketing. People think it is meant to replace normal computers, but that is totally wrong. The simulation use cases, as well as probing fundamental physics, are far more immediate and exciting.
It hasn't happened yet or won't happen any time soon, but if there's a chance RSA is going to be broken I'd rather be aware of the possibility 10 years earlier than 10 days after...
What would be the example of an outstanding, reasonably-sized (even quantum computers of the future have finite resources) quantum simulation problem that is intractable today but would unlock some economic potential, if solved?
Drug molecules are much too large for near term QC, let alone the interaction of two of them in a solvent. We're talking thousands of qubits and circuits millions of operations deep.
Quantum chemistry isn't magic, it's been around since the sixties, we understand pretty well the resources required for calculations. I happen to have a PhD in it too. AFAIK on quantum computers there are only two algorithms that are currently considered near-term feasible for this problem (VQE, QPE) and they both require a number of qubits = 2 x number of orbitals N and a number of operations between N^2 and N^4.
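Taking those figures at face value, here is a quick sanity check on the "thousands of qubits, millions of operations" claim from the comment above; N = 500 orbitals is a hypothetical round number for a drug-sized molecule, not a measured value:

```python
# VQE/QPE resource sketch from the rule of thumb above:
# qubits = 2 * N orbitals, circuit operations between N^2 and N^4.
# N = 500 is an illustrative guess for a drug-sized molecule.

def vqe_qpe_resources(n_orbitals):
    return {
        "qubits": 2 * n_orbitals,
        "ops_lower": n_orbitals ** 2,  # optimistic end of the scaling
        "ops_upper": n_orbitals ** 4,  # pessimistic end of the scaling
    }

r = vqe_qpe_resources(500)
print(r["qubits"])     # 1000 -> "thousands of qubits"
print(r["ops_lower"])  # 250000
print(r["ops_upper"])  # 62500000000 -> circuits far beyond "millions deep"
```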
It's true that there could be neat quantum computing shortcuts that maintain calculation accuracy and that aren't doable on classical computers... but then we could also imagine that some neat Quantum Chemistry trick might make classical computers much better too. (We actually have a bunch of these already but they are approximative: DFT, machine learning, pseudo potentials etc.)
Thank you for the response. i'll need to read more into VQE, QPE. Still, I'm hopeful that geometric/energy minimization methods like invariant point attention in alphafold and other geometric deep learning findings will translate into quantum chemistry someday
I wonder if quantum computing is just a mirage that results from looking too much at the time complexity of quantum algorithms versus the cost in qubits, which are still wildly expensive. Maybe qubits will just never scale to the number of qubits needed to meaningfully outperform classical computers.
Yeah lots of people think decoherence will prevent QC from becoming practically useful. Some researchers seem to think differently, but I never found an explanation for why I should be as optimistic as they are.
Doesn't he work on QC from the theory side? I assume he has at least some faith if he's spending his time on it. He's on Sean Carroll's podcast talking about this stuff but I can't remember what stance he took on it.
I know Stephen Wolfram isn't too excited. But I believe his issue was more that algorithms end with "and then a measurement happens", which might be quite tricky and possibly even cancel out any potential gains. Not sure about the details of that, but it makes superficial sense given the nature of measurement in QM.
Why would quantum measurement be a problem in any way? Obviously measurement doesn’t destroy quantum effects, else we would not be able to see QM in the first place. Obviously you have to factor in what the measurement does to your quantum computer, but that’s obviously just part of your design in the first place
I believe his point was more that the quantum measurement could very well end up costing enough time to be an issue. He didn't go into a great amount of detail, as it was mentioned more as a tangent.
The sceptic arguments I read focus on accidental decoherence during the computation. If that goes well, and classical computers only "catch up" during measurement, wouldn't this mean that the time cost of measurement has to scale super-polynomially? Is that plausible?
I got tired of speaking on behalf of a vague memory of Stephen Wolfram, so I just looked up the transcript.
> SW: Yeah, I think… I think it's not going to be true [that QC gives a speedup], that's my guess. I think what's going to happen is, if you take Shor's algorithm for factoring, which is primarily a quantum Fourier transform, that Fourier transform is done beautifully quickly because there are all these threads that are running in parallel. The problem is, every thread is somewhere in a different place in branchial space, and we observers have to corral all those things back together again in order to tell what actually happened. Usually in quantum computing one just says, "and then there's a measurement."
>
> 1:26:49.0 SW: Now, in actuality, when you have an actual device, you have all kinds of issues in making that measurement… How quickly does it decohere, all these kinds of things. There are all these very practical experimental features, and I think people have generally said, given the formalism of quantum mechanics, well, all this quantum stuff happens and then boom, we do a measurement, and the "boom, we do a measurement" is actually pretty difficult in practice with actual experiments. People have said, if we do these experiments well enough, it will become the mathematical idealization that von Neumann and others made about how measurement works, and I don't think that's going to be true. I mean, we're not sure yet, but it seems likely that, if you're honest about how the measurement works, the measurement takes effort.
That reads like Wolfram is generally pretty ignorant of the current state of quantum measurements. We can do continuous measurements now that barely perturb the system (Google "quantum non-demolition" and "weak measurement"). Also, while measurements are difficult to explain in QM (they don't follow from the Schrödinger equation alone), our theory of them is quite rich and complete now. This just seems rambling and imprecise from Wolfram. We can actually do the idealised measurement, which is why the standard quantum limit is such a big deal in gravitational wave detectors.
EDIT: I should actually qualify the above: in order to surpass the standard quantum limit, we will turn GW detectors into QND (quantum non-demolition) detectors using techniques such as frequency-dependent squeezing (see Kimble 2000), which is an ideal measurement.
Neat. I have no trouble believing Stephen Wolfram could be out of touch with the current state of QM given he's been busy with his physics project and what not. And he's a terrible speaker. That podcast was the longest in Mindscape history due to that.
I think part of his view is also partially based on the Wolfram Project multiway causal graph framework and its implications on the measurement problem. It's an interesting podcast if nothing else. Not very comprehensible, but interesting nonetheless.
Yeah. Frankly I don't know much at all about Wolfram's work, other than that Mathematica is the best maths software in the world. As far as I have gathered, his work is so strange and different that it has very little contact with most physicists. I work in quantum measurement and non-linear quantum optics and no one discusses him, either on the experimental or theoretical side. He seems to be a smart maverick with a bunch of money. I don't know if anything will come of it in the end
Without having a first clue about the mathematical details, I can tell you it's most related to the work out of Sean Carroll et al. trying to derive GR from QM, with spacetime being an emergent property of networks of quantum entanglement. Wolfram's idea is even more stripped down: hypergraphs of nodes with nothing but identity and some number of neighbour nodes. Then they're exploring the space of possible update rules for these graphs (he calls this rulial space). He claims that it should be possible to derive "all of physics" from this model, including the Schrödinger equation. He says the resulting emergent QM is most similar to Many Worlds, but interestingly, in their model it seems like branches in fact merge together again (eventually, though it might take the entire lifespan of the universe). Carroll seems to think that starting with the Schrödinger equation is cleaner and more austere (of course he's rather biased), and I tend to agree.
My gut reaction to his mention of a rulial space is to be reminded of the Calabi-Yau manifold situation in string theory. If the space is even close to similar in size to that parameter space, that would be a theoretical nightmare.
I wanted to add that indeed he does have a lot of money, and he has hired some very talented postdocs and PhDs with some of it. I have no doubt we'll see a lot of interesting new theory out of his group at least. And that's never a bad thing, even if the theory doesn't pan out.
This is the typical Stephen Wolfram thing, where he read Feynman's proposal for quantum computing and has thought about it by himself since then, and then, I would guess, figured that the experimentalists haven't had much to contribute.
How about a purely economic perspective? Something like the number of bits of DRAM you can get for the cost of a single qubit. Surely, for quantum computers to become useful from a cost POV, this number needs to shrink, yes?
So it could then be interesting to graph this ratio over time and see where it's headed. My gut is that, given the state-of-the-art nature of QC, the cost of 1 qubit should be very high, whereas one bit of DRAM should be very low. Which means 1 qubit is worth a crazy amount of DRAM.
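A minimal sketch of that metric; both prices below are made-up placeholders purely to show the shape of the calculation, not real market data:

```python
# DRAM bits per qubit, dollar-for-dollar. ALL prices below are
# hypothetical placeholders for illustration only.

dram_usd_per_gb = 3.0              # assumed commodity DRAM price
usd_per_physical_qubit = 10_000.0  # assumed cost of one qubit

dram_bits_per_usd = 8 * 1024**3 / dram_usd_per_gb
dram_bits_per_qubit = dram_bits_per_usd * usd_per_physical_qubit

# With these made-up numbers, one qubit "costs" roughly 3e13 bits of DRAM.
# The interesting question is how this ratio moves over time.
print(f"{dram_bits_per_qubit:.2e}")
```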
That's probably because "real" quantum computing today can't even factor 4-bit numbers.
The coherent-bits are really not scaling fast enough (and this is to be expected given their entanglement IMO) for anyone to really care other than snazzy startups scamming investors of their money.
Barring the sky falling, our best bet for NP hard problems is approximate solutions that have provable lower bounds on approximation ratio.
From that purview, a general heuristic for approximating combinatorial optimization problems would be a godsend. Even if algorithms like QAOA do not improve on (or only tightly match) the approximation ratios of a carefully crafted classical algorithm... a turnkey algorithm on a coherent, high-qubit-count quantum computer would change the industrial use landscape forever. The number of developer hours saved would be uncountable.
Kind of like how instead of modeling the entire economy and figuring out how to influence the behavior, you just identify a key parameter such as inflation or interest rates and modify that to affect every possible outcome.
I came here to say precisely that. We don’t know yet what they could be useful for because we don’t have a single one that’s as capable as conventional computers were in 1943.
Normal computers would have suffered the same fate if they were only good for scientific calculations.
The computer revolution was funded by demand from business, gaming, and automation. It's the cash flow that funds long-term R&D, not potential benefit.
There is no such demand for quantum computers. Everyday problems have too little complexity for them to be useful, even where an algorithm exists.
I suspect that computational biochemistry will have funders from drug firms and materials-science companies to keep funding quantum computing research, but the total sum will not be billions per year like it was in the computer revolution.
Scientific demand is enough to start something, not sustain huge annual investments.
The global supercomputer market is roughly $6-7 billion. Nvidia and AMD together spend more on R&D than the whole market has in sales. Scientific use is minuscule compared to business and entertainment needs.
Quantum computing research gets millions, or maybe a few tens of millions, in funding. That's not enough.
From my limited understanding, a quantum computer performs computations over probability distributions, rather than over numerical values. Performing precise computations over probability distributions is a serious problem in many domains - physical simulations are just one. Another example is that could allow much more efficient and precise data fusion, and as a result - a leap in AI abilities, across domains where AI is used.
Unfortunately, even though a quantum computer "manipulates wavefunctions", you can still only measure the outcome once. The magic has to happen somewhere between state preparation and measurement. It's more like a computer with an inbuilt RNG, except that if you can reach the same result via multiple execution paths, the probabilities don't just add: they can interfere like in-phase/out-of-phase oscillations.
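That interference point can be shown with a few lines of complex arithmetic. This is a toy two-path example (each path picks up amplitude 1/2 from two 50/50 branchings, as in an interferometer), not tied to any real hardware:

```python
# Two execution paths reach the same outcome, each with amplitude 1/2
# (two 50/50 branchings). Classically their probabilities just add;
# quantum amplitudes add first and are squared only at measurement.
import cmath

amp = 0.5  # amplitude contributed by each path

p_classical = abs(amp)**2 + abs(amp)**2  # probabilities add -> 0.5

p_in_phase = abs(amp + amp)**2  # constructive interference -> 1.0

# Relative phase of pi flips the sign of the second amplitude.
p_out_of_phase = abs(amp + amp * cmath.exp(1j * cmath.pi))**2  # -> ~0.0

print(p_classical, p_in_phase, round(p_out_of_phase, 12))
```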
Are quantum computers being used in economically significant ways that only are possible with quantum computers yet? If so, what is the best example?
Last time I checked, a few years ago, my understanding was that the answer was still no; specifically, I believe I asked Scott Aaronson this as a follow-up question to a talk he gave.
The short answer is no. I think Ising machines are much closer, or even there, for specific problems. They are somewhat similar to quantum computers, but entirely classical (D-Wave was shown to be equivalent to an Ising machine, IIRC).
He sort of glosses by the need to upgrade from current cryptographic algorithms. But I wonder if in 20-30 years, we'll look back on the bad timing of having designed blockchains before post-quantum cryptography was on solid footing. Even if there are few problems where quantum computers have a clear advantage, it seems at least unfortunate that a lot of people spent a lot of time and resources building up "value" explicitly protected by those problems ... and decentralizing them in a way that makes upgrades challenging.
We like to imagine that quantum computers will be creating value for society by doing simulations about cutting-edge scientific problems. What if it just lets rich companies impersonate long-dead bitcoin whales?
I consider the odds of having quantum computers able to break discrete log within 3 decades to be rather minute. The challenges involved seem way larger than for something like making nuclear fusion commercially viable.
I also have a very hard time believing that
1) physical reality supports computing with a superposition of 2^{256} states each with an accurate amplitude of magnitude 2^{-128}, and
2) that we can engineer systems preserving such accuracies.
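For what it's worth, those two numbers are at least self-consistent: in a uniform superposition over 2^256 basis states, normalization forces each amplitude magnitude to be exactly 2^-128, since the squared magnitudes must sum to 1. Exact rational arithmetic confirms it:

```python
# Normalization check: 2^256 states, each with |amplitude| = 2^-128,
# must give total probability exactly 1 (sum of |amplitude|^2).
from fractions import Fraction

n_states = 2 ** 256
amp_magnitude = Fraction(1, 2 ** 128)

total_probability = n_states * amp_magnitude ** 2
print(total_probability)  # 1
```

Whether physical reality actually realizes, and whether engineering preserves, amplitudes that small is exactly the open question the comment raises; the arithmetic only says the claim is internally consistent.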
I get that the engineering challenges are hard (or we'd have figured it out by now). And everyone seems pretty bad at predicting how long it will take for us to build something fundamentally new.
But from the physical impossibility side, what would have to be true/what would we learn if we discovered that physical reality _doesn't_ support quantum computation with a large number of qubits and sufficient accuracy?
I did do a course which included some quantum computation material and the professor highlighted that it wasn't actually demonstrated whether nature would be so "extravagant" ... but I haven't heard anyone describe what the universe would need to be like to both support the quantum phenomena that we've already studied and demonstrated, but to disallow some larger/more complex arrangements of what seem to be the same principles.
Quantum computers are a part of quantum mechanics that's more of a curiosity than something to be engineered at scale. There's so much more to explore in quantum mechanics outside computation that is not receiving commensurate funding. I guess it's because "computing" is what Big Tech (who has the money to fund it) understands.
If error correction can be made to work they would revolutionize human life by allowing the microscopic world to be predictable. The recent result about error correction in surface codes from the Google group is encouraging, but still several OOMs away from anything meaningful.
Is there any demonstration of practical, usable QC yet?
I'm thinking of some kind of common computational work, with a clearly defined input and verified output, shown to be carried out at least an order of magnitude faster than our conventional computers.
If you are able to imagine the future, and capable of coherent logical thought, the statement "the need for quantum computers remains small" is just... damn narrow-minded and perhaps just plain stupid.
If you can shave off a factor n in O(n^3) then O-B-V-I-O-U-S-L-Y it will change the world. If you don't see the obviousness in this, then why are you working with computers?
Before you hate on me, did you even google "quantum computer"? Did you read the introduction section of the Wikipedia article?
I've read the introduction (and, uh, a lot more). What specific algorithms do you have in mind where quantum supremacy seems a) likely (or even possible) and b) enough of an improvement to be world-changing?
The proportion of practically important comp-sci tasks for which a quantum computer can shave off a factor of thousands seems sufficiently small to doubt it will "change the world".
"Computers" are named after the teams of humans previously tasked with doing what they now do, a relatively common job in the first half of the 20th century and even irregularly earlier. The need was clear.
I remember when the Commodore 64 came out. "64K RAM?", I thought, "What could anyone do with so much RAM?" Power to do something new always generates new things to do with it.
You're only following the winners, but computer science has also had a lot of flops, too. Time will tell if quantum computers is one of those, but it sure isn't looking great so far.
At that time the Sinclair QL was a loser: a good computer that never got software. It was the computer of Linus Torvalds and the reason he had to write programs himself. Without the Sinclair QL, we might never have had Linux.
While that's a neat story, I'm not sure it has a lot to do with the practical applicability of quantum computing. I didn't mean to single out Commodore or their C64; I really mean to say that there are plenty of examples of entire technologies that were hyped and then flopped, not just market competitors within a technology that was already mostly proven.
Oh, come on. Linux was a clone of MINIX with a GPL license. If Linus had not written it, somebody else would have; and at worst (if that is worse), we would have all been running FreeBSD now.
Back then a digital computer couldn’t hold a candle to an analog one and processed only a few bits at a time. It was slow and imprecise. Look at where we are now.
A general quantum computer is a fundamentally different kind of computer. Those who cannot imagine what can be done with one lack the imagination to understand foundationally new technology and are unable to invent their own.
We went from analog to digital computers because of the concrete and specific attributes of digital computers that made them useful. Saying we went from A to B, therefore we will go from B to C, regardless of what C is, makes no sense. "Things change, therefore this new thing is good and you simply lack the imagination to understand why" is about as bad an argument as it gets.
It also, funnily enough, mirrors the everlasting blockchain advocacy.
No quantum computing platform today has meaningful error correction. What you can get, in some cases, is limited error detection for the computation in question, if it's built into your specific algorithm.
There is a big difference between a practical QC and the QC we have now. 65 qubits isn't practical for any known real problem, but ask also how long it takes to boot, how many cycles it can execute per run, and how many runs it can do per day (or week).
These things are not like your laptop; they are more like particle accelerators.
The need is high... for bad people doing bad things to everyone else (trying to de-secure the world)?
There are some hypothetical travelling-salesperson-style questions we might ask. But we don't seem incapable of doing this work today. 98% of the needs seem like: can we break the world's crypto? How is this anything beyond chaotic-evil misuse? How will this do anything but de-secure and instigate risk across the planet?
At least when 99.5% of the engineers were working on ads, they were just wasting their time doing amoral shit. This seems actively immoral.
I really struggle to understand what this all is good for. There probably are some valid and good uses, but it's all hyper-abstract, with little grounding. The attempt to hipster-ize the facilities, to make the physical systems themselves look cool, to present an impressive front: it all works counter to the very essence that made computing cool for so much of my growing-up period. Computing in my era was personal. It empowered people. I don't see how this will help actual people at all. It seems mostly like a big hard problem for which the winner reaps some eventual exploitative spoils, at likely cost to general world order and peace. Hiss, boo.
The advent of SPICE [1] meant that with the right models electrical engineers could simulate complex electrical systems, do sensitivity analysis and make integrated circuits that had a high probability of working.
Imagine a (quantum) simulator that can rapidly simulate all or part of the human body. The effects of medicines could be rapidly simulated, or the simulation could guide the design of treatments. Eventually, if the simulation becomes as accurate as SPICE can be today, it would be possible to go directly from the design of a treatment to its administration, secure in the knowledge that it is unlikely to cause problems.
Why do you believe a quantum computer can actually solve that problem? The whole point of the article is that quantum computers are not and can never be general-purpose computers. And something with as many inputs as "simulate the human body" seems like the exact opposite of what you can encode in qubits, evolve as a state vector, and usefully read out.
Because at the lowest level quantum mechanics is at work. My feeling is that a full simulation will be a hybrid system. The lowest "microscopic" levels will be done on a quantum computer, but the higher "macroscopic" levels, where decoherence kicks in, will be done on a classical computer. Maybe the future of quantum computers is as co-processors to classical computers?
The point is that for these sorts of simulations you don't want a general purpose computer. Ideally you have an "equivalent" of the quantum mechanical Hamiltonian that you can manipulate/design and read effects out from. Now, simulating the human body probably requires a prohibitively large number of qubits, but for many very useful things you don't need that.
So my actual knowledge of quantum computing is mostly limited to what I remember from seminars in (physics) grad school, and from listening to my classmates who were actually doing quantum computing research complain about why things weren't working... but this side comment [0] is totally on the money with what I remember. I don't believe we will ever be able to run quantum simulations of interestingly-sized things, though that is certainly an opinion rather than a statement of fact and has a good chance of being inaccurate.
However, any way you look at it, "[quantum] simulating the human body" is complete batshit science wingnut nonsense (though I tried to be diplomatic about it above). There isn't even any way to measure the input state there! It's ill-defined, it's subject to measurement uncertainty, it's just plain chaotic. As an actual former scientist, it would help my blood pressure if science wingnuts who do not understand the first thing about what they're talking about could please stay quiet.
> I really struggle to understand what this all is good for.
Defence?
If your secrets (or more likely, your nation's secrets) might be exposed by a quantum attack, then you'd better understand how such an attack might work, so you can defend against it. Nations (some) do research into chemical/biological weapons to learn how to defend against them (purportedly).
That's a "devil's advocate" answer. I lean to the view that, for very big secrets, it's better not to have a secret at all. At least, don't have secrets that need to be kept long-term. That sounds glib and handwavy, and it is; but most secrets seem to get out, sooner rather than later, and usually not as a result of someone cracking the cryptography.
The need is to understand quantum systems and advance our understanding of theoretical computer science.
We do not dissuade particle physicists from doing physics because some bad people want atomic bombs; we did not dissuade Archimedes from doing his thing for fear of better siege engines.
Breaking a coin's crypto doesn't sound all that amoral to me. People holding those coins have specifically chosen to put their financial faith in tech, instead of society (which they are arguably undermining).
I have less respect for those hacking the minds of the general public.