I was sceptical because 'AI' and 'quantum' seem to be used interchangeably, which fits your regular snake-oil sales talk, but Google has done enormous amounts of research into non-classical computing. They've also run AI projects that solve protein folding faster and more accurately than any contemporary model [1], which is why the name sort of makes sense, even though many on HN would appreciate nuance.
"Nature is quantum mechanical: The bonds and interactions among atoms behave probabilistically, with richer dynamics that exhaust the simple classical computing logic."
"Already we run quantum computers that can perform calculations beyond the reach of classical computers."[citation needed]
>"Already we run quantum computers that can perform calculations beyond the reach of classical computers"
Yeah, this isn't true. Their 54-qubit machine can simulate random circuits pretty fast, but
1) that's not at all useful and was specifically contrived as a test of "quantum supremacy".
2) totally debatable whether it's actually out of reach:
>In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity
IBM is a competitor in this space. Although there is some debate about Google's claim, IBM is a very biased source, as they have a financial incentive to badmouth Google.
As far as point 1 goes, nobody ever claimed otherwise. It's just an arbitrary milestone. Walk before you run. Quantum supremacy is a step in the journey, not the destination. You write a hello-world program before writing actually useful programs.
That IBM is a competitor increases my trust in their refutation. IBM could have raced to the same test, but on 56 qubits or whatever, and claimed quantum supremacy themselves.
I've seen a lot of criticism that D-Wave is only good at making hardware that's hard to simulate, without doing anything useful with it. What's different here?
Has their Feb '21 paper been refuted? They claim a scaling advantage (in addition to a large speed advantage) over path integral Monte Carlo, which appears to be best in class for simulating their hardware. Do you know of a better algorithm?
Also, what do you mean by "allegedly"? Aren't the basics of their hardware pretty well-understood, or do you think they're sitting on a secret classical algorithm that's fooled external researchers (including Google) into believing that they've demonstrated multiqubit entanglement?
It is reasonable to temper one's skepticism about something like this based on whether the claiming party is willing to put their money where their mouth is. The greater likelihood is that they know something I might not (just speaking for myself as I don't know how much expertise you have in the quantum or AI fields).
> I was sceptical because 'AI' and 'quantum' seems to be used interchangeably and fits your regular snakeoil salestalk
That was my first take - and akin to saying "quantum intelligence" which just reeks of marketing on a synergy overdrive mission.
Of course they are just using a quantum hardware/software approach to AI-type problems. So for me it might have been better to say Quantum Annealed AI. But then, as a campus name, it doesn't have that marketing ring going for it.
One question that will arise down the line is ethics: with a classical computer you can drill down and fully understand every bit of decision-making if need be. With a quantum computer, not so easy at all.
It's possible that they create a good quantum AI system that is, at the same time, equally an evil AI system, only they observe just the good, as that is what they are looking for.
Philosophy is going to have a whole avenue of debate over this in years to come and who knows - AI psychology might be the future job we never expected to happen.
You realize quantum computing is ultimately just a few linear algebra operations right? There is no more magic in it than conventional neural network based models. Standard ML ethical frameworks are more than sufficient.
Adding "quantum" simply means speed ups for a few specific types of operations. You are not going to get an AGI with the current state of the art in quantum computing.
>You realize quantum computing is ultimately just a few linear algebra operations right?
Quantum systems have wavefunctions though, which collapse to a state. And before collapse, these can interfere. The math of this involves way more than normal linear algebra. Especially when you consider the things we've simplified away -- e.g. how exactly the wave function collapses. (We just say it's 'abrupt' and kinda leave it there. But it's possible this has implications for quantum computing, once we think in quantum theory terms rather than CS.)
A significant part of quantum computation is just a sequence of unitary transformations, represented by matrices. So essentially just a series of big matrix multiplications. The nonlinearity is introduced in the measurement. You have to repeat the calculation several times to build up a probability distribution.
Wave functions are just another type of vector. Measurement (and collapse) is just an application of matrix diagonalization. It has its complications and its own beauty, but it is indeed just fanciful linear algebra (I work professionally on this).
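The "just linear algebra" point can be made concrete with a toy state-vector sketch in plain Python (a pedagogical sketch, not how a real device works; the function and variable names are mine): a gate application is a matrix-vector multiply, and the only nonlinear step, the Born rule, happens at readout.

```python
import math

# A single qubit is a 2-component complex vector; gates are unitary matrices.
# Hadamard gate: H = (1/sqrt(2)) * [[1, 1], [1, -1]]
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Matrix-vector multiply: the entire 'quantum' evolution step."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

def measure_probs(state):
    """Born rule: outcome probabilities are squared amplitude magnitudes.
    This is the only nonlinear step, and it happens at measurement."""
    return [abs(a) ** 2 for a in state]

ket0 = [1 + 0j, 0 + 0j]        # |0>
plus = apply_gate(H, ket0)     # H|0> = (|0> + |1>)/sqrt(2)
print(measure_probs(plus))     # ~[0.5, 0.5]: equal chance of 0 or 1

# Applying H twice returns |0>: interference cancels the |1> amplitude,
# which is the "collapse and interference" behaviour, still pure linear algebra.
back = apply_gate(H, plus)
print(measure_probs(back))     # ~[1.0, 0.0]
```

In a real experiment you would repeat the run many times to estimate those probabilities, which is the "build up a probability distribution" step mentioned above.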
I don't understand what you are saying. The parent comment said that quantum allows a few types of operations to get faster, and your response was "No," followed by a specific algorithm that is faster. Where do you disagree?
I think the point was it doesn't only speed up a small set of linear algebra calculations, but allows other, more complex, operations to be sped up as well.
How big does n need to be before the expected runtime is less than n/2? And at that n, what gate fidelity is necessary to ensure that the answer will usually be correct?
2^64 is hardly horribly insecure. It's on the edge of what a gigantic compute cluster can do. So it's not secure, but hardly horribly insecure. And even if you can get a quantum computer working, it's not going to be on the same level of operations that a million dollars in AWS credits will get you, at least not for a very long time.
Besides, in most places where that is an issue, it's trivial to switch to 256-bit algorithms.
> (CAVEAT: where the speedup is applicable, which is often hard.)
Grover's algorithm has pretty wide applicability. It's the exponential speed-ups like Shor's algorithm that have super limited applicability.
I think the point is it halves the exponent: Grover takes a brute-force search of 2^128 operations down to roughly 2^64 quantum queries, cutting the effective key length in half.
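The exponent-halving arithmetic can be sketched in a few lines (a back-of-envelope model of an ideal Grover attacker, not a statement about any real device; the function names are mine):

```python
import math

def grover_queries(key_bits):
    """Ideal Grover search over N = 2**key_bits keys takes ~sqrt(N) queries."""
    return math.isqrt(2 ** key_bits)

def effective_security_bits(key_bits):
    """Against an ideal quantum brute-forcer, the security exponent is halved."""
    return key_bits // 2

# A 128-bit key drops to ~64 bits of quantum security; doubling the key
# length (e.g. moving to a 256-bit cipher) restores the classical margin.
for k in (64, 128, 256):
    print(f"{k}-bit key: ~2^{effective_security_bits(k)} Grover queries")
```

This is also why "switch to 256-bit algorithms" is the standard hedge: 256 // 2 = 128 effective bits, which is still comfortably out of reach.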
For those who are unaware, there's a good reason to put this in Santa Barbara: it's already the home of Microsoft Station Q, a quantum computing research facility on the campus of UCSB. When I left the math department there in 2014, there were more and more graduate students attaching themselves to it. Not to mention the growing tech industry in Goleta (the city UCSB is actually located in). So it's a perfectly sensible place to put a quantum AI lab. Even if you don't know what that means, yet!
I think QNN are very interesting from a compsci perspective, and interesting from a quantum tech perspective, but not so much from a real world perspective.
As I understand it, loading classical data into a quantum computer (into quantum RAM) is a big bottleneck. So running a QNN over a picture of a cat can't give a speedup vs. running it on a classical machine. Is this wrong, HN?
I haven't found a result showing QNNs offer strong speedups for training or testing. I have found papers saying it looks good, but I haven't found the result itself. This may be a literature-search fail on my part though.
For generalisation, I have seen papers claiming that there will be better generalisation with QNNs, but I have failed to understand this result and do need to work harder!
I also believe that the most promising algorithm for quantum ML (HHL) has been "dequantized". I think that Grover's and QMC are pretty secure, but they offer only a quadratic speed-up (I say "only" because that means there is a window of quantum advantage that may or may not be useful before the quantum algorithms fall off a cliff as well).
Ok - I need to understand this stuff for real, so please shoot me to bits !
QC for optimization and other hard search problems is an interesting area for deeper exploration. It's possible that quantum optimizers could be exponentially faster than existing optimization techniques by evaluating multiple minima simultaneously.
Throwing AI on the research campus does help focus what researchers will do there - e.g. research algorithms which can plausibly improve training, inference, and generalization of neural networks/ML models. Rather than researching other more "practical" QC applications such as cryptography.
> It's possible that quantum optimizers could be exponentially faster than existing optimization techniques by evaluating multiple minima simultaneously.
Is there any known quantum algorithm that gives a speedup over classical algorithms? I don't mean "call Grover as a subroutine" during your standard classical optimization algorithm.
I don't understand this question? Grover's algorithm is itself faster than classical search algos.
It is as of yet unknown whether quantum computers are more powerful than classical computers; there is no proof that BQP is strictly bigger than P. There is oracle separation but that's not the same thing.
That's what everyone says, but imagine 10 years from now someone proves BQP ⊆ P. Since that proof will entail a poly-time reduction, you'll immediately have a poly-time classical Shor's, etc.
I know next to nothing about all this but 'bottlenecks' don't worry me when talking about such a deep and ~nascent field. My first desktop had a whopping 4MB DRAM module in it .. we all saw the history of discrete computing manufacturing.
I distinctly remember reading an article in a scientific magazine promising the marvels of quantum computing in '95. Let's just say some progress since then certainly exists, but it is not at all commensurate with the progress made on traditional computers. I won't be surprised if we are still talking about a "nascent field" in 50 years. (After all, some still consider programming today to be so young that we don't really know how to do it. And at the risk of going off on a tangent, I find it ironic that traditional hardware designers seem to have meanwhile mastered their own nascent discipline well enough.)
The I/O video introducing the Quantum AI campus was fantastically grating to me. There's so little to say, so little to share, about what is happening, what this is, what it's for: it's just pure marketing fluff, with the thinnest veneer of introductory technical material. 'This is some kind of chip. It goes in this cold thing we built. We're hoping to inspire others.' Gee frigging thanks.
Tech is either esoteric or exoteric. It either is a thing for only experts to understand and use and put to use, or it is something illuminating, something shareable, is a conveyable experience. Quantum AI combines two of the most opaque, hard to understand fields to make something whose prestige in large part rests upon it being entirely indecipherable to 99.9999999% of humanity.
To which I just keep wanting to say, can we please make personal computing a thing again?
>> Within the decade, Google aims to build a useful, error-corrected quantum computer. This will accelerate solutions for some of the world’s most pressing problems, like sustainable energy and reduced emissions to feed the world’s growing population, and unlocking new scientific discoveries, like more helpful AI.
Are "sustainable energy", "reduced emissions to feed the world's growing population" and "unlocking new scientific discoveries like more helpful AI" goals that Google is currently working towards?
Regarding the need to feed "the world's growing population", note that absolute increase in global population per year has levelled off for several decades and may even be decreasing:
The hardware side has been up there for a long time, and the theoretical side of the team has been in the Venice office. I guess they got a new building in Santa Barbara and wanted an announcement. I wonder if they are forcing the theorists to move up north?
It means you brute force problems with linear algebra on a quantum computer, but quantum computers big enough to brute force things don't exist yet, so they've got a couple-hundred year plan to bootstrap themselves up there.
Two buzzwords that sound like things Congress would want to see happening in America, lest anyone start thinking about taking action over monopolistic practices elsewhere in the company.
It would be great if the audio clips had the standard seek bar with the ability to pause/play. Perhaps when I scroll up or down to a section, pause the current audio clip. Then resume playing it when I come back. But also allow me to seek around. Rather than just restarting from the beginning. Because these clips are several minutes long and I am given no indication of their length.
Currently all I can do is either continue listening for an unknown amount of time, or go to the next/previous section and completely lose my progress.
We intended for each clip to be much shorter, which might have diminished the importance of a feature like the one you suggested. Even after we cut down the audio we got from the team, there was still a lot of great exposition, but not enough time to revisit that part of the design.
That said, I'll be sure to share this with the team.
Related/expanded: If you haven't seen Devs on Hulu, it's really great! I wasn't expecting to enjoy it so much. Nick Offerman takes a bit of getting used to in a dramatic role and they should have cast Jonathan "Mike Ehrmantraut" Banks as Kenton instead, but hey, whatcha gonna do.
Personally I think we’ll soon discover that what we’re doing in ‘quantum’ is indistinguishable from classical analog at that frequency and noise temperature, and that will also be the point where it becomes broadly useful and scalable.
[1] https://www.deepmind.com/blog/article/AlphaFold-Using-AI-for...