Unveiling our new Quantum AI campus (blog.google)
141 points by asparagui on May 18, 2021 | 74 comments



I was sceptical because 'AI' and 'quantum' seem to be used interchangeably and fit your regular snake-oil sales talk, but Google has done enormous amounts of research into non-classical computing. They've also done AI projects that solve protein folding faster and more accurately than any contemporary model[1], which is why the name sort of makes sense, even though many on HN would appreciate more nuance.

"Nature is quantum mechanical: The bonds and interactions among atoms behave probabilistically, with richer dynamics that exhaust the simple classical computing logic."

"Already we run quantum computers that can perform calculations beyond the reach of classical computers."[citation needed]

[1] https://www.deepmind.com/blog/article/AlphaFold-Using-AI-for...


>"Already we run quantum computers that can perform calculations beyond the reach of classical computers"

Yeah, this isn't true. Their 54-qubit machine can simulate random circuits pretty fast, but

1) that's not at all useful; it was specifically contrived as a test of "quantum supremacy".

2) totally debatable whether it's actually out of reach:

>In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity

https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...


IBM is a competitor in this space. Although there is some debate about Google's claim, IBM is a very biased source, as they have a financial incentive to badmouth Google.

As far as point 1 goes, nobody ever claimed otherwise. It's just an arbitrary milestone: walk before you run. Quantum supremacy is a step in the journey, not the destination. You write a hello-world program before writing actually useful programs.


That IBM is a competitor increases my trust in their refutation. IBM could have raced to the same test, but on 56 qubits or whatever, and claimed quantum supremacy themselves.

I've seen a lot of criticism that D-Wave is only good at making hardware that's hard to simulate, without doing anything useful with it. What's different here?


My understanding was that D-Wave was easy to simulate (albeit using a different algorithm than the one it was allegedly using).


Has their Feb '21 paper been refuted? They claim a scaling advantage (in addition to a large speed advantage) over path integral Monte Carlo, which appears to be best in class for simulating their hardware. Do you know of a better algorithm?

Also, what do you mean by "allegedly"? Aren't the basics of their hardware pretty well-understood, or do you think they're sitting on a secret classical algorithm that's fooled external researchers (including Google) into believing that they've demonstrated multiqubit entanglement?


>Ibm is a competitor in this space

I'm well aware - doesn't mean they're not right.

>You make a hello world program before making actual useful programs.

how hilarious would it be if some chip manufacturer bragged about the performance of a hello world on their chip...?


It is reasonable to temper one's skepticism about something like this based on whether the claiming party is willing to put their money where their mouth is. The greater likelihood is that they know something I might not (just speaking for myself as I don't know how much expertise you have in the quantum or AI fields).


Nature is the way it is. Quantum mechanics is one model people use to explain/predict nature.


> I was sceptical because 'AI' and 'quantum' seems to be used interchangeably and fits your regular snakeoil salestalk

That was my first take: it's akin to saying "quantum intelligence", which just reeks of marketing on a synergy-overdrive mission.

Of course, they are just using a quantum hardware/software approach towards AI-type problems. So for me it may have been better to say Quantum Annealed AI, but as a campus name that doesn't have the same marketing ring to it.

One question that will arise down the line is ethics. With a classical computer you can drill down and fully understand every bit of the decision-making if need be; with a quantum computer, that's not so easy at all. It may be possible that they create a good quantum AI system that is, at the same time, an evil AI system, and they only observe the good because that is what they are looking for.

Philosophy is going to have a whole avenue of debate over this in the years to come, and who knows: AI psychology might be the future job we never expected to happen.


You realize quantum computing is ultimately just a few linear algebra operations, right? There is no more magic in it than in conventional neural-network-based models. Standard ML ethical frameworks are more than sufficient.

Adding "quantum" simply means speed ups for a few specific types of operations. You are not going to get an AGI with the current state of the art in quantum computing.


Are those quantum speed ups for neural nets even good?

More specifically, are they better than the ones from "neuromorphic" chips like Intel Loihi?


I suppose that's the reason Google made the investment: to find out.


I'm not an expert on either, but based on the one course in neural networks I took, they are basically a series of linear algebra operations.


>You realize quantum computing is ultimately just a few linear algebra operations right?

Quantum systems have wavefunctions, though, which collapse to a state, and before collapse these can interfere. The math of this involves more than ordinary linear algebra, especially when you consider the things we've simplified away, e.g. how exactly the wave function collapses. (We just say it's 'abrupt' and kind of leave it there. But it's possible this has implications for quantum computing, once we think in quantum-theory terms rather than CS terms.)


A significant part of quantum computation is just a sequence of unitary transformations, represented by matrices. So essentially just a series of big matrix multiplications. The nonlinearity is introduced in the measurement. You have to repeat the calculation several times to build up a probability distribution.
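
A minimal numpy sketch of that picture (the toy one-qubit circuit and the sample count are my own, purely for illustration):

    import numpy as np

    # Single-qubit state |0> as a vector; gates are unitary matrices.
    state = np.array([1.0, 0.0], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

    # "Running the circuit" is just matrix multiplication.
    state = H @ state

    # Measurement is the nonlinear step: sample outcomes from |amplitude|^2,
    # repeating many times to build up the probability distribution.
    probs = np.abs(state) ** 2
    samples = np.random.default_rng(0).choice(len(state), size=1000, p=probs)
    print(np.bincount(samples) / 1000)  # roughly [0.5, 0.5]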


Wave functions are just another type of vector. Measurement (and collapse) is just an application of matrix diagonalization. It has its complications and its own beauty, but it is indeed just fanciful linear algebra (I work professionally on this).
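
To make the diagonalization point concrete, a toy example (my own, assuming the standard Born rule): measuring the Pauli-X observable on |0> means expanding the state in X's eigenbasis and squaring the coefficients.

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X observable
    psi = np.array([1.0, 0.0], dtype=complex)      # the state |0>

    # Diagonalize the observable: the eigenvalues are the possible outcomes.
    eigvals, eigvecs = np.linalg.eigh(X)

    # Born rule: outcome probability is |<eigenvector|psi>|^2.
    probs = np.abs(eigvecs.conj().T @ psi) ** 2
    for outcome, p in zip(eigvals, probs):
        print(f"outcome {outcome:+.0f} with probability {p:.2f}")  # 0.50 each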


> Standard ML ethical frameworks are more than sufficient.

Agreed. All ethical frameworks should include Hindley-Milner type inference.


No. See, for example, Grover's search algorithm: you can use it to find whether an item is inside a list in O(sqrt(n)).
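
For a feel of how that works, here is a toy statevector simulation of Grover's algorithm on an 8-element list (hypothetical illustration code, with the marked index chosen arbitrarily): about pi/4 * sqrt(N) iterations concentrate the amplitude on the marked item.

    import numpy as np

    N, marked = 8, 5                       # 8-item "list", target at index 5
    state = np.full(N, 1 / np.sqrt(N))     # start in the uniform superposition

    oracle = np.eye(N)
    oracle[marked, marked] = -1            # oracle flips the marked item's phase
    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~2 iterations for N = 8
        state = diffusion @ (oracle @ state)

    print(np.argmax(np.abs(state) ** 2))   # 5, found with probability ~0.94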


I don't understand what you are saying. The parent comment said that quantum allows a few types of operations to get faster, and your response was "No," followed by a specific algorithm that is faster. Where do you disagree?


I think the point was it doesn't only speed up a small set of linear algebra calculations, but allows other, more complex, operations to be sped up as well.


How big does n need to be before the expected runtime is less than n/2? And at that n, what gate fidelity is necessary to ensure that the answer will usually be correct?


Slightly tangential, but how significant is an O(sqrt(n)) speedup? Fast algorithms become slightly faster, but intractable algorithms are still intractable?


It makes memory, not compute, the limit.

It halves the exponent on the number of operations: 2^128 is reasonably secure by today's standards; 2^64 is horribly insecure.

(CAVEAT: where the speedup is applicable, which is often hard.)
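
The arithmetic behind that, as a quick sketch (my own numbers, assuming Grover applies cleanly to the key search):

    # Grover halves the exponent: a b-bit key takes ~2^(b/2) quantum oracle
    # calls instead of ~2^(b-1) classical guesses on average.
    for bits in (64, 128, 256):
        print(f"{bits}-bit key: classical ~2^{bits - 1} guesses, "
              f"Grover ~2^{bits // 2} iterations")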


2^64 is hardly horribly insecure. It's on the edge of what a gigantic compute cluster can do, so it's not secure, but hardly horribly insecure. Especially since, even if you can get a quantum computer working, it's not going to be on the same level of operations that a million dollars in AWS credits will get you. At least not for a very long time.

Besides, in most places where that is an issue, it's trivial to switch to 256-bit algorithms.

> (CAVEAT: where the speedup is applicable, which is often hard.)

Grover's algorithm has pretty wide applicability. It's the exponential speedups like Shor's algorithm that have super limited applicability.


I think the point is that it lowers security by a factor of 2 in the exponent. Going from 2^64 to 2^63 halves the amount of time needed to brute-force a key.


For those who are unaware, there's a good reason to put this in Santa Barbara: it's already the home of Microsoft Station Q, a quantum computing research facility on the campus of UCSB. When I left the math department there in 2014, there were more and more graduate students attaching themselves to it. Not to mention the growing tech industry in Goleta (the city UCSB is actually located in). So it's a perfectly sensible place to put a quantum AI lab. Even if you don't know what that means, yet!


The other obvious reason is that Google is already doing quantum research there; this is just them moving into bigger digs.

https://www.sciencedaily.com/releases/2019/10/191023133358.h...


So will Santa Barbara turn into Quantum Valley?


If it was in a valley, maybe.


I think QNNs are very interesting from a compsci perspective, and interesting from a quantum-tech perspective, but not so much from a real-world perspective.

As I understand it, loading classical data into a quantum computer (into quantum RAM) is a big bottleneck, so running a QNN over a picture of a cat can't give a speedup vs. running it on a classical machine. Is this wrong, HN?

I haven't found a result showing QNNs offer strong speedups for training or testing. I have found papers saying it looks promising, but not the result itself. This may be a literature-search fail on my part, though.

For generalisation, I have seen papers claiming that QNNs will generalise better, but I have failed to understand this result and need to work harder!

I also believe that the most promising algorithm for quantum ML (HHL) has been "dequantized". I think Grover's and QMC are pretty secure, but they offer only a quadratic speedup (I say "only" because that means there is a window of quantum advantage, which may or may not be useful, before the quantum algorithms fall off a cliff as well).
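
On that window-of-advantage point, a crude cost model (all numbers are made-up assumptions, purely for illustration): if each quantum operation is k times slower than a classical one, a quadratic speedup only wins once n > k^2.

    # Crude model: classical search costs ~n ops, Grover ~sqrt(n) ops,
    # but each quantum op is assumed to be k times more expensive.
    k = 1e6  # assumed quantum-vs-classical per-operation slowdown
    for n in (1e9, 1e12, 1e15):
        classical = n
        quantum = (n ** 0.5) * k
        print(f"n={n:.0e}: classical {classical:.1e}, quantum {quantum:.1e}")

With these made-up constants the crossover sits at n = 10^12; below that, the classical machine still wins despite the asymptotic speedup.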

OK, I need to understand this stuff for real, so please shoot me to bits!


QC for optimization and other hard search problems is an interesting area for deeper exploration. It's possible that quantum optimizers could be exponentially faster than existing optimization techniques by evaluating multiple minima simultaneously.

Throwing "AI" on the research campus does help focus what researchers will do there, e.g. researching algorithms which can plausibly improve training, inference, and generalization of neural networks/ML models, rather than other, more "practical" QC applications such as cryptography.


> It's possible that quantum optimizers could be exponentially faster than existing optimization techniques by evaluating multiple minima simultaneously.

Is there any known quantum algorithm that gives a speedup over classical algorithms? I don't mean "call Grover as a subroutine" during your standard classical optimization algorithm.


I don't understand this question. Grover's algorithm is itself faster than classical search algorithms.

It is as of yet unknown whether quantum computers are more powerful than classical computers; there is no proof that BQP is strictly bigger than P. There is oracle separation but that's not the same thing.


I think that BQP > P is not so important. An exponential advantage of a known quantum algorithm vs. a known classical algorithm, once the hardware is available, is what's important.


That's what everyone says, but imagine 10 years from now someone proves BQP < P. Since that proof will entail a polynomial-time reduction, you'll immediately have a polynomial-time Shor's, etc.


We have a polynomial-time Shor's!

I don't think BQP < P makes sense... BQP < NP? I think you may be mocking me :(


Oh sure, Shor's.

Also a bunch more.


I know next to nothing about all this, but 'bottlenecks' don't worry me when talking about such a deep and nascent field. My first desktop had a whopping 4MB DRAM module in it... we all saw how the history of discrete computing manufacturing played out.


I distinctly remember reading an article in a scientific magazine promising the marvels of quantum computing in '95. Let's just say some progress since then certainly exists, but it is not quite commensurate with the progress made on traditional computers. I'll not be surprised if we still talk about a "nascent field" in 50 years. (After all, some still consider programming today to be so young that we don't really know how to do it. And at the risk of going off on a tangent, I find it ironic that traditional hardware designers seem to have meanwhile mastered their own nascent discipline well enough.)


It might be fairer to compare this to how long it took to move from Babbage's blueprints to something like ENIAC (more than a century?).


I think Preskill just had a paper out showing average case advantages for common QML tasks.


do you have a link?


Sorry, I misspoke slightly, but there is an advantage - https://twitter.com/RobertHuangHY/status/1393263028150231041


Powered by not just one, but two buzzwords.


The I/O video introducing the Quantum AI campus was fantastically grating to me. There's so little to say, so little to share, about what is happening, what this is, what it's for: it's just pure marketing fluff with the thinnest veneer of introductory technical material. 'This is some kind of chip. It goes in this cold thing we built. We're hoping to inspire others.' Gee, frigging thanks.

Tech is either esoteric or exoteric: either it is a thing for only experts to understand and put to use, or it is something illuminating, something shareable, a conveyable experience. Quantum AI combines two of the most opaque, hard-to-understand fields to make something whose prestige rests in large part on being entirely indecipherable to 99.9999999% of humanity.

To which I just keep wanting to say, can we please make personal computing a thing again?


we should have expected a superposition at some point


>> Within the decade, Google aims to build a useful, error-corrected quantum computer. This will accelerate solutions for some of the world’s most pressing problems, like sustainable energy and reduced emissions to feed the world’s growing population, and unlocking new scientific discoveries, like more helpful AI.

Are "sustainable energy", "reduced emissions to feed the world's growing population" and "unlocking new scientific discoveries like more helpful AI" goals that Google is currently working towards?

Regarding the need to feed "the world's growing population", note that the absolute increase in global population per year has levelled off for several decades and may even be decreasing:

https://en.wikipedia.org/wiki/Population_growth


It's great that Google is doing this. Quantum computing is the kind of capital-intensive research that perfectly suits a corporate research lab.


In Santa Barbara?

That should be interesting. There's been some good physics from there. Flash LIDAR came from Advanced Scientific Concepts there.

UC Santa Barbara is Hollywood's vision of a college. Everyone is good-looking and the college is right on the beach.


It's also where Station Q, Microsoft's quantum computing research station, is.


The hardware side has been up there for a long time, and the theoretical side of the team has been in the Venice office. I guess they got a new building in Santa Barbara and wanted an announcement. I wonder if they are forcing the theorists to move up north?


Why is it called Quantum AI? What does that mean?


It means you brute-force problems with linear algebra on a quantum computer. But quantum computers big enough to brute-force things don't exist yet, so they've got a couple-hundred-year plan to bootstrap themselves up there.


Here is a little explanation from TF quantum website. https://www.tensorflow.org/quantum/concepts


Two buzzwords that sound like things Congress would want to have happening in America, lest anyone start thinking about taking action over monopolistic practices elsewhere in the company.


I’d also be curious the goals/budget of Quantum AI.


The problem with defining clear goals for Quantum AI is that if you try to measure them then they will change.


The one you measure won't change, just all the rest.


underrated comment


Tiny feedback for anyone reading who worked on this page: https://quantumai.google/learn/lab

It would be great if the audio clips had the standard seek bar with the ability to pause/play. Perhaps when I scroll up or down to a section, pause the current audio clip, then resume playing it when I come back. But also allow me to seek around rather than just restarting from the beginning, because these clips are several minutes long and I am given no indication of their length.

Currently all I can do is either continue listening for an unknown amount of time, or go to the next/previous section and completely lose my progress.


Thanks for the feedback!

We intended for each clip to be much shorter, which might have diminished the importance of a feature like the one you suggested. Even after we cut down the audio we got from the team, there was still a lot of great exposition, but not enough time to revisit that part of the design.

That said, I'll be sure to share this with the team.


I saw a great documentary about this on Hulu.


'Devs' was the first thing I thought of when I saw this announcement.


Related/expanded: If you haven't seen Devs on Hulu, it's really great! I wasn't expecting to enjoy it so much. Nick Offerman takes a bit of getting used to in a dramatic role and they should have cast Jonathan "Mike Ehrmantraut" Banks as Kenton instead, but hey, whatcha gonna do.

Go watch it!


I was pleasantly surprised to see Alison Pill (played Kim Pine in Scott Pilgrim vs the World) in Devs.

I guess she was in at least one episode of ST:PIC but I haven't followed up on that.


Doesn't make sense until we have at least thousands of qubits.


The article mentions a million qubits is the goal.


Devs anyone?


Can't wait to find out how this tech is going to be used to better separate me from the money in my wallet


I don't even know how to make strings in C; how the hell am I going to code on a quantum computer? Sigh.


You're programming on a computer that runs on C and other languages today; I'm sure you'll manage in the future too. "On the shoulders of giants" and all that.


> how the hell am I going to code on a quantum computer

Just handwave in front of your Quantum AI HAL-9000 and it will imagine and create software for you!


Personally I think we’ll soon discover that what we’re doing in ‘quantum’ is indistinguishable from classical analog at that frequency and noise temperature, and that will also be the point where it becomes broadly useful and scalable.



