IBM unveils 127-qubit quantum processor (ibm.com)
319 points by ag8 on Nov 16, 2021 | 313 comments



Quantum computers sitting in their cryogenic chambers are such works of art: stacks of giant brass plates and hundreds of heat pipes (or coolant pipes? liquid helium, I suppose) twisting and coiling throughout the structure, hanging like some steampunk chandelier (why do they hang from above, anyway?). EDIT: changed to a few direct links to pics: [0][1][2]

The esoteric design reminds me of the Connection Machine blog post from the other day, "to communicate to people that this was the first of a new generation of computers, unlike any machine they had seen before." [3]

I'm curious what they do with these prototypes once they're obsoleted in a matter of months. Are the parts so expensive that they tear it down to reuse them? Or will the machines be able to go on tour and stand in glass cases to intrigue the next generation of engineers? I know it had a tremendous effect on me to stand in front of a hand-wired Lisp machine at the MIT museum.

[0] https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AANs...

[1] https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AANs...

[2] https://static.reuters.com/resources/r/?m=02&d=20191023&t=2&...

[3] https://tamikothiel.com/theory/cm_txts/


Cray supercomputers were also aesthetically beautiful machines:

https://cdn.britannica.com/11/23611-050-81E61C8A/Cray-1-supe...

So happy to be able to find a picture of the wirewrap inside: https://s-media-cache-ak0.pinimg.com/originals/e2/d2/47/e2d2...


When I was a kid I helped my grandpa move out of his office when he retired. He was a senior engineering fellow with Monsanto, specializing in bubble trays in refinery processes. He worked out of their big campus in St Louis, and gave me a tour as a thanks for helping him with the boxes and such. There's several things that have stuck in my memory from that tour.

One was seeing their Cray. I forget which specific model it was. It was gray and mauve, and had the fountain with the logo on the unit that pumped the coolant. Monsanto had a dedicated computer room for it with glass walls so you could see it. The overall effect was to make a very strong impression that this was something very special.

Another thing that stuck in my mind was seeing their bio labs. These were long concrete hallways dug halfway into the ground, I assume to make climate control easier. They had row upon row of corn plants under artificial light. These were the labs that developed Roundup Ready seed. I had no idea at the time the significance of what I was seeing or how contentious it would be now.

Last thing I'll mention is when we were walking outside and he pointed out the separate building the CEO et al. worked out of. It was literally a bunker with earthen berms and such around it. My grandpa bragged that it was built to be bombproof in case terrorists attacked the CEO. At the time I was somewhat mystified why anyone would bomb the CEO of a chemical company. I certainly understand why now.

But anyhow, it was a cool experience and seeing that Cray probably helped inspire my interest in learning to program later.

Edit:

Random other thing I'll mention is an email exchange from a mailing list back in the late 90s that focused on APL-style languages. Someone told the story of how, back during Cray's glory days, he worked in a lab doing interactive APL programming on a Cray machine. I can only imagine what that must have felt like at the time, typing arcane, terse syntax into a prompt that would execute it shockingly fast.


Thanks for sharing. I worked on the UIUC campus for some time right by the greenhouses where they experimented with cultivars of corn and other grasses; it was always funny to see giant 8-foot-tall plants lit with sodium lamps through the winter.

As for APL, I haven't really gotten past an orientation in the language, but it's held a total mystique for me since seeing this video from circa 1975 [0] walking through the language with a Selectric terminal acting as a REPL. It totally flipped my understanding of computer history; I'd assumed it was all punchcard programming back in the old black-and-white days xD (I was born in 1990, for reference, and am trying to catch up with what happened before me)

[0] (30min) https://www.youtube.com/watch?v=_DTpQ4Kk2wA


Re APL, I totally know what you mean. I've never done anything with that category of language that wasn't just golfing / goofing around, but conceptually it's made an impression. Once the big-picture concept clicks, you look at the way we write most code and see so much bookkeeping that's just managing names and values of iterators and indexes. Lifting that up to bulk transformation of multidimensional objects is powerful, and much closer to the intuitive picture of what's going on that I have in my imagination.
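
For a concrete, if tiny, illustration of that lifting: a minimal sketch in Python/NumPy (not APL, but the same idea of replacing index bookkeeping with whole-array operations; the function names here are just made up for the example):

    import numpy as np

    # Explicit bookkeeping: names and indices for every iterator.
    def column_means_loops(table):
        rows = len(table)
        cols = len(table[0])
        means = [0.0] * cols
        for j in range(cols):
            total = 0.0
            for i in range(rows):
                total += table[i][j]
            means[j] = total / rows
        return means

    # Bulk transformation of the whole multidimensional object at once.
    def column_means_array(table):
        return np.asarray(table).mean(axis=0)

    data = [[1, 2, 3], [4, 5, 6]]
    print(column_means_loops(data))    # [2.5, 3.5, 4.5]
    print(column_means_array(data))    # [2.5 3.5 4.5]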


To this day, whenever I hear the term "supercomputer", I can only visualize the Cray. I don't want to know what the latest supercomputer looks like, because I suspect it's just another boring bunch of rack aisles. Maybe with a snazzy color end-cap or blue LEDs on the doors.

Bring back the impractical, space-eating circular design! I don't care about space efficiency. It's supposed to look cool.



This is an area where I wouldn't mind a little splurge in design. These are supposed to be the greatest computing machines made, they should look fabulous and mysterious.

Well, a bunch of racks with green LEDs is more practical, cheap and functional, I guess.


Haha so green LEDs are cool again? I made a call back in 2004 that after blue LEDs became commonplace and white LEDs had their turn as the new hot thing, red would make a comeback. And it did. ;)

I imagine there have been multiple cycles through the spectrum since then…


Isn't that due to green being the highest-energy light available from an LED at that time? IIRC, every time a new material was created to increase the band gap for a higher-energy photon (i.e. blue or violet), there was a Nobel Prize given out.

Tuning the band gap with a new material back then was difficult I think.


Green has a lower band gap relative to blue/white, ~2 eV depending on chemistry, but there's something like that going on - IIRC human eyes are most sensitive to green, around the ~500-550 nm wavelengths that green LEDs emit, so you can get good brightness out of low power.
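
Rough numbers, for anyone curious (a back-of-the-envelope Python sketch; the photon energy sets a floor on the band gap, and the exact values shift with chemistry):

    # Photon energy E = h*c/lambda; in handy units, E[eV] ~ 1240 / wavelength[nm].
    def photon_energy_ev(wavelength_nm):
        return 1239.84 / wavelength_nm

    for color, nm in [("red", 650), ("green", 530), ("blue", 450)]:
        print(color, round(photon_energy_ev(nm), 2), "eV")
    # red ~1.91 eV, green ~2.34 eV, blue ~2.76 eV -- roughly the band gap
    # (and forward voltage) each color needs, which is why blue took longer.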


Then consider that everybody's smartphones would make it into the TOP500 in 1993.


I'm a big fan of some of the graphics you see on the big rack farms too. Like the Jaguar art Cray used a decade or so back.


The Computer History Museum has a Cray you can see up close ( https://computerhistory.org/ ). Worth the visit.


At the 1990 TeX Users Group meeting at Texas A&M University, one of the events was a tour of the computer center where we got to be in the room with their Cray. I think I sat on the bench.


You’ve just given me my weekend plans, thank you very much! They have a working IBM 1401 that you can see in action too!


Also a truly astonishingly beautiful Babbage engine. I came from the UK to CA to see it.


You do know that there's one in London at the Science Museum (the one that was¹ in California was built at the same time for Nathan Myhrvold).

⸻⸻⸻

1. It was on loan to the Computer History Museum from Myhrvold and returned to him in 2016. It's unclear whether he did re-loan it or if he's busily calculating the values of polynomials with it.² The Computer History Museum website makes it sound like it's currently on display but I can find no news stories about it going back to the museum.

2. Just kidding about him calculating polynomials—it's (I think) on display in the lobby of Intellectual Ventures.


I was curious enough to actually contact IV and confirm that the other Difference Engine No 2 is, in fact, in their lobby.


CHM homepage says it’s closed to the public ‘until later this year’.


Damn it


I have a picture of a Cray being serviced as an A2 framed print on my living room wall. It predates the missus, which is why it was on the living room wall ;)


Someday, when quantum computers are the size of dust particles and we're surrounded by them, there'll be some version of a steampunk subculture that values decorating their homes with these ancient, beautiful, and laughably incapable devices.

Maybe in 30 ~ 50 years or so.


I'm very excited to configure DNS blocking for the quantum dust particles trying to serve me advertisements in my home.


At least analytics won’t be able to tell if you looked at the ad or not


“Why do you look at the speck of quantum dust in your brother’s eye and pay no attention to the GPU in your own eye? How can you say to your brother, ‘Let me take the speck out of your eye,’ when all the time there is a GPU in your own eye?"

https://www.biblegateway.com/passage/?search=Matthew%207:3-5...


But they can tell how probable that was


I can't help but think of turning them into horrible sounding organs.


I would gladly decorate my home with one. They're beautiful in their own way.


They hang from the ceiling because they use evaporative cooling (the high-energy particles escape, and the low-energy particles remain in the bucket), each lower stage a bit cooler than the one above it.


Also because it looks cool, which is only appropriate for a cooling system.


AFAIK, another reason to work downward, historically at least, was to lower an assembly into an open-neck dewar.


The TV show DEVS has a computer that looks a lot like those first three links. I always thought their prop was a set designer's imagination run wild, not actually based in what quantum computers look like.


Some additional information on ³He/⁴He dilution refrigeration that makes up the chandelier: https://en.wikipedia.org/wiki/Dilution_refrigerator


I was lucky enough to tour the IBM Thomas J. Watson Research Center in New York a few months ago and captured several sound recordings of this[0] room housing a quantum computer. Not only do they look cool, they sound very intense! [1] A stark contrast to the minimalist/austere design of the actual enclosure, or maybe it's fitting, depending on your perspective...

[0] https://www.ft.com/__origami/service/image/v2/images/raw/htt...

[1] https://drive.google.com/file/d/1CeZjXUH6Y8ZvfcS0IM0MWoLNAYJ...


Yep, definitely the sound of an AI actively hijacking my brain D: thanks


I look forward to the day we can look back at these "quantum chandeliers" with nostalgia, like we look back on those massive, room-sized mainframes today.


I can't wait to be in the vintage quantum computing club, where we build working replicas of the "quantum chandeliers" with more modern and stable parts, and tinker with them as functional room decoration.

Related, the DEC PDPs certainly look stylish!


The pipes you mentioned are microwave conduits (for various control signals).


0.141" semi-rigid coax, diameter of champions.


> 0.141" semi-rigid coax, diameter of champions.

I wonder if we measure it more precisely we'll get to something closer to 1.4142135623730950488...


It would be a bit of a surprise since the unit of measure is inches and I'm not sure what inches would have to do with quantum computing. I feel like the inch is essentially a random factor here.


I’d make this cable as a form of art.


Close: the ratio of the inner and outer conductor diameters needed to get exactly 50 ohms is an irrational number.
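
For the curious, a back-of-the-envelope sketch of where that comes from (assuming the textbook coax formula and an air/vacuum dielectric; real 0.141" semi-rigid uses PTFE, which changes the exact ratio):

    import math

    # Coax characteristic impedance: Z0 = (60 / sqrt(eps_r)) * ln(D / d),
    # with D the shield's inner diameter and d the center conductor diameter.
    def coax_impedance(d_ratio, eps_r=1.0):
        return 60.0 / math.sqrt(eps_r) * math.log(d_ratio)

    ratio_for_50_ohm = math.exp(50.0 / 60.0)    # e^(5/6), an irrational number
    print(ratio_for_50_ohm)                     # ~2.3009...
    print(coax_impedance(ratio_for_50_ohm))     # 50.0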


Thanks! Is my assumption of liquid helium running somewhere correct? I figure that's the only way to reach the temperatures required (single digit kelvins, no?)


It is even worse than single digit Kelvin.

Liquid Nitrogen with pumping: 40 K for a few thousand dollars.

Run of the mill Liquid Helium: 4 K for tens to hundreds of thousands of dollars.

But for these devices you need ~15 mK, which is reachable only if you mix two different isotopes of helium and pump the mixture into vacuum. Such a device costs up to $1M and more.

And the insides of that device are in vacuum (actually, air freezing into ice on top of the chip can be a problem). The brass is basically the heat conductor between the chip and the cold side of your pumped He mixture (which is *not* just sloshing inside the whole body of the cryostat where the chips are).

Another reason you do not want the He sloshing around is because you will be opening this to make changes to the device and do not want all the extremely expensive He3 (the special isotope you need for the mixture) to be lost.


FWIW... small DR's are under 400k USD. The big ones are ~1M USD or more.


What’s a DR? Something refrigerator?


Dilution refrigerator. They are the type of refrigerator used to chill quantum computing devices. The wikipedia article has a pretty good description of how they work. It took me a few reads to understand it!


It's much colder than that, single-digit millikelvin. They use He-3/4 dilution refrigerators [1]. Getting things cold and electromagnetically-quiet enough that the quantum state doesn't collapse is a big challenge in the field.

1 - https://en.wikipedia.org/wiki/Dilution_refrigerator


And if you need colder than 2 mK, nuclear demagnetization:

https://en.wikipedia.org/wiki/Magnetic_refrigeration#Nuclear...


And some people think the brain can be deterministic at 300K.


Why not? Computers are deterministic at higher temperatures.


Even assuming a computer model, think of an analog computer with long integration times, potentially high gain for thermal noise terms, simulating a chaotic system.

Or, look at literature on error rates for people performing simple repetitive tasks.


Unless I am missing something, I believe that these pictures are 99% the cryogenic system.


Whoa, that is so cool! I thought the Escher-esque machine in Devs was mostly Hollywood fluff, but it looks almost exactly like that.


The Devs producer met with the Google team in Santa Barbara (which works on very similar superconducting qubits), so it should look very close!

That said the whole hovering in free space thing was perhaps a bit over the top in the show.

https://qz.com/1826093/devs-creator-alex-garland-describes-t...


So the chandelier-looking thing is a dilution fridge and is just used to make the processor cold. The processor is usually pretty small, not that different from your CPU. That part is indeed iterated on, but the fridges aren't changed much. The wiring is sometimes switched out, but the fridges get used forever. I'm using one now that's maybe 30 years old. The dilution units are very hard to make, and there's a very small academic family to which everyone who can make one traces back.

They mostly hang like that since most dil units are designed so the coldest part is usually the lowest, and they’re generally orientation sensitive. You want easy access to the bottom part, so you just put the plates in descending order of temperature and you hang the thing from the ceiling.


What's that picture of what looks like an exploded CPU package on IBM's site? Is that metaphorical or is that really what the processor looks like? It looks small and not chandelier-like.


Are there any pictures of the hand-wired LISP Machine online?

I searched, but couldn't find it.

Also - when LISP was invented (1958) - what was the state of computers at the time? Doing some research - it seems like direct keyboard input to computers was only available for 2 years prior. It seems like languages were decades ahead of hardware.

I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

Are there any articles on the process for how LISP was designed and implemented??


Photos are indeed sparse, here's the highest-res I could find of the machine I saw at the museum (built 1979, much more compact now that we can forego the vacuum tubes), thankfully there are enough pixels to read the post-it note naming the machine "Marvin": https://upload.wikimedia.org/wikipedia/commons/7/7d/MIT_lisp...


> I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

My intuition is that back then getting run time on computers was so scarce that the best programmers and mathematicians spent a great deal of brain time considering exactly what their software should be. If you only get one run a day, if that, you're gonna do your best to make it count. Today we're often in the opposite situation, where it can be entirely rational to burn incredible amounts of computation in the absolute sense to save brain time.


> I guess I'm having trouble fathoming how languages were so far ahead while computers were seemingly VERY primitive.

I have a much bigger emotional conflict when contrasting that with the current state of mainstream programming languages, that are only just beginning to tread onto territories like algebraic data types and pattern matching that ML paved almost 50 years ago. Is there any hope for true dependent typing to become popular before 2040?


Don't Prolog and Erlang have pattern matching built in??


Lean4 comes pretty close to a general purpose dependently typed programming language.


I can't give you a great answer because I wasn't there, but I did find a John McCarthy paper describing its usage by way of an automatic typewriter [0] as referenced by the wiki on Lisp [1], first implemented via punchcards on the IBM 704. That would be vacuum tube logic and magnetic core memory. ~19,000 pounds, 12k flops (36bit), 18kB of RAM, 3.75-ish Megabytes of storage per 2,400 feet of mylar tape. [2]

As for languages ahead of the hardware, you might read up about Charles Babbage and Ada Lovelace, the latter a mathematician who translated problems into machine instructions for a machine that wouldn't be built for a hundred years - Babbage's design worked, but he spent all the money the Royal Society was willing to give trying to improve the tolerances on his logical clockwork. [3] But anyway, back to John McCarthy's paper, last page:

  APPENDIX - HUMOROUS ANECDOTE
The first on-line demonstration of LISP was also the first of a precursor of time-sharing that we called “time-stealing”. The audience comprised the participants in one of M.I.T.’s Industrial Liaison Symposia on whom it was important to make a good impression. A Flexowriter had been connected to the IBM 704 and the operating system modified so that it collected characters from the Flexowriter in a buffer when their presence was signalled by an interrupt. Whenever a carriage return occurred, the line was given to LISP for processing. The demonstration depended on the fact that the memory of the computer had just been increased from 8192 words to 32768 words so that batches could be collected that presumed only a small memory.

The demonstration was also one of the first to use closed circuit TV in order to spare the spectators the museum feet consequent on crowding around a terminal waiting for something to happen. Thus they were on the fourth floor, and I was in the first floor computer room exercising LISP and speaking into a microphone. The problem chosen was to determine whether a first order differential equation of the form M dx + N dy was exact by testing whether ∂M/∂y = ∂N/∂x, which also involved some primitive algebraic simplification. Everything was going well, if slowly, when suddenly the Flexowriter began to type (at ten characters per second) “THE GARBAGE COLLECTOR HAS BEEN CALLED. SOME INTERESTING STATISTICS ARE AS FOLLOWS:” and on and on and on.

The garbage collector was quite new at the time, we were rather proud of it and curious about it, and our normal output was on a line printer, so it printed a full page every time it was called giving how many words were marked and how many were collected and the size of list space, etc. During a previous rehearsal, the garbage collector hadn’t been called, but we had not refreshed the LISP core image, so we ran out of free storage during the demonstration.

[0] http://jmc.stanford.edu/articles/lisp/lisp.pdf

[1] https://en.wikipedia.org/wiki/Lisp_(programming_language)

[2] https://en.wikipedia.org/wiki/IBM_704

[3] Jacquard's Web by James Essinger is the book you want to read for more.


One of the IBM scientists seems to have an old quantum computer as a piece of office decoration, so I guess there's some hope to preserve them: https://www.youtube.com/watch?v=OWJCfOvochA


They hang because they sit at the bottom of dilution refrigerators.


My antivirus client won't let me open that second link, claims some kind of malicious activity, didn't look into the details.


Strange, it is a personal blog with literally no javascript (just checked my network tab), might be worth investigating what your antivirus has against it. It's a very good read, so just in case your antivirus is friendly with archive.org: https://web.archive.org/web/20211113093602/https://tamikothi...


Amazing images, nothing that would look out of place in the Villa Straylight.


Is this the most advanced tech at this very moment?


Tangential question, what are the areas of technology where we can expect to see substantial progress or breakthroughs within 2030, i.e. what are the most exciting areas to follow and look forward to? Here's my list:

- Nuclear fusion (Helion, ZAP, TAE, Tokamak Energy, CFS, Wendelstein).

- Self-driving cars.

- New types of nuclear fission reactors.

- Spaceflight (SpaceX Starship).

- Supersonic airplanes (Boom).

- Solid state batteries.

- Quantum computing.

- CPUs and GPUs on sub-5nm nodes.

- CRISPR-based therapies.

- Longevity research.


I'd say fusion is a sleeper. You still have that stupid "30 years away and always will be" meme but there is real progress being made. Fusion would completely change the world, though not overnight because it would take another decade or so before it would advance enough to be cost competitive.

I'm semi-optimistic about space flight and longevity. I think Starship will fly, but I wouldn't be surprised if some of its most ambitious specs get dialed back a bit. I'll be somewhat (but not totally) surprised if the "chopsticks" idea works.

We will probably see aging-reversal to some limited extent within 10-20 years, but the effect will probably be more to extend "health span" than add that much to life span. (I'll take it.)

I'll add one not on the list: the use of deep learning to discover theories in areas like physics and math that have not occurred to humans and maybe are not capable of being found by ordinary human cognition.

Wildcard, but plausible: detection of a strong extrasolar biosphere candidate using JWST or another next-generation telescope. Detection would be based on albedo absorption spectra, so we wouldn't know for sure. Talk of an interstellar fly-by probe would start pretty quickly.

I wouldn't list sub-5nm as "far out." We will almost definitely get sub-5nm. AFAIK 3nm is in the pipeline. Sub-1nm is "far out" and may or may not happen.


Sadly I'd also qualify most of these as things that consumers are overly excited about but will never reach their expected potential (at least in our lifetimes) due to technological limits. Same as flying cars, 3D TVs, 3D printing, household robots, holograms, AR glasses.


3D TVs will come of age once autostereoscopic displays reach the right level of quality. After being blown away by my first glimpse of a display in around 1998 I fully expected them to be useable years ago. I guess we might still be another "10 years" away.

https://en.wikipedia.org/wiki/Autostereoscopy


I really doubt that there's going to be huge desire for 3D TVs at any point. People can already look at video on a 2D display and interpret 3D visuals from it. And if you want to be fully immersed in something, maybe you want VR instead.


"People can already look at video on a 2D display and interpret 3D visuals from it. " way off, depends heavily on contrast sensitivity which is a function of brightness and displays have long way to go (esp with ambient around) +HDR even breaks current VR chain because stray light.


Google Starline seemed amazing to me. I would love to have a large immersive 3D display for videotelephony, sports, nature documentaries, etc.


I want 3D TV for sports but apparently I'm the only one.


What happens when two people want to watch the TV?


Well, I'm assuming autostereoscopic displays of the future will solve issues of multiple sets of eyeballs on the same display.


AR glasses will hit a wall but passthrough AR will be converged on rapidly. Starting 2022


Any particular insight why 2022?

Hololens 2 has shown that it isn't so easy to advance the field.

I don't think an Apple device is forthcoming or likely to leapfrog.


> Any particular insight why 2022?

Facebook, Apple and others are releasing their first AR glasses then.


And there will be approximately zero non-gimmick software for them until at least 2032, if it ever materializes at all.


AR "FaceTime" (beaming in an avatar of another person to spend time with) will be a killer app.


But you aren't "spending time" with that person. You're spending time with that person's poorly rendered avatar. You can't even hear them properly. You can't see them at all. You can't touch them or read a lot of the nonverbal cues. And for the privilege of not being in their presence, you also need to pay several hundred (if not thousands, this being Apple) dollars, and both sides need to use Apple products. I very strongly suspect that all but the most ardent Apple fans will pass on this generous offer.


Passthrough AR isn't glasses AR, and is much more likely to be rapidly made capable. Lynx-R launches in Q1, and Meta's and Apple's headsets will likely use passthrough AR next year.

https://lynx-r.com/


I definitely hope for something novel with video see-through HMDs as they used to call them in 2002 [1] when I last worked on them. Latency wasn't solved last I checked and viewpoint offset is still an issue that throws users off.

[1] https://static.aminer.org/pdf/PDF/000/273/730/ar_table_tenni...


Latency is pretty good on Quest 1 - I haven't tried Quest 2. Looking forward to getting a Lynx to gauge how close we are to something good.


> flying cars

At least you can buy them now.


Don't forget carbon sequestration and geoengineering.

I think longevity research is a path to stagnation, and ultimately counter-productive, and should cease. If science advances one funeral at a time, then longevity is counterproductive for all other progress.


So you have chosen death.

There are reasons to worry about the ethics of longevity research (especially if the benefits of it are not justly shared), but I don't think you can justify withholding life-improving medical treatment from people just because you want to help science by letting people die early.

That sort of thinking is how we get Logan's Run.


It's dishonest to characterize what I'm saying as "choosing death", as if I've got a nihilistic urge to nuke the planet. Death, birth, and renewal are part of the entire biosphere, which has existed for billions of years. The cells that defy death are called "cancer". The messiness with which humans come into being and then pass away again is NOT something to be engineered away - it is something to experience and learn to appreciate.

The thing that lives on, that can be effectively immortal (if we choose to protect it) is the biosphere in which we're embedded and, to a lesser extent, the nest of symbols humans have fashioned for themselves over the last few millennia. It is fascinating to imagine what it would be like to live through human history; however it is terrifying to imagine what the "great men" of history would have done or become if not cut down by time. The inevitability of death has surely stopped some great things from being done, but I'm equally sure it has stopped even worse things from being done - imagine human history if the Pharaohs of Egypt had had access to immortality! It's too horrible to imagine.

BTW the Logan's Run system was purely about maintaining homeostasis given limited resources, NOT about maintaining (or even enhancing) dynamism in the population by decreasing average life-span. In other words, unrelated.


I apologise for the "choosing death" meme, but I think it is only as inappropriate as you equating human beings with "cancer". You're right, though, that humans have to learn to experience and in some sense come to terms with the messiness of death.

I think what we disagree on is what it means to "engineer away" death. Are we engineering away death if we cure a disease, but don't extend the maximum lifespan of humans? Is extending the average lifespan to 100 years all right as long as those treatments are designed to not work on people over 100 years old? If a treatment is later found that helps 100 year olds to extend their age to 101, is that the treatment that should be banned, or is there some number N where adding N years to the previous maximum is morally wrong and the whole world has to agree on banning it?

Your point about the Pharaohs is maybe not as strong as you think, since of course the Pharaonic system did outlast any of the individual office holders. I don't think it was old age that led to the fall of that regime, and there are plenty of regimes which manage to be equally horrible within a single lifetime, or that are overthrown within the space of one lifetime.

Thank you for that succinct explanation of the premise of Logan's Run. I wasn't sure if it worked as an analogy, since, as you say, the motivation of the society was different from the one you are advocating for, but I think the most relevant aspect of Logan's Run is the dystopian nature of a society which imposes age limits on its members, against their wishes.


Nothing wrong with improving quality of the life we do have, which naturally would mean increasing lifespan a little. In fact, I'd argue that's precisely the right way to spend resources - quality, not quantity.

I'm not at all against small increases in lifespan, and certainly for improving quality of life (e.g. defeating disease). I'm specifically against individual immortality because I strongly suspect it would quickly and inexorably lead to stagnation and death for our species.


I would argue that most of the items on this list can be subdivided into two types of hype.

short term hype (real advances that will happen in 1-2 years, but won't matter by 2030, because they are just a generational iteration)

Over-hyped far-future research. (things where the possibilities have yet to be brought down to earth by the practical limits of implementing them broadly / cost effectively) When these things do happen, they tend to be a bit of a let-down, because they don't actually provide the promised revolutionary changes. These things basically have to be over-hyped in order to get the necessary funding to bring them to reality.

Of the examples you have, I am only really excited about CRISPR, and to a lesser extent commercial spaceflight, and new nuclear. These have promise IMO, but I also don't expect them to be decade defining.

Personally I don't think we know what the next breakthrough will be yet. I expect it to take us very much by surprise, and start out as something unthreatening which then grows to a disruptive size / scale.


Interesting, I think all of those items have a good chance of actually happening ("longevity research" happening means some sort of meaningful progress).

I hope there will be some unexpected breakthroughs too.


As someone in their 30s who suffers from baldness and arthritis, two simple conditions yet no promising solutions in sight, I find it cute when people think we can somehow cheat death or aging in the next 300 years.


Synthetic meat of all kinds, ARM based processors, deep learning + AI, agtech/vertical farming/etc.


> ARM based processors

Compared to most of the other things listed, this is more of a nerd-aestheticism thing rather than something which is hugely important technologically.


But haven't ARM-based processors already put computers in the hands of billions of people who otherwise wouldn't have been able to access them? I could argue that is hugely important.


ARM is merely a commercially successful architecture that turned out to be good at being optimized for mobile applications.


The ARM revolution has already happened, in this case I viewed the mention to be more of a wishlist for the types of computers the global well-off buy.


Delivery drones: Wing, Amazon, Zipline, Volansi, etc.

Synthetic fuels.


- Crypto

- Unfortunately, even less local computing, with everything provisioned from the cloud under a SaaS payment model.

- More mRNA applications

- Power/energy networks and markets across borders.

- Theranos, but legitimate. Better, cheaper and more convenient early monitoring/diagnostics for vitamin deficiencies and early stages of disease.

- Carbon neutral combustible fuels.

- Cheaper grid-scale storage.

- Better understanding of the gut-brain connection.


I hate to tell you, but your list looks like it came straight out of the 1970s :)

- Spaceflight (SpaceX Starship).

- Supersonic airplanes (Boom).

Been there, done that.


They are modern takes on the Space Shuttle and Concorde respectively, but with the benefit of hindsight as well as half a century of advances in material science and control systems. But really the defining feature is that the Space Shuttle and Concorde were government-funded prestige projects, while their modern incarnations are economically viable.


- Psychedelics

- VR/AR (photonic override, more specifically)

- Fundamental physics (unlocked by tech)


Can you elaborate on “photonic override”? Googling that phrase pretty much just returns more HN comments and tweets by you :)


A hardware/software proxy that governs all photons you see.


This is desirable?


Desirable or not is subjective, but what's not as subjective is the likelihood of this technology stack arriving and becoming popular, which seems very likely/guaranteed at this point.


It does seem likely to arrive, but whether it becomes popular is more difficult to predict, as it's conditioned on the sway of public opinion, which seems heterogeneous regarding AR/VR.


Certainly for someone with defective default hardware ;)


I certainly see the utility there!


Look into “diminished reality” - the ability to block things out that would be a distraction, for instance.


Non animal food. Will transform land use.

Remote education. Available to any kid or adult anywhere.


I think we're learning the limits of screen-based education now. There's something about being in the same room with a person at a chalkboard that is far more effective - at least anecdotally. (I'd be surprised if there wasn't research backing this up). And this seems deeply unfortunate if true, because it means the cost of learning can't go all the way to the ground.


Two thoughts: Africa can’t afford in-person teachers. The ones in my country are not great on average.

I couldn't learn maths in class. Too distracted, too annoyed with stupid questions. But I went up 3 symbols (grades) in 3 school terms with a slide projector and audio tape, where I could focus and rewind. The teacher was there for the bits I didn't learn from the slides. I'm probably in the minority, but I'm sure there are more of me.

Digital education catches kids like me and kids who have no access to excellent educators. And marginal cost is zero, so no harm in giving access to the world.


mRNA-based therapies?


Half of these are on their way into the "valley of disillusionment" before they become generally useful, though perhaps not as grandiose as originally promised.

And longevity research is not even a real need - we already live too long as it is, from the evolutionary and economic standpoint. I'd much rather someone came up with a way to cheaply and painlessly end one's life once quality of life begins to deteriorate due to chronic disease and such. Some kind of company (and legislative framework) where you pay, say $1K and they put you into a nitrogen chamber and then cremate and flush the ashes down the toilet afterwards. Or perhaps use them as fertilizer. I'd use the service myself at some point in distant future.


The How is not the issue. A combination of various drugs or an opiate overdose should do the trick. It's already legal in Switzerland, Canada, and Belgium.

Voluntary euthanasia is ultimately challenging because of similar legal issues as with the death penalty - it cannot be undone, and there are forces in society that can lead individuals to use it for other reasons than just being over and done with suffering through old age.


That's the point. As long as I'm lucid I should have full bodily autonomy, including the decision to shuffle off this mortal coil. In fact I already have control over this decision.

> and there are forces in society

So? You're going to tell me I can't go anytime I want to? That's not the case even now. It's just that now I'd have to procure the nitrogen myself (which isn't difficult), and my relatives would have to deal with the body. I'm merely suggesting a service that resolves this purely logistical complication, and excludes the possibility of not quite dying but living the rest of one's life as a vegetable.

Think of what we have now: people spend years, sometimes decades suffering from chronic diseases, or just plain not having anything or anyone to live for. And it'll get worse as medicine "improves", and lifespans "improve" with it. Is it humane to withhold the option to end it all from them? I don't think it is. I will grant you that there are likely tens of millions of such people on the planet right now. I will also grant that this is not an uncontroversial thing to suggest. But the alternative we have now doesn't seem any more humane or dignified to me.

If this still doesn't sit right with people, we could age and condition-restrict it, or require a long waiting period for when this is not related to acute incurable disease.

> as with the death penalty

Which is also inhumane, IMO. It's much worse to spend the rest of one's days in confinement instead of 30 seconds until barbiturates kick in. That's what the sadists who are against the death penalty are counting on.


Again, I am aware that there is no core technological or logistical issue. The issue is purely societal. Yes, I can get behind enabling people to specify policies on what to do when untreatable or mental diseases kick in.

The death penalty does not exist to reduce the suffering of the convicted, but to get rid of them. The true issue with the death penalty is that it can't be graduated (except by adding "cruel and unusual punishment") and it can't be undone. Prison sentences can be legally challenged and the innocent can be freed early.

There is a real slippery slope here: what length of prison sentence is considered to be worse than the death penalty? An additional thing to consider is that many countries without the death sentence actually don't impose true life sentences, but very long ones (upwards of 20 years). Confinement for life is for those judged to be an irredeemable threat to society after their sentence. Compared to that, many death row inmates actually spend decades fighting their sentence. They could end it at any time if they wanted.


Regarding commercialization of suicide: I think you're missing my point entirely somehow. The "societal" issue where old people are unwanted already exists, and it will exist irrespective of any innovation of this kind. Moreover, old or terminally ill people already have full control in terms of whether they choose to live or kick the bucket. There's nothing whatsoever anyone can do about that. Tens of thousands of people in the United States take their lives every year. It's just that if they care about their loved ones (if any) the logistics of dying are horrific. I wouldn't want to subject anyone to that, but I'm afraid if I were terminally ill, that'd be a pretty shitty reason to continue living, and make everyone I love suffer with me.

> The death penalty does not exist to reduce the suffering of the convicted

There's an easy way out of your moral dilemma that you go into after this sentence, much like what I suggest for those on the outside: let the convicts choose whether they want to suffer for the rest of their days in prison, or be humanely and painlessly killed. I know which way I'd go, under the circumstances. And yes, I do insist that the killing must be humane, dignified, and painless. We have the technology to ensure all three of those things.


I can empathize with the first point.

Regarding humane, dignified and painless killing: the Lethal Injection was supposed to be exactly this. But we humans are pretty good at botching things...


It is incredibly dishonest of them to post this without any details about the noise parameters of the system.

When reading "127-qubit system" you would expect that you can perform arbitrary quantum computations on these 127 qubits and they would reasonably cohere for at least a few quantum gates.

In reality the noise levels are so strong that you can essentially do nothing with them except get random noise results. Maybe averaging the same computation 10 million times will just give you enough proof that they were actually coherent and did a quantum computation.

The omission of proper technical details is essentially the same as lying.


Basically little more than having a bathtub and claiming you've built a computer that does 600e23 node fluid dynamic calculations. But a lot more expensive.


I haven't read anything on this one yet, but your analogy fits Google's “quantum supremacy” paper really well. I like it.


Ugh, the point is a fine one but it appears it has to be made:

Validation of experimental theory through the characterization and control of an entire system is not the same as building the same system and simply seeing the final state is what you expect. The latter is much easier and says very little about your understanding.

Here's an analogy: Two people can get drunk, shack up for the night, and 9 months later have created one of the most powerful known computers: A brain. Oops. On the flip, it's unlikely we'll have a full characterization and understanding of the human brain in our lifetimes – but if we ever do, the things we'll be able to do with that understanding will very likely be profound.


My reply was glib but I think in principle correct. The idea of a strictly controlled system in the NISQ domain to validate quantum supremacy in theory is an interesting approach, but it feels deceptive to me because this 127-qubit computer cannot in fact factor 127-bit numbers with Shor's algorithm or anything like that.

The accomplishment is more akin to creating a bathtub with 127 atoms and doing fluid dynamic simulations on that, which is a much harder problem in many ways than doing the 6e25 version of the experiment. But it is very questionable to me whether any claims of quantum supremacy retain validity when leaving the NISQ domain and trying to do useful computations.

Gil Kalai's work in the area [1] continues to be very influential to me, especially what I consider the most interesting observations, namely that classical computers only barely work -- we rely on the use of long settlement times to avoid Buridan's Principle [2], and without that even conventional computers are too noisy to do actual computation.

[1] https://gilkalai.wordpress.com/2021/11/04/face-to-face-talks... is a recent one

[2] https://lamport.azurewebsites.net/pubs/buridan.pdf


I mean, maybe in the theoretical sense, but do conventional computers barely work in practice? Seems a bit of a pedantic argument.

Gil Kalai and others with similar arguments play an important role in the QC community. They keep the rest of us honest and help point out the gaps. But I do think the ground they have to stand on is shrinking, and fast. Ultimately, they might still be right – that much is certain – but it seems to me that the strides being made in error correction, qubit design, qubit control, hardware architecture, and software are now pushing the field into an exponential scaling regime.

To me, the big question is much less whether we'll get there, and much more "what will they be good for?"


It seems like IBM has blown their credibility so many times. As soon as I saw IBM mentioned in the lead of the title, I knew what was to follow is almost entirely actual-content-free marketing spin.


With the caveat that the paper is from a competitor that I like, this benchmark paper [1] makes me inclined to disregard this result. See figure 1.

__EDIT:__ whoops wrong figure, just read section iv or see the first figure here [2]

[1] - https://arxiv.org/abs/2110.03137

[2] - https://ionq.com/posts/october-18-2021-benchmarking-our-next...


It's over here: [1]. James Wootton makes some comments about it here [2]. Looks like a 1% error rate, which I think is too high to do much with, but it is still exciting progress for those of us working in the field!

[1] https://quantum-computing.ibm.com/services?services=systems&...

[2] https://twitter.com/decodoku/status/1460616092959883265
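
To put that ~1% error rate in perspective, here's a deliberately crude sketch (my own toy model: independent gate errors, no error correction), which is roughly why deep circuits on today's hardware mostly return noise:

    # Toy model: success probability ~ (1 - p)^n for n gates with per-gate error p.
    def circuit_fidelity(gate_error, n_gates):
        return (1.0 - gate_error) ** n_gates

    for n in (10, 100, 1000, 10000):
        print(n, f"{circuit_fidelity(0.01, n):.2e}")
    # 10 -> ~9.0e-01, 100 -> ~3.7e-01, 1000 -> ~4.3e-05, 10000 -> ~2.3e-44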


It's particularly jarring given that IBM came up with the concept of quantum volume [0]...

[0] https://en.wikipedia.org/wiki/Quantum_volume


    > The omission of proper technical details is essentially the same as lying. 
Welcome, child, to the beautiful/"special" world of marketing!


Be gone fetus, for there has always been honest marketing. It's just harder to find, sadly.


If it’s below the noise floor, how do we separate it from ‘accidental (or incompetent?) honest marketing’?


Oh, I do not deny there are honest marketing people. But there are plenty of people in that game happy to omit some inconvenient factoids. And as always, those are the ones giving their craft as a whole a bad name.


Welcome to IBM. Is Quantum the new Watson?


I guess they should show that they have achieved quantum advantage:

The Chinese did show it some time ago:

https://www.globaltimes.cn/page/202110/1237312.shtml


Also another problem: you now have 2^127 output values leaving the quantum processor. If you're using a hybrid quantum algorithm that requires classical processing as well (which most algos used today are), you'd need more than a yottabyte of RAM. We can get around this problem by storing all 2^127 pieces of output data in other data types that compress the total size, but if you genuinely are trying to use all 2^127 outputs, you'd still need to do some pretty intensive searching to even find meaningful outputs. I guess this is where Grover's search could come in really handy, right?


You don't get the entire wave-function as output; the wave-function is not observable. Different measurements might reveal information about certain components of the state, at least probabilistically, but those same measurements will always destroy some information. See the No-cloning Theorem.


Right, but you would still get the basis states for all 127 qubits right? And that would be 2^127 output states. Yes, you could do some sort of search maybe to find highest probability outputs only, but if you needed every output value for a follow up algorithmic step (like in VQE for ground state prep wherein you keep using previous results to adjust the wavefunctions until ground is reached), then wouldn't it be a bit tough to use?


You have 127 qubits that you measure and you end up with a classical string of length 127. Sure, that classical string, the measurement result, could have ended up being any of 2^127 possible different values into which the wavefunction collapses. But that is no different from saying that there are 2^8192 possible states that 1 kB of classical RAM can be in. It is not related to the (conjectured) computational advantage that quantum computers have.
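
A toy illustration of that point in plain NumPy (not a real QC stack): an n-qubit state carries 2^n amplitudes, but each shot only hands back one n-bit string.

    import numpy as np

    n = 5                          # small enough that the statevector fits in memory
    dim = 2 ** n

    # A GHZ-like state (|00000> + |11111>) / sqrt(2): 2^5 = 32 amplitudes...
    state = np.zeros(dim, dtype=complex)
    state[0] = state[-1] = 1 / np.sqrt(2)

    # ...but every measurement collapses it to a single classical 5-bit string.
    probs = np.abs(state) ** 2
    shots = np.random.choice(dim, size=10, p=probs)
    print([format(int(s), f"0{n}b") for s in shots])   # e.g. ['00000', '11111', ...]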


Right, okay, makes sense... guess I am just too used to NISQ and having to run many thousands of shots for high enough fidelity... if all you wanted was one output, then yeah, one classical string is easy enough, thanks


I hope US scientists especially, but also our EU scientist friends, eat everybody else's lunch. Better tech. Better science. Better math. Better algos. And, yes, let's get those details right too. Noise management is a key discriminator between POC and practical.


Any idea where one can find qubit lifetimes and gate fidelities? The classical RF engineering behind controlling that many qubits is certainly great, but it is hard to get excited about the "quantumness" without these figures of merit.


You can create a free account at IBM Quantum and peek at the latest calibration data there.

Edit: Only a fraction of the qubits of the 127 qubit system were calibrated when I looked.


No they didn't; there is no such thing as a quantum computer or a quantum processor outside of theoretical papers. I know I'll be downvoted for saying that (like I was last time) but that doesn't change facts; they have a random number generator that is capable of generating a lot of very random numbers... cool, cool, cool.


If you added one word of qualification – practical quantum computer or quantum processor – this would actually be a reasonable position to argue.


No, I mean in the real sense, there is no actual quantum processor anywhere in the world (unless some secret organization actually built something amazing and is hiding it).

Not a single quantum logic gate exists that actually behaves as described in the theory, let alone circuits of quantum gates that do even the most basic computation. Actually, I'll take that one step further and say that no one has produced a single qubit (an actual logical qubit as described in the theoretical literature); these 127 (if that) are what they call "physical qubits", or what you and I would call chaotic sources of uncontrolled entropy (i.e. random number generators).

I'm not saying quantum computing in the physical world is impossible, I'm just saying no one has accomplished it (yet).


Uncontrolled entropy? If you can repeatedly throw a ball and hit a target but not the bullseye, that doesn't mean you have no control, it just means you need to practice more.

In the spirit of that, there are recent experimental demonstrations of logical qubits:

- NV in diamond qubits (https://arxiv.org/abs/2108.01646)

- Superconducting qubits (https://journals.aps.org/prxquantum/pdf/10.1103/PRXQuantum.2...)

True, a universal logical gate set has yet to be practically realized (at least to my knowledge). But, it seems you're looking for perfection and QC will never be that – even with error correction. Fundamentally, QCs are analog and probabilistic; all discrete states we define theoretically will be approximations in practice if and when they can be demonstrated.

But if the approximation is exponentially close to the target state and that is sufficient for the practical purpose at hand...does it matter?


Like the parent comment, your absolutism seems unreasonable. A wave-plate is a darn-near-perfect single qubit gate for a dual-rail encoded photonic qubit. It is just that we can not really generate photons on demand in a scalable way. Hence agreeing both with "there are some small noisy unreliable quantum computers" and with "there are no scalable useful practical quantum computers".


I am all in for downvotes. I am not a physicist, but the cost of cooling must be astronomical for the output you get. Cf. autonomous vehicles: when that technology gets released into production, it likely won't make your car look like it's wearing a dunce's hat, whereas when quantum computers enter production we will still need mK temperatures for them to operate. The cost of cooling will burn the planet up further, just so someone can crack a SHA-256 encrypted password in seconds... random numbers indeed.


There are technologies that would not need the 15mK operating temperatures, they are just in their infancy.

The most interesting applications of quantum computing have little to do with encryption or breaking codes. Chemistry and optimization problems are much more exciting.

SHA-256 is a hash, not an encryption algorithm. And quantum computers have nothing to contribute to reversing hashes or breaking symmetric encryption.


While I don't doubt that advances are being made, this has the hallmarks of cold fusion all over again. By the time it is in production, DeepMind will have made significant head way using conventional CPU/GPU.


Scott Aaronson had a brief comment on the news on his blog: https://scottaaronson.blog/?p=6111

Seems we're still a bit light on details. I hope to see a lot more on this. The progress on quantum computing lately is exciting though!


Will this system finally be able to factor the number 35 using Shor's algorithm? IBM tried and failed in 2019.


For those interested, here's the paper where the IBM Quantum Computer failed to factor 35: https://doi.org/10.1103/PhysRevA.100.012305

I found a cite to it claiming this was evidence of success in factoring 35, 'cos they didn't read past the abstract (which could be read as indicating success) to the words in the paper "Eventually, the algorithm fails to factor N = 35."

But yeah, "call me when it factors 35."


came here to post "call me when you can factor 25"


> IBM Quantum System Two is designed to work with IBM's future 433-qubit and 1,121 qubit processors.

what's the smallest useful (as in, 'non-toy', or maybe 'worth buying time on') quantum computer?


If you are a researcher from another field who just wants to contract out the computation of a numerical solution to some chemistry problem infeasible on a classical supercomputer, a million "physical" qubits is a fairly reasonable guesstimate.

If you are a quantum computation person developing near term applications, you probably would already start getting excited with a 100 (sufficiently long-lived) qubits.

The "sufficiently long-lived" is the problematic part. Every lab has its own bespoke figure of merit (quantum volume, CLOPS, fidelities, etc). It is basically impossible to compare devices without being a researcher in the field for now. But at some point a novel drug or material will be developed thanks to a quantum computer and then we should really get excited about renting time on these devices.


>a million "physical" qubits is a fairly reasonable guestimate.

...

>If you are a quantum computation person developing near term applications, you probably would already start getting excited with a 100 (sufficiently long-lived) qubits.

...

>"logical qubit" or less formally "long-lived qubit" ... With 100 logical qubits (i.e. 100k physical qubits)

lol you're literally guilty of playing the same trick that people are bemoaning in another thread.

incidentally, having taken a QC systems class from Fred Chong, i believe you guys are all working on vaporware.


Read the second comment you are quoting to the end. I think you misread something.


This is your lede answer to what is the smallest useful non-toy QC:

>If you are a quantum computation person developing near term applications, you probably would already start getting excited with a 100 (sufficiently long-lived) qubits.

Then you go on and on and on and have one sentence about what you can do with 100 full stop period qubits.

So what exactly did I misunderstand?

It's just funny to me how all of you guys - from the crypto QC grifter all the way to PIs and postdocs like you - play the same word game


I guess I should be more careful when using handwavy terms. "Long-lived" does not have a defined meaning, so in my second answer I spelled out what you can do with a bunch of "logical" qubits and with a bunch of "physical" qubits, providing clear definitions of these words. I am not sure why you are so angry about the use of a handwavy term though, given the whole point of this thread was to introduce the topic to lay people. Either way, the second comment is clear and it avoids the ill defined terms, so hopefully you would be happy with it if you read it in its full length. There is no such thing as "full stop period" qubits, hence the confusion.


>There is no such thing as "full stop period" qubits, hence the confusion.

of course there is - this is just more of exactly the same word play lol

>In 1995, Ben Schumacher provided an analogue to Shannon’s noiseless coding theorem, and in the process defined the ‘quantum bit’ or ‘qubit’ as a tangible physical resource

...

> For our elementary coding system we choose the two-level spin system, which we will call a "quantum bit" or qubit.

you can talk about surface plasmons or transmons or josephson junctions or whatever you want but the definition is always physical not logical (that's an abstraction!).

>I am not sure why you are so angry

I'm not angry - I already said what I am, and that's tickled/mirthful but also bemused by the QC community's consistent evasion of talking about physical reality.

So I think the truth is I'm not angry but you're defensive, because a 100-qubit QC is useless, and so would be a 1,000-qubit or even a 100,000-qubit QC. But that's quite inconvenient for a research community that's publishing papers and submitting grant proposals.


You are making up claims about what people say and then get angry/tickled/mirthful about those made up claims. All throughout this thread people working in the field explicitly say that you can not use this device for simulating chemistry or factoring numbers and admit that you need at least about a million physical qubits (about a thousand logical qubits) to start doing these things reliably. How can you then claim that we are trying to hide it because it is "inconvenient"? This is the big thing to solve in our field, mentioned in every single research proposal, not some hidden secret we are ashamed of. And your refusal to distinguish physical and logical qubits is just silly and leading to your confusion.


>And your refusal to distinguish physical and logical qubits is just silly and leading to your confusion.

...bruh i'm a phd student whose work supports SQMS at fermilab (not in physics but cs). i'm not confused about absolutely any of the terms or definitions. hint: you're not the only QC researcher in the room at all times.

>You are making up claims about what people say and then get angry/tickled/mirthful about those made up claims.

i'm not making anything up - this comment

https://news.ycombinator.com/item?id=29245025

doesn't say absolutely anything about error correction and just vaguely alludes to coherent qubits being somehow different from physical qubits. like are you kidding me claiming that you're being transparent while reporting 100 anything without immediately revealing that it's actually 100k? somehow in your mind 3 orders of magnitude isn't a big deal when communicating relevancy/value/merit?

for a farcical analogy: can you imagine me reporting 100 dead corps and then come to find out i'm talking about 100 corporations being massacred, each corporation employing 1000 people.

there is no other academic discipline that plays this sleight-of-hand. i'll give you another analogy that should be near and dear to your heart and will illustrate the point very precisely: can you imagine daniel simon saying he proved separation of BQP and BPP and not immediately (in the same sentence) revealing that it was oracle separation?


Again, read that series of comments again. I am saying I would be excited to work with a device with 100 physical qubits, not 100k. You are making up the least charitable possible interpretation of an offhand comment and getting angry at your imagination.


lololol you keep dancing around the issue - now we're on to comparing "excited" to "useful"; the question wasn't what you're excited to work with (i'm excited to work with 5 qubits through qiskit). the question, plain as day:

>what's the smallest useful (as in, 'non-toy', or maybe 'worth buying time on') quantum computer?

very obviously this person isn't asking about useful for writing papers...

>You are making up the least charitable possible interpretation of an offhand comment

nothing imagined here. just english. sorry.

> and getting angry at your imagination.

lol you keep insisting i'm angry. i mean if a reviewer reviews your submission and calls you out for inflating numbers i guess they're angry too? oh well


You are splitting hairs and picking arbitrary ways to interpret what I am saying. I would be excited, it would be useful, it would be great to work with 100 physical qubits, so that we can test control schemes, small error correction procedures, maybe some entanglement purification circuits, etc. You are willfully misinterpreting at this point.


no you are willfully misinterpreting this question posed by a layman

>what's the smallest useful (as in, 'non-toy', or maybe 'worth buying time on') quantum computer?

this question is not about papers or research - it is about value for problems/questions outside QC. simple as that. the answer to that question is ~100,000 physical qubits not 100.

again this whole exchange with you just further reaffirms that there's a very very strong reality distortion field around this entire area of academia.


And if you actually read my answer you see that I agree and in the first paragraph say that you would need around 1M physical qubits for anything practical!? The second paragraph specifically says it is about researchers in the field trying to push the field forward, playing with 100 physical qubits (yes, that usually involves writing papers).


What near-term applications would be possible with just 100 long-lived qubits?


With long-lived ones, gosh, a ton. When you hear a researcher talk about "logical qubit" or less formally "long-lived qubit", they mean a reliable abstract computational component. When we talk about "physical qubits" we mean the unreliable real implementations like the one from this article. A rough rule of thumb is that you need about 1,000 physical qubits to make one logical qubit.

With 100 logical qubits (i.e. 100k physical qubits), you can start thinking about running chemistry simulations on the edge of what is possible with classical supercomputers. That is what I am excited about. There are also optimization problems, and some pretentious claims about quantum machine learning, which I am sure would be fun but which I am not as excited about.

With 100 physical qubits, you can start testing non-trivial control schemes, circuit compilations, error correction methods, and many other building blocks.
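
To put rough numbers on that rule of thumb (the 1,000:1 overhead above is hand-wavy; real overheads depend heavily on error rates and the error-correcting code used), a toy calculation:

    # toy overhead arithmetic, assuming ~1,000 physical qubits per logical qubit
    PHYSICAL_PER_LOGICAL = 1_000  # assumed; varies a lot with hardware fidelity

    def physical_needed(logical_qubits):
        return logical_qubits * PHYSICAL_PER_LOGICAL

    print(physical_needed(100))    # ~100k physical qubits: edge-of-classical chemistry
    print(physical_needed(1_000))  # ~1M physical qubits: the "factor numbers" regime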


For example, routing algorithms used to re-route railroad traffic when there is a problem with a junction or track.


Is it accurate to say the primary use of this is likely to design further quantum computers?


Yeah, the R&D that goes into this is mainly part of the longer-term effort to make increasingly large quantum computers.

In case you meant it the other way: the number of qubits here is still far too small for any real world application, including for simulations that would help design larger chips.


In a way, yes. I usually call such devices "technology demonstrators". But I am certain there would be researchers offended by such a trivialization of their work.


Kinda depends on what you want to use it for. If you want to simulate a physical system then you need at least as many qubits as the system has degrees of freedom (and an understanding that such a simulation will have terrible noise). If you want to do something “useful” like run Shor’s algorithm to factor large encryption keys then it would take millions of qubits and quantum error correction.

To me the significance of this kind of increase in number of qubits is that many detractors of quantum computing had argued we’d never even reach this point, so I am slightly more optimistic that we’ll eventually reach the scale required for reliable abstract computations.


Why 433 and 1121? In classical computers, everyone knows why a lot of things are done in powers of 2. Is there a different base number useful for quantum computation that I don't know about (for example, is this an integer multiple of a specific sin/cos value)?


Not really. For this particular device they are laying out the physical qubits in some geometric lattice constrained by where they can put traces for microwave inductors and capacitors, and it just happens that this is the convenient size at which to try to build the lattice.


It depends on your use case. By some measure, the Google collaboration that demonstrated a time crystal was a practical quantum advantage in the space of condensed matter research. But in that space, practical means validating theory with experiment. That's a much lower bar for a quantum advantage than, say, showing that you've implemented a quantum algorithm that can, with high statistical likelihood (e.g. an error rate of 1 in 10^5), provide fleet routing solutions for a logistics company that are 10% more efficient than any known classical routing algorithm.


A true random number generator - this is basically the "hello world" of a quantum computer.
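
A minimal sketch of that hello world (written against the 2021-era Qiskit API and its bundled simulator, so the output here is only pseudo-random; on real hardware the same circuit samples genuine measurement randomness):

    # put one qubit in an equal superposition and measure it
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(1, 1)
    qc.h(0)           # Hadamard: |0> -> (|0> + |1>)/sqrt(2)
    qc.measure(0, 0)  # each shot yields 0 or 1 with ~50/50 probability

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=16).result().get_counts()
    print(counts)     # e.g. {'0': 9, '1': 7}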


I'm starting to get some fatigue about quantum computer news, and I just dismiss them now. If there really is a valuable breakthrough, everyone will be talking about it for months non-stop so I won't miss it.


This is exactly correct. The moment something important happens in quantum computing the community will recognize it as such. In the meantime most of us are just waiting for somebody to do something interesting that couldn't really have been solved (possibly approximately) on classical systems.


In that sense, this is a pretty big step, because for the first time there is a quantum computer that cannot be simulated by a classical one. It's also expected they will have a processor with over 400 qubits next year and over 1,000 in 2 years. These are huge leaps.

Still, they aren't currently doing anything useful and need to prove themselves.


I wouldn't trust the claim that it can't be simulated; right after Google declared quantum supremacy, another group showed that the classical simulation was actually much faster than Google said (and that's before any real optimization work gets started). https://arxiv.org/abs/2110.14502

Also, as pointed out, these aren't error-corrected (logical) qubits, nor does any doubling mean the QC will suddenly be able to do anything useful.


I am surprised to see that growth given my intuition that each additional qubit would be harder than the previous to add to the system.

Are there any Moore's-law-type predictions/historical trackers for qubits?


My understanding is that adding qubits is not hard, but using them together is hard. Also, it sounded like some qubits are only added for error correction.


What are the innovations allowing for these large leaps? It seems like forever that progress was extremely slow.


Note: I'm a total layman and have a single source of information, which I read just before seeing the article on Hacker News:

https://arstechnica.com/science/2021/11/ibm-clears-the-100-q...

The Ars article goes into a little more technical detail (a little) than the press announcement.


That's kind of like saying, I'm tired of political news - if we all actually die in a nuclear holocaust/Armageddon, it will be hard to miss.


Actually, what you just said is the equivalent of saying, "We just developed a quantum computer that allows us to solve not just every physical world question we have, but every metaphysical one as well."

Since you're literally comparing this to the end of most life on Earth, I think it's fair for me to compare it to the development of a computer that provides limitless understanding.

They're both equally ridiculous, in other words.


To be clear, I am not saying they are equally likely.

In the near term (next 5-10 years) the end of the world/WW3 is much more likely than a quantum computer that can break the crypto in use today (say, 2048-bit RSA). I would say orders of magnitude more likely. That's not saying that nuclear war is likely; it's just a statement about how unlikely a practical implementation of Shor's algorithm is in the near term, based on where we are today.

However, that said, it's kind of beside the point. The point of the comparison is that both events are really unlikely. Quibbling over whether it's 1 in a million or 1 in a hundred thousand is beside the point.

The news is interesting because it's a step in that direction. Just like how news about destabilizing world events is interesting even if it isn't literal full-on war. It's a step on a path. The destination is decades away.


I feel like a nuclear war is a much likelier (yet very low probability) event than your computational straw man.


It's not like political news. It's like new battery tech news.


If this were the 1600s and nobody had made a working battery yet, I would find battery tech news very interesting even if it was of the quasi-theoretical, only-works-in-a-lab variety.


That's fair. So why are you posting in this thread? If you followed your own strategy, you would save a lot of time, no?

What I'm trying to say is: please don't hijack Hacker News threads just to complain.


Reminds me of computing history with vacuum tube computers the size of a room. Even if the actual hardware isn't practical today, the lessons learned will still apply in the future.


Yeah, I agree. There are so many details to work out about controlling these things that this is still significant progress. It's like we need an Elon Musk of quantum to tell us what's going on. We are making these just to blow them up; they aren't going to the moon.


Interesting that the number of qubits is approximately doubling every year according to the article.


Yeah, and with the QuEra MIT startup announcing a 256-qubit computer today (just a day after IBM), the rate of exponential change might be even higher. We could get to a million qubits by mid-December!

https://news.ycombinator.com/item?id=29259549


What should we call this 'law'?


A Quanta article (prematurely, in my view) tried to dub this Neven's law: https://www.quantamagazine.org/does-nevens-law-describe-quan...


Investor Steve Jurvetson calls it Rose's Law, after Geordie Rose, founder of D-Wave Systems:

https://www.flickr.com/photos/jurvetson/50399541811

> We first invested in 2003, and Geordie predicted that he would be able to demonstrate a two-bit quantum computer within 6 months. There was a certain precision to his predictions. With one bit under his belt, and a second coming, he went on to suggest that the number of qubits in a scalable quantum computing architecture should double every year. It sounded a lot like Gordon Moore’s prediction back in 1965, when he extrapolated from just five data points on a log-scale (his original plot is below).

> So I called it “Rose’s Law” and that seemed to amuse him.


Schrodinger’s shrinking cat


Perhaps: Moore's Second Law


Moore's law?


well Google Sycamore (53 qubits) arrived in 2019, and it's 2021 now - so 2 years (being generous).


This is fascinating to me! Quantum computing is such an incredible frontier. But I suppose it will mostly be used to decrypt all the currently un-decryptable internet traffic being archived at the Utah Data Center [1]. But maybe it will also be used for something good for humanity, too.

[1] https://en.wikipedia.org/wiki/Utah_Data_Center


The cryptography uses are fairly boring in my opinion. We already have classical cryptography techniques which are not susceptible to quantum computers.

On the other hand, being finally able to fully simulate large molecules with significant quantum effects (not possible even on classical supercomputers) would be amazing.


I like this talk... and especially around the 10 minute mark.

https://www.ted.com/talks/craig_costello_in_the_war_for_info...


Where does this sit on the "applicably breaking secure classical cryptography" spectrum?


It’s completely irrelevant to that problem. Breaking crypto requires quantum error correction, which we think requires thousands (perhaps hundreds of thousands) of physical qubits per logical qubit. And the operations of the computation require many more logical qubits than just representing the target value.

It’s still gonna be awhile. But this is still pretty interesting because a lot of the detractors of quantum computing thought there was strong evidence that we’d never even manage to get this far. So it seems _slightly_ more likely that large scale abstract quantum computation is feasible.


Many years away. A good guesstimate is that you need 1M physical qubits to factor numbers fast.

Also, there is classical public-key cryptography (i.e. encryption algorithms that run efficiently on today's classical computers) that is not susceptible to quantum computers. And symmetric cryptography is only mildly affected (Grover's algorithm gives at most a quadratic speedup, which doubling key lengths addresses).


So if we extrapolate the growth rate from 53 to 127 qubits, getting to ~1M physical qubits is about 13 more doublings, or roughly two decades?

Neat!
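
Back-of-the-envelope check (my own rough arithmetic, assuming the 53-to-127 jump took about two years and the trend naively continues):

    import math

    doublings_needed = math.log2(1_000_000 / 127)  # ~12.9 doublings to reach ~1M physical qubits
    years_per_doubling = 2 / math.log2(127 / 53)   # ~1.6 years, from the 53 -> 127 jump
    print(doublings_needed * years_per_doubling)   # ~21 years, i.e. roughly two decades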


Gidney and Ekerå have you covered: https://arxiv.org/abs/1905.09749

Short answer, this probably won't even register as a pitstop on the technical pathway.


There are probably entities saving encrypted data right now for when that day arrives. Brings up many interesting lines of thought beyond "is it breakable right now?"


Does it really though? Every secure system design pretty much assumes that the cryptographic standard will be broken in a few decades. Is there really any secret today that would be problematic if made public 30 years in the future? And that would not have been made public by some other method anyway?


> Is there really any secret today that would be problematic if made public 30 years in the future?

Sure. For one, that all the major earthquakes in the last 30 years, resulting tsunamis, destruction and loss of life, were manmade and intentionally caused by, say, Nabisco. Also, it would be a little shocking to the public if it were revealed there are no humans left, only alien-hybrids.


Both of these facts would be quite irrelevant when ultimately discovered. It is hard to make people care about anything that happened 30 years ago that did not affect them personally, and in the latter case the alien-hybrids would have to just... accept the fact.


For me, the name 'Eagle' reminds me of the book "Soul of a New Machine" by Tracy Kidder. That book is about the design and bring-up of another new (at the time) computer system, the Data General MV/8000, which was also code-named Eagle.

I wish IBM great success with their work.

edit: Clarified that the MV/8000 project was also code-named Eagle.


Haven’t read that yet but I read “House” by him recently and it was quite good


I enjoyed both Soul of a New Machine and House.

A scene from the former that I remember. The principal architect of the machine (Tom West) is talking with Edson DeCastro (CEO of Data General) and is asking for a new oscilloscope to help with the bring-up of the machine.

DeCastro tells West, essentially, he's not authorizing a new, expensive scope. West, flabbergasted, asks why. DeCastro lowers his head, peering over the top of his glasses at West, and says "Because scopes cost money, and engineering overtime is free."

I'm telling this from memory and some of the details are wrong. I guess I should go get a Kindle copy of the book and re-read it. :-)


I read the book when it was new. Just a handful of years later I graduated college and got a job at a start up. One of the guys who interviewed me and ended up in the office directly across from my cubicle was Carl Alsing.

A few months later someone mentioned that Carl was in "Soul of a New Machine" (Carl was the seasoned hand who was in charge of the "microkids", the green engineers responsible for writing the microcode).

I re-read the book. When Kidder first introduces Alsing he sums him up in a few sentences, and damn if he wasn't spot on. Later in the book is an entire chapter about Carl and his unorthodox work habits, and again, it all was so on the mark with what I had learned of Carl firsthand that it gave the rest of the book a great deal of credibility.


House is his other book that clicked with me, given I had bought a fixer-upper at the time I read it. I really liked both books, but none of the topics of his other books really grabbed me.


It's a fantastic book, highly recommended. I have gifted it tens of times by now, I used to buy them by the box :)


Anyone who wants to evoke the pioneering spirit of the moon landing™ uses the name. I worked on a project codenamed "Eagle" just earlier this year.


I can't avoid thinking that, as a species, QC is a bad investment. I think we're trying just too early, like Charles Babbage. Great idea, but the world doesn't have the tech required.

I think that if they are honest with themselves, most researchers know they won't see the day QCs are a practical reality, but everyone is trying to become the “father of QC”.


I hear what you are saying, but maybe on the way to the moon, we can invent microwave ovens


Not to be brazen, but if your argument could be used to justify any investment, I’m not sure it’s a good argument.


Any investment that requires orders of magnitude improvements in technology. Most investments don’t.


> The best way to predict the future is to create it


Yeah, but trying to build things too early makes them meaningless.

Babbage designed a computer that had essentially no impact, because it could not be built, and by the time the tech was around we had better ways to build computers.

As a global effort, it may be wiser to shift focus to other, more achievable things, and try QC again in 2100 (IDK, just some random future time).


Isn't the problem, though, that you don't really know what is, and isn't, achievable until you actually get there and have the benefit of hindsight?


Babbage's technology was feasible in principle. The difference engine was successfully built as a museum project, though the effort was complicated by Babbage not leaving behind detailed plans[0]. It can be argued that gear-based computational engines face inherent technical challenges that electrical designs don't. But simpler, far more successful versions of mechanical calculators have been developed as well[1].

There is a far greater resolve to invest in quantum computing technology compared to Babbage's technology in the 19th century. It remains to be seen whether ultimately something useful emerges. But we might discover other things along the way, just as the Manhattan Project and the race to the Moon were the pathway to developing several other useful technologies.

[0]: https://www.computerhistory.org/babbage/modernsequel/

[1]: https://en.m.wikipedia.org/wiki/Curta


> The difference engine was successfully built as a museum project

…with modern CNCs. It was simply not feasible at the time to build the required parts with enough precision. That's what I meant by “no impact”: no one built on it, because by the time we had the tools to build it, the tools were better than the thing itself.

As for the “along the way” argument, for me that's a non-starter. We could use the same argument to justify investing in scams: “Cold fusion might not be possible, but think of what we could discover on our way there!”


The link I referenced stated that care was taken to ensure the tolerances required were achievable with 19th-century technology. The biggest difficulty was in refining the designs into actionable manufacturing plans. Of course they used CNC; why mess up at the last step?

For both the moon landing and quantum computing the challenges were recognized to be enormous, but achievable by unthrottling the money faucet. Cold fusion is different because there is not even a theoretical avenue ahead. It would be a true search in the dark at this point.

Yes, I recognize that these astronomical projects were seldom launched with purely scientific goals in mind, and I wish humanity would do it for the science in the future.


I wonder if, at the same level of technology, quantum and classical computers end up performing similarly — do you need exponentially less noise for more qubits? If so, it seems similar to me to requiring exponentially more classical compute power to equal it, and they kinda end up equivalent?


For those who know how all this works or how it should work.

If it is true that a type of quantum computer might be able to factor large numbers, and if it is true that it would allow the users to read lots of encrypted data, then quantum computing would be at the very top of the list of every intelligence agency out there in every country. I am thinking multi-billion-dollar-per-year labs.

It would be a direct threat / issue / opportunity to national security.

(I am burying myself in assumptions I cannot begin to justify.)

If that is the case, is what we are seeing here from IBM, or Google, state of the art?

What are the chances that some (secret) government lab somewhere ( not necessarily in the US) has a much more advanced model already working?

Is there any chance that a working crypto breaker could be operational?

Of course, if there were such a thing out there, it would be in the greatest interest of whatever faction had it to ensure nobody knew about it. Since it would give an enormous advantage to possess and use it, it would be critical to not let anyone know.

I came across some declassified docs covering the NSA a long, long time ago; from what I learned, it seemed like they had access to technology that was not commercially available at the time.

(Sorry, I like to write fictional stories in my spare time. I may have dipped into that territory too much in this post.)


Intelligence agencies are doing this the other way around. They're recording existing encrypted traffic to decrypt later should a useful quantum computer come into existence.

Cryptographers are working on post-quantum cryptography. This is expected to work in practice but they have to make it efficient and it's going to go through a couple of generations of new attack methods being discovered and then thwarted. At this point the level of deployment is basically zero.

Notably, pre-shared key systems (i.e. systems that use symmetric cryptography) are not as vulnerable to quantum computers, if you need something that works right now.


I'm quite confident from the press release alone that these systems are still almost entirely useless for breaking crypto BECAUSE of the fact we're hearing about them. If IBM had a chip that could fundamentally break crypto, the US government would almost assuredly immediately embargo any mention of it. Sure, a lot of the science behind it would likely be in public journals, but as we approach a point at which it could actually break crypto, it would become a state secret.

I have no doubt the leaders of IBM are well aware of that fact (as well as google, d-wave, the University of Science and Technology of China, etc).


As someone who knows little about quantum computing, what is the significance of 127 qubits? (as opposed to classical computing's 128 bits, as a reference)


Nothing. It is just how many they were able to manufacture and control reliably. They were probably aiming for some power-of-two number but had a couple of defects.


Computational power for quantum computers goes as 2^n, where n is the number of qubits, so unlike classical computing this machine should be orders of magnitude superior to Google's 53-qubit Sycamore.

If you wanna learn more about the subject, a couple of years back I wrote an introduction to quantum computing for programmers which you may find useful:

- https://thomasvilhena.com/2019/11/quantum-computing-for-prog...


What does "computational power" mean though? Which specific problem can these systems solve that classical computers cannot, or more effectively?


Roughly, computational power = number of operations per cycle.

Classical computers can only perform a number of operations per cycle linearly proportional to the amount of hardware architecture available (ex: one core, two cores, quad-core). Quantum computers can take advantage of superposition and entanglement to, given some restrictions, perform multiple operations per cycle, proportional to "two" to the power of the number of qubits.


That's not quite it. The power of a QC comes from modeling an exponentially large probabilistic state space using entanglement and superposition. The operations performed by a QC are also different: they can be analog (arbitrary rotations), but circuit depths (i.e. the number of operations) are still expected to be polynomial.

The difference is that the probabilistic state space uses probability amplitudes, which are complex valued and can be positive or negative, allowing for constructive and destructive interference over the probabilities tied to each state. Orchestrate the right kind of interference, and for some problems, you have an algorithm that outputs a solution to that problem with relatively high probability in time that, depending on the problem, may be exponentially faster. Examples of those problems include prime factorization/discrete logarithms (Shor's algorithm) and ones in quantum simulation (hence the interest in QC by chemists, physicists, etc.)
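
To get a feel for why that state space is hard to handle classically, here is a rough illustration (16 bytes per complex amplitude is an assumption, and real simulators use many tricks to beat brute force):

    # memory needed to store a full n-qubit state vector on a classical machine,
    # assuming one complex128 amplitude (16 bytes) per basis state
    BYTES_PER_AMPLITUDE = 16

    for n in (30, 53, 127):
        gigabytes = (2 ** n) * BYTES_PER_AMPLITUDE / 1e9
        print(f"{n} qubits -> {gigabytes:.3g} GB")
    # 30 qubits is ~17 GB, 53 qubits is ~1.4e8 GB (~144 petabytes),
    # and 127 qubits is far beyond any conceivable classical memory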


Thanks for detailing. I was explaining in layman's terms, using fewer technical details, with some reservations ("roughly" and "given some restrictions").

> Orchestrate the right kind of interference, and for some problems, you have an algorithm that outputs a solution to that problem with relatively high probability in time that, depending on the problem, may be exponentially faster.

Exactly. Given some restrictions, it's possible to implement algorithms that are equivalent to performing an exponentially large number of classical operations per "cycle".

> Examples of those problems include prime factorization/discrete logarithms (Shor's algorithm)

Indeed. I provide an implementation of the Deutsch–Jozsa algorithm [1][2] based on my own quantum computing simulator, which I linked in my blog post (in the original comment), to address this.

[1] https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorith...

[2] https://github.com/TCGV/QuantumSim/blob/master/Tcgv.QuantumS...


Thus far, only D-Wave has truly scalable control over their qubits. They accomplished that by moving to an entirely different computational regime. Until gate-model efforts find a solution to scalable control, every "we built a bigger chip" announcement is a milestone in microwave engineering.


As someone who is very much a layperson, my understanding is that the qubit count of a quantum computer is closer to the RAM capacity of a classical computer than anything else.


Qubits? Wanna impress me? Tell me about the great strides you've made in fault tolerance.


A new MIT startup, QuEra, just announced a 256-qubit quantum computer, and they used it to make pixel art. https://www.technologyreview.com/2021/11/17/1040243/quantum-...

Discussion: https://news.ycombinator.com/item?id=29259549


Aw, they couldn't stuff one more in there to get a power of two? (I know it doesn't really matter for qubits, but still.)


I wonder how fast quantum computers are progressing. Like, are we seeing increases in performance similar to what we saw with silicon computers from the 60s until now? Or is it slower due to the intense cooling we need to give them?


IBM has been making the coolest-looking (and coolest) computers... https://www.youtube.com/watch?v=a0glxDw700g


It's crazy that a lot of the freezers they use, which can get temperatures down to a few millikelvin, are commodities at this point. You could go and buy one for $50k if you wanted to!


Very press-releasey, which I guess is fair given that it is, indeed, a press release. I'd love to hear more context from people knowledgeable in the field.


So what kind of qubits is IBM going for? transmons?


I like how their emails are on the article, like "hey, I'm a quantum computer scientist who works at IBM. Hire me somewhere great!"


Anyone here writing quantum programs? :)


Yes.


Care to elaborate? :P

What are you programming? Which languages? I'm keen to learn more!


There are a lot of languages. I use Python. Right now I'm working on a project using the Pennylane library for quantum machine learning. Other languages exist: dedicated ones like Q# or Quipper (built on top of Haskell), or libraries in Julia, C++, etc.

One paradigm, variational quantum algorithms (part of the broader class of hybrid quantum-classical algorithms), has several similarities to classical deep learning. The approach can be summed up as:

1. Construct a parameterized quantum circuit – one where the quantum gates are controlled by real-valued parameters. Ideally this circuit is one you think could reasonably solve some problem, based on your knowledge of the problem, input data, and behavior of quantum information.

2. Define an objective function.

3. Iteratively adjust the circuit parameters. This is done by measuring the output of the circuit and using that information in conjunction with some optimization routine (e.g. stochastic gradient descent); a minimal sketch of this loop follows below.
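
To make steps 1-3 concrete, here is a minimal toy version of that loop in Pennylane (a sketch with a made-up two-qubit circuit and a trivial cost function, not my actual project code):

    import pennylane as qml
    from pennylane import numpy as np  # autograd-aware numpy for trainable parameters

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev)
    def circuit(params):
        # step 1: parameterized circuit - rotation angles are real-valued parameters
        qml.RY(params[0], wires=0)
        qml.RY(params[1], wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(1))

    # step 2: objective function (here: just minimize <Z> on the second wire)
    def cost(params):
        return circuit(params)

    # step 3: iteratively adjust the parameters with a classical optimizer
    opt = qml.GradientDescentOptimizer(stepsize=0.4)
    params = np.array([0.1, 0.2], requires_grad=True)
    for _ in range(100):
        params = opt.step(cost, params)
    print(params, cost(params))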

In general terms, I'm working on QML algorithms that may some day be used for applications in biology and medicine.

For a serious discussion of the approach, see: https://www.nature.com/articles/s42254-021-00348-9

There's also several nice walkthroughs of the basics on Pennylane's website: https://pennylane.ai/qml/demos/tutorial_variational_classifi...


"IBM measures progress in quantum computing hardware through three performance attributes: Scale, Quality and Speed."

That's nice. The real world (let alone the quantum one - whatever that means) doesn't.

Please Mr IBM: Get your scientists to inform your press releases. At the moment you sound like a bit of a nob with an unfortunate affliction.


"As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely,"

As a clever bloke (Aaronson) said: "Get a grip and give us the science (in paper form)".

His blog is quite animated.


IBM’s next ad campaign:

“IBM is quantum computing. Run your business on quantum computing”


Can’t wait to install Windows on this!


This is IBM, it will only run OS/2.


I prefer the old standby responses:

> Just imagine a Beowulf cluster of these

Or:

> But can it run Crysis?


Amazing!


surprisingly not a word about quantum supremacy


> It heralds the point in hardware development where quantum circuits cannot be reliably simulated exactly on a classical computer.


I mean, that was practically already the case with the Google Sycamore processor. IBM claimed that they could simulate the 53-qubit circuit in something like 24 hours on a supercomputer (with a bunch of bespoke optimizations), but a 54-qubit version would have been completely classically intractable. We didn’t need to get to 100+ qubits, and those came out before now.


Google's claim was widely criticized for being an unimportant and uninteresting benchmark, making the quantum s*premacy claim nonsense.

Later research proposes that an exascale-level supercomputer could simulate it in "dozens of seconds".

https://arxiv.org/abs/2111.03011


That's such a stupid criticism. Quantum supremacy is an arbitrary benchmark by definition.


This is a funny comparison.

"Here is this device with 50 components. It can be simulated by a device with 1000000000000 components, so we should not really be impressed."


Truth. Not to mention the stunning asymmetry in energy usage. At a minimum, if we can solve an equivalent computational problem using a quantum computer with orders of magnitude less energy than a classical HPC, QC merits consideration. The carbon footprint of data centers is far from negligible.

Computational advantages aren't the only types of advantages we should care about.


We can barely simulate a rough approximation of the brain of a worm. The world is filled with many things more impressive than early examples of quantum computing.


That is a very good counter-argument, in the style of this comic I really like: https://www.smbc-comics.com/comic/2013-07-19

The big difference is that these early quantum devices (non-scalable noisy quantum computers) are *programmable* and *universal*. It is the difference between an analog computer that can simulate one thing of fundamentally bounded size and digital computers that can simulate "anything" with *in principle* unbounded size.


When it costs a few dollars to make the latter and tens of millions to make the former then it’s fair to not be super impressed.


You are off by 9 orders of magnitude at least. The classical supercomputer in these comparisons costs $0.3 billion, and this does not count the many trillions it took to develop the tech.

Even on this measure, the (useless for now) quantum tech wins.


I was assuming it was transistor count to qubit count; I didn't count the 0's, but either way it's not 9 orders of magnitude off, so I'm not sure what exactly you are saying. But either way, all I am saying is that it isn't impressive to a lot of people precisely because it is useless right now. So you are off by 100% in my opinion, way more than 9 orders of magnitude ;)


The people that have created these devices never claimed that they can be used for any useful computation, nor did the people you are talking to here. However, these technology demonstrators do show a programmable computation (sampling from a particular probability distribution) that is infeasible on anything but a supercomputer and becomes just impossible once you add a couple more qubits.

Sure, we do believe these devices, when made more reliable, will also do "useful" computations that are infeasible on supercomputers, but we are aware that we need to build the devices first in order to convince you.

9 orders is the difference between a few dollars and a billion dollars.


Because we all know “Quantum Supremacy” is just another marketing term, like “military-grade encryption”.


Maybe because China has that now?


Quantum supremacy just means that a quantum computer is better than a classical computer at solving some (toy) task, not that one country's quantum computer is better than another country's.


Just keep telling yourself that.


Is there evidence for your claim, or are you just partaking in the incredibly banal "China is going to overtake us in ____" doomsday porn?


I see you missed the announcement last week. Whether to believe it is up to you.


https://www.scmp.com/news/china/science/article/3153727/chin...

"Lead researcher Pan Jianwei said the Zuchongzhi 2 – a 66-qubit programmable superconducting quantum computer named after a 5th century mathematician"

IBM's quantum computer is 127 qubits.


"Mine is bigger!" does not define quantum supremacy. In particular, IBM does not claim to have performed any computations with theirs. By what they do report, that is because it cannot do any, yet.


Can it run doom though?


For the first time ever, the answer appears to be no. A truly unprecedented machine.


Can you flip qubits with Rowhammer?


>'Eagle' is IBM's first quantum processor developed and deployed to contain more than 100 operational and connected qubits. It follows IBM's 65-qubit 'Hummingbird' processor unveiled in 2020 and the 27-qubit 'Falcon' processor unveiled in 2019.

I guess I missed last year's announcement of the 65-qubit one.

So okay, we have a 127-qubit machine; what did they do with it afterwards?

The Q3 financials were already released, so this article can't have been released to pump up the stock price.


In the months after such an announcement you can expect articles with various attempts at an application to pop up on arXiv. I am saying "attempts at an application" not because the papers are not impressive, but rather because the devices are still too small and noisy to excite anyone but researchers in the field. As a researcher in the field I am certainly very excited, because the figures of merit I care about have been drastically improved, and this reinforces my belief that we will have a device solving classically-infeasible chemistry simulations soon (anything between 5 and 15 years ;)


As much as I would love to see quantum computers contributing to quantum chemistry, it's unclear that having more computation would magically solve any actual practical real-world applied problem in chemistry.

I assume you're saying "classically infeasible" to refer to the O(n^7) scaling of some QM basis functions?


Your assumption is correct, and your cautiousness is warranted. I do expect the polynomial complexity to become better with future improvements in algorithms. Either way, it is on us fanboys to make devices that fulfill these claims.


Best of luck. Having my skepticism disproved by the QC folks would be a major win, but it's not something I'd spend my time on. I think it makes more sense to improve existing codes to run as fast as possible on the biggest supercomputers we have, although even that isn't super useful because, as far as I can tell, better chemical simulations don't lead to better applied science in the field of chemistry.


Would you have examples of such simulations at hand? Or links describing some of them? I know next to nothing about quantum computing but I’ve always loved a good infeasible problem.


Feynman's 1981 "Simulating Physics with Computers" is one of the first mentions of how it is (naively) exponentially expensive to store on a classical computer the quantum state of something with n degrees of freedom (a molecule made of multiple atoms). He suggests (vaguely) the notion of a quantum computer. https://www.google.com/search?hl=en&q=simulating%20physics%2...

It is less known that a Russian scientist made similar remarks at the same time.

This "Science" news blurb pops up on Google as an intro as well: https://www.science.org/content/article/quantum-computer-sim... . Although it makes you laugh when you notice that the principle was suggested in 1981, formalized in the mid-90s, with initial experimental successes in the late 00s, and today we are barely simulating 3-atom molecules. In our defense, it was 100 years between Babbage, passing through Turing, and getting to something like ENIAC. And a few more decades before the PC.


Yuri Manin is "way less well known"? I just spit my coffee out when I read that.


I apologize, what I was attempting to say was "here it is way less known that a scientist in Russia made the same observations at the same time". I will edit my comment.


Thank you!


> So okay, we have a 127-qubit machine; what did they do with it afterwards?

Characterize its performance, review what they've learned, and start on the next design.

We're either thousands of qubits or a major theoretical breakthrough in error correction away from using a quantum computer for something other than learning about building quantum computers.


Obligatory PSA: Scott Locklin's "Quantum computing as a field is obvious bullshit":

"When I say Quantum Computing is a bullshit field, I don’t mean everything in the field is bullshit, though to first order, this appears to be approximately true. I don’t have a mathematical proof that Quantum Computing isn’t at least theoretically possible. I also do not have a mathematical proof that we can or can’t make the artificial bacteria of K. Eric Drexler’s nanotech fantasies. Yet, I know both fields are bullshit. Both fields involve forming new kinds of matter that we haven’t the slightest idea how to construct. Neither field has a sane ‘first step’ to make their large claims true.

.....

“quantum computing” enthusiasts expect you to overlook the fact that they haven’t a clue as to how to build and manipulate quantum coherent forms of matter necessary to achieve quantum computation. A quantum computer capable of truly factoring the number 21 is missing in action. In fact, the factoring of the number 15 into 3 and 5 is a bit of a parlour trick, as they design the experiment while knowing the answer, thus leaving out the gates required if we didn’t know how to factor 15. The actual number of gates needed to factor a n-bit number is 72 x n^3; so for 15, it’s 4 bits, 4608 gates; not happening any time soon".

[1]: https://scottlocklin.wordpress.com/2019/01/15/quantum-comput...



