The CIA and Jeff Bezos Bet on Quantum Computing (technologyreview.com)
103 points by iProject on Oct 4, 2012 | hide | past | favorite | 61 comments



The societal impact of quantum computing would be immense -- the entire security infrastructure of the web would have to be rebuilt from the ground up.

However, many experts -- including Scott Aaronson (who is mentioned in the article) -- have cast doubt on D-Wave's claims, because the company has not yet proved that its machines are producing results by exploiting quantum phenomena like particle entanglement and superposition. In other words, D-Wave's devices may very well be different, but still classical, computers. D-Wave's engineers acknowledge as much: they are quoted in the article stating that "they don't yet know for sure what's happening inside the chip."

The fact that Bezos, a very smart guy, has invested in D-Wave has changed my perception of the company from "these are likely crazy people making outrageous claims" to "maybe these guys are onto something." Exciting and a bit scary. Time will tell.


I think "don't yet know for sure what's happening" is a bit misleading to the casual reader. If they're being honest, it's more like: they don't know that anything "special" is happening.

Frankly, their devices just aren't quantum computers until they can show they perform quantum mechanically. Until then, they are just hype. Bezos' investment is perfectly rational as an affiliation move -- if he were serious, he'd donate to, say, the Perimeter Institute in Waterloo, where real quantum computer development is taking place. But the press releases for their accomplishments get several orders of magnitude less attention, so why bother? (Or is it that you just can't buy into IBM -- also real work -- or HRL?)

The focus of their job ads [1-3] is very telling to me: they are mostly for new algorithms people, with long-unfilled positions for physicists and materials engineers. The big development in QC this decade has been that "materials matter", and that you can't just throw qubits together willy-nilly. D-Wave scaled its qubit count (see the ridiculous "Rose's Law" slide [4]) basically by ignoring all these problems.

[1-3] http://www.dwavesys.com/en/files/20120627_d-wave_algorithms_... http://www.dwavesys.com/en/files/senior_engineer_2011.pdf http://www.dwavesys.com/uploads/20091026_Design_Engineer.pdf (http://www.dwavesys.com/en/careers.html)

[4] http://www.dwavesys.com/en/dev-tutorial-hardware.html


Just a side note - you're probably conflating Perimeter Institute (www.perimeterinstitute.ca) and Institute for Quantum Computing (http://iqc.uwaterloo.ca/). Both are in Waterloo, but it's the latter that is involved in developing quantum computing implementations.


But the problem (much like a qubit) can go one of two ways. Either D-Wave have built a 512-qubit quantum computer - in which case the CIA / NSA will be strongly suggesting that they don't tell anyone, or they haven't - in which case they won't be telling anyone. The fact that they're not supplying any proof, therefore, contains no useful information.


The largest societal impact of quantum computing would be to solve the 3d structure of biological molecules from sequence data.

This would produce an almost immediate revolution in all areas of biotechnology, whose implications would be difficult to predict.


Bezos may be a very smart guy, but I don't see - short of him becoming an expert in quantum physics - how his investing or not investing in this company means anything with respect to them being onto something.


I don't think the quote

they don't yet know for sure what's happening inside the chip

means what you think it means. I think in context this is an attempt to describe how quantum algorithms work, where they need a nonobserved superposition to operate.

They may well not understand their chips, but that's not supported by that quote in that context. You don't need to know the exact operations involved to solve an optimization problem with quantum computers (or for that matter with classical ones).


"means what you think it means"

Can we find some other way to convey this message without using that same tired old phrase over and over and over again? As soon as I see this phrase my brain automatically shuts down, because it tells me the author lacks creativity.

Are people who post to nerd forums really that deficient in English language skills? Or are they just hopelessly compelled to mimic the language used by others?

"means what you think it means" is like some kind of linguistic meme.

Future posters: Let's be creative and devise another way to state this.

This comment is not personal. It is just a random example of a phenomenon. Many, many, many nerds use this phrase. I'm just tired of reading it. I can't be the only one. Next time you are tempted to type "doesn't mean what you think it means", please stop yourself. Write it some other way.


The phrase '<> doesn't mean what you think it means' is overused in some contexts, like on reddit and as a quip, but I don't feel that cynicalkane was using it as a quip or joke. He was using it to simply and politely say: 'that doesn't mean what you think it means'.

In short, cynicalkane was not using it in the context you think it was contextualized in.


That's why I said it was not personal. I just randomly chose this one random post in one random thread. There are hundreds to choose from.

Anyway, I'm over it. Maybe using these oft repeated phrases serves some purpose.

Your last sentence made me laugh again. Was that intentional? Are you poking fun at me?

If I took these phrases about what others are thinking seriously, then everyone in web forums apparently knows what everyone else is thinking. Like they're clairvoyant or something.

I never presume to know what anyone else is thinking. If I want to know, then I ask them.


I suppose you have been downvoted because your post is not really on-topic and also somewhat ranty. But I do share your feeling sometimes although not necessarily with this particular phrase.

For me it's posts that start with:

"This."

The runner up would be the somewhat similar technique of putting a dot between every word of a sentence. That. Is. Quite. Annoying.

But I can't help but wonder why such things annoy us so much? Like you, I get so annoyed with this parrot style of writing that I have a tendency to ignore the post altogether. Why can't I just ignore the form and concentrate on the content of a post? After all, it's a forum, not a creative writing competition.

I've actually tried to better understand what drives people to use the same kind of phrases etc. repeatedly in their posts by deliberately using it myself. Perhaps it is some kind of peer pressure? A feeling of wanting to belong? I couldn't tell, because it didn't do anything for me.

But hey, at least I now got a pretty lengthy post out of this - although there isn't really much content. It's the post about nothing. Jerry Seinfeld would be proud of me.

Really.


I've noticed I do this (use "message templates"), and I suspect it's the result of learning English by reading stuff on the internet. I assume whatever people use the most without being corrected is the right way to use English in informal situations and I try to do the same.

But in my native language I do get irritated by bad style, so I can only sympathize with you. After all, most of the world abuses English :)


It's baffling. Human mimicry, I guess. At least I know there are some people reading this stuff who find it as strange as I do.


>Can we find some other way to convey this message without using that same tired old phrase over and over and over again? As soon as I see this phrase my brain automatically shuts down, because it tells me the author lacks creativity.

You keep using this word "creativity". I don't think it means what you think it means.

Not to mention that he didn't want to be "creative", he just wanted to make a specific point. Do you avoid common programming idioms because they've been "done before" or do you just use what is more effective for what you want to build?

>Are people who post to nerd forums really that deficient in English language skills? Or are they just hopelessly compelled to mimic the language used by others?

Or do they merely want to use the colloquialisms of their day, their trade and their pop culture? How about that?

>Future posters: Let's be creative and devise another way to state this.

Or let's not. I can understand the intention and the purpose of that phrase in less time than it takes to even read it properly (because I can pattern-recognize the letter shapes it consists of). It's a very common and well understood idiom. It's also kind of fun.


Oh man, I lol'd when you wrote "don't think it means what you think it means". Whether it was intended or not, you have cured my annoyance. I guess a laugh was all that was needed. Thank you! Carry on with the mindless memes.

(I did initially get an upvote for stating the annoyance, so anyone else who is annoyed by this meme, good luck. I feel your pain.)

When I said creative, I meant creative use of language. Yes, memes are extraordinarily popular in forums like Slashdot, Reddit and HN. But I only attribute creativity to the person who created the meme, not the ones who use it incessantly... unless they use it in some creative way, as you did (if indeed you were making a joke).

Idioms. Yes. I take your point. I never looked at memes that way, although I have certainly relied on idioms for learning computer languages.


>When I said creative, I meant creative use of language. Yes, memes are extraordinarily popular in forums like Slashdot, Reddit and HN. But I only attribute creativity to the person who created the meme, not the ones who use it incessantly... unless they use it in some creative way, as you did (if indeed you were making a joke).

Well, I was. I see your pain in a way, I myself have some pet peeves when it comes to language. This particular case I found innocent enough, like using stock phrases like "shit hits the fan" or "out of your league", "to die for", etc.

While I get that part about the person who created the meme being the most creative, memes get their power and significance from repetition. It's kind of a "network effect", where the more people use the same meme, the more impact it has in conveying its message. In essence it's like proverb creation, only now we get to track it in real time through the internet.


Beating on the dead horse a little more... (stock phrase :)

The thing that bugs me about "doesn't mean what you think" phrase (in general... forget its use in this thread) is that in my opinion no one can know what someone else is thinking. It's presumptuous to imply that one knows what someone else is thinking. At least, it comes across that way to me. The issue seems to be that things can have multiple definitions. And two people may be operating with different definitions. But unless someone clearly states x means y, then I find it presumptuous for anyone to say to that someone, "I know what your definition is". What they are really doing is taking a guess. In my opinion.

The examples of stock phrases you gave are qualitatively different from something like "doesn't mean what you think it means". I'm not sure how to describe the difference. Maybe they are metaphors? I don't know. But they are different. For one, they don't on their own presume anything about anyone else.

Repetition is fine up to a point. Maybe it's like a song. Even if it's a good song, if you listen to it too many times in a short period of time, it loses something. You may not even want to hear it anymore. Too much of a good thing. Stock phrase #2.


> It's presumptuous to imply that one knows what someone else is thinking.

I agree and sympathize. But incidentally, the original use of that phrase was not so presumptuous. In case you (or anyone reading this) don't know, the original quote is from the movie "The Princess Bride", where one character repeatedly says that some possible event would be "inconceivable" and then that event happens--he also exclaims that some events that have just happened are "Inconceivable!". Another character tells him, "You keep using that word. I do not think it means what you think it means." It's "don't think it means", not "doesn't mean": this contains a proper amount of uncertainty for being a deduction about someone's likely thoughts based on observation of their actions.

And, as a matter of fact, cynicalkane's comment was similar: "I don't think [thing] means what you think it means." Also he didn't include an adaptation of the "You keep using that word" part--I would have found that irritating, being an excessive use of movie references that would make nonsensical writing if the movie didn't exist--but cynicalkane's comment flows naturally, and I consider it a proper use of the phrase. I can only assume that you have seen a bunch of other comments using the presumptuous form (if we want to be grumpy old men, we might say "perversion") of the quote, and cynicalkane's comment was similar enough that you thought of the other comments and decided to complain here. (I haven't actually seen the presumptuous form myself, though I'm willing to believe you've seen it enough to be irritated--and a quick google suggests it does have a large presence. I'd probably be annoyed too.)


> the entire security infrastructure of the web would have to be rebuilt from the ground up.

I don't think so. It would probably suffice to add NTRU support to SSL.


This is the first I've heard of NTRU. Has it been proven immune to quantum computational attacks, or has an efficient attack algorithm just not been found yet?

The former would be world-shaking, so that's probably not the case unless I've been living under a rock. The latter just means that it might be safe, but it might be broken at any time if somebody comes up with the right algorithm. If you're making long-term information infrastructure plans that's something you have to account for.


Yes, but that problem also exists without quantum computers (QC). Advances may break existing crypto, even crypto proven secure under certain assumptions, because the advance may introduce a shortcut that proves the assumptions insufficient.

NTRU has been proven safe from known quantum attacks and has been around for some time. There certainly were efforts to attack NTRU using qc.

If it were included in OpenSSL and NSS, those efforts would likely be intensified. I therefore suggest that getting it into those libraries may be the best we can do right now.



Would it be overly conspiratorial to suggest that the CIA and NSA are in fact interested in a large-scale implementation of Shor's algorithm? Perhaps I underestimate how much use intelligence agencies have for optimization...


> Would it be overly conspiratorial to suggest that the CIA and NSA are in fact interested in a large-scale implementation of Shor's algorithm?

The only thing conspiratorial is that it wasn't mentioned in the article. Hmmmm.


I would be worried if they weren't interested in Shor's algorithm. That's something that fits their mandate at least.



They would be derelict in their duty to NOT be very closely tracking the (open literature) state of the art in factorization. I.e., it's not a conspiracy, it's their job, and it always has been.

I also presume they have people working to keep abreast of the state of the art in secret work of other countries.


Can anyone provide an explanation of this in layman's terms? I took a look at wikipedia but am having a hard time wrapping my head around it..


A lot of public-key cryptography methods used in the real world are based on the fact that factoring really big numbers is computationally infeasible. Shor's algorithm makes it feasible, but it's a quantum algorithm that can't be run on a classical computer. Basically, a lot of stuff won't be secure anymore when it's feasible to run Shor's algorithm.
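A toy sketch of why that matters: in textbook RSA, the private key falls out immediately once you can factor the public modulus. The numbers below are the standard tiny textbook example (p=61, q=53), chosen only so the arithmetic is visible; real keys use primes hundreds of digits long, which is exactly the factoring step Shor's algorithm would make feasible.

```python
# Toy RSA: an attacker who can factor the public modulus n
# can derive the private key with ordinary arithmetic.

p, q = 61, 53          # secret primes (tiny, for illustration only)
n = p * q              # public modulus: 3233
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent -- trivial once p and q are known

message = 65
ciphertext = pow(message, e, n)      # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)    # "attacker" decrypts after factoring n
assert recovered == message
```

The hard part for a classical attacker is turning n back into p and q; everything after that is a few lines of modular arithmetic.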


But would new forms of currently-infeasible encryption become possible at the same time? Many forms of encryption have fallen over the years as technology and mathematics have advanced; while at the same time better encryption methods have been developed.


Oh, of course. The dance between defense and offense will continue as usual. However, there may be a window of time where a lot of services are left vulnerable in the transition.


Did anyone else catch the temperature miscalculation?

> It is actually a cold gun: the structure is a chilly -259 °F (4 °Kelvin) at the wide end and a few thousandths of a degree above absolute zero at its tip...

4 degrees Kelvin is approx -452 Fahrenheit is approx -269 Celsius. Absolute zero is −459.67° F
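The conversion is easy to check yourself; a quick sketch in plain Python:

```python
def kelvin_to_fahrenheit(k):
    """F = K * 9/5 - 459.67"""
    return k * 9 / 5 - 459.67

def fahrenheit_to_kelvin(f):
    return (f + 459.67) * 5 / 9

print(round(kelvin_to_fahrenheit(4), 2))     # -452.47, not the article's -259
print(round(fahrenheit_to_kelvin(-259), 1))  # the article's -259 F is ~111.5 K
```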


4 Kelvin is probably the correct number since that's the temperature of liquid helium.


You can go a bit lower than the temperature of liquid helium just as you can cool something to lower than the temperature of liquid water just by using evaporating water. However, 4 Kelvin is probably correct. The image they show appears to be of a cryostat that would almost certainly use helium as a cryogen. Such cryostats are realistically limited to around 2-3 Kelvin. They can go a bit lower when empty, but generally not if you stick a complex experiment with a lot of heat-conducting fibers and wires into them.

4K is probably warmer than they'd like for a quantum circuit, but getting their apparatus colder would require a much beefier cryostat or more exotic technologies like magnetic cooling built inside a helium cryostat. Such setups are capable of getting complex setups down to the hundreds of millikelvin, which is much nicer for this sort of work!

As for writing a scientific article using Fahrenheit... Please don't do that. Just don't.


Well obviously you can go lower, but 4K is the boiling point of Liquid Helium, so it's the easiest temperature to hold it at.

In the article they mention the chip itself is kept at millikelvin temperatures.

And I disagree about using Fahrenheit - this is not a paper in a journal, it's a lay article. By all means use Fahrenheit if that's what your audience understands. Don't let pedantry get in the way of clarity. (Kelvin has some real meaning, but C vs F is totally arbitrary. One is the freezing point of water, the other is the lowest temperature you can get with a salt/ice mixture.)


Quantum computing and nuclear fusion power generation are probably 2 of the biggest problems to solve in order to make the next technological leap.

If D-Wave comprehensively cracks Quantum computing I reckon Bezos may become the world's first Trillionaire.


In the past, technology would bring efficiency to a problem, and often uproot a job in the process. Which was okay, because a new job was created in its wake. Astute observers, though, might notice that the required skill level has been increasing each time a job is transformed. This is fine when an evolution in technology takes roughly a single lifetime, but as the evolution of technology speeds up, the adaptation becomes more difficult. I feel like we are seeing the effects of the last technological revolution today. As a software engineer, I get 4 emails a day from recruiters... but those with no technical skills are being unemployed in droves, and finding a job is near impossible.

So my question: what happens when an assumed intelligence explosion disrupts an already disrupted world? I absolutely don't believe there's anything we can (or should) do to slow technological progress... but it's an interesting topic to see how it affects society. There seems to be a disconnect: technology evolves exponentially, and society evolves linearly.


> He (Aaronson) and other critics say the company must still prove two things: that its qubits really can enter superpositions and become entangled, and that the chip delivers a significant "quantum speed-up" compared to a classical computer working on the same problem.

I'm curious how one goes about "proving superposition"? If one does the most obvious thing, opening the box containing Schrödinger's cat to observe and hence prove its state, then you've just lost the superposition.

Anyhow, for those wondering about addressing the impacts of successful quantum computing on cryptography (i.e. the security of most things, including your banking transactions on the web), you should check out the efforts of Daniel J. Bernstein (DJB) and friends on the "Post-Quantum Crypto" web site:

http://pqcrypto.org/

Though research is still on-going, at present it seems some algorithms are resistant to quantum cryptanalysis. It might take a decade or two to retool all the systems using crypto vulnerable to quantum cryptanalysis (assuming someone succeeds in building a quantum computer), but at least it seems like there's hope.


I stopped reading after the second paragraph. Quantum computing isn't some magic wand you wave at problems to make them disappear. It's simply a new way of approaching them, and it requires algorithms designed to take advantage of this new approach to computing to solve said problems.


Those approaches are increasingly well-studied and a lot of them (Grover's algorithm, anyone?) look an awful lot like magic.

(For people who don't want to look that up, Grover's algorithm does searches on unsorted databases in root-n time, that is, in fewer operations than would be required to inspect every element of the database.)
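For the curious, the amplitude-amplification trick behind that speedup is easy to simulate classically on a tiny example. This is only a statevector simulation of the idealized algorithm (not real hardware, and the 16-item "database" is purely illustrative):

```python
import math

def grover_probability(n_items, marked, iterations):
    """Simulate Grover's algorithm over n_items basis states.

    Start in the uniform superposition, then repeat:
      1. oracle: flip the sign of the marked item's amplitude
      2. diffusion: reflect every amplitude about the mean
    Returns the probability of measuring the marked item.
    """
    amps = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amps[marked] = -amps[marked]             # oracle
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]      # inversion about the mean
    return amps[marked] ** 2

n = 16
best = round(math.pi / 4 * math.sqrt(n))  # ~(pi/4)*sqrt(N) iterations suffice
print(best, grover_probability(n, marked=3, iterations=best))
# 3 iterations already put ~96% of the probability on the marked item
```

That's the "magic": only about sqrt(N) oracle calls are needed, versus N/2 on average for classical inspection.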


Grover's algorithm does indeed look rather like magic, but even better is the geometrical proof - WP's description looks good at a quick skim. In my (fairly limited) experience, this sort of geometrical thinking is the best way to understand quantum algorithms intuitively.


Although Grover's algorithm is extremely impressive, it's a probabilistic algorithm, which means the result is not guaranteed, just very likely. And the likelihood can be increased by running the algorithm more than once.


The likelihood increases exponentially because each time you run the algorithm it's independent of the previous times. Even if it's only 75% sure, you only need to run it a few dozen times before it's more likely for there to be a hardware fault in your display giving you the wrong answer.
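The arithmetic behind "a few dozen" is straightforward, assuming independent runs and a hypothetical 75% per-run success rate:

```python
import math

def runs_needed(success_prob, target_failure):
    """With independent repetitions, the failure probability after k runs
    is (1 - success_prob) ** k. Return the smallest k that drives it
    below target_failure."""
    return math.ceil(math.log(target_failure) / math.log(1 - success_prob))

# Even a 75%-reliable run becomes overwhelmingly reliable when repeated:
print(runs_needed(0.75, 1e-12))  # 20 runs for a one-in-a-trillion failure rate
```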


[deleted]


They probably have a dilution refrigerator. It's a pretty standard piece of equipment, and can achieve millikelvin-level temperatures, far lower than the ~2.7 Kelvin microwave background.

http://en.wikipedia.org/wiki/Dilution_refrigerator

There are other, more sophisticated methods of cooling, which can go even lower in temperature.


Back when I was an undergrad at MIT I was a solder monkey for a group that used a dilution refrigerator with a very similar sort of setup, also trying to use current in niobium loops as qubits. However, IIRC we were only able to reach temperatures in the 100mK range, whereas the article claims this group is using temperatures less than 1mK. So I think they would probably have to be using something on top of the dilution refrigerator to get that low.


http://en.wikipedia.org/wiki/Magnetic_refrigeration

Dilution refrigerators have their limits, especially when you're trying to cool a complex experiment with a lot of wires and fibers going into it (they all bring in heat). I've seen magnetic refrig units stuck inside of helium cryostats to give them the extra oomph to get such experiments down to millikelvin levels.


>I doubt it.

Everyone is pointing out that you're incorrect, but can I just ask why this particular line would have caused you to dismiss the whole article, even in the event you were right? It's such a pointless statement to get caught up on. This obviously isn't a research paper but there were some interesting ideas to us not familiar with the history of D-Wave. Why the dismissive attitude?


[deleted]


> I can't be incorrect. I just expressed an opinion.

http://lesswrong.com/


It's not as stupid as it seems. Due to the cosmic microwave background, the equilibrium temperature far from other natural radiation sources is about 3K. But, as we don't know the temperatures in the whole universe and there are natural processes that can cool things below their equilibrium temperature, they should have said "... anywhere in the known natural universe".


i'm curious why you dismiss it on this. won't the minimum in the "natural universe" be ~2.7K (the background temperature)? while they seem to be a few thousandths of a K from zero, if i understand the text right.

your post was pretty strong / dismissive, so i'm wondering what your (presumably powerful) argument is. what other systems in the universe are "naturally" in a quantum ground state?


There are some places in the "natural universe" that are colder than the CMB. That's because they aren't in equilibrium:

http://en.wikipedia.org/wiki/Absolute_zero#Very_low_temperat...


thanks, i was wondering if there was something like that, but couldn't imagine how it would work (i deleted my other answer earlier because it seemed to all be getting a bit unpleasant; i still think that sentence from the article is pretty inoffensive).


[deleted]


>Therefore, the statement suggests (with certainty) that there are no intelligent/scientific aliens capable of creating lower temperatures. I take issue with that interpretation.

Yes I'm certain that is what the author is trying to convey. I suggest you re-read the article replacing the line that offended you with "This computer is kept in a very cold place".

>In short, the statement is pompous, insulting and almost certainly incorrect.

The evidence is on his side. You just derided the author for his level of certainty, then suggested he's almost certainly wrong because aliens might exist.


The black-body temperature of the background radiation of the universe is about 3K. That means achieving a lower temperature requires an active heat pump.


Also, the link in the GP post that contains proof of a world government run by an artificially intelligent quantum computer has no place on Hacker News. Keep your awe inspiring, jaw dropping news with wide reaching social, moral, and technological implications to yourself!


On a related note, D-wave is sure posting some interesting job opportunities: https://www.mitacs.ca/o/2012/07/cognition-and-creativity-fra...


I don't think their big contract with Lockheed Martin should be a good indicator of actual quantum computing. What they built is probably optimized for some of the problems that Lockheed Martin is trying to solve...


I have heard from people familiar with the Lockheed project that the purchase was largely motivated by tax incentives. Something along the lines of a Canadian-American agreement, in which companies could claim 10x as much business expenses if they were making purchases of research equipment from Canadian suppliers. The implication was that Lockheed never had strong confidence in this technology, but had little to lose.


No one has explained exactly what kinds of problems a quantum computer could solve well. From the few articles I've read, it is not a magic axe for all kinds of tasks.

If someone can expand on that, that would be great.


What happened with all of the posts from this guy mocking D-Wave?: http://www.scottaaronson.com/blog/?p=431


"I hereby announce my retirement as Chief D-Wave Skeptic, a job that I never wanted in the first place." -http://www.scottaaronson.com/blog/?p=639



