Atomic Spins Evade Heisenberg Uncertainty Principle (scientificamerican.com)
73 points by okket on April 2, 2017 | hide | past | favorite | 27 comments



Clickbait headline, I'd say. The experiment fully agrees with known quantum mechanics. (Nevertheless, it is an impressive technical achievement.)

The authors of the original Nature article say that "spins obey non-Heisenberg uncertainty relations", which is true if "Heisenberg uncertainty relation" is (narrowly) defined as the uncertainty relation for position and momentum, Δx⋅Δp ≥ ħ/2. But since spins don't have position and momentum, they are not expected to obey this uncertainty relation anyway.

The more general uncertainty relation for any two observables A and B reads ΔA⋅ΔB ≥ |⟨[A,B]⟩|/2 (where [A,B]=AB-BA is the commutator, and ⟨⟩ is the expectation value). For spins, which have three components Sx, Sy, Sz, the commutator is [Sx,Sy]=iħSz etc., so the uncertainty relation for two spin components reads ΔSx⋅ΔSy ≥ ħ/2⋅|⟨Sz⟩|. It is therefore not so surprising that the product can be made rather small if the expectation value of the third spin component is made small.

Edit: typo


Also: "This is because to measure its position you have to disturb its momentum...". Another article in the long, illustrious tradition of mis-explaining HUP as being due to measurement "disturbing" some property or the other.


"Another article in the long, illustrious tradition of mis-explaining HUP as being due to measurement "disturbing" some property or the other."

Disagree, it's a perfectly reasonable explanation of what's happening.

'Measuring' the system is really 'operating' on it. A quantum measurement puts the system into one of the operator's eigenstates, and yields the corresponding eigenvalue as the measurement result.

Two non-commuting operators, A and B, will have different eigenstates. Applying operator A to the system puts it into some eigenstate a, which will in general be a linear combination of the eigenstates b of operator B.

So yes, operating on the system with operator A will put the system into a superposition of b states. If the system was in a well-defined b eigenstate previously, it's not anymore. It was 'disturbed' by the 'measurement' done by operator A.

Position and momentum operators are examples of non-commuting operators. So are the 'spin' operators that measure angular momentum along different axes.
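The eigenstate argument above can be sketched in a few lines (assuming numpy; the variable names are mine): prepare a state in an Sz eigenstate, "measure" Sx by projecting onto one of its eigenstates, and the previously definite Sz value is gone.

```python
import numpy as np

# Pauli matrix stands in for the spin operator (hbar = 1 units)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Start in an eigenstate of Sz: "spin up" along z
psi = np.array([1, 0], dtype=complex)

# Measuring Sx projects the state onto one of Sx's eigenstates;
# take the +1 outcome (probability |<+x|psi>|^2 = 1/2)
vals, vecs = np.linalg.eigh(sx)
psi_after = vecs[:, np.argmax(vals)]

# In the Sz basis the post-measurement state is an equal superposition:
# the previously definite Sz value has been 'disturbed' away
print(np.abs(psi_after) ** 2)  # ~[0.5, 0.5]
```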


> it's a perfectly reasonable explanation of what's happening.

It's not false, but it's not entirely true. The uncertainty principle doesn't state that you can't _measure_ p and q with arbitrary precision. It states that you cannot _imagine_ them with arbitrary precision. It's like the double slit experiment: when you measure which slit is traversed, interference patterns disappear; but, when you don't measure, you cannot even _think_ that the photon is passing through one slit instead of the other.


I don't follow your comment on imagining the measurements.

The uncertainty principle concerns the precision with which two observables can be known simultaneously. You can measure each observable individually to arbitrary precision. In theory, that is.

If the two operators commute, you can in theory get arbitrary precision on both measurements together, since their commutator is zero.

If they don't commute, the product of the standard deviations of the two measurements is restricted to no less than half the magnitude of the expectation value of their commutator. (photon-torpedo's comment at the top of this thread gives a better summary).


It's not "imagining the measurements".

The UP is not only: "You can't know position and momentum together." It's rather: "Position and momentum _cannot be defined_ together."

In classical mechanics, you can imagine a particle being at a certain position with a certain momentum. In quantum mechanics, you cannot.


Sorry, I have no idea what you're trying to convey in the first two paragraphs.


> the uncertainty principle states...

That the product of the standard deviations of position and momentum is at least half of h-bar.

It says nothing else about what that "means", the mechanism by which it comes about, or whether it is a fundamentally "true" description of reality.


Nor does it say anything about measurement. What I wanted to stress is that it isn't "just" a limitation on measurement precision: it forces you to switch to a description of nature which is far from the classical one, at least at the microscopic level.


The point is that one does not have to modify a system to exhibit uncertainty. You said it yourself: if the particle is in an eigenstate of A, it is not in an eigenstate of B. That is, the value of B is fundamentally uncertain, whether or not you thereafter "disturb" the system.

Per Wikipedia (not the best source, but accurate here):

https://en.wikipedia.org/wiki/Uncertainty_principle

"Historically, the uncertainty principle has been confused[5][6] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects.

...

Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen (see EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction."


Replying to your first paragraph.

You're casually ignoring how it got to eigenstate a in the first place. The particle is in an eigenstate a because it was acted on by the A operator.

If it was in an eigenstate b prior to this, or any other state that isn't a single eigenstate of A, the action of operator A disturbed that original state.

The HUP itself is a direct consequence of the non-commutativity of the two operators, and its derivation is a fairly straightforward, though a bit annoying, mathematical result.


A tradition started by Heisenberg himself!


That's what's funny about the whole thing; Heisenberg mistakenly thought his own equation was due to the observer effect.

There's a good article on him almost failing his PhD exam because he had difficulty with a related concept (https://www.aps.org/publications/apsnews/199801/heisenberg.c...). It's amusing because he was later awarded the Nobel prize for "the creation of quantum mechanics".


Ugh, I believed this too. Could you elaborate so that the misconceptions are cleared? (or give a link if the explanation would be too long)


The reason this explanation is so widespread is that it is actually not false. However, the actual uncertainty in the Uncertainty Principle operates at a much deeper, more fundamental level, so this is a woefully incomplete explanation.

The actual uncertainty comes from the fact that the two quantities are Fourier transforms of each other... and just by that relationship, inherently, if one gets very localized (i.e., a very narrow spike in its space), its Fourier dual gets spread out very far through space.

(You can sort-of analogize this if you know about audio... a sharp spike in temporal space, when transformed into frequency space, becomes a very big spread of values, because all those frequencies are relatively blunt and they have to somehow fit together to make this sharp thing, which requires an enormous number of them. Or if you go the other way, a sharp spike in frequency space means one frequency, which transforms into an infinitely-long sine wave in temporal space. So think about that kind of thing, except instead these are waveforms where the y value is kind-of the probability of getting that particular x-value as a result if you perform a measurement.)
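The Fourier-duality argument can be checked numerically. A sketch (assuming numpy; in units where ħ = 1, so momentum width equals wavenumber width): take Gaussian wavefunctions of different widths, Fourier transform them, and watch the product of widths stay pinned at 1/2.

```python
import numpy as np

# A Gaussian wavefunction of width sigma in x-space transforms to a
# Gaussian of width 1/(2*sigma) in k-space: sharper on one side means
# broader on the other, and the product of widths stays constant.
N = 2**14
x = np.linspace(-200, 200, N)
dx = x[1] - x[0]

products = []
for sigma in (0.5, 1.0, 4.0):
    psi = np.exp(-x**2 / (4 * sigma**2))          # Delta_x = sigma
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

    phi = np.fft.fftshift(np.fft.fft(psi))        # momentum-space amplitude
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, dx))
    dk = k[1] - k[0]
    pk = np.abs(phi)**2
    pk /= np.sum(pk) * dk                         # normalize |phi|^2

    width_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    width_k = np.sqrt(np.sum(k**2 * pk) * dk)
    products.append(width_x * width_k)
    print(f"sigma={sigma}: Delta_x * Delta_k = {width_x * width_k:.4f}")
```

Every run of the loop lands at the Gaussian minimum-uncertainty value of 1/2, regardless of sigma — no measurement or "disturbance" appears anywhere in the calculation.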


That's a rather classical explanation and although it's correct, I would say the uncertainty of non-commuting operators is more fundamental.


That's only because of the model. Which could be wrong.


I'm sure I can find a better resource than Wikipedia, but for now:

> Historically, the uncertainty principle has been confused[5][6] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10][note 1]

> Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen (see EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.

https://en.wikipedia.org/wiki/Uncertainty_principle


The article's introduction explains the uncertainty principle as the observer effect, where the process of measurement changes a system. Which it's not: quantum uncertainty is a fundamental consequence of the underlying physics; it has nothing to do with influence from the measurement process.

And that's about as far as I understand it. The whole thing seems intended to make my primitive brain hurt. But any experiment which starts off by cooling things down to "a few microkelvin" definitely has my appreciation.


> Quantum uncertainty is a fundamental consequence of the underlying physics, it has nothing to do with influence from the measurement process.

If you can prove that, your Nobel prize awaits. :)

The truth is that nobody knows why there is quantum uncertainty. The common belief is that the uncertainty is "real", but non-local hidden variable theories have not been proven to be impossible.

(In a recent survey of physicists, 47% favoured explicitly real uncertainty - https://arxiv.org/pdf/1612.00676.pdf)


This is a technique called squeezing; it in no way violates our fundamental understanding of physics. The Heisenberg uncertainty relation is generally Δx⋅Δp ≥ ħ/2. If you measure something in a way that leaves complete uncertainty in the quantity p, then you can have arbitrarily fine resolution in x.

It's interesting, but squeezing is a common technique, and this headline misrepresents the importance of this experiment.
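A toy illustration of the trade-off (not the paper's actual scheme; ideal squeezed-state formulas in units where ħ = 1 and the vacuum has Δx = Δp = 1/√2):

```python
import numpy as np

# Squeezing parameter r shrinks one quadrature and stretches the other,
# leaving the uncertainty product exactly at the Heisenberg bound.
products = []
for r in (0.0, 1.0, 3.0):
    dx = np.exp(-r) / np.sqrt(2)   # squeezed quadrature
    dp = np.exp(+r) / np.sqrt(2)   # anti-squeezed quadrature
    products.append(dx * dp)
    print(f"r={r}: Delta_x={dx:.4f}, Delta_p={dp:.4f}, product={dx*dp:.4f}")
# Delta_x shrinks without bound as r grows, yet the product stays at 1/2
```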


It's not just the headline. Sentences like this are false:

> ...measure the spin precession rate much more accurately than previously thought possible.

since no one thought otherwise.


Heh. Ask only what you want to know, and nothing more.

Good for QM because it gives you a sink to dump uncertainty, good for hiring because it helps you not get sued, good for avoiding miscommunications in general, ......


In a related way, I often talk about the Heisenberg Uncertainty Principle of Metrics. The more you measure with the intent of adjusting the process, the more people intentionally and unintentionally drive their behaviour based on the metric. So it's a real skill to minimise measurement so that you are able to control the areas you need to control without creating incentives for people to do bizarre things in areas you didn't think about. It always drives me crazy when managers start measuring things "just in case I want to use the information in the future". It pretty much guarantees unpredictable behaviour of the workers.


This sounds like Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart%27s_law)


This reminds me of a recent observation by (I think) some Princeton ethical philosopher, that any theory of human behavior becomes invalid once it becomes known by a significant proportion of the population, because people's behavior anticipates expectations of their behavior.


This is simply thinking about thinking, which is allowed (at least in Zen) as long as one doesn't think about thinking about thinking. In other words, holding irrational beliefs and spreading those beliefs in a viral way, either in the population, or inside one's own head, creates suffering if there is no means to end holding the irrational beliefs.



