Puzzling quantum scenario appears not to conserve energy (quantamagazine.org)
76 points by theafh on May 16, 2022 | 60 comments



I'm just an armchair physicist, but I thought we had already established that in quantum mechanics conservation laws only hold on average and not on a per run basis.

In https://news.ycombinator.com/item?id=24762436, @HackOfAllTrades notes that angular momentum is not preserved on a per run basis.

In a Mermin device, a pair of entangled spin-1/2 particles is sent to two Stern-Gerlach apparatuses. The two particles have a net (spin) angular momentum of 0 because that was the net angular momentum of the starting material. But if you measure the angular momentum of the two particles along two non-parallel directions, and if the only answers you are allowed to get are +hbar/2 or -hbar/2, then the sum you get by adding +/-hbar/2 times one direction plus +/-hbar/2 times the other direction can never be 0.
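
A rough numerical sketch of that counting argument (the two measurement directions below are arbitrary placeholders; outcomes are in units of hbar/2):

    import numpy as np

    # Two non-parallel measurement directions (unit vectors), chosen arbitrarily.
    n1 = np.array([0.0, 0.0, 1.0])                  # along z
    n2 = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # tilted by 0.3 rad

    # Each detector only ever reports +1 or -1 (in units of hbar/2).
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            total = s1 * n1 + s2 * n2  # vector sum of the two reported spins
            # the norm is never 0 unless n1 and n2 are (anti)parallel
            print(s1, s2, round(np.linalg.norm(total), 3))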


But couldn't the missing angular momentum still be imparted onto the measurement devices? I.e., maybe our Stern-Gerlach apparatuses would start spinning ever so slightly if they were floating in space?


What does starting with an entangled pair add to that argument?

You could say simply that if you have prepared a spin-1/2 state |z+>, the angular momentum along the x axis is zero, but if you measure the spin along the x axis you will find a non-zero value.


This is not correct. The expectation value of the angular momentum along the x-axis might be zero, but the state itself simply does not have a definite angular momentum.

I like your example because it clearly shows the subtlety that the original comment by rssoconnor also misses. Energy, momentum, and angular momentum absolutely are conserved quantities. But if you prepare your initial state such that it does not have a definite value of these quantities then you cannot with certainty predict the measured value, either.
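
A minimal numerical illustration of that distinction, using the Pauli matrix for spin-x and |z+> as the prepared state (hbar set to 1 here for convenience):

    import numpy as np

    hbar = 1.0
    Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)  # spin-x operator
    z_plus = np.array([1, 0], dtype=complex)                   # the |z+> state

    # The expectation value of Sx in |z+> is zero...
    print(np.vdot(z_plus, Sx @ z_plus).real)   # -> 0.0

    # ...but any single measurement returns an eigenvalue of Sx: +/- hbar/2.
    print(np.linalg.eigvalsh(Sx))              # -> [-0.5  0.5]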


Good point. Anyway, the original example reduces to this: after the first measurement [it doesn't really matter which one is considered first], we have a pair of complementary +/- states along the measured axis, but the spin is not well defined along other directions, and in general a second measurement will break the symmetry.


What makes these quantities conserved in QM/QFT?


Isn't this like Maxwell's demon? Deciding what to do at the micro level could create macro-level changes that break physics, but it's not actually possible, so there's no paradox.


There's nothing really "not actually possible" about Maxwell's Demon. It is maybe impractical from an engineering point of view to create the demon, but this doesn't resolve the problem, which is based in the more fundamental physics. I think a more accurate description of the resolution is that this "deciding what to do on the micro level" must itself have an energy cost -- making it a great example of the link between information and thermodynamic entropy.


Yeah, these thought experiments all seem to depend on some kind of very tiny, perfect subatomic process that has no energy input. Flip it around, and conservation of energy should place a minimum requirement on the energy needed to run a real Maxwell's demon, or the mirror in this experiment (and real mirrors aren't infinitely thin mathematical abstractions).


I think this is exactly the point.

A good way to ask "what does our model mean, exactly" is to imagine the perfect processes with no unnecessary energy losses. Real mirrors might not be infinitely thin mathematical abstractions, but if there isn't an actual, physically defined fundamental limit to how thin a mirror can be, then it would be weird if we could violate fundamental laws of the universe, but for want of such a mirror.

Maxwell's Demon is neat because we can whittle things down and eventually get to needing to account for the physical cost of the bits in the thing's 'brain.' An interesting example of the fact that information is actually a physical quantity.


How do you build a mechanism that has a perfect knowledge about a system while being part of said system?


I'm not sure I see the link here (although it is definitely possible that I'm just missing something, I'm no physicist). I don't think Maxwell's demon needs perfect knowledge of the whole system -- it is just locally deciding to let through "fast" molecules and block "slow" ones.


I'm no physicist either, I just like to ask questions :)

How does the demon attain the knowledge of what is "fast" and "slow" without continuous observation (and thus interaction) with the particles. Velocity is just function of position over time, so the demon needs at least 2 samples to make the most basic approximation. Where is the entropy for doing that coming from? How does the interference of the measuring apparatus factor into the whole process - what if the sole act of measurement changes the state of the particle from "fast" to "slow" or vice versa? Do we need to measure twice? But what if the second measurement causes the transition it was meant to detect?


If there is nothing "not actually possible" what's the problem that needs to be solved?


At the time it was proposed, it was thought that a thermal reservoir on its own could never do useful work. This is always what's observed in practice, and the laws of heat were built around it, e.g. ΔU = Q - W.

In order to derive those heat equations from the second law, we now know it requires an assumption: that you (or any entropy-containing component not modeled in the system) cannot have specific knowledge of the microstate of the system, only its macroscopic properties. For a long time it seemed absurd that such an assumption would be necessary at all for these seemingly universal laws, and there was no clear way to thermodynamically model the knowledge of the actor inside this system such that you wouldn't need such an assumption.


> For a long time it seemed absurd that such an assumption would be necessary at all for these seemingly universal laws,

Not so long. The second law dates from 1850 and the requirement of such an assumption is what Maxwell’s demon illustrated less than forty years later.

> and there was no clear way to thermodynamically model the knowledge of the actor inside this system such that you wouldn't need such an assumption.

There is still no way to model an actor inside a thermodynamical system in equilibrium - by definition.


There isn't a problem.

The initially apparent problem is that the demon appears to be generating a temperature gradient "for free" by just swinging a gate open (for fast molecules) or closed (for slow ones) (because there's no fundamental physical cost for gate-swinging).

It is resolved by taking into account the fact that the demon must use at least a bit of memory while acquiring the 'status' of each molecule (let it through or don't). So we can at least say that the demon, no matter how slow and lazy (efficient) it wants to be, must pay the information-theoretic cost of erasing that bit.
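
For a sense of scale, the Landauer bound on erasing that one bit is k_B T ln 2, which at room temperature works out to roughly the following (back-of-the-envelope, nothing specific to any particular demon design):

    import math

    k_B = 1.380649e-23    # Boltzmann constant, J/K
    T = 300.0             # room temperature, K

    E_min = k_B * T * math.log(2)  # minimum dissipation per bit erased
    print(E_min)                   # ~2.87e-21 J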


> There isn't a problem.

I agree.

> The initially apparent problem is […]

Why would that be a problem? If it’s because the second law of thermodynamics says that it cannot happen spontaneously in a thermodynamical system in equilibrium why would that be applicable when we’re not considering just a system in thermodynamical equilibrium?

As Maxwell wrote: “This is only one of the instances in which conclusions which we have drawn from our experience of bodies consisting of an immense number of molecules may be found not to be applicable to the more delicate observations and experiments which we may suppose made by one who can perceive and handle the individual molecules which we deal with only in large masses.”


I'm not sure -- if the gate is closed, the chambers are perfectly insulated, and we aren't erasing bits from the memory of the demon, the system should be at equilibrium, right? So, I think it is not really a situation like he describes in the quote. The demon could hypothetically have whatever large amount of information storage is required, to meet his threshold for "large masses."

This quote seems more compelling in cases where the statistics are known and we just need enough samples for the expected values, etc., to show themselves.


Opening a tiny gate to let one particular particle go through is definitely the kind of thing that Maxwell meant by "more delicate observations and experiments which we may suppose made by one who can perceive and handle the individual molecules which we deal with only in large masses".

It's the very example that he was discussing, which he had described just before: "Now let us suppose that such a vessel is divided into two portions, A and B, by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B, and only the slower ones to pass from B to A."

Another interesting quote of his: "Available energy is energy which we can direct into any desired channel. Dissipated energy is energy we cannot lay hold of and direct at pleasure, such as the energy of the confused agitation of molecules which we call heat. Now, confusion, like the correlative term order, is not a property of material things in themselves, but only in relation to the mind which perceives them. A memorandum-book does not, provided it is neatly written, appear confused to an illiterate person, or to the owner who understands thoroughly, but to any other person able to read it appears to be inextricably confused. Similarly the notion of dissipated energy could not occur to a being who could not turn any of the energies of nature to his own account, or to one who could trace the motion of every molecule and seize it at the right moment. It is only to a being in the intermediate stage, who can lay hold of some forms of energy while others elude his grasp, that energy appears to be passing inevitably from the available to the dissipated state."


It's a circular argument. The kT ln(2) cost of erasing a bit is itself derived from the second law. So it cannot be used to prove the second law.


Is it merely impractical though? In the thought experiment the demon needs to be able to know the state of the incoming particle before it arrives, so it can decide whether to open the door. It can't do that without interacting with the particle and changing its state.


I think it depends on how you define the demon. I first heard of it in the final, resolved version, so I see it as just impractical (because the solution is baked in).

The interaction between the demon and the particles is the key. The final version, at least as far as I'm concerned, comes from Landauer and Bennett [https://en.wikipedia.org/wiki/Maxwell%27s_demon#Criticism_an...] -- the demon must accumulate information (which it can only do finitely) or erase it. Erasing information has a real physical cost, resolving the issue.

When the demon was first invented, information theory hadn't been developed yet. So the mystery to Maxwell was that it looked like it was violating the second law, but that's just because there's a sneaky place we can store entropy temporarily or finitely.

So, I think I was imprecise (or... wrong). A demon that does what we really want (generates free energy) is impossible. But it is impossible for weird reasons that Maxwell wouldn't have been aware of, and I don't think he explicitly explored them.


Here's a more mechanical variation of Maxwell's demon: https://en.m.wikipedia.org/wiki/Brownian_ratchet


Yes, that's what I meant. This is similar, because you have to put the mirror in at "just the right time" and ignore the energetic cost of doing that.


What if we have a mirror moving randomly around the box? That would lead to the same effect, whereas a randomized Maxwell's Demon would not.


I'm not sure. Wouldn't the introduced waves cancel out then?


Only if it led to abnormally cool photons as much as anomalously hot ones.


Yes, computation "costs" energy and increases entropy. The demon must compute its decisions before it can carry them out.


This was my first gut instinct, except even simpler than Maxwell's demon: by partitioning the apparatus with the mirror, you're altering the statistical ensemble of possible wave states (increasing the relative information entropy) and that's where your energy comes from.

That's just my gut though, I'm not a professional physicist.


Non-inertial reference frames do not abide by the special principle of relativity or global Lorentz covariance. From the article: "energy isn’t conserved in situations where gravity warps the fabric of space-time, since this warping changes the physics in different places and times, nor is it conserved on cosmological scales, where the expansion of space introduces time-dependence". The principle of covariance (see General covariance) in GR implies "local" Lorentz covariance, such that the Lie group GL4(R) is a fundamental "external" symmetry of the world.

The fluctuation theorem (FT), by contrast, has no such requirement and does not imply or require that the distribution of time-averaged dissipation be Gaussian. The FT (together with the universal causation proposition) gives a generalization of the second law of thermodynamics which includes the conventional second law as a special case. When combined with the central limit theorem, the FT also implies the Green-Kubo relations for linear transport coefficients close to equilibrium. If general covariance, together with a differentiable notion of time and space geometry, requires linear transport, then its validity would depend on the applicability of the central limit theorem.

In addition, the Gallavotti-Cohen fluctuation relation is limited to chaotic dynamical systems with microscopic reversibility, where the fluctuations of a suitably defined function of the phase-space trajectories, taken as a measure of the violation of detailed balance, i.e. of entropy production, are measured.
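
For reference, the fluctuation theorem alluded to above is usually stated roughly as follows, with \bar{\Sigma}_t the entropy production averaged over a time interval t; the conventional second law then emerges because negative-entropy-production trajectories are exponentially suppressed rather than strictly forbidden:

    \frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{A t}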


That doesn't seem to be what's going on in this thought experiment, as it's not using GR.

But while we are on the topic I'm curious if the physics you describe make it possible to build a perpetuum mobile.


GC used in GR implies conservation of energy, via Noether's theorem, due to the GL4 symmetry with respect to the time dimension in its differentiable geometry. A breakage of GR on the cosmic scale might be the superoscillatory phenomenon appearing around an isolated and fairly inactive dark star ("black hole") when a star passes in line behind it, with a corresponding frequency bump observed in the spectrum as light is "mirrored" or trapped around the dark star. Systems that do not adhere to the requirements of the FT could show perpetual motion, while the causality proposition seems less likely to admit violations, although a duality roughly connects those two.


So this experiment seems to create a virtual atom (the box with the mirrors), yet unlike a real atom, this system isn't quantized internally, so it's possible that you could get quanta of any energy up to the sum of all photons inside the box.

Alternatively, it could be that the language used by physicists has overloaded too many conventional words, and the impedance mismatch between them and the public can not be overcome.


This seems confusing to me (although I am just an engineer, so it isn't surprising if I've missed something). The photon is basically a packet of energy. It bounces out of the box, I guess reducing the energy in the box (right?), which is just normal photons-bouncing-out-of-a-box behavior.

In this case, they've managed to come up with a configuration, via superoscillation, that results in an unusually large packet of energy. But is this a conservation of energy issue? I don't see how this is any "worse" for conservation of energy than bouncing out multiple red photons.

Is the inability of a red box to release higher-energy photons actually a deep physical principle, or just a general trend because configurations that can generate superoscillations are rare? I guess I don't see the link between "photon is too big" and "conservation of energy" -- probably it is an obvious link for the physicists here, though.
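
For what it's worth, "superoscillation" itself is a purely mathematical phenomenon. A toy version (Berry's standard example, nothing specific to the paper's box-and-mirror setup) is a function built only from Fourier components with |k| <= 1 that nonetheless oscillates much faster than that near x = 0:

    import numpy as np

    N, a = 20, 4.0                    # band limit is 1; local frequency ~ a near x = 0
    x = np.linspace(-0.05, 0.05, 2001)

    # Berry's superoscillatory function: all Fourier components satisfy |k| <= 1
    f = (np.cos(x / N) + 1j * a * np.sin(x / N)) ** N

    # Local frequency = derivative of the phase; near x = 0 it is ~ a = 4 > 1
    local_freq = np.gradient(np.unwrap(np.angle(f)), x)
    print(round(local_freq[len(x) // 2], 2))   # ~ 4.0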


My experience of this is generally:

“Physicists discover limits of simplifying assumptions, then pretend to be surprised that ignoring things leads to inaccurate results in extreme circumstances.”


They have a photon with energy=x, and are detecting a photon with energy=x+y. The question is where the y comes from. Since they've ruled out the usual suspects, it seems to be a violation of conservation of energy.


My understanding is that at the beginning of the experiment there is only one red photon in the box.


Oh wow, that is a crucial aspect that I totally missed, which makes the result much more surprising!


Sometimes when people 'shut up and calculate' [0] they find surprises.

[0] https://aeon.co/essays/shut-up-and-calculate-does-a-disservi... (Baggott, 2021). (Revisited this earlier today, coincidentally.)

"a dogma of indifference to philosophical questions was at least as much to blame for the rejection of foundational enquiry as anything Bohr might have said."


Energy conservation seems to be a sleuthing tool for our pitiful existence to find some god damn consistency. If quantum mechanics is fundamentally formulated around statistics, then it shouldn't surprise us if energy conservation turns out to be statistical too.


> Quickly put a mirror in the photon’s path right where the wave function superoscillates, keeping the mirror there for a short time.

Varying things quickly in space or in time requires a lot of energy. The energy could be coming from moving the mirror too quickly. For example, if you modeled the mirror's movement as being driven by a force, you might find the mirror's motion is damped by reflecting the photon, losing energy in the process.

Instantaneously switching the driving Hamiltonian can easily spread energy around, unless the old and new Hamiltonians commute. The Hamiltonians for "no mirror" (H1) and "yes mirror" (H2) won't commute. It sounds like they've arranged for the eigenstates of H1 to overlap with a huge range of eigenvalues of H2, and vice versa. So when you switch from H1 to H2 you end up in a superposition of all kinds of different energies. Evolve there for a bit so that switching back won't destructively interfere you back down to exactly where you started, and voila.

I think if they account for these kinds of "changing the Hamiltonian ain't free" effects, they'll find where the energy came from.
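
A toy numerical version of that "sudden switch" picture, with small random Hermitian matrices standing in for the no-mirror and with-mirror Hamiltonians (nothing here models the actual experiment):

    import numpy as np

    rng = np.random.default_rng(0)

    def random_hamiltonian(d):
        """Random d x d Hermitian matrix, standing in for a Hamiltonian."""
        A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        return (A + A.conj().T) / 2

    d = 8
    H1 = random_hamiltonian(d)   # "no mirror"
    H2 = random_hamiltonian(d)   # "mirror in place"; generically [H1, H2] != 0

    # Start in an eigenstate of H1, i.e. a definite "no mirror" energy...
    E1, V1 = np.linalg.eigh(H1)
    psi = V1[:, 0]

    # ...then switch to H2 instantly: the same state is spread over many H2 energies.
    E2, V2 = np.linalg.eigh(H2)
    weights = np.abs(V2.conj().T @ psi) ** 2
    print(np.round(weights, 3))   # population in each H2 eigenstate
    print(E2 @ weights, E1[0])    # <H2> generally differs from the original energy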


I was under the impression there is no such thing as an isolated wave function. They are all part of a larger wave function. If the isolated wave function is missing energy, it's in the larger wave function.


Neither do sinusoidal oscillations exist, since they would have to be eternal.


science is great...love it....got a degree in it...but i never pretend that we are anything but children when it comes to understanding the universe...we got a loooooonnnnnggg way to go...


I have a PhD in theoretical quantum optics, currently doing a postdoc and moving to experimental stuff, and I have to agree. We don’t know how quantum mechanics works at a basic level: what is a measurement? Are wave functions real? Are we just ignorant, or are things really inherently probabilistic? These (and others) are huge questions underpinning our basic reality, but we have only made very modest progress on them, if any. It’s not clear that we will ever know; there’s certainly not a clear path forward.


A guy like Einstein or Newton comes along every few hundred years. I can’t even imagine what we will find out if we keep doing science for a few thousand years more.


Plenty of people have come. Gödel was one, JVN was one, Terry Tao is likely one.

But it's also worth remembering that some of the smartest people in our society are optimizing ad serving.


Physics and maths are both so specialised and large now that it’s not really possible to have such great minds. Einstein had a good grasp of almost all the physics of his time; Newton did for literally all of it for his time. It isn’t possible for any modern physicist to have a full grasp of even a single subfield.


>Newton did for literally all of it for his time.

Even Newton wasn't wary enough of prejudices that slowed useful insights. As a result: "Until the early 19th century, most scientists shared Isaac Newton's view that no small objects could exist in the interplanetary space - an assumption leaving no room for stones falling from the sky." [http://www.meteorite.fr/en/basics/meteoritics.htm]


That's the problem with science: to make linear progress we need an exponential number of scientists. All the low-hanging fruit has been picked.


Yep, exactly. If 1 scientist = 1 unit of progress, imagine where we would be right now. Instead there are far more scientists than ever, but progress on the big or fundamental questions and topics has arguably slowed down.


I like to think that it's an enormous tree, and with every step that we take, the number of open topics effectively doubles.


Sure, but there is such a thing as the "base topics" which, by specialising, we do get further from. Most people aren't working on the fundamentals of quantum mechanics, for example. A lot of the problem is that in modern research you need to state your plan _before_ you get funding, and putting a new researcher into a field without a clearly defined project for them is considered a death sentence for any career possibility. Because of that, fields where the research has stopped progressing effectively stagnate for very long periods. No one will work on something unless there is a guarantee that high-quality, impactful papers can be produced that will propel them into a career.


I think there is a limit, and that progress is not linear


What would the limit look like? Like would we just suddenly discover some wall in physics that makes it impossible to pursue further inquiry?


Yeah, some noise wall that we can’t surpass by our technology, not being able to measure to small enough precisions or with enough energy, etc.


We may hit a wall in one area and make progress in others which then feeds back into the first area. I think it was always that way.


We would keep trying to chip at the wall regardless.


I don’t think there is a reason to think that we are close to any limit. I agree that progress is not linear.




