ATLAS sees first direct evidence of light-by-light scattering at high energy (atlas.cern)
253 points by explore on Aug 16, 2017 | 79 comments



Neat-o. A little bit more info at https://en.wikipedia.org/wiki/Two-photon_physics . Basically, two photons with enough energy interact via the virtual particle pairs that get created.

It will be interesting to see the measured cross-sections and, hopefully, some new physics (as the Wikipedia page weakly hints).


Yeah. The article is a little confused with terminology. Photon/photon interaction is "illegal" in QED in the same way and for the same reasons that wave solutions to classical electromagnetism don't interact (well, except to propagate). What's happening here are interactions between the two photons and the vacuum.

So the new science here isn't unexpected confirmation of new physics, it's our ability to probe our existing understanding of the vacuum with new interactions.


Right, the new physics the Wikipedia article is alluding to is that previous experiments indirectly measuring the two-photon interaction cross-section found it higher than the Standard Model would predict (but those experiments didn't have tight enough error bounds to say anything definitive).


>> Basically, two photons with enough energy interact via the virtual particle pairs that get created.

Or could it be that a photon bends space-time as though it had mass, and they simply follow the local curvature and it looks like scattering? Just speculating, because for some purposes photons act like a solid particle with their energy equivalent mass traveling at speed c.


It is not gravitational. That effect would be far, far too weak.


For that to happen, the energy-equivalent of the photons would have to be close to the Planck mass. That's an unbelievably high value for photons. We're not going to see that reproduced experimentally any time soon.
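
For scale, here's the arithmetic with standard constants (my own sketch, not from the thread; the 6.5 TeV LHC beam energy is just a reference point for comparison):

    import math

    HBAR = 1.0546e-34   # reduced Planck constant, J*s
    C    = 2.998e8      # speed of light, m/s
    G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
    EV   = 1.602e-19    # joules per electronvolt

    m_planck_kg  = math.sqrt(HBAR * C / G)        # ~2.2e-8 kg
    e_planck_gev = m_planck_kg * C**2 / EV / 1e9  # ~1.2e19 GeV

    print(e_planck_gev)          # ~1.2e19 GeV, the Planck energy
    print(e_planck_gev / 6.5e3)  # ~2e15: how far even a 6.5 TeV LHC beam falls short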


Just as a reminder, these photons are high-energy ones (gamma-ray range), so no light sabers, holograms or anything involving visible light, I'm afraid.


Electrolasers seem slightly light-saber-esque. I'm assuming miniaturisation of them would be tricky, though, as ionization with a laser presumably requires a lot of power.


I was about to make a comment regarding using UV lasers and ionization - then decided to take a second to google it, and found this interesting (though short) discussion:

https://www.physicsforums.com/threads/laser-to-ionize-air-to...

In short, I think your conclusion is likely correct...


Invisible light sabers. Cool


Just make sure you have some sunscreen on


> Just make sure you have some sunscreen on

in a thickness equivalent to a 1 cm lead sheet, for a gamma-ray screening factor of 2


So NIST is telling me that's about 16 cm of titanium. I suppose it would be easier to make yourself clothing out of full lotion bottles than to try to slather that much on.


Given this is generated by accelerating lead ions, your light sabre should be quite capable of covering you (and everything around you) with tasty lead.


Can someone explain this in easier terms for a layman like myself?


Photons carry no charge, so normally they don't interact with each other. However, at high enough energies they do. The explanation given by the best available theory is that there is enough energy that a virtual electron and virtual anti-electron form, and since those have charges, they interact, and from the outside it appears that the photons interacted.

This is totally impossible in the classical view, but is fine in the quantum electrodynamic (QED) view.
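
To put rough numbers on "high enough energies" (my own back-of-the-envelope sketch, not from the article; the example photon energies are arbitrary): the centre-of-mass energy of two head-on photons is 2*sqrt(E1*E2), and the effect only becomes appreciable once that is roughly comparable to the cost of making a real electron-positron pair.

    import math

    M_E_C2_MEV = 0.511   # electron rest energy in MeV

    def cm_energy_head_on(e1_mev, e2_mev):
        # sqrt(s) = 2*sqrt(E1*E2) for two photons colliding head-on
        return 2.0 * math.sqrt(e1_mev * e2_mev)

    print(2 * M_E_C2_MEV)                  # ~1.022 MeV: cost of a real e+e- pair
    print(cm_energy_head_on(2e-6, 2e-6))   # two visible-light photons: ~4e-6 MeV, hopeless
    print(cm_energy_head_on(5.0, 5.0))     # two 5 MeV gammas: 10 MeV, comfortably above it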


High energies are not absolutely necessary. Pairs of "virtual" particles can be created at any energy; it's just less likely at smaller energies. I think the real problem was the experimental set-up: particle accelerators use electric fields to transfer energy to particles and magnetic fields to control their trajectories and eventually cause collisions, but all of this only works with charged particles. And photons carry no charge, as you said! As the article mentions, the observation was finally made possible by accelerating bunches of lead ions, as they carry around a cloud of high-energy photons (I suppose due to bremsstrahlung caused by chaotic movements within the bunches).


> Pairs of "virtual" particles can be created at any energy

doesn't the energy need to be at least the mass of the virtual particle?


The vacuum itself has enough energy that there are particle pairs popping into existence and annihilating each other constantly.

https://en.wikipedia.org/wiki/Vacuum_energy


No, it doesn't. Remember that the mass of a particle is just an average: energy and time are linked by the uncertainty principle as position and momentum are, thus decaying particles (pretty much all of them) do not have a sharply defined mass.


No. For example, the weak force is mediated by the very heavy W and Z bosons.

The uncertainty principle says you can "fudge" the energy if your time-scale is short enough, and this is what happens. A result of this is that the distance the weak force can operate over is limited, since the timescale has to be short, so the particles don't have the time to travel very far.
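
A rough version of that estimate (my own sketch with standard constants): range ~ hbar/(m*c), i.e. (hbar*c)/(m*c^2).

    HBAR_C_MEV_FM = 197.327   # hbar*c in MeV*femtometres
    M_W_MEV       = 80_379.0  # W boson rest energy, roughly 80.4 GeV

    range_fm = HBAR_C_MEV_FM / M_W_MEV
    print(range_fm)           # ~2.5e-3 fm, i.e. about 2.5e-18 m
    # A proton is ~1 fm across, so the weak force is effectively a contact interaction.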


What does virtual mean in this case? Does this word mean something specific in quantum physics?


Virtual means that the particle does not obey the Einstein energy-momentum equation: E^2 = p^2c^2 + m^2c^4 . You might recognize the shorter form, where the particle has zero momentum, as E=mc^2.

More info here: https://en.wikipedia.org/wiki/On_shell_and_off_shell
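
A toy numerical illustration (mine, in units where c = 1; the sample values are arbitrary):

    def off_shellness(e, p, m):
        # E^2 - p^2 - m^2 in units where c = 1; zero means the particle is on shell
        return e**2 - p**2 - m**2

    print(off_shellness(e=5.0, p=3.0, m=4.0))   # 0.0  -> on shell (a real particle)
    print(off_shellness(e=5.0, p=4.0, m=4.0))   # -7.0 -> off shell (only allowed virtually)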


>Photons carry no charge, so normally they don't interact with each other.

Not having charge doesn't imply they don't interact at all; they just don't interact through the electromagnetic force. Neutrons also have no charge but interact via the strong force, for example.


They think they got photons to bounce off each other. It's a really old prediction, but the process is really rare, so it's hard to prove. They think it happened 13 times across 4 billion events.


The bit about the trigger was interesting. Is that software/firmware? Or is it a hardware trigger they are discussing?


I think at the moment they use a level 1 hardware trigger and a level 2 software trigger. ALICE is moving to a pure software trigger.


When I last participated in ATLAS, working in the Trigger & Data Acquisition group, ATLAS had 3 levels of trigger: the first was hardware, the others were software. Regarding the first level, software was not an option due to the extremely high bunch-crossing frequency (40 million times per second). The difference between the 2nd and 3rd levels consisted of data availability (region-of-interest only for the 2nd, full detector for the 3rd) and the algorithms permitted (light algorithms for the 2nd, full reconstruction and advanced analysis for the 3rd). The maximum allowed output rates were, IIRC, 100 kHz for the 1st level, 3 kHz for the 2nd level and 100 Hz for the 3rd level.
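
Putting those quoted rates side by side (my own arithmetic), the rejection factor at each level is roughly:

    bunch_crossing_hz = 40e6            # 40 million bunch crossings per second
    l1_hz, l2_hz, l3_hz = 100e3, 3e3, 100

    print(bunch_crossing_hz / l1_hz)    # level 1 keeps ~1 crossing in 400
    print(l1_hz / l2_hz)                # level 2 keeps ~1 in 33 of those
    print(l2_hz / l3_hz)                # level 3 keeps ~1 in 30 of those
    print(bunch_crossing_hz / l3_hz)    # overall, ~1 in 400,000 crossings gets recorded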


They've since scrapped L2; now it's only L1 (HW) and the High Level Trigger (HLT) running on a server farm.


Thanks for the info. I left ATLAS at the beginning of 2009; many other things have evolved since, I suppose.


So is LHCb. The two experiments may share a data centre on the Prevessin site.


As far as I know, nowadays there is a lot of high-speed FPGA processing involved, so the line between software and hardware is blurred.


Is anyone else here having a hard time seeing what the applied uses of light-by-light scattering could be? (i.e. new computers? new internet? new telescopes?)

Can someone here give us some insights?


Currently, very little. According to the article, you need crazy particle-accelerator levels of energy to make it happen at all.

However, by studying and understanding such events, we can test and improve our theories of physics, which may lead to breakthroughs in technology we can only dream of.

For example, general relativity would have been considered completely useless knowledge for any practical purpose when it was first formulated in the early 20th century. It only has a measurable impact at extreme velocities and in strong gravitational fields. In the late 20th century, it proved instrumental in creating accurate enough models of the time delay of signals from satellites, creating what we now know as the GPS system.


I've heard people make the argument about GR being essential to the GPS system before, but the skeptic in me has doubts. I don't deny that it's useful to have a correction derived from relativity, but is it really as essential as people make it out to be? If we hadn't known about GR when we built the GPS system, wouldn't we have realised there was an unexpected drift in the clocks and found a way to recalibrate them periodically?


Yes, it's essential. Because the satellites are moving at orbital velocity, their frame of time is ever so slightly different from ours. Keeping accurate time without accounting for special and general relativity would be difficult if not impossible. Could we have done GPS without relativity? Maybe, but we'd have had to discover it with the satellites to be able to use it.

A quick note (I'm too lazy to do the calculations right now) from: http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps....

    If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time.
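
For the curious, the back-of-the-envelope version of that calculation goes roughly like this (my own sketch using standard textbook values, not taken from the linked page):

    import math

    GM_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH  = 6.371e6     # mean Earth radius, m
    R_GPS    = 2.656e7     # GPS orbital radius (semi-major axis), m
    C        = 2.998e8     # speed of light, m/s
    DAY      = 86400.0     # seconds per day

    v = math.sqrt(GM_EARTH / R_GPS)   # orbital speed, ~3.9 km/s

    # Special relativity: the moving clock runs slow.
    sr = -(v**2 / (2 * C**2)) * DAY
    # General relativity: the clock higher in the gravity well runs fast.
    gr = (GM_EARTH * (1 / R_EARTH - 1 / R_GPS) / C**2) * DAY

    print(sr * 1e6)             # about  -7 microseconds per day
    print(gr * 1e6)             # about +46 microseconds per day
    print((sr + gr) * 1e6)      # net:  about +38 microseconds per day
    print((sr + gr) * C / 1e3)  # ~11 km of accumulated range error per day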


I think I see what he's saying. Those satellites are moving at a fixed velocity. So the time dilation effect on them relative to the ground should be static.

It could reasonably be the case that we just adjusted the clocks on the satellites by 0.05% or whatever the drift rate is, by measuring them against known points until we got it right. Good enough engineering.

Eventually someone would have asked some scientists to explain why it's happening.


> I think I see what he's saying. Those satellites are moving at a fixed velocity. So the time dilation effect on them relative to the ground should be static.

The issue you have is that the dilation effects aren't static; they're all relative to each satellite and ground observer and are constantly shifting based on the orbits. Basically, the premise you have to accept to allow for "just adjust the clocks" is too simplistic. This is why both general and special relativity come into play in GPS. You might get away with adjusting for a single observer, but not all observers.

Note that we already have the clocks purposely skewed to account for their orbital speed, and they still require constant updates. I just don't see that happening through "good enough engineering". If your timing is off by a few microseconds you're off by about a kilometer, and things get worse from there.

Plausible as a hypothetical gedanken experiment? I suppose. Plausible in reality? I'm skeptical that you'd be able to do it. It would be akin to launching a rocket to the moon without understanding how to fly.
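
To put numbers on that (my own quick check): a range error is just the speed of light times the timing error.

    C = 2.998e8                       # speed of light, m/s
    for dt in (1e-9, 1e-6, 38e-6):    # 1 ns, 1 microsecond, ~one day's uncorrected drift
        print(dt, C * dt)             # ~0.3 m, ~300 m, ~11 km of range error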


Nice, thanks for the info! I'd never seen the inaccuracy quantified like that before.


> If we hadn't known about GR when we built the GPS system, wouldn't we have realised there was an unexpected drift in the clocks and found a way to recalibrate them periodically?

And hence have discovered GR in the process. Alternatively (and more likely) we would have decided the idea simply doesn't work.


"Alternatively (and more likely) we would have decided the idea simply doesn't work."

No, we would just keep beating on it until it works. It's a myth that science always precedes engineering. It's a very regular drift, and worst case scenario, if they just couldn't work it out, they'd build several ground stations in known locations, derive corrections as those drift, and upload corrections into the system periodically. (Similar things are done today: https://en.wikipedia.org/wiki/Differential_GPS )

Even with the science that we have, there are still fudge factors and empirically-determined values we have to use anyhow, because the Earth is not homogeneous and we have to take its slightly-lumpy gravity field as a given, not something we can "scientifically" determine. (Of course we "use science" to determine the lumps, but the lumps themselves are simply a given.)


Given the choice between trial and error engineering and science backed engineering I much prefer the latter. That does not mean doing engineering without the science is impossible, just that it's much harder. They both go hand in hand; nothing mythological here. Advancements in one lead to ideas in the other and vice-versa.


"Given the choice between trial and error engineering and science backed engineering I much prefer the latter."

That's not the question; the question is, are we forced to give up engineering if we don't have "the science" yet?

And the answer is an objective "no", from abundant past human history. There's this myth sold that science always precedes engineering that is very, very popular. I'm not even sure where it's coming from. Oral history in primary education, maybe. But it's a myth that engineers themselves can ill afford.

The vast majority of practical programming is programming running way, way ahead of the "science", which occasionally takes point samples of how 10 college sophomores behave under a certain limited experiment.


We just overbuild until we have the science and the manufacturing capacity to back it up. Every once in a while I hear some civil engineering fan comment about how the Roman aqueducts are still standing because they built a huge margin of safety into them (because they couldn't do the calculations, not because they wanted them to last a thousand years).


Maybe it's because for instance there were exactly zero engineers working with radio waves before Maxwell and Hertz, or zero engineers working with semiconductor transistors before, basically, Brattain, Bardeen and Shockley. Electromagnetism and electronics happen to come straight from science labs. You could also check, I don't know... chemical synthesis, polymer science... nuclear physics?

I don't understand why you would come up with such "objective" statements unless you really think it's not necessary to know anything about the history of science and engineering to have a strong opinion about them.


> Maybe it's because for instance there were exactly zero engineers working with radio waves before Maxwell and Hertz

Well, humans were crafting optical lenses way before Maxwell ;). And even way before [1] Descartes and Newton decided to study light.

For everything out of reach of our senses, like the examples you gave, we need formal science. But for everything humans can see, smell or touch, we're pretty good with empirical observations: chemistry, fluid dynamics, genetics and mechanics were commonly used way before formal science was even a thing.

[1]: https://en.m.wikipedia.org/wiki/Nimrud_lens


Isn't the more relevant question "does engineering benefit from (basic) science?"

Possibly amended to include "to a degree that justifies the spending"


> And hence have discovered GR in the process.

Fascinating to wonder how the modern physics community's worldview would be different if GR had gone [empirical anomaly -> new theory] and ended up in the same place, rather than [thought experiment -> empirical proof]. (I'm not a historian of physics - forgive me if I have that wrong.)

> Alternatively (and more likely) we would have decided the idea simply doesn't work.

I don't think that's true. Consider the Bell Labs guys who won the Nobel Prize for the cosmic microwave background radiation, who were just trying to remove the noise their antenna was receiving.


Bell Labs and the CMB is a nice example! At the same time, it unfortunately doesn't give us any intuition about how many times somebody doing engineering had less tenacity or experience and gave up instead of ending up with a Nobel Prize. :)


If you don't have GR, you aren't going to do gravity maps (because you have no reason to think you need them).

Without that, you don't have much opportunity to discover the basis of the correction (which, as others have pointed out, is just discovering GR, anyway.)


I think it's reasonable to believe that if we hadn't found GR, and had implemented GPS anyway, observant scientists would have found GR.

The reason is that, for economic reasons, GPS developers work hard on noise reduction. Once you eliminate all the "technical" forms of noise, you're left with the "scientific noise".


> an unexpected drift in the clocks

In every kind of clock.

I would hope that, in this scenario, the drift would have been noticed and would have aroused sufficient curiosity to investigate and eventually discover that the effect was due to time itself flowing at a different rate.


Imagine trying to find that bug.

Finally, after an enormous effort, being able to tell your manager that the error is in the space-time of the universe.


Riiight?

It reminds me of something I read about the search for what we would now call "violations of conservation of energy". Scientists kept coming up with more experiments to try to show this phenomenon, but none of them worked. Eventually they were forced to conclude that that's just the way it is:

"A perfect conspiracy is a law of nature."


The bleeding edge of physics, and of other science and math, is often just a lottery. You research something, show that it's probably correct even if it's not practical in any way, and then years later someone says "you know, if we just applied..."

Modern cryptography would be impossible without number theory, yet all the breakthroughs that made it possible took place decades and centuries beforehand. All they needed was "with a computer" for it to become vital to our everyday lives.


The laser is a good example of this phenomenon: it struggled to find early applications. I can't pin down a source, but apparently it was once described as "a solution looking for a problem".


Wait 50-100 years then I'll get back to you.

Cryogenically cooling magnets so that they become superconducting allowed people to create MRI.



"Finding" may be too strong. The thing is, the photons scatter more often if there are more kinds of charged particles that could be created. The photons seem to scatter more often than would be predicted from the charged particles we already know about. That doesn't say anything at all about what the unknown charged particles are. But it may tell us how many unknown charged particles there are, which is just amazing.



light seeking light doth light of light beguile


One step closer to light sabers.


Not sure you'd want high-energy photons scattering off your face after the collision of the light sabers. Unless you want to call it "the photon menace".


Sounds just like watching the prequels.


One step closer to holograms.


Could you explain?

Also, do you think that current implementations that use eye tracking are not real holograms?

What's to say that holograms similar to those of Star Wars could not be produced by inspecting the environment and automatically determining where people's heads and eyes are, e.g. via a combination of technologies similar to the following:

Eye-tracking holographic table requiring 3D glasses: http://www.euclideonholographics.com/

3D TV not requiring special glasses for the 3D effect: http://www.ultra-d.com/

And what about the things that have been called and accepted as holograms since the mid-20th century?

https://en.wikipedia.org/wiki/Holography


This is closer: http://www.pocket-lint.com/news/131622-holograms-are-finally...

Star Wars-style holograms imply that I don't need additional equipment or special angles or anything, just a (basically) magical "hologram" projector. As that link shows, it may not be entirely out of the question, but, well... I'm not sure I want to be in the same room as one of those.


A real hologram, like in your last link, makes the photons actually come from the correct angles, so that not everything is in focus in your eye.

The stereo imaging we see in current VR and TV technologies does not simulate that effect, which makes it hard for some people to use, as everything is focused at infinity or on the screen. What the lens (in your eye) is currently focused on is a very important depth cue for our brain, and the conflicting information coming from the lens and stereo vision makes some people unable to use stereo imaging, or causes eye pain and headaches.

Unfortunately, real holograms are currently restricted to static images, meticulously constructed using very advanced equipment that records seemingly random interference patterns on film which then reproduce the hologram. Doing the same thing with moving images in real time would require too much computing power to currently be possible.

Another way could be to track the lens refraction properties and the pupil size in real time and fake some depth of field on the projected stereo images.


There's an interesting implication here for redshift observations and Hubble expansion, insofar as it provides a mechanism for photons to lose energy while in a highly sparse medium, populated largely only by other photons.

I'm keeping my money on expansion and dark matter being hooey brought about by an incomplete or incorrect understanding of light.

Only recently there was that rather interesting piece about simulated momentum transfer from photons in media.

It's exciting - we'll potentially be lopping off a huge branch of dead wood from the tree of science, from which new ideas can grow.


> I'm keeping my money on expansion and dark matter being hooey brought about by an incomplete or incorrect understanding of light.

That's a very interesting hypothesis. Unfortunately, it's easy to verify that photon-photon scattering doesn't explain the expansion of the universe.

1) photon-photon scattering is elastic, so no energy is lost and no redshift is occurring

2) if it isn't elastic, the scattering is a random process. So different photons will lose a different amount of energy, which means that measured spectra are going to be blurred.

3) rather than seeing redshift due to photon-photon scattering, you'd see fog, which gets cloudier and cloudier with distance.


Those are all solid points, so I retract - but the momentum paradox bit stands!

I wish I had someone else to chat physics with - isolation leads to screwy notions which can easily be quashed by the right counterpoints.

Edit: a thought. Please (genuinely!) tell me where I'm wrong. As it's elastic, could we not end up with groups of lower energy photons with the same vector as an original high energy photon, which would similarly explain redshift? Doesn't address your point re: fog, however, unless they're universally tightly grouped.


From what we know, two high-energy photons (it's much less common at low energy) collide and change direction; from what I understand, they don't split into lower-energy photons. It'd be basically impossible for all the high-energy photons to scatter directly away from Earth (why would the Earth be special?), leaving only low-energy ones heading towards us.


I doubt that is the case. As the article notes, photon-photon scattering has been predicted by QED for a long time. Even if it wasn't detected before, its impact on redshift could be calculated.

If you look at the abstract on the linked Nature page, the interaction has a cross-section of 70 nb, which corresponds to a circle of radius 1.5e-9 nanometers [1]. It might occur a few times near a supernova or in the swirl of an accretion disk, but I doubt it has a meaningful effect in deep space.

A more exciting result would have been if they hadn't seen this effect. That would have meant the theory was wrong and we had a new data point to look at.

[1]: These measurements are only for a specific energy range; they might vary dramatically at different energies, but the key point is that this result agrees with theoretical predictions, meaning it should also be possible to calculate the contribution to redshift from photon-photon scattering.
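
For anyone who wants to check the footnote's conversion, it just treats the cross-section as the area of a circle (my own arithmetic, using 1 barn = 1e-28 m^2):

    import math

    BARN_M2 = 1e-28                  # 1 barn = 1e-28 square metres
    sigma = 70e-9 * BARN_M2          # 70 nanobarn in m^2

    r = math.sqrt(sigma / math.pi)   # radius of a circle with that area
    print(r)                         # ~1.5e-18 m
    print(r / 1e-9)                  # ~1.5e-9 nanometres, as quoted above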


> but I doubt it has a meaningful effect in deep space.

Don't underestimate how often unlikely events can happen in a big enough space...

As to photon-photon scattering, I might be wrong, but I don't believe it's considered in any models for expansion - similarly, if you do build light momentum transfer into your model, then expansion goes away - but because we "know" expansion to be true, those results aren't considered or published.

I think our givens are wrong. It'd hardly be the first time. I think I might be wrong. That'd hardly be the first time either.

But, as a betting physicist, my money is on expansion (at the very least acceleration) being bunkum.


>Don't underestimate how often unlikely events can happen in a big enough space...

That's true, but the density of photons (from a given source) also drops off with the square of the distance as you move away from the source.

> similarly, if you do build light momentum transfer into your model, then expansion goes away

I'm not really a cosmologist, but I know some physics (QFT in particular). If you have a derivation of a formula giving the impact of γγ scattering on redshift, I'd love to read it.

> but because we "know" expansion to be true, those results aren't considered or published.

We know redshift to be true, and we have evidence for expansion based on measurements. Something that simply denies this will have a hard time being published. A theory that explains those findings without using expansion however would certainly make waves.

As an aside, since the cross-section of photon-photon scattering is energy dependent (and, as far as I can tell, cosmological redshift is not), wouldn't that be a way to distinguish them? Scattering should occur more often at higher energies, meaning that after a significant amount of scattering a bundle's spectrum should clump up more towards the red.


The way I've always heard it, as a layman, is that red shift could be explained only by an expanding universe.

I'm sure there's more rigor to it than that, but it always seems like a pretty big conclusion. A mildly surprising claim explained by a wildly surprising theory.


Speaking as a physics grad: red shift can only be explained by an expanding universe with our understanding of light physics in the mid 20th century.

If you look at the world only through green glasses, you could be forgiven for thinking that everything was green. Your interpretation of observations is only as good as the theory you use to interpret them. Think of epicycles, which for millennia seemed obvious and correct, until heliocentricity (for all bodies, not just Earth) became the dominant model due to improved theory - but no new observations.

If light behaves even slightly differently to how we currently believe, everything from galactic rotation to expansion goes out of the window.

Here's the bit on momentum transfer - they even explicitly cite the implication for Hubble expansion.

https://www.sciencedaily.com/releases/2017/06/170630085627.h...


The article referenced in that piece is available here, from the arxiv: https://arxiv.org/pdf/1603.07224.pdf The article has been up for some time, but has just now been published by Physical Review A; it is common practice for physicists to post a preprint on arxiv prior to publication, which happens after peer-review.

The momentum paradox you refer to is (possibly?) the Abraham-Minkowski controversy ( https://en.wikipedia.org/wiki/Abraham-Minkowski_controversy ) about electromagnetic momentum in dielectric media. I'm not an expert on this subject, but I would doubt that this new work definitively settles the controversy. Of course this is, no doubt, work towards settling the issue. My (limited) understanding is that the controversy is really about interpreting certain quantities that behave like momentum in certain contexts, and which contexts apply in certain experiments to measure them. I do not believe it constitutes a crisis in our understanding of light; this is a very technical detail.

Of course, using the mechanism of momentum transfer to the transmitting medium as an explanation of redshift - and by doing so, refuting the expansion of the universe - is just yet another "tired light" explanation. (This refers to the idea of explaining redshift through a path-dependent loss of energy for photons traveling from great distances.) This is not a new notion, dating at least to Zwicky in 1929. This article ( https://arxiv.org/abs/astro-ph/0106566 ) details efforts to demonstrate the reality of this expansion. These efforts do not assume anything about the exact mechanism responsible for tired light, merely the notion that light loses energy as it travels. They refute this to better than 10 sigma.

> with our understanding of light physics in the mid 20th century.
> If light behaves even slightly differently to how we currently believe, everything from galactic rotation to expansion goes out of the window.

I... suppose I have to admit that if the current theory of light is wrong, then there may be changes to how we interpret these results. You should know though that this would be extremely unlikely. Generally revolutions in physics tend to subsume the effective results of the theories that are replaced. Quantum mechanics provides a good example: if you take the limit h -> 0 (making Planck's constant zero), you recover classical mechanics. Relativity provides another: if you take the limit c -> infinity in relativity, you also recover classical mechanics. Our understanding of light is quite good.

Forgive me, but I'm detecting a little bit of an "international scientific conspiracy" vibe here. Unpopular work is published all the time, provided that it withstands scientific scrutiny. I know that sounds like I'm dodging the issue, but really, you can apply a cui bono here: what do scientists stand to gain by propping up "wrong" science? We don't get paid a lot, you know.

>Think of epicycles, which for millenia seemed obvious and correct, until heliocentricity (for all bodies, not just earth) became the dominant model due to improved theory - but no new observations.

I don't mean to pick nits, but it was improved observations (Kepler, Brahe, etc.) that drove the acceptance of the heliocentric model. Kepler was famous partly due to his use of Brahe's unprecedentedly accurate measurements. Theory was not necessarily rigorous at the time, often referencing theological arguments; heliocentrism was hotly debated, but not novel. Later it was realized that epicycles form a basis set for any trajectory on the surface of a sphere; any trajectory can be reproduced using a sufficiently large number of them. This was a pitfall that astronomers of the time could never have known. Remember that they did not have Newton's insights yet.

On a final note, it's great to hear that ATLAS has some good evidence for gamma-gamma interaction!

edit: fix link


Do you mean interaction between high-energy gamma rays from distant quasars? That could change the background radiation density, right?



