LIGO black hole echoes hint at general-relativity breakdown (nature.com)
254 points by privong on Dec 10, 2016 | 96 comments



> The echoes could be a statistical fluke, and if random noise is behind the patterns, says Afshordi, then the chance of seeing such echoes is about 1 in 270, or 2.9 sigma. To be sure that they are not noise, such echoes will have to be spotted in future black-hole mergers. “The good thing is that new LIGO data with improved sensitivity will be coming in, so we should be able to confirm this or rule it out within the next two years.”

Can I take a moment to commend the level-headed non-hype of this paragraph? It gives the impression that their first priority is finding the "truth" (I know, I know; I'm using it as a shorthand), not whatever they want to be confirmed. Gives the research much more credibility.

I mean, I know it's Nature so we should expect it, but it's still nice to see.


It's actually a bit hyped for particle physics. It ignores the fact that you're very likely to see a lot of "significant" effects in physics because there are tens of thousands of data analyses performed every year, so we're bound to find some chance occurrences at this significance level. See the diphoton excess a few years ago.

This is why the particle physics community has a very strict unofficial standard of 5 sigma for significance. People don't generally publish "serious" papers at 3 sigma.
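
For concreteness, here is a minimal sketch (my own, not from the article or this thread) of how a significance quoted in sigma maps to the "1 in N" odds quoted above, using the two-sided tail of a standard normal distribution:

    # Converting between sigma and a two-sided tail probability.
    from scipy.stats import norm

    def sigma_to_odds(sigma):
        """Two-sided tail probability and its reciprocal for a given sigma."""
        p = 2 * norm.sf(sigma)        # sf(x) = 1 - CDF(x), the upper tail
        return p, 1 / p

    def p_to_sigma(p):
        """Significance in sigma for a given two-sided p-value."""
        return norm.isf(p / 2)

    print(sigma_to_odds(2.9))    # ~0.0037, i.e. roughly 1 in 270
    print(sigma_to_odds(5.0))    # ~5.7e-7, i.e. roughly 1 in 1.7 million
    print(p_to_sigma(1 / 270))   # ~2.9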


We've only observed a small handful of black hole collisions though so seeing something at 2.9 sigma would be somewhat more surprising, right? Definitely not rising to the level of a discovery yet of course.


Only a small handful of observations (only one, last I heard, but I may be out of date), but lots and lots of analyses. And for every scientist that has their own model and does their own data processing, there's a chance that the model lines up with some arbitrary noise in the data.


You also have to add in that once these experimental results were released, a hundred or so theoretical physicists immediately started working on massaging the data into supporting their pet theories. After every anomalous result there are immediately hundreds of published papers by people trying simply to be the first to publish in case their idea happens to pan out. It's a kind of shotgun approach to winding up being the next Dirac discovering the positron from math or whatever.

Since there are 99 other PhDs who probably looked at this data, found their pet theories didn't match it, and haven't been able to publish, you have to account for the filtering effect: this was the 1-in-100 paper that managed to match the data. Adding that "look-elsewhere effect" to the 2.9 sigma would push the global significance (in the literal sense of global -- meaning all the research teams across the whole world) of this result down into meaninglessness.

Of course it's likely that any discovery would start out looking like something on the edge of significance, exactly like this. The safe bet is that this disappears, but all we can do is wait for more data to come in and see if the significance improves or disappears.

And I do really hope that someone finds something like this via the LIGO data. I'm convinced there's something very interesting out there to find, and sooner or later it should pop up experimentally and shake up our model of the universe.


Yes, this is basically a distributed version of the issues discussed in psychology regarding researcher degrees of freedom with a single data set. If you throw enough models at the data, one of them is bound to stick, whether it is predictive or not.


You're right, of course - while not a particle physicist, I know about the sigma standards, and mistakenly assumed others would see the "2.9 sigma" and immediately know what that implied.


it wasn't obvious to this layman. thanks for clarifying.


I assume one reason for that high standard is that particle physics experiments are repeatable and can be observed with very high precision.

None of that is true of LIGO observations of black hole collisions.


yea, 1 in 270 seems fairly likely given the circumstances.


There was a time when Nature was a highly respected publication. Now, their headlines are at tabloid level.


TFA is in "news & comments" rather than "research" (which is the respected part).


It's a for-profit publication, and I agree -- sometimes it shows.


This is super exciting stuff! We know that the two most accurate models of the physical world, Quantum Mechanics and General Relativity, contradict each other, so at least one, and probably both, are approximations to the real laws that govern our universe. Since QM and GR disagree about what happens for small, massive objects, and in particular at black hole event horizons, this is a place to look for divergence from existing theories. If these echoes hold up under repeated measurements, it could be one of the most consequential measurements of this century. This is another example of how taking measurements to verify a theory you think you know can lead you in completely unexpected directions.

Though, for now, the LIGO team is apparently saying that these results could be the result of noise which would occur 1 out of 270 times. That's not strong enough evidence (in my mind) to overcome the overwhelmingly likely prior that General Relativity is correct. In time, we'll see.

Also, the article mentions that LIGO has witnessed 3 black hole mergers. Last I heard LIGO had only witnessed 2.


> Also, the article mentions that LIGO has witnessed 3 black hole mergers. Last I heard LIGO had only witnessed 2.

Two have been published as bona fide BH mergers. The third is a signal in the LIGO data that doesn't formally meet the statistical significance criteria, and so is classified as a "GW candidate" rather than a detection. Unfortunately I don't have a reference for it, but this was discussed during a seminar I attended last month, given by Pedro Marronetti (NSF Program Director for LIGO, I believe).


Neat! I'd love to read more about it.


While writing the original comment, I quickly browsed the LIGO publications list[0], but nothing jumped out at me as being about the candidate, specifically. The papers with "candidate" in the titles are pre-upgrade, so I don't think they include the candidate that was referenced in the seminar I attended.

[0] https://www.lsc-group.phys.uwm.edu/ppcomm/Papers.html

EDIT: Found something. The third candidate event (LVT151012) is mentioned in this paper, in the 2nd paragraph of the introduction and discussed in more detail in Section C (beginning on page 11): https://arxiv.org/abs/1606.04856 ("Binary Black Hole Mergers in the first Advanced LIGO Observing Run", LIGO+VIRGO).


For reference, at least in particle physics the standard for a discovery is usually taken to be five-sigma, a 1 out of 1.7 million error probability, plus an independent verification to eliminate systematic errors that would not show up in the statistics.


This isn't particle physics. Particle physics has the luxury and burden of dealing with trillions of experimental trials, not a measly 2.


From someone who hasn't worked with statistics in a while: isn't sigma partly dependent on the number of independent events? If so, and LIGO detections occur infrequently, how long are we talking about, even assuming the data supports it?


We use a technique called time sliding to generate more background data to understand our detector noise better, and thus set better upper limits on the false alarm rate. So even with a 3 month observation time we can claim 5+ sigma.


Thanks for the additional information!

I'm assuming you're referring to the method documented in Ch 4 here? https://gwic.ligo.org/thesisprize/2011/capano_thesis.pdf

If so, would it be fair to say (as a simplification) that instead of increasing the confidence in GW events by collecting more of them, you're doing so by holding the number of collected events almost constant and increasing the confidence in the characteristics of noise the detector is reporting?


Yes, exactly that. We don't have the luxury of many events, so to increase our confidence we generate more background. One of the problems is what to do with data that contains more than one event. In the second detection paper, one of the plots contains the background with and without the other event taken into account. This is because during the time sliding process you will necessarily slide the time series from one detector across a gravitational wave signal in the other, producing false background noise.

There's no real standard way of dealing with this, so we show both cases - but both claim 5+ sigma.
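
As a very rough illustration of the time-slide idea described above (a toy of my own, nothing like the real LIGO pipeline): shift one detector's series by an unphysically large offset, so that any coincidences with the other detector must be accidental, and count those to estimate the background.

    import numpy as np

    def coincident_triggers(h1, l1, threshold):
        """Count samples where both series exceed the threshold at the same time."""
        return int(np.sum((np.abs(h1) > threshold) & (np.abs(l1) > threshold)))

    def background_counts(h1, l1, threshold, n_slides=100, shift=1000):
        """Estimate accidental coincidences from time-shifted copies of one series."""
        return np.array([
            coincident_triggers(h1, np.roll(l1, k * shift), threshold)
            for k in range(1, n_slides + 1)
        ])

    # Toy usage with simulated white noise standing in for detector strain:
    rng = np.random.default_rng(0)
    h1, l1 = rng.normal(size=2**20), rng.normal(size=2**20)
    bg = background_counts(h1, l1, threshold=3.0)
    print(bg.mean(), bg.std())   # estimated accidental-coincidence count per slide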


The article said two years.


First a bit of background on QM vs GR, then I'll return to your point about horizons.

It's not that they disagree, it's just that when you do perturbative quantization of General Relativity you find out you can't use the renormalization (by power counting) that is done on other perturbatively quantized field theories. This is only a problem in strong gravity -- where in this context, "strong" means more than one loop of gravitons on a Feynman diagram. With multi-loop Feynman diagrams involving other massless propagators we can use a number of techniques to reduce the integrals; these techniques do not work with (massless) gravitons.

One could describe this form of renormalization as enabling a reduction of infinite modes by making a finite set of measurements at high energy to fix a parameter; this works well for light, for example. Einstein gravity, on the other hand, is perturbatively non-renormalizable, at least using the same techniques, because you need to make an ever increasing set of these types of parameter-fixing measurements at higher energies, and ultimately in strong gravity (as defined above) you need an infinite number of them.

Since there is an overwhelming amount of evidence for Einstein gravity everywhere we have been able to look so far, this poses a problem: how do you quantize gravity in a way that makes useful predictions about systems in strong gravity? There are lots of research programmes looking at this problem.

(The opposite side of the coin is that programmes to geometrize the fields of the Standard Model exist, but they run into difficulty too. Taken together, this is the underlying situation supporting the claim that QM and GR disagree about QM scale physics where local curvature distorts lengths and times at the scale of SM scatterings.)

Outside of strong gravity, however, perturbatively quantized gravity on the one hand, and semiclassical gravity on the other, are perfectly fine effective field theories (effective field theories in the Kenneth G Wilson sense; General Relativity as an effective field theory in this sense I'll call "the EFT" below).

One of the features of black holes generically is that strong curvature is found only very near the (unremovable by change of coordinates in General Relativity, gravitational) singularity. The event horizon or horizons, depending on the metric the black hole sources, are found further from the singularity as the black hole's mass increases. Around an arbitrarily high mass black hole, one will find the horizon is in a region of arbitrarily flat spacetime. If the compact massive objects we observe to date are astrophysical black holes, they do not have strong gravity at the horizon.

This is one of the critical parts of the AMPS firewalls "paradox", wherein one of the set of properties of a black hole which cannot all be true is the no-drama conjecture: an infaller at a sufficiently massive black hole will experience no tidal stresses.

Afshordi, one of the authors of the paper that's the topic of the article, has been doing productive phenomenology in his various collaborations, and his argument is that there are classes of quantum gravity theory that can be excluded if the observations do not disappear under analysis (e.g., they're not systematic errors, and aren't "lucky" low-sigma correlations).

It's more than a bit provocative to suggest that the observations suggest that the EFT fails outside the horizon (one of the possibilities that has been discussed by Polchinski and others in the wake of AMPS (he's the "P")), which is fine, since the point will stand or fall on the basis of evidence rather than the consequences for various QG research programmes. :-)

For now it's pretty safe to work on the assumption that the EFT is fine at least everywhere outside the event horizon and the extremely hot dense early universe, and that a UV completion to GR will be completely compatible with the EFT outside strong gravity and inside a region at least a little bigger than the observable universe.

Finally, all this means that although I won't exactly endorse your wording in your third last sentence, I do certainly agree with the sentiment.


Any reading suggestions for someone who wants to understand this and has a very strong math background but no physics background beyond basic second-year QM?


It's hard to gauge (pardon the pun) where to start you based on that.

I'll guess that you're keen on understanding the General Relativity part in detail.

Carroll and ‘t Hooft have kindly put up lecture notes that might be a good starting place. Stefan Waner has made available good lecture notes on differential geometry in the GR context.

https://arxiv.org/abs/gr-qc/9712019 [Carroll] http://www.staff.science.uu.nl/~hooft101/lectures/genrel_201... ['t Hooft] http://www.zweigmedia.com/diff_geom/tc.html [Waner]

If you can wrap your head around those you could proceed to any of the standard grad texts on GR (MTW, Wald, Weinberg mainly). Weinberg is popular with people who like concise maths.

If it's all too novel, then Hartle, Schutz and Carroll all have excellent introductory texts aimed at grad students.

Once you understand how General Relativity works as a general background to any field theory -- classical or quantum -- then you'd be ready for semiclassical gravity or various quantizations of GR.

An alternative approach might be to aim you instead towards QFTs via group theory, Lie groups, Yang-Mills theory, renormalization, renormalization group flow, and so forth.

Eventually you hit on gauge/group correspondence arguments in general, which will equip you to understand the attractions of AdS/CFT in moving the tedious calculations from one setting to another setting in which they're a lot less tedious, and hopefully not fall too hard for the idea that AdS/CFT automatically helps us with gravity and matter theories in our universe.

There is certainly ample scope for talented mathematicians to test the correspondence argument (and especially whether AdS/CFT specifically or gauge/gravity generally really is a duality) rigorously.

I think that'd cover all the ideas touched on in the comment you replied to.

PS: Sorry I meant to list off some QFT resources for you but I have run out of time today. :(


Thank you! I will look into the "QFTs via group theory" approach, since I already have a rough idea how GR works, but QFT is a complete mystery to me.


I think that as you have some rough exposure to relativity already, you could first absorb the idea that Minkowski (flat) spacetime is a theory where at every point the Poincaré group is the isometry group. That's a good way to hit on representation theory.

Representations of the Poincaré group: http://www2.ph.ed.ac.uk/~s0948358/mysite/Poincare%20Chapters...

and generalizing: https://www.wikiwand.com/en/Particle_physics_and_representat...

Introductions to QFT tend to assume you know a lot of physics. An example is the Preface for Students in Srednicki's prepublication: http://web.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf

However, Lancaster and Blundell's book has some reviews suggesting that someone good at math should be able to work through it without the background needed by textbooks like Srednicki's https://www.dur.ac.uk/physics/qftgabook/ (I have not read it though).


I got a bit frustrated with Maggiore, and I'm now taking a crack at Lancaster & Blundell. The opening chapters are very promising.


Thanks again. Your suggestions led me to scan the QC174.45 shelves at a nearby university library. I settled on Maggiore's A Modern Introduction to Quantum Field Theory, which seems to be almost all Math.


Some combination of the following would be a good start, depending on your background:

Quantum Theory for Mathematicians : https://www.amazon.com/Quantum-Theory-Mathematicians-Graduat...

Quantum Mechanics for Mathematicians : https://www.amazon.com/Quantum-Mechanics-Mathematicians-Grad...

Quantum Field Theory and the Standard Model : https://www.amazon.com/Quantum-Field-Theory-Standard-Model/d...


I honestly can't tell whether that's the output of a Markov chain trained on arXiv. I need to read more until I can tell the difference again.


If a Markov chain could generate something that long and coherent, it would be a much greater find than a simple explanation of gravity in General Relativity.


If it's any help, I can, and it's not. I just learned a ton from that post, in terms of clarifying the framework for things I already know a little about.


>"Quantum Mechanics and General Relativity, contradict each other so at least one, and probably both, are approximations to the real laws that govern our universe. [...] overwhelmingly likely prior that General Relativity is correct"

If you think both QM and GR are likely incorrect, then why do you use "an overwhelmingly likely prior" that GR is correct?


Neither is "incorrect"; the Standard Model and General Relativity are two of our best physical theories in that they both accord entirely with observational and experimental evidence to date.

Either or both may be incomplete, however. Correctness and completeness of any theory in mathematical physics are essentially orthogonal. You can have a complete theory that is just wrong, for example.

As I wrote a bit earlier in this thread, the most straightforward approach to quantizing General Relativity fails in strong gravity. Additionally, the classical field theory that is General Relativity is defined on a smooth manifold and yet so far we have been unable to escape the conclusion that some systems of mass-energy inevitably produce a non-smooth discontinuity. A completion of classical General Relativity requires the smoothing of these regions. Sharpening this, the problem with GR is the prediction of a gravitational singularity; if singularities are physical at all (even if they are in a region of spacetime that is inaccessible outside event horizons), then General Relativity is incomplete in its own terms.

The Standard Model as a paradigm of quantum field theory, on the other hand, is defined against a flat spacetime and thus relies on the result from General Relativity that the flat spacetime metric is induced on the tangent space of every point in a smooth spacetime. So if GR is incomplete, so is the Standard Model, in its own terms. (This is not just an academic point; any theory of gravity that does not reproduce the Poincaré invariance of flat spacetime in the energy scales of the Standard Model has a terrible correctness problem.) Additionally, the Standard Model is not especially well-defined at GUT energy scales. Additionally, the Standard Model does not describe the whole of the non-gravitational content of the universe; for example, it is silent on dark matter.

The Standard Model is highly correct, however, in the limits where it is effectively complete. It's a pity it has so many free parameters that have to be determined by experiment.

Likewise, General Relativity is both highly correct in the limits of present observability, and it is complete in its own terms if one admits the possibility that gravitational singularities only arise in our idealized models and that, for example, there are no exactly Schwarzschild black holes anywhere in the past, present or future of our universe. (One would have to show that, and also that there are no other physically realizable systems of matter that can generate non-smoothness in our spacetime. That's not an easy ask. Although General Relativity has only one of the free parameters complained about in the previous paragraph, it doesn't offer much guidance about how to show that you can't actually generate a low-Q Kerr-Newman metric in reality, and worse, some of that guidance must come from the high-energy behaviour of matter fields -- we can only be as complete as the Standard Model right now.)


Posts like this are why I read HN. Thanks a lot! :)


I think they mean an overwhelmingly strong prior that GR is correct in this particular instance.


Sorry, good catch! I really meant to say: a prior that we wouldn't find a falsification of GR in this particular arena.


Thanks, makes much more sense.


Just a heads up: the authors of the paper this article is talking about are not part of the LIGO collaboration. They ran their own analysis on the publicly available data.


Yup. This is where, after almost a century, the Quantum Mechanics Rubber finally meets the General Relativity Road.

May the best theory win!


Theory is the operative word here; both theories could be, and likely are to some degree, false. It's a common human bias to begin to believe one's theories are fact.


That's not really what 'scientific theory' means, outside of creationist circles.


What does this possibly have to do with creationism? I guess you think that because I'm calling modern scientific thought into question that I must be a creationist. Sorry but that's not the case.


I mean that your understanding of what a scientific theory is, is inaccurate. It's a misunderstanding that is most common among creationists, so that's where you see it discussed most. Here's one overview:

https://en.wikipedia.org/wiki/Evolution_as_fact_and_theory

Scientific theory doesn't mean 'sort of a suggestion' as you seem to think.


You are confused about my understanding. However, scientific theory does not mean "really it's true". It means we think it's true. Or, this is our best guess so far. To believe anything else is science as religion.


I'm pretty sure I'm not but ok. I won't even say who is fond of the 'science as religion' thing!


Personally, I would not be surprised if we discover that our understanding of general relativity is wrong for extreme values (i.e. very high mass densities). The whole problem of dark energy and dark matter -- which have failed to show up in any conceivable form so far -- also gives reason to doubt the validity of our current theory of gravitation.

I think we're in a similar situation to the end of the 19th century, when many physicists thought that everything that could be discovered already had been, apart from some "edge phenomena" that would need to be resolved somehow using the current theories. In the end, these edge cases turned out to be the first hints of some completely new theories that dramatically improved our understanding of nature. I think that gravity and quantum mechanics are due for a similar change, and in the coming decades we might just get the data that we need to make this change happen.

I also have difficulties "buying" the current theory of black hole physics, especially the concepts of an event horizon and the infinite mass density, as well as the problems which arise from them (e.g. black hole energy evaporation through virtual particle generation at the horizon). And as previous theories of gravitation have broken down at points of extreme value (high energy, high speed), I think black holes are a hot candidate for breaking general relativity.


There's plenty of hot dark matter coursing right through you right now !

Fermi -> Wang Ganchang -> Harrison, Kruse & McGuire took a while. I wouldn't expect a quick detection of something with the properties of a WIMP, as I would expect any such particle to be even harder to detect than a neutrino, especially if it doesn't feel the weak nuclear force.

I don't know why you think that dark energy has failed to show up in any conceivable form -- how do you explain the cosmological redshift without it? Dark energy in its simplest form is just the cosmological constant, and can be an inertial effect.

In any case, even if the concordance cosmology is simply wrong, that does not mean that GR is incorrect as much as we are wrong about the mechanisms that generate the metric (Afshordi, one of the authors of the paper at the top, has proposed non-universal coupling to the single metric of GR), or alternatively we are wrong about the way we choose and stitch together metrics (i.e., we're misusing GR in a way that introduces serious errors at long length scales).

Do you really think there are working scientists who think that there's nothing more to discover? Conversely, are there many who deny that the huge preponderance of evidence we have so far favours the Standard Model and General Relativity? Even if we "demote" SM and GR to effective field theories, the effective limit of each is very nearly everywhere readily accessible, isn't it?

Buying the BH singularity would be, I think, a pretty extreme position. Every viable post-GR effort I know about is to some extent focused on abolishing singularities somehow. (You could alternatively keep them always hidden and resolve things like BH thermodynamics; if you always keep information locked up in another region of spacetime -- behind a horizon -- there's no information loss problem to consider, and you can "cut" singularities out of the manifold recovering everywhere-smoothness. But black holes might evaporate completely in the far de Sitter like future.)

Not buying an event horizon in a system of local physics with a maximum propagation of local state from one point to another seems even more extreme. The existence of a maximum local speed -- whatever it is, it could be much faster than light -- sets the slope of a nonempty open convex cone of tangent vectors (a causal cone at each point for the field-values at that point) which in turn lets us fix a first order quasilinear system of PDEs admitting a hyperbolization, and in that you can always find an observer that sees an event horizon.

The formation of the BH creates a dynamical spacetime with an acceleration between observers before and after the collapse, and that alone is sufficient to produce an event horizon.

Abolishing 'c' (as a general free parameter defined at every point; the definition can even vary by location in spacetime) seems a lot harder to swallow than abolishing event horizons. If you accept 'c', then while abolishing Schwarzschild event horizons specifically is pretty easy (nonzero J or not-always-zero Q at all physical compact dense objects, for example), abolishing all event horizons requires a lot of contortions to avoid immediate conflict with local experiment, much less astrophysical observation.


> There's plenty of hot dark matter coursing right through you right now !

That statement is exactly the problem as I see it, as dark matter is an attempt to save an existing paradigm using a trick that makes use of unknown but conceptually understandable matter.

The same was and is true for Einstein's cosmological constant: It's a hack that was necessary to make a theory match with the observations.

Introducing hypothetical/invisible matter to make a theory fit observations does not mean that this matter really exists.

I did not say that scientists think that there is nothing more to discover, just that there is a tendency to try to fix up existing theories instead of accepting that they might be wrong. I'm no expert in particle physics or relativity (my field is quantum mechanics), so I'm not able to judge the merit of different theories involving dark matter, I'm just not convinced that dark matter / dark energy is real. If anyone shows me compelling experimental evidence I'll be happy to change my mind.

So far we haven't seen any convincing arguments for the existence of dark energy or dark matter though, and I think there's a chance that they end up as the 21st century equivalent of the "ether".


I'm pretty sure raattgift was referring to neutrinos when he said hot dark matter was "coursing right through you". He wasn't assuming the existence of any speculative form of dark matter.


Yes. "Hot" because neutrinos move quickly compared to the speed of light, "dark" because they do not feel electromagnetism, and "matter" because they couple to the metric.

They explain the anomalous momentum in beta decays, among other things, and are still difficult to detect.

To explain the anomalous momentum we infer around large scale structures at z << 1, it's pretty reasonable to consider neutrinos or neutrino-like particles that are "cold" -- moving slowly compared to the speed of light, thus more likely to "hang around" in a region of spacetime instead of quickly running away to infinity. Although they interact very weakly with matter, they still impart momentum, so hot dark matter would tend to smear apart gas clouds rather than encouraging them to collapse into denser objects like stars. Likewise, it is perfectly reasonable to search for them in ways analogous to how the neutrino itself was searched for experimentally and observationally, and like with the first detection of the neutrino, it is liable to take time to detect or let various non-detections exclude all the regions of the particle mass vs nucleon cross-section parameter space.

Moreover, the search for this sort of cold dark matter does not preclude concurrent searches for other possibilities.

So I can't agree with ThePhysicist that there is a problem here, other than that there is apparently a communications gap that affects even people with backgrounds in quantum mechanics.


Dark matter hasn’t been proven to exist. Here is another hypothesis[1].

[1] https://www.quantamagazine.org/20161129-verlinde-gravity-dar...


Verlinde's paper [ https://arxiv.org/abs/1611.02269 ] is the wrong place to be looking for an argument against dark matter.

It starts with simply accepting that dark energy is part of the unremovable background of spacetime (in particular, his paper considers a de Sitter vacuum with a straightforward positive cosmological constant, and treats it as fully described by classical General Relativity (GR)) and then proposes an emergent thermal-entropic force that may produce deviations from General Relativity on galactic and cluster scales by way of a long-distance entanglement among the fundamental material from which this take on perturbative gravity emerges.

That fundamental material -- strings (sec 2.3) -- generating the thermal-entropic force does not feel electromagnetism, and so is dark. That fundamental material additionally is a component of the matter tensor in the low-temperature General Relativistic limit of his theory (eqn 4.23).

Verlinde's theory is, in fact, a "cold" "dark" "matter" one; as the cosmological constant is involved too, it unsurprisingly reproduces the successes of \Lambda-CDM. (CDM == cold dark matter). The only qualification of this is that "matter" is the standard GR definition of everything that is not the gravitational field content. It's not much like the matter of the Standard Model (SM), while WIMPs are expected to be by virtue of some extension of the SM. (MACHOs, by contrast, can have very little to do with SM or extended-SM particles.)

Nevertheless, the major difference between Verlinde's theory and the standard cosmology is the emergence of the standard cosmology from string theory.

Quoting the paper: "in our ... framework one has to add a dark component to the stress energy tensor, which behaves very much like the cold dark matter needed to explain structure formation, but which in its true origin is an intrinsic property of spacetime rather than being caused by some unknown particle".

In other words, the paper proposes that some (sec 8.2) of the dark matter of the standard cosmology can emerge from strings in a way that does not produce a particle like a heavy neutrino.

(The paper is interesting in many other ways, though.)


The breathlessness of this article is fairly annoying.

LIGO wasn't really set up to confirm GR around black holes. It was designed to study highly energetic, high-curvature gravitational phenomena where it would be expected that there might be deviations from GR. Measuring deviations from GR predictions is exactly why you'd build the experiment, and isn't "ironic" at all (and not even in the Alanis Morissette sense, since finding something new would be more like having a party on your wedding day rather than raining on it).

GR is also fully expected to break down at the central singularity of a black hole. The curvature of space-time becomes infinite there, along with the tidal forces. At the very least it's expected that quantum gravity would smear this out.

The problem of black hole entropy at the event horizon of the black hole has also been known for decades and is one of the drivers behind doing research like LIGO. The "firewall" problem is recently all the rage in the west coast theoretical community, but it's been known for some time that we can't make sense of black hole entropy entirely classically with GR, so finding non-GR effects near the event horizon is at least hoped-for, if not expected, and LIGO is precisely the kind of experiment that could shed light on that.

It's legitimately very exciting, but it's the result of methodically grinding away at a very hard problem for decades.


2.9 sigma is hardly evidence of anything in fundamental physics. There was a 4 sigma evidence of diphoton excess from ATLAS and CMS last year which went away this year. 3-4 sigma discrepancies come and go. It's not for nothing that physicists have the discovery criteria set at 5 sigma.

What's more, one should be extremely skeptical when observations seem to violate long held physical theories. The superluminal neutrinos from OPERA ostensibly had >5 sigma evidence but nobody (correctly) took it seriously as it violated special relativity. Unsurprisingly, it was ultimately traced to a loose GPS cable.


Minor comment from another physicist here, not in this field but from what my friends from this field say, most people expected hints of quantum gravity to come specifically from black holes, so if there is anything new to be learned about GR's limits, looking at black holes is the right place to look.


I've had this idea in my mind how Black Holes could be connected to universe generation. It came about when I learned that the known universe would be a black hole if its mass was concentrated in the center, i.e. its size is about the same as the event horizon for such a mass.
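
As a rough check of that claim (my own sketch, assuming a critical-density universe and a Hubble constant of about 70 km/s/Mpc), the Schwarzschild radius of the mass inside the Hubble sphere comes out equal to the Hubble radius itself:

    import math

    G = 6.674e-11                 # m^3 kg^-1 s^-2
    c = 2.998e8                   # m/s
    H0 = 70e3 / 3.086e22          # 70 km/s/Mpc expressed in 1/s

    R_hubble = c / H0                                 # ~1.3e26 m
    rho_crit = 3 * H0**2 / (8 * math.pi * G)          # critical density
    M = rho_crit * (4 / 3) * math.pi * R_hubble**3    # mass inside the Hubble sphere
    r_s = 2 * G * M / c**2                            # Schwarzschild radius of that mass

    print(R_hubble, r_s)   # equal; algebraically r_s = c/H0 at critical density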

Thinking backwards, obviously the universe at some point would have been described as a black hole by GR. Then of course spacetime expansion comes into play, which somehow makes it into not-a-black-hole.

So here is the idea: What if a Big Bang is exactly what happens when matter falls into itself until the original spacetime continuum breaks? I.e. the energy of the original structure forms and gets linked into a new spacetime continuum - part of it as dark energy that expands the new spacetime, part of it as normal energy and matter.

Is there anything we know makes my idea impossible? If it were true, would there be a chance that we could combine our empirical knowledge of the Big Bang with this new empirical knowledge of gravitational waves to come up with a testable unified theory (i.e. Quantum Gravity)?


The idea that the universe is "inside" a black hole in another universe is a pretty old idea, but there have been several papers on it in recent years, such as this one [1], explained here [2], which offers a holographic origin.

[1] https://arxiv.org/abs/1309.1487

[2] https://www.perimeterinstitute.ca/news/black-hole-birth-univ...


> the known universe would be a black hole if its mass was concentrated in the center

The universe doesn't have a "center". The universe did have a much higher density right after the Big Bang, but it was expanding rapidly; that's why it was (and is) not a black hole.

> Is there anything we know makes my idea impossible?

Yes, the fact that it's based on a misconception about the universe's spacetime geometry. See above.

There are certainly "bounce" models being considered for what preceded the Big Bang (although they're by no means the only models being considered). But they don't work like what you are describing.


My point is that the way the universe works, i.e. spacetime expansion, inflation and acceleration (dark energy), could all be governed by processes inside a black hole's singularity - something we afaik don't have a good model for yet. It's only a black hole from the reference point of the parent universe. I don't mean the old bounce model, more like bubbles around a water hose that get smaller the farther away from the source - i.e. a stellar black hole creates a mini universe through its own spacetime rip.


> My point is that the way the universe works, i.e. spacetime expansion, inflation and acceleration (dark energy) could all be governed by processes inside a black hole's singularity

The singularity doesn't have an "inside". See below.

> something we afaik don't have a good model yet

The models that are being looked at get rid of the singularity altogether. They don't try to model it as being made up of internal parts.

> a stellar black hole creates a mini universe through its own spacetime rip.

Some physicists have considered models in which black holes give birth to "baby universes" (Hawking and Lee Smolin are two that come to mind). But these models don't "rip" spacetime; they remove the singularity, which in the standard classical GR model is just a spacelike surface--a moment of time--that represents a boundary of spacetime, and instead just extend the spacetime further on, into the spacetime of the new universe.


Going quickly through the awesomely titled paper [1]: they fit a template to the data and obtain something like 2.9 sigma for their best value, without an obvious account of how they deal with the look-elsewhere effect. On the other hand, this is probably the window on nature that is worst understood. We understand gravity at solar-system field strengths and distances very well, and we have great data from particle physics, but until last year our only evidence for the high-field regime of gravity came from pointing telescopes toward astronomical objects - and note that telescopes work electromagnetically; they are using the wrong force.

I think this is exciting, but only the first step. With this, one can deal with the look elsewhere effect by pointing to this paper and using their analysis, but I wouldn't think of this by itself as a hint towards deviations from general relativity.

[1] Abedi &al. "Echoes from the Abyss: Evidence for Planck-scale structure at black hole horizons" https://arxiv.org/abs/1612.00266
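
For readers unfamiliar with what "fit a template" means here, this is a bare-bones toy (my own, nothing like the actual analysis in the paper): slide a waveform template across a noisy series and record the normalized correlation at each offset.

    import numpy as np

    def matched_filter_snr(data, template):
        """Normalized cross-correlation of data with a shorter template."""
        t = template - template.mean()
        t /= np.linalg.norm(t)
        n = len(t)
        snr = np.array([np.dot(data[i:i + n], t) for i in range(len(data) - n)])
        return snr / data.std()

    rng = np.random.default_rng(1)
    template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256)) * np.hanning(256)
    data = rng.normal(size=4096)
    data[2000:2256] += template                          # bury a quiet "echo" in the noise
    print(np.argmax(matched_filter_snr(data, template)))  # peaks near offset 2000

The look-elsewhere effect then enters because the analyst is free to try many templates and many offsets, so the largest correlation found has to be judged against the distribution of maxima over all those trials, not against a single trial.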


>"if random noise is behind the patterns, says Afshordi, then the chance of seeing such echoes is about 1 in 270, or 2.9 sigma. To be sure that they are not noise, such echoes will have to be spotted in future black-hole mergers."

We are getting closer and closer... So finally we see a correct interpretation of a p-value in the media, but the connection to the following sentence is not clear so I am not sure the meaning was really understood.

How does spotting more such echoes allow us to "be sure they are not noise", and how does this relate to that 1/270 number?

If the probability of such an observation was 1/1.7 million assuming a random noise model (rather than 1/270), would that mean we could "be sure it was not noise"? Shouldn't that depend on how well the observations could be fit by alternative models?


General relativity makes certain predictions; we have now measured a deviation from those predictions, assuming a specific model of the source of the observation. So more measurements will help to build up confidence that the deviations are not just measurement errors.

After that you have to explain those deviations, and the two most obvious options are that either the model of the event or the theory itself is wrong, or at least not accurate enough to describe the observations. Then you can look for changes to the model, varying masses or the number of objects involved, or maybe even look at entirely different possible sources of gravitational waves to explain the observations.

But if you come back empty-handed, you will have to take seriously the option that a modification of the theory is required. For example, if your theory only allows a quantity to decrease monotonically over time but your observations show oscillations, then you have pretty strong evidence that your theory requires modification.


Sure, but if every other explanation is even less likely, random noise is still the best guess.


You throw a die 3 times. It always shows 6. How do you know the die is loaded, and that it wasn't a fluke? Repeat the experiment, and see if it starts looking random (the pattern disappears) or if the pattern is strengthened (it keeps showing 6).
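
A toy version of that point (my own sketch): the chance of a fair die showing 6 on every throw shrinks fast as you repeat the experiment, so repetition separates a fluke from a genuinely loaded die.

    from fractions import Fraction

    def p_all_sixes(n_throws):
        """Probability that a fair die shows 6 on every one of n_throws throws."""
        return Fraction(1, 6) ** n_throws

    for n in (3, 6, 12):
        p = p_all_sixes(n)
        print(n, float(p), f"1 in {p.denominator}")
    # 3 throws: 1 in 216 (comparable to the article's 1 in 270)
    # 6 throws: 1 in 46656; 12 throws: 1 in ~2.2 billion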


Also, I couldn't get the original documents at the time, but you reminded me of this:

>'The simplest assumption about dice as random-number generators is that each face is equally likely, and therefore the event “five or six” will occur with probability 1/3 and the number of successes out of 12 will be distributed according to the binomial distribution. When the data are compared to this “fair binomial” hypothesis using Pearson’s χ² test without any binning, Pearson found a p-value of 0.000016, or “the odds are 62,499 to 1 against such a system of deviations on a random selection.”' https://galton.uchicago.edu/about/docs/2009/2009_dice_zac_la...

The point is you will always find deviations (with extremely low p-values) if you look hard enough. It is about collecting data as carefully as possible, and determining which model fits best, not which fits perfectly.


> The point is you will always find deviations (with extremely low p-values) if you look hard enough

I don't know how you can get that point from the article instead of 'an 1894-era die is biased but you need a lot of statistical power in order to see that'.


I think the lesson is clear... the more messed up your methods, the easier it is to see a deviation (i.e. the historical vs modern experiment). Also, where do you get this:

>"you need a lot of statistical power in order to see that"


For a rational person, it depends who gave you the dice. It sounds like you just put a low prior on dice being loaded since you are so used to getting well made dice from neutral parties...


You can factor in the expected false positive base rate and the confidence to figure out by how much you should boost your prior expectation of the theory being correct. "Being sure" is shorthand for "having a prior very close to 1 or 0".
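
In code, that bookkeeping is just Bayes' rule (a minimal sketch; the numbers below are made up purely for illustration):

    def posterior(prior, p_data_given_theory, p_data_given_noise):
        """Posterior probability of the theory after seeing the data."""
        num = prior * p_data_given_theory
        den = num + (1 - prior) * p_data_given_noise
        return num / den

    # e.g. a skeptical 1% prior that something beyond GR is at work, an assumed
    # 50% chance of seeing such echoes if it is, versus the quoted 1-in-270
    # chance under pure noise:
    print(posterior(0.01, 0.5, 1 / 270))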


"The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but 'That's funny...' " -- Isaac Asimov


Does any layman expect GR NOT to break down? To me it's intuitively only correctly describing emergent phenomena in the limiting case of large scales; it's bound to be less accurate than something that is correct on small scales.


They're hopelessly out of their depth. This kind of talk makes zero sense


No aspersions here. I'm keeping the faith, but can anyone recall what these allude to?

1) Utah has solved the energy crisis on a table top with deuterium

2) That bump in the collider data is looking pretty odd

3) Remind your child to chelate if the autism acts up

4) Wow those neutrinos are moving so fast

5) Bigotry can stop if we'd go door to door and talk about it

6) Arsenic can kill but it enables growth for at least one family

7) This theory will be perfect if we get rid of Λ

In all fairness, this topic is a little different because we know for sure something big has to happen eventually to reconcile QM/Gravity.


It's weird that they only had the idea of a firewall in 2012. I had this exact idea maybe ten years prior, while I was a school student. A fairly obvious one if you think about it - some photons will orbit the black hole.


It's potentially more complicated than that.

Someone on Reddit asked a brilliant question a while back - what happens to quantum fields at the event horizon?

In QFT, fields are everywhere. But to support a field, you need a mechanism that allows causal propagation - which is exactly what isn't allowed across an event horizon.

So at the very least you have a discontinuity where three and possibly all four fundamental forces stop working, and which is separate to any hypothetical relativistic singularity.

Whatever is left is going to be some kind of unimaginably weird sub-quantum soup.

I don't know if that's the same firewall that was invented in 2012. But the takeaway is that relativity isn't complete enough to model black holes. You absolutely need to include quantum effects - and when you do, things get very strange indeed.


> to support a field, you need a mechanism that allows causal propagation - which is exactly what isn't allowed across an event horizon.

That's because the event horizon is a lightlike surface. Causal propagation isn't allowed across any lightlike surface. For example, there are lightlike surfaces that contain the event where you are right now. Causal propagation isn't allowed across them. Yet QFT works just fine in your vicinity.

What the Reddit questioner apparently did not realize is that the event horizon is defined globally, not locally. In other words, its location is defined in terms of the global properties of the spacetime, not in terms of any local properties. Locally, the EH is just a lightlike surface, and is no different, from a QFT point of view, from any other lightlike surface.


That's a neat way of putting it (at least broadly) in the first four paragraphs. Thanks.

I'd add that the event horizon is the boundary below which the propagators of causality can only move further below the horizon itself.

The "unimaginably weird sub-quantum" part doesn't follow from those paragraphs, though, in the region outside the event horizon, or even a little ways inside. It is however a fair way of describing the problem of the singularity; the goal of nearly all quantum gravity research programmes is keeping the singularity from ever existing, and "weird microscopic behaviour" is a reasonable way of describing what that may entail.

I made some comments relevant to your last paragraph elsewhere in this discussion.


In QFT, fields are everywhere.

The fields in quantum field theory are mathematical tools, they are not physical entities.

But to support a field, you need a mechanism that allows causal propagation - which is exactly what isn't allowed across an event horizon.

But that is only a one-way thing - the future light cones of events inside the event horizon are contained inside the event horizon but the future light cones of events outside the event horizon certainly overlap the inside of the black hole.


Can you explain what you mean by

> The fields in quantum field theory are mathematical tools, they are not physical entities.

If you mean that the world itself is not the mathematics then I can accept it (although I might resist the philosophical rejection of things like Tegmark's Mathematical Universe Hypothesis), but if you mean that what we actually have is a bunch of particles doing one thing or another, I must object, because it is extremely difficult to explain nonperturbative phenomena like instantons or sphalerons if you have that attitude.


Quantum fields have gauge symmetry, so they do not have any definite values; you can assume them to have more or less any value as long as you are consistent. My views on this topic are heavily influenced by Nima Arkani-Hamed, and here [1] are 30 seconds from a lecture where he is very explicit about this. But I am aware that this is not really a topic with universal agreement, at least that is what it looks like to a non-physicist like me. And looking at your user profile I am pretty sure you are going to tell me that I am asking for too much realism.

[1] https://www.youtube.com/watch?v=tnA7bh7dTqY&t=1669


> The fields in quantum field theory are mathematical tools, they are not physical entities.

Still, they only maintain physical relevance as long as they are continuous, no? Otherwise you literally have a break in reality.


> a break in reality.

Such as a singularity (e.g. gravitational)? I think in Physics (just as when analysing functions), the interesting things happen when you approach such limits.


Do you mean that they can not have holes?


> A fairly obvious one if you think of that - some photons will orbit the black hole.

That some photons would orbit black holes is fairly well known. There's an "innermost stable circular orbit"[0] for massive particles, and a "photon sphere"[1] where photons can orbit; inside the ISCO, stable (circular) orbits do not exist. These are at larger radii than the event horizon and so would be unrelated to the firewall proposal.

[0] https://en.wikipedia.org/wiki/Innermost_stable_circular_orbi...

[1] https://en.wikipedia.org/wiki/Photon_sphere


Photons orbit the black hole in the photon sphere, which is located outside of the event horizon, at 1.5 times the Schwarzschild radius in the case of a Schwarzschild black hole. As far as I know such orbits are also unstable. For a photon to orbit the black hole in the photon sphere it has to move exactly tangentially, but to reach the photon sphere the photon needs to move with at least some radial component, and a photon can of course not just fire its rockets when reaching the photon sphere to enter the orbit. The idea of the firewall is very different from the idea of photons orbiting a black hole, which was studied a long time ago.
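
For a sense of scale (standard formulas, numbers of my own choosing): the Schwarzschild radius is r_s = 2GM/c^2 and the photon sphere sits at 1.5 r_s.

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_sun = 1.989e30     # kg

    def schwarzschild_radius(mass_kg):
        return 2 * G * mass_kg / c**2

    def photon_sphere_radius(mass_kg):
        return 1.5 * schwarzschild_radius(mass_kg)

    # For a 30-solar-mass black hole, roughly the scale of the LIGO progenitors:
    M = 30 * M_sun
    print(schwarzschild_radius(M) / 1e3, "km")   # ~89 km
    print(photon_sphere_radius(M) / 1e3, "km")   # ~133 km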


Wouldn't you get orbiting photons simply by having them emitted from falling bodies, i.e. hot gas?

Any photons emitted in the right direction by incoming material in the right place would enter an orbit; and given that hot material emits photons in all directions continuously, and there's a steady supply of infalling material, there's going to be a continuous source of orbiting photons.


IANAP so don't take my word for it, but I would assume the photons would have to be emitted EXACTLY tangentially at EXACTLY the photon sphere. Just a tiny bit earlier or later, or with a tiny bit of radial momentum, and the photon would eventually escape or fall into the black hole. And a real black hole is not a perfect Schwarzschild black hole: it will almost certainly have at least some charge or angular momentum, and surrounding matter will change the metric somewhat. I can not tell whether that makes it more or less likely for a photon to stay in orbit, but I tend towards less likely, because this seems to add more possibilities to randomly push the photon out of its orbit one way or another.


All orbits should 'eventually' decay as more matter comes in and perturbs the orbit. Stability is a matter of how quickly it decays, not if it decays.


My understanding is that photon orbits are uniquely unstable. By virtue of traveling at a constant speed (the speed of light), photons lack the self-regulating orbital mechanics of matter (where going deeper into the gravity well speeds you up, and going farther away slows you down).

Dealing with spherical cows, before accounting for drag, you can describe the possible space for valid orbits as a volume, and you could describe the bounds of the velocity vector via another volume, for all positions and velocities at those positions that describe - at least on paper - mathematically perfectly stable orbits.

Dealing with a photonic cow, before accounting for drag, you can describe the possible space for mathematically stable orbits as not a volume but an area - the surface of a sphere. The velocity vector has a very specific magnitude (c), and a very limited range of possible directions (exactly perpendicular to the normal of the sphere's surface). The shape you would use to describe the bounds of this velocity vector has 0 volume and 0 area; it only has a length.

Both cows can be further perturbed by drag, tidal effects, non-point gravity sources, etc. to further worsen the problem. Individual collisions with particles of space dust will push the spherical cow from one mathematically stable orbit to another, until eventually they add up enough to most likely deorbit it. Individual collisions with particles of space dust will instead take the photonic cow from one mathematically unstable orbit to another, with a decent chance of it ending up on an escape trajectory instead.


Where photons orbit a black hole is called a Photon Sphere.

https://en.m.wikipedia.org/wiki/Photon_sphere

It's at 1.5 times the radius of the event horizon for a non-charged, non-rotating black hole. But it's more of a theoretical than practical construct. You wouldn't expect to actually find any photons there because the orbit is unstable and any deviance from a perfect orbit would accumulate exponentially.


Perhaps, then, you should consider that you're misunderstanding the situation? A firewall is not "orbiting photons". See https://en.wikipedia.org/wiki/Firewall_(physics).



