I see a number of people commenting on the size of the proof (roughly 900 pages) which is not uncommon in this particular sub-field of PDEs. For context, I had the distinct privilege of studying under Sergiu Klainerman for my PhD on this topic. My own dissertation was about 600 pages. From my personal experience, I have come to understand a few factors that contribute to large proof sizes.
1. A lot of work is on inequalities involving integrals with many terms. These are difficult to express without taking up substantial space on the page. Some inequality derivations themselves might take multiple pages if you want to go step-by-step to illustrate how they are done.
2. Writing a proof of this size is not unlike building a medium-to-large size codebase. You have a lot of Theorems/Classes that need to fit together, and by employing some form of separation of concerns you can end up with something quite large and complex.
3. Verifying this kind of proof isn't usually done all at once. A lot of verification happens on the individual lemmas before they're pieced together. Once the entire paper is written, verification is more of a process where you rely on intuition for what the "hard parts" of the proof are and drilling down on those. But when writing the paper, you must of course account for all the details regardless of whether they are "easy" or "hard", and there can be many.
Having said all this, I have not read their paper and it has been 5 years since I was in this space. This is a truly remarkable accomplishment and the result of decades of hard work!
I'll end with an amusing anecdote. A fellow grad student, when deciding between U of Chicago and Princeton for his PhD program, was pitched by a U of Chicago professor who said something like "Of course you could go to Princeton and write 700 page papers that nobody reads." When this story was shared during a conversation over tea at Princeton, another professor retorted, "Or you could have gone to U Chicago to work with him and write 70 page papers that nobody reads!"
If these proofs really are like codebases, wouldn't we eventually expect these proofs to be written as software?
You'd install lemmas using a package manager and then import them into your proof.
You can then install updates to proofs. Maybe someone has found the proof to be wrong, in which case you either find a different proof or invalidate the lemma so all the dependents can be invalidated automatically.
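That invalidation idea is essentially a reachability walk over a dependency DAG. Here is a minimal, hypothetical sketch in Python (the lemma names and the data structure are purely illustrative, not from any real proof-management system):

```python
# Hypothetical sketch: lemmas as nodes in a dependency DAG, where
# retracting one lemma automatically invalidates everything built on it.

def invalidate(lemma, depends_on):
    """Return the set of lemmas invalidated when `lemma` is retracted.

    `depends_on` maps each lemma to the lemmas it uses; we reverse those
    edges and walk them to collect all transitive dependents.
    """
    dependents = {}
    for name, deps in depends_on.items():
        for d in deps:
            dependents.setdefault(d, set()).add(name)
    invalid, stack = {lemma}, [lemma]
    while stack:
        for child in dependents.get(stack.pop(), ()):
            if child not in invalid:
                invalid.add(child)
                stack.append(child)
    return invalid

# Illustrative dependency graph: the main theorem rests on two lemmas,
# one of which rests on a basic estimate.
deps = {
    "energy_estimate": [],
    "decay_lemma": ["energy_estimate"],
    "morawetz_bound": [],
    "main_theorem": ["decay_lemma", "morawetz_bound"],
}
```

Retracting `energy_estimate` would then invalidate `decay_lemma` and `main_theorem` automatically, while `morawetz_bound` survives untouched.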
That is not too far from what proof assistants actually do.
Agda is a dependently typed language (strongly resembling Haskell in syntax, but with a lot more Unicode) where you rarely, if ever, "run" your programs but rather just check whether they "compile", i.e. type check. If it does type check, you proved what you set out to prove.
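For a flavor of what this looks like (using Lean rather than Agda, but the propositions-as-types idea is the same): the statement is a type, the proof is a term inhabiting it, and the type checker accepting the file *is* the verification.

```lean
-- The statement `m + n = n + m` is a type; the term on the right is a
-- proof of it. If this file type checks, the theorem is proved.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```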
And the individual lemmas could have author and chronology metadata attached, then you could plot the DAG as a roadmap/tech tree of sorts with an axis corresponding to time.
You'd be able to see at a glance the year a result entered the public domain, who authored it, etc.
I have wondered this as well. In theory, I want to say yes. But in practice, at least in this subfield, there are so many unimportant details that might make this really difficult.
It's not a perfect analogy, but some parts of the proof feel more like neural networks than procedural algorithms. So instead of verifying the composition of two procedural algorithms g(f(x)), you have something more like g(nn(f(x))), where nn is some sort of ML model / neural network. Interestingly, we are starting to see progress in importing ML models as libraries (eg Huggingface), so maybe that can carry over someday? I don't know.
Another challenge is simply a practical one. You would need someone heavily interested in both black hole mathematics and formal proof verification to be able to do this. Both of these require years of training.
Can anyone build on results that are so hard-won and complex that understanding them is as much effort as learning the basics of some entire fields of study?
Yes. That's how basically all expansion to the field of human knowledge is constructed today.
A human can only ingest and understand so much information in so much time. As Matt Might[1] eloquently described in "The Illustrated Guide to a PhD" [2], learning the basics of an entire field of study is what a bachelor's degree is for, a master's degree gives you a specialty, and reading research papers like this one as a graduate student is how you get to the edge of human knowledge...only then can you start building on that sum of knowledge.
To summarize, a PhD often (maybe even typically) does not bring someone to the edge of human knowledge. Often a PhD gets people near it, but given the sheer amount of scientific literature out there, it's difficult to know where the edge is.
I believe more than a few scientific PhDs do as well, and more than a few can become journal articles, or amount to a compilation of previously successfully accepted journal articles.
This one, for example, is an outstanding PhD thesis by an outstanding historian:
Free Soil, Free Labor, Free Men: The Ideology of the Republican Party before the Civil War
Some graduate students will likely spend a substantial part of their PhD understanding this paper. They will learn a lot in the process, and then they can contribute by either extending the result or finding a way to simplify a part of the proof. Or if they have a (very) related interest, they may be able to adapt some of the techniques to the problem they're interested in. Slowly over time, through this process, the knowledge might diffuse to other less related areas.
What an insane effort. And to think that the peers who had to review/accept this work had to check and verify the new math the authors came up with for the proof.
That’s my one issue: is it dubious that we have to come up with new math to prove things? Or is that reasonable, especially when dealing with such exotic objects as black holes?
Would it be possible to create a proof of a proof?
Like : Given this list of assumptions -- This list of conclusions is proven to be true.
Maybe even with a confidence rating.
Then you could package your proof inside the proofproof. Thus sparing us the effort of reading it, and maybe even making your proof more widely appreciated.
In principle you could provide what's called a Probabilistically Checkable Proof. This would be a long string of bits, and a verifier would only have to sample 3 random bits to check the validity of the whole thing (with high probability).
In practice we don't even make "normal" machine checkable proofs. They are just too much work. Maybe in the future when the machines are better at understanding us.
It probably wouldn't be too surprising if at least one error was lurking somewhere in those 900 pages. But if such an error were found, the authors would almost certainly be able to address/patch it very quickly. They have very good intuition for the "hard" parts of the problem and focus so much on those parts that it would be highly unlikely to overlook a critical error.
Karl Popper would have been sent spinning at a headline like that.
Our knowledge of Black holes as physical objects is hardly affected by someone mathematically “proving” that a mathematical model is consistent. Only empirical observations/experiments can change physical fact.
This confusion of mathematics with physics is not helping us understand the Universe as it really is. At most, this may lead to a prediction that may be falsified, but we kinda had the “stability of black holes” on the todo list already.
I know people who stare at protons to see if they really are stable. Current lifetime is around 1e39 years or something…
Now I’m ranting on (sorry, this is bad HN etiquette). I both enjoy and regret the seeming reverence non-physicists have for physical “laws”. Applicability is not the same as universality. Great mathematical models of physical phenomena such as Electromagnetism, General Relativity and the whole Quantum Field Complex are testaments to human imagination and resourcefulness, but they are just someone’s mathematical representation of an idea about how the universe acts. We should be more aggressively prodding at the weak points of these models, but also seek to explore novel representations of the same phenomena in the hope of new predictive power from changing the basis.
I also hoped for more “general searches” and robotic experimentation, allowing experiments without the “bias” of the theorist’s imagination influencing the way we look for new physics.
Didn't this paper exactly prod a model's weak point until they eventually determined the model was consistent with the observed universe? Isn't that exactly what you're asking for?
That is where Popper comes in. According to him, we can’t prove a physical model, only falsify it. So we can say that the authors derived a prediction from the model that can be tested. If a black hole is observed to be unstable, it will falsify the theory.
My issue is with link title, not the work itself.
You would be surprised how often Popper's own framework fails to describe actual scientific practice. I get what you mean, but it remains a romantic ideal, and the reality is much messier. The title says that, given the current understanding of black hole dynamics, if we plug the numbers into the model, the resulting dynamics can be shown to be stable, which is exactly what the result set out to do. Hence no Popperian borders have been crossed.
>In a 912-page paper posted online on May 30, Szeftel, Elena Giorgi of Columbia University and Sergiu Klainerman of Princeton University have proved that slowly rotating Kerr black holes are indeed stable.
I wouldn't have expected it to be a short paper, but... 912 pages!
While I'm not a cosmologist, I usually enjoy reading through the papers that pop up. I think I might end up skipping this one and just stick to the Quanta article, unfortunately.
Hopefully Anton Petrov does a summary video on the paper.
> an 800-page paper by Klainerman and Szeftel from 2021, plus three background papers that established various mathematical tools — totals roughly 2,100 pages in all.
It basically comes in three volumes. What an astonishing amount of work.
The people behind the above proof and similar works do research relatively far away from cosmology, in a field usually called mathematical General Relativity. In particular, they are usually mathematicians by training, not physicists.
Sad side note: Most physicists (even many of those doing research in General Relativity) have never heard of mathematical GR.
The preprint is in math.AP (Analysis of PDEs) alone. Most of the preprints by the three authors are found in math.AP, give or take some in math.CA (classical analysis & ODEs) and math.DG (diffgeo), with significant crossposting into math-ph and gr-qc.
Amusingly, Alan Coley's [open problems in] "Mathematical General Relativity" <https://arxiv.org/abs/1807.08628> is only in gr-qc (§2.4 is relevant to the whole discussion here) and although he posts almost all his papers there, he is very much a mathematician <https://www.dal.ca/faculty/science/math-stats/faculty-staff/...>. (The earlier Chruściel, Galloway, Pollack "Mathematical general relativity: a sampler", citation [9] in Coley, <https://arxiv.org/abs/1004.1016> is in gr-qc, math.AP, and math.DG.)
> (even many of those doing research in General Relativity) have never heard of mathematical GR
Postdoc research? How did they avoid contact with mathematical GR? Hyper-hyper-specialization into an area where one diagonalizes almost as often as one breathes or spends all one's time in Cactus? If their shelves are surprisingly barren, they are liable to get thrown a copy of Wald, John M. Stewart (CUP 1991), and/or any of Choquet-Bruhat's books (esp. OUP 2008), and then Wald again.
I obviously can't speak for everyone but, yes, at least it's been my impression that mathematical GR as a research field in mathematics is quite disconnected from physics and its existence, to some, completely unknown. Only very few people like Chruściel manage to bridge the gap and talk to and collaborate with people on both sides.
To give a few data points:
If you mostly do research on e.g. gravitational waves (think LIGO) you're usually far away from the typical geometric-analytical questions surrounding the foundations of GR and much more concerned with e.g. numeric simulations, elimination and modeling of noise and systematic errors, machine learning and such.
Years ago, I talked to a professor in theoretical high-energy physics (mostly string theory / supergravity) and asked him for career advice because I wanted to do research on gravity and especially black holes. He told me that "GR as a research field is dead" and that, if I wanted to study black holes, there was "no way around string theory". Sure, he was probably biased and clearly living in a bubble relatively far away from "classic GR" but it still says a lot I would say.
I mean even the people working on the Event Horizon Telescope don't know too much about mathematical/foundational questions in GR. They are much more interested in modeling and solving their specific problems, in this case: radioastronomical problems, problems in relativistic plasma physics etc. (Source: I have spoken to two leading scientists at the EHT about mathematical questions surrounding GR.)
My understanding is that GR forms the basis of the discipline of cosmology, but your comment implies it doesn't. Could you expand on that a bit? What differentiates GR and 'mathematical GR'? Why is the formation/stability of black holes not considered to be a part of cosmology, when the understanding of black holes is central to understanding how the universe formed?
e.g. "General Relativity forms the basis for the disciplines of cosmology (the structure and origin of the Universe on the largest scales) and relativistic astrophysics (the study of galaxies, quasars, neutron stars, etc.)"
As the sibling said, even if you consider cosmology a subfield of GR (which it is only to a small degree), not all relativists will be cosmologists. But you wanted me to expand on this a bit, so here you go:
Cosmology is concerned with the cosmos as a whole, i.e. the largest scales of the universe. GR looks at large scales, but not necessarily the largest ones. (A typical black hole is much smaller than a galaxy.)
But this is not the only difference between the fields: General Relativity is just one tool inside a cosmologist's toolbox. Most of them also need quantum field theory, statistics and image processing in their daily work. The reason is that, while Relativity by itself is usually only concerned with theory, and mathematical GR even more so, cosmologists have to match their theories to actual observations. They are interested in the history of the universe and in the large-scale (average) behavior of what you can see on the sky.
Meanwhile, mathematical GR is mostly a mathematical endeavor. People in that field tend to be mathematicians, with a background either in analysis / PDE theory or in geometry or both (geometric analysis). They are interested in physics-inspired questions but not exclusively so.
Mathematical GR seeks to rigorously derive mathematical properties of the equations underlying GR, such as the existence, stability, and long-range asymptotics of their solutions. It's largely a subfield of branch of math called "nonlinear wave equations".
and that surprised me, because it doesn't seem that far away in my eyes. Which is why I was genuinely asking for clarification. They might not be cosmologists, but this paper seems pretty close to cosmology, not 'far away' from it.
If it's not, I'm happy to be wrong, but I'd like to be corrected rather than just told I'm wrong.
The Quanta overview basically answered this: they consider the ratio of the black hole's angular momentum to its mass. A "slow" black hole is one where this ratio is much less than one. How much less than one it has to be, the paper's authors apparently don't derive.
tl;dr: gravitational waves do hard-to-calculate things inside a strong ergoregion around a fast-spinning black hole, and may do hard-to-calculate things to the black hole, and those calculations are outside the scope of the paper.
> [how to define] 'slowly rotating' [might be] pretty cool
It's for a pretty cool reason. Bear in mind that I am not a superhuman, so I have not read the 900-page document, only scanned through the most interesting bits. Also, forgive me, I got a bit lazy and have left in a bunch of redundancy below rather than trimming it out or reorganizing it.
The question in a nutshell is whether any setup of initial conditions that is reasonably similar to a Kerr spacetime will eventually settle into another set of conditions reasonably similar to Kerr spacetime. The conditions are roughly an (eternal!) Kerr black hole in the distant distant past, plus a bunch of gravitational radiation to its "left" travelling rightwards. The "present" is the collision of the waves and the black hole. The far far future is the remnants of the gravitational waves waaay to the black hole's right, plus the original black hole (albeit with a slightly different spin or mass). There is never anything but gravitational waves and the black hole. How do we know from the mathematics of General Relativity that the far-future black hole still spins at all? Or that it hasn't self-destructed? Does this hold up even as we add more gravitational waves coming from different directions? That's what the authors set out to show.
Lumpy, slowly rotating noncompact (i.e. entirely outside of their Kerr horizons) self-gravitating objects see their surfaces smooth out over time through various processes. Stars find themselves in hydrostatic equilibrium even as things fall into them. Planets are defined as being in hydrostatic equilibrium: their lumpiness and tumbling fades away and tends towards being nearly spherical and spinning around a single axis, even though rocks or ice might fall onto them from time to time.
A "peaceful" black hole horizon is highly analogous to the surface of a body in hydrostatic equilibrium. There are some differences here: we can throw "too much" at a rocky planet or star, totally disrupting them. You can't break apart a black hole in the setting under consideration (and likely not at all). You can "bounce" a rock with a glancing blow off a rocky planet, maybe turning the planet into something like Earth-Moon. Again, you can't split a black hole, and you can't bounce something off a black hole's event horizon. You can throw too much at a star and create a supernova. You can't explode a black hole that way. So we want to restrict our analogizing to the case where a relatively small body lands on (but does not destroy or blow chunks out of) a rocky body or star.
Also, while we can spin a star or planet so fast that it disintegrates, we can't do that to a black hole. A glancing blow interaction might speed up the spin of a star or planet, possibly making it spin so fast it rips apart. Spinning a black hole as fast as you can does weird things in the immediate neighbourhood of a black hole, but should not rip the black hole apart. The paper sets out to prove that.
The Kerr stability conjecture that is central to this paper considers the case where the Kerr black hole is perturbed by gravitational radiation "thrown" at it. The black hole relaxes back into Kerr (this is "stability for Cauchy data"). The paper also considers the case where, if you are sufficiently far from the perturbed black hole, you only see Kerr anyway when light-speed news of the event catches up to your gravitational-wave observatory (this is "stability for scattering data").
More colloquially, if you start with a slowly-spinning black hole and a (relatively distant) "mess" of gravitational waves, do you eventually end up with a slowly-spinning black hole (or something very close to it) or do you destroy your black hole?
Going from "of course the remains of this sort of interaction must include a spinning black hole, because linearizations of General Relativity show that to be the case for black hole mergers" ("smash two black holes together, you get one bigger black hole and a bunch of gravitational radiation and we can see all of this at LIGO/VIRGO and other telescopes") to "here's a rigorous mathematical description in the full non-linearized theory of General Relativity that is good up to arbitrarily large incoming gravitational radiation: you always get a bigger (slowly spinning) black hole" is roughly the subject of the long paper discussed at the Quanta link and found at <https://arxiv.org/abs/2205.14808>.
The size of a Kerr ergosphere -- the region outside the outer event horizon where things cannot remain still relative to the distant universe -- is determined by a combination of the black hole's mass and its spin angular momentum. If the mass/spin ratio is very high, then mass effectively determines the ergoregion's volume. A low spin also means the variation from the pole (where the ergo-effects are zero) to the equator (where they are strongest) is small.
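The pole-to-equator variation described above follows from the standard Boyer-Lindquist expression for the outer ergosurface, r_E(θ) = M + √(M² − a² cos²θ), where a = J/M is the spin parameter (geometric units, G = c = 1). A small illustrative sketch (function names are my own):

```python
import math

# Kerr geometry in geometric units (G = c = 1). The ergoregion lies
# between the outer event horizon and the outer ergosurface.

def ergosurface_radius(M, a, theta):
    """Outer boundary of the ergoregion at polar angle theta:
    r_E(theta) = M + sqrt(M^2 - a^2 cos^2 theta)."""
    return M + math.sqrt(M**2 - (a * math.cos(theta))**2)

def horizon_radius(M, a):
    """Outer event horizon r_+ = M + sqrt(M^2 - a^2)."""
    return M + math.sqrt(M**2 - a**2)
```

At the pole (θ = 0) the ergosurface touches the horizon, so the ergo-effects vanish there; at the equator it sits at r = 2M. As a/M → 0 the horizon itself approaches 2M, so the ergoregion between the two surfaces shrinks away, which is the "smaller and weaker ergoregion" of the slowly rotating case.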
With a low mass/spin ratio (fast spin), the larger ergoregion can "hold" more gravitational radiation, and the strong ergo-effect leads to a stronger interaction between the Kerr black hole and any gravitational wave inside its ergoregion. This is even harder to calculate than the hardest equations in the paper. It is conceivable that the "held" gravitational radiation could be concentrated while within the ergoregion, and that if the ergoregion were large and strong enough, and the incoming gravitational radiation were "just so", a second black hole could form in the ergoregion from the collapse of the concentrated gravitational waves. In which case [a] would it be flung away to the far reaches of the universe [b] crash into the Kerr black hole merger-style or [c] hang around in a stable orbit near the black hole? [a] and [b], no problem, we have an "asymptotic" Kerr spacetime. [c]: big problem, because the spacetime is now like a barbell, and that sheds gravitational waves. (A single rotating black hole does not shed gravitational waves).
These authors don't seem to say this outright, but they do cite an abundance of papers where the reasons proofs are harder as mass/spin approaches 1:1 are set out by their respective authors.
The Tome of The Black Hole's Mathematical Stability was so massive, at 912 pages, hardly a word in the English language escapes the horizon of events unfolding betwixt its covers.
You joke, but I recall my graduate level GR class homework. Just mechanically writing out solutions (the GR equivalent to writing the correct integral down and then evaluating it) to a much less exciting problem, starting with the most basic mathematical representation of the tensor field(s) involved would take pages and pages of handwritten work. It’s a fascinating field.
> Klainerman emphasized that he and his colleagues have built on the work of others. “There have been four serious attempts,” he said, “and we happen to be the lucky ones.” He considers the latest paper a collective achievement, and he’d like the new contribution to be viewed as “a triumph for the whole field.”
Come what may in peer review, this is an admirable attitude to have toward science.
I can’t help imagining a world where the papers on black holes become so large they themselves undergo gravitational collapse and become black holes.
Exercise for the reader: how large can a paper get before collapsing?
Seriously though, can someone comment on this and its significance?
Assuming it's confined to a one meter radius, the Bekenstein-Schwarzschild limit comes out to around `(1m)^2 (tau c^3 log(e) / 2hbar G)` or about 2^230 bytes (or 2^150 yottabytes, at the point where SI prefixes run out).
If you're limited to storing one bit per amu (or other unit of mass), rather than saturating the Bekenstein bound for your space-energy budget, you only really care about the Schwarzschild limit (`(1m) c^2 / 2 G * (1bit/amu)`), which gives 2^175 bytes or 2^95 yottabytes.
Obviously, if you have more space available, there's no hard upper bound, although you may need to put some parts of the (physical storage substrate of the) paper into orbit around other parts to keep the mass density below the (decreasing with radius) density of a black hole of that radius.
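A quick back-of-envelope check of the figures above, under the same assumptions (mass set to the largest value that fits inside R = 1 m without collapsing, i.e. the Schwarzschild limit); order-of-magnitude only:

```python
import math

# Physical constants (SI).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
hbar = 1.055e-34     # J s
amu = 1.661e-27      # kg

R = 1.0                          # radius budget, metres
M = R * c**2 / (2 * G)           # Schwarzschild mass for that radius

# Bekenstein bound: I <= 2 pi R E / (hbar c ln 2) bits, with E = M c^2.
bits_bekenstein = 2 * math.pi * R * M * c**2 / (hbar * c * math.log(2))

# Alternative budget: one bit per amu of that same mass.
bits_per_amu = M / amu

print(math.log2(bits_bekenstein / 8))   # ~230, i.e. ~2^230 bytes
print(math.log2(bits_per_amu / 8))      # ~175, i.e. ~2^175 bytes
```

Both come out where the parent comment puts them: roughly 2^230 bytes from the Bekenstein-Schwarzschild limit and roughly 2^175 bytes at one bit per amu.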
As someone who has done a PhD in gravitational dynamics I think I can say that there are probably only a few dozen people in the world who can comment intelligently on this paper.
> Seriously though, can someone comment on this and its significance?
I have not been part of the hyperbolic PDE subfield of mathematical relativity (but a sibling subfield) but the (in-)stability of Kerr black holes had always been presented to me as one of the big open questions in that field.
And, now putting on my physicist's hat, it is indeed: I mean, if Kerr black holes had turned out to be unstable, then black holes wouldn't last very long, and the question would have been what the photos from the Event Horizon Telescope are actually showing. (Note: it is expected that in nature all black holes rotate at least slightly, so they are of the Kerr kind.)
This is all about a vacuum Kerr solution, so the black hole (BH) is eternal. We might have three parts to the spacetime: "A" where there's some gravitational radiation far from but heading towards the BH, "B" where they and the BH are very close together, "C" where the remnants of the inbound radiation are far from the BH. "Stability" means that at "C" the BH is still essentially Kerr, because it was at "A".
Stability includes a bigger BH or a BH with different spin comparing "A" and "C".
"Instability" means that at "C" we no longer have something that's essentially Kerr.
Now for the theoretical physics sketch:
One way to get there that comes to mind is letting gravitational waves that are "just so" enter the ergoregion in a way which causes them to concentrate into a caustic that does not run off to infinity as a new, free, black hole (which would just be a flavour of "C"). In that case we have a black hole binary, which sheds (arbitrarily weak) gravitational radiation, instead of e.g. the black hole we're interested in and the new one which is at infinity, neither of which sheds any gravitational radiation at all. So we might have a "C" configuration that is badly described by the Kerr metric, even though we had an "A" that was asymptotically Kerr.
This paper (and several in its bibliography) show that despite questions about this that do arise in linearized gravity (and effective one body and numrel and self-force), the full theory says that for a << M (implying smaller and weaker ergoregion) we don't get two black holes (or anything that's badly described by the Kerr metric).
Now for the astrophysics sketch:
> in nature
We don't have a vacuum spacetime, unlike in this paper.
We also have a sky with a lot of black holes in binary systems of various types and closenesses, at least a couple triples, and probably many more orbital configurations somewhere in nature (especially the future if most baryons are in black holes). The paper doesn't deal with those. It doesn't want to help us figure out the "final-parsec problem".
Also, natural black holes don't extend into the infinite past, unlike the Kerr family of solutions in this paper. Isolated ones might settle down into asymptotically Kerr, but only if they don't fully evaporate!
Lastly, old gravitational waves are weak in recent epochs of the universe, so they aren't going to give much of a kick to even a low mass Kerr-like BH. Strong early GWs have been redshifted. A strong-like-this-paper-envisions GW kick from a binary black hole merger to a nearby third BH isn't something you'd describe in a slightly perturbed Kerr picture, and all three of those BHs could have started as primordial, direct collapse, or been ejected right the hell out of their parent galaxy clusters before meeting in deep space, so each can be effectively slowly-rotating Kerr for a long time before they get close to each other.
Perhaps it is getting a bit ridiculous to publish a 900+ page math proof in PDF format. Perhaps mathematicians should move to a GitHub-style publication platform.
For a paper, I would take a PDF over basically any other form of media any day of the week. Being able to archive a paper easily and print it in the form as the author intended it is extremely valuable.
Do you know what physicists or mathematicians actually want, or are you just theorizing as an outsider, based on experience from completely different domain?
I'm confused by the reason behind your question (which I'm assuming you have, I guess). Are you arguing that scientific works shouldn't be made accessible?
If PDF format is the preferred format of those needing to access the work, compared to likely alternatives, then it is accessible. Are you claiming you know better what the needs of professionals in the field are?
I think maybe we have a difference in what accessible means. I'm talking accessible as in, 'I can literally access this', not as in 'I can understand what this means'.
I use a screen reader, and unless the pdf is explicitly created as an accessible pdf, it is just garbled up nonsense.
Almost all arXiv submitters (and certainly any serious one in a category like this paper's math.AP (Mathematics / Analysis of Partial Differential Equations) <https://arxiv.org/list/math.AP/new>; click the "other" in "[pdf, ps, other]" and there click Download sources) will supply the preprint's sources, which in mathematics-heavy fields are almost always LaTeX. The sources are associated in a predictable way with the abstract page, which is what should practically always be supplied instead of a direct link to a PDF or other final format.
In this particular case <https://arxiv.org/format/2205.14808> lets you "Download source", where the source in question is a tar file with four png images and a large .tex file. The .tex file has long swathes of English text; equations are mostly in $math$, although there are some parts that are likely to be less straightforward to access (e.g. under "Definition of the Ricci and curvature coefficients").
arXiv admins and most authors will react supportively to contact from blind and visually impaired people who want to read a preprint.
> I think maybe we have a difference in what accessible means. I'm talking accessible as in, 'I can literally access this', not as in 'I can understand what this means'.

> I use a screen reader, and unless the pdf is explicitly created as an accessible pdf, it is just garbled up nonsense.
If you cannot read the PDF, you can also download LaTeX source from arxiv. I don’t think it is reasonable to expect anything more accessible than that.
We don't have a good way to handle equations. They are dense and sometimes minute aspects of positioning have semantic value. And this paper in particular, the equations are the core of the content.
There's room for improvement over the status quo, but with the standards we have available today, faithfully representing the content requires a faithful representation of the visual layout. So a typesetting format like PDF is what's needed.
Somewhat OT, but I've had this question for a while now: Why do we know that black holes have a singularity? This might be a dumb question, but when you increase the mass of, let's say, a neutron star, at some point gravitaton is too strong for light to escape and there'll be a Schwarzschild border. But why does the mass inside of a black hole have to shrink to a singularity? Can't there be some ultra-dense but finitely small core?
Essentially, the Schwarzschild solution derived from relativity says that's what happens.
The problem is that we have no way to actually know if that's what happens at the core of the black hole. The event horizon presents a problem where it becomes impossible to experimentally test anything beyond that point. For example, the fundamental constants could alter themselves inside the event horizon; as long as the event horizon was still a one-way trip, it wouldn't matter, and we'd not be able to gather any information about that.
If we assume general relativity is correct, then singularities are an unavoidable result of black holes. The math simply does not allow for such a large density to not form a singularity. Having said that, we know GR is not correct, as it fails to account for any quantum phenomena; and we have thus far been unable to combine GR and QFT into a unified theory. Attempts to combine the theories fall under the title of quantum gravity. Several proposed theories of quantum gravity predict that black hole singularities do not exist.
I'm not familiar with the current thinking about this, but last decade the singularity was understood as a gap in our understanding/model. Of course there's model dependent realism, after all if all you have are models (built on decades or centuries of data), with no means of devising new experiments, what else is reality?
As I understand it: before gravity captures light, which moves as fast as anything can move, it will have captured everything else and overcome all other repulsive forces, crushing the matter into a singularity.
edit: pixl97 made a really good point right as I posted. I should say "As I understand the math"
Is this more than a preprint? Usually really long research like this takes a long time to verify, like that guy who claimed to solve the abc conjecture. The article talks about it like everyone agrees this is valid, but is it?
Probably there isn't anyone who wasn't an author who can vouch for it yet, but this is a different situation. These guys have been working toward this problem for a long time, in much closer touch with the mathematical community, with partial results that people can vouch for. This contrasts with the situation with the abc conjecture, which was done in almost complete isolation from the mathematical community, with, as far as I know, no intermediate results.
That's only 2 months for 900 pages, about 15 pages per day; it's impossible to read it that fast. IIRC, checking the proof of Fermat's Last Theorem took a few years (and a few modifications to fix holes).
However, the authors are in close communication with many of the other authors in their extensive bibliography whose work this follows (and who in turn will likely be provoked into further work). The progress (visible in the paper's review-like parts and the bibliography) has been toward proving that less and less "gentle" vacuum spacetimes aren't irrevocably upset by gravitational waves passing through them. The bibliography cites a number of pre-pandemic preprints that were in "to appear in <journal>" state. In that sort of academic bubble, which is far from unusual, errors are liable to be found fairly quickly. Also, I would guess that the authors revisited their self-cited previous work, which sometimes produces forehead-smacking.
My PhD thesis was in harmonic analysis and has a lot of Fourier transforms and inverse transforms. I still think that approximately 50% of the + and - signs are wrong.
Anyway, in spite of the common image in popular culture, most math results are more robust than that and can survive a few sign changes, perhaps with some small fixes.
How do you even check 900 pages of dense, hard math... nuts. Goes to show that top math and physics people are in a whole other league compared to the college level.
Title should be: slowly rotating Kerr black holes are stable. The proof doesn't say anything about fast-rotating black holes or charged black holes (the Kerr-Newman metric). And the proof assumes the world is classical; it doesn't take quantum effects into account. So it is a mathematical achievement to prove the stability of the Kerr solution of general relativity. It doesn't tell us anything about real black holes, though.
General question: where does this research lead to? As in what might be the next step for this research team, and/or their field in general? I always like to understand what discoveries like this could open up in the future.
*Asking as someone not in the field or any type of physics/mathematics
There are theoretical and potentially practical implications for this work. Black hole stability can be considered a kind of stress-test for General Relativity as a theory. If we believe the universe has rotating black holes, then we expect a valid theory to predict they are stable (or at least not catastrophically unstable). So a stability result helps validate the theory by at least showing there's one fewer way to refute it.
Some parts of the proof may actually be more insightful or practical than the result itself, but they don't get discovered or understood in detail until someone sits down and attempts to prove the result. If I had to guess, the most likely truly practical implication might be that some of the insights within the proof could help with numerical simulations of black holes. And if they help with numerical simulations of black holes, they might also help with numerical simulations of other PDEs that are more relevant in engineering.
At a certain point of abstraction, theoretical physics almost never has any direct correlation with empirical reality. It is most often used as a way to give the paradigm lenses that color our thoughts nice little workouts. (One can also apply Wittgenstein's notion of language games here.)
If, by the term "black hole", a person is referring to some object that has the shape of a mathematical point, then it just doesn't make much sense to call it a thing that relates to the world of observation. (The postulates of Quantum Mechanics dictate that physical objects must be fundamentally spread out in the form of wave functions.)
Solutions to simplistic kinds of mathematics come in the form of idealizations called "points". But physical reality is fundamentally spatial, and the necessary maths must involve things like topological manifolds, which brings us directly to the doorstep of String Theory, which is not so much a "theory" as a broad category that consists of the entire spectrum of all possible Quantum Field Theories. String Theorists, in fact, are always speculating over the possibility of some given theory's existence, such as when Witten spoke of a mysterious "M-theory" in the mid-'90s.
>a person is referring to some object that has the shape of a mathematical point
Black holes aren't points, they're space-time shapes with a singularity at the middle and a spherical event horizon. The black hole at the center of our galaxy extends across 16 million miles, or a little over eighteen times the size of our sun.
If the singularity at the middle is slightly modified to be something else according to a better theory of gravity (most physicists believe that this will eventually happen), the outlying spacetime will not change very much, for reasons similar to how Newton was able to work out how the planets moved around the sun without knowing what the sun was made of or what was inside it.
If you imagine a circus tent propped up in the middle by a square pole, it will look very much like one propped up by a round pole. That's because solutions to the Laplace equation smooth themselves out as quickly as possible as you move away from the boundary condition.
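Not from the thread itself, but the square-pole/round-pole claim is easy to check numerically. The sketch below (all names and grid sizes are my own choices, not anything from the paper) relaxes Laplace's equation around a square region held at potential 1 and measures how the angular variation of the solution, face direction versus corner direction, shrinks as you move away from the boundary condition:

```python
import numpy as np

# A minimal sketch: Jacobi relaxation of Laplace's equation on a grid,
# with a square "pole" held at potential 1 and the outer edge held at 0.
N, c, half = 61, 30, 4           # grid size, center index, square half-width
u = np.zeros((N, N))

def enforce(u):
    u[c-half:c+half+1, c-half:c+half+1] = 1.0   # square pole held at 1
    return u

u = enforce(u)
for _ in range(20000):           # plain Jacobi iteration; slow but simple
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    u = enforce(u)

def axis_value(r):
    """Potential at radius r along a grid axis (linear interpolation)."""
    i = int(r)
    f = r - i
    return (1 - f) * u[c, c + i] + f * u[c, c + i + 1]

def anisotropy(k):
    """Relative face-vs-corner difference at radius k*sqrt(2)."""
    r = k * np.sqrt(2.0)
    diag = u[c + k, c + k]       # point on the diagonal, at radius exactly r
    ax = axis_value(r)           # point at the same radius along an axis
    return abs(ax - diag) / max(ax, diag)

near, far = anisotropy(4), anisotropy(12)   # r ≈ 5.7 vs r ≈ 17.0
print(near, far)                 # the angular variation decays with distance
```

Close to the square, the corner and face directions see very different potentials; a few pole-widths out, the difference has largely washed out, which is the circus-tent point in numbers.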
I thought whether or not a singularity sits at the center of a black hole is actively discussed.
Or to be more precise: our math seems to indicate that there is one... and precisely because of that, we think our math might be wrong or incomplete (because every single time we have encountered an "infinity" or something resembling one in the past, it turned out to be an error).
General Relativity says there is a singularity (more than one, depending on the metric and coordinate system chosen).
However, GR is not a quantum theory, and it is well known that it clashes with quantum mechanics in ways that would show up in a singularity. So the physical reality of space at and near the singularity is still a mystery, because we don't yet have an accepted theory of quantum gravity.
>they're space-time shapes with a singularity at the middle
Upon googling "black hole singularity", the first "People also ask" is "Does black hole contain singularity" and the answer is, "No, black holes in our universe, that is to say the real universe, do not contain singularities." While this doesn't in itself invalidate your point, it does seem to raise questions.
>The black hole at the center of our galaxy extends across 16 million miles, or a little over eighteen times the size of our sun.
I think what you are doing here is conflating the physical effects of the thing (the gravitational field of force) with the mathematical description of the thing itself (a precisely defined geometric structure). If you are talking about extremely strong gravity fields, such as those that bind entire galaxies (or galaxy clusters or even larger organizational patterns), then that is one class of things (empirical), but the purely theoretical notion of physical singularities is an entirely different class of things altogether (a class, which, IMO is perfectly self-contradictory).
>That's because solutions to the Laplace equation smooth themselves out as quickly as possible as you move away from the boundary condition.
This seems incorrect. I find that solutions to the Laplace equation typically "smooth themselves out" in a quasi-linear way (i.e., the way sines and cosines do). The most vexing question in Quantum Mechanics is in fact why quantum states (i.e., the eigenfunctions that are solutions to PDEs such as Laplace's equation) appear to us as localized packets rather than how their "wave functional" mathematical descriptions would dictate (diffuse). The way this conundrum is resolved in QM is by way of a perfectly ad-hoc procedure called "collapsing the wave function".
What? Lighten up on the philosophy of a topic before you understand it.
Sure, the physical reality of the singularity is a bit of a mystery as it comes up against the nature of gravity at an extreme which requires a quantum theory which includes general relativity, but that is a known unknown and not what any of this black hole physics is about.
Outside of the singularity, and definitely outside the event horizon, a black hole is a very real thing which has been seen and measured in "empirical reality" and theory has so far matched well with measurement.
Nobody is going to Euclid and claiming a point or line is a real physical entity, anything but a useful abstraction.
>Lighten up on the philosophy of a topic before you understand it
But philosophy is an activity that must necessarily be done in order for there to even be a topic that can at all be understood in the first place! We had to have someone like Descartes before we could get someone like Newton, and Newton before the quantum theorists, etc.
The idea that you can just jump straight to (and always stay within the framework of) "hard science" and do away with the malleable philosophical bits that allow for the kind of connective tissue that binds civilizations is perhaps the biggest reason why we collectively face the existential crises that we currently do.
And the other idea that everyone (or at least everyone who has jumped through the hoops to get the necessary credentials) can simply sit fat and happy in their own arcane specialties as long as they are able to "prove" their worth to the rest of us by patting each other on the back with their circles of academic citations is... well... not very sustainable. All academic disciplines, to the extent that they cannot remain "rooted" within some larger framework of philosophical sensibility will necessarily get weeded in due time.
No, this isn't about the philosophy of science, but you in particular misrepresenting a topic and then making philosophical arguments about it. You ought to understand the science you're criticizing before sharing your commentary on it.
>The postulates of Quantum Mechanics dictate that physical objects must be fundamentally spread out in the form of wave functions.
Even a senior undergraduate knows that QM is not resolved with GR. To quote a QM postulate like this as an argument against point-like objects in reality reveals more about your level of education in physics than anything else.
>At a certain point of abstraction, theoretical physics almost never has any direct correlation with empirical reality.
>Solutions to simplistic kinds of mathematics come in the form of idealizations called "points". But physical reality is fundamentally spatial...
Just so wrong... Empirical reality, as you like to say, specifically in collider physics, tells us that fundamental particles are point-like as far down as we can see.
I don't claim to have all the answers, but you are just being obtuse and passing by with a layman's philosophy and masking your lack of knowledge with stiff sentences.
Just because you don't understand it doesn't mean it's not true.