Physicist discovered an escape from Hawking’s black hole paradox (quantamagazine.org)
255 points by theafh on Aug 23, 2021 | 163 comments



Has anyone else (without a background in physics) kind of given up trying to understand developments in physics? It seems like the reporting is usually completely inaccurate or a metaphor at best. And that to have some grasp of it requires having a solid foundation in undergrad physics at a minimum.

I'm just wondering if I'm being lazy by not trying hard enough or efficient. It's definitely not for lack of curiosity, but I also don't like to fool myself into thinking I understand something that I don't.


> I'm just wondering if I'm being lazy by not trying hard enough or efficient.

Leonard Susskind at Stanford has a series of courses specifically aimed at helping people get up to speed in understanding modern physics. Every few months I work up the enthusiasm to watch several lectures, and then I get distracted and forget it all. Clearly, it's because I'm not trying hard enough, because there are so many other things to also be curious about. https://theoreticalminimum.com/

But I also wish that physics were less focused on understanding extremes, and more interested in understanding physical systems closer at hand. There are interesting recent-ish results found in the behavior of crumpling paper, collapsing piles of sand, or the 'legs' of wine in a glass. I suspect that we could have much richer conceptual tools for thinking about the physical world actually around us if only more resources went into looking at it, rather than into exploring how laws do or don't break down in extreme conditions that don't naturally exist near earth.


Quantum gravity gets the headlines, but only a pretty tiny minority of physicists actually work on stuff like that. Most people are researching problems much closer to hand. Basically the utopia you're looking for is already here.


> But I also wish that physics were less focused on understanding extremes

To defend this approach: Extrema are a good way to get a handle on a problem, which can then be extended.

In programming we almost always have to handle the null case ("what if the graph is unpopulated?") and often an extreme case as well ("What if we end up with 10 million users? How do we quickly respond to XXX?").

In physics, like so many other domains, the poles are often the most enlightening region.

However, as you say, work on those extrema rarely translates into more common situations until a breakthrough happens.


>In programming we almost always have to handle the null case ("what if the graph is unpopulated?") and often an extreme case as well ("What if we end up with 10 million users? How do we quickly respond to XXX?").

Even the extreme case has to be somewhat realistic to be worth considering. Physics went way past that point once physicists started building accelerators larger than cities to detect subatomic reactions. I'm not saying it's not worth investigating, but it's not at all comparable to any practical domain; at this point it's l'art pour l'art.


The same was said about quantum mechanics (which these accelerators are part of investigating) in the early 1900s, yet without that work there would have been no transistor .. or superconductors.


Umm, how could the same be said about quantum mechanics? The double slit experiment can be set up in a classroom.

If the predictions of your hypothesis require a country-sized accelerator to test, what are the real-world applications?


General relativity is maybe a better example: It's necessary for GPS but it was initially very difficult to validate experimentally.


There's a lot more to QM than the double slit experiment (and a lot more experimentation before predicting superconductors).

But if you consider basic research worthless, physics is hardly the only "offender".


I don't consider basic research worthless - just saying the likelihood of anything practical coming out of fundamental physics seems very low when even theory can't predict measurable effects outside of TeV particle collisions.


>the likelihood of anything practical coming out of fundamental physics seems very low when even theory can't predict measurable effects outside of TeV particle collisions.

You may be interested in the work being done on the connections between QGP and strange metals. Some progress is being made on extending the laws discovered in simple scenarios to complicated situations.


It's realistic, they are learning about the particles that make up everything on Earth. But not necessarily pragmatic, in that there aren't likely new uses for that knowledge in the foreseeable future. Still, imagine what we can do once we've achieved a model of how it all really works.


I don't really know how the model enables much if it requires that kind of effort to test the predictions. If the predictions implied something more practical you could test that. I think the time when this kind of physics breakthrough led to anything useful is gone.

At the same time I think there's stuff that's much more practical with still a lot of discoveries to be made - like superconductivity.


I can relate to this. It might seem stupid but understanding this made me change the way I tied my shoes, and my laces come undone way less frequently now.

https://news.berkeley.edu/2017/04/11/shoe-string-theory-scie...

(Also thanks to everyone for their responses, a lot of good stuff to check out.)


Mandatory link to Ian's Shoelace Site: https://www.fieggen.com/shoelace/


This site helped me to realize I had been tying one of my shoes wrong my entire life.


Let's be real. There are more physicists than fitting research problems. That's why your average physics PhD graduate is typically employed outside of the field they learned and earns less than an average React programmer.

That said, theoretical physics has the goal of understanding reality from a reductionist point of view. At the scales that go from the nucleus to the Solar System there are few questions left. We know the Standard Model and GR and they fit the data perfectly. Of course some questions remain about how things interact when there are many of them (e.g. high-temperature superconductivity), but there are few questions about fundamental laws.

You can think about it like bootstrapping an open source system (it was on the homepage today). There are still many technical hurdles downstream, but we are interested in reducing the binary blob from which everything starts to its perfect minimal form. And the only places we still have not figured out well are things at the limit of our instruments' capacity. Black holes (where GR and quantum mechanics meet; we still cannot reconcile them, and probing black holes seems experimentally challenging), exotic particles (what are quarks composed of?), dark matter (why do far-away galaxies seem to rotate so quickly?), dark energy, inflation, that stuff.

I think physics should be smaller, it has too many graduates. But these problems actually should be researched. They are the fundamental questions that remain, and there is a reason that the layman considers this stuff to be "real physics" and not origami folding.


It’s quite likely that the oversupply of physicists is paradoxically shrinking the space of what can be researched.

More physicists mean more competition, more physicists mean more people to convince that a radical idea is worth pursuing, and even more so more people to convince that a radical result is true.

If the field was smaller and those in it had more freedom, we might see more interest in exploring new areas. Right now it’s hard to see many people in the field with enough freedom to do anything that isn’t the prevailing orthodox view.


I blame Gödel's platonism. He showed us that our theories will always have limits, which is true. But there are two ways to confront this problem:

- Incrementally improve the existing theory, knowing that some of your goals are impossible and hoping that you don't get stuck on one of these.

- Develop as many inconsistent-but-locally-useful theories as possible, along with a method for selecting the right one in the right situation.

The latter didn't sit right with him or his contemporaries--more for gut-feel reasons than anything practical--so many of us are stuck in this rut where we just compete for opportunities to participate in the former.


But .. but ... sure, science as a whole has that particular moby dick, but even when it comes to particle physics there are many theories (models) side-by-side. Some useful for this, some for that. Grand unified theories (and any theory of everything) will only cover the fundamentals. It's not a coincidence that structural engineers don't have to whip out the Einstein Field Equations when they want to check for resonance, and so on.

And very likely we'll always have competing theories/models at the extremes and they might be inconsistent but they might simply turn out to apply in different regimes, etc.

See also https://en.wikipedia.org/wiki/Model-dependent_realism (coined by Hawking and Leonard Mlodinow)


If there's a method for selecting the right one for the situation, it's all one self-consistent theory


Would you recommend using Newton’s laws to model the behaviour of colliding toy trains at a few feet per second, or Relativity?


Newton's laws are a part of relativity, in the way a map of California is a part of a map of the US.


I don't see why the two should be connected. Consistency and utility are somewhat orthogonal.

Here's a counterexample:

Einstein chose hyperbolic geometry for arguments about spaceships traveling near light speed, but we use spherical geometry for arguments about the shortest path an airplane should take. Those theories disagree about Playfair's axiom (the parallel postulate), so they're inconsistent. Yet we can pretty reliably pick the right one for the job.


Really interesting take. Now I wonder myself if the smaller sizes of fields in the 1800s to late 1930s enabled the great discoveries then.


Also the cost of experiments. You can't build a gravitational wave detector in your backyard. You can't do the data processing on your abacus.


Everything is physics. We break apart chemistry as its own subject not because there is a clean line separating protein folding etc., but rather for historical reasons. Stick a lead bar in a particle accelerator and it's physics; stick a hand in and it's radiology.

Materials science, chemistry, and astronomy really cover most of the obvious areas physics could expand into.


There’s a lot of relatively unexplored ground regarding macro phenomena such as fluids and molecular dynamics, as well as a lot of open ground in alternate approaches to most common theoretical frontiers.

We’ve barely touched alternatives to tokamaks and string theory. Blending classical and quantum molecular dynamics simulations is underexplored. Improving the efficiency of simulating large-scale dynamical systems is underexplored.

These are the areas I know offhand, I’m sure every working physicist has ideas they think are worthwhile that they seemingly don’t have time for/can’t get funding for.


> That's why your average physics PhD graduate is typically employed outside of the field they learned and earns less than an average React programmer.

Ha, I used to think that a big chunk of physicists by education are employed outside of the field they learned and earn a top-10% industry salary doing software in financial companies.

I've heard more than once that computer science is easier than physics. A physics background helps hugely with software.


How would physics help with software though? My impression from trying physics is that it's just that if you are smart enough to do physics, you are smart enough that software engineering is not really hard but at most merely complicated.


Physics teaches people how to work with emergent properties of simple systems, something with applications in software wherever the typical industry case analysis + ontology* paradigm is not the right way to work.

(*This used to be case analysis + object oriented ontology, but now there are other language features to map ontologies on to.)


While the goal of theoretical physics is to have a reductionist approach to understanding reality, it looks like most of the effort is being spent on sub-nuclear particle physics and extreme astronomical situations. There are many real-life situations that can be explained and understood better with branches of theoretical physics such as dynamical systems, condensed matter, statistical physics and others. There needs to be more funding so that more people can solve problems that are of direct societal interest, which in turn can attract more funding and students.


>>> I suspect that we could have much richer conceptual tools for thinking about the physical world actually around us if only more resources went into looking at it

What I hear in this statement is more towards the domains of research engineering as opposed to theoretical physics. Sand piles are of course the bridge between the two ;)


>Sand piles are of course the bridge between the two ;)

Bridges are an engineering topic, most coastal areas lie on sedimentary rocks, and wormholes, a theoretical physics topic, are also called "Einstein–Rosen bridges."

In other words, while bridges are bridges between sand piles and sand piles, sand piles are bridges between bridges and bridges.


Yes. You took the entropy right out of my mouth.


I was thinking more along the lines of complex system research, self-organizing criticality etc. E.g. https://en.wikipedia.org/wiki/Abelian_sandpile_model

This can be a scientific rather than an engineering discipline, insofar as it's concerned with understanding the behavior of systems, rather than creating a solution which makes a system behave according to our wishes.

We have a limited understanding of "order" or patterns arising out of systems with many aggregating interactions, and the understanding we do have often focuses on relatively idealized and isolated systems.
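(For a concrete taste of that kind of model, here is a minimal sketch of the Abelian sandpile toppling rule from the Wikipedia link above; the grid size and the number of grains dropped are arbitrary illustration choices, not anything from the thread.)

    # Minimal Abelian sandpile sketch: pile grains on the centre cell, then
    # topple any cell holding 4 or more grains, sending one grain to each
    # neighbour (grains that fall off the edge are lost).
    import numpy as np

    def relax(grid):
        """Topple cells until every cell holds fewer than 4 grains."""
        while True:
            unstable = np.argwhere(grid >= 4)
            if len(unstable) == 0:
                return grid
            for y, x in unstable:
                grid[y, x] -= 4
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1]:
                        grid[ny, nx] += 1

    grid = np.zeros((21, 21), dtype=int)
    grid[10, 10] = 2 ** 10       # dump a big pile on the centre cell
    relax(grid)
    print(grid)                  # the characteristic self-organized pattern emerges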


Wine legs on Applied Science: https://www.youtube.com/watch?v=s6w0tSg-msk !

And let me randomly throw in metallic water too: https://www.youtube.com/watch?v=Vdz18ibX7rE


Are these videos working for you guys?



I worried about this too. But I think my attitude has changed after watching lots of the PBS Space Time youtube channel. They do a great job of breaking down these concepts at a level where highly interested non-physicists can get what feel like the real details without dumbing it down too much. They have good videos on many physics topics, and regularly explain new discoveries.

https://youtu.be/QLSIZg0npuA


The Fermilab channel is also quite good for short-form content. ScienceClic English has absolutely wonderful visualizations. All of them do make some subtle inaccuracies or skip things for the sake of brevity though. I think Sabine Hossenfelder's channel has the most accurate videos in that 10-15 minute range, but they're still only about 15 minutes long.

I don't have a proper education in Physics, but have been trying to self-teach and I think that none of the ~15 minute video channels really cover things to a very detailed degree. You really do need textbooks/lectures/real papers to actually understand it. The channel "Physics Explained" is pretty good for more in depth breakdowns of things, but it is quite dry compared to those other channels and still not really a substitute for a textbook or class.

And I don't even mean learning things well enough to get a job as a particle physicist or anything. Some things, like particle spin, just can't be explained in under a few hours and without the math behind them. They don't have a proper intuitive analog to our macro-level world.


PBS Space Time seconded. I recommend taking the rabbit-hole approach with them - i.e., blocking out considerably more time than the length of the video you're about to watch. They always reference past videos that expand on the building blocks of whatever the topic is in the video you're watching, and it helps to go watch those if it's a new topic to you, before continuing with the current video. I absolutely love that channel.


Thirding PBS Space Time, and seconding taking time to really focus and treat watching them like studying. I became really interested in physics about 2 years ago. Initially the content was really challenging but I forced myself to rewatch many of the vids several times and it was ultimately very rewarding. I have a good enough grasp on the core concepts that I'm able to explain them to friends in-depth and it makes for great conversations, especially when people are in a state of mind to pontificate about the nature of the universe hehe.


Fourthing as well. Think I've watched (heard) every episode at least twice now. They really calm the mind I find, as I frequently drift off to sleep with it in the background, dreaming about Space Time.


I like Sabine's channel even better. She is great at simplifying things, and explaining the raw concepts and what some equations / findings really mean

Sabine Hossenfelder https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw


Not a physicist but I find the Science Asylum also pretty good:

https://youtu.be/Q2OlsMblugo


Seconded. Here's their video specifically about the black hole information paradox:

https://www.youtube.com/watch?v=9XkHBmE-N34


PBS Space Time is the best resource out there. I have a PhD in this kind of thing and still learn a lot. But you're right: while nature is fundamentally weird to our brains (and often involves very complex math), physicists (including myself) are typically very bad at explaining things in simpler language. One of the reasons I stopped was the very small number of people (between 10 and 100) around the world I could talk to about my work.


I second PBS space time, and would also like to mention the notorious Sabine Hossenfelder, she has a great educational youtube channel and does a good job at explaining things to the lay person.


I do partially agree. While it was once a very good educational channel, it has often been very opinionated, sometimes in good ways, but also in somewhat narrow ones.

For instance, repeating that modified gravity is great and that strings/supersymmetry/etc. are bad is a bit weak, especially on a science education channel.

I have worked with strings, and some of her criticism is founded, but repeating over the years that people who work in those domains are intellectual fraudsters (I'm barely exaggerating) is wrong and especially damaging on an educational channel. Consequently, there's a whole mob of YouTube commenters who repeat this (with no context) to whoever wants to hear it.

The same happened with Smolin and his book, following Greene's book. TBH, LQG is not yet there (despite recent interesting progress), and Calabi-Yau compactifications don't yield the universe we observe. Modified gravity doesn't seem to work too well either.

So yes, as long as you remain critical of what she says :)


> For instance, repeating that modified gravity is great and that strings/supersymmetry/etc. are bad is a bit weak, especially on a science education channel.

I'm not a physicist, but she does point out that she thinks dark matter is a combination of modified gravity and some new particles, and not just one or the other.

Also, she does usually make it clear when she has an opinion and bias towards less supported hypotheses, but it's always on things that don't already have any evidentiary basis, like dark matter.


> Modified gravity doesn't seem to work too well either.

I mean, it works out basically as well as dark matter. Both are consistent with some observations, and inconsistent with others. Last time this happened we simply postulated wave-particle duality, which is along the same lines as what Sabine is pursuing.


The main issue is that quantum mechanics and {special | general} relativity are non-intuitive. Any classical analogy is a leaky abstraction at best.

There are communicators who can pierce through this effectively. However, they tend to be researchers who do not have the time to spend writing pop-sci articles.


You are right about quantum mechanics and general relativity, which require more advanced math; it's hard to create good and useful analogies without it.

That said, if you can handle basic high school math, you don't need any leaky abstractions for special relativity: explain the experiment measuring the speed of light and its consequences, then explain the light-clock thought experiment and mathematically derive time dilation from it.
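For instance, the light-clock argument lands on the standard time-dilation factor; here is a tiny sketch of the end result (the half-light-speed example value is just an illustration):

    # Time dilation from the light-clock thought experiment: the moving clock's
    # light pulse travels the hypotenuse of a right triangle, which gives
    # t_observed = t_rest / sqrt(1 - v^2/c^2).
    import math

    def time_dilation(t_rest, v, c=299_792_458.0):
        """Elapsed time an outside observer sees for a clock moving at speed v."""
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        return gamma * t_rest

    print(time_dilation(1.0, 0.5 * 299_792_458.0))   # ~1.155 s: a clock at half c runs slow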


Special relativity may not require a lot of math, but its results are still very unintuitive.


And sadly quantum mechanics is the simple and obvious version of quantum field theory, which is what the Standard Model is. Electron energies around a proton in the atom are one thing, but particles as fields and renormalization are killers. Though Feynman did write a non-math book on QED.


It is actually a very famous controversy in physics between Hawking and Leonard Susskind.

https://en.wikipedia.org/wiki/The_Black_Hole_War

And if you care about understanding physics you absolutely have to check out Susskind's "The Theoretical Minimum" videos. He explains advanced concepts with remarkable clarity. You really can grok string theory if you watch some of his series.


If you cannot understand it, then it probably doesn't matter to you, because you have no problem to solve. Most of the time, physicists are trying to invent a mathematical formula or construct to elegantly describe a physical process, to make an accurate prediction.

Just imagine that we have a game where we want to accurately predict the next frames of a video. It's an interesting game on its own, because you need to understand deeply what happens in the video to be able to accurately predict the behavior of all the objects, animals, and people in it.

Such a game requires a lot of skill to guess and predict accurately, but most of the time it's not important for us mere mortals. For example, we put a lot of effort into OpenGL, PBR, physics engines, etc. to make realistic games. Do you feel obligated to study all of that when you are interested in a realistic flight simulation? Do you feel obligated to study the construction of an AK when you like to play a 3D shooter?

If you really want to understand physics, then I suggest performing experiments, playing with a physics simulation, or, even better, implementing your own physics simulation. Look, for example, at this beautiful simulation of a black hole done in an OpenGL shader:

https://ebruneton.github.io/black_hole_shader/demo/demo.html

https://www.youtube.com/watch?v=_hhOd7GDboM

https://github.com/ebruneton/black_hole_shader

https://ebruneton.github.io/black_hole_shader/paper.pdf


This is the frontier of knowledge. By definition, things that theoretical physicists themselves barely understand.

For instance, this physicist reportedly "Discovered an Escape From Hawking’s Black Hole Paradox". If true (I presume it is), this implies that other physicists before her didn't understand the black hole paradox all that well!

It also implies that you'll never get a crystal clear understanding of it from reading popular science.


Kind of, as you say it is time-consuming to keep up. A bigger problem is that it's not obvious what the impact of many discoveries would be on daily life; physics sometimes seems to have been stuck in an era of diminishing returns after important breakthroughs that led to a short era of rapid innovation - sort of like how nuclear fusion has been 20 years away my whole life.

I've often felt part of the problem here is the relative decrease of manned missions in space, which are not the best bang for the buck in scientific terms, but at one time created a widespread view of 'humanity's future in space' that provided the motivation for and wide public interest in many scientific endeavors.

Nowadays, there's a widespread feeling that planet earth has got very crowded and there aren't as many opportunities for big new discoveries (the sort that can be appreciated by anyone without specialist education/training), that the long-term viability of our habitat is Not Great, and that the prospect of space exploration is too remote and costly to have any impact on the lives of ordinary people, but is limited to a microscopic scientific or financial elite.

Of course, this is somewhat irrational; as a society we've chosen to have ubiquitous worldwide real-time communications devices that would have once seemed limited to Star Trek. Computing has made big science small enough to fit in our pocket and allowed anyone who is really interested to be a software maven. But it's not as spectacular as the future anticipated a few decades ago.


Isn't it like this with any serious subject? You need to have read quite a few books to get it?

One positive thing is that if you just pick up the university books/papers, there's no exam. You can just read them for understanding the concepts instead of for passing tests.

I picked up a book about relativity (both special and general) years after graduating, and it was an interesting read. I won't claim to understand how Ricci tensors work, but it made sense at the time.


> Isn't it like this with any serious subject? You need to have read quite a few books to get it?

No, I'd say physics/math is uniquely hard in this regard.

A lot of stuff in the humanities, you can still get the gist of a paper even if you're not precisely familiar with the field, authors, etc. Then read a couple of textbooks and a handful of survey articles, all of which is a few days' reading, and you'll understand nearly all of it.

But physics? A well-educated layperson can read a paper and not have the slightest idea what any of it means, no clue whatsoever. And a few days' reading isn't going to help much -- it's quite likely to take a couple of years' study at a minimum to understand the context of the paper.

Remember, math/physics isn't just its own subject, it's its own language. For most people, the average math/physics paper might as well be written in Chinese. While a history, sociology, or political science paper is incredibly more accessible.


On second thought this is a good point. I think of it as mathy stuff being very deep, while humanities stuff is very broad. So you can stick your feet in the math end of the pond and not reach the bottom, but wherever you stick your feet in the humanities, you'll get it, though you can still wade a long way to places you haven't been.


That's why I like Quanta Magazine. They do a great job of laying down enough material to get a sense of what's going on.


Especially in this article, the explanation of the proposed breakthrough was pretty clear:

Old: Errant entropy curve (Hawking's entropy calculation) due to wrong black hole surface area used

New: Resolved entropy curve (Page curve) via quantum-corrected surface area (quantum extremal surface)
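(If a sketch in code helps, the qualitative shape being argued about is just "always rising" vs "rising then falling"; the linear ramps and rescaled units below are made up for illustration, not the actual calculation.)

    # Schematic toy Page curve: Hawking's calculation has the radiation entropy
    # grow for the whole lifetime, while the corrected curve follows
    # min(radiation entropy, remaining horizon entropy) and turns over halfway.
    import numpy as np

    t = np.linspace(0.0, 1.0, 101)              # black hole lifetime rescaled to [0, 1]
    S_hawking = t                               # naive result: entropy only ever grows
    S_horizon = 1.0 - t                         # remaining (Bekenstein-Hawking) horizon entropy
    S_page = np.minimum(S_hawking, S_horizon)   # corrected curve: rises, peaks, falls to zero

    print(S_page[::25])                         # [0.   0.25 0.5  0.25 0.  ]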


I'm glad you thought the article's explanation was "pretty clear".

For me it was less clear in part because it raised the question: "what is a quantum extremal surface?", which doesn't really seem to be answered in the Quanta article.

Perhaps I could persuade you to try your hand at your terse sort of summary for the interviewee's two-author paper https://arxiv.org/abs/1408.3203 (Engelhardt & Wall, E&W2014) defining quantum extremal surfaces, and having done so return to and similarly summarize the part of the Quanta Magazine article where the interviewee is asked (several years later) about applicability of these defined surfaces outside higher-dimensional anti-de Sitter space supporting a conformal field theory on its timelike (n-1)-spherical boundary, Maldacena-style. (cf. end p.23 E&W2014 and their footnote 6). Such a summary could enlighten one or both of us, and perhaps other readers, and at the very least I'd be grateful (since I have no idea how to capture what quantum extremal surfaces are in only a line of text).

(The open access https://link.springer.com/article/10.1140/epjc/s10052-020-08... is likely to be of some help, its SdS_4 spacetime being a decent approximation of a late-time isolated galaxy cluster in an expanding Robertson-Walker universe. Our standard cosmology models our universe in a way which one might describe reasonably as expanding RW -> dS_4 equipped with a dusty distribution of matter such that at late times most mass -> "dust grains" that resemble SdS's Schwarzschild submanifold. Cf. https://en.wikipedia.org/wiki/De_Sitter%E2%80%93Schwarzschil... ).


> I'm just wondering if I'm being lazy by not trying hard enough or efficient.

I don't think anyone else specifically said this, so allow me to be one of the few who say you're being efficient. Sure, you could spend time to understand this stuff, and yes it's important knowledge for the world/society, but knowing about black holes really isn't going to change anything you do, unless you're a physicist or working on something involving deep space (and likely not really then either).

Sometimes esoteric knowledge is useful in other areas, and sometimes learning esoteric knowledge is fun or helpful for building learning skills, but sometimes it's just more junk to fill your brain sponge with. If it doesn't tickle your interest, leave it be and it's fine. If it becomes useful, chances are that over time more ways to explain it will have appeared, and some might be more comprehensible to you.


I would suggest starting with YouTube. Start consuming a variety of videos from different sources and you'll begin picking things up and gaining enough knowledge to start identifying bad info. You can start with lectures on complex topics by physicists, meant for the general public. There is a lot of handwaving to remove the math, and oftentimes they go into personal beliefs concerning theoretical physics, but they are very good at identifying each time they do so.

PBS Space Time is one that seems informative while giving enough clarifications, including looking into a few pop-sci theories, explaining what they would mean, and ending with why scientists don't consider them serious explanations.

You can also find some more math-heavy explanations and slowly build your math skills to be good enough to understand the physics that uses them.


The latest developments (with a few exceptions) generally really are impenetrable without a solid foundation in undergrad physics, which meant that many of my trips to Physics Stack Exchange have given me a direct personal experience of being on the dumb side of the Dunning-Kruger effect.

The good news is, as others have already suggested, this level of education is now very accessible by e.g. YouTube channels:

https://youtube.com/c/pbsspacetime

https://youtube.com/c/SabineHossenfelder

https://youtube.com/playlist?list=PL701CD168D02FF56F (Susskind, The Theoretical Minimum: Quantum Mechanics)

I would add the channels:

https://youtube.com/user/EugeneKhutoryansky

https://youtube.com/user/minutephysics

and the MOOC Brilliant.org

That said, I’m still definitely at the “undergrad” level despite all this; I can recognise the equations of GR and QM, but not use them, and there’s plenty which I know I must be misunderstanding.


> The good news is, as others have already suggested, this level of education is now very accessible by e.g. YouTube channels:

With the caveat that you must work on every problem set you are confronted with.

An essential part of a physics education is struggling over extremely hard problem sets. (I'm assuming/hoping these channels offer decently hard problem sets.) This, more than lectures, teaches you how to flail about in unknown areas of physics and to gauge your own understanding. This I think is a physicist's superpower.


> An essential part of a physics education is struggling over extremely hard problem sets

Indeed; this is where brilliant.org makes its sales pitch. And, thankfully, when its physics courses went beyond my level of mathematics, it turned out to also have a mathematics course which I’m hoping will get me up to the right level.


Isn’t it kinda true that A Brief History of Time was so important in part because it successfully dumbed down a bunch of esoteric physics? Maybe we are just reverting to the mean?

I recall years ago mentioning offhand to someone that I read SIGPLAN proceedings and their eyes bugged out and they asked if I could understand those. I said “only about 3/4s” but I knew exactly what he meant. Learn to human.


I am one of those with a background in physics, but I have been out of the loop for the last 10 years. I highly recommend the videos from PBS Space Time on YT. They have playlists on various topics and, if you are willing to put in the mental effort, they can give you a solid foundation and allow you to understand, to some extent, papers/theories like the one reported here.


Not given up, but I would like to find a physicist on Twitter who can put some of these developments in layman's terms.




Thank you


Natalie Wolchover is one of the finest science journalists around, especially for physics. You can rely on her.


Yes. This is nonsense to me.

That said, check out ScienceClic. They do good rundowns of BOTH the metaphorical stuff AND the math that underpins it. PBS Space Time is a fun place to check out too. Between the two of them, you should have enough of an idea of these things to wax scientific at a cocktail party.


Yeah. I wish magazines would do more work bridging the gap between scientist and layperson. Most articles are either "completely inaccurate or a metaphor at best" as you say, or they're just an unabashed, untranslated interview with a key scientist (like this article).


My favorites are the articles about a new math discovery...no clue...but I like cheering for them!


Startswithabang and other science educators, particularly Veritasium, do excellent jobs.

It's complicated and worth the time to understand what they present.


> I'm just wondering if I'm being lazy by not trying hard enough or efficient. It's definitely not for lack of curiosity, but I also don't like to fool myself into thinking I understand something that I don't.

I don't think you're lazy or not trying hard enough. The truth is just that… it's a hard and long road. Even more so when you're studying by yourself.

I took about the straightest path you can take to learning Quantum Mechanics and General Relativity (and all the math you need for it), attending classes and sitting down on my butt every day for 3 to 4 semesters. There are ways to shorten it but I'm not sure how rewarding that would be.

It also depends on what you mean by "understanding". If you really want to understand things at the deepest level possible, there's no way around a university-level education. I'm saying "university-level", not "university", because one could certainly learn these things on one's own. But to be honest with you I think chances of pulling this off are very small, mostly because physics outsiders don't have access to the same resources or social networks as enrolled students, which makes studying even more frustrating.

Anyway, FWIW here's a roadmap for learning General Relativity at the deepest-possible level along with the minimum timeframe needed (IMO) and the most important topics / keywords you really need to understand well:

    Linear Algebra (1-2 semesters): vector spaces, linear maps, dual maps, matrices, symmetric bilinear forms – These things lay the foundation for pretty much anything in math and are needed to understand differentiation in several variables (see below) as well as pretty much anything in Differential Geometry and Relativity.

    Real Analysis (1-2 semesters): Mostly differentiation of functions of one to several variables + a bit of integration theory – needed for pretty much anything in Differential Geometry and Relativity (especially coordinate changes and manifolds) but also in mechanics.

    Differential Geometry aka (Semi-)Riemannian Geometry (1 semester): manifolds, tensors, metric, connections, geodesics, curvature – These concepts are at the heart of General Relativity

    Mechanics (1-2 semesters): Newtonian Mechanics, Lagrangian Mechanics, Special Relativity (all with a focus on both theoretical and experimental physics) – without these there's no hope of understanding (or appreciating) the physics content of GR

    General Relativity (1 semester)
(Side note: Don't let anyone tell you that you don't really need Differential Geometry and all the other math to understand Relativity. They're lying and probably don't really understand Relativity, either.)

For quantum mechanics it's a bit shorter because the math is not as involved (at least to understand the basic concepts):

    Linear Algebra: (1-2 semesters): (finite-dimensional) vector spaces, linear maps, dual maps, matrices, symmetric bilinear forms – No way around this. 99% of quantum mechanics is encoding fuzzy concepts in linear algebra.

    Mechanics (1 semester): Newtonian Mechanics, Lagrangian Mechanics

    Quantum Mechanics

    Bonus: Hilbert space theory, basics in functional analysis


> In the past two years, a network of quantum gravity theorists, mostly millennials,

There you have it: millennials are killing the Hawking Information Paradox industry.

(It's always been said that mathematicians do their best work when they're young; it seems weird to mix generational cohorts into it.)


Maybe it's a subtle reference to Planck's Principle [0], which is often summarized as "Science advances one funeral at a time"?

In this case, suggesting that younger scientists might invest their time into new ideas that older scientists would dismiss.

[0] https://www.chemistryworld.com/news/science-really-does-adva...


It is a favorite pastime of journalists to attribute events to a particular "generation", as if there was any kind of explanatory power in doing this.


I think the blurry line for millennials puts the oldest of us at something like 34 to 38 years old now? So yeah, most people who got into academia after their bachelor's and just finished their postdoc should be millennials.


There is also the statement that when an old scientist says something is possible, then it is certainly possible, but if they say it is impossible, then it is probably possible anyway.


Classic millennials and their cancel culture /s


For those of you like me having trouble with the abstract nature of modern physics, I have to recommend the relevant PBS Space Time on the subject: https://www.youtube.com/watch?v=HF-9Dy6iB_4


So, layman's question about black holes, almost 100% sci-fi derived.

Starting from the time mechanics shown in, e.g., Interstellar. If when you're near a massive black hole time passes differently (more time passes away from the hole, so to speak), couldn't it be said that the regions near and far from the black hole are drifting apart in the "time dimension"?

If we take the black hole to be an extreme case of that, isn't the black hole a region that is drifting away so "fast" that light isn't fast enough to reach "us" on the outside?

In that case, there would be no paradox, right? Whatever is inside the black hole is still there, but with no way to communicate.


> isn't the black hole a region that is drifting away so "fast" that light isn't fast enough to reach "us" on the outside?

I've seen some models of black holes that are similar to this. Specifically, what is happening in those models is that the space inside the event horizon is growing faster than the speed of light, so more space is created than light can traverse.

This is the inverse of how cosmological horizons work. The reason we can only observe a limited portion of the universe is because objects are uniformly moving away from all other non-gravitationally bound objects. Space is being created between them. The farther you look, the faster galaxies are moving away from us because space is being created at every point in between. If you try to look far enough, the speed that objects are moving away from us becomes faster than the speed of light: space is being created faster than light can traverse it.

This sort of faster-than-light travel doesn't break the relativistic speed limit because these objects aren't inertially accelerating inside their frame of reference, the frame of reference itself is expanding.
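(Back-of-the-envelope version of the "faster than the speed of light" point, assuming a round Hubble constant of about 70 km/s/Mpc, which is my number rather than anything from the thread:)

    # Distance at which the recession speed v = H0 * d reaches the speed of light.
    c = 299_792.458                       # speed of light, km/s
    H0 = 70.0                             # assumed Hubble constant, km/s per Mpc
    d_mpc = c / H0                        # ~4,280 Mpc
    d_gly = d_mpc * 3.2616e6 / 1e9        # 1 Mpc is about 3.2616 million light-years
    print(f"Recession speed hits c at roughly {d_gly:.0f} billion light-years")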


> space is being created faster than light can traverse it.

Which is precisely the sort of behavior one might expect if they lived inside of a black hole.


Isn’t this what is happening in our universe as well? Space is expanding faster than light can traverse it?


I am at this point waiting to see someone prove that we aren't.

Though ultimately I don't know if it matters. What we have is so vast that we can't really fathom it ending except in the abstract. No matter how big a ripple I make in the pond it's not going to matter in a couple billion years. I content myself by thinking in centuries.


This is a reasonable conceptualization, IMO. However, the problem isn't that we can't access the information in a black hole (there are other places in the universe where information becomes inaccessible).

The problem is that black holes evaporate. If the particles released via evaporation don't contain the information about the particles that entered, information is lost when the black hole is completely gone.

The proposed solution is that the information is encoded onto the surface of the black hole and thus into the Hawking radiation being released from that surface.


This idea in physics that information is conserved, neither created nor destroyed, just transformed, seems awfully similar to a computer to me. A classical computer is not really the right metaphor when you think of the universe as a possible computational process, but the parallels are striking to me.


I suspect it may be just necessarily true that information is preserved in a consistent universe. I don't know though, maybe someone could come up with a model for a consistent universe with information loss, but it seems to me that would lead to physically possible states that are not derivable from consistent laws of physics.


Physics layman, but I agree as a computer scientist. It also sometimes feels like there are "optimizations", e.g., delayed-choice quantum erasure (https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser)

I'm open to the idea that it's just me projecting what I understand onto what I don't.


Nb. wave function collapse messes with this, and a computer would use something like lazy evaluation to avoid generating the Everettian multiverse.


I'm tying loose ends in my head that probably have long been tied in other ones...

Reading this next to the comment making a parallel between the black hole event horizon and the cosmological horizon...

Wouldn't this give credence to the holographic model of the universe?

Could the so-called heat death of the universe and black hole evaporation be identical phenomena seen from either the inside or the outside of the boundary?


The problem is that the black hole evaporates due to Hawking radiation, which is a special case of Unruh radiation. This radiation is independent of what falls into the black hole and depends only on its size (and hence mass). This is the paradox: two black holes of the same mass can be created from completely different matter, yet they would radiate exactly the same way, and in doing so destroy the quantum information of the matter they are made of.

Your point of view/theory would hold if black holes were eternal, but they probably are not if our understanding of physics is correct. In fact, if black holes "die" then the quantum information has to be released back into the universe somehow. This paper proposes a mechanism for that to happen.


Why does that information have to be released? To my naive layman's thinking, if you told me that a black hole permanently destroys that information, I'd think "sure, it destroys lots of other things, so that makes sense". What problem does it cause if we believe that the information is gone forever?


Because quantum mechanics _really_ does not like destroying information. Mangling information beyond recognition is just fine, but the laws of quantum mechanics are very insistent that, if you have a complete description of the state of the universe, you can solve the equations backwards and figure out what happened in the past. When you throw in a black hole following Hawking’s rules, or any other device that irretrievably chews things up and spits them out in a way that can’t, even in theory, be undone, quantum mechanics breaks.


Got it, thanks! That seems unintuitive to me (which is pretty much the summary of QM as far as I can tell), but I trust that some pretty clever people are convinced of this.


The problem is that black-hole evaporation is a high-level description of many "fundamental" events, and at the level of fundamental physics, there is no known process that destroys information.

Or so popular-science articles tell me.


All information is physical. The information you're reading right now could conceivably be stored in electron charge, or spin, or polarization on optical storage, etc. Information simply must have some physical form.

Accepting the destruction of any information means that none of the physical symmetries we observe actually hold up, like conservation of momentum and energy. That would change literally everything.


I believe the idea is that in quantum mechanics, time evolution is described by a unitary operator, and because it is unitary, it must have an inverse, and, therefore, the state after must determine the state before.

Which, of course, reduces the question to "why does time evolution have to be unitary?"

And, one definition of what it means for an operator U to be unitary, is that it preserves inner products, and is surjective.

Why should it preserve inner products?

Well, a state should have norm 1, i.e. the inner product of it with itself should be 1, and the state in the future should also have norm 1. (This 1 can be thought of as representing the probability that "something/anything happens", which should always be 1.) And also, the time evolution should be linear (that things are done with linear operators is nearly the core assumption of QM, IME), so it should also preserve the norm of other vectors. And the polarization identity allows one to recover the inner product operation from a norm which came from an inner product.

In what follows, "<" and ">" are angle brackets, not less than or greater than signs. also, by ||x||^2 I mean the norm squared of x, i.e. the inner product of x with x, i.e. <x | x> . The polarization identity (a theorem of math, not specific to physics) states that

<x | y> = (1/4)( ||x + y||^2 - ||x - y||^2 - i||x + i y||^2 + i||x - iy||^2)

So, in particular, for some linear operator A,

<A x | A y> = (1/4)( ||A x + A y||^2 - ||A x - A y||^2 - i||A x + i A y||^2 + i||A x - i A y||^2) = (1/4)( ||A (x + y)||^2 - ||A (x - y)||^2 - i||A(x + i y)||^2 + i||A(x - iy)||^2)

And, if A preserves norms, i.e. if for all x, ||A x|| = ||x|| , then therefore

<A x | A y> = (1/4)( ||A (x + y)||^2 - ||A (x - y)||^2 - i||A(x + i y)||^2 + i||A(x - iy)||^2) = (1/4)( ||x + y||^2 - ||x - y||^2 - i||x + i y||^2 + i||x - iy||^2) = <x | y> .

So, by the polarization identity, if a linear operator preserves norms, it preserves inner products.

So, if you accept the "time evolution is linear, and the state should always be a unit vector in a Hilbert space", then it follows that time evolution should preserve inner products.

The only thing remaining is, I guess, the assumption that time evolution is surjective. I.e. for any state, there is some state which should be able to lead to it.

I suppose one could question this assumption?

But I don't think giving this up would result in allowing the loss of information, because these requirements still imply that time evolution should be injective. If two states x and y were both sent by time evolution to the same state z, then, if x and y are not equal to each other, then x-y is not zero, and it can be re-scaled to have norm 1 (specifically, giving us (x-y)/||x-y||) and be a valid state,

and the time evolution would send (x-y)/||x-y|| to (z-z)/||x-y|| = 0. Which would mean it would send a valid state to, uh, nothing. This contradicts our assumption that it preserves a norm of 1. To interpret this a bit: if it did fail to be injective in this way, sending both x and y to z, then if the current state were (x-y)/||x-y||, then in the future, after applying the time evolution operator, the probability that "anything happens" would be 0, which is absurd (and also contradicts our assumption of preserving the norm).

So, if [the state is represented by a vector in a Hilbert space, and the Born rule applies for probabilities, and time evolution is linear], then time evolution has to preserve the inner product and therefore also be injective.

This I think basically justifies the "information is preserved" idea.

You might ask "ok, how would you modify quantum mechanics in a way that did allow time evolution to not be injective?" and, I'm not sure what the appropriate way to do that would be.

Hm, I suppose maybe you could like, use states which are technically different, but not in ways that any observable could ever (even theoretically) distinguish between?

(Are superselection sectors relevant to that? I'm not sure.)
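(If it helps, here is a quick numerical sanity check of the "norm-preserving implies inner-product-preserving" step above; the random 4-dimensional unitary is just an illustration.)

    # A norm-preserving linear map preserves inner products, recovered here
    # purely from norms via the polarization identity.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    U, _ = np.linalg.qr(A)                # QR of a random complex matrix gives a unitary U

    def inner_from_norms(f, x, y):
        """Recover <f(x)|f(y)> from norms alone, via the polarization identity."""
        n = lambda v: np.linalg.norm(f(v)) ** 2
        return 0.25 * (n(x + y) - n(x - y) - 1j * n(x + 1j * y) + 1j * n(x - 1j * y))

    x = rng.normal(size=4) + 1j * rng.normal(size=4)
    y = rng.normal(size=4) + 1j * rng.normal(size=4)

    before = np.vdot(x, y)                           # <x|y>
    after = inner_from_norms(lambda v: U @ v, x, y)  # <Ux|Uy>, computed from norms only
    print(np.allclose(before, after))                # True: inner products are preserved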


Awesome, thank you for that!

And in my nightmares tonight, I shall be required to write that from memory on a chalkboard in front of the class.


Well, objects falling into a black hole can reach the singularity in a finite amount of time. So you're going to have to enrich these singular spacetime points with a lot of extra structure if you want whatever passes through them to still exist. ¯\_(ツ)_/¯

To wit, you can imagine classical black holes as pulling whatever's near them into the future. The effect is so severe when you pass the event horizon that escaping the black hole amounts to traveling backwards in time. The effect is so severe when you reach the singularity that the entire timeline of the universe is in your past. So the singularity itself is more like an infinitely distant future than a point in space, with the caveat that the black hole slings you toward it with enough acceleration that you either actually reach it or something about this classical picture breaks down.


I feel like there's something so scary about falling into a black hole, literally unable to escape, that we just really, really want it to be "survivable", somehow.

Which is kind of funny when you consider that no one would expect to survive falling into a star, but we don't grasp at straws the same way to say, "Oh, you wouldn't actually be immolated, the solar wind would blow you back into space first."


To be fair, virtually any place in the universe, except where you happen to be right now, is probably not survivable.


I recently heard somewhere that the cosmic microwave background radiation (CMB, 2.7 kelvin or so) is so much hotter than the "temperature" generated by Hawking radiation of black holes (apparently on the order of a billionth of a kelvin) that they effectively do not radiate, and are not expected to do so for a loooong time (until expansion of the universe drops the background temp to an incredibly cold temperature).

What I have not been able to determine is when that might be expected to occur. How far in the future will black holes start evaporating? (I believe the answer depends on the size of the black hole as well.)


You can easily find out yourself. First use a black hole temperature calculator like

https://www.omnicalculator.com/physics/black-hole-temperatur...

or

https://www.vttoth.com/CMS/physics-notes/311-hawking-radiati...

to find the Hawking temperature T_BH of your black hole. Then use the fact that CMB temperature T_CMB is inversely proportional to the cosmological scale factor a(t), where t is time:

https://physics.stackexchange.com/questions/76241/cmbr-tempe...

Set a(t_now) = 1 for convenience; then

T_CMB(t) = T_CMB(t_now) / a(t)

and you want to find the time t when T_BH = T_CMB(t).

That depends on how the scale factor will change over time:

https://en.wikipedia.org/wiki/Scale_factor_(cosmology)

We don't know that, but if we assume for simplicity that dark energy will keep dominating, a(t) will grow exponentially, i.e.

a(t) = a(t) / a(t_now) ~ exp(H_0 * (t - t_now))

where H_0 is Hubble's constant. So

T_CMB(t) = T_CMB(t_now) / a(t) ~ T_CMB(t_now) / exp(H_0 * (t - t_now))

Numbers:

T_CMB(t_now) ~ 2.7 K

H_0 ~ 71 km/s/Mpc ~ 2.3e-18 s^-1

t_now ~ 14 Gyr ~ 3.2e16 s

so, with t expressed in seconds,

T_CMB(t) ~ 2.7 / exp(2.3e-18 * (t - 3.2e16)) K

Setting this equal to T_BH and solving for t, I get

t = 3.2e16 + ln(2.7 / T_BH) / 2.3e-18 s

So let's say you have a solar mass black hole. Then

T_BH ~ 6.16871e-8 K

and so

t ~ 7.7e18 s ~ 2.4e11 years

i.e. 240 billion years (17 times the current age of the universe).

That's actually not a humongous number, thanks to the exponential expansion of a(t). It would take a lot longer with the expansion rate of a matter-dominated universe (exercise for the reader!)
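(The same estimate in a few lines of code, under the same dark-energy-dominated assumption; the Hawking temperature formula is the standard one and all the constants are rough.)

    # When does the CMB cool below a black hole's Hawking temperature?
    import math

    hbar, c, G, kB = 1.0546e-34, 2.9979e8, 6.674e-11, 1.3807e-23
    M_sun = 1.989e30          # kg
    T_cmb_now = 2.7           # K
    H0 = 2.3e-18              # Hubble constant, 1/s
    t_now = 3.2e16            # ~14 Gyr, in seconds
    yr = 3.156e7              # seconds per year

    def hawking_temperature(M):
        """Hawking temperature (K) of a black hole of mass M (kg)."""
        return hbar * c**3 / (8 * math.pi * G * M * kB)

    def crossover_time(M):
        """Time (s) at which the exponentially cooling CMB drops below T_BH."""
        return t_now + math.log(T_cmb_now / hawking_temperature(M)) / H0

    print(hawking_temperature(M_sun))       # ~6.2e-8 K
    print(crossover_time(M_sun) / yr)       # ~2.4e11 years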


According to this, on the order of a million trillion trillion trillion trillion trillion years [1].

[1] https://www.youtube.com/watch?v=uD4izuDMUQA


Some of the larger black holes could take 10^100 years to evaporate even after the universe grows cold. I don’t know how long that would take, but the whole process will take an unimaginably long time.


Does that mean you can build a Dyson sphere around a black hole, then use the black hole as a thermal sink?



God damn, nice.


Thanks, all great responses!


Netta Engelhardt on Sean Carroll's Mindscape podcast: https://www.youtube.com/watch?v=m-6mcLX_v2I


> information about particles’ past states gets carried forward as they evolve

So I was curious and googled and found:

> the value of a wave function of a physical system at one point in time should determine its value at any other time

Is there a layman's explanation about what this ... is?


This is a vague statement that could mean two different things.

1. That quantum mechanics is deterministic (as far as wavefunctions go) and time-reversible. Knowing the state of a system at any given point, you can use the differential equation (Schrodinger's equation) that determines the evolution of the system to find the state at any other time.

2. The "should" in the statement is referring to the philosophical idea that we expect that the true laws of physics will always be deterministic and time-reversible.
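(A toy numerical illustration of point 1, with an arbitrary 2x2 Hamiltonian and hbar set to 1; nothing here is specific to black holes.)

    # Schrodinger evolution is deterministic and reversible: evolve a state
    # forward with U = exp(-iHt), then undo it exactly with U^dagger.
    import numpy as np

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])                        # any Hermitian matrix will do
    t = 2.7
    w, V = np.linalg.eigh(H)                           # eigen-decomposition of H
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T  # time-evolution operator exp(-iHt)

    psi0 = np.array([1.0, 0.0], dtype=complex)
    psi_t = U @ psi0                                   # evolve forward in time
    psi_back = U.conj().T @ psi_t                      # run time backwards

    print(np.allclose(psi_back, psi0))                 # True: the earlier state is recovered
    print(np.isclose(np.linalg.norm(psi_t), 1.0))      # True: total probability preserved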


This is where my layperson's intuition totally breaks down.

Matter goes into black hole. Black hole destroys everything because too much gravity and spacetime itself might or might not literally have a break in it. Eventually the remnants of said matter radiates out as random particles.

That seems totally logical and not paradoxical at all to me. My uneducated mental models of black holes and information theory are so crude that I can't even see what the problem with this is. All I can figure is that the conclusions of a theorem in Research Area A turn out to violate an axiom that is used and depended on in Research Area B.

I did however find this very interesting document that seems to cover some of what I'm missing: https://plato.stanford.edu/entries/spacetime-singularities/. I'll have to make my way through it over the next few days, maybe it will benefit other people, too.


I'm a layman, so I'm not 100% certain on this, but I think this is a quantum version of Laplace's Demon; where if you know all the information about the state of a system, you can calculate what the system will look like at any point in the past, present, or future.


Does this imply that if we knew the total state of the universe, we could calculate its future state?

Basically, does this imply the universe is deterministic, or that we're living in a simulation?


It implies that the universe is deterministic, with the caveat that what’s deterministic is the universe as a whole, which includes umpteenillion extra “timelines” which we can’t see, in addition to our own.

There remains indexical uncertainty, as we can’t predict which timelines we’ll see. The answer is of course all of them.



It's known that the universe is deterministic, it's just that not all people agree.


The evolution of the wave function is deterministic. However, the observables of the wave function are not deterministic. So if I tell you the state of a photon moving toward your eye, you can determine the probability distribution of what color you will see, but not the actual color, because there's randomness during collapse.
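(A toy version of that split, with made-up amplitudes: the probability distribution is fixed deterministically by the state, but each individual outcome is random.)

    # Born rule sketch: amplitudes -> probabilities -> one random outcome.
    import numpy as np

    amplitudes = np.array([0.6, 0.8j])              # made-up state in a {red, blue} basis
    probs = np.abs(amplitudes) ** 2                 # Born rule: [0.36, 0.64]

    rng = np.random.default_rng()
    outcome = rng.choice(["red", "blue"], p=probs)  # a single "collapse" is random
    print(probs, outcome)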


(Alternately, there's indexical uncertainty in timelines, which rouuuughly comes out to the same thing if you handwave about game theory a bit https://plus.maths.org/content/playing-games-part-i )


It sounds like just the Schrodinger Equation, Quantum Mechanics 101


Mathematically, the solutions to the differential equation have a “time evolution operator” that allows the quantum states to be pushed forward or backward in time.
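
A minimal numerical sketch of that (the 2x2 Hamiltonian below is arbitrary, chosen just for illustration):

    # The time evolution operator U(t) = exp(-i H t / hbar) pushes a state forward
    # in time; its adjoint pushes it back, with nothing lost along the way.
    import numpy as np
    from scipy.linalg import expm

    hbar = 1.0                                   # natural units for the toy example
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])                  # toy Hamiltonian (Hermitian)
    psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state

    t = 2.7
    U = expm(-1j * H * t / hbar)                 # time evolution operator
    psi_t = U @ psi0                             # state pushed forward to time t
    psi_back = U.conj().T @ psi_t                # pushed backward again

    print(np.allclose(psi_back, psi0))           # True: evolution is reversible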


This is basically stating an assumption of determinism with respect to physical dynamics.


Determinism.

The laws of physics are mostly fully deterministic, even quantum laws.


How are these extremal surfaces related to the event horizon? How are these surfaces related to the "firewall" solution to the blackhole information paradox that everyone was talking about a few years ago?

https://en.wikipedia.org/wiki/Firewall_(physics)


The question about firewalls is such a good one. In short, physicists identified this problem involving three parts of an evaporating black hole, and all three seemed to be entangled according to the usual understanding, but that was bad because “entanglement is monogamous”, meaning that a system A can’t be “married” (maximally entangled) to system B and system C at the same time. So it was proposed that maybe spacetime just ends, just inside the event horizon, and so the metaphorical third wheel, system C, is killed off or actually just never existed. It’s not like anyone’s been inside a black hole and then came out to tell us if spacetime ends there. So that’s a firewall, and it was a hypothesis for how to resolve the non-monogamy issue. But it does require a bizarre new phenomenon where spacetime ends suddenly in a region of low curvature where nothing was supposed to happen. So many people did not like this idea.

However, there was an alternative idea, which was that C was the “same person” as B, in which case it's perfectly legal to be married to B and C at the same time. But B and C are really far apart, so that seemed weird. It would have to be that spacetime was joining these distant systems together via their entanglement. This was called ER=EPR, or roughly speaking the idea that the connectivity of space is defined by entanglement, so that these distantly separated systems B and C were actually connected via a very quantum sort of wormhole and thus were the same thing.

This new work on the evaporation paradox, amazingly, makes it clear that at least in some cases, ER=EPR is actually happening in an unexpected but concrete way: in these models, there is the emergence of a spacetime bridge between the distant entangled systems. And that's what's so awesome. We always suspected the firewall paradox would prove something about what spacetime is, because it was a very sharp paradox. And now, in this unexpected and un-firewally way, it has!


The elephant in the room is the black hole model used here, namely one obeying the no-hair conjecture. It's derived for the classical black hole, which has the problem that it's assumed into existence and has no means of formation (collapsars are only asymptotically close to, but not exactly, classical black holes; they are furry, and the no-hair conjecture doesn't hold).


Our visible universe has an event horizon around it, with thousands of galaxies falling away past it and becoming unobservable every day, due to cosmological redshift.

For some reason, physicists are not concerned with information loss via this one. I would be glad if somebody explained the difference.


Black holes are thought to involve information destruction, not just loss. Things moving super far away are lost but should still be out there, things falling into a black hole disappear and then once the black hole evaporates are gone forever. Or at least that used to be the theory.


It sounds like a mistake to me, thinking that black holes' contents are still "out there". They are no longer observable, just like the galaxies that are no longer observable due to the expanding universe.

In both cases it is due to curvature of space, so I think these are essentially the same case.

I would definitely like to hear something verifiable on why there's a difference and why only one of these is susceptible to information paradox.


My take on the difference (not a physicist):

Take that galaxy that just crossed our "observable universe horizon" so that we can't see it. If there is a civilization halfway between Earth and that galaxy, they can still see that galaxy. The galaxy can see that second civilization and so can we. There isn't a single fixed "observable universe" boundary in space, it's just relative to the observer.

With a black hole, it is different. There is no point that can see both sides while being seen from both sides. If you are outside the black hole, you see nothing from within. If you are inside, then you can see the inside (this is speculation) and you also see the outside. It's a very clear boundary.


This is known to be not correct. When you are near the event horizon, you can still observe most of our ordinary observable universe, as well as a subset of the black hole's interior.

This is the direct reason/consequence of not being able to observe the event horizon when near it, or notice when you cross it.

The boundary is not clear, it depends on the observer.


That's a good question.

The tl;dr is that if information hides on the other side of an event horizon and doesn't come back, we can pretend unitarity (and all the rest of the physics we've discovered) continues where we can't see it.

A non-evaporating black hole forever holds within it the information about what fell into it.

A forever-expanding universe causes information to exit observability forever.

Partitioning away -- hiding forever -- information is not the same as losing track of it when it comes out of hiding.

There are some differences between these two types of horizon because they are generated by different metrics : one for an expanding spacetime and one for a collapsing one.

We can see the differences by adapting these theoretical (as opposed to astronomically observed) objects.

If an expanding universe's expansion slows and reverses, then eventually all the galaxies that exited from one observer's view return into its view (having evolved with stars forming, aging, dying, galaxies merging, and so forth). If we are talking timescales of a few billion years, then if an us-like observer has detailed information about a galaxy now leaving its view, it can in principle predict what it will look like in billions of years when the galaxy returns back into view. The gentle assumption here is that stellar physics does not stop when the most distant galaxies go out of view.

If the timescale is pushed out to trillions and zillions of years, these us-like observers could still maintain the idea that the galaxies which exited from visibility continue to evolve like the closer galaxies which continue to be seen. A star which ends up on the other side of the cosmological horizon continues being that same star, evolving as normal.

A black hole is different, precisely because we should expect unknown extremely high energy physics to occur as e.g. protons fall in. What happens as you crush some quarks and gluons together at energies enormously higher than that we get from the LHC, or even from supernovae? We don't know. In fact, when we try to answer that, we lose track because our calculations tend to become singular : https://en.wikipedia.org/wiki/Singularity_(mathematics) We don't know what should pop out of a black hole late in evaporation, but we do know when a star crashes through a black hole event horizon, it will stop behaving like a star very quickly.

Indeed, even just on "our" side of the two horizons we can see differences near them. The furthest galaxies, at the edge of what we can see of the cosmos, are filled with normally behaved (young) stars. The shapes of those galaxies are not distorted by proximity to any horizon. We expect that to continue as we see galaxies deeper and deeper into our sky. By comparison we can see gas clouds falling into the black hole in the centre of our galaxy, stars orbiting it, and distortions to these caused by these close approaches to the central black hole. We have even found evidence of stars ripped apart by more distant extragalactic black holes. Crossing a black hole horizon does violence to the bit of the star that has not yet crossed; crossing the cosmological horizon would not change the star's basic behaviour.

If the universe were to collapse in the future, we would expect to see disappeared stars returning into view. Those stars stayed in locally gently curved spacetime, just like our local star did. If a black hole were to shrink in the future, we would be surprised if it spat out intact stars, or space probes, or whatever fell in emerging unscathed. Those objects did not stay in locally gently curved spacetime, and indeed would have encountered the locally enormously curved spacetime inside the black hole. That strong curvature spaghettifies things, at the very least.

These are just the consequences of our best theories of gravitation and matter applied to situations we have no reason to expect to be able to observe. As far as we know our universe is not accelerating towards a recollapse, it is accelerating towards faster expansion. And as far as we know no astrophysical black hole in our universe is presently shrinking. It's fairly safe to bet that if there is ever to be a reversal of the expanding cosmological metric or the collapsing black hole metric it's not going to be soon, so humanity and its descendants have lots of time to think about evaporating black holes (including those that evaporate in a contracting "anti-de Sitter" universe with a big crunch, which is the setting (sometimes including extra spatial dimensions than the three we're used to) for many approaches like the one in the fine article in Quanta Magazine linked at the top).

Now, a more direct answer to your question: in an almost-completely-flat-space universe if we have all the data (position, momentum, particle species, etc) at every point in a time-indexed spatial slice of our universe, we can calculate the entire data in neighbouring slices, and the data in those slices' neighbours, and so forth, into the infinite future and the infinite past. An expanding universe doesn't break this, it just means that we can't choose any arbitrary slice and march forwards and backwards from there, we have to take initial data from the hottest densest earliest part of the universe. From complete initial data and appropriate dynamical laws we can (in principle) describe anything in the future, even if the parts we describe are so separated from one another (in that future) that they can't exchange light with one another. The formation of non-evaporating black holes doesn't change the picture much: we know that things fall into a black hole and stay there in some unknown state, unable to exchange light with things outside the black hole.

However, once we introduce black hole evaporation we have the problem that we don't know how the stuff inside the horizon evolved inside the horizon, so we have no idea what should pop out through the last stages of evaporation.

In our standard cosmology, we can expect black holes to have evolved from stuff that was close to us in the hot big bang era but which is now almost certainly forever outside our cosmological horizon. A general solution to the black hole information paradox should not create craziness in those so-distant-we-will-never-see-them black holes, much less in the earliest visible quasars. That tends to get forgotten until someone asks what the interviewer asked:

    Most of the justification for the quantum extremal surface formula comes from studying black holes in “Anti-de Sitter” (AdS) space — saddle-shaped space with an outer boundary. Whereas our universe has approximately flat space, and no boundary. Why should we think that these calculations apply to our universe?  
That's an excellent question, and it was not answered by the interviewee. (I'd love to be persuaded that it has ever been reasonably answered by anyone).


My intuition is failing here.

If I put a log through a wood chipper, I can't un-chip it! Why should we expect the same for stars getting torn apart by intense gravitational fields?

Does it matter that the gravitational field is literally "spacetime itself" changing shape, versus two pieces of matter interacting with each other as in the wood chipper case?

Here's my guess of what's happening based on what you read, please tell me where I'm going wrong:

Physics generally assumes that it can be theoretically un-chipped; if somehow I could run time backwards, I would end up with exactly the same log that I started with. But the results of interacting with the mathematical singularity in the black hole cannot be "undone" in this way. And so far, all attempts to avoid trying to model "matter that has passed through a singularity in spacetime" with some kind of firewall, etc. have failed.


Thank you for the detailed considerations.

First I want to say that a black hole does not imply extreme conditions. You will not notice when falling into a really large black hole. They are violent only when small. Large black holes are almost as benign as the outer (cosmological) event horizon, shredding-and-tearing-wise. We can't observe the singularity, so whatever state matter is in there has no bearing on the information paradox.

With regard to reappearing from a black hole: when the universe is close to the big crunch, a lot of very heavy black holes begin to merge. When we are virtually inside a black hole, it may merge with more black holes, and if they are sufficiently large, we will be able to interact with objects (such as stars, even) inside the black holes in which they previously disappeared from our sight. Moreover, we will see that they have evolved during their absence, in line with how objects outside the observable universe evolve in the absence of observation.

This is when talking about very large black holes, the size of our galaxy. These are easier to come by than it sounds, due to very fast black hole volume growth.

About evaporation, I can't say too much. But I also don't see how it

UPD: ...I don't see why it needs the introduction of new physics, given that it is a virtual phenomenon - nothing interesting really happens near the event horizon; it only becomes interesting at a distance.


Below, I'll focus on model black holes, immersed in vacuum, isolated in asymptotically flat spacetime, and without regard to initial formation.

Yes, you can make such a theoretical black hole arbitrarily large, and thus geodesic deviation[1] can be arbitrarily small just outside the horizon.

The region just outside the horizon[2] is still a strange and likely bad place to be, filled with post-Newtonian effects from gravitational redshift and plunging orbits.

[aside 3]

(Astrophysical black holes must have some upper mass limit, and will generally have accretion structures that produce additional hazards).

> black hole does not imply extreme conditions

I invite you to calculate the scalar curvatures[4] in any model black hole, including one with an arbitrarily high mass (say about that of the known universe), and the geodesic equations below ~ 6(GM)/(c^2) [5]. If you do that, you'll find that extreme conditions are always manifest, and that in particular even for arbitrarily large mass model black holes, once you are past the point of no return you are inevitably drawn into a caustic, and fairly quickly by your own wristwatch-time.
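
To make that concrete, here is a quick numerical sketch of mine, using the SI form of the Schwarzschild Kretschmann scalar from footnote [4], K = 48 G^2 M^2 / (c^4 r^6): curvature at the horizon can be made arbitrarily mild by increasing M, yet it still grows without bound as r -> 0 for any mass.

    # Evaluate K = 48 G^2 M^2 / (c^4 r^6) at the horizon r_s = 2GM/c^2 and
    # deeper inside, for a few masses. Horizon curvature falls like 1/M^4,
    # but K diverges toward the centre regardless of M.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

    def kretschmann(mass_kg, r_m):
        return 48 * G**2 * mass_kg**2 / (c**4 * r_m**6)

    for solar_masses in (1.0, 1e6, 1e10):
        M = solar_masses * M_sun
        r_s = 2 * G * M / c**2
        print(f"{solar_masses:g} M_sun: K(r_s) = {kretschmann(M, r_s):.1e} m^-4, "
              f"K(r_s/100) = {kretschmann(M, r_s/100):.1e} m^-4")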

Super-and-ultra-massive black holes are mostly interesting because the curvature very close to the horizon is well within the effective field theory limit of General Relativity, so theorists can be sceptical of the introduction of quantum corrections to gravitation in those regions of (all) theoretical black holes unless the corrections vanish as one takes the black hole mass to a very high limit.

> a lot of very heavy black holes begin to merge

You should consider that clusters of galaxies will begin to reappear from the other side of the horizon well before they get close enough to one another to interact gravitationally. The very late time big crunch is not necessary to demonstrate the difference between switching from an expanding universe to a contracting one and switching from a growing black hole to an evaporating one.

Indeed, although late times of black hole evaporation are theoretically interesting (is there a remnant?) even the very earliest stages of evaporation are different: what rushes out from the region around the black hole is greybody radiation, not the stars, rocks, and space probes we threw in while the black hole was still growing.

Taking the mass of a black hole to an arbitrarily large value does not change this essential difference.

I don't know what you're trying to say in the final two paragraphs.

If your "I don't see why it needs introduction of new physics" is a request for information rather than a dismissal of the debate itself, you could start with the late Joe Polchinski's excellent 2013 slide deck https://www.slideshare.net/joepolchinski/firecit and the references at https://en.wikipedia.org/wiki/Firewall_%28physics%29

There is nothing virtual about Hawking radiation; it appears pretty generically close to all sorts of black holes equipped with a non-vacuum exterior, and has been shown in acoustic analogues. Unruh: http://inspirehep.net/record/775859?ln=en available at https://pos.sissa.it/043/039/pdf If you are perturbed off ISCO by Hawking quanta scattering off you, you likely will care very much that it is not a "virtual" phenomenon, while you are still able to care about anything at all.

- --

[1] We can consider the Riemann curvature tensor R^{a}{}_{bcd} u^b u^d X^c, where u^a is the 4-velocity of an object on a geodesic and X is a vector quantity in the tangent space between that geodesic and one nearby (e.g., a different mote of neutral dust in an infalling cloud, with identical 4-velocity) describing these geodesics' tendency to separate or converge. More roughly, this encodes the inwards-squash and the back-to-front stretch of this dust cloud from a macroscopic perspective. It also, in suitable coordinates, describes the spaghettification of extended bodies bound by non-gravitational forces. For spherically symmetric masses, in general X shrinks with distance from the mass. For black holes we take the mass to be highly focused so this obtains practically everywhere in the Schwarzschild interior. The interior oddities arising from breaking the symmetries of Schwarzschild with e.g. a non-negligible spin parameter do not change this statement qualitatively.

[2] In the Schwarzschild case the region R_s < r < ~ 3R_s is prone to result in inward plunges for non-massless objects, and of course the size of that region scales with M. Some details: https://hepweb.ucsd.edu/ph110b/110b_notes/node80.html Less symmetrical black hole models also have exterior regions, scaling with M, that are hazardous to any non-massless observer.

[3] The region just outside the cosmological horizon is essentially identical to the region just inside the cosmological horizon. There is an important symmetry difference: when someone else recedes past our cosmological horizon, we recede past theirs. We do not notice when we exit someone else's cosmological horizon -- we are doing it right now. We continue on diverging geodesics (cf [1]). Being on the inside of a black hole -- even one with a mass greater than the observable universe -- gives a different view. With the time one has left, one could determine (via e.g. Synge's method) that the spacelike part of the spacetime curvature is curved and thus one is inside a truly enormous vacuum black hole. If we break the vacuum condition Robertson-Walker -> Friedmann-Lemaître-Robertson-Walker & vacuum Schwarzschild -> Lemaître-Tolman-Bondi, we would not see isotropic distance-dependent gravitational redshift of luminous matter inside the LTB black hole as we do in the FLRW expanding universe: there would be an enormous anisotropy.

[4] Let's use Schwarzschild (black hole and coordinates) to avoid drowning in calculations for a black hole equipped with an "outer event horizon"; the interior metric is also much easier to reason about, and compared to cases such as Kerr-Newman, much more physically plausible. For a Schwarzschild black hole, the Kretschmann curvature scalar is R_{\mu\nu\lambda\rho} R^{\mu\nu\lambda\rho} = \frac{48M^2}{r^6}, where R is the Riemann curvature tensor. Coupled with the appearance at r = 2M of the Schwarzschild horizon, we can see that as we take M -> \infty your point about "benig[n] ... shredding-and-tearing-wise" holds. However, calculating for the interior, we find that the Kretschmann scalar explodes and becomes irregular at r = 0. This is diagnostic of a gravitational singularity.

[5] For example, https://en.wikipedia.org/wiki/Schwarzschild_geodesics#Geodes... although I'd want to resort to a textbook.


Trying to orbit a black hole near the horizon is a bad idea indeed, since most of the observable universe's light will be blueshifted to high-energy X- and gamma rays, in my understanding. But freefalling into a large black hole should not be inherently dangerous, because you are blueshifted in the same fashion as the light, travelling at very high speed at this point, so I don't expect you to be roasted. I admit that it's very hard to freefall into a black hole, because any angular momentum around the black hole will be conserved, and you have a great chance of near-missing it and being grilled as you fly away from the event horizon. But it should be simpler with larger holes.

Black hole evaporation is basically nonexistent for any large black hole. It is theoretically puzzling, but not something you would interact with when falling through the event horizon. Since you will never observe crossing the event horizon, you also have no chance of having a close encounter with the (already very feeble) evaporation radiation. You will just see it happening elsewhere, at any moment.

As soon as you have crossed, the direction to the center of mass is your new time axis, so you won't directly experience movement towards it. If the black hole is really large, you can spend some time in it, and then interact with other observers who were unreachable for you, but now are, since they have also entered this black hole, or have entered another black hole which then merged with yours.

Can you please elaborate on the point [3]? How does one determine that they are in the universe-sized black hole? What difference would it make?


In a vacuum solution, there is no "observable universe's light", there is only the central mass M at point p, and some test observer who can probe point != p.

If there is starlight falling onto the black hole we adapt the metric e.g. Schwarzschild -> Vaidya (incoming) [1], which models this incoming light as spherically symmetrical, ignoring Olbers' paradox, and equipped with no wavelength, charge, or rest-mass: a "null dust". The major practical difference is in the wavelength stretching/squashing that light would experience, but the "null dust" does end up with lighter or heavier "raindrops". In this approach, and more realistic but still very theoretical ones, there are families of accelerated observers, including ones orbiting at ISCO, that will be trying to move through a torrent of heavy-raindrop incoming null dust.

"Freefalling into a large black hole should not be inherently dangerous". No, a massive radial freefaller in the Vaiyda model will get splatted on the back windscreen by heavy raindrops. More physically, a subluminal radial infaller will get sunburned by distant starlight trying to race past it, and additionally any other faster-moving infalling mass, such as cosmic ray protons.

"It's very hard to freefall into a black hole". One can do brief course corrections as one approaches the black hole, and still be in freefall when one shuts off the short-impulse thrusters. The adaptation of the resulting worldline isn't especially hard, and one can simply take its cutoff at the final course-correction. This is easy enough to see with a Minkowski diagram -- the accelerated parts of the worldline will be curved, the freely falling parts will be straight-lined: this image is frequently encountered in resolutions to the Twin Paradox where physically plausible acceleration is introduced.

> evaporation is basically nonexistent for any large black hole

In a theoretical model, like Schwarzschild, we literally have all eternity to trace the black hole's evolution, of course. The central finding of Hawking's 1974 paper is that arbitrarily large Schwarzschild black holes with vacuum (or classical electrovacuum) substituted with a noninteracting scalar quantum field will evaporate in finite (if long) time. Follow-on work has generalized to different quantum fields and black holes that form by collapse or which have non-negligible spin or charge parameters.

More astrophysically, we do expect to find Hawking greybody radiation around black holes of all sizes in our universe. The greybody temperature will be cold, especially for more massive black holes. Crucially this temperature is much less than that of the cosmic microwave background (much less typical interstellar or intergalactic-but-in-cluster media), so the Hawking radiation cannot cause these black holes to shrink. The origin of the Hawking quanta extends well outside the event horizon, so a well-planned hyperbolic orbit with well-shielded and sensitive sensors should pick them out. There are plausible natural phenomena which that idea roughly models, or conversely, it could be revealed by the inverse compton scattering spectrum of a very large weakly-feeding black hole (Sgr A* is not hopelessly far from that!).
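
To put a number on "the greybody temperature will be cold", here is a rough sketch of mine using the textbook Hawking temperature T_H = hbar c^3 / (8 pi G M k_B), compared against the CMB at about 2.7 K (a hole can only shrink on net once T_H exceeds the temperature of what it absorbs):

    # Hawking temperature vs the CMB for a few black hole masses.
    import math

    G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
    M_sun, T_cmb = 1.989e30, 2.725

    def hawking_temperature(mass_kg):
        return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

    for solar_masses in (1.0, 4e6, 1e10):        # stellar-mass, ~Sgr A*, very large
        T = hawking_temperature(solar_masses * M_sun)
        label = "colder" if T < T_cmb else "hotter"
        print(f"{solar_masses:g} M_sun: T_H ~ {T:.1e} K ({label} than the CMB)")
    # every astrophysical black hole today is far colder than the CMB,
    # so none are currently losing mass on net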

I'm sorry, I just can't understand what you are trying to say in your second paragraph's second sentence. (The first sentence is just observing that collision with the singularity is in the future of every object crossing the horizon. "Time axis" depends on choice of system of coordinates, and one has total freedom there (including using no coordinates at all), as in any General Relativity problem. I don't see how to relate that to other infallers though. "Everyone gets squashed into the singularity" is what you are trying to say? So? It's not like one can have a conversation when one is part of a singularity.)

> How does one determine they are in the universe-sized black hole?

One looks at the sky and sees everything in it apparently contracted to an extremely bright high-energy point. If one sees a spread of galaxies occupying different solid angles on practically the entire sky, with smaller angles relating to higher redshift, one is not in a black hole, one is in an expanding universe.

In a vacuum setting, where there are no galaxies at all, one would have to measure the local spacetime curvature as discussed in https://physics.stackexchange.com/questions/109731/how-to-me... -- particularly the overview of the point raised by JL Synge in his textbook 1960 Relativity: The General Theory; Pub: North-Holland that one finds in one of the upvoted answers. As noted in several of the answers, with some care one can measure an angle deficit. Alternatively one could track the evolution of a freely-falling spherical dust cloud's oblateness/prolateness: https://math.ucr.edu/home/baez/gr/ricci.weyl.html -- in a vacuum expanding spacetime sphericity would be maintained, whereas in a vacuum black hole the cloud would become ellipsoidal. Even in a universe-sized black hole, starting far from the singularity, the cloud would on human timescales develop a "nose" pointing in the direction of the singularity.

> What difference would it make?

The interior of a universe-sized black hole is not compatible with life (or star formation).

(Life (and stars and galaxies and so forth) could in principle exist outside a universe-sized black hole, but would notice that the views towards the black hole and away from it differ remarkably).

Why does this matter? It addresses your point that a universe "close to big crunch, a lot of very heavy black holes begin to merge" is an objection to stars and galaxies popping into view after they had receded behind a cosmological horizon, which also sidestepped the point that as a black hole horizon recedes, stars and space probes and so forth do not pop out.

Black holes and cosmological horizons are just different.

See the section, especially the second and third paragraph at https://en.wikipedia.org/wiki/De_Sitter%E2%80%93Schwarzschil...

We got to this difference by you rejecting isk517's perfectly reasonable comment, "Things moving super far away are lost but should still be out there, things falling into a black hole disappear and then once the black hole evaporates are gone forever", which I have just been expanding upon. In particular your objection was "In both cases it is due to curvature of space, so I think these are essentially the same case". Above is how they are not the same case, stemming from the curvatures of black holes and expanding universes having different sign. Finally, you asked to "hear something verifiable on why there's a difference and why only one of these is susceptible to information paradox". Which I was trying to do. The tl;dr is that the contents of a black hole drives Hawking radiation even if the radiation's associated greybody temperature is too cold for evaporation; the expanding universe's contents cools and when the associated ~blackbody temperatures drop below that of the Hawking radiation, black holes will fully evaporate. That evaporating black holes have a greybody spectrum unrelated to the microscopic details of what fell in is the information loss problem in a nutshell.

- --

[1] https://en.wikipedia.org/wiki/Vaidya_metric#Ingoing_Vaidya_w... (eqn 15 with r ~ 6M).


Thank you for deciphering my thoughts and then trying to reason in the same context.

With regard to heavy raindrops, it would be very interesting if we could quantify this effect. For example, let's imagine that you are falling into a black hole with a Schwarzschild radius of 200,000 ly, which is located between the Milky Way and the Andromeda galaxy, and there's no huge accretion disk on this black hole (let's imagine we're falling at a 45° tilt to its equator and with zero angular momentum WRT its spin, but we can also imagine a non-rotating black hole with no accretion happening at all). We are at 1.1r. What's the energy flux due to heavy raindrops? What's the energy flux due to Hawking radiation escaping? While I don't expect you to do the math, and I did not, my common sense tells me that the latter is "negligible" and the former is "significant, but not something you can't realistically shield against". Do you happen to appraise it differently? Otherwise, it seems that we are able to enter a sufficiently large black hole as an outside observer.

Then, I would expect that we would see the other stuff falling into the black hole in proximity to our own point of entry (entire stars, even), and I expect that they will be somewhat blueshifted. Imagine a star which has fallen into this black hole at the same time as our observer, at a distance of 2 ly. We have at least 200,000 years before we hit the singularity to make observations. After 100,000 years we will observe that the star is only 1 ly away, which translates into a 1/100,000 blueshift of that star towards the observer. I also don't immediately see why everything we see inside this black hole will be contracted to an extremely bright point. Maybe the "outside universe"'s light would? The light from other objects inside this Schwarzschild radius (which have not hit the singularity yet) should propagate normally, with a slight blueshift. Clouds developing a nose may be a thing. However, in no way can the nose point towards the singularity, since the singularity (the center of mass) will literally be in the future, the 𝛕 axis pointing towards it. So the whole cloud will move in the direction of the center of mass at almost the speed of light. This is how we will perceive it locally, of course.

Now we are returning to the information paradox and black hole evaporation. Here we have Kruskal–Szekeres coordinates, which allow us to describe precisely what happens when matter falls into a black hole, and this includes the Hawking radiation pairs. As a layman, I see these holographic solutions of the information paradox as, at best, an example of explaining the black hole's evolution while working in bad coordinates (the ones tied to us as an observer), and at worst a result of further confusion. Kruskal–Szekeres coordinates also have a "time axis", which you have previously dismissed.

My point is that nothing special happens in "our" coordinates, nothing special happens in the coordinates set inside the black hole, and nothing special happens with the observer who is freefalling into the black hole, so there is no case for an information paradox other than information travelling through the event horizon.

Maybe there is some sort of paradox of matter falling into the singularity, but it is entirely unrelated to the event horizon, a distinct phenomenon. In this fashion, it can't be used in explanations of black hole evaporation.

> The interior of a universe-sized black hole is not compatible with life (or star formation).

It is a very bold claim, but I wonder what happens to life which has just entered a universe-sized black hole, with sufficient shielding to survive it, of course. How long does it have, and what will affect the outcome? Same with pre-existing stars.

If they still have some runway, then you can surely observe the scenario where object A falls into a black hole outside of observability by B, then B falls into another black hole, then the black holes merge and, with some luck, A and B are observable to each other. But even if they don't, it's not important, since observability is not a function of life or stars; in my understanding it just means that two objects can interact, and some kind of objects (if just protons or quarks) should be possible inside the black hole.

I don't claim that a black hole equals a cosmological horizon, I'm just imagining that they're the same with regard to the (absence of an) information paradox.

I imagine that a computation-intensive simulation involving the outside of the black hole, the inside of the black hole, and how they evolve and interact with regard to coordinate translations may shed some light on how black hole evaporation works, without any additional physics such as holographic surfaces. And the solution will probably not lose information at any point. It's just that there is no observer to whom the whole information is available at any point of the evolution.

I'm sorry that while I have typed a lot of things, they're not in direct coherence with your arguments, which, in turn, were not in direct coherence with my previous point, so we are bound to zigzag.


Reading Quanta Magazine always makes me feel like a failure inside. All these ppl doing cutting edge research about things that matter. Thank God for crypto investments and stock investments, or else I'd have nothing going for me.


Recent discussion about confirming Hawking's Black Hole theorem:

https://news.ycombinator.com/item?id=27696774


That is an unrelated result, although the confusion is understandable.

You refer to a theorem, named after Hawking, that states that in classical general relativity the area of black hole horizons must always increase. The experimental confirmation of this theorem refers to the fact that the total area indeed appears to have increased for the black hole merger event observed by LIGO (which should indeed fall well within the realm of classical general relativity).

However, once quantum gravitational effects are taken into account this black hole theorem no longer holds. Indeed, since the real world should be quantum, it is expected that the area of black holes eventually does decrease: they evaporate by emitting Hawking radiation. This is however purely a theoretical expectation, since these evaporation effects are a loooong way from being observable; evaporation probably only becomes a significant effect on time scales far beyond the current age of the universe.

In quantum gravity there are nevertheless plenty of theoretical paradoxes and open questions, and the above article describes some recent progress by the theoreticians in this area.


So in layman's terms, we just redefine the surface of the black hole until the equations work? (I'm sure I'm wrong, but maybe this'll help kick off the discussion)


While I find PBS Space Time very good and entertaining, I am wondering if I am the only one who thinks the content is a bit towards the complex side?


That's great - there is a massive amount of overly simplified physics popular journalism.


This is hardly a bad thing. Not all content should necessarily be easy to digest.


> In the past two years, a network of quantum gravity theorists, mostly millennials, has made enormous progress on Hawking’s paradox.

Wonder why they feel the need to mention that they are mostly millennials?


Amid the pandemic and the increasingly scary bits of international politics and increasing nationalism I’m just so happy that very exotic science continues to happen.


> Engelhardt set her sights on quantum gravity when she was 9 years old.

I tried to come up with a witty comment about that but words fail me. Wow.


What about: "I'm glad she did run not across the Men in Black in the middle of the night..."



Picking very difficult, specific topics like this to be interested in as a kid is pretty common.

Sticking with it—that's unusual.



