Dimensional Analysis and Black Holes (2019) [pdf] (hapax.github.io)
63 points by segfaultbuserr 9 months ago | 15 comments



The "let's assume the result is proportional to a product of powers" ansatz always threw me off a bit. Why not additive, or including some function of the independent variables? Is there a better justification than "it often gives the correct result?"


This technique, known as Rayleigh's method, has a partial justification in Buckingham's π theorem: https://en.wikipedia.org/wiki/Buckingham_%CF%80_theorem
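
A minimal sketch of the linear algebra behind it (using the textbook pendulum example, not anything from the article): write each variable's dimensions as a vector of (M, L, T) exponents, and the product-of-powers ansatz T ~ l^a g^b m^c becomes a linear system for the unknown powers.

    # Dimension matrix: columns are the (M, L, T) exponents of l, g, m;
    # the right-hand side is the dimension vector of a time.
    import numpy as np

    A = np.array([[0, 0, 1],     # mass exponents of l, g, m
                  [1, 1, 0],     # length exponents of l, g, m
                  [0, -2, 0]])   # time exponents of l, g, m
    target = np.array([0.0, 0.0, 1.0])   # a time: M^0 L^0 T^1

    a, b, c = np.linalg.solve(A, target)
    print(a, b, c)   # 0.5 -0.5 0.0  =>  T ~ sqrt(l / g), independent of mass

The π theorem is essentially this observation dressed up: the independent dimensionless groups correspond to the null space of the dimension matrix.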


You cannot add/subtract things which don’t have the same dimensions. Further, such formulae cannot forbid negative answers even though you’re trying to model strictly positive quantities. That naturally lends itself to multiplication.

If you want to exponentiate, you can only use dimensionless quantities in the exponent (and that kind of dependence does pop up sometimes, in interesting places). But the exponential dependence means very low sensitivity for small values of the exponent, so (by Taylor approximation) you might get away with using a constant, or redefining variables (by subtraction/rescaling) to come back to the multiplicative format. In other words, you’re taking the log on both sides and doing dimensional analysis with the new variables.
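
A tiny illustration of that last point (my own toy example, radioactive decay, nothing from the article):

    import numpy as np

    # Only the dimensionless group t/tau can sit in the exponent.
    tau = 2.0                        # decay time constant, seconds (illustrative)
    t = np.array([0.5, 1.0, 2.0])    # times, seconds
    N0 = 1000.0
    N = N0 * np.exp(-t / tau)

    # Taking logs on both sides recovers a product-of-powers relation
    # in the original variables: ln(N/N0) = -(t**1) * (tau**-1).
    assert np.allclose(np.log(N / N0), -(t**1) * (tau**-1))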


That’s a good question. Addition and subtraction do not make sense because you can’t add quantities with non-homogeneous dimensions. And if the dimensions were homogeneous, there’d be a simpler expression. As for a function more complex than multiplication, you can take a general formula such as an exponential decay and transform it, using logarithms, into a multiplicative form.


> We assume the hole is small compared to the size of the earth, and the package light compared to the mass of the earth, so we can neglect h and m.

If you're going to write a paper that builds intuition, you should write one on this problem. Physicists always make magical simplifying assumptions that drive me absolutely bananas. And wow! Things get so much simpler but are still correct to an order of magnitude.

It's the same thing they do when gathering / discarding possible factors.

I actually suspect this is revisionist. Most folks would probably start with various factors / powers and guess-and-check until some of those factors have zero power and the dimensions work out.

Relevant XKCD: https://www.explainxkcd.com/wiki/index.php/793:_Physicists


I didn't get the justification for the Trinity test calculation being within an order of magnitude of the actual value (putting aside, of course, the fact that it turned out to be very close!).

To me, this experiment would seem analyzable in the same way: suppose we have a loudspeaker with a line of microphones stretching away from it. The speaker emits a pop, and 25 ms later, we record how far the sound has traveled. Clearly, we are going to get the same distance for pop intensities varying over several orders of magnitude.

Given the knowledge that the speed of sound in air (though not necessarily that of shockwaves?) is independent of its intensity, this is not surprising, but what physical insight justifies taking the calculation given in the text as being an order-of-magnitude estimate? I would have thought it would require additional inputs, such as empirical evidence of fireball size from conventional explosions.
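
(To be concrete, the estimate I mean is presumably the blast-wave scaling R ~ (E t^2 / rho)^(1/5), i.e. E ~ rho R^5 / t^2. A rough sketch with illustrative numbers I'm plugging in myself, not figures from the article:)

    # Dimensional-analysis estimate of the yield, up to an O(1) constant.
    rho = 1.2            # air density, kg/m^3
    R = 130.0            # fireball radius in metres at time t (assumed round number)
    t = 0.025            # time after detonation, seconds (assumed round number)

    E = rho * R**5 / t**2            # joules
    print(E / 4.184e12, "kt TNT")    # ~17 kt; the actual yield was around 21 kt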

Enrico Fermi famously estimated the yield on the spot by measuring how far the blast wave moved small pieces of paper, but that seems to depend on more than just dimensional analysis.

https://www.tandfonline.com/doi/epdf/10.1080/00295450.2021.1...


> It turns out you can take an arbitrary shape and split it into tiny rectangles

Citation needed?


This is pretty much how integration works, and calculus shows us that integrating any arbitrary function can be done this way.

But really, it isn't that hard to intuit. Take any arbitrary shape and draw any number of arbitrarily small rectangles inside it. It should be easy to see that by adding up the areas of the rectangles, you approach the area of the shape. As the rectangles get smaller, the combined area of the rectangles gets closer to the true area of the shape.
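
Here's a toy sketch of that idea (my own illustration, covering a unit circle with a grid of small squares):

    # Approximate the area of a unit circle by summing the areas of the small
    # squares whose centres fall inside it. As the squares shrink, the sum
    # approaches pi.
    def grid_area(n):
        h = 2.0 / n                        # side length of each small square
        total = 0.0
        for i in range(n):
            for j in range(n):
                x = -1.0 + (i + 0.5) * h   # centre of square (i, j)
                y = -1.0 + (j + 0.5) * h
                if x * x + y * y <= 1.0:
                    total += h * h
        return total

    for n in (10, 100, 1000):
        print(n, grid_area(n))   # 3.2 for n=10; approaches pi = 3.14159... as n grows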


Riemann integration.


Of course. My mistake.


> Citation needed?

Pixels.


Hm... What about a fractal shape with no right angles? Is that feasible? Or do you just approximate it with a fractal arrangement of rectangles?


I can't say I've ever done calculus on a fractal, but my guess is that most fractals still enclose a finite area. Even though a fractal boundary has infinite textural depth, there's a hard limit on the area it encloses compared to the overall region.

But what you're asking about is the inaccuracy inherent in the Riemann integral: https://en.m.wikipedia.org/wiki/Riemann_integral

The short version is that you pick a reasonably small size for your rectangles, and it mostly averages out. The discrepancy is so small that it can largely be ignored.

But for argument's sake, you can simply imagine infinitely many infinitely small rectangles. That will fit the curve perfectly, but is impractical to compute. So we accept the slightly less accurate value.


A fractal has measure zero in the space it's embedded in. It obviously cannot contain any object of nonzero measure in that space.


Does that include space-filling curves? …do those count as fractals, if their fractal dimension is an integer?



