It would be interesting to view the evolution over long periods of time.
This simulation is 2D, but it's similar to what happens in globular star clusters. In these, there's a phenomenon called the "gravothermal catastrophe".
The particles (stars, sponge bits) relax toward thermal equilibrium, where the probability of a particle having kinetic energy E falls off roughly as exp(-E/kT). Some of the particles will have energy high enough to escape to infinity (to "evaporate"). When they leave, the remaining particles are more tightly bound, so the cluster shrinks. The particles then move faster (by the virial theorem, total kinetic energy is -1/2 the gravitational potential energy). Evaporation accelerates until the cluster basically explodes.
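To spell out the bookkeeping behind that (standard textbook material, nothing specific to this simulation):

    K = -U/2                (virial theorem)
    E = K + U = -K

So dE < 0 forces dK > 0: every bit of energy the cluster loses to evaporating particles makes the remaining ones move faster. That's a negative heat capacity, and it's what makes the runaway possible.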
Why this doesn't happen to actual star clusters was eventually traced to three-body encounters that form binary stars; these binaries then inject energy into passing stars (shrinking the binary orbits in the process). This energy injection reheats the cluster, inflating it again and preventing runaway evaporation.
I'm not clear that the simulation here can handle formation of such binaries.
> The only force at play is newtonian gravity, which we modify by adding a softening length, ε.
> This way we avoid numerical instabilities due to divergent forces when two particles get too close together.
So it appears he doesn't handle close encounters that form binaries. Probably reasonable if he's using heavier discrete particles as a proxy for dark matter.
I wrote my own toy gravity simulation. Two stars that come very close together within a single clock tick will experience a massive computed force that throws them both right out of the system. This is an artefact of artificially dividing time into clock ticks to make the calculation tractable. As I understand it, a 'softening length' is a fudge to stop that happening.
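Concretely, softening just replaces r^2 with r^2 + ε^2 in the force law, so the computed acceleration stays bounded no matter how close two particles get during a tick. A minimal NumPy sketch of the idea (my own, not the article's code, and the ε value is made up):

    import numpy as np

    def accelerations(pos, masses, G=1.0, eps=0.05):
        # a_i = G * sum_j m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
        diff = pos[None, :, :] - pos[:, None, :]      # (N, N, 2) pairwise separations
        r2 = (diff ** 2).sum(axis=-1) + eps ** 2      # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                 # no self-force
        return G * (diff * (masses[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

With ε = 0 this is exact Newtonian gravity, and the blow-up you describe comes right back.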
You'd have to use a more sophisticated algorithm with variable time steps for closely interacting particles, I think. And then some way to deal with particles that become bound.
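The usual heuristic, as far as I know, is to shrink the step whenever anything is accelerating hard, something like this (hypothetical names; eta is a tuning constant, eps the softening length):

    import numpy as np

    def choose_dt(acc, eps=0.05, eta=0.02, dt_max=1e-3):
        # Small steps during close encounters, capped at dt_max otherwise.
        a_max = np.max(np.linalg.norm(acc, axis=-1))
        return dt_max if a_max == 0 else min(dt_max, eta * np.sqrt(eps / a_max))

Production codes go further and give each particle its own (block) time step, which is where the bookkeeping gets hairy.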
Yup. Molecular dynamics simulations are hard-core, and the parallelization is highly non-trivial, so you need fast fabrics and complex message passing to scale. One of the raisons d'être for supercomputers.
> Evaporation accelerates until the cluster basically explodes
This sounds very much like what happens at the end of a black hole's lifetime as it evaporates by Hawking radiation. Any chance that's a real connection?
No, but also maybe yes, but it's way over my pay grade as a physicist. In short, I think the fact that gravitational systems have negative specific heat capacity is very relevant.
As gravitational systems lose energy, the "temperature" of the ensemble of particles goes up. (I.e. objects with smaller orbits have higher velocities.)
It is probably not exactly an accident that this relationship holds for black holes as well: the Hawking radiation formulas give a larger and larger temperature for black holes with smaller and smaller event horizons. The Hawking radiation stuff is built on entropy/temperature relationships, so I think there is actually some kind of connection there.
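For reference, the formula in question is the Hawking temperature,

    T_H = ħc³ / (8π G M k_B)  ∝  1/M

so radiating energy shrinks M and raises T. Negative heat capacity again, with the same sign as the virial-theorem result for clusters.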
There might even be something baked into the energy conditions / Bianchi identities of GR that is manifesting in that way, but I'm speculating.
Well yeah, probably, but sometimes very weird analogies between systems turn out to produce real physics, and it doesn't even seem impossible to me that there's something similar happening behind the event horizon of a black hole. I'm just hoping someone smarter than me has already done the math.
There is a relationship in that both the "Spongebob cluster" and a black hole have negative heat capacity. The math is already there in the virial theorem. See <https://en.wikipedia.org/wiki/Heat_capacity#Negative_heat_ca...>. Detailed treatments of the non-relativistic case you can find in an undergrad astronomy textbook; for the relativistic singleton case, ehhhhh, I don't think you're ready for it, but Wald's General Relativity §12.5 & §§14.3-14.4 would be a good choice (and he shows you the math, which has been known for several decades). For relativistic orbits I think you need to go beyond textbooks, although you could probably start with numerical relativity textbooks like Baumgarte & Shapiro or Alcubierre; I don't have either handy to double-check where they go with thermodynamics. Oh, and the paper I linked in a sibling comment has a good and relevant bibliography. <https://academic.oup.com/mnras/article/516/3/3266/6668807>
However it's best to think of "the black hole" as the entire spacetime (in Hawking's 1974 treatment and similar; or alternatively out to somewhere in the asymptotic flatness), in which there are two regions without a horizon, one to the past of the event horizon formation, and one to the future of final evaporation.
What goes into the horizon doesn't stay in; therefore what happens inside is part of the picture (and has been speculated about for fifty years! Fifty!)
Yeah, technically, in the current formulation, but I think at this point the smart money is on Hawking radiation being correlated with something on the inside. For instance, this is my favorite solution to the information-loss problem: that the info is carried away by the Hawking radiation.
As far as I am aware, the virtual particles near the event horizon of a black hole behave nothing like stars in a galaxy. For a start, stars are much more massive (by many, many orders of magnitude) and aren't influenced by quantum mechanical effects in the same way as individual particles.
I like your comment, it provoked some catch-up reading, so forgive me pecking a bit at what you wrote.
The visualization is pretty limited, and was probably just a fun way for the astrophys student to use his choice of tools. (Which we should encourage! His work is great!)
Digging deeply into the consequences of this example of obviously physically improbable initial conditions could be somewhere between entertaining and enlightening, but would quickly go over the head of early astro students first encountering the virial theorem and negative heat capacity. "Getting the physics right" would be a significant research project. You'd also generally have to do without animations, unless you are very patient and have a big compute time budget (see the acknowledgments section of the MNRAS paper below, and my final paragraph).
Motivated by the previous paragraph's themes I found a recent (2022) MNRAS open access paper which among other things has a good overview of the (recent) state of the art in modelling star clusters, some good teaching material in section 2, and in section 3 we see their software packages. I'd suggest you begin with the summary in section 6.1.
In principle simulating the Spongebob cluster could produce information in the top two graphs of figure 7, and a 2D version of one or two of the graphs in figure 6. The additional information in those figures is certainly interesting, but nowhere near as pretty as the Spongebob animation. And I'm not sure what extracting similar figures for the Spongebob simulation would be useful for.
Conversely, the Spongebob simulation could not generate figure 9, and that figure is especially interesting to me (q.v. §4.2 & final sentence in §3.2).
And finally, "The movies of the full simulations, from which Fig. 6 was produced, will be made available upon reasonable request as well and will be uploaded publicly in the future". Not sure the upload ever happened, although I didn't really search much (e.g. it's not linked at the arxiv <https://arxiv.org/abs/2205.04470> or in DDG media searches on title or a few of the authors).
You have to have an X account to do that now right? Whenever I try to scroll through an account’s tweets lately, I just get a list of their tweets from random dates in no apparent order.
Correct me if I'm wrong, but they are probably defining initial conditions for the particles: positions (to create the Spongebob shape; although I wish there were a way to convert an image into these n bodies; see the sketch below), velocities, and masses.
Then you set up the gravitational interactions between them and iteratively update their positions and velocities over time using some numerical integration method (Euler's method? Runge-Kutta?) to simulate their motion.
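Something like this toy sketch, I'd guess (not the author's actual code; the filename and particle count are invented, and accelerations() is the softened-gravity helper sketched upthread):

    import numpy as np
    from PIL import Image

    # Initial conditions: sample particle positions from the dark pixels
    # of a line drawing; this also answers the image-to-n-bodies wish.
    img = np.asarray(Image.open("spongebob.png").convert("L"))
    ys, xs = np.nonzero(img < 128)                    # dark pixels
    idx = np.random.choice(len(xs), size=2000, replace=False)
    pos = np.column_stack([xs[idx], -ys[idx]]).astype(float)  # flip y axis
    vel = np.zeros_like(pos)
    masses = np.ones(len(pos))

    # Time integration: plain explicit Euler, the simplest possible choice.
    dt = 1e-3
    for _ in range(10_000):
        acc = accelerations(pos, masses)
        pos += vel * dt   # fine for a demo, but the total energy drifts;
        vel += acc * dt   # see the symplectic-integration comment downthread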
Looks like it. A.K. Dewdney did a computer recreations column in Scientific American back in the eighties with a nice exposition of how to do a basic star cluster simulation. Only practical with a lot fewer “stars” on the gear I had available then. It used Euler’s method iirc.
I have a copy of The Magic Machine on my shelf which I (unintentionally) stole from my university library at the end of my senior year. His work was pretty influential on me, inspiring me to keep exploring programming at a time when my day to day work in the subject was often painfully boring.
Speaking of which, Dewdney recently passed away on March 9th but I haven't seen any notice of it on HN or other CS-related sites. I know he alienated a lot of people with his unfortunate turn to conspiracy theory after 9/11, but he really contributed a lot to popular interest in CS and recreational mathematics in the 1980s and 1990s through his Scientific American articles and his books.
I was going to link to the article about symplectic integration, but it's toooo technical. This other article explains the problem: https://en.wikipedia.org/wiki/Energy_drift
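The short version: explicit Euler pumps energy into the orbits every step, while a symplectic scheme such as leapfrog keeps the energy error bounded at the same cost per step. A kick-drift-kick sketch (assuming the accelerations() helper from upthread):

    def leapfrog_step(pos, vel, masses, dt):
        # Symplectic kick-drift-kick: the energy error oscillates
        # instead of growing without bound as it does with Euler.
        vel += 0.5 * dt * accelerations(pos, masses)   # half kick
        pos += dt * vel                                # full drift
        vel += 0.5 * dt * accelerations(pos, masses)   # half kick
        return pos, vel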
...but 1 second per time step is a lot. I wonder how fast it would've been if it wasn't in Python. I think we as a society are doing a whole lot of people (especially physicists) a disservice by mainly teaching them Python rather than languages which are literally hundreds of times faster in some cases. Python works well when you just want to glue together existing fast C or Fortran libraries with Python APIs, but it quickly proves limiting.
I've personally been caught by the Python trap, where the easiest way to do something was to write a Python script, and it worked, but then I wanted to process more data or whatever, and suddenly Python was a huge limiting factor. I then spend more time parallelizing the Python code to make it run faster, and it becomes a beast that's hard to debug, maxes out 32 CPU cores, and is still 10x slower than what a single-threaded Rust program would've been, and I regret my choice of language.
EDIT: Also, this is in no way anti-Python, I think it's a nice language and there are many uses where it is wholly appropriate.
Right, and compiling Python to machine code does get rid of the overhead associated with opcode dispatch... but it's not magic; Python is still a wildly dynamic language. It's mainly that dynamism that makes it slow, not the fact that each opcode has to go through a switch statement in the CPython interpreter.
To get significantly better performance with a JIT, you need one which analyzes the code at runtime to detect patterns, such as "this function is always called with an integer argument" or "the dictionary passed to this function always has this shape", like what V8 does. AFAIK Numba doesn't do that.
(Though if I'm wrong and there are benchmarks which show Numba coming close to something like Rust on normal dynamic Python code, please do correct me! I haven't done much research on Numba specifically.)
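For what it's worth, where Numba does shine is statically typed numeric loops over NumPy arrays, i.e. not "normal dynamic Python" at all. A minimal example of the happy path (my own toy, not a benchmark):

    import numpy as np
    from numba import njit

    @njit  # compiled on first call, specialized to the argument dtypes
    def potential_energy(pos, masses, eps):
        n = pos.shape[0]
        total = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                dx = pos[j, 0] - pos[i, 0]
                dy = pos[j, 1] - pos[i, 1]
                total -= masses[i] * masses[j] / np.sqrt(dx * dx + dy * dy + eps * eps)
        return total

Feed it dicts, objects, or anything string-keyed and you're back to interpreter territory (or a compile error), which is exactly the limitation you're pointing at.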
It's slightly exaggerated; the Python program might not have been able to fully utilize all cores, since it's really just 16 cores with hyperthreading. But it's not unreasonable: a 150x speed-up isn't unexpected when going from Python to C/Rust/C++ in number-crunching code, and 150/16 ≈ 9.4 (16 is based on the assumption that the gains from hyperthreading and the losses from imperfect parallelism more or less cancel out).
I don't think I have the code for these large-ish data processing experiments I did any more, but it would be fun to make some toy problems with large amounts of data and create comparable Python and C implementations and create a blog post with the results.
Can we assume that you weren't able to use Numpy here, or at least that your inner loops weren't using it? It can be faster than C++ when you don't happen to know all the optimizations the Numpy library writers knew.
Yeah, I'm just talking about normal Python code here. If you're able to express your problem such that numpy or scipy or pytorch or NLTK or some other C/Fortran library does all the number crunching, Python's performance is less of an issue
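The difference in a nutshell (toy illustration):

    import numpy as np

    xs = np.random.rand(10_000_000)

    # Pure-Python loop: one interpreter round-trip per element.
    total = 0.0
    for x in xs:
        total += x * x

    # Same computation pushed down into compiled code: one call.
    total = float(np.dot(xs, xs))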
IIRC two galaxies can pass through each other with no collisions, because the space between the stars is so large compared to the size of the stars. Quite how well that applies to Spongebob is an open question.
You don't quite RC, because of the interstellar medium (and sometimes the circumgalactic medium). In a galaxy-galaxy collision, individual stars are almost collisionless, depending on the stellar density of the galaxies. However, if there's even a small fraction of the normal amount of gas in both galaxies, the interaction will be very X-Ray bright from gas friction. The behaviour of gas tends to dominate many if not most galaxy-galaxy collisions.
NGC 2207 https://www.chandra.harvard.edu/photo/2014/ngc2207/ ("Colliding galaxies like this pair are well known to contain intense star formation. Shock waves — like the sonic booms from supersonic aircraft — form during the collision, leading to the collapse of clouds of gas and the formation of star clusters")
Spongebob pretty clearly doesn't account for gas. The initial velocities are also not really comparable to a galaxy-galaxy interaction. And of course it's in two spatial dimensions, so fewer ways that any pair of particles might avoid each other than in our three.
And to answer the grandparent comment, the gravitational interaction of particle-particle close calls in Spongebob are suppressed, so no collisions. Gravitation is the only force modelled, so no clumping. I arrived at that from scattered Q&A comments in the twitter thread, but also it's pretty clear from eyeballing.
Are we assuming the objects retain their form? If I smash two iron cubes together and apply enough gravity, eventually I'll get fusion of the atomic nuclei. As the highly compressed charged particles start to collide more and more, some will repel, some will fuse.
As long as the 2nd law is in play, I could see the two objects as having passed through each other: on one side you have particlesA[cube1_particles, new_fusion_particles], and on the other side you have particlesB[cube2_particles, new_fusion_particles]. Both of these would sum to the same thermodynamic energy as [cube1, cube2].
Compared to the size of their nuclei, sure. But a collision happens way before the atomic nuclei get close to each other, as you already mentioned. Therefore it doesn't compare as an analogue.
Additionally, the densities of the systems are also very different (not in absolute terms of course)
Electromagnetism is some 40 orders of magnitude stronger than gravity. Also, gravity only attracts; it doesn't repel the way two approaching atoms do. So it is really quite different.
Imagine a pencil drawing of Spongebob, but using tiny dots rather than strokes, collapsing onto itself, forming what looks like two separate masses (of sponge particles?) which then collapse together in the end.