Has anyone explored the computational nature of Newtonian gravity? That is, if you carefully set up a set of masses in some manner and let them interact through their gravitational pull, what kinds of things can you compute? Is gravity Turing complete? Is it a pushdown automaton? A finite state machine? Can you use choreographies like these, coupled together, to create register machines or simulate cellular automata?
Not quite a computation, but there is a striking example (due to Xia) where 5 particles interact and one particle is "kicked off" to infinity in finite time. Further details can be found in
There might be some way to devise an analog computer, although it's not clear to me how you would implement arithmetic operations.
Operational amplifiers can be configured to generate analogous orbits by wiring them up as integrators. By chaining multiple op-amps together and measuring voltages at different points, it's possible to observe chaotic behavior in the measurements, just as if you had integrated the differential equation by hand from some initial condition. This configuration is an analog computer. In fact, I wouldn't be surprised if op-amp circuits existed with perfectly analogous mathematical behavior to the orbits described in the paper.
The problem with gravitational computing using n bodies probably lies with establishing initial conditions, and with inhibiting the effects of neighboring systems.
This paper seems to be missing some related work, for instance the work of Carles Simó.
Greg Minton created a computer-assisted proof system for showing that there must exist a choreography with parameters within a certain distance of some given approximate parameters. This isn't just a matter of more floating point precision; it certifies that there is a critical point of the action of the right kind. http://gminton.org/#gravity and http://gminton.org/#cap
Note: that only works in Internet Explorer (not Edge, of course), since all other browsers have disabled support for NPAPI plugins, including Java. I guess it shouldn't be too hard to implement a JVM in JavaScript that's good enough to run simple applets like this, but sadly nobody is doing it.
"The applet viewer operates on HTML documents, but all it looks for is embedded applet tags; any other HTML code in the document is ignored. Each time the applet viewer encounters an applet tag in an HTML document, it launches a separate applet viewer window containing the respective applet."
As far as security goes, I don't think appletviewer claims to be more secure than the browser. It has a similar security policy/sandbox.
We live in a boring planetary system. Can you imagine, if we lived in one of those choreographies, how difficult it would be for Kepler to devise the laws of planetary motion?
For one, the science is dead wrong. Just because the three-body problem doesn't have an analytical solution doesn't mean that the orbits are chaotic. Case in point: the very solar system in question is now known to have at least one planet in a very stable orbit. Nor does it mean that they are particularly difficult to calculate in many circumstances; you just can't use algebraic methods. So I was turned off because the what-if proposed was fantasy, not science fiction. Fwiw I had the same response to the movie Arrival, which many people seemed to love.
Second, the proposed solution that allows the aliens to predict future seasons and therefore develop culture and technology reflects human problem solving. This is intellectually lazy and wrong. A more realistic solution, of the type I prefer to read about, would have the aliens evolve different ways of thinking that are more compatible with iterative methods than with algebraic manipulation. To such a mind, abstract reasoning would be really difficult, but it would be entirely natural to specify initial conditions and solve iteratively. Such a mind would find solving ordinary differential equations as easy as breathing, but solving x + 3 = 4 would need a computer.

Even more interestingly, how would those differences in mind structure reflect the development of their language, culture, and societal structure? These are all very fascinating questions that seemed extremely obvious to me from the setup, and the sort of thing many hard sci-fi authors like Reynolds or Clarke would tackle. It's a book I would read and rave about.

But instead we get the same old trope of basically human aliens invading Earth because it is "habitable", never mind that habitability should be a non-concern to any self-respecting interstellar species, for whom climbing down a gravity well is probably a net negative. Planets are in fact shitty places to live in terms of galactic real estate, even more so when they are infected by foreign biology.
Finally, the behavior of humans throughout the books is utterly unrelatable. I'm limited in how much I can say without spoiling the story. But in short, every scientist is portrayed as an incurious (wtf?) nihilist easily swayed by the suggestion to kill all humans. And the rest of humanity hardly fares better -- they willingly and dejectedly walk to their own execution when they are told to later in the series. I have no explanation of this other than bad writing.
How stable are these orbits? Do they tend to degenerate into simpler forms over time? If they're stable, do we see any asteroid triplets in such configurations? Why not?
Reminds me of Cixin Liu's 'Three Body Problem', about alien invaders that want to escape their own chaotic three-body planetary system by taking over ours... The first two books of the trilogy are just spectacular science fiction.
Guess I have some work to do to update my mobile app Three Body! (It presents galleries of solutions up to those found in 2013, and lets you explore your own initial placements)
The authors of the new solutions have published all the initial conditions - so hopefully it won't be too hard. (Although they use an 8th-order RK integrator, while the one I have is a regularizing Bulirsch-Stoer integrator.)
I wonder how the authors can claim such great certainty in their results; after all, it is all based on number crunching. Some floating-point error or similar is bound to creep in...
The numerical methods used for this kind of calculation are engineered to compensate for numerical errors (floating-point error and errors inherent to the integrators).
They take advantage of a priori knowledge about the laws of physics, in particular the conservation of mechanical energy and angular momentum. Predictor-corrector is one family of methods. Other methods rely on convergence as timesteps change.
There is a lot of literature on numerical integration applied to celestial mechanics and new methods being released every year. These methods are tailored to this problem space.
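To make that concrete, here is a minimal sketch of the bookkeeping involved (in Python; the function names and the G = 1 normalization are illustrative, not taken from any particular paper): you track the relative drift of the conserved quantities alongside the integration, and growing drift flags integration error.

```python
import numpy as np

G = 1.0  # gravitational constant in normalized simulation units (an assumption)

def total_energy(pos, vel, mass):
    """Kinetic plus pairwise potential energy of an N-body system."""
    kinetic = 0.5 * np.sum(mass * np.sum(vel ** 2, axis=1))
    potential = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            potential -= G * mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return kinetic + potential

def total_angular_momentum(pos, vel, mass):
    """Vector sum of m_i * (r_i x v_i)."""
    return np.sum(mass[:, None] * np.cross(pos, vel), axis=0)

def conservation_drift(pos, vel, mass, e0, l0):
    """Relative drift of both invariants since the initial state (e0, l0)."""
    de = abs((total_energy(pos, vel, mass) - e0) / e0)
    dl = np.linalg.norm(total_angular_momentum(pos, vel, mass) - l0) / np.linalg.norm(l0)
    return de, dl
```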
When I looked into making a solar system simulation a while back, I actually found it shocking how much goes into monitoring and ensuring conservation of energy/momentum in these models.
Once I was aware of this issue, though, it was even more shocking to me that many papers on climate models I came across do not even mention whether they monitored adherence to conservation laws. There is a disconnect there: I would think this adherence is either a big deal (as the practice in astronomy would suggest) or not (as climate research would suggest), not something that changes by subfield.
When just doing point-gravity dynamics (no friction, collisions, etc.), energy and momentum take care of themselves as long as the math is clean. I say this confidently because I made a solar system simulation[1] with no attention to conservation of energy at all. In practice, star systems also shed large amounts of angular momentum through solar wind[2], but I expect this is often not modelled.
Climate models are an entirely different matter, and, with respect, your scepticism of them on this point is misplaced.
It says in your readme[1], though, that your solar system simulation is off by 70k km (regarding Earth's orbit) after only 1 year relative to the JPL ephemerides, for "unknown reasons". I don't see how you can use that to justify not checking for conservation of momentum/energy.
Come on, it doesn't say "unknown reasons"; it says "It is possible the discrepancy is due to the limitation of javascripts 64bit FP numbers rather than subtle algorithmic or physics error".
70k km in a year (on a billion-km-long orbit) was a lot closer than I had hoped for, considering it's a 64-bit Newtonian model. But this is beside the point: having worked on the project a fair bit, I understand that any model adjustments to conserve energy or momentum are non-physical 'fudges' to correct deficiencies of the model. There are no laws of physics dedicated to adjusting energy, momentum, or information in order to conserve them. They are conserved by the mathematical correctness of every physical law; a proper account of that eludes me, but it is certainly explained somewhere.
Right, you don't know how much of the error the floating point accounts for. You also brought up loss of angular momentum to solar wind, etc. These are all just speculations. That may be fine for the purposes of your simulation, but not for predicting whether asteroids will impact the Earth, or the influence of human activity on Earth's climate.
Also, I know you are supposed to use a symplectic integrator to ensure conservation of energy when doing these simulations, so it is not somehow hardcoded into the laws, e.g.: https://en.wikipedia.org/wiki/Leapfrog_integration
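For illustration, the kick-drift-kick form of leapfrog is only a few lines. A sketch in Python with illustrative names (not anyone's production code):

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Newtonian acceleration on each body from all the others."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick step. Leapfrog is symplectic, so the energy
    error stays bounded rather than drifting steadily."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    pos = pos + dt * vel                             # full drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    return pos, vel
```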
We could certainly calculate how much error was introduced by rounding, if we had the time and inclination. It is useful to understand that about numerical modelling: it should not be necessary to accept any indeterminable discrepancies.
Conservation of energy, momentum, and information is hardcoded into all known laws of physics. It is our integration algorithms which do not all ensure conservation - but that is achievable, with care, using basic low-order schemes such as Verlet and leapfrog integration.
>"Conservation of energy, momentum, information is hardcoded into all known laws of Physics. It is our integration algorithms which do not all ensure conservation of information - but that is carefully achievable with basic first order schemes, such as Verlet and leapfrog integration."
I am not sure about the "conservation is encoded" part, but I can believe it. Either way, in practice the simulations will encounter issues along these lines if they are not careful. For example, I left your simulation running for what amounted to ~120 yrs and Saturn lost all her moons. This is the best pic I could get, sorry (hopefully it is reproducible): https://i.imgur.com/C7emd28.png
Since you do not report anything about conservation of energy/momentum, how do we know that isn't the problem?
>For example, I left your simulation running for what amounted to ~120 yrs and Saturn lost all her moons.
Thanks! Heh, it is rather complex; there are all sorts of things... Most likely the new gravity function set for that model messed up; it can apply attraction between groups of objects if they are distant and weak enough.
But honestly, I am certain that it is something in particular, and not something mysterious about conservation of energy/momentum which I have not come across yet, because there is no formula in that gravity simulation which should be able to destroy momentum. In the other models, there are quasi-physical friction and pressure functions which do destroy momentum, and they have some rough adjustments to compensate for that somewhat, but accuracy is lost whether things are compensated for or not. Tracking energy and momentum can be a good way of tracking the accuracy of a model, but if you make any adjustments to correct it, that is a kind of fudge not based in true physics. It is telling that you expressed scepticism of climate models because of a perceived lack of such adjustments, while the opposite is actually true: climate models have to include a lot of calibrated adjustment and quasi-physics, because the whole system is too complex to represent at all relevant scales. This should not condemn the work of computer scientists specialised in climate modelling to undue scepticism.
> But honestly, I am certain that it is something in particular, and not something mysterious about conservation of energy/momentum which I have not come across yet, because there is no formula in that gravity simulation which should be able to destroy momentum
It's a well-established fact that numerical integration methods either gain or lose energy (in particular, the Runge-Kutta family is known to lose energy over time). For celestial mechanics simulations, a special class of numerical integration methods called "symplectic integrators" is used, and their purpose is to conserve energy and angular momentum.
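You can see the difference with a toy experiment: integrate a circular two-body orbit (G = M = 1, simulation units assumed) with a non-symplectic scheme and with leapfrog, and compare the energy drift. A sketch (explicit Euler stands in here for the non-symplectic case; RK schemes drift more slowly but the same way):

```python
import numpy as np

def acc(p):  # acceleration of a test body around a unit central mass, G = 1
    return -p / np.linalg.norm(p) ** 3

def energy(p, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(p)

p_e = np.array([1.0, 0.0]); v_e = np.array([0.0, 1.0])  # circular orbit
p_l, v_l = p_e.copy(), v_e.copy()
dt, e0 = 0.01, energy(p_e, v_e)

for _ in range(100_000):
    # explicit Euler (non-symplectic): energy drifts steadily
    p_e, v_e = p_e + dt * v_e, v_e + dt * acc(p_e)
    # leapfrog (kick-drift-kick, symplectic): energy error stays bounded
    v_l = v_l + 0.5 * dt * acc(p_l)
    p_l = p_l + dt * v_l
    v_l = v_l + 0.5 * dt * acc(p_l)

print("Euler energy drift:   ", energy(p_e, v_e) - e0)
print("leapfrog energy drift:", energy(p_l, v_l) - e0)
```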
> but if you make any adjustments to correct it, that is a kind of fudge not based in true physics.
When you are numerically integrating differential equations that model physical phenomena, you're not doing "true physics" but an approximation thereof.
And an approximation that makes the Earth drift 70,000 km per year, or Saturn's moons drift out of orbit in a few hundred years, is a very bad approximation by scientific standards.
The methods used for celestial mechanics calculations need to be precise over thousands to millions of years. And the way they work is to "fudge" with the numerical methods to preserve energy and angular momentum. It's a much better approximation of "true physics" than your toy simulation.
Your assumption that the issues are due to floating-point errors is incorrect. 64-bit double precision is millimeter-accurate out to the orbit of Neptune. That's good enough for scientific applications.
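That figure checks out as a back-of-the-envelope calculation:

```python
eps = 2.0 ** -52         # machine epsilon for a 53-bit mantissa
r_neptune = 4.5e12       # Neptune's orbital radius in metres (~30 AU)
print(eps * r_neptune)   # ~1.0e-3 m, i.e. about a millimeter of resolution
```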
If you're interested in this, you could take a look at this scientific grade N-body simulator and the methods it uses:
https://github.com/hannorein/rebound
> Your assumption that the issues are due to floating-point errors is incorrect.
It's not as easy as declaring the resolution of absolute position. If you really want to spend the time figuring it out, then examine the difference in scale between the bodies' positions, velocities, and accelerations given different model timestep values; what happens during the squaring, summing, rooting, and multiplying by G of those values; and even (because I have looked into this before) the subdivided timestep values involved in tempering this model's data so that it becomes a perfectly stable quantised version of its analogue values (with no need for the kind of corrections Runge-Kutta is notable for).
That link is very interesting to me, thank you. You can notice that it includes some quite simple integration schemes as 'symplectic' (which basically means stable without need for gross correction).
>64-bit double precision is millimeter-accurate out to the orbit of Neptune.
It's a good point of order though; I got this mixed up the last time I looked at issues in that system. I got fed up trying to explain that the integration scheme used there is "symplectic": it doesn't require gross adjustments, so the many bugs that have sprung up and will spring up shouldn't be patched with gross adjustments. If the model is gaining or losing energy, I can't blame it on the integrator or on missing physics.
So regardless of that, thanks for the correction; I had got it into my head that a 53-bit mantissa gave somewhat less than mm resolution at Neptune. I'm still wary that the resolution could easily cause problems, but accept there is a fair chance these could be avoided with careful calculations.
>"Quite precise data for thousands of NEOs is available, but its needs converted to position and movement vectors to use in this simulation."
https://github.com/strainer/fancy
Thanks for the good discussion. I'm still unconvinced that asking climate models to report on conservation of energy is "undue scepticism", but I really don't know how big a deal it is. Anyway, I looked at my old code and here is some R to get the JPL data and convert it to state vectors; hopefully you can use/translate it to test your sim more easily:
Thank you for bearing with me - I've been under the weather and have trouble explaining myself at the best of times.
The tricky thing I ran into with the JPL NEO data was that they don't seem to release the coordinates for them as they do for the major solar system bodies. It looks like they just serve out a selection of orbital and observational measurements, which someone has figured out how to convert already, but it could take me ages.
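For reference, the conversion itself is standard textbook material. A rough Python sketch of it (illustrative names, not anyone's actual code; assumes heliocentric Keplerian elements with angles in radians and mu = G * M_sun):

```python
import numpy as np

def elements_to_state(a, e, i, Omega, omega, M, mu):
    """Convert Keplerian elements (semi-major axis a, eccentricity e,
    inclination i, node Omega, arg. of perihelion omega, mean anomaly M)
    to heliocentric position and velocity vectors."""
    # Solve Kepler's equation M = E - e*sin(E) by Newton iteration
    E = M
    for _ in range(50):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    # True anomaly and radius, then state in the orbital (perifocal) plane
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1 - e * np.cos(E))
    pos_pf = r * np.array([np.cos(nu), np.sin(nu), 0.0])
    vel_pf = (np.sqrt(mu * a) / r) * np.array(
        [-np.sin(E), np.sqrt(1 - e ** 2) * np.cos(E), 0.0])
    # Rotate from the perifocal frame to the ecliptic frame
    def rot_z(t):
        return np.array([[np.cos(t), -np.sin(t), 0],
                         [np.sin(t),  np.cos(t), 0],
                         [0, 0, 1]])
    def rot_x(t):
        return np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t),  np.cos(t)]])
    R = rot_z(Omega) @ rot_x(i) @ rot_z(omega)
    return R @ pos_pf, R @ vel_pf
```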
A priority for the project is tidying and documenting the code so that other people might use it. Recently, a facility for efficient collision processing was built; next I want to accommodate multi-point objects and types of bonds, to start more down-to-earth simulations, which is what I'm most interested in - dynamic spatial awareness for vacuum cleaners and things :)
> There are no laws of physics dedicated to adjusting energy, momentum, or information in order to conserve them. They are conserved by the mathematical correctness of every physical law
The physical laws guarantee conservation, yes. But not all numerical methods for solving differential equations do.
Hanno Rein, one of the experts in the field, just published a new algorithm which is exactly time-reversible for floating point numbers. Figure 1 of the paper is pretty incredible:
It would surely depend on how numerically (un)stable the calculations were.
Some of the stuff we did in fixed-income derivatives was rock-solid robust, and some would go off the rails at the slightest provocation, so I don't think that your thesis is valid as it stands.
I'm not sure I follow. Both responses to my post seem to suggest that "following conservation laws is not an important issue for climate models".
Ok, I am open to that, but I don't really follow the argument that the stability of the simulation tells us that violations of conservation laws will not lead to inaccurate results.
I suspect the fact that the Earth itself is an effectively infinite sink of momentum, and space is an infinite sink of energy, has something to do with why conservation laws aren't as much of a thing for climate models.
I did that in Falling Bodies, my 1990s ragdoll simulator. At the end of each step, the system energy was computed, and if it had changed, the timestep was reduced.
Most simulators don't do that. Older, bad approaches could inject energy, which is why some games would suddenly have things fly apart. Newer approaches err in the direction of losing energy, which doesn't look so bad.
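A sketch of that scheme, for concreteness (illustrative names; `step` and `energy` stand in for whatever integrator and energy function the simulator already has):

```python
def adaptive_step(state, dt, step, energy, tol=1e-9, min_dt=1e-6):
    """Advance one step; if total energy changed by more than tol
    (relatively), retry with half the timestep."""
    e0 = energy(state)
    while dt > min_dt:
        candidate = step(state, dt)
        if abs(energy(candidate) - e0) <= tol * abs(e0):
            return candidate, dt
        dt *= 0.5  # energy drifted too much: halve the step and retry
    return step(state, dt), dt
```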
The paper has a section on this, around the end of page 4, which is really interesting. The short version is: They compared their double-precision results to extremely high-precision Taylor expansions (with theoretical 70+ digit accuracy, and calculated in 100+ digit accuracy), and found that they matched to the accuracy you'd expect.
That doesn't guarantee that the orbits are perfectly periodic, I suppose, but it does suggest that the orbits are stable with respect to rounding errors up to those you get from using doubles.
Well, they can't, because the system might still fly apart at some future point. It is possible that these are close to periodic solutions, but to make it rigorous what they need to do is show that the principle of least action is satisfied for some configuration close to the approximations. This was done by Greg Minton well before the posted paper (abstract [1]).
In the text they say they use high-precision floating point calculations to reduce this type of error:
> At first, using the obtained initial conditions, we checked the 137 periodic orbits by means of the high-order Taylor series method in the 100-digit precision with truncation errors less than 10^−70, and guaranteed that they are indeed periodic orbits.
> Besides, we use the CNS with even smaller round-off error (in 120-digit precision) and truncation error (less than 10^−90) to guarantee the reliability of these 27 families.
They do reference some software (e.g., "dop853"), but I'm not familiar with the details of those ODE solvers.
That sounds really impressive. I jumped to comment before reading the paper. I found a nice gem in David Tong's lectures [1] in the chapter on dimensional analysis. He mentions the Planck length Lp ~ 10^-35 m, for which "All indications are that this is the shortest distance scale possible; at distances shorter than Lp, space itself is likely to have no meaning". So 10^-70 sounds like a nice margin.
> He mentions the Planck length Lp ~ 10^-35 m, for which "All indications are that this is the shortest distance scale possible; at distances shorter than Lp, space itself is likely to have no meaning". So 10^-70 sounds like a nice margin.
That's an apples-to-oranges comparison, though. The 10^-70 number is a relative error, while the Planck length is just a length. The numerical computations in the paper were likely done in a system of "simulation units" where the actual lengths, velocities, forces, etc. are normalized to values of order unity. This has advantages for the numerical aspect, in terms of preserving floating point accuracy. But it also means that to translate it to a physical system (e.g., a triple system of stars), the simulation needs to be scaled to physical units. The 10^-70 that's quoted just means that the values should be numerically accurate to 1 part in 10^70.
If you wanted to compare the (floating point) accuracy of the simulation with the Planck length, you would need to scale the simulation to a physical size and see what the 10^-70 fractional error would translate to in physical units.
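As a rough illustration, if the simulation were scaled so that one length unit is 1 AU (an arbitrary assumption, just to put numbers on it):

```python
rel_err = 1e-70           # relative accuracy quoted in the paper
au = 1.496e11             # assumed physical scale: 1 AU in metres
planck = 1.616e-35        # Planck length in metres
print(rel_err * au)           # ~1.5e-59 m of absolute positional error
print(rel_err * au / planck)  # ~9e-25: ~24 orders below the Planck length
```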
Until 2013, only 3 or so solutions to the 3-body problem were known. Now we have over 150. This sounds incredible, given how fundamental the problem is, but: so what?
Will this change astrophysics - for example - in any way?
The 3-body problem is an unsolved math problem. There is an equation, but no one knows how to solve it analytically, and the only way to deal with it is number crunching.
Stable orbits are solutions to the problem, but partial ones. The more solutions are known, the better the chance of finding a more generalized solution. Maybe even the general solution to the problem.
I do not know how it can change astrophysics (it's just Newtonian gravitation, not Einsteinian), but it can bring new methods/ideas to mathematics, and then improved math will change everything. Maybe.
From an engineering point of view, maybe the problem is solved. From a maths point of view, it isn't.
For example, one cannot say for sure whether these trajectories are periodic or not. 100-digit precision says "yes", but there is no guarantee that 101-digit precision wouldn't say "no, they are not periodic".