One of the first programs I ever wrote was a simulator for a planet orbiting a star, using a naive difference-equation approximation to Newton's law. I was a bit disappointed to see the planet reliably spiral into the sun.
The main thing is that something like Euler's method (naive iterative approximation) doesn't guarantee conservation of energy. I believe that this is why planetary dynamics are usually handled with Lagrangian equations rather than the naive approximation approach.
Edit: It would be nice to see what the author's system does for two bodies as a sanity check. A three-body system is indeed chaotic but still conserves energy - would this system do that?
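(To make the non-conservation concrete, here's a minimal forward-Euler sketch of my own - not the author's code - in units where GM = 1; the orbital energy drifts instead of staying at its initial value:)

    import math

    GM = 1.0
    dt = 0.001

    # Circular orbit at r = 1 with GM = 1: circular speed is 1, total energy is -0.5.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0

    for _ in range(100_000):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3
        # Forward Euler: advance position and velocity using only the old state.
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt

    print(0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y))  # has drifted away from -0.5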
It wasn't the first program I wrote, or even a program I wrote, but in middle school a friend wrote a 3-body integrator in BASIC (sun, earth, moon). That single 20 line program shaped my entire world view for a long time (decades), implying to me that we could, if we had powerful enough computers, simulate all sorts of things... even entire universes (which was also an idea that I explored with cellular automata).
It's not a particularly helpful worldview and can often be harmful if you're working with complex systems, or systems that require more than O(n log n) per step, or any number of other real-world problems scientists face.
Many years later I was impressed at how well astronomy packages work (i.e., "it's time T at location L, what is the angle of the sun in the sky?") and stumbled across this paper by Sussman: https://web.mit.edu/wisdom/www/ss-chaos.pdf which shows some pretty serious work on predicting the future positions of solar system objects.
>simulate all sorts of things... even entire universes
You also assumed that chaos is just a measurement problem - that you could simulate the entire universe if you knew the initial conditions precisely enough. There were two nice recent papers[1][2] showing that to predict some orbits you would need accuracy finer than the Planck length, or else some systems are fundamentally unpredictable.
I'd love to see convincing evidence that we could simulate the universe using only standard physical laws. IIUC we don't have a way to do that or reliably say whether it's possible. It's also not that interesting a problem because it's so impractical.
But it would be interesting to see if we could simulate a universe and observe if it in any way resembled ours. Even if the simulated universe was much "smaller". (It would of course have to be.)
> if we had powerful enough computers, simulate all sorts of things
In reality, we're having a hard time precisely simulating even two atoms interacting, with all the quantum effects, diffraction, gravity (however minuscule), etc.
Our universe is surprisingly detailed.
64-bit floats aren't even close enough to precisely simulate the real world. What's the precision of the mass of an electron? What's the precision of its coordinates or motion vectors? Maybe the Planck length for coordinates, maybe not. What about acceleration? Can it be arbitrarily low? An electron's gravitational field from a billion light years away should theoretically affect us (in time).
The assumption in your comment is that any of this is real to begin with and logic isn't being short-circuited in our brains to make everything "check out" even if it doesn't.
If you simulate a universe with cube blocks from Minecraft, it doesn't matter as long as your users think the simulation is real.
And since you are simulating their consciousness, you can easily short circuit the train of thought that would cause doubt, or that would attempt logic, etc., so they truly believe their Minecraft cube world is incomprehensibly detailed down to the atoms and galaxies in the sky.
They'd happily go on the whiteboard, and prove their theories with math like 2+2=5 and everyone would agree because they literally couldn't disagree - they would feel in their hearts and minds that this is perfectly correct. There's nothing to say that's not happening now.
In fact, this is how I see most advanced civilizations performing simulations. The compute savings would be immense if you could just alter user consciousness as opposed to simulating an actual universe.
I always find skepticism like this to be really interesting, since in the end we could always be getting fooled by the Deus Deceptor or something. That being said, let me take a stab at being anti-skeptical for the fun of it.
I work around people who do "Computational Chemistry", which is basically running quantum physics calculations. These tend to be done in order to either understand the properties of materials, or to understand the reasons why reactions happen. The results are more advanced materials and better performing reactions. An early and famous example of such technology is the laser. A more typical modern example would be searching for Zeolite catalysts which have particular properties, or trying to create surface coatings which protect implants from being eaten by the immune system, or on which ice cannot freeze.
Basically, I believe the advanced calculations to be correct because they lead to things which are (eventually) used in daily life.
In nearly all situations, these advanced calculations bear only a limited relationship to the underlying physics occurring in material systems. A lot of simulation work involves twiddling parameters until you get the result you want to see, and then just publishing that one simulation. It's sort of a post-hoc retro-causality problem. Many of the things you describe came about because of a combination of immense amounts of lab work (most of which were failures), some theoretical concepts, and a person willing enough to twiddle params until they stumble upon something that works, after which they can optimize the parameters.
It is true that simulations produce results which may not reflect the underlying system if the simplifications and fudge factors are incorrect. Thus fiddling with parameters is part of the process.
In the example I gave of searching for zeolite catalysts, the simulations were just used to identify candidates for labs to study. I don't remember the exact numbers, but I think it brought the list of candidates down from hundreds to less than 10. The majority of these candidates were at least somewhat effective. Unless we believe that pretty much all of those hundred candidates would have been effective, then the advanced calculations were doing some work.
The question is, is all that work actually just done because of parameter twiddling? I don't think so. Consider that neural networks are often used lately in order to provide computationally simpler models of various physical phenomena. They can do a somewhat better job if fed with a lot of real data, but they use at least thousands of times more parameters than the simple quantum physics calcs with fudge factors. Thus I think it is safe to say that the structure of the quantum physics calcs does meaningfully model some part of reality. (Unless, as xvector points out, our memories are being continuously overwritten to make reality seem consistent)
It's also good to note that the fudge factors (read: parameters) and quantization are done because it would be too computationally difficult to model the parts of the system modeled by fudge factors for systems with a useful amount of atoms in them, and we just don't know how to compute ODEs for complex systems in continuous time and space. In simple systems, (e.g. 2 photons interacting) analytical solutions for ODEs can be found, no fudge factors are needed for computation, and the computed results match the experimental results to within measurement error.
> Basically, I believe the advanced calculations to be correct because they lead to things which are (eventually) used in daily life.
I think you are missing my point - if you can short-circuit logic, you will never be able to know whether your calculations are correct (but you will believe they are).
Whether the outputs are used in daily life or not is irrelevant. You don't truly know if that is happening because you do not know what the fuzz factor is in the simulation.
Is the night sky the same as it was yesterday, or is it generated on the spot and your memory edited? The latter is more compute efficient.
Does your coworker look the same, or is the fuzz factor in the sim very high and they have a new face/body generated every day, with your memory edited to match?
Etc. Whether the outputs of the equations you described are used or not is irrelevant, because it would be far more compute-efficient to just not have them mean anything and to fuzz their existence/workability.
Indeed, I can't prove that it is or isn't the case that my thoughts and memories aren't being constantly overwritten to make reality consistent. I don't have a firm belief one way or the other, but I act like reality does intrinsically make sense.
Reason being, if reality is consistent, then acting as though it is achieves my goals. If reality isn't consistent, or is consistent in a way that differs from what I am capable of comprehending, then I am unable to compute any pattern of behavior that would be helpful to achieving my goals.
Thus it only makes sense to me to act like reality is consistent. I think that if I am acting this way, then it makes sense for me to say that I "believe" reality is consistent, in a non-thought-overriding way.
EDIT: Looking at your comment again, I think that you think it is likely that reality should be simple because computing that would be easier. If we are stuck in a simulation by more advanced beings, then it is possible that compute power is a limiting factor, or they may just have computers so powerful that simulating us could be a cinch.
The simulation scenario is easy to imagine. However, just because I can't imagine scenarios besides "it just is this way", "God did it", and "We are in a simulation" doesn't mean such scenarios don't exist.
> But if it's a proper simulation, base reality must be even more detailed. Like, a lot more.
Not necessarily. You could create the feeling or impression of detail on-demand - consider a 2D fractal in software that you can zoom into infinitely. It's not more detailed than our base reality, it's actually quite a simple construct.
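(A toy version of "detail on demand", just to illustrate - the whole construct is a few lines of my own, yet you can query it at any zoom level:)

    def in_mandelbrot(cx, cy, max_iter=1000):
        """Mandelbrot membership test: a trivial rule that answers queries
        at any magnification without ever storing a 'full' picture."""
        zx = zy = 0.0
        for _ in range(max_iter):
            zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
            if zx * zx + zy * zy > 4.0:
                return False
        return True

    # Sample two points an arbitrarily tiny distance apart near the boundary.
    print(in_mandelbrot(-0.7453, 0.1127), in_mandelbrot(-0.7453 + 1e-9, 0.1127))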
One imagines that post-singularity overlords don't have to worry about IEEE 754. Float is likely not the right representation here, but a double is enough to represent solar-system-scale distances at centimeter precision.
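A quick back-of-the-envelope check of that claim (my own numbers, not from the comment):

    import math

    # Neptune's orbital radius is roughly 4.5e12 m.  math.ulp gives the gap
    # between adjacent doubles at that magnitude, i.e. the best position
    # resolution a 64-bit float offers at solar-system scale.
    r_neptune = 4.5e12  # meters, approximate
    print(math.ulp(r_neptune))  # ~0.00098 m, i.e. about a millimeter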
It's true that Euler integration is about as crude as you can get, but you don't need to reach for Lagrangians for improvement; something like Verlet integration can already bring dramatic gains with fairly small changes.
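For illustration, a minimal velocity-Verlet sketch (my own, assuming a single planet around a star fixed at the origin and units where GM = 1 - not anyone's production code):

    import math

    GM = 1.0  # gravitational parameter of the central star (assumed units)

    def accel(x, y):
        """Acceleration on the planet from a star fixed at the origin."""
        r3 = (x * x + y * y) ** 1.5
        return -GM * x / r3, -GM * y / r3

    def velocity_verlet_step(x, y, vx, vy, dt):
        """One velocity-Verlet step: second order, and far better at keeping
        orbital energy bounded than forward Euler."""
        ax, ay = accel(x, y)
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax2, ay2 = accel(x, y)
        vx += 0.5 * (ax + ax2) * dt
        vy += 0.5 * (ay + ay2) * dt
        return x, y, vx, vy

    # Circular orbit at r = 1: with GM = 1 the circular speed is 1, energy -0.5.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    for _ in range(100_000):
        x, y, vx, vy = velocity_verlet_step(x, y, vx, vy, 0.001)
    print(0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y))  # stays very close to -0.5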
You can convert Euler's method into a symplectic integrator by using v_{n+1} when computing x_{n+1} (semi-implicit Euler). That said, although such integrators (usually of higher order than Euler's) are widely used in celestial mechanics, one is not restricted to them. For example, Bulirsch-Stoer is also widely used even though it isn't symplectic, because it remains accurate (very low energy error) even on long integrations.
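Concretely, the change is tiny - a sketch of the semi-implicit (symplectic) Euler update described above, with my own variable names:

    def symplectic_euler_step(x, y, vx, vy, dt, GM=1.0):
        """Semi-implicit Euler: update the velocity first, then use the *new*
        velocity v_{n+1} to compute x_{n+1}.  Swapping the order of the two
        updates gives back plain, non-symplectic forward Euler."""
        r3 = (x * x + y * y) ** 1.5
        vx += -GM * x / r3 * dt
        vy += -GM * y / r3 * dt
        x += vx * dt
        y += vy * dt
        return x, y, vx, vy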
Would it make sense to explicitly implement conservation of energy?
I.e., use a simple method but calculate the total energy at the beginning, and at each step adjust the speeds (e.g. proportionally) so that the total energy matches the initial value. You'll still always get some difference due to numerical accuracy issues, but that difference won't grow over time.
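Something like this minimal sketch (my own illustration of the idea, with the caveats raised in the replies):

    import math

    def rescale_to_energy(vels, masses, potential_energy, target_total_energy):
        """Uniformly rescale all velocities so kinetic + potential equals the
        target total energy.  Only possible when the required kinetic energy
        is positive - and see the 'flying ice cube' caveat below."""
        kinetic = 0.5 * sum(m * (vx * vx + vy * vy)
                            for m, (vx, vy) in zip(masses, vels))
        needed = target_total_energy - potential_energy
        if needed <= 0 or kinetic == 0:
            return vels  # can't fix it by scaling speeds alone
        s = math.sqrt(needed / kinetic)
        return [(vx * s, vy * s) for vx, vy in vels]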
The method you describe would be an example of what is called a "thermostat" in molecular dynamics (because the speed of molecules forms what we call temperature). Such adjustments to the speed can definitely paper over issues with your energy conservation, but you still have to be careful: if you rescale the speeds naively you get the "flying ice cube" effect where all internal motions of the system cease and it maintains its original energy simply by zooming away at high speed.
Thermostats ensure that the _kinetic energy_ remains constant (on average or instantaneously, depending on how they are implemented). Your parent post wants to enforce the constraint that the total energy remains constant, so it's a bit different from a canonical ensemble (NVT) simulation - this is a microcanonical ensemble (NVE) simulation. This means you don't know whether you should correct the positions (controlling the potential energy) or the velocities (controlling the kinetic energy).
Basically, there will be error in the positions and velocities due to the integrator used, and you don't know how to patch it up. You have one constraint: the total energy should be constant. But there are 2(3N-6) degrees of freedom for the positions and velocities (for more than two bodies). The extra constraint doesn't help much!
Edit: Also, the only reason thermostats work is because the assumption is that the system is in equilibrium with a heat bath (i.e. bunch of atoms at constant temperature). So there is an entire distribution of velocities that is statistically valid and as long as the velocities of the atoms in the system reflect that, you will on average model the kinetics of the system properly (e.g. things like reaction rates will be right). In gravitational problems there is no heat bath.
> I believe that this is why planetary dynamics are usually handled with Lagrangian equations rather than the naive approximation approach.
https://en.wikipedia.org/wiki/Lagrangian_mechanics