Interactive Lorenz Attractor (malinc.se)
102 points by webdva on Dec 28, 2019 | 17 comments



There is a collection of animated Lorenz attractors with interactive code, each created with 140 (or fewer!) characters of JavaScript: https://www.dwitter.net/h/lorenz Disclaimer: I created some of them :-)


Have you seen this? It's amazing: glChAoS.P / wglChAoS.P, a real-time 3D strange attractor GPU explorer: https://www.michelemorrone.eu/glchaosp/dtAttractors.html#Lor... There's also a recent "Simulating comet's journey in Lorenz" post on Hacker News: https://news.ycombinator.com/item?id=21945598


Thank you for the appreciation.

The correct glChAoS.P / glChAoSP and wglChAoS.P / wglChAoSP website is:

https://www.michelemorrone.eu/glchaosp/

glChAoS.P / wglChAoS.P is a real-time 3D GPU explorer of strange attractors and hypercomplex fractals, handling over 200 million particles.

It's freeware, open source, and both native multi-platform and WebGL... and it displays not only the Lorenz attractor but over 100 object types across attractors, hypercomplex fractals, and DLA/DLA3D (diffusion-limited aggregation).

Thanks again.

BrutPitt / Michele Morrone


It's awesome.


I <3 generative art - this is really well done, and the detailed explanations are appreciated. And I can especially enjoy this after playing through Everybody's Gone to the Rapture https://en.wikipedia.org/wiki/Everybody%27s_Gone_to_the_Rapt...


Excellent!

Just one thing: on a touchscreen interface with Firefox, zooming out works well, but zooming in is sometimes jittery or doesn't register at all. (Panning and rotating work fine.)


Nice visualization (particularly the butterflies)! If the OP is here, what ODE solver are you using BTW?


Not OP, but a really common one is Runge-Kutta 4 (RK4); it's a balance between complexity and stability of the solution.

https://en.m.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods
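
For concreteness, here's a minimal RK4 step for the Lorenz system in Python; the sigma/rho/beta values, step size, and initial condition below are just the classic textbook choices, not necessarily what the OP uses:

    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Right-hand side of the Lorenz system, dstate/dt = f(state)."""
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(f, state, dt):
        """One classic fourth-order Runge-Kutta step."""
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate one trajectory and collect the points for plotting.
    state = np.array([1.0, 1.0, 1.0])
    dt = 0.01
    trajectory = [state]
    for _ in range(10000):
        state = rk4_step(lorenz, state, dt)
        trajectory.append(state)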


There is no advantage to using a better ODE solver on this example because the sensitivity to errors is so high. You can measure this using the uncertainty quantification methods in the Julia ODE solver suite:

https://docs.juliadiffeq.org/latest/analysis/uncertainty_qua...

where divergence on the Lorenz attractor tends to occur by t=80 or so even with accuracy of 1e-16.

But the funny thing about chaotic problems is that the shadowing theorem holds, which states:

>Although a numerically computed chaotic trajectory diverges exponentially from the true trajectory with the same initial coordinates, there exists an errorless trajectory with a slightly different initial condition that stays near ("shadows") the numerically computed one.

So you might as well just use Euler's method with high error, because it's backwards stable for this calculation, i.e. it gives a trajectory on the attractor, just the wrong trajectory. But since every method gives an O(1)-error wrong trajectory after a short finite time, you might as well use the cheapest, most error-prone, but convergent method.
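
To make that concrete, here's a rough Python sketch (the step size, run length, and 1e-10 perturbation are arbitrary choices for illustration): plain forward Euler with a small step still traces out the butterfly, but two trajectories started a hair apart separate to O(1) well before the end of the run, which is exactly what happens with fancier methods too.

    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def euler(state0, dt, n_steps):
        """Forward Euler: low order and cheap, but the trajectory stays on the attractor."""
        states = [np.asarray(state0, dtype=float)]
        for _ in range(n_steps):
            states.append(states[-1] + dt * lorenz(states[-1]))
        return np.array(states)

    a = euler([1.0, 1.0, 1.0], dt=0.001, n_steps=50000)          # t = 0..50
    b = euler([1.0, 1.0, 1.0 + 1e-10], dt=0.001, n_steps=50000)  # tiny perturbation

    # The pointwise separation grows exponentially and saturates at the size of
    # the attractor, even though both runs look like the same butterfly.
    print(np.linalg.norm(a - b, axis=1)[::5000])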


While we are both here, if you are not aware of this paper it might give you a giggle (3500th order simulations on a supercomputer): https://arxiv.org/abs/1305.4222

They simulated Lorenz reliably up to 10000 time units; I managed 1400 units using MPFR on my NUC with my own code (500th order, took about 13 hours!).


Haha, great! This is 100% a "might as well put it on arXiv since no reviewer would ever see this as significant" paper, lol. It's at least very fun. I'm going to have to save this one.


Cheers, but I already have my own solver, built by implementing this approach: https://projecteuclid.org/euclid.em/1120145574 ;) Just wondering, though...


Interesting. Do they go on to compare it to standard methods? I.e., what's the big benefit here?


That paper needs an update. Taylor methods utilize AD effectively to get something that seems efficient. But there's a lot of theory explaining why they aren't efficient in practice, mostly because of truncation error coefficient sizes. Also, since the time of the article, much better high-order explicit RK methods have been derived, along with improved schemes for adaptive-order explicit extrapolation. When we benchmark these against Taylor methods that use an optimal form of AD (provided by TaylorIntegration.jl), we still see about a 10x performance difference at the edge of the Float64 range:

https://benchmarks.juliadiffeq.org/html/DynamicalODE/Henon-H...

https://benchmarks.juliadiffeq.org/html/DynamicalODE/Quadrup...

As you get outside of the Float64 range, the parallelized explicit extrapolation methods seem to do better. This is something we're going to get much more precise benchmarks on in JuliaDiffEq now, though, since we have optimized generic implementations of each of the methods and can thus use Float128 and arbitrary precision on all of them simultaneously to really figure out the cutoff.

But thinking about efficiency and accuracy misses the point of Taylor methods. Taylor methods have a way of getting rigorous error bounds on the integration (like interval arithmetic, but for the continuous ODE), which is the real reason they can be useful, with the extra cost of course.


Erm, I'm sure I know that name from somewhere! I won't dispute your claims about alternative methods, but I've found it hard to find any open documentation about this stuff, so I am unaware of it myself. Then again, I hereby announce that I am an _amateur_ at this, so I'm not taking criticism personally!

In fact, one of the reasons I wrote my stuff (FWIW, in C and Python) is that I couldn't follow the code in TaylorIntegration.jl ;) My stuff is _really_ simple, and I know _exactly_ how it works.


High order means much less dependence on step size. Automatic (exact) differentiation means no roundoff from finite differences with a small step size. They use a very clever and involved code generation approach; I have just hardcoded the recurrences in my implementation (much simpler): https://github.com/m4r35n357/ODE-Playground

It is not a black-box approach like RK4; it depends on the model using a (hardly at all) restricted set of functions. Piecewise definitions are OK though.
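
As a rough illustration of what "hardcoding the recurrences" means for Lorenz, here's a toy sketch in Python (my own simplified version, not the code from the repo above; the order and step size are arbitrary):

    def taylor_step(x0, y0, z0, h, order=20, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One Taylor-series step for the Lorenz system.

        The jet coefficients come from the recurrences
            x[k+1] = sigma * (y[k] - x[k]) / (k + 1)
            y[k+1] = (rho * x[k] - y[k] - (x*z)[k]) / (k + 1)
            z[k+1] = ((x*y)[k] - beta * z[k]) / (k + 1)
        where (u*v)[k] is the Cauchy product of the two jets.
        """
        x, y, z = [x0], [y0], [z0]
        for k in range(order):
            xz = sum(x[j] * z[k - j] for j in range(k + 1))  # (x*z)[k]
            xy = sum(x[j] * y[k - j] for j in range(k + 1))  # (x*y)[k]
            x.append(sigma * (y[k] - x[k]) / (k + 1))
            y.append((rho * x[k] - y[k] - xz) / (k + 1))
            z.append((xy - beta * z[k]) / (k + 1))

        def horner(c):
            """Evaluate the truncated series at t = h by Horner's rule."""
            acc = 0.0
            for coeff in reversed(c):
                acc = acc * h + coeff
            return acc

        return horner(x), horner(y), horner(z)

    # Usage: repeat the step to advance the trajectory.
    state = (1.0, 1.0, 1.0)
    for _ in range(1000):
        state = taylor_step(*state, h=0.01)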


Wow, that looks cool and it's very inspiring. Thanks for sharing.



