> Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.
What makes the author so sure that it hasn't? We can only take the workings of the universe for what they are. We have no context with which to determine what is a flaw and what is just physics. One could say that quantum tunneling looks a hell of a lot like a common collision detection bug, but that could just be because how reality actually works is unintuitive to minds evolved for a more applicable set of rules. We'd never know the difference. In fact, it probably isn't meaningful to say there even is a difference.
These anecdotes remind me of stories where someone asks a genie for a wish, and the genie technically grants it but in a way that is not what the wisher intended. For example (cribbed from another thread): "I want to be rich", so the genie renames you Richard. Ironically, a very religious buddy of mine claimed that I, as a software engineer, am in part responsible for releasing a "Jinn", which could have unintended consequences for humanity.
Computers are the genies and golems of the fairytales. Stories that demonstrate the folly of confusing what you ask with what you mean are as old as humanity itself, but the stories we now tell are not fictional anymore: we went and built those golems and genies and now share the planet with billions of them!
It's interesting that so much neural net weirdness emerges from exploiting errors in physics simulators or floating point math.
I am now expecting the next generation of perpetual motion machines to include AI to try to take advantage of physics bugs in our own universe.
On a related note, does anyone know how you might go about fixing a simulator that allows collisions to generate more energy/momentum than was initially supplied?
Or otherwise violates known invariants?
A classic mistake is to use something like "velocity += acceleration*time" (this is called Euler integration). It looks reasonable, and it's good enough for a toy project, but it doesn't conserve energy unless the timesteps are infinitely small.
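A minimal sketch of the problem in Python (a made-up unit-mass spring with F = -k*x; the names and constants are just for illustration): with forward Euler the total energy ratchets upward every step instead of staying constant.

    # Toy example: explicit (forward) Euler on a unit-mass spring, F = -k*x.
    # Total energy E = 0.5*v**2 + 0.5*k*x**2 should be constant, but drifts up.
    k, dt = 1.0, 0.1
    x, v = 1.0, 0.0
    energy = lambda x, v: 0.5 * v**2 + 0.5 * k * x**2

    e0 = energy(x, v)
    for _ in range(1000):
        a = -k * x       # force from the *old* position
        x += v * dt      # position from the *old* velocity
        v += a * dt      # "velocity += acceleration*time"
    print(energy(x, v) / e0)  # much greater than 1: the sim gained energy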
A more sophisticated mistake is to use something like Runge-Kutta Integration: highly accurate in terms of position, but it is not symplectic so the total energy will drift over time. Think of your simulated world as a stack of graph paper sheets, where each sheet represents a surface of constant energy. Runge-Kutta will take you very close to the ideal (x,y) point - but not necessarily on the same sheet. Verlet Integration may be a little further from the right point each time, but by its mathematical form it will always stay on the same sheet.
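A matching sketch of the symplectic fix, assuming the same toy oscillator: velocity Verlet lands slightly off the exact point each step, but its energy stays bounded instead of drifting.

    # Toy example: velocity Verlet on the same unit-mass spring.
    # The trajectory wobbles a little, but the energy stays on its "sheet".
    k, dt = 1.0, 0.1
    x, v = 1.0, 0.0
    energy = lambda x, v: 0.5 * v**2 + 0.5 * k * x**2

    e0 = energy(x, v)
    a = -k * x
    for _ in range(1000):
        x += v * dt + 0.5 * a * dt**2   # advance position with the current acceleration
        a_new = -k * x                  # recompute the force at the new position
        v += 0.5 * (a + a_new) * dt     # advance velocity with the averaged acceleration
        a = a_new
    print(energy(x, v) / e0)  # stays close to 1 even over long runs

Even the cheap fix of updating the velocity first and then the position with the new velocity (semi-implicit Euler) is symplectic and avoids the drift.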
Bad title (in the original), because the examples in the paper are mostly drawn from evolutionary computation and artificial life, with only a few relating to neural networks being used in those fields.
In my opinion, it is very, very hard to fully sandbox a clever AI. It can find side channels, or it can turn innocuous stuff into a computing device...
e.g., "Accidentally Turing Complete" shows how things that were not intended to be computing devices are actually Turing complete, including Magic: The Gathering, the card game.
Those labeling nets keep resurfacing in the news. They're more of a hack than a proper implementation: they use a focus-based algorithm to detect prominent features and just glue them together with NLP, without using any context. If one detects scissors and paper, it might say "Man uses scissors to cut painting" despite no man or painting being present, simply because there's a high statistical correlation between the two in its dataset. That's it.
Most of the examples actually come from the unexpected behaviour of evolutionary algorithms.
My favourite such example (and it doesn't look like it made it into the paper the article is based on) was an attempt to create a random number generator via a genetic algorithm that controlled the development of an electronic circuit. The unexpected way the algorithm solved the problem was to produce a radio.
Why is the author so sure we'd need microbes to discover exploits in a simulation? A "floating point error" problem would be far more obvious, like being unable to measure sub-Planck lengths or model singularities.