Have you read some of von Neumann's work on self-reproducing automata? Do you think it could have interesting applications in this area, perhaps some kind of neural reconfiguration into agents?
For example, what are the CA rules that spawn "life-like" creatures, which are then capable of performing certain actions and eventually self-reproducing into other actors?
I've done some exploratory work on CAs and Life (unfortunately I haven't been able to secure an academic position so far). My main area of interest right now is motivation design. Feel free to get in touch if you're interested.
(For example, I would be interested in getting the lifeforms to act and in engineering motivation functions for them: making them "ethical", "collaborative" (or, experimentally, selfish), "compliant" with external requests, etc.)
Perhaps this kind of environment, with its inherent reliability, could prove to be a simpler, more elegant building block for agents and environments.
Hi, I'm the first author of this article.
I've explored self-replication, both with and without variation, quite a bit; we have a paper at ALIFE 2021 that is all about self-replication with neural networks.
One thing I point out in the soon-to-be-published paper, where we use feedforward neural networks instead of Neural CAs for self-replication, is that CA rules can generally be seen as environment rules, while the CAs themselves can specialise and diversify through their state vectors. There is a wide spectrum along which to choose how many rules you embed into the environment and how many you let the models themselves learn for self-replication. As von Neumann said (paraphrasing): we don't want to explain away the problem by making the environment too complex, nor make it so simple that progress becomes too difficult.
That said, I already have some unpublished results showing ways to get traditional Neural CAs to self-replicate while also retaining functional capabilities (like persisting a pattern). But as they stand, you could easily describe them as mostly "environmental rules". There is certainly room for plenty of research in this area; I encourage researchers to focus on finding interesting, imperfect self-reproduction.
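To make the spectrum between environment rules and learned rules a bit more concrete, here is a rough NumPy sketch of a generic Neural-CA-style update (illustrative only, not the architecture from our papers): the neighbourhood perception and the alive-masking play the role of hard-coded environment rules, while the small per-cell network is the part that would actually be trained.

```python
import numpy as np

# Minimal Neural-CA-style update (illustrative sketch, not the paper's model).
# "Environment rules" = the hard-coded parts: what each cell perceives and the
# alive-masking. The learned part = the per-cell update network on state vectors.

rng = np.random.default_rng(0)

H, W, C = 32, 32, 16              # grid size and per-cell state-vector length
state = np.zeros((H, W, C))
state[H // 2, W // 2, :] = 1.0    # seed a single "alive" cell in the centre

# Parameters of the per-cell update rule (randomly initialised here; in practice
# they would be trained, e.g. to grow, persist, or replicate a pattern).
W1 = rng.normal(0, 0.1, (3 * C, 64))
W2 = np.zeros((64, C))            # zero-init so the initial update is a no-op

def perceive(x):
    """Environment rule: each cell sees itself plus simple neighbour gradients."""
    dx = np.roll(x, -1, axis=1) - np.roll(x, 1, axis=1)
    dy = np.roll(x, -1, axis=0) - np.roll(x, 1, axis=0)
    return np.concatenate([x, dx, dy], axis=-1)       # shape (H, W, 3*C)

def step(x):
    """One CA step: fixed perception + learned residual update + alive masking."""
    h = np.maximum(perceive(x) @ W1, 0.0)              # ReLU hidden layer
    x = x + h @ W2                                     # learned residual update
    alive = x[..., :1] > 0.1                           # environment rule: alive mask
    return x * alive

for _ in range(10):
    state = step(state)
```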
Adversarial images targeting an image classifier have been shown to transfer to separately trained models (i.e., models with different weights or architectures from the model the adversarial image was constructed to target).
I'm curious whether the adversarial CA reprogramming techniques are similarly transferable. That is, do the adversarial CA and/or the adversarial perturbation matrix transfer to separate CAs (trained on the same task) with different weights or architectures from the original CA that was targeted?
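To be concrete about what I mean by "transfer", here is a rough toy sketch (stand-in dynamics and made-up names, nothing from the article): craft a perturbation against one CA, then check whether the same perturbation still steers a separately initialised CA toward the attacker's target.

```python
import numpy as np

# Toy transferability check (hypothetical stand-in dynamics, not the article's models):
# craft a perturbation against CA "A", then apply the same perturbation to an
# independently initialised CA "B" and compare how well the attack still works.

rng = np.random.default_rng(0)
H, W, C = 16, 16, 8

def make_toy_ca(seed):
    """Stand-in for a trained Neural CA: fixed random weights, neighbour coupling."""
    w = np.random.default_rng(seed).normal(0, 0.05, (C, C))
    def step(x):
        nbr = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
        return np.tanh(x + nbr @ w)
    return step

ca_A, ca_B = make_toy_ca(1), make_toy_ca(2)

seed_state = np.zeros((H, W, C))
seed_state[H // 2, W // 2] = 1.0            # single seed cell
target = np.full((H, W, 1), 0.5)            # attacker's target for channel 0

def attack_loss(ca_step, delta, steps=20):
    """Run the CA from a perturbed seed and measure distance to the target."""
    x = seed_state + delta
    for _ in range(steps):
        x = ca_step(x)
    return np.mean((x[..., :1] - target) ** 2)

# Crude random-search "attack" crafted against A only.
best_delta = np.zeros_like(seed_state)
best_loss = attack_loss(ca_A, best_delta)
for _ in range(200):
    cand = best_delta + rng.normal(0, 0.01, seed_state.shape)
    loss = attack_loss(ca_A, cand)
    if loss < best_loss:
        best_delta, best_loss = cand, loss

print("loss on A (perturbation crafted on A):", best_loss)
print("loss on B (same perturbation):        ", attack_loss(ca_B, best_delta))
```

If the loss on B stays close to the loss on A, the perturbation transfers; if it reverts to the unperturbed baseline, it doesn't.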
This looks very interesting! I've only skimmed the abstract for now, but I will look more closely tomorrow. Back in 2016-17 I did a master's thesis on a combination of (much simpler) cellular automata and neuroevolution, and had some thoughts about how other neural "paradigms", like backpropagated deep nets and, indeed, adversarial training, could be used with CAs. It's great to see that this is an ongoing topic of research.