Neural Network Quine (arxiv.org)
138 points by crazyoscarchang on March 20, 2018 | 25 comments



Quine, the descriptor in the actual article, is both more accurate, and more appropriate for a forum made of programmers.

The fact that they made it into a quine with non-trivial, useful side effects is actually really interesting, regardless of the language/paradigm.


What’s the significance of this paper?


It's mostly interesting from an artificial life perspective. I'm just going to quote from the motivations sections here:

Biological life began with the first self-replicator (Marshall, 2011), and natural selection kicked in to favor organisms that are better at replication, resulting in a self-improving mechanism. Analogously, we can construct a self-improving mechanism for artificial intelligence via natural selection if AI agents had the ability to replicate and improve themselves without additional machinery.

Neural networks are capable of learning powerful representations across many different domains of data (Bengio et al., 2013). But can a neural network learn a good representation of itself? Self-replication involves some level of self-awareness, and is a small step towards developing introspective capabilities in neural networks.

In a HyperNetwork (Ha et al., 2017), a small recurrent neural network is used to generate the weights for a larger one, which can be viewed as a meta-controller enforcing a soft weight-sharing constraint between layers of a recurrent neural network. Similarly, we can view self-replication as a mechanism that enforces a soft weight-sharing constraint between a network and past versions of itself, which is helpful for lifelong learning (Silver et al., 2013) and potential discovery of new neural network architectures.

Learning how to enhance or diminish the ability for AI programs to self-replicate is useful for computer security. For example, we might want an AI to be able to execute its source code without being able to read or reverse-engineer it, either through its own volition or interaction with an adversary.

Self-replication functions as the ultimate mechanism for self-repair in damaged physical systems (Zykov et al., 2005). The same may apply to AI, where a self-replication mechanism can serve as the last resort for detecting damage, or returning a damaged or out-of-control AI system back to normal.


>Biological life began with the first self-replicator

I never understood why people assume that those biological self-replicators were in any way analogous to quines. Quines generate information from within themselves. Proto-life would have to assemble copies of itself from molecules available in the environment. It seems like completely different principles of work.


> Proto-life would have to assemble copies of itself from molecules available in the environment.

Quines also live in an environment: the programming language they're written in. It's a very different _kind_ of environment; for instance the physical world has preservation of mass while it's really easy to copy bits. But both quines and proto-life can only replicate in the right environment, and they do so via an interaction between the information contained in themselves and the environment in which they are embedded. They seem quite analogous to me.


A rock is analogous to a car from the point of view of a child.

That's the problem with analogies: they are observer-dependent.

A quine shares no causal property with DNA.


Oh, but it does, if you treat "assembling a copy of yourself from atoms in the environment" as equivalent to the implementation details of printing to standard output.


> causal property

"assembling a copy of yourself" is a description you are imposing on a series of events.

The events in the case of a digital computer are certain oscillations of the electric field. That shares nothing with the chemical assembly of biological material.


The whole field of artificial intelligence is missold on analogy.

"Neurons" aren't neurons. Replication isn't replication.

AI proponents speak of photographs of animals made of ink (electrical fields in silicon) as if they were themselves animals, inviting us to marvel at how life-like the piece of paper is.


The "assembling of a copy" is implemented by a combination of printing to standard output and compiling (and running) the resulting program - both of which are parts of the environment, not of the quine.
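To make that concrete, here is a classic minimal Python quine (my own illustration, not from the paper): the program only formats a string, and the actual "copying" is done by the environment, i.e. the interpreter's string formatting and stdout.

```python
# A minimal quine: the program holds a template of itself and asks the
# environment (the interpreter and stdout) to emit the filled-in template.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two code lines above (comments excluded), so the replica can itself be run to replicate again.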


If you're interested, there's an entire section of the book Godel, Escher, Bach that talks about exactly this connection. (Chapter 16, "Self-Ref and Self-Rep")


The title was updated in the last 3 hours to include the word quine instead of self-replicator. GP comment was written in the previous context.


1. It's cool.

2. "We suggest that a self-replication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection."


> What’s the significance of this paper?

We're progressing from static neural nets (like the old TensorFlow) to dynamic nets (like PyTorch) and even nets generated on the fly (hypernets), which is great for creating architectures that better fit the problem, case by case.

This paper shows another way to generate a net from a net, but this time with self-replication. Self-replication could be useful for evolutionary optimisation and, as they say in the paper, could become a rudimentary form of introspection. The idea of a "self-replication loss term" is very exciting; it's rare that we see such innovation in loss functions.
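As a toy illustration of what such a loss term looks like (entirely my own sketch with invented shapes; the paper uses an indirect, vector-quine construction rather than this direct form):

```python
import numpy as np

# Toy "self-replication loss": train a tiny network to output its own
# parameter vector. Shapes are invented so output dim == parameter dim;
# the paper notes that a layer with A inputs and B outputs already has
# A*B weights, so this direct approach doesn't scale.
rng = np.random.default_rng(0)
theta = rng.normal(size=4)            # parameters of the "network"
x = np.ones(4)                        # fixed input

def net(x, theta):
    return np.tanh(theta * x)         # elementwise toy network

for _ in range(500):
    out = net(x, theta)
    # gradient of sum((net(x, theta) - theta)**2) with respect to theta
    grad = 2 * (out - theta) * ((1 - out**2) * x - 1)
    theta -= 0.05 * grad

replication_loss = float(np.sum((net(x, theta) - theta) ** 2))
```

Gradient descent drives the replication loss toward zero, i.e. toward a parameter vector that is (approximately) a fixed point of the network.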


Sex sells.


Would be interesting to use this as a training algorithm -- have multiple self-replicating networks that mate and copy portions of their weights in random patterns... eerie.


We already have the mating part with NEAT. Link: http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf

Essentially it's a neural network represented by a series of genes. New genes arise when new links form between neurons, existing links are disabled, or a link is split (with an in-between neuron inserted). This, as you can see, allows for a wide range of network representations. Because these variations are tagged with sequential IDs, we can mate two neural networks by combining their genes in order. They never conflict, and the result will be similar to both parents.
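A simplified sketch of that ID-based gene alignment (a toy of my own, not the reference NEAT code, and omitting NEAT's fitness-based handling of disjoint/excess genes):

```python
import random

# Toy NEAT-style crossover: each connection gene carries a global
# innovation ID, so two genomes can be aligned by ID. Matching genes are
# inherited from either parent at random; genes present in only one
# parent are copied as-is, so IDs never conflict.
def crossover(parent_a, parent_b, rng=random.Random(0)):
    ids = sorted(set(parent_a) | set(parent_b))
    child = {}
    for i in ids:
        if i in parent_a and i in parent_b:
            child[i] = rng.choice([parent_a[i], parent_b[i]])
        else:
            child[i] = parent_a.get(i, parent_b.get(i))
    return child

# Genomes as {innovation_id: connection_weight}
a = {1: 0.5, 2: -0.3, 4: 1.1}
b = {1: 0.7, 3: 0.2, 4: -0.9}
child = crossover(a, b)   # always has genes {1, 2, 3, 4}
```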


Create fitness vectors based on their performance over different problem types to create competition in the environment.


haha. alas, paper appears to cover asexual reproduction of neural networks only


That'll be fixed when the Uni PR department issues the press release.


Isn't this essentially looking for a fixpoint?


> In mathematics, a fixed point (sometimes shortened to fixpoint, also known as an invariant point) of a function is an element of the function's domain that is mapped to itself by the function.

https://en.wikipedia.org/wiki/Fixed_point_(mathematics)
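A standard concrete example of finding a fixed point by iteration (my own illustration, unrelated to the paper's method):

```python
import math

# Iterating x -> cos(x) converges to the unique fixed point of cosine
# (the Dottie number, ~0.739085), the point where cos(x) == x.
x = 1.0
for _ in range(100):
    x = math.cos(x)
```

After 100 iterations, x satisfies cos(x) == x to machine precision.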

Looks like it. And they need to resort to some mathematical trickery to achieve this because, as they explain in the paper:

> A neural network is parametrized by a set of parameters Θ, and our goal is to build a network that outputs Θ itself. This is difficult to do directly. Suppose the last layer of a feed-forward network has A inputs and B outputs. Already, the size of the weight matrix in a linear transformation is the product AB which is greater than B for any A > 1.
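The size mismatch in the quote is easy to check directly (A and B below are arbitrary example shapes of my own choosing):

```python
# A linear layer with A inputs and B outputs has A*B weights (biases
# ignored), so for any A > 1 a single B-dimensional output is too small
# to contain the layer's own weight matrix.
A, B = 3, 4
num_weights = A * B   # 12 parameters in this layer alone
output_dim = B        # only 4 values produced per forward pass
assert num_weights > output_dim
```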


So genetic programming, but for NNs? Hasn't that been done?


GP for NNs has, but this isn't GP; "self-reproduction" (as in the original title) here means that the neural network is learning to output a copy of itself. In other words, the NN is learning to be a quine (as indicated by the current title). This is very different from just applying genetic programming to NNs. In GP, you're updating the NN (or whatever) by copying it with mutations/recombinations. Here, the NN itself is learning to create a copy of itself.

The fact that they manage to also train the network to simultaneously perform somewhat complicated tasks is super crazy.


This is a tautology.



