
I’ve been noodling on how to combine neural networks with evolution for a while. I’ve always thought that to do this you need some sort of evolvable genetic/functional units, and I’ve had trouble fitting traditional artificial neurons with backprop into that picture.

My current rabbit hole is using Combinatory Logic as the genetic material, and I’ve been trying to evolve combinators and the like (there is some active research in this area).

Only slightly related to the author’s idea, but it’s cool that others are interested in this space as well.




Then you probably know about NEAT (the genetic algorithm) by now. I'm not sure what has been tried in directly using combinatory logic instead of NNs (do Hopfield networks count?); any references?

I've tried to learn simple look-up tables (9 bits of input, say) using the cross-entropy method (CEM), and it worked well. But it was a very small search space (way too large to try every solution exhaustively, but still a tiny model). I haven't seen CEM used on larger problems, though there is a cool paper about learning Tetris with the cross-entropy method, using a bit of feature engineering.
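
For concreteness, the setup was roughly like this (a minimal sketch, not my actual code; the parity target and hyperparameters are just placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    N_BITS = 9
    TABLE_SIZE = 2 ** N_BITS  # genome = one output bit per possible 9-bit input

    # Illustrative target function the table should reproduce (parity of the input bits).
    def target(x):
        return bin(x).count("1") % 2

    def fitness(table):
        # Fraction of the 512 possible inputs the table gets right.
        return np.mean([table[x] == target(x) for x in range(TABLE_SIZE)])

    # CEM: keep a Bernoulli probability per genome bit, sample a population,
    # then nudge the probabilities toward the elite samples.
    probs = np.full(TABLE_SIZE, 0.5)
    POP, ELITE, SMOOTH = 200, 20, 0.7

    for gen in range(100):
        pop = (rng.random((POP, TABLE_SIZE)) < probs).astype(int)
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-ELITE:]]
        probs = SMOOTH * elite.mean(axis=0) + (1 - SMOOTH) * probs
        if scores.max() == 1.0:
            break

    print("best fitness:", scores.max())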


I am familiar with NEAT; it was very exciting when it came out. But NEAT does not use backpropagation or train individual networks at all: the genetic algorithm combines static neural networks in an ingenious way.

Several years prior, in undergrad, I talked to a professor about evolving network architectures with GA. He scoffed that squishing two "mediocre" techniques together wouldn't make a better algorithm. I still think he was wrong. Should have sent him that paper.

IIRC NEAT wasn't SOTA when it came out, but it is still a fascinating and effective way to evolve NN architecture using genetic algorithms.

If OP (or anyone in ML) hasn't studied it, they should.

https://en.m.wikipedia.org/wiki/Neuroevolution_of_augmenting... (and check the bibliography for the papers)

Edit: looking at the follow-up work on NEAT, it seems they focused on control systems, which makes sense: the evolved network structures are relatively simple.
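
The "ingenious way" mentioned above is basically gene alignment by historical markings. A stripped-down sketch of the crossover (simplified a lot from the actual paper; real NEAT also handles node genes, disabled connections, and speciation):

    import random
    from dataclasses import dataclass

    @dataclass
    class ConnGene:
        innovation: int   # global historical marking, assigned when the connection first appears
        src: int
        dst: int
        weight: float
        enabled: bool = True

    def neat_crossover(fitter_parent, other_parent):
        # Align connection genes by innovation number; this is what lets NEAT
        # cross over networks with different topologies without producing garbage.
        other = {g.innovation: g for g in other_parent}
        child = []
        for g in fitter_parent:
            match = other.get(g.innovation)
            if match is not None and random.random() < 0.5:
                child.append(match)   # matching gene: inherit from either parent at random
            else:
                child.append(g)       # other coin flip, or disjoint/excess gene: keep the fitter parent's
        return child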


Maybe a key innovation would be to apply backpropagation to optimize the crossover process itself. Instead of random crossover, compute the gradient of the crossover operation.

For each potential combination, "learn" (via normal backprop) how different ways of crossing over impact overall network performance. Then use this to guide the selection of optimal crossover points and methods.

This "gradient-optimized crossover" would be a search process in itself, aiming to find the best way to combine specific parts of networks to maximize improvement of the whole. It could make "leaps", instead of small incremental steps, due to the exploratory genetic algorithm.

Has anything like this been tried?


Thermodynamic annealing over a density parameter space



