
> "The results show that the brain has special neural circuits to detect snakes, and this suggests that the neural circuits to detect snakes have been genetically encoded," Nishijo said.

> The monkeys tested in the experiment were reared in a walled colony and neither had previously encountered a real snake.

[1] https://www.ucdavis.edu/news/snakes-brain-are-primates-hard-....




All this is saying is that our neural architecture has specific regions that respond to danger and the instinct of fear. Anyone who has seen an fMRI study knows this is the case. It does not mean actual knowledge of snakes is encoded in our DNA.

Our brains are "trained" on the data they receive during early development. To the degree that evolutionary pressures stored "data," they stored data about how to make our brains (the compute) or our physiology more effective.

Modern ML tries to shortcut this by making the architecture dumb and the data numerous. If the creators of ML were in charge of a forced evolution, they'd be arguing that we need to make DNA hundreds of gigabytes and store all the memories of our ancestors in it.
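
To make that concrete, here's a toy sketch (entirely made up; not a claim about how any real system is built): the model is a single generic linear unit with no built-in knowledge of the task, and a flood of labelled examples does all the work.

    import random

    # The "dumb" architecture: one weight and one bias, nothing task-specific.
    w, b = 0.0, 0.0

    # The "numerous data": 100k examples of an unknown rule (x > 0.3).
    for _ in range(100_000):
        x = random.uniform(-1, 1)
        y = 1.0 if x > 0.3 else 0.0
        pred = 1.0 if w * x + b > 0 else 0.0
        err = y - pred
        w += 0.01 * err * x      # standard perceptron update
        b += 0.01 * err

    print(w, b)  # the rule now lives in the weights, not in the architecture

The architecture contributes almost nothing here; delete the data and you have nothing.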


Yes, we are definitely talking about two different processes. Biology is far more complex, nuanced, and inscrutable. We don't fully understand what all is in our DNA, though we do have strong ideas about it.

When it comes down to it, code is data and DNA is code. There are natural pressures toward having less DNA, so the hundreds of megabytes of DNA in humans might be argued to be somewhat minimal. If you have ever dealt with piles of handcrafted code that is meant to be small, you've likely seen some form of spaghetti code… which is what I liken DNA to. Instead of being written with thought and intention, it was written by predation, pandemics, famine, war, etc.

I agree we tend to simplify our artificial networks, largely because we haven’t figured out how to do better yet. The space is wide open, and biology offers extreme variety in the examples to choose from. Nature “figured out” how to encode information into the very structure of a neural network. The line defining “code” and “data” is thus heavily blurred, and any argument that humans are far superior because of the “reduced number of training examples” misses the billions of years of evolution that created us in the first place.
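
To illustrate what I mean by information living in the structure itself, here's a toy sketch (purely hypothetical; the pattern and threshold are made up): a detector whose wiring hard-codes the assumption that a pattern matters wherever it appears, before a single training example has been seen.

    # Translation invariance is built into the loop over positions, i.e.
    # into the architecture itself, rather than learned from data.
    def detect_anywhere(signal, pattern):
        k = len(pattern)
        return any(
            sum(s * p for s, p in zip(signal[i:i + k], pattern)) > 2.5
            for i in range(len(signal) - k + 1)
        )

    snake_like = [1, 3, 1]  # a stand-in for innate "pattern" weights
    print(detect_anywhere([0, 0, 1, 1, 1, 0], snake_like))  # True
    print(detect_anywhere([0, 0, 0, 0, 0, 0], snake_like))  # False

The "knowledge" that position doesn't matter never appears in any weight; it's the shape of the computation.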

If we decide to do evolution and self-modifying networks, then we will likely look for solutions that converge to the smallest possible network. It will be interesting to watch this play out :)
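
Something like this toy loop, say (everything here is arbitrary: the task, the penalty, the mutation rates): candidates are scored on the task minus a size penalty, so among networks that solve it, the smallest wins.

    import random

    def fitness(genome):
        # Toy task: map the fixed input [1, 2, 3] to the target 6 via a dot
        # product. The size penalty rewards doing it with fewer weights.
        error = abs(sum(w * x for w, x in zip(genome, [1, 2, 3])) - 6)
        return -error - 0.1 * len(genome)

    def mutate(genome):
        child = [w + random.gauss(0, 0.1) for w in genome]
        r = random.random()
        if r < 0.1 and len(child) > 1:
            child.pop(random.randrange(len(child)))  # try a smaller network
        elif r < 0.15:
            child.append(random.gauss(0, 1))         # occasionally grow
        return child

    population = [[random.gauss(0, 1) for _ in range(random.randint(1, 6))]
                  for _ in range(50)]
    for _ in range(500):
        population.sort(key=fitness, reverse=True)
        population = [mutate(random.choice(population[:10])) for _ in range(50)]

    best = max(population, key=fitness)
    print(len(best), best)  # tends to shrink; often drifts toward one weight near 6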


> The line defining “code” and “data” is thus heavily blurred, and any argument that humans are far superior because of the “reduced number of training examples” misses the billions of years of evolution that created us in the first place.

I disagree. The line is quite clear. Our factual memories do not persist from one generation to another, yet persisting them is exactly what modern ML does.

The "data" encoded in DNA is not about knowledge or facts, it is knowledge about our architecture. Modern ML is a like a factory that outputs widgets based upon knowledge of lots of pre-existing widgets. DNA is a like a factory that outputs widgets based upon lots of previous pre-existing factories.

The factory is the "code," or the verb. The widgets are the "data," or the nouns. Completely separable, and objectively so.
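
In code terms (a toy sketch with hypothetical names, just to restate the analogy): the class definition is the factory, the learned memories are the widgets, and only the former crosses a generation.

    class Brain:
        # The "code"/factory: the architecture the DNA specifies.
        def __init__(self):
            self.memories = {}  # the "data"/widgets: facts learned in one lifetime

        def learn(self, fact, value):
            self.memories[fact] = value

    def reproduce(parent):
        # The child inherits the parent's architecture (the class),
        # never the parent's memories.
        return Brain()

    parent = Brain()
    parent.learn("snakes", "dangerous")
    child = reproduce(parent)
    print("snakes" in child.memories)  # False: facts don't inherit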


If you are talking about facts like "cos(0) = 1" then yes, of course I agree; those kinds of facts do not persist simply by giving birth. However, that's a very narrow view of "data" or "knowledge" when talking about biological systems. Humans use culture and collaboration to pass that kind of knowledge on. Spiders don't have culture and collaboration in the same sense. We are wired/evolved with the ability to form communities, which is a different kind of knowledge altogether.

It seems like you are simultaneously arguing that humans (who have a far more complex network, or set of networks, than current LLMs) can recognize a class of something given only one or a few specific examples, while also arguing that the structure has nothing to do with the success of that. The structure was created over many generations, going all the way back to the first single-celled organisms over 3.7 billion years ago. The more successful networks among what eventually became humans survived based on the traits we largely have today. Those traits were useful back then for understanding that one cat might act like another cat without needing to know all cats. There are things our brains can just barely do (e.g., higher-level mathematics) that a slightly different structure might enable... and such structures may have existed in the past, only to be wiped out by someone who could think a little faster.

Also, check out epigenetics. DNA is expressed differently based on environmental factors of previous and current generations. The "factories" you speak of aren't as mechanical as you make them seem.
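
A crude way to picture it (a toy sketch; real epigenetic regulation is vastly richer than an on/off switch): the same genome yields different outputs depending on markers carried alongside it.

    def express(genome, methylation):
        # Genes flagged in `methylation` are silenced rather than expressed.
        return {gene: level for gene, level in genome.items()
                if gene not in methylation}

    genome = {"stress_response": 0.9, "growth": 0.7}
    print(express(genome, methylation=set()))        # plentiful environment
    print(express(genome, methylation={"growth"}))   # lean times: growth silenced

Same "code," different runtime behavior, and some of those markers can be passed to offspring.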

All of this is to say, human biology is wonderfully complicated. Comparing LLMs to humanity is going to be fraught with issues because they are two very different things. Human intelligence is a combination of our form, our resources, and our passed-on knowledge. So far, LLMs are simply another representation of our passed-on knowledge.



