Chomsky's Universal Grammar is just a theory. Just as his formal grammars capture parts of real languages but never all of them, the UG model will never explain everything, or even most of it. Brain biology simply doesn't lend itself to formal rules and grammars.

Here's an alternative theory: what if natural language is simply how the device starts behaving once the number of neurons grows large enough? NL properties emerge from the low-level details of how the brain works. Neurons are simple, but the brain is not. Complex brain properties emerge from trivial parts, the same way our whole bodies emerge from a simple DNA/RNA system. Any regularities in such a system would be too statistical to reduce to a compact set of rules.

Obviously, a powerful enough ML system can infer the system's properties; in fact, it can approximate any function. But that doesn't mean there is some simpler model explaining the details of how the emergent system works.

What is surprising is the way LLMs imitate a stateful function (our brain, with memory, fluid biology, etc.) using a stateless inferred function (the model). I suspect this statefulness might be the answer to the "poverty of the stimulus" problem.
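The stateless-vs-stateful distinction can be made concrete with a toy sketch (the counter example and names below are my own illustration, not anything from the comment): a pure function of the full history can reproduce any stateful process, because the state is reconstructed from the input on every call, much as an LLM re-reads its whole context window to produce each next token.

```python
from typing import List

class StatefulCounter:
    """A stateful process: remembers a running total between calls."""
    def __init__(self) -> None:
        self.total = 0

    def step(self, x: int) -> int:
        self.total += x
        return self.total

def stateless_step(history: List[int]) -> int:
    """A stateless imitation: no internal memory; the entire input
    history is re-supplied on every call and the 'state' is recomputed."""
    return sum(history)

# The two agree at every step, even though only one of them holds state.
stateful = StatefulCounter()
history: List[int] = []
for x in [3, 1, 4, 1, 5]:
    history.append(x)
    assert stateful.step(x) == stateless_step(history)
```

The cost of the stateless version is that the input grows with the length of the interaction, which is exactly the trade-off an LLM's context window makes.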
