
There are thousands of structures and substances in a human head besides neurons, at all sorts of commingling and overlapping scales, and the neurons in those heads behave very differently from, and with tremendously more complexity than, the metaphorical ones in a neural network.

And in a human, all those structures and substances, along with the tens of thousands more throughout the rest of the body, are collectively readied with millions of years of "pretraining" before processing a continuous, constant, unceasing multimodal training experience for years.

LLMs and related systems are awesome and an amazing innovation that's going to impact a lot of our experiences over the next decades. But they're not even in the same galaxy as almost any living system yet. That they look like they're in the neighborhood is because you're looking at them through a very narrow, very zoomed-in telescope.




Even if they are very different from us (less complex at the neuron level?), do you still think they’ll never be able to achieve similar results (‘truly’ understanding and developing pure mathematics, for example)? I agree that LLMs are less impressive than it may initially seem (although still very impressive), but it seems perfectly possible to me that such systems could in principle do our job even if they never think quite like we do.


True. But a human neuron is more complex than an AI neuron by a constant factor. And we can improve constants. Also, you say years like it's a lot of data, but they can run RL on ChatGPT outputs if they want; isn't that comparable? But anyway, I share your admiration for the biological thinking machines ;)


The sun is also better than a fusion reactor on Earth by only a constant factor. That alone doesn't mean much for our prospects of matching its power output.


> human neuron is more complex than an AI neuron by a constant factor

The constant may still be out of reach: something like 100T neurons in the brain vs 100B parameters in ChatGPT. And the brain may also involve some quantum mechanics, which would make the complexity difference not constant but, say, exponential.
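A toy sketch of why that distinction matters (a hypothetical illustration only; the factor c = 1000 and the qubit framing are assumptions chosen for shape, not measurements of the brain):

    # Hypothetical: constant-factor vs. exponential complexity gaps.
    # c = 1000 is an arbitrary stand-in for "one biological neuron is
    # c times as complex as one artificial unit"; nothing here is measured.
    def constant_gap(n, c=1000):
        return c * n          # closable by improving hardware/constants

    def exponential_gap(n):
        return 2 ** n         # e.g., classically simulating n entangled qubits

    for n in (10, 20, 30, 40):
        print(n, constant_gap(n), exponential_gap(n))

By n = 40 the exponential gap (~1.1e12) already dwarfs the constant one (4e4), which is the whole point of the objection.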


> the brain may also involve some quantum mechanics

A neuroscientist once pointed this out to me while illustrating how many huge gaps there are in our fundamental understanding of how the brain works. The brain isn't just a series of direct electrical pathways; EMF transmission/interference is part of it too. Unmodeled quantum effects are pretty much a guarantee.


Wikipedia says 100 billion neurons in the brain


OK, I messed up: we need to compare LLM weights with synapses, not neurons, and Wikipedia says there are 100-500T synapses in the human brain.


OK, let's say 500T. Rumor is that GPT-4 is currently 1T. Do you expect GPT-6 to be less than 500T? Non-sarcastic question. I would lean no.


So, if they trained GPT-4 with ~$10B in funding, then for a 500T model they would need ~$5T in funding, assuming cost scales linearly with parameter count.
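A back-of-the-envelope check of that extrapolation (every input is a rumored or rounded figure from this thread, and linear cost-per-parameter scaling is a naive assumption that ignores hardware and algorithmic progress):

    # Naive linear extrapolation from this thread's rumored figures.
    gpt4_params = 1e12        # rumored GPT-4 parameter count (1T)
    gpt4_cost_usd = 10e9      # ~$10B funding figure cited above
    target_params = 500e12    # upper estimate of human synapse count (500T)

    scale = target_params / gpt4_params    # 500x more parameters
    naive_cost = gpt4_cost_usd * scale     # assumes cost grows linearly
    print(f"{scale:.0f}x parameters -> ~${naive_cost / 1e12:.0f}T")

This prints "500x parameters -> ~$5T", matching the figure above.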



