
I'm not saying it's impossible to have an intelligent machine; I'm saying we aren't there yet.

There's something to your point about observing a system from within, but it reminds me of when people say that simulating an emotion and actually feeling it are the same. I strongly disagree: as humans we know there can be a misalignment between our "inner state" (what we actually feel) and what we show outside. That is what I call simulating an emotion. As kids, we all had the experience of apologizing after doing something wrong, not because we actually felt sorry, but because we were trying to avoid punishment. As we grow up, there comes a time when we actually feel bad after doing something and apologize because of that feeling. As adults we may still apologize not because we mean it, but because we're trying to avoid a conflict. By then, though, we know the difference.

More to the point of GPT models: how do we know they aren't actually understanding the meaning of what they're saying? Because we know that internally they pick the most likely next token, given the sequence of prior tokens. Now, I'm not a neuroscientist, and there are still many unknowns about our brain, but I'm confident it doesn't work only like that. While it's possible that in day-to-day conversation we operate in terms of probability, we also have other "modes of operation": if we only worked by predicting the next most likely token, we would never be able to express new ideas. If an idea is brand new, then by definition the tokens expressing it were very unlikely to appear together before the idea was first expressed.
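To make that "next most likely token" loop concrete, here's a minimal sketch in Python. The toy_model table and its probabilities are invented for illustration (a real model computes a distribution with a neural network over a huge vocabulary), but the selection step at the end is the mechanism I'm describing:

    # Toy next-token predictor: given the prior context, pick the most
    # probable continuation. Probabilities are made up for illustration.
    toy_model = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.001},
        ("cat", "sat"): {"on": 0.7, "down": 0.2, "purple": 0.002},
    }

    def next_token(context):
        # Look up the distribution for the last two tokens and return
        # the argmax -- pure likelihood, no notion of "meaning".
        dist = toy_model[tuple(context[-2:])]
        return max(dist, key=dist.get)

    tokens = ["the", "cat"]
    for _ in range(2):
        tokens.append(next_token(tokens))
    print(" ".join(tokens))  # -> "the cat sat on"

Note how the low-probability tokens ("quantum", "purple") are never chosen: under this scheme, token combinations that were rare in the training data stay rare in the output, which is the intuition behind my point about new ideas.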

Now a more general thought. I wasn't around when the AI winter began, but from what I've read, part of the problem was that many people were overselling the capabilities of the technologies of the time. When more and more people started seeing the actual capabilities and their limits, they lost interest. Trying to make today's models look better than they are by downplaying human abilities isn't the way to go. You're not fostering the AI field; you're risking damaging it in the long run.



