You definitely know, while talking with someone, whether you are just pretending to understand what they are saying or actually understand it. It's an experience every human has at least once in their life.
No, you cannot know this, because you might just be simulating that you understand. You cannot reliably observe a system from within itself.
It's like how running an antivirus on an infected system is inherently flawed: there might be malware running that knows every technique the antivirus uses to scan the system and can manipulate each one of them to make the system appear clean.
There is no good argument for why or how the human brain could not be entirely simulated by a computer/neural network/LLM.
I wonder if anybody has used Gödel's incompleteness theorems to prove this for our inner perception. If our brain is a calculation, then from inside the calculation, we can't prove ourselves to be real, right?
Maybe that is the point: we can't prove it one way or the other, for human or machine. We can't prove a machine is conscious, and we also can't prove that we are. Maybe Gödel's theorem could be used to argue that it can't be done by humans: a human can't prove itself conscious because, from inside the human as a system, you can't prove all facts about the system.
Why would it not be computable? That seems clearly false. The human brain is ultimately nothing more than a very unique type of computer. It receives input, uses electrical circuits and memory to transform the data, and produces output.
That's a very simplified model of our brain. According to some mathematicians and physicists, there are quantum effects going on in our body, and in particular in our brain, that invalidate this model. In the end, we still don't know for sure whether intelligence is computable or not; we only have plausible-sounding arguments for both sides.
Do you have any links to those mathematicians and physicists? I ask because there is a certain class of quackery that waves quantum effects around as the explanation for everything under the sun, and brain cognition is one of them.
Either way, quantum computing is advancing rapidly (so rapidly there's even an executive order now ordering the use of PQC in government communications as soon as possible), so I don't think that moat would last for long if it even exists. We also know that at a minimum GPT4-strength intelligence is already possible with classical computing.
He's one of the physicists arguing for that, but I still have to read his book to see if I agree or not because right now I'm open to the possibility of having a machine that is intelligent. I'm just saying that no one can be sure of their own position because we lack proof on both sides of the question.
Quantum effects do not make something non-computable. They may just allow for more efficient computation (though even that is very limited). Similarly, having a digit-based number system makes it much faster to add two numbers, but you can still do it even if you use unary.
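A toy sketch to make that last point concrete (my own illustration in Python, not anything from the thread): the same addition function computed with a positional encoding and with a unary encoding. Both are computable; one is just far less efficient.

    def add_positional(a: int, b: int) -> int:
        # Python ints already use a positional representation internally,
        # so addition takes time roughly proportional to the number of digits.
        return a + b

    def add_unary(a: str, b: str) -> str:
        # Numbers encoded as strings of '1's, e.g. 5 -> "11111".
        # Addition is just concatenation: still computable, but the encoding
        # (and the work) grows with the value itself instead of its logarithm.
        return a + b

    x, y = 12, 30
    print(add_positional(x, y))               # 42
    print(len(add_unary("1" * x, "1" * y)))   # 42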
I'm not saying that it is impossible to have an intelligent machine, I'm saying that we aren't there now.
There's something to your point about observing a system from within, but this reminds me of when some people say that simulating an emotion and actually feeling it are the same thing. I strongly disagree: as humans we know that there can be a misalignment between our "inner state" (what we actually feel) and what we show outside. This is what I call simulating an emotion. As kids, we all had the experience of apologizing after doing something wrong, not because we actually felt sorry about it, but because we were trying to avoid punishment. As we grow up, there comes a time when we actually feel bad after doing something and apologize because of that feeling. As adults we may still apologize not because we mean it, but because we're trying to avoid a conflict. By then, though, we know the difference.
More to the point of GPT models, how do we know they aren't actually understanding the meaning of what they're saying? It's because we know that internally they look at which token is the most likely one, given a sequence of prior tokens. Now, I'm not a neuroscientist and there are still many unknowns about our brain, but I'm confident that our brain doesn't work only like that. While it would be possible that in day to day conversations we're working in terms of probability, we also have other "modes of operation": if we only worked by predicting the next most likely token, we would never be able to express new ideas. If an idea is brand new, then by definition the tokens expressing it are very unlikely to be found together before that idea was ever expressed.
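For what it's worth, here is a toy sketch of the "pick the most likely next token" step described above (Python, with a made-up four-word vocabulary and invented logit values; real models produce these scores from a huge neural network):

    import math

    vocab = ["cat", "dog", "the", "sat"]   # hypothetical tiny vocabulary
    logits = [2.1, 0.3, -1.0, 1.5]         # invented scores for the next token

    # Softmax turns the raw scores into probabilities.
    exps = [math.exp(s) for s in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: take the single most likely token.
    next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
    print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)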
Now a more general thought. I wasn't around when the AI winter began, but from what I read, part of the problem was that many people were overselling the capabilities of the technologies of the time. When more and more people started seeing the actual capabilities and their limits, they lost interest.
Trying to make today's models look better than they are by downplaying human abilities isn't the way to go. You're not fostering the AI field; you risk damaging it in the long run.
I am reading a book on epistemology, and this section of the comments seems to be sort of about that.
> According to the externalist, a believer need not have any internal access or cognitive grasp of any reasons or facts which make their belief justified. The externalist's assessment of justification can be contrasted with access internalism, which demands that the believer have internal reflective access to reasons or facts which corroborate their belief in order to be justified in holding it. Externalism, on the other hand, maintains that the justification for someone's belief can come from facts that are entirely external to the agent's subjective awareness. [1]
Someone posted a link to the Wikipedia article "Brain in a vat", which does have a section on externalism, for example.