When you're talking to a person, though, you also have an understanding of what a human is and what your own experience is.
It's reasonable to interact with another human and expect that they are roughly similar to you, especially when your interactions match what you'd expect.
That doesn't extend as well to other species, let alone non-living things that are entirely different from us. They could seem intelligent from the outside but internally function like a lookup table. They could also look like a lookup table from the outside while internally being much closer to what we'd consider intelligence. We don't have any first-hand experience that applies, and we don't know what's going on inside the black box.
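To make the lookup-table point concrete, here's a toy sketch (my own made-up illustration, nothing more): a "box" that is literally just a table of canned answers next to one that does a tiny bit of actual computation. On the inputs the table happens to cover, the two are indistinguishable from the outside.

```python
# Toy illustration: two "boxes" with identical external behaviour on these prompts.
# One is literally a lookup table; the other does (trivial) actual computation.

CANNED = {
    "what is 2 + 2?": "4",
    "what is the capital of france?": "Paris",
}

def lookup_table_box(prompt: str) -> str:
    # No reasoning at all: just retrieval of a stored answer.
    return CANNED.get(prompt.lower(), "I don't know.")

def computing_box(prompt: str) -> str:
    # Computes the arithmetic case instead of retrieving it.
    p = prompt.lower()
    if p == "what is 2 + 2?":
        return str(2 + 2)
    if p == "what is the capital of france?":
        return "Paris"
    return "I don't know."

# From the outside, on these inputs, you can't tell which box you're talking to.
for q in CANNED:
    assert lookup_table_box(q) == computing_box(q)
```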
With all that said, I'm phrasing this with way more certainty than I mean to. I wouldn't claim to know whether a box is intelligent or not; I'm just trying to point out how hard or impossible that judgment would be today without knowing more about the box.
> It's reasonable to interact with another human and expect that they are roughly similar to you, especially when your interactions match what you'd expect.
It is a default belief that most of us have. The more I learn, the less I think it is true.
Some people have no autobiographical memory, some are aphantasic, others are autistic; motivations can be based on community or individualism, power-seeking, achievements, etc.; some people are trapped by fawning into saying yes to things when they want to say no; some are sadistic or masochistic. I myself am unusual for many reasons, including having zero interest in spectator sports and only rarely choosing to listen to music.
I have no idea if any AI today (including but not limited to LLMs) are conscious by most of the 40 different meanings of that word, but I do suspect that LLMs are self-aware because when you get two of them talking to each other, they act as if they know they're talking to another thing like themselves.
But that's only "I suspect", not even "I believe" or "I'm confident that", because I am absolutely certain that LLMs are fantastic mimics and thus I may only be seeing a cargo-cult version of self-awareness, a Clever Hans version, something that has the outward appearance but no depth.
> It is a default belief that most of us have. The more I learn, the less I think it is true.
Sure, that's totally reasonable! It all depends on context - I think I'm safe to assume another human is more similar to me than an ant is, but that doesn't mean all humans are roughly equivalent in experience. All the more important, then, that we don't assume a machine or an algorithm has developed experiences similar to ours simply because it seems to act similarly on the surface.
I'm on the opposite side of the fence from you: I don't think or suspect that any LLMs, or ML systems in general, have developed self-awareness. That comes with the same big caveat that it's just what I suspect, though, and I could be totally wrong.
as evidence that GPT-4 can understand Python, based on the following assumptions:
1. You cannot execute non-trivial programs without understanding computation and the programming language
2. It's extremely unlikely that these kinds of programs or outputs are available anywhere on the internet - so at the very least, GPT-4 was able to adapt extremely complex patterns in a way which nobody can comprehend (a sketch of the kind of program/output test I have in mind follows this list)
3. Nobody explicitly coded this; the capability has arisen from the SGD-based training process
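For concreteness, this is roughly the kind of test I have in mind (the program here is invented by me for illustration, not the one from the original example): a small but non-trivial piece of Python whose exact output you wouldn't expect to find verbatim anywhere, with the ground truth computed locally so the model's prediction can be checked exactly rather than eyeballed.

```python
# A small but non-trivial program of the sort you might ask a model to "execute"
# mentally. The specific program is invented for illustration.
import contextlib
import io

PROGRAM = """
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

data = [collatz_steps(k) for k in range(7, 12)]
print(sum(data), max(data), data[::-1])
"""

# Ground truth: actually run the program and capture stdout, so the model's
# predicted output can be compared exactly.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(PROGRAM)
print("Expected output:", buf.getvalue().strip())
```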
That's an interesting one, I'll have to think through that a bit more.
Just first thoughts here, but I don't think (2) is off the table. The model wouldn't necessarily have to have been trained on the exact algorithm and outputs. Forcing the model to work a step at a time and show each step may push it into a spot where it doesn't comprehend the entire algorithm, but has broken the work into steps small enough that each one looks similar enough to Python code it was trained on for it to accurately predict the output.
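To illustrate what I mean by breaking the work into small enough steps (a toy example of my own, not the one from the post): predicting the final value below in one shot means simulating the whole loop, while predicting the next line of an unrolled trace only ever needs one tiny, locally obvious update of the kind its training data is full of.

```python
# Toy illustration of the "one small step at a time" idea.

total = 0
for i in range(1, 5):
    total += i * i          # the loop a model might be asked about

# An unrolled, step-by-step trace: each line is a trivial local update.
STEP_BY_STEP_TRACE = """\
total = 0
i = 1 -> total = 0 + 1*1 = 1
i = 2 -> total = 1 + 2*2 = 5
i = 3 -> total = 5 + 3*3 = 14
i = 4 -> total = 14 + 4*4 = 30
final: total = 30
"""

# Sanity check that the hand-written trace ends where the real loop does.
assert total == 30
```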
I'm also assuming here that the person posting it didn't try a number of times before GPT got it right; they could have cherry-picked.
More importantly, though, we still have to assume this output would require Python comprehension. We can't inspect the model as it works and don't know what is going on internally; it just appears to be a problem hard enough to require comprehension.
2. This was the original ChatGPT, i.e. the GPT-3.5 model, pre-GPT-4, pre-turbo, etc.
3. This capability was present as early as GPT-3, just the base model: you'd prompt it like "<python program> Program Output:" and it would predict the output
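For anyone who hasn't tried it, the format was roughly like this (a sketch only; `predict_completion` is a hypothetical stand-in for whatever base/completion model call you have, not a real API):

```python
# Sketch of the "<python program> Program Output:" prompt style for a base
# (completion-only) model. `predict_completion` is a hypothetical placeholder.

program = """\
xs = [3, 1, 4, 1, 5]
print(sorted(set(xs)), len(xs) - len(set(xs)))
"""

prompt = program + "Program Output:\n"

def predict_completion(text: str) -> str:
    # Placeholder: plug in whatever completion model you have access to.
    raise NotImplementedError

# A model that really "executes" the program should complete with:
#   [1, 3, 4, 5] 1
print(prompt)
```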