What if, in a different chat session, GPT gives the exact opposite answer, i.e., it says the offered solution is bogus? Would you even know unless someone tried it and showed it to be so? If that happens, will you say GPT is defective, or will you still give it the benefit of the doubt?
Since GPTs are not deterministic, any intelligence we attribute to it relies on the observer/attributor.
My sense is that confirmation bias and cherry-picking are playing a role in the general consensus that GPTs are intelligent.
For example, people show off beautiful images created by image generators like DALL-E while quietly discarding the ones that were terrible or completely missed the mark.
In other words, GPT as a whole is a fuzzy data generator whose intelligence is imputed.
My suspicion is that GPT is going to be upper-bounded by the average intelligence of humanity as a whole.