> If you request a bot to write code that can accomplish a specified task, it needs to understand the task
No, it doesn't. It may simply be a coincidence. For example, I can write a bot that always outputs print('hello world'), and then ask it to write a hello world program in Python. The output happens to be correct even though the bot understands nothing about the task.
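A minimal sketch of such a degenerate bot (the function name write_code is just for illustration):

```python
# A degenerate "code-writing bot": it ignores the request entirely
# and always emits the same fixed program.
def write_code(task_description: str) -> str:
    # No parsing, no understanding of the task -- just a constant string.
    return "print('hello world')"

# Asking it for a hello-world program happens to produce a correct answer,
# purely by coincidence, not because the bot understood the request.
print(write_code("Write a hello world program in Python"))
```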
Comparisons to ELIZA aren't naive. They underscore the fact that more complex models of the same kind as GPT-3 use the same matching approach; they just have a bigger database of matches and more complex rules. They don't derive their answers from first principles, and they don't have abstract concepts or models in any useful form. Which was the goal of AI all along: AI was the search for those models and concepts, so that we could automatically establish the truth of questions nobody knows the answer to. Models like GPT-3 don't know, and cannot possibly search for the answers to questions nobody knows the answer to, because they are aggregators of existing knowledge.
> Too many philosophers waste time debating
I bet you aren't one of them though.