The other major thing missing from ChatGPT is that it doesn't really "learn" outside of training. Yes, you can provide it with some context, but it fundamentally doesn't update and evolve its understanding of the world.
Until a system can actively and continuously learn from its environment and update its beliefs, it's not really "scary" AI.
I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.
In-context learning is already implicit fine-tuning: https://arxiv.org/abs/2212.10559
It's very questionable to what extent continuous training is necessary past a threshold of intelligence.
In-context learning may act like fine-tuning, but crucially it does not mutate the state of the system. The same model prompted with the same task thousands of times is no better at it the thousandth time than it was the first.
GPT-3 is horrible at arithmetic. Yet if you define the algorithmic steps to perform addition on two numbers, accuracy on addition shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066
Think about what that means.
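For a concrete sense of what "defining the algorithmic steps" looks like, here's a rough sketch in Python (my own construction, not the paper's exact prompt format) of an algorithmic-prompting style worked example: instead of asking for the answer directly, the prompt spells out every digit-and-carry step for the model to imitate.

```python
def addition_prompt(a: int, b: int) -> str:
    """Build a worked example that walks through addition digit by digit,
    with explicit carries, in the spirit of algorithmic prompting
    (arXiv:2211.09066). Illustrative only, not the paper's exact format."""
    lines = [f"Q: What is {a} + {b}? Work digit by digit from the right, tracking the carry."]
    da, db = str(a)[::-1], str(b)[::-1]  # reversed digit strings, least significant first
    carry = 0
    result_digits = []
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        total = x + y + carry
        lines.append(
            f"Position {i}: {x} + {y} + carry {carry} = {total}, "
            f"write {total % 10}, carry {total // 10}"
        )
        result_digits.append(str(total % 10))
        carry = total // 10
    if carry:
        result_digits.append(str(carry))
    lines.append(f"A: {''.join(reversed(result_digits))}")
    return "\n".join(lines)

print(addition_prompt(857, 196))
```

The point is that the "program" here is just structured natural language: the model is never retrained, yet following the spelled-out procedure in context is what lifts its accuracy.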
"Mutating the system" is not a crucial requirement at all. In context learning is extremely over-powered.
> Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066 Think about what that means.
That means that even with a giant model, you need to stuff even the most basic knowledge for dealing with that class of problem into the prompt to get it to work, cutting into conversation depth and per-response size? The advantage of GPT-4's big context window, and the opportunity it provides for things like retrieval and deep iterative context, shrinks if I've got to stuff a domain textbook into the system prompt so it isn't just BSing me.
It means you have natural language programming. We would need to prove that natural language programming is more powerful than traditional programming at solving logical problems; I haven't seen such a proof.
> Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.
On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (for those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.
Doomsday predictions will always look wrong in hindsight, because if one had been correct you wouldn't be here to realize it. The near misses of the Cold War, when we almost got obliterated by accident, show that the concern wasn't misplaced. If anything, the concern itself is the reason it didn't end badly.