Fully automating software engineering won’t happen until AGI. As a good Yuddite, I expect us to have bigger problems when that happens.
To solve business tasks, you need an agent with a large, coherent world model, one that understands how your programs relate to the real world.
No program synthesis tech currently available can do that, because none of it has a coherent world model.
GPT-3 comes closest, but it isn’t able to engage in any kind of planning or abstract modeling beyond semi-coherent extrapolations from its training data.
Maybe scaling GPT up by a few more orders of magnitude would work, with a world model emerging along the way.
What is a "Yuddite"? I tried Googling it and got the impression it was LessWrong forum terminology for people who believe too strongly in LessWrong, but I couldn't find many references.
It's "Luddite" mixed with Eliezer Yudkowsky, a researcher working on the problem of friendly AI (or whatever they're calling it these days). Basically, he's trying to prevent Skynet.
The GP is saying that once we have AGI, "AGI is going to make the human race irrelevant" outweighs "AGI makes software devs irrelevant".