I have a feeling this is the correct read in terms of progression, but I'm skeptical that it'll ever be able to synthesize a program entirely. I imagine that in the future we'll have some sort of computer language, closer to written language, that some sort of AI will use to generate software to meet certain demands, but it might still need some manual connections when requirements are hazy or the UI/UX needs a more human touch.
> But I'm skeptical that it'll ever be able to synthesize a program entirely.
Emotional skepticism carries a lot more weight in worlds where AI isn't constantly doing things that were supposed to be infeasible, like placing in the 54th percentile of a competitive programming competition.
People need to remember that AlexNet is 10 years old. At no point in that span have neural networks stopped solving things they weren't supposed to be able to solve.
I feel like you're taking that sentence a bit too literally. I read it as "I'm skeptical that AI will ever be able to take a vague human description from a product manager/etc. and solve it without an engineer-type person in the loop." The issue is that humans don't know what they want, and programs realistically require a lot of iteration to get right; no amount of AI can solve that.
I agree with you; it seems obvious to me that once you get to a well-specified problem, a computer will be able to create entire programs that solve user requirements. And that these tools will start small but expand to larger and more complex solutions over time, in the same way that no-code tools have.