I'm going to throw out some assumptions/ideas I've been thinking about around this:
No matter how hard I try/train, I can't do some jobs (sports athlete, etc.). So I could see how AI could eliminate jobs to the point where most humans can't earn money.
AGI, if it's possible, will evolve from a company, not a software program. AKA: company X makes Y revenue as its number of employees tends toward 0.
LLMs are probably just one component of AGI. So the argument "LLMs can't do X" doesn't really mean much in the discussion of jobs and AI.
What if the paper clip (in the paperclip-maximizer problem) is AI compute?
It's possible that the low-hanging fruit of AI will run out at some point, but I'm not sure there's any indication that's happening any time soon.
Amdahl's law says you get diminishing returns, and we're already seeing this at the GPT-4 level of performance, where it doesn't perform much better than 3.5 at some tasks.
Gustafson's law says that as computing power increases, you'll tackle problems that were previously intractable, and this cycles into more productivity. So CPUs led to the web, to cloud computing, to social networks, and to things like people emailing a file from mobile to PC instead of using a data cable.
Gustafson's law creates a whole line of new jobs and different bottlenecks.
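The contrast between the two laws can be sketched numerically. This is just an illustration of the standard formulas: Amdahl assumes a fixed workload (speedup capped by the serial fraction), while Gustafson assumes the workload grows with available resources. The 5% serial fraction is an arbitrary illustrative number, not anything measured.

```python
def amdahl(n, serial_fraction):
    """Speedup on n processors for a FIXED workload.
    Capped at 1 / serial_fraction no matter how large n gets."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson(n, serial_fraction):
    """Scaled speedup when the parallel part of the workload GROWS with n.
    Grows roughly linearly in n."""
    return serial_fraction + (1.0 - serial_fraction) * n

# With a 5% serial fraction, Amdahl flattens out near 20x,
# while Gustafson keeps climbing as we take on bigger problems.
for n in (1, 10, 100, 1000):
    print(f"n={n:>4}: Amdahl {amdahl(n, 0.05):6.2f}x | Gustafson {gustafson(n, 0.05):8.2f}x")
```

The point of the comparison: under Amdahl's framing, more compute stops paying off; under Gustafson's, more compute changes which problems you even attempt, which is the jobs-creating dynamic described above.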
It's the framework paradox as well: you'd think frameworks would reduce the difficulty and pay of jobs, but both only go up. It's why tech people are so eager to get into AI.