I'm a principal SWE with 25 years of experience, and I think software today is comically bad and way too hard to use, so I believe these tools can help engineers write better software. Talk of "replacement" is premature until we get something remotely resembling AGI. Unless your problems are so simple that a monkey could solve them, the AI of today and of the foreseeable future is not going to solve them end to end. At best it'll fill in the easy parts, which you probably don't want to do anyway: write a test, do a simple refactor, bang out a simple script to pay down some engineering debt. I've yet to see a system that doesn't crap out at the very start of the real problems I solve on a daily basis. I'm by no means a naysayer - I work in this field and use AI many times daily.
Funny enough, I now write better code than I used to, thanks to AI, for two reasons:
- AI naturally writes code that is more organized and clean (proper abstractions, no messy code)
- I've recognized that, for AI to write code in an existing codebase, the code has to be clean, organized, and make sense, so I tend to do more refactoring to make sure AI can pick pieces up and update them when needed (see the sketch right after this list for the kind of refactor I mean).
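A hypothetical sketch of that kind of refactor (the domain and every name here are made up, not from any real codebase): one opaque routine split into small, typed steps that an AI assistant - or a human - can safely modify in isolation.

    # Hypothetical example: splitting one tangled routine into small,
    # well-named steps with explicit inputs and outputs.
    from dataclasses import dataclass

    @dataclass
    class Order:
        customer_id: str
        total_cents: int

    def parse_order(raw: dict) -> Order:
        """Turn a raw payload into a typed Order; fails loudly on bad input."""
        return Order(customer_id=str(raw["customer_id"]),
                     total_cents=int(raw["total_cents"]))

    def validate_order(order: Order) -> None:
        """Business rules live in one obvious place."""
        if order.total_cents <= 0:
            raise ValueError("order total must be positive")

    def process_order(raw: dict) -> Order:
        """Thin orchestrator: each step is independently testable and editable."""
        order = parse_order(raw)
        validate_order(order)
        return order

When each function has one job and a descriptive name, you can point an AI tool at exactly the piece that needs changing instead of making it reverse-engineer a blob.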
I'm actively transitioning out of a "software engineer" role to be more open-minded about how to coexist with AI while still contributing value.
Prompt engineering, organizing code so AI agents can be more effective, guiding non-technical people to understand how to leverage AI, etc. I'm also building products and selling them myself.
See, the thing is, to determine which abstractions are "right and proper" you _already need a software engineer_ who knows those kinds of things. Moreover, that engineer needs the ability to read that code, understand it, and plan its evolution over time. He/she also needs to be able to fix the bugs, because there will be bugs.
I'm with you 100% of the way on this one. I'm coding with Claude 3.5 right now using Aider. The future is clear at this point: it won't get worse, and there's still so much low-hanging fruit. Expertise is still useful to guide it, but we're all product managers now.
There are a lot more photographers now than there ever were painters, and the industry is much larger than it used to be. It is true that our work will change, but personally I think that's great - I don't enjoy the initial hump you usually have to get over before you can start solving the real problem, and AI is often able to take me over that hump, or fill in the things that don't matter. E.g. I'm a backend person but need a frontend for a demo - I can do that on my own now, without spending days figuring out some harebrained web framework and CSS stack - something I probably wouldn't do at all if there were no AI.
Your analogy fails because the economy still needed human workers to take the photographs, whereas there is a possibility that in 5 or 10 years the economy will have no need and no use for most people.
I work in this field, and I would bet that in 5-10 years the employment situation will not be much different from today unless we invent AGI all of a sudden - and I see no signs of that even remotely happening. Job definitions will change a bit, productivity will improve, cost per LOC will drop, and more underserved niches will become tractable/profitable.
Well, I do know what will happen on about a one-year time horizon. At least as far as developer-assistance models are concerned, the difference at the end of 2025 is not going to be dramatic, just as it wasn't between the end of '23 and this year, and for the same reason: you do need symbolic reasoning to generate large chunks of coherent, correct code. I also don't see where "dramatic" would come from after that, unless we get some new physics that lets us run models 10-20x the size in real time, economically. But even then, until we get true AGI - which we won't get in my remaining lifetime - those systems will be best used as a "bicycle for the mind" rather than "the mind", and in the vast majority of cases they will not be able to replace humans entirely, at least not in software engineering.
I assume you're not talking about ChatGPT-4o, because in my experience it's absolutely dogshit at abstracting code meaningfully. Good at finding design patterns, sure, but if your AI doesn't understand how to state machine, I'm not sure how I'm supposed to use it.
It's great at writing tests and documentation though.
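By "how to state machine" I mean an explicit, table-driven abstraction rather than a pile of nested ifs. A minimal sketch (the states and events here are made up for illustration):

    # Minimal, made-up sketch of an explicit state machine.
    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        RUNNING = auto()
        DONE = auto()

    class Event(Enum):
        START = auto()
        FINISH = auto()
        RESET = auto()

    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        (State.IDLE, Event.START): State.RUNNING,
        (State.RUNNING, Event.FINISH): State.DONE,
        (State.DONE, Event.RESET): State.IDLE,
    }

    def step(state: State, event: Event) -> State:
        """Apply one event; illegal transitions fail loudly, not silently."""
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"illegal transition: {event.name} in {state.name}")
        return TRANSITIONS[(state, event)]

    # Usage: IDLE -> RUNNING -> DONE.
    s = step(State.IDLE, Event.START)
    assert step(s, Event.FINISH) is State.DONE

The point is that the whole behavior is visible in one table; the models I've tried tend to smear this logic across flags and conditionals instead.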
Claude is better most of the time on the simpler stuff, but o1 is better on some of the more difficult problems that Claude craps out on. Really $40/mo is not too much to pay for both.