Yeah, the real curse is on the programmers who are going to be out of jobs much sooner than people anticipate.
All the talk about the low quality of code got me thinking: if humans aren’t reading the code, then the only thing that matters is correctness. Correctness may be an easier problem to solve than readability and refactorability.
Media synthesis means artists of all sorts will take a hit. GPT means writers, programmers, journalists, and others will take a hit. Prompt engineering and larger models could mean that any procedural use of knowledge, in any context, is subject to automation.
Consumer-level robotics, drones, and battery tech are close to the point where manual labor, delivery, and similar tasks can be automated.
Maybe we should be preparing for post-scarcity. At the very least, the idea of "jobs" is going to have to transform. The 40-hour work week isn't going to make much sense in the near future.
Copilot (and all other AI tools, and non-AI tools) will lead to more developer jobs, not fewer. The thing holding most industries back at the moment is the lack of available developers at a cost-effective price. Literally every business has an opportunity to automate some process better than it does now, to build an app that assists a business function, or to drive sales and growth through an application of new tech. The reason they don't is the cost and availability of skilled people to do the work. If a new AI tool removes the need to write code entirely, and a developer just needs to work out the specifications and requirements, then whole industries will start employing more developers.
The part of dev that you get paid to do is not writing the code. It's the part where you define what the code should do. That's where all the value in development resides. Getting AI to write the syntax is 100% upside for the tech industry.
> The part of dev that you get paid to do is not writing the code. It's the part where you define what the code should do. That's where all the value in development resides. Getting AI to write the syntax is 100% upside for the tech industry.
I guess they felt the same way when the first compilers were introduced. Or interpreters.
> the lack of available developers at a cost-effective price.
There are already developers on the market at an incredibly low price. Good luck getting code that compiles out of them, much less code that is correct. I'm afraid we'll see more and more of them, since now they'll be able to cycle through completions until something seems to pass all the tests.
> There are already developers on the market at an incredibly low price. Good luck getting code that compiles out of them, much less code that is correct.
I've been a dev for almost 25 years, and in my experience how much someone is paid has very little correlation with how good their code is. I've worked with outsourced developers in India, Poland, and Vietnam who are brilliant at writing clean, robust, well-designed code and earn the equivalent of low tens of thousands of dollars per year, and with former FAANG engineers who earn hundreds of thousands of dollars and write untested, untestable spaghetti code.
> I'm afraid we'll see more and more of them, since now they'll be able to cycle through completions until something seems to pass all the tests.
In which case the value will lie in defining the tests, and the people who do that will earn the most. That's fine.
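To make "defining the tests" concrete, here's a minimal sketch of the kind of spec I mean, in Python. The name check_sort and the properties chosen are my own illustration, not anything from an existing tool:

    from collections import Counter

    # Hypothetical acceptance spec for an AI-generated sort_fn.
    # The valuable human work is choosing these properties; the
    # implementation they constrain can come from anywhere.
    def check_sort(sort_fn, xs):
        out = sort_fn(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert Counter(out) == Counter(xs)                # same elements, same counts

    check_sort(sorted, [3, 1, 2, 2])  # sanity check against the built-in

Cycling through completions until something passes checks like these is exactly the workflow the parent describes; the spec is where the judgment lives.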
> I've been a dev for almost 25 years, and in my experience how much someone is paid has very little correlation with how good their code is. I've worked with outsourced developers in India, Poland, and Vietnam who are brilliant at writing clean, robust, well-designed code and earn the equivalent of low tens of thousands of dollars per year, and with former FAANG engineers who earn hundreds of thousands of dollars and write untested, untestable spaghetti code.
That has not been my experience at all.
There's this myth in our industry of the genius dropout, or the idea that getting into FAANG is basically luck, and yet it never seems to materialize in practice.
It's not like GPT models are some sort of random walk or Markov bot. They produce nuanced text with deep semantic relationships between large segments of text. There's a direct relationship between the algorithms human brains used to produce the training data and the algorithms being approximated by these models. No, GPT-3 isn't human level generally, but it is human-level competent in some domains.
These tools have effectively been in production for less than a year, but we're already seeing the potential for huge disruption in lots of markets from relatively straightforward uses of the tech.
I can't wait to see what a skillful and artfully sophisticated use will be. I don't think we've even scratched the surface.
> There's a direct relationship between the algorithms human brains used to produce the training data and the algorithms being approximated by these models.
Eh. Saying that deep learning networks are like the brain is like saying that cars are like cheetahs. Sure, both go really fast by converting some kind of fuel into kinetic energy, and both move by exerting force on the ground, but that's about it.
Brains don't have ReLU units. Brains have lots of different types of topology, not just a uniform network, and can handle some of that topology being arbitrarily shut off due to damage. Brains use global chemical changes for (otherwise) out-of-band signaling. Brains don't use gradient descent. Etc.
This seems like a gross oversimplification. We want factorable code because it helps with DRY; DRY code is more portable, and the end product is smaller. There are also performance considerations.
I'll be surprised if there's a tool like Copilot anytime soon that can identify the need for memoization, for example, and implement it.
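For reference, the memoization itself is mechanical once you've decided it applies; the hard part is noticing that a function is pure and called repeatedly with the same inputs. A minimal Python sketch (fib is just a stand-in example):

    from functools import lru_cache

    # Naive version: exponential time, recomputes the same subproblems.
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    # Memoized version: the mechanical part is one decorator. The judgment
    # calls (is this function pure? is it hot? is the cache worth the
    # memory?) are what a tool would have to get right.
    @lru_cache(maxsize=None)
    def fib_memo(n):
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)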
But if a human isn't messing with the code, the code itself doesn't matter; what matters is what the bot does with it. And if the bot can handle bad code in a way humans can't, that means something. Code quality will become less important if the automated tools that read and write the code don't care about it.
In some ways this has been the case for a long time. Tools that generate code, Visual Studio's form designer being a simple example, often generate really bad code. But no one ever reads it, and if you need to make a change you just run the tool again. So the fact that the code is bad isn't very relevant.
> if humans aren’t reading the code, then the only thing that matters is correctness. Correctness may be an easier problem to solve than readability and refactorability.
I suspect it's the opposite: It's rather easy to write code that looks good but isn't exactly correct.