Hacker News

Between this and OpenAI's GitHub Copilot, "programming" will probably start slowly dying. What I mean by that is that sure, you still have to learn how to program, but our time will be spent much more on the design part, writing detailed documentation/specs, and then we just have one of these AIs generate the code.

It's the next step. Binary code < assembly < C < Python < AlphaCode

Historically it's always been about abstracting and writing less code to do more.




First, if this is correct and AlphaCode succeeds, it will bring about its own demise.

I.e. as soon as it starts replacing humans, it will not have enough human-generated training data, since all programming will be done by models like itself.

Second, AlphaCode was specifically trained for competitive programming:

1. Short programs. 2. Each problem has hundreds of human-generated solutions.

However, commercial programs:

1. Are long. 2. Have no predefined answer, or even a single correct answer. 3. Need to use/reuse a lot of legacy code.


> as soon as it starts replacing humans, it will not have enough human-generated training data, since all programming will be done by models like itself.

As a natural born pessimist, I can't help but feel that by the time we get to that point we'll just keep blundering forward and adapting our world around the wild nonsense garbage code the model ends up producing in this scenario.

After all, that's basically what we've done with the entire web stack.


Reinforcement learning and adversarial training can render both of those concerns non-issues in practice.


The phrase "in practice" doesn't really work when you're referring to highly finicky strategies like RL and adversarial training.


My bet would be that it will never happen in a reasonable time frame. And by that logic, writing that "documentation/spec" would just mean learning a new programming language the AI engine can parse, making it about as useful as a compiler. Anyone who has been writing and designing software for a while knows the cycle is way more complex than taking some input and writing code.

Let me know when the AI engine is able to do complex refactoring, add features while keeping backwards compatibility, find a bug in a giant codebase by debugging a test case, or write code that's performant but also maintainable.


I agree, from a totally different angle. Let's take something I know better as an example: structural engineering. Structural engineering should be a "solved problem". It seems, ostensibly, relatively simple compared to a more open-ended activity like "programming". (For "technical reasons", it ends up being more similar than you might think.) Still, you are ultimately dealing with the same materials, the same physics, and very similar configurations.

And yet, despite the fact that we have programs to help calculate all the things, test code-required load combinations, even run simulations and size individual components... it turns out that it doesn't actually save that much work, and you still need an engineer to do most of it. And not just because of regulatory requirements. It's just that that's not the hard part. The hard part is assembling the components and specifications, specifying the correct loads based on location-specific circumstances, coming up with coherent and sensible design ideas, chasing down every possible creative nook and cranny of the code to make something that was originally a mistake actually work, and knowing when the model is just wrong for some reason and the computer isn't simulating load paths accurately.

Specifying the inputs and interpreting results is still about as much work as it was before you started with all the fancy tools. Those tools still have advantages mind you, and they do make one slightly more efficient. Substantially so in some cases, but most of the time it still comes out as a slight assist rather than a major automation.


As a former structural engineer, I completely agree with this sentiment. For every engineering project I was involved in, the automated components were at most 2 to 5% of the total work.


I hear that.

Machine Learning also has a long way to go before it can take a long, rambling mess of a meeting and somehow generate a halfway usable spec from it. I mean, the customer says they want X, but X is silly in this context, so we'll give them Y and tell them it's "X-like, but faster". For example, SQL is "Blockchain-like, but faster" for a lot of buzzword use-cases of blockchain.


You ever notice how the "let me know when" part of this keeps changing? Let me know when computers can ... play Go/understand a sentence/compose music/write a program/ ...

But surely they'll never be able to do this new reference class you have just now come up with, right?


Not really? I mean, I would never say "let me know when a computer can do X" when X is something that doesn't require much creativity and imagination. Like, a computer composing music doesn't impress me too much, because music itself has structure. A computer creating music that would wow a professional composer? That would be impressive. Same with this topic. A computer solves some (because it failed several) short programming challenges and OP says it will kill programming entirely? Not even close. Pretty cool though.


It keeps changing because our intuitions about which tasks require intelligence are weak. We think that when a computer can do X it can also do Y. But then someone builds a computer that can do X but can't do Y, and we say "oh, so that doesn't require intelligence; let me know when it can do Z and we can talk again". That doesn't mean that Z means the computer is intelligent, just that Z is a point where we can look at it and discuss again whether we've made any progress. What we really want is a computer that can do Y, but we make small mini-tasks that are easier to test against.

The Turing test is a great example of this. Turing thought a computer would need to be intelligent to solve this task. But it was solved by hard-coding a lot of values and a better understanding of human psychology and of what kind of conversation would seem plausible when most things are hardcoded. That solution obviously isn't AI, and I bet you don't think so either, but it still passed the Turing test.


At what point do we give up and realize that there is no one thing called intelligence, just a bunch of hacks that work pretty well for different things sometimes? I think that's probably where people keep failing here. The reason we keep failing to find the special thing in every new field that AI conquers is that there's nothing special to actually find. I mean, we could keep moving the goalposts, a sort of intelligence-of-the-gaps argument, but that doesn't seem productive.


Possibly interesting trivium: automated debugging was first described in 1982, in Ehud Shapiro's PhD thesis titled "Algorithmic Program Debugging" (it's what it sounds like and it can also generate programs by "correcting" an empty program):

https://en.wikipedia.org/wiki/Algorithmic_program_debugging

Of course, all this targeted only Prolog programs, so it's not well known at all.


It's also the starting point for Inductive Logic Programming (as in Shapiro's "Model Inference System"), as I'm sure you know ;)


Let's say I'm aware of it :)


Solving competitive programming problems is essentially solving hard combinatorial optimization problems. Throwing a massive amount of compute and gradient descent at the problem has always been possible. If I'm not mistaken, what this does is reduce the representation of the problem to a state where it can run gradient descent and then tune parameters. The real magic is in finding structurally new approaches. If anything, I'd say algorithms and math continue to be the core of programming. The particular syntax or level of abstraction doesn't matter so much.


> Solving competitive programming problems is essentially solving hard combinatorial optimization problems.

True, but if you relax your hard requirements of optimality to admit "good enough" solutions, you can use heuristic approaches that are much more tractable. High quality heuristic solutions to NP-hard problems, enabled by ML, are going to be a big topic over the next decade, I think.
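To make that concrete, here's a toy illustration of my own (nothing to do with AlphaCode's internals): the exact traveling-salesman problem is NP-hard, but a greedy nearest-neighbor heuristic finds a "good enough" tour in O(n^2):

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: start at city 0 and always visit the
    closest unvisited city. Runs in O(n^2) instead of the exponential
    time exact search needs, but only promises a decent tour, not an
    optimal one."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        # Pick the unvisited city closest to where we currently are.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returns to the start city)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Four corners of the unit square; the optimal tour is the perimeter.
points = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(points)
```

The learned-model version of this idea replaces the hand-written "closest city" rule with a scoring function trained on data, but the relax-optimality-for-tractability trade is the same.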


I should correct myself: this isn't even that. This is just text analysis on Codeforces solutions, which makes it even worse than I thought. Very pessimistic about its generalizability.


> If anything I'd say algorithms and math continue to be the core of programming.

I disagree; I think the core of programming is analyzing things people want and expressing solutions to those wants clearly, unambiguously, and in a way that is easy to change in the future. I'd say algorithms and math are a very small part of this work.


That's not programming, that's called being a good employee. Any person in any role should be doing that. Programming is about algorithms and math. Now, a good employee in a technical role should have both.


> Programming is about algorithms and math.

You've simply restated your opinion without providing any supporting arguments, and as I already said, I disagree. The vast majority of programming I see (and as a consultant, I see a fairly wide variety) is not about algorithms and math, but instead gluing together systems and expressing domain logic.

Now, I suppose you could argue that domain logic is "algorithms and math," but in my experience, it's less about the specific algorithms and more about precisely describing fuzzy human behavior.

It's the "precisely describing" and "easy to change in the future" parts that make what programmers do different from what any good employee does.

(I do agree that there is some programming that is focused on algorithms and math, but it's in the minority, in my experience. Perhaps the type of work you do is focused on algorithms and math, but I believe that's a relatively small part of the software development ecosystem.)


No, I'm not talking about programming that requires calculations, or programs written to solve mathematical problems. Programming at its core is about defining precise logical relationships between abstract objects and then writing algorithms to understand and modify those objects. This is a mathematical process, and you should use mathematical thinking to do it. It may not always seem like it when the objects and relationships appear to be simple, but that is the core of programming.


Creating a higher level abstraction is something people have been trying to do for decades with so-called 4th-generation languages. At some point, abstracting away too much makes a tool too cookie-cutter, and suddenly deviating from it causes more difficulty.


Maybe it's not more abstraction we need, just automating the drudgery. Abstractions are limited: by definition they abstract things away, and they are brittle.


Read: Ruby on Rails


I'd note that assembly, C, and Python didn't replace 'programming' but were expected to do so. I'd wager that what you now call 'detailed documentation/specs' will still be called programming in 10 or even 20 years.


If you could change a sentence in the documentation and then run a ~1min compilation to see the resulting software, it would be a very different kind of programming. I suppose it'll give a new meaning to Readme-Driven-Development.


Model-driven development and code generation from UML were once supposed to be the future. It will be interesting to see how much further this approach takes us.

Assuming ANNs resemble the way the human brain functions, you'd also expect them to introduce bugs. And so actual human beings would partake in debugging too.


I agree, I expect programmers will just move up the levels of abstraction. I enjoyed this recent blog post on the topic: https://eli.thegreenplace.net/2022/asimov-programming-and-th...


The "problem" is that as you move up the levels of abstraction, you need fewer people to do the same amount of work. Unless the complexity of the work scales as well. I've always felt that programmers would be the first class of knowledge workers to be put out of work by automation. This may be the beginning of the end for the programming gravy train.


> The "problem" is that as you move up the levels of abstraction, you need fewer people to do the same amount of work.

This will lower the barrier to entry for developing software, so more people will go into the field. Before, you needed to know a programming language; now you will just have a dialogue with a language model.

> I've always felt that programmers would be the first class of knowledge workers to be put out of work by automation.

We've been automating our work for 70 years, and look how many programmers are employed now. The more we automate, the more capable our field becomes and more applications pop up.


>This will lower the entry barrier to developing software so more people will go into the field.

Indeed. The ideal future of programming is something out of Star Trek. I've often noticed how everyone on the ship is a programmer of sorts; they whip up a simulation as the problem warrants, regardless of their field. But in that future, the job of programmer basically doesn't exist. As a programmer, I should be allowed to have mixed feelings about that.


Let your imagination fly. We always want more than is possible; our wishes fill up any volume like an expanding gas. Humans are going to be crucial to orchestrate AI and extract the most utility out of it.


> as you move up the levels of abstraction, you need fewer people to do the same amount of work

Yes, but the total amount of work (and surrounding complexity) also increases with it. Just look at the evolution of the software industry over the last few decades.


History isn't a great guide here. Historically the abstractions that increased efficiency begat further complexity. Coding in Python elides over low-level issues but the complexity of how to arrange the primitives of python remains for the programmer to engage with. AI coding has the potential to elide over all the complexity that we identify as programming. I strongly suspect this time is different.


Yes, this is how you increase prosperity (see: agricultural revolution, industrial revolution, etc). You can now create more with the same number of people.


On the other hand, as the value of an hour of programming increases, the quantity demanded may also increase.


Or you can do things at a faster pace and increase your productivity.


There aren't enough developers either way.


I disagree that programming is dying -- tools like Copilot will lead to a renaissance in the art of computer programming by enabling a larger population to design programs and explore the implications of their design choices. I wrote a short essay [1] on the history of automated programming and where I think it is heading in the future.

[1]: https://breandan.net/public/programming_with_intelligent_mac...


You also get to specialize harder. You’ll be able to move into more advanced programming styles. I’m thinking of formally verifiable C/C++ programs for safety critical applications, and code using advanced concepts from programming language theory.

The programming languages of the future are going to make Rust look like Python. That'll be in part because you as an individual programmer aren't weighed down by as much boilerplate as you were pre-Copilot, pre-AlphaCode, and pre- the more advanced coding assistants of the future.


> writing detailed documentation/specs

That's what code is.



