
Calling it now: If current language models can solve competitive programming at an average human level, we’re only a decade or less off from competitive programming being as solved as Go or Chess.

DeepMind or OpenAI will do it. If not them, it will be a Chinese research group on par with them.

I’ll be considering a new career. It will still be in computer science but it won’t be writing a lot of code. There’ll be several new career paths made possible by this technology as greater worker productivity makes possible greater specialization.




The problem is that this view treats software engineers as people who write code. That's not what my job is: it is figuring out how to solve a business problem using technology, getting people on board with that solution, and updating and refining it.

This viewpoint seems to me very similar to the idea that 3rd-generation languages would replace developers because programming would become so easy. It isn't about how easy it is to write code. I function as a limited mentat: I take all the possible requirements, tradeoffs, and constraints, analyze them, and build the model; then I write out the code. The code artifact is not the value I add. The artifact is how I communicate the value to the world.

This doesn't make programmers redundant any more than Ruby, PHP, or Java made developers redundant by freeing them from having to manually remember and track memory usage and pointers. It is at most a tool to reduce the friction of getting what is in my head into the world.

I control the code, and whoever controls the code controls the business. I possess the ability to make out the strands of flow control and see the future state of the application. For I am the Sr. Software Engineer, and I have seen where no Project Manager can see.

Apologies to Frank Herbert; I just finished listening to Dune.

EDIT:

I got off track at the end, but my point is that no matter how good the tools for developing the code are, they will never replace a software engineer any more than electric drills and power saws replace home builders. They merely elevate our work.


I actually agree with you on that. I had another comment further down the thread where I said that software engineering can’t be fully automated by anything short of artificial general intelligence.

As humans we have a coherent world model that current AI systems are nowhere near close to having.

That coherent world model is a necessary precondition for both understanding a business goal and implementing a program to solve it. AlphaCode can do the second part but not the first.

AlphaCode doesn’t have that world model, and even if it did, it still wouldn’t act on it autonomously; it would just follow orders from humans.

Competitive programming is going to be solved much earlier than programming in a business context will, because it’s completely independent of business requirements. It’s at most half as hard a problem.


If I am given the ability to produce a program by formalizing the fuzzy requirements I am given, I will not hesitate to abuse this option. I can see a future where there is a "market" for specifications to be composed together.

Analyzing the requirements is a hard problem when we do it with our brains. But our job would be very different if all we had to do was write down the constraints and press a button to see an error: invalid requirements, can't support this and that at the same time.


Three months ago in the Copilot thread I was saying

> in 5 years will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language?

and getting pooh-poohed. https://news.ycombinator.com/item?id=29020401 (And writing that, I felt nervous that it might not be aggressive enough.)

There's a general bias in discussions of AI these days: people forget that the advance they're pooh-poohing was itself dismissed in the same way, as probably way off in the indefinite future, surprisingly recently.


The issue is that these techniques are growing in capability exponentially, while we have a habit of extrapolating linearly. Some saw the glaring deficits in Copilot and reasoned that linear improvement still means glaring deficits. I don't know that this bias can ever be corrected. A large number of intelligent people simply will never be convinced general AI is coming soon, no matter what evidence is presented.


> techniques are growing in capabilities exponentially, while we have a habit of extrapolating linearly

What does this even mean? How do you put a number on AI capability? You can say it is growing faster than people expect, but what is even exponential or linear growth in AI capability?


I take your point that the linear/exponential terminology is a bit dubious. But the simple way to make sense of it is just to go by various benchmarks, e.g. the power-law relationship between model accuracy and model size: https://eliaszwang.com/paper-reviews/scaling-laws-neural-lm/
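
To make that concrete, here is a minimal sketch of what such a power law looks like; the constants are only roughly in the ballpark of the published fits (Kaplan et al., 2020) and are used purely for illustration:

    # Illustrative power law: language-model loss falls as a power of
    # parameter count N. Constants are approximate and for illustration only.
    def lm_loss(n_params, n_c=8.8e13, alpha=0.076):
        return (n_c / n_params) ** alpha

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} params -> loss ~ {lm_loss(n):.2f}")

The point of curves like this is that each extra order of magnitude of scale buys a smooth, predictable improvement, which is what people mean when they talk about extrapolating the trend.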


Yes, for very precise, comprehensive text descriptions of problems.

It will take a far, far more advanced AI to write such descriptions for real-world problems.

Writing requirements for a project is difficult work, not for technical reasons but for human ones (people don't know exactly what they want, people have trouble imagining things they haven't seen yet, people are irrational, people might want something that is different from what they need, etc.).

In this regard, we are safe for a few more decades at least.


Yes, they have been trying to create 'sufficiently formal human-readable text' to spec out projects; not detailed enough to be executed by a computer, but formal and precise enough that humans know exactly what they are getting. That still doesn't work at all, and that is between humans. If the specs are clear enough, the act of programming is already mostly not the issue; however, they never are. I am looking forward to ML helping me write boring code (which Copilot already does, but again, that's not really where time/energy is spent anyway) and protecting against security issues, scalability issues, and all kinds of bugs (it could rewrite algos it knows; it could recommend libraries that I should use instead of the crap I rolled myself, etc.).


Fully automating software engineering won’t happen until AGI. As a good Yuddite I expect us to have bigger problems when that happens.

You need an agent with a large and coherent world model in order to understand how your programs relate to the real world, and in turn to solve business tasks.

This isn’t something any program synthesis tech currently available can do, because none of it has a coherent world model.

GPT-3 comes closest to this, but it isn’t able to engage in any kind of planning or abstract modeling beyond semi-coherent extrapolations from training data.

Maybe scaling up GPT by a few more orders of magnitude would work, by generating an emergent world model along the way.


What is a "Yuddite"? I tried Googling it and got the impression it was LessWrong forum terminology for people who believe too strongly in LessWrong, but I couldn't find many references.


I believe he's referring to "Luddites" -- a group of people who resisted technological innovation during the Industrial Revolution.


"Luddite", but mixed with Eliezer Yudkowsky, a researcher working on the problem of friendly AI (or whatever they're calling it these days). Basically trying to prevent Skynet.

The GP is saying that once we have AGI, then "AGI is going to make the human race irrelevant" outweighs "AGI makes software devs irrelevant".


That’s the idea.


I am a follower of Eliezer Yudkowsky.


I would actually argue the programmer’s job has never been 100% writing the code; it’s always been interpreting, fixing, and decoding the ideas of others.


The older I get, the more I see that it has not been about programming for most tasks for quite a long time. In the early '80s it was a bit more (but not even much more); even then I spent most of my time debugging and changing behaviour slightly (but across a lot of pages) instead of just cranking out huge bags of code.


I would argue that we figured this out over 50 years ago but oddly enough some people still hold onto the idea.


A programming genie that grants programming wishes to the general public. Since most of what I do on a daily basis is engineering solutions based on tradeoffs, I can only imagine the number of programmers needed to debug solutions given by the programming genie in response to poorly described feature requests.

If we become mechanics of the software AI vehicles of the future, so be it.


AI is being aggressively applied to areas where AI practitioners are domain experts. Think programming, data analysis etc.

We programmers and data scientists might find ourselves among the first half of knowledge workers to be replaced, not among the last as we previously thought.


I'm already anticipating having the job title of "Query Engineer" sometime in the next 30 years, and I do NLP including large scale language model training. :(


One of the big venture capitalists predicted “prompt engineering” as a future high-paid and high-status position.

Essentially handling large language models.

Early prompt engineers will probably be drawn from “data science” communities and will be similarly high status, well paid (though not quite as well paid), and will require less mathematical knowledge.

I’m personally expecting an “Alignment Engineer” role monitoring AI systems for unwanted behavior.

This will be structurally similar to current cyber security roles but mostly recruited from Machine Learning communities, and embedded in a broader ML ecosystem.


I like this description better, considering that companies like Anthropic are working specifically on Alignment and AI Safety. Given that the team actually spun out of OpenAI, it is interesting.


Alignment is going to be a giant industry and will also include many people not originally in STEM. The humanities and “civil society” will both have their contributions to make.

It’s likely that alignment jobs won’t themselves be automated, because no one will trust AI systems to align themselves.


>“Alignment Engineer” role monitoring AI systems for unwanted behavior.

Ha, I know people already doing this...


The thing is, Competitive Programming (CP) is a completely different discipline/subject with its own trivia knowledge and tricks. CP uses computer science the same way that, e.g., biology uses mathematics. It has very little in common with real-world software development.


I said as much in another comment.

Automating the software development profession proper is going to be much harder and will require autonomous agents with coherent world models, because that’s what you need to act in a business context.


This is in line with what other code-generation AIs have accomplished.

To reach the average level at Codeforces, you need to be able to apply a standard operation like a sort, or apply a standard math formula, as the first 1-2 problems in the easy contests are just that. It is impressive that they managed to get this result in real contests with real, unaltered questions and see that it works. But generalizing this to harder problems isn't as easy: there you need to start devising original algorithms instead of just applying standard ones, and for such problems the model needs to understand computer science rather than just mapping language to algorithms.
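
To give a sense of what "just apply a standard operation" means in practice, a typical first problem in an easy-division contest often reduces to a single sort plus a greedy pass. A made-up example along those lines (not a problem from the paper):

    # Hypothetical Div. 2 A-style task: given item prices and a budget,
    # report how many items you can afford. Sort ascending, buy greedily.
    def max_items(prices, budget):
        count = 0
        for price in sorted(prices):  # the "standard operation": a sort
            if budget < price:
                break
            budget -= price
            count += 1
        return count

    print(max_items([5, 3, 8, 1], 10))  # buys 1, 3, 5 -> prints 3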


Calling it now: Your prediction is off by an order of magnitude or two (10 years -> 100 years, or 1000 years)


It can be really tempting to think about research progression on a "linear" timescale, but more often than not it eventually ends up following an "exponential" curve because of technical debt. And there appear to be a _lot_ of techniques used here which we don't fully understand.

I wouldn't be surprised if a specifically engineered system ten years from now wins an ICPC gold medal, but I'm pretty sure that a general-purpose specification -> code synthesizer that would actually threaten software engineering would require us to settle a lot of technical debt first -- especially in the area of verifying code/text generation using large language models.


It doesn't even have to reach average human level.

Let's say AI only gets to 10% (or 20% or 30% or whatever, it doesn't really matter); that's still a huge number of jobs being lost.

Imagine having a machine write all the "simple/boring" code for you. Your productivity will go through the roof. The smartest programmer who can most effectively leverage the machine could replace many hundreds of programmers.

I should brush up on my plumbing and apply for a plumbing license soon. (I think plumbing is safer than electrical work, because many CS people have good EE foundations.)


You're extrapolating across very different types of problems. Go and Chess have unlimited training data. Competitive programming does not.


To me, that's actually one of the more interesting questions. It's possible to grade the output of the AI against objective criteria, like whether it runs, and the resources consumed (RAM, CPU time, and, particularly of interest to me, parallel scaling, as GPU algorithms are too hard for most programmers). To what extent can you keep training by having the AI generate better and better solutions to a relatively smaller input pool of problems? I skimmed the paper to see how much they relied on this but didn't get a clear read.
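
A minimal sketch of the loop being asked about, with hypothetical generate/grade/finetune components standing in for the real pieces (this is just the shape of the idea, not anything the paper describes): sample many candidate programs per problem, grade them on objective criteria such as correctness and resource use, keep the best, and retrain on those.

    # Hypothetical generate-grade-retrain loop. `generate`, `grade`, and
    # `finetune` are assumed callables, not anything from the AlphaCode paper.
    def self_train(generate, grade, finetune, problems, rounds=3, samples=100):
        kept = []
        for _ in range(rounds):
            for problem in problems:
                candidates = [generate(problem) for _ in range(samples)]
                best = max(candidates, key=lambda code: grade(problem, code))
                if grade(problem, best) > 0:  # keep only solutions that pass
                    kept.append((problem, best))
            finetune(kept)  # retrain on the filtered pool of self-generated solutions
        return kept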


Depending on what you want to do, you can either choose an industry with very fuzzy requirements (to stay near the programming side) or one with very complex but strict requirements (to benefit from those coding robots). I guess we will need simulators for most of what we do in order to train those robots.


Didn’t we all (collectively) have this discussion the last time someone put the math functions in a library and rendered math calculation programmers obsolete?


>> There’ll be several new career paths made possible by this technology as greater worker productivity makes possible greater specialization.

Can you list a few?


How long before it can write the code without plagiarizing code from online?


How long before the typical human coder can do so?


Are you saying you cannot write code from scratch?


Not the parent comment, but I cannot code from scratch (outside of very simple and small applications). Competitive Programming is at about the limit of what I can do without looking things up, and only because I've had practice specifically for that kind of artificial environment.


I can write some code from scratch, but my ability to write code is improved by an order of magnitude when I can refer to online resources, including example code.


Humans study CS for 5 years, reading code online, to be able to solve these problems.


Don't worry, there are a lot of much simpler jobs, like driver or cashier, that will surrender to AI before the coder's job does. So UBI will be implemented long before that happens.


I wouldn't be so sure. Programmers (and drivers and cashiers) can "survive" in poverty like millions of others already do. This transformation is coming in waves that keep the proverbial frog in the pan.



