
The thing is that we don't know. What I've also been seeing for a while (for at least a decade) is that whatever profession seemed to be in danger, whichever profession came out on top of (guess) lists like "these will be replaced by AI soon", each and every one of them thought it couldn't happen to them, and they all had (and continue to have) explanations, usually involving how that job needs human ingenuity. (Unlike all the others, of course :) )

Now, I completely agree with you that a significant part of our job is understanding and structuring the problem, but I'm not sure it can't be done in another way. We usually get taken in, when we think about what machines will be able to do, by assuming that just because we use intelligence (general/human intelligence) to solve a task, it must be a requirement. Think chess. Or even calculating (as in, with numbers). Or Go. Etc.

The funny thing is that we don't know, until someone does it. I've been thinking for a while that a lot of what I do could be done by a chat bot. Asking clarification questions. Of course, I have a lot of background knowledge and that's how I can come up with those questions, but that knowledge is probably easy to acquire from the internet and then use as training data. (Just as we have an awful lot of code available, we have a lot of problem descriptions, questions, comments and some requirement specifications/user guides.)

The hard part would probably not be what we have learned as software developers, but the things we learned as small kids, and the things we have learned since, on the side. I.e. being a reasonable person. Understanding what people usually do and want. The shared context, in other words. But I'm not sure it's needed that much.

So yeah, I can imagine a service that will talk to a user about what kind of app they want (first just simpler web sites, web shops, later more and more complicated ones) and then just show them "here is what it does and how it works". And then you can say what you'd like to be changed. The color or placement of a button (earlier versions) or even the association type between entities (oh, but a user can have multiple shipping addresses).
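To make the hand-waving a bit more concrete, the loop I'm imagining is roughly this (a minimal Python sketch; every function name here is hypothetical, a stand-in for whatever model or service would do the actual work):

    # Rough sketch of the "app-from-conversation" loop described above.
    # All of these helpers are hypothetical; they stand in for model calls.
    def build_app_from_conversation():
        spec = ask_user("What kind of app do you want?")
        while has_open_questions(spec):
            # e.g. "what do you mean by account?"
            answer = ask_user(next_clarification_question(spec))
            spec = refine_spec(spec, answer)
        app = generate_app(spec)  # "here is what it does and how it works"
        while True:
            feedback = ask_user(show_preview(app))
            if feedback is None:  # the user is satisfied
                return app
            # button color/placement, or "a user can have multiple shipping addresses"
            app = apply_change(app, feedback)

Nothing in that loop obviously requires a programmer in the middle; the open question is whether the individual steps can be made good enough.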




I think programmers are relatively "safe" from AI for the simple reason that they are the ones who talk to the AI.

The job of programmers is to have machines do stuff so that humans don't have to, and of course, they do it for themselves too. Scripts, libraries, compilers: they are just tools to avoid flipping bits by hand. If something like Copilot is not embraced by all programmers, it is because it is often less than helpful, and even then, some have adopted it. If we get a super-advanced AI that has a high-level understanding of a problem and writes the app for you, then it is not much more than a super-compiler, and there will be programmers who tell the super-compiler what to do; think of it as a new, super-high-level programming language. The job will evolve, but there will always be someone who tells the computer what to do.

And if no one is needed to tell the computer what to do, that's what some people call "the singularity". Programming, or its evolution, will probably be the last technical job. Social jobs may continue further, simply because humans like humans because they are human. Maybe the oldest profession will also be the last profession.


What I was trying to convey is that I'm not sure at all that you'll need a programmer (i.e. someone who has the mindset and the skills of what we today call a programmer) to talk to the AI. Because the AI may just be able to understand the sloppy description that the average user (or the average product owner) is able to communicate. And when/if it can't, it will be able to either ask clarification questions (like "what do you mean by account?") or just generate something and then let the user figure out whether it's doing the right thing for them. If not, they can ask for changes or explain what they think was misunderstood.

And my (weak) conjecture is that we may not need an AGI/human-level AI for this. In which case we might still want some software to be written. But you're right, I'm also not sure there will be a point where we still want software but already have very intelligent machines. And while saying that programming will be the last technical job doesn't sound like a strong claim, I'd say it would probably be teachers :)

> The job will evolve, but there will always be someone who tells the computer what to do.

Which may very well be the users, if the machine is able to follow a conversation. The thing that may be the showstopper for now might be exactly this: that the machine has to be able to hold a context for long enough (over multiple iterations of back-and-forth communication). As far as my limited knowledge goes, this is something they have not yet figured out.

The "our kind will always be needed" is exactly the fallacy I was talking about and the one that the practitioners of every intellectual professions seem to have. They think they will be needed to interface between the machine (whether it's a legal or a medical system) and the client. Because they assume that the machine will not be able to communicate only to process the existing knowledge base.

But again, the whole field evolves through surprising leaps. Yep, Copilot is not insanely useful, but it is already amusing/frightening enough. It seems to pick up context from all over the code base. Sometimes it goes totally wrong and generates gibberish (I mean non-existent identifiers that make sense as English expressions but don't exist anywhere in the code). But quite a few times it picks up the intent (the pattern/thought pattern) even if it is spread out over a file (or several).
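To illustrate what I mean (an invented example, not an actual Copilot transcript): suppose the file already has get_user_by_id, get_order_by_id and get_invoice_by_id helpers spread over a few hundred lines.

    # What Copilot often gets right: it completes the spread-out pattern.
    def get_shipment_by_id(session, shipment_id):
        return session.query(Shipment).filter(Shipment.id == shipment_id).one_or_none()

    # What I mean by "gibberish": on a bad day it instead suggests something like
    #     fetch_shipment_record_from_registry(shipment_id)
    # which reads like perfectly sensible English but exists nowhere in the code base.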


I imagine I'll be editing this a bit, so I apologize if there are obvious typos left from any changes I make while I'm thinking. Sorry for the mini-essay. :)

Also, these points are not to be taken separately. They're part of a broader argument and should be treated as a unit.

1. Programming competitions are deliberately scoped down. Actual day-to-day work consists of meeting with stakeholders, conducting research, synthesizing that research with prior knowledge to form a plan, then executing. This work skips to the plan synthesis, relying on pattern-matching for the research component.

2. This current work, even if refined, would be insufficient to conduct daily programming work. This is just an extension of point 1; I acknowledge that you're talking about the future and a hypothetical better system.

3. The components required for your hypothetical programming bot are the components not covered by this work.

4. Context-aware/deep search tools are still very incomplete. There are some hints that better user-intent models are around the corner (e.g. companies like TikTok have built models that can adroitly assess users' intents/interests), but I've seen no work on bringing those models to bear on something more nebulous like interpreting business needs. (Though I also haven't been actively searching for it.) Also, Google, which dumps a large amount of money into search every year, offers about the best we have, and it's definitely far from what we'd need for business-aware programming bots.

5. Conducting the research step in the programming process automatically will require better tools.

6. Conversational AI is still very incomplete. See Microsoft's Tay bot for an example of what goes wrong at scale. People, in general, are also not very aware of themselves during discussions, and even very intelligent people get locked into a particular mindset that precludes further conversation. If a user tries fighting the bot by insisting that what they said should be sufficient (as they definitely do to other humans), that could pollute the bot's data and result in worse behavior.

7. Automating the meeting-with-stakeholders part of the programming process will also require better tools.

8. By points 5 & 7, critical domains still require more research. There is ongoing research in fields like Q&A, even some commercial attempts, but they're focused on mostly low-level problems ("construct an answer given this question and some small input")[0].

9. Advanced logical reasoning is advanced pattern matching + the ability to generate new reasoning objects on the fly.

10. Current systems are limited in the number of symbols they can manage effectively, or otherwise use lossy continuous approximations of meaning to side-step the symbol issue (it's a rough approximation of the truth, I think). See [1] for an up-to-date summary on this problem. Key phrase: binding problem neural networks

11. Current "reasoning" systems do not actually perform higher level reasoning. By points 9+10.

12. Given the rich history and the high investment over time in these fields (points 4, 6, and 11), it is unlikely that there will be a sufficiently advanced solution within the next 15-40 years. These fields have been actively worked on for decades; the current influx of cash has accelerated only certain types of work: work that generates profit. Work on the core problems has kept going at largely the same pace as usual because the core problems are hard: extra-large models can only take you so far, and they're not very useful without obnoxious amounts of compute that aren't easily replicated.

13. Given the long horizon in point 12, programmers will likely be required to continue to massage business inputs into a machine-usable format.

The horizon estimate in point 12 was a gut estimate and assumes that we continue working in parallel on all of the required subproblems, which is not guaranteed. The market is fickle and might lay off researchers in industry labs if they can't produce novel work quickly enough. With the erosion of tenure-track positions taking place in higher education (at least in the US), it's possible that progress might regress to below what it was before this recent AI boom period.

[0]: https://research.facebook.com/downloads/babi/
[1]: https://arxiv.org/pdf/2012.05208.pdf


> there will always be someone who tells the computer what to do

Until the computer starts telling people what to do


> Until the computer starts telling people what to do

My phone has me well trained. All it has to do is play a short message tone and I'll come running...


Uh... that already happens, and AI isn't even required.


That actually gave me chills. We're doomed!


"Maybe the oldest profession will also be the last profession."

-- GuB-42, Wednesday February 2, 2022


Problem is, to "talk with AI", most developers would need to 'retrain' (to use GP's word).

Writing and training a neural network is very different from writing a common program.


But it's not that. We're not talking about training narrowly intelligent ML systems for specific problems; you're right, that's a distinct skill. We're talking about an ML system that can write code based on some higher-than-now level of human input. What that level will/could be is what we're arguing about: whether it has to be provided by some kind of programmer-like person, or whether it can come from a more generic user/product owner/product manager, i.e. someone who understands the problem domain but doesn't know much about the solution domain/technology.

Those ML/AI systems will also have to be built, coded and trained, but that's a job for a very small set of people compared to the total number of end users (and the total number of developers on the market today). And, as the ML/AI field stands, it always seems to turn out that specialized algorithms, written to do what the ML layer cannot, get eliminated by the ML layer pretty quickly. So most solutions keep getting closer and closer to end-to-end.


Your logic is completely flawed. If there is a super AI with real intelligence that understands a problem and codes it up for you, why wouldn't it be possible for it to go one step further and solve problems on its own? Why do you think a human programmer has to feed a problem statement to the AI for it to work?


That is what I meant by "the singularity": AIs that are so intelligent that they don't need humans, including when it comes to building better AIs. The idea is that they get into a runaway self-improving cycle, and what they do after that, and the place of humanity in it, is left to the imagination.

I don't believe in the singularity, but if we get to the point where AIs don't need human programmers anymore, things are going to get... interesting.


I sometimes get the feeling that all my coding is actually a class of mathematical transforms that I have no idea how to define, but feel very strongly is definable and AI-able.

Well, it'd be a curious day when an AlphaGo moment hits coding. It would be funny if it happened at the same time as the Fed rate increases and the destabilizing world events this year (the path from median human to top human is shallow). Mass firing of a few million highly paid redundancies out of the blue? Would be quite a sight.

Or maybe it wouldn't happen that way, but rather it would pave the way for a leaner set of startups, built with the power to do the same thing at the same or better velocity with an order of magnitude fewer people.


What professions are these? Chat bots didn't eliminate human CSRs. OCR didn't eliminate human data entry. Object detection hasn't eliminated human intelligence analysts. Machine translation hasn't eliminated human translators. Humans still make a living as professional Chess and Go players. Truck drivers were supposed to be on the chopping block a decade ago, yet they're more in demand now than ever. Human radiologists haven't gone anywhere. Even GPT-N hasn't eliminated human writers. Human transcriptionists haven't even been eliminated. We just have a lot more videos that automatically get shitty transcriptions instead of none at all now.



