
It sounds less weird if you think: "AI has automated some jobs, and eventually it may automate away most of them, in the same way it has now (pending AlphaGo's victory) automated winning at most board games... So what happens if the job of programmer gets automated away?"

(I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)

There are a few ways I can think of to object to this line of reasoning:

1) You could argue that programming will be a job that never gets automated away. But this seems unlikely: previous intellectual tasks (Chess, Jeopardy, Go) were thought to be "essentially human", and once they were solved by computers, they were redefined as not "essentially human" (and therefore not AI). My opinion: In the same way we originally thought tool use made humans unique, then realized that other animals use tools too, we'll eventually learn that there's no fundamental uniqueness to human cognition. It'll be matter & algorithms all the way down. Of course, the algorithms could be really darn complicated. But the fact that the DeepMind team won at both Go and Atari using the same approach suggests the existence of important general-purpose algorithms that are within the reach of human software engineers. (A toy sketch of this "same algorithm, different games" point follows after these objections.)

2) You could argue that programming will be automated away, but in a sense that isn't meaningful (e.g. you need an expensive server farm to replace a relatively cheap human programmer). This is certainly possible. But in the same way calculators do arithmetic much faster & better than humans do, there's the possibility that automated computer programmers will program much faster & better than humans. (Honestly, humans are so bad at programming https://twitter.com/bramcohen/status/51714087842877440 that I think this one will happen.) And all the jobs that we've automated 'til now have been automated in this "meaningful" sense.
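
To make the "same algorithm, different games" point concrete, here's a toy Python sketch. Tabular Q-learning is only a stand-in for DeepMind's actual deep RL, and both games below are made up for illustration; the point is that one generic learning loop handles both with zero game-specific code:

    import random

    def q_learn(env, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
        q = {}  # (state, action) -> estimated long-run value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                acts = env.actions(s)
                # epsilon-greedy: mostly exploit, sometimes explore
                if random.random() < eps:
                    a = random.choice(acts)
                else:  # random tie-break so early ties don't bias one action
                    a = max(acts, key=lambda x: (q.get((s, x), 0.0), random.random()))
                s2, r, done = env.step(s, a)
                nxt = 0.0 if done else max(q.get((s2, x), 0.0) for x in env.actions(s2))
                old = q.get((s, a), 0.0)
                q[(s, a)] = old + alpha * (r + gamma * nxt - old)
                s = s2
        return q

    class WalkToGoal:
        # Toy game 1: walk left/right on a short line; reward for reaching 5.
        def reset(self): return 0
        def actions(self, s): return [-1, +1]
        def step(self, s, a):
            s2 = max(0, min(5, s + a))
            return s2, (1.0 if s2 == 5 else 0.0), s2 == 5

    class PickBiggest:
        # Toy game 2: a one-shot choice among hidden payoffs.
        def reset(self): return "start"
        def actions(self, s): return [0, 1, 2]
        def step(self, s, a): return "end", [0.1, 1.0, 0.3][a], True

    # The identical learner handles both games:
    for env in (WalkToGoal(), PickBiggest()):
        q = q_learn(env)
        s0 = env.reset()
        best = max(env.actions(s0), key=lambda a: q.get((s0, a), 0.0))
        print(type(env).__name__, "best first move:", best)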

Neither of these objections holds much water IMO, which is why I take the intelligence explosion scenario described by Oxford philosopher Nick Bostrom seriously: http://www.amazon.com/Superintelligence-Dangers-Strategies-N...




> (I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)

I assure you we do not. We would all love to be the one to create such a program. But don't worry, it isn't happening anytime soon in the form of a singularity.

Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.


"But don't worry it isn't happening anytime soon in the form of singularity."

Probably not soon. Worth noting, though, that the Go victory happened a decade or two before it was predicted to.

(To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)

>Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.

Yes, I'm a programmer (UC Berkeley computer science)... I know.

But don't listen to me. Listen to Stuart Russell: https://www.cs.berkeley.edu/~russell/research/future/


> (To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)

Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job. I said we would love to but probably won't anytime soon. Now you're saying it's likely to happen some day. I'm not sure these points follow a single train of thought.


>Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job.

The line about programmers fearing automation only once it affects them was actually an attempt at a joke :P

The argument I'm trying to make is a simple inductive argument. Once something gets automated, it rarely, if ever, gets un-automated. More and more things are getting automated/solved, including things people said would never be automated/solved. What's to prevent programming, including AI programming, from eventually being affected by this trend?

The argument I laid out is not meant to make a point about wait times, only feasibility. It's clear people aren't good at predicting wait times--again, Go wasn't scheduled to be won by computers for another 10+ years.


The fact that programming is an exceptionally ill-defined task. Computers are great at doing well-specified tasks. In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do. Go, while hard, remains very well defined.

I hope for more automation in CS. It will help eliminate boilerplate and let programmers focus on the important tasks.


> In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do.

Software development in the broad sense is that, sure; I'm not sure I'd say programming is. Taking vague goals and applying a body of analytical and social skills to gather information and turn it into something clearly specified and unambiguously testable is the requirements-gathering and specification side of systems analysis. That's certainly an important part of software development, but it's a distinct skill from programming (though, given the preference in many modern methodologies for a lack of strict functional distinctions within software development teams, it's often a skill needed by the same people who need programming skills).


There is one uniqueness to human cognition: the allowance to be fallible. AI will never be able to solve problems perfectly either, but whereas we are forgiven for that, it will not be, because we will have relinquished our control to it ostensibly in exchange for perfection.


It may interest you to know that machine learning algorithms often have an element of randomness. So they are allowed to explore failures. Two copies of the same program, trained separately and seeded with different random numbers, may come up with different results.
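
To make that concrete, here's a tiny made-up example (not any particular ML library): the identical "training" procedure, run with two different seeds, can settle on two different but equally good answers.

    import random

    def train(seed, steps=1000):
        # Toy "training": hill-climb a score that has two equally good peaks.
        rng = random.Random(seed)
        x = rng.uniform(-3, 3)                 # random starting point
        def score(v): return -(v * v - 1) ** 2  # peaks at v = -1 and v = +1
        for _ in range(steps):
            cand = x + rng.gauss(0, 0.1)       # random exploratory move
            if score(cand) > score(x):         # keep it only if it helped
                x = cand
        return round(x, 2)

    # Same program, different seeds, potentially different (valid) results:
    print(train(seed=1), train(seed=2))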

I'm not saying there will or won't be AI some day, I just thought that point was relevant to your comment.


Sorry, I found that a bit difficult to follow... it sounds like you think human programmers will beat AIs because we can make mistakes?

Well here's my counter: mistakes either make sense to make or they don't. If they make sense to make, they aren't mistakes, and AIs will make the "mistakes" (e.g. inserting random behavior every so often just to see what happens; it's easy to program an AI to do this, as in the sketch below). If they don't make sense to make, making them will not be an advantage for humans.
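
Here's a sketch of what I mean by programmed "mistakes", in Python with made-up payoff numbers: an agent that deliberately takes a random, possibly worse, action 10% of the time, purely to see what happens, and ends up better informed for it.

    import random

    payoffs = {"A": 0.2, "B": 0.8, "C": 0.5}    # hidden win rates (made up)
    counts = {a: 0 for a in payoffs}
    values = {a: 0.0 for a in payoffs}          # running estimate per action

    for _ in range(10000):
        if random.random() < 0.1:               # the deliberate "mistake":
            action = random.choice(list(payoffs))  # try something at random
        else:                                   # otherwise exploit the best guess
            action = max(values, key=values.get)
        reward = 1.0 if random.random() < payoffs[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    print(max(values, key=values.get))          # reliably lands on "B"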

At best you're saying that we'll hold humans of the future to a lower bar, which does not sound like much of an advantage.



