Throughout all of that development, computers remained remarkably dumb. We still program them manually, and they do what we tell them even when the result is disastrous.

Driving on open roads with other humans is going to require elements of real intelligence that are qualitatively different from your OS example, where they just did more programming, better and faster.

It's apples and dump trucks. What if computers are just the wrong approach for this problem?

Computers are always improving. Granted. That doesn't mean they're improving in a direction that's useful for general AI and fully autonomous driving.

On the flip side of constant improvement are the claims, made for each of the last 20 years, that we'd have fully autonomous cars "real soon now."

In 2017, we're still far enough away that no one will give a product availability date. This again points to a qualitatively different challenge than what came before, and the need to develop several new layers of tech (not just integrate what's there) to get the job done.

One thing that isn't getting better: humans' ability to drive cars. Actually, with computers getting smarter, we might even be getting worse at driving, because we're spending more time on our computers when we should be driving.

Take a modern computer and show it to a programmer from 1970 and try to convince him that computers are still remarkably dumb. I doubt he'd agree. AI always falls into this trap: once something's been done, it's not AI anymore. It's just an algorithm; AI is something else. And when we do something that only humans could do just a few years ago, suddenly that's not AI either. It's just an algorithm; AI is something else. Every time.

I have a robot vacuuming my floors right now. I just typed "vacuuming" wrong and my computer corrected me without asking; it just did it. A minute ago I asked my phone, using my voice, to turn off my bedroom lights, and it did. Seems pretty mundane, I know. So too will self-driving cars, in time.


Alan Turing established a gold standard for what intelligence is quite some time ago. Modern computers still fail the unrestricted Turing test.

No doubt he'd be impressed at the progress, but failing the test is failing the test.

There is a difference between behaviors which seem intelligent (your Roomba) and actual intelligence. Seemingly intelligent behavior (like Eliza) only works when your counterparty isn't too probing or discriminating. It's like a Potemkin village of intelligence: it looks great, as long as you don't look too closely.
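
To make the Potemkin-village point concrete, here's a minimal Eliza-style responder in Python. This is a hypothetical sketch of the technique, not Weizenbaum's actual script: a handful of regex substitution rules plus canned deflections. It sounds engaged right up until the input wanders off-script.

    import itertools
    import re

    # A few scripted transformation rules, Eliza-style (illustrative, not
    # the original ELIZA rule set).
    RULES = [
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    # Canned deflections for anything the rules don't cover.
    _deflections = itertools.cycle(
        ["Please go on.", "What does that suggest to you?"]
    )

    def respond(text: str) -> str:
        """Return a 'conversational' reply via pattern substitution alone."""
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        # Off-script input gets a stock deflection -- the seam a probing
        # counterparty notices almost immediately.
        return next(_deflections)

    print(respond("I am anxious about the future"))
    # -> How long have you been anxious about the future?
    print(respond("Prove you understood my last sentence."))
    # -> Tell me more about your last.
    print(respond("What is the square root of 81?"))
    # -> Please go on.

The second and third exchanges are the tell: no rule understands anything, so a probing question surfaces either nonsense or a stock deflection.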
