
The problem, however, is that your test can't be assumed to generalize the way an explicitly coded web service would.

You can look at your code and say "for all invalid XML, do this; for every input in this space, do that." In other words, you can formally prove your code.
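For example, the behavior of a hand-written parser is enumerable over its entire input space (a minimal sketch; parse_order and the "order" schema are made up for illustration):

    import xml.etree.ElementTree as ET

    def parse_order(payload: str) -> ET.Element:
        """Explicitly coded handling: every possible input falls into one
        of a few enumerable cases, so the behavior can be reasoned about
        (or formally proven) for the whole input space."""
        if not payload:
            raise ValueError("empty payload")            # case 1: empty input
        try:
            root = ET.fromstring(payload)                # case 2: malformed XML
        except ET.ParseError as e:
            raise ValueError(f"malformed XML: {e}")
        if root.tag != "order":                          # case 3: wrong document type
            raise ValueError(f"unexpected root tag: {root.tag}")
        return root                                      # case 4: valid input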

You CANNOT do that with neural nets. Any formal proof would only show that your neural network evaluation is still running okay, not that it is generating correct results.

You can supervise the learning process, and you can practically guarantee all the cases within your training data set, and everyone in the research space is comfy enough to say "yeah, for the most part this will probably generalize," but the spectre of overfitting never goes away.
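The gap between training-set "guarantees" and real generalization is trivial to demonstrate (a minimal sketch using scikit-learn; the synthetic dataset and model choice are arbitrary):

    # Perfect accuracy on the training data, noticeably weaker generalization.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=20,
                               n_informative=2, flip_y=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier()   # unconstrained depth -> memorizes noise
    model.fit(X_train, y_train)

    print("train accuracy:", model.score(X_train, y_train))  # 1.0: every training case handled
    print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower: overfitting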

With machine learning, I developed a rule of thumb for applicability: "Can a human being who devotes their life to the task learn to do it perfectly?"

If the answer is yes, it MAY be possible to create an expert system capable of performing the task reliably.

So let's apply the rule of thumb:

"Can a human being, devoting their life to the task of driving in arbitrary environmental conditions, perfectly safely drive? Can he safely coexist with other non-dedicated motorists?"

The answer to the first, I think, is that we could MAYBE pull it off by constraining the scope of "arbitrary conditions" (i.e., by building dedicated self-driving-only infrastructure).

The second is a big fat NOPE. In fact, studies have found that too many perfectly obedient drivers typically WORSEN traffic by raising the probability of traffic jams. Start thinking about how people drive outside the United States, and the first world in general, and the task becomes exponentially more difficult.

The only things smarter than the engineers trying to get your car to drive itself are all the idiots who will invent hazard conditions your car isn't trained to handle. Your brain is your number one safety device. Technology won't change that. You cannot, and should not, outsource your own safety.




The edge cases in this scare the hell out of me. I'm envisioning watching CCTV of every Tesla that follows a specific route on a specific day merrily driving off the same cliff until somebody notices.

I mean what would have happened here if another Tesla or two were directly behind Huang, following his car's lead?!

Possibly nothing; I'd assume the following cars would observe stopping distance and be able to stop or avoid, but I wouldn't like to bet either way. Perhaps, in some conditions, the sudden impact to the lead car would cause the second car to lose track of the first car's rear end? Would it then accelerate into it?


The report indicates the system ignores stationary objects. I would not be surprised if a car that suddenly decelerated in front of the system effectively vanished from the car's situational awareness. Your scenario does not seem that far-fetched.
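One plausible mechanism (a hypothetical sketch, not Tesla's actual logic; the function and threshold are invented): radar trackers commonly discard returns whose ground speed is near zero, because bridges, signs, and guardrails all look like that, and a car that has just braked to a stop fails the same test:

    def keep_radar_return(closing_speed: float, ego_speed: float,
                          min_ground_speed: float = 1.0) -> bool:
        """Hypothetical clutter filter: the radar measures closing speed;
        subtracting it from the ego vehicle's speed gives the target's
        ground speed. Near-zero ground speed is treated as roadside
        clutter and dropped."""
        ground_speed = ego_speed - closing_speed
        return abs(ground_speed) > min_ground_speed   # speeds in m/s

    # A lead car decelerating to a dead stop crosses the threshold and
    # effectively vanishes from the track list:
    for closing in (0.0, 5.0, 25.0):   # pacing car, slower car, stopped car
        status = "tracked" if keep_radar_return(closing, ego_speed=25.0) else "dropped as clutter"
        print(f"closing speed {closing:4.1f} m/s -> {status}")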

An attentive human would realize something horrible had happened and perhaps react accordingly. A disengaged or otherwise distracted one might not have the reaction time necessary to stop the system from plowing right in and making the situation worse.



