
You don't have those sorts of systems for regular human drivers, either. If a human driver passes out (diabetic episode, heart attack, drunkenness), you are just as SOL. With a self-driving system there are various levels of redundancy you can put in, up to some sort of dead man's switch on an embedded controller that slams the brakes (or maybe just a gentle stop in combination with existing ADAS) if all of the "higher level" systems stop responding.
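To make that concrete, here's a minimal sketch of the kind of heartbeat watchdog I mean (purely illustrative; the timeout, deceleration rate, and names are my own assumptions, not anyone's actual implementation): if the higher-level stack stops checking in, a dumb low-level controller commands a gentle stop.

  # Sketch of a "dead man's switch" watchdog. All timings and names are hypothetical.
  import time

  HEARTBEAT_TIMEOUT_S = 0.5   # assumed: planner must check in at least every 500 ms
  GENTLE_DECEL_MPS2 = 2.0     # assumed: comfortable braking rate for a fallback stop

  class Watchdog:
      def __init__(self):
          self.last_heartbeat = time.monotonic()

      def heartbeat(self):
          """Called by the high-level stack on every successful planning cycle."""
          self.last_heartbeat = time.monotonic()

      def check(self, current_speed_mps):
          """Returns the deceleration to command; 0.0 while the stack is alive."""
          if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
              # Stack is unresponsive: bring the vehicle to a controlled stop.
              return GENTLE_DECEL_MPS2 if current_speed_mps > 0 else 0.0
          return 0.0

  # Toy usage: the planner "dies" after a few cycles and the watchdog takes over.
  wd = Watchdog()
  speed = 20.0  # m/s
  for cycle in range(10):
      if cycle < 3:
          wd.heartbeat()          # planner still alive
      time.sleep(0.2)
      decel = wd.check(speed)
      speed = max(0.0, speed - decel * 0.2)
      print(f"cycle={cycle} decel={decel} speed={speed:.1f}")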



It's true that's a similar problem with humans. Yet when it happens repeatedly to the same person, they often face restrictions on their driving privileges.

When AI goes bad it could impact whole fleets simultaneously. And given all the ways we know software can unintentionally go bad, the liability question becomes a rabbit hole.


That seems very unlikely. 'AI' driving systems are subjected to millions of hours of simulated driving. A very good human driver will have a few thousand hours of experience after decades of driving.
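Rough back-of-the-envelope on that hours gap (the per-year figure and the simulated total are my own assumptions, just to show the order of magnitude):

  # Illustrative arithmetic only; the inputs are assumed, not sourced.
  human_hours_per_year = 300            # assumed: roughly an hour a day, most days
  human_years = 20                      # "decades of driving"
  human_total = human_hours_per_year * human_years

  simulated_hours = 10_000_000          # "millions of hours of simulated driving"

  print(f"human:     {human_total:,} hours")        # 6,000 hours
  print(f"simulated: {simulated_hours:,} hours")    # 10,000,000 hours
  print(f"ratio:     {simulated_hours / human_total:,.0f}x")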


Has your computer or phone ever glitched? Have you ever suffered a malware attack? Every single way that computers are vulnerable, machine-driven systems are vulnerable too. We just saw the SolarWinds hack, which was installed through malicious software updates. That could happen in an autonomous driving system as well.

Humans are still far better at dealing with unforeseen circumstances than computers. An AI is trained on a dataset and can overfit to certain parameters.


Releasing unverified AV code would breach most every 'duty of care' law in existence. Releasing it in a way that compromises thousands of systems at a time would be the same. These are devops issues, not capability issues.

An AV system needs to pass the driving test your 16-year-old sister passed; that's it.


That has already happened with network security software (SolarWinds) used at the highest levels of government including the Pentagon, White House, Congress, and most of the Fortune 500. There is no reason to believe that a massive AI driving system could not be hacked in a similar way through a malicious exploit or update. In fact, adversarial AI could be used to dupe these tests and achieve the same effect. If 50 million cars on the road were using one particular software suite, this is a vector of attack that has to be considered. It is not an impossibility and you cannot hand wave it away.


The current L2 system developed by Comma.ai simulates the model and changes in CI on every commit from what I can tell.
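For illustration only (this is not Comma.ai's actual CI; the scenario data, thresholds, and function names are invented), a per-commit simulation gate could look roughly like this: replay recorded frames through the current model and fail the build if a safety metric regresses.

  # Hypothetical sketch of a per-commit simulation check, not any real project's CI.
  def plan_lane_offset(sensor_frame):
      """Stand-in for the model under test: returns lateral offset from lane center (m)."""
      return sensor_frame["lane_center_offset_m"] * 0.9  # toy controller pulling toward center

  def replay_scenario(frames, max_abs_offset_m=0.5):
      """Replays recorded frames and checks the plan never drifts too far from center."""
      worst = 0.0
      for frame in frames:
          worst = max(worst, abs(plan_lane_offset(frame)))
      return worst <= max_abs_offset_m, worst

  # In CI this would load real recorded drives; here we use a tiny synthetic one.
  recorded_drive = [{"lane_center_offset_m": x / 10} for x in range(-4, 5)]
  passed, worst = replay_scenario(recorded_drive)
  print(f"worst offset: {worst:.2f} m, passed: {passed}")
  assert passed, "simulation regression: vehicle drifted out of tolerance"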


The average human driver doesn't need millions of pictures of a stop sign to identify one either. Training current AIs sucks and the results need to be verified because they can be flaky; the current research doesn't come close to matching an organism that spent millions of years evolving the ability to move in a complex 3D world.


The AV problem is not as intractable as you imagine. Driving a car is not the same as dating a girl.


Which seems unlikely?

Do AI drivers face the same variety of conditions as humans? Or is it millions of hours on near-perfect roads in a temperate climate?


You can't compare the times, because AI and humans "learn" in vastly different ways, at vastly different rates.


Can an AI system learn on the fly (which would negate the testing done, by the way)? Because I’m fairly sure that, beyond sensor input “normalization”, they can’t, and humans can.


It's in a human's interest to not kill him/herself.

The interests aren't aligned at all with self-driving cars. Do you think any executives, engineers or salespeople will go to jail if anyone dies? Think again.


Humans typically aren't interested in killing themselves, but their judgement can also be pretty bad. 40% of traffic fatalities involve impaired driving, and in 2018 distracted driving was blamed for about 2,800 fatalities.

Uber ATG's fatal pedestrian collision was the beginning of the end for the program. Uber CEO Travis Kalanick was pushed out of the company, the program has been shuttered, and one of the top engineers, Anthony Levandowski, is in jail (although Levandowski was not implicated in the accident, he's also not invulnerable).


You think Kalanick was pushed out because of the one self-driving crash, and not the sexual harassment scandals, theft of medical records, or threats to journalists?

(Of course it wasn't any of those - it was because the self-driving program wasn't anywhere close to working even ignoring all the safety problems. You can do all those things - the only thing you can't do is waste investor money.)


Travis thought autonomy would take a couple of years. He spent some absurd sum acquiring the talent for ATG and gave them unrealistic deadlines. The program was a shit show. A pedestrian was killed, the program was stopped, and investor money was lost. While this is not the only factor leading to Kalanick's ousting, it was a big one.


The problem isn't total system failure, it's mistakes.

A mistake in a car can cause a crash. A mistake in an elevator can't cause a crash.


The problem is, what if the higher level systems are responding, but are confused by the data they see? We'd have to have redundant decision-making like in the space industry - two of each computer and sensor that have to agree on the result, otherwise safety systems stop the vehicle.
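A toy sketch of that agreement check (the plan fields and tolerances are made up; a real system would vote over far more signals): two independently computed plans are blended when they agree, and the vehicle is brought to a safe stop when they don't.

  # Illustrative dual-channel arbitration; names and tolerances are assumptions.
  from dataclasses import dataclass

  @dataclass
  class Plan:
      steer_deg: float
      accel_mps2: float

  STEER_TOL_DEG = 2.0
  ACCEL_TOL_MPS2 = 0.5

  def agree(a: Plan, b: Plan) -> bool:
      """True if the two independently computed plans are close enough to trust."""
      return (abs(a.steer_deg - b.steer_deg) <= STEER_TOL_DEG
              and abs(a.accel_mps2 - b.accel_mps2) <= ACCEL_TOL_MPS2)

  def arbitrate(a: Plan, b: Plan) -> Plan:
      """Average the plans when they agree; otherwise command a safe stop."""
      if agree(a, b):
          return Plan((a.steer_deg + b.steer_deg) / 2, (a.accel_mps2 + b.accel_mps2) / 2)
      return Plan(steer_deg=0.0, accel_mps2=-2.0)  # disagreement: hold lane, gentle stop

  print(arbitrate(Plan(1.0, 0.3), Plan(1.5, 0.2)))   # channels agree -> blended plan
  print(arbitrate(Plan(1.0, 0.3), Plan(8.0, 0.2)))   # channels disagree -> safe stop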


We're going to end up at the trolley problem again. "Should I do action X and kill 3 or do action Y and maybe kill 2?"


This was the scenario in the depressing flashbacks in the movie I, Robot. [1] A cop and a little girl in a passenger car were pinned together underwater in a lake after a crash with a truck. An NS4 (robot) passing by saw the accident, calculated that the cop had the higher probability of survival, and rescued him instead of the child, leaving him with PTSD.

I am morbidly curious how much from that movie will play out with cars and other machinery as things evolve. I am also curious whether we will learn any lessons ahead of time or if, like city traffic lights, {n} people have to die before it becomes a financially viable discussion.

[1] - https://www.youtube.com/watch?v=_MFGx8d1zl0


Don't waste too much time on that. These are cars. They can just brake, it's good enough.


If a self driving car hits a school bus at high speed (despite braking), I'm pretty sure a lawyer will make a case that it could have avoided the school bus by driving up on the sidewalk, which may or may not have had people on it.

I get your point, but I think it is probably more correct for people who are making a split second decision rather than for a car where the decision has already been made in code and someone (or some company) has to take responsibility for why it was made that way.


Today’s AIs are doing well if they correctly identify which lane they should be in; counting how many people are in a given vehicle is so far out of scope that there is no point continuing.


This is a strawman.

If the option was available to avoid a collision with another car and the system just braked instead, colliding with and killing someone else, that will be a lawsuit, and that lawsuit will easily win.


You'll have lawyers willing to make a case against you even if you make a provably perfect choice.

And the real issue is whatever got you into that situation, not the split second choice you make.


Let me know when we reach the point that autonomous cars can bend the laws of physics too, that would truly be a hoot!


I think you deeply misunderstood me.

I'm not saying that braking is magic and solves all problems. I'm saying that braking is morally sufficient in the real world. If hitting something is unavoidable, you can hit whatever is directly in front of you while braking as hard as possible. It's good enough.


That's good enough for a human.

Not for a machine, which you can file a lawsuit over and say "it had a decision it could actually make and chose to kill my son." See how that changes things? It's no longer "an accident."


Eh. You can say the exact same thing about a human.

Apply the reasonable person standard. Are they going to do any better? Probably not.


No you can't. Humans don't have anywhere near the reaction time of these self-driving cars. The cars can process much more than we can, and they never get distracted like we do.

If a self-driving car kills someone and it had the choice/option to do something else, then we have a problem that needs to be dealt with. This isn't a human being or human error.


Humans can panic and decide to swerve, and it only takes them a fraction of a second longer than it takes a self-driving car. Someone willing to argue that the car had a choice and made the wrong one could just as easily argue that the human had a choice and made the wrong one.

You might personally think that oh, the machine is being rational and intentional, that's different from a human. But a lot of people will treat a human the exact same way. That split second panic is judged as if they had all the time in the world to consider the optimal choice.

The legal system already has and handles this type of lawsuit.


There's nothing stopping someone from making a self-driving car with two compute systems and multiple types of sensors with overlapping FoVs. In fact, you can usually see this on most self-driving cars: a minimum of two roof-mounted LIDARs (which share FoV in front and back) and various cameras.
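As a rough illustration of how that overlap could be used (the geometry, thresholds, and variable names here are invented, not from any real stack): a detection inside the shared FoV is only trusted if both sensors report it in roughly the same place.

  # Toy cross-check of two sensors with an overlapping field of view.
  import math

  OVERLAP_MIN_DEG, OVERLAP_MAX_DEG = -30.0, 30.0   # assumed shared forward FoV
  MATCH_DIST_M = 1.0                               # assumed association threshold

  def in_overlap(bearing_deg):
      return OVERLAP_MIN_DEG <= bearing_deg <= OVERLAP_MAX_DEG

  def cross_check(lidar_a, lidar_b):
      """Keep detections in the overlap only when both sensors see them close together."""
      confirmed = []
      for (xa, ya) in lidar_a:
          bearing = math.degrees(math.atan2(ya, xa))
          if not in_overlap(bearing):
              confirmed.append((xa, ya))            # outside overlap: single-sensor coverage
              continue
          if any(math.dist((xa, ya), (xb, yb)) <= MATCH_DIST_M for (xb, yb) in lidar_b):
              confirmed.append((xa, ya))
      return confirmed

  a = [(10.0, 1.0), (5.0, -0.5), (8.0, 20.0)]   # third point is outside the overlap
  b = [(10.2, 1.1)]                             # only confirms the first one
  print(cross_check(a, b))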




