If it's a subjective matter tied to perception of risk rather than actual, statistical risk, such perception can be swayed.
The challenge remains that people will be killed in accidents involving autonomous control. We anticipate that fewer people will be killed overall, hence 'saving lives'. However, the lives lost in autonomous accidents will be a different set of people than those who would have died in human-driven accidents. There will be cases where a court determines that the autonomous system was the cause. Families of those killed will want justice, while those separately saved by autonomous systems may never be heard from in the same case.
I expect that in the end it will come down to a business decision, and that decision will be informed by an actuarial exercise: will profits and insurance cover the costs of defending and settling such cases? Who knows, maybe the threshold is crossed at 5x safer.
It strikes me that a useful analogy here is the adoption of automatic elevators in buildings. In some ways, it's amazing that pretty much everyone in industrialized countries is OK with being locked in a windowless box controlled entirely by a computer, hanging over a shaft hundreds of feet deep, and in fact many people were terrified of that when elevator operators were first replaced with automatic controls. Some places even kept operators employed simply to stand there and push the buttons, to provide confidence that a trained expert was present, even though they didn't actually contribute to safety.
Eventually, automatic elevators became common enough that people will look at you really funny if you're not willing to ride in one, even though they are still responsible for ~20-30 deaths per year.
Can you provide more info about people being terrified of automatic elevators? I searched around and everything I found seems to cite one NPR article from 2015.[1] The interviewee's book is out of print and costs $100[2], so that's where that trail stops. If public sentiment against automatic elevators was as strong as described, it seems like there would be more historical evidence available. It's easier for me to find articles disparaging self-checkout systems than articles disparaging automatic elevators. I realize the change in elevators happened long ago, making articles harder to find, but you'd think at least one of them would have gotten digitized and indexed.
I have a subscription. The first two seem to be more about tenants feeling that an operator raises the quality of life. The third one talks of several deaths of children. Apparently in these cases there was a gap of 8 to 10 inches between the car doors and the landing doors which trapped the children, and then the car was called from another floor. An ordinance had been proposed to require operators, but opponents claimed that rules requiring the doors to be flush would prevent such cases.
So this would likely be in the category of cases that would never happen with a manual operator. Two solutions were proposed: keep manual operators, or improve the safety of the automatic system.
> Two solutions were proposed: keep manual operators, or improve the safety of the automatic system.
After Southall [1] the drivers' union felt the correct approach was to double-man trains. It is surely not a coincidence that this would also have meant more members and thus more power and income for the union.
That accident would have been prevented by ATP (fitted to the train but disabled because the driver had not been trained) or AWS (fitted to all British trains but disabled by the driver due to a fault).
The ATP system would have braked to a halt to avoid entering an occupied track section. AWS would have audibly alerted the driver to the signals (showing a Preliminary Caution, an ordinary Caution and then Danger) and, if the driver did not react, braked to a halt. With neither system active, it appears the driver simply did not see the signals because they weren't looking, and so did not react until far too late.
The correct fix, of course, is to require trains to have working AWS/ATP and to use it. Operating trains with more humans is a worthwhile workaround to get a faulty train back to a repair yard as empty stock, but it makes no sense in passenger service.
The analogy with elevators largely doesn't work. In the main, the only individuals at risk from an elevator accident are those who chose to enter the elevator; this is vastly different for automotive accidents, where the individuals affected by the autonomous car are not just those who chose to use one.
The 4-5x is based on statistics from a sample with a high ratio of human drivers to autonomous drivers, if measured on the road today.
Autonomous driving won't be known to be N times as safe as human driving until there are sufficient autonomous cars on the road, and the varying ratios of autonomous to human drivers have been at equilibrium long enough to gather sufficient data to compare against human driving safety statistics.
So the 4-5x figure is a lie. It could be much more dangerous with only autonomous drivers, or much less dangerous. They don't know yet.
e.g. "This autonomous car is 4-5x safer than your human-driven car."
When people hear numbers like that, they may assume it's based on sufficient data and is true, even if it's only theory.
But how could it be true or acceptable, given that we're only just starting to have more autonomous vehicles on the road, and the safety could be dependent on the ratio of autonomous vehicles to human drivers?
Are you talking about the study, or people claiming that about real cars? The study would need to have a wide variety of safety numbers, and all the cars are theoretical. If 4-5x lines up with a claim about a real car, it's probably a coincidence. Saying there's not enough data about real cars has no bearing on this research at all, unless the abstract is wildly misleading.
You replied to someone talking about waiting for autonomous cars to be 4-5x as safe. That means they are waiting for a car to meet that threshold with correct statistics. You can't call out nonexistent statistics as being a lie!
We have safety statistics on real cars with a minority of autonomous vehicles on the road.
What we don't have are adequate safety statistics on autonomous cars on the road with each other and human drivers at various ratios of autonomous cars to human-driven cars.
If a study were to be able to propose safety numbers based on the various ratios and various autonomous cars and systems, then that would be adequate, given the theories behind those numbers are well-founded.
If you were to say, "This is how it is: given that the ratio of autonomous cars to human-driven cars isn't changing and we expect it to stay the same, we can see that the autonomous cars are 4-5x safer," then I'd not call it a lie.
But "4-5x safer" without qualifications shows a gross misunderstanding of the importance of the ratio of autonomous drivers to human drivers; the variation in environment and expected behavior could lead to greater or decreased safety, inviting false assumptions.
It would be easy to do a comparison with different ratios, though, so you shouldn't preemptively assume incompetence.
And it seems pretty unlikely that increasing the number of autonomous cars will make them significantly less safe, and that's the only result that would be a problem. Mild fluctuations don't matter on a scale as coarse as "4-5x", and an improvement would be good.
Do you think that increasing the number of autonomous cars will linearly decrease the number of accidents to a certain point? Don’t you think that increasing their number will increase the chances of meeting with really bad human drivers and we simply don’t have sufficient info on whether those ‘meetings’ are less or more deadly than with a human driver - and those are the cause of a significant chunk of accidents. And by intuition I doubt that today’s AI could react better than a competent human driver to someone cutting in front of it, and the like. Simply because we are better at reading high-level patterns in others’ driving. Reaction time is not the only metric that matters.
> Do you think that increasing the number of autonomous cars will linearly decrease the number of accidents to a certain point? Don’t you think that increasing their number will increase the chances of meeting with really bad human drivers and we simply don’t have sufficient info on whether those ‘meetings’ are less or more deadly than with a human driver - and those are the cause of a significant chunk of accidents.
There's a difference between the risk changing, versus merely going from insufficient data to sufficient data.
When you have an extremely small data pool, it's also quite possible that one or two meetings with a really bad driver will give you a misleadingly bad impression of autonomous cars.
But I'll put it this way. Once we've seen either 10 billion miles or 100 fatalities from a particular tier of self-driving, we'll have a very solid idea of how dangerous it is. Getting that much data only requires a tenth of a percent of cars in the US for three years. (And if they're particularly dangerous we can easily abort the test early.)
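To sanity-check that back-of-envelope claim, here's a rough sketch; the fleet size, per-vehicle mileage, and human fatality rate are assumed ballpark figures, not authoritative statistics:

```python
# Back-of-envelope check of the "tenth of a percent of cars for three years" claim.
# All inputs are rough assumptions, not authoritative figures.
US_VEHICLES = 275e6                 # assumed registered vehicles in the US
MILES_PER_VEHICLE_YEAR = 12_000     # assumed average annual mileage per vehicle
HUMAN_FATALITY_RATE = 1.1 / 100e6   # assumed human-driver fatalities per mile

fleet_fraction = 0.001              # a tenth of a percent of the fleet
years = 3

test_miles = US_VEHICLES * fleet_fraction * MILES_PER_VEHICLE_YEAR * years
expected_fatalities = test_miles * HUMAN_FATALITY_RATE

print(f"test miles: {test_miles / 1e9:.1f} billion")
print(f"expected fatalities at the human baseline: {expected_fatalities:.0f}")
# -> roughly 10 billion miles and ~110 fatalities at the human rate,
#    in the same ballpark as the thresholds above.
```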
> And by intuition I doubt that today’s AI could react better than a competent human driver to someone cutting in front of it, and the like. Simply because we are better at reading high-level patterns in others’ driving. Reaction time is not the only metric that matters.
If someone's dangerously cutting people off there probably isn't much to read in their patterns. Being cut off seems to me like one of the situations that is most about reaction time and least about thinking.
While "those unable to stairs up to floor XYZ" can be proven to be a statistically smaller subgroup than "all possible elevator riders", this argument may not hold much water. If you're a wheelchair user or other less-abled person, and your choice is between stairs and an elevator of any sort, you weren't really given much of a choice in that matter.
I know it’s an analogy, but elevators are dead-simple concepts with many fail-safe mechanisms independent of the controlling ‘program’. Self-driving cars are ridiculously complicated black-box programs with at most software fail-safes that are themselves more complicated than one can reason about, so the only thing remaining is empirical testing in real-world situations. And every single bugfix has to be retested substantially, as it can just as easily make something else worse - and I don’t feel confident about today’s “self-driving” cars.
There is some kind of implied machine capability that a layman assigns to a computer. If a task is culturally (through movies, series, ...) thought to be within that implied capability, people will be comfortable (cf. automatic trains, ...).
The difference with elevators is that the safety systems are actually in large parts independent of the control. Since the Otis safety elevator, you could go so far as to cut the elevator cord and it would still be ok.
With self-driving cars, you don't have those type of backup safety systems.
You don't have those sorts of systems for regular human drivers, either. If a human driver passes out (diabetic episode, heart attack, drunkenness), you are just as SOL. With a self-driving system there are various levels of redundancy you can put in, up to some sort of dead man's switch on an embedded controller that slams the brakes (or maybe just a gentle stop in combination with existing ADAS) if all of the "higher level" systems stop responding.
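As a purely illustrative sketch of that last idea, here is what a heartbeat watchdog could look like; the names (`Watchdog`, `command_gentle_stop`), the timeout, and the stop action are hypothetical stand-ins, not any vendor's real interface:

```python
import threading
import time

# Illustrative only: a dumb heartbeat watchdog that commands a controlled stop
# if the higher-level driving stack stops checking in.

def command_gentle_stop():
    # In a real vehicle this would hand control to existing ADAS-level braking,
    # hazard lights, etc. Here it just reports the event.
    print("higher-level stack unresponsive: commanding controlled stop")

class Watchdog:
    def __init__(self, timeout_s, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self):
        # Called periodically by the planner/perception stack.
        with self._lock:
            self._last_beat = time.monotonic()

    def monitor(self):
        # Would run on an independent embedded controller in a real system.
        while True:
            time.sleep(self.timeout_s / 4)
            with self._lock:
                silent_for = time.monotonic() - self._last_beat
            if silent_for > self.timeout_s:
                self.on_timeout()
                return

dog = Watchdog(timeout_s=0.5, on_timeout=command_gentle_stop)
threading.Thread(target=dog.monitor, daemon=True).start()

for _ in range(10):          # stack is alive: heartbeats flow
    dog.heartbeat()
    time.sleep(0.1)
time.sleep(1.0)              # stack "hangs": no heartbeats, watchdog fires
```

The point of the design is that the monitor is dumb and independent: it doesn't need to understand why the planner went silent, only that it did.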
It's true that's a similar problem with humans. Yet when it happens repeatedly to the same person, they may face restrictions on their driving privileges.
When AI goes bad it could impact whole fleets simultaneously. And for all the many ways we know software can unintentionally go bad the liability can be a rabbit hole.
That seems very unlikely. 'AI' driving systems are subjected to millions of hours of simulated driving. A very good human driver will have a few thousand hours of experience after decades of driving.
Has your computer or phone ever glitched? Have you ever suffered a malware attack? Every single way that computers are vulnerable, machine-driven systems are vulnerable. We just saw the SolarWinds hack, which was installed through malicious software updates. That could happen in an autonomous driving system as well.
Humans are still far better at dealing with unforeseen circumstances than computers. An AI is trained on a dataset and can overfit for certain parameters.
Releasing unverified AV code would breach most every 'duty of care' law in existence. Releasing it in a way that compromises thousands of systems at a time would be the same. These are devops issues, not capability issues.
An AV system needs to pass the driving test your 16 year old sister passed, that's it.
That has already happened with network security software (SolarWinds) used at the highest levels of government including the Pentagon, White House, Congress, and most of the Fortune 500. There is no reason to believe that a massive AI driving system could not be hacked in a similar way through a malicious exploit or update. In fact, adversarial AI could be used to dupe these tests and achieve the effect. If 50 million cars on the road were using one particular software suite, this is a vector of attack that has to be considered. It is not an impossibility and you cannot hand wave it away.
The average human driver doesn't need millions of pictures of a stop sign to identify one either. Training current AIs sucks and the results need to be verified because they can be flaky; the current research doesn't come close to matching an organism that spent millions of years evolving the ability to move in a complex 3D world.
Can an AI system learn on the fly (that would negate the testing done by the way)? Because I’m fairly sure other than sensor input “normalization”, they can’t and humans can.
It's in a human's interest to not kill him/herself.
The interests aren't aligned at all with self-driving cars. Do you think any executives, engineers or salespeople will go to jail if anyone dies? Think again.
Humans typically aren't interested in killing themselves, but their judgement can also be pretty bad. 40% of traffic fatalities involve impaired driving. In 2018, distracted driving was blamed for 2,800 fatalities.
Uber ATG's fatal pedestrian collision was the beginning of the end for the program. Uber CEO Travis Kalanick was pushed out of the company, the program has been shuttered, and one of the top engineers, Anthony Levandowski, is in jail (although Levandowski was not implicated in the accident, he's also not invulnerable).
You think Kalanick was pushed out because of the one self-driving crash, and not the sexual harassment scandals, theft of medical records, or threats to journalists?
(Of course it wasn't any of those - it was because the self-driving program wasn't anywhere close to working even ignoring all the safety problems. You can do all those things - the only thing you can't do is waste investor money.)
Travis thought autonomy would take a couple of years. He spent some absurd sum acquiring the talent for ATG and gave them unrealistic deadlines. The program was a shit show. A pedestrian was killed, the program was stopped, investor money was lost. While this is not the only factor leading to Kalanick's ousting, it was a big one.
The problem is, what if the higher level systems are responding, but are confused by the data they see? We'd have to have redundant decision-making like in the space industry - two of each computer and sensor that have to agree on the result, otherwise safety systems stop the vehicle.
This was the scenario in the depressing flashbacks in the movie I, Robot. [1] A cop, a truck driver, and a passenger car with a little girl were pinned together underwater in a lake. A passing NS-4 (robot) saw the accident, calculated that the cop had the highest probability of survival, and rescued him and not the child, giving him PTSD.
I am morbidly curious how much from that movie will play out with cars and other machinery as things evolve. I am also curious if we will learn any lessons ahead of time or if like city traffic lights, {n} number of people have to die before it becomes a financially viable discussion.
If a self driving car hits a school bus at high speed (despite braking), I'm pretty sure a lawyer will make a case that it could have avoided the school bus by driving up on the side walk which may or may not have had people on it.
I get your point, but I think it is probably more correct for people who are making a split second decision rather than for a car where the decision has already been made in code and someone (or some company) has to take responsibility for why it was made that way.
Today’s AIs are happy to correctly identify which lane they should be on - counting how many people are on a given vehicle is so way out of scope that there is no point continuing.
If the option to avoid a collision with another car is available and it brakes instead, colliding with and killing someone else, that will be a lawsuit, and that lawsuit will easily win.
I'm not saying that braking is magic and solves all problems. I'm saying that braking is morally sufficient in the real world. If hitting something is unavoidable, you can hit whatever is directly in front of you while braking as hard as possible. It's good enough.
Not for a machine which you can file a lawsuit over and say "it had a decision that it can actually make and chose to kill my son." See how that changes things? It's no longer "an accident."
No you can't. Humans don't have anywhere near the reaction time that these self-driving cars do. They can process much more than we can, and they never get distracted like we do.
If a self-driving car kills someone and it had the choice/option to do something else, then we have a problem that needs to be dealt with. This isn't a human being or human error.
Humans can panic and decide to swerve, and it only takes them a fraction of a second longer than it takes a self-driving car. Someone willing to argue that the car had a choice and made the wrong one could just as easily argue that the human had a choice and made the wrong one.
You might personally think that oh, the machine is being rational and intentional, that's different from a human. But a lot of people will treat a human the exact same way. That split second panic is judged as if they had all the time in the world to consider the optimal choice.
The legal system already has and handles this type of lawsuit.
There's nothing stopping someone from making a self driving car with two compute systems and multiple types of sensors with overlapping FoVs. In fact, you can usually see on most self driving cars - a minimum of two roof mounted LIDAR (which share FoV in front and back) and various cameras.
I don't think a self driving car that has to make rapid decisions in quickly changing circumstances with tons of moving parts outside of its control is comparable to a specific use case with much fewer variables like an elevator.
Why not? The relative complexity of an automatic elevator at the time considering their level of tech was probably comparable to the complexity of an autonomous vehicle today.
Lol. At the level of tech? We are barely better than a Lego Mindstorms robot that drives forward until something is closer than x centimeters... (with a bit of sarcasm)
An autonomous car can’t really exit a parking garage (I’m not really sure what they’re called, but those multi-level parking thingies) because it would require them to understand signs and the like, and those may often be missing.
Somehow the idea of a computer error killing me seems way worse than at least having a chance to save myself, since I’m a very cautious driver (though unlikely safer than average by 5x). Self-driving cars need to get to airline-level safety, where crashes are a rare thing and most people don’t think twice about giving up control to the pilots/autopilot. If that takes expensive lidar, that’s what we should use. I can’t imagine ever feeling good about trusting my life to a computer vision algorithm.
When a human kills someone with a car it is almost always in a way we can empathize. When you’re around cars you have pretty good mental models for the humans that drive them and how the car will behave.
When you’re around human piloted cars you can look at the driver and get a pretty good idea of intent. You can tell if the driver sees you, you can tell what their mental state is, if they’re paying attention, what they intend to do. You can sum up a person with a glance, this is the power of evolution, we’re really good at figuring things out about other living things.
Crossing the street in front of a car is a leap of faith, not troubling at all when there’s a human there, but a robot? There’s no body posture, no gestures, no facial expressions, nothing to go on. There’s a computer in control of a powerful heavy machine that you’re just expected to trust.
Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.
It won’t take but a couple of cute kids killed in a surprising manner to shut down the whole autonomous experiment.
From experience I can say that's it's pretty disturbing when a human driver breaks your model, too. And this happens more often than I'd like - I was second in a set of pedestrians about to cross the street (two lane arterial with a 25mph limit - heavily used by both cars and peds - at a cross walk I use several times a week) - the guy in front of me and I both judged that the driver was looking in our direction (it was dark admittedly but street is well-lit) and the car slowed in a fashion that led us to think it was going to stop.
The fellow in front of me steps in front of the car which suddenly accelerates and hits him knocking him down. Immediately before impact it decelerates and comes to a stop.
The fellow ended more shaken up than anything else, but it was a very near thing and a reminder that it's precisely that assumption of the predictability of human drivers that fails in many crash situations.
In other words, your judgement that you understand what other drivers are going to do is more fragile than you imagine.
It seems that most of the radar systems today in cars (in the US) are pretty adept at detecting pedestrians reliably and engaging the automatic emergency braking system to at least minimize the potential for harm.
On my 2018 Lexus, AEB won't come to a complete stop, but it will definitely slow from say 40mph to 20mph, so hitting a pedestrian at 20 hopefully has a much better outcome than hitting them at 40+.
> Crossing the street in front of a car is a leap of faith, not troubling at all when there’s a human there, but a robot? There’s no body posture, no gestures, no facial expressions, nothing to go on. There’s a computer in control of a powerful heavy machine that you’re just expected to trust.
This is something that's very solvable though. Robot cars should and almost definitely will have a way to communicate to pedestrians. I agree with the general point though around a greater possibility of very out of the norm mistakes.
There's an opportunity for them to communicate better with pedestrians than the average human driver. Drivers tend to assume that their intent to stop or not to stop is obvious and don't bother with a clear signal like flashing their lights or waving visibly.
From the pedestrian's perspective, it can be hard to see the driver at all (small movements of the hand can be invisible in sun glare; direction of gaze likewise), and also hard to tell what they're doing. Just because they're slowing somewhat as they approach doesn't mean they see you or intend to stop.
One could imagine a standardized set of signals on the front of the car. Red = stopped/stopping. Yellow = about to start moving; Green = moving and continuing. Something like that.
Drive AI (now acquired by Apple I believe) used to have LED matrix displays that communicated that way with other road users. I recall seeing them say things like "waiting for you to cross" or "driving autonomously" with an icon.
When a human kills someone with a car because they're drunk, or texting, I don't have much empathy for them.
I read a statistic long ago - don't know how true it is, but it feels truthy - that half of all traffic fatalities happen between 9pm and 3am on friday and saturday nights. The fact that autonomous systems will never be intoxicated, distracted, or emotional makes me feel much safer.
A brick tied to the gas pedal will also never be intoxicated. It takes more than inability for intoxication to make a system that can drive a car safely.
Maybe not 50%, but there's certainly a strong bias in that data toward friday/saturday nights. Since the data resets at midnight rather than on bar hours, look at the difference in midnight-4am data on saturday and sunday mornings, vs the rest of the week.
Those are also the only nights when people are out at all. People who have to be at work on weekday mornings aren't driving home from visiting their parents late on a Tuesday night, they're doing it late on a Saturday night. Yeah, it probably is alcohol but there's a lot of confounding factors.
It only makes me feel safer if those systems are substantially safer than humans.
If the systems are broadly as safe as humans _including_ a significant set who are drunk / high / distracted, that feels subjectively much less safe even though the statistical number of accidents is the same.
Everybody drives better than the average, until they get distracted or tired. An AV driver that passes the standardised driving test 1M times in a row is good for me.
Well, now we've transferred the responsibility: Now the programmers of the car's AI must never be intoxicated, distracted, or emotional while creating it. Also, the business people managing the programmers must not be greedy, callous, or incompetent, which is a tough sell.
So we need AI driver certification then. Just like we have with people. If the AI can pass the (perhaps 5x-human-capability) test, it's licensed to drive, just like the 16-year-old driving his new truck on the highway behind you with his four buddies sculling Coors.
The one time I was hit by a car as a pedestrian was a driver who wasn't paying attention. He was making a perfectly legal left turn at a green light, except for the pedestrians in the way (me and my girlfriend).
The danger with an autonomous vehicle is it not seeing you. The danger with a driver is not noticing you.
> Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.
This expresses well the perceived difference between catastrophic human and machine errors.
I have two cars, a ’99 4Runner and a Tesla Model S.
When driving my S, I’m at an involuntary elevated state of alertness compared to my 4Runner. I think it’s discomfort at driving a car that might suddenly kill me through failure modes that I can’t possibly anticipate.
While I think my S’s technology is much more likely to result in my safe travel than my dumb iron 4Runner, my subconscious maintains a different assessment.
I expect this dichotomy in perceived risk is generational and will disappear after a decade or two of lower accident rates in autonomous vehicles compared to human drivers.
I outfitted my car with Comma's openpilot and I can concur that my experience is similar, with the additional effect that, overall, my driving quality of life is significantly improved.
I can tell that I already have an inordinate amount of trust in the system several weeks in on boring highway driving.
There is no evidence that AI can reach the level of skill and safety of a human driver. I’m not saying that it’s not possible, only there is no reason to be sure of the contrary. IMHO we are extremely far.
And IMHO we are extremely close. There is lots of evidence that AI can be much _better_ than a human driver, although currently on things like well mapped highway driving with clear conditions. What's going on now is just making that general purpose for all different types of environments.
Out of curiosity, do you have a specific source for,
"There is lots of evidence that AI can be much _better_ than a human driver, although currently on things like well mapped highway driving with clear conditions."
The only thing I know about is claims from Tesla, comparing Autopilot (in "well mapped highway driving with clear conditions") versus general human drivers (and which, as I recall, had some further sketchiness).
They can easily surpass humans in really specific problems like looking at millions of cells and identifying which are likely cancerous. Today’s AIs are pretty bad at complex problems where each subproblem would require a specific AI and then some logical reasoning on top weighing the results against each other. And driving in non-trivial circumstances is definitely in the latter category.
Honda did an experiment where they added LCD's to the headlights to give the car expressive eyes to communicate to people outside the car.
Also, while it is scary to us and it will take a while, there are already self-driven vehicles we just take for granted, like elevators and driverless trams/trains. Sure, they are much easier to make, but they weren't trusted at first.
Trains are on a track, the scenario never really changes. The variables are low. Just like in an elevator, which is largely a mechanical device anyway. There is no real decision making.
> When a human kills someone with a car it is almost always in a way we can empathize
Nonsense. You can empathize with someone texting and killing someone?
This whole post reads like an attempt to appeal to people’s emotional attachment to human drivers coupled with fearmongering about robots.
You are placing far too much emphasis on our ability to “read” other drivers’ intent and the impact this has on automobile accident fatalities. Many accidents occur without any chance to see the offending driver e.g. accidents at night, someone switching lanes when you are in their blind spot, a drunk driver suddenly doing something erratic, etc. Moreover, this so-called advantage of human drivers is statistically meaningless unless you believe that the number of deaths due to automobile accidents is at an acceptable level and that it cannot be improved with technology, in this case, AV. I certainly don’t believe that. In the not too distant future, I believe this position will be laughable. Through adoption of autonomous vehicles, many predict we will drastically cut the number of fatalities. Will there be issues along the road? Most certainly. But as long as the overall number is falling by a significant amount, we simply cannot justify our love affair with humans “being in control”. We’ve proven to be perennially distracted, we have terrible reaction times, we have extremely narrow vision, we panic in situations instead of remaining calm, etc. and yes, these faults do lead to the deaths of children. These are not theoretical deaths like the robot scare tactic examples, these are actual deaths from human drivers.
>Through adoption of autonomous vehicles, many predict we will drastically cut the number of fatalities.
Who are these many people, and why should we believe their predictions?
> We’ve proven to be perennially distracted, we have terrible reaction times, we have extremely narrow vision, we panic in situations instead of remaining calm, etc. and yes, these faults do lead to the deaths of children.
We've also proven that all software has bugs, and developers keep introducing new bugs in every single release. There is no reason to think that self-driving car software will be any different. What's worse is that when the software is updated, these bugs will be pushed out to tens of thousands of cars - instantly.
Bit much to call someones position nonsense when they're just skeptical of obvious stuff :)
I was referring to the absurdity of empathizing with drivers who kill people while texting, drunk, etc. (hence the quotation). What part of that statement do you agree with?
But I’ll go further and double down and say the entire post is nonsense. Why? Because the author’s skepticism doesn’t extend to the human factor. The position is not an accurate representation of the facts i.e. what causes accidents (humans) and the known data around AVs today. If AV risk is so obvious as you claim then why does the enormous amount of data show that AVs are involved in fewer accidents and lead to fewer fatalities than cars operated by humans on a mile per mile basis? And how is the negligent human driver not obvious as a source of automobile fatalities? The notion that we are safe because we can read humans is not substantiated by anything. Maybe you believe this number of fatalities is acceptable or the best we can do but I certainly don’t. There will be flaws in autonomous vehicles, no doubt. But will there be a net reduction in automobile related fatalities as a result? Like anyone else, I can’t predict the future. But to paint a rosy picture about how our ability to read other drivers is somehow safer relative to AVs is nonsense. It just is. The data doesn’t support this argument. And separately, if we’re talking about what will happen in the future, the notion that humans will ultimately prevail over AVs for safety reasons seems preposterous. We can debate the “when” in terms of AVs but debating the “if” seems pretty out of touch with the way society has progressed with respect to our willingness to depend on technology.
>Because the author’s skepticism doesn’t extend to the human factor.
And your over-enthusiasm for AV doesn't extend to the human factor. We all have our own blinders ;)
>The notion that we are safe because we can read humans is not substantiated by anything.
That is your own misinterpretation. I did not read the comment that way.
>If AV risk is so obvious as you claim then why does the enormous amount of data show that AVs are involved in fewer accidents and lead to fewer fatalities than cars operated by humans on a mile per mile basis?
What you mean when you say AV, is actually "AV + Human". We're running controlled experiments, limiting the unknowns, and we're mandating a human be present - because the current AV technology sucks.
> We can debate the “when” in terms of AVs but debating the “if” seems pretty out of touch with the way society has progressed with respect to our willingness to depend on technology.
People used to say that about flying cars 40 years ago.
A future where AVs exist that can replace human drivers is a future where so many requirements and drivers for personal mobility have changed as well, precisely because it is now possible to replace a human performing a highly complex task in situ.
That future may not even want or need cars.
The other, more realistic future is one where human-level AVs are always just out of reach. Where the causes of accidents are just as opaque as with human drivers, we're all a little bit safer, but a patch can cause catastrophic divergent behavior due to the innate non-linearity of the problem.
That future may not want or need cars either but it may not even be considered.
> The other, more realistic future is one where human-level AVs are always just out of reach.
That sounds pretty unlikely to me. "just out of reach" is a very narrow band, and to be stuck there despite decades of improvements would be pretty strange. I think there are only two likely outcomes for the next 50 to 100 years: either we don't even get particularly close, or we'll slowly but surely surpass the median human driver.
> Robot cars don’t make human mistakes, they make alien mistakes like running down kids on the sidewalk in broad daylight, things which don’t make any sense at all that make people feel like they aren’t anywhere safe.
This.
It doesn't matter if autonomous cars are objectively safer, 5x, 10x, 100x, doesn't matter. They have to be subjectively safer, which is part PR problem, UI problem, and part human stubbornness problem.
As much as we engineers want to look at numbers and say "see, safer!" That does little to help people who have had deaths, or the visceral impact that death can have.
You also end up reducing the number of people who are impacted by death or who have to feel those emotions. Yes, in the process of touching the system you also end up shuffling around who gets affected but in the end it's just part of the process of affecting fewer people.
Once self-driving is better, what we need to do is create moral rules for AI.
There is a car controlled by a computer. A pedestrian (a child) abruptly enters the road from behind cover. The computer knows that at the current speed it is impossible to stop. Its other choices are to drive onto the sidewalk, killing an old lady, or to drive into the opposite lane, risking the life of the car's owner and the people in another car.
A human driver can decide on instinct, usually protecting themselves. The computer needs to have an algorithm that decides who lives and who dies.
Why is it always about swerving onto the sidewalk or into the other lane? There's also the option to reduce speed as much as possible, aka brake hard, which a computer can do a) earlier than a human would and b) much harder than a human would. Yes, the people in the car might take a lot of negative Gs. But that's also an option. The car might still hit the kid, but the difference could be some broken bones, or with luck, bruises, vs. death.
> There's also the option to reduce speed as much as possible, aka brake hard, which a computer can do a) earlier than a human would and b) much harder than a human would.
a) A computer can initiate braking a small fraction of a second faster than a human, which is great but not such a huge difference in braking distance.
b) A computer cannot brake any harder than a human, certainly not "much harder". The max deceleration rate is traction-limited, which any remotely modern car (last ~25 years) can easily sustain with an untrained driver thanks to modern ABS.
(As a side hobby I instruct in car control and accident avoidance clinics. Blindly braking hard is not often the best answer.)
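A quick sketch of the arithmetic behind points a) and b), using assumed reaction times and an assumed traction-limited deceleration: the braking phase is identical for human and computer; only the distance covered during the reaction time differs.

```python
G = 9.81

def stopping_distance(speed_kmh, reaction_s, decel_g=0.9):
    v = speed_kmh / 3.6                        # m/s
    reaction_dist = v * reaction_s             # travelled before braking starts
    braking_dist = v ** 2 / (2 * decel_g * G)  # traction-limited, same for both
    return reaction_dist + braking_dist

# assumed reaction times; 0.9 g is an assumed ABS-limited deceleration on dry pavement
for label, t in [("human, assumed 1.0 s reaction", 1.0),
                 ("computer, assumed 0.5 s reaction", 0.5)]:
    print(f"{label}: {stopping_distance(50, t):.1f} m to stop from 50 km/h")
```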
>Then the car was going too fast. Full stop. The rest of your scenario is irrelevant.
How can you possibly say that with a straight face?
There will be situations where a car is going a perfectly fine speed for the situation and then the situation changes in a way the car cannot have seen, known about, or anticipated resulting in a crash. This happens all the time with human drivers. It will happen with AI drivers too.
Furthermore, we don't subject anything else that moves to this burden, why would we do so for cars (AI or otherwise)?
> There will be situations where a car is going a perfectly fine speed for the situation and then the situation changes in a way the car cannot have seen, known about, or anticipated resulting in a crash.
The parent's situation was not one of those. This was a situation where "Pedestrian (child) abruptly enter into road from behind cover." and there's an old lady on the sidewalk. In other words, there's a situation with limited visibility, limited room to maneuver (since the only other option is to go up on the sidewalk) and pedestrians present (the child may not be known but the old lady was). If you're in that situation and you're going too fast that you can't stop on a dime, you were not going a "perfectly fine speed for the situation".
In European cities you will often see pedestrians walking 1 m from cars driving 50-70 km/h. Human drivers can take this risk; an AI, to be useful, needs to handle it well.
My reference shows 1% at 30 km/h. But even 1% is too high. Luckily you also usually get a chance to scrub off some speed through braking. Braking follows a square law, so driving a little bit slower gains a massive difference in stopping distance.
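A small sketch of that square law, with an assumed deceleration and reaction time ignored: put the obstacle exactly where a 30 km/h car can just stop, and see how fast a 50 km/h car is still travelling at that point.

```python
import math

DECEL = 0.9 * 9.81          # assumed traction-limited deceleration, m/s^2
v30, v50 = 30 / 3.6, 50 / 3.6

stop_dist_30 = v30 ** 2 / (2 * DECEL)                      # where the 30 km/h car just stops
residual = math.sqrt(v50 ** 2 - 2 * DECEL * stop_dist_30)  # speed of the 50 km/h car there

print(f"30 km/h car stops in {stop_dist_30:.1f} m")
print(f"50 km/h car is still doing {residual * 3.6:.0f} km/h at that point")
```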
When you already believe that cars should not exist, it’s not much of a stretch to say that cars which do exist should be limited to less than a walking pace anywhere pedestrians might be present.
There’s a thing about the car companies having essentially stolen public space from everyone else when they made it incumbent on pedestrians to watch out for cars, and a desire to reverse this.
Of course this would pretty much invalidate them as a transportation mechanism, but that’s the point.
I think you could simultaneously have a better improvement for pedestrians and less impact on traffic by turning those parking spaces into sidewalks and greenery.
While I don't disagree, I will note that most people don't think twice about giving up control to bus and taxi drivers.*
I think we trust humans to make a reasonable decision in a trolley-problem scenario (rightly or wrongly). Or rather, we trust the human we're in a vehicle with to value their own life, and thus our own, more than those outside of the vehicle in most scenarios.
I expect there is research to investigate this, though I wouldn't know where to begin looking.
*I've definitely had a few bad drivers, of course.
I always sit in the back of the taxi (just like what Waymo does), and that already significantly decreases my chances of dying. And with a bus it's again easily more than 5x safer than a car.
Buses are generally much safer because they are bigger and heavier. Except in the mountains. But I only encounter that on holidays, and on holidays we take a lot more risks.
Bus vs. AV: AV rider dead, lots of injuries in the bus. That's conservation of energy and momentum.
Assuming some utilitarian equation for minimizing total injury, an AV bus probably won't swerve, and probably won't brake significantly except when colliding with a heavy or fast vehicle.
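To make the mass asymmetry concrete, here's a toy calculation assuming a perfectly inelastic head-on collision; the masses and speeds are made up purely for illustration:

```python
# Assumed masses and closing speeds, chosen only to show the asymmetry.
m_bus, m_car = 12_000.0, 1_500.0   # kg
v_bus, v_car = 10.0, -10.0         # m/s, head-on

# perfectly inelastic collision: both end at a common velocity (momentum conservation)
v_common = (m_bus * v_bus + m_car * v_car) / (m_bus + m_car)

print(f"bus delta-v: {abs(v_common - v_bus) * 3.6:.0f} km/h")
print(f"car delta-v: {abs(v_common - v_car) * 3.6:.0f} km/h")
```

Almost the entire velocity change lands on the occupants of the lighter vehicle.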
The average baseline involves a driver that is not paying attention or hasn't slept X% of the time. You cannot magically wave away all the bad days and pretend only the good humans are on the streets. The bad days happen, people die.
The autonomous vehicle only has to do better than that. That's the whole point.
Statistically no. But you need to convince people - that is the point.
We are not rational. We do not respond well to a car running straight into a barrier or stand-still vehicle at 100 km/h without even attempting to brake (such as the failure modes that Tesla has demonstrated) even if it does well most of the time and has better statistics than an average human.
And further, as noted in the thread, a good part of driving is assessing other vehicles and vehicles behaving oddly (even if it is objectively better in isolation) is really bad and will increase the risk of collisions.
I think Google have experienced this: people do not respond well to the driving technique of their car since it doesn't behave like a human - not that it does anything wrong.
Individuals may not act rationally but regulators and insurance companies with their birds-eye view will see the hard numbers and hopefully provide incentives aligned with the rational choice.
I didn't suggest that the technology should be rushed and I agree that a careful approach can save more lives. But what constitutes "careful" matters here. For example if a city chooses to offer robotaxi discounts to people with bad driving records (before some cutoff date to avoid perverse incentives) then even an average taxi fleet could be a net-benefit even though the taxis do not perform better than the general population. And that's just in terms of lives saved, not counting the other benefits of having cheap transportation.
==
Obviously, not everyone can be above average. Exactly half of all drivers have to be in the bottom half when it comes to driving skills and safety.
==
Maybe, but the bottom half is not necessarily made up of drivers worse than average.
I do not know how you would calculate "average". But there are people on the road who could pull down the average a lot, so that more than 50 percent are better than average.
In colloquial speech, most people don't differentiate between "mean" and "median". My guess is that, in that kind of survey, the participants read or say "average" and implicitly mean "median" – and exactly 50 percent of drivers are better than the median, by definition.
The problem with not differentiating clearly between mean and median is that you look out at other drivers and calculate a mean, then you assume that the median is the same. With a sufficiently skewed distribution this will hugely inflate your numbers, even if you have a realistic perspective on your actual skill.
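A toy illustration with made-up crash rates: one terrible driver drags the mean up, so most drivers really are better than the mean, while exactly half beat the median.

```python
import statistics

# Hypothetical crash rates (per million miles): most drivers are fine,
# a few are terrible and drag the mean up.
crash_rates = [1, 1, 1, 1, 1, 2, 2, 2, 3, 20]

mean = statistics.mean(crash_rates)
median = statistics.median(crash_rates)
below_mean = sum(r < mean for r in crash_rates)

print(f"mean = {mean}, median = {median}")
print(f"{below_mean} of {len(crash_rates)} drivers are better (lower rate) than the mean")
```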
I feel the opposite. The one sensor that is guaranteed to have sufficient information to drive in all conditions is vision. That's obviously because humans drive exclusively with vision (and a single vantage point to boot - modulo mirrors).
>The one sensor that is guaranteed to have sufficient information to drive in all conditions is vision.
Minor nit but humans can't drive--certainly not safely--in all conditions. You can certainly get to a point in fog, blizzards, and even very heavy rain where you really would like to get off the road if possible. (Not always possible of course and in snow particularly, pulling off to the side of a highway isn't a great option.)
Not a nit at all; the parent comment has the cart completely before the horse. Humans use vision (primarily) to drive because it's the only sense we have that's even close to being sufficient.
There are certainly other senses (lidar, ultrasound, radio signals) that robots could avail themselves of that would be helpful even in conditions where vision also worked.
Kind of. The difference between human vision and computer vision is that human vision is stereoscopic. We perceive depth in addition to color and shape. And that gives us the ability to perceive the 3-dimensional shape of an object, which lets us anticipate how it might move. A lot of CV algorithms operate on single images from a single camera, which makes it impossible to judge depth. In that case you'd have to use the size of an object as a proxy for its distance and speed, so you'd tend to misjudge how far things are from you and where they're going.
The nice thing about LIDAR is that you can gather that depth/shape information and with sufficient calibration map the shapes in the camera image to the depths in the LIDAR image. You can do the same thing with two cameras so I'm not sure why LIDAR would be preferred here.
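For what it's worth, the depth you get from a rectified stereo pair is just Z = f·B/d; here's a tiny sketch with assumed focal length and baseline, which also shows the catch at long range:

```python
# Z = f * B / d for a rectified stereo pair; focal length (pixels) and
# baseline (metres) are assumed, illustrative values.
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    return focal_px * baseline_m / disparity_px

for d in (60, 15, 3):
    print(f"disparity {d:>2} px -> depth {depth_from_disparity(d):.0f} m")
# At 3 px a single pixel of matching error swings the estimate by tens of metres,
# which is the regime where LIDAR's direct ranging is attractive.
```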
We don't need two eyes to drive. And we don't need two eyes to gauge depth. Neither does a machine, but adding stereoscopic cameras is not hard.
The stereoscopic effect is very poor on driving distances, doesn't help us that much. Primarily we use clues form the environment to gauge distances. We also have to focus our eyes to the correct distance - that also tells us how far an object is.
Primarily we have had an insane amount of training to understand our world. An understanding that a self-driving car will never achieve unless singularity happens. It might not need that, but it will need other ways to compensate for it.
Tesla, as an example, has 3 forward-looking cameras; additionally, a single moving camera can sense depth, since the differences between frames relate to the distance from the camera.
LIDAR has its advantages, like precise 3D positions under ideal conditions. However, there are downsides as well. Cost is a big one, but that's becoming less of an issue over time. Another is sensitivity to rain, fog, blowing sand, etc.
A complicating factor is that human drivers will assume other cars act like they have human limitations. So higher speeds when humans can see well, and low speeds when humans can't.
Not sure Tesla's current sensors will do it, but seems like camera based systems are likely to be quite competitive with LIDAR. Maybe instead of 3 forward cameras, 6 or 8 so there's overlapping views (for stereoscopic vision), handling failures better, and allowing a narrower field of view at a higher zoom.
More range will be a huge help, that way an autonomous car can slow more gently when uncertain and drive more like a human. After all superhuman reflexes aren't much use if you get rear ended all the time.
> You can do the same thing with two cameras so I'm not sure why LIDAR would be preferred here.
I almost spilled my beer about to comment that a camera or two are equally if not more powerful than lidar. To me personally, Lidar feels like an incomplete solution to 3D mapping when high res images from a smartphone camera can provide so many more data points from different angles.
My thinking was vehicles should have an idea where other vehicles are without the need for computer vision. Like beacons saying "hey, I'm here." And we can try to calculate relative direction and distance. The vision bit should ideally come in to validate and confirm other things like road signs etc. Ideally we should add that data to mapping software and the car should know these things without "seeing" them.
You can get 3D data from a single moving camera. The technique is called structure from motion and has been demonstrated to work well more than a decade ago.
The biggest problem with relying on visual data is that you get very noisy data. Poor lighting and reflective or glossy surfaces cause problems (I'm not sure what current state of the art is, it's been a few years since I looked at the research).
As far as I understand the big advantage of LIDAR is that you get nice and clean depth data and it's not so computationally expensive.
That's why one place I really want a driving assist is automatically backing out of spaces in parking lots. Visibility is terrible for the driver. You need to be paying close attention in multiple directions at once, you often don't have visibility at all when you need to start moving (like a larger vehicle parked next to you), and both pedestrians and other vehicles can appear out of nowhere, often moving in unexpected directions. It feels very unsafe.
Computer vision could be making those go-stop decisions for you, much more effectively than human drivers.
Heck, imagine a "smart" parking lot that tracks its available spaces and communicates with your car. You enter the parking lot and hand over control, and the car and lot work together to park you safely in the best available space.
Some cars already have those. I've got a car from 2017 that has some kind of sensors wrapping all the way around the bumper. When I'm parked between two cars, it can detect a car roughly 15-20 feet away as soon as I get an inch or two of my bumper clear of the other cars.
It does lack any spatial indicators to the driver, though. It just plays a beep, but it doesn't seem to play the beep from the side of the car the danger is in, nor does it beep louder when they're closer. Still, it works remarkably well. Reliably detects pedestrians and vehicles (even while I'm moving, which is impressive, because it doesn't go off for parked cars when I start moving).
I think it's the Dodge ParkSense, but I bought the car used so there's a possibility it's something aftermarket.
It's saved my ass a number of times. I'm in jacked-up pickup truck country, so it is often impossible for me to see pedestrians about to walk behind me because of the lifted truck, and likewise the pedestrians can't see me until they're behind me.
A similar calculus was what lead me to get a private pilot’s license instead of a motorbike - general aviation fatalities are something like 90% pilot error but 60% of motorbike fatalities are NOT the fault of the rider.
To add to the ‘different set of accidents’ point - the accidents caused by autonomous cars will most likely also be harder to accept. They will be accidents that a human driver (at least with the benefit of hindsight) would appear unlikely to have caused.
Things like that Tesla driving straight into a wall and killing its driver. Or not seeing a van right in front of it because the sun was too bright. Things where a bit of common sense (not something AI excels at) might have avoided them.
On the flip side the lives saved will be based on having super human reaction times / situational awareness and will be things no person would have been able to do.
So maybe there’ll be a battle of public opinion (and PR!) weighing these things against each other.
I think you’re going to find human caused deaths to be just as bizarre if you ever look into them. You might think people are generally ok drivers but when you’re looking at each fatality as the worst mistake per 1.5+ Million hours driven people are doing some truly dumb things.
On that time scale you can expect people to have fallen asleep at the wheel several times. They spent literally months looking at their phones rather than paying attention to the road etc. Because here’s the thing most accidents aren’t fatal, for every death you’re going to see a host of accidents that should have been avoided and tens if not hundreds of thousands of mistakes that could have resulted in a serious accident but didn’t.
This is why it's important that the set of drivers killed in AI accidents will be different. The ones killed in accidents up to this point are the ones being reckless (and sometimes, the people they inevitably hit).
The ones killed in accidents involving self driving cars will include people who drove responsibly or even defensively in the past. They will also be among higher income earners for the first few years, while self-driving capabilities are still too expensive to be included in the average car.
Moreover, people like to think they're in the set that wouldn't get themselves killed by their own driving, regardless of how poorly they drive.
I was driving on 280 once in clear weather and light traffic, and the car ahead of me suddenly swerved left, swerved right and went tumbling across all lanes and ended up upside down on the shoulder. It was a really bizarre thing to witness. And both people inside seemed mostly OK.
But then we assume that autonomous cars will replace these bad drivers - which is a pretty unsubstantiated claim; the same percentage of drivers may still be this bad when autonomous cars are more popular - and we don’t have enough data to conclude that autonomous cars are better at dealing with these bad drivers than competent humans (and I assume this is not true as of yet).
No, if there are lives saved by self-driving cars, they will primarily be in instances where the driver was distracted/drunk/high. Thus any competent lawyer working a case against a self-driving car company, where their car killed someone, will argue (rightly!) that those same lives could also have been saved by less high-tech measures, such as AI detecting undesired human behaviour and putting the car in limp-home mode or similar.
In that case you are saving the same lives and not causing any new fatalities, but it's not a super cool tech that will attract lots of investment. So the choice to implement self driving was ultimately putting profits over human lives.
I think detecting impaired and distracted drivers is a lot harder than you think and more high tech since you need that detection in addition to the ability to limp home.
There are very trivial situations where even current Teslas are safer than previous cars: while I wouldn’t trust my life to Autopilot, I’d rather drive next to a drunk driver using Autopilot than one driving “manually”. Other simple things, like detecting excessive swerving and drifting out of the lane, would be relatively straightforward signals for the car to maybe limit speed and encourage pulling over.
Detection is pretty much where AIs excel so I doubt it would be a hard problem. And it could also just disable driving the car and that’s it, saving the same amount of people.
There's actually a potential safety advantage from the public's irrational reaction to autonomous driving deaths (irrational compared to their response to human driving deaths):
Autonomous driving deaths will be much more scrutinized. In the press, by regulators, by competitors, by the companies involved. There's likely to be a lot more data involved. The scrutiny may ironically increase as it becomes safer, maybe becoming a lot like plane crashes, in fact, where each one is scrutinized by groups of experts and fixes are put in place to prevent that kind of thing from happening.
The safer it is, the rarer and more novel the fatalities are and therefore the more resources will be spent on fixing the safety issues.
That is, if autonomous driving is allowed. If it's not allowed because of overall irrational fear of autonomous driving, then that feedback cycle can't happen.
So the paranoia isn't necessarily a bad thing as long as it doesn't cause paralysis.
It isn't just a perception issue. It is going to depend on how we assign blame. Today if a human causes an accident that kills someone, the family can sue the human and get something from their insurance company, but most insurance policies have a relatively low limit on what they will pay. As far as the individual goes, the median net worth of someone in the US is less than $100,000. You can sue, but there really isn't that much money to get.
However, with self-driving vehicles, if you sue the manufacturer of the vehicle, they have significantly deeper pockets. This is going to effectively keep us stuck in the place where, no matter how good self-driving cars become, the responsibility remains with the driver.
>> The challenge remains that people will be killed in accidents involving autonomous control.
While that's almost certainly true (and depending on one's interpretation of autonomous is already true), some people already believe that zero deaths in transport is possible: https://visionzeronetwork.org/about/what-is-vision-zero/. If it's possible to hold human drivers to that standard, why not autonomous systems as well?
That was my original hope for self-driving cars. The easiest way to ensure that you don't kill anybody is to limit your speed to 20mph. At and below that speed a car-pedestrian collision is highly unlikely to result in a dead pedestrian. Also, at 20mph you can stop on a dime. So I imagined a large fleet of robot cars traveling at 20mph and normalizing that speed, forcing human drivers to slow down too.
But it turns out that self-driving vendors spend a lot of effort on "driving like a human", which includes driving faster than the speed limit and faster than vision zero would allow on shared streets & roads.
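As a rough sanity check on the 20mph figure, here is a back-of-the-envelope stopping distance estimate; the friction coefficient and reaction times below are assumptions for illustration, not measurements:

    # Idealized stopping distance: d = v*t_react + v^2 / (2*mu*g).
    # mu (tire-road friction) and the reaction times are assumed values.
    MU = 0.7      # dry asphalt, assumed
    G = 9.81      # m/s^2

    def stopping_distance_m(speed_mph: float, reaction_s: float) -> float:
        v = speed_mph * 0.44704               # mph -> m/s
        return v * reaction_s + v ** 2 / (2 * MU * G)

    for mph, reaction in [(20, 0.2), (20, 1.5), (40, 1.5)]:
        print(f"{mph} mph, {reaction}s reaction: {stopping_distance_m(mph, reaction):.1f} m")
    # ~7.6 m at 20 mph with a fast automated reaction, vs ~50 m at 40 mph
    # with a typical human reaction time.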
Well, that's a silly vision. I have a vision of immortality too. This is a textbook example of perfect being the enemy of good. I don't want to hold any system to the standard of zero fatalities, and further, I think this is why "shoot for the moon and even if you miss you'll land amongst the stars" is faulty. People waste an exorbitant amount of time and fossil fuel commuting in passenger vehicles. Anything that significantly changes that balance should be considered. Progress is progress, and letting everybody reclaim the time they used to spend commuting is certainly progress.
There's zero chance mass transit can accommodate people's needs. Even if you somehow solve the shopping issue with local markets and fixed item prices, the job<>house graph will always look like a fractal set of overlapping stars.
The car is one of the most important social equalizers of this century and the last. It allows people of basically any social status to move themselves, their family and their belongings at any time, in full autonomy, enabling them to find houses and jobs in the most economical and convenient places.
Private cars win over public transit because they're a better product. The only way for public transit to compete is to pass laws that make cars comparatively a worse product, which is fundamentally anticompetitive: once public transit becomes monopolistic, once the company owns a track, it can abuse its position in setting pricing and timetables.
Even in traffic, private cars offer a better experience: people are in their own space, under a roof from start to finish, at their preferred temperature, and away from obnoxious or intoxicated people.
Forcing mass transit through legislation will surely have some short-term benefits, but also:
- more class divide between the rich, who will always be able to afford a car, and the poor, who won't
- massive gentrification of well-connected areas, while the poor get pushed out to where service is of lower quality
- dependency of the poor on expensive third-party services whenever they need to move themselves or their goods to an unserviced destination. Rentals are not an option: rentals need a credit card, and guess who doesn't own one?
- dependency on public transit schedules. Have an unusual work shift or need to visit your parents? Be prepared to wait in the open in the cold or rain. Need to pick your child up from school at an off hour because he's suddenly ill? Too bad, transit only gives peak service at peak hours.
Of course there are situations where mass transit is convenient, like large cities where a significant percentage of the population shares the same source/destination points, at which point people will use it naturally on their own.
Opposing private transit is going to destroy an important achievement of modern society; the car has been a great economic equalizer and still is. Markets, after all, need customer flexibility in order not to be captured. Safety is something to be solved with safer cars and better technology all around.
It'd be like discovering tomorrow that soap pollutes: some will want to force everyone to smell, some will create a better soap.
“Of course there are situations where mass transit is convenient, like large cities”
That's where most people alive today live, and in the not-too-distant future, the vast majority will.
You’re also leaving out all the downsides of cars: mass casualties, sedentary lifestyles, air and water pollution, reshaping urban forms to non-human scales.
When you live somewhere that hasn’t been exclusively designed for the car, you might realize that it’s perfectly possible to live a happy, healthy life without ever using one, or doing so only rarely through various rental schemes.
It’s also rich that you bring up poor people. Requiring car ownership, like much of the built infrastructure in the United States does, is a huge burden on the finances of the poor and working class.
You're also forgetting that there are options other than "jam everyone on a central star-topology bus" for transportation: walking, bicycling, and various e-devices. All of these are much, much cheaper than cars and yet provide many of the benefits you tout.
Cars have their place and have been extremely useful for many, but our society has gone too far.
All the poor were kicked out of the inner city and commute for hours a day now.
And just wait till they realise you can influence voting patterns by making it inconvenient to reach the polling places simply by tweaking timetables; it'll make gerrymandering pale in comparison.
It's incredibly depressing that, in the context of a discussion of transport safety, you seem to think we're doomed to 30,000 dead a year from crashes alone, to say nothing of shortened, poorer-quality lives from obesity and lung disease, just because you claim the personal car is intrinsically superior to other modes of transport.
That’s a really dystopian vision of the future and thankfully disconnected from the reality in a lot of major cities which are taking steps to reclaim transportation systems from domination by the personal automobile.
> you seem to think we’re doomed to 30,000 dead a year from crashes alone
I even said:
> Safety is something to be solved with safer cars and better technology all around.
but apparently you're filtering what you want to hear, so there's no discussion to be had here, especially since all the points so far are appeals to feeling without addressing the point made. I'm sorry that logic has failed you.
>That's where most people alive today live, and in the not-too-distant future, the vast majority will.
Ironically enough, Covid is showing just how much people would rather not choose that destiny. In the US, inner-city real estate has taken a hit while car-only rural/exurb property has seen a dramatic uptick. I take this to mean that, A. many people would like to get away from the filth/homelessness/noise of the city, and B. many people would like to have space for creative activities that our litigious and ownership-driven society deems only appropriate for private ownership. Space that makes urban landscapes take on "non-human scales" is just part of that equation. There is no way for everyone who wants one to have space for a nice garden, a gym, or a painting studio within the confines of a city, ergo suburbs. As automation frees our time, people are going to desire more space for creative activities, not less. Having grown up in a more rural setting, you learn how to use the space afforded to you. Things like having a band, making/fixing things, keeping animals, growing plants, experimenting, or really any physical project or collection all require a considerable amount of space.
Which brings me to my next point: people choose to live in the city because it is convenient in our society, not by some law of attraction. Short of climate change laying waste to the environment, I expect people to actually move away from dense urban living long-term as automation becomes more dominant. Personally, I enjoy living in the city because it is convenient, I don't partake in a lot of the activities that drive people toward wanting space, and I honestly like the people I find there better; but I find every city I've ever been to extremely loud, absolutely filthy, and ugly as hell (architecture is only beautiful in contrast with how ugly the urban setting is). Amazon delivery alone has made rural life much more attractive.
And besides all this, there’s still the problem that the vast majority of the New World just isn’t as settled as the Old. While Europe was adapting cities to the modern era of transportation, we were bootstrapping a country into existence.
You say, "our society has gone too far", but out here in California, by modern society's standards, we would need several lifetimes of radical change to come anywhere near such a goal through traditional means (e.g. public transit, dense urban housing). You are accurate in your appraisal that our cities are often designed around the car, but you can't just wave a magic wand and fix the massive structural housing problems and lack of mass transit infrastructure. And this isn't the 1970s: infrastructure construction takes FOREVER and is ludicrously expensive, especially major infrastructure that will see lots of public use. Don't forget about our stringent wildlife/environmental/worker protections and accessibility/fire codes, which further complicate, slow, and add expense (not to say that these are universally bad). Moreover, the state has spent the better part of the last 100 years actively fighting urbanization. Currently the average commute in California is 30 mins/20 miles each way[1], apparently twice that of Europe[2]. California also has 40 million people, most of whom commute by car, and an entrenched political establishment.

To illustrate these points: San Francisco is working on adding an extension to its metro, a 1.7 mile addition with 4 stations, albeit a deep cut going through downtown, that will take a decade to construct and will cost $1.5B; it's expected to serve 35k people per day (which, having lived along the proposed line for 2 years recently, seems awfully high). Meanwhile, all of the state's urban centers, and especially SF, are composed of legions of office buildings with only a smattering of apartment buildings. Instead of building up in the Bay Area, we paved over Antioch, some of the best farmland in the world, with ideal conditions for the difficult-to-grow almond crop, to put in suburbs some 45 miles away from the City, and only decades later finally got around to running a single rail line out there. Meanwhile, we created a giant waterway to move some of the fresh water that flowed through that area 250 miles south to the desert so that we can grow almonds there instead. We did that right away, and the state is now planning on building a new pipeline and pumping station because the original waterway was allegedly causing environmental problems. I take the skeptical tone because they've never actually said that they plan to stop using the old waterway, only that it has problems that the new one won't have.

My point here being that a supposedly progressive state like California is ideologically incredibly far away from even considering moving away from the car. Meanwhile, there is another real human tragedy that we may be able to fix relatively quickly: tens of millions of people wasting ~250 hours a year driving a car to work, most likely in traffic. I see it as a great good if we can automate driving to free up all that time, and setting the conversation up around a goal of zero fatalities is incredibly callous about that cost in human life.
I've both been poor in the US and known and talked to many, many poor people here. Car ownership is absolutely empowering, because while you see it as an unnecessary expense, most see it as a valuable tool. And a car is cheap if you consider it housing. It's really difficult to compare being impoverished in the US vs. Europe because of how little social safety net we give people here. All else being equal, though, I'd rather be poor and be "burdened" by car ownership than live in a society that bars me from affording one as a poor person. In the former situation I'm at least able to freely move to a more suitable location.

A few key things a car provides: A car is a mobile comfort zone. Sick of your partner's family (or your own?) or not feeling the office for lunch? Sitting out in your car is way more peaceful than sitting on the curb. A car is also a mobile lock-box. Going grocery shopping? Pretty handy to be able to go to multiple stores on the same trip; not everyone likes making a trip to the store on the way home for a couple of things every day. It's also useful, if you are poor, not to be beholden to local prices: traveling a bit lets you shop around for deals. It's also handy to have things with you (food and drink, cosmetics, clothes, tools), as in general when you are poor you can't afford to make convenience purchases and need to be more prepared to strike at opportunity. Car ownership offers all kinds of capabilities for odd jobs as well, now formalized with the gig economy.

Also, having had both a car and a bike in downtown SF, e-devices are not an alternative to cars; they are an alternative to a bike. I could mostly replace owning a bike with the rentals, and not having to think about theft is nice. But a car operates in any conditions and lets me do a proper (Costco) grocery run, buy large things, buy things not available locally (Craigslist), deal with bed bugs (seriously, having a car saved our ass there), go to nature, travel with a pet, and visit and transport friends and family.
Anyway, thanks for the link about Vision Zero. I just fundamentally disagree with their core principle, 'life and health can never be exchanged for other benefits within the society.' That's literally what society is about: we exchange our life in the form of work for other benefits in the form of money. While I think this is a fine ideal, and I do believe that we should strive to raise the value of life above a dollar value (I certainly do on a personal level), I also think that we still live in a world of harsh limitations and that we should be conscious of the effect we have on those we take from when we try to protect others.

I found the traffic calming in London unthinkable and terrible; purposefully making the lives of every motorist miserable to decrease the risk of accidents seems like a Faustian bargain to me. You trade a shared social risk of injury for the guarantee of making a bunch of people's lives harder, because if you make driving sufficiently hard, hopefully fewer people die. Even knowing my way around SF, driving around downtown with its traffic calming is a nightmare and easily one of the most high-stress, anger-inducing things I did when I lived there. Reading through Vision Zero's policy agenda and action plan for SF feels so out of touch with the realities of driving in it. I did learn why the lights are such a mess downtown, though: unlike in most areas, where they time the lights for motorists to go a certain speed, in SF they time them for pedestrians. This makes me extremely skeptical of the whole program; lights that aren't timed for a speed limit drastically increase motorists' propensity to speed in order to beat the next light.

Taking a harder look at Vision Zero, I don't see anything that informs their policy initiatives, and they actively state that "How data is used to communicate and evaluate progress toward goals should reflect the values of the overall strategy." So I really don't see why they are worth listening to; they don't ever provide any sort of cross-analysis either. No mention of how construction affects traffic in their 2019 SF report that I could see, which is ludicrous. No mention of road rage or driver sentiment, ever. Some moral crusade, lol. It seems more like it's really some rich person's sick vision of zero cars, and the plan is to legislate torture into the driving experience to do it. Freedom of movement isn't something the poor should have.
If you read about Vision Zero and its methodology, you can see that it indeed celebrates incremental progress and indeed encourages simplest improvements first.
Before you dismiss something as silly, perhaps you could try understanding it some first.
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
That's an interesting insight into part of the problem, thanks for pointing it out.
Indeed, in America, if you ride a motorcycle but don't ride drunk, don't ride unlicensed, don't ride an unregistered bike, and do wear a helmet, you are orders of magnitude less likely to die because you are in a statistically safer demographic of riders. Yet any one of the riders in that safer demographic could still be hit by a red-light runner and killed through no fault of their own. We would definitely feel that death more tragic and senseless.
> I expect that in the end it will come down to a business decision, and that decision will be informed by an actuarial exercise: Will profits and insurance be able to cover the costs of defending and settling such cases.
I'm seeing this type of phrasing occur more and more. Once the defendant can be named in a legal action, we'll start seeing SDVs. IMHO, the worry isn't that they will kill, but that no one is to blame.
Although it will change the day-to-day narrative of a pedestrian. E.g., my thought process will change from "this person might not see me" to "that car's AI might not see me"... or even "Oh, it's a Toyota, they kill more than Hyundai... stand back!" But now I'm just writing SciFi.
I don't think so. Normally whenever I want to cross the street (as a pedestrian, but even more as a cyclist) and a car approaches I (unconsciously) examine its speed, and if it's higher than acceptable I try to make eye contact with the driver to make sure they see me and it's safe for me to go. How do I make contact with the AI of the car? More importantly, how do I get the cue I've been noticed?
That's a really good point. I forget how often when I'm walking, running, or biking I will try to make eye contact with a car to make sure we're aware of each other.
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
This is a really good point. I drive really conservatively and like to think I will never ever cause an accident, let alone a fatal one. If this lever were taken away, I think I would have a hard time accepting automated driving for a significant amount of time.
> There will be cases where a court determines that the autonomous system was the cause.
At the moment, manufacturers almost totally escape blame for fatal accidents [involving human drivers] - it's understood societally and in the legal system that the human driver was the one at fault.
That isn't a totally accurate picture of the responsibility. The manufacturer provided a vehicle that included a risk of fatal accident. (Reducing this wrong and describing it as 'lives saved' feels uncomfortable to me)
With an autonomous system, blame for fatalities can no longer be placed on a human driver: and yet, there is still a failure of responsibility (maybe this will be a more accurate placement of blame)
Hasn't there been only 1 fatal accident? If not, I can't imagine there have been >5, so it seems unfair and misleading to make bold claims like that. I think the courts will rule, and if the system misunderstood something, the manufacturer will be at fault.
Idk about risk: that is hard to establish. There are so many conditions in which an automatic pilot hasn't been tested. We don't even know the factors involved in estimating the risk: e.g., is it dependent on the human co-pilot? And self-driving cars may change the car usage patterns, exerting a contextual influence on the risk.
Then there's the question of responsibility. Who will be held responsible when the automatic pilot is driving? If it's the human, then a high risk of causing an accident will be unacceptable to many drivers.
Obviously the car manufacturer will be liable when a fault in their product kills people.
And that is as it should be. The costs will be baked into the price of the vehicle. This aligns incentives. The manufacturer wants to pay as little as possible, and the owner will want to be safe.
With the car recording video and other data, determining what actually happened should be simple and reliable.
Note that the car owner no longer pays any insurance in this system.
Smart manufacturers will cover this risk with third-party insurers, for some period of time: probably the vehicle warranty period. After that, it will be on the owner to renew.
If I were one of these insurers, I'd be insisting the vehicles were maintained to an extremely high level of roadworthiness by mechanics that are licensed or certified. Home DIY would be forbidden, as would any modification to the hardware or software. Think something like what happens now for aircraft.
I do all of my automobile maintenance (except for replacing tires) and I do a fair amount of the routine, preventative maintenance on my airplane (which is permitted for part 91 operated aircraft).
I doubt that lack of roadworthiness on DIY-maintained automobiles is a significant contributor to fatal and serious collisions. I also think most DIY-maintained cars are just as well maintained as their same-year counterparts that only see dealerships (sometimes better, as the DIYer is likely to be an enthusiast and unlikely to be deterred by cost pressures relating to the $130/hr service rate).
It's reasonable to expect that the first generation or so of fully autonomous vehicles will be rented out and must adhere to a manufacturer-mandated update and service program or they can't be driven. You won't own them.
It is software; of course you can (with access to the source code and its tools). The car will record all of its sensor data, and we'll be able to analyze its decisions.
I mean, we were able to prove that the Uber car killed someone through reckless engineering because of its black box. It was obvious that the car saw the person and then deliberately ignored them, and that the stock autonomous braking system had been disabled.
> we anticipate that the number of people killed will be fewer, hence 'saving lives'. However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents. There will be cases where a court determines that the autonomous system was the cause. Families of those killed will want justice, while those separately saved by autonomous systems may never be heard from in the same case.
This reasoning was addressed with the VICP program for vaccines in the U.S., based on the idea that vaccines save many lives overall, but still injure a small number of people who will want to be compensated for that injury.
Maybe there should be a self-driving-car injury compensation program along similar lines once particular cars are convincingly proven to be even moderately safer than human drivers. People might be mad about it, but maybe the precedent of the apparent success of the VICP would be persuasive (at least for courts and legislators, and maybe for some people actually injured by self-driving cars).
A counterargument is that vaccine injuries are generally unforeseeable and reflect absolutely no fault on the part of the injured person (or injured person's guardian), while some people injured by self-driving cars will bear some fault for their own injuries, which insurers or manufacturers might well want the opportunity to adjudicate. (Otherwise, there could be a perverse incentive to deliberately do dangerous things around self-driving cars to provoke an injury and receive a payment, something I've heard already sometimes happens with personal injury claims against human drivers in some parts of the world.)
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
So far it seems that this is very much the case. Autonomous cars do relatively well in highway scenarios whereas they appear to do poorly recognizing bicycles, for instance. Reducing safety to one single metric would be a big mistake.
> Will profits and insurance be able to cover the costs of defending and settling such cases.
Risk will be transferred to individual consumers through "pay attention / hands on wheels / you're still the driver" warnings. Those warnings will never go away, even with vehicles that are in all other ways marketed as L5.
And frankly, it should not be legal. Either don’t sell it as self-driving / don’t even have that feature, or take full responsibility (within normal limits) for any accident.
I'm wondering whether the first steps ought to be to make current driving much safer through automation - like detecting drunk driving, falling asleep, and so on, and then limiting speed, refusing to start, alerting police and/or nearby vehicles, etc. The lessons from that might better inform the long tail of cases that autonomous vehicles will eventually face?
My guess is it's much easier for the public to understand the risks and liability of human drivers than AI.
Humans often get distracted or misinterpret signals, and their errors stop with the driver. AI could react in more unexpected ways when it goes sideways. Humans too can get weird when chemically impaired, but the liability is much easier to attribute.
Sensationalism will win out quite a bit. And responsibility. It's just a tough problem.
- People develop Bell's palsy at the same rate with the vaccine as without. But suddenly only those with the vaccine show up in social media feeds, because nobody used to post "I just got Bell's palsy", but now they do because people are paying attention. The same will happen with AVs. "I just got into an AV accident" will make headlines, while "I just dozed off and hit a kid" will barely circulate.
- People inherently trust humans over technology. Just because. So they will be quick to distrust autonomous vehicles. I've already had convos about the fact that yes, Teslas do kill, but on the whole self-driving Teslas kill far less often than non-self-driving ones.
- When a human drives, the liability is on the human. When a car self-drives, the liability might be on the manufacturer.
> People inherently trust humans over technology. Just because.
From the perspective of someone who rides a motorcycle, the #1 thing you need to do to not crash is to anticipate what all vehicles around you might possibly do.
For example, I always avoid riding in another car’s blind spot for obvious reasons.
The problem (for motorcyclists) will now be trying to adapt to understand what a Tesla might do and where a Tesla’s blind spots might be - and once you add in the idiosyncrasies of other AVs I could see it being really difficult to ride safely around AVs.
It’s fairly easy to anticipate actions of another human, and not as easy to anticipate when actions are decided by an algorithm.
FWIW I think the above also applies to cyclists.
(I suppose this becomes a non-issue if the assumption is that AVs will be so superior to human judgement as to never strike another motorcycle or cyclist - 5x safer sounds like a starting point)
I actually expect the opposite: motorcyclists preferring to be near Teslas and any other car with sensor-based safety features that are on 24/7.
I'm frequently alerted to a motorcyclist approaching from the rear by it appearing on my Tesla display, because it's detected by the cameras and ultrasound. I rarely notice the motorcycle before the car does.
I own a Tesla and I ride a motorcycle. I would much prefer to deal with 100% autopiloted Teslas than current human drivers.
Teslas don't have blind spots. They have eight cameras that give 360 degree views around the car. They also have a dozen ultrasonic sensors that can detect obstacles up to 5 meters in all directions. The only way to collide with a Tesla on autopilot is by doing something really dumb.
In practice, a Tesla on autopilot tends to drive like a human taking a driving test: accelerating slowly, always signaling before turning or lane changing, always yielding to pedestrians, always braking or cancelling lane changes if an aggressive driver gets in the way, never honking. If traffic is too dense to lane change to the desired freeway exit, it reroutes rather than cutting into traffic (as pretty much any human would).
I don't think that's generally true, or at least, the contrary notion is also true sometimes. I'm pretty sure that if someone asked me the product of 13 x 6, they would trust me more if I punched some numbers into a calculator and gave them a result than if I just did it in my head. I don't know, but I think the likelihood of me mistyping numbers is embarrassingly high, and probably about as likely as a mistake in easy mental math.
It's also closely linked to your third point though. Liability with self-driving cars is difficult. When people talk about self-driving cars, they sort of just hand-wave away the fact that there will be accidents, so as to avoid this difficult problem. This does not instill confidence.
>When a human drives, the liability is on the human. When a car self-drives, the liability might be on the manufacturer.
I think this is a point that needs more emphasis, especially on the word 'might'. Without both laws and a history of court cases giving evidence for how those laws are interpreted and enforced, it isn't possible to tell where liability may end up. I wonder if that is part of the reason people are hesitant. Liability is being removed from the driver, but it doesn't seem to have found a place to settle back down, so people are viewing it as if liability is simply being removed. For initial court cases (and the amount of time and money it takes to fight them), this may not be an unrealistic expectation.
If liability is on anyone, it would seem it has to be the manufacturer. And if there's no liability then the options are basically set up some sort of vaccine fund-like system or just to shrug and say it's between you and the insurance company.
I can't think of any product that has been developed since 1970 that can kill people. The exceptions are medical devices and pharmaceuticals. I sure hope self-driving cars can be an exception, but that will definitely take a federal law limiting the liability of manufacturers. Similar to how small aircraft manufacturers were being pushed to extinction due to very high liability costs until the passage of the General Aviation Revitalization Act in 1994.
Lots of products can (and do) kill people. But drug side effects aside, it's hard to think of modern consumer products that, used and maintained properly, might just go and kill you some day and people being OK with that.
Would this baseline include all the accidents from distracted drivers, drunk drivers, drugged drivers etc.? Or is it referring to an average human driver who isn't intentionally breaking the law?
If the baseline includes all these sorts of human error, I see no issue with holding robots to a higher standard. Imagine if we rolled out robot policemen who only executed black people for no reason at the same rate as humans do.
Behaving as a human would is often more important than staying strictly in line with absolute and relative positioning on a road.
Consider the semi-permanent snow cover on many roads across a third of the USA, which lasts weeks if not months. Humans driving on these snow-covered roads form emergent lanes that have little to do with absolute positioning or even the relative position of the curb. They form lanes based on what other humans do.
Self-driving cars that depend on knowing absolutely where they are and relatively where they are simply don't and won't function. We need self-driving cars that can behave as a human will for that. And that is a long way off.
No autonomous car has shown it can handle these common situations. Until then self-driving cars should not be approved nationally and probably be restricted to the arid and warm states that do not have winter.
I think this is a good point. The best lesson my dad ever gave me back when I was learning how to drive was to 'be predictable.' People don't get in wrecks when everyone behaves as expected. And the rules of the road are largely aimed at guiding that predictability. But in the end, regardless of the written rules, humans behave as humans and a robot driver should behave like other drivers. And it may change based on locality.
Does self-driving have to handle all weather conditions right away? A sensible implementation needs to take the current conditions into account, such as the weather and the status of the road and car. If those are bad, it would refuse to activate itself, similarly to how a responsible human would choose not to drive in bad conditions.
Driving in a whiteout blizzard is one thing. But people do need to get around in northern states in the winter, and they absolutely sometimes have to drive in snow (and sometimes snow happens mid-trip, or you have to get home from another location). I certainly don't go out of my way to drive in substantial snow (and fortunately I don't need to commute any longer), but it sometimes happens.
If it's just the autonomous system that doesn't work that's fine but now you really can't depend on the car unless there's a competent licensed driver who can take over.
I think we're in agreement. My comment was in reference to a more advanced version of what exists currently: a car that can be driven manually and always requires a licensed driver, but can activate its automation on command.
I think we'll have versions like that for a long while before we arrive at autonomous vehicles that do not require a licensed driver at all.
No, they do not need to operate in those conditions, but we have people seriously making proclamations about the imminent end of the truck driving employment industry as we know it, because "trucks will have no need for drivers."
Those people are fantastically wrong. And that's just one example.
Non-autonomous trucks probably do some of the time. I'm from a snowy place and actual big snow storms with significantly reduced visibility are infrequent, probably on the order of one day out of 100. It might be OK to compromise and say that the autonomous part of the fleet doesn't go out when that happens.
Definitely not all truck driving, but an autonomous truck that's able to handle highway driving in good weather is a much easier problem that would put a significant number of truck drivers out of work
Yeah, all you need is one truck to lead a 'train' of autonomous trucks. Think of the conductor/engineer being the truck in front; all other trucks ride so close to each other that they cut down wind resistance to save gas. At specific exits one truck detaches from the group and goes to a staging area where a local driver finishes the last mile while the train keeps on going.
More likely the truck-driving model would change. Instead of having one employee who goes where the truck goes, you'd have employees who reside in or near shipping destinations and meet up with the trucks for loading/unloading/fueling/maintenance/etc. On the one hand a given number of employees could service many more trucks along a single route, decreasing labor requirements, but at the same time covering large numbers of routes may take more people, or companies may focus on a narrower set of routes, offering more opportunities for smaller shipping companies. Odds are the number of people doing truck-driving related work would stay roughly the same, but the total volume of shipping would go up.
I really don't see any reason why there is a need for humans on highways, that is, inter-hub transport. Even more optimally we would have rail infrastructure for this, but sadly that is not cost- or space-effective.
I don't see humans going away for the last 10 miles, that is, delivery. It's possible to make that automatic, but there's another layer of robots to be involved. And environments like stores and private businesses in cities are much harder to design for this than the hubs themselves.
I think you've misunderstood. In places that have winter it's not just when it's snowing that the road lines and curb are obscured. It's for long periods of time: days, weeks, months, depending on the road.
This isn't a weather condition. It is a seasonal condition. It's around for an appreciable part of the year, and human drivers drive in it.
This is assuming people are considerate and knowledgeable enough not to use the semi-autonomous modes in areas that have winter snow cover. I have no confidence in people to do so. It would have to be made an explicit part of state laws. And then you have cars that are legal to drive in one state but illegal in another.
Yes, exactly, this is always the same example I give (in my case the 401 here in Ontario, Canada) -- blizzard in the middle of February, lane markings covered, highly unpredictable road surface, spontaneous temporary lanes, cars working at a crawl, snow plows coming through that you have to move over for, and can't pass, cars or trucks jack-knifed or half in the ditch. This kind of thing happens to varying degrees at least once a year, and I honestly don't think that these scenarios are actually properly in the imagination of the primarily-California-based engineers who work on self-driving.
For context, the greater Toronto region is 6 million people, and Great Lakes region from here over to Chicago is multiples of that. Winter is 4-6 months. This is not an insignificant edge case for a small population, and if self-driving can't handle it, no thanks from this driver.
It isn't snark. It's just blunt truth. Those places won't get it first either. If self driving cars come about, their full feature set may well be geo-limited. Even covering just California, Arizona, and Texas would make the technology amazing.
Not to speak of the Chinese, who will simply build their cities to include road beacons or whatever is necessary to keep AVs effective.
I'm sorry but it's troll-level behaviour to slap the "insignificant market share" label on the entire northeast and midwest which includes 6/10 of the largest "urban agglomerations in North America": https://en.wikipedia.org/wiki/List_of_the_largest_urban_aggl...
Unfortunately, troll behaviour or not, that's how SF companies behave in the real world. The usability of their products tend to be proportional to how close you are (physically or otherwise) to the bay area. I live in Calgary and I would be very surprised (and happy) if I see self-driving cars here before the end of the century.
You're just not important enough for how hard it is. Why is that so offensive to you? You don't even want it and you're upset no one cares to offer it to you? Bizarre.
Is this like not being invited to a party you didn't want to go to? Okay, then, maybe Tesla's snow driving test will give you the chance to ostentatiously decline.
Not the OP, but personally I don't want other self-driving cars on the road with me, risking me and my family. We know how easily, and how plentifully, software bugs get introduced every single release; I would imagine developers are the last set of people willing to risk their lives on software.
Because I'm not talking about when it's snowing. I'm talking about the entire season of winter. The snow doesn't disappear when it stops snowing. Even when plows come they're not getting down to asphalt half the time. It's still obscuring the curbside and road markings for weeks or months.
Yes, freeways, highways, and major roads might have their curbs and markings restored to full visibility within a day or two. But most roads do not.
I think I read that ninety-something percent of driving deaths or accidents involve people being irresponsible in the manner you said, but I can't find a source again, so take it with a grain of salt.
I was able to find one saying that of the 37k driving deaths in 2016, 10-11k involved BAC over .08 and about the same involved speeding. Not knowing the overlap, 10-22k out of 37k is 27-59% of deaths involving drunk driving or speeding.
If it was on the high end of that, then to do better than eliminating speeding and/or drunk driving alone, you would have to be at least 2.5x safer than a human.
I wish there were better stats on the safety of the sort of driver you would let drive you around (e.g. you wouldn’t get in the car with your drunk friend behind the wheel).
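A quick back-of-the-envelope check of those bounds, using the figures above and treating the overlap between drunk-driving and speeding deaths as unknown (the exact counts are assumptions within the 10-11k range quoted):

    # Bounds on the share of 2016 deaths involving drunk driving and/or
    # speeding. Total ~37k; ~10-11k involved BAC > .08 and roughly the same
    # involved speeding; the overlap between the two groups is unknown.
    total = 37_000
    drunk_low, drunk_high = 10_000, 11_000
    speed_low, speed_high = 10_000, 11_000

    low = max(drunk_low, speed_low) / total            # complete overlap
    high = (drunk_high + speed_high) / total           # no overlap
    print(f"{low:.0%} to {high:.0%} of deaths")        # ~27% to ~59%

    # To beat merely eliminating those deaths, an autonomous system would
    # need roughly this safety factor over the average human driver:
    print(f"{1 / (1 - high):.1f}x")                    # ~2.5x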
Without meaning to comment on how possible it would be to carry out on a policy level, replacing the worst human drivers with robot drivers that match average human drivers should be an improvement for everyone.
It also potentially opens up policy options, or at least makes them easier to choose.
Interesting idea. I think there may be feasible political routes to accomplishing that. Tighten up the points system that basically every state uses for deciding when to suspend your license, and simply force those who would normally lose their license into robot driving.
One easy way to start would be to give basic reflex and reaction tests. On one of my last trips to the supermarket early this year, before the COVID lockdowns started, I was on line behind a very elderly woman (who for some reason had chosen the self-checkout line). She had trouble merely attempting to lift her groceries out of her basket and scan them. She was entirely unable to feed her bills into the feeder without assistance. As I was leaving the supermarket I watched her struggle to open her car door before slowly getting into her car. I sat in my car for a few minutes watching as she struggled mightily to simply back her car out of her parking spot.
Here in New York, the roads are filled with elderly people like this. Mental acuity aside, they do not possess anywhere near the reflexes or dexterity to drive a car safely. Nonetheless, they are out there on the roads every day. It would be trivial to implement basic physical and mental acuity tests as a requirement for holding a driver's license. Unfortunately this remains implausible because of the lack of alternative means of transportation available to these elderly people (which could potentially be provided by self-driving vehicles).
Well, but the point was that human performance on any task has a lot of variability. Therefore 'worst', as a label, does not strictly apply in the qualitative sense. Maybe a probabilistic risk model would be more appropriate. What complicates things is that you cannot compare a human to an AV algorithm apples-to-apples: even the worst human driver is extremely unlikely to confuse a human on the road with a paper bag, something an ML classifier can do (if there is a bug).
> As I was leaving the supermarket I watched her struggle to open her car door before slowly getting into her car. I sat in my car for a few minutes watching as she struggled mightily to simply back her car out of her parking spot.
That is your cue to stay away from such drivers for your own safety. Just like if you see an idiot swerving on the freeway. Or if you hear loud honking and cars braking there is probably something going on. An AV would likely miss such cues unless it was specifically programmed to do so.
Exactly. For the benefit of passengers, self driving cars should drive as well as a sober, attentive, well-behaving driver.
When I get into a car as a passenger today, I already get the benefit of riding with a better than average driver — I can identify and choose not to ride with people who are drunk, inattentive, reckless, etc. That’s the comparison that is a more reasonable baseline expectation.
A lot of comments are focusing on safety via driving better. But with self-driving vehicles, can't we also make the layout of the car safer, so that accidents cause less harm to the people inside?
For example, right now because we need to see the road, I assume there is significantly more danger from the windshield vs. a padded back on both sides of the car with passengers facing each other like in a train car.
It seems likely that we can make self-driving vehicles much safer, even with the same number of collisions, by just changing the layout.
On a related note, before we get to self-driving cars I really wish we had un-crashable cars.
They have systems like active lane keeping which are supposed to increase safety but do the exact opposite of what's safe. For example, suppose a driver falls asleep or has a heart attack. The lane keeping will beep a few times quietly and then disable itself once the car slows to under 40 mph, so the car then goes straight and crashes. Instead, the safe thing to do would be to keep the lane aggressively until the car comes to a complete stop, increasing the beep volume the whole way, and then put on the emergency flashers.
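To make the contrast concrete, here is a minimal sketch of the escalating fallback described above; the Car class, method names, and thresholds are hypothetical illustrations, not any manufacturer's actual logic:

    # Hypothetical escalating fallback for an unresponsive driver: keep the
    # lane assist engaged, get louder, slow to a stop, then switch on the
    # hazard lights. The Car class and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Car:
        speed_mph: float
        alert_volume: int = 0
        hazards_on: bool = False
        lane_keep_active: bool = True

    def unresponsive_driver_step(car: Car, seconds_unresponsive: int) -> None:
        if seconds_unresponsive < 5:
            return                                          # driver seems fine
        car.lane_keep_active = True                         # never silently disengage
        car.alert_volume = min(100, seconds_unresponsive * 5)  # escalate volume
        if seconds_unresponsive > 15:
            car.speed_mph = max(0.0, car.speed_mph - 2.0)   # brake gently each second
        if car.speed_mph == 0.0:
            car.hazards_on = True                           # stopped: warn other traffic

    # Example: a driver who is unresponsive for a full minute at highway speed.
    car = Car(speed_mph=65.0)
    for t in range(60):
        unresponsive_driver_step(car, t)
    print(car)   # ends stopped, hazards on, lane assist still engaged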
Lane keeping gets it wrong often enough that this doesn't really work. Probably ~80% of activations are false alarms in the two vehicles I've driven that had it.
Assists are nice, but I don't want the car to be aggressively wrong.
For handling rapid deceleration, I think you’d be best off with seats facing the rear. A cushioned seat can distribute force much more evenly than a seatbelt when the car stops suddenly and your inertia wants to keep you going, and the backs could be reinforced to block anything coming into the passenger compartment from the collision. Hard to do that with a windshield.
After the crash is where I'd be concerned. While the windshield has a lot of negatives, one positive is that it's relatively easy to break and pull someone out of the car in a crash.
IDK, probably still safer, but it's something that would deserve some aggressive crash tests to make sure we aren't making a Pinto mistake.
Are a significant number of traffic fatalities currently avoided by pulling people out of vehicles through broken windshields? That seems a little overspecific to me. In fact, rear-facing seats are absolutely known to be safer (cf. infant car seats), given that in all collisions one vehicle is almost by definition decelerating along its forward axis.
I would be unable to sit in such a vehicle at all, then. I must face forward and clearly see the horizon or I throw up in a matter of minutes in any moving vehicle.
This seems pretty reasonable and also very possible to achieve. It would be insane to allow a technology on the streets that makes as many mistakes as humans make. I certainly wouldn't use self-driving cars if they killed 30,000 people per year like human drivers do right now. How would you assign responsibility for crashes? Our current system is far from perfect, but at least it's something people understand and know how to navigate. And there are drivers that are better and more cautious than others. So it's not just an illusion of control.
What so many autonomous car advocates seem to miss is that it is nearly impossible to meaningfully compare relative safety with current self driving cars, because we don't have level 5 autonomy yet.
In order to compare them with current technology, you'd have to be able to answer the question: how safe would human drivers actually be if they didn't have to perform their most difficult tasks? Because that is what current autonomy does.
I'm willing to believe that current tech is capable of being safer than human drivers, simply because they do so many things way better than humans do, like stopping for pedestrians and safely navigating around cyclists. But to compare them in general, that is left to be proven. You can't just compare incidents per mile driven, because autonomous vehicles can conveniently opt out of driving whenever the task gets too hard.
It definitely tends to be the already safer driving, like highways, that they do well on. I have a Model 3 and trust it pretty well on highways. However, it does not handle turns, or other 'city driving' tasks, well at all. It can now do stop signs and traffic lights, which seems to be working well so far too.
However, not living in an area with many sidewalks, I do not trust it for one second to navigate around pedestrians or bicycles. I don't think it will actually try to go around a bike, but I have never given it the opportunity to, either. I take full control back and give them a very wide berth myself.
> You can't just compare incidents per mile driven, because autonomous vehicles can conveniently opt out of driving whenever the task gets too hard.
But isn't that kind of the point? We use autonomous driving for the tasks where autonomy is objectively better, and we have the human do what the human is still better at. Best of both worlds.
But that's not the goal of autonomous driving or the goal of most autonomous driving advocates. The goal is that you never need to know how to drive. It's essentially a taxi. Most people I know who are waiting for the autonomous driving future are waiting for one where they never have to drive.
I imagine now: "Sorry, can't drive through this thick falling snow. Must pull over and you must operate now, human. We are 100 miles from civilization in any direction and no cellular service is available, best of luck!" I cannot imagine that most future drivers, who do minimal driving in their daily life, will be able to suddenly handle conditions that the "autonomous" vehicle cannot.
5x safer isn't an unreasonable threshold to ask for. All life-critical, engineered systems incorporate a safety margin.
To win buy-in from the population at large, you sure as heck better have overwhelming statistics. Marginal just won't cut it.
When you're talking about taking control away from the driver, all you need is one accident a human would have prevented to create political backlash and erode trust in the system. The fact that several other, unvoiced lives may have been saved doesn't offer consolation to the victims or make the loss acceptable. I think of it as analogous to the legal doctrine that convicting one innocent is worse than letting 10 guilty individuals go free.
I'm always skeptical when developers claim a computer can do a better job than a human, as I've encountered so many edge cases the programming just never adequately accounted for. It will take time and a great deal of experience running these platforms in the wild until they become truly resilient. I would be quite pissed off to find the severed limb I suffered in a crash was due to programming not sufficiently distinguishing e.g. road paint from water streaks.
And the metric you choose to define "safer" will never be perfect. Extra buffer helps offset any bias or gaps in your methodology and capture more of the long tail of "one-offs" never accounted for in version 1.
I'm a big believer that driverless vehicles will provide huge improvements to our quality of life, I just feel there's a lot of misrepresentation taking place out there today as to how far along these systems are.
This should be calibrated to the risk of the top X% of cautious/safe drivers, and exclude reckless, inexperienced, or intoxicated drivers. As a safe driver, you shouldn't have to accept risk calibrated to the "average" (i.e. drunk, reckless) driver.
Why should it be? In the end the dead bodies count and it doesn't matter whether a cautious or inexperienced driver killed them. Inexperienced drivers are a prerequisite for experienced drivers; there's no way to get rid of them. Excluding them from statistics is just discounting those deaths as... somehow less important?
If a self-driving vehicle is only 1.5x (instead of 5x) as safe as the average human driver, then you're not simply trading death by human for death by machine; you're primarily trading death by human for being spared by the machine, and only secondarily trading one kind of death for the other.
This person is saying that on an individual level, they are not willing to cede control to an “average” AI when they know (or believe) themselves to be above average.
You’re talking about it at a societal level, as if everyone switched over to robot cars at the same time.
We don't need to switch everyone over at the same time. For example we could start with young (more likely to be drunk and inexperienced?) or known-bad (traffic offenses) drivers where perhaps even sub-average autonomous vehicles could make a difference.
You’re right, I shouldn’t have said “At the same time” but the point still stands: your other comment was talking past the OP, not addressing their point.
You’re talking about it as a macro optimization problem while the OP was explaining a rational decision at the level of the individual.
> In the end the dead bodies count and it doesn't matter whether a cautious or inexperienced driver killed them
It matters to the safe drivers. Bad drivers are mostly a danger to themselves. At only "1.5x as safe as average", it's a good deal for the bad drivers, but there are probably a lot of "2x as safe as average" drivers that are getting a bad deal. They are in more danger than before.
I am a safe driver. (My measure: two moving violations in nearly 40 years of driving, the last one 16 years ago. No accidents in 19 years, no injury accidents ever. And I've driven daily for the whole time.)
In the past couple of weeks, I've narrowly avoided hitting pedestrians three different times. Each time, the pedestrian was somewhere other than a valid crosswalk (once was on a highway exit). In each case, I think an autonomous vehicle could have handled it better than me.
Maybe a "perfect" autonomous vehicle would've reacted better... But, I'm pretty sure that lady with her bicycle walking across the street in Arizona would say differently in regards to Uber's program. I mean, it was a perfect test case for such a system. The guy whose Tesla drove into the side of a semi-truck might feel differently too. Oh and the one in a Tesla who was driven into a barrier in Mountain View and the car caught fire... He died too.
A perfect autonomous car sure sounds nice - but will it ever arrive? Would it have just hit 1 of those pedestrians instead? Would it just go, ding, and suddenly you're in control? Would it kill 1 and then the company would have the data to know to not kill pedestrians in that one specific example? How many people would have to die as test subjects before the system would be better than people? And what if it never got there but you still killed all those folks anyway?
Source? It seems only logical that the number of accidental deaths goes up with the number of bad drivers on the road - not just because they kill themselves.
I just mean that a disproportionate amount of the danger created by bad drivers is to themselves. I don't have a source, but I think this is obvious.
My point is that, even if we lower the total death count, the safest drivers could still end up at greater risk, because a disproportionate amount of the reduction in deaths will go to bad drivers.
IIHS says "Nationwide, 53 percent of motor vehicle crash deaths in 2018 occurred in single-vehicle crashes." The other categories being multi-vehicle and property only.
Let’s say you hire a chauffeur to drive your kids around. You find out they’ve been drinking on the job and speeding recklessly. When you confront them, they pull out stats that they’ve been actually less drunk than average. Do you fire them and find a new chauffeur?
When it’s a robot chauffeur, you have to evaluate it like you would a human one.
In this quite hypothetical scenario, if the statistics he cites are correct and also apply to chauffeurs (i.e. chauffeurs are not statistically different from the general population) then firing him and hiring a new one may not improve your situation. It would be better to invest in a breathalyzer or something.
So what you're suggesting is an appeal to emotion: fire your driver to ameliorate your dissatisfaction, even if it might result in an even worse driver.
So to turn the question around, do you prefer a false sense of safety for your children, or actual safety?
That’s only the case if it’s entirely statistical, while the whole point is that there are factors under your control. Hiring someone/something to drive your family around isn’t a reversion to the mean. You can make certain efforts (interviewing, not tolerating bad behavior, etc). It’s a third person version of the usual debate of ‘I’m a safe driver’ versus ‘I only had like three beers and that was two hours ago’ versus ‘robo car.’ If you bucket the first two together and throw your hands up in the air saying humans are humans oh well, you’re pretending you don’t have the agency you actually have.
In the third person version, I suppose there’s an implicit unstated option that while your particular chauffeur has evidence they are better than average, you have an option to hire someone more responsible. That aspect of agency is central here.
> if the statistics he cites are correct and also apply to chauffeurs
I meant compared to the general population. As in self driving versus general population stats.
Ok, I see what you're going for. But then the question is how much safety is that agency buying you? And how many people even have the option to exercise such agency? You don't have it when it comes to other drivers who may cause accidents or run you over (or your children, if you wish) as pedestrians. You have far less of it for taxi, rideshare, or public transport services. And how many parents will drive their children even when they're stressed or haven't slept, because the children simply have to go somewhere and they can't afford other options?
In aggregate we can probably buy more safety by having policies that encourage replacement of bad drivers with merely average autonomous vehicles rather than attempting to rely on individual behavior to improve safety.
If you want to still exercise personal options you could choose an autonomous car plus safety driver.
Nobody will be forcing you to buy a self-driving car for quite a while. But as a safe driver, you should care about eliminating the most unsafe drivers from the roads.
As a self-identified safe driver, no action I am capable of taking will put an unsafe driver in a self-driving car.
I'll even go so far to say that many unsafe drivers can't afford a self-driving car. They're often unsafe because their car is on balding tires, the brakes don't work, and the tail lights are busted.
The rest, well, they simply enjoy driving unsafely and thus have no reason to get into a self-driving car.
I've questioned the lack of driving experience as a risk factor, since the pool of experienced drivers excludes those who died becoming experienced.
Assuming someone has a certain (constant) probability of excluding himself from the driving pool every year, over time the average percentage will drop, as the folks most susceptible to excluding themselves will have already done so.
Exactly. Because now we can punish those individual drivers, lock them up, take away their car and license, but are we going to pull the plug on all cars with auto-pilot X because X is causing accidents? Is a small change in the software enough to establish it as a new driver? It's "smoking is good for you" all over again.
No, on the contrary: deaths caused by those drivers can be eliminated. It only makes sense to look at the total number of deaths, including those from alcohol, drugs, inexperienced drivers, elderly drivers, and distracted drivers (smartphone, etc.).
Engendering trust and reducing materially regressive liability/litigiousness is a good call - and something that SHOULD be set as a standard by an external body.
IMHO this is typically a good role for government regulation - setting a standard measurement of outcome for the public good, but not dictating HOW that should be achieved.
On the face of it, the delay in accepting self-driving cars till they are 5x safer would cost thousands of lives in the interim (while they are only twice as safe, three times as safe etc.) Is there a reason ordinary people's views should have prescriptive force here? Maybe they're just flat wrong.
Arguing that people should do what you want, irrespective of what they themselves want leads to all kinds of pain, on all sides.
"Why are people voting against their own self-interest?" is an analogous phrase. It seems awfully condescending to me.
Nobody's bound to your perspective of what's rational. Better to just accept that this is the kind of hurdle that self-driving will have to jump over and work on getting there ASAP.
Elon realized that the best way to get people to buy electric cars was to make electric cars that are better than gas cars, not to tell people they're wrong and stupid for not wanting to buy some inferior electric car. Once self-driving cars are obviously better than all but the best race drivers, people will accept them as a matter of course.
I didn't say people should do what I want. I said that a random focus group's opinion does not necessarily override objective reasoning about what will save lives. Would you use this approach to decide whether the 737 should fly again, or what is the appropriate price of carbon, or how strictly to restrict activities during the Covid pandemic?
Why don’t we mandate that people submit themselves to a mandatory medical experimentation lottery? We’ll do so much better if we go through as many people as we do lab rats, and it’ll save unimaginable lives in the long run.
Utilitarianism via taking current lives to save future lives is the wrong perspective here.
There may be good reasons for the approach the article suggests, but this is not one of them. Nobody takes any lives, and there is no question here about experimentation. This is not a trolley experiment. It is a choice of two regulatory regimes. Under both of them, some people will die. If we choose the regime "ban self-driving cars until they are five times safer", then more people will die.
Because human lives are not fungible. If some guy somewhere else was gonna die and you make an intervention where I'm more likely to die then that doesn't work for me. I will oppose it to the end of my being (after all, the alternative is the end of my being).
That is, if you take all the deaths from sleepy drivers, drunk drivers, angry drivers and replace them with random chance then I can no longer increase my chances of survival by not driving at night, not driving on holidays, not driving during commute hours, and avoiding shoals.
Instead now you've taken my ability to increase survival and moved it into the base rate. Nope, I think I'd accept maybe a thousand other arbitrary people dying before I'd accept myself dying.
Sure, but in a democratic system with perfect information you should expect to lose the vote on your hypothetical "me vs 1000" trolley problem, right? And in the absence of perfect information, I guess you'd mount a special interest lobby and hope for the best...
If it were me vs random one thousand and obviously so, yes. But fortunately, the Wobegon Effect makes it so that anyone can conceive of themselves being me (or even better, of themselves being better than me - considering I'm not particularly a safe driver).
It is precisely because it is democratic then that makes it possible for any individual to exploit human cognitive errors. An authoritarian meritocracy would not fall for those tricks.
Well that 1.00000001x would include all drivers. Including those that are tired, on their cell phone, drunk, see poorly, senile, high, distraught, unlicensed, pissed off, etc.
Do you really want more cars on the road driving worse than an average awake driver that's not drunk or looking at their cell phone?
I have an older friend whose driving terrifies me, but who lives in an area with effectively zero public transportation or reliable cab service. While I don't want to see this person on the roads, the alternative is literally moving into a senior community (which would probably be the death of this person).
Frankly, if self-driving cars became .75x as safe as the average human driver, it would still be a net safety improvement if it got this person out from behind the wheel.
I hate this mindset. Lots of people get killed by senior drivers with impaired vision/reflexes, and it's always the same bullshit: "but it would kill him to take away his independence".
Well, tell that to the people who will eventually end up under his wheels.
I didn’t say it was OK. I’m not in a position to do anything about it, and my friend lives many states away. But as a practical matter, if they’re going to keep running errands and such, I’d infinitely rather they be doing it in a self-driving vehicle.
I’d also suggest a little empathy for people in such a situation. “Taking away their independence” sounds so minor and practical, but often is basically a euphemism for “removing the sense of self-determination that allows them to stay alive”. It’s not as simple as saying “Old Joe shouldn’t drive anymore”, because the implications can be pretty severe. That doesn’t mean that it’s a great excuse for Joe to keep driving! But it does mean that there are some pretty big philosophical conversations to be had about it. And if/when self-driving cars come around, a lot of those drawbacks vanish in a puff of smoke. Then we can say, as a society, “we want you to keep just as much independence as you’ve always had! You just don’t need to be the one working the controls of the car anymore, but it’s still going to take you on every errand and friend visit you’d ever want to go on, and actually help you be even more capable.” That sounds pretty damn splendid to me.
Fair is fair. If a self-driving car is a safer driver than John Doe, but we don’t allow that self-driving car on the road, perhaps we should revoke John Doe’s drivers’ license as well.
We absolutely should. But we should also have alternative ways for John Doe to get around without a car, and we don't.
Car culture in the US has started to evoke that old saw: "If you owe the bank a thousand dollars, you have a problem. If you owe the bank a billion dollars, the bank has a problem."
Our society would be a lot better off if we regularly revoked the licenses of unsafe drivers. But the entire nation is designed around car ownership, so doing that today would be an unconscionable human rights violation.
Outside of super-dense urban centers, there is almost no public transit, and taxi/Uber rides are prohibitively expensive because of the distances involved. Outside of cities, it's not unusual to drive >100 miles on an average day.
And that's on easy mode, when you don't have kids to ferry around.
This title is awful (but it was copied from the site). What it should say is "Study finds that most people surveyed didn't trust self driving cars until they were five times safer".
More like "people who have never owned or been in a self driving car..." faster horses.
That said, it will happen. I just wonder what will happen as self driving car safety exceeds human drivers. Will people be prohibited/disincentivized to drive?
The comparison between all human drivers and all autonomous vehicles is far more complex than "whichever is statistically safer, as a whole". Belittling people who feel differently than you about it muddies the conversation for no reason.
That might be their current stated preference, but I don't think it will be most people's actual choice. Imagine if self-driving were available on every car right now at the press of a button, and it was as safe or twice as safe as a normal driver. How many people would press that button, start texting, and just keep progressing toward paying less and less attention?
People already don't pay the attention they should when driving or when using a driver assistant system.
Yeah, not to mention value of time. If I could hop in the car, at the same level of safety as my own driving, and spend the 1.5 hours to trailheads reading a book or even programming, I'd much rather be doing that than paying attention to the road.
On the contrary, you can argue that button saves lives by doing a better job than the reckless drivers who aren’t going to pay attention in the first place.
Based on the abstract, it looks like this is an attempt to measure how safe self-driving cars need to be in order for people to prefer using them. It is not any sort of requirement from the NIH.
What's so magical about 5? Why not 4x or 6x? 2x safer would be roughly 500,000 lives saved yearly. We can see that even 1.25x safer is very significant. Just weird seeing that magic number 5x...
I don't think it's useful to talk about global traffic deaths in this context, since regulation will obviously vary by country, the difficulty of developing self-driving will vary by country, and road safety already varies enormously by country. The US is likely to get self-driving first, but is already way safer than the average country, and the countries where deaths are higher are less likely to be able to afford the roll-out of self-driving cars.
In the US there are ~36,000 deaths from motor vehicle accidents annually.
To give some context, America could improve its fatality rate by about 5x by bringing itself into line with the safety standards observed in Western Europe, whose fatality rate is already around 2.7 per 100,000 people (see the sketch below for what different safety multipliers would mean against that US baseline).
It's also important to remember that self-driving is likely to represent the safest journeys - highway commutes etc.
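A rough back-of-envelope sketch of what those multipliers mean against the ~36,000/year US baseline (my own illustration; it assumes, unrealistically, an instant full-fleet switch-over):

    # Deaths avoided per year in the US at various safety multipliers,
    # assuming the ~36,000/year baseline cited above and a full fleet
    # switch-over. Purely illustrative.
    BASELINE_US_DEATHS_PER_YEAR = 36_000

    for multiplier in (1.25, 2, 3, 5):
        remaining = BASELINE_US_DEATHS_PER_YEAR / multiplier
        avoided = BASELINE_US_DEATHS_PER_YEAR - remaining
        print(f"{multiplier:>4}x safer -> ~{avoided:,.0f} deaths avoided per year")

Under those assumptions, even 1.25x avoids about 7,200 deaths a year, and holding out for 5x instead of deploying at 2x forgoes roughly 10,800 avoided deaths per year.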
Not magical, but you have to pick something. I suspect the goal is to pick a number that most would think is better than an experienced, awake, attentive (not looking at a phone) human at the wheel. So even the safest drivers would be safer on the road as the number of autonomous driven cars increases.
At only 1.25x it might well be worse than you. Keep in mind that the average for human drivers includes people who are tired, high, drunk, unlicensed, mentally ill, physically compromised, etc.
Someone getting killed because another person was drunk or asleep is, while common, much more acceptable to people than a computer making a mistake.
If we want society as a whole to accept autonomous cars it's best to show a clear benefit to society, not just better than 51% of drivers.
I rented a 2019 Mercedes last week and drove it for over 1200 miles, most of which were driven with the car's driver-assist technologies enabled.
My guess is that because this car drives so "carefully", such as automatically following at a safe distance (leaving maybe a 3-second gap to the car in front of it), human drivers will end up causing many more accidents. More than 50 drivers (many with annoyed stares into my window as they passed) made unnecessary lane changes to go around me, only to then closely follow the car in front of me.
This large gap may make it seem like the car is going slower than it is, as so many drivers tried to overtake me but failed as slower traffic in the other lanes blocked them.
Human drivers may just become worse over time as more law-abiding autonomous vehicles hit the road. "5x" might not be as much of an improvement in the future.
I like the adaptive cruise control, because it drives more carefully than me. I have the same experience as you when I use it, regarding other drivers, but then I realize they're going to drive that way whether I'm using it or not. Therefore, I'm of the opinion that human drivers other than myself will cause the same number of accidents, but I may cause fewer while using it, so in the end there will be fewer accidents caused by human drivers as more people use adaptive cruise.
On the contrary, I think as AV and other semi-autonomous driving tech becomes more frequent on the road, people will be more easily able to recognize it and won't behave irrationally as you mentioned.
I don't share the same optimism. Many people already stare at their phones while driving on the highway (I especially enjoy jolting them back to attention with a friendly honk), I'm not so confident they'll pick up on the subtleties of autonomous driving.
This is much more than just your classic speed-only "cruise control".
You certainly couldn't drive with classic cruise control in a center or right hand lane (USA) for any extended period of time, at least not in normal highway-speed traffic where cars are merging in and out every few miles.
The problem is that the average for human drivers is brought down mostly by bad drivers. And even if we talk in terms of averages, roughly half of all drivers are better than average. I would not feel too confident in a self-driving car that is only about as good as 1 out of 2 drivers. I would want the car to be better than a high upper percentile, especially knowing that an automated system can have reaction times that put any human's reaction times to shame.
So yes, equal safety is far from enough, especially considering that roughly 50% of drivers would be better than a self-driving car that was only as good as the average driver. You're asking those 50%, plus some percentage of people who overestimate their abilities, to trust a car that would perform worse than them.
Most of your point is just the Lake Wobegon fallacy at work. Obviously you think you're not a below average driver. But everyone does, and lots of them are wrong.
The part that isn't is that you're imagining a non-linear distribution where a significant fraction of accidents are the fault of these identifiable "bad drivers" and that the MEDIAN human driver is actually better than the average/median SDV.
To which I ask: is there evidence for this? It's a new argument, and I haven't seen it made before.
From that point alone, you're asking a large portion of people to trust something less capable than themselves. We should be aiming for the higher/upper percentiles, not the average. Especially considering how fast a digital control system can operate, average is a horribly low bar.
Edit:
If you throw in the effect that a lot of people overestimate their abilities, you're going to get even fewer people who trust a self-driving car that is only as good as average.
Misleading title. The article is not saying we need more than equal safety; it's saying that self-driven vehicles are perceived as more dangerous and people in the study wanted them to be 4-5x safer than human-driven vehicles to overcome this.
Human beings need a target for vengeance and hatred. If someone kills your child, you can hate them. If an automated car kills your child, you don't have anyone to hate. Saying that this is about safety thresholds is a distraction from the true human problem exposed by automated cars:
"Which individual will held accountable and risk jailtime if their car kills someone you love, and how can this individual be identified from the appropriate government registries within 24 hours of a death?"
Until this is clearly defined in law, automated driving will continue to be resisted under any number of plausible justifications, and arguing with those justifications will have little effect.
It’s less interesting to me who is registered as responsible, or what process is used to select that person. But if no specific, single, named individual is registered as personally liable, with no possibility of a corporate liability shield, then we won’t get public acceptance of self-driving cars for a much longer time than could otherwise be possible.
If, as the registered person, you were notified about the vulnerability and didn’t patch it, you could be convicted of criminal negligence at minimum, same as a driver who ignores a recall notice and continues driving.
What most people seem to misunderstand about "driving" is that it is not a sensory or stimulus response problem. It is a cognition problem.
Computers, or computer-based "AI", are good at solving bounded problems that do not require open-ended, on-the-fly model building and judgement exercised in real time. At this, biological intelligence, honed by millions of years of selective evolution, still excels.
Computers can "solve" Go or chess because fundamentally the rules are simple and the models required to play these games are subject to only a few, static constraints.
Driving, in all conditions on all roads, on the other hand, requires a flexible model of the real world that approaches that built by sentient biological intelligences.
The problem is not sensor or perception latency.
Sure LIDAR can blow human perception out of the water.
But that does not matter.
What matters is making the correct decision based on sensory inputs using a high fidelity model of the real world.
A computer does not understand the difference between two people playing catch by the roadside, parallel to the road, and a situation where a child might be chasing after a soccer ball. This is not just combinatorics and probability... it is theory of mind. Thus, until AGI is invented, FSD will be a misnomer.
It doesn't matter how much faster a computer can perceive if it does not know how to integrate the raw data it receives into a model of the world that yields decisions appropriate to the circumstances.
Costs aside, what about something like mag-lev tracks for cars, that can start/go on a dime, and on freeways go faster, even switch lanes to get around slower traffic. Maybe even do away w/ speed limits just go as fast as you 'feel safe' going with the only limit being the max. In cities you'd have sensors/grids everywhere to detect non-car traffic, and regular cars could even drive over the mag-lev, or it could be a separate track, and you can go in/out of mag-lev/drive modes. Maybe it parks you, til you're ready to take over control (say you're napping on the commute). Alarm goes up, you wake up. Stretch, even get out and stand up for a minute, get back in. Buckle up - drive the final block to where you want to park at your job, or if it's a country side location, up in the mountains, etc you might drive for longer then park where ever.
Essentially you could just cover cities and highways out to the nearest gas stations. If the car is running out of gas/electricity, it routes itself to the nearest depot.
Going cross-country and want to stop for lunch? Program the car, and it'll pull into the nearest gas station in Timbuktu and let you figure out where to go from there.
Point: AI self-driving isn't the only way to get autonomous cars. A 50/50 mix of re-thinking infrastructure, sensors, and car-to-car communications could get us a lot closer, faster.
You might be a safer driver than the average human driver, in which case an SDC increases risk for you personally (and overall, if the less safe drivers keep using non-SDCs). In that regard, we should wait until SDCs are safer than almost all human drivers.
Most drivers believe themselves to be above-average drivers, which is of course impossible. But there might be interesting correlations, for example with social status. Lower social status disadvantages people in many regards. I am sure that insurance companies have data on whether they have more (fatal) crashes as well.
And it does stand to reason that people with higher social status drive newer and better cars as well, so we could end up with a situation where the better drivers are replaced by computers before the worse drivers.
Interestingly, I think an ethics committee said a few years ago that once SDCs are safer than human drivers, it becomes a moral imperative to outlaw non-SDCs. I am wondering if they will explore the human driving safety 'distribution' before enacting such rules. Waiting for the 5x margin could solve that problem, because then you will probably have SDCs that are safer than almost all human drivers, and it could be an incentive for companies working on the technology to get to that level faster than they would if they started selling them en masse earlier.
This is 100% just an artifact of a system going from human control to non-human control. Nobody bats an eye at systems which were never human controlled - or transitioned so long ago that nobody recalls human control.
I've never seen anyone hesitate when getting on a fully automated train system at an airport, or an elevator. Even more so with amusement park rides that literally put people in extreme situations.
In amusement parks you get a much smaller selection of the population than on the street or even at an airport. People who don't enjoy thrill just have no reason to visit them at all.
Aside from that, all the systems you mentioned are mechanically constrained far more than a car. Accidents happen when these mechanical constraints physically fail, not when a computer makes a wrong choice because it failed to detect an obstacle or similar.
How far do we think human drivers' safety level ranges? Like most drivers I'm falsely convinced I'm a safer driver than most, but still I expect quite a large range (say a factor of 10 between the 10th and 90th percentile?). It seems reasonable for self-driving cars to be expected to improve safety over human driving for the large majority of drivers, not just half of them.
> It seems reasonable for self-driving cars to be expected to improve safety over human driving for the large majority of drivers, not just half of them.
Depends, do you want to save lives? Then self-driving cars only need to be a little safer than the drivers they replace. Which means that replacing infrequent drivers with little experience by robotaxis of average reliability could be a net win in saved lives.
Delaying their deployment until technology arrives that beats the most conservative drivers just means accepting a higher death toll.
> replacing infrequent drivers with little experience by robotaxis of average reliability could be a net win in saved lives
I don't know if frequent drivers are inherently safer drivers than infrequent drivers. There might be the negative effect of reduced attention due to more 'routine'.
But I seriously doubt that frequent drivers drive so much more safely that they negate the effect of being exposed to the risk so much more. Is a person who drives 10x more than the average driver more than 10x safer? Why first replace the cars that don't get on the street a lot? And how do you organize deploying SDCs to infrequent drivers first? Unless those people don't own the cars anymore but rent them, in which case I agree. That would increase utilization of these cars.
> Unless those people don't own the cars anymore, but rent them
Yeah, that was the idea (hence robotaxis, not owned ones). It seems feasible especially in urban areas where car ownership is not essential so the remaining uses could be replaced by rented autonomous ones.
But how false is that impression that we are safer than average? I don't drive drunk, drugged, tired or distracted. I avoid driving in bad weather. I make sure I have good tires and brakes. I don't intentionally speed. I bet most accidents are caused by the above. I'm not interested in a self-driving car that drives like it's checking its cellphone after 3 beers.
Exactly this. Some people think that Tesla's autopilot is just great, better than a human driver much of the time. As a Tesla owner, I am flabbergasted by that. At best, AP drives like I do. That is, perfectly straight and between the lines down a straight road. With any curves, and sometimes just on straight roads, it drives like a high-functioning alcoholic. I can't imagine how some people drive normally where they think that qualifies as 'good driving.' Some of us are very attentive drivers -- I never look at my cell phone, I never drink and drive, I don't drive when I'm tired, I avoid driving in inclement weather or at night unless strictly necessary, I am a very defensive driver. I don't get tickets, I don't get in wrecks, and this is by design -- I take great pains to reduce my exposure to these risks.
Personally I think the old joke about how 90% of drivers think they're better than average is both true, and also just a funny joke. We see a lot of perfectly good drivers on the road, but we don't notice them ... because they're perfectly good. There's only a few lanes on any given road, though, so if 10% of drivers in near proximity are crappy drivers then it practically shuts down the road. We notice that, and assume that most drivers are crap. Wrong.
I have a Tesla and I agree that it drives somewhat poorly compared to a Human ... when there's no surprises. Handling lanes and turns just moderately well, but not great.
However, it frequently notices things before I do. Lane-splitting motorcycles approaching from the rear, for example. Or a car in front of me slowing down, but not using the brake lights.
It also does quite well when a car brakes in front of me, especially if it's a surprising slow down like on an onramp where I'm looking over my shoulder to merge.
So while I've not had an accident of any kind in over 25 years, I do appreciate the car noticing before I do.
So while I don't let the Tesla drive autonomously, I do feel like I'm a much safer driver with the active assistance from the car, and that the Tesla (even with the same sensors) will continue to improve. I'm not sure if they will hit full autonomy on the current hardware; they might need another revision (to add CPU and better sensors) before they drive better than most humans.
I don't disagree that at times AP has been helpful to me. The sensors do pick up on things, and if you are actively paying attention and ready to take over at a moment's notice, it is probably a net positive. Though on average, for me, things like the forward collision warning tend to be more nuisance than help: startling, and of the half dozen times it's activated for me, once was me needing to notice that traffic ahead had suddenly stopped; the rest were things like right-turning drivers that are way out of the way but the car panics about them. Even on 'late' mode.
The technology will certainly improve, however. Probably going to be quite a while, if ever, before I let it do all the driving, though :). At least partly because I enjoy driving.
An oft overlooked factor in acceptance of AVs is that the evolution of technology from driver-assist to autonomy will alter perceptions:
First, human drivers using driver assist will become safer "drivers" even though the added safety is properly the result of the technology that is evolving toward AVs. For example, it should become very difficult for a human driver to hit a pedestrian or cyclist. Not impossible. Just exceedingly unlikely to be the fault of the driver.
Secondly, driver assist will habituate drivers and other road users to the performance characteristics of AV technology. The upshot is that AV technology will not be benchmarked against the way drivers and other road users behave and perform today. In some ways the expectations for incrementally better safety will be higher. In other ways, the "flavor" of road risks will change in a way that converges on how AVs perform.
I think the insurance companies will have a different and much more financially based standard.
More importantly, I doubt NIH will trump that conglomerate and its influence on NHTSA.
Also, you could argue that restricting a technology that would result in 20% fewer deaths on the road is the opposite of protecting public health.
To underline that: this is potentially 10,000 people killed or seriously disfigured. PER YEAR.
And self-driving could be, in a targeted/situational manner, FAR safer if it took drunk/drugged/tired drivers out of the equation, who are responsible for around 33% of deaths.
If someone is drunk, a technology 2x as safe as an alert driver will be roughly 10x as safe as that drunk driver.
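A minimal sketch of that arithmetic (the 5x drunk-driver risk multiplier below is an assumption chosen to make those numbers work, not a measured figure):

    # Relative-risk arithmetic behind "2x an alert driver is 10x a drunk driver".
    # This only holds if drunk driving is taken to be ~5x riskier than alert
    # driving; that multiplier is an assumption, not a measured statistic.
    ALERT_RISK = 1.0
    DRUNK_MULTIPLIER = 5.0   # assumed relative risk of a drunk driver
    AV_VS_ALERT = 2.0        # AV assumed to be 2x as safe as an alert driver

    av_risk = ALERT_RISK / AV_VS_ALERT
    drunk_risk = ALERT_RISK * DRUNK_MULTIPLIER
    print(f"AV is ~{drunk_risk / av_risk:.0f}x as safe as the drunk driver")  # -> 10x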
One issue that is often overlooked is that humans are pretty robust to unforeseen situations, compared to AI. Take for example the recent fires in California and the smoky skies. Many cell phone cameras did a horrible job of capturing the scene, because their AI had been trained that the daytime sky is blue.
And with such a failure, all the cars with similar software would be affected at once.
Is 5x safer a realistic goal? There are limits to how safe a car can be on a road full of human drivers, no matter what sensor suite it has and how fast its reactions are. A vehicle can only respond so quickly to control inputs. Making a computer that's five times as safe as a human might be a thousand times more difficult than making one twice as safe.
Depends how you count. Being in 5x fewer accidents might be unreasonably hard. But causing 5x fewer accidents seems reasonable, especially since most humans are that safe.
I don't have the stats, but I believe the worst 20% of drivers cause a large fraction of the accidents. That 20% often includes the uninsured, the unlicensed, the drunk, the high, the emotionally distressed, and the physically compromised (senile, low blood sugar, tired, etc.).
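A toy model of that kind of skew (all parameters invented for illustration; it just assumes per-driver crash risk follows a lognormal-ish distribution):

    # Toy model: per-driver crash risk drawn from a skewed (lognormal)
    # distribution, so a minority of drivers carries much of the total risk.
    # All parameters are invented for illustration.
    import random
    import statistics

    random.seed(0)
    risks = [random.lognormvariate(mu=0.0, sigma=1.2) for _ in range(100_000)]

    mean_risk = statistics.fmean(risks)
    median_risk = statistics.median(risks)
    share_safer_than_mean = sum(r < mean_risk for r in risks) / len(risks)

    risks.sort(reverse=True)
    worst20_share = sum(risks[: len(risks) // 5]) / sum(risks)

    print(f"mean risk is {mean_risk / median_risk:.1f}x the median driver's risk")
    print(f"{share_safer_than_mean:.0%} of drivers are safer than the fleet average")
    print(f"the worst 20% of drivers account for {worst20_share:.0%} of total risk")

With these made-up numbers, an autonomous car that merely matched the fleet-wide average would still be a worse bet than roughly three quarters of drivers, which is exactly the concern raised elsewhere in this thread.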
That's true but beside the point. The moment autonomous cars become better than the median human driver, keeping them off the road starts costing lives every day.
The big problem with numbers like this is how do you measure it?
Tesla claims their system is vastly safer than human drivers, but currently it only engages in situations where it's already fairly safe to use. So should that system be 5 times safer than all human-driving, or safer than human-driving under the conditions the Tesla is able to engage?
Why, though? Even at the same level of safety, self-driving cars would yield huge productivity gains, as people could work or sleep while commuting and truckers could have one or two self-driving trucks following them cross-country. And transportation for the elderly or disabled who cannot drive themselves.
This is not the government saying "we the government require...". It's the result of a study of what people believe. People's risk tolerance is almost never rooted in a rational calculation. Risk tolerance is based on emotion, and self-driving cars currently trigger an emotional response.
As soon as self-driving cars become a regular part of people's lives and not an exciting new thing, the calculation will shift to a much more rational one.
This calculation is actually very rational. What you seem to ignore is that, with conventional cars, there is a relatively small amount of "known unknown" risks. There are of course significant risks, but almost all of them are known not only in kind, but also in quantity. Drunken drivers, dumb people, broken brakes, whatever. We have several decades of data regarding these risks. The amount of "unknown unknowns" can also be assumed to be relatively low, given that the concept of humans driving cars has quite a history now and largely stayed the same for a good number of decades.
With autonomous cars, even once you have a few years of safety data from a large enough number of cars to be able to call them 5x less dangerous than human-driven alternatives in that data, you will still have many more "unknown unknowns" (of which I can't name any, because they are by design unknown), in addition to many more "known unknowns", like the possibility of large-scale software bugs causing thousands of casualties at the same time. These risks only go down slowly, with time; there's practically no way of fast-tracking them down. Hence you have to incorporate a large enough risk buffer into your assumptions to even rationalize starting to use that fancy new tech, and the only place that risk buffer can come from is a much bigger margin in the "known knowns" department of risks.
Those unknowns are already being elucidated by experimental fleets. Self-driving cars won't be deployed en masse before the vendors can demonstrate solid statistics worth hundreds of millions of passenger-miles, which will be sufficient to estimate the fatality rate.
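For a sense of scale on what "hundreds of millions of passenger-miles" buys statistically, here's a quick sketch (my own illustration; it uses the commonly cited US figure of roughly 1.1 fatalities per 100 million vehicle-miles as the human baseline):

    # Expected number of fatal crashes over a given mileage, if the AV fleet
    # were exactly as safe as the human baseline of ~1.1 fatalities per 100M
    # vehicle-miles. Illustrative only.
    HUMAN_FATALITIES_PER_MILE = 1.1 / 100_000_000

    for miles in (100_000_000, 300_000_000, 1_000_000_000, 10_000_000_000):
        expected = miles * HUMAN_FATALITIES_PER_MILE
        print(f"{miles:>14,} miles -> ~{expected:.1f} expected fatal crashes at parity")

With only a handful of expected events at those mileages, the rate estimate carries wide error bars, which is part of why the "how do you measure it?" question raised above is not trivial.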
How much does that tell me about potential software failure modes that don't kick in until a significant scale (speaking of double-digit percentages of all traffic, these test fleets are not even close to that) has been reached? Or about weird, but potentially fatal side effects of incorporating rules put up by regulators into the software that cannot be tested with today's alpha testing fleets because these rules might not even exist yet? Or about how good all these different AI vehicles of different vendors in very different software and hardware revision states interact with each other (think of situations like HFT trading algorithms that run each other into a doomsday spiral, just with vehicles at an intersection twitching around quickly in weird ways, trying to interpret each others actions)? Or about the hackability of future robotic cars (think for example of those slightly modified fake traffic signs)?
Nothing. That's why regardless of how impressively big these test fleets are, there will be a lot more of these unknowns.
Some of them seem like tail risks to me that are unlikely to dominate fatality statistics even if they were to occur and will be quickly patched or recalled if needed. Many of these hypothetical concerns could also affect existing driver assistance systems and aren't unique to autonomous vehicles. Hacking can also happen with human-operated vehicles. Interaction between multiple self-driving ones can also be tested with experimental fleets by concentrated local deployments.
I think this high level of certainty is basically just the government's way of acknowledging that it is terrible at gathering and defining useful metrics, so a wide margin leaves very little room for error on the politicians' part. I'm unsure if this is overly cynical, but I don't expect today's politicians became career politicians by worrying about safety more than about protecting their political status. Further, I suspect the media would look for any definition possible to blame politicians for deaths, so politicians feel it necessary to be blameless before allowing interesting, progressive ideas to materialize.
I would say that we tend to reduce human flourishing to exclusively being alive. I think the 5x multiplier maybe covers things like loss of liability in an accident, a sense of ownership of the vehicle, loss of privacy or obscurity, regulatory or operational infrastructure costs associated with a switch to self-driving, freedom of choice, etc. All of these have some ultimate impact on human flourishing beyond just a binary dead or alive definition. My opinion is, if these aren’t included in the 5x, they should be.
1.0001x is definitely not acceptable, because dangerous drivers drag the total-human-safety metrics down. The "average" (median, or maybe even 25th-percentile) driver is probably much less likely to be in an accident (or a fatal accident) than the drivers who drive most dangerously, e.g. by frequently texting while driving or driving while intoxicated. So for most drivers, 1.0001x the average human rate would actually be worse than driving themselves, although they may find the risk acceptable.
However, how many people get hurt is not the only thing that needs to be considered. It's very likely that when a self-driving car is equally as safe as a human driver, the people who die in accidents caused by the car will be different from those who would have died in accidents caused by human drivers, and so you'd end up with situations where individual next-of-kin could make entirely legitimate claims after accidents that their loved ones would still be alive if not for the hellspawn car.
Trying to convince juries that it's alright because, for every person who dies in the cars, two other people who would otherwise have died got to live, would probably be tough. Especially as the accidents that self-driving cars are most apt to prevent are ones that could at least partially be considered directly caused by bad choices made by the driver (DUI, distracted driving, falling asleep at the wheel).
Once the data gets good enough that you don't need to do statistics on it[0], it becomes a lot easier to sell the idea to the public.
I find it weird that nobody seems to talk about the national security implications of self-driving cars. Imagine the Russian cyber attack we just experienced happening on millions of self-driving cars...
Why would a state actor do this? It's a clear declaration of war, and probably wouldn't kill more people than suddenly launching missiles. Even if they could somewhat hide it, the US is prone to retaliate on fairly flimsy pretexts.
There may be some terrorism risk, but any truly terrifying scenario requires a multitude of incredibly stupid design choices. Like constant internet connection, remote updates, no manual override, and a single widespread system. Hacking stoplights is roughly as scary.
Edit: particularly as an attack on self-driving cars would be an intentional targeting of civilians, and would likely be considered similar to using chemical weapons.
It's not obvious. There are two ways they could attack: force cars to drive dangerously, which is a clear attack on civilians, or halt all cars, which wouldn't be, but would require incredibly dumb design choices in the cars. I assumed the first, and I guess you pictured the second.
Your first option is way more devious than anything I had imagined, that's true, but I'm not convinced it would warrant toe to toe nuclear combat with the Ruskies, to quote Dr. Strangelove (or rather Major Kong in that movie). I'd expect something along the line of strict economic embargo, with all sorts of shady people making a lucrative business out of it.
If you do that we will launch missiles. You will not survive the attempt. See, this is the thing with stuff like this. "You can't prove it!" doesn't work.
If a Russian terrorist cell (state sponsored or not) did this to American vehicles on the road, Putin will be on the phone begging to not be blown up. The leaders of a dozen countries will be on the phone begging us not to blow him up.
It's like America's power supply. Notoriously easy to destroy, but if you do destroy it, hell will rain down upon you.
Because it turns out the devices that make the peace don't operate like the devices that operate in the peace. So you can't break them that easily.
I wonder what range in safety we tolerate in human drivers? How much worse than the average is a newly-licenced 17 year-old (or whatever age) or an 80 year-old?
Well, in terms of overall crashes and injury crashes, you don't get safer than the 80-year-old until you're 30 [1]. Though the rate of fatal wrecks is about the same between 16-17 and 80+. I think that may be due in large part to the fact that 80-year-olds are far more fragile and more likely to die in a wreck where a younger person would walk away.
There will always be a long tail where the machines fail in scenarios a human can handle. We're just going to write off those deaths as an act of nature?
Algorithm aversion is real and shows we prefer humans even in the face of statistical evidence that humans are sub-optimal decision makers. [1]
I suspect it’s because we inherently dislike the idea of handing control over to a complex black box. Barring sociopaths, we can reasonably assume to interpret how a person thinks. This isn’t necessarily the case for algorithms, which leads to trust issues.
This is quite an academic exercise, since a decade of intensive research hasn't brought us close to working self-driving cars, much less 1x-safe self-driving cars, much less 5x-safe, nor is there any clear path to resolving this open research problem.
I think this is related to the "illusion of control". People feel safer when they are driving rather than a machine, even when they are not safer. I hope government regulators do not impose 5x safety requirements on self-driving cars.
Assuming the system is properly maintained and used, if anyone's responsible it has to be the manufacturer. Certainly the passenger isn't any more than if an Uber gets in an accident today.
And, with the possible exception of drug side effect (and even there there are lawsuits), we don't really see consumer-facing products that, even if used as directed, kill a fair number of people and we just go oops. Let's say autonomous vehicles kill 3,000/year in the US, i.e. 10% of the current rate. (In reality, human-driven cars will take a long time to be phased out even when self-driving is available but go with the thought experiment.) Can you imagine any other product we accept killing thousands of people a year and we're fine with that?
ADDED: As someone else noted, you could argue that tobacco etc. fall into that category, but we're mostly not OK with that, and it's reasonably thought of as being in another category. (And pretty much no one is smoking because they think it's good for them.)
Just about any food is potentially unhealthy if not consumed in moderation. A bag of potato chips and a Coke now and then isn't going to kill anyone. But a couple bags and half a dozen cans a day sure isn't good for you. And a porterhouse steak every day probably isn't that great for you either.
You asked for accepted products that kill people, not for products that kill unconditionally. Foods are conditionally unsafe (if consumed in excess) just like cars are conditionally unsafe (if not operated carefully). Deaths by cardiovascular diseases (partially caused by inappropriate diet) exceed vehicular deaths. And yet they're accepted.
There is no shortage of products that can injure or kill you if you operate them unsafely including cars. But you won't "operate" an autonomous vehicle at least while it's autonomous. An autonomous vehicle causing an accident due to a software mistake is the equivalent of a regular automobile suddenly losing steering control because of a design defect on a highway--and the latter would absolutely be a liability issue for the car maker.
Right, I forgot that this was an argument about responsibility. In the case of food I guess there's some shared responsibility. The customers of course have a lot of choice here, but the manufacturer still optimizes for tastiness (increasing consumption) without necessarily optimizing for healthiness. That could also be considered a design defect.
Perhaps for an owned autonomous vehicle the equivalent shared responsibility would be a user-selectable conservative ("comfort") vs. aggressive ("sporty") driving style. Or the option to drive yourself and only let the software intervene if it thinks what you're doing is unsafe.
So, back to the question
> We don't really see consumer-facing products that, even if used as directed, kill a fair number of people and we just go oops.
The only very nebulous other case that comes to mind are unsafe computer systems in general. When a hospital or critical infrastructure gets hacked then this is treated almost like an unavoidable natural disaster rather than the responsibility of the operator or manufacturer.
You may have to sue the manufacturer and prove that their system is at fault, which is pretty much impossible considering the legal resources these big corporations have versus the little guy. This would end up like tobacco or junk food, where companies were (and still are) able to deflect any kind of responsibility.
It may be easier to find fault in an autonomous vehicle. Assuming it has a black box that records sensor data, you can replay the algorithm and see what went wrong.
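A minimal sketch of what such a replay harness could look like (the frame format, field names, and the plan_action stand-in are all hypothetical, not any vendor's actual API):

    # Replay logged sensor frames through the driving policy and print what it
    # decided at each step. Everything here is hypothetical and illustrative.
    import json

    def plan_action(frame: dict) -> dict:
        """Stand-in for the vehicle's real perception + planning stack."""
        # A real system would do far more; this one just brakes for close obstacles.
        return {"brake": frame.get("obstacle_distance_m", 999.0) < 20.0}

    def replay(log_path: str) -> None:
        with open(log_path) as f:
            for line in f:                      # one JSON sensor frame per line
                frame = json.loads(line)
                decision = plan_action(frame)
                print(frame.get("timestamp"), decision)

    # replay("incident_blackbox.jsonl")  # hypothetical black-box log file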
If corporations are people, you should be able to bring criminal murder and manslaughter charges against them, with the top-level executives acting as proxies to serve the jail sentence.
The illusion of control is a thing, but actual control is a thing as well. One possible reason to avoid self-driving cars is that there actually are safe and unsafe drivers, and fatal accidents in self-driving cars will presumably be a much flatter distribution among those drivers than the one we have now. Which means that even if they're safer overall, they could still be less safe if you're a good driver.
If they are setting an objective measurement, how is that an illusion of control? In fact it seems like exactly the opposite - they are putting hard numbers on the level of risk they consider tolerable. They are making that available to everyone so they can debate and dispute it.
If anything, this is removing the illusion of control. The illusion of control would be to say you would never trust self-driving cars. Saying you will trust them at a level of 5x measurable safety criteria above human drivers is totally different.
Now we can make actuarial arguments about whether it should be 5x vs 2.6x vs 0.9x and debate how to measure the safety criteria - that’s a completely different world from one where people “feel like” human control of the car is safer.
For sure it is good to seek a measurable criterion. The question is whether laboratory subjects' views on the right level should have normative force. An alternative take is: these are just not-very-informed people, and unless they can give reasons for their views, we shouldn't take them seriously as inputs into the policy-making process.
"We arbitrarily chose a number so we could feel like we were making improvements. Nothing justifies 5 over, say, 3, or 10. When cars are in fact 3x safer, all those saved lives won't be saved, because our arbitrary 5 has yet to be reached."