I am curious why you believe this to be the case. Driverless cars have yet to even come close to being workable in adverse conditions, to my knowledge, please correct me if I am wrong.
> This is mostly about sensors and geometry.
And mostly about the limitations of sensors, and the limitations of available algorithms to overcome geometry. An algorithm a driverless car uses to handle some geographical feature might expect something to be spherical when in reality it is elliptical, and by the time that matters it is too late for the car to adjust, or vice versa. There are literally hundreds of thousands of potential corner cases out there. The earth itself is a good example: distance calculations done with lat/long and some standard radius are usually off by some amount, because the earth isn't perfectly spherical, only approximately. Driverless cars have to be precise. Being a couple of points off might spell the difference between safely driving in the lanes, and veering into a ditch, off a cliff, or into a median or obstacle.
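To make the lat/long point concrete, here's a toy sketch (my own illustration, with made-up coordinates; haversine_m is just a helper I wrote here) showing that even the choice of Earth radius by itself shifts a short distance estimate by centimeters, before you even start modeling the ellipsoid:

    # Toy illustration: haversine distance on a sphere, comparing two common
    # Earth-radius choices. Coordinates are made up; the point is that the
    # radius assumption alone moves a lane-scale distance by centimeters.
    import math

    def haversine_m(lat1, lon1, lat2, lon2, radius_m):
        """Great-circle distance in meters on a sphere of the given radius."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * radius_m * math.asin(math.sqrt(a))

    a = (37.7749, -122.4194)   # hypothetical point
    b = (37.7750, -122.4194)   # ~11 m further north

    d_mean = haversine_m(*a, *b, 6_371_000)    # commonly used mean radius
    d_equat = haversine_m(*a, *b, 6_378_137)   # WGS-84 equatorial radius
    print(f"{d_mean:.3f} m vs {d_equat:.3f} m, delta {abs(d_mean - d_equat) * 100:.1f} cm")

And that's just the radius choice; against a true WGS-84 geodesic, the spherical formula can be off by something like half a percent over longer baselines.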
Sensors will suffer some of the same problems (or different problems) as our biological sensors do. How will road signs, lane lines, and the like be detected in adverse conditions? Snow? How will radar overcome natural geographical corner reflectors?
Driverless cars seem to be coming along just fine in perfect conditions.
Not trying to be defeatist, but I believe if we want true driverless technology, the infrastructure will have to support the vehicles for these edge cases that we cannot overcome. Or we can continue to be idealists. I for one wish we never gave up on trains -- they are the perfect "driverless" technology candidate.
I think interesting edge cases include:
'empty cardboard box or log?'
'Deer just crossed road, is there another one close behind?'
'Driving temporarily on the wrong side of the street due to construction or a downed tree limb'
'Police directing traffic'
'Driving on wrong side of road to avoid collision'
'One lane bridges'
'Busy parking garage with blind corners'
'City Bus driver forces themselves recklessly into your lane'
'Other driver in roundabout about to cut you off'
'Pulling out to cross a busy 4 lane highway with no median or traffic signal'
Not to mention, any snow on the road is going to make it supremely hard for a computer to determine where its lane is, or even where the road is.
Driverless cars have advantages over humans too. Faster reaction speed, constant 360 degree surveillance, lack of distraction, (theoretically) better control of brakes and steering to handle skids.
Waymo has demonstrated solutions for many of the edge cases you mention. This video is from 3 years ago:
https://youtu.be/tiwVMrTLUWg?t=530
Regarding snow, it may be a difficult problem, but if cars can't see road markings, neither can humans, so how do they do it? Sometimes by following a car in front. Though far from real autonomy, my Tesla doesn't need to see road markings for autopilot to work, it will fall back to following the car in front.
And... why does Waymo have to solve this inclement weather problem now? Surely it would be better to demonstrate safety in good weather, then light rain, then heavy rain, then light snow? They can just refuse to operate in bad weather unless they are confident about it.
Since weather and trips are not completely predictable, the car would have to be able to pull over and stop if there was weather it wasn't certified to handle and the human was asleep in the back seat or whatever and took a long time to get ready to take over.
I think I've seen at least a prototype Waymo car (it was exhibited in the Computer History Museum) that was designed without the steering wheel and pedals for a human to use. That wouldn't work so well given that the car might pull over and be unable/unwilling to drive anymore. (Maybe you can put an expected upper limit on how long the weather will remain bad ... Nah, storms can last for days, by which point a human who didn't bring food and water could be in bad shape.) I guess you could rely on being able to call AAA. Though a bad blizzard might be exactly the case where AAA would have a hard time reaching you. The human being able to take over eventually seems like an important feature. I guess that is the way all the cars in the field (that I've seen) work.
Does anyone happen to know if self-driving cars are practicing "pull over and stop" maneuvers? For some reason I've never heard of it. I think it is the only potentially reliable fallback mechanism if the car encounters, say, weird terrain it knows it can't handle. (I don't think you can assume the human will be able to respond within n seconds.)
> I think I've seen at least a prototype Waymo car (it was exhibited in the Computer History Museum) that was designed without the steering wheel and pedals for a human to use. That wouldn't work so well given that the car might pull over and be unable/unwilling to drive anymore.
It'd be fine - you just need to have some backup option. A small joystick or two behind some panel, or screen control, or perhaps you control the car with a phone app. It's like having an emergency spare tire - you just need a minimal driving UI that can be used in case of emergencies but users otherwise don't need to see or deal with. It can be inconvenient and speed-limited, since you don't expect to use it much.
Though if there's decent connectivity an obvious intermediate option is remote operation. Some human who is really good at driving might take over for a bit, driving your car from the comfort of their own home or office if you're not comfortable doing so yourself using the backup control pad.
Other edge cases include: Amish buggies, rotaries in Boston, Syracuse's upside down traffic light (green is on top), etc. The possibilities are endless; how can you test them all? Humans are really good about adapting to novel situations on the fly, not so much computers.
I mean, you are right about humans as a species, but oh god, not as drivers. As a cyclist commuter, I pass/get passed by a lot of cars and see a lot of irregular/irrational behavior. Every time there is a change in the traffic rules along my route (detour, construction, closed lane, new signalling) you will get a pretty high rate of people who either ignore the change, panic and drive recklessly, or panic and come to a confused stop.
Maybe we need driverless cars that can show panic faces when they don't know what to do and be coached by a friendly citizen what to do? It would be super cute.
Agree. It's a fun game to think up edge cases that would challenge a self-driving car. On average these cars will far outperform human drivers. I see human drivers blow through stop signs every day completely unaware that they did so.
One approach would be to record 100-to-1000 trials of how human drivers navigate each particular edge case in vehicles instrumented for autonomous driving.
Sadly, no one seems to have figured out a way to economically motivate human drivers to do this.
Isn't this part of what Tesla has been doing though? At least they sell cars that simultaneously have:
- human drivers in demanding situations
- sensor and compute package planned for autonomous driving
From there it is mostly a question of
1. getting permissions to collect the data
2. finding some efficient way to store -> sift for interesting situations -> magic happens here -> lots of regression testing -> self driving car -> profit!
I think it comes down to being able to model the world around us, having a model for human behavior, animal behavior, etc. If we see a baby deer, we look for the mother deer.
It feels like we're also really good at running multiple open-ended simulations in the back of our minds about how things might play out, which can easily be paused or fleshed out depending on which becomes more probable or if we want to learn something. It's kind of a running background process.
30,000 deaths per year suggest that humans fail at these edge cases all the time. Maybe at their best a human outperforms a machine, but ask any person who drives whether other drivers are at their best.
I think it's still close to the most likely cause of violent death for an American, neck and neck with gun deaths if you count the self-inflicted.
I mean, yeah, compared to medieval statistics it's great, but compared to the other non-medical ways to die you face as a 21st century American, automotive fatalities are quite high, and the problem deserves a lot of focus.
(I mean, I agree that reducing miles driven is also a good goal. I'm just saying, making safer cars is a lot easier than talking people into allowing higher density construction and cities built around transit and walking rather than around cars)
Or, judging by traffic's constant deaths... they don't.
Anyone who's seen a Waymo car drive around the valley knows they're much safer than human drivers. They're annoyingly good: for instance, they just never miss a bike and are careful around them, and if you can't tell why, it's a bit annoying.
But I've always come away from such incidents with a feeling of "I should have seen that". I've yet to see them screw up even once.
And yet you can't drive from SF to San Jose without seeing a human driver screw up.
> And yet you can't drive from SF to San Jose without seeing a human driver screw up.
I don't think anyone is debating that in perfect conditions the computer cars can sometimes outperform the worst drivers.
If your concern is terrible drivers killing themselves and others, what you should be arguing for is much stricter standards for obtaining a driver's license. Probably removing 'full coverage' insurance and only allowing liability would go a long way as well. Can't keep wrecking your vehicle if nobody's buying you a new one.
>If your concern is terrible drivers killing themselves and others, what you should be arguing for is much stricter standards for obtaining a driver's license.
Or for keeping a driver's license.
>Probably removing 'full coverage' insurance and only allowing liability would go a long way as well. Can't keep wrecking your vehicle if nobody's buying you a new one.
I disagree... a used car is pretty cheap. Cheaper than full coverage, if you have a bad record.
I think this should be attacked by increasing the minimum liability coverage. Right now, in California, you can drive with homeopathic levels of liability insurance... $35K isn't going to cover very much hospital time/time off work if you hit someone who has a good job. The minimum liability coverage should be 2-3 orders of magnitude higher.
Of course, for either of these things to be politically possible, we first need a society where it's reasonable to get from A to B without a car. An idea that is fought against daily even in relatively dense places like where I am now.
Furthermore, self-driving cars are at this point ONLY driving through areas with a lot of pedestrians. You'd expect them, therefore, to have much higher accident fatalities than normal vehicles. Why? They're always around pedestrians. Instead, in the first years of operation, they're safer than humans.
Even with Uber's less than stellar safety practices, and Tesla's... well... how do I put this politely? [1] seems like a good link. Let's just say Tesla owners could be more careful, especially since they signed a contract stating that they would be more careful (and YouTube has much worse than that video). Given that that's how people use self-driving cars, I would argue that's pretty damning for human driving skills.
Not sure what your point is. That graph shows about 10 deaths per billion VMT, which is exactly the number I stated. It is a completely unfounded assertion that they are safer than humans.
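For context, a rough back-of-envelope check, using figures I believe are approximately right for the US (roughly 37,000 deaths and about 3.2 trillion vehicle miles traveled per year), lands in the same ballpark:

    # Rough sanity check (my own approximate figures, not from the thread):
    deaths_per_year = 37_000     # roughly the recent US annual toll
    vmt_per_year = 3.2e12        # roughly 3.2 trillion vehicle miles

    per_billion_vmt = deaths_per_year / (vmt_per_year / 1e9)
    print(f"{per_billion_vmt:.1f} deaths per billion VMT")   # ~11.6

So "30,000+ deaths a year" and "~10 per billion VMT" are basically two views of the same statistic.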
You forgot ‘A woman in an electric wheelchair chasing a duck with a broom’ and ‘people playing frogger on roads’.
And the Waymo self-driving car didn't seem to have a problem with that.
Things like snow aren’t necessarily a problem in the beginning because you can start by selling a self driving car that doesn’t work in the snow. There’s still a market for it.
I think we need to keep our expectations realistic. Most of those would challenge a significant portion of human drivers (and the scenarios suggested by sibling commenters are even more likely to cause a human to crash), and it's perfectly acceptable for a human driver to say "X crazy thing happened causing me to crash. What the fk?"
Except us humans drive in the middle of the street in residential areas often below the posted speed limit just for this reason. And if we live on this street, we can better anticipate the conditions.
I've seen this multiple times as a situation that people can deal with instinctively but computers won't. However people need to be trained to handle this situation; it is explicitly mentioned in the driver's handbook.† So I don't see this as a "win" for people over computers.
> Not to mention, any snow on the road is going to make it supremely hard for a computer to determine where its lane is, or even where the road is.
HD mapping allows cars to know precisely where they are on the road even if they cannot see the lane markings below. In fact, georeferenced HD maps can pinpoint the car's location in the world down to less than 10cm of absolute precision (as opposed to relative precision).
These are tough issues, but they're probably not all relevant in the suburban Phoenix area where Waymo is getting started. Maybe it rolls out one city at a time for many years and that's okay?
I would add some more edge cases: driving a street in Delhi or Bangkok, or navigating a small Italian town whose street layout was designed circa the 12th century.
The infrastructure driverless vehicles rely on is their own internal maps, which have lanes and signage marked out. Being able to visually detect these things is important, but it is a measure for added redundancy.
Great strides have been made in using machine learning to filter out obscurants such as snow. Perception for autonomous vehicles is effectively a solved problem.
The biggest technical challenges are related to planning. So say you're approaching an intersection. There is a pedestrian about to cross, a cyclist in front of you and another vehicle waiting to turn left across your path. As a human, you understand that if you behave one way, it will cause the pedestrian, the cyclist and the other car to respond a certain way, but if you respond to the situation another way, it will cause all three to respond differently.
Our ability to game out scenarios like this is intuitive, but for AI to predict how its behavior as an agent will influence the behavior of other agents on the road is a daunting undertaking, particularly when taking into account the full scope of scenarios that need to be mastered before an autonomous vehicle can reliably and safely navigate anywhere.
> Great strides have been made in using machine learning to filter out obscurants such as snow. Perception for autonomous vehicles is effectively a solved problem.
"Great strides" is not "perfected", and I don't think perception for autonomous vehicles is a solved problem whatsoever. What is a driverless car to do if it slides out of position on the road and is now facing the wrong direction? Does it have the ability to know whether it needs to call for help, whether it can safely maneuver, or whether its occupants are in immediate danger and should exit the vehicle (or not exit the vehicle)?
The two key aspects of an autonomous OS are perception and planning. Being able to "see" well enough during a flurry is a perception problem. Knowing whether to maneuver out of it or call for help after spinning out is a planning problem. There are many planning problems left to address and master before autonomous vehicles are ready for widespread adoption. Some planning problems are unique to specific intersections, and dedicated software has to be written just for that intersection, like the intersection of Market and Castro, in SF, for instance.
Can they perceive where on the road they should be located when the road is totally covered in snow and you can't see the pavement, let alone the lane markings, with possibly only tire tracks to go by, if that?
Yes. An autonomous vehicle with lidar and maps can locate itself using any landmark, or series of landmarks, it can see; it isn't limited to lane markings.
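As a rough illustration of the idea (a toy 2D example of my own, with invented landmark coordinates and noise-free ranges, not Waymo's actual localization stack): given a few mapped landmarks and range measurements to them, simple linear algebra recovers the vehicle's position with no lane markings involved.

    # Toy 2D illustration: recover a vehicle position from ranges to landmarks
    # stored in the map. Coordinates are invented.
    import numpy as np

    L = np.array([[0.0, 0.0], [50.0, 5.0], [20.0, 40.0]])   # mapped landmarks
    true_pos = np.array([12.0, 9.0])
    r = np.linalg.norm(L - true_pos, axis=1)                 # simulated range measurements

    # Subtracting the first range equation from the others makes the problem
    # linear in the unknown position x:
    #   2*(L_i - L_0) . x = |L_i|^2 - |L_0|^2 - (r_i^2 - r_0^2)
    A = 2 * (L[1:] - L[0])
    b = (np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(est)   # ~[12.  9.]

Real systems, as I understand it, match whole lidar point clouds against the map and fuse that with odometry and GPS, but the geometry is the same idea: the lane paint isn't what the car localizes against.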
Just think, the remains of the last self-driving car will help the next self-driving car avoid the trap. But only if Google carefully catalogs it and no one moves the wreckage.
You can't reliably know the distance from pavement to any of those things unless you plan on indexing every physical object along every roadway.
I think the most likely scenario is self driving cars won't drive in those conditions, hopefully those conditions don't develop while you're on the roadways or you'll be parked somewhere cold.
They do index everything, the maps they rely on are lidar generated 3D point cloud fields that make a model of everything around the road, and are frequently updated.
The goal isn't perfection. The goal is to be as good as humans.
Eg: An autonomous vehicle driving through snow backed by millions of miles of experience driving in snow is better than someone from the tropics driving through snow for the first time.
> Perception for autonomous vehicles is effectively a solved problem.
The problem is that you can't separate "perception" from "cognition". A human could drive a car reasonably well via a webcam feed, so does a webcam count as "solved perception"?
"Perception" in the context of autonomous agents means a mapping from sensors to a model of the world; how effectively can the agent establish a reliable model given its sensor readings?
What GP means by perception being a solved problem is that existing sensors and algorithms produce high quality and reliable models in a wide range of environment conditions.
A webcam is a sensor device; the human observing the webcam feed most likely cannot construct a good model of the car and its surroundings, as they can't do things like shoulder checking or looking in the rear-view mirror, which a good driver does frequently to maintain a model of where other cars are.
But then "reasonably well" for a human driving a car remotely via webcam feed is likely to be a far lower standard than we're holding self-driving cars to.
> As a human, you understand that if you behave one way, it will cause the pedestrian, the cyclist and the other car to respond a certain way, but if you respond to the situation another way, it will cause all three to respond differently.
Why does the self driving car have to restrict itself to extremely subtle human nonverbal communication? Just put a loudspeaker on the outside of the car and have it announce its intentions clearly, yielding to pedestrians/bicycles according to the laws of the land.
> yet to even come close to being workable in adverse conditions, to my knowledge, please correct me if I am wrong.
There's some video here for:
>“The Yandex.Taxi autonomous car safely navigated the streets of Moscow after a recent snowstorm managing interactions with traffic, pedestrians, parked vehicles and other road hazards on snowy and icy streets,” https://www.theverge.com/2018/2/16/17020096/self-driving-car...
Presumably it's not perfect but it seems to be coming along. Snow and ice covered Moscow is more adverse than what I usually drive.
> come close to being workable in adverse conditions
Given how people drive in the rain and snow in the Northeast, I would argue human drivers have not even come close to being workable in adverse conditions.
We drive in them anyway, despite oftentimes facing magnitudes more risk of crashing, because we want to.
I see this fallacious argument made as a counterpoint time and time again and it is getting tiresome.
If you look at all the relevant statistics, both IIHS and insurance, the drivers who practice safe driving techniques and do not drive distracted have something crazy like a 3-4 fold lower accident rate. Moreover, those drivers who have had an accident are much more likely to have another one. Have you ever met someone who has gone their entire life without an accident? They exist; I encourage you to speak to them about how they drive -- more often than not it isn't by...accident (sorry)
The other point I am making here -- the drivers who do account for reckless, endangering driving, or who are careless by some measure of frequency (not careless as a whole, but maybe they forgot to change to their winter tires and properly inflate them?) -- they tend to pay the price the most. Their insurance rates are higher, they get tickets and points, and eventually the worst ones aren't allowed to drive at all.
When the "worst" AI's that are responsible for some amount of crashes, who is going to pay the price? Whose license is going to get deducted? Whose insurance is going to go up? In the world of driverless ubiquity, who is going to be the check and balance? What self-selecting mechanisms are going to protect us? You think Waymo will give a shit?
There are other reasons this argument is fallacious, the most obvious one that driverless cars have not yet proven they are safer or can remain statistically significantly safer than safe human drivers. How do we know driverless cars cannot or won't be compromised? What if a foreign nation state deploys an attack that exploits a known weakness? What if a car company is infiltrated by a bad actor and uses bad training data? We were able to break into a country's nuclear reactor. Russians were able to break into our democracy. I imagine this would be a nice juicy target for a country that really doesn't like us. Etc., etc. It is going to come with a whole host of its own unique problems, so comparing them to current human driving problems (which have potential solutions that our government won't embark on -- different story for a different day) generally won't be fruitful.
>the drivers who practice safe driving techniques and do not drive distracted have something crazy like a 3-4 fold lower
That's quite the qualifier. The problem is I see at least 3 people with their eyes on their phone during every 30-minute commute to work... so...
>When the "worst" AI's that are responsible for some amount of crashes, who is going to pay the price? Whose license is going to get deducted? Whose insurance is going to go up? In the world of driverless ubiquity, who is going to be the check and balance? What self-selecting mechanisms are going to protect us? You think Waymo will give a shit?
These are literally the least difficult issues around self driving cars.
> There are other reasons this argument is fallacious, the most obvious one that driverless cars have not yet proven they are safer or can remain statistically significantly safer than safe human drivers.
> These are literally the least difficult issues around self driving cars.
You are either young or extremely naive (or both) if you don't think these are going to be challenging problems.
> so we should stop trying?
Not sure why you thought that this begged that question. What I am saying is we should stop making arguments that compare driverless car habits, problems, and safety to the current habits, problems, and safety of human drivers. They are generally bad arguments. They won't bear any fruit. Statistics aren't people. Trying to sell a generation on driverless cars on some unproven projection that "they will be X times safer!" will fall on deaf ears to drivers who have made it a lifetime of safe driving.
> if you don't think these are going to be challenging problems.
Are they showstoppers? C'mon, assessing risk is as old as the hills. There's an entire profession devoted to assessing and pricing in risk. With regard to liability, the courts will decide. It'll be messy, but it'll get sorted out in time.
>Trying to sell a generation on driverless cars on some unproven projection that "they will be X times safer!" will fall on deaf ears to drivers who have made it a lifetime of safe driving.
people will get over their fear in time. People feared cars when first introduced too.
"With all the anxiety around driverless cars lately, it’s worth remembering there was a time people worried about cars exactly because they had human drivers. In fact, it was the removal of the horses—the horseless carriage—that gave some people fits.
In the 1890s, the prospect of a person driving without the aid of a second intelligence was a real concern. A horse, or team of horses, acted as a crude form of cruise control and collision aversion.
In 1896 Alfred Sennett warned, “We should not overlook the fact that the driving of a horseless carriage calls for a larger amount of attention for he has not the advantage of the intelligence of the horse in shaping his path, and it is consequently incumbent upon him to be ever watchful of the course his vehicle is taking.”
Also, the lack of support for harder environments isn't because they're impossible, but rather because they are focusing on getting it perfect in normal environments before expanding. The person said they are "coming along fine", not that they are done.
> Also, the lack of support for harder environments isn't because they're impossible, but rather because they are focusing on getting it perfect in normal environments before expanding.
I beg to differ.
I claim that adverse environments are very hard, even pathological, problems, while placid driving on well-laid-out suburban roads with generally well-behaved drivers is a merely hard problem.
The thing is that an engineering team working on a problem like self-driving in adverse conditions is always going to be making some progress. The main question is whether they are going to make the kind of progress that justifies the hefty expenses involved.
Anyone have any idea how frequently horse-drawn carriages crashed in dense inner-city environments 120 years ago, normalized against the incidence of crashes for contemporary cars on the same streets at the same speeds?
Maybe this could form the basis for a kind of Turing test for level 1-4 autonomous vehicles.
Fundamentally, if a human can drive a car with nothing but two relatively poor eyes with a pretty small field of vision set in a single location inside the vehicle, an AI can be trained to drive using the same inputs - any more sensors are a bonus.
The other thing people forget is the bar isn't that high: self driving vehicles don't have to be perfect, they just have to be better than humans are. All of the "whatabout" edge cases people proffer as examples of areas an AI would have trouble with, people have trouble with too. The difference is that once an AI learns to solve that edge case, it doesn't have to relearn going forward.