I've been saying this for years, as I took Thrun's class when he was at Stanford. Google's dirty little secret is that self driving cars are still mostly smoke and mirrors. Given relatively controlled conditions and a trained driver who can play backup when needed, they work. But if you put them in complicated situations - snow, a busy city environment, abnormal signage - watch out.
The problem is that the driving model is probabilistic. When you solve a problem probabilistically, getting from 90% covered to 99% to 99.9% covered to 99.99% covered involves exponential leaps in difficulty. So even if the car covers 99.9% of driving conditions (and it currently doesn't), there's still a tremendous amount of work to be done to get it to 99.9999% correct, or whatever the threshold is for it to be deemed "safe" for fully autonomous use.
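One toy way to see those exponential leaps, purely my own sketch with made-up numbers: if you assume the distinct situations a car must handle follow a long-tailed (Zipf-like) frequency distribution, then each extra nine of coverage forces the system to handle a vastly larger slice of rare case types.

    from bisect import bisect_left
    from itertools import accumulate

    # Toy model (my own assumption, nothing from Google): distinct driving
    # "edge case" types follow a Zipf-like long tail, where case type i
    # occurs with probability proportional to 1/i.
    N = 1_000_000                                  # hypothetical number of case types
    weights = [1.0 / i for i in range(1, N + 1)]
    cumulative = list(accumulate(weights))
    total = cumulative[-1]

    def cases_needed(coverage):
        """Smallest k such that the k most common case types cover `coverage` of miles."""
        return bisect_left(cumulative, coverage * total) + 1

    for c in (0.90, 0.99, 0.999, 0.9999):
        print(f"{c:8.4%} coverage -> handle ~{cases_needed(c):,} of {N:,} case types")

    # Under this toy model the first 90% of miles is covered by a modest slice of
    # case types, but each extra nine drags in a huge additional chunk of the tail.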
I personally am bearish on the technology, as getting the inconvenient final situational cases right will be extremely challenging. At Stanford I came to the view that the probabilistic approach would get us to really cool demos, but never a fully autonomous vehicle. That being said, the people working on this are a whole lot smarter than I am, and I would love to be proven wrong.
One that Google has not solved, for instance, is navigating a gas station. When they fill up gas at Shoreline & Middlefield in Mountain View, I see humans doing the driving.
Approaching this problem as 'navigating around X' is the wrong way to go about it. Once you solve navigating a gas station, the next problem you'll face is driving around a school, or down a lane where kids are playing. The list would never end.
The idea must be to come up with a generic algorithm that solves these problems as a whole. Not one specific case at a time.
The gas station is a special case though, because the objective isn't just to travel from here to there. Finding a parking space is a somewhat similar kind of special case, where there is a specific objective.
I decided to test a Google self-driving car as it crossed an intersection by accelerating toward its broadside - no reaction at all. Well, I did get a reaction from the humans inside.
I recently narrowly avoided getting killed in a broadside collision by braking just in time. If I had been farther along, I would have sped up out of the way instead. Would a probabilistic approach handle this? Maybe they need to compile a list of special edge cases.
You can't compile a list of edge cases for this kind of thing, because it is impossible to know the comprehensive list of all the situations the car won't handle correctly.
In the end, you need a learning technology that can properly adapt to any possible situation and give a decent response. Maybe it can be done, but we certainly aren't there yet, and I'm skeptical as to the tractability of the last bit of the problem.
I think they're currently concerned with making sure the vehicle drives safely. Many humans apply evasive maneuvers, only to end up killing someone else or hurting themselves in other ways.
All this shows is that the Google car was driving well, and you weren't. Though I'm sure as the tech progresses they'll look into this sort of thing, and will implement what makes sense.
It seems to me we have a better chance of seeing low-flying autonomous vehicles for personal transportation before we see self-driving cars (think quadcopter-drone-style vehicles with 500 lb payload capability). Autopilot systems are already widely used in commercial air vehicles.
Fewer obstacles need to be accounted for in the air, and there is less ingrained regulation. It feels like we are going to leapfrog the car altogether. I would love to see a detailed analysis that compares these two approaches.
For autonomous cars, the challenge is software. It's very difficult to build software that handles all the edge cases mentioned in the article. The hardware (cars) is a solved problem.
For autonomous personal flying machines, you're right that the software is easy, but the challenge is power density. The only VTOL machine that can carry humans a considerable distance without refueling is a helicopter. Helicopters are too large, disruptive, and expensive for everyone to have one. But no one has figured out how to shrink one down and still be able to carry a person.
If you compare the two approaches, one big difference is that software is improving much faster than power density. So my money would be that we solve the first one first.
If all flying cars had autopilot I don't think it'd be that hard. They'd all follow the same autopilot guidelines. Ground cars will forever have to deal with the millions of "legacy" systems like lights, manual cars, stop signs, pedestrians.
A flying car system would be totally networked from day one. I'd imagine it being like the cars from Minority Report.
Why do you think this? Self-driving cars seem reasonably close whereas there doesn't seem to be anything resembling what you're proposing. Also, aren't autopilot systems successful in commercial air vehicles because they operate high up away from most everything other than a few birds and other commercial air vehicles and they maintain massive separation between vehicles (compared to roads)? This wouldn't seem to scale well to lots of small personal vehicles relatively close to the ground. Or are airborne autopilot systems also good in these situations and I'm just unaware?
It depends on how "reasonably close" they end up coming. If it ends up being a Zeno's paradox of issues where it can handle 90% of situations, then 99%, then 99.9%, then 99.99%, etc., but each of those incremental improvements proves harder and harder to achieve, how many lawmakers are going to say "Sure, only .001% of robot cars on the road get into catastrophic accidents in unhandled situations, that's fine, let's make these things legal"?
The problem with emergent technology is that it's what people tend to be most afraid of, and if there ends up being situations a robotic car absolutely cannot handle that's known about, how long will it take for people to start abusing it?
And for unhandled exceptions... a while back I was driving up 280 and a police officer pulled out in front of traffic, flipped his lights on, and started weaving back and forth across all lanes of traffic. All the drivers slowed down and kept behind the officer, obviously not sure what was going on. The officer stopped weaving at a couple of points along about a one-mile stretch to get out of his car and pick up an item off the freeway, then got back in and resumed weaving, until he got up to a previously-pulled-over car and parked behind another officer.
That's definitely not something they covered in driver's ed, apart from "if something unusual is happening, slow down". But how long do you think it would take to make the news if a smart car in that situation passed the police car on a weave at 85 mph and struck a drunk guy who was stumbling along the freeway? And do you really think every possible situation that occurs during driving will eventually be able to be handled by a smart car?
Another example: any time you come across a car or some other random vehicle that's double-parked in the city. If the car can only figure out that pedestrians are blobs of pixels, does it have sufficient resolution to figure out how far away that oncoming car is, or will it just patiently sit behind that moving truck until they're done and start moving again?
There's a lot of edge cases for this tech, and most if not all of them have potentially fatal exception cases if you fail to handle them correctly.
> how many lawmakers are going to say "Sure, only .001% of robot cars on the road get into catastrophic accidents in unhandled situations, that's fine, let's make these things legal"?
If they're actually thinking properly (doubtful), they'll look at the rate of catastrophic accidents with human drivers, and make a call based on whether or not self-driving cars are an improvement.
At least now there are multiple companies and organizations working on these problems. I just hope there will be some openness and sharing across all these teams.
In the air you have an entire 3rd dimension to play with. You can get far more density of vehicles per route. You can fly in a band lower than commercial air vehicles; the 300-1000 ft range has hardly any traffic. The self-driving car always strikes me as a faster-horse solution.
Being able to abstract away the 3rd dimension is a huge boon, which is why we're still doing self driving cars.
In the air you can't ignore the 3rd dimension. Every calculation your car AI was making before will now grow by an order of magnitude. Even making a helicopter stay stationary in 3D space is a challenge.
I'm with you here. You could essentially build from scratch a traffic setup for autonomous vehicles, instead of retrofitting the autonomous vehicles to a current driving system. And by flying you get to avoid lots of complications that will prove very difficult to work around. No pedestrians, snow, signage, construction, etc etc.
On the other hand, traffic in the air might become a very messy problem. Last I heard autopilots are not used during takeoff and landing, so they barely deal with any traffic at all...
When air vehicles fail they fall from a great height. They also tend to be vastly less energy efficient, at least for a personal vehicle. Maybe the second is not an issue if you go far enough into the future though...
Personally I think the path forward for autonomous cars goes hand-in-hand with services like Uber. First you need there to be a population of people who can get by without owning cars and have them rely exclusively on driving service. Then you slowly augment the driving service with autonomous vehicles when these folks are going to be traveling along routes that are supported by the cars. These routes will likely be an order of magnitude cheaper to travel, since there is no human driver. The market dynamics combined with the steady march forward of the capabilities of the autonomous cars will do the rest.
In other words, I think it's pretty unlikely you will just wake up one day and swap out your car for a self driving one to drive you around. Eventually, maybe, but fundamentally it will be less about which car you own and more about if you own a car at all. In the long run if autonomous vehicles are successful it makes little sense to own a car anyway, so it's only logical that the early adopters naturally be non-owners.
Yes, self driving cars make more sense as a service than as a possession. And knowing their limitations helps us see where they might be useful today, so that they can be gradually incorporated into traffic.
So what if the car can't drive itself in anything other than familiar roads during clear weather? I would guess that the majority of my current driving is done under those conditions. If the car could take care of itself most of the time but fell back to my control during rain, snow, obscure roads, or construction zones, I think it would still be a net benefit to me.
The problem there is driving is a skill, and one which you can lose if you do not exercise it. A self driving car which only works 99% of the time seems like a non-starter, since by that point the humans inside of them are no longer qualified to take over. Particularly if that 1% is in conditions that are particularly difficult.
I don't have a driver's license, so for me that would be perfect. I don't care about driving, I don't want to drive, but I want my (or a) car to come get me from the bar / after work. If the weather is not good, then I will use public transport.
I think people overestimate their abilities. Sure, this is a new technology but in X years these cars will be better drivers than 99% of the people.
I could see people driving slowly and carefully in those rarer cases, e.g. in the event of minor flooding or a fallen tree that confuses the sensors.
In cases like garbage bags (may be empty, may contain sharp debris) on the road, I could see a car communicating with the others in the network to describe the problem from another vantage, or after driving through.
This is already a problem, as many drivers don't have extensive experience driving on snowy or dirt roads, or in aquaplaning conditions. Most drivers are used to asphalt/concrete roads and have real problems on the few days with rare weather and road conditions. So with even less manual driving practice in a self-driving future, this could be a big issue.
Oh really? A futuristic, mind-blowing, still-developing technology which hasn't been released to the public (and has no plans to be) isn't ready to be released to the public? What's the point of an article like this, especially being posted here? We're all technical people who understand the complexities (or understand it's so complex that we can't understand them). If the effort is to inform the ignorant public of their misconceptions, post it on Fox News or Yahoo Answers.
It's an odd reaction that self-driving cars have to be perfect and handle every edge case and have tens of millions of miles of error-free testing before they're considered "ready".
Humans usually spend about 30 minutes driving 5-10 miles to get a license, are far more unpredictable and inattentive, and have a stupid number of edge cases of their own.
Not to say we shouldn't hold self-driving cars to a higher standard, we absolutely should, but it's silly to think they need to be perfect.
>"Some experts are bothered by Google’s refusal to provide that sort of safety-related information... the public 'has a right to be concerned' about Google’s reticence: 'This is a very early-stage technology, which makes asking these kinds of questions all the more justified.'"
This seems to be a pretty common opinion, but I fail to see why anyone feels they have a "right" to be told Google's plans, strategies and ideas, much less their development mistakes and failures for a thing that is not a product currently available (or even close to it).
I realize there is a very real fear many people have about autonomous vehicles because it's something they don't/can't understand. And while that's something Google's marketing/PR department may want to deal with, this fearmongering attitude of "Google won't tell me exactly how it's going to work... it must not be safe... oh no we're all going to die!!! " is just getting ridiculous.
I remember a few of these difficulties being mentioned in Sebastian Thrun's Self-Driving Cars course on Udacity. It's been awhile, but I seem to remember the snow discussion in particular. The problem wasn't just slippery conditions, but the question of how the vehicle can localize itself when the various visual cues that it uses are covered in snow (e.g. road lines).
I think there are better potential solutions for a self-driving car to orient itself in those kinds of conditions than for a person. I've driven in miserable weather, and it is very disconcerting - you just guess and go. A self-driving car could take measurements of the road, combine them with GPS coordinates, and look up its map database to know how many lanes there should be, then estimate where the lanes and limit lines are. Once my eyes fail me, I don't have a fallback.
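Roughly the kind of map-based fallback I mean, as a toy sketch with invented field names (not a claim about how Google actually does it):

    from dataclasses import dataclass

    @dataclass
    class RoadRecord:
        """Hypothetical map entry for the road segment our GPS fix lands on."""
        lane_count: int        # how many lanes the map says exist here
        lane_width_m: float    # nominal lane width for this segment

    def estimate_lane(road: RoadRecord, offset_from_centerline_m: float):
        """Guess which lane we're in and how far we sit from its center, using only
        the GPS offset from the mapped road centerline - no visible paint needed."""
        half_width = road.lane_count * road.lane_width_m / 2.0
        # Distance from the road's left edge, clamped onto the mapped paved width.
        from_left = min(max(offset_from_centerline_m + half_width, 0.0),
                        road.lane_count * road.lane_width_m)
        lane_index = min(int(from_left // road.lane_width_m), road.lane_count - 1)
        lane_center = (lane_index + 0.5) * road.lane_width_m - half_width
        return lane_index, offset_from_centerline_m - lane_center

    # e.g. GPS puts us 1.6 m right of the centerline of a two-lane road with 3.5 m lanes:
    print(estimate_lane(RoadRecord(lane_count=2, lane_width_m=3.5), 1.6))
    # -> (1, -0.15): right-hand lane, about 15 cm left of its center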
That, and consider one of the chief issues this article brings up - that the self-driving cars rely on unreliable maps. Once a self-driving car traverses a road, it could propagate a much more detailed, or at least much more auto-appropriate, map to the swarm of other cars.
The maps would get orders of magnitude better in the year the cars were unleashed.
People have trouble with snow as well. If there are no tracks in front of you, you just guess where you're supposed to go and go for it. It's pretty fun on roads you know well, rather miserable on roads you don't.
Also, most 3-lane roads magically turn into 2-lane roads, and many 2-lane roads turn into 3-lane roads etc.
You just described the problem nicely. The "magic" that turns 3-lane roads into 2 lanes, etc., is a situational awareness that is really, really difficult to impart to a learning system. The big problem is that probabilistic models don't have a notion of a "common sense" solution to an odd situation. They need to have seen the situation, or something very similar to it, enough times to make a reasonable calculation of what to do.
The 3 lane to 2 lane problem is solved already. These cars follow a centimeter-accurate 3D map that has the lanes precisely defined (as well as acceptable speeds, location of stoplights, etc).
The Google car knows the lane change is approaching long before it shows up on any sensor.
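In spirit it's just a lookup against the prior map. A minimal sketch, with an invented segment format (no claim this resembles Google's actual map schema):

    # Hypothetical route annotation: (distance along route in metres, lane count).
    route_segments = [(0, 3), (800, 3), (1450, 2), (2600, 2), (3100, 3)]

    def next_lane_change(position_m, segments):
        """Return (distance to the next lane-count change, new lane count), or None."""
        current = None
        for start_m, lanes in segments:
            if start_m <= position_m:
                current = lanes
            elif lanes != current:
                return start_m - position_m, lanes
        return None

    print(next_lane_change(1000, route_segments))   # -> (450, 2): lane drop 450 m ahead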
This isn't about a lane change approaching. This is about people disregarding the concept of lanes when there's fresh snow on the ground because they have no idea where the lanes are.
Not quite. I would define common sense as "a reasonable fallback solution given that the current situation is unfamiliar." This is something AI systems have a LOT of difficulty with, the self-driving car being no exception.
Most of the problems yet to be tackled seem to be troublesome for many humans as well. But as for adding to the maps, couldn't Google work with local governments to have stoplights, stop signs, construction, etc. added, while Google contributes observations of potholes and the like?
Living in Russia, I have always tried to imagine how Google cars would behave here, especially in winter.
As a driver here you have to make daring decisions sometimes, something that robots probably should never do due to a 'do no harm' rule. That's a real engineering challenge.
There was recently a car crash here in Victoria, Australia, that resulted in the deaths of two teenagers and injuries to five others. It was apparently caused by the relatively inexperienced driver swerving to avoid a rabbit. In hindsight it would have been better if the driver hadn't swerved and the rabbit had been run over, but in some sense this is a moral choice of the type that robots can't make.
In theory self driving cars sound like they could be potentially safer than human drivers, but there are many edge cases like the one above where a robotic driver following reasonable heuristics (e.g., swerve to avoid animals on the road) could cause fatal accidents that most human drivers would be able to avoid.
I think this is a poor example. You could just as easily program the robot not to try so hard to avoid hitting a small animal (or at all). It's not a moral choice, it's a decision based on size and shape. Also, a robot might be less likely to swerve into other cars or dangerous locations.
There are problems for robot cars, sure, but this doesn't sound like one of them.
> It's not a moral choice, it's a decision based on size and shape.
A baby crawling on the road is the same size and shape (roughly) as, say, a dog. In that situation the choice is a moral one, whether the robot knows it or not.
> this is a moral choice of the type that robots can't make.
The driver in Victoria did not make a moral choice. Did they really weigh the life of the rabbit against death and injury of seven people? Extremely unlikely.
A robot, like an experienced driver, could easily use the heuristic that hitting the small thing is safer than hitting the big thing.
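As a toy version of that heuristic (my own sketch, not a description of any real system):

    def should_swerve(obstacle_size_m, escape_path_clear):
        """Toy policy for the rabbit case: only swerve when the escape path is
        clear AND the obstacle is big enough to be dangerous to the occupants."""
        if not escape_path_clear:
            return False              # braking in-lane beats swerving into traffic or a tree
        return obstacle_size_m > 0.5  # rabbit-sized obstacles: stay in lane and brake

    # A ~0.3 m rabbit with cars alongside: don't swerve, just brake.
    print(should_swerve(obstacle_size_m=0.3, escape_path_clear=False))   # -> False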
>>Did they really weigh the life of the rabbit against death and injury of seven people?
I would say yes. First I would want to see what the driver actually did. Did the driver have the seven people in plain sight when he/she made that decision? But the driver certainly did make a moral decision to save the rabbit's life. It may even have been that the person thought they could save the rabbit's life without harming the humans. So you have to make decisions here which are not purely mathematical.
>>could easily use the heuristic that hitting the small thing is safer than hitting the big thing.
How would this hold up against hitting someone below average height versus hitting someone tall? Or running over a baby vs. an aged person?
I'm not blaming the driver for making an immoral decision but I am saying that many more experienced drivers would not have swerved. For example, when driving in Africa where stray dogs are common, my father had a pre-determined policy not to swerve to avoid them given that it increased the probability of an accident involving human injury.
As I say in a separate comment, that "small thing" may be a human baby or toddler on the road.
I'm always glad to see reasonable overviews of emerging technologies. While very focused on what's wrong, it is a good counterpoint to the typical breathless coverage.
The article talks about unknown traffic lights possibly being a problem. But if all cars were self-driving, would there even be a need for traffic lights?
> If it encountered an unmapped traffic light, and there were no cars or pedestrians around, the car could run a red light simply because it wouldn’t know the light was there.
So the car determined it was safe to cross and crossed? I don't see the problem. The red light is for humans, but a self driving car will know where the other cars are and know when to cross an intersection.
You could even argue the reverse. A self-driving car that comes to a green traffic light but detects a fast-moving car running the red would stop. But a human might not, and might continue on thinking green means it's fine.
They are still designed fundamentally to avoid obstacles. And they have 360 degree vision, meaning they will avoid other vehicles when they do not follow expected traffic behavior.
I imagine some kind of "paranoia" setting where the car just assumes that anything it sees moving can and will accelerate towards it or its immediate path.
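Roughly what I have in mind, as a back-of-the-envelope sketch with made-up numbers: assume any moving object might accelerate straight at the point you're about to occupy, and yield whenever its worst-case reach could get there first.

    import math

    def worst_case_conflict(gap_m, obstacle_speed_mps, max_accel_mps2,
                            own_speed_mps, horizon_s=3.0):
        """Paranoid check: within `horizon_s`, could this object reach the point
        we'll be occupying if it accelerates flat-out directly toward it?"""
        for tenths in range(int(horizon_s * 10) + 1):
            t = tenths / 10.0
            our_travel = own_speed_mps * t                    # where we'll be at time t
            distance_to_us = math.hypot(gap_m, our_travel)    # from the object's current spot
            worst_reach = obstacle_speed_mps * t + 0.5 * max_accel_mps2 * t ** 2
            if worst_reach >= distance_to_us:
                return True    # conflict possible -> slow down / yield
        return False

    # A car 15 m to our side doing 10 m/s, able to pull 3 m/s^2, while we do 10 m/s:
    print(worst_case_conflict(gap_m=15, obstacle_speed_mps=10,
                              max_accel_mps2=3, own_speed_mps=10))   # -> True: yield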
I think you would have to experience first-hand what it is like to drive in almost any country that is outside the US, Europe or most of the commonwealth to really appreciate the difficulty of developing a self-driving car that could handle such conditions more safely than humans.
It very much is hard.
I've driven in and around India. Aside from the fact that the majority of vehicles have four wheels and generally move forwards more often than backwards, the similarities to driving in countries with more regulated driving environments are few to none.
Try circumnavigating Connaught Place on any day for a taster, or try taking the main highway north from New Delhi any distance.
The smaller more fragile vehicles give way, in the interests of self-preservation, to the larger, heavier vehicles.
Approaching an intersection in a truck? Blast your horn to warn others of your approach so that they can get out of the way. Slowing down to carefully approach the junction is not what happens.
Need to cut across some lanes of traffic in a school bus to make a turn? Have your co-pilot hang out of the side door waving, shouting and berating other vehicles to make them give way. That's your turning indicator in many cases.
Need to take the same school bus on a circuit to collect kids in the morning and are running a bit late? Why not head straight through some empty fields. Roads can be optional.
Can you imagine the sensors and speeds that might be available within one decade let alone two or three? Compare that to a human looking in just one direction at any given time and with imperfect reaction time.
You know those larger, heavier vehicles you mentioned? What if they're self-driving long-haul transport? They'll likely be able to make it through.
Cost will be the thing that prevents self-driving cars from dominating roads in India before the technology itself is a problem.
Yes, I've seen crazy intersections in person - I considered exactly that sort of thing with my comment.
But I can also see that a self-driving car could edge forward gradually until it had the opportunity to take a larger advantage. A lot of those vehicles and bikes are not travelling at full speed and are also prepared to brake if need be. A self-driving car would be able to judge all of these things almost immediately unlike the reaction lag we get with humans.
>>Try circumnavigating Connaught Place on any day for a taster, or try taking the main highway north from New Delhi any distance.
That's like the easiest of the tests you give.
The difficult ones require record-precision gear-clutch-accelerator work to move the vehicle in increments of inches during gridlock in some places in India. That's on top of the fact that it may be raining heavily and you might be navigating huge potholes, with garbage, beggars, and stray dogs on the streets.
Well then, nothing can give you a true idea until you drive here.
But a self-driving car doesn't need to be more intelligent than a human brain, it just needs to be better at driving. The human brain is certainly a better general-purpose computing machine, but there are many situations where technology can vastly improve on its performance for specific tasks (e.g. reaction time).
As for distinguishing a rock from paper, humans also often fail at similar tasks - "is it a lump of snow on the road ahead, or is it a rock?". It's better to just avoid uncertain objects in such cases. But it's true that there is certainly some room for improvement (for cars, that is, because we are unable to improve brains).
An autonomous vehicle needs to be at least as good as people at crap like reading numbers and street signs. There is no computer yet that can do this task reliably, though all drivers can, or they wouldn't pass their driving tests. That's why these are still considered challenging data sets for machine vision people.
It may not be long before we realize that achieving fully cognizant AI would be faster than covering all these corner cases with somewhat hard-coded logic.
I made this comment about Google self-driving cars two years ago and I think it's still relevant now:
200k miles is nothing. Over 8 billion vehicle miles are driven per DAY in the US. One person is killed for every 75 million miles driven. 200k miles isn't enough to test every terrain, under every weather condition, in every lighting, etc. There are too many variables in the equation.
The Google car won't be truly safe until it has logged 1000x miles.
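For scale, the same point as quick arithmetic (my own back-of-the-envelope, using the 1-in-75-million figure quoted above): just to claim with 95% confidence that the fleet's fatality rate is no worse than the human benchmark, you'd need on the order of a couple hundred million fatality-free miles.

    import math

    # Rough rule-of-three style estimate (my own arithmetic): miles of fatality-free
    # testing needed to claim the rate is no worse than ~1 fatality per 75 million
    # miles, at a given confidence level.
    human_rate = 1 / 75_000_000          # fatalities per mile, per the figure above

    def miles_needed(confidence):
        """Zero-failure demonstration: find n with (1 - rate)^n <= 1 - confidence."""
        return math.log(1 - confidence) / math.log(1 - human_rate)

    for conf in (0.95, 0.99):
        print(f"{conf:.0%} confidence: ~{miles_needed(conf):,.0f} fatality-free miles")
    # -> roughly 225 million and 345 million miles respectively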
Considering the average human has only a few miles of officially observed driving before they receive a license, I think this is a bit unfair.
How would you suggest testing these vehicles to tens of millions of miles without having an actual public deployment?
Also, Google can test its algorithms in simulation for many millions of miles within hours. I believe they're lobbying for simulation to be a major part of testing.
You make it sound like they're stopping trials. There are only going to be more vehicles from more manufacturers out there being tested and in increasingly complex situations. That mile count will continue to rise.
And this isn't true: "The Google car won't be truly safe until it has logged 1000x miles."
I'd love to see an article on this topic that was actually written for techies/engineers.
IMO, this could be right out of Wired the way it is written -- it conflates regulatory issues, problems that are clearly (relatively) easy but just not tackled, and problems that are legitimately still pending solutions before this goes mainstream. An article focusing on what the actual real difficult unsolved technical issues are would be fascinating, but this ain't it.
"The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either."
As a human I sometimes can't tell the nature of something on the road at speed. They gave an easy example (rock vs paper) but IRL it is usually things that aren't so easy to identify like trash bags that might be empty... or maybe not. So avoiding any unexpected item in the road when safe to do so is probably always the right solution anyway.
"Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off."
People aren't that great at this either, especially in traffic where the hole is obscured until the last second. At least an automated car that reports home could, via various sensors, detect hitting "something that felt like a pothole" and send that information out to other cars in the region.
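Something along those lines, with entirely made-up field names and threshold: the suspension registers a sharp vertical jolt, the car tags it with its position, and the report goes out to nearby cars or a map backend.

    import json, time

    JOLT_THRESHOLD_G = 2.5   # vertical-acceleration spike suggesting a pothole hit (a guess)

    def maybe_report_pothole(vertical_accel_g, lat, lon, speed_mps, publish):
        """If the suspension felt something pothole-like, broadcast an anonymous report."""
        if abs(vertical_accel_g) < JOLT_THRESHOLD_G:
            return None
        report = {
            "type": "suspected_pothole",
            "lat": lat,
            "lon": lon,
            "speed_mps": speed_mps,
            "severity_g": round(abs(vertical_accel_g), 1),
            "timestamp": int(time.time()),
        }
        publish(json.dumps(report))   # stand-in for whatever car-to-car / backend channel exists
        return report

    # Example: a 3.1 g jolt at 17 m/s gets reported; print() stands in for the network.
    maybe_report_pothole(3.1, 37.4275, -122.0697, 17.0, publish=print)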
Negative obstacles (e.g. potholes) are particular trouble for line-of-sight, time-of-flight sensing modalities like laser and radar. The lower the sensor is mounted on the vehicle, the longer (in the direction of travel) the hole has to be before you get a sufficiently large difference to notice between returns from the same beam in subsequent time-steps. This is of course worsened further by increasing vehicle speed. It is somewhat mitigated by having multiple beams - the Velodyne laser scanner has 64 - but even then only a few are hitting the obstacle and its surrounds at any one time.
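A back-of-the-envelope version of that geometry, under simplifying assumptions (flat ground, a single beam, zero vehicle pitch): the near lip of the hole shadows its interior, so beyond a certain range no line-of-sight sensor can see any of the bottom at all.

    def max_detection_range_m(sensor_height_m, hole_length_m, hole_depth_m):
        """Farthest distance at which any of the hole's bottom is still visible to a
        line-of-sight sensor: the near lip shadows a stretch of bottom roughly
        range * depth / sensor_height long, so some bottom stays visible only
        while hole_length exceeds that shadow."""
        return hole_length_m * sensor_height_m / hole_depth_m

    # Roof scanner ~1.8 m up, a 0.5 m-long, 0.15 m-deep pothole, car at 25 m/s (~90 km/h):
    d = max_detection_range_m(1.8, 0.5, 0.15)
    print(f"bottom visible only inside {d:.1f} m -> {d / 25:.2f} s to react at 25 m/s")
    # -> about 6 m and 0.24 s, which is why potholes are so hard for these sensors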
Some of the more interesting research in this area uses infrared to detect the change in heat of a shadowed hole versus flat ground. Of course, that only works when there's sunshine to provide the heat source.
P.S. Google people reading this (Urmson, Ferguson, etc), please consider this my annual reminder of the awesomeness of your Google Sydney office and the large quantities of autonomous field robotics engineers training and working within walking distance of it, for wages that could be doubled without your budgets even noticing...
>So avoiding any unexpected item in the road when safe to do so is probably always the right solution anyway.
Exactly. These are edge cases that will undoubtedly be better handled as the technology matures in the near future. There's no reason that these (still very speculative) issues should slow down adoption.
>that aren't so easy to identify like trash bags that might be empty... or maybe not.
A good example, but even if the trash bag is empty it can cause problems. I ran over an empty trash bag once, and it got stuck to my muffler and melted onto it. Made a huge mess and could have potentially caught fire.
Accidents are usually caused by human unpredictability, lack of attention, terrible reaction time, stupidity, narrow perception, inability to follow traffic rules, and the effects of various drugs. Driverless cars mitigate all of these issues.
I would feel much better riding a bike in the vicinity of a self-driving vehicle than any random human.
There are a number of issues to iron out yet and a lot of testing to do. But make no mistake, they're already far better.
They are currently stumped by situations that humans are OK in. They are not "already far better" in all situations. Even if they are better on average, my point is that this is not enough. If the world was run on pure rationality, they would have only to be epsilon better on average to be adopted. In the actual world, they are going to need to be very much better, and have very very few failure cases to be adopted.