"Work is already underway in Japan to build a dedicated autonomous driving proving ground, to be completed by the end of fiscal year 2014. Featuring real townscapes - masonry not mock-ups - it will be used to push vehicle testing beyond the limits possible on public roads to ensure the technology is safe."
Meanwhile, Google has driven 200K+ miles on real highways and streets with its autonomous car.
Still under controlled circumstances, though. This sort of mockup is necessary for full testing.
I'm not sure how autonomous Nissan is shooting for, and I think they're rather deliberately not defining it. You can draw a fairly smooth continuum between "antilock brakes" and "full autonomous car you'd put your unaccompanied children into in the middle of a Midwest winter", and I imagine they're not leaping straight towards the latter. But still, to get there, you need to be able to test things like "what does the car do on a street covered in a patchwork of ice when a dog jumps out in front of it?" without waiting for exactly those conditions to emerge.
If you want a fun thought exercise, imagine how an antilock brake system handles the left wheels being on ice and the right wheels being on dry pavement. And remember... torque. You can't "just" brake with the right wheels....
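To make the thought exercise concrete, here's a back-of-the-envelope sketch in Python. All the numbers (mass, track width, friction coefficients) are invented for illustration, and real ABS logic is far more involved:

```python
G = 9.81  # gravitational acceleration, m/s^2

def split_mu_brake(mass_kg=1500.0, track_m=1.6, mu_left=0.1, mu_right=0.9):
    """Estimate the yaw moment from braking hard on a split-friction surface.

    Deliberate simplifications: half the car's weight on each side, peak
    friction at each wheel pair, no load transfer, no tire model.
    """
    weight_per_side = mass_kg * G / 2.0
    f_left = mu_left * weight_per_side    # braking force on the icy side (N)
    f_right = mu_right * weight_per_side  # braking force on the dry side (N)
    # Unequal fore-aft forces act at +/- half the track width: a yaw torque
    yaw_moment = (f_right - f_left) * (track_m / 2.0)  # N*m, twists the car
    decel = (f_left + f_right) / mass_kg               # m/s^2, both sides at peak
    return yaw_moment, decel

moment, decel = split_mu_brake()
print(f"yaw moment: {moment:.0f} N*m, deceleration: {decel / G:.2f} g")
```

Brake both sides at their own peak and you get roughly 4,700 N*m of yaw torque twisting the car toward the dry side; brake both sides at the icy side's limit ("select-low") and the car stays straight but only decelerates at about 0.1 g. That trade-off is the whole problem.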
> Google has driven 200K+ miles on real highways and streets
I've got to say, I'm pretty amazed they managed to swing that. Permission to drive autonomous robot cars around the everyday streets of litigation- and safety-obsessed California? An accomplishment equal to any single technical challenge in the car's making, I would say!
As I understand, they didn't ask permission. They simply announced one day that they'd been doing it for a year, and then lobbied California to legalize it afterwards. Given the potential benefits of being the state that makes autonomous cars a thing, I imagine it was an easy sell.
Meanwhile...
California is trying to build a $100 billion bullet-train boondoggle that will be ready in 20 years.
By that point our efficient electric cars will be able to link together in high speed freeway 'trains'.
What if that money was invested in developing the autonomous cars?
The public thinks everyone else is going to ride the train and free the highway up for them... what a waste.
No sale. Economic stimulus is completely nonfunctional when the lead time is measured in years. The term "shovel-ready" was thrown around a couple of years ago for a reason.
Of course, I suppose you could just hope that the planning starts in this downturn, and the project itself only starts in the next one....
For the economic downturn that happened five years ago, that is. The fact that we still need "stimulus" is itself a pretty powerful argument against its effectiveness. Our new economic innovation: keeping the economy down long enough for our incredibly sluggish "stimulus" to catch up.
It costs so much money because the land is so expensive. The land is so expensive because 40 years ago many people like yourself said bullet trains would never work in America.
Is it really that hard to be convinced that traveling at 220 mph on a train between cities 200-500 miles apart is a good thing?
California isn't getting the ideal system now but if it builds the first one, lots of people will use it, and the next generation will have an easier time building the 300-350 mph maglev.
Cars cost something like $15+ per (calendar) day, and cause congestion when used in big cities. Making them autonomous won't help with either of those, but replacing them with usable mass transit does.
Automation will certainly help with congestion by optimizing who is doing what, when. I am not a network engineer, but I can't imagine that people are driving efficiently enough to even attempt to maximize throughput.
Especially when you consider that many, if not most, traffic jams are "ghost jams," where a slight overreaction ripples through the following cars, causing them all to stop for no apparent reason.
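That ripple is easy to reproduce in a toy car-following model. A minimal sketch, assuming each driver chases the speed of the car ahead one step late and overreacts by a gain above 1 (all parameters invented; this is not a validated traffic model):

```python
def simulate_ghost_jam(n_cars=8, steps=60, gain=1.4, v_cruise=30.0, v_max=33.0):
    """Each follower chases the speed of the car ahead with a one-step delay,
    overshooting by `gain` > 1 (the 'slight overreaction')."""
    v = [[v_cruise] * n_cars]  # v[t][i] = speed of car i at step t (m/s)
    for t in range(steps):
        row = []
        for i in range(n_cars):
            if i == 0:
                # Lead car taps the brakes briefly: 30 -> 25 m/s for 4 steps
                row.append(25.0 if 5 <= t + 1 < 9 else v_cruise)
            else:
                vi = v[t][i] + gain * (v[t][i - 1] - v[t][i])
                row.append(min(max(vi, 0.0), v_max))  # clamp to [0, speed limit]
        v.append(row)
    return [min(car) for car in zip(*v)]  # slowest speed each car reaches

minima = simulate_ghost_jam()
print([round(m, 1) for m in minima])
```

The lead car never drops below 25 m/s, yet by the seventh or eighth car back the compounding overshoot has become a dead stop: a jam with no visible cause.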
I might have to look into this a bit more, and I agree that adding cars past the point where the system literally cannot handle them is something no optimization can fix.
However, it's clear to me that individual actors in the traffic system do not behave in ways that benefit anyone beyond themselves.
I do believe that overall traffic would decrease somewhat if this were "fixed" by an automated program. (If your car can drive 200 mph and not make a mistake, doesn't that imply more traffic throughput on the road?)
Either way, I am interested if you have specific examples besides the meme of traffic being terrible in LA. I assume they have done massive studies on the problem considering how problematic it is known to be.
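On the throughput question, here is a crude sketch of why reaction time dominates lane capacity, assuming a simple constant-headway rule (the car length and reaction times are made-up illustrative numbers):

```python
def lane_capacity(speed_mps, reaction_s, car_len_m=5.0):
    """Vehicles per hour through one lane, assuming each car keeps a
    spacing of (car length + speed * reaction time) to the car ahead."""
    spacing = car_len_m + speed_mps * reaction_s  # meters per vehicle
    return 3600.0 * speed_mps / spacing           # vehicles per hour

human = lane_capacity(30.0, 1.5)   # ~1.5 s typical human reaction time
robot = lane_capacity(30.0, 0.3)   # hypothetical automated headway
print(f"human: {human:.0f} veh/h, automated: {robot:.0f} veh/h")
```

Under these assumptions, cutting effective reaction time from 1.5 s to 0.3 s more than triples lane capacity at highway speed, which is at least suggestive of what automation could buy.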
Traffic in LA is a symptom of a social interaction, where demand exceeds the supply of a scarce resource. It's not a "technical" problem in the naive sense. Your proposed solution is like trying to solve the "tragedy of the commons" with fertilizer. The problem of the commons is not that the quality of the commons is poor. The problem is that no matter how productive it is, it will be overgrazed. So this is a social problem, not a technical one in the engineering sense. It is ultimately solved with politics, which keeps the politically less powerful (i.e., the poor) out of the commons.
Making cars autonomous would virtually eliminate traffic. Traffic only exists because of the inefficiency generated by cars driving at different speeds in a "pack" on the highway.
Aside from traffic caused by accidents or unavoidable roadwork, the vast majority of traffic - caused by raw inefficiency and poor reaction to the aforementioned two - can be regulated away.
While this is a good idea in theory, I'd say the autonomous highway isn't going to work well if you give grandma the option to tap the brakes when she gets nervous.
Obviously the bullet train is going to be a disaster, but if there were a high-speed train (as in Europe) from SF to LA, I would definitely leave my car home and use the train. I'm only too happy to free up I-5 for you.
I take the train from Santa Barbara to San Francisco from time to time and would even put up with a slow train if it had decent internet connectivity; at least I could be doing some work on the ride. Unfortunately the train is currently more expensive, less productive, and slower than driving.
The technology to build faster trains was invented before man landed on the moon. It has been 49 years since this technology went into production. It's older than most people reading this discussion.
I don't see any feasible way to transition to high-speed linked autonomous cars. Unfortunately, there are a lot of people who need to get to work via car, and these people will not be able to afford a new car. Transitioning from driver-based to driverless technology would take a government intervention in the economy that is unacceptable to most Americans. If we utilize the same road system, there would need to be massive subsidies for the poor to swap out their cars, and we would need to outlaw classic automobiles. Creating a separate road system would take a massive public investment.
Nonsense. You start with driverless cars that share existing infrastructure with everybody else; then, as the density of driverless cars goes up over time, you take advantage of the other economies that emerge. At some point, probably decades later, society may simply ban human-driven cars, but you don't have to start there. There's no need for a "big bang" event.
By the time that happens, the "poor" won't have to "replace" their car, because they'll already have dumped the expensive rust-buckets for time-shared rent-a-cars. It's the rich that will be the last holdouts, not the poor.
By simply adding autonomous cars to the current infrastructure, you don't improve people's lives in a directly obvious way. They can't go any faster, and traffic does not get reduced. Yes, there is the imperceptible benefit of a slight reduction in the probability of death, but a recent New Yorker article pointed out that antisepsis was slow to be adopted because its benefits were not easily measurable; in contrast, anesthesia, invented at the same time, was adopted worldwide almost instantly. I personally would only buy a driverless car if it meant I could immediately get where I am going twice as fast. Otherwise, I rather enjoy driving.
It benefits the person owning the driverless car, who can now do something other than drive while still moving from A to B. You seem to be insisting on thinking collectively, but that's not how people decide or act. It only has to benefit the owner for it to sell.
The rich will be the last to let go of their human-driven cars, but they'll also obviously be the first to adopt autonomous ones, once they're safe. In this case, the rich other than the ultra-rich, who can already afford cars they don't drive... giving further evidence that, yeah, people want this.
Your argument would, for instance, seem to explain why cars never took off... why, one car hardly benefits anyone. There aren't even any car suitable roads, after all, and think of all the horses the smelly, loud thing will spook! But... that's not how people buy things.
The collective benefits come later. The individual benefits come first.
The teams Nissan and Google are using to develop this technology have already dealt with the problems of a highway/city populated with human pedestrians and drivers. High-speed linked autonomous cars are certainly not the first step of this transition.[1]
Why would they have to be linked? If I were building autonomous vehicles, I wouldn't trust information from other vehicles in the same way I don't trust a human driver to use their turn signal properly. The vehicle should rely only on its sensor data and drive defensively.
Cars are way too dangerous. We accept the absurdly high number of injuries and deaths they cause only for lack of a better means of transportation.
I hope self-driving cars will have hyper-efficient safety mechanisms, especially to protect pedestrians and bikers, not just the car's passengers. That would be huge progress for humanity.
Making self-driving cars safer than people-driven cars is trivial. The hard thing will be handling the tidal wave of FUD when the inevitable happens and someone dies as a result of one. Google or Nissan or whoever will need a mountain of good PR and safety statistics to be able to push it back. That alone is enough reason to motivate the creation of "hyper-efficient safety mechanisms".
I'm really wondering about how the social change will occur. Creating the technology will end up the easy part. How do we deal with our cars being software driven, and that software being created by companies which may not have the best track record on consumer privacy? (That's a concern I have about the hyperloop as well- wouldn't these proposed futuristic transportation involve a lot of surveillance by default, compared to old-fashioned modes?) How do we deal with the transition when the majority of cars are still people driven? How does government handle this revolutionary shift?
From no working prototype to multiple commercially available vehicles in 7 years. Yeah, okay.
I hate to be that guy, but it took ~3 years to get the Leaf to production. And the Leaf uses well-known technology that had been shipped by other vendors a decade earlier. I'd be happy to be proven wrong, but this feels like a very long reach.
I am very dubious about other people's driving skills, I tend to assume everyone else on the road is out to kill me and will do the dumbest thing possible at any given moment.
But even so, I am also an experienced software developer, and I know that software is only as good as the author(s). Bugs happen. It's inevitable. And I don't want to die or be injured because of software errors. I'd rather it be human error.
Now you might say to this, "Planes fly on auto pilot constantly. Every time you fly you're basically in the hands of software." And this would be true. But my response to that is:
1) The air is much less densely packed than the roads and highways.
2) In the air, even though you are traveling much, much faster than in a car, the pilots have more time to react to a problem than a driver in a car.
3) The pilots are highly trained, experienced and hopefully alert. Drivers in automated cars will be complacent and texting on their phones.
I think this is a terrible, terrible idea and misuse of technology, despite the fact that humans are shitty drivers. I think it's only going to exacerbate the problem, not improve it.
Have you seen car accident stats? I'll concede that bugs do occur, but it would take some serious failure to make it any worse than the average driver. Even decent or good drivers are not that great, especially when you factor in distraction, tiredness, poor judgement of road conditions, etc.
I understand that you're talking about the interactions between real drivers and autonomous drivers. I still think that the autonomous driver will be better able to react to even a poor human driver than a real one. With all the sensors, they can detect when someone is drifting between lanes, not stopping at a stop sign, etc., and can react with much more accuracy than a human driver.
You're right, but the software of an autonomous car will be held to a much higher standard than just "has to be better than or equal to a human driver".
The whole thing gets interesting when you look at the judicial side of it: Who is responsible in the case of an accident? The car manufacturer (if it was a software failure)? Then even one accident triggered by it is too much.
It will be interesting to see how this plays out. Hopefully society sees the benefits and gets out of the mindset you mention above (which I understand you are pointing out, not advocating).
The idea that 'even one accident is too many' is a joke when you consider how many accidents there are with human drivers, and yet we accept them willingly. If there are 30,000 deaths per year from human driven accidents, then surely the real answer to how many accidents by computer driven cars is acceptable should have an upper bound of 29,999, right? Why isn't it acceptable if it saves even one human life, rather than being unacceptable if it costs one?
Unfortunately, we as a group don't tend to be very logical, and love to spread blame, so I am afraid we'll take a long time to start saving lives. The data on the self-driven cars on the road is pretty overwhelming that they are safer than human drivers, but unfortunately there will be those (as in this thread) who believe that is true of everyone but themselves.
For autonomous cars or not, whoever is liable in the case of an accident will acquire insurance to cover the cost of accidents. Currently, that's the driver. With autonomous cars, that may be the manufacturer, in which case 1) The cost to the manufacturer of insurance will be factored into the retail price of the vehicle and 2) The total cost of insurance will be lower because the accident rate will be lower.
If you were to replace every human driven car on the road with only autonomous cars, I will agree with you that the accident rate would likely be much lower over all.
But there's two problems here:
1) We won't be replacing all the cars overnight. The problem comes with the interaction between terrible human drivers doing wildly unpredictable, insane things and the inflexible, unadaptable automated cars.
And
2) I'm not comfortable with dying or being injured due to a software error regardless of whether its likelihood is higher or lower. The roads are far more dangerous than the air, and this is why I'm uncomfortable with the whole concept of automated cars on the road and I'm not uncomfortable with autopilot in planes.
My experience as a human driver is that most accidents can be avoided by either (A) slowing down in ambiguous situations, or (B) aborting that lane change (usually because two cars are attempting to go into the same space).
Those behaviors are easily improved upon by software. Dealing with lane changes is something software is likely to be vastly better at, because they can point sensors in all directions -- I only have one set of eyes.
Furthermore, a lot of horrific accidents happen because of bizarre, unexpected things that the human does not react to in a timely fashion, precisely because they are so far outside the normal scope of traffic flow.
For example, what if a car coming the other direction drifts in front of you on a rainy night? This is an unfortunately common and very lethal kind of accident in rural areas. It will take you 1 to 3 seconds to interpret the scenario and react appropriately. (You only get 3-4 seconds before you are killed.) A computer can instantly recognize that a car 100 yards away is maneuvering in a potentially lethal manner and begin slowing down right now. That buys more time for everyone.
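A rough sketch of that time budget, with assumed speeds (two cars closing at roughly 55 mph each) and an assumed 1.5 s human perception-reaction time:

```python
MPH_TO_MPS = 0.44704  # miles per hour to meters per second
YD_TO_M = 0.9144      # yards to meters

def time_budget(distance_yd, closing_mph, reaction_s):
    """Seconds left to actually brake/steer after the reaction delay."""
    ttc = (distance_yd * YD_TO_M) / (closing_mph * MPH_TO_MPS)
    return ttc, ttc - reaction_s

ttc, human_left = time_budget(100, 110, 1.5)  # two cars at ~55 mph each
_, robot_left = time_budget(100, 110, 0.2)    # assumed machine latency
print(f"time to impact: {ttc:.2f} s; human has {human_left:.2f} s to act, "
      f"computer has {robot_left:.2f} s")
```

Roughly two seconds to impact; a human spends most of it just recognizing the problem, while a computer gets nearly the whole window to brake or steer.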
Strangely, this might kill autonomous cars entirely. Any failure to properly respond to something bizarre can be the subject of a lawsuit that could kill the autopilot manufacturer.
Engineering solutions are a tiny part of getting this stuff to market.
> the interaction between terrible human drivers doing wildly unpredictable, insane things and the inflexible, unadaptable automated cars.
There are only so many things bad human drivers can do, the physics is pretty limiting. It isn't like they can teleport in front of another car. Even the craziest things like opposing traffic jumping the median into oncoming traffic isn't all that crazy, it isn't conceptually different from a deer jumping in front of a car.
> I'm not comfortable with dying or being injured due to a software error regardless of whether its likelihood is higher or lower.
> There are only so many things bad human drivers can do, the physics is pretty limiting. It isn't like they can teleport in front of another car. Even the craziest things like opposing traffic jumping the median into oncoming traffic isn't all that crazy, it isn't conceptually different from a deer jumping in front of a car.
I was thinking about your assertion that it's easy to solve, and why exactly I disagree with it so strongly on a base level. What am I basing this belief on?
And I figured it out. Driving games.
The AI in driving games like Gran Turismo has had more work than Google or any other company has done on theirs. And not only that, the AI in driving games has perfect information of the physics involved and the environment around them.
And those AIs are still fucking terrible when interacting with human players. Pack of AI driving around? Just fine. Add a human driver to the mix and it turns into a disaster.
This is why I am very concerned about automated driving in the real world alongside human drivers.
First, I don't think it is easy. I just think it is feasible because it is a reasonably bounded problem.
As for driving games, those have a different set of requirements and incentives. Driving poorly isn't necessarily a bad thing from the position of game play.
The AI in games like Gran Turismo is a terrible example. That AI is not designed to avoid accidents. Accidents are an acceptable part of the game play, and you're racing at high speeds above the norm.
It would be no harder, and probably easier, to program an AI to interact with human drivers when the primary goal is safety rather than speed and exhilaration.
> I'm not comfortable with dying or being injured due to a software error regardless of whether its likelihood is higher or lower.
Why on earth is dying due to a software error worse than dying due to a human error, if you can choose one or the other and the chance of a software error is lower?
Individual odds of being in an accident may be lower than the average. Software for all of the cars would be approximately the same, so the chances of being in an accident could increase for extremely cautious drivers. I doubt they would, personally, but it would be an argument against adoption until the cars are demonstrably safer than possible with human drivers.
If the chance of an accident increases, then sure, be against that. And I've no idea what the chances are. But to say you'd rather have a higher chance of dying from human error than a lower chance of dying from software error... doesn't make any sense.
Eventually - not this year, maybe not this decade, but eventually - technology will be better than human drivers. How many lives, then, would you be willing to sacrifice, on the grounds that human error is somehow better than (less frequent) machine error? Six? Sixty? Six hundred? Six thousand? Since 1985, nine hundred thousand people have died in car crashes in the US alone. That's more than the entire population of San Francisco. Think about that. Nine hundred thousand people. Nine hundred thousand corpses. If technology can prevent that, I'd say we have a moral duty to not only develop it, but to do so as fast as possible, before too many more people die early, violent deaths.
As I stated to the reply above yours; we're not talking about replacing all cars on the road with automated cars overnight. We're talking about selling automated cars alongside regular cars. So your argument doesn't really hold water.
Yes if we had an automated car ready to replace everyone's normal car, we'd start saving tons of lives tomorrow and I'd be a monster for opposing it. But that's not reality, that's not what we're talking about.
And what I'm saying is, I'm uncomfortable and see problems ahead. And I seem to be totally alone in this, which is amazing to me.
I am sure that the automated drivers will react faster, but the problem is that they can only react to events they have been programmed to react to and only in the ways they have been programmed to react.
There's no ensuring either of:
a) It will react to everything that it needs to react to / not react to things that it shouldn't.
b) It will react in the proper way when it does react.
This is the problem.
Like I said if it were all automated drivers on the road, I think we'd have a really amazing safety record. But the problem is that with human drivers and automated drivers on the road, the human drivers are going to do all kinds of stupid shit, and how much of it will be reacted to properly? I don't know.
It seems to me most people on HN significantly underestimate the complexity of solving this problem of automated drivers alongside human ones.
Plus, I am also uncomfortable with training human drivers to just trust the computer and pay no attention to what's going on around them. I think that's the worst outcome of automated drivers.
The problem with your thinking is that you assume the engineers behind this can't program a response to every conceivable condition. I'd argue they can.
If you have a 3-dimensional model, with a finite number of axes to move upon, any movement towards the vehicle from another vehicle, no matter what it is, can be enumerated and mathematically modeled.
Once that has been done, it is a matter of programming the autonomous car's probabilistic response to the oncoming collision.
You think that most people significantly underestimate the complexity of solving this problem - I'd argue that, no, figuring out how to maintain satellites in orbit is more difficult than modeling every possible 3-dimensional interaction between two, four, or 20 different cars.
No matter what kind of "stupid shit" human drivers do, there's only so much "stupid shit" they can actually perform. It can all be modeled mathematically on a 3-dimensional plane, and thus programmed for.
Is it a complex problem? Yes, assuredly. But your comments are approaching it with what seems to be mild hysteria - it's a basic engineering problem, and all you need to do is model it out. The math is there. It's solid. And I trust math more than I trust human drivers.
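For what it's worth, the simplest version of that model really is closed-form. A sketch, assuming constant velocities over the prediction horizon (a big assumption; the hard part in practice is everything this leaves out):

```python
def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two constant-velocity
    vehicles in the plane (positions in m, velocities in m/s)."""
    px, py = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    # Minimize |p + v*t|^2 over t >= 0 (closed-form quadratic minimum)
    t = 0.0 if vv == 0 else max(0.0, -(px * vx + py * vy) / vv)
    dx, dy = px + vx * t, py + vy * t
    return t, (dx * dx + dy * dy) ** 0.5

# Oncoming car 90 m ahead and 3 m to the side, drifting toward our lane
t, d = closest_approach((0, 0), (25, 0), (90, 3), (-24, -1))
print(f"closest approach in {t:.2f} s at {d:.2f} m")
```

A planner would run this (or a probabilistic version of it) against every tracked object, every frame, and brake or steer whenever the predicted closest approach falls below a safety margin.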
Exactly, and if the total number of crashes decreases by one or two orders of magnitude, then the outlier incidents that the automated cars couldn't correctly respond to might be so few that they pale in comparison to the number of mistakes made by human drivers.
Computer
- Can see 360 degrees, 100% of the time
- Can read the movement of obstacles in real-time, with very accurate estimate of each surrounding vehicle's speed
- Can run 100s of simulations per second to estimate the odds of an impact
- Can react almost instantly with correct inputs to the brakes and steering based upon real-time estimates of surface conditions
Human
- Can react within ~0.7 seconds with correct inputs to the brakes and steering based upon real-time estimates of surface conditions, in ideal circumstances
- Can see ~180 degrees, with significant focal-point limitations
- Can guess other vehicles' reactions based upon experience
Which do you think will have a better reliability rating in many typical driving scenarios?
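The "100s of simulations per second" point can be sketched directly: sample your uncertainty about the other car's behavior and count how many rollouts end in contact. All parameters here are invented for illustration:

```python
import random

def impact_probability(gap_m=30.0, closing_mps=10.0, my_brake=6.0,
                       trials=1000, dt=0.05, seed=42):
    """Monte Carlo estimate of the chance we close the gap before stopping.

    The lead car's braking is uncertain: each rollout samples its
    deceleration from a range, then steps simple kinematics forward.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lead_brake = rng.uniform(2.0, 9.0)  # how hard might they be braking?
        gap, closing = gap_m, closing_mps
        while closing > 0 and gap > 0:
            closing -= (my_brake - lead_brake) * dt  # relative deceleration
            gap -= closing * dt
        if gap <= 0:
            hits += 1
    return hits / trials

p = impact_probability()
print(f"estimated impact probability: {p:.2f}")
```

With a 30 m gap and uncertainty about how hard the lead car is braking, roughly two thirds of the rollouts end in contact under these made-up numbers; a controller seeing that would brake harder or change lanes well before a human had even registered the threat.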
I politely disagree. Having driven next to self-driving cars for the past few years in the bay area, in my opinion they are far more competent than most drivers. They don't stop tracking the road to put on makeup or check their phone. They don't get distracted by the kids in the back seat. They have their sensors on the road constantly, and not just in front of them. They don't have blind spots.
Will bugs happen eventually? Yes, then they'll be patched. New bugs will be found, but every accident that happens will make the cars safer. Each error improves how safe they are, and since they're currently safer than humans based on accident statistics, I'm more than happy to see something that's already safer get even more safe over time.
A problem I see is every car manufacturer building its own system, maybe with a standardized communication channel to speak to other cars, but with the control software itself bound to a certain manufacturer. This means different bugs in each car; fixing a bug in one car doesn't mean the error doesn't exist in other cars, and so on. New players would face pretty much impossible hurdles to enter the market.
Some kind of Android for autonomous cars would be quite nice I guess.
Self driving cars sound like a slam dunk for the U.S. driving conditions (Bay Area, etc..). I'd buy one in a heartbeat there.
But what about the Latin or Asian messes of countries? Say Rome, Buenos Aires, Lima, Hanoi, or Bangkok? Lots of motorcycle, bicycle, and human traffic (and in the case of my native Montevideo, horse-drawn carriages!).
Driving in some of those countries is more about psychology and playing chicken than anything else (and yes, accident rates are appalling).
I still don't get it. Maybe a computer could react to an erratic human driver faster. But more importantly, it can react more consistently than a human driver who gets tired or distracted (or confused by other humans' behavior).
Edit: To put it another way, a computer that avoids common-case crash scenarios most of the time outweighs humans avoiding edge-case scenarios sometimes.
By watching faces or body language, human drivers can sometimes predict sudden moves before they happen. Example: children playing on the sidewalk and suddenly running into the road.
Still, I believe autonomous cars can be made better than human drivers.
The thing is, you're trying to say that automated cars are a bad idea, but all the arguments you make actually militate for getting rid of the humans ASAP!
If you were going to pass a law against humans driving cars I'd be the first to vote for it.
But as long as humans are driving cars, solving the issue of having an automated car driving alongside them is a non-trivial task which people seem to be underestimating.
As a developer, I agree there will be bugs. However, I'm also well aware that friends of mine have written code with a much more stringent level of testing than the web apps I typically work on. No doubt much of the public will raise an outcry against letting computers drive for them, but if the number of fatalities decreases by one or two orders of magnitude, that will be hard to ignore.
I am under the impression that safety is already at least one order of magnitude better than a human driver. Frankly, I can't wait until the majority of cars are under computer control; it will be much safer.
> I think this is a terrible, terrible idea and misuse of technology
What a bizarre attitude you have there. What possible rationalisation could you have for this extreme, arbitrary prejudice?
A driver kills a dozen people in a freak accident. A real tragedy for everyone involved. A panel of independent investigators finds his actions to be at fault. He goes to prison. The next day another guy, on another road, makes the same mistake...
An autonomous vehicle kills a dozen people in a freak accident. Newspapers scream bloody murder. A panel of independent investigators determines a chain of causes. The immediate bug gets fixed, and the inadequacies in the testing methodology are remedied. We all get to live a tiny bit safer ever after.
1) Autonomous cars shouldn't really be impacted by density. You can only make a highway so dense... there are one to four lanes, and it's like Minesweeper: you can be surrounded by six cars at a time, maximum (two on either side, one ahead and one behind). Density might increase the likelihood of an accident just because there are more objects in motion, but an individual autonomous car would theoretically account for this.
2) The entire point of an autonomous car is for drivers to not need to react to anything at all. So...I'm not sure what this has to do with risk. Maybe the drivers of non-autonomous cars wouldn't be safe...but that's the same situation as today. Autonomous car drivers could read a magazine in the middle of an accident. Who cares?
3) Again, my above comment, this doesn't seem relevant.
I think you're being a bit hysterical, to be honest. It's a problem of math. You can model every possible interaction between the cars on a 3-dimensional plane with a finite number of axes. It might be complex, but we can program every single possible interaction and collision - and that's the first step to programming collision response.
Basically, the takeaway is, I don't think there's a model of car interaction that isn't improved by having an autonomous car, even if only one car out of the two is autonomous. And I'm pretty sure this scales.
That's a bug with the American government (yours?), not Google. Tesla on its own can't fight it any more than Google can, both being American companies.
If the NSA wants to know your real-time location on the roads, it really doesn't need Google to do that. Or Facebook...or any other $BIG_COMPANY for that matter.