These cars could be so freeing for so many segments of society. I agree with everyone that reducing the commuting annoyance would be awesome - imagine not having to worry about parking your car in the city anymore. You drive in on a highway at 90mph, get dropped off and then the car goes home or onto another task or gets shared out. Amazing. I'm even more excited about getting the elderly out of their homes safely, and getting drunk people off the road altogether.
It's easy to play out the use cases and see why Uber is investing in this area and why it feels threatened by this technology advance.
I think Uber sees self-driving cars much more as an opportunity than a threat - unless they are afraid they won't be able to buy self-driving cars from anyone but Google, and Google won't sell to them. But I'm sure there will be quite a few suppliers by the time they need the cars [1].
Short term I agree with you - they lower costs massively. Longer term I'm not so sure.
Longer term, a majority of people have self-driving cars. If I live in the suburbs, I either keep it all to myself, or loan it out when I'm not using it. If I live in the city, then I likely don't own a car - but instead call on a car as I need to. These probably come both from individuals lending their cars and from groups/companies that are set up as a service.
Either way, these cars are likely to be network agnostic - meaning they'll lend themselves out to Uber, Lyft - whoever. So while costs will go down, it could lead to a big margin compression. Meanwhile, the arms seller (Google, whoever) makes money on each car being sold.
Makes sense, therefore, for Uber to continue to focus on capturing the market - their installed consumer base (and their brand) is their strongest asset going forward.
I think people focus too much on Uber replacing consumer transportation. I think they're going to branch out into transport of goods and eat into the trucking industry in a major way, in addition to whatever they do with consumer transportation. A lot of the commoditization concerns that you'd have with consumer transportation don't apply as much to freight.
I could see people buying "priority" access to the self driving car fleet. People who don't mind waiting or don't need to move around during peak hours would just have lower costs.
Google Ventures and Google are separate arms of the same company. Ventures invested, but Google proper is working on their own service to compete against Uber.
Note how easy it'll be for Google to flip the "Uber" provider in Google Maps to their own service.
It's akin to single-user vs time-sharing OSes back in the 60s. I wonder how much more efficient society would be with fluid allocation of transport.
Many of us like the idea. I also talk about it with people, and surprisingly they have no knowledge of the SDV projects (Google's or others'), even driving instructors. That only adds to the trust and legal issues; public awareness seems quite slow for such a thing.
That would be incredibly wasteful. Double the cost of gas, double the environmental damage, much worse traffic because cars are going home by themselves with no passenger, then returning to get the passenger later.
Not really. In a future of self-driving cars most people won't even own their own car so most of these cars will be shared - essentially automated taxis. So after the car drops you off it will go to its next call or park somewhere until someone calls it.
How is the car going to park itself? The grandparent's point was about how annoying trying to find a parking space is.
Can self-driving cars parallel park yet?
Can they detect a strip where parking is only allowed between 9-4 and it's 4:30? Can they detect fire lanes?
EDIT:
Also, are these automated taxis really going to be more affordable than owning your own car? Taxis are incredibly expensive today. Sure you save a lot of money when you don't have to pay a driver, but is it really enough to make car ownership undesirable?
> Can they detect a strip where parking is only allowed between 9-4 and it's 4:30? Can they detect fire lanes?
I don't have specific examples, but I know that it's possible to read signs easily (as in, you can write a program to do it yourself with no great techno-wizardry), and I don't see any reason they couldn't use that data.
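For what it's worth, here's a minimal sketch of the kind of program I mean, in Python with the Tesseract OCR wrapper (pytesseract). The sign format ("PARKING 9AM-4PM") and all the parsing rules are assumptions for illustration; a real car would need something far more robust.

    # Illustrative sketch only: OCR a photo of a parking sign and decide
    # whether parking is currently allowed. Assumes a sign like "PARKING 9AM-4PM".
    import re
    from datetime import datetime

    from PIL import Image
    import pytesseract  # wrapper around the Tesseract OCR engine

    def parking_allowed_now(sign_image_path: str) -> bool:
        text = pytesseract.image_to_string(Image.open(sign_image_path))

        # "NO PARKING" / "FIRE LANE" anywhere on the sign: treat as forbidden.
        if re.search(r"NO\s+PARKING|FIRE\s+LANE", text, re.IGNORECASE):
            return False

        # Look for an hour range such as "9AM-4PM" and compare with the clock.
        m = re.search(r"(\d{1,2})\s*AM\s*-\s*(\d{1,2})\s*PM", text, re.IGNORECASE)
        if m:
            start, end = int(m.group(1)), int(m.group(2)) + 12
            return start <= datetime.now().hour < end

        # Unreadable or unrecognized sign: err on the side of not parking.
        return False

So at 4:30 on a "9AM-4PM" strip this returns False, which is exactly the "when in doubt, don't park there" behaviour you'd want.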
Yeah, sure, we can think of a far future where everything is automated but I think this discussion is more about the near future (near being when self-driving cars are for sale).
The logistics of parking go far beyond getting into the space. How does it pay the parking meter? How does it know if a parking lot is public or not, how does it pay for that? How does it know not to go into the unsafe neighborhood where wheels get stolen?
Compared to the logic involved in actually driving down the road safely these are all easy problems to solve.
But realistically these are problems that won't come up, because in the near future (and probably in the far future as well) most people aren't going to buy their own personal self-driving car. Not only are they expensive, but it's pointless to have a personal car when it's way more convenient to offload the responsibility of car ownership to someone else.
Instead they will use an app to summon an automated taxi car owned by a corporate entity (Google, Uber, etc), and when they are at their destination that car isn't going to park, it is going to be off to pick up another passenger.
The car owner will have designated secure parking lots for overflow cars that aren't needed out on the roads during non peak hours. The cars would return to that designated lot when not needed.
Sure, I might be overstating the problem of parking, but I still don't see how this is more economical than car ownership. If I have to summon a taxi everywhere I need to go, I need taxi rates to be a fraction of what they are today.
I can't disagree with you on all of that, but I feel like you might be making the problem harder than it needs to be:
Almost any initial solution is going to be imperfect, and is going to involve mistakes if you try to solve for the entire problem space. Usually, the way to address this is to solve a small subset of the problem. If you don't know how to pay for the parking in a particular place, don't park there. Things like that. You limit the problems that you have to address, and the lessons you learn while applying that technology and the infrastructure that develops around its use alters the difficulty of refining it to address other cases.
I can easily imagine the way that I would solve the parking problem, at least initially: Set it up so that the car can pay wirelessly and make it so that people who own parking spaces can add their carpark to a database as enabled for wireless payment. There are even systems which already exist that do something similar to this; toll roads, toll bridges, et cetera: you put a little device on the front of your car and that records your usage of the relevant infrastructure, which is then charged to your account.
Put a bunch of image recognition for parking signs in the car - parking signage usually follows fairly straightforward formats; I'm not sure whether the signage is defined in statute or regulation, but generally it's much alike. Do some simple correlations on whether the person actually indicates they want to park there to refine the algorithm....
Crime figures are publicly available, so you can hook into that database for avoiding bad areas...
And have a rule that says not to park somewhere if you're unsure.
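To make that concrete, here's a toy sketch (in Python) of the conservative rule set I'm imagining - every name, field and threshold is made up for illustration, not anything Google actually does:

    # Rough sketch of the "solve a small subset of the problem" approach above.
    from dataclasses import dataclass

    @dataclass
    class ParkingSpace:
        lat: float
        lon: float
        wireless_payment: bool   # listed in the opt-in wireless-payment database?
        crime_rate: float        # incidents per 1,000 residents, from public data

    CRIME_THRESHOLD = 50.0  # arbitrary cutoff for "avoid this area"

    def can_use(space: ParkingSpace) -> bool:
        """Only park where we know how to pay and the area looks safe.
        Anything uncertain falls through to 'don't park there'."""
        if not space.wireless_payment:
            return False
        if space.crime_rate > CRIME_THRESHOLD:
            return False
        return True

    def choose_space(candidates: list[ParkingSpace]) -> ParkingSpace | None:
        usable = [s for s in candidates if can_use(s)]
        return usable[0] if usable else None  # None => keep driving, try elsewhere

The point isn't the specifics, it's that "refuse anything you can't handle" shrinks the problem to something tractable.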
In the case of taxis, it's even easier: there are designated taxi ranks where they have to wait. And, considering they're commercial enterprises (there are two types of taxi regulation here; it's a bit weird), you could rely on the taxi company to fill out a database themselves of other places the car could wait - it's in their financial interest to do so.
>Taxis are incredibly expensive today. Sure you save a lot of money when you don't have to pay a driver, but is it really enough to make car ownership undesirable?
This seems to be a point that is often missed. In denser cities--where utilization can be relatively high--drivers make maybe $15 per hour. (Maybe a bit higher when actually carrying passengers which is probably the relevant metric.) So it's not clear to me how these hypothetical vehicles are going to so revolutionize the way people get around by cutting costs by maybe $15/hour (ignoring other labor costs associated with cleaning or any incremental cost of the vehicle).
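To put a very rough number on that (every figure below is an assumption for illustration, not real fare data):

    # Back-of-the-envelope arithmetic for the "$15/hour driver" point.
    driver_cost_per_hour = 15.0   # rough hourly pay cited above
    avg_speed_mph = 12.0          # assumed average city speed while carrying passengers

    driver_cost_per_mile = driver_cost_per_hour / avg_speed_mph
    print(f"Labor saving: ~${driver_cost_per_mile:.2f} per mile")  # ~$1.25/mile

    assumed_fare_per_mile = 2.50  # assumed typical metered city fare
    print(f"Share of fare: {driver_cost_per_mile / assumed_fare_per_mile:.0%}")  # ~50%

So under those assumptions you're shaving something like half the metered fare - significant, but not obviously enough on its own to make ownership pointless.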
The Google self driving car already relies on being fed a detailed map of everywhere it's going to drive. Adding things like fire lanes and parking restrictions to that should be relatively easy.
I think that will be the case for most families' second car, but not their primary. People will own one family car that feels comfortable and contains personal items. The second car (today used largely for commuting) will be self-driving.
I don't know that the traffic would be much worse, because if you assume that a significant number of cars are self-driving, then you eliminate many of the traffic-flow problems that come from human drivers: failure to maintain a constant speed and distance from the car in front, aggressive driving, counter-productive driving habits, tragedy-of-the-commons issues. It is possible to use the roads far more efficiently than we humans are currently capable of.
There's also the point that it doesn't really matter how bad the traffic on the roads is if the majority of the things suffering that bad traffic don't have any people in them.
Gas and pollution:
If you can tell your car to go somewhere else when you are not in it, then electric cars and out of city parking lots - things like that - become a lot more feasible. You no longer need the infrastructure to be right next to your office to make effective use of it.
One of my good friends is a veteran of the USMC, who served two tours in Afghanistan. While deployed, he sustained injuries to his hip and brain. As a result, his mobility is impaired, and he has lost half of his field of vision.
Until his town succumbed to the taxi lobby and banned ridesharing, he was able to run errands, get groceries, and take care of himself. Now, he is counting down the days in captivity, waiting until he can once again be self-sufficient thanks to autonomous vehicles. Keep up the great work, Google et al, and let's hope that day comes soon :)
This is precisely why I take public transit. I know it's not an option for everybody, but if it's available in your area, take another look. I'd much rather spend 45 minutes reading a book than 20 minutes staring at someone's bumper and hoping I don't get killed. And I do.
It seems like the vast majority of collisions with google's AVs are rear-end collisions at intersections. I'm surprised that they haven't put any technology in place to attempt to avoid, or at least reduce the impact of these collisions. Clearly, predicting the trajectory of other vehicles should be second nature for the AVs.
In the video, there appears to be a shoulder with plenty of escape room.
Is there any attempt made to un-distract the driver? Flashing brake lights? Horn?
"The Google AV had been stopped for about 11 seconds at the time of the impact"
The cynic in me thinks they are intentionally avoiding technology to address these types of accidents so that they can unequivocally prove the AV had no fault (i.e., it had been stopped for x seconds).
> Is there any attempt made to un-distract the driver? Flashing brake lights? Horn?
This sounds too similar to victim blaming. In my opinion any action taken by someone who is about to be hit from behind in a low speed crash is likely to make the problem worse. I don't think that is the time to have a quick jerky motion into the shoulder of the road. Even honking the horn could distract or panic another driver that then causes a crash.
To me the solution is to eliminate distracted driving.
There's a middle ground here where the car sees it's about to be rear-ended and softens the hold on the brakes just enough to convert some of the impact energy to forward motion.
It knows the distance to the car in front and has a good idea of the rear car's speed. The only real variable is the weight of the approaching car but a pessimistic estimate could leave more margin to avoid touching the car in front.
If this is happening at intersections, releasing the brakes would potentially push your car into oncoming traffic and add a t-bone collision that would not have happened otherwise.
It depends on how fast the incoming car is travelling. If you are about to be rear ended by someone traveling 60 miles an hour (happens sometimes in rural areas where people don't pay attention and are traveling highway speeds on narrow two lane roads), then it would actually be safer to accelerate the car to 30mph and crash into the person in front of you so that it is a 30mph impact to front and 30mph impact to rear rather than a massive 60mph impact to the rear of the car.
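As a toy illustration of the closing-speed arithmetic (a crude rigid-body sketch; real crash severity also depends on masses, braking and timing, none of which this models):

    # Closing speeds for the rear and front impacts in the scenario above.
    def impact_speeds(rear_car_mph: float, my_speed_mph: float, front_car_mph: float = 0.0):
        rear_impact = rear_car_mph - my_speed_mph    # closing speed of the car behind
        front_impact = my_speed_mph - front_car_mph  # closing speed into the car ahead
        return rear_impact, front_impact

    print(impact_speeds(60, 0))   # stay stopped:     (60, 0)  -> one 60 mph hit
    print(impact_speeds(60, 30))  # accelerate to 30: (30, 30) -> two 30 mph hits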
When stopped in a place other drivers might not expect, just flashing the brake lights does wonders. Probably because human eyes and brains notice change much more readily than a solid light.
> stopped in a place other drivers might not expect
You should expect drivers to stop at intersections. You should normally leave enough space between you and the car in front to avoid rear-end collision, even if that car stops suddenly. And there's already a signal the car in front has slowed - their brake lights come on. (Maybe that's not a US thing though?)
Well, for one, people do not expect drivers to stop at a green light, as was the case here. You might wish everybody were well aware of their environment, but the fact is, a green light is a strong signal that makes people assume the other drivers are moving or are about to.
The second point is completely off-topic, as the car in question had plenty of space between it and the Google car's rear end. It just failed to notice the Google car was not moving.
The last point is already addressed in my previous comment: a solid brake light is much less noticeable than a brake light coming on. In some countries in Europe it's relatively common to have blinking brake lights, but I don't think that's the case in the US.
These points are just basic human behaviour, and they aren't going away, no matter how contemptuous you are and how much you look down on other people.
>Well, for one people do not expect drivers to stop at a green light, like was the case here.
This is bad driving. You use the brake lights (and if you haven't seen the brake lights working, you use the visual speed until you confirm the brake lights are working) to determine if a car is going to stop. You should never put yourself in a position where any other factors are involved in whether the car in front of you will stop or not.
> a green light is a strong signal that makes people assume the other drivers are moving or are going to.
And such drivers are to be held responsible for crashes and should be educated when found. Those who do not change should lose the privilege of driving.
Green lights are dubious: you don't know how long until they change, and thus you 'really should' decelerate a bit (driving-school talk). With clearer signals, you could at least not fear having to stop in an emergency. But even then, that won't stop someone from running their red light ...
Road signs are like compiler warnings: it's all cute and nice, but it's just characters on the screen that anyone can physically ignore. Let's have roads with nasty speed bumps before all intersections, and roundabouts that are annoying and impossible to cross fast. #haskell
> Well, for one people do not expect drivers to stop at a green light, like was the case here.
er:
> The lane to the left of the Google AV was a left-turn-only lane. The vehicle waiting immediately behind the Google AV in the straight-only lane began to move forward when the green arrow left turn signal appeared (despite the signal for the straight-only lane remaining red) and collided with the rear bumper of the Google AV.
>Well, for one people do not expect drivers to stop at a green light, like was the case here. You might wish everybody was well aware of their environment, but the fact is, a green light is a strong signal that makes people assume the other drivers are moving or are going to.
Watch the video. The self-driving car braked because the two cars in front of it did.
Hmm... of course it did. It's not a human. The green light vs. stopped traffic can't mislead it the same way a human can be misled through inattention.
The car behind the self-driving car didn't stop, though, which led to the incident. Presumably because its driver didn't pay attention to the fact that the cars in front were not moving despite the green light.
I'm not sure I understand where this thread is leading.
> "GREEN means you may go on if the way is clear. Take special care if you intend to turn left or right and give way to pedestrians who are crossing."
You seem to be saying that people are shitty drivers and that we need to either just put up with it, or fit our cars to stop some careless inattentive idiot from driving into us.
Your argument is essentially against all forms of safety equipment that protect against being rear-ended, and can be used without substantial modification to argue against safety equipment of all kinds: why bother with crumple zones when I could just insist everyone else on the road were better? The reality is that I am in the car being hit, there is little to nothing I can do to make people not hit me at some point in my life (and to this date, I have been in a car rear-ended twice; in both situations the person behind us was 100% at fault under essentially any useful definition of "at fault"), and so I have a budget I am willing to spend on random safety equipment to make me less likely to be injured when this happens. I already am spending a lot of money on safety features for my car: at least these ones sound cheap to add.
And you should always drive at the speed limit and come to a complete stop at stop signs. And yellow does not mean accelerate.
Lots of non-shitty drivers do not follow every road rule to the letter under every circumstance and, indeed, it's not always possible to, especially without pissing off everyone around you.
I actually agree with the basic point that it's the following driver who has 99% of the responsibility not to run into the car ahead. That said, there are plenty of behaviors like coming to unnecessarily abrupt stops in unexpected places (which I'm not saying is the case here) that will absolutely lead to people running into you.
There are two things: bad people and bad drivers. Schools can only teach proper driving, not proper behavior. And schools are doing a sub-par job IMO. There are so many things you are not taught there, or are only taught in a hand-wavy fashion.
Sure, hitting someone stopped is almost always the legal fault of the person rear-ending.
But that doesn't mean that Google is somehow off the hook. Google could invent a car that never causes crashes, but yet is still more dangerous than a human driver based on the inability to avoid crashes that humans easily avoid.
And, really, it isn't hard to envision such a car being built. If it randomly hits the brakes to avoid phantom sensor readings, then these cars might be stopping in unexpected places an order of magnitude more often than a normal driver.
I'm sure driverless cars will eventually be safer than humans, but I'm not convinced they are yet.
When I went to driving school they taught us to leave quite some room in front of us when stopping at lights etc. Then when the other car stops behind you, you close the gap (or, if the car behind you is going to rear-end you, you have some escape room, i.e. always also check for space to get out of the line). The point was to also check for traffic behind you when stopping.
It's interesting to see Google's display of the situation.[1] Some of that has to be guessing by the software. They don't have an aerial view of the situation; that's reconstructed from their LIDAR scanner. That view is subject to occlusion by other vehicles. Their scanner is high enough that they can see over most small cars, but they can't see their far side. They show a map of all the vehicles in the intersection as if they had all that info, but if there's a truck or an SUV blocking their view, they can't see through it.
I'd like to see the video (they have cameras) that goes with the LIDAR reconstruction.
The CMU/Cadillac/Uber self-driving effort doesn't have a LIDAR on a tower, so they don't have as much information. They haven't been reporting accident data, which would be interesting.
> National crashes-per-miles-driven rates are currently calculated on police-reported crashes.
This made me curious; wouldn't it make more sense to get that kind of data from crashes reported to insurance companies? At least here in Italy, most minor-but-with-small-damages crashes are reported to insurance companies, but not to the police.
In my case in France (countryside; I don't know about large cities), most small damage isn't reported at all, since both drivers fear that the insurance company would increase their monthly fee, so people just give each other cash. It's quite common.
Very much the same in the UK as well. Most folk will try to avoid going through insurance. I'd rather be out of pocket for a few hundred £ than pay extra on insurance for the next few years.
Could you please point to the legislation for that? I don't believe that you commit fraud by not reporting an accident and paying for the repair yourself. At least not in Germany, where I am from.
It's part of almost every insurer's terms and conditions.
They use your driving history to set your premiums. Thus they want to know every accident you've been in because that affects how much they charge you - it affects how risky you are.
If you know you've had an accident, and they ask you to report all accidents, and you do not report an accident with the intent to keep your premiums down you've possibly committed fraud.
I have absolutely never heard of such a thing in the US and I've been driving for a long time. I mean, people get little dings and whatnot in parking lots or wherever all the time--which they may or may not do something about at some point. I've never heard of sending a letter to the insurance company in the absence of a claim. I'm not saying there isn't some fine print in some contracts but it's just not something I've ever heard of someone doing.
I would imagine one of the primary reasons for not reporting to the police in the USA is to avoid reporting to insurance companies. A claim makes one's premiums go up, often higher than it would take to amicably settle the issue between the drivers.
Note: I'm not commenting on the legality of such an approach, which varies by local law and the severity of the crash.
> most minor-but-with-small-damages crashes are reported to insurance companies, but not to the police.
I'm not sure if it's the same way in America, but in South Africa insurance companies will at least ping the police for information if you make a claim - so drivers generally always get them involved if they plan to claim.
The reason why that wouldn't work here is because there are a few hidden costs if you make a claim: people frequently fix small bumps out of their own pockets. This means that not even insurance companies have decent stats on the type of accidents that Google is talking about.
As others have pointed out, lots of small accidents aren't reported for fear of rates going up. But there also just a lot of people without insurance. Bizarrely, not every state requires car insurance, so some people just don't have it.
How many employees with cars does Google have? It could offer a voluntary system where they fit data gathering boxes to every willing employee's car. A bit like the insurance-reducing boxes currently used.
As a fellow European, this also surprises me. However, IIRC, in the US a lot of accidents aren't reported to the insurance companies either, but are handled with cash transactions.
A quick question. Here's the situation: I'm in a self-driving car going full speed on the highway, and a little girl crosses the highway (highly improbable but still possible). What will my car do? Maintain high speed and kill her? Or try to avoid her, knowing that any attempt to avoid a collision at this speed would likely injure me if not kill me? And how would the car react if it's a deer? How would the car be 100% sure it's a deer?
This is where self-driving cars are limited and shouldn't decide for us.
We can throw the same question right back at you. What would you do - kill the girl or kill yourself?
These hypotheticals don't really help the conversation. I don't know how the car would react but it would be nice to know if the car is going to try and work with neighboring cars to try and not hurt anyone. Not to mention that the car will already see the girl crossing the highway before she ever does get out there because of how sophisticated the sensors are.
>What would you do - kill the girl or kill yourself?
Unless the AI behind these cars is so advanced that we cannot know ahead of time what it would do, and it cannot be put in a simulation to test it, there is a vast difference between a self-driving car and a human. Namely, we don't (and with the current level of knowledge can't) know what a human will do until after the situation occurs. And in general, in snap-judgment cases like this, we do not punish a person for making a bad choice unless it is outright malicious.
Contrary to this, a self-driving car is a system that we can attempt to simulate (though not at perfect fidelity), and one where we know the underlying algorithms well enough to answer the question, and even change the algorithms to change the answer, before the situation ever occurs.
>Not to mention that the car will already see the girl crossing the highway before she ever does get out there because of how sophisticated the sensors are.
This is attacking the situation instead of answering it. All I would have to do is modify the situation slightly and the question would remain just as valid, so the response does nothing but postpone it.
I'm fine with this. I'll gladly take a trade where my probability of dying, via the general improvements gained from self-driving cars, decreases in most cases, but increases in other very specific, low probability cases. On average, I die less frequently.
I trust a highly trained algorithm to make better and faster decisions than a human driver. The algorithm is even aware of all the cars around it and their speeds, and can decide to take evasive action with a far better chance of success than I ever could.
If the other cars on the road are self-driving as well, they could all communicate and take into account the one car moving to avoid the object in the road, thereby also reducing the chance of damage to the vehicle swerving to avoid it.
It should be a three tier decision. First, preserve the life in the vehicle. Second, preserve life outside of the vehicle. Third, minimize property damage.
Deer vs little girl does not really matter, that is just an emotional ploy.
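If you wanted to encode that three-tier rule literally, the naive version is just a lexicographic comparison between candidate maneuvers - a toy sketch with made-up numbers, not a claim about how any real system weighs this:

    # Three-tier priority as a lexicographic comparison between maneuvers.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float    # estimated probability of harming the occupants
        bystander_risk: float   # estimated probability of harming people outside
        property_damage: float  # estimated cost in dollars

    def pick(maneuvers: list[Maneuver]) -> Maneuver:
        # Tuples compare element by element, so occupant risk dominates,
        # then bystander risk, then property damage.
        return min(maneuvers, key=lambda m: (m.occupant_risk,
                                             m.bystander_risk,
                                             m.property_damage))

    print(pick([
        Maneuver("brake hard", 0.05, 0.20, 5_000),
        Maneuver("swerve to shoulder", 0.05, 0.01, 15_000),
    ]).name)  # -> "swerve to shoulder"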
I'd hope the algorithm was designed to crash the car instead of running over a pedestrian. A pedestrian has little chance to survive getting run over compared to a passenger in a car with a seatbelt, airbags, crumple zones, etc.. to protect him.
How do we take into account the emotional harm to the driver from killing a little girl (they weren't driving at the time, as they had let the car take over, but they blame themselves for it regardless)? A parent would rather die than hit their own kid.
I guess once these cars begin being sold people can start testing situations using dummies and such.
Trolley problem philosophical discussions notwithstanding, I still suspect that the default mode (as it is with humans under most circumstances) will be to lay on the brakes and hope for the best.
That said, there are certainly circumstances where swerving is the right decision and a vehicle with 360 degree sensors is potentially in a better position to make an optimal decision than a human making a snap judgement based on mostly what they see in front of them. Probably the closest I ever came to being in a really serious accident happened on a highway when someone slammed on their brakes to avoid rear-ending the car in front, ended up drifting into the next lane where another car had to swerve to avoid them. I ended up swerving into the breakdown lane. Amazingly with cars swerving and braking all over the road at 70 mph, no one collided but it was a very near thing.
I strongly suspect that, however many decades away true self-driving is, good assistive driving and collision avoidance systems especially on highways are going to be a pretty big near-term win.
It will be better able to make a decision than a human can. But what decision is optimal isn't well defined, because never before have we had the need to define it. So far, only directly malicious behavior is known to be bad. But prioritizing your safety over the girl's safety is something we have not yet decided is good or bad.
I wonder how many of these driver errors are due to the assumption that the self-driven car will behave just like a human driven one?
It wouldn't take much — an unnecessary hesitation, a slow pull away or (on the other end of the spectrum) super-human reactions to cause me to rear-end someone.
Most of my near-collision experiences have been at a junction where I've judged a situation to be safe to pull into the road for both myself and the car in front, and I start to manoeuvre while doing my one last check into the oncoming traffic to verify it's clear. My assumption is that the car in front will do the same but occasionally they don't and I have to stop suddenly to avoid collision. Every time I feel surprised and embarrassed. Why didn't I check? Truth is the other 400 times we all behaved as expected.
My point being, if the cars are still being limited in their speeds and are (as sometimes reported) a little ponderous in the middle of a manoeuvre maybe it is their behaviour that is causing some of these 'driver' errors.
> It wouldn't take much — an unnecessary hesitation, a slow pull away or (on the other end of the spectrum) super-human reactions to cause me to rear-end someone.
If you rear end someone it is almost always[1] your fault, and in the examples you give it's definitely your fault.
I'm a little bit surprised that you describe this as the fault of the Google cars.
As I understand it, from the perspective of an insurance company, rear-ending is basically always your fault no matter what the extenuating circumstances are. I couldn't stop on glare ice a number of years back (brakes basically did zero) and eventually kissed the bumper of a car I was following a good 100 ft. behind that had skidded and eventually came to a stop in the middle of the road. No damage but I mentioned this to a friend who worked at an insurance company and he said I'd have been considered at fault if a claim had been necessary.
Yes, but fault isn't the problem - if another driver is doing something that would make an accident "their fault", I normally still see their erratic driving and move my own car away from them. If I see someone flying up at me in my rear-view mirror, I move. If I see someone weaving a bit in their lane, I give them an extra lane of space between us.
I don't know if Google's cars do these things or not - but those are the kinds of avoidance techniques I would hope for, even if an accident would not have been their fault.
The one time I rear-ended someone in my life I was doing what you described. It was fully my fault and the result of me trying to gain approximately two seconds on my trip. Fortunately that was over twenty years ago and I haven't repeated the mistake.
I see autonomous cars as the solution to this problem.
They could definitely address it somehow. They could have an additional, stronger red light that turns on when the car is about to be rear-ended. And some sort of rear airbag to softly catch the car behind.