Regulation, not technology is holding back driverless cars (nytimes.com)
165 points by ultrasaurus on May 29, 2011 | 145 comments



My dad works in the airplane industry and had an interesting story to tell me that relates to this. (I'm not sure exactly how accurate it is or what the source is, sorry.) Apparently, it is illegal for pilots to read while flying, even if they are heading in a straight line with no one around for miles. This is because one time a pilot wasn't paying attention and, due to a series of software failures, the plane turned into a mountain. Interestingly, the plane was turning at a very precise rate so that the number of Gs remained constant.

In any case, this single crash caused regulation to state that computers can never fly planes by themselves. This strikes me as rather unfair. If a single human crashed a plane, it does not make it illegal for humans to fly planes by themselves.

Another example is that in London, subways must be driven by a human. Even though driving a subway may be trivial (there is no way to steer), Londoners apparently do not feel comfortable being driven by a non-living thing. They want to be sure that if they die, the driver dies too, adding a level of accountability.

It seems that this sort of widespread mistrust of machines is driven more by socially normal paranoia than any kind of logic. I for one am rooting for machines to take over all forms of driving. There may be a few mishaps, but it will probably become hundreds of times safer eventually.


It's certainly not "logic," but I wouldn't call it paranoia either. Consider that some people just like to be more in control of their lives than they are, and they see that small bit of control slipping away every day. If you don't understand computers -- and, granted, if you're reading this forum, it's probably hard for you to put yourself in those shoes -- you're putting your life in the hands of a black box. Never mind that you do it (unknowingly) all the time. Everything you touch is starting to become alien because it incorporates some crazy technology you've never seen before.

There's a great section in Neal Stephenson's Anathem where the science-y younger brother is speaking to his older sister, who's mechanically talented but not very techy. He says there's always a better way to build something, that you can always take the inefficiency out of whatever system you're working with. But she doesn't want that, because she wouldn't be able to fix it herself. That's how most people see technology, and to be honest, we see it similarly, because we can fix it, so it's not a threat. In 30 years, will we be able to fix the things that are running the world, and will we be as inclined to trust them as we do now?

I'm not saying it's logical, or appropriate, but it's a natural response to the situation. I think it's important to work with it rather than dismiss it.


This argument (while good) won't apply to cars for simple economic reasons. Having a human pilot or subway driver costs 1 marginal person for hundreds of people. Having an extra driver per car costs 1 person per person. I will strongly bet that the two orders of magnitude difference in cost will overcome the bureaucracy.

Additionally, once cars become sufficiently advanced, it will be much cheaper to insure a self-driving car than to insure a human.

Once the change happens, it will be breathtaking how rapidly it occurs. Social stigmas are hard to sustain towards a large portion of the population. In 2000, if you were wearing a helmet skiing, you were crazy. Now, you're crazy not to. You used to be a loser if you were on a social network. Now you're a loser if you're not. Both these changes happened over perhaps one to three years. Critical mass may only be ~5% of the population. You only need enough to avoid individuals being singled out as odd.


"Social stigmas are hard to sustain towards a large portion of the population."

Over 90% of Americans have used illegal drugs, and yet there is still a strong stigma there.

http://www.monitoringthefuture.org/pubs/monographs/vol2_2009...

(Cf. page 101, as well as the methodology.)


Your phrasing of that statistic is misleading. What you mean is, "90% of Americans have used illegal drugs or used legal drugs illegally." When you say "illegal drugs" people think of marijuana, heroin, cocaine, etc. Casually lumping in underage use of alcohol and tobacco is purposefully misleading.


Does it count underage drinking? I didn't even notice that if it does. Regardless, around 85% of Americans use marijuana alone, so it wouldn't change the statistics by more than one or two percent. So I don't see how that would be misleading.


You're really misreading these statistics. Nowhere do they say anything close to "85% of Americans use marijuana".

On page 103, it says that 82% of fifty-year-olds (who attended high school during the heaviest-drug-using period in America's recent history) have used marijuana at least once in their lives. But only 10% of that same group have used it in the last year.

Among current 18-year-olds, only 42% have ever tried marijuana even once. And among most age groups under the age of 30, only between 23-34% have used marijuana within the last year.

However, I don't see evidence that the study you posted counts underage drinking in among illicit drug use.


"Nowhere do they say anything close to '85% of Americans use marijuana'."

I said 'have used' in my original comment, which is what I meant. (In my reply I was thinking lifetime prevalence, i.e. around 85% 'use' marijuana at some point during their lives. Poorly phrased, my bad.)

"Among current 18-year-olds, only 42% have ever tried marijuana even once."

The age of first use of marijuana is highly variable. By age 25 around 65% of people have used marijuana, and more people keep trying throughout the next couple decades of their life. The important statistic is what percentage of people use marijuana at least once in their lives, not what percentage use it on any given day.

"On page 103, it says that 82% of fifty-year-olds (who attended high school during the heaviest-drug-using period in America's recent history) have used marijuana at least once in their lives."

The 82% figure doesn't count people who didn't make it to fall of their 12th grade year in high school, so you actually have to adjust it up a couple percentage points. Also, even if fewer high schoolers were using illegal drugs for a decade or two, that probably won't have much effect on the overall lifetime prevalence. It might eventually decline a few points, but probably not that much.


85% sounds pretty wild. Hm, looking at pg 78 of the linked document, it says that among people who were exactly 50 years old when surveyed (graduating in 1977), 82% had tried marijuana.

1977 has got to have been the historical peak of high school usage. I notice they don't mention 51 year olds or 49 year olds, they have specifically pulled out this one exact age for this statistic.

It then goes on to say that 29-30 yr olds have a 68% "adjusted" lifetime prevalence. What does that mean adjusted? It means the actual number they measured is lower (32% for 19-30 year olds), and they tweaked it upwards because they wanted a bigger number.

About this time is where I say a study has too many methodological flaws and should be disregarded.


You're incorrect about the methodological flaws.

Only 50-year-olds are mentioned because this is a longitudinal cohort they've been testing since 1977, and this year (2009) it's their turn to be reported.

The "adjusted" number is higher than the measured number because some people answered the survey ten years ago saying they'd tried marijuana in the past year but then in the current survey the same respondent said "no" when asked if they'd ever tried marijuana in their lifetime. So they're publishing both the actual numbers and the "we know you're probably lying" numbers.


"What does that mean adjusted?"

It's explained in the study.

"About this time is where I say a study has too many methodological flaws and should be disregarded."

Try actually reading it. It's really not that difficult to understand. The 'maturity recanting effect' section here also gives a brief summary of what they did and why:

http://www.erowid.org/psychoactives/statistics/statistics_ar...

Don't criticize studies without taking the time to understand them, otherwise you just sound like these people:

http://www.youtube.com/watch?v=RjX1IaU1A0w


I feel very comfortable stating with 100% certainty that your claim that the study said 85% of all people report being marijuana users (exact words: "around 85% of Americans use marijuana alone") was completely and outrageously inaccurate. It's humorous but a touch sad that you can't admit you were completely and utterly wrong. Lay off the stuff, clear your head, and re-read the report, dude.

It's also charming that you cite erowid.org as a reliable unbiased source. I am familiar with the site and have read it. It is a radical pro-drug site.


Where did he call them unbiased? (I would call them reliable, though they certainly have an agenda. But they've always struck me as working hard to present facts, or at least label opinion as such. Certainly I can't think of another site about drugs that I would trust over Erowid. And I would be very interested indeed to hear of deliberately misleading information posted there.)


I think Erowid is about as unbiased as it gets. All they do is provide information about drugs, they are neither advocating drug use nor condemning it.

Even their legal articles don't take any position on what drug policy should be, all they do is explain the case law that exists.


85% have used at least once, not "use" marijuana.


There are lots of stigmas against behaviors which are widely considered, well, stupid. Or immoral. Or degenerate. Even if lots of people have tried them (and perhaps still occasionally imbibe).

Hypocrisy is demonized. Sometimes rightly, sometimes wrongly.


> Having an extra driver per car costs 1 person per person.

If the car has some sort of payload (like a Semi or UPS truck), then this makes total sense. If the human is the payload, then there is no benefit at all. (Well, other than being able to multi-task -- but if you're working from your car, why not just work from home and skip the commute?)


You can multi-task in areas other than work. You could be reading a book, playing a musical instrument, making phone calls, or even enjoying some recreational refreshments that impair your ability to operate a motor vehicle.

Much like taking public transportation to work, it turns commute time into time that can be spent doing something other than paying attention to the commute.


Recreational refreshment? Another technology revolution that will be driven by the fact that sex sells?


Very interesting! I never thought about the insurance issue. The current system rates premiums mostly according to personal driving record, right? [1] Hence, if I never drive myself, my premium should only depend on the crash record of that type of robot car.

This would also imply that my premium could rise because that kind of robot had recently crashed a bit too often. At the same time, I might be able to reduce my premium by installing a software or hardware update.

[1] I have had a license for years but never owned a car. AFAIK it is not just the driver's record but also the statistical likelihood of a crash, according to gender, age, and car. Though cars today are not rated as 'unsafe' because they are buggy, but because they are popular with reckless drivers.
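
As a rough illustration of the rating model described above (purely hypothetical figures and function names -- no insurer's actual formula), the premium would be driven by the robot model's crash statistics rather than the owner's record, and a software update that lowers the model's crash rate would lower the premium:

    # Hypothetical sketch: premium rated on the robot model's crash record,
    # not the owner's driving history. All numbers are made up for illustration.
    def annual_premium(model_crashes_per_million_miles, avg_claim_cost,
                       miles_per_year, overhead_factor=1.3):
        """Expected yearly claim cost for this robot model, plus insurer overhead."""
        expected_crashes = model_crashes_per_million_miles * miles_per_year / 1e6
        return expected_crashes * avg_claim_cost * overhead_factor

    print(annual_premium(0.8, 20_000, 12_000))  # before a software update: ~$250/yr
    print(annual_premium(0.5, 20_000, 12_000))  # after an update cuts the crash rate: ~$156/yr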


"Now, you're crazy not to [wear a helmet skiing]"

We do not live in the same world. I mean, really, I am in East Asia, not the same world. People do not ski much here, and if they do it is without a helmet.

I know it is a bit tiring to recall this, but it is a good habit to think about the reality of the generalizations that one writes. What I like on HN is that it is usually not too provincial. Sorry for the nitpicking.


"Londoners apparently do not feel comfortable being driven by a non-living thing."

The Docklands Light Railway in London is driverless, although there is a staff member on each train to operate the doors and check tickets:

http://en.wikipedia.org/wiki/Docklands_Light_Railway


So is a French metro line, and others :).

But it's not the same - as one Frenchman put it: "it's like an elevator, only it goes horizontally instead of vertically".

That is not the case with driverless cars - not that I'm against them.

The use cases put forth in the article are quite compelling, but we still haven't been able to eradicate "elevator operators" ("ascensoristas") here in Uruguay; it will be far more difficult to remove drivers' unions!


There were nearly 6,420,000 auto accidents in the United States in 2005. The financial cost of these crashes was more than $230 billion. 2.9 million people were injured and 42,636 people killed. About 115 people die every day in vehicle crashes in the United States -- one death every 13 minutes.

I will wager anyone that if all the cars on the road were computer driven there would be nowhere near 42,636 people killed per year.


I too can imagine the number dropping down far lower than that, even under a thousand, let's say.

I can then imagine the newspaper headlines the few times per year that computers do end up, for whatever reason, being responsible for a small handful of deaths: "COMPUTER GLITCH KILLS FOUR PASSENGERS IN HORRIFIC CAR ACCIDENT!"

Never mind that human "glitches" are responsible for dozens of such accidents every day at present. For whatever reason, I have a strong feeling that headlines such as those will scare people worse than the way things are now.


This suggests that any effort to introduce driverless cars should be preceded by a multi-year ad campaign that (ahem) drives home the >40k dead/yr. figure.

Presumably, it would be paid for by insurance companies. It could be rolled out as a general public awareness effort (i.e. packaged with generic reminders about wearing seatbelts, maintaining proper tire-pressure, etc.), combined with localized elements that included maps highlighting particularly deadly times and places where drivers should be especially vigilant.

Once the figure is widely accepted, the companies who would be offering the technology would have an established base against which they could run ads that credit themselves with lives saved. Assuming that these claims are backed up by public data assembled by trustworthy parties, even the most vociferous "if it bleeds it leads" news agencies would have a hard time terrorizing people BACK to the days of a 9/11 every month.

Also, while I hate to be cynical, there's an upside to corporate mass media, in that a lot of ad dollars spent by pro robo-car interests WILL make news agencies think twice about scaring the daylights out of people just for a ratings bump. After all, the whole point of ratings is to boost the price of ad sales.

I realize that this may seem like a lot of effort, but no industry takes a longer view than the insurance industry. If there's one group that has the time, money, and motivation to make the push, it's the guys who are paying costs associated with all the current grief and mayhem. Moreover, our not-so-clean political system does mean that legislators can be easily persuaded once insurance companies are comfortable that the technology is truly secure.

My guess is that any push would need Federal backing, and would be focused - initially - on the Interstate highway system, where much less reactive driving takes place.


It will be pushed as "driver assist".

A little siren goes off, warning you to brake. Navigation tells you where to go, and how.

An override button appears, allowing you to give the computer control in an emergency. Eventually, the button becomes a switch, and the switch gets left on.

"Autodrive" licenses (similar to automatic licenses) are introduced, allowing people to pass their driving test in full auto mode, provided they only drive cars in full auto.


So why would insurance companies actually want that future? Wouldn't car insurers be out of business if there were no more crashes?


And I worry about the case when it will become more blameworthy for a human rather than a computer to operate a vehicle.


Why? If computer-driven cars are much less likely to crash than human-driven cars, it would be entirely appropriate for there to be a social stigma against humans driving cars (because it would show disregard for the lives and safety of others). It would also be entirely appropriate to treat human drivers more harshly when they were responsible for accidents, as they would have had the choice of instead allowing a computer to drive and statistically probably avoid the accident.


You misinterpreted my sentence (but perhaps it wasn't your fault since I elided a crucial piece of information).

I did not say that human-driving ought not be morally blameworthy but, since I enjoy driving, I am understandably worried about a time when something that brings me pleasure will become unethical.


I suspect you will then be able to drive in a more controlled environment (and most likely a more interesting one than everyday city driving), the same way it works now for kart circuits and the like.


One problem is that even if there are far fewer road casualties with driverless cars, there will still almost certainly be some casualties, perhaps many. Who do we hold responsible for those?

If we blame the car makers, there may not be any car makers because nobody will want to take on the liability.

If we don't blame anyone then there is no incentive to improve the situation and no way to hold people accountable for sincere negligence.

We would have to accept that a consumer product will occasionally kill us, and come up with some standard to judge whether the manufacturer "tried hard enough" to prevent it.

I can't think of any precedent for that. There are plenty of products that can kill us, but only if we misuse them. A driverless car will have a few deadly design flaws that we only learn about from experience.

Maybe this should be publicly acknowledged up front and a legal framework established. That seems better than pretending it's completely safe, causing a shitstorm when something inevitably goes wrong.


Perhaps a new entity is needed? Think of a Zipcar-like service which is accountable for the vehicle. Large enough to absorb the occasional accident, and to pressure automakers for safety/efficiency improvements.


I can't help but think that people may not drive well with computer-controlled cars on the road. The decisions that a driver makes sometimes depend on what they think the decisions of other cars will be. Unless the computer-controlled cars account for this in their decisions, there may actually be more crashes, as the other cars don't do what human drivers expect. Another interesting result is that computer controlled cars may not be able to use the most efficient behavior until most cars on the road are computer controlled.


I imagine they would be designed to drive under the assumption that other cars are going to make unusual maneuvers, and to compensate for that.

During the period where both human and computer driven cars share the road, I suspect the human driven cars will get a speed advantage because of this.

Personally, I'll be more than happy to add ten minutes to my commute if it means I don't have to drive. I think a lot of people will be with me on that one.


Unions have a lot to do with this as well.


A friend at Union Switch and Signal told me that the Chinese subways (which can run full-auto) had to provide work for two people for each train.

One guy presses a large "Go" button to leave the station, which is automatically stopped at, and there's also an emergency stop button.

The other guy has an "open / close door" control.


In Chicago, there was a case some years ago where a woman was pulled under a train by a door.

She was carrying a very expensive violin. She got off the train and the doors closed between her and the violin, with the strap around her shoulder. Because the strap was thin, it didn't activate the door's obstruction detector.

Because the violin was so expensive, she didn't want to lose it, so instead of crawling out of the strap, she stood there and waited for the conductor to notice her. One of his jobs was to verify visually that there were no problems in the station before the train started moving -- but he didn't do that. She was dragged under the train and the wheels sliced off her legs.

Point being, those people are likely serving an important safety function. The conductor didn't do his job on the Chicago train and so a woman lost her legs. But a system with no human conductor would be an unsafe system by design.


As your story illustrates, humans and machines both have failure rates. Sometimes the presence of humans actually makes a system less safe. The meltdown at Three Mile Island, for example, would not have happened if a panicked human operator had not overridden the automatic emergency cooling system.


> The conductor didn't do his job on the Chicago train and so a woman lost her legs. But a system with no human conductor would be an unsafe system by design.

Perhaps, yes. But in this example, with no human conductor, the woman would have just sacrificed her violin instead of relying on the unreliable human conductor.


It's trivial to design an electronic system that would have performed the job the human conductor was supposed to have been doing.

Hell, my garage door 20 years ago did it.


The Skytrain system in Vancouver is all automated. I don't recall if it's had any accidents, but I've never heard of any.

http://en.wikipedia.org/wiki/SkyTrain_(Vancouver)


The system has an automated track safety system. It can get a little annoying at times as a rider — I've heard blowing leaves on the tracks can trigger the system, which is significant given the large majority of stations are elevated and can be exposed to the elements. This article from a few years back has described some of the more unfortunate cases: http://thetyee.ca/News/2008/11/18/SkyTrain/


Unions aren't against technological improvements; they're just against how quickly those improvements would be put into place, because a LOT of jobs would be lost. If driverless subways, for example, were phased in as workers retired, I'm sure there wouldn't be a problem. Or if re-training were provided for the workers before they were let go, that might be acceptable as well. It comes down to the fact that humans have to eat and educate their children; machines don't.

Also (as a separate issue) I thought the governments of the world wanted to create more jobs, not fire everyone who is currently working... ;)


Counterpoint: as an airplane passenger, my experience is the same whether there's a human pilot or not; I sit in my seat until the ride is over.

But in my car, if it could drive itself, I could get in the back seat and sleep. I could have a tiny apartment back there and read or play video games.

I think that if people will text while they drive, risking their lives for a tiny message, they will gladly let their car drive for the payoff of getting to do whatever they want during the trip.

Side effect: if people can sleep comfortably in the back seat while their car drives, suddenly a 4-hour commute may sound reasonable. That's going to screw with city planning, pollution, etc.


I've ridden the driverless subway system in Copenhagen. Apparently the Danish don't mind being driven by a non-living thing.


The fact that pilots are willing to travel on a plane is the best guarantee that the plane is safe, better than any regulatory licence/permit anyway. That's one thing you'd lose, as well as not having someone to hold directly accountable.

As for driverless cars... It just seems folk are getting a bit ahead of themselves - can computers really handle all the edge cases that come up while driving? Like when you have to break some road rules temporarily in order to circumnavigate an obstruction. Or when the car gets stuck. Etc.


"Londoners apparently do not feel comfortable being driven by a non-living thing."

Does not surprise me. Most Europeans have not even adapted to automatic transmission in cars, and still believe that a human can switch gears better with manual transmission (which has not been true for decades now). Makes me wonder if they ever adapt to fully automated driving.

(And yes, I'm saying this as a European).


A lot of automatic transmissions plain aren't that smart; I can think of several I've driven that were too conservative about changing down on a hill which made the cars much less responsive than they should have been. Also there's the old chestnut about autos being less fuel efficient / less powerful: http://en.wikipedia.org/wiki/Manual_transmission#Benefits

Either way, I don't think the prevalence of manuals in Europe is entirely (maybe not even mostly) about not giving up control to the machine.


Clearly that's not universally true, otherwise racing drivers would use automatic gearboxes. It may be true for everyday driving, but I'd need to see some citations for this claim.

Frankly, I'm a little dubious of automatic transmission. Perhaps I've just been driving the wrong cars, but I haven't found an automatic gearbox that can match a human being. For instance, if I'm descending a steep hill, I'll stick the car in a lower gear, but an automatic always chooses the higher gear.


Automatic transmissions avoid engine braking. If you want it, you need to choose it manually.

And personally I never do for the simple reason that brakes are a lot cheaper than transmissions. And I rarely descend a hill long enough and steep enough that brake heating is a serious problem. (And when I do I usually just slow down, and stay slow.)


I downshift on hills because that keeps more of my stopping power ready to use. Brakes heat up when you use them continuously, reducing their effectiveness.


>Clearly that's not universally true, otherwise racing drivers would use automatic gearboxes.

F1 drivers did until it got banned. In fact, if I'm not mistaken I believe our more advanced automatic transmissions came from F1.


Unless you create a transmission that reads my mind, or can see the road and surrounding traffic it will not shift "better". Faster? yes. More smoothly? yes. Better? No.


Well, these folks at Google have been working on something along those lines...


I would appreciate counterpoint instead of (or in addition to) the downvotes...


You can't beat this with a manual gearbox: http://en.wikipedia.org/wiki/Continuously_variable_transmiss...

They're an option on the more expensive Audis and a few other manufacturers have them.


This is technology that looks very promising but is not, from my research, mature.


What do you mean, not mature? This has been on the market in cars since the sixties. Audi has had it in their top models for over a decade. It's just expensive.


Since you mentioned Audi:

A quick search for "CVT reliability" produces pages of horror stories related to the Audi (and other) CVTs.

The CVT is only available on the A4 and A6 (not the A3, A5, A7, A8, Q5, Q7, et al). Also, the CVT is only available with the smaller engine options. Finally, the CVT cars are front wheel drive only, no quattro available.

That is what I mean by "not mature".


It's also difficult to tow a car with an automatic gearbox (you have to do it very slowly or put it on a trailer), and you can't start the car by pulling it if the battery goes flat.


Don't you think automatic gearboxes have just a bit of an image problem? They certainly used to in the UK.


The pilot should be there when the software fails, and that's why he shouldn't be busy with something else. It adds to safety, and it's worth it (from my point of view) whatever the cost.


You can't say "whatever the cost". What if it were $1 trillion per passenger? Obviously it's not, but there's some point at which you would change your mind. You have to assign a cost to these decisions in order to have a well reasoned discussion. The EPA, CPSC, NRC, OSHA, and OMB for example all use varying costs of life to make decisions with the value of one human life ranging from $1-$8 million.
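
To make that concrete, here is a minimal sketch (with made-up numbers, purely for illustration) of the kind of cost-per-life comparison those agencies perform: a safety requirement passes this crude test only if its cost per expected life saved falls below the assumed value of a statistical life.

    # Hypothetical example of regulatory cost-benefit arithmetic.
    # The $6M figure sits inside the $1-8M range mentioned above; the
    # $2B cost and 500 lives saved are invented for illustration.
    VALUE_OF_STATISTICAL_LIFE = 6_000_000

    def cost_per_life_saved(total_cost, expected_lives_saved):
        return total_cost / expected_lives_saved

    measure = cost_per_life_saved(2_000_000_000, 500)   # $4M per life saved
    print(measure <= VALUE_OF_STATISTICAL_LIFE)         # True -> passes the test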


"Cost" can also include the cost of poor decisions.

If 1 pilot in a hundred million decides to override the autopilot, and crash the plane into a landmark building, this is a cost.


This is easy... just make the engineers who built the subway sit inside. You're welcome.


I appreciate the rest of your comment, and I'm not sure exactly what you mean by "illegal" but if you mean "breaking an actual law" it is not "illegal" for a pilot in command of an aircraft "to read". Having said that, if you're doing something in the cockpit as pilot in command that doesn't pertain to piloting your aircraft, you're doing it wrong.

Autopilots are adjunct flight controls. They are not...the pilot in command.


No surprises there.

The transition to driverless cars is (IMHO inevitable. At some point it will be cheap enough that the additional cost will pale in comparison to the lives that will be saved as well as the simple convenience of being able to do something else while commuting somewhere.

Likewise I see this kind of thing replacing many forms of public transportation. There will simply be a fleet of cars. You'll say where you want to go and some system will route people and cars to destinations.

But, the transition won't be quick or easy. You need look no further than the aviation industry to see why.

Basically, automation in modern aircraft is a double-edged sword. It seems to erode the ability of pilots to actually fly [1], software errors have caused deaths [2], and (I can't find the link to this) I also read a study finding that in more automated planes, pilots are more likely to believe erroneous instruments rather than their own senses and experience.

The issue won't be how the car normally behaves because as demonstrations have shown, current systems require very little human intervention.

The issue will be extraordinary circumstances plus the huge liability problem of any errors.

Example: if someone runs a red light and causes a crash, killing someone, that person is responsible. If an automated car does the same thing, the manufacturer will be responsible.

That alone will impede adoption.

Instead I think you'll have what we already have: slowly adding automation to cars. Cars already have radars and can stop themselves from colliding, they can park themselves and so on.

But at some point the driver will need to go away and that will be a tremendously challenging leap forward for society.

[1]: http://www.tourismandaviation.com/news-4530--Pilot_Reliance_...

[2]: http://catless.ncl.ac.uk/Risks/8.77.html#subj6


People generally cite the lawsuit threat, but the threat cuts both ways. I've seen lawsuits for putatively malfunctioning anti-lock brakes, and I've seen lawsuits for not including anti-lock brakes. At some point, once it is reasonably well demonstrated that automated cars drive more safely than humans under some circumstances, someone is going to file a lawsuit for not having automated drivers. It's only a matter of time. That's when it really gets legally interesting.

Note I said "interesting", not "the beginning of the inevitable victory". I say "interesting" because I think we can pretty much all agree how this plays out until that happens, but I can't see past that event.


Yup. I foresee something like the current seat belt system, where even in situations when it's advisable not to be wearing a seatbelt and/or using the other mandated safety systems, it's still illegal not to.

The problem is a systemic one that stems from the belief that the gov't is the omnipresent nanny of its citizenry.

Note: Before I get downvoted, I recognize that statistically seat belts save lives, however, there are conditions in which seatbelts cause fatalities (even when used properly), and that the choice of whether to wear or not wear one should be that of informed consent.


There are conditions when vaccines kill children, but there is NO situation where the sensible choice is to not vaccinate your healthy child. The same with seatbelts.

Edit: Added "healthy".


"there are conditions in which seatbelts cause fatalities (even when used properly)"

Do you have an example?


Driving on an iced lake or any other condition where your car is more likely to become submerged than you are to run into something, being unable to exit the vehicle quickly can be fatal.


The typical example is an underage child in an adult seatbelt. Because of the difference in dimensions, a crash causes the seatbelt to impact the neck.

I'm not sure that would be classed as "used properly", though.


Is it a legal requirement for underage children to use an adult seatbelt? I know here in Sweden, for example, it is illegal to have a child under a certain height in a car without a proper child seatbelt/car seat. From a legal point of view, no seatbelt and an adult seatbelt are more or less equivalent.


Here (Australia) children have to use a correct child restraint and/or seat up to the age of 7.

I believe the legal point is the same as you've stated - the law is "not correctly restrained" or something.


Given informed consent for seatbelts, what do you think the ratio would be between people who choose not to wear one because they have completely informed themselves from reliable sources and calculated that they're more likely to survive without one than with one, versus people who will just leave their seatbelt off because it's uncomfortable?


I'm not sure but if a person prefers to be more severely injured in the event of an accident rather than uncomfortable all the times they are not in an accident then it's an appropriate choice for them. I don't feel it's my place or any other third party to make these choices for adults.


except when we have to pay for their surgery or whatever via their emergency room visit.


Sure, or even have to share emergency resources. I mean, why on earth would an ambulance attend a person who's seriously injured because they exercised their right to be comfortable when the ambulance could be attending someone who actually took some responsibility for his own safety and considers the well-being of the community at large.

Not that I'm this much of a bastard, but for those of us that have faith-based issues around community-provided health services.


Let's put aside the "who's paying for the ER" argument. A driver that loses control of the car trying to brace oneself or restrain a passenger that isn't wearing a seat belt can increase the severity of an accident for other people, too. A car crash victim that falls on to the street suddenly becomes a hazard to other vehicles, too.


The nanny state is what you get when you involve government in issues of health insurance and its kin.

The laws and regulations then turn towards cost reduction along with "taxes" to discourage higher health costs.

That is the only reason we have cigarette laws, seatbelt laws, and regulation of other 'endanger self only' actions.


As a non-smoker I like smoking bans, since smoking affects me as a by-stander, too. I would also be OK with people injecting heroin instead of smoking---anything that doesn't stink.


Spoken like someone who has never had to pick the used syringes from a public playground before their child could play on it.

But agreed in principle ;)


We need a deposit on syringes.


"pilots are more likely to believe erroneous instruments rather than their own senses and experience."

The human brain is not set up for three dimensions of orientation; unless you can accurately see the horizon, it's more likely that your perception of the plane's orientation and vector is wrong than that the instruments are. In most turbulent situations the brain is wrong and the instruments are right. This is why pilots are trained to fly on instruments only. On most planes the instrumentation system actively and accurately informs pilots that their instruments are broken when they are in such a state.


It's not just the human brain. Certain g-loads at certain vectors can cause a situation where two different scenarios are indistinguishable without outside reference, illustrated most famously in Einstein's elevator thought experiments.


Could you explain what you mean? My understanding was that Attitude indicators solved this problem using a gyroscope a long time ago. Maybe this falls into what you meant by an "outside reference", but it seems to me autopilots would clearly take these measurements into account.

http://en.wikipedia.org/wiki/Attitude_indicator


It does, but they can fail. Humans can fail at assessing relevant information as well.


I couldn't get past the (unclosed parenthese in the second sentence. It's like my inner programmer went into OCD overload...


I couldn't get past the use of "parenthese"[] as a singular noun in the first sentence. It's like my inner pedant went into OCD overload.

1:Parenthesis :: >1:Parentheses


... hoist on my own petard.


touche



Speed limits are another realm in which regulation can hold back the development of driverless cars, besides merely allowing them on the streets in the first place.

With computerized drivers, it will finally be possible to fully enforce speed limits by introducing some ceiling to the speed attainable by the car. I'm sure some "well-meaning" legislator will make it his or her priority to ensure that speed limits are never exceeded. However, at least in Massachusetts, if you go on the highway, everyone (including police) drives at ~75 MPH even though the posted speed limit is 55 or 65 MPH. Few will buy a car with this kind of handicap, were it to exist -- and I worry that it will. Many speed limits in the US were imposed decades ago, with less safe and responsive cars -- it would be a pity if potentially revolutionary technology advances were thwarted by this fact.

Legislation has already crippled or made useless many useful automotive innovations. In the US, technologies that allow for adaptive cruise control (maintaining a distance to the car in front of you) can only decelerate the car, not accelerate it. This forces the driver to constantly accelerate, greatly reducing the effectiveness of this feature. Many computer-laden vehicles with navigation systems are similarly crippled -- they automatically "lock" when the car is in motion, and some (like in Lexus vehicles) cannot even be overridden by people sitting in the passenger seat... often causing unintended risks like drivers pulling over on busy highways just to readjust their GPS target.
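
If that description of US adaptive cruise control is accurate, the restriction amounts to a controller that may only ever lower the commanded speed. A deliberately simplified sketch of that logic (hypothetical, not any manufacturer's actual implementation):

    # Brake-only adaptive cruise control, as described above: the system can
    # slow the car to hold a safe gap, but never adds throttle, so the driver
    # must keep accelerating back to the set speed. Simplified illustration only.
    def acc_commanded_speed(set_speed, current_speed, gap_m, lead_speed_mps,
                            min_gap_s=2.0):
        safe_gap_m = current_speed * min_gap_s      # "two-second rule" distance
        if gap_m < safe_gap_m:
            # Too close: slow to (at most) the lead car's speed.
            return min(current_speed, lead_speed_mps)
        # Gap is fine, but a decelerate-only system never speeds the car
        # back up toward set_speed; it just stops slowing down.
        return min(current_speed, set_speed)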


Or they introduce a robot-only lane where the cars can decide for themselves what a safe speed is, making speed limits obsolete. When cars can communicate with each other about dangers, much higher speeds are possible with less distance between cars.
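
A quick back-of-the-envelope calculation shows why: the gap you need is roughly your speed times your reaction time (assuming both cars brake about equally well), so replacing a ~1.5 s human reaction with a ~0.1 s networked one shrinks the required following distance by an order of magnitude, even at much higher speeds. Illustrative numbers only:

    # Rough headway arithmetic for the robot-only-lane argument above.
    def following_distance_m(speed_kmh, reaction_time_s):
        return (speed_kmh / 3.6) * reaction_time_s  # convert km/h to m/s

    print(following_distance_m(100, 1.5))  # human at 100 km/h:  ~42 m
    print(following_distance_m(100, 0.1))  # networked car:      ~2.8 m
    print(following_distance_m(200, 0.1))  # even at 200 km/h:   ~5.6 m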


An absolutely fantastic idea. It's a shame it will never happen.


If firmware is limiting your car's speed, the firmware will get reverse engineered and patched to allow any speed.

Jailbroken cars: mark my words, that will happen the day after automated drivers are released.


"With computerized drivers, it will finally be possible to fully enforce speed limits"

Why the hell would you want speed limits on a road with computerized cars? That's the whole point of eliminating human error and human limitations from the control loop, that 200 mph+ driving can be extremely safe.


Speed limits, in some areas, also serve to limit noise.


That is probably a small fraction of all speed limits.


Yes, at the moment. But if safety goes away as an issue, we will hit the issue of sound next.


The article doesn't make its case very well. The core problem people are presumably worried about is safety, and saying they have a "good safety record" is hardly enough to reassure the senators, etc. who would presumably be responsible for relaxing restrictions.

For example, what about edge cases? Suppose the Google car does just fine in normal driving conditions, but what about a blizzard with 26 mile per hour gusts of wind (as I drove in recently), or a tractor trailer flipping over on the road in front of you? Humans have a certain intuition that allows them to do bizarre twitches in extreme situations (even including supernormal strength) that presumably no machine intelligence will be able to approach for a long time (if ever).

Or what about the possibility of someone hacking the car? Could a worm engineered by some hostile government take millions of cars off the road -- or, worse, cause them all to steer into the median and cause mass damage and thousands of instant casualties?

It is, frankly, irresponsible not to consider edge cases like these when drafting legislation, and while I'm all for gradual introduction and more testing, the author of this article has convinced me that senators sitting on their hands not doing anything are probably acting in the interests of the people much more so than those who wish to simply hand over driving and navigation functions to machines as soon as possible.


I wonder if driverless cars could start out as a tool for people with disabilities. If such use were challenged, I can imagine the supreme court taking seriously a case by a person who is quadriplegic or blind demanding the right to use a self-driving car. If they can prove them safer, it will be hard to find a compelling government interest that could offset denying the use of this assistive technology.


They appear to be starting as 'driver aids' on new cars. Today's higher-end cars have adopted technology that alerts you when you drift out of your lane, watches your blind spots, safely follows the car in front of you, performs emergency stops, parallel parks itself, and adjusts vehicle dynamics in a multitude of ways.

My assumption is that this is a first step towards driverless cars, as it provides the manufacturers with a way to test critical technologies in a safe environment. If the dealerships aren't downloading data from these systems, and sharing it back to the engineering groups, that's an enormous wasted opportunity.


I believe your assumption is correct as well. All of these 'new high tech safety toys' are the tip of the iceberg. With all of these components being used and people becoming comfortable with them, we will soon see more adoption towards driverless cars.

I think another major milestone towards this will be the next generation of GPS, with the increased accuracy it will help guide the cars where they need to go.


One idea would be to make some long haul roads, or sections of them, completely driverless. Maine to Miami along a section of I95, or NYC to LA. We could start the test with tractor trailers. Let them drive for a few years and tune the system. There would be a huge economic benefit to allowing trucks to run 24x7 without drivers.


Totally - I've been saying this forever: make the current HOV system enabled for driverless cars. Then it's a competition between fast driverless & slow driven - let's see what people do.


> There would be a huge economic benefit to allowing trucks to run 24x7 without drivers.

What would happen to the current drivers? Which new jobs should they pick?


Who cares?

There was a time when 80% of the population used to work on farms. Now much less than 10% work on farms in the western world. Do we have 70% unemployment?


The increased unemployment due to automation isn't a new problem; it's just finally getting attention. As also said in reply to you, farms make the perfect example. An acre produces far more food now than at any point in the past.

The bigger question is: Will we enter a new social contract where all people benefit from automation, or will only the property owners? Currently, we're trending heavily toward the latter. Regardless, that is a political issue, and unavoidable. Whatever we decide, even to my detriment, I'm of the opinion that we shouldn't let it guide technological progress. At least not just yet.


They can become candlemakers: http://bastiat.org/en/petition.html


Actually, this is an interesting question from a sociological point of view.


We don't need driverless cars, we need carless people:

http://www.carfree.com/


Given the looming energy crisis, the coming climate crisis, and the "peak everything" crisis, we'll get there sooner or later.


Right. The internet has taken away a large part of the reasons for driving or (non-entertainment) travelling.

Making more use of telecommuting will drive down the number of accidents without the potential risks and liability issues of driverless cars.

Not that driverless cars aren't neat, but I agree it is the solution to the wrong problem.


This would make riding my motorcycle much safer, I like it.


If driverless cars start reducing the car road toll then motorbikes will start to look really dangerous:(.

I suppose we could get self driving motorbikes but that would be missing the point of riding.


Given that a huge proportion of motorcycle accidents, maybe even the majority, are caused by people in cars turning in front of or merging into motorcyclists, I doubt it.


>But it’s clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits.

They did try in the 19th century though, at least in the UK, with the Locomotive Acts[1]. The way those laws went out of their way to protect the status quo (i.e. horse-powered transport) is an interesting parallel to today's possible transition from human-controlled to computer-controlled transport.

[1] http://en.wikipedia.org/wiki/Locomotive_Acts


"The Locomotive Act 1865 (Red Flag Act):[5] Set speed limits of 4 mph (6 km/h) in the country and 2 mph (3 km/h) in towns. Stipulated that self-propelled vehicles should be accompanied by a crew of three: the driver, a stoker and a man with a red flag walking 60 yards (55 m) ahead of each vehicle. The man with a red flag or lantern enforced a walking pace, and warned horse riders and horse drawn traffic of the approach of a self propelled machine."


Some things aren't brought into focus here.

(1) Off-highway driving happens. That means that the algorithms have to manage an uncontrolled environment where 'anything' can happen.

(2) It is very expensive to create bug-free software.

(3) You can't iterate by failing fast on life-critical systems after it is released. Failure means killing someone.

(4) Legal liabilities. It's not going to work to say something like, "This car's driver software is not warranted free from defects".

(5) Humans can manage situations utterly outside the norm; algorithms cannot see beyond the vision of the designer.

I work in an industry which operates below the levels of software assurance that the medical/flight industries work at, and it is incredibly painstaking as it is. A fully automated car will be very expensive to build.

I am not a paranoiac regarding software. I am a paranoiac regarding software bugs and the limits of the software designers.


I suspect that this kind of "risky" technology will be deployed first in more adventurous countries, like China.


Regulation and fear are to be expected. The question is, what to do about them? I predict the largest PR campaign in the history of technology. Public opinion generally drives regulation. So less public fear will lead to less regulation.

While I have no way to prove it, I'd bet my right hand that Google's PR people made this story happen. I'd bet they also made the first NYT piece blowing up the Chauffeur project happen and they made it look serendipitous for authenticity. I think "The Suit is Back," and I think it's going to come back again and again.

Prediction: Driverless cars will be portrayed in a very positive way in a major motion picture within the next year.


This is the crux of the matter:

> imagine that the cars would save many lives over all, but lead to some bad accidents when a car malfunctions. The evening news might show a “Terminator” car spinning out of control and killing a child. There could be demands to shut down the cars until just about every problem is solved. The lives saved by the cars would not be as visible as the lives lost, and therefore the law might thwart or delay what could be a very beneficial innovation.

It's otiose to point out that the premise of personal motor vehicles is not called into question every time a human driver spins out of control and kills someone.


Human psychology operates in a realm only dimly linked to logic, my friend. Consider the response in the US to the "risk" of terrorism, which in quantitative terms is tiny.


I'm not sure I trust the underlying architectures that are being developed with my life...

DDOS and MITM attacks take a whole-new meaning if the networked entities are 3-ton objects moving at 65 mph.


It's possible to prove that driverless cars can be safe, and that's by keeping cars from having accidents. If safety systems can keep human drivers from having accidents by stopping vehicles before they can have an accident, they can do the same for robotic vehicles.

Such systems would place limits on how fast you could accelerate, turn the wheel or apply the brakes. And they would also brake for you when other vehicles, pedestrians and animals appear to be on a collision course.
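
As a toy illustration of the braking part (not a production collision-avoidance system, just the shape of the check such a safety layer might run): if the time-to-collision with whatever is ahead drops below a threshold, brake regardless of what the driver -- human or robot -- is commanding.

    # Simplified time-to-collision check; thresholds and units are illustrative.
    def should_auto_brake(gap_m, closing_speed_mps, ttc_threshold_s=2.0):
        if closing_speed_mps <= 0:          # not closing on the obstacle
            return False
        time_to_collision_s = gap_m / closing_speed_mps
        return time_to_collision_s < ttc_threshold_s

    # Example: 30 m gap, closing at 20 m/s (72 km/h) -> TTC = 1.5 s -> brake.
    print(should_auto_brake(30, 20))        # True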

Such systems will have to be proven in the real world. And we are starting to see them. The newest Mercedes have such features. I predict that full blown systems will dramatically lower accidents for older people, teenagers and those who drive under the influence. Eventually all new cars will have these systems.

And by then it will be a lot easier to trust the machines.


So much of technical progress happens through delivering most of the value of the previous solution at a fraction of the cost (email vs postal mail). Society seems to rule this kind of progress out for a few industries like health care, I assume something similar is happening here.


The problem with healthcare is not that progress is being ruled out but that healthcare is highly labor-driven. So far there's no machinery that can replace a doctor.

http://en.wikipedia.org/wiki/Baumol_effect


> The problem with healthcare is not that progress is being ruled out but that healthcare is highly labor-driven. So far there's no machinery that can replace a doctor.

As a matter of policy, the FDA will not approve any device that makes a medical diagnosis without human intervention. [1]

Recently there have been numerous studies and much punditry accusing the American medical regulatory environment of stifling innovation and driving up costs. [2][3][4] The arguments are strikingly similar to Cowen's argument re autonomous cars.

[1]: http://www.technologyreview.com/biomedicine/37596/ [2]: For a quick summary, see http://reason.com/blog/2011/05/25/will-fda-regulation-kill-t... [3]: A much discussed recent report [PDF] is at http://www.nvca.org/index.php?option=com_docman&task=doc... [4]: An article in JAMA opined similarly [PAYWALL], http://jama.ama-assn.org/content/305/15/1523.extract


Ars Technica did a great series on the technology and economics of self driving cars a few years ago:

http://arstechnica.com/old/content/2008/09/future-of-driving...


A somewhat similar note, put very nicely by James May from Top Gear:

http://www.youtube.com/watch?v=iS0IxnxwJSU


Perhaps it's because software has an obvious history of being buggy. If this fails, it won't be a web service being down for a while; it will be hundreds of lives at risk on a 24-hour-a-day basis. Maybe a little wait-and-see is a reasoned approach for a complex interactive dynamical system like this?


I don't think there is a computer in the world that has a worse uptime than myself. Even old cheap Windows ME boxes did a fair enough job of being able to run without impairment for more than 20 hours or so.


The complexity of the applications will be far greater than Windows ME. Good article on The Long, Dismal History of Software Project Failure - http://www.codinghorror.com/blog/2006/05/the-long-dismal-his.... Before we let everyone's self-driving mashup on the road it might be good to figure out if they work first.


"There were nearly 6,420,000 auto accidents in the United States in 2005."

Can we stop calling all motor vehicle crashes accidents? If the driver was drunk or purposely distracted, it is NO more an accident than if I was randomly firing a gun hoping no one got hit.


Maybe we just need to emphasize the negative impact of long commutes. Suddenly you have that commute time to do something else. :)


"No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights."

Having lived in Oregon, Arizona, and California, I have never had anything other than emissions routinely inspected. Demonstrate a car smart enough to monitor its own brake pad wear, alert on burnt out bulbs, and provide a clear readout of all detected issues (i.e. not a coded blinking service light, or plug interface) before you start trying to make it drive itself.

(I do love the idea of an automated train of cars, and driving my drunk self home, though)


> Demonstrate a car smart enough to monitor its own brake pad wear, alert on burnt out bulbs, and provide a clear readout of all detected issues (i.e. not a coded blinking service light, or plug interface) before you start trying to make it drive itself.

I don't know about other cars, but all Porsche models will do exactly this. Granted, it doesn't have as many sensors as I would like -- leading to it only giving an 'engine warning' light for innumerable problems -- but when it comes to brakes, oil, lights, etc. it has a pretty fine-grained detection pattern.


Same with the Jaguar XJ since 1988. "Brake pads low" is neat to see show up in the display. The XJ has more sensors so it gives more than just check engine.


You're right. A gradual easing in to the idea of smarter cars would make driver-less cars less of a shock in the future.


Can your average human driver handle any of those situations either? I wager not.

"alert on burnt out bulbs"

I'm not sure I've seen a car from the current or past 2 decades that doesn't do this...


My aging BMW had most of the features you mention (may it rest in peace). It indicated how many brake lights were out, whether the fluids were low, and when the brakes were worn. It also had electronic stability control, which made driving in the snow possible and driving in the rain much safer and faster.

We also have mandatory annual safety inspections where I live, which should catch things like worn suspension, rust-damaged parts, etc.



