I think that self-driving cars should act in the self-interest of their owners. This will encourage people to adopt self-driving cars and decrease road deaths.
Also, if you are at a rail yard and see something hurtling toward five workmen, and you could push a fat man in front of it to protect them, you should refrain. The workmen had the opportunity and the duty to prepare for and prevent rogue train cars. The fat man should not have a duty to be constantly on guard against assault.
People make this out to be far more complex than it needs to be. Just give the car a fast connection to the credit bureaus and have it minimize the sum of the FICO scores of those killed.
Eh, I really don't think people on HN tend to explicitly value others according to wealth. The failure mode some people have is not being fully aware of the degree to which a certain level of wealth or stability during childhood is a prerequisite for all but a very few to get to a solid technical [self-]education.
It's satire, but not meant to be about HN readers. It's just an amusing way to cut the Gordian Knot and riff on society's treatment of the less well-off.
> I think that self-driving cars should act in the self interest of the owners.
To what extent? At some point, we're going to have to decide as a society what call a self-driving car has to make in situations like "I can run down this family in the cross-walk or get rear-ended by that oncoming vehicle".
When autonomous vehicles encounter an obstacle, they brake. Moreover, why, in this problem that appears in absolutely every thread about self-driving cars, is there a family in a crosswalk with no awareness whatsoever of a speeding vehicle? Are they deaf and blind without assistance? This is not a joke.
The car will be aware of the presence of the crosswalk. It will slow long before arriving at it. So the deaf and blind family will likely be safe.
I'll concede that there are some scenarios where choices may be required:
e.g. Child darts in front of car. Slam on brakes and hope for best or swerve off road (which probably isn't a cliff) or into traffic? And perhaps there is a different calculus if the car isn't carrying a passenger.
But, for the most part, these trolley arguments are pretty uninteresting from a technical, as opposed to a philosophical, perspective. It's going to be very rare that an autopilot would even in principle be able to reliably forecast a significantly better outcome than hitting the brakes and/or otherwise protecting itself/its occupant as much as possible.
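For a rough sense of why "just hit the brakes" dominates, here's a back-of-the-envelope stopping-distance sketch. The numbers are illustrative assumptions (dry pavement, a friction coefficient of about 0.7, roughly 1.5 s of human perception-plus-reaction time versus 0.2 s of sensor-plus-actuator delay), not measurements:

    # Back-of-the-envelope stopping distance: reaction distance plus braking distance.
    # All constants are assumptions for illustration, not measured values.
    G = 9.81   # m/s^2
    MU = 0.7   # assumed tire-road friction coefficient (dry pavement)

    def stopping_distance_m(speed_kmh, reaction_s):
        v = speed_kmh / 3.6                 # convert km/h to m/s
        reaction = v * reaction_s           # distance covered before the brakes bite
        braking = v * v / (2 * MU * G)      # kinematics: v^2 / (2 * mu * g)
        return reaction + braking

    for speed in (30, 50, 80):
        human = stopping_distance_m(speed, reaction_s=1.5)   # assumed human delay
        robot = stopping_distance_m(speed, reaction_s=0.2)   # assumed autopilot delay
        print("%d km/h: human ~%.0f m, autopilot ~%.0f m" % (speed, human, robot))

At 50 km/h that works out to roughly 35 m for the human and 17 m for the autopilot; shaving the reaction delay already buys more than most clever swerve planning would.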
> e.g. Child darts in front of car. Slam on brakes and hope for best or swerve off road (which probably isn't a cliff) or into traffic? And perhaps there is a different calculus if the car isn't carrying a passenger.
I think it's fair to say that the car might have a decision engine that chooses to preserve human life at the cost of human injury.
To avoid killing the child, it may swerve into traffic if and only if doing so has a high likelihood of survival for all passengers.
If it can't determine that, it'll take the conservative approach and brake to mitigate the damage of the impact to the child.
Regardless, the decision engine of the car isn't perfect. I'd have a hard time seeing it predict that swerving left causes a 4-car pile-up while swerving right causes a 5-car pile-up. How could it know? At some point, it'll just short-circuit its decisions because time is passing, and it's critical to do something rather than simply slow down and maintain course.
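To make that concrete, the policy being described is roughly the following. This is a toy sketch, not anyone's actual implementation; the confidence threshold, time budget, and option names are all invented for illustration:

    import time

    # Toy sketch of the policy above: swerve only if it looks clearly survivable
    # for everyone, stop deliberating once the time budget runs out, and fall
    # back to braking. All thresholds and option names are invented.
    SURVIVAL_CONFIDENCE = 0.95   # assumed bar for "high likelihood of survival"
    DECISION_DEADLINE_S = 0.05   # assumed deliberation budget, in seconds

    def choose_action(options, estimate_survival, started_at):
        """options: candidate maneuvers, e.g. ["swerve_left", "swerve_right"]."""
        best, best_score = "brake", 0.0
        for option in options:
            # Short-circuit: out of time, commit to the best thing known so far.
            if time.monotonic() - started_at > DECISION_DEADLINE_S:
                break
            score = estimate_survival(option)   # likelihood that everyone survives
            if score > best_score:
                best, best_score = option, score
        # Only swerve if it clears the bar; otherwise the conservative default.
        return best if best_score >= SURVIVAL_CONFIDENCE else "brake"

Everything interesting is, of course, hidden inside estimate_survival, which is exactly the part that can't reliably distinguish a 4-car pile-up from a 5-car pile-up in the time available.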
They sometimes break [sic]. I think your position requires a fully autonomous world. Instead, imagine that the car behind you continues to barrel forward instead of slowing itself. Your autonomous vehicle then has a decision to make.
What is the decision? The car will follow the rules of the road exactly. If the car behind it continues to barrel forward, that driver did not allow for adequate stopping distance. In this scenario, that driver is at fault.
Of course that doesn't say anything for the safety of the child, but just as we don't expect humans to be able to account for every single scenario, neither can we require a computer to do so.
For every one of these supposed ethical dilemmas, replace the computer with the most pedantic driver imaginable: this person has literally memorized the driver's manual. Ask that person how they would respond in that situation. That is how the computer would respond. There are no choices to be made: the choices have already been made by the legislators who enacted the driving laws.
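In other words, the claim is that the controller does rule lookup, not ethical optimization. Something like this toy sketch, where the situation labels and rules are invented for illustration:

    # Toy sketch: the "pedantic driver" as a fixed table of legislated responses.
    # The situation labels and rules here are invented for illustration.
    RULES = {
        "obstacle_ahead": "brake and stop in your own lane",
        "pedestrian_in_crosswalk": "yield and stop before the crosswalk",
        "tailgater_behind": "maintain lane; the following driver must keep a safe distance",
    }

    def respond(situation):
        # No weighing of lives, no optimization: apply the rule already on the
        # books, defaulting to the most conservative legal action.
        return RULES.get(situation, "slow down and stop in your own lane")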
I imagine the first set of self-driving cars will require a huge bumper sticker (like the ones they put on learner cars: "STUDENT DRIVER") that indicates this car will not exceed the speed limit and will not deviate from the rules. Everyone else can adapt accordingly.
There's an implicit decision there - choosing between you (the passenger in the autonomous vehicle) and the people on the street. You can't have your cake and eat it too. Fault isn't what's at stake here.
You actually DO require the computer to account for every single scenario - since we DO require humans to do so. Nobody says "Oh, it's okay that they <<insert some negative driving consequence>> - nobody expected that they'd be able to handle it!"
Driving laws written by legislators aren't how people actually interact with roadways, except as a basic framework.
I do agree with the self-identification of autonomous vehicles, especially if they replace LIDAR. However, there are already highway-capable autonomous systems driving around right now (Teslas are an example). There's just no test case yet.
>There's an implicit decision there - choosing between you (the passenger in the autonomous vehicle) and the people on the street.
Does the law ever require a person to break the law? That is the decision you've proposed the computer make. This is a serious question, not a pithy comment. Any action other than immediately stopping breaks the law: swerving into another lane involves changing lanes without signaling or entering oncoming traffic. Swerving onto the sidewalk breaks the law as well.
I contend that we do very often say "this situation is unfortunate, but you complied with your instincts as they have been developed within the confines of traffic safety law."
I also contend that fault is at stake and only fault can be considered at stake. We do not expect people to correct other humans' mistakes by breaking the law, correct? So why do we expect a computer to correct for all humans' mistakes? There will certainly be tragedies involving autonomous vehicles. But to count that as a mark against the technology is holding it up to a standard we have never applied to any other technology.
The computer merely acts in accordance with traffic laws. That there are two possible outcomes (passenger injury and pedestrian injury) does not imply that a choice has been made. In fact, those are not the only two outcomes; we've merely whittled it down to those for the sake of discussion.
I don't understand why this keeps coming up. I have never seen or even heard of a driver getting into a situation where there's any sort of moral dilemma in how they react. There's always an obvious best answer or a bunch of equivalently good answers.
For the one-in-a-trillion cases where this is not true, it's OK if the car reacts sub-optimally. It will still be vastly better than human drivers, who frequently react sub-optimally in unambiguous situations.
There are physical limits on how fast or smart you can make braking.
We make thousands of unconscious decisions while driving - many of them moral/ambiguous. "Should I pass this car in the intersection even though it is breaking the law?" "Can I squeeze past this bicyclist even though it forces them towards the curb?"
The core of most of these arguments assumes a fully autonomous world - and it just won't work like that.
If you are in a situation where there is a risk of falling off a mountain and you are driving fast enough that you can't safely stop if you see people in front of you, you are doing something wrong. A self-driving car should definitely not even get into situations where it's forced to make decisions like that.
I know it is rare, but this situation can arise through no error of the car's controls. Consider the failure of the car's master cylinder. If your brakes don't work, then this might be the exact choice you make.
I don't want to sound overly cynical, but I somehow doubt that self-driving cars are going to suddenly make cars more reliable. Mechanical failures are not a common cause of crashes - http://www-nrd.nhtsa.dot.gov/pubs/812115.pdf - according to the NHTSA, they accounted for 2% of crashes between 2005 and 2007. Still, self-driving cars will need to be able to handle situations where there are mechanical failures.
> Consider the failure of the car's master cylinder.
I realize it's just one example to make a point, but automotive master cylinders haven't been able to completely fail all at once for about fifty years. What causes a master cylinder to fail is a failure of the seals. But because modern master cylinders have "dual channels" (for lack of a better description), half of it might fail, but it's so unlikely for both sides to fail at the same time that I never even heard of it in my years as a professional mechanic, let alone actually saw such a failure. Though a self-driving car would still need to account for the fact that only half the wheels have functioning brakes.
Any other failures I can think of on modern cars are going to be so severe that merely getting the car to the side of the road might be a challenge, let alone worrying about the bus full of nuns and orphans in front of you. Examples would be catastrophic tire or suspension failure (a ball joint breaking loose, for example, making the car very difficult to steer).
On the flip side, it could be argued that late-model cars have worse failure modes. By that I use our Nissan Leaf as an example. There is no direct mechanical linkage on any part of the car that makes it stop, go, or steer. Parking brake is electrically actuated. Brakes are run on a wire. "Gas" pedal is wire. I'm pretty sure the steering is drive-by-wire, but I'd have to go back and look it up to make the claim with confidence. I'm sure there are redundancies, but I'm picturing an ECU letting its smoke out and then you might as well just let go of the wheel and pray to your preferred deity.
Anyway, this has been Captain Pedantic checking in, thanks for reading. I now return you to the point you were actually trying to make.
> By that I use our Nissan Leaf as an example. There is no direct mechanical linkage on any part of the car that makes it stop, go, or steer. Parking brake is electrically actuated. Brakes are run on a wire. "Gas" pedal is wire. I'm pretty sure the steering is drive-by-wire, but I'd have to go back and look it up to make the claim with confidence. I'm sure there are redundancies, but I'm picturing an ECU letting its smoke out and then you might as well just let go of the wheel and pray to your preferred deity.
This is not completely correct: your Leaf does have a purely electronic throttle and the e-brake is actuated by a solenoid, but no production passenger car presently uses brake-by-wire, and the first (and presently only) production passenger car with drive-by-wire is the Infiniti Q50. While your Leaf does have additional electronic braking means to make the regenerative system work, if the ECU lost its magic smoke the service brakes would still work. As to drive-by-wire (perhaps you're confused with electronic assist?), on the Q50 at least there is a physical linkage to the steering in case of electronic failure.
> This is not completely correct: your Leaf does have a purely electronic throttle and the e-brake is actuated by a solenoid, but no production passenger car presently uses brake-by-wire
Thanks for the correction. I ass-u-med (and actually do recall reading it somewhere, but just some dude on a Leaf forum probably) that because of all the computerized fiddling required to balance regen braking with mechanical braking, the brake pedal was just a fancy potentiometer.
As for steering, I went to the garage between builds and looked: yup, there's a mechanical linkage in there. Shows how often I lift the hood on that thing. :-)
I'll leave my error-filled post as-is as an example of how being a pro mechanic 20 years ago doesn't mean you know everything about modern cars. :-P
I believe the only steer-by-wire model on the roads right now is some Infiniti. Even that has a clutch which engages the steering mechanically if the car loses power.
Electric cars potentially have additional braking redundancy. Even if the brakes failed entirely, they could drive the motor in reverse to stop the car. I don't think any of them actually do this, since total brake failure doesn't happen very often, but they could.
You pretty much have to trust the ECU. A failing ECU could just punch you in the face with the airbag and floor the accelerator. Fortunately, they seem to be pretty reliable.
You are quite correct. This was a poor example. A better example would be a failure of a brake hose or brake line, where the system loses pressure quickly. (I had a '99 Suburban do this recently. It was quite an adventure.)
Your examples of tire/ball joint failure are of course much better.
Yes, but have you ever stopped with your e-brake? First, it doesn't usually affect all 4 wheels, and second, it isn't antilock. If your master cylinder or brake lines fail while you're traveling 60 mph, the e-brake might not do the job. I know the e-brake in my Toyota Tundra wouldn't stop me if I were going 60 mph.
Again, my point was not to say that this was likely to happen, but merely that we should ensure that there is a framework for dealing with these kinds of scenarios where a car needs to choose. According to the document I shared earlier, 95% of accidents involve driver error in some way. Self-driving cars are sure to improve those statistics, but for public acceptance we need to answer for the other 5% of accidents too.
EDIT: as Saberworks below points out, there is no emergency brake in the Tundra, or in fact in almost any modern car. I was loosely equating the parking brake with an emergency brake because in many cars there is no "emergency brake", so the parking brake is your only other choice.
Most people think of the parking brake as an emergency brake for some reason. These things are wimpy brakes that are designed to prevent a vehicle from rolling if parked incorrectly or on a hill. They are generally cable actuated and don't have much stopping force if the vehicle is in motion. The Tundra has a foot-activated ratcheting parking brake and I don't recall it ever being referred to as an "emergency brake" in the manual (although it's been a while since I read mine, I could be wrong).
> Most people think of the parking brake as an emergency brake for some reason.
That's because that's what it used to be called. In the owner's manual. I imagine the change might be due to liability more than anything else. That, and just marketing. No one wants to imply that you'll have an emergency in your shiny, new Chrysler.
> These things are wimpy brakes that are designed to prevent a vehicle from rolling if parked incorrectly or on a hill. They are generally cable actuated and don't have much stopping force if the vehicle is in motion.
Fifty years ago, this would describe most brakes on consumer vehicles [0]. They're wimpy because they're drum brakes (though more upscale vehicles have rear discs). They are difficult to modulate because they're actuated by flexy cables. But cable-actuated drum brakes were what you got when you bought a 1960 Chevy.
As for stopping force, a parking brake should have all of the stopping power that the rear brakes usually have. The reason they often don't have much stopping power is that most people don't have them adjusted regularly (meaning the cable runs out of travel before the full available force is applied), or more commonly they don't use it at all and the cable rusts up. If properly maintained, you should be able to lock the rear wheels at speed (rough numbers below).
[0] EDIT: give or take. I don't remember exactly when cars went hydraulic, and the web's no help. I did own a '60s Harley/Aermacchi motorcycle that had cable-actuated brakes, FWIW.
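As a rough sanity check on rear-brakes-only stopping power, here's a back-of-the-envelope comparison. The friction coefficient and the 40% rear axle load share are assumptions for illustration, and weight transfer under braking makes the rear-only case worse in practice:

    # Back-of-the-envelope braking distance from 60 mph: all wheels vs. rear only.
    # Constants are illustrative assumptions, not measurements of any particular truck.
    G = 9.81
    MU = 0.7                     # assumed tire-road friction coefficient
    REAR_WEIGHT_FRACTION = 0.4   # assumed static share of weight on the rear axle

    v = 60 * 0.44704             # 60 mph in m/s

    all_wheels = v * v / (2 * MU * G)
    rear_only = v * v / (2 * MU * G * REAR_WEIGHT_FRACTION)

    print("all wheels: ~%.0f m, rear brakes only: ~%.0f m" % (all_wheels, rear_only))

That comes out to roughly 52 m with all wheels versus 130 m with the rear brakes alone: a well-maintained parking brake will stop the truck from 60 mph, just over a much longer distance.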
The first part sounds like a design flaw of your Toyota Tundra rather than a self-driving-car issue.
Secondly, is it considered a mechanical failure when the master cylinder fails and the driver is too slow to find the emergency brake actuator in time? Self-driving cars do not suffer from that problem.
There are very few things that can happen totally unexpectedly and that you could avoid if you knew about them. Yes, a falling rock can crush your car, but there is not much you can do about it. My point is that if you are driving on a mountain road with a steep cliff, you should be driving slowly. If you see rocks on the road, you can expect larger rocks on the road ahead of you, and you should be driving slowly. If you have to decide between driving off the road and falling off a mountain, or driving through people, you are driving badly, and self-driving cars shouldn't be allowed to drive badly.