
I think that self-driving cars should act in the self-interest of their owners. That will encourage people to use self-driving cars and decrease road deaths.

Also, if you are at a rail yard and see something hurtling toward 5 workmen and could push a fat man in front of it to protect them, you should refrain. The workmen had the opportunity and the duty to take precautions against rogue train cars. The fat man should not have a duty to be constantly on guard against assault.




People make this out to be far more complex than it needs to be. Just give the car a fast connection to the credit bureaus and have it minimize the sum of the FICO scores of those killed.


...huh?

Oh, this comment is satire.

Eh, I really don't think people on HN tend to explicitly value others according to wealth. The failure mode some people have is not being fully aware of the degree to which a certain level of wealth or stability during childhood is a prerequisite for all but a very few to get to a solid technical [self-]education.


It's satire, but not meant to be about HN readers. It's just an amusing way to cut the Gordian Knot and riff on society's treatment of the less well-off.


> I think that self-driving cars should act in the self interest of the owners.

To what extent? At some point, we're going to have to decide as a society what call a self-driving car has to make in situations like "I can run down this family in the crosswalk or get rear-ended by that oncoming vehicle".


When autonomous vehicles encounter an obstacle, they brake. Moreover, why, in this problem that appears in absolutely every thread about self-driving cars, is there a family in a crosswalk with no awareness whatsoever of a speeding vehicle? Are they deaf and blind without assistance? This is not a joke.

The car will be aware of the presence of the crosswalk. It will slow long before arriving at it. So the deaf and blind family will likely be safe.
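As a rough illustration of that behavior, here's a minimal sketch; the names and thresholds are assumptions, not anything from a real AV stack:

  # Hypothetical sketch: brake for obstacles, slow well before a known crosswalk.
  CROSSWALK_APPROACH_SPEED = 4.0  # m/s, assumed cautious speed near a crosswalk

  def plan_speed(current_speed, obstacle_ahead, crosswalk_distance, braking_distance):
      if obstacle_ahead:
          return 0.0  # brake to a stop for anything detected in the path
      if crosswalk_distance is not None and crosswalk_distance < 2 * braking_distance:
          return min(current_speed, CROSSWALK_APPROACH_SPEED)  # slow long before arriving
      return current_speed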


I'll concede that there are some scenarios where choices may be required:

e.g. Child darts in front of car. Slam on brakes and hope for best or swerve off road (which probably isn't a cliff) or into traffic? And perhaps there is a different calculus if the car isn't carrying a passenger.

But, for the most part, these trolley arguments are pretty uninteresting from a technical, as opposed to a philosophical, perspective. It's going to be very rare that an autopilot would even in principle be able to reliably forecast a significantly better outcome than hitting the brakes and/or otherwise protecting itself/its occupant as much as possible.


> e.g. Child darts in front of car. Slam on brakes and hope for best or swerve off road (which probably isn't a cliff) or into traffic? And perhaps there is a different calculus if the car isn't carrying a passenger.

I think it's fair to say that the car might have a decision engine that tries to preserve human life at the cost of human injury.

To avoid killing the child, it may swerve into traffic if and only if doing so has a high likelihood of survival for all passengers.

If it can't determine that, it'll take the conservative approach and brake to mitigate the damage of the impact to the child.

Regardless, the decision engine of the car isn't perfect. I'd have a hard time seeing it predict that swerving left causes a 4-car pile-up while swerving right causes a 5-car pile-up. How could it know? At some point, it'll just short-circuit its decisions because time is passing, and it's critical to do something rather than simply slow down and maintain course.
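To make that concrete, here's a minimal sketch of the kind of rule being described; the names, threshold, and time budget are all assumptions:

  import time

  SURVIVAL_THRESHOLD = 0.99  # assumed bar for "high likelihood of survival for all passengers"
  TIME_BUDGET_S = 0.05       # assumed hard limit on deliberation time

  def choose_maneuver(options, survival_probability):
      # options: evasive maneuvers such as "swerve_left" or "swerve_right"
      deadline = time.monotonic() + TIME_BUDGET_S
      for option in options:
          if time.monotonic() > deadline:
              break  # short-circuit: it's critical to do something
          if survival_probability(option) >= SURVIVAL_THRESHOLD:
              return option
      return "brake"  # conservative default: slow down and maintain course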


They sometimes break [sic]. I think your position requires a fully autonomous world. Instead, imagine that the car behind you continues to barrel forward instead of slowing itself. Your autonomous vehicle then has a decision to make.


What is the decision? The car will follow exactly the rules of the road. If the car behind it continues to barrel forward, that driver did not allow for adequate stopping distance. In this scenario, that driver is at fault.
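As a back-of-the-envelope illustration of what "adequate stopping distance" means (the reaction time is just an assumed figure):

  REACTION_TIME_S = 1.5  # assumed driver reaction time

  def gap_is_adequate(follower_speed_mps, gap_m):
      # If both cars can brake about equally hard, the gap mainly has to cover
      # the distance the following car travels before its driver reacts.
      return gap_m >= follower_speed_mps * REACTION_TIME_S

  # e.g. at 13.4 m/s (30 mph), that's roughly a 20 m gap just for reaction time.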

Of course that doesn't say anything for the safety of the child, but just as we don't expect humans to be able to account for every single scenario, neither can we require a computer to do so.

For every one of these supposed ethical dilemmas, replace the computer with the most pedantic driver imaginable: this person has literally memorized the driver's manual. Ask that person how they would respond in that situation. That is how the computer would respond. There are no choices to be made: the choices have already been made by the legislators who enacted the driving laws.

I imagine the first set of self-driving cars will require a huge bumper sticker (like the ones they put on learner cars: "STUDENT DRIVER") that indicates this car will not exceed the speed limit and will not deviate from the rules. Everyone else can adapt accordingly.


There's an implicit decision there - choosing between you (the passenger in the autonomous vehicle) and the people on the street. You can't have your cake and eat it too. Fault isn't what's at stake here.

You actually DO require the computer to account for every single scenario - since we DO require humans to do so. Nobody says "Oh, it's okay that they <<insert some negative driving consequence>> - nobody expected that they'd be able to handle it!"

Driving laws written by legislators aren't how people actually interact with roadways, except as a basic framework.

I do agree with the self-identification of autonomous vehicles, especially if they replace LIDAR. However, there are already highway-capable autonomous systems driving around right now (Teslas are an example). There's just no test case yet.


>There's an implicit decision there - choosing between you (the passenger in the autonomous vehicle) and the people on the street.

Does the law ever require a person to break the law? That is the decision you've proposed the computer make. This is a serious question, not a pithy comment. Any action other than immediately stopping breaks the law: swerving into another lane means changing lanes without signaling or entering oncoming traffic. Swerving onto the sidewalk breaks the law as well.

I contend that we do very often say "this situation is unfortunate, but you acted on instincts that were developed within the confines of traffic safety law."

I also contend that fault is at stake and only fault can be considered at stake. We do not expect people to correct other humans' mistakes by breaking the law, correct? So why do we expect a computer to correct for all humans' mistakes? There will certainly be tragedies involving autonomous vehicles. But to count that as a mark against the technology is holding it up to a standard we have never applied to any other technology.

The computer merely acts in accordance with traffic laws. That there are two possible outcomes (passenger injury and pedestrian injury) does not imply that a choice has been made. In fact, those are not the only two outcomes; we've merely whittled it down to those for the sake of discussion.


I'd rather see a focus on smarter and faster braking than on solving contrived moral dilemmas.


I don't understand why this keeps coming up. I have never seen or even heard of a driver getting into a situation where there's any sort of moral dilemma in how they react. There's always an obvious best answer or a bunch of equivalently good answers.

For the one-in-a-trillion cases where this is not true, it's OK if the car reacts sub-optimally. It will still be vastly better than human drivers, who frequently react sub-optimally in unambiguous situations.


There are physical limits on how fast or smart you can make braking.
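For a rough sense of those limits (assuming a flat road and a fairly hard ~7 m/s^2 of deceleration):

  DECELERATION_MPS2 = 7.0  # assumed hard braking on dry pavement

  def braking_distance_m(speed_mps):
      # Braking distance grows with the square of speed: d = v^2 / (2a)
      return speed_mps ** 2 / (2 * DECELERATION_MPS2)

  # About 13 m from 30 mph (13.4 m/s) and about 51 m from 60 mph (26.8 m/s),
  # before counting any reaction time.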

We make thousands of unconscious decisions while driving - many of them moral/ambiguous. "Should I pass this car in the intersection even though it is breaking the law?" "Can I squeeze past this bicyclist even though it forces them towards the curb?"

The core of most of these arguments assumes a fully autonomous world - and it just won't work like that.


This please.



