It's really fascinating how prevalent these discussions were early on in self-driving car development, versus now, when the more common discussion is "how do we prevent them from driving into a truck because it was painted white and blended into the skyline?"

There's an underlying assumption in these examples that we'll be in a car that can tell if someone is homeless, or that knows the net worth of the person driving it. Those scenarios are kind of absurd: why would a car be able to do that? A car that can identify in milliseconds whether someone is an executive or whether they're breaking the law should never be getting into an accident in the first place; it should be able to identify way ahead of time when someone is about to cross the street. But we come up with these really ridiculous premises, fueled simultaneously by an overestimation of what the technology is capable of, a lack of thought about the implications of the technology we're describing, and an under-awareness of the real challenges we're currently facing with that technology.

An analogy I think I used once is that we could have the same exact conversations about whether or not Alexa should report a home burglary if the burglar is stealing to feed their family and Alexa knows the family is out that night. It's the same kind of absurd question, except we understand that Alexa is not really capable of even getting the information necessary to make those decisions, and that by the time Alexa could make those decisions we'd have much larger problems with the device to worry about. But there was real debate at one point about whether we could put self-driving cars on the roads before we solved these questions. Happily, in contrast, nobody argues that we should stop selling Alexa devices until we as a society decide whether justified theft ever exists. And it turns out the actual threats from devices like Alexa have nothing to do with whether machines believe in justified theft; the actual threats are the surveillance, bugs, market forces, and direct manipulation made possible just by having an always-on listening device in your home at all.

The danger of an Alexa device is not that it might have a different moral code than you about how to respond to philosophical situations; the danger is that Amazon might listen to a bunch of your conversations and then accidentally leak them to someone else.

So with self-driving cars it's mostly the same: the correct answer to all of these questions is that it would be wildly immoral to build a car that can make these determinations in the first place, not because of the philosophical implications but because why the heck would you ever, ever build a car that by design can visually determine someone's race or how much money they make? Why would a car ever have that functionality?

We have actual concerns about classism and racism in AI, but they're not philosophical questions about morality; they're about implicit bias in the algorithms used in sentencing and credit ratings, fueled by the very attitude these sites propagate: that any even near-future technology is capable of determining this kind of nuance about anything. The threat of algorithms is that people today believe they are objective enough and smart enough to make these determinations, that judges and lenders look at the results of sentencing and credit algorithms and assume they're somehow objective or smart just because they came from a computer. But I remember so clearly a time when this was one of the most common debates I saw about self-driving technology, and the whole conversation feels so naive today.




It's a great example of Moravec's Paradox. We spend all our time thinking about what moral choices machines ought to make after cogitating upon the profound intricacies of the cosmos. We should be more concerned with figuring out how to teach them to successfully navigate a small part of the real world without eating glue or falling on their noggin.



