
You could be perfectly sober and still not be able to intervene in time to prevent a crash. Systems like this encourage inattention, yet still expect drivers to snap back to full attention in a fraction of a second. That's the entire point. So no, it's not moot.



How about requiring alcohol odour detection in all vehicles?

How about attention-distraction monitoring, where perhaps the first penalty or safeguard is a forced speed reduction: the measured level of distraction determines the maximum speed the vehicle can go, thereby reducing the need for as fast a reaction time?
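
To make the speed-cap idea concrete, here's a minimal sketch. It assumes a hypothetical attention score in the range 0.0 to 1.0 coming from some monitoring system; the function name, thresholds, and speed values are all illustrative placeholders, not any real vehicle API.

```python
# Sketch of mapping a driver attention score to a maximum permitted speed.
# Score source, thresholds, and speeds below are hypothetical examples only.

def max_allowed_speed_kph(attention_score: float) -> float:
    """Map an attention score in [0.0, 1.0] to a speed cap in km/h."""
    if attention_score >= 0.9:
        return 130.0   # fully attentive: no extra restriction
    if attention_score >= 0.7:
        return 90.0    # mildly distracted: moderate cap
    if attention_score >= 0.5:
        return 50.0    # distracted: urban-speed cap
    return 30.0        # severely distracted: crawl speed / prompt to pull over


if __name__ == "__main__":
    for score in (0.95, 0.75, 0.55, 0.2):
        print(f"attention={score:.2f} -> max {max_allowed_speed_kph(score):.0f} km/h")
```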

Any other possible solutions?

I think that, in general, coddling people does more harm than good, and it's lazy not to look for nuanced solutions just because blanket rules are "easier."


It depends on the discussion you are having. I agree that it is not “moot” in either case.

Your point is that these systems need to be safe even in the face of incapable drivers and that, despite the misleading marketing, they are not that (yet).

The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

Encouraging over-reliance is a problem but not as big a problem as having to rely on the human 100% of the time. This statement is just a fact given the statistics.

Given the above, while it is too extreme to say that operation by an impaired driver is “moot”, it is fair to suggest that the most significant source of risk is the impaired driver themselves and their decision to operate a vehicle of any kind. The biggest offset to that risk is the vehicle's additional safety features. The degree of over-confidence caused by the vehicle type is a distraction.


There are no statistics that categorically prove cars with these features are less dangerous. Tesla's own "safety report" is extremely misleading and has no controls for geography, weather, time of day, average age of cars, demographics, etc.

If you're developing an autonomous system, you have a moral and ethical obligation not to roll it out while it's half-baked and unsafe. You can't give an unfinished safety-critical system to the general public and then blame them for misusing it.


Don't all Tesla vehicles have the highest safety ratings ever?

Though I guess maybe something being less dangerous isn't the same as something being relatively safer?


Those safety ratings don't assess FSD performance.


You're missing or avoiding my point?

Comparing similar accidents, FSD or not, a Tesla's occupants are arguably kept safer and would fare better than in any other vehicle, right?


You’re making an entirely irrelevant point. They’d fare better if they were in a bus too.

We’re talking about the cause of the accident here, not what happens after one.


Naw, you're just dismissing my valid point because it has weight to it. You want it to be irrelevant, but it's not.


Then please tell us how the crash ratings of a vehicle are helpful in assessing FSD performance. The entire discussion is about who is responsible for causing the crash. Automated-system collisions are counted regardless of severity.


> The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

I don’t think we can say anything of the sort for Tesla FSD (beta).



