
If you can't tell a bird from a car or a bike, you have zero business being in the self-driving car arena.



It's not that I disagree with you; it's just that by your standards, [almost] no one should be in the self-driving car arena.


That's quite possibly a reasonable assessment of the situation.


...yet?

At least this doesn't seem like that hard a line to draw in terms of competency before allowing them on public roads.


Not on public roads.


What do you do when your radar says it could be a person, but your vision sensor tells you it's probably a bird?


Go back to the drawing board.


Stop anyway.

And admit we're going to continue to need human safety drivers...


Uber: Block any remediation actions, wait 1 second and hope for the best?


You slow down, because even if it's a bird, cat, dog, or armadillo, it does damage to your car (including getting it bloody) if you hit it at a higher speed.


You quit and then get a job at McDonald's, because you're clearly not qualified for self-driving car development.


So you need to wait for sensors that have 100% accurate object detection before you can work on a self-driving car?

Meanwhile, you let humans who are far less than 100% accurate (and are often distracted) keep killing people? I went into a panic stop one rainy night when a billowing piece of plastic blew across the road in front of me; it looked exactly like a jogging person to me and my passenger.

We don't need self-driving cars to be perfect before they are released, just better than people.


If you really want self-driving cars to happen, then you will need to take into account one very important factor that has absolutely nothing to do with technology: human psychology.

People are not going to be swayed by statistics; they are going to be swayed by their emotions. If the self-driving car attempts that 'hit the road', so to speak, come across as sub-par in the public perception, even if they might be a statistical win, then they will get banned. So if you are really gung-ho on self-driving happening, then this should concern you too, because otherwise you will never reach that goal.

Self-driving cars need to be - right from the get-go - objectively and obviously better than human drivers in all everyday situations. Every time you end up in a situation where a self-driving car kills someone, human drivers - and all their relatives and the press - are going to go 'well, that was stupid, that would not have happened to me', and then it will very soon be game over. And you won't get to play again for the next 50 years.

See also: AI winter.


The way it works in safety-critical software safety analysis, in my experience, is that you have a hazard analysis / failure modes and effects analysis that factors in severity x probability (and sometimes a detectability measure).

So if you identify a failure mode that contributes to a catastrophic hazard, for instance, you had better build your system to drive the probability down. The resulting severity x probability score you end up with has to fall within the risk parameters deemed acceptable by management/safety.
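
For illustration only, here is a minimal sketch of that kind of scoring in Python; the 1-10 scales, the detectability factor, and the acceptance threshold are hypothetical stand-ins for whatever a real program's risk matrix actually defines.

    # Toy severity x probability (x detectability) scoring; all scales and the
    # acceptance threshold below are hypothetical, not from any real standard.

    def risk_priority(severity: int, probability: int, detectability: int = 1) -> int:
        """Higher means worse; each factor is assumed to be on a 1-10 scale."""
        return severity * probability * detectability

    ACCEPTABLE_RPN = 100  # hypothetical limit set by management/safety

    # A catastrophic hazard (severity 10): mitigation has to drive the
    # probability down until the resulting score falls inside the limit.
    before = risk_priority(severity=10, probability=6, detectability=4)  # 240 -> not acceptable
    after = risk_priority(severity=10, probability=1, detectability=4)   # 40 -> acceptable
    assert before > ACCEPTABLE_RPN >= after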


Self-driving cars are far away from matching human driver skill level right now. We do not need them to be "perfect"; we need them to stop making such silly mistakes as the one described in the article - that is a very basic requirement before they can be allowed on public roads.


> Self-driving cars are far away from matching human driver skill level right now.

Well, no. Have any of the systems been tested in heavy snow on icy roads, or on a road without maps?

> silly mistakes

A person died.


> A person died.

Lots of people die in car accidents.

Nearly 40,000 people die each year in car accidents. At least 8 of the top 10 causes of accidents would be improved with self-driving cars.

1. Distracted Driving

2. Drunk Driving

3. Speeding

4. Reckless Driving

5. Rain

6. Running Red Lights

7. Night Driving

8. Design Defects

9. Tailgating

10. Wrong-Way Driving / Improper Turns

> Well, no. Have any of the systems been tested in heavy snow on icy roads, or on a road without maps?

Aside from figuring out where the edge of the road is, the biggest accident risk that I've seen with driving in heavy snow is speed -- no one wants to drive 15 mph for 2 hours through a snowstorm to stay safe, so they drive 30-50 mph instead.

And I'm not sure how to solve the road-visibility issue with self-driving cars, but presumably the same heuristics that humans use could be emulated in software (which I suppose is primarily following the car ahead or its tracks, or looking for roadside signs that mark the edge of the road).
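
As a toy sketch of that fallback ordering in Python (every cue name, confidence value, and threshold here is made up for illustration; a real perception stack would look nothing like this simple):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cue:
        confidence: float     # 0..1, how much this cue is trusted right now
        edge_offset_m: float  # estimated lateral position of the road edge

    # Prefer painted markings, then the lead car's tracks, then roadside
    # markers, then a map prior -- roughly the human fallbacks described above.
    PRIORITY = ("lane_markings", "lead_vehicle_tracks", "roadside_markers", "map_prior")

    def estimate_road_edge(cues: dict[str, Cue]) -> Optional[float]:
        for name in PRIORITY:
            cue = cues.get(name)
            if cue is not None and cue.confidence > 0.5:  # hypothetical threshold
                return cue.edge_offset_m
        return None  # no trustworthy cue: slow down or hand control back

    # Example: markings are buried in snow, but the car ahead left visible tracks.
    edge = estimate_road_edge({
        "lane_markings": Cue(confidence=0.1, edge_offset_m=1.8),
        "lead_vehicle_tracks": Cue(confidence=0.8, edge_offset_m=1.6),
    })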


> Lots of people die in car accidents.

I doubt anyone would refer to those as silly mistakes either.

My point with the second part is that humans have proven that driving in snowstorms and in places that aren't fully mapped is possible, something that self-driving cars have not.


> We don't need self-driving cars to be perfect before they are released, just better than people.

Who goes to jail when the self-driving car kills people?


This is a real dilemma. IIRC, some car companies have already stated that they will take responsibility.


No one's going to jail for MCAS auto-correcting planes into the ground.


Yet. It may still come to that. If the Germans can go after Martin Winterkorn long after he thought he was in the clear, then this may still happen as well.

https://www.theverge.com/2019/9/24/20881534/volkswagen-diese...


Are there lawsuits/investigations still pending, or are they over and no one from Boeing was found guilty?


The Boeing story is only getting started for real. It may take a new head of the FAA before the whole thing is tackled head-on, but eventually that will happen. The USA can't afford an FAA that commands no respect.


Depends. If it's a hardware failure, then no one should go to jail, just like today: if my wheel falls off (through no fault of my own) and I run into a child, I wouldn't expect to go to jail (heck, drivers already get pretty much free rein to run down pedestrians by saying "I didn't see him"). The car manufacturer may have some financial liability if it was a product defect, but again no jail time.

The interesting moral dilemma is what to do if the car decided it was better to run into a pedestrian and protect the driver than to run into a brick wall and protect the pedestrian.

There's no easy answer to that dilemma.

https://www.nature.com/articles/d41586-018-07135-0


The choice shouldn't be between human drivers and human supervised computer drivers. Computer supervision of human drivers is viable, effective, and allows for evolutionary progress.


But it may be less safe than fully computer-controlled cars unless that computer supervision is able to take control completely -- humans tend to view safety features as an excuse to push the envelope.

"I can text because my car will warn me if I run off the road or stop me before I hit the car in front of me"

"I don't need to slow down on this snowy road, ABS will help me stop safely"

"Sure, I'm driving too fast for this road, but my airbags and seatbelts will protect me if I crash"

https://www.wired.com/2011/07/active-safety-systems-could-cr...



