
I'm thinking about this simple trick to keep the safety driver focused: at random intervals, the software would provide a stimulus (such as a red LED projected on the windshield), and the person would need to react as quickly as possible by pushing a button. That way the person knows to expect the need to react quickly, and the software can measure their level of attentiveness.
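A minimal sketch of that loop, assuming hypothetical hooks for the LED and button (the intervals and the 1.5s reaction threshold are made-up numbers for illustration):

    import random
    import time

    def flash_led():
        # stand-in for projecting the red LED onto the windshield
        print("LED ON")

    def wait_for_button(timeout_s):
        """Block until the driver reacts; return the reaction time in
        seconds, or None if it exceeded the timeout. Stubbed here with
        input() in place of a real button."""
        start = time.monotonic()
        input("press the button (Enter): ")
        elapsed = time.monotonic() - start
        return elapsed if elapsed <= timeout_s else None

    def attentiveness_loop(max_reaction_s=1.5, min_gap_s=60, max_gap_s=300):
        while True:
            # random interval, so the driver can't anticipate the stimulus
            time.sleep(random.uniform(min_gap_s, max_gap_s))
            flash_led()
            rt = wait_for_button(max_reaction_s)
            if rt is None:
                print("missed check - escalate (audible alarm, log for review)")
            else:
                print(f"reaction time: {rt:.2f}s")

The random gap is the important part: a fixed interval would let the driver zone out between checks.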



I think in this case it's clear from the video that the safety driver was not looking at the road all the time.

But let's consider the scenario where the human driver was 100% alert and focused and was looking at the road the whole time. How would this have played out?

The car is driving along. The human driver sees the pedestrian 20-30m down the road but notices that the car is not slowing down.

What does the human driver do in this situation?

Slam on the brakes just in case? Do nothing, assuming the car has worked out that it is safe? Or wait, assuming the car will make a decision in time, and then slam on the brakes if it doesn't ... but would it be too late by then?

What (if any) decision do you make to override - and when do you make it - when you're using an autonomous system?

I don't think this sort of thing is clear-cut. Obviously in hindsight we'd all pick "slam on the brakes", but in this sort of scenario, where it is kinda "now or never" for braking, if you override the system too early you never get to really test/develop the system, and if you leave it too late you don't have enough time to prevent the accident. There must be an incentive/encouragement for these human drivers to "let the system do its job" rather than disengaging all the time, otherwise no one would ever let the system run in full-auto.

It sounds like a tough job. Presumably there are systems in the car to indicate what it "sees", but cross-referencing that against what your eyes see is probably quite difficult: "Of the 50-60 things on the screen that the car has registered, it has missed this one thing." I'd not want that job.


Every incident is logged, including the SDC's response to the particular situation. Let the humans take over and avoid the accident.

Then you try to recreate it in your test environment and see what would have happened without the human intervention. If it would have resulted in an accident, fix the code so it no longer needs human intervention, add it to your test cases, and make sure all of the existing tests pass.
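In effect, every disengagement becomes a regression test. A rough sketch of what that might look like, where `sdc_sim`, `Simulator`, and `load_disengagement_logs` are invented stand-ins for whatever log store and simulation environment the team actually has:

    import pytest

    from sdc_sim import Simulator, load_disengagement_logs  # hypothetical module

    # Replay each logged disengagement in simulation, this time without
    # the safety driver, and assert the current software handles it.
    @pytest.mark.parametrize("incident", load_disengagement_logs())
    def test_handles_incident_unaided(incident):
        sim = Simulator.from_log(incident)    # recreate the recorded scenario
        result = sim.run(autonomy_only=True)  # no human intervention allowed
        assert not result.collision, f"would have crashed in {incident.id}"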


Self-driving trains have been in existence for decades. In NYC (which I don't think is heavily automated) they require the drivers to point when they pull into a station to prove they're paying attention. Many freight trains have a throttle that is dialed into place at a set speed. To ensure engineers stay focused, there's a dead man's switch that requires activation at set intervals.
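The dead man's switch logic is simple enough to sketch; the 30-second interval and the brake action here are just illustrative:

    import threading

    class DeadMansSwitch:
        """Toy dead man's switch: the engineer must call poke() at least
        every `interval_s` seconds, otherwise on_timeout fires (on a
        real train, the brakes would apply)."""

        def __init__(self, interval_s, on_timeout):
            self.interval_s = interval_s
            self.on_timeout = on_timeout
            self._timer = None
            self.poke()  # arm immediately

        def poke(self):
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.interval_s, self.on_timeout)
            self._timer.daemon = True
            self._timer.start()

    switch = DeadMansSwitch(30, lambda: print("BRAKES APPLIED"))
    # the control loop would call switch.poke() on every lever/pedal press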



There are deep learning solutions that classify very reliably whether the driver is paying attention or not.

You could just play a buzzer when that happens, and tell them they will get fired if it happens too often (after human review of the tapes, of course).
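Wiring the classifier to the buzzer could be as simple as this sketch; the score function, thresholds, and camera/buzzer objects are all placeholders for whatever the real stack provides:

    import time

    ALERT_THRESHOLD = 0.5      # illustrative
    MAX_STRIKES_PER_SHIFT = 3  # illustrative

    def attention_score(frame):
        """Placeholder for the deep-learning classifier: probability that
        the driver is paying attention, e.g. from a gaze/head-pose model
        running on the cabin camera feed."""
        return 1.0  # stub

    def monitor(camera_frames, buzzer, incident_log):
        strikes = 0
        for frame in camera_frames:
            if attention_score(frame) < ALERT_THRESHOLD:
                buzzer.sound()
                incident_log.append((time.time(), frame))  # keep the tape
                strikes += 1
                if strikes > MAX_STRIKES_PER_SHIFT:
                    incident_log.flag_for_human_review()   # review before firing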


There would be an adversarial neural network on the other end (the driver) trying to fool the former into falsely detecting attention; I'm not sure this would end well. They might end up with a perfectly distracted driver who shows no visible signs of inattentiveness on the interior video.

But I fully agree with the general idea: if Uber had quickly conjured up one of their glossy presentations outlining an impressive arsenal of countermeasures against safety driver inattentiveness, they would be in a much better position now. They didn't though, and everybody (including myself) is assuming that Uber just did not care: if a safety driver fails at their impossible job, it's their responsibility, not Uber's. Turns out it might still be Uber's problem though.


This seems like a misunderstanding of the problem. The problem is that it's very difficult to pay attention to something you have no control over or interaction with. Sticking in a prod to punish people isn't going to make it easier; you'll just end up firing a lot of drivers. What would be better is to give the human something to do. For example, you could put a set of input devices in the car for the human: tell them to record their 'level of confidence' so you can check that against the car's confidence levels, or have them input suggested steering adjustments to gauge where the car differs from normal human behaviour.

That way your driver becomes an active participant (albeit probably a useless one) in the driving.
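The confidence-dial idea is easy to make concrete. A sketch, assuming two time-aligned confidence series in [0, 1], one from the driver's dial and one from the planner (the 0.4 divergence threshold is arbitrary):

    def divergence_events(driver_conf, car_conf, threshold=0.4):
        """Yield the timesteps where the driver's dialed-in confidence
        and the car's internal confidence disagree strongly."""
        for t, (d, c) in enumerate(zip(driver_conf, car_conf)):
            if abs(d - c) > threshold:
                yield t, d, c

    # e.g. the car stays confident while the driver loses confidence:
    for t, d, c in divergence_events([0.9, 0.3, 0.2], [0.9, 0.8, 0.9]):
        print(f"t={t}: driver={d:.1f} car={c:.1f} -> log for review")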


Train drivers are expected to do just that, and driving a train involves far less interaction than driving a car. They managed to set up a system (at least in the Netherlands) where drivers do pay attention: specifically hiring people based on personality/skill, various things in the train, checks, suspension when you miss a red signal, etc.

It seems Uber did none of that. It's super obvious the safety driver isn't paying attention. Why wasn't this noticed? Why wasn't a system set up to handle this? It feels like they cut corners.


To add to your comment: GM's SuperCruise semi-autonomous mode uses a real-time head and eye-tracking system to ensure the driver is alert. I don't know if it's based on DL, but it's available in a production vehicle.


Dead man's switches tend to do the job - to a point. Railway operators tend to learn to keep this particular annoyance at bay, but in extreme cases (fatigue, etc.) they do so to the exclusion of all other alertness.


That sounds worse than just driving. It also definitely switches who is the machine and who is the master, like one of those breathalyzers attached to the ignition.


That is what Tesla's Autopilot does. It periodically starts flashing that you need to wiggle the steering wheel to ensure you are there.



Oh my, that is so dangerous since it won’t know you aren’t holding the wheel if it needs to disengage.


What does it do if you don't? Does it just pull over to the side of the road?


Yes, eventually after a few warnings.



