
Exactly the type of answer I was looking for. Thanks



It's also worth mentioning that if, upon first seeing c1, c2 hails it and receives no response, good AI logic will have c2 treat c1 as an unknown and begin running probabilistic predictions of c1's behavior.

For a possible example: if there are potential bends in the road coming up, one of those predictions will warn c2's main logic, and c2 might decide to pre-emptively slow down to give itself leeway to brake if c1 does decide to change lanes. Likewise, if c1 is similarly programmed, it might make a more cautious attempt to shift lanes, giving itself time and space to maneuver back in case c2 doesn't slow down. And the same kind of logic can apply between c2 and c3; or c3 could run the same prediction as c2 against c1, or even against both c1 and c2 in the case where none of them can talk to each other.
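A minimal sketch of that defensive slowdown, assuming c2 keeps a set of predicted scenarios for the unresponsive c1. All names, parameters, and the scenario format here are hypothetical illustrations, not any real AV stack's API:

```python
# Hypothetical sketch: c2's defensive response to an unresponsive car c1.
# Each predicted scenario is (probability, gap_loss_m): how much usable
# following distance c2 would lose if that scenario (e.g. c1 cutting in)
# actually happens.

def braking_distance(speed, decel):
    """Distance (m) needed to stop from `speed` (m/s) at constant `decel` (m/s^2)."""
    return speed ** 2 / (2 * decel)

def choose_speed(own_speed, gap_to_c1, scenarios, max_decel=6.0, margin=5.0):
    """Pick the highest speed that still leaves room to brake to a stop
    in every non-negligible predicted scenario."""
    speed = own_speed
    for prob, gap_loss in scenarios:
        if prob < 0.01:  # ignore negligible predictions
            continue
        usable_gap = gap_to_c1 - gap_loss - margin
        # Shed speed (in 0.5 m/s steps) until we could stop within the gap.
        while speed > 0 and braking_distance(speed, max_decel) > usable_gap:
            speed -= 0.5
    return max(speed, 0.0)
```

With no risky predictions the car keeps its speed; once a plausible cut-in scenario shrinks the usable gap, the chosen speed drops so that full braking always fits inside what remains.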

So not all is lost: in cases where reliable communication and acknowledgements cannot be obtained, well-programmed AI cars would adjust their driving so that they keep the mechanical leeway to react to various scenarios.

For a very well programmed AI present in all cars (but without common knowledge of this), each car would consider the physics and pick the optimal way to drive while keeping enough margin of error to react and avoid collisions (or to avoid causing them: dodging two cars only to make one of them bump into a third is no good!). In the vast majority of actually-existing situations, I suspect this would still be faster and much safer than human drivers (given the above assumption of good AI).



