Justice is generally enforced by a human with a proven track record of being reasonable: a judge. I don’t think a judge will go after a CS intern because she forgot a semicolon in her pull request. A judge might very well go after a company with a pattern of internal emails arguing that skirting requirements on brakes saves money because the brakes don’t have to be replaced as often.
In the case of manual driving, there is an understanding that humans might kill other humans when operating a motor vehicle, and enforcement focuses on a subset of precautions (alcohol, speed, lanes and, for professionals, continuous hours of work). For lane assistance and robot cars, there is no explicit list of guidelines yet, but enough cases should soon accumulate to establish one: clearly communicating to drivers what their responsibility is; taking the necessary steps (which professional bodies, or possibly a dedicated bureau, might need to define) when releasing something new. Those cases will be more complicated and, presumably, only large corporations will be able to handle them. Liability will most likely focus on testing practices and their enforcement, not on individual coders.
I suspect the closest structure will be pharmaceuticals & medical devices: it is currently acceptable to sell complex products, made by large corporations, that can be statistically linked to thousands of deaths -- because they come with scientifically sound studies on sample populations proving that, for a given diagnosis, they save more lives than they cost. A public body defines how to prove that those products help, and private initiatives try to meet those criteria, applying for experimental exemptions to standard medical practice.
Google has a habit of A/B testing on half of the world’s online population, so they might push back against being limited to small trials at clinical scale. There could be interest in building a realistic simulator and testing code inside it, or in building test tracks that can replicate edge cases with mannequins -- things that actually already exist in the auto industry.
The prize is too large, and the brand damage from being seen as unsafe too severe, for stakeholders not to find a reasonable solution.