Thanks for your comment. I think you may be right.
> I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult arbitrage between:
>
> - known preventable deaths from, say, not staying in the lane aggressively enough;
>
> - possible surprises and subsequent deaths.
I don't think I was misrepresenting anything (at least, I was trying not to). I just pointed out that behaviour-changing updates that may be harmless in, say, smartphone apps, are much more problematic in environments such as driving-assisted cars.
I think this is objectively true.
And I think we need to come up with mechanisms to solve these problems.
> That learning pattern (resolving unintended surprises as they happen decreasingly often) is common in software.
My argument is that changes in behaviour are (almost automatically) surprising, and thus inherently dangerous. Unless my car is truly autonomous and doesn't need my intervention, it must be predictable. Updates run the risk of breaking that predictability.
> Others have preferred the surprise-free and PR-friendly option of not saving the dozens of thousand of lives dying on the road at the moment.
My worry is that (potentially) people will still die, just different ones.
> being in favour of Tesla (and Waymo) taking more risk than necessary
If I take you literally, that's an obviously unwise position ("more than necessary"). But I think I know what you meant: err on the side of faster learning, accepting the potential consequences. Perhaps like NASA in the 1960s.
But my argument was simply that there is a problem with frequent, gradual updates. Not that we shouldn't update (even though that's actually one option).
We ought to search for solutions to this problem. I can think of several that aren't "don't update".
But claiming that the problem doesn't exist, or that those who worry about it are unreasonable, is unhelpful.