The idea of a Tesla self-driving system getting target fixation and driving towards the thing it should avoid is hilarious to me. And it also makes me wonder how likely that is to actually happen. Is the model weight of "shiny, bright reflective thing in IR spectrum" for "probably a road reflector, avoid" higher than "probably some water in the road, so that must be where the road is"? (Obviously totally made up examples).
Uber ran over a woman [1] walking with a bicycle because she "didn't exist" as far as their software was concerned. This is a real danger of using the public as beta testers for robots. And the Uber vehicle had more sophisticated sensors than a Tesla, which is winging it with cameras alone.
I was curious about the result of the investigation since the article you linked said it was not yet known whether the self-driving features of the Tesla were active during the accident.
The outcome of the investigation was that Autopilot was not engaged and that the cause of this particular crash was human error:
"Per the investigation, the car data showed that the driver hit the gas pedal rather than the brake. The Model S then entered the rest stop, going 60 mph, hitting a curb and then the trailer."
Someone can wish no ill to others and still find irony or absurdity in things that have already happened. It's something both victims and witnesses often do as a way of processing life's darker realities. Please don't tell other people how they should experience tragedy, or speak on behalf of victims. You're no better situated to do that than the person you responded to.