
Tesla pushed out a software update to AutoPilot (Lane Centering) that caused a fatality[0]; I hardly think they should be held up as an example of good software hygiene.

[0] https://www.mercurynews.com/2018/03/29/tesla-crash-victims-f...




There was recently an interesting Twitter dump by a former Tesla engineer whose NDA was up; as of a few years ago, their software development practices were... pretty bad.

Apparently every Tesla vehicle runs a Kubernetes cluster and is/was remotely accessible to Tesla via SSH?

https://twitter.com/atomicthumbs/status/1032939617404645376?...


I sleep very well at night knowing my software is not keeping helicopters in the air, cars on the road, etc.

Honestly, I'm in awe of the programmers who take these kinds of tasks on. I don't know how they can relax at all relative to their work.


> I don't know how they can relax at all relative to their work.

They strictly follow procedures. If faulty code gets through to production and somehow causes a fatality, it's a failure of the procedures, not the individual developer. This works so well in aviation and yet seems completely non-existent in the automotive world.


The procedures exist and are 'followed', but there's a lot more time-to-market pressure. Manufacturing lines dictate the schedule and software must keep up no matter what.


When I worked on safety-critical products, it was actually really nice to be able to push back.

"We can't satisfy the safety case yet" is all you need to say to get your manager to cave.

If they want to take the risk upon themselves to sign off on a known-unsafe (technically; in practice it was already pretty good, just not good enough yet) device, and go to jail if something goes sideways, they can be my guest...

In practice, they preferred to come back next week and ask if we were done yet.

The project went wayyyy past the deadline, thanks for asking :-)


If you as an individual are taking on that level of personal responsibility, your organization is broken.

Part of the regulatory process is to demonstrate sufficient levels of resilience in both the systems produced and the development process used to produce them. It should be a really boring process with a lot of crosschecks. You should be sleeping well knowing that your work is being reviewed and tested and attacked by other people.

(Reality is often different, of course. Just don't try to be the hero.)


I recommend reading the "NASA Manager's Handbook for Software Development". It contains some great guidelines for writing safety-critical software. AFAIK, no Space Shuttle mission ever suffered a serious safety incident due to a software defect.

https://ntrs.nasa.gov/search.jsp?R=19910006460
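
To give a flavor of what those guidelines look like in practice, here's a minimal C sketch (my own illustration, not an excerpt from the handbook) of the defensive style such documents encourage: validate every input, bound every loop, check every return value, and assert invariants. The sensor range and function names are hypothetical.

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_SAMPLES 64      /* fixed upper bound: no unbounded loops */
    #define SENSOR_MIN  (-1000) /* plausible sensor range (hypothetical) */
    #define SENSOR_MAX  1000

    /* Average a buffer of sensor readings.
       Returns 0 on success, -1 on invalid input; callers must check. */
    static int average_samples(const int *samples, size_t count, int *out)
    {
        if (samples == NULL || out == NULL || count == 0 || count > MAX_SAMPLES)
            return -1;                          /* reject bad input, don't guess */

        long sum = 0;
        for (size_t i = 0; i < count; i++) {    /* loop bounded by MAX_SAMPLES */
            if (samples[i] < SENSOR_MIN || samples[i] > SENSOR_MAX)
                return -1;                      /* out-of-range reading */
            sum += samples[i];
        }

        *out = (int)(sum / (long)count);
        /* invariant: the average of in-range readings is itself in range */
        assert(*out >= SENSOR_MIN && *out <= SENSOR_MAX);
        return 0;
    }

    int main(void)
    {
        const int readings[] = { 10, 12, 11, 9 };
        int avg;
        if (average_samples(readings, 4, &avg) != 0)
            return 1;                           /* propagate failure explicitly */
        printf("average: %d\n", avg);
        return 0;
    }

Nothing exotic, just boring and predictable, which is exactly the point made elsewhere in this thread.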


> I sleep very well at night knowing my software is not keeping helicopters in the air, cars on the road, etc.

Hopefully at least, but you never know where those random shell scripts you put in a gist (or whatever) might end up.


I'm with you; I'm nowhere near confident enough in myself to work directly on safety-critical systems. I imagine an important part of those organizations is removing single points of failure (SPOFs) with respect to the engineering team itself; i.e., nobody fails alone, and everybody tests the hell out of everything. Could be wrong though.


If you work for Airbnb, your bad software is definitely changing lives and putting people in danger. Software doesn't have to be embedded to materially affect someone's life.


Saying it caused "a fatality" is being disingenuous. It rests on the same false premise as "we shouldn't use self-driving cars at all if they lead to even one fatality".

Fatalities aren't good things, but they're an inevitable consequence of driving. And it's disingenuous to expect anyone's code to be guaranteed to work on every mile of road in the US, under all conditions, at all times. If autonomous vehicles substantially reduce overall fatalities, we are better off.

(Nobody talked about banning combustion vehicles after the Pinto, or banning Fords over their side-mounted fuel tanks, or over their Explorers that rolled over when their defective tires blew out.)


> Saying it caused "a fatality" is being disingenuous.

It is accurate.

They pushed out an update that changed how AutoPilot behaved, someone had AutoPilot enabled, AutoPilot accelerated straight into a concrete barrier, and the occupant died.

Here is the NTSB's initial report on the incident:

https://www.ntsb.gov/news/press-releases/Pages/nr20180607.as...

> It rests on the same false premise as "we shouldn't use self-driving cars at all if they lead to even one fatality".

I didn't say anything remotely like that. I stated what had already occurred. Putting those words in my mouth suggests you're not responding in good faith.


You have no proof that the accident wouldn't have occurred anyway without the update. Or that their updates didn't save one or more lives.

That's my point: the accusation against the software may be "accurate", but it's misleading without considering the hidden variables.


"You have no proof that the accident wouldn't have occurred anyway without the update."

Well, the NTSB preliminary report doesn't make any statements about what the software update did, per se. But it does indicate that the driver's hands were not on the wheel at the time of the accident and that the "autopilot" software was activated. So it's fair to say that the "autopilot" software was responsible for the crash.

"Or that their updates didn't save one or more lives."

This is a red herring. It could be true. But there's no evidence for it, so it's not worth thinking about. Our system of moral judgements rightly puts a lot of weight on demonstrable causes and effects, and generally ignores hypothetical but totally unproven speculation like this. Otherwise anyone could get off the hook for anything by saying, "I may have done bad action X, but you can't prove I didn't also do good actions Y and Z, which could well outweigh X."

"Not an "accurate" but misleading accusation at the software without considering the hidden variables."

In ordinary life we never know all the hidden variables. We just make judgments based on the best available information (or if necessary, decide to postpone judgment until better information is available). The NTSB preliminary report seems credible and I see no reason not to draw reasonable conclusions from it.



