
> I'm not aware of Tesla specifics, but i've previously read several stories on HN of car computers that wouldn't let the driver be in control.

That can't happen. People misremember things when they're in a panicked state, or they exaggerate to defend themselves. The instant you tug even slightly on the wheel, autopilot and autosteer of any kind disengage and revert to a traffic-aware cruise control state (with an audible chime). And any application of the brake disengages autopilot/autosteer and traffic-aware cruise control as well, with an audible chime.
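
To be concrete about the precedence I'm describing, here's a rough sketch (illustrative Python, not Tesla's actual firmware; the state names and torque threshold are made up for the example):

    # Illustrative only -- models the behaviour described above:
    # driver input always wins, and the brake overrides everything.

    STEER_TORQUE_THRESHOLD_NM = 1.5  # made-up threshold

    def next_state(state, steering_torque_nm, brake_pressed):
        """Return the new assistance state given the driver's inputs."""
        if brake_pressed:
            # Brake disengages autosteer *and* traffic-aware cruise control.
            chime()
            return "MANUAL"
        if state == "AUTOSTEER" and abs(steering_torque_nm) > STEER_TORQUE_THRESHOLD_NM:
            # A tug on the wheel drops back to cruise control only.
            chime()
            return "TACC"
        return state

    def chime():
        print("ding")  # stand-in for the audible chime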

> I mean apparently you've been around for a while on these forums. You're surely educated that software can never be trusted, especially with life-endangering situations.

I'm not sure what you're saying. People aren't "trusting" it yet, which is why it's easy to disengage and override.




> That can't happen.

Yes, yes, i've heard that one before. I've run into many bugs "that can't happen" over the course of my lifetime. Fortunately none of them involved a car, so i'm still around to talk about it. A few years ago, somebody would have laughed at the idea of Boeing producing fallible equipment, especially since we as a species have decades of experience with air travel. Now, would you still say it's unthinkable that Boeing could ship faulty hardware/software?

A link that gets posted on HN quite often contains many such "can't happen" bugs, none of which even involve AI: https://beza1e1.tuxen.de/lore/index.html

I also remember some threads about high-end medical equipment (in actual hospitals) killing people due to software bugs.

> And any application of the brake disengages autopilot/autosteer and traffic-aware cruise control as well, with an audible chime.

Is that a physical kill switch that triggers a sensor to produce an alert/chime? Or does the sensor politely ask the computer that actually controls the car to disable the autopilot? In the latter case, i know of no way to make this safe: for example, what happens if the sensor requesting human control back dies, short-circuits, or otherwise malfunctions? Or if the program runs into certain kinds of memory violations? Or any other software/hardware fault?
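
To make the worry concrete, here's a rough sketch (illustrative Python, not based on any vendor's real architecture) of the "ask politely" path: the driver only gets control back if this code actually runs and the sensor read succeeds.

    # Illustrative only: a purely software-mediated override path and one way
    # it can fail. All names and failure modes here are hypothetical.

    class SensorFault(Exception):
        """Short-circuit, dead sensor, garbage on the bus, ..."""

    def handle_driver_override(read_override_sensor, disengage_autopilot):
        """Disengage only if the sensor read works and reports driver input."""
        try:
            if read_override_sensor():
                disengage_autopilot()
        except SensorFault:
            # The open question: fail safe (disengage anyway) or fail silent?
            # A hard-wired kill switch never has to answer it in software.
            disengage_autopilot()

    if __name__ == "__main__":
        # Simulate a sensor that died silently and always reports "no input",
        # even though the driver is stamping on the brake.
        stuck_sensor = lambda: False
        handle_driver_override(stuck_sensor, lambda: print("autopilot disengaged"))
        # Prints nothing: the computer never learns the driver wants control back.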

> I'm not sure what you're saying.

I'm saying i've seen enough bad engineering (sometimes, though not always, accompanied by bad faith) across every field i've been even remotely involved in. And i'm saying i don't remotely trust the >100 microcontrollers you find in a modern car, whose schematics and source code we can't inspect. I mean, i don't fully trust mechanical hardware either, but at least its symptoms and failure modes can be easily reproduced and debugged.

And i'm definitely saying i would never, ever trust a machine-learning algorithm with life-or-death decisions. More on this topic:

- James Mickens at Usenix on why AI is usually a terrible idea: https://www.youtube.com/watch?v=ajGX7odA87k

- jwz's blog (another HN favorite) is also full of examples of AI failures; a fun, harmless example: https://www.jwz.org/blog/2021/06/using-ritual-magic-to-trap-...



