> I'm not aware of Tesla specifics, but i've previously read several stories on HN of car computers who wouldn't let the driver be in control.
That can't happen. People misremember things when they're in a panicked state. Or they're exaggerating to defend themselves. The instant you even slightly tug on the wheel, autopilot and autosteer of any kind will disengage and revert to a traffic-aware cruise control state (with an audible chime). And any application of the brake disengages autopilot/autosteer and also automatic cruise control with an audible chime.
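Roughly, what I'm describing is a small state machine. Here's a toy sketch of that behavior (this is not Tesla's actual code, which isn't public; the states, function names, and torque threshold are all made up for illustration):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy sketch of the disengagement behavior described above. NOT Tesla's
     * implementation; states, names, and the threshold are hypothetical. */

    typedef enum { AUTOSTEER, CRUISE_ONLY, MANUAL } drive_mode_t;

    static void chime(void) { puts("chime: driver has taken over"); }

    static drive_mode_t update_mode(drive_mode_t mode,
                                    double steering_torque_nm, /* driver torque on the wheel */
                                    bool brake_pressed)
    {
        const double TORQUE_THRESHOLD_NM = 1.0; /* hypothetical value */

        if (brake_pressed && mode != MANUAL) {
            chime();
            return MANUAL;       /* brake cancels autosteer AND cruise control */
        }
        if (mode == AUTOSTEER && steering_torque_nm > TORQUE_THRESHOLD_NM) {
            chime();
            return CRUISE_ONLY;  /* a tug on the wheel drops back to cruise control only */
        }
        return mode;
    }

    int main(void)
    {
        drive_mode_t mode = AUTOSTEER;
        mode = update_mode(mode, 2.0, false);  /* driver tugs the wheel -> CRUISE_ONLY */
        mode = update_mode(mode, 0.0, true);   /* driver brakes -> MANUAL */
        return mode == MANUAL ? 0 : 1;
    }

The point is just that any driver input wins immediately over the automation, and you hear it happen.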
> I mean apparently you've been around for a while on these forums. You're surely educated that software can never be trusted, especially with life-endangering situations.
I'm not sure what you're saying. People aren't "trusting" it yet, which is why it's easy to disengage and override.
Yes, yes, I've heard that one before. I've had many bugs "that can't happen" over the course of my lifetime. Fortunately none of them involved a car, so I'm still around to talk about it. A few years ago, somebody would have laughed at the idea of Boeing producing fallible equipment, especially since we as a species have decades of experience with air travel. Now would you still say it's unthinkable that Boeing could produce faulty hardware/software?
I also remember some threads about high-end medical equipment (in actual hospitals) killing people due to software bugs.
> And any application of the brake disengages autopilot/autosteer and also automatic cruise control with an audible chime.
Is that a physical kill switch that triggers a sensor to produce an alert/chime? Or does the sensor politely ask the computer that actually controls the car to disable the autopilot? In the latter case, I know of no way to make this safe: for example, what happens if the sensors requesting human control back die, short-circuit, or otherwise malfunction? Or if the program runs into certain kinds of memory violations? Or any other software/hardware fault?
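To make that concern concrete: if the override is only another input to the same software that is doing the driving, then a hung or faulted control loop simply never acts on it. A deliberately naive sketch (all names hypothetical):

    #include <stdbool.h>

    /* Naive sketch of the "politely ask the computer" case. All names are
     * hypothetical; the point is only that the override is just another
     * input to the same software that is driving. */

    volatile bool driver_override_requested = false; /* set by e.g. a brake-pedal interrupt */

    void control_loop(void)
    {
        for (;;) {
            if (driver_override_requested) {
                /* hand control back to the driver, stop commanding steering */
            }

            /* ...perception / planning / steering commands... */

            /* If this loop deadlocks, crashes, or corrupts its own state,
             * the flag above is never checked again and the "override"
             * accomplishes nothing. */
        }
    }

A real kill switch would cut the actuators in hardware, independently of whether this loop is still running at all.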
> I'm not sure what you're saying.
I'm saying I've seen enough bad engineering (not always, but sometimes along with bad faith) across all the fields I've been even remotely involved in. And I'm saying I don't even remotely trust the >100 micro-controllers you find in a modern car, whose schematics and source code we can't inspect. I mean, I don't fully trust mechanical hardware either, but at least there the symptoms and failure modes can be easily reproduced and debugged.
And I'm definitely saying I would never, ever trust a machine-learning algorithm with life-and-death decisions. More on this topic: