
Autopilot cannot be enabled until the position of the cameras is calibrated. That's why it requires you drive on well-marked roads for several miles when you first receive your car, before it works.
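Roughly, the idea (my sketch of the general technique, not Tesla's actual code) is that on a well-marked road the two lane lines meet at a vanishing point, and its offset from the image center gives the camera's pitch and yaw, assuming known intrinsics:

  # Rough sketch of lane-based extrinsic calibration; hypothetical, not Tesla's pipeline.
  import cv2
  import numpy as np

  def lane_vanishing_point(gray):
      edges = cv2.Canny(gray, 50, 150)
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                              minLineLength=60, maxLineGap=20)
      if lines is None:
          return None
      left, right = [], []
      for x1, y1, x2, y2 in lines[:, 0]:
          if x1 == x2:
              continue                       # skip vertical segments
          slope, intercept = np.polyfit([x1, x2], [y1, y2], 1)
          if slope < -0.3:
              left.append((slope, intercept))
          elif slope > 0.3:
              right.append((slope, intercept))
      if not left or not right:
          return None
      al, bl = np.mean(left, axis=0)         # mean left lane line  y = a*x + b
      ar, br = np.mean(right, axis=0)        # mean right lane line
      x = (br - bl) / (al - ar)              # intersection = vanishing point
      return x, al * x + bl

  def pitch_yaw_degrees(vp, fx, fy, cx, cy):
      # Offset of the vanishing point from the principal point -> camera angles.
      # In practice you'd average over many frames of straight driving.
      x, y = vp
      pitch = np.degrees(np.arctan2(cy - y, fy))
      yaw = np.degrees(np.arctan2(x - cx, fx))
      return pitch, yaw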



I have zero knowledge of Tesla cars, but do wonder why this can't be pre-calibrated at the factory?


I don't know, but the calibration also has to be re-done if you have a camera replaced.


Though that is not how neural nets work.

It's anything but trivial to make a neural net properly abstract over some "camera position parameter".

Moreover, it's nearly impossible to be sure it properly abstracted in all cases. I.e. it might abstract correctly in every case except some edge case which looks unremarkable to a human but, for some arbitrary reason, is special to the NN.
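To illustrate what I mean (a toy sketch in PyTorch, definitely not Tesla's actual architecture): if the camera pose is just another input to the network, nothing forces the net to treat it the way the geometry says it should, and no finite test set proves it does:

  # Toy example: camera pose fed to the net as an extra input vector.
  import torch
  import torch.nn as nn

  class PerceptionHead(nn.Module):
      def __init__(self):
          super().__init__()
          self.backbone = nn.Sequential(
              nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
              nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten())
          # 32 image features + 6-DoF camera pose (pitch, yaw, roll, x, y, z)
          self.head = nn.Sequential(nn.Linear(32 + 6, 64), nn.ReLU(),
                                    nn.Linear(64, 1))   # e.g. "obstacle ahead" score

      def forward(self, image, cam_pose):
          feats = self.backbone(image)
          return self.head(torch.cat([feats, cam_pose], dim=1))

  # The failure mode described above: a pose slightly outside the training
  # distribution can produce a confidently wrong score, and no amount of
  # testing proves it can't.
  model = PerceptionHead()
  score = model(torch.rand(1, 3, 128, 256),
                torch.tensor([[0.02, -0.01, 0.0, 0.0, 1.2, 1.5]]))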

Anyway, this is highly speculative and might very well be unrelated to the given Tesla's behaviour.


I’m pretty sure some things are handled by normal algorithms, and that not everything is one giant neural net.


Hard to know without being on the team, hence the WAG qualifier on my guess. My reasoning is that the Model Y is built on the same chassis as the Model 3, so a quick and dirty solution would be to reuse the same training set results. But it most likely is something else.

This is a roundabout way of saying that maybe, just maybe, the problem is specific to the Model Y.

I'm rooting for Tesla to fix it. Maybe a Tesla engineer reads Hacker News? I've already filed a complaint with the Tesla Sales Manager. Or perhaps the log of my screaming in terror did the trick. Is prosody for QA sentiment a thing?


I don’t think they segregate their training data based on car models. As I mentioned in a comment above, Comma.ai doesn’t do this and their devices support tons of cars. It would be very odd if a company far smaller than Tesla was able to figure out how to account for different camera positions and Tesla wasn’t. I bought a 2022 model year car and plugged the Comma device in and it just worked, and they would have had pretty much no training data from my car at that point. Just my speculation though.


Are there more recent papers? I see one from 2016 [1], but then again my assumptions are a bit dated as well. I was thinking: could a different horizon in a CNN mid-layer trigger a false positive? Perhaps classify a slight rise as a bumper or some other obstacle?

Maybe a simpler system, like Comma.ai's cameras, has looser tolerances. Somewhat akin to the one or two eyes of a human driver.

Maybe it is policy. I could imagine the brand hit Tesla takes for every crash, even one due to driver error. Maybe phantom braking is an artifact of erring on the side of caution. Maybe lawyers got involved. (The horror!)

Anyway, idle speculation, this.

[1] https://arxiv.org/pdf/1608.01230.pdf


I have a Comma 3 in my car, and it has a calibration phase that takes like 3 minutes of driving. The device works for many different cars of different sizes, and their driving algorithm uses neural networks. As far as I know you still have to get the device fairly close to centered on the windshield for it to work, but clearly you can still do some sort of calibration based on driving data alone. Maybe Tesla can account for even more deviation in camera position because they know ahead of time where the cameras are mounted?
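For anyone curious how calibration from driving data alone can work in principle (my own sketch of the general idea, not Comma's actual implementation): while the car drives straight, the optical flow radiates out of a single point, the focus of expansion, and that point's offset from the image center gives the mount's pitch and yaw:

  # Sketch of motion-based calibration via the focus of expansion (FOE).
  # Hypothetical illustration, not openpilot's actual code.
  import cv2
  import numpy as np

  def focus_of_expansion(prev_gray, gray):
      corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=10)
      if corners is None:
          return None
      tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
      ok = status.ravel() == 1
      a = corners[ok].reshape(-1, 2)        # points in the previous frame
      d = tracked[ok].reshape(-1, 2) - a    # flow vectors
      # Every flow vector lies on a line through the FOE; solve the
      # least-squares intersection of those lines.
      n = np.stack([-d[:, 1], d[:, 0]], axis=1)
      n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
      foe, *_ = np.linalg.lstsq(n, np.sum(n * a, axis=1), rcond=None)
      # (x, y) in pixels; average over many straight-driving frames and
      # convert to pitch/yaw with the camera intrinsics.
      return foe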


I'm not saying it's impossible or will take long; I'm saying it's not easy to implement, and depending on your pipeline, subtle bugs can sneak in that you might not be able to find with any testing.



