
>Tesla's self-driving is far safer than human driving: the number of accidents and deaths per mile driven are something like an order of magnitude lower (i.e., around 10x safer).

Lies, damn lies, and statistics.

Tesla, here again, is being funny with the numbers. They LOVE to cite autopilot-on death statistics as being "10x safer than normal driving". What they fail to note is that Autopilot can ONLY be on while driving on a limited-access highway. Highways are much safer to drive on than the mix of ALL ROADS, which is where the baseline figure comes from.

Another confounding factor is the price of the vehicle. The average CONFIGURED Tesla with the FSD package today costs what, $65k? More? Those X's and S's are $100k+. Nobody is buying that base Model 3. The point is that Tesla drivers are 1) older and 2) wealthy. Wealthy, older people get in far fewer car crashes than the average driver. In fact, car crash fatalities are really driven by two groups: drunks (or pill addicts) and young (teenage) men. I'm not saying it's IMPOSSIBLE to have a substance abuse problem and own a Tesla, but the average Tesla owner is less likely to have these issues. Young drivers are also less likely to own a Tesla.

So, Tesla autopilot stats should be compared to other comparably priced vehicles while driving on the highway ONLY. That would actually be a fair, honest comparison. I believe a recent outgoing BMW 5 series chassis finished its entire life without a single fatality in the US. That's right -- 4-5 years of service in the US without a single death. Turns out, wealthy people who drive expensive family sedans don't get in a lot of fatal highway crashes.

Here's a Forbes article (sorry) doing some of the back-of-the-napkin math. They estimated that in Q3 2019, autopilot really wasn't any safer than manual driving.

https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...
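
To see the shape of that correction, here's the back-of-the-napkin structure in Python (every number below is an illustrative placeholder, NOT the Forbes figures):

    # Back-of-the-napkin normalization. Every number here is an
    # ILLUSTRATIVE PLACEHOLDER, not real data.
    tesla_ap_miles_per_crash = 4.5e6   # Tesla-reported Autopilot-on miles per crash
    all_roads_miles_per_crash = 0.5e6  # all-vehicle, all-road baseline

    # Naive comparison (what the press releases imply):
    naive_ratio = tesla_ap_miles_per_crash / all_roads_miles_per_crash  # 9x "safer"

    # Corrections: highway miles are far safer than all-road miles, and
    # new premium cars crash less than the decade-old fleet average.
    highway_factor = 3.0  # hypothetical
    vehicle_factor = 2.0  # hypothetical

    adjusted = all_roads_miles_per_crash * highway_factor * vehicle_factor
    honest_ratio = tesla_ap_miles_per_crash / adjusted

    print(f"naive: {naive_ratio:.1f}x, adjusted: {honest_ratio:.1f}x")

With those placeholder factors, the 9x headline collapses to roughly 1.5x, which is the article's qualitative conclusion.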


> So, Tesla autopilot stats should be compared to other comparably priced vehicles while driving on the highway ONLY.

I disagree. Autopilot driving on the highway should be compared to all human drivers driving on the highway. Otherwise you wouldn't be comparing against human performance per mile driven apples-to-apples.


Modern vehicles have much safer crash characteristics than older cars. The average vehicle on the road in the US is 11 years old. Do you know how much the crash characteristics of cars have improved in the last decade? The comparison needs to stay in the modern, $65k+ vehicle realm for it to be apples-to-apples. Otherwise, you're comparing a bunch of decade-old rust buckets with heat-cycled rubber, no blind-spot monitoring, and Takata airbags to modern vehicles and claiming victory. Come on.

The BMW F10 535i had ZERO FATALITIES over its entire life in the US. Zero. It had no "self-driving" capabilities. Just lane-departure warning, BLIS, and ACC.


> The BMW F10 535i had ZERO FATALITIES over its entire life in the US. Zero. It had no "self-driving" capabilities. Just lane-departure warning, BLIS, and ACC.

Yes, all evidence I've seen indicates that cars partially driven by computers (adaptive cruise control, lane-departure warning, blind spot information, etc.) are safer than cars entirely driven by human beings. The BMW you mention is safer precisely because it is partially driven by machines. The more we automate driving, as machines get better and better at it, the safer we will all be on the road.


> Yes, all evidence I've seen indicates that cars partially driven by computers

Emphasis on PARTIALLY. Anyone who has read recent takeover-scenario studies is rightfully horrified at the notion of a completely “hands off” driving experience, where the driver is expected to remain alert and vigilant while providing no steering, throttle, or brake input. Unsurprisingly, it takes people about 2 seconds to re-engage as active drivers. 2 seconds is way too long, which is why it may be safer NOT to use a system that handles steering in addition to throttle and brake. You need to keep drivers actively engaged. And no, “touch the steering wheel every 30 seconds” is not active engagement. And if a car has “self driving” but also active driver monitoring, what’s the point? The driver doesn’t get to relax at all. The stress of driving doesn’t come from the inputs unless you’re racing. The stress of driving comes from having to stay alert. If I have to stay alert, I’d rather just drive myself instead of trusting an experimental system that drives like an indecisive, half-blind grandmother.

The BMW was safe PRIMARILY because it’s a well-designed, modern car, driven by an older and wealthy (safe) demographic. The assistance systems are probably secondary. They weren’t even standard on all vehicles and they were very primitive in that first generation.

If you actually read the Forbes article above, the back-of-the-napkin math DOESN’T indicate that Teslas on Autopilot are safer than normal driving. That’s the entire contention. I do not think these full-takeover systems are safer at the present time than active human drivers in comparable vehicles with safety-assist systems. Tesla is very clearly fudging the numbers to make it appear as if Autopilot is safer, but the claim doesn’t stand up to some really basic analysis.


> What they fail to note is that Autopilot can ONLY be on while driving on a limited-access highway.

This isn’t true anymore


What an annoying charlatan. Karpathy is a brilliant computer vision engineer, but he has let his expertise in that subfield cloud his judgement on achieving the overall goal of autonomous driving.

Musk and Karpathy have been dead wrong about LIDAR for years. Remember Musk making the absurd claim of a million Tesla robotaxis by 2020? I think the most hilarious part is that both Karpathy and Musk claim the LIDAR systems are too expensive. Yet at the same 2019 Autonomy Day, they simultaneously claimed that Teslas would be able to drive themselves and operate as robotaxis, earning their owners passive income and therefore justifying significantly increased MSRPs. So the $7k LIDAR system (which accelerates safe autonomous driving) is not worth the cost, yet stumbling towards autonomy on vision only is? If the car becomes a money-earner, you should use every system available. The 2019 Autonomy Day was an utter embarrassment. I'm sure 2021 will be more of the same.

So now it seems they've realized the folly in their logic. What's the solution? Well, you can't just complain about the COST of non-vision perception systems, because, as noted above, that doesn't make sense if you're going to simultaneously claim that your car will be able to earn you money (offsetting any extra hardware cost that gets you there faster). No, now you have to smear all non-vision perception systems. You have to say their data is worthless and detrimental to the overall effort.

The entire claim from the 2019 Autonomy Day that "vision is what humans use to drive" is also completely bogus. Humans use many senses to drive. They feel the pedals and steering wheel. They use their vestibular sense to sense motion. And they use their hearing to hear other vehicles, sirens, and issues with their own car (driving with headphones in is illegal for a reason). Any modern car, even a Tesla, is also using far more than just vision when attempting autonomy. Forget about radar and LIDAR for a moment. There are endless sensors in the drivetrain. Steering-angle sensors and multiple IMUs for the electronic stability control. Brake and wheel sensors for the ABS. Temperature sensors everywhere. And countless other ECUs. The notion that vision exclusively is getting you there is nonsense. There's no good argument against LIDAR today other than perpetuating a lie to sell cars that are cheaper to produce. And Karpathy has a massive professional conflict of interest in making CV the main player -- he's a CV expert. He was never a fusion expert before his hire. If CV is the pathway forward, he gets to remain "the guy". It certainly behooves HIM to make that claim.
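
To make "fusion" concrete, here's a toy inverse-variance fusion of two independent range estimates (a minimal sketch; the noise figures are invented):

    # Minimum-variance fusion of two independent range estimates.
    # The variances below are invented for illustration.
    def fuse(est_a, var_a, est_b, var_b):
        w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
        fused = w_a * est_a + (1 - w_a) * est_b
        fused_var = 1 / (1 / var_a + 1 / var_b)
        return fused, fused_var

    depth, var = fuse(48.0, 25.0,  # vision: noisy at range
                      45.2, 0.04)  # lidar: tight
    print(f"fused: {depth:.2f} m, variance: {var:.3f}")

The fused variance is strictly smaller than either input's; that's the basic statistical case for not throwing a sensor away.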

Autonomous driving will not be achieved in this decade. Perhaps ever. Ask yourself honestly: if you were tasked with building an autonomous commercial aircraft OR an autonomous car, which would you choose? Most would say aircraft -- nothing to really hit in the air, fully mapped airport and runway systems, and far fewer variables. Yet autonomous aircraft still do not exist. Perhaps the edge cases always rule the roost. Ask yourself why driving would be any different...


Re: senses - have you ever played a driving sim? You can drive just fine with vision alone, without tactile feedback.

Sirens are primarily a means to get you to look in a direction. 360° cameras can notice the emergency vehicle as soon as it's visually relevant. And if they decide they need an audio siren detector, that's practically intern-level signal detection at this point. Hardly a dealbreaker.
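
Something like this, even (a crude sketch; the band edges and threshold are made up, and a real detector would also track the siren's frequency sweep over time):

    # Crude siren detector: fraction of spectral energy in the siren band.
    from scipy.signal import spectrogram

    def siren_score(audio, sr=16000, band=(500, 1800)):
        f, t, S = spectrogram(audio, fs=sr, nperseg=1024)
        in_band = (f >= band[0]) & (f <= band[1])
        return S[in_band].sum() / (S.sum() + 1e-12)

    def is_siren(audio, sr=16000, threshold=0.6):
        # Run on a rolling ~1 second microphone window.
        return siren_score(audio, sr) > threshold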

100%, hands down, I would pick the autonomous car over the airplane. Flying a plane isn't just moving the aluminum bird through 3-space and periodically taking off or landing.

Autonomous aircraft don't exist because a huge amount of the ritual of flight happens before and after the captain is even on the plane, let alone flying it. There is a tremendous amount that the pilot and copilot go through on the ground, before taxi, and after liftoff. It's way, way more involved and demands far more general intelligence. We can design AI to take off, path to a destination, and land. Those three things are the easiest parts of flying, yet they do not comprise the act of flying a 2-seater, let alone an airliner.


Mentioning the senses wasn't meant to be an itemization of "senses" that a car or car operator needs. Of course you can still drive decently while deaf and without feeling the g-forces. The point is that vision is NOT the only input. In semi-modern (non-self-driving) cars, the driver is still assisted by a flurry of additional vehicle sensors in addition to vision and the other human senses. So dismissing LIDAR as "not needed" is foolish.

I understand the complexities of modern commercial flight. I still consider it far less complex (computationally) than driving on today's public roads. The fact that you have V2V communication out of the box (via transponders) is probably the biggest factor. I would not rule out autonomous driving WITH a standardized V2V / V2E system. Without one? I'm bearish.


My understanding is that the case against LIDAR isn't just that it's "not needed," but that it can also trap you in local minima under good (e.g. fair-weather) conditions.

Having LIDAR means you can rely on LIDAR, which means you need the LIDAR to be 100% reliable. I've read claims from some companies that their lidar works through rain and snow, but I haven't seen actual renders of it yet to judge for myself.

Having no lidar acts as a forcing function for Tesla to get their vision system that much more reliable. I think this will ultimately result in a better vision system than the competitors', and I believe that (somewhat paradoxically) this will result in an overall better, safer, and more reliable system when it comes to the 20% long tail.

Lidar+vision will probably dominate in fair weather driving. I'm skeptical it's of much benefit in bad weather, and that long tail is the hard part.


> Having no lidar acts as a forcing function for Tesla to get their vision system that much more reliable.

Alternatively, no LIDAR means you aren't gathering corresponding LIDAR and vision data that you can use to improve detection of the vision system.


Tesla does use LiDAR for calibration and testing of their vision systems. But they also have a million cars on the road with radar, which gives fairly accurate depth information. They use the depth information provided by the radar to automatically label their data, and then they can check their vision-based estimates against it. Karpathy has mentioned this approach in a few different talks.
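
Schematically, the auto-labeling loop he describes looks something like this (my sketch of the idea; every name here is a hypothetical stand-in, not Tesla's code):

    # Toy radar-supervised depth labeling.
    from dataclasses import dataclass

    @dataclass
    class RadarDetection:
        u: int          # pixel column the radar return projects to
        v: int          # pixel row
        range_m: float  # radar-measured distance in meters

    def auto_label(frames):
        """Turn (image, radar returns) pairs into depth labels, no humans needed."""
        return [(image, (d.u, d.v), d.range_m)
                for image, detections in frames
                for d in detections]

    def mean_abs_error(vision_depth_fn, labels):
        """Score a vision depth estimator against the radar-derived labels."""
        errs = [abs(vision_depth_fn(img, uv) - d) for img, uv, d in labels]
        return sum(errs) / len(errs)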


> Musk and Karpathy have been dead wrong about LIDAR for years.

Right, because all those companies that use LIDAR are making billions driving people around. Oh, wait, actually they are burning hundreds of millions every year.

> I think the most hilarious part is that both Karpathy and Musk claim the LIDAR systems are too expensive.

That's literally the opposite of what Musk says about Lidar. He LITERALLY said he wouldn't use it even if it were free.

> yet stumbling towards autonomy on vision only is

Look up the Marginal Revolution from 1870.

> The entire ...

... The entire paragraph is an exercise in missing the point.

It seems what you're really saying is not that Tesla is a charlatan, but that the whole industry is.


>It seems what you're really saying is not that Tesla is a charlatan, but that the whole industry is.

The entire industry has an air of charlatanism. Tesla is the biggest charlatan of the bunch. The LIDAR-using ventures at least have a snowball's chance in hell. Tesla has no chance with vision alone.


I would love to know what your credentials are that you can say this with such certainty.


Just the fact that this person created a throwaway account just to trash talk speaks volumes.


> Yet autonomous aircraft still do not exist.

Well, that's only true with some qualifications [1]

[1] https://www.youtube.com/watch?v=B2uc98EEPqE


It seems as if the people gobbling up the "Tesla has the data! Autopilot will keep getting better!" line have never trained a neural network in their life. Models converge. Loss stops decreasing, regardless of more incoming data. Extreme manual data-cleaning effort becomes required to prevent overfitting. Model architecture has to change and hyperparameters have to be tweaked. Then you're back at square one as far as testing goes if you change any of those things.
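
You can see that dynamic with a trivial plateau check in any training loop (window and tolerance are illustrative):

    # Convergence check: has validation loss stopped improving?
    def has_plateaued(val_losses, window=10, tol=1e-4):
        if len(val_losses) < 2 * window:
            return False
        recent = sum(val_losses[-window:]) / window
        earlier = sum(val_losses[-2 * window:-window]) / window
        return (earlier - recent) < tol

Once this fires, more data from the same distribution buys you nothing; you need a new architecture, new hyperparameters, or genuinely novel, cleaned data -- and then your old test results are stale.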

The notion that Tesla's model HAS to keep improving simply because they will be able to pile on more (unlabeled!) data is laughably false. And, in fact, quite insulting to the intelligence of even the most casual ML engineers.


> And, in fact, quite insulting to the intelligence of even the most casual ML engineers.

Exactly, casual ML engineers. The issue of plateauing tends to occur because there is no more novelty to be had in the data. What mega-experiments like GPT and its kin have shown us is that you actually can keep adding novel data and keep improving the model. Kinda inelegant, yet effective. The problem is that most institutions can't add more novelty beyond a certain scale, since doing so usually means shoveling more money at data storage and compute, on top of the cost of collecting the novel data itself.
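
The empirical picture is a power law: loss keeps falling with more novel data until an irreducible floor. The constants below are made up; only the shape matters:

    # Illustrative data-scaling law: power-law decay toward a floor.
    def loss(n_examples, a=2.0, b=0.08, floor=0.6):
        return a * n_examples ** -b + floor

    for n in [1e6, 1e8, 1e10]:
        print(f"{n:.0e} examples -> loss ~{loss(n):.2f}")
    # 1e+06 -> ~1.26, 1e+08 -> ~1.06, 1e+10 -> ~0.92

Each 100x of novel data keeps buying a real, if shrinking, improvement; the catch is that the storage and compute bill scales along with it.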

Tesla merely has to open the money tap to get more of both compute and storage, and let the real-time data flow in.


> Tesla merely has to open the money tap to get more of both compute and storage, and let the real-time data flow in.

And if you watch the other parts of the presentation, you'll see the bits about them buying clusters with 5k+ A100 GPUs. Presumably they intend to do something with those. Probably not streaming Fortnite concerts.


I would agree if their increase in data were linear, but it is increasing by orders of magnitude, which should have qualitative consequences for what they're able to accomplish as they claw their way through 9s. I don't see how it's possible to get progressively more 9s without scaling in both data and compute.

The point of the higher scale isn't just more data, it also makes it easier to solve the unbalanced data problem, because rarer and rarer scenarios will appear in large enough numbers to work with.
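
The arithmetic on rare scenarios (the rate is illustrative, not Tesla's numbers):

    # Why fleet scale matters for the long tail.
    rare_event_rate = 1e-8  # hypothetical: one occurrence per 100M miles

    for miles_per_year in [1e8, 1e9, 1e10]:
        expected = rare_event_rate * miles_per_year
        print(f"{miles_per_year:.0e} miles/yr -> ~{expected:.0f} examples")

Each order of magnitude of fleet mileage turns a once-ever event into enough examples to actually train and test on.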


> The notion that Tesla's model HAS to keep improving simply because they will be able to pile on more (unlabeled!) data

That's the exact opposite of what they're doing.

It seems like you didn't watch the talk at all.


>to pile on more (unlabeled!) data

Given the nature of that data, you can get a lot of unsupervised mileage out of it, so to speak.


You make it sound extremely manual and sequential when reality is anything but.

A team with funding like Tesla's, Google's, or FAIR's is going to be using NAS (neural architecture search) and a continuous testing pipeline. Tesla arguably has the best environment for continuous testing, which is the most difficult part of improving a model. Andrej even said in his talk that their supercomputer is in the top 5 for FLOPs.

SOTA on ImageNet for the past few years has been driven by pre-training on massive datasets. Vision transformers are increasingly common and are extremely data-hungry.

