
I think the brunt of the "Tesla Autopilot Deaths" problem is not the tech, but the marketing department (which has to include Elon Musk of course). This "kind of autonomous driving" is absolutely deadly, by definition. It's never going to work well. "The car will drive by itself, until it can't. So don't pay any attention, until you need to save your own life. We'll give you 2 seconds warning". I mean, really? I don't know how Tesla could possibly think this is a good idea. By extension, anyone who actually uses or trusts autopilot is basically forfeiting their life. I think it is cognitively harder to use autopilot and pay attention, than to simply drive safely.

As far as the tech goes, I am in agreement with Elon. I think LiDAR is a short term advantage, but the companies/stacks depending on LiDAR will reach a local maximum from which they won't recover without totally starting again from scratch. LiDAR is short range, low resolution, and extremely expensive. There is no reason that visible light (+ IR) CMOS cameras can't do better by an order of magnitude, at a lower cost.




This "kind of autonomous driving" is absolutely deadly, by definition. It's never going to work well.

Instead of just speculating, the NHTSA did an actual study on this, and they reached the opposite conclusion. They investigated crash rates in Tesla vehicles that had the original autopilot hardware installed, both before and after autosteer was enabled via a software update, and found that autosteer reduced accident rates by 40%.

When Tesla talks about how much lower their accident rate per mile is compared to other cars, that's kind of statistical bullshit, since there are plenty of other uncontrolled variables in play that could affect this besides the cars themselves. But in the case of the NHTSA study, they were looking at the same drivers in the same cars, and the only difference was that one day a software update came out and enabled autosteer, and suddenly accident rates went way down. That's pretty damn convincing if you ask me.

https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.pdf


And if you'd like a critical take on that report:

But exactly how NHTSA reached that conclusion has been a mystery. It’s also the subject of an ongoing lawsuit between the agency and a group of researchers aiming to obtain NHTSA’s data so it can try to replicate the findings.

That may be an impossible task.

On Friday, in response to a Freedom of Information Act request filed in March by Jalopnik, NHTSA said it was unable to provide most of the materials and data used in the investigation, citing a “confidentiality determination” it granted Tesla in July. (It’s not uncommon for companies to submit requests to NHTSA for certain information to be treated as confidential. From there, NHTSA then chooses whether or not to grant the request.)

In particular, the agency said, NHTSA is withholding in full an “excel spreadsheet Tesla submitted that contains production data from disclosure because the spreadsheet contains information related to trade secrets and commercial or financial information.”

[...]

Autosteer, however, is relatively unique to Tesla. That’s what makes singling out Autosteer as the source of a 40 percent drop so curious. Forward collision warning and automatic emergency braking were introduced just months before the introduction of Autosteer in October 2015. A previous IIHS study shows that both the collision warning and auto emergency braking can deliver a similar reduction in crashes.

https://jalopnik.com/feds-cant-say-why-they-claim-teslas-aut...


Let's take the most conservative interpretation of that result and say that Tesla's collision warning and automatic emergency braking are responsible for the entire 40% reduction in accidents since that is similar for that feature set in other cars. That means that the introduction of autosteering at worst had no effect on the accident rate. That is a pretty big difference from "This "kind of autonomous driving" is absolutely deadly, by definition."

EDIT: So let me add that yes, this report probably isn't as pro-autopilot as some people are claiming, but I would still consider it positive for Tesla given that it is the only publicly available report on the subject and it shows that autopilot is at least no worse than a human driver.


> Let's... say that Tesla's collision warning and automatic emergency braking are responsible for the entire 40% reduction in accidents... That means that the introduction of autosteering at worst had no effect on the accident rate.

That doesn't follow logically at all. Collision avoidance and autosteer were introduced within months of one another. One could be reducing accidents while the other increases them. The most conservative interpretation is that the study may not have properly isolated these variables and is misattributing the safety gain of collision avoidance to autosteering.
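
To make that concrete, here's a toy calculation (all numbers invented purely for illustration):

    # Toy numbers, invented for illustration: an observed 40% drop is
    # consistent with autosteer *increasing* accidents.
    baseline = 100            # accidents per period before the update
    prevented_by_aeb = 50     # avoided by collision warning + emergency braking
    caused_by_autosteer = 10  # new accidents attributable to autosteer

    after = baseline - prevented_by_aeb + caused_by_autosteer   # 60
    print(f"net reduction: {1 - after / baseline:.0%}")         # 40%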


You threw an ellipsis over a qualifying phrase in my statement and then responded as if I didn't say it. Your second ellipsis removed "since that is similar for that feature set in other cars". My conclusion rests on the assumption that those features on a Tesla are similar to those features on other cars. The only way for autopilot to be less safe than a human (taking the numbers from that study as true) is for those safety features to be substantially more effective on a Tesla. That is certainly possible, but it seems like a weird argument to make since it rests on Tesla's technology being simultaneously industry leading and yet not nearly as effective as they claim.


Nah, I was clear you were characterizing that as the most conservative interpretation of the study. Your assumption is logically flawed. To wit:

> The only way for autopilot to be less safe than a human (taking the numbers from that study as true) is for those safety features to be substantially more effective on a Tesla.

You lumped the two types of features together. There's your logical flaw. The most conservative interpretation is that, yes, Tesla collision avoidance could be great while autosteering is "not nearly as effective as they claim."


These features are all interconnected and work together. At the most basic level they are about identifying obstacles and avoiding them. For an accident to happen, you usually need a failure of more than one of these systems.

For example the most recent fatality was directly caused by an autosteering error directing the car towards a median. However the accident could have been prevented if the automatic emergency braking had triggered before the collision. Considering it didn't, I have a hard time imagining that the automatic braking has surpassed the rest of the industry to a large enough extent that could make up for the autosteering being as dangerous as you are suggesting.

It is certainly possible, it just seems unlikely that Tesla threads the needle to be extremely competent at one feature while also being incompetent enough to bungle a very related feature. That scenario relies on a lot more assumptions than the scenario I proposed, which is why I described my original scenario as being the most conservative.


You're just doubling down on very poor reasoning. Collision avoidance can fail 50% of the time and still result in a massive 50% reduction in accidents. Whereas if autosteer fails 50% of the time while seducing drivers into paying less attention, it's going to increase the accident rate. It is not the underlying technologies that matter here, it's the fundamentally different use cases and failure modes.

> I have a hard time imagining that the automatic braking has surpassed the rest of the industry to a large enough extent that could make up for the autosteering being as dangerous as you are suggesting.

This is another comprehension error. Your claim was "autosteering at worst had no effect on the accident rate." Thus any increase in the accident rate caused by autosteering, not merely a "large" one, would invalidate your claim.


What bothers me is that Tesla has this data. Their cars are instrumented out the wazoo and uploading everything to the mothership. It would be trivial for them to discount cars that encountered AEB events, didn't use Autosteer, etc. from the dataset. The fact that even with that data they still feed us BS statistics about stuff like autopilot highway miles vs overall miles really worries me.


>Let's take the most conservative interpretation...

I disagree that your interpretation is the most conservative interpretation. The most conservative interpretation is that we know basically nothing about the safety of Tesla's autopilot. There are so many unaccounted-for variables here: the presence of AEB and collision detection, total utilization of TACC vs TACC + Autosteer, utilization of Autosteer over time (for example: are people conservative with it at first and then gradually increase their usage into riskier driving situations?), non-airbag accidents, the constantly updating nature of Autopilot systems (for example, what if an update actually makes the vehicle less safe?), whether people have more accidents initially driving any EV due to being unfamiliar with the instant torque of EVs, and so on and so on. Mix in proprietary data which no one is willing to share and I think the only thing we can say is that we don't know.

Also:

>it shows that autopilot is at least no worse than a human driver.

It does not show that!

There are human driven miles in both buckets; for all we know the second bucket could have more human driven miles than the first bucket, which is why the accident rate is lower. Tesla has never proven that its Autopilot system is safer than or as safe as a human driver.


As far as I know, they have not released the methodology behind that data. There are a lot of doubts as to whether it is truly showing the benefit of autosteer (in the autopilot sense) and not just the benefits of the other safety systems that were enabled around the same time.


IMO the death rate would be much more interesting, but I guess we don’t have enough data. I suspect that autopilot+human fails less often than human alone, but when it does, it does so catastrophically.


This agrees with my experience. Anecdata, I know, but I feel that I am safer with autosteer on because a moment of inattention will not cause the car to drift out of lane or drive into another vehicle. I have a 2015 S 70D, autopilot 1.5 I suppose.


"I feel that I am safer with autosteer on because a moment of inattention"

If one has "moments" of inattention significant enough to drift out of a lane or "drive into another vehicle", one should not be driving.

There is a current (Toyota ?) commercial where a young driver with her father is saved from mowing down a crosswalk full of pedestrians by some form of "auto stop" and the father proudly declares "it stopped for you". If you need that to avoid killing people, you should not be driving.


I don’t think it’s as black-and-white as you claim. Suppose a driver is inattentive at some moment with probability x, and conditions where inattentiveness would lead to a crash exist with probability y; then the probability of a crash at that moment is xy. Probably, both x and y are > 0 for all drivers, but xy could be low enough that an accident never occurs.

The driver has no reliable way of estimating x or y and therefore can’t judge the risk of their lack of attention causing a crash. Maybe they “shouldn’t” be driving according to this formula, but they don’t know that, and autosteer could save them from an accident.

Most likely, the presence of autopilot increases x and decreases y. We can’t tell which effect dominates by arguing on the internet. We need data.
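
To make the model concrete (all parameters invented; only the structure matters):

    # Toy version of the x*y model above; every number here is made up.
    # x = P(driver is inattentive at a given moment)
    # y = P(conditions where inattention would cause a crash)
    def crash_prob(x, y):
        return x * y

    manual = crash_prob(x=0.01, y=0.001)     # attentive driver, no autosteer
    assisted = crash_prob(x=0.05, y=0.0001)  # autosteer: x up 5x, y down 10x
    print(manual, assisted)  # 1e-05 vs 5e-06: autosteer wins here, but flip
                             # the two ratios and it loses -- hence: data.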


>a moment of inattention will not cause the car to drift out of lane or drive into another vehicle.

Full automatic steer isn’t needed for this. Many many cars, for at least a decade, have had lane keep assist.

Now active lane keep assist is even more common, which steers you back into your lane, but doesn’t actively steer until you drift, preventing the main abuses of the feature.


And there's the conundrum. Autosteering can help you in a moment of inattention. But by nature it also encourages moments of inattention. It is far from clear that this is a net safety gain.


I fully agree with you, except for the Lidar argument. It holds currently, because Lidars are relatively new (compared to cameras), but consider the progress cameras have made and apply similar progress to Lidar. It's ultimately a race between learning enough on mostly-visual input, and developing cheaper and better hardware. I think it's brave to make claims about which tech will develop faster.

Also, I don't think a "restart from scratch" is required when moving away from Lidar - I don't have any hard evidence to make that claim and am happy to change my mind, but I haven't found any argument for doing so yet.


> It's ultimately a race between learning enough on mostly-visual input, and developing cheaper and better hardware.

I’m not convinced that the camera-based approach is just a learning problem. As far as I know, cameras still have awful dynamic range compared to humans. Remember the Uber crash video where it looked like the victim was only visible for a couple seconds before impact? Cameras need to work a lot better to be suitable for autonomous driving.


The video that Uber released was from a dashcam.


I’m unsure about the cameras in Teslas, but modern cameras (the Sony A7M3 being the current leader) have a way better view than the average human. So much so that they can take sharp and accurate pictures in the dark while a normal human eye wouldn’t even see anything / it would seem pitch black.

As for dynamic range: HDR and the like can easily take care of that, and the proliferation of phone cameras has thankfully driven that technology forward. I don’t think cameras are even remotely a bottleneck here.


How well does the Sony A7M3 work at night in video mode from inside a moving car with oncoming traffic headlights in the FOV? That is really the case that needs to be supported.

My guess is that LIDAR is superior in that scenario, but I've never tried a Sony A7M3 either.


Also, you need to consider an A7M3 with a somewhat dirty lens.


I'm guessing this is about the A7S - the sensor really is phenomenal. But it's also large and way too expensive for a vehicle (although the dynamic range problem would be somewhat solved with good HDR).


This really depends on the camera; some have beyond-human dynamic range, which we could equip on cars. The cheap GoPro sensor was likely much worse than what the car was actually using.


I don't believe it's actually the case that cameras have better dynamic range than humans. From what I've seen, human vision is definitely in the 20+ stop range. It's especially doubtful that cameras cheap enough to put on an autonomous system would get anywhere near that.


The human eye uses a lot of tricks to achieve a high effective dynamic range, but the retina is a long way from 20 stops. https://en.m.wikipedia.org/wiki/High-dynamic-range_imaging

So, in a direct comparison where cameras are allowed to use a similar set of tricks, including a fair amount of post-processing, they actually win. I.e., retina vs CCD, the CCD wins; iris vs iris, the camera wins; etc.
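
To put numbers on "stops" (each stop is a factor of two in luminance), here's a quick sanity check:

    from math import log2

    # Dynamic range in stops = log2(brightest / darkest usable level).
    def stops(contrast_ratio):
        return log2(contrast_ratio)

    print(stops(1_000_000))  # ~19.9 -- "20+ stops" implies a roughly
                             # million-to-one ratio within one scene
    print(stops(16_384))     # 14.0 -- ballpark for a single good CMOS exposure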


> Remember the Uber crash video

I always had the feeling they softened the image. Or at the least it was a camera with low quality/settings giving a false impression of what the view was. If you pause the video right before impact, the detail is still much poorer than you would expect.


Oh, without really much doubt, Uber modified the picture. Some person on reddit went out there and took a picture with their phone camera a few days afterwards. The Uber image is pretty clearly modified to seem like the woman 'came out of nowhere' (in terms of bit-depth).

https://www.reddit.com/r/SelfDrivingCars/comments/869olw/hdr...


An HDR picture requires, among other things, sufficient stability and time to capture all of the light, shades, tones, etc.

It's more likely that the sensors in the Uber setup are set for fast processing, i.e., less light per frame, rather than for visual fidelity, i.e., more light per frame. After all, fast processing is more important to a vehicle traveling 60 mph than sharp borders. If the shutter time is set short enough, you would get something similar to what Uber released even on a DSLR.

Also, as others have pointed out, the Uber video was from a dashcam. The point of the dashcam was to record events in the event of an accident, not to be a high-fidelity visual record.


Still, that means they put out their most favorable image, one where the woman just appears from the inky darkness, not the ones that the car was actually 'seeing'. If anything, this further demonizes Uber (past Greyball, the rampant sexual harassment, and the other crumminess)


An HDR still and a normal video are going to look very different, especially when you consider the video was taken from a moving car. It proves that Uber's footage was of low quality, but not that it was modified.

A single image is always going to be a poor way of judging the light levels, because the exposure (shutter speed, aperture, ISO) can make it look drastically different.
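
To illustrate how much the settings alone matter, here's the standard exposure-value math (nothing specific to Uber's setup):

    from math import log2

    # Exposure value: EV = log2(N^2 / t) for f-number N and shutter time t
    # in seconds. Each +1 EV means half as much light reaches the sensor.
    def exposure_value(f_number, shutter_s):
        return log2(f_number**2 / shutter_s)

    print(exposure_value(2.8, 1/30))   # ~7.9
    print(exposure_value(2.8, 1/250))  # ~10.9: same scene, ~3 stops darker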


> We'll give you 2 seconds warning

It's rather important to note that Tesla does not even promise that. It's clearly the impression that a great many people have, including technically minded people here on HN. But that impression can kill you.

I'm all in favor of the long term goal here, but just focusing on the short term, what exactly is the user benefit of autopilot if you have to promise to stay 100% as focused and attentive as if you didn't have autopilot? As far as I understand, automatic collision avoidance is on by default at all times. And we've had cruise control for ages to physically rest legs and feet. So what's the point of autosteering?

I believe Tesla cites data that accident rates are lower with autopilot on. So, if they are being straightforward, that would imply it avoids collisions even more than regular always-on collision avoidance. (This is controversial as others have noted.) But... even if Tesla's implication is correct, logically this would only be true if the driver wasn't paying 100% attention to the road. Which is a violation of autopilot TOS. Therefore, it is plausible that most every accident avoided by autopilot is at best a case of misusing autopilot.

And that's a stretch. As others have noted, quite possibly it is the collision avoidance features that are responsible for the reduction in harm, not autosteering.

If instead the main point of autopilot is to train the fleet for our eventual Level 4-5 autonomous future... that seems like a dangerous way to go about it. Cheap, even. It leverages imperfect users to gather training data instead of employing trained professionals to do it in company vehicles.


> I think LiDAR is a short term advantage, but the companies/stacks depending on LiDAR will reach a local maximum from which they won't recover without totally starting again from scratch. LiDAR is short range, low resolution, and extremely expensive. There is no reason that visible light (+ IR) CMOS cameras can't do better by an order of magnitude, at a lower cost.

I disagree. A visible-light based system like Tesla proposes is, quite frankly, a joke. Camera-based systems are too easily confused and require too much processing power to work in high-speed situations. (For example, at least one of the Tesla deaths was the result of the camera system failing to identify that a stopped object wasn't part of the road. A radar-based system would have easily recognized the obstacle.) Camera systems are also too dependent on maintaining the cleanliness of the sensors, and their performance degrades based on lighting conditions. While they certainly will be part of future AV systems (i.e., for street signs), they won't be the primary sensor.

Some combination of radar and LiDAR will form the basis of AV systems going forward. LiDAR is the most likely foundation, given its accuracy and speed, especially as LiDAR is expected to drop in price by nearly 99% over the next 5 years. (Recent developments in the underlying technology, plus economies of scale, have reduced the price of a full LiDAR suite from nearly $100,000/vehicle to just over $1,000/vehicle. Technologies in the pipeline that are close to commercial production should reduce that cost to $100/vehicle or less.)


Not sure what you are going on about. Teslas HAVE radars.

https://www.tesla.com/blog/upgrading-autopilot-seeing-world-...


Tesla has extremely basic radar, not LIDAR. (LIDAR is a radar-like technique that uses laser light for ranging and positioning, both of which are fundamental tasks for a self-driving system.)

Basic radar can provide positioning and ranging, but this requires additional processing to coordinate the input from multiple radar sensors to triangulate the positions of reflections. LIDAR gives you ranging with a single sensor, and positioning if you rotate the sensor.
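
For the curious, the single-sensor ranging part is just time-of-flight math; a toy sketch (not any particular vendor's interface):

    # Time-of-flight ranging, the principle behind a single LiDAR return.
    C = 299_792_458.0  # speed of light in m/s

    def range_from_tof(round_trip_seconds):
        # The pulse travels to the target and back, hence the divide by two.
        return C * round_trip_seconds / 2

    print(range_from_tof(400e-9))  # ~60 m from a 400 ns round trip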


> will reach a local maximum from which they won't recover without totally starting again from scratch

Why would you need to start from scratch? AFAIK all the sensor output, whether from vision or LIDAR, gets thrown into a mix together with other sensors anyway, and the aggregated result from potentially conflicting sensor readings is then used in the later stages of the process.

Thus, it doesn't sound like an impossible task to switch out that part of the system. There seem to be a lot of startups out there working on commoditizing each of the sensor families too, so even if a company makes a wrong choice now, they could catch up pretty quickly.


Lidar will get cheap before vision gets good. There is no one in the industry who is serious about fully autonomous vehicles talking trash about Lidar.


I fully agree. I wish the "autopilot" would just be used passively, e.g. auto-braking when it spots something bad ahead, or some sort of warning if it notices the driver deviating from the lane (and maybe only active avoidance if that deviation means avoiding crashing into something in the other lane).


Essentially Volvo's approach: "Here is an extraordinary statistic: since the Volvo XC90 went on sale in the UK in 2002 it has sold over 50,000 vehicles, yet not a single person has been killed while driving it, or as a passenger."

http://www.bbc.com/news/business-43752226


That's a curious turn of phrase, and immediately makes me ask: how many other people have been killed by Volvo XC90s? The XC90 is a big, tall, heavy car that makes me nervous as a cyclist - does it protect its occupants at the expense of other road users?


A car of any size can kill a person on the road; it's not something you design for.


Well, European regulations (in general; I don't know the specifics off the top of my head) require(d) changes to hood/bumper/front-end designs to improve pedestrian survivability during impacts.


I know that they require 1" (or 2.54cm) of space between hood and engine, to create a little crush zone before people hit parts that won't deform.

That's the only thing I know about it, and my source is a review of a C7 Corvette ZR1 (which, to my knowledge, Chevy cannot sell in the EU because the engine forms part of the hood).


Unlikely, given Volvo has publicly stated (https://www.volvocars.com/en-ca/about/our-stories/vision-202...):

”Our vision is that by 2020 no one should be killed or seriously injured in a new Volvo car.”


I'll take that with a grain of salt. There's a lot less opportunity to strike a deer while going 90mph on a highway in BFE or SnapChat your way into rear-ending a Chevy Suburban in the UK than there is in the US.


> I fully agree. I wish the "autopilot" would just be used passively, e.g. auto-braking when it spots something bad ahead, or some sort of warning if it notices the driver deviating from the lane (and maybe only active avoidance if that deviation means avoiding crashing into something in the other lane).

That's what other companies with driving assistance features do. The system tries to keep you safe, but it will complain rather loudly if you release the steering wheel for longer than a few seconds, it's neither intended nor sold as a self-driving system.


In many other industries, automation or alternate data/program paths are kept "on" but run in simulation alongside the real operation. So there should be a period of 5 to 10 years where all Tesla cars have the autopilot system active, but controlling a virtual car in a virtual environment. As far as autopilot is concerned it's doing stuff, but only the driver is in control of the vehicle (it's best to disconnect the autopilot system completely from any physical car control systems).

Collect data and see where the two systems (one human, one automated) differ and where they react the same, and from there extrapolate how well the system works (vs a baseline of human drivers under all conditions).

Do this for 10 years and you'll have a pretty good idea of how to improve autopilot.

I'm surprised such a study hasn't been done inside Tesla itself.

The space shuttle, if I remember correctly, does something similar to this with its redundant processors - all computers do all calculations simultaneously, and if there are discrepancies then either use consensus (2 out of 3) or sound the emergency alarm.
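
Something like this minimal majority-vote sketch (heavily simplified; real avionics voting logic is far more involved):

    from collections import Counter

    # Vote among redundant channels: accept a value only if a strict
    # majority of the computers agree on it.
    def vote(results):
        value, count = Counter(results).most_common(1)[0]
        if count * 2 > len(results):
            return value
        raise RuntimeError("no consensus -- sound the emergency alarm")

    print(vote([42, 42, 41]))  # 42: the one faulty channel is outvoted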


That space shuttle thing is a different idea for a different problem. They use best 3 out of 5 to protect primarily against transient corruption, and secondarily against errors in implementation (I think; if they use a single implementation then this is a non-factor).


The problem is you can NOT give SDC directly to consumers. Period! It is still being debugged, and debugging should happen with a limited, highly controlled audience, not with a commercial offering.

Nobody debugs anything without working with a limited group of people, let alone something that can kill people.

The Waymo approach seems to be the much more responsible one.

SDC - Self Driving Cars


Autopilot is really just advanced cruise control. I think this is the biggest flaw in Tesla's marketing. People hear autopilot and think "car will drive by itself". In reality it is simply a feature that allows you to spend less mental energy on driving, just like cruise control.

From anecdotal experience I would explain it as: normal driving requires 80% of your mental attention, regular cruise control requires 60%, adaptive cruise control requires 40%, and Tesla's autopilot requires 20%. If you use it in that manner it is almost certainly safer than manual driving. The problem is that people get complacent and spend large stretches of time without dedicating enough attention to the task. This is supported by the fact that neither of the drivers involved in the two confirmed fatal autopilot accidents took any evasive action. Yes, autopilot is partially to blame for that, but so are the drivers who weren't spending enough mental capacity on driving. That isn't a problem limited to autopilot. It is the same problem when people manually driving get distracted by their smartphone.


Yep, literally just lane keeping with a much longer warning time than other higher-end cars. Shows the power of marketing.


Partially agree. OTOH, giving 20% attention is a much harder task than giving 80% - we (people) suck at multitasking, which means the larger task eventually gets 100%, and the smaller slips towards 0%. Lethal.


Planes have autopilots. Even so, they don't fly themselves. Not sure where people have been getting these notions from, unless it is from works of fiction.


Much like the automation levels of self-driving cars, there are different kinds of autopilots. A small aircraft might have heading-hold/altitude-hold only. A modern airliner will pretty much fly itself, and the crew is only there to "supervise" (to the extent that airlines are seeing a similar rise in accidents in situations the autopilot cannot handle, because the crew doesn't know when to override it and doesn't have the proficiency to fly the plane properly in those situations).

You might be surprised to learn that autoland has been around since the 60s. The UK was an early pioneer due to notoriously poor weather in the British Isles; autoland systems allowed a highly safe all-weather landing capability in zero-visibility conditions. The US was much slower, and our aircraft really only caught up to some of the UK aircraft's capabilities in the 80s and 90s, and tended to rely much more heavily on ground guidance.


It's indeed a matter of perception: Moon landing is fake but Star Trek is real - therefore airplane autopilot must be closer to the latter than to the former :-/


> anyone who actually uses or trusts autopilot is basically forfeiting their life.

Worst of all is that it endangers other people's lives, people who happen to share the road with this type of Tesla driver.


The whole "marketing confusion" angle really only makes sense for new owners or renters. Anyone who owns or uses the car for any length of time knows what the system is and what it can do. There have been a few people who wrecked under a newbie understanding, but really you're not going to get very far like that. The fatalities, at least, were people who knew how to use it but who encountered system limitations while trusting it enough to not be paying enough attention to take over. I don't think the name mattered.

Certainly it's not a perfect mode of operations, but entirely human-driven cars run into things through inattention, too. Is it worse? Better? I don't think we know for certain yet, other than it's not an obvious major problem. There's a lot of Autopilot use out there and only a handful of wrecks.


I emphatically DISAGREE.

That's like complaining about cruise control:

I think the brunt of the "cruise control" problem is not the tech, but the marketing department (which has to include American Motors and Plymoth of course). This "kind of autonomous driving" is absolutely deadly, by definition. It's never going to work well. "The car will accelerate by itself, until it can't. So don't pay any attention, until you need to save your own life. We won't give you any warning".


> This "kind of autonomous driving" is absolutely deadly, by definition. It's never going to work well.

Can you quantify this statement? All of the statistics suggest that it is in the same ballpark of safety as fully manual driving.

> I think it is cognitively harder to use autopilot and pay attention, than to simply drive safely.

Everyone is entitled to their opinion, but don't you think we should look at what the actual data says about autopilot safety (including for the various flavours of autopilot tech)?


What stats are you referring to? I think he's talking about the flavor of 'autopilot' in Tesla cars that isn't supposed to be the same as the 'self driving' of Waymo and the like. If there are stats about this it would be interesting to see; you may be right. But I think I would be nervous too about something that can do almost everything, but where at any moment I have to wrestle control of the wheel. It would basically make me just want to drive myself so I don't have to be nervous about it and watch out for the moment something seems like it may go wrong.


> LiDAR is short range, low resolution, and extremely expensive.

It was expensive and low resolution. It is already cheap, and very soon it is going to be super cheap.


> I think it is cognitively harder to use autopilot and pay attention, than to simply drive safely.

Just out of curiosity, have you driven long stretches using Autopilot? I just did a road trip, and on Tuesday I drove 11 hours, most of it using Autopilot. I emphatically disagree with you. I'm able to stay more aware of traffic and situations around me because I'm not having to constantly do tedious tasks like staying in my lane and monitoring my speed (they aren't hard tasks, but they take some amount of effort, and are mentally fatiguing after several hours).


100% agree. When I hired one for a day and used Autopilot, it just felt reckless, as I felt myself disengaging. The intelligent cruise control is much better.


The fact that Tesla was left largely unharmed by the death is just proof of Elon's massive fandom and influence over people.

That's why I don't like this guy.


I don't think autopilot is included to improve their current product, but rather to give them a head start on full autonomy. I think the reason they've rolled it out is so they can crowdsource the testing of their software. I'm just speculating here, but I would guess that this has allowed them to collect vastly more test data than Uber and Waymo at a fraction of the cost and liability.


It seems that Tesla's vast amount of data is mostly useless. After all, there's a heavy concentration of drivers in the Bay Area (where they're based), but all that local data didn't stop a Tesla from slamming into a concrete divider and killing its driver. On top of that, Tesla has collected data from two very different sets of sensors which are not compatible with each other (i.e., the original Mobileye suite and their home-built alternative), so all of the millions of miles collected with the Mobileye sensor suite are useless for their own sensor suite.

Waymo may have less data, but its data appears to be far more useful.


   (which has to include Elon Musk of course)
https://media.giphy.com/media/pVxOjXysPtqOisQqSq/giphy.gif


>> LiDAR is... low resolution,

that's nonsense

>> and extremely expensive.

and that'll change soon


> LiDAR is short range, low resolution, and extremely expensive

It is extremely cheap compared to a chauffeur.



