Tesla has a self-driving strategy other companies abandoned years ago (arstechnica.com)
168 points by close04 on March 8, 2019 | 212 comments



Waymo keeps plugging away. Each year, for the last few years, the number of miles between disconnects they report to the CA DMV has doubled. Three more doublings, and that number will be bigger than average miles between accidents for human drivers. Then they can ship a product.
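
Back-of-the-envelope, with my own assumed benchmark (the ~11k miles per disconnect is roughly Waymo's 2018 CA figure; the human miles-per-crash number varies wildly by source, so treat it as a placeholder):

  # Rough sketch; miles_per_human_crash is an assumption -- estimates range
  # from ~100k miles (counting minor crashes) to ~500k (police-reported only).
  miles_per_disconnect = 11_000
  miles_per_human_crash = 100_000
  doublings = 0
  while miles_per_disconnect < miles_per_human_crash:
      miles_per_disconnect *= 2
      doublings += 1
  print(doublings)  # 4 with these inputs; "three" fits a benchmark nearer 88k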

Nobody else is even close.

This problem is slowly being solved by normal engineering practices. The "fake it til you make it" players are being left behind. Uber has been shown to be totally incompetent. Tesla really just has a good lane follower and a mediocre car follower. Apple has been trying to hand-wave by talking about "significant disconnects" vs. all the disconnects the DMV requires them to report.

The LIDAR industry is struggling, but there is progress. Quanergy seems to have been mostly hype.[1] Continental, the big European auto parts maker, bought Advanced Scientific Concepts, which makes and sells a good but expensive flash LIDAR used in DoD and space applications. They packaged it up for automotive use, and are waiting for the self driving industry to catch up. That technology uses exotic indium-gallium-arsenide sensor ICs, which are expensive in small quantities but would probably be affordable if they could sell a few million a year.

This looks like a problem that's being solved. Just not fast enough for startups used to quick payoffs in software.

[1] https://www.bloomberg.com/news/features/2018-08-13/how-a-bil...


"Three more doublings, and that number will be bigger than average miles between accidents for human drivers. Then they can ship a product."

I am skeptical of that, because I expect that even if disconnects are ultra-rare overall, and the testing is representative of normal usage, there will be enough people who routinely use the technology in conditions far out on the tail of the distribution that, for them, disconnects will be much less rare. And that might create a PR blowup or worse.

I think what is worrying me may be something called "heteroscedasticity".

https://en.wikipedia.org/wiki/Heteroscedasticity
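
Here's a toy simulation of what I mean, with all rates invented for illustration; the fleet-wide average looks fine while the tail sees disconnects ten times as often:

  # Two subpopulations with different per-mile disconnect rates (made up).
  easy_rate = 1 / 100_000  # disconnects per mile, easy routes (95% of users)
  hard_rate = 1 / 5_000    # disconnects per mile, tail routes (5% of users)

  fleet_rate = 0.95 * easy_rate + 0.05 * hard_rate
  print(round(1 / fleet_rate))  # fleet-wide: one disconnect per ~51,000 miles
  print(round(1 / hard_rate))   # tail users: one per 5,000 miles, 10x worse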


For self-driving cars, what would that be? You can misuse a non-self-driving car by speeding, giving you above-average chances of accidents. You can be old, giving you above-average chances of being involved in accidents (per km driven). You can smoke, giving you above-average chances of getting lung cancer. You can misuse a partially self-driving car by somehow convincing it that you are paying attention to traffic while actually not, and then blame any accidents on the self-drive feature.

But fully self-driving cars only have two inputs: starting point and destination. It doesn't matter whether you are old, sleeping, 13, or a dog. Unless some roads are more susceptible to accidents than others, or manufacturers let people fiddle with how much a car should risk accidents in order to shave a little off the travel time, I don't see how it's possible. And if manufacturers really do give that option, it's truly their fault.


> You can be old, giving you above average chances of being involved in accidents (per km driven).

You have to be very old for this to be true.

It turns out that 16-25-year-olds are worse than any other group until something like 75-80 years old.


>You can misuse a non-self driving car by speeding, giving you above-average chances of accidents.

I feel like it's the opposite... When I drive the speed limit, other drivers around me become more aggressive. Even if they have to exit the highway 20 seconds later they fiercely fight for tiny speed differences of 10km/h, start flashing their headlights, etc.


I meant speeding in the more general sense of driving faster than what's safe given the circumstances. E.g. you shouldn't get near the posted limit if there's ice on the road. In dry, well-lit summer conditions with nobody on the road you can drive much faster.

If you are driving during rush hour, when everyone is stressed, then idiots will flash headlights, behave riskily, etc., that's true. And yeah, driving slightly over the speed limit to be consistent with the traffic around you is generally a good idea. But if they are too far over the limit, that's their doing, not mine, and if the risky behaviour causes an accident in which I'm involved, it won't be my fault if I'm following the rules. Conversely, no judge will accept it as an argument if you speed because of the idiots and you cause an accident.

I personally just let the idiots be idiots. Especially when the limit has a reason, e.g. a construction site or something: they put a lower speed limit there for the safety of the construction workers. I won't compromise their safety, not a bit.


Maybe on access-limited highways.

But on (sub)urban streets driving the speed limit decreases the chance of causing serious injury to other road users, including pedestrians, even if it may lead to an increase in the total number of crashes between motorists.


In the hands of a consumer, don't forget lax maintenance, driving on unsealed roads, and choosing to drive in poor conditions. There are probably a lot more. It is surprising how much some people abuse machines.


Waymo's "shipped product" is likely to be a fleet service in several cities, at least at first, so maintenance is their responsibility. Likewise, I'd imagine unsealed roads would be outside initial coverage areas.

(Note that while I work at Google, I don't have any particularly relevant inside knowledge. What I'm saying is consistent with their current pilot in Phoenix and public articles.)


They already have a fleet, although in just a part of one city. They will keep expanding the fleet and the area/cities covered over time. Instead of one big announcement that we have self-driving cars, they will keep expanding and improving over decades. And just as people kept harping about the driving range of electric cars, they will suddenly be everywhere. Car manufacturers are wary of Waymo so far, but the first large manufacturer to join hands with them will probably dominate the car/transport industry of the future.


> Unless some roads are more susceptible to accidents than others

By chance do you happen to live somewhere without icy roads?


I live in Minnesota and have my entire life. I hope very much that self driving cars lead to a more sane response to icy conditions, which is to not drive. If the vehicles are in a position to enforce that at a technical level then all the better.


I agree. I was simply pointing out that "unless some roads are more susceptible to accidents than others" certainly happens, very frequently.

But I too am hopeful that machines are better than human drivers in all scenarios.


Funny. This reminds me of all the comments about apps like Dropbox. "Do you happen to live with strict low data caps?"

Well, guess what, those guys are still on Reddit making snarky comments and the Dropbox guys have IPO'd.


I certainly hope hazardous weather conditions go the way of small data caps.


Haha, or we can get better at dealing with them, and introduce the tech to those conditions as we get better.

But fair play for playing with the analogy. Funny.

Yes, people who don't live in pleasant places will not get some tech early. C'est la vie. We'll get there.


I imagine that computers are a lot better than humans at control tasks like keeping traction on icy roads. I don't think that this is a hard problem in self-driving tech.


I've now seen the word heteroscedasticity twice in 5 minutes and thank you for providing the link to Wikipedia. Although I think the 'meta-variety' synonym was quite apt on the other page.


I'm way less worried about this than when Waymo was originally called the Google Self-Driving Car Project.

If Uber and Tesla continue with their current plans, they will have enough accidents that self-driving cars being in accidents will cease being headline news. They just become an actuarial math problem, just like regular car insurance is today.

(This is my opinion and not that of my employer.)


The last Ars article that I read on Waymo did imply that they had hit a metaphorical wall on their progress, and were keeping things looking good by sticking to lower-speed residential neighborhoods and avoiding situations they had trouble with, like making unprotected left turns onto major streets and passing slow or stopped vehicles on a busy road. Essentially things that require being able to understand that the other cars around are driven by people who tend to react to things, and will probably slow down if you make a turn into a lane that they would be passing you in.

https://arstechnica.com/cars/2019/02/googles-waymo-risks-rep...

https://arstechnica.com/cars/2018/12/we-finally-talked-to-an...


What's the problem? If a robot can statistically drive a car more safely than a human, that seems like a net win.

Maybe humans shouldn't be making left turns in traffic or passing trucks either. Those are probably, statistically, the most dangerous things you can do when driving. But we do them all the time, because we have no idea how dangerous they really are.


The problem is the other car has a human in it, and those situations are ones where the reaction of the other human is critical to safety analysis, but generally unpredictable.


A computer will be able to react to a sudden crazy movement by another driver far better than a human will.


“Will be”? Yep. Given an infinite timescale computers “will be” able to do lots of things better than people, such as navigate rough terrain or open a door.

Currently, they can’t do any of that better than humans, and probably not for several years.


It sounded like the post you replied to was referring to emergency maneuvers, and I think automated systems are probably better than humans at some kinds of emergency maneuvers even right now.


Performing emergency maneuvers, yes. Predicting the need for emergency maneuvers based on the varied and unpredictable responses of humans in exceptional circumstances, not so much.


Yeah, UPS trucks don't make left turns. Maybe self driving cars can do the same.


Please stop fixating on those California disengagements. They mean approximately nothing. They are unregulated, voluntarily reported incidents. A low number could mean that safety drivers are acting irresponsibly, or that the vehicles are only being used on public roads in known-safe circumstances.

I strongly suspect that the quest for a low disengagement number is what got that woman in Arizona killed by Uber. These reports really should be sealed from the public. AV startups seem to be using their disengagement rate as a way to raise money. They are aided by journalists hungry to make sense of a highly secretive field, who publish these figures completely out of context.


Really? I'm pretty sure it was irresponsible staffing and unsafe software: Uber reduced their drivers from two to one per vehicle. That driver was watching a cell phone video and not watching the road, but Uber should have known that was a risk and taken action to mitigate it.

The Uber self-driving program was trying to move too fast, before the software was ready. That and the operations failure made it inevitable that someone was going to be killed.


> These reports really should be sealed from the public.

Yeah, the public has no right to know what vehicles on public roads are capable (or incapable) of.

And we wouldn't want pesky innocent deaths to impede Uber's ability to "bring self driving to market"...


The public has a right to know how unaided AVs on public roads operate. The "number of disengagements" of prototypes reported by corporate research programs has no relation to this.


What's a better metric or way of measuring it?


You should take Waymo's numbers with a healthy dose of skepticism. They are very selective about which disengagements they report. Here are the details from their 2017 letter to the DMV:

This report covers disengagements following the California DMV definition, which means “a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” Section 227.46 of Article 3.7 (Autonomous Vehicles) of Title 13, Division 1, Chapter 1, California Code of Regulations.

Waymo has developed a robust process to collect, analyze and evaluate disengages for this report. We set disengagement thresholds conservatively for our public road testing. The vast majority of disengagements are not related to safety. Our test drivers routinely transition into and out of autonomous mode many times throughout the day, and the self-driving vehicle’s computer hands over control to the driver in many situations that do not involve a failure of the autonomous technology and do not require an immediate takeover of control by the driver.

To help evaluate the safety significance of disengagements, Waymo employs a powerful simulator program. In Waymo’s simulation, our team can “replay” each incident and predict the behavior of our self-driving car if the driver had not taken control of it, as well as the behavior and positions of other road users in the vicinity (such as pedestrians, cyclists, and other vehicles). Our engineers use this data to refine and improve the software to ensure the self-driving car performs safely.

https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115...


Waymo is ahead, but only as a taxi service in dense urban cities. Tesla, however, can one day deliver a car that will self-drive 90% of the time. Many, especially in North America, will only want the latter. In North America, a car is still seen as a freedom enabler, one that lets you go to the beach, go on a hiking trip, and hit a few off-road tracks in a park on weekends. None of these will be possible with a Waymo car, but all are still possible with a Tesla Model S/3/X.


People take their commuter Teslas off-roading? Every time I try it ends with strange looks from the tow truck driver.


> Three more doublings, and that number will be bigger than average miles between accidents for human drivers. Then they can ship a product.

They can ship a product that can drive under the conditions they're testing under. Which would be appropriate for autonomous taxi service in certain municipalities, but presumably not for the general market.

Granted, that's been their business plan for a while now.


So can Tesla. They don’t have to enable full autonomous driving in all conditions.


Do you have hard numbers so we can establish a trendline?

My perhaps false recollection was that Waymo disengagements had flatlined 2017 to 2018.


You're right; 2016 to 2017, not much improvement. Then, twice as good in 2018.[1] (CA DMV stats run on a year from December to November, so the most recent data set is for 2018.)

[1] https://thelastdriverlicenseholder.com/2019/02/12/disengagem...


I spam this on every Waymo mention, so I apologize, but can they please automate highway driving first and then work on getting me through Taco Bell? I care way more about taking a nap between cities with autorecharge than automating a 4-minute trip to a fast food joint.

If they really are doing this well with non-freeway driving, then that just tells me they could have released freeway driving about 4 years ago.


At least Tesla has an actual product you can buy? Where’s Waymo’s product? Will they sell to consumers or to the incumbent car makers?

I agree with your statement about Tesla’s marketing of self driving. My Grand Cherokee isn’t so far off their mark. It has lane assist, adaptive cruise control with stop, and parallel and back-in parking. I’d bet Jeep could do summon and more stuff on the highway.


> At least Tesla has an actual product you can buy?

What self driving product do they have? Elon's "Oh yeah, we will have full self driving coast to coast in [8 months]" is horseshit to the point of "How is he allowed to say this on an earnings call?"

Other competitors, widely recognized to be ahead of Tesla, estimate 5, more likely 10, YEARS. Coast to coast? Your Tesla will navigate a Pittsburgh winter storm on poor roads in Level 5 autonomy... this year? No.

So then we're left with "sophisticated driver assist". Actual products you can buy with sophisticated driver assist are sold by: Toyota, Honda, BMW, Mercedes, Audi, Chrysler...

My Jaguar has: adaptive cruise, four-way sensors, lane assist (including the ability to prevent you from changing lanes if a blind-spot obstruction is noted), adaptive front lighting and high beam assist, driver condition monitoring (detects multiple 'drift' or lane departure events, or sub-optimal braking), traffic sign recognition and adaptive speed limiting, rear traffic monitoring (including the ability to avoid it, if possible), full auto parking, and so on.


My wife’s car has a similar long list of “features” (Mercedes DISTRONIC package with active lane keeping) and it is nothing even remotely comparable to driving my TM3 with autopilot.

Those cars do not steer themselves. The difference between having to steer the car and not having to steer the car is extreme.

It totally changes the way you drive. Instead of looking down your lane and keeping the car centered, you scan all around you and keep situational awareness of the drivers around you while the car keeps itself in lane. It’s a totally different experience.


> The difference between having to steer the car and not having to steer the car is extreme

Assuming you keep your hands on the wheel as required, what exactly is the difference? I only found it to be a distraction, as it takes me considerably longer to re-engage as the human driver in unexpected situations after a period where I just let go a little.


That’s your experience as a Tesla driver?

I don’t have any “reengagement lag” as you might say. When I want to start steering the car again I just start steering. There is a slight resistance felt in the wheel if you turn past the autopilot without flipping the right stick up to explicitly disengage AP first.


There is a massive difference in the driving experience. How long did you try it for?


Active Lane assist features also steer themselves? At least for all VW brands they do (VW, Audi, Seat etc). Pretty sure the Mercedes active lane keeping does the same. They just don't sell it as an autopilot.


And my wife’s Acura will completely steer itself (with a much shorter list of features), but requires your hands on the wheel. It works without your hands, but it gets testy if you don’t move the wheel slightly every 15 seconds.

It’s significantly nicer to drive long stretches with this system, but I can’t imagine doing it without your hands on the wheel. When the sensors fail (weather or poor lane markings), you need to be able to take over immediately. Granting too much autonomy to the system is just too risky (and the point of the article).


Your wife's Acura does not steer itself the same way a Tesla steers itself. Your wife's Acura will try to keep the car from exiting a lane.


I had the same experience with Subaru's EyeSight system. I drive very rarely (you can count the number of times I drive in a year on one hand) so after a day of driving I'm usually completely beat. But after a day of highway driving with the EyeSight lane keeping/adaptive cruise control I was remarkably less fatigued.


Are you talking about LKAS? [1]

Not trying to be rude, but that is nothing at all like driving with AutoPilot.

[1] - https://youtu.be/iKWBdySgssE


You’re not being rude. It is the lane keep assist, and it is nothing like Tesla’s autopilot. But that was the point I was trying to make. It’s a very good, but limited, driver’s aid. But because it’s limited, it won’t make (or rather let) you get complacent. This was the gist of the original article — the Tesla autopilot system is very risky when the system needs to hand over to the driver. The driver is likely to trust the system too much and therefore let down their guard and lose key context that is needed when the autopilot sees something that it can’t cope with. In those situations, a more limited system is likely safer because the driver is forced to be more engaged throughout the trip.


How are you going to say Tesla doesn't have a product and then go on to list a bunch of worse products as if they are better?

Tesla has effectively incorporated Lane Keep Assist/Automatic Cruise Control into a single product called Autopilot that has vastly better software than other similar products.

Simply put, if other companies had a product that would have allowed billions of miles to be driven hands-free, feet-free, they would have it, but they do not, not even close. Tesla does.


Vastly better? I'm not sure the last time my car did an uncommanded lane change on a well painted road with good ambient light. Yet, I can find multiple videos of Teslas doing exactly that.

I'm not sure the last time (actually I am - it has not) my car treated a roundabout with a shallow center median as "entirely optional, follow the circle or go straight over", but I can find multiple videos of Teslas doing that.

The two cars I've driven that are modern (2017 and 2018 Audi and Jaguars) also didn't hit a big white pickup truck in broad daylight in parking mode, but I've seen Teslas doing that, too.

So, how are we defining "bunch of worse products"? "Less advanced, but better tested"? If that's the case, I think I'll take that.


Weird, when I search "Jaguar self driving" or "Jaguar Hands Free" I don't find any real videos of it at all. Are you sure it's not Jaguar that doesn't have self driving?

Do you actually drive your car hands free and feet free for more than 30 seconds ever?


> Do you actually drive your car hands free and feet free for more than 30 seconds ever?

Fairly certain in many cases that's illegal. And regardless of anything else, doesn't Tesla itself require steering wheel contact/torque every 15 seconds?


I'm pretty sure that your comment falls under Poe's law somehow. I just can't tell if it's meant seriously.


>Tesla has effectively incorporated Lane Keep Assist/Automatic Cruise Control into a single product called Autopilot that has vastly better software than other similar products.

Is it? This[1] implies that, contrary to what you say, most other manufacturers do offer equivalent features in equivalent luxury vehicles. Tesla might have beaten many of them by a year or so, and might sell more Xs than Toyota sells high-trim Lexuses, but neither of those things really says anything about the quality of the driving technology. And from what I've seen, Tesla's isn't particularly spectacular.

[1]: https://www.cars.com/articles/which-cars-have-self-driving-f...


> At least Tesla has an actual product you can buy?

I have a self-driving automotive product you can buy. It doesn't work as advertised, either.

Calling a lamb's tail a leg doesn't make for a five-legged farm animal. This reminds me of the conversations about, say, phone authentication using facial identification. "Samsung had it before FaceID!" Yeah, I had one of those Samsung phones. It did not do what it said on the tin. So, yeah, Tesla has a product you can buy, but it is not a self-driving vehicle despite what they might tell you.


What do you call it when a Tesla drives me 90% of the way to work where I don't do anything but observe?

It's not full self driving, but it definitely is driving itself and providing value.


^Exactly. I think Tesla's marketing is shit around this. Elon needs to shut the hell up on twitter as well.

Stop coming at me with Waymo is 10 years ahead of Tesla... Ok great, are they shipping & delivering any value to paying customers? Waiting.......

In 5 years time, do you really think Waymo will have paying customers and have gained regulatory approval? Hell no...

Tesla at least realizes they can't get past the regulatory hurdle. Keep the driver at the wheel, provide value, automate and refine.


Very much this. I see it as a waterfall vs. agile approach, if you use software development methodologies as a comparison. Tesla put out a minimum viable product and is quickly iterating on it. It isn't the ivory-tower dogmatic example of perfection, but it is really good, and is tons of fun. Waymo, on the other hand, is trying to skip putting their product seriously in the hands of users and is doing limited trials in mostly perfect driving conditions (like Phoenix, AZ).

The best thing is that as much as people want to argue on the internet about "which is better", it is somewhat inevitable they'll both succeed in their own way. Waymo will eventually crack the nut for Level 5 autonomous driving with Lidar just like Tesla will crack the nut for their full self driving (defined as driving from the east to west coast without any involuntary disengagements) with machine vision. I actually (potentially incorrectly) suspect Tesla will win this war first, as they simply have a multiple of any competitor's driving data. How many more hundreds of millions of miles has Tesla autopilot logged compared to Waymo? Reminder: Tesla is building ~5,000-6,000 cars per week and tends to sell them pretty well. Their data moat only continues to widen. Once they figure out the right models for really pulling it off, they'll make HUGE leaps with the data they have. Some nights, my Model 3 will upload 10G of data up to Tesla. Hopefully, it is from Shadow Pilot to train the fleet.

TL;DR: There is nothing stopping both Tesla AND Waymo from solving full self driving using competing technology and different approaches. It is an exciting time to be alive!


> It's not full self driving

I call the same thing you call it, apparently.


What kind of traffic and road conditions does that trip involve, and how much are you an observer vs. hanging on to the wheel and paying attention to everything as if you were driving yourself (staying engaged), as required by law and by common sense?

And if you don't mind me asking, how would you quantify in real money the value added by the system?


The 90% I am talking about is a ~12-mile stretch of poorly maintained highway near Detroit, usually in stop-and-go traffic, and yes, I am paying attention.

I wouldn't quantify the value in real money. I would say the main benefit it provides is that I have much less decision fatigue and am less stressed after my commute. I also don't care how fast I really get there anymore, since I just optimize for less stress, and my trip is enjoyable.

The other main value I commonly see overlooked is safety from distracted people. If we can assume people are going to keep texting and driving (they are), would you want them to do it while actually driving or with this system engaged?


> I have much less decision fatigue and am less stressed

Then you're not engaged. It's scientifically proven that your brain is no longer alert if you are not constantly engaged. And you can't be magically less stressed by decisions while being fully engaged and alert in order to verify each of those decisions. It's like saying a calculator makes your life easier because you no longer have to calculate yourself to get the result, you only have to calculate yourself to verify the result.

You're either less stressed because your mind simply offloaded some tasks to the car, contrary to the law and common sense, or you're actually engaged and your brain is doing what any driver's would normally do, so the whole effect is placebo. Which is cool, I guess, but this last one can be had for a lot less.

Driver assists are there as a backup, not to rely on to do what you no longer feel like doing because it's too stressful. And if you feel less stressed letting a pretty dumb AI (compared to a human driver) with poorer eyesight than yours drive you around, you must be a pretty laid back person anyway.


> and yes I am paying attention

...Then what is the point of that system?

Tesla's 'self driving' is incredibly dangerous because it puts the driver in a set of conditions that make it very difficult to maintain attention on the road. And yet the driver must be able to take over from the 'self driving' system with subsecond warning if things go awry.

That's not value adding, that's a huge risk. I would not use that system even if someone offered to install it for free in my car.

Tesla's autopilot is a marketing twist on lane keeping adaptive cruise control - a technology available in almost every car at a similar price point. It is not a step towards actual autonomy.


What's incredibly dangerous is thinking of this sort of system as "self-driving". But a car that can make most of the low-level driving decisions for the user, such as attempting to steer itself to stay centered in the lane, is going to make for a safer driving experience. It's just incredibly idiotic to market this sort of advanced driver assist as "self-driving" or "the car driving itself", and as something that allows the driver to lose focus; much of the controversy about Tesla is due to them having done exactly this in the past, and in some cases still doing it.


I'm also in Detroit and my roommate owns a TM3. I've driven in it with him for hundreds of miles on Autopilot and agree with you completely.


Can you show me a Waymo product of any kind? I can buy a Tesla today and the car can drive for extended periods on the highway using their driving tech.


So did a lot of rocket companies, until SpaceX came along and revolutionized the market. As far as I see it, Tesla is building a state-of-the-art driver assist product, but the approach they are taking is unlikely to reach full autonomy for a very, very long time. Waymo, while in a much higher risk position, has the chance (however large or small) to completely revolutionize the way we commute.


There's a few things to keep in mind when discussing self-driving tech.

1) Self driving tech doesn't exist today. It simply does not. There's lots of people working on it and they are using different strategies, but it is not clear how long it will take, which strategy is technically superior, which strategy is more economical, etc etc. There's a lot of very strong opinions floating around (LIDAR is essential! No, vision and lots of data! No, 3D maps or bust, etc etc). It is important to keep in mind that we don't know the future, the people busy inventing it don't know how things will play out and outside observers know even less. The tech industry is littered with strong opinions which have aged terribly.

2) Musk's first principles aside, Tesla was never going to do LIDAR. They're in the business of shipping real cars to real people, and there was never any possibility of mixing that with LIDAR. If they had gone with LIDAR and started a research program like Uber, they would have started years after Google, with less data, far fewer resources, and absolutely no technical advantage. It is far, far more likely that this program would have bankrupted them than beaten Google. In essence, whether Musk's first-principles reasoning is real or a sales pitch doesn't matter. Tesla's choice was their current approach, or nothing at all.


> They're in the business of shipping real cars to real people, and there was never any possibility of mixing that with LIDAR.

Can you explore that a little? We have consumer products that contain LIDAR already (e.g. Neato Vacuums). The sensors no longer cost thousands of dollars, and could supplement other types of inputs to provide a broader picture of the road.


Neatos don’t have a real ToF LiDAR in them - they use triangulation to estimate distance. Those don’t work well outside or at long ranges.

https://www.diva-portal.org/smash/get/diva2:995686/FULLTEXT0...


Yes, but you can't install a LIDAR via a software update. They've been promising full self-driving mode and getting closer by updating the software - the owners that already have the current car won't be too thrilled if they get cut off from the state of the art in Tesla's self-driving tech.


... particularly since they were charged up to three or five thousand dollars more for Elon's "promise" that some day (apparently this year) it will be available...


The LIDAR used for FSD will not be like the LIDAR in your vacuum. At the moment, the LIDARs that cars would need still cost several thousand dollars, if not tens of thousands.


All tech comes in different grades and specs. The LIDAR used on a vacuum cleaner might use the same basic principle but doesn't come close to the requirements of the auto industry and self driving tech.


I don't have a huge problem with the supposed goalpost movement. I am wondering, though, if drivers need to keep a fairly close eye on things, what the true benefit of self-driving is in the near future.


It’s more obvious if you break it into smaller jumps.

Cruise control requires you to pay close attention etc., but you can avoid making a lot of minor speed corrections, which makes long drives less taxing. Adaptive cruise control takes this one step further, again reducing cognitive load. Lane following might not seem like much, but it represents a huge number of tiny course corrections.

I could go on, but even if you can’t read a newspaper the reduction in stress and muscle fatigue really adds up.


Sure, question is, does that lower level of stress make it harder to pay attention enough to save your life when the 0.1% event happens?


First, stress kills, so the health benefits are meaningful.

Second, these systems are adding redundancy. Both the system and the human driver need to fail, which means someone who's only paying attention 98% of the time can still cover for 49 out of 50 times the system fails. Current data suggests a net gain in safety, which is likely to improve as these systems mature.
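
As a sketch of that arithmetic, with made-up rates and assuming the system's failures and the human's lapses are independent (which is the shaky part, since hard situations and inattention may correlate):

  # Invented numbers, purely to show the redundancy arithmetic.
  p_system_fails = 1 / 1_000  # assumed: one assist failure per 1,000 miles
  p_human_misses = 0.02       # the 98%-attentive driver misses 1 in 50

  p_uncovered = p_system_fails * p_human_misses
  print(1 / p_uncovered)  # one uncovered failure per 50,000 miles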


I agree, but the problem is if you are not controlling the throttle and not controlling the steering wheel... what are you doing? I find it tedious, but not particularly hard, to drive for 12 hours in a row. On road trips most people I travel with fall asleep within a few hours if they are not driving.

Seems human nature to watch the car avoid 99 problems, and fall into a false sense of security and stop paying attention. Sleep, cell phone, talking to a passenger, or just day dreaming seems ever more likely the longer you are letting the car drive.

It's 2 people so far that have lethally broadsided an 18-wheeler with a Tesla on Autopilot?

So an attentive human + FSD sounds great. Not sure humans are going to be able to stay attentive though. Sure, at some point inattentive human + FSD will be safer than a standalone human, but it doesn't seem like that's happened yet.

Tesla seems a bit evasive with the safety data. Maybe it's because a human with autopilot off (but safety braking on) is just as safe as a human with autopilot on.


I completely agree with these points....

Also... if you bought full self driving you're going to get the new v3 hardware retrofit... that's what you paid for and it's going to be better... probably a lot better.

(edit: removed link... just google it)


> Self driving tech doesn't exist today. It simply does not.

Except according to Elon Musk (whom I don't believe) - it had better, since he's promising that it will be available this year.


I've always found that half the issue is with the name: Autopilot. On an airplane, autopilot means you can take your focus off of both the physical and mental act of moving the plane through the air, and if it needs you to intercede there is pretty much no emergency in which you wouldn't have a few seconds to react.

In a car, Tesla's Autopilot simply doesn't give you that. If you are reading and your autopilot exits, you could be dead before your focus can return to the road. Planes fly in wide open spaces far from hazards. Cars drive in busy, tight spaces where a fraction of a second lapse in control can be fatal.

I get that Tesla's implementation is analogous to what you find on a jet. But the environments make them very different. So start by dropping the name. It's not auto pilot, it's handsfree driving.


I don't think this accurately considers how the term "autopilot" is actually used in aviation. Autopilot can mean anything from "hold a gyro estimated heading but not pitch or speed" to "maintain airspeed and follow the localizer down a glide slope while cross checking GPS and ILS."

> if it needs you to intercede there is pretty much no emergency in which you wouldn't have a few seconds to react.

Traffic avoidance is certainly one. Airspeed dropping while the autopilot tries to maintain altitude is another. This can result in a stall and complete loss of control under the wrong circumstances.

Autopilot does not mean "read a book and let the plane fly itself." You are always cross checking instruments, visually scanning for traffic, running checklists, rehearsing your next steps, etc.


You're absolutely right, but the issue isn't what an aircraft autopilot actually is. It's what the word "autopilot" evokes in the minds of the non-pilot public. To them, autopilot means fully automated with no oversight. ("Planes fly themselves!" and other such nonsense.)


Please don’t force marketing to conform to the lowest common denominator.

People have some responsibility to educate themselves about the products they’re using.


You are being ridiculous.

The definition of autopilot has been this way for decades. You could literally ask anyone of any age what it means and everyone would have the same answer.


That's not true.


The lowest common denominator is driving. A driver's license is way easier to obtain than a pilot's license.


It doesn't matter how it works in aviation. It matters how people think it works in aviation.

There was nothing OK about Tesla calling their system "Autopilot."


> Airspeed dropping while the autopilot tries to maintain altitude is another. This can result in a stall and complete loss of control under the wrong circumstances.

This happened to me in X-Plane (sim) the first time I used autopilot. I had dialled in a rate of climb the plane couldn't maintain. Very hard to recover from a stall when the elevator trim is set fully nose up.


I wish Tesla called it something like driver assist. I use it for 90% of my commute and it is super helpful, but I would never pick the name auto-pilot for it.


But that's the term other manufacturers use (for similar features) so Tesla wouldn't have a "unique" feature anymore. A lot of their marketing revolves around offering a product none of the big manufacturers offer.


Autopilot in planes can be used, and often is used, in cases of VFR flight in which see-and-avoid is a requirement. Constant surveillance of the aircraft's surroundings is important. When it isn't done, midairs happen.


This should only be the case if you have lost your radio or have utterly incompetent air traffic control. It could also be the case if you're in class G airspace or in some BFE remote area without a tower, but meh. As stated elsewhere, autopilot on planes is pretty much not at all what most people think of as "auto-pilot". The pilots still are doing their instrument scan, and they're still maintaining situational awareness, but they know that the plane should maintain heading, altitude, and airspeed.

Source: Have private pilot's license and flew the Shadow 200 TUAV for the US Army. It is a VFR rated airframe that is flown 100% IFR, by nature of it being an ~11ft x 3.5ft x 2ft drone.


That's a special case, however. See-and-avoid except in specialized scenarios is what the Level-3 designation of driving-assistance systems is about, something that Tesla has not achieved so far.


Not that special of a case. There's around 10-11 million VFR flights per year, versus around 15-16 million IFR flights, according to this FAA doc[0]

How many of those VFR flights have/use autopilot during their flight, I can't say, but basic autopilot systems for GA planes have gotten pretty affordable, so I would expect there's at least hundreds of flights any given day where a VFR pilot engages an autopilot during the flight.

The main difference is that none of those VFR flights are big airliners with hundreds of passengers. For those who don't know the difference, an oversimplification would be VFR = pilot is 99% in charge, with very little assistance from air traffic controllers outside of special controlled airspace, IFR = controllers are giving the pilot very detailed instructions for the entire flight. And it's pretty much one or the other.

[0] PDF: https://www.faa.gov/air_traffic/by_the_numbers/media/Air_Tra...


A system with lane keeping and traffic-aware cruise control corresponds pretty well to a simple two or three-axis airplane autopilot that just holds heading and altitude.


Tesla makes cars not planes.

And they do so for the general public, not just for pilots. So it's appropriate to use the general public's definition of autopilot, which is a fully automated, hands-free system.


People keep saying that but I don’t buy it. For example, “to do something on autopilot” is a common idiom meaning that you did something reflexively without thinking, often with bad results.


Maybe if you're out driving on a salt lake bed with no other vehicles or structures around.


> I've always found that half the issue is with the name: Autopilot. ...In a car, Tesla's system simply doesn't give you that [taking your focus off]

Yes, exactly. It's a level-2 ADAS - it's about helping you deliver correct inputs to the car, not about autonomous driving in any real sense - regardless of how advanced it might be in a technical sense, or how well-behaving in a best-case scenario! More like the fly-by-wire system in an aircraft than the "autopilot".

And I know that the users are supposed to be aware of this, but Tesla's marketing on this issue is far from consistent, in all sorts of ways. They're doing their users a pretty big disservice.


I think the issue is that most people have a misconception about autopilot in a plane due to movies.

Real autopilots don't work like that; they are typically extremely simple and, in many cases, mechanical PID loops. 99% of all autopilots have almost zero intelligence and no electronic sensors of any kind. Only the best of the best, like CAT3 autoland systems, are anything close to what you see in the movies.
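
For the curious, a basic heading-hold loop really is about this simple. A minimal sketch; the gains, rate limit, and toy setup are all invented, not from any real unit:

  # Toy heading-hold PID. All gains and limits are made up for illustration.
  kp, ki, kd = 1.2, 0.0, 0.3    # ki left at 0; real units trim steady crosswind
  target, heading = 90.0, 70.0  # degrees
  integral, prev_error, dt = 0.0, target - heading, 0.1

  for _ in range(600):  # 60 simulated seconds
      error = target - heading
      integral += error * dt
      derivative = (error - prev_error) / dt
      prev_error = error
      cmd = kp * error + ki * integral + kd * derivative
      turn_rate = max(-3.0, min(3.0, cmd))  # rate-limited output, deg/s
      heading += turn_rate * dt

  print(round(heading, 1))  # converges on 90.0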


The term autopilot was first used in 1914.

Over a hundred years of the general public being educated about what that word means. So pretty sure it predates its use in movies.


Unfortunately people buy on promises, marketing guys know that, and Elon - among other things - is a marketing person. The promise of 'self driving' was too appealing to be left unspent.

Ethically debatable, but that's how things seem to work.


I’ve read so many of these articles obviously written by someone who has no hands-on experience with the Tesla Autopilot and what it actually does for the driving experience.

Maybe Tesla will get to a point where my TM3 will drive me to work while I nap, or use my phone. One thing that’s pretty amazing is that if they can get to that point, it will come as a free automatic software update or an available for purchase hardware upgrade on my current car.

This month my car will be getting 5% faster acceleration from an OTA update - how cool is that?

But right now, the reality is that I can engage AutoPilot on the highway and it immediately and dramatically changes the driving experience. Instead of focusing on steering the car, I am focused on situational awareness. I am not just looking down my lane, I am looking at the cars around me, who is passing who, what might be coming up around that bend, that guy on his phone next to me, etc.

I can look at other drivers, assess whether they are paying attention, and avoid them if needed.

Because the car is actually lane keeping: not what everyone else calls lane keeping (which, surprise, is a total lie), but actively steering the car around curves and centering it in the lane, probably more precisely than I would if I were steering manually.

I strongly believe, if AutoPilot never truly advanced much beyond its current capability, and as the currently functionality becomes more widespread, we will wonder 15 years from now, how did people operate their vehicles without this level of assistance? How could you be properly aware of your surroundings if you had to be so preoccupied with minor steering inputs?

If you use your phone and watch a movie or put on makeup while driving, you are breaking the law and endangering yourself and those around you.

Every time a new technology comes to cars people say it will distract, or hypnotize, or lull drivers into distraction (see wipers, radios, automatic transmissions, original cruise control) and anyone who has actually driven with AutoPilot knows this feature is no different.

As a Tesla owner I enjoy and appreciate Tesla’s real-world approach to self-driving and it makes my life better and my drive safer. Thank you Tesla.


The issue is that Autopilot is a feature geared more towards convenience.

You are doing all of the driver attention work, but someone who activates Autopilot isn't required to. I think a lot of the argument against Tesla is that Autopilot isn't doing enough in that arena.

GM's Super Cruise is a lot more featureful with regards to driver attentiveness. It might not be usable in as many places, but it's definitely more well-rounded in terms of forcing driver attentiveness.


Anyone who drives a car is technically, legally, and ethically responsible for being capable of operating the machine and to do so in a responsible manner.

Tens of thousands of people a year fail to do exactly that. Tens of thousands of fatalities every year, not to mention maimings and injuries, and countless billions of dollars in property damage and lost wages, because humans generally fail in their legal and ethical responsibility to operate their motor vehicles responsibly and within conditions they can handle.

There have been a few cases where Tesla drivers have made the same failing. Sometimes the AutoPilot feature was engaged at the time. Sometimes they were just pushing the vehicle too hard and ended up on the wrong side of physics. The Tesla is stupidly fun to drive and sometimes I drive more spiritedly than I should. Being too fun to drive could get people hurt or killed too (and it has)

I strongly believe a responsible driver is safer with an AutoPilot feature than without, mostly from my own personal experience, as the data point I used to cite is somewhat controversial.

Just like the amazing cornering and torque of the Tesla can be abused and even lead to fatalities when pushed too far, so can the AutoPilot.

I personally think it’s a mistake to make the feature significantly less useful to a responsible driver to try to prevent these edge cases. If it were possible to make the attentiveness features entirely non-intrusive, then absolutely they should be added. But in reality the attentiveness features are already intrusive to a responsible driver and detract from the experience.

Interestingly, Volvo seems to be going in a different direction. They believe that, as the manufacturer, it’s their responsibility to create a product that keeps even a human attempting to operate it irresponsibly or illegally safe. They’re adding hard speed limits well below the functional limit of the hardware, and are even contemplating systems like breathalyzers and fatigue detection which would entirely disable the vehicle.

I personally don’t want to live in a world where every product I use is sizing me up and deciding how I should use it, whether it’s a chef’s knife, jet ski, automobile, or semi-automatic.

In the meantime what I love most about AutoPilot is how it can only possibly get better over time, and every single car in the fleet is benefiting from that. That’s as long as the regulators don’t fuck it up.


The top editor-promoted comment[1] on the article summed it up very well.

  The attention problem is well known in engineering. 
  It is very hard to get a human to concentrate on something 
  that will turn up good more than 99% of the time, 
  even when there's serious or fatal consequences of failure. 
  Trains are the classic example - tracks are almost always clear,
  signals are almost always correct, which means you have to
  devise all sorts of systems to keep the driver alert.
[1]: https://arstechnica.com/cars/2019/03/teslas-self-driving-str...


Sadly, Silicon Valley companies don't think that best practices of engineering apply to them. It's all about disruption, both with products and safety.


"We're going to apply well known engineering to an existing problem" doesn't sell VCs as much as "We're going to disrupt things and launch satellites illegally"


I am not sure how Tesla’s system is much different from Waymo’s. In both cases, the cars can steer themselves under limited circumstances. Getting to full auto driving in crazy situations like single-lane roads or unprotected left turns will take a long time. But using it in the situations it can handle today is very useful. At least with the Tesla, you can actually drive in places where it doesn’t support Autopilot, unlike a Waymo; the Tesla autosteer just won’t engage.


> I am not sure how Tesla’s system is much different than Waymo’s.

Waymo's ultimate self-driving vehicle will not have a steering wheel (as seen in their early prototypes), as the vehicle takes the human out of the driving equation. Tesla will probably always have one for the foreseeable future (as they are tackling autonomous driving as a drive-assist system). That's the big difference to me.


Sorry, I meant in practical use today. Both auto driving systems only work in certain situations for the foreseeable future.


It's funny how much people have danced around all this, including this article, but what Musk has done with his statements on self-driving capabilities is called lying.


Lying seems to have a strong moral component in English that doesn't seem to be evident here. Most people would say that a proper "lie" involves intentionally trying to convince somebody of something that you know is not true, usually for personal gain. If Musk honestly believes that he can do it, then I guess it isn't technically a lie, even if most people who know the industry say he would have to be delusional to believe that.

It does feel a bit pedantic to say that though. If Musk is sometimes delusionally optimistic instead of intentionally deceptive, how much does it matter to us? I guess the level of delusionalism does vary some - Tesla might meet vehicle production targets, SpaceX might meet their intended date for the first crewed mission, but a Tesla car with current production hardware driving coast to coast while the driver reads a book sometime this year has no chance whatsoever of happening.

Also worth considering that the Silicon Valley VC universe does tend to reward people who could be described as being delusionally optimistic, and thus encourage them to continue to think in that way.


Ahh, the old George Costanza defense.


Musk benefits financially from deceiving the public.

So there is clear wilful intent combined with actual deceit. Sounds pretty clear cut to me.


What's your evidence of intent?


If someone has a delusional idea, is it still a lie? I don't think Musk 'lies' to earn more money. I think he is delusional about how long certain things take to do, and at worst is trying to keep his companies afloat so he can continue with his ideas. I'd say his sins are less than those of the CEO of Enron, for instance. It's hard to blame the guy when he successfully built a rocket company.


You could almost call him the successful / less pathological Elizabeth Holmes.

He promises the future before it has arrived, but unlike Theranos, he actually delivers a product. That product might be inferior to what was promised, but it is an improvement on the status quo nonetheless. And once released, Musk does not lie about its capabilities.


> > "We already have full self-driving capability on highways," Musk said during a January earnings call.

That statement certainly isn't true for any meaningful value of 'self-driving'.


This is on the order page for the Model 3 right now:

> Summon: your parked car will come find you anywhere in a parking lot. Really.

Except that feature hasn’t shipped and there’s no ETA. I call that a lie. I appreciate what Tesla is doing to the industry but I wish they wouldn’t be so dishonest in their marketing.


Actually that particular feature is currently in beta testing and the ETA is a few weeks.

It has a limit of 150 feet and I’m sure in practice it will just annoy other drivers unless the parking lot is fairly empty, but there you are.

https://electrek.co/2019/03/01/tesla-enhanced-summon-self-dr...


> I don't think Musk 'lies' to earn more money.

Telling people that the actual physical car they are currently purchasing from his company will soon be capable of autonomous driving when, in fact, it isn't, and won't ever be, seems to be a pretty straightforward contradiction of that statement.


Why do you think it won't ever be?

Also that's literally just your opinion that it won't soon be capable. It's definitely Musk's opinion that it will soon be capable. Mine too.


Musk is stating, for the record, that your Tesla will be able to drive _coast to coast_ with full level 5 autonomy - poor roads, blizzard conditions - THIS YEAR.

Given that the autopilot system has been evolving for five years (introduced in 2014 for Tesla) and still has problems like "aims at water barriers and accelerates", "fails to recognize that trucks may be more complex than a simple cube in space", and "fails to recognize that vehicles of a certain color may be hard to see when driving towards the sun"... the idea that this will all be solved "this year" (t minus 9 months and change) seems...

supremely optimistic. I was going to call it naive, but that's an insult - Elon certainly knows far more about self driving than I do.

But this is the equivalent of mankind making fire, making the wheel, and then turning that into a car as the next logical step, ignoring all the interim.


I don't think the bar is set at 100% autonomy. I think he's saying it will be possible for your car to do those things, and probably do them more safely than a human; there will still be edge cases for many years, I'm sure. I do think blizzards are unlikely to be solved soon, but that's a very small % of driving that humans are also bad at.


I'm talking about an actual car that was sold, like, two years ago to someone. I mean, it's possible that Tesla someday will be able to create some kind of autonomous car and sell it to someone.

But they've been claiming that the cars people already have will, in the future, be able to be autonomous.

Those statements are most accurately described as lying.


Tesla's plan is to replace the Nvidia chip in the car with its own much more powerful chip. This is going to allow them to achieve autonomy. Those cars sold 2 years ago will be autonomous. That's the plan, anyway.


What do you call founders who promise VCs that their company will take over the world and then burn out spectacularly?


If it is a publicly traded company, and that person is an executive, then yes it is both a lie and securities fraud. If it's a startup or anything else where it's not a public company, then it's ok to be simply delusional.

His sins may be less than those of the CEO of Enron, but that's a very low bar (after all Enron was the largest corporate bankruptcy ever when it collapsed), and people have gone to jail for much less. I get the hero worship, but it doesn't stop Musk from being a criminal.


It wouldn't be the first.


> "We already have full self-driving capability on highways," Musk said during a January earnings call.

Except for obstacles that coincide with whitelisted locations their sensors can't handle. It's only a matter of time before someone dies because of that hack.


This is a (perhaps short-term) failure of first principles reasoning. Elon's known to favor thinking about things from first principles, and there's no theoretical reason that vision can't work. However, technical limitations might take a while to overcome.


Indeed. There's also a slightly less charitable interpretation: Tesla couldn't ship every car with LIDAR for cost reasons, so they're strongly incentivized to claim they don't need it.

That said, I don't think Tesla are wrong. If Tesla can deploy MobilEye-level automatic visual mapping (see https://youtu.be/GQ15HWCw_Ic?t=1381) this can obviate the need for LIDAR-based localization and dramatically improve their perception systems (by having very good prior information about all static obstacles, such as lane dividers).

Dynamic obstacle detection will still be worse than it would be with a vision+LIDAR approach, but not that much worse. Having superhuman ability along two axes (map-based priors and alertness / reaction time) is likely sufficient to drive better than humans, even if several remaining axes (dynamic obstacle detection and prediction) are worse.

Additionally, Tesla is not (AFAIK) using any structure-from-motion (https://www.youtube.com/watch?v=KT2KsN7yKo0) or stereo-vision (https://www.youtube.com/watch?v=SskSDjUG8ZY) techniques, but there's a chance that these could also improve perception, especially in tough cases where a single frame is not enough for good detection (e.g. white semi in sunlight).

LIDAR is not being used in the (compared to Tesla) small-scale AV systems of Waymo, Cruise, Aurora et al. because it's essential, but rather because it's convenient, and because the companies producing those systems want to give their AVs every advantage at any cost. I fully expect Tesla (and MobilEye) to do superhuman self driving without LIDAR, but it will take longer (i.e. not by EOY).
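
To make the map-prior point concrete, here's a minimal toy sketch (my own, not Tesla's or MobilEye's actual pipeline) of how a strong static-obstacle prior sharpens a noisy per-frame detection:

    # Bayesian fusion of an independent detector probability with a
    # high-definition map prior for the same grid cell (toy example).
    def fuse(detector_p, map_prior_p):
        num = detector_p * map_prior_p
        den = num + (1 - detector_p) * (1 - map_prior_p)
        return num / den

    # A weak 0.6 detection becomes confident where the map agrees,
    # and gets suppressed where it doesn't:
    print(fuse(0.6, 0.95))   # ~0.97
    print(fuse(0.6, 0.05))   # ~0.07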


People should strive for correct reasoning. First principles reasoning, as I interpret that term, is just an approach that helps to achieve correctness by overcoming ingrained assumptions.

If Elon really thought "in theory, camera-only full self driving is possible, therefore I should invest resources in that approach" then he's dumber than I thought. He's done it because Lidar was not an option - and the claim that FSD is coming soon was an intentional lie.


The simple fact is that humans only have eyes and don't need lidar or radar to drive. However, we cause accidents when our eyes misinterpret a situation, when there are bad road markings, or when our vision is impaired by bad conditions; a camera-only car will have the same faults.

So personally I would feel safe in a camera-only car driving during the day in good conditions with good road markings, or on a controlled road where humans are excluded. I would not feel safe in any other situation.

I still want a Model 3, it is so good in every other way, but I won't be trusting the self driving.


Humans are general intelligences requiring almost two decades of specialized training and growth before we even think about letting them drive. The human eye is also vastly superior to any camera in existence, so even being generous there is no comparison. Maybe such a comparison will be valid decades from now, or longer, but Tesla is selling people a lie today and it's a potentially dangerous lie at that.


> the claim that FSD is coming soon was an intentional lie.

What's your evidence of intent?


We figured out the first principles of fusion power in the 1930s, but are still nowhere close to practical fusion power generation. The engineering difficulties are just too great, and simpler alternatives exist which get much more attention, funding and optimisation.

Similar might happen to vision-only self-driving. It might take a decade or two longer to develop compared to LIDAR-based approaches, and meanwhile LIDAR is only going to get cheaper.


Vision alone fails pretty frequently even with a human brain and 700 million years of evolution behind it. Count me as a skeptic that cameras + computer will produce a safe self-driving system.


Really doesn't though. Humans are pretty good at driving, and 90%+ of accidents come from people being drunk/distracted/tired etc. We don't crash because we miscategorize a truck as a plastic bag or w/e.


> 700 million years of evolution behind it

Sure, but only about 0.000014% of that time occurred while cars existed. It's not like we evolved to control big metal objects hurtling down artificially made flat surfaces.


On the other hand, while our brains did not evolve to control cars, our cars were designed to be controlled by our brains. A system designed from scratch for autonomous vehicles would look radically different.


And even then, accidents where all parties are following the rules, e.g. being sober, undistracted by something in the car, giving enough stopping distance to the vehicle you're following, going the appropriate speed for the visual and road conditions, &c aren't the norm. Most accidents happen because somebody was irresponsible, usually blatantly.


...with devices that we're emotionally and physically addicted to flashing and making noises that are designed to ignite a flurry of activity in our reptilian brains sending the message PAY ATTENTION TO ME IMMEDIATELY!


This implies that the vision itself is to blame and ignores the actual causes of crashes.

A computer doesn't get distracted by a text message. A computer doesn't get drunk. A computer doesn't day dream.

In cases where a crash happens and the person at fault says something like "I didn't see them!", it's usually because they didn't even look, a mistake a computer won't make.


We can't measure that evolutionary advantage.

What we can measure is fatalities.


> there's no theoretical reason that vision can't work

Human brains, which are general intelligences, exist, and are collections of atoms.

I'll happily, for $500,000/copy, upfront, promise to sell you artificial general intelligences. I swear I'll get them built in the next two years.

There is, after all, no first-principles reason why my promise isn't worth the paper it's printed on. Human brains are collections of atoms, so we should be able to build artificial general intelligences out of atoms.


I got a Tesla last year. I love it. But it's been painfully obvious to me that Autopilot is nowhere close to something that will let you take a nap.

Often when I'm stopped at a light, cars that are standing completely still will appear to be constantly moving forwards and backwards on the display. My best theory so far is that the Tesla's spatial model is getting thrown off by the other car's turn signal. This does not inspire confidence.

In general, I find the radar-enhanced cruise control very reliable (so nice in bad traffic), but autosteer is flaky at best.


> In general, I find the radar-enhanced cruise control very reliable (so nice in bad traffic)

Which is great, but hardly unique to Tesla. Every major manufacturer out there offers adaptive cruise control. My car even recognizes the difference between stop-and-go rush-hour traffic and "queue" mode (exiting parking lots after events, etc).


IMO Tesla does not have the hardware on the Model 3 to do full self-driving.

I believe that stereopsis (multiple cameras using parallax to solve for per-pixel depth) is necessary to get a practical, well-functioning self driving system working. LIDAR is just too expensive and not good enough, but stereopsis is extremely flexible and can have extremely high angular resolution.

Combine naive stereopsis with temporally-coherent sensor fusion (e.g. a well-designed Kalman filter), and I think you could have very robust ranging. Humans are already very good at this with two narrowly spaced eyes (stereopsis to 1/4 mile range is not unreasonable for a person)-- but a car is not limited to a 1.5 inch stereo baseline; it could have stereo cameras on opposite sides of the windshield. That would hugely increase the depth sensitivity, even at moderate resolution-- parallax can be detected well below the Nyquist limit (since Nyquist cuts off frequency, but does not destroy phase).
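
To put rough numbers on the baseline argument (assumed camera specs, not Tesla's):

    # Pinhole stereo: Z = f * B / d, so to first order the depth error
    # grows as dZ ~ Z^2 / (f * B) * dd. Widening the baseline B buys
    # depth precision linearly.
    def depth_error(f_px, baseline_m, z_m, disparity_err_px=0.25):
        # 0.25 px assumes sub-pixel (sub-Nyquist) disparity matching
        return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

    f = 1200.0                    # focal length in pixels (assumed)
    for base_m in (0.065, 1.2):   # human-eye vs. windshield-width baseline
        print(base_m, depth_error(f, base_m, z_m=100.0))
    # ~32 m of depth uncertainty at 100 m with an eye-width baseline,
    # ~1.7 m with a windshield-width one.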

Tesla is totally failing at even the basic level of environment awareness (cf. cars which have been driving into exit dividers), which is what I consider to be the easy part of self-driving (the hard part is getting the machine to participate in a nonverbal social environment, which is what the roadway is). Rumor has it that Teslas can't detect obstacles far enough ahead to avoid them at more than 30 mph-- absolutely abysmal, if true.

If it were me, I would put cameras on the Tesla windshield in this pattern:

    xXoO      OoXx
Where x and o are long-range and wide-field cameras, and caps and lowercase are dynamically exposure-adjusted to capture both brights and darks. Each pair has the largest possible parallax baseline. And I would do a big, fat sensor fusion on all eight of them to get a high-res, depth-augmented HDR map of the surroundings. No fancy, expensive sensors. Just a large number of cheap cameras and sophisticated software. Maybe do some tricks with cycling 'attention' or multi-res hierarchies to keep the computation load down and realtime.

Tesla has only three cameras with different FOVs and a very narrow baseline-- I doubt they are doing stereopsis. And I don't think they can do the job without it.


I'm confused.

The article seems to extrapolate that Tesla has failed in autonomous driving because it removed some lines from its Autopilot page. It's still very much in progress, and progress is routinely confirmed by the company. Musk is known for blowing timelines, but things do get delivered.

When it talks about an "old approach" I'm again confused. No one else crowdsources driving data from a real-world fleet. They have a unique unsupervised learning data source from shadowing drivers.

As for attentiveness: I don't see how drivers not paying attention is any different to mobile phone use. Currently Autopilot is very clear, at every chance it gets, that it's an assistant. Saying it will kill people is like blaming phones for drivers texting while driving and getting killed. These people die because they are breaking the law and not in control of their vehicle.


> Currently Autopilot is very clear on telling at every chance its an assistant.

Yep, hence why this article is just clickbait for Tesla haters. Every Tesla owner is fully aware that it's not self-driving in the sense that you can take a nap.


Arse Technica indeed.


I wonder, though, if this had more to do with how much they are actually investing in AI vs electric car tech. Surely Tesla is an EV company first and an AI company second. I know they like to brag about their autopilot, but it seems to me that having a lower-cost car is more important than having an expensive lidar system on board.


Most car companies make lots of money on add-on features. The base model may not be super profitable, but the base model with the self-driving package probably makes some good money. Or it would if they didn't have to keep doing recalls to upgrade the hardware.


TM3 owner; car has EAP. I have the option to buy FSD for two thousand but haven't jumped. Not only do I not believe in it, I don't have a need for it. Now, it might be useful down the road for resale value.

That being said, two thoughts.

First, if they want me to buy it, then demo it in passive mode in my car. That is, there is space where the current speed limit is shown. Use that space, and below it, to show what signs and signals it has seen recently, in order of importance. Currently it does not see speed limit signs, and if that is wholly FSD territory then Tesla is overcharging compared to other systems.

Second, just at a standstill, cars around me jump, and I am not sure it sees stationary cars when I am driving. The best example I have is a two-lane road through a subdivision I usually take; the outer lane is a long turning lane, but people tend to stop there to let kids out for the water park. I cannot recall my car showing a car when someone is loading/unloading kids, but it does see cars moving in that lane when I overtake them. So is it just going by too fast for the stopped car in the lane over to register? It knows it's a lane. I am not sure, but I will wait to see how it develops.


I am still trying to figure out what started and sustained the enormous self-driving car hype of the last few years. I understand why people who don't know much about technology would buy into it, but I don't understand why so many relatively tech-savvy people have bought into it and why huge amounts of money have been invested into it. It should be obvious - and should have been obvious when the hype started, as well - that to create fully self-driving cars that can operate as well as a competent human driver in a full spectrum of real road situations would require solving extremely difficult technical problems that are nowhere near solved and that cannot necessarily be solved any time soon simply by throwing money at them. So what explains the hype and the investment money? Out of the people responsible for the hype, what fraction were/are merely deluded and what fraction were/are lying?


We desperately want it to be true, so we ignore the truth long enough to try earnestly to make it so. It's the brave and ignorant charge of the new generation. Which begs the question: is it enough to mean well and act in earnest, or is ethics reserved for those who have the luxury of self-reflection?


Not having LiDAR for anything over level 3 self-driving capabilities seems like a very bad idea...Computer vision right now just does not have the spatial awareness that you need for self driving capabilities. I wish Tesla would work on something similar to the Kinect v3 but for long range (above 20 m).


Having LiDAR in any car for self driving seems like a prohibitively expensive idea though. For instance: See how no company has released a commercial product with LiDAR.

And I definitely think Computer Vision will get better before LiDAR gets cost competitive.


I agree.

However, suppose the opposite: Suppose LiDAR gets cheap first. Like, I don't know, $1000 per unit and compact enough to not be a big burden on the rest of the vehicle.

Tesla charges $5000 for the FSD add-on. In principle, they could easily afford to retrofit vehicles with the cheap LiDAR plus maintain any advantages the full computer vision system would add (such as identifying vehicle types to assist in predictive behavior modeling and avoidance strategies).

So trying to go the full Computer Vision (with ancillary radar) route actually has pretty low opportunity cost for Tesla.

Another thing: I wonder how much simply much-improved geolocation could help? That failure from last year of driving right into a (failed) collision attenuator (where an off-ramp split off) could've been avoided simply by having high confidence, half-meter-or-better geolocation (from GPS, cell, IMU, etc fusion) and good mapping. And with good mapping/geolocation combined with other sensors, the sensors could focus on identifying changes to the expected road condition, perhaps increasing their robustness. (and when networked, they could use sensing from other vehicles ahead of the current vehicle to assist in navigation as well)


AFAIK all the big companies working on this problem include high-definition mapping as a major component. The problem is gathering that data without having an actual LiDAR. So some companies like Waymo will literally just have cars with LiDARs drive all around every street and map it for them. This is great but means they're limited by when they travelled the road and conditions at the time.

Ideally, you have a mapping system that is constantly updating (every time your fleet drives by something new like construction on the road it auto-updates the map).

Comma.ai is working on a vision+GPS Kalman filter solution along these lines, with up to 10 cm accuracy. I'd guess Tesla is as well.

And then yes, after that the vision would be primarily for localization.
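
For anyone curious what a vision+GPS filter looks like in miniature, here's a one-axis sketch (all noise figures are my assumptions, not comma.ai's or Tesla's):

    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    Q = np.diag([0.01, 0.1])               # process noise (assumed)
    H = np.array([[1.0, 0.0]])             # GPS observes position only
    R = np.array([[2.5 ** 2]])             # ~2.5 m GPS sigma (assumed)

    x = np.zeros((2, 1))                   # state: [position, velocity]
    P = np.eye(2) * 10.0                   # estimate covariance

    for gps_fix in (0.2, 0.5, 0.9, 1.4):   # fake 10 Hz position fixes
        x = F @ x                          # predict with the motion model
        P = F @ P @ F.T + Q
        y = gps_fix - H @ x                # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y                      # correct with the GPS fix
        P = (np.eye(2) - K @ H) @ P

    print(x.ravel())                       # fused position and velocity

In a real system the update step would also ingest vision-based landmark observations, which is where the 10 cm claims come from.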



Totally agree. The race is for a reputable solid-state LiDAR, which Waymo [1] and Velodyne [2] both announced new versions of recently. But still, those are around $5K. Though I am really excited for Baraja's prism-based scanner [3].

[1] https://techcrunch.com/2019/03/06/waymo-to-start-selling-sta... [2] https://velodynelidar.com/velarray.html [3] https://www.baraja.com/


Elon insists being theoretically right is the same as being practically right. So, for example, he points to the theoretically perfectly functional normal vision that humans use, even if the practical implementations of today simply cannot reach the same results.


You think Elon Musk doesn't think there's any difference between theory and practice? What makes you think that?

And what does it mean to be "practically right" about the future? I don't understand how you could be right in practice about something that hasn't happened yet.


Regardless of whether cameras or lidar win out, just consider the fact that backup cameras already exist on tons of modern cars, and the solution to the environmental issues with these cameras is one of:

a) fold it out when you put the car in reverse, so it's always ice-free

b) ignore this problem. It's covered in ice for months.

This is a simple problem in comparison, and neither of the above solutions would work for a camera used for autonomous driving.


If the camera is broken, Tesla will obviously not enable auto driving. It has a steering wheel for a reason.


If it’s “broken” every morning for months in winter climates like the cameras I mention, then people there won’t pay a premium for the capability.

And if it doesn’t keep itself ice free it will risk going from uncovered to covered in seconds. Cameras need to be ice/snow/rain proof in the sense that they stay clear of it while driving.


What about a heated camera lens?


"Self-driving cars also benefit from lidar sensors, and the best ones cost thousands—if not tens of thousands—of dollars each. That's too expensive for an upgrade to a customer-owned vehicle. But the economics are more viable for a driverless taxi service, since the self-driving system replaces an expensive human taxi driver."

That's probably the crucial point. For now, lidar is needed for safe operation, and too expensive for mass deployment in private cars.


Commercial (not even taxi, but freight transport) will be the first to be autonomous because of the larger savings possible. That we'll see useful autonomy in a $50k car driven by individuals before we see it in a $1M transport vehicle operated by a huge logistics company isn't likely.


What is your evidence that lidar is needed for safe operation?



If self-driving can navigate US streets fully autonomously, even in bad weather, that will be impressive. Now transplant that to just about anywhere else in the world, and it will be impossible. Mexico City, Rome, Buenos Aires. Hah, no chance.


Are there efforts to invert the problem? Instead of fully independent vehicles, having a bit of a road-signaling system (a reincarnation of the 50s US embedded radio track).


Tesla has taken on so many things that have never been done before and succeeded, it would not be surprising if it failed at one of them.


The big missing piece is not LIDAR, but causal reasoning. Autopilot and similar 'AI' cannot reason; it doesn't have a mental model to ponder 'what if' and can't use counter-factuals to ponder what would happen if it didn't do something. It's just glorified pattern matching currently.

Reading Judea Pearl's The Book Of Why certainly sobered my outlook on AI.


Well, that's not necessarily true... self driving in particular usually uses planning, which is basically all about 'what if'. But it's true this is relevant to AI in general.


Isn't that more of an indictment of the self-driving industry as a whole? In Tesla's case they are missing that, but they are also missing LIDAR (or some other thing to fill in that technological gap).


It also has a factory automation strategy that other companies abandoned decades ago.


Anthony Levandowski seems to have the same opinion. I'm a big believer in vision, it'll be interesting to see how it plays out.


[flagged]


... working on this stuff day and night ... -- in California.

I’ve been driving for 40+ years, millions of km, 1/2 during winter.

Now, Elon is a Saskatchewan boy, so he should know better. Perhaps he’s forgotten. Winter driving is at best 1/2 vision dependent, often much less. Many times, you have to actively ignore your vision (everything you see is “moving” sideways). Quite often, you’re modulating your throttle to maintain a tiny ratio of +/- acceleration that maintains static friction; as soon as 1 or more contact patches achieve dynamic friction, you’re entering a spin due to yaw forces induced by your other driven wheels.

Much of the time, the road surface adhesion characteristics are best detected by sheen, vibration, sound and guesswork/prediction based on temperature, sunlight, recent weather, etc.

Unless the vehicle has sound, vibration, and vision sensor integration, it cannot hold a candle to human driving, except in the most trivial driving scenarios.

Winter is a complete non-starter for any autonomous driving technology that I am aware of.

I don’t know if Lidar would really help much. It can’t punch through heavy rain/snow, at least not as well as radar anyway. Maybe it would help in trivial driving conditions. I think vision, sound, vibration, and radar with a powerful ANN trained by professional all-season winter drivers in challenging conditions might be useful, and eventually even good. It should be able to perform super-challenging evasion and recovery maneuvers that could save lives!


The vision problems are real. But the ability to handle ice/snow-induced wheel spin and slew seems like exactly the kind of problem that can be handled with sensitive electronic traction control, perhaps augmented with accelerometers. That specific problem doesn't seem intrinsically more complicated than, say, keeping a drone level on a breezy day -- which is something machines already do much more reliably than humans can. (New England driver here, so I do have some personal experience.)


Traction/stability control can only help with the situation you are already in. Being able to see the condition of the road -- the sparkle of ice forming, for example, or a sheen that suggests black ice, etc -- allows you to act instead of just react.


While vision is a hard problem in general, the specific challenge of "ice detection" seems like exactly the kind of problem that computers can be programmed to do better (and more tirelessly) than humans, particularly with specialized detectors and LIDAR. Better sensors, better reaction time, zero distractions. (I don't have any expertise here, just intuition.)


With the right instruments, I don't disagree. But in the context of cameras, I think it will be quite some time before that happens. Watching demos of state-of-the-art computer vision object detection today, I think it may be decades before we have cameras and computers good enough to do the kind of on-the-fly visual analysis that humans do naturally.


> in the context of cameras, I think it will be quite some time before that happens.

Why? The car can easily detect slipping. It can upload camera data every time any Tesla loses traction. You think machine learning won't be able to correlate those signals?

Certainly some problems are way easier for us, but that problem seems way easier for the Tesla fleet than for a human.
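
A sketch of how that fleet labeling could work (the logging interface here is invented for illustration, not Tesla's actual telemetry API):

    # Turn traction-loss events into weakly-labeled training examples:
    # grab the camera frame from just before each slip and tag it.
    def harvest_labels(drive_log):
        examples = []
        for t, event in drive_log.events():            # hypothetical API
            if event.kind == "traction_loss":
                frame = drive_log.camera_frame(t - 1.0)  # look 1 s back
                examples.append((frame, "low_grip"))
        return examples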


When the first snowstorm hits it never ceases to surprise me how the dynamics of the road change - not just the friction, but the number of lanes and where they go all change as people give their best guess, which over the next 2000 cars becomes codified. If you have self-driving cars following the old lanes you're going to have accidents.


Dunno, I think of the people that follow other cars as sheep. In my Subaru I can pick any lane I like, and often the one with the least travel has the best traction. With snow tires and a Subaru I can handle just about any conditions, up to where the snow lifts the car off the tires... which I've had happen. Digging compressed snow out from under a car is no fun.

So often I see a giant line of cars behind the plow, slipping and sliding along, then I pick the deepest snow lane with the most fresh snow and it's just fine.

Generally it's not the lane you pick, but the safe speed for that lane. Sure, that might vary per lane, but with today's sensors and electric motors the autonomous driver should be more accurate at quantifying that than a human. After all, a computer could easily apply 50 HP to each wheel (one at a time) for 10 ms or similar to quantify where the dynamic/static friction threshold is.
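
As a sketch, that probe could be as simple as the following (torque, duration, and slip threshold are invented numbers, and the vehicle-interface calls are stubbed-out hypotheticals):

    # Stubbed vehicle interface -- in a real car these would talk to the
    # motor controller and the wheel-speed sensors.
    def apply_torque(wheel, torque_nm, duration_ms): pass
    def wheel_slip(wheel): return 0.02   # fake slip-ratio reading

    PROBE_TORQUE_NM = 150   # brief torque step per wheel (assumed)
    PROBE_MS = 10
    SLIP_LIMIT = 0.05       # slip ratio where static friction is lost

    def probe_wheel(wheel):
        apply_torque(wheel, PROBE_TORQUE_NM, duration_ms=PROBE_MS)
        slip = wheel_slip(wheel)  # (wheel speed - ground speed) / ground speed
        return slip < SLIP_LIMIT  # True if the contact patch held

    grip = {w: probe_wheel(w) for w in ("FL", "FR", "RL", "RR")}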


It has nothing to do with the traction. It's the fact that people are now travelling across the road at different angles than they were before; if you're trying to stay in the regular lane, you're going to be fighting people moving across it.


This is silly. The car has a steering wheel. It won't engage autopilot in a situation that it can't handle. Just like Waymo cars only drive in Phoenix.


You realize humans drive with just vision, right?


> You realize humans drive with just vision, right?

If that's the case, why do cars have horns?


You forgot balance, sound and vibrations.


This thread has like 4 posts. You're being a bit presumptuous and hostile, no?


Maybe Tesla knows better than Google and Uber and GM and Toyota and Volvo...

Or maybe they've gone down a blind alley in their rush to look like they were first to market.


Well, it's a great way to inflate the stock price. Much better than saying "We'll be just another car company, when self-driving arrives".


Well, they also have billions of miles of data from people who have actually driven their cars hands-free, feet-free. No one else has anything close to this data.


Do they have more data than Waymo? That doesn't seem completely clear to me. They certainly have more vehicles on the road, but I assume they are driven many fewer hours per day.


Interestingly, for air travel, there's another strategy: Rely entirely on GPS and commanding from air traffic control (with systems and structure in place to minimize damage in case those systems fail or are insufficient).

That's what Zipline uses for their fully operational medical delivery system in Rwanda.

The drones themselves are completely blind, but they travel on pre-defined flight paths, communicate with one another, and follow directions from flight control.

For self-driving cars, a similar situation may be extremely good GPS geolocation combined with network-wide sensor fusion and mapping. In principle, you could even do real-time optical or synthetic aperture radar from orbit or via persistent aircraft to allow the central controller to identify hazards and obstacles and to update maps in real time without much at all happening on-board except low-latency reactions. Most highways already have much of their length covered by cameras for the local Department of Transportation, and even many intersections and sidestreets have surveillance cameras. A low latency connection to that system, upgraded with higher fidelity, could significantly help autonomous systems. Might be another very helpful public good for cities to provide, much like GPS is provided.

...and on the engineering side, you can have vehicles designed specifically to reduce pedestrian injury, such as not having those large, boxy grilles on SUVs. Such grilles are almost entirely cosmetic and reduce efficiency, so Tesla doesn't have them on its cars. Going beyond that, external airbags may help as well. Zipline uses a foam body and a fail-safe parachute to stop the vehicle if there's a problem.


If a plane can only get a GPS fix to within a 15m radius that's no big deal, whereas if a car has a 15m fix it's running down pedestrians on the sidewalk. Planes also don't have a habit of flying in tunnels or urban canyons where obtaining a gps fix is even more problematic.


Planes already have a problem with GPS outages or military jamming exercises. Imagine if all cars within a 100 mile radius all of a sudden would stop working.


...that's why you develop systems designed to bring them to a safe stop (in addition, stable IMUs can assist for several seconds, plenty of time to come to a safe stop). That should be standard on all autonomous systems, TBH.

And BTW, there are 4 GNSS systems, each run by a different country/entity. The odds of them all failing at the same time are very small.


GPS is (negatively) affected by atmospheric variability. Are the other 3 systems immune from it?


Indeed. But this is an addressable problem. The new GPS satellites (GPS III) have just started launching (on a SpaceX rocket in December 2018), and they should triple the accuracy: https://www.geospatialworld.net/blogs/what-is-gps-iii-and-ho...

But even before that's complete: High precision GPS (i.e. with compensation for atmospheric effects) can get better than 10cm accuracy by tracking the carrier wave: https://en.wikipedia.org/wiki/Real-time_kinematic

Single-digit-centimeter accuracy is possible within ~20 km of GPS-signal-compensating base stations. This level of precision is regularly used by surveyors.

There are several options for improving geolocation beyond inside-out (i.e. vision or lidar) tracking.
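
Back-of-envelope on the carrier-phase claim: the L1 carrier is 1575.42 MHz, so one wavelength is about 19 cm, and resolving phase to 1% of a cycle corresponds to roughly 2 mm of range (integer-ambiguity resolution, the genuinely hard part of RTK, is glossed over here):

    c = 299_792_458.0          # speed of light, m/s
    f_l1 = 1575.42e6           # GPS L1 carrier frequency, Hz
    wavelength = c / f_l1      # ~0.190 m
    print(wavelength * 0.01)   # ~0.0019 m of range per 1% of a cycle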



