Hacker News new | past | comments | ask | show | jobs | submit login
Waymo CEO dismisses Tesla self-driving plan: “This is not how it works” (arstechnica.com)
260 points by sytelus on Jan 24, 2021 | hide | past | favorite | 564 comments



I think the FSD beta rollout is completely irresponsible from Tesla. Rolling out a clearly half-baked safety critical technology to its customers (yes, I know it's only a few beta testers) who are untrained is nothing but a tactic to generate hype and get more customers to buy that $10k FSD package.

This is on top of Tesla being the least transparent company out there in reporting safety data or their testing methodologies (Tesla's quarterly safety report is a grand total of a single paragraph). Compare this to how transparent Waymo is [1], the difference in safety culture between Tesla and its competitors is stark. Not to mention how Tesla skirts around the rules by refusing to report autonomous miles with the excuse of classifying it as a driver assist system, while naming the technology "Full Self Driving" and Elon Musk hyping it up every chance he gets on how it will be Level 5-ready by end of the year.

[1] - https://waymo.com/safety/


I have one question for Tesla customers who trust the company to deliver full FSD.

How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

The NHTSA opened an investigation into premature HUD failures because they prevented the backup cameras from working. But the fact of the matter is, the company used a small partition of internal Tegra Flash to store rapidly-refreshing log data. And you are trusting these devs with your life when you enable autopilot.

You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.


Tesla is organized functionally. The Infotainment Group did the Console electronics. The SW people there did GUIs and such. So yes between the electronics folks and the app folks 'somebody' didn't consider write cycles. In other Tesla groups, such as Body Controls, and Propulsion -- I can assure you those geeks know such things and plan to deal with funky hardware. The Autopilot group is again separate. There really isn't much crossover. "Systems" is unfortunately an unknown word at Tesla. You know, parts is parts.


This is interesting to know, and your comment flipped a switch in my head - I'd like to know the organizational structure of a lot of companies out there. Is this information you acquired personally? Or is there a resource out there where you can refer to the structure of different companies?


Typically the annual report will give you an org chart with the division heads for public companies. If it isn't there it will be on the website or some other publication and if you can't find it and are an investor you can always simply ask.

Here is Tesla's:

https://theorg.com/org/tesla

and a bit more detail here:

https://theorg.com/org/tesla/org-chart

From there on down it takes a bit of work to get more detail, we typically spend a day on this during the run-up to a DD to verify what we receive and use a lot of googling, linked-in, and other sources to figure out who works in the company and in what role.

The GDPR has made this a bit harder. Team pages are a good source of info for lots of companies in the 10-100 people range, they sometimes list all of their employee names + titles.

I'm not aware of a single source of truth for detailed org charts, if it exists we'd be happy to buy it, it would save us a lot of time and effort.


> There really isn't much crossover

except, apparently, the execution platform. you know, the bit that matters.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

That sounds more like the kind of situation where the software department said "we need to have a system that has X amount of storage" and the hardware department made the hardware for it, but there was some missing communication about endurance. It's likely not the same people writing the autopilot software.

That being said, I'm not a Tesla customer, and the way autopilot is deployed and marketed makes me very uneasy.


Well it still speaks volumes about internal culture. Everyone on the team should know they are developing a safety-critical system/component. Yet, the Bob from software can write a sloppy spec and Alice from hardware can not care about the spec being sloppy. It is entirely baffling.


>Yet, the Bob from software can write a sloppy spec and Alice from hardware can not care about the spec being sloppy.

Ehhh, you have not been long in industry, have you? :)

And that's why you get everything in writing and doubly signed off from all parties involved. Even telling people directly, to their face, with witnesses, does not work. Checklists for the departments does however.


> Tesla's embedded developers did not understand the extremely simple concept of write endurance

Equally likely is that they do understand but just don't care.

From all accounts, Tesla seems to have a culture of move fast and break things.


Hmm I don't want to defend Tesla, but I do want to push back on this a bit!

Facebook made "move fast and break things" famous, which was Zuckerberg's way of presenting a tradeoff. Everyone company says they want to move fast, but Zuckerberg made it clear that the company should care more about velocity than stability.

I don't believe that's Tesla's attitude. Rather, I think their attitude is more "move fast and ignore regulations". It's not that things won't break, but rather that the tradeoff Tesla is making is around regulation rather than things breaking.


Regulations are all we have to keep things from breaking...ignoring them about something like a self driving car should be a criminal offense.


The other side of the coin as that they hobble 5-20 years behind tech possibilities. If you want to push boundaries, you sometimes have little choice.

Self-driving cars in the next 10 years have been a serious possibility for the last decade or so, yet governments and insurers don't have a policy ready and won't until a couple of years after the first self-driving cars are available.


Tesla, Google et all are not little startups but huge corps with plenty of cash and the ear of any politician or CEO. If they can't get a policy enacted, maybe there's a reason for it.


Pushing boundaries is fine when there are no life-threatening implications.

Self driving is all about convenience and costs[1] and as such it's not necessary, nor is it advisable, to inflict the bleeding edge on the general public. Waymo's geofenced approach is less bad than Tesla's, and it's something that regulators can readily work with also.

1. But teh safeties!1! No. Just no. ADASes (advanced driver assistance systems), particularly autonomous emergency braking, remove the safety argument for self driving. With ADASes you have 95% or more of the (asserted) safety of self driving, and ADASes are available today, on a large and increasing range of cars. There are even retrofit kits.


There is nothing like self driving in current regulations, much less 10-15 years ago when people seriously started working on it. So, in your views, even starting working on this stuff should be criminal offense?


>all we have to keep things from breaking...ignoring them about something like a self driving car should be a criminal offense.

I think it would potentially be a manslaughter charge.


I don't disagree! I just think people misinterpret "move fast and break things" and use it anytime something breaks. I realize it's a nuanced point.


Also I think it was just to cover the backs of FB engineers. Like you implement a new feature and you are afraid you get scolded because it broke something. You know you are covered. So you dare to change things. Actually was there even a handful of cases where things broke? (And I am sure FB will get rid of the engineer who breaks too many things)


Yup, that's exactly it!

So, the culture was "it's okay to break things as long as you're moving fast". I don't think Telsa explicitly would say "it's okay to break things" to their engineers, but I do think they'd say "it's okay to ignore regulations".

In the end, they may have the same results, however it's all about what employees know they're safe getting away with.


Tesla might actually say: "Everybody, we need to push end of quarter sales. You gotta release the FSD as it is. App team, you gotta implement some butt purchase button for FSD that has no undo. Thanks."


Safety critical regulations, it seems. That gets close enough to "brake things" if you aks me.


But things have literally broken.


You have to kill a few people to make an omelette, as they say.


Sure, but I think you're missing my point.


That can be forgivable in some situations, but Move Fast And Break Customers? Not so much.


It's even worse to move customers fast and break innocent bystanders.


Nah, not as bad. They aren't a source of revenue. Now if the bystander also owned a Tesla…


A lot of this also stems from a culture of quarterly earnings reports and idiotic "fiduciary" duty to some shareholders instead of the primary duty being to customers and humanity.

The incentives are fundamentally defined in the wrong way, and the system has just optimized itself for the those incentives.


There is no challenge reconciling imperfect FSD with high trust of that FSD.

When I decided to play the game of risk minimisation, I sold my car. Minimising risk isn't the most important goal of drivers, almost by definition. Cars are not safe in any objective sense. They are tools of convenience.

A fun hypothetical, you and a good friend get tested for spatial intelligence and it turns out there was a big difference in your favour - how big does the difference need to be before you tell your friend you are no longer comfortable letting them drive when you are in the car?


While spatial awareness is important during driving,I believe being focused on driving is even more so.

When driving the tight streets of old European cities with pedestrians jumping out everywhere, I usually watch for hints like too tall cars parked on the sidewalk and potentially hiding pedestrians planning to cross the street, and move my foot from the gas to hovering above the brake pedal. And a million other things like that, mostly by paying close attention to driving.

Sure, I believe my spatial awareness is also great, but that helps me to parallel park in fewer back-and-forths or to remember a way to a place I've been to once six months ago through a maze of one way streets. But it does not help me reduce the chances of an impactful collision (sure, I might ding a car on the parking lot or not because of it, but nobody is going to get hurt because of that).

You are right that cars are not safe, but for some part, you've got control of the risk yourself. I also watch for hints a car will swerve in front of me, and I am sure I've helped avoid 100s of traffic accidents by being focused on the whole driving task. And other drivers have helped avoid traffic accidents that I would have caused in probably a dozen cases too. I think I am an above average driver simply because of that ratio.

You run similar risks when you board a public bus without knowing how the driver feels that day, and how focused they generally are.


> You are right that cars are not safe, but for some part, you've got control of the risk yourself.

I don't want to be in control of the risk, I'm a bad driver. Haven't owned a car for some years. Still drive on occasion when I need to with a hire car.

I want a computer that is better at driving than I am to do it. It is easy for me to see why perfect is the enemy of good on this issue.

You don't want to share a road with me when you could share it with a Tesla FSD.


>You don't want to share a road with me when you could share it with a Tesla FSD.

This might be irrational, but I'd rather be killed by a human than killed by a computer made by a company that's run by a gung-ho serial bullshitter. That would somehow suck worse.


> You don't want to share a road with me when you could share it with a Tesla FSD.

I'd rather share a road with you, a human.

Even if you're a self-admitted bad driver, humans have a strong instinct of self preservation which helps.

Software has no such thing, a bug in the code will let it accelerate full throttle into a wall (or whatever) without flinching because it's not alive.


Bugs in humans let them do that too: "The US NHTSA estimates 16,000 accidents per year in USA, when drivers intend to apply the brake but mistakenly apply the accelerator."

https://en.wikipedia.org/wiki/Sudden_unintended_acceleration


Or: look-but-failed-to-see-errors, which are an "interesting" cause of accidents. When I took my motorcycle driver's test, my driving instructor sometimes warned me that I needed to make movements in a particular way. He claimed that even though I would make eye contact with a car driver, they may look-but-not-see-me. His reasoning was, as a motorcycle rider, I'm horizontal/upright when a car driver may be looking for something vertical (another car).


Riding a motorcycle is a tough one for car drivers, and not just because of the issue you mention: bikes can accelerate and brake much more rapidly due to their lower mass, and inattentive drivers can easily be caught by that. Them appearing where it shouldn't be possible for a car to show up also amplifies the issue (you don't need to look over your shoulders in a single lane street, but bikes easily show up there).

To be honest, I'd trust software even less if I was a bike rider riding in a European (or Chinese, Phillipine...) city, but that's just me :)


> bikes can accelerate and brake much more rapidly due to their lower mass

Cars are typically able to brake faster than motorcycles. One of the reasons why tailgating on a bike is extremely dangerous.


Good point, thanks!


Being a software engineer,I do want to share a road with you more than any self-driving tech out there today.

You need to experience truly bad roads to understand the complexity involved that you would easily navigate and software would be perplexed!

Sure, we need to be building it today to get there some day, but we are so far away!


There is no need for FSD, just a simpler AI/sensors that detects collision and breaks before the driver does (which is already a feature in some cars)


You mention focused driving, but here's a cool idea. Your subconscious which actually handles most of ur behavior and decision making and nuanced calculations gradually learns from your conscious. When you focus on things, you gradually train your subconscious to mirror that behavior and do it autonomously.

This is demonstrable by reflecting on new things you learn versus old things. Old things like walking you barely put any conscious effort in, yet once you reach a certain age the daily obstacle course that is life, which is full of tripping hazards becomes effortless to avoid ensnaring ur foot on and succumbing to a sudden tumble. But if you were to try to roller blade for the first time, suddenly you have to put massive conscious strain and focus on every movement just to avoid falling on something so simple as a slight texture change on a surface.

Also interesting thought on (conscious) spatial awareness: Here's a question is your conscious aware of things first or is your subconscious aware first? When you conscious becomes aware how sure are yah that it's not your subconscious first alerting your conscious beforehand? These are rhetorical questions which psychologists and neuroscientists already have insights about :).

Life is dangerous, but many of the dangers are predictable and the brain is adept growing to adjust to that predictability AND at learning to recognize indicators for unpredictable dangers(humans receive anxiety in these moments). In those latter situations Intelligence and consciousness is needed. Dangers that are predictable can be learned to be subconsciously handled without much worry & with much practice + experience.

Tesla autopilot is a computerized subconscious that's consciously trained by all the tesla drivers.

I strongly suspect that we'll never have level 5 autopilot with or without lidar sensors unless the computers get a human adaptable intelligence module OR some convention simplifies the environment such that new unpredictable dangers can be minimized to a miniscule and acceptable failure rate. I think people in this debate are focusing on the wrong issues.


You say how we subconsciously handle things like obstacles during walking, but here I am at my 38 years of age, tripping on uneven sidewalk where there's a sudden unnoticeable drop in the level of a couple cm (an inch): the same feeling when you go down the stairs in dark, and forget that there is one extra step.

I agree we get subconsciously trained (here, my brain is expecting a perfectly flat sidewalk), but when I say focused driving, I am mostly thinking of *not-doing-anything-else*: to an extemt that I also keep my phone calls short (or reject them) even with the bluetooth handsfree system built into my car with steering wheel commands.

The thing is that a truck's trunk opening in front of you and things starting to fall out on a highway at 130kmph (~80mph) is very hard to train for, but all four of us car drivers that were right behind when it happened did manage to avoid it without much drama or risk to themselves or each other. What self driving tech today would you trust to achieve the same today? Sometimes you don't care about averages, because they are skewed by drunks or stupid people showing off on public roads.

And stats being by miles covered is generally useless: if it was accidents per number-of-performed-manouvres, it'd be useful. Getting on an empty highway and doing 100 miles is pretty simple compared to doing 2 miles in a congested city centre.


Cars are most dangerous for pedestrians, cyclists and bikers.


The hud as you call it is not a safety critical part of the car in Teslas. You can reboot it while driving without effecting the car. The self driving computer is separate and has full redundancy to the point of having two processors running redundant code. There is a reason Tesla’s are consistently rated as the safest cars on the road with low probability of being involved in an accident and lowest probability of injury when an accident does happen.


2 Processors? What happens with majority vote if they disagree? (or maybe they wanted to avoid a 'minority report' situation :-)). But honestly do you know what they do? Although since it is not flying, probably some red indicator will light up. And maybe a stopping maneuver.


Apparently they just try again:

"Each chip makes its own assessment of what the car should do next. The computer compares the two assessments, and if the chips agree, the car takes the action. If the chips disagree, the car just throws away that frame of video data and tries again, Venkataramanan said."


The self driving computer was discussed in some detail at their 2019 Autonomy Day event.

https://youtu.be/Ucp0TTmvqOE?t=4244


Fail safety doesn’t mean anything, if the decisions it makes is bad, like thinking a plastic bag is a solid object on the road, or simply forgetting where the lane is over a distance and swerving into oncoming traffic..


>You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.

This is my biggest concern. I'm going to get killed by some jackass' tesla while he's texting, because Elon Musk is a megalomaniac.


There are loads of people texting and driving who aren’t in a Tesla. I worry much more about them.


I worry about them, but not more than people abusing AP, they're all in the same boat.

The people texting and driving are idiots, distracted idiots, but they have no misconceptions about if their car will save them if they take a nap.

Elon's made comments about "your hands are just there for regulatory reasons" and overpromised for years so now people abuse it until it's just as dangerous if not more dangerous than distracted driving (stuff like intentionally sleeping or using a laptop full time)

Other manufacturers are coming out with features that protect me from texting drivers without generating a new breed of ultra-distracted drivers like those who are falling for Elon's act.

Now a base model Corolla, pretty much the automotive equivalent of a yardstick, will steer people back into their lane and warn drowsy drivers that the car is intervening too much.

A Tesla can't even do the latter.

-

One day we're going to look back and wonder why we allowed things like automatic steering without fully fleshed FSD.

I mean the driver is actually safer if you only intervene when something goes wrong. They're forced to be attentive, yet in all situations where they fail to be attentive and AP would have saved them... it does save them. And tells them to get their head out of their backside.

If AP did that every person it has saved would have still been saved, but a few people it got killed would still be here today.


Same here. However, I am assuming you have decent sight, so you can at least protect yourself in certain situation. Me is blind, and I am getting increasingly weary about the future as a pedestrian. As a kid, one of my biggest fears were automatic doors. I sort of imagined they would close on me and try to kill me. I am afraid this horror is going to be true at some point of my life. Automation is going to kill me one day.


Don’t worry, it’s much more likely that you’ll be killed by someone who is texting and driving some other make of car.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

Easy: compartmentalization of knowledge. Most software developers I have met have no idea about the storage stuff under their application, they trust the OS and the hardware people to deal with it. I mean, who can blame them in the age of AWS or actual servers where companies simply throw money at any problem and hardware is rotated for something new before write endurance ever becomes an issue And the hardware people probably knew that the OS people would run Linux, but didn't expect logfile spam.


Please note that you are asking this question at the tail end of pandemic where a significant portion of the country decided it was preferable to "just let some old people die" than to lockdown or even wear masks.

Those people will twist themselves into giving you a PC answer but the truth is they're willing to crack a few eggs in order get FSD today. They'll tell you no one else is even trying and in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.


> in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.

In the long run, for the historical perspective, this is a very plausible outcome. It's happened before (any major construction projects prior to the 1950's, anything involving Thomas Edison, most large damming projects), where a historical event is tied to a bunch of dead innocents, but history books praise the vision and determination of the ones in charge to not give up just because a few measly blue collars kicked the bucket early.


Where the difference in both this and the grand parent post is about choice.

If you fear the pandemic and what to lock yourself up in isolation we should as much as possible allow that. And if you want to work on very dangerous projects for better rewards you spud be able to.

With autonomous cars the choice of risk may not be so easy.

Arguing about what real choice you have is overly pedantic and we should rather concentrate on the principles for the right out come.


That reflects of values and focus of those who wrote those pop tech/business history books/articles.

I have seen history books that do reflect "but he also killed some people" or are critical. They are however less popular in some circles.


I don't understand why it's legal. How can it be allowed to send out uncertified software to cars on public roads? Aren't there safety standards that need to be met? I thought that safety critical products where better regulated than this.


Welcome to being me three or four years ago. I've literally met with MPs and ministers trying to get regulations and other things. They're coming. But they don't happen without civic engagement. Write to your elected representative and call for legislation. It works. It takes time, but it works.


In Europe, most of the Tesla features are disabled because they were deemed hazardous. Only lane assist and adaptive cruise control is enabled. The others are severely limited or disabled (Summon etc.)


While I'm not a big Tesla fan, I don't think legislation is any indication of the actual safety that can be provided. Europe legislation tightly follows what German carmakers can deliver. Once they can offer the same features, it will be legal in no time.


For the pollution aspect of engines, I have some sympathy with this view, even though its not entirely correct.

When it comes to safety though, I disagree. One of the decent things to come out is the Euro NCAP rating. The manufacturers are not part of the testing process, apart from they need to supply cars. Each car is then given a rating.

For Autonomous driving, from what I can see its still down to individual states.


>Europe legislation tightly follows what German carmakers can deliver.

Do you have a source for this? Cause it seems to me that they could easily deliver something like summon, seeing as parking-assist/automatic-parking already exists.


Wouldn’t it make sense that “hardware” gets tested before it can go on the roads, why is it not the case with software? And if they find a bug, than disable the software whole-sale until it is tested by regulators again (since it can introduce new bugs as well)


And they make driving with the remainder of AP extremely unsafe in the EU sadly. For example they limit the angle of turn, meaning the cars cannot drive safely around a lot of non-highway road corners without drifting into oncoming lane (upon which the car brakes and beeps due to another feature called lane assist). Admittedly the car could slow down before the curve, but that would get you into trouble with cars from behind not expecting you to slow down for these kinds of curves.


It's heavily regulated, just ask geohot [1]. Most of the players in this space do their self-driving vehicle testing carefully under controlled circumstances with some approval from local agencies. Tesla seems to be cheating by punting all the responsibility onto their customers, because instead of testing Self-Driving Cars they're just shipping software to customers and letting people run that on their cars.

1: https://www.reuters.com/article/us-selfdriving-safety/comma-...


Well, software that controls a car is a new thing. So I would imagine that it hasn't really been regulated yet in most jurisdictions. For some reason we have a tendency to write laws so that they are specific to individual things rather than general and future prof laws.


Software in cars is not "a new thing". Safety has increased immeasurably in cars over the last few decades, in no small part due to software. Complexity and opacity are a problem, but cars are much safer and more efficient today due in large part to software. I don't think that would have happened if there was some regulatory committee in place to audit the software for safety. We have liability for that, a much better model than a regulatory one.

> However, as the importance of electronics and software has grown, so has complexity. Take the exploding number of software lines of code (SLOC) contained in modern cars as an example. In 2010, some vehicles had about ten million SLOC; by 2016, this expanded by a factor of 15, to roughly 150 million lines. Snowballing complexity is causing significant software-related quality issues, as evidenced by millions of recent vehicle recalls.

https://www.mckinsey.com/industries/automotive-and-assembly/...


I don’t get why it’s not covered under existing laws, clearly every vehicle has to be operated by a driver with a valid license correct? You would think letting a vehicle drive without a licensed operator at the wheel would be negligence.


I agree, but when it came to bullying the anti bullying laws apparently didn't cover online bullying which didn't really make sense to me.

So I assume that this is a case of the person in the drivers seat must have a drivers license, but there is no law saying how much of the driving they must do.


Nobody is doing that. In the locales where truly autonomous vehicles are being tested, it’s happening under specific legislation and regulations put in place by the states. For Tesla, behind all their bluster, the terms and conditions you accept to use their “self-driving” features make it clear that the human driver is always responsible for safe operation of the car, and all of these features are just to assist the human driver.


Kinda hard to square that with a product called "Full Self Driving". Terms of service are generally worthless as a legal shield against catastrophic harm to customers.


Unless you're driving a Flintstones car, much of the driving you do in any modern car is done by software. From fuel injection, to power steering to anti-lock breaks


FSD rollout is purely driven by Elon’s ego. He’s addicted to fame and power, and that comes from image he created of himself being an Iron Man. I’m pretty sure he got to a point where he believes in it, and thinks he’s invincible and can solve any problem, because he’s so much smarter than everyone. Stock bubble making wealthiest man only helped to solidify that.


I don't know, I don't get that feeling. What makes you think that? I don't know much about Elon, but I listened to one of his interviews with Rogan, and he struck me as extremely optimistic, but also grounded and not arrogant.

(I do agree partial-self-driving just seems like a terrible idea. I guess crash stats can reveal if this is true or not, but are perhaps not available.)


Autopilot and FSD beta are not the same. The latter is currently available to maybe a couple dozen testers that are clearly very well informed about the capabilities of the system, as well as the changes in each update. If you really don't believe it, watch their hours and hours of (frankly boring) videos of analyzing the behavior of the system in complex situations while still staying on top of safety.

It does remain to be seen how well Tesla will trust the general public with this level of improved autonomy. As you get closer and closer to the uncanny valley where things just appear to work, you get into the more tricky situations that truly befuddle humans and machines alike.

NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily. Aside from one or two incidents they've had complaints about, none of them have risen to the point where they had to put their foot down. Admittedly, Tesla were a big bunch of jerks about how they handled the situation, but still, these were isolated incidents with clear misuse from the driver's part.

I agree with you, in that Musk is overly optimistic (no shit, he's been saying this would be ready in 2018, and it's unclear if it will be in 2021). But he's also quite well informed of the facts on the ground, and is clearly aiming for the moonshot-winner-take-all prize by skipping Lidar and high-precision mapping. That might be a gamble, but need not be an inherently dangerous one, depending on how Tesla handles the super-grey areas around the uncanny valley, where the system appears to work, but really isn't worth risking your life upon. To some, it's already there, as you can see from idiots sleeping in their Teslas while on Autopilot. But again, outside of a couple of incidents over years and millions and miles, the rate of catastrophic failure (accidents) has been surprisingly low.


> clearly very well informed

I completely underestimated the role of professional safety drivers for autonomous vehicles. I thought it's "just a guy" sitting in the car for good measure, but it turns out that the majority of drivers is not fit for the job even after lengthy training, see e.g. [1] (a gread podcast in general).

Also all autonomous driving companies employ safety drivers - except one.

> NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily.

I wouldn't put too much hope into the NHTSA regulating Autopilot. It took a two year legal battle to get the data driving their analysis of the Autopilot in 2017, turns out it was completely provided by Tesla, but worse, when confounders where removed, it still showed a higher crash rate for Autopilot.

If you take a non-American view of Autopilot, Europeans agencies did scrutinize the crashes more closely and as a result have restricted the use of Autopilot.

If you are interested in the topic of autonomous driving I recommend the Autonocast podcast.

[1] http://www.autonocast.com/blog/2020/10/29/205-why-teslas-ful...

[2] https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


Elon will get into some pretty bizarre bouts on twitter. I realize this is common for celebrities, but that whole "diver is a pedophile" thing was truly wtf.

https://www.wired.com/story/elon-musk-pedo-guy-doesnt-mean-p...


The whole thing just screamed of a man with with an issue surrounded by yesmen.


Westerners living in Thailand and elsewhere in SE/E Asia have gained well-known reputation for such questionable behavior among other things.


If you go read the court testimony it isn’t as strange as it seems on the face. The diver started the tiff and the insult Musk sent in return was said to be common vernacular in South Africa where he grew up.


Is it also tradition in South Africa to then e-mail press to insist that it investigate the target and pay "private investigators"?


Good point, but the whole thing is a made-up excuse. "Pedo" just means pedophile in South Africa.


his covid comments last year were beyond the pale. The low point for me was when Shannon Woodward, who played a scientist on TV and to my knowledge isn't one, had to explain to him that tests are indeed not a big pharma conspiracy

https://twitter.com/shannonwoodward/status/13275176940992757...


I don’t see any explaining going on in that thread?


There is a lesson in sales and trust hidden in there. No matter how good your device is, an inexperienced (and worse when famous) person can sow distrust in it instantly. Wouldn’t surprise me if next machine is just 4 machines glued together to make it Elon Musk proof.


“Had to” and she just replied on Twitter with some odd assumptions are very different. I think anyone would agree that a test that is wrong half the time is not a good test.


LOL, I love it... an actress with no higher education has to explain antigen tests to the world's richest man.

We really are in the best possible timeline for general lulz and nonsense.


> FSD rollout is purely driven by Elon’s ego. He’s addicted to fame and power...

Whoa. You have strong feelings.

One of my Life Rules that has served me enduringly well is to avoid believing that I can confidently have real insight into what motives drive someone. That rule is not for their benefit, but for my own: to putatively "know" something is to invite an incorrect model of reality and thereby incorrect action. It doesn't actually matter what drives Musk or anyone else: ego, pure business interests, a desire to take humanity to the stars, boredom, pathology, space ghosts, whatever. What matters is behavior and its effects.

If you criticize Musk's ego, ambition, pride, there's nothing actionable for anyone, neither Musk himself nor Tesla's potential customers. If you criticize the Tesla FSD rollout, now there's something people can do.


He also personally received an obscenely large windfall as a result of this strategy.

Debatable how significant this impacts his decision making but it's pretty hard to argue that doesn't impact it at least a little.


I don't think FSD rollout is the central goal. The goal is to build hype and thus market cap that Tesla can use to build factories. Ten years ago, Tesla had no factories (at high capacity). They've had absolutely ridiculous velocity since then precisely because of their ridiculous valuation. Even if this FSD thing doesn't work out, it might have been necessary for Tesla to get this far. If Tesla's stock crashes, they still have the factories. All they need is to make sure nobody can prove Elon knows he's full of it (securities fraud). His Twitter persona suits this strategy to a T.


Conspiracy level nonsense here. Elon tried to take Tesla private. He himself didn't want raise money on higher evaluation. He himself multiple times said they don't deserve the high stock.

He has been an optimist on this technology of a long time and has made that argument for a long time.

The claim that the FSD testing is to boost the stock price is truly bizard, specially because there is very little evidence of any correlation between FSD and stock price.

Go look at the Wallstreet models, most of them don't have large considerations income from FSD. If you look at FSD testers, their videos don't have millions of views.


He didn't create that image on his own. He had a lot of help. He had a cameo in Iron Man 2 where Iron Man treated him as a peer!

(That scene, by the way, is a perfect example of why I hate superhero movies.)


RDJ’s portrayal of Iron Man is partially based off Elon Musk, not the other way around.

Edit: Since I’m being downvoted, here you go, today you learned: https://www.linkedin.com/pulse/true-story-elon-musk-robert-d...


and Larry Ellison, Elon Musk was still too young on the scene in 2008, and not the celebrity he is today.



Not sure about that tibit, but in general that book is said to be inaccurate/too rosy of musk.

From what I have heard, it was both:

Elon Musk, Larry Ellison Appear In Iron Man 2

https://www.forbes.com/sites/velocity/2010/04/29/elon-musk-l...


Kinda weird comment. Not that it matters, but in the scene Iron Man brushed Elon off -- He was "politely rude", and the scene (while serving as an ad for Elon) served the film by giving cred to Stark by having Elon fawn over him.

More importantly though, treating someone "as a peer" is kinda a weird criticism? Superheros on TV these days don't seem terribly stuck-up. When other characters in the films seem honest the heroes usually treat them with respect and dignity.

Anyway, thanks for the heads-up that you hate superhero movies, will keep it in mind next time I see one on a plane.


That safety 'Report' from Waymo isn't a report at all though. It is a general overview on their approach how they say they will handle it and a direction. Basically a primer for regulators and customers that don't know anything yet about it. It is extremely light on numbers over time.


There are a couple more PDFs linked in the same page. What did you find lacking in the safety performance data whitepaper?


What I wonder is would countries like China be able to reach FSD faster since they would possibly be less ethically bound? And similarly with other tech which otherwise would be limited ethically?


IIRC so far China doesn't seem to be doing anything like Tesla, most things I've seen were testing in a closed zone with an engineer, Waymo-style.

They also don't have any self-driving deaths for now, unless I missed something.

I thus don't think they will have this advantage. They might have the advantage of more chaotic traffic?


So this makes me wonder, why not. What's stopping them from advancing quicker in this particular field if they could cut more corners faster? And could they cut more?


Having an ethical framework that differs from yours is not the same as the absence of ethics.


I mean with a single authority you could get away with more. Not saying they have absence of ethics, just saying they could possibly be less bound to develop their tech faster without having to worry about short term trade-offs as much.

They could in this way become stronger than other nations who are more bound. With FSD they could start testing earlier even if it's not guaranteed to be completely safe, and they could adopt it immediately full scale at the point when it's definitely safer or almost as safe as humans, while in western world it would have to be much more safer than humans driving. Maybe this could mean years of advantage and they can start immediately adopting and optimising their economy.

Maybe ethics is not even the correct word. I'm talking about a trolley problem where they would be allowed to switch tracks while western world maybe not. In this case it might mean more deaths up-front, but fewer later with optimised economy.


Telsa will be very transparent when the first round of lawsuits begins and they're facing multi-billion dollar lawsuits.

Toyota lost billions from simply not following accepted safety design practices. Does anyone really think Tesla's self-driving tech is ISO26262 or MISRA compliant?


No it's is IEC61508 compliant.

IEC61508-7 F is giving an example how to come from test hours to a quality and safety statement, in terms of failure in time rates.

With how many, 700.000 morning and evening commutes, tesla is gathering evidence pretty quickly that their vehicles save more lives than they kill due to those rare FSD faults which lead to fatal failures.

Air bags kill people. Always have, always will. Fully misra and 26262 compliant. And air bags save many more people.

Tesla FSD will come, fully validated and fully standard compliant, so will Waymo (I doubt they go with misra).


I need to apologize, it's iec61508-7 Annex D, not F

> This annex provides initial guidelines on the use of a probabilistic approach to determining > software safety integrity for pre-developed software based on operational experience. This > approach is considered particularly appropriate as part of the qualification of operating > systems, library modules, compilers and other system software.

Tesla, given the size of their fleet, can also follow such an approach for their "application" especially the neural nets and the rule sets.


Safety culture is an important topic too, but this article is about something else: technology paths.


> while naming the technology "Full Self Driving" and Elon Musk hyping it up every chance he gets on how it will be Level 5-ready by end of the year.

I am an anti-goverment anti-regulation Libertarian and even I think the government has a very legitimate role to play here in regulating this fraud. This is a bit like Humpty Dumpty where the words do not mean what they mean.


> nothing but a tactic to generate hype

Congratulations, you've figured out Elon Musk.


WOW 1 paragraph of data > 48 page pdf of "look at me i did this first so i know what im doing" . Honestly this is quite dumb , waymo will NEVER i REPEAT NEVER make it to market let alone with any amount of volume ... And it has nothing to autonomy or tech ... just by virtue of having bulky expensive sensor's (which need to be serviced regularly) and power draw of the compute and tech .... other than having a very niche proof of concept like project loon i think waymo will also fold like all of its fellow google x brother-in.


Waymo had one of their employees not properly monitoring their car and they killed someone. I dont believe anyone had been killed by the Tesla development team? Couple of customers did die but that was their fault for not monitoring the car? One was watching harry potter I remember.


You're probably thinking of Uber which killed someone in Arizona, not Waymo.


I worked with a self-driving product team a couple of years ago. Tesla's approach was looked upon with suspicion and often openly mocked.

Tesla's entire self-driving strategy is led by Karpathy, who while brilliant, is an under-experienced researcher with a narrow expertise in large scale 2D CNNs. Tesla (and Comma.ai ?) seem to be the only groups that find Lidar unnecessary for L5 self-driving.

I completely agree with this article. I avoid using cruise control and auto-lane-keep for the same reasons. Either I am in control or I am not. Anything in between is a recipe for distraction, and by proxy, disaster.

On the other hand, I can see Tesla's strategy paying off as a low probability win but no lose situation. If 3d lidar is necessary, then I think Waymo (Alphabet) is closest to self-driving success. If they 'win', I see them licensing their tech to every other manufacturer. In such an outcome, Tesla ends up no worse than all of its competitors. It is a no-lose scenario. However, in the unlikely scenario that 2D videos are sufficient for L5 self-driving, Tesla wins BIG. No one will be even close, especially due to the scale of 2D data that they have gathered.


This matches my experience of how Tesla is viewed in the autonomous car industry. No-one thinks they are a serious contender when it comes to a driverless vehicle (and their business model basically doesn't match with the technology either: every other company is basically going to operate a taxi service because there's far more value to be captured and it can help pay for the hardware making driverless possible)


it's interesting how optimistic long term visions conflict - if you really think autonomous cabs will become widespread then by implication you are also saying that car sales will be drastically lower. Tough to be serious betting on both.

The math of "you'll pay more for the car but the car will go out and make money for you as a robotic Uber" sounds like a pyramid scheme to me; if too many people do it, it can no longer work.


I disagree, largely because you've cherry-picked parts of the overall strategy. The world is obviously not going to go from purely human driven to 100% autonomous overnight. There will be an intermediate phase where initially some few cars will be autonomous and privately owned, and for those owners making use of the car's full lifetime to serve as a taxi would be a no-brainer. It could pay off your (very expensive) Tesla in the matter of a year or two.

After the market for on-demand autonomous taxis is oversaturated, Tesla's plan is actually to stop selling vehicles altogether, or reduce it drastically. Musk has said that FSD vehicles will likely not be for sale after this point, so the "consumer-operated robotaxi" is inherently an intermediate solution.

Vehicle ownership will drastically change in nature after the advent of AVs. Tesla isn't going to be unique to having to adapt to this.


> It could pay off your (very expensive) Tesla in the matter of a year or two.

.. while lota of people unknown to you sit with their butt in your precious expensive car. People who themselves can't (or want to) afford such a car.

Not gonna happen. Nobody will throw that much money behind such an expensive gadget, being happ about their precious toy and then let complete strangers user it most of the time.

Do you rent out your home via Airbnb when you are 2 weeks on vacation? Very few people actually do that.


The entire point of Airbnb was to let people rent out their spare housing capacity - it just became so wildly successful that people used the system to become pseudo-landlords.

My car sits idle 22 out of 24 hours a day. If someone else handled the insurance issues and found drivers willing to pay to use it outside of the times I utilize it, I'd have no qualms about lending it out for an appropriate price. The low end estimate for an Uber driver after expenses and Uber's cut is $8.55/hr. If my car averages half that it's $35k/yr that I don't need to do anything for.


Yes you do. You will need to share your car with strangers. Some morning you will open the door, see an oily stain from a thai takeaway lunch in the driver's seat and the last 3 people who rented the car will all claim it wasn't them.


Have the car drive to a cleaning station between rides.

But really, for $35,000, I would not mind investing in a $300 seat cover.


I could just about believe that this could be a novel kind of finance - someone who couldn't afford to buy a fancy car for themselves could afford it if they offset the cost by renting it out. You accept the butts as part of the cost.


There would have to be a cleaning service after every (n?) ride and ideally some form of (expensive) insurance. Unless there is some high level of surveillance of the driver with a rating system which in itself is a huge drawback.

I agree this whole thing (ie a large percentage giving up ownership) is way more complicated than it sounds.


At that point it will be just another smart investment and not a “shiny toy”.


Ok but then you are not renting out your car that you are really normally using yourself, but are participating in a taxi scheme. That's ok, but let's be honest about it. And why stop at one car? Keep investing, soon enough you'll have 10 cars on the streets raking in money for you. Similar with Airbnb which is more a professionalized hotel business platform than renting out your place while you are on vacation.


Correct. It’s more like a bring-your-own-car ride sharing (which already exists here, SnappCar) but a lot nicer in all aspects.

They estimate covering the cost of the car in 1-2 years, so you won’t be expanding your fleet that fast... there is a lot of discussion about this online, a lot of comments here cover that too. It’s just a step, eventually they won’t sell cars to consumers anymore.


Yeah, exactly. I did a fairly deep dive into what path the car industry is going to take with respect to electric cars (especially what a mass market electric car is going to look like), and it all had a huge asterisk associated with it of "but mass car ownership might stop being a thing before any of this happens", at which point thing change drasticly, because then you have a much smaller market of probably more specialised, expensive cars which are designed for way higher duty cycles than most consumer cars.


I should probably dig a little more into the microeconomics, but for me as a consumer the only difference between an autonomous car and an (human) Uber is price. If we assume that the price of the car stays the same, insurance and fuel still have to be paid and the cab app gets their cut - how much of my ride is really attributable to the cost of the human driver? So how much human labor can be automated away to the benefit of consumers? Not much I think.

Given current hardware prices, it seems like the car itself will be a lot more expensive; so how many trips need to be taken so that the variable cost advantage exceeds the increased fixed cost?

Driving is a ubiquitous human skill and tens of millions are looking for work, I believe it will be difficult to undercut them on price.

Drivers also often do more than merely drive - they help with the luggage, they clean the car, they can talk about their city.

Long-haul trucking is a different story, there is a massive labor shortage and letting big rigs go from one warehouse to another without the need of a driver looks like a huge opportunity.


Significantly more than half of your ride fare on Uber goes to the driver - between 70 and 130%. Human costs are vastly the most expensive part of hailing a ride, and will remain so as long as minimum wage exists.

The same argument for human drivers also applies to human dishwashers and human launderers, and yet, most of us are fine using machines for this purpose. Whether or not this will cause financial disaster for people relying on it for a source of income is irrelevant when replacing jobs with technology.


The statistics, based on https://www.marketwatch.com/story/this-is-how-much-uber-driv...

> Uber drivers typically collect $24.77 per hour

> From that, Uber takes $8.33 in commissions and fees

> Vehicle expenses like gas and maintenance cost drivers about $4.87 per hour

> That leaves drivers with $11.77 per hour

Now, the maintenance and amortization will be higher for autonomous vehicles. Somebody will need to clean these vehicles, scrape the ice off, refuel, monitor them, deal with anomalies and false alarms, etc - all of which is now taken care of by the driver. Autonomous vehicles will likely end up being cheaper, but it's not as clear cut.


At $11.77 per hour, if you have to hire one person to handle all of that for every two vehicles, it's still a huge savings. It would take a very false-alarm-heavy system to require that level of attention.


But what fraction of the money that goes to the driver is spent on depreciation and maintenance?


Here's one breakdown:

https://uberdriverlondon.co.uk/is-it-worth-driving-for-uber-...

I think the interesting number there is 23 000 GBP/year, which is effectively the cost to Uber of having a human driver for one car operating 60 hours per week.

If a computer can drive a car for 60 hours a week, it saves money for Uber if the amortized cost is less than 23 000 GBP/yr.

If a computer can drive a car for more than 60 hours a week, it saves money even if it costs more than 23 000 GBP/yr, although i don't think it's linear - demand is concentrated at a few times of day, so one car on the road 24 hours a day is not equivalent o four cars on the road for six hours each.


In my country, the government's "generous, all things considered" cost of operating a private car* is 45p per mile - which covers not just fuel, but also depreciation, insurance and suchlike.

Meanwhile, a 12-minute, 4 mile Uber journey would cost me £2.43/mile

Of course, the big unknown here is one of driver utilisation: How many miles does the driver drive to pick me up, and how many minutes are they waiting for the next ride to come in?

* Well, it was considered generous the last time I asked a car owner about it. I'm sure a Lamborghini costs more.


Don't forget insurance/liability, gas, security, and cleaning up after passengers, too!


You could incorporate augmented reality into car windows etc to talk about the city too. All of what you mentioned are solvable problems via tech.


The driver is a huge part of the cost of a taxi ride (even with Uber subsidising them unsustainably with investor's money). An autonomous car company can price a journey at about the same as a bus ticket for a point-to-point on-demand journey and still make money hand over fist with their first generation hardware. Even for commuters this could be substantially cheaper than a car.


The running joke at Waymo for a long time was "How many engineers does it take to operate a self driving car?"


> Drivers also often do more than merely drive - they help with the luggage, they clean the car, they can talk about their city.

They commit sexual assault, submit fraudulent reports of rider damages, ...


In poorer nations autonomous cars could take much longer for mass adoption due to different, sometimes harsher road conditions, and lack of reliable road-map data.

In addition, self-driving vehicles would be pricier. For a poor nation, that price tag might be higher than human labor.


yes, making autonomous driving work in central Delhi or Agadir will be much different from a US highway.

Maybe it's too cyberpunk, but I can already see the headline of road pirates capturing autonomous cars and taking them apart for spares. Those sensors, the GPU and computing units are quite valuable.


Your impression of Delhi and Agadir doesn't seem right to me. By the numbers, Delhi has the highest crime-rate in india, but it has a 5 times lower rate of crime than San Francisco does.

And also, self-driving cars will all have enough cameras and be constantly connected to a server that theft will be far less likely for them than regular cars... and regular calls already seem to get by just fine without people stripping them to the bolts.


I meant the kind of traffic they have, I've driven there, it's unique!


Unreported crimes aren't factored into the stats you quote.


That works on both sides of the equation, as someone who lived both in NA and Morocco. There's similar levels of unreported crime.


You seem to think that data being sent to a server will stop physical theft. That's an... Optimistic point of view to say the least.


The Tesla robotic Uber thing can totally work but it's not going to work if everyone with a car now, tries to use it. It'l flatten out to find the break event point.

But that's a normal market effect and the same as Taxis which traditionally (at least in many places) had quotas to stop people not getting enough jobs to fulfill their wage (Taxi 'Tag' or 'Plate' or 'License'); or similar effects on the number of Uber drivers which obviously are not restricted so you have a sort-of imperfect market effect there where it's not worth more drivers joining as they won't get paid enough and some will leave.. or part of why supposedly surge pricing will drive drivers to leave to further away areas to take more jobs, etc.

Not a market expert but just wanted to note that while, in effect, you are right.. it's not really surprising, unexpected or unprecedented and not a pyramid scheme :)


It can work for Uber, because this is Ubers core business. It can work for Tesla, because they have a market share of <0.5%.

Toyota, Volkswagen or General Motors would maybe have less revenue with shared cars utopia.


Frankly, it always sounded like a solution looking for a problem for me. Transportation is like electricity, it’s all about peak demand.

I’m writing this at midnight on a Sunday... there’s no demand for transportation. Come 7-930AM, demand is 1000x more. There’s no self driving taxi that makes sense as car replacement. We already have busses, which are already really cheap and can get cheaper if cars are too expensive to buy.


Current peak demand in cars is limited by roads, and ... always will be. (Hence Musk trying to dig tunnels for more roads.)

Which might be okay if throughput is kept high, and folks can read/eat/sleep while ij traffic.


No, that's a constraint on infrastructure.

A taxi, robot or not, will never be able to meet peak demand... they'll be provisioned to meet 60-80% of peak demand and surge peak pricing. That's the key part of the robot car business model -- these companies want to soak you for using public infrastructure.

The Musk tunnel thing is totally different, it's an extreme version of Washington Beltway congestion pricing that charges you $3-60 to waste less time in express lanes.


If sitting in rush hour becomes even easier (and even more profitable, if you can work while going to a meeting), we'll have more of rush hour sitting.

It's enough if just ~20% of those who participate in traffic are doing so with a "self driving car", if they otherwise would have opted not to do so, we'll have even higher peaks.

Surge pricing might not matter much, because it's in the interest of the fleet operator/owner to get maximal utilization, to get maximum profit. After all Uber does surge pricing to motivate more drivers to come online, but if you have a constant supply (constant number of self-driving cars in the fleet), then you can simply use it to maximize profit. (Or to manage perception, ie. to show that yes there are cars available, sure, it comes at a high price, but they are "available".)

---

> these companies want to soak you for using public infrastructure

Yep, US cities are so burdened with problems (a nasty set of incentives) that public transit is non-viable, which drives even more subsidization of the sprawl, which then breeds these companies that try to "solve it". (I mean Uber/Lyft successfully increased the supply of taxis, which drove prices down. Yaay, great, but this just meant more congestion, more pollution, more sprawl, nothing fundamentally solved.) And of course at this point properly pricing externalities is an insurmountable political challenge, because people are so wedded to their way of life.


Not if you keep the tech. to yourself. Which is why you would do a robotic taxi. You avoid giving out the technology, and so you can charge a premium for your taxi service, while still undercutting traditional taxis.


But price and convenience need to cross some thresholds before I abandon my car.

I think this is a difficult problem as you have surge demand at least for commuting. So we can't get around with 1/10th of cars which are driving all day.


> But price and convenience need to cross some thresholds before I abandon my car.

Ya, but at least for me, convenience is already there for Uber. Price is the only thing that keeps me from abandoning my car for it.


By the time autonomous vehicles are widespread we will be into the age of embodied intelligence and there'll be way cooler shit to build and sell than cars.


Ha, I remember thinking that at Tesla's (very weak, IMO) autonomy day. It seemed like their supposed business model was basically made up on the spot and didn't really make good business sense.


The earnings, much like BTC has, would likely approach just above the cost of fuel and maintenance.

The real winners are the automakers that build the part-time robotaxi investment vehicles (literally), and the consumer who now pays way less for a frighteningly AI-driven future.


Really though how realistic is your business model ... if you wanted a autonomous network to replace Uber Currently 30 million cars on road . you would atleast need 10 million cars assuming each car costs 100k ( optimistically with lidar tech factoring in capex for factories and you dont have major product revisions ) thats 1 trillion in capex ... now you if you had the revenue of apple in 2020 ~ 0.25T$ you could expect to be cashflow positive in 7~8 years (assuming exponential ramp up of production and 0 competition) ... all of these thing need to happen for this taxi service X to succeed ,i think to get all these things right is highly unlikely and i feel Elon knows this aswell so he took the path of simpler tech and trying a software service model which is infinitely more scale-able .


Uber 2019 revenue $14.15 billion, divided by your estimate of 10 million cars is $1415 per car per year. I would suggest you're estimating at least an order of magnitude too high for the car count, since they can more or less operate 24x7.

That would bring the capex down below $100 billion - less than Google's cash on hand. And you'd be starting with the most lucrative cities while staggering it out, and have a very good idea of your payback by the time you've spent serious cash.

Technology and regulatory risks are high for Waymo, but capex is very manageable.


Yes i think i over inflated the number of cars required . https://www.businessofapps.com/data/uber-statistics/#4 going through the data there are about 5 million drivers in Uber in 2019 . i will admit 100B$ in capex is a reasonable assumption But I cant imagine for practical reasons google , apple , Amazon putting a large % of their savings into still unproven tech with regulatory risk. Keep in mind that they would still have to compete with Tesla at somepoint which has a best car platform vs < X car company> in the market. Not to mention they already have close to 1 million cars with all the hardware(their claim, i still feel tesla TPU v2 with TSMC 7nm is a must have) and they are on track to producing 500k cars in US with these capabilities. while no other car company is even trying to integrate < X lidar autonomy startup> in mass production Today . Not to mention that currently the primary hindrance for Tesla is software which can scale much better in comparison to a ground up redesign of a car plant to accommodate lidar based systems .


I don't think $100bn is reasonable - that's a million cars!

There are only 13k taxi medallions in NYC, which is a good point of reference as the limited supply means they are efficiently utilized.

Extrapolating that out to the general US population yields just over 500k taxis, and that's to completely replace every single taxi in the country.


1. You don't need all of that capital in a first year.

2. If you acheive L5 close to flawless system, you'd have investors bursting in through you doors, windows and any other opening in your house. Imagine, if you could do uber without drivers you could conquere the world.


I’m one of those who will still buy my cars for one simple reason, I’m unable to live in other people’s filth. I’ve rented lost of shared electric cars and sometimes you just want to drive it into the sea.


How much value is there really? The gap between paying a driver and using cutting edge tech is super thin at best. Don’t forget that you also need to buy brand new vehicles and compete with Uber/taxis.


> The gap between paying a driver and using cutting edge tech is super thin at best.

The analogy is that the electric bulb did not replace candle as such but created entirely new possibilities that were unimaginable when the bulb was invented. This is nearly true for any invention and not just self driving cars. Advancements in self driving will change not just how people go to office or dates but will also change how goods are hauled, how docks are loaded, how agricultural products are transported, how fruits are picked from trees and how law enforcement deals with patrolling and so on.


You didn't really answer their question...

The candle was a candle and an electric bulb was an electric bulb. One could do things the other was not capable of.

FSD however, is pretty close to what humans do. It's cost that's the difference.

If you want a more apt analogy, this us more like the invention of a household CFL bulb.

It's cheaper, it increases accessibility and quality in some dimensions, but it's not allowing you to do so much that was utterly impossible before.

The comment you replied to wasn't saying FSD enables nothing new but rather that saying stuff like "change how people go to the office" is rather odd when today right now you can pay people for "FSD" and we already did see the shift from it...


Nobody in the autonomous car industry had produced anything yet. Tesla has the best system of those actually in use.

Their arrogance is about as convincing as that of the rocket companies or car companies that didn't think EV were gone happen.

Saying 'in my industry they are joke' is kind of a joke by itself by now.

Also Tesla wants to operate a taxi services too so I don't understand your argument.


> Nobody in the autonomous car industry had produced anything yet. Tesla has the best system of those actually in use.

Waymo is actually operating a driverless paid service in Arizona [0] with point to point pick up and drop off. Does Tesla actually offer this service today?

[0] https://www.wbur.org/hereandnow/2021/01/04/waymo-driverless-...


Operating a incredibly limited test program while burning 100s of millions is not my definition of solving general autonomy.

Maybe if they role out this program to city after city in short intervals and have 10000+ taxi operating I would consider it.

Waymo does some things Tesla doesn't do, Tesla does things Waymo doesn't do. Tesla clearly has no interest in offering a money losing taxi service in one small city. Just as Waymo has no interest in doing all the work to figure out the meaning of obscure signs on roads in China.

The question is, when can I click on an app, get a car and drive basically anywhere. That is 'the solution'.


> driverless paid service

Which works in one small limited area of the earth because of high-def maps, which is not what you would normally call "scalable".


At some point there was also a car company that only sold a single electrical car model, it was pretty expensive and they didn't make that many cars per year, which is not what you would normally call "scalable."

Back to the original goalposts, as far as I know, Waymo is the only company that at this point, does offer a commercial service that's fully driverless. As you point out, they don't cover all of Earth, but neither do any of their competitors. Even Uber and Lyft, with human drivers, don't support all pick up and drop off locations, they only work in certain service areas.


Yeah, that whole car's tech didn't rely on expensive highdef maps.

> fully driverless

Don't they have safety drivers that closely monitor through wireless connection and often take over? While it might look driverless, it is not technically. This is also another reason why it doesn't scale.


Right, because who would digitalize the entire world and take pictures of every house and store front - that'd be insane!


Tesla (and Comma.ai ?) seem to be the only groups that find Lidar unnecessary for L5 self-driving.

There is another group who find Lidar unnecessary for driving: humans.


I own a model 3. Often it says "the camera is blinded" (e.g. when the sun is shining at it), often there's rain on the lens... So I'd argue Tesla's cameras are much worse for driving under all conditions vs. the human eye. I'm not sure there's enough redundancy for depth perception either. I tend to agree with the argument that humans prove it's possible to drive with just visual sensors but Tesla's sensors, for better or worse, are not equivalent.

Then there's the part about the brain being what gets the sensor data and interprets it. I doubt Tesla's computer has all the capabilities of a human brain.

Driving behind a boat trailer, it's clear to me it's a boat, but Tesla's computer is confused. Bicycles can be a little iffy as well. In general the way the car perceives the world seems subpar to how a human does, by far.

I'm a fan of the car, I'm mostly a fan of Tesla/Elon, but I don't think these cars with this technology are ever gonna be truly autonomous so they can drive with no human intervention... IMHO.


> Often it says "the camera is blinded"

It could very well theoretically be a software issuse.


The human driver has a number of options to adjust their “cameras” (eyes) that the Tesla doesn’t. E.g., flipping down the sun shield, adjusting head position to reduce glare, or even putting on a pair of polarized sunglasses.

While there are technical solutions possible with more hardware, it’s not clear that Tesla could correct for some of these things in just software...


If these are uncommon enough, a Tesla could just stop on the side of the road and alert the driver. There will be a driver inside anyway. And there is a huge difference between "no L5, you have to keep an eye of the road every 10 seconds all the time", and "drives itself, but once every 2 hours, or in extreme weather conditions safely warns you and makes you take over and force you to drive yourself". This could very well be a great usecase, even if it won't drive itself for those 1% of cases.


That is not L5. That’s L3.

And if the car camera is blinded by the sun, that raises questions about whether the car could safely pull over to the side of the road. I suppose with advanced simulation the car could perhaps predict glare from the sun and pull over ahead of time (“it’s approaching sundown and we’re driving west, you need to take over in 5 minutes”)


Sure. But that’s a moot point if it turns out you need to basically replicate large parts of the human brain in order to drive with cameras alone.


But those are needed for L5 anyway, Lidar or noLidar. The big issue is the world model, what's the prior when uncertainty rises, how to spot really out of ordinary stuff, and how to spot really ordinary-looking but dangerous stuff.


I'm fairly certain L5 automation won't need anywhere near the processing capacity of the visual cortex.


The visual cortex part is what current deep learning stuff handles pretty well. What's not so obvious is how to generate a model that can manage the "usual problems" (unclear road signs, debris on road, pedestrians/cyclists, harm minimization in accidents) without "freaking out" (phantom braking).

For example if the car in front did not even slow down while going through some obstacle (the proverbial plastic bag), then it's very likely that we don't have to either. But if something looks like a drunk guy trying to cross the highway, then it's caution time.

That said, without the actual data/studies/access on what Tesla does and how, it's hard to say how good they are at this. (Sure, in theory everything needed to drive a car well can be inferred from regular human vision visuals. But that means their model has to be able to do counterfactual reasoning - I see this thing as a wall, but other cars go through it, so it's not a wall, so maybe it's a marking. Otherwise it'll very hard to just figure out things frame-by-frame and simply by raw object detection.)


The visual cortex does a lot more. It can predict future states, it does things like attention, it can classify states that are abnormal and direct attention to it, it can make analogies between different objects and use those to decide what is normal and what isn't. It's a lot more than just object recognition - it can also generate object classes on-the-fly and use them in the future.

We are very, very far from that. And having such a developed visual processing system is a prerequisite for feeding an executive processing system that can do counterfactual reasoning.


It's not just about the processing. To match human vision, cameras would need to be able to swivel and to adjust their focus extremely quickly. Animal eyes are a lot more than static cameras.


Yep. It's clear that lidar isn't literally needed. That said, I think the argument just shifts to "the cameras on a Tesla aren't good enough either."

That's also just a shot from the hip, weasely sort of argument but it's harder to dismiss outright.


It's fairly easy to dismiss that argument; the cameras would not be difficult to replace if they are the limiting factor. Cameras are cheap.


The human eye has capabilites that you'd need a camera more expensive than a LiDAR by a good bit too replicate. No rolling shutter, very low delay, continuous signal processing instead of per-frame, maximum resolution around ~70MP eq. in the center, f/2.8 aperture at a full-frame image size, servo driven active rangefinding, continuous cleaning, etc...

A camera with those features would be around 3000-15000$, and you'd need two.

Also, the human brain can use focus information and stereo phase to deduce 3D structure, which is another ace up its sleeve.

It might very well be the case that reproducing this is more expensive in training, processing and material than LiDAR. In fact, I'd say it's very likely - we're talking loops that need to update hundreds of times a second and data rates in the gigabits, which our brains deal with by doing a lot of processing inside the eye and along the way.


> Also, the human brain can use focus information and stereo phase to deduce 3D structur

Current AI tech can do this too, from multiple images. Karpathy even talked about how they do use that in Teslas.


The human eye does it in a very different way. It combines both stereo, parallax, eye convergence, and focus.

Tesla only does the first.


You don't need human eye equivalents. Human eyes are probably not the ideal cameras for driving.


This may be the case, you might not have to reproduce everything.

But just to give an example that is very relevant to self-driving, getting a camera with similar low-light video performance as the human eye costs over 2000$.


My dash cam cost under $200 and seems to have extremely good low light performance.


In well lit areas, sure. Let's see if your dascham can see an unlit deer at night from a moving, vibrating car (it can't).


Try (don’t) to drive by looking only at the display of it than..


Yes, but in fact, imagine how much better at driving we'd be if we had LIDAR lasers shooting out of our heads.


autonomous driving needs to be safer than humans to be adopted (some say way safer).

So this argument of humans don't need it, is sort of, well yes. We're trying to do better than humans.


Humans drive great when they're not distracted, with just two 2d cameras. But we get distracted a lot.

So an advanced enough AI could easily be better at humans than driving w/o LIDAR, simply by driving as well as humans when not distracted, and not getting distracted.


Agreed. Once we're past "the singularity" a computer with human hardware can drive as good as humans. But our eyes are not cameras and the singularity hasn't happened last time I checked. Btw, we also need some additional hardware to do a good job, headlights, defogger, wiper blades...


An advanced enough AI, and we are really far from it. Current AIs are really good at isolated problem solving, but driving is not in that domain.


wow, getting downvoted for stating this fact. I can't believe the anti-Musk bias here on HN. You would think HN would actually appreciate one of the more innovative entrepreneurs on the planet.


I think it might be due to a form of jealousy. Sad to see HN so unenthusiastic about a guy who is probably the most amazing entrepreneur of our time.

It's not that he hasn't invited a little bit of controversy (eg. comments about the diver, covid, etc.) but I think those are inconsequential when it comes to HN's attitude toward him.


What are you all talking about, the original top-level post has a reasonably presented assessment of Tesla’s strategy and the ramifications of such a strategy. Sure, that assessment might be flawed, but I don’t see how any of that is “anti-Musk bias” or “jealousy”.

The comment regarding humans not needing LiDAR misses the point, which is that either every other self-driving car group is incorrectly deploying an expensive hardware solution or there are valid technical reasons why that might be a better path to L5. Tesla trying to overcome the limitations other companies are deploying LiDAR for is a technical risk they are taking on, and, as the top level post noted, might not be the worst bet given the downside which is that they’ll just have to license the tech.


The comment regarding humans not needing LiDAR misses the point, which is that either every other self-driving car group is incorrectly deploying an expensive hardware solution or there are valid technical reasons why that might be a better path to L5.

Implementing L5 without Lidar must be possible. Humans are absolute proof of that. That's not to say Lidar isn't useful, or cost effective given the complexity of the problem, or even just a really good idea that will make autonomous cars better drivers than people. It might be all of those things. All my comment says is that Lidar is not a requirement for driving, and we have a very obvious proof of that.

Anyone saying that Lidar is necessary for driving is ignoring the hundreds of millions of examples of driving with just a pair of 2d cameras and a highly specialised organic supercomputer.

EDIT: The point another poster made about weather and bright sunshine is a reasonable one - humans can't drive when they can't see either, so Lidar would potentially make an autonomous car better at driving than people. That would be an example of how Lidar is useful, but still not necessary.


> Humans are absolute proof of that

Humans also have a organic supercomputer in their brain. Variety of sensors is probably the best approach. I've grown up and seen technology prices on all sorts of weird technology fall (3d printers, sensors, midi controllers). At some point, we will basically give up on perfection on LIDAR and just do cheap multi-LIDAR arrays where the error rate is overcome by just having multiple units.


One could also argue that humans are not proof of anything, as humans frequently make mistakes and conduct dangerous maneuvers whilst driving. There are many road accidents occurring right now, today, around the world, as a result of humans being insufficient drivers.

We can say that something better than human-level driving ought to be possible without LIDAR, though; take the perception of a human (cameras) and replace the brain with a computer that can make better calculations much faster than a human can.


> every other self-driving car group is incorrectly deploying an expensive hardware solution

Given that Tesla are the only company to have put something approaching autonomy into the hands of customers, that does seem to be the case.

What happens next in this thread? Someone points out that Tesla's solution is still far from perfect and claims that they're being highly irresponsible by having shipped it. Then I'd ask how many accidents have happened as a result of FSD beta having shipped to some customers already and they assert that Tesla isn't transparent with the data so we don't know. I'd point out that if we don't know of a single Tesla FSD accident so far, it can't be that irresponsible to have put it in the hands of a few customers and that it's been rapidly improving in the weeks since. It's the same thread, repeated over and over on here, Reddit, etc.

Tesla was the company that made electric vehicles successful. To your point about groups taking different approaches, I think Musk & Tesla do take approaches that will differ from others, and succeed partly because of their approach being better in some important way. I think they had a bit of an unfair advantage when it came to autonomy. They needed hardware that could be shipped on vehicles as soon as possible, which meant that it needed to be cost efficient & energy efficient. LIDAR's energy efficiency doesn't get talked about much, and I don't mean just the LIDAR itself but also the subsequent processing power required to make sense of the resulting point cloud. Each Waymo vehicle is an expensive tech demo. It's a very good tech demo, but over the years it's become evident that they're very hesitant to scale it up right now. I'm sure they'll eventually get there, but they are not a company focused on getting this done ASAP. Tesla are getting autonomy done with urgency.


> Given that Tesla are the only company to have put something approaching autonomy into the hands of customers, that does seem to be the case.

Well, that’s Waymo (in Phoenix) which has put out anything approaching autonomy, not Tesla. Tesla requires a driver.


Yes, Waymo seem good in Phoenix for the people who have access to it. Looking forward to their expansion.


yes, but Waymo uses high resolution mapping for that particular city. That does not scale.


It does not scale because Elon Musk said so? That HD mapping doesn't scale is the biggest trope in autonomous driving that I see Tesla fans repeating everywhere. Google, for example, has large scale mapping experience and their street view cars have been carrying Lidar sensors for years now. They've already mapped a bunch of cities in the US.

Tesla itself uses HD maps for things like intersections, traffic lights and so on. It's just not at the level of detail others use it for.


Your argument seems to make sense. If it's the case, though, why are Waymo only in a small part of Phoenix?


They’re rolling out a revolutionary new technology that people aren’t used to or don’t trust yet. Phoenix provides that easy testing ground in terms of good roads and great weather year round. Last thing Waymo wants is to tackle too many things at once and cause a disaster. Remember how a single death effectively ended Uber’s self driving efforts?

They’re also offering a commercial service. Which means figuring out operations, customer service, emergency protocols, working with local administration and so on. They’ve said a larger goal in Phoenix is to figure out how to perform and scale operations. The local regulations there are much more favorable. Until recently, Phoenix was one of the only (probably only) places where you could operate a robotaxi program and charge for it. California just recently approved it, so SF is where they’ll likely go next as they are already setting up everything there.

Ultimately, mapping is the not the constraint for them to go to new places. It’s the operations and regulations that they need to figure out every time.


They've been running Waymo One in Phoenix for more than 2 years already, and driving in Phoenix for about 5 years. It just seems to be taking a bit more time than I would expect, but I suppose it could be that they're trying to get it to the point where it can be extremely efficient to scale.


Keep in mind that at least one of those other self-driving car groups has demoed camera only on the streets of Jerusalem: https://www.reuters.com/article/us-tech-ces-intel/intels-mob...

They also have lidar solutions, but they certainly haven't ruled out camera only.


The achievement of the Tesla as an electric car are inspiring and incredible.

The issue people take is the reckless approach to selling nonexistent or immature tech that by its nature puts human life at risk. It is possible to both respect and disagree.


I kind of agree. It may be a weird kind of jealousy, though no one here would admit it or even recognize it for what it is.

The arguments used against Tesla are in fact legitimate if we’re talking about a regular company. But Musk has proven to do the “impossible” multiple times and has been right on so many levels, so at this point you have to question whether he’s catastrophically wrong about this, or perhaps the others are about lidar.

Either way, the confidence that Tesla is wrong is way too high in my opinion.


It's the Musk pattern. For every single company he's started, people are always 100% confident it won't work, and also (somehow) a bit offended that he's trying.


On the other hand, there's a bit of an obsession to equate the success of a company with the respect the founder is supposed to receive. There's plenty of examples of people who became 'successful' and were still not great humans. Larry Ellison and the last president come to mind. Steve Jobs comes to mind.

People need to stop fantasizing about 'Elon' being some great guy because they like what he achieved. It's almost like the technorati need their own version of celebrities to fawn over.


Musk's immature and petulant behaviour is an embarrassment to entrepreneurs.

I for one am not remotely jealous of him but disappointed that others think his behaviour should be emulated or worshipped just because he has been successful.


I am disappointed that people remove humanity from people who are successful. We all make mistakes and any prominent figure that doesn't is presenting a facade.

I'd rather see someones imperfections and know where I stand than have another generational #metoo


Common, the rest of us are not getting nearly that level of benefit of doubt. The behavior that gets excused for successful or rich people is regularly punished in normal or unsuccessful people.

Both unofficially and when law is broken.


Are you trying to imply that's Elon Musk's fault, or that if others can't have it, he shouldn't either?


I think that such level of benefit of doubt is irrational and disproportionate. It prevents people to talk and think rationally/objectively about what was said and done, because they twist logic into pretzels just to make it all feel good.


Can I be unenthusiastic about a billionaire who reopened his factory in the middle of a pandemic, where the rate of injuries of his factories is one of the highest of the entire country, where the turnover is ridiculously high because of how he squeezes everything out of young engineers, where he tests out """self driving""" on public roads where you or I could be working? Or maybe I could be unenthusiastic about the strike breaking, the utter disrespect he has for people who criticise his PR attempts, going as far as to call them pedophiles?

Or maybe unenthusiastic about the fact that all he has is a vision, but has never built anything by himself. When you get kicked out because _Peter Thiel_ is more tolerable than you, that says a tiny bit about his character.

It's not "a little controversy". It's being a sack of shit of a human being.


If Musk is an innovator, then I guess Ian M. Banks was literally John Galt.


Elon musk is everything wrong with the tech industry as a whole, overpriced, over engineered, and QA lacking products, hes arrogant when he shouldnt be, and his "disruptions" are mostly stupid, what accomplishments he does have are largely the result of engineers he pays that he takes credit for

the only thing he is good at is marketing himself


Sorry, you are clueless. Yes, some aspects of his personality are obnoxious, but he’s built multiple companies that have delivered real innovation, his customers are largely happy, and I’m sure his engineers don’t care how much credit he takes.


His companies have few innovations, at best they are iterative, and often actively worse. and he did none of it himself, he deserves zero credit

people think he is innovative because he sold decades old tech to people who didnt pay attention before, because he is a glorified marketing exec and hes not even good at that

consumer satisfaction is a poor way to measure quality


You just like trolling Elon fans, don’t you?


So you don't use adaptive cruise control or lane centering features which are available in almost all new cars today. Clearly you are in the tiny minority, because almost every car manufacturer is adding basic AP like functionality even in base versions of cars. A lot of users feel this helps with long drives to which I agree


> Clearly you are in the tiny minority, because almost every car manufacturer is adding basic AP like functionality even in base versions of cars

the average car in germany is 12 years old. from this perspective new-ish cars are a minority .


9,6 years [1]

Finland has 12,2 because all the 10 year old German cars get imported here :D

[1] https://www.aut.fi/en/frontpage_vanha/statistics/internation...


You can see in all public talks how Karpathy focuses almost exclusively on image recognition. Yeah, that’s great and important, but that’s a step one in making any autonomous vehicle. And the easiest one.


>And the easiest one

Some of Telas biggest problems seem to stem from not recognising stationary trucks in their path.


He's also spoken at length about deriving point clouds from images alone at the Teslas autonomy day for investors.

Can't expect a commercial company operating at the bleeding edge to host a "IP giveaway" day as researchers do.


Elon talked about LIDAR at the autonomy investor day. Basically he said that the Dragon spacecraft has LIDAR and he spearheaded that effort. He also said that LIDAR and cameras both use the same wavelength, if you are going to add a sensor it makes more sense to add one which is outside of the visible wavelength.

Even though I know nothing about driverless cars, the reasoning from Elon makes sense.

https://www.youtube.com/watch?v=HM23sjhtk4Q


Whilst Lidar and cameras may have similar wavelengths, they have very different information outputs. Cameras give you a 2D colored image from which you essentially guesstimate the distance of objects. Lidar gives you a 2D depth map with much higher confidence in the data that comes out of the stream.

Elon has a high incentive to talk down Lidar in cars: He/Tesla placed the bet initially on cameras because Lidar is/was too expensive, and promised these cars will be able to drive themselves ("regulatory approval pending"). Retrofitting Lidar is not an option (price, sensor&cable placement), so a lot of his (and Tesla's) promises ride on cameras working out and Lidar not being necessary. A lot of Tesla's (Car's and Car company's) value rely on cameras working out and Lidar not being necessary.


From what I understand Teslas have a front facing radar giving them accurate range measurement forward and they incorporate that in and use nets to do SLAM for other angels where the radar isn't.

But i'm not an expert in this so take it with a grain of salt please :)


Surely you use power steering, drive-by-wire pedals, and anti-lock breaks though, right? Or even an automatic transmission? Is it so different to use lane assist or cruise control?


The key difference being they are reliable 100% of the time.

Cruise control frequently fails and lane assist that explicitly the one that take steering control away from you has put me in dangerous situations more often than it has saved me from them.

Lane assist as an alarm to alert you is great though.


I agree on lane assist, but I'm confused, how does cruise control fail?

I just set it to a speed on the highway... and the speed stays there until I brake.

Are you talking about adaptive cruise control (ACC) to follow the speed of the car in front?


I also don't use cruise control. Not because I don't trust it, but I don't trust myself to react as quickly with it on. If my "gas foot" rests, it will take longer for it to hit the brake if needed, than if I engage continuously while driving. No scientific proof for this, except my own perception of my own attention.


That's really interesting.

Funny, I don't feel the same -- my foot rests below the brake (on the floor) instead of on the gas pedal so the time to move it to the brake feels roughly the same, and I'm paying 100% attention while driving since I'm still steering and constantly monitoring distance to the cars in front of me. So I don't notice any less attention -- it's just a rest for the muscles in my right foot.

But really I'd definitely go with your own perception here -- if you feel like you're paying less attention then it's a good thing you're not using it! Everyone's attention habits and patterns are different. And I'm glad to know people vary in this.


When I have tried to find a place for my foot when driving cruise control, it has happened that I got the toes of my foot stuck under the brake pedal. It was like I was resting the foot partly below it so when I moved it up, it was in the way. Can differ between cars surely. But in the cars I have driven with cruise control, I have not found a comfortable and at the same time "alert" location to put my foot.


I have the same worry when using cruise control. What I've trained myself to do is to cover the accelerator pedal when overtaking other cars or in any situation where I might have to brake. That way the car is still maintaining speed for me but I'm ready to intervene, to get back the safety margin. With adaptive cruise it may even increase the safety as both me and the cruise control can both brake.


Nitpick: 99.(many 9s)% of the time.

I had the power steering drop out going into a turn (fuel pump failure IIRC), and it could have easily caused an accident (I physically couldn't turn the wheel enough to complete the turn).


If power steering fails in speed, it doesn't really matter - the power steering does very little when the car has some velocity. If you can't turn the wheel at speed, then there's something else than power steering pump failing (the steering must have been locked by something else).

Or you have very serious muscle weakness, at which point I'm not sure if driving a car is a smart option in any case.


I expect the total force required to turn depends also on the vehicle's weight, degree of turn, etc. My manual, 90's Japanese sedan (approximately 3,000 lbs iirc) stalled on me while I was exiting a freeway at ~30mph, along a round, ~270° turn. My first instinct was to try put the car back into gear, but before I could effectively do that, I realized I needed both hands on the wheel just to wrestle the car through the turn. I could have easily caused an accident. The disorientation of losing power steering while turning, and having to move my right hand from shifter to wheel made this a serious situation. If my car were heavier, or the turn were tighter, or if I reacted slower, (all of which are more likely than a very serious muscle weakness), it could have been much worse.


None of them have to deal with fat tail risks. (to quote nissim ntaleb).

I do not know about dbw pedals but power steering and anti-lock breaking operate 100% deterministically and the only true unknown is controlled by the driver.


There's still mechanical failure to deal with.


I am uniquely equipped to answer that question. Did an undergrad in mech E and worked as an automobile engineer for while before pivoting to computer vision and ML.

The safety standard that mechanical parts need to abide by are orders of magnitude higher than any Vision/Software product. Additionally, mechanical failures are rarely catastrophic. The part will most likely alert you hours/days in advance that it has begun to fail. Additionally, when catastrophic failure does happen, it often leads to the car stopping and not ramming through a busy intersection.

The problem with ML algorithms, is that they fail without warning and without intuition. It is incredibly hard to design against catastrophe. ML algos are also great at overfitting, so they can often learn to beat narrow tests while being inept at dealing with the exact scenario the test was supposed to evaluate it for. (I know, I know, "Don't tune on the test set!"... but human factors make it near impossible to avoid some level of it)

We are in dire need of a regulatory/evaluation body for AI that consists of top-tier ML researchers. Now that can be a govt. body or a 3rd party contractor that works closely with the Govt. But, we need to start laying the groundwork and debating around it now. So when we do need it, it is ready to be deployed.


I have a similar background, elec-mech undergrad, masters in robotics doing SLAM, spent time doing electrical design for industrial usage, then software doing computer vision, now working on drone autopilots.

I 100% agree on needing to design and regulate the design of safety critical CV (and particularly ML based CV) algorithms so that, if nothing else, the failure mechanisms are quantifiable and limited in their impact.


I agree with your sentiments on driver assist. I'm not comfortable with the ambiguity of responsibility. But that's not really the key point, it's just how we feel. These features act as premium goodies, like old cruise control or whatever. People buy them, use them. They generate data. User-developer feedback loops. That's already a fact. The debate is about the importance of this .

The core debate (of this article: waymo_v_tesla), is about technology paths. Do you start with driver assist and move up the Ls gradually as units are produced, KMs are accumulated and dollars come in? Alternatively, do you get as close as possible with a prototype and then closer with the next one..? worry about financial/product viability once you have a working archetype.

This dovetails with the lidar question.

Ultimately, I find the disagreement fascinating. There's probably value in having these two throwing their weight into opposing camps. IDK which approach will win in self driving. Every technology is its own example.

That said, I like Elon's approach as a generality. To me, it's about thinking of technology and invention as an industrial/societal effort rather than a lab effort... even if reduced to a lab at a company. Learning curves exist. Feedback loops exist. A technology with products and customer and usage progresses, often.

You could not have "produced" Moore's law the Waymo way, most likely.


I think we can reduce this argument further.

Musk thinks that is it acceptable to ship features when there is a safety risk, Waymo doesn't

There is also a bit of ego in there as well. Tesla have made a business decision not to use lidar. Its nothing to do with the maturity of the tech. Tesla have spent Billions of dollars trying to re-create a lidar like depth sensor using radar and monocular slam.

Its just not going to cut it for a number of years. CNNs are just not fast or robust enough to replace a lidar. Especially with the quality and placement of the sensors in a tesla.


That is in fact, the top comment. Respectfully, I don't think that this is a reduction though. It's a sidestep. I don't mean that the safety (and other ethical) concerns aren't real, and I accept that there are legitimate criticisms of Tesla/Elon... both philosophically and practically.

That is not the actual debate in this article though. It's not the debate between Tesla and Waymo's CEOs. That debate is about the path to a technological goal: actually autonomous vehicles.

The lidar question is closer to the core of the debate, but IMO it's a detail. Here the disagreement might have gotten unproductive. As you say, egos (famously big & nerdy ones) have have been dragged into it.

>> Tesla have made a business decision not to use lidar. It's nothing to do with the maturity of the tech.

I agree with the first and disagree with the second statement. This, IMO, is actually at the core of the debate. In (my assumed version of) Musk's Modus Operandi, technology strategy and business strategy converge much more than in the Waymo/Krafcik MO... They're at opposite ends of the poll.

Business decisions are R&D decisions. Putting things into production is R&D, affects R&D, determines R&D.

In a caricature/model version of the Waymo MO, technology is invented in a research lab. Once it's ready, it can be productized outside of the lab. "Technology" is defined as the core "proof of concept." In the Musk MO, technology is produced in an industry/company. A lab cannot reduce the price of computing by half every 18 months for decades. A lab can't improve the price/weight/performance of batteries to the point of EV viability. It takes an active industry/market to do these things. You need customers. You need commercially scale factories, increasing annual unit production. Etc. Large and increasing numbers of people working on it.

A lot of technology never leaves R&D. Lunar Travel didn't becomes cheaper over time. This confuses people because we expect technology to improve. Doing something people did 50 years ago should be cheap and easy, intuitively. In reality, that momentum is not a guarantee. If no one works on it, it doesn't improve. Software, hardware and such improve at a monstrous pace. But, every year there are way more people working on hardware and software. If the space "industry" had 100X more people working on it compared to 60s, it would have improved to.

I'm not calling Musk's approach superior, but Waymo's is certainly more common. Novelty itself is valuable. It's interesting that we have both approaches at play. n=2, but at least we'll get to see how this plays out.


>However, in the unlikely scenario that 2D videos are sufficient for L5 self-driving, Tesla wins BIG. No one will be even close, especially due to the scale of 2D data that they have gathered.

Potentially Tesla's 2D data lead means they will still be the leader even if the compute resources in the current cars isn't up to the task, and maybe even if they have to add additional sensors like lidar. They can probably retrain their models and still get a lot of out of it.

My personal guess is that full 3D lidar isn't needed, but some additional cameras and other cheap sensors probably are. There's a ton of stuff that costs a fraction of the cost of lidar that might fill in the gaps.


>costs a fraction of the cost of lidar

What makes lidar for use in self-driving that expensive? I have a robot-vacuum with a lidar that wasn't all that expensive.


>What makes lidar for use in self-driving that expensive? I have a robot-vacuum with a lidar that wasn't all that expensive.

Waymo says it cost $75,000 for the lidar in one car a decade ago, and in 2019 they said they got the cost down to $7,500. So at the time Tesla decided to forgo lidar the costs were certainly very high. I'm guessing the range in your vacuum is multiple orders of magnitude less than a car.

Waymo must have been betting costs would drastically fall over time. No idea if it's a lot less than 7500 now.


Robot vacuums typically don't use Lidar. They user a laser pointer and an offset camera. Distance is estimated by the pixel offset between the laser dot, and the center of the image.

Object very far away: Laser dot in center of image. Object very close: Laser dot at maximal offset from center of image.


I'm pretty sure mobileye has demonstrated their FSD solution without lidar as well. It seems like they are treating lidar as optional.

They have many more cameras than Tesla to support the camera only approach, though.


At some point, you have enough cameras that you are probably (implicitly or explicitly) doing 3D reconstructions that Lidar does anyways. At that point, why not just use LIDAR ?

To save some $2-3k? That feels 'penny wise pound foolish'. This is a premium feature and LIDAR keeps getting cheaper.


I think you hit the nail on the head. There is no such thing as "sort of self driving". It's either completely autonomous or I'm going to have to be paying some attention and I might as well be doing all of the driving at that point.


This is how I feel. My brain is pretty damn autonomous while driving. If I have to still pay attention to my “self-driving” car, that seems both more dangerous and taxing than just letting my brain do all the driving.


I treat Tesla's Autopilot basically like it's a very new driver or sonebody learning to drive. You're constantly paying attention to what it's doing but you can rely on it to keep its lane and distance from other cars. I appreciate it immensely on longer drives since I don't get as tired / exhausted from constantly having to be giving micro-inputs. It allows me to drive for hours and keep my head feeling fresh for much longer than when I drive other cars with no asststance features.


I'm the same, for some reason I get extremely fatigued when I'm driving a "normal" car. Something about having to adjust the car just a little bit ALL THE TIME makes my brain not be able to relax.

With a the limited lane assist in my EV (basically it stays in lane and changes lanes if I turn on the signal), I can drive longer distances and stay alert without getting tired. All I need to do is apply slight torque to the steering wheel and mind the other vehicles on the road.


How about automatic vs manual, or simple cruise control?


Be aware that Comma.ai doesn't bill itself as self-driving, it is just driver assist.


I think your derogatory comments about Karparthy are ludicrous. Considering the scale of deployment, complexity of the application, and the life-and-death stakes, what he and his team have achieved so far is probably the greatest engineering achievement in the history of AI.


The only times I have ever had accidents or close calls in a car (where another driver was not at fault for e.g. reckless driving behavior) has been with cruise control or with auto keep lane - the former I spun out, the later it tried to drive me off of a highway at 70 mph repeatedly.

The sorts of assistive technologies that would actually help are not really being worked on, assistive has been misunderstood as meaning "allowing the driver to be lazier".

Sure - if Tesla pulls it off, it's great. But it's a gamble with more than just Teslas stock, that is what makes Elon an unethical sociopath here...

Waymo's approach is assuredly safer, there is still the possibility of technical glitches but it the approach doesn't create more problems with humans than it solves.

Yes, sure, humans can be dangerous, but if that's the problem then it's safest to just not get on the road period, kinda negates the whole point of an FSD though...


> I avoid using cruise control and auto-lane-keep for the same reasons. Either I am in control or I am not.

Do you also turn off features like ABS, ESC, traction control etc?


[flagged]


That is an entirely unfaithful reading of what I said.

Convenience features that always work are good. Cruise control and lane keep assist frequently fail, and take control away from you in a way that disallows you from maneuvering to safety.

I can't think of a single instance where Automatic transmissions or ABS misbehave. If my transmission or braking failed in 1/100 instances, then I'd stop using them too.


I only fly in aircraft built more than twenty years ago, avoiding any fly-by-wire death traps.


"I watched a few hours of early unedited footage posted by Tesla owners who received the new software. The software made a number of mistakes, including two incidents where a Tesla seemed to be on the verge of colliding with another vehicle before the driver intervened."

What, Tesla still can't detect big, obvious obstacles reliably? That's pathetic. A half-dozen cases of running into stationary obstacles at full speed should have taught them something.

Running into obstacles at full speed, with no braking, is quite rare for human drivers. Usually, there's braking, but too late. Mercedes once did a study showing that over half of accidents would not have occurred if braking started about 100ms earlier.

Tesla's approach means the driver has nothing to do until something goes wrong. Then they have to react in under a second. That just doesn't work. That's been known for decades.

Great video on cockpit automation: "Children of the Magenta" (1997).[1] It's an American Airlines chief pilot talking to his pilots about automation-induced accidents. The aviation industry started dealing with over-reliance on imperfect automation a long time ago.

[1] https://vimeo.com/159496346


Instead of trusting this anecdote, why don’t you jump over to YouTube and search for “Tesla FSD” to see what it really looks like? Lots of very instructive videos available from people on the beta program.


Exactly. There's real world video footage showing the vehicle behavior that includes Tesla animations showing the vehicle's current understanding of the environment.

It's rather cheap to take trivial shots at Tesla (no matter how bad their product might be) while comparing it to 23 year old understanding of human-machine automation in an entirely different context such as aviation.

Quite unfortunate you're downvoted, but literally asking people to watch some videos of real world usage [1] to make up their minds appears to be too much.

[1] https://www.youtube.com/results?search_query=tesla+fsd+beta&...


Can the footage be trusted? From my understanding Tesla has only rolled this out to a small number of users, browsing the YT search results you’ve provided I see that almost all the results are Tesla super fan accounts.

I don’t think it’s unreasonable to disregard this until you see an objective assessment in the field.


There is a NDA. One tester has said that it [at least in part] says "you can't livestream drives"[0], and while otherwise NDA details are sparse, given the many close calls, bad performance situations, and multiple periods of rapid uploads [1,2], chances are they aren't screening videos.

0: https://youtu.be/AkexMo_jdcQ?t=346

1: https://i.judge.sh/enormous/Flim/chrome_RIGtEw3oE3.png (Chuck Cook)

2: https://i.judge.sh/sturdy/Derpy/chrome_NZrYtjHAeP.png (Tesla Owners Silicon Valley)


It's fair to question the motivations of the FSD beta testers. They were obviously hand picked to ensure maximum safety, adequate public profile, having committed large sums of money to prove their allegiance, as well as being existing fans of the company. So they're likely to be biased.

But that's a long way from alleging that they've doctored their videos, or that Tesla itself has censored them. There are many many instances of footage showing almost crashes that were only avoided because the driver intervened. If you were the censor, would you allow those?

Compare that to Waymo or Cruise or anyone else. They are obviously heavily censoring and have no independent owners recording footage, nor sharing it like this. It's a huge step up in much needed transparency.


> Compare that to Waymo or Cruise or anyone else. They are obviously heavily censoring and have no independent owners recording footage, nor sharing it like this. It's a huge step up in much needed transparency.

You clearly haven't searched Youtube. https://www.youtube.com/c/JJRicksStudios is one such channel which frequently posts videos of his Waymo rides. There are others if you search, not so much volume as Tesla though.


What you need to know about the performance of various autonomous vehicles is not observable through a bunch of youtube videos. The improvements are happening at the statistical margins, that's where 99.9% of the work is. Youtube videos won't do much to show the difference between Google's car's performance 7 years ago and today. You need granular statistical data to make any kind of informed judgement.

That Tesla is just now getting to the place where they can get a few miles in moderately complex traffic without fucking up, while still having many other videos of failures and close calls should tell you that Tesla has a long way to go. There are tens of thousands of driving scenarios that all need special attention.


Yes. Check out this one.[1]

Watch the display for what it sees and isn't seeing. Awareness of oncoming traffic is almost nonexistent. Except when the FedEx truck is reflected in some windows; then it detects the reflection as a vehicle. Although the streets are almost empty, the driver takes over frequently.

This is way below what Waymo and Cruise can do.

[1] https://youtu.be/6G1Z2J3WUSg?t=231


I always thought that it should be just as easy and safe to drive by looking at only the tesla screen and driving based on that before self driving can get anywhere close to complete. If the vehicles sensors can't pick things up , then the vehicle can't react to them.


Look closer. The oncoming traffic is detected, just the visualization on the dashboard screen in a bright yellow is difficult to mark out in the video.


4:10, the delivery truck isn't seen until its mostly pulled out. On the right at the traffic lights, It can't see humans

The traffic passing at the intersection is missed most of the time.

5:22, it drives onto the wrong side of the road into stationary traffic.

That is amateur stuff right there.


I did look closer. If you bring the image into a Photoshop-like program and adjust the levels, you can see a yellow square where one oncoming car is. You can also see yellow squares elsewhere. It's a compression artifact.


that is terrifying. the next 60 seconds after that is crazy bad.

'that is not ideal'...

We need to legislate this before more people die


Nothing scary happens in the next 60 seconds. The car would obviously not go into the opposite lane if there was traffic, and given a few more seconds it would return to the right shoulder. The driver is jumpy and taking over before it has time to correct anything.


what?!

first I dont think that's so obvious that the car would obviously not go into the opposite lane, it doesnt even see all the cars as OP pointed out and Teslas have rammed solid objects stationary at full speed.

Not being able to 'see' standard lane markings is such a basic requirement!

The car shouldn't have to correct.

if this was a student driver during a test I'm guessing they would fail - been a while I don't know the points but there are multiple really bad mistakes.


Teslas have been on the road for years and there is zero occurrences of one turning into live traffic. You’re underestimating the state of its self-driving abilities, there are no visible lane markings at that point in the video, it’s a error you might see a human make.


"It decides to go in the opposite lane. That is not ideal."

Twice in a row. At two different left turns. Yeah, not ideal.


Is this really all that important? There are 3 other cars ahead of the Tesla at this red light intersection. If you're able to react within tens or hundreds of milliseconds to a changing environment, it would actually make sense to reduce compute to save power for when the vehicle is in motion and not surrounded by other vehicles at a stop light.

Edit: It seemed to handle that intersection just fine, but a couple of turns down it ends up on the wrong side of the road, which is quite bad. That said, what proof do you have that Waymo/Cruise would be any better at this?


Here's an hour of a Cruise vehicle driving in San Francisco.[1] Watch it deal with a trash bag that fell out of a dumpster, a left turn across heavy opposing traffic, double-parked cars, cars pulling out of parking spaces, a FedEx truck changing lanes across their path... Makes Tesla look like amateur hour.

[1] https://youtu.be/EZ75QoAymug?t=122


Since they rely on pre-mapping, that’s the equivalent of a carnival ride on rails and not really comparable to what Tesla is doing (unsupervised training on billions of unknown road miles).



Being able to detect oncoming traffic seems pretty damned important to me! If it’s not detecting vehicles across the line, what assurance do I have that it’ll detect vehicles that cross the line?


What you're saying is not a capability that even most commercially available automatic braking systems can do well. Several studies have pitted the half-dozen or so cars available with automatic emergency braking systems on them, and they've all been just about the same. Apologies for a lack of citations on this since I have to run, but we at least have to compare Tesla to the rest of the industry, or compare their actual AV solution (FSD, which is in limited beta and only has been for just a month or two).


Not buying the storyline. Certainly Tesla will have to keep upgrading its systems to achieve true FSD. They could even decide that LIDAR has become cheap enough to use anyway. What they have is cars on the road collecting data. They'll have to collect new data and develop new algorithms on that data but that path is not excluded from them.

> "It is a misconception that you can just keep developing a driver assistance system until one day you can magically leap to a fully autonomous driving system," Krafcik said. "In terms of robustness and accuracy, for example, our sensors are orders of magnitude better than what we see on the road from other manufacturers."

Not a good argument, very handwavy. Better sensors is not a moat. What is your software, how is it qualitatively superior? No one suspected that NNs or even DNNs could produce AlphaGo/AlphaZero until it did.

Until Waymo gives their tech to the public and we can compare how well their less used but superior hardware and software performs it's all opinions and PR.


The proposition that is being discussed is not whether Tesla as a company will get to autonomous driving, it’s whether Tesla’s current suite of sensors and its “HW 3” inference engine will get there. Keep in mind, Tesla has taken thousands of dollars from customers with the promise that they will deliver them “full self driving” in the near future (or actually in the past, as it was promised by 2018).

If Tesla revises their Sensor package to include LIDAR, that would be a sign that the “Tesla approach” has failed.


I agree. The entire Tesla gamble really hinges on whether the HW3 + current sensor suite is sufficient for them to achieve a safe enough degree of autonomy. It's about as minimal as it gets from the hardware front, and is entirely dependent on the machine learning side to evolve rapidly along with rapid software update deployments.

I feels there's a broad misconception across many of these types of discussions, where people assume that just because Tesla has not chosen to widely deploy LIDAR across their vehicle fleet, that they do not understand it's benefits nor value them. This is entirely false. For several years now, Tesla has driven test vehicles with very high end Lidar units to basically generate "ground truth" understanding of the world that can be then used to test and improve their camera-only FSD inference. They heavily relied upon it to build ML models to generate depth maps across frames, and that capability is already widely deployed via Autopilot (poor man's FSD). That it works in itself is somewhat shocking to me.

Lastly, if there's any indication of a mea culpa coming from Musk on this front, those ideas should be easily put to rest. Recent FSD footage shows rapid and drastic improvements to the capabilities of the FSD-beta system, and Musk himself is still repeatedly bellowing about how his personal vehicle with the latest software is able to handle more and more challenging scenarios. If anything, Tesla is doubling down, and expects to win. Given their history, if I was a competitor, I would be troubled at least.


> Tesla has taken thousands of dollars from customers with the promise that they will deliver them “full self driving” in the near future

Isn't that between Tesla and the customers who paid for it?


Yes - I'm a customer who paid for it and I'm happy with it, I know it's a bet on a future capability.

I think Karpathy's talks make sense, their bet on vision makes sense.

I think the comments from Waymo are tedious - we'll see who's right in the end. I wouldn't bet on Google.


Why? You can argue that CV is enough for SDC AND add extra LIDAR sensors to make it even better/safer (esp if the cost curve of the sensors make sense for consumers).

Just because I can ride a bike doesn't mean I shouldn't wear a helmet


> Until Waymo gives their tech to the public and we can compare how well their less used but superior hardware and software performs it's all opinions and PR.

What constitutes giving tech to the public? They are in Phoenix and they say they will be rolling out in other cities soon. So you can go to Phoenix and hail an autonomous ride to compare yourself, where you will find that no else is even close to offering truly driverless rides like Waymo is.


Waymo is amazing, but it doesn't make the business profitable: it needs to scale up the investment in hardware and keep its advantage until Tesla gets good enough to compete. This PR is mostly aimed at investors in Waymo.


"What they have is cars on the road collecting data" ... what do the ToS say about this? Is this really true they are gathering data from all their customers? I havent seen uptick in my internet bandwidth as Tesla owner.


If you are using AP or FSD and you take control suddenly, that will sometimes send data to Tesla so that they can improve the autonomy. I don't know what the ToS say, but Tesla are pretty open about the fact that the cars do that.


I expect Tesla and Waymo to be the 2 best driverless solutions in the near future. Tesla is a real competitor to Waymo, as the better sensor has a litmited time advantage until deep learning algorithms improve even more.

Waymo doesn't have too much time to get back the billions of dollars of invested R&D money, while Tesla is already profitable even without self driving.


I've never been convinced that deep learning is the correct approach for something so nuanced as driving a car. Deep learning is excellent for generalisations and consolidating data into single metrics. It's not so good at identifying edge-cases and reacting accordingly, those cases are more likely to be merged into other reaction types.


> nuanced as driving a car

Ha. Have you ever tried to play Go? When you feel competent, learn that there are still many orders of magnitude that can't even be told because the language is completely foreign to one who doesn't already think in it.


drivers came to trust it way too quickly. Drivers who were supposed to be closely monitoring the system instead spent their time looking at their phones, putting on makeup, and other distractions.

Really hope no one dies because of Tesla's autopilot. This is a real risk.

Quoting from below: if the car does it right 99% of the time it's just human nature to stop paying attention. The article makes a great point that slowly improving the autopilot with a person behind the wheel is super dangerous because we get lulled into a false sense of security.


People die all the time, there are even higher profile cases like the Apple Engineer who slammed in the barriers at full speed: https://www.kqed.org/news/11801138/apple-engineer-killed-in-...

There are some spectacular failures that are very scary because the car does something that a person would never do, unless unconscious, like this one: https://www.youtube.com/watch?v=LfmAG4dk-rU

You don't hear much about it probably because Tesla fanboys are plenty and rabid, so people avoid talking about it online.

The defence is usually stats about human drivers crashing more often and it makes sense until you dive deeper into the numbers because these stats are usually oranges v.s. Apples. It feels like they have some playbook with statistics to slap when someone says something negative. If that doesn't cut, they say that the victim should have followed the manual that says "your attention should always be on the road" then proceed posting a video about how thanks to the latest update they can sleep drive to work and attach a banana to the driving wheel to disable the attention safeguards.

If someone asks how is this an autopilot, there are usually two ways to handle it:

1) Autopilot is just a brand name, the self driving software is in beta, so the victim should pay attention all the times.

2) Autopilot is like on the planes, so only fools think that it is autonomous, therefore it was working as intended but they should have been using it like an airline autopilot. Crash due to user error.

I'm actually a fan of Musk and Tesla but I feel like the community engagement is very unhealthy and lacks scrutiny due to the "online army" of his.


I'm surprised you don't just link to the source that regularly updates itself on all Tesla deaths, and makes a point to cite its sources. Note this is not just for Autopilot, but all deaths occuring from Tesla vehicles (including people not in the vehicles). There are tags for various things like "Autopilot" and "Pedestrian/cyclist"

https://www.tesladeaths.com/


I'm optimistic for this area of tech and research in general, but agree we need to stop benchmarking against average human crash rates.

Although anyone can be hit at any time, the distribution of human crashes is not purely random. People who drive compromised, for example, are way overrepresented in those events.

So, theoretically, the tech could get to a point with a lower than expected crash rate for humans generally, but still increase your personal crash likelihood.


If you believe you are a better than average driver when manually driving, why couldn't you also be better than average at intervening when autopilot is driving?

Humans don't have forward facing radar nor 360 degree always active vision, so in some cases, cars can see hazards that even a perfect human driver cannot.


It's a natural human tendency to get distracted, especially when we are not engaged with a task (as with autopilot). Driving manually physically engages your body in the task, making it harder to get distracted. On the other hand intervening is subject to distraction and longer response times.


That "Apple Engineer" was playing a video game on this cellphone according to NHTSA. But it's "rabid Tesla fans" you say?

https://www.kqed.org/news/11803406/apple-engineer-killed-in-...


Yes, that's captured by the defences I listed above. You say that this is an autopilot, you make them purchase self driving capability package, you say that the car comes with all the hardware necessary for self driving, you fans post videos online about cars driving themselves and in the small print you say that it's not autopilot but Autopilot and the drivers must pay attention all the time and you put very weak safeguards to enforce that attention.

It's simple plausible deniability for Tesla and the fanboys. Good enough to keep them off the hook, legally.


That's such a hard reach and a bunch of word mincing.

To this day in 2021, many people still don't wear seat belts. Even though that's a solved problem. Some people will do stupid things. It's human nature.


Sure but car makers don't imply or give impression that you don't need seatbelts thanks to the collusion detection system or the airbags. They don't sell seatbelt-free system that only in the small print says that the seatbelt must be worn all the times except for off-road driving.


You're being a bit pedantic here. Every time you engage autopilot, it literally tells you (in bold letters) to "Always Keep Your Hands on the Wheel" and to "Be Prepared to Take Over at Any Time". Keep abusing it and it will actually disable it for the rest of the drive. Did you know that?

There's no "impression" being given. Like I mentioned. There will always be a small fraction of irresponsible humans that will do dumb things. No amount of engineering can fix that.

Case in point is this video: https://youtu.be/VS5zQKXHdpM?t=88 Her mom is even helping this kid film this stupid act just for clout and views.

Tesla owners grilled him and said what he's doing is dangerous and irresponsible. He then deleted them all and turned off commenting. But people like to demonize Tesla owners. Which I find bizarre.


I'm not so sure the second one is something a person would never do. The second car also brakes very late, for instance. Did autopilot break at the last second for the Tesla, or the driver? I'm also bearish on autopilot and am not saying the Tesla performed well, but I bet human highway drivers run into stationary traffic all the time too.


Wow that second one is terrifying. Particularly since that was only 7 months ago!


One has to distinguish between "autopilot" and "FSD". The "autopilot" is used for driving on highways and mostly uses radar for obstacle avoidance. It works great with handling traffic, that is cars moving around, but not for static obstactles. The problem is the low spatial resolution of the radar, so nonmoving obstacles are difficult to distinguish from the background reflections.

"FSD" is creating a 3d-model of the environment based on the camera image. But so far it is only active outside of highways. This should be much better at avoiding static obstacles. In any case, they are different systems so experiences with one cannot always be transferred to the other.


That was the article, but the crash was in March 2018.


They meant the YouTube video of the Taiwanese crash from the 1st of June last year.


A lidar would have discovered that truck immediately.

Is is confirmed that this Tesla was using self-driving?


From the article:

- autopilot was not engaged, only cruise control

- the anti-collision applied brakes but not soon enough

- the driver walked away without a scratch


How do you differentiate between autopilot and cruise control in Tesla case? I assume by lane keeping engaged?


Yes - 'autopilot' is auto-steer combined with adaptive cruise control. Now, Tesla's technically do have 'dumb' cruise control that doesn't reduce speed in response to a car getting close, but that configuration is rare since you need to specifically call in to buy one without autopilot and is probably not what the article was referring to.


cruise control just keeps the speed set, as with every other vehicle. autopilot will lane-keep and adjust speed with traffic


> Autopilot is like on the planes, so only fools think that it is autonomous

Nobody other than pilots, engineers or enthusiasts knows the subtleties of how aircraft autopilot systems work.

Every other person on this planet knows it to mean "fly the plane by itself". And would expect a Tesla car to "drive by itself".


I think the parent comment was making the same point as you. We, but mostly Tesla, should be educating people on why autopilot doesn't mean it drives itself in all situations. While buying a car[0], the furthest the webpage goes is:

> automatic driving from highway on-ramp to off-ramp including interchanges and overtaking slower cars

And while it does all of this very well right now, there still is that piece of text at the bottom:

> The currently enabled features require active driver supervision and do not make the vehicle autonomous.

Honestly Tesla is doing the bare minimum here to have plausible deniability, but the text at the bottom really should be the same size as the text above it.

0: https://www.tesla.com/model3/design#autopilot


People already have died because of Autopilot years ago: https://www.nytimes.com/2018/03/31/business/tesla-crash-auto...

It is an inevitability that people will die because of any autopilot system if it lasts long enough.


Better source, also linked elsewhere in this thread https://www.tesladeaths.com/


People will die. It will also save lives. It’s probably already prevented thousands of accidents, some of which would have been fatal. It just doesn’t make national news when your car moves you out of the way of danger.


I am not so sure about this. Modern cars come with a lot of safety features, like automatic braking and adaptive cruise control. Sure, if you replace all 1995 Honda Civics with Teslas, you will save many lives. I don't think you could save any lives by replacing all modern 2020 cars from other manufacturers with Teslas.


All of the systems in other vehicles have the same pitfalls as any driver-assistance system and have the same driver behavior risks as Tesla's system.


They’re also designed and marketed very differently. The lane keeping in my Subaru isn’t good enough for me to fully trust it, so I monitor it carefully.

I suspect that there’s probably a local minimum for safety, where the autopilot is good enough to lull users into complacency, but not good enough to actually be safe.


> The lane keeping in my Subaru isn’t good enough for me to fully trust it, so I monitor it carefully.

We have a Hyundai Palisade with Adaptive Cruise Control and Lane Keep assist. Amazing for reducing fatigue (the small little corrections). Not good enough to let you get distracted. I always have to be mentally ready to hop in.

I think it's a perfect solution until we have true FSD that can handle absolutely all conditions and situations.


> The lane keeping in my Subaru isn’t good enough for me to fully trust it, so I monitor it carefully.

Basically everyone with a Tesla says the same thing.


> This is a real risk

It's even a risk to Waymo because, despite being responsible about it, Tesla made people either scared of autonomous cars or forced out legislation restricting it.


I mean people have already died sleeping with it on. But it’s also likely saved lives based on their deaths per mile number.


When has someone died while sleeping in an autopiloted Tesla? I've kept a close eye on this space and never heard of this.


There are six cases of death where autopilot was confirmed to be engaged at the time of the incident, with 15 total that claim AP was engaged but had not been proven (via post-crash analysis or otherwise).

https://www.tesladeaths.com/

Based on reviewing the articles linked to the data, only one includes a lawsuit that claims the driver fell asleep before the crash. Doing this would either require a weight on the wheel to simulate constant torque (otherwise, on the highway, you have to apply torque every 30 seconds and it'll lock you out from using autosteer if you ignore it for roughly a minute), or falling asleep within 30-45 seconds of an incident, which is entirely possible.

https://www.carscoops.com/2020/04/tesla-autopilot-blamed-on-...


I would advice you look into who actually runs tesladeaths.com and make up your own conclusion.


I'm fully aware - there's even a TSLAQ at the bottom of the page which is ironic given the recent stock surge. Nevertheless, it looks like it's the only source that tries to count deaths involving autopilot, so you can take their number then go through the articles to make a more informed conclusion.


that article says the driver fell asleep and the car crashed into pedestrians.

The original comment said "driver" died after falling asleep on autopilot - there has been no such case of that occurring.


Are you aware how low the deaths per mile is for humans?

I don't think we know if Waymo has a lower rate than a human yet. Tesla I don't know, but wouldn't be surprised if it is higher.


> In the 4th quarter, we registered one accident for every 3.45 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.05 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.27 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.

https://www.tesla.com/VehicleSafetyReport


That doesn't tell the story. Autopilot is essentially forced off for anything but highway, so they are missing the most dangerous but are racking up millions of the least dangerous miles quick.


I can't actually find anything suggesting that the most dangerous roads are not highways. I really tried. Perhaps there are more small accidents on local roads, but they would tend to be less dangerous. The evidence suggests that highways are significantly more fatal[0].

Also, Autopilot is not essential forced off for anything but highways. That is just factually incorrect. Notably, straight-aways are where Autopilot works best and are the most dangerous by crashes.[1]

I found data suggesting that a lot of accidents happen because people rear-end stopped cars at intersections[2] and issues with navigating intersections[3]. That is an area where Autopilot is pretty great today.

0: https://www-fars.nhtsa.dot.gov/People/PeopleAllVictims.aspx (see last table)

1: https://www-fars.nhtsa.dot.gov/Vehicles/VehiclesAllVehicles.... (see last table)

2: https://www.hg.org/legal-articles/when-and-where-do-car-acci...

3: https://www-fars.nhtsa.dot.gov/Vehicles/VehiclesLocation.asp...


IIRC for humans it's like 1 death per million miles driven. I believe Waymo is closing in on that. Not sure if Waymo handles the variety of conditions though. Musk and Tesla seem very flippant in their marketing.


But Waymo hasn't had any deaths and has over 20 million miles driven?


I don’t think those miles are directly comparable because most of Waymo’s miles are with trained test drivers, I think?


It’s not. It’s 1 per 100 million miles


It’s one per 100 million.


I’m not sure if I feel easy about this “statistic” - deaths per mile?? These are humans, and more importantly to the business - paying customers.

If my product has the possibility of killing my customer, I’d probably not be comfortable with making it market ready (without massively improving its safety features).


Here’s the thing, every car system will fail at some point and potentially cause a death. There is no such thing as perfect safety. It’s really important to quantify things like fatalities per million miles driven. At some point, you push your system into production because, even though at some point the system will fail and kill someone, it will in the mean time save dozens or hundreds of people from dying in preventable accidents. What is critically important is being confident that your system is safer than the alternatives that your customers would use if your product doesn’t ship.


Most products can kill someone. Entire lines of products routinely kill its user. Society deems them worth the risk. A no risk life is sitting alone in a room, where you will die to obesity.


Agreed. Where to draw that line is the debate. For kitchen knives there is a similar problem. As cold as it is the data is the best measure.


It's bound to happen, but also likely to be the result of the driver not paying attention.


Right, but if the car does it right 99% of the time it's just human nature to stop paying attention. The article makes a great point that slowly improving the autopilot with a person behind the wheel is super dangerous because we get lulled into a false sense of security. I hope Tesla just keeps it to highway lane keeping until they're ready to roll out better sensors.


This principle is glaringly obvious in retrospect, and seeing it put so succinctly actually changed my mind. I previously thought Tesla's approach was an achievable long shot, but I'm now convinced it's fundamentally wrong.

The safety-driver model pursued by Waymo and every other autonomy startup is the only responsible way to train an autonomy network because the drivers are specifically trained to look out for edge cases, versus consumers who will easily be lulled into complacency once the system looks superficially like it's running at level-5 competency.


While I agree that the Waymo, et al., way is the only responsible way, that is not an argument against Tesla’s way being workable, just that it is grossly unethical.


coffee cups say "be careful this is hot", but the regulator allows a product called "Full Self Driving" to be sold - I think ultimately consumer protection will step in here like it has in Germany [0]

0: https://www.reuters.com/article/us-tesla-autopilot-germany/g...


Yeah it's a fantastic example of technological developments far outpacing legislation


Can you pay attention to a video game for two hours, and during that time react to some important event within 1 second reaction time? I bet you can.

Can you do the same when staring at a wall for two hours?


Krafcik talks about this very issue in the article on how driver assistance systems give false sense of security to the driver.

> The Waymo team believes its own early experience—when it was the Google self-driving car project—bears that out. In the early 2010s, Google developed a driver-assistance system similar to today's Autopilot and considered selling it to automakers. But when they let Google employees test the software on public roads, they found that drivers came to trust it way too quickly. Drivers who were supposed to be closely monitoring the system instead spent their time looking at their phones, putting on makeup, and other distractions.

> The fundamental challenge here is that the better a driver-assistance system gets, the harder it is to get drivers to pay attention, and the less likely they are to be prepared if the software makes a mistake. The Google team didn't see a good solution to this problem, so they completely changed their strategy. They focused on building a self-driving taxi service that would never have customers in the driver's seat, relying on trained, professional safety drivers to oversee the software during testing.


I think the scrutiny should be whether fewer people die than otherwise would have.


There's two levels of risk, Tesla owners taking themselves out, and then there is innocent bystanders being taken out. It's pretty infuriating if you consider that we're all put at risk.


You are at a much higher risk from all kinds of humans, with varying levels of skill, in different states of mind manually driving these vehicles.


You've always been at risk of other road users, but that is well understood when you get in a car and drive somewhere.

What's new, and on top of that risk, and not well understood is the risk due to drivers mis-using a half-baked feature named "autopilot" as if it's actually an autopilot, because Musk insists that it works when it clearly does not always work.


Humans are much, much better drivers than autopilot.


For now, that’s kind of the goal here...


Road users aren't beta-testers. Especially the ones who didn't sign up for it and buy a Tesla.


People can die. Honestly in my opinion it’s worth it. Will it happen to me? No, because I’m extra cautious and won’t slack off but to make progress quickly these things must be done. If you really want to be hyper cautious, you can always make the case that it’s saving lives that would otherwise be lost.


> No, because I’m extra cautious and won’t slack off

A big issue is that some random guy in the Tesla behind you may not be so careful. So you cold still get into an accident because of Tesla's recklessness.


Watching this video of regular folk (not tech journalists or other "reviewers", or paid PR) riding in FSD Waymo has done a lot to reduce my "but it'll never happen" feeling.

https://www.youtube.com/watch?v=qAZ6tJSj9T4

It may very well be that future FSD modes will require highly detailed maps, manual curation, limited areas where things can work, but even that would be valuable. e.g. if self driving trucks only work on long haul highway-only, that's still a win. If FSD taxis are mostly rolled out in medium to big cities in the beginning, it's still be a win.

Trying to boil the ocean all at once, and launching an enduser consumer FSD program that can handle wherever the driver takes it, may be the wrong way to do it, top-down instead of bottom up. Start with small geographic areas, and expand as you go.


Huh, I'm surprised you find those videos convincing about the future of self driving cars. To me, they don't seem much better than the demos people were giving ten years ago, in academic competitions, etc.

The Tesla videos scare the hell out of me. My own auto pilot purchase scares me whenever I use it. I can't even use cruise control in my Tesla without it randomly slamming on the brakes.

It's pretty straightforward to make batteries and an electric car. To make a computer vision product work? Based on everything I've seen of the field over the past 20 years, I am not impressed by Tesla's efforts in the least. They seem crazy, not competent.


Waymo is quite a bit better than they were 10 years ago. It would be harder for a casual observer to note how much Waymo has improved in the past 4 or 5 years because the gains have been made largely at the statistical margins of performance.

It's easy to build a robot that can do one thing right, and that's all you need to get a cool demo. It's hard to build a robot that can deal with the many 1000s of things that can go wrong while attempting to do one thing right.


You don't think that driving actual customers is different from showing a demo that worked in a few cases?


> To me, they don't seem much better than the demos people were giving ten years ago, in academic competitions, etc.

From what I remember from ~10 years ago (and a quick glance at Youtube videos from then confirms it) the state back then was roughly:

- DARPA challenges where cars were driving on dirt roads at about walking speed

- High speed driving on completely closed of racing tracks (with no other cars present), e.g. BMW i330

- Simulated real world conditions on closed of tracks with the system significantly slowing down at intersections and pretty reliably producing slow speed crashes

Compared to what that video is showing, that's night and day.


Why do you choose to drive a car where you think the people created it are more crazy than competent?


I like that Waymo is confident enough in their system to put it out there for people to use. But honestly, I think that their reasoning for releasing it is exactly to reduce worries in people, that 'it'll never happen feeling'. I think they needed to put something (before it was even 99% ready) out, otherwise people would lose hope and investors balk at the progress. I really do think that it'll never happen, at least in the ecosystems current form. There needs to be special lanes or routes that self driving cars can take, special pick-up points, etc. Humans take risks while driving every day, the problem is, that we have an incredibly good picture of the risks involved in everyday things. Machines don't have this sense perfectly attenuated yet, and I have yet to see evidence that they ever will. With us, there is no variability in our senses. Our eyes mostly work the same every day, our perception skills doesn't change. Machines cannot even trust their senses, and their code has to reflect that.

Obviously, this is all speculation on my part, but when I see videos of waymo sitting in roadways and parking lots, blocking traffic, pedestrians, and just the overall flow of life, the more that I get the feeling that machines should not be allowed to participate.

https://www.youtube.com/watch?v=g5SeVxYAZzk


But people also shouldn't waste so much time of their life stuff in traffic driving vehicles either. I mean, the ideal city would probably be self-driving trains, people movers, and free scooters, segways, or wheelchairs everywhere. No AI vision systems needed.

If you live in urban Japan, do you ever need a car?

To replace all 273 million vehicles in the US with self-driving EVs would cost around $10 trillion @ $50k per car. You could built a national network of self-driving lanes for a fraction of that. You could build rail lines and subways in every major metro area.

The major upside of a car vs train though is privacy. You may not mind taking the train in Japan, but in SF, it can be disgusting, and so private vehicles and American culture may be inseparable.

But that still leaves many other solutions, like we have bike lanes, and HOV lanes, we could build AV-only lanes, and prohibit AVs from working (requiring manual takeover) when you depart them.

On long drives to work, you'd just stay in the AV lane and chill, and when you got close to an exit, you'd have to take over. It would still be a win if on a 1hr commute you only did 15 minutes of actual driving.


> You could built a national network of self-driving lanes for a fraction of that. You could build rail lines and subways in every major metro area.

Most of them will be built by government making huge losses, they will go from nowhere to nowhere and they will not function as intended. The California high speed train is a great example. Not to mention $50K will be put by people but the national rail network will have to built by government (not that they have to but very likely they will grab this opportunity).


Clearly, other governments can do it, so there must be a way for the US to do it too.


Human senses may be fairly consistent day to day, but variability across humans is quite significant. Even a single person's faculties (vision, cognition, focus) change drastically over longer periods of time, or when they're tired, sick, or worse (drunk). Machines at least have standardized hardware with self-test protocols and onboard diagnostics. There are innumerable scenarios in which machines have no good solution yet, but I suspect over time the heuristics will reach a threshold of "good enough for public use."


> It may very well be that future FSD modes will require highly detailed maps, manual curation, limited areas where things can work

This is the way.

We need a common standard of machine-readable markers along roads to make long-distance FSD viable. We have the technology, what we need is to agree on a single standard and get it installed.

For cities it's a similar thing, but there are a bunch of additional variables (called humans) to take care of.


So much hate towards Tesla here. I'm more neutral - dont have a strong opinion on this. But from Google's messaging it seems that the Waymo CEO is maybe a bit defensive (scared?). Tesla has probably the long term better strategy which leads to more accurate up to date data to constantly improve their system.

Tesla's crowdsourcing strategy might be the winning horse in this race.


That’s a very interesting neutral take that “Tesla’s strategy might be the winning horse in this race” and that the Waymo CEO may be defensive or scared. I personally don’t know enough about either company or their tech or how people react to self driving cars to call a winner when it comes to a something (in this case, fully self driving cars at scale) that literally don’t exist yet with enough certainty to predict winners or establish the emotions of either company’s leadership.


How else would you describe Waymo's message?

Trying to "educate" or talk down to a competitor with words like "this is not how this works" when Tesla has significant resources, expertise (actual cars on the road!) and knowledge in this area seems defensive to me.


To be fair, Waymo (back then Chauffeur) have solid experience with Tesla's approach. They started with the same strategy of gradual improvements. They found out (and shard) the same issues Tesla is running into in the past years: Users don't pay attention when monitoring a system doing a boring task - even when failures carry a high price. Waymo switched the approach from a gradual L2/3/4 improvements straight to developing an L4 system; Tesla is locked in to the gradual move due to it being part of their business model, and are now stuck with trying to make self-driving work with unqualified test drivers (i.e. customers) and outdated technical decisions (not using Lidar because they were too expensive back then).

So Waymo does have the experience to call out Tesla here.


I recommend watching Lex Friedman's interview with George Hotz - its has some good points that one might should consider around the different approaches on reaching L5.


The term you’re looking for is “Federated Machine Learning.” The more cars Tesla has, the faster they can train the neural networks edge cases.


They are collecting data on all cars yes, but are training a model in one place. That is not federated machine learning, it’s federated data collection at best.


I think the massive collection of data from more and more Teslas is going to eventually allow them to produce the best self-driving vehicle.

Its been a maxim of history that those with the most data generally win, whether that's military intelligence of troop movements from 2000 years ago, or LIDAR scans from a car today.


I think the best evidence for Tesla doing poorly here (or maybe that LIDAR really is necessary) is precisely that they have all this data and still aren't doing as well as others. Tesla has a seriously absurd advantage here: they can observe millions of drivers and see when the human drivers do things differently than their self driving software would have, and capture the data from those events. They've had this advantage for many years, and yet, they still don't seem to be performing that much better (if at all) than competitors.


I agree with this concern. They collect a lot of data, sure, but they are also the only one who thinks that video is enough. It's a bold bet; intuitively, one would expect that video-only is good enough for gimmicks and fares worse when it comes to covering the millions of edge cases that still need to be addressed.


This is a curious point: How much data does Tesla actually collect? I can imagine users not being too happy if their car clogs the upload for hours every time you park it in your garage?


More than you think. There are sensors everywhere and pretty much all of them are regularly uploading telemetry to the cloud. This goes much further than self driving cars, the entire CAN bus is constantly synced with the cloud, all sensors, vibration, tire pressure, etc. All in the name of predictive maintenance and whatnot.


Ah yeah you’re right. Distributed data collect.


Tesla does not use federated ML. Federated ML is specifically about training models on device, and is mainly a technique for preserving data privacy.


That’s not really what federated ml means, unless they are doing the gradient updates from the cars directly.


Thanks for sharing. I actually hadnt heard of "federated machine learning" before- need to read up on that.


Self driving is an unsolved problem, so I don't see how Waymo's CEO is qualified to say whether or not Tesla's plan is "how it works."


From other technologies such as speech recognition, "how it works" seems to be - collect a lot of data and develop and train better and better models.

So tesla is collecting data, lots of it.

Additionally, there's the capitalism side of things - make sure you have a revenue stream to continue "develop and train"

So tesla has a revenue stream matched with the data collection.

So... Tesla is head and shoulders above Waymo in this.


Tesla is quite far from full self driving. They have too much phantom breaking on the simple adaptive cruise control in good conditions.


As a Model X owner who rarely engages Autopilot for this exact reason, I concur. The number of times the car has suddenly started braking for no good reason (clear weather and good road conditions, few cars around) is way too high.


Phantom braking and similar issues are a full stop for me, no pun intended. That just seems fantastically dangerous given that so many drivers don’t pay much attention when they don’t expect you to brake. Seems like a recipe for full speed rear end collisions.

Tesla may make this stuff work eventually but it doesn’t seem safe at the moment and you can count me out.


According to Elon, phantom braking was fixed in October: https://twitter.com/EVHQ2/status/1314287584105779200.


It isn't, it still does it with all the other weird braking incidents also, like snowy road, truck comes from another direction in the corner - the car will alert and brake even when both are on their correct sides of the road. Not to mention any of those mid-road poles that are meant for the pedestrians. If they have snow then the car will again brake thinking I'm going to hit something. Impossible to use the cruise control on many roads.


“It should be”.... not exactly the lost reassuring response on a common failure mode of a safety critical system


That is true, but notice that the autopilot you use on the highway is an entirely different software than the FSD beta which recently has been released for off-highway use.


Its so frustrating when smart people can't speak plainly enough. Most people will see this video with a tile like "Watch Tesla's Full Self-Driving navigate from SF to LA with (almost) no help" and assume Tesla is FSD capable. https://www.youtube.com/watch?v=dQG2IynmRf8&feature=emb_logo

How then can Waymo say its not? Why can't they instead release a video comparing the Pros and Cons of Tesla FSD vs theirs.


TSLA and Comma.Ai aren't only ones. Ghost (https://driveghost.com/) is funded by top VCs (incubated at sutter hill ventures, same firm that incubated Snowflake) with a highly technical second time founder in John Hayes (first company is the public company Pure Storage).

They're taking the same bet (i.e. CV + ML advances on a large and high quality enough data set are enough). After listening to Hoetz on podcasts, I wouldn't underestimate this approach. People trying orthogonal technical approaches win surprisingly often.


I have a model Y and what it has today is 80% as useful as a completely self driving car. It’s mostly self driving on the highway, and the situations it can’t handle are very predictable: toll booths, stopped cars in the middle of the road, things like that. You need to available to drive but you always have plenty of warning before you need to take over.


80% as good is 1000% as dangerous. Tesla's vehicles perform just well enough to convince many users they're good enough to "self drive" and ignore while failing catastrophically in edge cases and absolutely not being ready to completely ignore. Tesla's reckless "autopilot" branding contributes to this.


Reality is not black and white. You need to strike a balance.

The claim of "1000% as dangerous" is sensational, and frankly, should be backed up by facts, or else you are just smearing mud.


My point is that it doesn’t fail in edge cases. It has very clear limitations, there are no surprises. Even if you were literally asleep at the wheel, the worst thing that would happen is it’d end up stopped in front of a traffic light, waiting for you to take over.


> the worst thing that would happen is it’d end up stopped in front of a traffic light, waiting

I get your overall point, but this particular statement is empirically false.


"very clear limitations" that users continue to ignore. Some of that is stupid users, some of that is users assuming that "autopilot" means more than it does because that term has already meant something for the last 40 years in our society - something that Tesla doesn't deliver on.

Tesla's warnings to put your hands on the wheel are obviously insufficient based on regular videos of drivers asleep at the wheel on the freeway (https://www.theverge.com/2020/9/18/21445168/tesla-driver-sle... https://www.youtube.com/watch?v=ZhObsMnipS8 https://www.businessinsider.com/drivers-sleeping-in-tesla-ca... and many more). At a bare minimum, their code needs to bring the vehicle to a safe half rather than continue on its way forever dinging if the user refuses to interact with it.

And no, there are far worse things that can happen. https://www.bbc.com/news/technology-51645566 https://apnews.com/article/ca5e62255bb87bf1b151f9bf075aaadf


> Tesla's warnings to put your hands on the wheel are obviously insufficient based on regular videos of drivers asleep at the wheel on the freeway

Every Tesla owner will tell you that those videos where people are "falling asleep" are staged.

Why? because if you don't acknowledge the warnings, it will turn on the hazards stop and disable autopilot for the rest of the drive.

https://www.youtube.com/watch?v=2uw97Gx1lYw


That particular attack had been fixed, but I remember people disabling those warnings with oranges: https://www.youtube.com/watch?v=TYZrehVQouc

It's an arms race between Tesla being legally on the safe side, and drivers using the car as Tesla advertises it.


> That particular attack had been fixed

Which one?

> drivers using the car as Tesla advertises it.

I'm confused, How exactly is Tesla advertising the car where you don't have to pay attention? It even explicitly tells you to keep your hands on the wheel and pay attention.


They call it Autopilot. On their car configuration tool, the text that appears in the biggest font says "Full Self-Driving Capability". The fine print doesn't matter - Tesla is advertising more than it can deliver.

Or this wonderful paragraph: "Navigate on Autopilot: automatic driving from highway on-ramp to off-ramp including interchanges and overtaking slower cars."

https://www.tesla.com/models/design#autopilot


> They call it Autopilot. On their car configuration tool, the text that appears in the biggest font says "Full Self-Driving Capability".

I don't think you understand. "Autopilot" and "Full Self-Driving Capability" (FSD) are two completely different features. Autopilot comes free as standar. While FSD is an optional feature, that adds things like: Navigate on Autopilot, Summon, Auto Park, Auto Lane Change etc.

FSD Beta can take you from point A to B with minimal to zero interventions.

https://www.youtube.com/watch?v=MaJCYYiDzQQ

> Navigate on Autopilot: automatic driving from highway on-ramp to off-ramp including interchanges and overtaking slower cars."

But it current does that. I used it everyday on my commute! That feature is called "Navigate on autopilot".

https://youtu.be/j6S_-35O_w8?t=406


"Autopilot" and "Navigate on Autopilot" are separate features? "Automatic driving" actually means "carefully supervised driving"? "Full self driving" means the exact opposite and requires your full attention?

If you can't see how this is massively confusing and misleading to consumers you've been a Tesla user for too long and need to step back for.some objectivity.


This argument without data is just as useless as Tesla's pro-AP argument without data.


> you always have plenty of warning before you need to take over

This only has to be false once.

I'd also like to highlight that sometimes AP has frightening regressions. One case was labeled "barrier lust", where - on a divider - AP would follow the wrong white line, and steer the car towards a barrier. See here: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...

It was fixed, after Walter Huang died through such a case.

Until the behavior returned some months later: https://www.reddit.com/r/teslamotors/comments/b36x27/its_bac...

So please be careful. Regardless of the value you put on your own life, car accidents impact more than just yourself.


Sounds like lane steering and adaptative cruise control on my 19k$ Toyota Corolla.


How well does that handle poorly marked sections of road? I have a similar system on a non-Tesla and that was one of its main weaknesses relative to AP.


Usually they need clear lane markings, at least one white line needs to be visible for the system to latch on.

Snowy roads, dirt roads etc. are out of scope for those.


Love the snarkiness. But no. The cruise control in your Toyota Corolla can't quite do this: https://www.youtube.com/watch?v=rWS9jjhLYSM


But that’s not what the GP comment is talking about, no? They’re specifically talking about self driving on highways. FSD beta works on city streets. On highways you’d use auto steer and traffic aware cruise control, which is exactly same as lane keep assist and adaptive cruise control in a Toyota.


Can the Toyota Safety Sense system perform an automatic lane change on the highway? I don't think it can. Please correct me if I am wrong.


I’m not sure, but I don’t believe it can.


Is anyone else confused on the merits of Waymo's argument? They differentiate "driver assistance" from "fully autonomous" systems in that Tesla somehow can never achieve the latter because they have developed the former. The only specific reason for this that I could find was a mention of LiDAR and how Waymo sees it as indispensable.

> Krafcik says that Waymo has largely completed technical work on its self-driving software and is now focused on scaling the technology up.

...cool? Again, no mention of a SPECIFIC reason Tesla cannot do the same work as Waymo outside of not having LiDAR sensors. They attack Tesla for not developing a fully autonomous vehicle from the get-go, but don't seem to explain why this makes it any harder for them to make the finished product.


The core point is you can't go smoothly from a driver assist to fully autonomous. They're just completely different from the point of view of the reliability of the software you need. A huge amount of things with driver assist you can write off as 'the driver will take over'. There's not a specific part of you can point to and say 'that's what will stop them', so much as the entire design of the system needs a different level of consideration because it becomes so safety critical, as opposed to safety-enhancing (autonomous driving is probably the most complex safety critical design ever attempted, by at least an order of magnitude. Driver assist is way easier by comparison). It affects the attributes of the sensors, the design of the neural networks interpreting those sensors, the high level design of the decision making of the whole vehicle, the low-level design of the electronics, basically everything.

Tesla hasn't put forward a plan which indicates their full autonomous development is going to be anything other than a continuation of their driver assist work (and it's certainly frequently and incorrectly stated that the state of their driver assist puts them ahead in the race to full autonomy), and if they want to be credible to those familiar with the challenges involved, they need to actually show something more than that.


> you can't go smoothly from a driver assist to fully autonomous

Ok. So Waymo has developed a 100% reliable full self driving system that handles every possible situation. It was never put in a situation in which it would have given back control to a human driver, because that would have meant "going smoothly" to driver assist mode, and we have already said that this is impossible.


I was not referring to the dynamic shift while in use, I was referring to the development process. Regardless, the waymo system does not have a driver assist mode, it's either in control or it isn't (and during development it may be overridden by or automatically fall back to a safety driver, who has a job akin to a driving instructor, something which general requires more qualifications and attentiveness than a driver).


A full discussion on this would require an article several times as long as this one as well as some understanding of how vehicle control systems work. I do a fair amount of work with OEMs in this space and to summarize it in a horribly short way:

1) A system that relies primarily on cameras is necessarily limited by camera constraints. No, just because humans primarily rely on vision does not mean an ML model with cameras can do the same and achieve the same results in situations as complex as urban driving.

2) Every major player other than Tesla in this space spends a huge amount of time architecting an entire computing, data exchange, and control platform for autonomous vehicles that is bulky by necessity, but far more robust both in terms of sensor capabilities and ability to cross-check between data inputs to decide what control signals to send. Tesla basically added some cameras and low-resolution radar (they're improving the radar now, but it sure as hell wasn't in the platform they originally promised was FSD-ready) and is hoping it will be enough when everyone else studied the capabilities years ago and decided it wasn't.

To summarize, what Tesla has built is a platform with capabilities that very strongly seem to be limited to Level 3 automation, while engineers working on Level 4 long ago concluded that the Tesla approach is insufficient, and then adjusted their own hardware and software strategies accordingly.


The argument is that they are not even in the same category of technology complexity.

Which is very reasonable.

Tesla's real-world deployment of autopilot is frankly probably an advantage of sorts. And maybe starting with those realities is a 'good path'.

But it doesn't change the fact that the gap is quite big.


He seems so obviously correct that I can’t fathom how any engineer could see things otherwise.


Musk seems more of a marketer than an engineer to me, even though he's always characterized as one.


Musk has degrees in economics and physics. My undergrad is in physics and my dad is an engineer. They are definitely different skillsets.

When I decided to build a treehouse for my kids, I came up with a clean sheet design. My dad saw my drift and walked me through a redesign that preserved the core idea. And then my neighbor, a general contractor, supervised the build (and pretty much doubled every safety margin along the way). My other neighbor (a self-employed roofer) did the crazy bits.

I would be inclined to credit Musk with broad outlines and I would give him credit for being able to participate in the engineering process by asking some sensible questions. I would not be inclined to credit him with imposing tried-and-true design methods and constraints. Similarly, I would give him credit for broad outline path-to-profitability. I would not give him credit for day-to-day diligence of double-entry accounting that the CFO worries about or the contract negotiations his PMs answer to the CFO's staff for.


Not only did he sell two tech companies where he worked as a software engineer early in his career, but he has also been the chief engineer at SpaceX for almost twenty years, and the results of that kind of speak for themselves?


Is his role at SpaceX for his ego or is he actually signing off on engine designs?


They gave up on patents over a decade ago so we might never really know, but it's widely reported that he designed the original Falcon rocket, has a hand in the engine designs, the Dragon capsule, the space suits, and now Starship and everything surrounding it. For most people, just being a part of one of these projects would be a mark of success...


Have you even listened to him talk? Every word that comes out of his mouth is pored over to make sure that it contains truth.


It's been 311 days since Elon Musk promised to produce ventilators at the SpaceX factory.

It's been 546 days since Elon Musk claimed Tesla was increasing production of solar roofs towards one thousand per week by end of 2019

It's been 644 days since Elon Musk said there will be a million fully autonomous Tesla robotaxis in a year

It's been 726 days since Elon Musk said Tesla would likely be cash flow positive in all quarters going forward. Tesla lost $702.1 million in Q1 2019 and $408 million in Q2 2019.

It's been 747 days since Elon Musk said the new Roadster will use rocket technology that will allow it to fly

It's been 761 days since Elon Musk said brake pads on a Tesla would never need to be replaced.

It's been 817 days since Elon Musk said Teslas should be able to read and understand parking signage by end of 2019.

It's been 902 days since Elon Musk tweeted funding was secured to take Tesla private at $420 a share.

It's been 925 days since Elon Musk said he would rescue children stuck in a cave with a custom built mini-sub.

It's been 929 days since Elon Musk pledged to fix Flint's lead water crisis.

It's been 958 days since Elon Musk promised employees that there would be no more layoffs. They laid off 1000 employees 718 days ago.

---

Truth doesn't seem like something he's particularly concerned with


Please do something else with your time.


https://elonmusk.today/

that comment took me a minute to write.


Musk has no engineering training whatsoever.


He never got any engineering training. The closest he got was a BA in Physics. But yes, he’s a good marketer and his first product is himself.


I had to go back and re-read the article because I thought I had missed something. The whole content seems to consist of only three points:

1- that their sensors are better and lidar is indispensable.

2- that there is no continuum between assisted driving and fsd.

3- that they have already achieved full self driving.

That's it. Neither 1 nor 2 is supported by any argument, and 3 contradicts the widespread opinion here that FSD is impossible (at least without AGI).

Note: I'm not defending Tesla, I'm just saying that this article is very light on actual content.


An engineer who is being logical and thinking from first principles should come to the same conclusion, but the vast majority of the population base their viewpoints on what the internet and mass media have told them for years is "true", not on what actually makes logical sense in objective reality.

The news has been spamming PR pieces for years about self-driving cars, with absolutely no technical merit to them. Twitter/Reddit/Etc have been absolutely spammed by echo chambers that all assure themselves that self-driving cars have been invented and will be released soon.

The population is awash with the viral idea that they're possible, exist, and are on the way, because that's just what they've been told for years. The average person repeats the idea because that's what the internet told them is true, which makes it even more "true", since now there's another person repeating it.

What percentage of the population that goes online and talks about how self-driving cars are coming soon actually have the engineering background / technical knowledge to come to that conclusion?


Humans, surprisingly, don't have lidar, yet they can drive.


Humans have general intelligence and understand what their eyes are perceiving at a far deeper level than any CV system.


If I'm not mistaken, that's the gap that all these very smart people are trying to close.


For how long?


Nobody knows. But solving self-driving by creating AGI is a ridiculous strategy, for a variety of reasons.


I’m not sure what that has to do with my comment. Neither I nor the Waymo CEO said you need lidar to drive.


As an engineer you should care about cost and other tradeoffs. At this point Waymo is not much more than an exorbitantly expensive tech demo. I sure hope that one day it is more than that.

Tesla has a million cars on the road. It is a technology people can use and afford and it is working.

comma.ai sells a device for $1000 that can be installed on the majority of cars sold in America.

It's still too early to say much about Waymo.


The whole point is they aren't actually doing what waymo is setting out to do, and it doesn't matter how many cars tesla has with driver assist it won't really help them get to full autonomy.


If having a lot of cars with sensors collecting data doesn't help, why did Waymo build so many?


Because testing the cars does help. Beyond a certain point you also need information on how the system responds, and not just in the areas your customers are comfortable having it engaged.


To me it's obvious that he is wrong. Great, now we can all just claim we are so incredibly smart and know anything based on feeling.

Waymo is wasting tons of money driving around one city, which is arguably the easiest city in the world to drive around. They did this for years and years and years.

Tesla has a much, much, much larger fleet driving around in literally all conditions, in almost every single country in the world. Their software already handles things that Waymo has not even begun to solve, like special street signs in Korea and China.

Will Waymo be the first to offer a taxi service in a geofenced area? Yes. Will Tesla be the first where I can go anywhere at any time? Yes.


There are other engineers in the same field with more technical expertise in this specific field of robotics than Elon, and they think that Tesla's path is not accurate enough. Maybe they're right. Maybe Elon is.

The problem is when you hold someone up on a pedestal and a cult of personality forms. That clouds your judgement and you lose your objectivity. It's great if Elon is right, but if he isn't I hope the fatalities, injuries, and property damages can be kept to a minimum and it doesn't hamper better technologies from being implemented.


Anyone wanna bet me that Tesla will be first to full level 5 self driving? Seems to be a lot of anti-Tesla folks here, put your money where your mouth is.


What Tesla has now, Waymo had back in 2012. They literally had an FSD program they were testing with certain employees, and once they found that people quickly get distracted and stop paying attention, they gave up on that approach and went for the new strategy of jumping straight to level 4 and skipping level 3.

So unless they've been doing absolutely nothing for the past 8 years, they've moved well beyond that, culminating in a level 4 driverless car in Phoenix. It's only level 4 right now, since it does not work in all conditions, but it's still far ahead of Tesla's level 3, which requires the driver's attention.

So far Tesla hasn't shown anything that Waymo did not have at least 5+ years ago.


Really. Waymo had this in 2012? https://youtu.be/YJxnVRAX2UA?t=4

Do you have an article/video you can share by any chance?

I've been following Google's self driving efforts for years and must've missed that.


If we're cherry-picking examples this comparison of 2009 Waymo vs 2020 FSD Beta is pretty funny

https://twitter.com/Tweetermeyer/status/1324122869702389761


Those are not even remotely the same... That Waymo video had been pre-mapped. To infer that Waymo in 2009-2012 could go from point A to B anywhere in the U.S. like FSD Beta is patently false.

Ever wonder why Waymo only has its fleet in a geofenced area of AZ?

https://www.reddit.com/r/waymo/comments/c24bcu/i_got_finally...


There's a difference between can't and won't. Yes, their operation in Phoenix is heavily mapped and fenced; that doesn't mean their technology can't and hasn't driven in more improvised environments before, and I highly doubt the Lombard Street example was mapped.

Videos recorded by the company will always be cherry picked by definition, since they would only post videos that show the technology in its optimal working state. And there's always a driver behind the wheel checking everything goes right.

The reason Waymo is operating the way it does is that they realize they need to be 120% cautious, as ONE accident is all it takes to throw all their work a decade behind, just like one accident is all it took to completely kill Uber's self-driving effort. So again, it's not that they need a fully mapped, geofenced area; it's that they choose to use one for now.

Hell, Tesla themselves posted their first "A to B" video back in 2016, yet here we are 5 years later and, from a high level, what they showed in that cherry-picked video doesn't look much different to an average viewer, but no one would claim that they haven't improved at all in the past 5 years.

This is basic anecdotal data vs real statistics. A video tells you nothing about where the technology is. Unless we can see real open data about miles driven and number of disengagements, there's no way to tell if Tesla is doing well or not. So far Waymo's numbers are an order of magnitude better than any competition.


> Videos recorded by the company will always be cherry picked by definition, since they would only post videos that show the technology in its optimal working state.

Except FSD Beta videos are coming from actual Tesla owners. Like the one I linked above. A pretty glaring difference.

These are just one of many:

https://youtu.be/i8qVLmflQjs

https://youtu.be/sd1NV7lviAo

> A video tells you nothing about where the technology is. Unless we can see real open data about miles driven and number of disengagements, there's no way to tell if Tesla is doing well or not. So far Waymo's numbers are an order of magnitude better than any competition.

You just contradicted yourself. Tesla actually publishes their data here: [1] https://www.tesla.com/VehicleSafetyReport and here [2]. https://electrek.co/2020/04/22/tesla-autopilot-data-3-billio...

Unless of course, you have inside knowledge on Waymo's stats/data. If that's the case. Please feel free to share.


I'm not sure how comparable that data is. It seems to just sum up the times where random Teslas were in Autopilot mode:

1. Autopilot, from my understanding, is L3 driving, with the driver needing to be attentive.

2. They are not recording disengagements, or even near accidents, only full accidents.

3. It seems likely people would use Autopilot mostly on highways and other places where it works more reliably, so it's not a uniform set of miles.

By comparison, here is 21 months of self-driving data from Waymo, with every disengagement and near accident reported [0]. It's L4 driving with no drivers at the wheel. I'm sure Tesla tests their FSD similarly, and probably tracks similar data; are they releasing that? The other links you posted say very little.

[0] https://www.theverge.com/2020/10/30/21538999/waymo-self-driv...
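The "not a uniform set of miles" point can be made concrete with a quick back-of-the-envelope calculation. All numbers below are made up purely for illustration; they are not Tesla's or Waymo's actual figures.

    # Hypothetical numbers showing why raw incidents-per-mile comparisons
    # mislead when the mix of driving conditions differs between fleets.
    def incidents_per_million_miles(incidents, miles):
        return incidents / (miles / 1_000_000)

    # Fleet A: mostly highway miles, where incident rates are naturally low.
    a_highway_miles, a_highway_incidents = 9_000_000, 9
    a_city_miles, a_city_incidents = 1_000_000, 6

    # Fleet B: almost entirely dense urban miles.
    b_city_miles, b_city_incidents = 8_000_000, 40

    a_overall = incidents_per_million_miles(a_highway_incidents + a_city_incidents,
                                            a_highway_miles + a_city_miles)
    b_overall = incidents_per_million_miles(b_city_incidents, b_city_miles)
    a_city = incidents_per_million_miles(a_city_incidents, a_city_miles)

    print(a_overall)  # 1.5 -- Fleet A looks much "safer" overall
    print(b_overall)  # 5.0
    print(a_city)     # 6.0 -- but in the city, Fleet A is actually worse

Without a breakdown by road type (and without disengagement counts at all), headline per-mile numbers don't tell you which system is better at the hard part of the problem.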


lol nice way of not betting


..and so does this mean you want to bet or not?


The parameters of the bet are not well defined. Both companies probably have "working" L5 technology, it just comes down to how much they're willing to gamble on the reliability and how early they're willing to put it out there. These things will obviously never be 100% reliable, so now the question is how many 9s each company is willing to wait for.

Waymo realizes that one accident is all it takes to completely destroy their operation, just like it killed Uber's self driving division. Tesla on the other hand has shown that they are prepared to be far more reckless and release things early just for headlines. To me it's meaningless who puts it out "first" because that's not a metric of how good the technology actually is. I'm interested in miles driven and disengagement statistics, which Tesla hasn't really been openly putting out like Waymo.

All I know is that right now, Waymo is the only company with an actual publicly available L4 driving car out in the streets, that is fully driverless with no one behind the steering wheel. And they have been for over a year. Meanwhile Tesla's latest private beta still requires someone behind the wheel and paying attention, and therefore is only L3.


Absolutely. I'll even go one further: we won't actually know the exact moment that Level 5 arrives.

Cars with level 4 + remote assistance will be driving people around for a decade or more, phoning home less and less over time until one day someone notes that no car has phoned home for an entire year. Then maybe a car will phone home and the clock will start again. Decades from then, people will shut down the last remote assistance center.

But at that point, nobody will care about declaring Level 5 because we will have been riding around for decades in cars without steering wheels that we don't have to drive.


I’ll take that bet. How much? I’m not anti-Tesla but I do see Tesla as a brand more than a technology company: good for business but not good for breaking new ground with technology — I’d put a lot against Tesla on that bet (I don’t think it matters for the business much though).


I say we do $1,000, but what did you have in mind? We will have to put the bets in escrow

We need to agree to what constitutes full level 5 self driving.

My interpretation of it is a car that can drive itself anywhere in the world, has been approved for commercial use in at least one country, AND recognized as the first to full self driving in at least 2 major publications.

We can put a 5 year time limit on the bet.


Unless I'm reading it wrong, it sounds like you (sixQuarks) and the parent (vegannet) want the same side of the bet :-) ie you're both betting that Tesla won't achieve FSD L5 in 5 years.


> sixQuarks 2 hours ago [–]

> Anyone wanna bet me that Tesla will be first to full level 5 self driving?


L5 under those conditions won’t be achieved in 5 years. Might as well not make the bet.


We can make a different bet on this.


> Anyone wanna bet me that Tesla will be first to full level 5 self driving?

Well, no, because (independent of opinion on the subject of the proposed bet), counterparty risk.

There's a lot better places to bet against (or for) Tesla where that problem is much better controlled.


Agreed. The only company with actual autonomous driving hardware at a scale of a Million+ cars on the road right now.


Agree. The key to solving the problem is data and they have it, by the bucket load. The HN crowd is pretty conservative when it comes to tech it seems.


The "autonomous driving hardware" they have is about the same as anyone else. It's just that those other companies advertise it truthfully as a safety feature rather than pretending you have R2D2 as a chauffeur.


The way we bet on companies is the stock market. I am neither smart enough nor stupid enough to sell short.


This is really about two different strategies for dealing with cultural and regulator norms more than two different engineering strategies.


They did pick different sensors and approach to world mapping in the first place. How is it just a cultural and regulator thing?


One method is spending money on sensors, the other is on gathering as much data as possible (cheaper sensors, but put in all cars).

In 20 years we'll probably have good enough algorithms that can work with few data and cheap sensors, so it's just a matter of time until the 2 methods converge.


Has it ever been like that in history? It's more like the hardware will get both more sophisticated and cheaper.


This seems like agile vs waterfall to me, which are two very different engineering strategies.


I have a feeling that AGI is a requirement for FSD. There are enough tail events while driving where you can only make "the right decision" if you actually understand what's going on.


> There are enough tail events while driving where you can only make "the right decision" if you actually understand what's going on.

I have the feeling this bar disqualifies most human drivers, too.


Human drivers fail reliably, in ways that are understandable. Non-AGI artificial intelligence based on a mix of heuristics and dumb statistics (aka 'deep learning') will fail in highly unpredictable and to a human eye completely foolish ways. The day these cars are let on the road in my area is the day I start taking the train.


You can predict the driving patterns of drunk human drivers, or humans that have seizures, cardiac, or other critical health events or just fall asleep at the wheel? Quite impressive.


I've seen many unexpected driver actions on car-crash YouTube channels.


> I have the feeling this bar disqualifies most human drivers, too.

There are plenty of people who wait for road-side assistance for a flat tire. It's arguable that a FSD system may be totally viable without having to solve every eventuality on the road. If it can stop someplace safe and call for help that's the most a lot of humans can manage.


To my mind there are certainly a lot of unknowns around interactions with people outside the vehicle. A simple scenario like someone directing traffic around an accident seems like a nightmare for FSD.

(Edited: slight wording tweak.)


Waymo following police hand signals: https://youtu.be/OopTOjnD3qY


That's impressive, although a dramatically easier problem than I was picturing.

Imagine nighttime, having to pass an accident, which means driving on the wrong side of the street with other cars beyond the accident waiting, and someone (quite possibly a civilian) waving you on.

Heck, I've been flagged down by a prostitute who wanted to get away from her John, running down the middle lane of an interstate at 2am. That was, admittedly, not a common scenario for a self-driving vehicle to have to deal with.


I feel like AI vs AGI is going to be the next P vs NP kind of issue.

People will mostly realize that the concept of "general" intelligence is flawed yet we might strive to get to it indefinitely.


FSD is all about collecting an insane amount of data about edge cases that happen in the real world. Tesla has built the infrastructure for this, and will win the FSD race by a wide margin.


Is snow an edge case? My cameras don't function when it's snowing or there's snow on the ground, or when there are snow banks around the road. My dash is lit with the "some cameras are blocked" warning 24/7 for the whole winter. And the proximity readings are blinking those yellow/red lights on the right side all the time, as the system thinks the snow bank next to the road is too close. Or the system thinks the camera is blocked because of light reflecting off the snow.

Seriously, if they can't solve even simple problems like these, how are they going to solve something more complex? I'd be happy with even a way to suppress these warnings when the system doesn't understand what it's doing, but I don't think it knows when it isn't functioning correctly.


Snow is a difficult problem even for humans.

In the worst cases the road turns from a 2-lane road into a 1.5-1.8 lane road when there's enough snow on the banks. Multi-lane motorways turn from 4 lanes into 2.8-3.5 lanes. Humans have trouble navigating that; dunno how any FSD system would manage it in our lifetime.


Also, sometimes the "lanes" that form on the road can cross the center divider into the oncoming lane, which is really dangerous if you're not paying attention. I almost hit an oncoming semi-truck one day because of this. It's an example of a rare edge case.


If cameras don't function in snow, then Waymo is in a worse position, because they rely on lidar -- which simply cannot work in snow, at all.


It doesn't work perfectly yet, but they're on their way making it work. Give it some time.


> They believe that lidar sensors will be indispensable to get early self-driving vehicles on the road.

I agree with this assessment. It seems to me that lidar has a consistency in its ability to detect objects that computer vision based techniques just don't have. While cameras may eventually be able to achieve the same thing, I would argue that lidar will always be as safe or safer, and for that reason I think it will be the only acceptable version if self driving breaks out in the next few years.


Is anyone here working in the self-driving industry? I have a background in Mechanical Engineering so that means knowledge in controls/motion planning stuff.

Is this still something that's being worked on in companies like Waymo? Or is it in less demand compared to computer vision/object detection problems? I like the idea of working on both technologies however I feel the knowledge gap between them is big.


I think Tesla has a distinct advantage over Waymo because of their ability to control the entire vehicle stack from low-level ECUs to the batteries, motors and high-level autonomous stack.

Waymo has to partner with automobile OEMs who are still figuring out how to build electric vehicles, and build them in a way that can integrate with Waymo's technology.


I don't think there is any reason for Waymo not to pair up their system with a regular petrol car. Electric vehicle capability is orthogonal to self driving.


Depends, if they are planning to run a taxi fleet or restrict themselves to new personal vehicles, integration and operational costs for ICE vehicles is likely going to be higher than EVs in a couple of years. There is little common between the vehicle dynamics and control for ICE and Electric vehicles, and it might be more practical to focus on one platform.


I'm anti-robot car ... so many people will die for progress with this.

I'm even more so after my friend's Jeep Compass's AI system decided to lock up her brakes and steering wheel as she was turning into the bathroom area after the Golden Gate Bridge. The AI said, nope, I'm taking over, you're going to crash, and so she couldn't maneuver her way out of hitting the highway wall, which she did, totaling that piece of junk; she was all bruised up too!

I will avoid getting any AI features in any new cars I buy, or just buy older ones. Unfortunately, those around me will be driving these cars where the brakes and steering wheel might just lock up too, and boom, they and/or we are in an accident. An accident the driver could have avoided if they'd had control, because algorithms can never anticipate the millions of different driving scenarios and execute the right one!


My Dad avoided a crash where he was nearly killed with children in the car and believes he did it because an angel took over the steering wheel and corrected his driving error faster than he could.

How is this relevant to you friend's story, you may ask? It's relevant to show humans are not necessarily reliable narrators of why a crash occurred.


Whether or not your Dad thinks "Jesus took the wheel," your Dad still had control of the wheel himself (he may think a spirit helped him, but his physical self still had control) vs. AI saying I'm locking your brakes and steering wheel and neither you nor Jesus can do anything about it!


But my point is your friend may be wrong about A.I. causing the particular crash the same way my Dad is wrong about Angels.


Your dad was in control of the wheel and made a subconscious/reflexive decision and avoided the crash. Whether he chalks it up to an angel or not is completely irrelevant. It is not about human narrative or beliefs, it is about whether or not a human was controlling the wheel.


I would bet quite confidently that your friend would not have been able to avoid a crash, and the “AI” is just a convenient third party to blame. You need to be at a speed way above recommended for an exit, to trigger the anti-collision system, and actually sustain injuries. Humans tend to overestimate their abilities quite strongly, especially in hindsight.


Sure, I hear what you're saying, but her insurance company said, oh no worries, we will pay for it all, it's not your fault... do you not think they have data about these things, and why would they pay for it carte blanche?


Unless you only have third-party coverage, I don’t think there’s any insurance company that would deny payment unless you were drunk or willingly taking risk somehow.


While I’m all for being critical of TSLA’s FSD rollout I do think a bit of perspective is warranted: in the USA drivers are routinely drunk, distracted, or just plain stupid.

Everyday day I get on the road and I’m forced to drive defensively because I cannot trust any car on the road to behave logically.

How is that situation changed at all by having computers that don’t behave logically?

There’s a huge confirmation bias at play where you all seem to think human drivers are good... they’re not. They’re absolutely 100% not. The average driver is a complete moron. Go spend 10 minutes scrolling the /r/IdiotsInCars subreddit to get a feel for what I’m talking about.

So no, I don’t think TSLA is being irresponsible in their FSD rollout. At worst it’s equivalent to the average human driver.


> Go spend 10 minutes scrolling the /r/IdiotsInCars subreddit to get a feel for what I’m talking about.

I don't think that is enough to form a conclusion. I have driven hundreds of thousands of miles and have yet to see an /r/IdiotsInCars moment. I think drivers are better than you think they are. If there are even just 10 recorded idiots a day, then that sub can be active and interesting every single day, but that doesn't reflect the fact that there may be 10,000,000 responsible drivers for each of those idiots.

> At worst it’s equivalent to the average human driver.

So I think this needs to be backed up with actual data and not some anecdotes from Reddit.


Oh they've got the data. I see it everywhere: Tesla has way fewer accidents per mile than us horrible human drivers. I don't know what it's based on, because there's a scary moment every few minutes of footage in the FSD beta videos, which are produced by Tesla enthusiasts who have signed some form of NDA (no live streams allowed).


Self-driving doesn't work until we develop AGI with a self preservation drive akin to that of human beings.

Doesn't matter how good your sensors are if you can not efficiently integrate sensor inputs into a model of the world that is useful for driving including so called "edge-cases." (Edge-cases are the whole problem by the way...)

Let's use this analogy. Imagine a person with severe mental retardation or a person zonked out of their mind on a dissociative drug. Their eyes (sensors) may work just as well as anybody else's... but would you trust them to drive a car?


"For us, Tesla is not a competitor at all," Krafcik said. "We manufacture a completely autonomous driving system. Tesla is an automaker that is developing a really good driver assistance system."

This is absolutely hilarious to me. What an amazing cope. To downplay Tesla's approach is something you'd expect from a competing CEO. To straight up declare they are not even a competitor is delusional. They are clearly competitors.


It's a long road ahead, but right now Tesla has orders of magnitude more real driverless miles driven, in more areas and conditions, all done safer than humans statistically. It's possible for someone else to win, but right now Waymo is certainly the underdog by any metric you could come up with EXCEPT that lidar is required in the short term. So it makes sense that this is his public stance.


One of the reasons I doubt self driving will happen any time soon (beyond Lane assist and similar minor changes) is that everyone is desperate to talk about why they're the expert and how it will work and no one seems to have a product ready to sell.

It reminds me of chip makers arguing over the definitions of 2nm, when they were not producing anything smaller than 20nm...


Musk has stated that since humans manage to get all info from the road using only 2 cameras, then Tesla cars can do the same and don't need lidar. Which makes sense, but also humans cause thousands of deadly accidents per year using those 2 cameras. So there is no certainty Tesla will be able to do better, it is a gamble.


Can someone please describe a scenario where an alert driver in a FSD Tesla would be allowed to blame the vehicle for the accident? Every Tesla + Autopilot accident I’ve read about has been blamed on driver error. This “heads I win, tails you lose” approach is disingenuous at best.


Tesla has been regressing with so many easy-to-fix bugs like Spotify playlists being cut off at 100 songs (broken for two months now). I don't ever see myself trusting their FSD when their developer and QA teams are causing or missing so many obvious regressions.


Elon Musk's reply on Twitter:

> To my surprise, Tesla has better AI hardware & software than Waymo (money)

https://twitter.com/elonmusk/status/1353564655088619520


"I'm rubber, your glue". A completely void reply.


I doubt that regulators will just sign off on self driving tech across the board. I imagine that it will be transitioned in starting with freeways / motorways - in which case, the question is probably a technicality.


What I think most people miss with FSD is that it's going to have a higher crash rate than average while the kinks are still being ironed out. That is ok because we're seeking to reduce the crash rate overall as fast as possible. I've been in many, many accidents in my life. Usually as the person not at fault or in the passenger seat or on the bicycle. I love the idea of self-driving cars. They will save many lives.

The critical part to get right, however, is to test FSD on people that are more prone to get in car accidents in the first place. The elderly, recovering alcoholics, etc. That way we have the best of both worlds. Real world data to improve algorithms on, but also not raising the overall crash rate.


If anything, technology should make people more responsible, not less. Everything that puts your life on autopilot is garbage.


On the one end, you have a company that is going head-first and releasing things which aren't ready into the market. On the other end you have a company that is terrified of releasing anything to the market.

It's interesting to see these two companies butting heads.


Why is it now the car companies who decide "how it works"?


Why not develop roads that act like tracks and can be guided by an algorithm? Seems like a simpler, albeit less robust, problem than autonomous vehicles.


What happens when a deer crosses the road?

What happens when there's a fire along the highway (e.g California)?

What happens when there's debris and objects in the road?

What happens if a car needs to make an emergency stop?

Where do you place these new roads?

Do they require roads to be re-built or can they be added to existing roads?

What is the cost of replacing the entire highway and city infrastructure?

The AV companies decided that the simpler problem is to have AI that works on highways, then incrementally teach it how to handle more complex situations. This is the most additive solution, whereas your solution is not necessarily additive and does not necessarily preclude the need for AI.


Simpler doesn't mean cheaper. Also simpler doesn't preclude AI either


The misanthropic, cynical view: Crashes are excellent training input.


You're not too far off; all SDC systems (including Tesla & Waymo) use disengagements to train on. So it's not accidents themselves, but human takeovers prior to what might otherwise have been an accident.
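For what it's worth, the general shape of that pipeline is easy to sketch (hypothetical log format below; this is not Tesla's or Waymo's actual code): each human takeover marks a window of sensor data worth labeling and retraining on.

    # Hypothetical sketch: mine the moments leading up to each human takeover
    # so they can later be labeled, replayed in simulation, and trained on.
    from dataclasses import dataclass

    @dataclass
    class LogEvent:
        timestamp: float      # seconds since start of drive
        human_override: bool  # did the safety driver / owner take over?
        sensor_frame: object  # camera frames, radar returns, etc. (opaque here)

    def mine_disengagements(log, context_s=5.0):
        """Collect the context_s seconds of data preceding each takeover."""
        takeover_times = [e.timestamp for e in log if e.human_override]
        samples = []
        for t in takeover_times:
            window = [e for e in log if t - context_s <= e.timestamp <= t]
            samples.append(window)   # candidate training example
        return samples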


Another baseless Musk bashing. Waymo doesn't have nearly as much data as Tesla. They're not playing in the same category.


Ah yes, the "more data will magically solve self driving" argument.


More data makes it easier to train and evaluate systems but will not fix an inadequate approach.

Humans can drive with vision alone because humans are way way smarter than glorified regression models. We have a higher order model of what is happening. I am skeptical that you can replicate this with current AI. The approach of using LIDAR to compensate for dumber AI with more and better data seems more sound.


“Seems” That’s all it is to you LIDAR fanboys.

You simply do not know what you are talking about.


Also, Google being short on data given they've been working on this problem for 12 years is a pretty poor argument. If anything, it should tell us how hard the problem of self-driving really is.


He didn't say that. You're putting words in his "mouth."


I mean, the argument literally was Tesla supposedly having more data than Waymo and nothing else. So I'm not sure how they didn't imply what I said.


So on one hand, since humans can drive using only vision - lidar isn't needed.

But on the other hand, even though humans don't need that much training data suddenly vast amounts of data is absolutely required?


Musk's propensity for risk tolerance, projected into the consumer market with some weird hubris, might just end up killing a few people and generating a few lawsuits.

It may not impede Tesla in any way though.


I really wish Tesla would abandon this quixotic quest for self-driving and focus on what they do best: Building the world's best cars.

If they could fix their build quality and repair issues I'd buy one tomorrow without any fancy driver assistance software because the Tesla Model S is the most fun car I've ever driven (manually).


Part of having a good car is abstracting away the biggest downside to driving: actually paying attention to the road for over an hour to and from work until you retire at 62 or later.

> If they could fix their build quality and repair issues I'd buy one tomorrow without any fancy driver assistance software because the Tesla Model S is the most fun car I've ever driven (manually).

While I'm not sure about the S, you might still be able to buy a "standard range" Model 3 which doesn't have any autopilot functionality, although is only marginally cheaper at $35k vs $38k - https://redd.it/eumoyl



