
I think the FSD beta rollout is completely irresponsible of Tesla. Rolling out a clearly half-baked, safety-critical technology to untrained customers (yes, I know it's only a few beta testers) is nothing but a tactic to generate hype and get more customers to buy that $10k FSD package.

This is on top of Tesla being the least transparent company out there when it comes to reporting safety data or testing methodologies (Tesla's quarterly safety report is a grand total of a single paragraph). Compare this to how transparent Waymo is [1]; the difference in safety culture between Tesla and its competitors is stark. Not to mention how Tesla skirts the rules by refusing to report autonomous miles, with the excuse of classifying it as a driver-assist system, while naming the technology "Full Self Driving" and with Elon Musk hyping it up every chance he gets, claiming it will be Level 5-ready by the end of the year.

[1] - https://waymo.com/safety/




I have one question for Tesla customers who trust the company to deliver full FSD.

How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

The NHTSA opened an investigation into premature HUD failures because they prevented the backup camera from working. But the fact of the matter is, the company used a small partition of the Tegra module's internal flash to store rapidly-refreshing log data. And you are trusting these devs with your life when you enable Autopilot.
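
For a sense of scale, here is a back-of-the-envelope sketch of the write-endurance arithmetic. Every number in it (partition size, P/E cycle rating, log write rate) is an illustrative assumption rather than Tesla's actual figure, and it ignores write amplification and duty cycle:

    # Rough flash wear-out estimate -- every input below is an assumption
    partition_bytes = 1 * 1024**3      # assume a ~1 GB log partition on the eMMC
    pe_cycles = 3_000                  # assume ~3k program/erase cycles (typical eMMC)
    log_write_rate = 200 * 1024        # assume ~200 KB/s of continuous log churn

    total_writable = partition_bytes * pe_cycles        # bytes writable before wear-out
    seconds = total_writable / log_write_rate
    print(f"~{seconds / (3600 * 24 * 365):.1f} years")  # ~0.5 years with these inputs

The point is not the specific numbers, which are made up, but that the whole calculation fits on a napkin.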

You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.


Tesla is organized functionally. The Infotainment Group did the console electronics. The SW people there did GUIs and such. So yes, between the electronics folks and the app folks, 'somebody' didn't consider write cycles. In other Tesla groups, such as Body Controls and Propulsion, I can assure you those geeks know such things and plan to deal with funky hardware. The Autopilot group is again separate. There really isn't much crossover. "Systems" is unfortunately an unknown word at Tesla. You know, parts is parts.


This is interesting to know, and your comment flipped a switch in my head - I'd like to know the organizational structure of a lot of companies out there. Is this information you acquired personally? Or is there a resource out there where you can refer to the structure of different companies?


Typically the annual report will give you an org chart with the division heads for public companies. If it isn't there, it will be on the website or in some other publication, and if you can't find it and are an investor you can always simply ask.

Here is Tesla's:

https://theorg.com/org/tesla

and a bit more detail here:

https://theorg.com/org/tesla/org-chart

From there on down it takes a bit of work to get more detail. We typically spend a day on this during the run-up to a DD to verify what we receive, and use a lot of googling, LinkedIn, and other sources to figure out who works at the company and in what role.

The GDPR has made this a bit harder. Team pages are a good source of info for lots of companies in the 10-100 people range; they sometimes list all of their employee names + titles.

I'm not aware of a single source of truth for detailed org charts; if it exists we'd be happy to buy it, as it would save us a lot of time and effort.


> There really isn't much crossover

except, apparently, the execution platform. you know, the bit that matters.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

That sounds more like the kind of situation where the software department said "we need to have a system that has X amount of storage" and the hardware department made the hardware for it, but there was some missing communication about endurance. It's likely not the same people writing the autopilot software.

That being said, I'm not a Tesla customer, and the way autopilot is deployed and marketed makes me very uneasy.


Well, it still speaks volumes about the internal culture. Everyone on the team should know they are developing a safety-critical system/component. Yet Bob from software can write a sloppy spec, and Alice from hardware can fail to care that the spec is sloppy. It is entirely baffling.


>Yet Bob from software can write a sloppy spec, and Alice from hardware can fail to care that the spec is sloppy.

Ehhh, you have not been in the industry long, have you? :)

And that's why you get everything in writing and doubly signed off by all parties involved. Even telling people directly, to their face, with witnesses, does not work. Checklists for the departments do, however.


> Tesla's embedded developers did not understand the extremely simple concept of write endurance

Equally likely is that they do understand but just don't care.

From all accounts, Tesla seems to have a culture of move fast and break things.


Hmm I don't want to defend Tesla, but I do want to push back on this a bit!

Facebook made "move fast and break things" famous, which was Zuckerberg's way of presenting a tradeoff. Every company says they want to move fast, but Zuckerberg made it clear that the company should care more about velocity than stability.

I don't believe that's Tesla's attitude. Rather, I think their attitude is more "move fast and ignore regulations". It's not that things won't break, but rather that the tradeoff Tesla is making is around regulation rather than things breaking.


Regulations are all we have to keep things from breaking...ignoring them about something like a self driving car should be a criminal offense.


The other side of the coin is that they hobble along 5-20 years behind what technology makes possible. If you want to push boundaries, you sometimes have little choice.

Self-driving cars arriving within the next 10 years have been a serious possibility for the last decade or so, yet governments and insurers don't have a policy ready and won't until a couple of years after the first self-driving cars are available.


Tesla, Google et al. are not little startups but huge corps with plenty of cash and the ear of any politician or CEO. If they can't get a policy enacted, maybe there's a reason for it.


Pushing boundaries is fine when there are no life-threatening implications.

Self driving is all about convenience and costs[1] and as such it's not necessary, nor is it advisable, to inflict the bleeding edge on the general public. Waymo's geofenced approach is less bad than Tesla's, and it's something that regulators can readily work with also.

1. But teh safeties!1! No. Just no. ADASes (advanced driver assistance systems), particularly autonomous emergency braking, remove the safety argument for self driving. With ADASes you have 95% or more of the (asserted) safety of self driving, and ADASes are available today, on a large and increasing range of cars. There are even retrofit kits.


There is nothing covering self-driving in current regulations, much less 10-15 years ago when people started seriously working on it. So, in your view, even starting to work on this stuff should have been a criminal offense?


>all we have to keep things from breaking...ignoring them about something like a self driving car should be a criminal offense.

I think it would potentially be a manslaughter charge.


I don't disagree! I just think people misinterpret "move fast and break things" and use it anytime something breaks. I realize it's a nuanced point.


Also, I think it was just to cover the backs of FB engineers. Like, you implement a new feature and you are afraid you'll get scolded because it broke something. You know you are covered. So you dare to change things. Actually, were there even a handful of cases where things broke? (And I am sure FB will get rid of an engineer who breaks too many things.)


Yup, that's exactly it!

So, the culture was "it's okay to break things as long as you're moving fast". I don't think Tesla would explicitly say "it's okay to break things" to their engineers, but I do think they'd say "it's okay to ignore regulations".

In the end, they may have the same results; however, it's all about what employees know they're safe getting away with.


Tesla might actually say: "Everybody, we need to push end of quarter sales. You gotta release the FSD as it is. App team, you gotta implement some butt purchase button for FSD that has no undo. Thanks."


Safety-critical regulations, it seems. That gets close enough to "brake things" if you ask me.


But things have literally broken.


You have to kill a few people to make an omelette, as they say.


Sure, but I think you're missing my point.


That can be forgivable in some situations, but Move Fast And Break Customers? Not so much.


It's even worse to move customers fast and break innocent bystanders.


Nah, not as bad. They aren't a source of revenue. Now if the bystander also owned a Tesla…


A lot of this also stems from a culture of quarterly earnings reports and idiotic "fiduciary" duty to some shareholders instead of the primary duty being to customers and humanity.

The incentives are fundamentally defined in the wrong way, and the system has just optimized itself for those incentives.


There is no challenge in reconciling imperfect FSD with high trust in that FSD.

When I decided to play the game of risk minimisation, I sold my car. Minimising risk isn't the most important goal of drivers, almost by definition. Cars are not safe in any objective sense. They are tools of convenience.

A fun hypothetical: you and a good friend get tested for spatial intelligence and it turns out there is a big difference in your favour. How big does the difference need to be before you tell your friend you are no longer comfortable letting them drive when you are in the car?


While spatial awareness is important during driving, I believe being focused on driving is even more so.

When driving the tight streets of old European cities, with pedestrians jumping out everywhere, I usually watch for hints like too-tall cars parked on the sidewalk that might be hiding pedestrians planning to cross the street, and I move my foot from the gas to hovering above the brake pedal. And a million other things like that, mostly by paying close attention to driving.

Sure, I believe my spatial awareness is also great, but that helps me parallel park in fewer back-and-forths, or remember the way to a place I've been to once six months ago through a maze of one-way streets. It does not help me reduce the chances of an impactful collision (sure, I might ding a car in the parking lot or not because of it, but nobody is going to get hurt by that).

You are right that cars are not safe, but for some part, you've got control of the risk yourself. I also watch for hints a car will swerve in front of me, and I am sure I've helped avoid 100s of traffic accidents by being focused on the whole driving task. And other drivers have helped avoid traffic accidents that I would have caused in probably a dozen cases too. I think I am an above average driver simply because of that ratio.

You run similar risks when you board a public bus without knowing how the driver feels that day, and how focused they generally are.


> You are right that cars are not safe, but for some part, you've got control of the risk yourself.

I don't want to be in control of the risk, I'm a bad driver. Haven't owned a car for some years. Still drive on occasion when I need to with a hire car.

I want a computer that is better at driving than I am to do it. It is easy for me to see why perfect is the enemy of good on this issue.

You don't want to share a road with me when you could share it with a Tesla FSD.


>You don't want to share a road with me when you could share it with a Tesla FSD.

This might be irrational, but I'd rather be killed by a human than killed by a computer made by a company that's run by a gung-ho serial bullshitter. That would somehow suck worse.


> You don't want to share a road with me when you could share it with a Tesla FSD.

I'd rather share a road with you, a human.

Even if you're a self-admitted bad driver, humans have a strong instinct of self preservation which helps.

Software has no such thing, a bug in the code will let it accelerate full throttle into a wall (or whatever) without flinching because it's not alive.


Bugs in humans let them do that too: "The US NHTSA estimates 16,000 accidents per year in USA, when drivers intend to apply the brake but mistakenly apply the accelerator."

https://en.wikipedia.org/wiki/Sudden_unintended_acceleration


Or: look-but-failed-to-see errors, which are an "interesting" cause of accidents. When I took my motorcycle driver's test, my driving instructor sometimes warned me that I needed to make movements in a particular way. He claimed that even though I would make eye contact with a car driver, they may look but not see me. His reasoning was that, as a motorcycle rider, I present a narrow, upright shape when a car driver may be looking for something wide and horizontal (another car).


Riding a motorcycle is a tough one for car drivers, and not just because of the issue you mention: bikes can accelerate and brake much more rapidly due to their lower mass, and inattentive drivers can easily be caught out by that. Bikes appearing where it shouldn't be possible for a car to show up also amplifies the issue (you don't need to look over your shoulder on a single-lane street, but bikes easily show up there).

To be honest, I'd trust software even less if I were a bike rider riding in a European (or Chinese, Philippine...) city, but that's just me :)


> bikes can accelerate and brake much more rapidly due to their lower mass

Cars are typically able to brake faster than motorcycles. One of the reasons why tailgating on a bike is extremely dangerous.


Good point, thanks!


Being a software engineer, I do want to share a road with you more than with any self-driving tech out there today.

You need to experience truly bad roads to understand the complexity involved: roads that you would easily navigate but that would leave software perplexed!

Sure, we need to be building it today to get there some day, but we are so far away!


There is no need for FSD, just simpler AI/sensors that detect an imminent collision and brake before the driver does (which is already a feature in some cars).


You mention focused driving, but here's a cool idea. Your subconscious, which actually handles most of your behavior, decision making, and nuanced calculation, gradually learns from your conscious mind. When you focus on things, you gradually train your subconscious to mirror that behavior and do it autonomously.

This is demonstrable by reflecting on new things you learn versus old things. Old things like walking take barely any conscious effort: past a certain age, the daily obstacle course that is life, full of tripping hazards, becomes effortless to navigate without snagging your foot and taking a sudden tumble. But if you were to try roller blading for the first time, suddenly you have to put massive conscious strain and focus into every movement just to avoid falling over something as simple as a slight texture change on a surface.

Also, an interesting thought on (conscious) spatial awareness. Here's a question: is your conscious mind aware of things first, or is your subconscious aware first? When your conscious mind becomes aware, how sure are you that it isn't your subconscious alerting it beforehand? These are rhetorical questions which psychologists and neuroscientists already have insights about :).

Life is dangerous, but many of the dangers are predictable, and the brain is adept at growing to adjust to that predictability AND at learning to recognize indicators of unpredictable dangers (humans feel anxiety in those moments). In the latter situations, intelligence and consciousness are needed. Dangers that are predictable can be handled subconsciously without much worry, given enough practice and experience.

Tesla Autopilot is a computerized subconscious that's consciously trained by all the Tesla drivers.

I strongly suspect that we'll never have Level 5 autopilot, with or without lidar sensors, unless the computers get a human-adaptable intelligence module OR some convention simplifies the environment such that new unpredictable dangers can be reduced to a minuscule and acceptable failure rate. I think people in this debate are focusing on the wrong issues.


You say we subconsciously handle things like obstacles while walking, but here I am at 38 years of age, tripping on an uneven sidewalk where there's a sudden, unnoticeable drop of a couple of cm (an inch): the same feeling as when you go down the stairs in the dark and forget that there is one extra step.

I agree we get subconsciously trained (here, my brain is expecting a perfectly flat sidewalk), but when I say focused driving, I am mostly thinking of *not-doing-anything-else*: to the extent that I also keep my phone calls short (or reject them), even with the Bluetooth handsfree system built into my car with steering wheel controls.

The thing is that a truck's tailgate swinging open in front of you and things starting to fall out on a highway at 130 km/h (~80 mph) is very hard to train for, but all four of us car drivers who were right behind when it happened managed to avoid it without much drama or risk to ourselves or each other. What self-driving tech would you trust to achieve the same today? Sometimes you don't care about averages, because they are skewed by drunks or stupid people showing off on public roads.

And stats by miles covered are generally useless: if it were accidents per manoeuvre performed, it'd be useful. Getting on an empty highway and doing 100 miles is pretty simple compared to doing 2 miles in a congested city centre.


Cars are most dangerous for pedestrians, cyclists and bikers.


The HUD, as you call it, is not a safety-critical part of the car in Teslas. You can reboot it while driving without affecting the car. The self-driving computer is separate and has full redundancy, to the point of having two processors running redundant code. There is a reason Teslas are consistently rated as the safest cars on the road, with a low probability of being involved in an accident and the lowest probability of injury when an accident does happen.


2 processors? What happens to a majority vote if they disagree? (Or maybe they wanted to avoid a 'minority report' situation :-).) But honestly, do you know what they do? Although, since it is not flying, probably some red indicator will light up. And maybe a stopping maneuver.


Apparently they just try again:

"Each chip makes its own assessment of what the car should do next. The computer compares the two assessments, and if the chips agree, the car takes the action. If the chips disagree, the car just throws away that frame of video data and tries again, Venkataramanan said."


The self driving computer was discussed in some detail at their 2019 Autonomy Day event.

https://youtu.be/Ucp0TTmvqOE?t=4244


Fail safety doesn't mean anything if the decisions it makes are bad, like thinking a plastic bag is a solid object on the road, or simply forgetting where the lane is over a distance and swerving into oncoming traffic.


>You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.

This is my biggest concern. I'm going to get killed by some jackass' tesla while he's texting, because Elon Musk is a megalomaniac.


There are loads of people texting and driving who aren’t in a Tesla. I worry much more about them.


I worry about them, but not more than people abusing AP, they're all in the same boat.

The people texting and driving are idiots, distracted idiots, but they have no misconceptions about whether their car will save them if they take a nap.

Elon's made comments like "your hands are just there for regulatory reasons" and overpromised for years, so now people abuse it until it's just as dangerous, if not more dangerous, than distracted driving (stuff like intentionally sleeping or using a laptop full time).

Other manufacturers are coming out with features that protect me from texting drivers without generating a new breed of ultra-distracted drivers like those who are falling for Elon's act.

Now a base model Corolla, pretty much the automotive equivalent of a yardstick, will steer people back into their lane and warn drowsy drivers that the car is intervening too much.

A Tesla can't even do the latter.

-

One day we're going to look back and wonder why we allowed things like automatic steering without fully fledged FSD.

I mean, the driver is actually safer if you only intervene when something goes wrong. They're forced to be attentive, yet in all situations where they fail to be attentive and AP would have saved them... it does save them. And it tells them to get their head out of their backside.

If AP did that, every person it has saved would still have been saved, and a few people it got killed would still be here today.


Same here. However, I am assuming you have decent sight, so you can at least protect yourself in certain situations. I am blind, and I am getting increasingly wary about the future as a pedestrian. As a kid, one of my biggest fears was automatic doors. I sort of imagined they would close on me and try to kill me. I am afraid this horror is going to come true at some point in my life. Automation is going to kill me one day.


Don’t worry, it’s much more likely that you’ll be killed by someone who is texting and driving some other make of car.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

Easy: compartmentalization of knowledge. Most software developers I have met have no idea about the storage underneath their application; they trust the OS and the hardware people to deal with it. I mean, who can blame them in the age of AWS, or of actual servers where companies simply throw money at any problem and hardware is rotated for something new before write endurance ever becomes an issue. And the hardware people probably knew that the OS people would run Linux, but didn't expect logfile spam.


Please note that you are asking this question at the tail end of a pandemic where a significant portion of the country decided it was preferable to "just let some old people die" than to lock down or even wear masks.

Those people will twist themselves into giving you a PC answer, but the truth is they're willing to crack a few eggs in order to get FSD today. They'll tell you no one else is even trying, and that in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.


> in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.

In the long run, from a historical perspective, this is a very plausible outcome. It's happened before (any major construction project prior to the 1950s, anything involving Thomas Edison, most large damming projects), where a historical event is tied to a bunch of dead innocents, but the history books praise the vision and determination of the ones in charge for not giving up just because a few measly blue collars kicked the bucket early.


The difference, in both this and the grandparent post, is about choice.

If you fear the pandemic and want to lock yourself up in isolation, we should as much as possible allow that. And if you want to work on very dangerous projects for better rewards, you should be able to.

With autonomous cars the choice of risk may not be so easy.

Arguing about what real choice you have is overly pedantic, and we should rather concentrate on the principles for the right outcome.


That reflects the values and focus of those who wrote those pop tech/business history books/articles.

I have seen history books that do reflect "but he also killed some people" or are critical. They are, however, less popular in some circles.


I don't understand why it's legal. How can it be allowed to send out uncertified software to cars on public roads? Aren't there safety standards that need to be met? I thought that safety-critical products were better regulated than this.


Welcome to being me three or four years ago. I've literally met with MPs and ministers trying to get regulations and other things. They're coming. But they don't happen without civic engagement. Write to your elected representative and call for legislation. It works. It takes time, but it works.


In Europe, most of the Tesla features are disabled because they were deemed hazardous. Only lane assist and adaptive cruise control are enabled. The others are severely limited or disabled (Summon etc.).


While I'm not a big Tesla fan, I don't think legislation is any indication of the actual safety that can be provided. European legislation tightly follows what German carmakers can deliver. Once they can offer the same features, it will be legal in no time.


For the pollution aspect of engines, I have some sympathy with this view, even though it's not entirely correct.

When it comes to safety, though, I disagree. One of the decent things to come out of it is the Euro NCAP rating. The manufacturers are not part of the testing process, apart from needing to supply cars. Each car is then given a rating.

For autonomous driving, from what I can see it's still down to individual states.


>European legislation tightly follows what German carmakers can deliver.

Do you have a source for this? Cause it seems to me that they could easily deliver something like summon, seeing as parking-assist/automatic-parking already exists.


Wouldn't it make sense that, since "hardware" gets tested before it can go on the roads, the same would be the case for software? And if they find a bug, then disable the software wholesale until it is tested by regulators again (since a fix can introduce new bugs as well).


And they make driving with the remainder of AP extremely unsafe in the EU, sadly. For example, they limit the angle of turn, meaning the cars cannot drive safely around a lot of non-highway road corners without drifting into the oncoming lane (upon which the car brakes and beeps due to another feature called lane assist). Admittedly, the car could slow down before the curve, but that would get you into trouble with cars behind not expecting you to slow down for these kinds of curves.


It's heavily regulated, just ask geohot [1]. Most of the players in this space do their self-driving vehicle testing carefully under controlled circumstances with some approval from local agencies. Tesla seems to be cheating by punting all the responsibility onto their customers, because instead of testing Self-Driving Cars they're just shipping software to customers and letting people run that on their cars.

1: https://www.reuters.com/article/us-selfdriving-safety/comma-...


Well, software that controls a car is a new thing. So I would imagine that it hasn't really been regulated yet in most jurisdictions. For some reason we have a tendency to write laws so that they are specific to individual things rather than general and future-proof.


Software in cars is not "a new thing". Safety has increased immeasurably in cars over the last few decades, in no small part due to software. Complexity and opacity are a problem, but cars are much safer and more efficient today due in large part to software. I don't think that would have happened if there was some regulatory committee in place to audit the software for safety. We have liability for that, a much better model than a regulatory one.

> However, as the importance of electronics and software has grown, so has complexity. Take the exploding number of software lines of code (SLOC) contained in modern cars as an example. In 2010, some vehicles had about ten million SLOC; by 2016, this expanded by a factor of 15, to roughly 150 million lines. Snowballing complexity is causing significant software-related quality issues, as evidenced by millions of recent vehicle recalls.

https://www.mckinsey.com/industries/automotive-and-assembly/...


I don't get why it's not covered under existing laws; clearly every vehicle has to be operated by a driver with a valid license, correct? You would think letting a vehicle drive without a licensed operator at the wheel would be negligence.


I agree, but when it came to bullying, the anti-bullying laws apparently didn't cover online bullying, which didn't really make sense to me.

So I assume this is a case of: the person in the driver's seat must have a driver's license, but there is no law saying how much of the driving they must do.


Nobody is doing that. In the locales where truly autonomous vehicles are being tested, it’s happening under specific legislation and regulations put in place by the states. For Tesla, behind all their bluster, the terms and conditions you accept to use their “self-driving” features make it clear that the human driver is always responsible for safe operation of the car, and all of these features are just to assist the human driver.


Kinda hard to square that with a product called "Full Self Driving". Terms of service are generally worthless as a legal shield against catastrophic harm to customers.


Unless you're driving a Flintstones car, much of the driving you do in any modern car is done by software: from fuel injection, to power steering, to anti-lock brakes.


FSD rollout is purely driven by Elon's ego. He's addicted to fame and power, and that comes from the image he created of himself as a real-life Iron Man. I'm pretty sure he got to a point where he believes in it, and thinks he's invincible and can solve any problem because he's so much smarter than everyone. The stock bubble making him the wealthiest man alive only helped to solidify that.


I don't know, I don't get that feeling. What makes you think that? I don't know much about Elon, but I listened to one of his interviews with Rogan, and he struck me as extremely optimistic, but also grounded and not arrogant.

(I do agree partial-self-driving just seems like a terrible idea. I guess crash stats can reveal if this is true or not, but are perhaps not available.)


Autopilot and FSD beta are not the same. The latter is currently available to maybe a couple dozen testers that are clearly very well informed about the capabilities of the system, as well as the changes in each update. If you really don't believe it, watch their hours and hours of (frankly boring) videos of analyzing the behavior of the system in complex situations while still staying on top of safety.

It does remain to be seen how well Tesla will trust the general public with this level of improved autonomy. As you get closer and closer to the uncanny valley where things just appear to work, you get into the more tricky situations that truly befuddle humans and machines alike.

NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily. Aside from one or two incidents they've had complaints about, none of them have risen to the point where they had to put their foot down. Admittedly, Tesla were a big bunch of jerks about how they handled the situation, but still, these were isolated incidents with clear misuse from the driver's part.

I agree with you in that Musk is overly optimistic (no shit, he's been saying this would be ready in 2018, and it's unclear if it will be in 2021). But he's also quite well informed of the facts on the ground, and is clearly aiming for the moonshot winner-take-all prize by skipping lidar and high-precision mapping. That might be a gamble, but it need not be an inherently dangerous one, depending on how Tesla handles the super-grey areas around the uncanny valley, where the system appears to work but really isn't worth risking your life on. To some, it's already there, as you can see from idiots sleeping in their Teslas while on Autopilot. But again, outside of a couple of incidents over years and millions of miles, the rate of catastrophic failure (accidents) has been surprisingly low.


> clearly very well informed

I completely underestimated the role of professional safety drivers for autonomous vehicles. I thought it was "just a guy" sitting in the car for good measure, but it turns out that the majority of drivers are not fit for the job even after lengthy training, see e.g. [1] (a great podcast in general).

Also all autonomous driving companies employ safety drivers - except one.

> NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily.

I wouldn't put too much hope into the NHTSA regulating Autopilot. It took a two-year legal battle to get the data behind their 2017 analysis of Autopilot [2]; it turns out the data was provided entirely by Tesla, and worse, when confounders were removed, it still showed a higher crash rate for Autopilot.

If you take a non-American view of Autopilot, European agencies did scrutinize the crashes more closely and as a result have restricted the use of Autopilot.

If you are interested in the topic of autonomous driving I recommend the Autonocast podcast.

[1] http://www.autonocast.com/blog/2020/10/29/205-why-teslas-ful...

[2] https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


Elon will get into some pretty bizarre bouts on Twitter. I realize this is common for celebrities, but that whole "diver is a pedophile" thing was truly wtf.

https://www.wired.com/story/elon-musk-pedo-guy-doesnt-mean-p...


The whole thing just screamed of a man with an issue, surrounded by yes-men.


Westerners living in Thailand and elsewhere in SE/E Asia have gained a well-known reputation for such questionable behavior, among other things.


If you go read the court testimony, it isn't as strange as it seems on its face. The diver started the tiff, and the insult Musk sent in return was said to be common vernacular in South Africa, where he grew up.


Is it also tradition in South Africa to then e-mail press to insist that it investigate the target and pay "private investigators"?


Good point, but the whole thing is a made-up excuse. "Pedo" just means pedophile in South Africa.


His COVID comments last year were beyond the pale. The low point for me was when Shannon Woodward, who played a scientist on TV and to my knowledge isn't one, had to explain to him that tests are indeed not a big pharma conspiracy.

https://twitter.com/shannonwoodward/status/13275176940992757...


I don’t see any explaining going on in that thread?


There is a lesson in sales and trust hidden in there. No matter how good your device is, an inexperienced (and, worse, famous) person can sow distrust in it instantly. It wouldn't surprise me if the next machine is just 4 machines glued together to make it Elon Musk-proof.


"Had to"? She just replied on Twitter with some odd assumptions; those are very different things. I think anyone would agree that a test that is wrong half the time is not a good test.


LOL, I love it... an actress with no higher education has to explain antigen tests to the world's richest man.

We really are in the best possible timeline for general lulz and nonsense.


> FSD rollout is purely driven by Elon’s ego. He’s addicted to fame and power...

Whoa. You have strong feelings.

One of my Life Rules that has served me enduringly well is to avoid believing that I can confidently have real insight into what motives drive someone. That rule is not for their benefit, but for my own: to putatively "know" something is to invite an incorrect model of reality and thereby incorrect action. It doesn't actually matter what drives Musk or anyone else: ego, pure business interests, a desire to take humanity to the stars, boredom, pathology, space ghosts, whatever. What matters is behavior and its effects.

If you criticize Musk's ego, ambition, pride, there's nothing actionable for anyone, neither Musk himself nor Tesla's potential customers. If you criticize the Tesla FSD rollout, now there's something people can do.


He also personally received an obscenely large windfall as a result of this strategy.

It's debatable how significantly this impacts his decision making, but it's pretty hard to argue it doesn't impact it at least a little.


I don't think FSD rollout is the central goal. The goal is to build hype and thus market cap that Tesla can use to build factories. Ten years ago, Tesla had no factories (at high capacity). They've had absolutely ridiculous velocity since then precisely because of their ridiculous valuation. Even if this FSD thing doesn't work out, it might have been necessary for Tesla to get this far. If Tesla's stock crashes, they still have the factories. All they need is to make sure nobody can prove Elon knows he's full of it (securities fraud). His Twitter persona suits this strategy to a T.


Conspiracy-level nonsense here. Elon tried to take Tesla private. He himself didn't want to raise money at a higher valuation. He himself said multiple times that they don't deserve the high stock price.

He has been an optimist about this technology for a long time and has made that argument for a long time.

The claim that the FSD testing is there to boost the stock price is truly bizarre, especially because there is very little evidence of any correlation between FSD and the stock price.

Go look at the Wall Street models; most of them don't factor in large income from FSD. If you look at FSD testers, their videos don't have millions of views.


He didn't create that image on his own. He had a lot of help. He had a cameo in Iron Man 2 where Iron Man treated him as a peer!

(That scene, by the way, is a perfect example of why I hate superhero movies.)


RDJ’s portrayal of Iron Man is partially based off Elon Musk, not the other way around.

Edit: Since I’m being downvoted, here you go, today you learned: https://www.linkedin.com/pulse/true-story-elon-musk-robert-d...


And Larry Ellison; Elon Musk was still too new on the scene in 2008, and not the celebrity he is today.



Not sure about that tidbit, but in general that book is said to be inaccurate/too rosy about Musk.

From what I have heard, it was both:

Elon Musk, Larry Ellison Appear In Iron Man 2

https://www.forbes.com/sites/velocity/2010/04/29/elon-musk-l...


Kinda weird comment. Not that it matters, but in the scene Iron Man brushed Elon off -- he was "politely rude", and the scene (while serving as an ad for Elon) served the film by giving cred to Stark by having Elon fawn over him.

More importantly though, treating someone "as a peer" is kind of a weird criticism? Superheroes on TV these days don't seem terribly stuck-up. When other characters in the films seem honest, the heroes usually treat them with respect and dignity.

Anyway, thanks for the heads-up that you hate superhero movies, will keep it in mind next time I see one on a plane.


That safety 'report' from Waymo isn't a report at all, though. It is a general overview of their approach, how they say they will handle it, and a direction. Basically a primer for regulators and customers who don't know anything about it yet. It is extremely light on numbers over time.


There are a couple more PDFs linked in the same page. What did you find lacking in the safety performance data whitepaper?


What I wonder is: would countries like China be able to reach FSD faster, since they would possibly be less ethically bound? And similarly for other tech which would otherwise be limited ethically?


IIRC so far China doesn't seem to be doing anything like Tesla, most things I've seen were testing in a closed zone with an engineer, Waymo-style.

They also don't have any self-driving deaths for now, unless I missed something.

I thus don't think they will have this advantage. They might have the advantage of more chaotic traffic?


So this makes me wonder: why not? What's stopping them from advancing quicker in this particular field if they could cut more corners faster? And could they cut more?


Having an ethical framework that differs from yours is not the same as the absence of ethics.


I mean, with a single authority you could get away with more. I'm not saying they have an absence of ethics, just that they could possibly be less bound, and so develop their tech faster without having to worry as much about short-term trade-offs.

They could in this way become stronger than other nations that are more bound. With FSD they could start testing earlier, even if it's not guaranteed to be completely safe, and they could adopt it at full scale the moment it's definitely safer, or almost as safe, as humans, while in the Western world it would have to be much safer than human driving. Maybe this could mean years of advantage, and they could start immediately adopting it and optimising their economy.

Maybe ethics is not even the correct word. I'm talking about a trolley problem where they would be allowed to switch tracks while the Western world maybe would not. In this case it might mean more deaths up front, but fewer later with an optimised economy.


Tesla will become very transparent when the first round of lawsuits begins and they're facing multi-billion-dollar claims.

Toyota lost billions from simply not following accepted safety design practices. Does anyone really think Tesla's self-driving tech is ISO 26262 or MISRA compliant?


No, it is IEC 61508 compliant.

IEC 61508-7 Annex F gives an example of how to go from test hours to a quality and safety statement, in terms of failure-in-time rates.

With, what, 700,000 morning and evening commutes, Tesla is gathering evidence pretty quickly that their vehicles save more lives than they take through those rare FSD faults which lead to fatal failures.

Air bags kill people. Always have, always will. Fully MISRA and ISO 26262 compliant. And air bags save many more people.

Tesla FSD will come, fully validated and fully standards-compliant, and so will Waymo's (I doubt they go with MISRA).


I need to apologize: it's IEC 61508-7 Annex D, not F.

> This annex provides initial guidelines on the use of a probabilistic approach to determining software safety integrity for pre-developed software based on operational experience. This approach is considered particularly appropriate as part of the qualification of operating systems, library modules, compilers and other system software.

Tesla, given the size of their fleet, can also follow such an approach for their "application", especially the neural nets and the rule sets.
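
For a flavor of what such a probabilistic, proven-in-use argument looks like, here is a rough sketch. The fleet figure is the one mentioned above; the hours per vehicle and the zero-failure bound are my own illustrative assumptions, not the standard's tables. With zero dangerous failures observed over T operating hours, a one-sided upper bound on the dangerous failure rate at confidence C is roughly -ln(1-C)/T:

    import math

    # Illustrative zero-failure estimate -- the inputs are assumptions
    fleet_vehicles = 700_000        # figure mentioned in the comment above
    hours_per_vehicle = 500         # assumed relevant operating hours per vehicle
    T = fleet_vehicles * hours_per_vehicle

    for C in (0.95, 0.99):
        lam_upper = -math.log(1 - C) / T     # upper bound on failures per hour
        print(f"confidence {C:.0%}: lambda <= {lam_upper:.1e} per hour")

Whether field data like this can stand in for the rest of a standard's qualification process is, of course, the part people in this thread are disputing.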


Safety culture is an important topic too, but this article is about something else: technology paths.


> while naming the technology "Full Self Driving" and with Elon Musk hyping it up every chance he gets, claiming it will be Level 5-ready by the end of the year.

I am an anti-government, anti-regulation Libertarian, and even I think the government has a very legitimate role to play here in regulating this fraud. This is a bit like Humpty Dumpty, where words do not mean what they ordinarily mean.


> nothing but a tactic to generate hype

Congratulations, you've figured out Elon Musk.


WOW, 1 paragraph of data > a 48-page PDF of "look at me, I did this first so I know what I'm doing". Honestly this is quite dumb. Waymo will NEVER, I REPEAT NEVER, make it to market, let alone in any volume... and it has nothing to do with autonomy or the tech... just by virtue of having bulky, expensive sensors (which need to be serviced regularly) and the power draw of the compute. Other than being a very niche proof of concept like Project Loon, I think Waymo will also fold like all of its fellow Google X brethren.


Waymo had one of their employees not properly monitoring their car and they killed someone. I don't believe anyone has been killed by the Tesla development team? A couple of customers did die, but that was their fault for not monitoring the car? One was watching Harry Potter, I remember.


You're probably thinking of Uber which killed someone in Arizona, not Waymo.



