
I have one question for Tesla customers who trust the company to deliver full FSD.

How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

The NHTSA opened an investigation into premature HUD failures because they prevented the backup cameras from working. But the fact of the matter is that the company stored rapidly-refreshing log data on a small partition of the Tegra's internal flash. And you are trusting these devs with your life when you enable autopilot.
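
To make the write-endurance point concrete, here is a rough back-of-the-envelope calculation. Every number below is an assumption for illustration, not Tesla's actual part spec or logging rate:

    # Rough flash-wear estimate. All figures are illustrative assumptions,
    # not Tesla's actual eMMC spec or log volume.
    CAPACITY_GB = 8              # size of the flash region taking the log writes
    PE_CYCLES = 3000             # assumed program/erase cycles per cell
    LOG_RATE_MB_PER_HOUR = 500   # assumed sustained log write rate
    WRITE_AMPLIFICATION = 2      # small, frequent writes cost extra erases

    writable_tb = CAPACITY_GB * PE_CYCLES / 1024 / WRITE_AMPLIFICATION
    hours = writable_tb * 1024 * 1024 / LOG_RATE_MB_PER_HOUR
    print(f"~{writable_tb:.0f} TB of endurance, used up in ~{hours / 24 / 365:.1f} years")

With those made-up but plausible numbers the flash is worn out in under three years of always-on logging, which is exactly the kind of thing a five-minute design review should catch.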

You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.




Tesla is organized functionally. The Infotainment Group did the Console electronics. The SW people there did GUIs and such. So yes, between the electronics folks and the app folks, 'somebody' didn't consider write cycles. In other Tesla groups, such as Body Controls and Propulsion, I can assure you those geeks know such things and plan to deal with funky hardware. The Autopilot group is again separate. There really isn't much crossover. "Systems" is unfortunately an unknown word at Tesla. You know, parts is parts.


This is interesting to know, and your comment flipped a switch in my head - I'd like to know the organizational structure of a lot of companies out there. Is this information you acquired personally? Or is there a resource out there where you can refer to the structure of different companies?


Typically, the annual report will give you an org chart with the division heads for public companies. If it isn't there, it will be on the website or in some other publication, and if you can't find it and are an investor, you can always simply ask.

Here is Tesla's:

https://theorg.com/org/tesla

and a bit more detail here:

https://theorg.com/org/tesla/org-chart

From there on down it takes a bit of work to get more detail. We typically spend a day on this during the run-up to a DD to verify what we receive, and use a lot of googling, LinkedIn, and other sources to figure out who works in the company and in what role.

The GDPR has made this a bit harder. Team pages are a good source of info for lots of companies in the 10-100 people range; they sometimes list all of their employee names + titles.

I'm not aware of a single source of truth for detailed org charts, if it exists we'd be happy to buy it, it would save us a lot of time and effort.


> There really isn't much crossover

Except, apparently, the execution platform. You know, the bit that matters.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

That sounds more like the kind of situation where the software department said "we need to have a system that has X amount of storage" and the hardware department made the hardware for it, but there was some missing communication about endurance. It's likely not the same people writing the autopilot software.

That being said, I'm not a Tesla customer, and the way autopilot is deployed and marketed makes me very uneasy.


Well, it still speaks volumes about internal culture. Everyone on the team should know they are developing a safety-critical system/component. Yet Bob from software can write a sloppy spec, and Alice from hardware can not care that the spec is sloppy. It is entirely baffling.


>Yet Bob from software can write a sloppy spec, and Alice from hardware can not care that the spec is sloppy.

Ehhh, you have not been in the industry long, have you? :)

And that's why you get everything in writing, doubly signed off by all parties involved. Even telling people directly, to their face, with witnesses, does not work. Checklists for the departments do, however.


> Tesla's embedded developers did not understand the extremely simple concept of write endurance

Equally likely is that they do understand but just don't care.

From all accounts, Tesla seems to have a culture of move fast and break things.


Hmm I don't want to defend Tesla, but I do want to push back on this a bit!

Facebook made "move fast and break things" famous, which was Zuckerberg's way of presenting a tradeoff. Every company says it wants to move fast, but Zuckerberg made it clear that the company should care more about velocity than stability.

I don't believe that's Tesla's attitude. Rather, I think their attitude is more "move fast and ignore regulations". It's not that things won't break, but rather that the tradeoff Tesla is making is around regulation rather than things breaking.


Regulations are all we have to keep things from breaking... ignoring them for something like a self-driving car should be a criminal offense.


The other side of the coin is that they hobble along 5-20 years behind what technology makes possible. If you want to push boundaries, you sometimes have little choice.

Self-driving cars "within the next 10 years" have been a serious possibility for the last decade or so, yet governments and insurers don't have a policy ready and won't until a couple of years after the first self-driving cars are available.


Tesla, Google et al. are not little startups but huge corps with plenty of cash and the ear of any politician or CEO. If they can't get a policy enacted, maybe there's a reason for it.


Pushing boundaries is fine when there are no life-threatening implications.

Self driving is all about convenience and costs[1] and as such it's not necessary, nor is it advisable, to inflict the bleeding edge on the general public. Waymo's geofenced approach is less bad than Tesla's, and it's something that regulators can readily work with also.

1. But teh safeties!1! No. Just no. ADASes (advanced driver assistance systems), particularly autonomous emergency braking, remove the safety argument for self driving. With ADASes you have 95% or more of the (asserted) safety of self driving, and ADASes are available today, on a large and increasing range of cars. There are even retrofit kits.


There is nothing like self-driving in current regulations, much less 10-15 years ago when people seriously started working on it. So, in your view, even starting to work on this stuff should be a criminal offense?


>all we have to keep things from breaking... ignoring them for something like a self-driving car should be a criminal offense.

I think it would potentially be a manslaughter charge.


I don't disagree! I just think people misinterpret "move fast and break things" and use it anytime something breaks. I realize it's a nuanced point.


Also, I think it was just to cover the backs of FB engineers. Like, you implement a new feature and you are afraid you'll get scolded because it broke something. You know you are covered. So you dare to change things. Actually, were there even a handful of cases where things broke? (And I am sure FB will get rid of an engineer who breaks too many things.)


Yup, that's exactly it!

So, the culture was "it's okay to break things as long as you're moving fast". I don't think Tesla would explicitly say "it's okay to break things" to their engineers, but I do think they'd say "it's okay to ignore regulations".

In the end, they may have the same results; however, it's all about what employees know they're safe getting away with.


Tesla might actually say: "Everybody, we need to push end of quarter sales. You gotta release the FSD as it is. App team, you gotta implement some butt purchase button for FSD that has no undo. Thanks."


Safety-critical regulations, it seems. That gets close enough to "brake things" if you ask me.


But things have literally broken.


You have to kill a few people to make an omelette, as they say.


Sure, but I think you're missing my point.


That can be forgivable in some situations, but Move Fast And Break Customers? Not so much.


It's even worse to move customers fast and break innocent bystanders.


Nah, not as bad. They aren't a source of revenue. Now if the bystander also owned a Tesla…


A lot of this also stems from a culture of quarterly earnings reports and idiotic "fiduciary" duty to some shareholders instead of the primary duty being to customers and humanity.

The incentives are fundamentally defined in the wrong way, and the system has just optimized itself for those incentives.


There is no challenge reconciling imperfect FSD with high trust of that FSD.

When I decided to play the game of risk minimisation, I sold my car. Minimising risk isn't the most important goal of drivers, almost by definition. Cars are not safe in any objective sense. They are tools of convenience.

A fun hypothetical: you and a good friend get tested for spatial intelligence and it turns out there is a big difference in your favour. How big does the difference need to be before you tell your friend you are no longer comfortable letting them drive when you are in the car?


While spatial awareness is important during driving, I believe being focused on driving is even more so.

When driving the tight streets of old European cities with pedestrians jumping out everywhere, I usually watch for hints, like a tall car parked on the sidewalk that might be hiding a pedestrian planning to cross the street, and move my foot from the gas to hovering above the brake pedal. And a million other things like that, mostly by paying close attention to driving.

Sure, I believe my spatial awareness is also great, but that helps me parallel park in fewer back-and-forths, or remember the way, through a maze of one-way streets, to a place I've been to once six months ago. It does not help me reduce the chances of an impactful collision (sure, whether I ding a car in the parking lot might depend on it, but nobody is going to get hurt by that).

You are right that cars are not safe, but for some part, you've got control of the risk yourself. I also watch for hints a car will swerve in front of me, and I am sure I've helped avoid 100s of traffic accidents by being focused on the whole driving task. And other drivers have helped avoid traffic accidents that I would have caused in probably a dozen cases too. I think I am an above average driver simply because of that ratio.

You run similar risks when you board a public bus without knowing how the driver feels that day, and how focused they generally are.


> You are right that cars are not safe, but for some part, you've got control of the risk yourself.

I don't want to be in control of the risk, I'm a bad driver. Haven't owned a car for some years. Still drive on occasion when I need to with a hire car.

I want a computer that is better at driving than I am to do it. It is easy for me to see why perfect is the enemy of good on this issue.

You don't want to share a road with me when you could share it with a Tesla FSD.


>You don't want to share a road with me when you could share it with a Tesla FSD.

This might be irrational, but I'd rather be killed by a human than killed by a computer made by a company that's run by a gung-ho serial bullshitter. That would somehow suck worse.


> You don't want to share a road with me when you could share it with a Tesla FSD.

I'd rather share a road with you, a human.

Even if you're a self-admitted bad driver, humans have a strong instinct of self preservation which helps.

Software has no such thing; a bug in the code will let it accelerate full throttle into a wall (or whatever) without flinching, because it's not alive.


Bugs in humans let them do that too: "The US NHTSA estimates 16,000 accidents per year in USA, when drivers intend to apply the brake but mistakenly apply the accelerator."

https://en.wikipedia.org/wiki/Sudden_unintended_acceleration


Or: look-but-failed-to-see errors, which are an "interesting" cause of accidents. When I took my motorcycle driver's test, my driving instructor sometimes warned me that I needed to make movements in a particular way. He claimed that even though I would make eye contact with a car driver, they might look-but-not-see me. His reasoning was that, as a motorcycle rider, I'm a narrow, upright shape when a car driver may be scanning for something wide and horizontal (another car).


Riding a motorcycle is a tough one for car drivers, and not just because of the issue you mention: bikes can accelerate and brake much more rapidly due to their lower mass, and inattentive drivers can easily be caught out by that. Bikes appearing where it shouldn't be possible for a car to show up also amplifies the issue (you don't need to look over your shoulder in a single-lane street, but bikes easily show up there).

To be honest, I'd trust software even less if I were a bike rider riding in a European (or Chinese, Philippine...) city, but that's just me :)


> bikes can accelerate and brake much more rapidly due to their lower mass

Cars are typically able to brake faster than motorcycles. One of the reasons why tailgating on a bike is extremely dangerous.


Good point, thanks!


Being a software engineer, I do want to share a road with you more than with any self-driving tech out there today.

You need to experience truly bad roads to understand the complexity involved: roads you would easily navigate but that would leave software perplexed!

Sure, we need to be building it today to get there some day, but we are so far away!


There is no need for FSD, just simpler AI/sensors that detect an imminent collision and brake before the driver does (which is already a feature in some cars).


You mention focused driving, but here's a cool idea. Your subconscious, which actually handles most of your behavior, decision making, and nuanced calculations, gradually learns from your conscious mind. When you focus on things, you gradually train your subconscious to mirror that behavior and do it autonomously.

This is demonstrable by reflecting on new things you learn versus old things. For old things like walking, you barely put any conscious effort in; once you reach a certain age, the daily obstacle course that is life, full of tripping hazards, becomes effortless to get through without snagging your foot and succumbing to a sudden tumble. But if you were to try to rollerblade for the first time, suddenly you have to put massive conscious strain and focus into every movement just to avoid falling over something as simple as a slight texture change on a surface.

Also, an interesting thought on (conscious) spatial awareness. Here's a question: is your conscious mind aware of things first, or is your subconscious aware first? When your conscious mind becomes aware, how sure are you that it wasn't your subconscious alerting it beforehand? These are rhetorical questions which psychologists and neuroscientists already have insights about :).

Life is dangerous, but many of the dangers are predictable, and the brain is adept at growing to adjust to that predictability AND at learning to recognize indicators of unpredictable dangers (humans get anxiety in those moments). In the latter situations, intelligence and consciousness are needed. Dangers that are predictable can be handled subconsciously without much worry, given much practice and experience.

Tesla autopilot is a computerized subconscious that's consciously trained by all the tesla drivers.

I strongly suspect that we'll never have level 5 autopilot, with or without lidar sensors, unless the computers get a human-adaptable intelligence module OR some convention simplifies the environment such that new unpredictable dangers can be minimized to a minuscule and acceptable failure rate. I think people in this debate are focusing on the wrong issues.


You say we subconsciously handle things like obstacles during walking, but here I am at 38 years of age, tripping on an uneven sidewalk where there's a sudden, unnoticeable drop of a couple of centimetres (an inch): the same feeling as when you go down the stairs in the dark and forget that there is one extra step.

I agree we get subconsciously trained (here, my brain is expecting a perfectly flat sidewalk), but when I say focused driving, I mostly mean *not-doing-anything-else*: to the extent that I also keep my phone calls short (or reject them), even with the Bluetooth handsfree system built into my car with steering wheel controls.

The thing is that a truck's tailgate opening in front of you and things starting to fall out on a highway at 130 km/h (~80 mph) is very hard to train for, yet all four of us car drivers who were right behind when it happened managed to avoid it without much drama or risk to ourselves or each other. What self-driving tech would you trust to achieve the same today? Sometimes you don't care about averages, because they are skewed by drunks or stupid people showing off on public roads.

And stats by miles covered are generally useless: if it were accidents per number of manoeuvres performed, it'd be useful. Getting on an empty highway and doing 100 miles is pretty simple compared to doing 2 miles in a congested city centre.
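
To illustrate the denominator problem with made-up numbers (purely a toy example, not real statistics):

    # Toy example: identical per-manoeuvre risk, very different per-mile risk.
    RISK_PER_MANOEUVRE = 1e-5                    # assumed, same in both settings

    highway = {"miles": 100, "manoeuvres": 20}   # sparse, easy driving
    city = {"miles": 2, "manoeuvres": 50}        # dense, demanding driving

    for name, d in (("highway", highway), ("city", city)):
        accidents = RISK_PER_MANOEUVRE * d["manoeuvres"]
        print(f"{name}: {accidents / d['miles']:.1e} accidents per mile")
    # Prints roughly 2.0e-06 for the highway trip and 2.5e-04 for the city
    # trip: a fleet racking up mostly easy highway miles looks very safe per
    # mile while proving nothing about the hard manoeuvres.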


Cars are most dangerous for pedestrians, cyclists and bikers.


The HUD, as you call it, is not a safety-critical part of the car in Teslas. You can reboot it while driving without affecting the car. The self-driving computer is separate and has full redundancy, to the point of having two processors running redundant code. There is a reason Teslas are consistently rated as the safest cars on the road, with a low probability of being involved in an accident and the lowest probability of injury when an accident does happen.


Two processors? How do you do a majority vote if they disagree? (Or maybe they wanted to avoid a 'minority report' situation :-).) But honestly, do you know what they do? Although since it is not flying, probably some red indicator will light up, and maybe a stopping maneuver.


Apparently they just try again:

"Each chip makes its own assessment of what the car should do next. The computer compares the two assessments, and if the chips agree, the car takes the action. If the chips disagree, the car just throws away that frame of video data and tries again, Venkataramanan said."


The self driving computer was discussed in some detail at their 2019 Autonomy Day event.

https://youtu.be/Ucp0TTmvqOE?t=4244


Fail safety doesn't mean anything if the decisions it makes are bad, like thinking a plastic bag is a solid object on the road, or simply losing track of where the lane is and swerving into oncoming traffic.


>You're also entrusting my life, and those of my family, to them. But we'll gloss over that, because it's expensive not to.

This is my biggest concern. I'm going to get killed by some jackass' tesla while he's texting, because Elon Musk is a megalomaniac.


There are loads of people texting and driving who aren’t in a Tesla. I worry much more about them.


I worry about them, but not more than about people abusing AP; they're all in the same boat.

The people texting and driving are idiots, distracted idiots, but they have no misconceptions about whether their car will save them if they take a nap.

Elon's made comments like "your hands are just there for regulatory reasons" and overpromised for years, so now people abuse it until it's just as dangerous as, if not more dangerous than, distracted driving (stuff like intentionally sleeping or using a laptop full time).

Other manufacturers are coming out with features that protect me from texting drivers without generating a new breed of ultra-distracted drivers like those who are falling for Elon's act.

Now a base model Corolla, pretty much the automotive equivalent of a yardstick, will steer people back into their lane and warn drowsy drivers that the car is intervening too much.

A Tesla can't even do the latter.

-

One day we're going to look back and wonder why we allowed things like automatic steering without fully fleshed-out FSD.

I mean the driver is actually safer if you only intervene when something goes wrong. They're forced to be attentive, yet in all situations where they fail to be attentive and AP would have saved them... it does save them. And tells them to get their head out of their backside.

If AP did that, every person it has saved would still have been saved, and a few people it got killed would still be here today.


Same here. However, I am assuming you have decent sight, so you can at least protect yourself in certain situations. I am blind, and I am getting increasingly wary about the future as a pedestrian. As a kid, one of my biggest fears was automatic doors. I sort of imagined they would close on me and try to kill me. I am afraid this horror is going to come true at some point in my life. Automation is going to kill me one day.


Don’t worry, it’s much more likely that you’ll be killed by someone who is texting and driving some other make of car.


> How do you reconcile that belief with the fact that Tesla's embedded developers did not understand the extremely simple concept of write endurance?

Easy: compartmentalization of knowledge. Most software developers I have met have no idea about the storage stuff under their application; they trust the OS and the hardware people to deal with it. I mean, who can blame them in the age of AWS, or of actual servers where companies simply throw money at any problem and hardware is rotated for something new before write endurance ever becomes an issue? And the hardware people probably knew that the OS people would run Linux, but didn't expect logfile spam.
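
For what it's worth, the usual mitigations are cheap once somebody thinks of them: keep chatty logs on a RAM-backed filesystem (a tmpfs, or journald's Storage=volatile), or batch them in memory and flush rarely in large sequential writes. A tiny sketch of the second idea, purely illustrative and obviously not what shipped in the car:

    # Illustrative only: buffer log lines in RAM and flush them to flash in
    # large, infrequent, sequential chunks instead of constant small writes.
    import time

    class FlashFriendlyLogger:
        def __init__(self, path, flush_bytes=4 * 1024 * 1024, flush_secs=600):
            self.path = path
            self.flush_bytes = flush_bytes   # flush once ~4 MB has accumulated...
            self.flush_secs = flush_secs     # ...or every 10 minutes, whichever is first
            self.buf, self.size = [], 0
            self.last_flush = time.monotonic()

        def log(self, line):
            self.buf.append(line)
            self.size += len(line)
            due = time.monotonic() - self.last_flush >= self.flush_secs
            if self.size >= self.flush_bytes or due:
                self.flush()

        def flush(self):
            if self.buf:
                with open(self.path, "a") as f:
                    f.write("\n".join(self.buf) + "\n")
            self.buf, self.size = [], 0
            self.last_flush = time.monotonic()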


Please note that you are asking this question at the tail end of a pandemic during which a significant portion of the country decided it was preferable to "just let some old people die" than to lock down or even wear masks.

Those people will twist themselves into giving you a PC answer, but the truth is they're willing to crack a few eggs in order to get FSD today. They'll tell you no one else is even trying, and in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.


> in the long run FSD will save more lives and Musk should be praised for having the gumption to get the ball rolling.

In the long run, from a historical perspective, this is a very plausible outcome. It's happened before (most major construction projects prior to the 1950s, anything involving Thomas Edison, most large damming projects), where a historical event is tied to a bunch of dead innocents, but history books praise the vision and determination of the ones in charge for not giving up just because a few measly blue collars kicked the bucket early.


The difference, in both this and the grandparent post, is about choice.

If you fear the pandemic and want to lock yourself up in isolation, we should as much as possible allow that. And if you want to work on very dangerous projects for better rewards, you should be able to.

With autonomous cars the choice of risk may not be so easy.

Arguing about what real choice you have is overly pedantic; we should rather concentrate on the principles for the right outcome.


That reflects the values and focus of those who wrote those pop tech/business history books/articles.

I have seen history books that do reflect "but he also killed some people" or are critical. They are, however, less popular in some circles.



