Tesla Dojo Custom AI Supercomputer at HC34 (servethehome.com)
77 points by belltaco on Aug 24, 2022 | hide | past | favorite | 93 comments



The most striking thing about the architecture is that it appears so heterogeneous and complex. Considering the vast amount of software/machine learning engineering behind model/data/pipeline parallelism schemes like Megatron-LM and ZeRO (which target hardware topologies that seem almost simple by comparison) I'm curious what abstractions are in place to make this beast of an architecture friendly to programmers. Can you program a small tile in the same way you would a large tile and like you would in CUDA for a large/small GPU? Are there dedicated kernel teams that implement common blocks like multiheaded attention with the topology in mind so researchers/engineers doing modeling don't have to worry about scaling the model architecture in a hardware-friendly way? Do they have a monstrous fork of PyTorch with "Dojo" supported natively?


I'm not sure I would call the architecture very complex. It's about as simple as you can make a scale-out supercomputer. I assume they essentially do static positioning of the cluster for training jobs, and have a translation layer from the TensorFlow middle-end to their thing. Google did a similar thing with their TPUs, so it makes sense that they would have architected TF to accept exotic supercomputers as backends.


TensorFlow (and PyTorch) convert your computation graph (constructed in Python) to XLA, which is then specialized to a specific hardware architecture. XLA is a good intermediate language, and in fact you can convert some memory movement in the graph to network calls, allowing you to run on parallel systems (like a cluster of GPUs or TPUs with their own non-host-based networking).

It still requires many experts, both to write the XLA to hardware translation, and ML engineers who know how to write TF python that executes quickly.

(note: Google has transitioned many projects to Jax, which also writes to XLA, as TF ended up being a bit of a pig with wings)
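For a sense of what that trace-to-XLA flow looks like in practice, here is a minimal JAX sketch (the toy `dense_layer` function is my own illustration, not from any Google codebase):

```python
import jax
import jax.numpy as jnp

# jax.jit traces the Python function into a computation graph, lowers it
# to XLA, and lets the XLA compiler specialize it for whatever backend
# is attached (CPU here; the same trace targets GPUs or TPUs unchanged).
@jax.jit
def dense_layer(w, x, b):
    return jnp.tanh(w @ x + b)

w = jnp.ones((4, 3))
x = jnp.arange(3.0)
b = jnp.zeros(4)
y = dense_layer(w, x, b)   # first call compiles; later calls reuse the compiled binary

# The lowered program handed to the XLA compiler can be inspected directly:
hlo_text = jax.jit(dense_layer).lower(w, x, b).as_text()
```

The "many experts" point above is exactly about the two halves of this: someone has to write the XLA-to-hardware backend, and someone has to write Python that traces into efficient graphs.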


> TF ended up being a bit of a pig with wings

Can you say more about this?


Everybody at Google wanted to add their specific feature to TF (gets visibility, users, research cred). Unfortunately, many different teams added multiple incompatible features that didn't compose. TF1 also had some serious problems where it was fundamentally designed around C++, without an understanding that most people wanted to work in Python. With TF2 a lot of stuff got redesigned, making many examples on the web stop working. There are too many ways to parallelize your computation (along any of the dimensions), and they change too frequently.

But I think the writing was on the wall when the folks building ML Pathways hit some performance problems and realized that Jax made it much, much easier for them to express the computations they wanted and see them run quickly on TPUs (DeepMind had also concluded this). Once Jeff saw that Jax was making stuff that ran faster than TF (for his pet projects), that sealed it.


It's interesting to see to what extent matrix multiplication and backpropagation are dominating the AI space these days.

I wouldn't be surprised if other approaches like genetic programming will make a comeback one day.

If there are any papers out there arguing for/against neural networks to stay in the king's seat forever, I would love to see them.


> I wouldn't be surprised if other approaches like genetic programming will make a comeback one day.

Why would they?

If you assume that your objective is kinda smooth, genetic algorithms and related methods are guaranteed to suck. And most "real" world things of interest can be assumed to be pretty smooth.


There are two parts to this. One is the representation: Neural Net vs Program. The other is the search approach: Back propagation vs crossover and mutation.

Your smoothness argument applies to the search approach. I would say "smooth" is not enough to describe the search space of real-world solutions. I would rather call it fractal-like. There are many smooth areas in a visualization of the Mandelbrot set. But the most interesting things happen in areas that are not smooth but follow some delicate logic. You find tiny local minima there that are hard to reach via gradient descent.

Genetic algorithms have a lot going for them in this type of search space. To pick one factor: GAs distribute the search power so that areas of the search space that are more promising get more resources. Similar to the optimal approach to so-called "multi-armed bandit" problems, GAs divide the search space in an optimal way, computation-wise.
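To make the resource-allocation point concrete, here is a toy GA in plain Python (my own illustrative sketch, not from any paper): survivors of each generation get the mutation "budget", so more promising regions of the search space receive more samples, bandit-style.

```python
import random

random.seed(0)

# Toy genetic algorithm maximizing f(x) = -(x - 3)^2 over real x.
def fitness(x):
    return -(x - 3.0) ** 2

def step(pop):
    # Keep the better half (elitism), refill by mutating the survivors,
    # so sampling effort concentrates where fitness is already high.
    pop = sorted(pop, key=fitness, reverse=True)
    survivors = pop[: len(pop) // 2]
    children = [x + random.gauss(0, 0.3) for x in survivors]
    return survivors + children

pop = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(50):
    pop = step(pop)

best = max(pop, key=fitness)   # converges near the optimum x = 3
```

Of course, on a genuinely smooth objective like this one, gradient descent would win easily, which is the parent's point.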


NNs only ended up working (at the level of human performance) because of 60+ years of research punctuated by periods of very little interest (including times when GP/GA were being explored widely in the literature). And a bunch of that research was in entirely different areas, like GPUs, which went through many cycles of refinement for gaming and HPC before the DL people recognized their utility.

To achieve parity, GPs and GAs would need demonstrations on the level of what was seen about 10 years ago in NNs, then enough people would get involved to start doing more tech dev to make the hardware do GAs really, really, really fast.


I think you have to narrow your question before you get results. That is, for use cases like fast-moving sensor inputs (as in a car) or reviewing a million videos to find identifiable patterns, CNNs perform well, but for other specific queries in domain knowledge or patterns of robotic movement, decision trees and random forests immediately differentiate themselves. So papers on specific topics that are not the specialty of CNNs might demonstrate effectiveness, and leave it up to the reader/critic to compare widely against other uses overall.


You can take a look here: https://youtu.be/27PYlj-qNb0


That is a genetic algorithm at work.

With "genetic programming" I was referring to applying genetic algorithms to code. So that the result of the evolution is a program that performs well doing a task like driving a car.

Nice research on this has been done by John R. Koza. But nothing new came out in this area for quite a while.
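For anyone unfamiliar with the idea, here is a Koza-style genetic-programming sketch in miniature (my own toy example, stdlib only): individuals are expression trees, and evolution searches for a program matching a target function. Real GP systems add crossover, typed trees, and far larger populations.

```python
import random

random.seed(1)

# Expression trees are nested tuples over {+, -, *}; terminals are the
# input "x" and the constant 1.0.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def rand_tree(depth):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0])
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def error(tree):
    # Squared error against the target program f(x) = x*x + x.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return rand_tree(2)
    op, a, b = tree
    if random.random() < 0.5:
        return (op, mutate(a), b)
    return (op, a, mutate(b))

pop = [rand_tree(3) for _ in range(60)]
for _ in range(80):
    pop = sorted(pop, key=error)[:30]          # truncation selection
    pop += [mutate(random.choice(pop)) for _ in range(30)]

best = min(pop, key=error)
```

The appeal is that the output is an inspectable program rather than a weight matrix; the difficulty, as the sibling comments note, is that nothing like backpropagation guides the search.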


If this stuff is any good, Tesla should make it available for everyone to use colab-style.

A hosted Jupyter notebook in a sandboxed VM able to send jobs to this new silicon is something that might be possible to set up by a small team in a few months, and could turn into a billion dollar business.

As a bonus, Tesla can use revenue from that to grow this supercomputer, while using any spare/unsold capacity for themselves.


It is already Tesla's plan to build AWS-style paid access to Dojo.

I think they said that during the first AI Day. Here's a 19 minute supercut of AI Day: https://www.youtube.com/watch?v=keWEE9FwS9o


I suspect they've just deprioritized it for other work.

But I think that was the wrong strategic move - they should have opened it up, together with some 'Tesla AI' demo models, in a colab environment. They can hire new employees to do that - it is separate work from that involved in making the self driving car, and will not block or interfere.

The only reason I think they might not is that they don't want to step on Google's toes - there are very close links between Musk companies and Google, and a direct competitor to Google's TPU product might hurt that relationship more than it generates in revenue.


What makes you think they deprioritized “it”?


>and a direct competitor to Google's TPU product might hurt that relationship more than it generates in revenue.

There is approximately zero chance that this is a consideration.

My guess would be that, like a lot of things under the Elon Musk umbrella, what they claim they are able to do in theory diverges far from reality. We've seen similar slideshows.


> They can hire new employees

The bottleneck isn't employees, it's chips.


If that were the case, they'd have already launched a 'demo' version with very low usage limits. Users can start building/porting their models, and then run them in a few months when the next batch of chips arrives.


Sometimes R&D just takes time.


The idea of having it publicly accessible has been public for 3 years. Maybe internally for more than that.

Yet, building a basic colab-like interface, using opensource tools, shouldn't take more than a few people a few months.

Obviously, they might not want to launch that before their hardware is ready, but their hardware has been in production for over a year now.

Clearly something hasn't gone to plan.


I’m almost certain that Google/Amazon/Apple et al have similar specialized computing hardware, but at smaller scales. We don’t publicly know much about their internal hardware. If they had the infra, I think it would be interesting to see some Tesla cloud service offering indeed.

[1] https://cloud.google.com/blog/products/ai-machine-learning/g...


Google has had TPUs for a while. Most of the incredibly powerful transformer architectures like Parti and PaLM are trained on them. They even merge TPU pods with Pathways so they can go up to 540B parameters (GPT-3 is only 175B).

https://blog.google/technology/ai/introducing-pathways-next-...


Tesla should focus on making full self driving cars finally work.

If you listened to their talk, their silicon comes without support for access control and multi-tenancy. Letting random people run on this would be an unmitigated disaster.


Tesla should aim for full self parking first. They're behind other manufacturers on that:

https://www.youtube.com/watch?v=nsb2XBAIWyA


So they have one of the world's most powerful supercomputers and they still can't figure out how to make the car handle an incoming merge lane or properly drive down a two-lane road without lane lines… I grew so frustrated with Tesla constantly trying to solve hard problems without mastering the easy stuff that I sold the car.


This might even be a symptom of their approach. They want to train a model that is able to generalize and behave like a human. In doing so they reject simpler approaches that can handle simpler scenarios much more reliably.


Maybe it's not an easy problem?


Maybe Tesla shouldn't be claiming it was solved in 2016, if it wasn't an easy problem.

The deadline for "Coast to Coast Full Self Driving" is over 5 years old now. https://techcrunch.com/2016/10/19/musk-targeting-coast-to-co...


I once drove a Tesla on a road where asphalt had been laid over what appeared to be an older concrete road. The concrete was visible through cracks in the asphalt surface in straight lines. To a camera, they sort of looked like lane lines, but they ran right down the middle of the lanes!

I was manually driving, but the car was constantly freaking out because it thought I was running off the road. :lol Yes, it can be a hard problem.

OTOH, my car is running regular AP and will not engage when no lane lines are available. Nevertheless, I've managed to trick it into engaging in a few cases by engaging while the lines are there and keeping it on when they disappear. It does this close to flawlessly in my neighborhood, which is actually pretty surprising.


>I was manually driving, but the car was constantly freaking out because it thought I was running off the road. :lol Yes, it can be a hard problem.

If cracks in asphalt fooling the car's AI is a "hard problem", then full self-driving is doomed.


That's a really weird take, tbh. That particular case was one that _should_ have been easy, but wasn't. But I've also been on roads where the humans were confused because construction had messed up the lines so much.

TBH, getting a computer to navigate a parking lot is one of the hardest self-driving problems. That doesn't make them all doomed.


It must be, he’s been selling Full Self Driving for years.


To be fair, it has been Full Self Driving for years.

Here's the car. Drive it yourself.


Yes, you still have to pay attention to the road and drive the car, since the car is still unable to drive itself reliably without a human, which was supposedly the Level 5 Full Self Driving (FSD) promise of 2020, with the 1 million robo-taxis still missing.

The product has been 'Fools Self Driving' demoware for years.


You missed the joke.


?

My comment already agreed with it, and it plays right in to the entire point of you having to still pay attention and drive the car yourself. Hence why I said it is 'Fools Self Driving'

Is this you as well? [0]

[0] https://news.ycombinator.com/item?id=32559771


No offense intended. I thought you were saying FSD meant that the car was supposed to drive itself, it just doesn't work very well (yet?). Where the joke was saying that FSD always meant that the human is the "self" and therefore FSD already works as intended.

Yes [0] was me as well. Apparently the poster was not being sarcastic.


Cool. Can anyone chime in on how this compares to other ML SoC/ASICs? I know many places are going hard on general purpose GPUs, but I’d imagine ASIC based supercomputers (like Google TPU) are the way to go forward.


This is a crazy oversimplification, but let me take a shot. The difference from the others is easily a novel-sized discussion.

Most training SoCs are focused on building something the size of an NVIDIA GPU, but designed for ML rather than general purpose GPU HPC compute (FP64) plus ML. Often those accelerators today have a few types of models they are optimized for. NVIDIA is the baseline, and so the competitors are looking for areas where they can get a large boost at a lower cost with something about the size of an A100/H100.

Cerebras is perhaps the biggest exception with its WSE-2, a wafer-sized chip. Having the wafer-sized chip means that Cerebras does not need to go to higher-latency, higher-power off-package interconnect as frequently, because its chip is 50x larger. In turn, Cerebras drives performance and cost savings by not needing NVLink4 NVSwitches / InfiniBand.

Tesla's Dojo Tile is 25 chips, each roughly GPU-sized, in a single package, with die-to-die communication facilitated by the base tile, and is then built up into scale-up units. Tesla has also focused on the interconnect and the pipeline feeding the D1s and Tiles.

Ultimately, I think it takes something beyond "solution X saves 30% over NVIDIA in performance/$ on these workloads" to survive. NVIDIA has a massive software ecosystem and can handle more types of tasks than some of the other AI accelerators, going beyond just training to other parts of the data prep and movement pipeline. NVIDIA extracts high margins from this work, which is why some competitors effectively offer "it costs less and on some problems can be faster" architectures, but Tesla, Cerebras, Google, and a few others have another level of differentiation.

Nothing is perfect, nor was that explanation, but just a high-level view of why the technology featured is impactful.


It's a very focused and impressive effort. Training from SRAM can be one to two orders of magnitude faster than training from DRAM.

Modern GPUs, even with some process advantage (and H100 isn't shipping yet), strive too hard to be general-purpose processors, and that caps their peak deep learning performance.

Every serious player will have to make or buy their own TPU.
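The rough arithmetic behind the SRAM claim, with ballpark latencies of my own (illustrative round numbers, not measurements of any specific part):

```python
# On-die SRAM hits in a couple of nanoseconds; a full off-package DRAM
# access is on the order of 100 ns once every cache misses.
sram_ns = 2
dram_ns = 100

speedup_per_access = dram_ns / sram_ns
print(speedup_per_access)  # 50.0 -- squarely in the "one to two orders of magnitude" range
```

Bandwidth and power tell a similar story, which is why both Dojo and Cerebras keep working sets in on-die SRAM.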


They'll be at a process disadvantage. D1 is allegedly TSMC 7nm, as per last year's information (https://www.tomshardware.com/news/tesla-d1-ai-chip).

An ASIC can strip out features they don't need and save some space. But a good chunk of modern GPUs are memory-controllers, registers, and SIMD-cores. And modern GPUs (both AMD's MI250x and NVidia's A100) have 16-bit matrix multiplication units (aka: Tensor cores). Once we factor in the process disadvantage, I'm not sure if the D1 will be as competitive as they hope.

Tesla's hope is that their D1 chip has more 16-bit matrix multiplication cores than the NVidia / AMD designs. But A100 is quite solid, and NVidia Hopper has been announced at HC34 (aka: NVidia's next generation).

https://www.nvidia.com/en-us/technologies/hopper-architectur...

-------

Most of this presentation on the Tesla Dojo is about the interconnect system. Alas, NVidia's on like the 4th (or was it 5th?) generation of NVlink, available from their DGX servers (and I'm sure a Hopper version will come out soon).

AMD's not far behind, also with a lot of good presentations from HC34 this year that point out how AMD's "Frontier" supercomputer has huge bandwidth. In particular, each MI250X GPU is a twin-chiplet design (two GPUs per... GPU), with 5 high-speed links connecting it to other GPUs. There's a reason why Frontier is the #1 supercomputer in the world right now, in both absolute double-precision FLOPS and in green double-precision FLOPS-per-watt.

NVidia's Hopper will be hitting 4nm. AMD's MI250X is 6nm. That means the D1 chip has less than half the transistors in the same area compared to NVidia.

> but I’d imagine ASIC based supercomputers (like Google TPU) are the way to go forward.

Only if you keep up with the process shrinks. 7nm is getting long in the tooth now. All eyes are looking forward to 5nm, 4nm, and even 3nm designs (now that Apple is the customer of TSMC's 3nm node).

-----------

That being said, if the 7nm node is cheaper, maybe this exercise was still cost-effective for Tesla. As the newer nodes obsolete the older nodes, the older nodes become more cost-effective.

Cost-efficiency is less popular / less cool, but still an effective business plan.

> but I’d imagine ASIC based supercomputers (like Google TPU) are the way to go forward.

The issue is that it probably costs hundreds of millions of dollars to design something like the D1. Sure, the marginal cost of mass-producing the chip afterwards is low, but chips have stupidly high startup costs (masks, engineering, research, etc.).

GPUs on the other hand, are more general purpose and are applicable to more situations. So you can sell the GPU to more customers and spread out the R&D costs. In particular, GPUs capture the attention of the video game crowd, who will fund high-end GPU research just to play video games.

Much like how Intel's laptops allow servers to share the R&D effort, so too does NVidia's consumer GPUs share research/development costs with their high end A100 cards.


Nvidia's stuff is good, but it's pretty high margin. They don't give access to it for cheap, and in the last five years, the cost per unit of performance has been nearly flat. They're also more generalized than Tesla needs. Performance advantages from process shrinks have also stagnated. A good time for a custom approach.


NVidia is claiming 1,000 TFLOPS of 16-bit tensor compute on Hopper: https://developer.nvidia.com/blog/nvidia-hopper-architecture...

While Tesla is claiming less than 400 TFLOPS on D1.

So yeah, the claimed numbers for NVidia's GH100 / Hopper GPU are roughly 2.5-3x the D1's. Which is no surprise, because when your transistors are less than half the size of the competition's, you can easily have 2x the performance on an embarrassingly parallel problem.

--------

Note that the A100, released in 2020, offers 312 TFLOPS of 16-bit tensor matrix-multiplication throughput. Meaning the D1 chip is barely competitive with the 2-year-old NVidia A100, let alone the next-generation Hopper.

And note that NVidia's server-GPUs (like A100 or GH100) already come in prepackaged supercomputer formats with extremely-high speed data-links between them. See the DGX line of NVidia supercomputers. https://www.nvidia.com/en-us/data-center/dgx-station-a100/

--------

You can't beat physics. Smaller transistors use less power, while more transistors offer more parallelism. A process-node advantage is huge.
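Putting the claimed 16-bit numbers above side by side (vendor claims, not measurements; the exact D1 figure of 362 TFLOPS is my reading of Tesla's AI Day material, since the thread only says "less than 400"):

```python
# Claimed peak 16-bit tensor throughput in TFLOPS:
# H100 ~1000 (with sparsity), A100 = 312, Dojo D1 ~362.
claims = {"H100": 1000, "A100": 312, "D1": 362}

ratio_vs_h100 = {name: round(claims["H100"] / t, 2) for name, t in claims.items()}
# D1 trails the claimed H100 figure by ~2.8x (large, but not an order
# of magnitude) while slightly exceeding the two-year-old A100.
```

Peak TFLOPS is only one axis, of course; the Dojo presentation spends most of its time on the interconnect precisely because sustained utilization is where these claims get decided.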


The transistors aren't always literally smaller with each process node, and there are negative quantum effects that also occur as you shrink.

But anyway, you dodged my entire point about the cost-to-performance ratio by looking just at performance. If NVidia is insisting on pocketing all the performance advantages of the process shrink as profit, then it still makes sense for Tesla to do this.


> But anyway, you dodged my entire point about cost-to-performance ratio by looking just at performance

"Dodged" ?? Unless you have the exact numbers for the amount of dollars Tesla has spent on mask-costs, chip engineers, and software developers, we're all taking a guess on that.

But we all know that such an engineering effort is in the 100-million+ project size or more. Maybe even $Billion+ size.

All of our estimates will vary, and the people who work inside of Tesla would never tell us this number. But even in the middle hundreds-of-millions, it seems rather difficult for Tesla to recoup costs.

-------

Especially compared to, say, using an AMD MI250X or Google's TPUs. (It's not like NVidia is the only option; they're just the most complete and braindead option. But AMD MI250Xs have tensor cores as well that are competitive with A100, albeit missing the software engineering of the CUDA ecosystem.)

------

Ex: A quickie search: https://news.ycombinator.com/item?id=26959053

> For 7 nm, it costs more than $271 million for design alone (EDA, verification, synthesis, layout, sign-off, etc) [1], and that’s a cheaper one. Industry reports say $650-810 million for a big 5 nm chip.

How many chips does Tesla need to make before this is economically viable? And for what? They seemingly aren't even outperforming the A100 or MI250x, let alone the next-generation GH100.

What's your estimate of the cost of an all-custom 7nm chip with no compiler infrastructure, no software support, and 100% manual software built from the ground up with no previous ecosystem?
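One way to frame the economic question is a break-even sketch. Everything here is hypothetical except the $271M design figure quoted above; the per-chip saving is a pure assumption for illustration.

```python
# Amortize the one-time design (NRE) cost over chips produced.
nre_dollars = 271e6            # the 7nm design-cost figure quoted above
assumed_saving_per_chip = 10_000   # hypothetical $/chip saved vs. buying merchant GPUs

break_even_chips = nre_dollars / assumed_saving_per_chip
print(int(break_even_chips))   # 27100 chips before the custom design pays off
```

Move the assumed per-chip saving up or down and the break-even count scales inversely, which is why the answer hinges entirely on numbers Tesla will never publish.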


I wonder what the current owners of Tilera and their series of Tile processors think about Tesla's naming here.


Proper noun?


I wonder how this compares to the other revolutionary supercomputer: Power Mac G4


They recycled a lot of material from last year's presentation. Tesla is the only company you really have to say this about: there is a non-negligible probability that this thing doesn't exist. There are no published results and we aren't seeing the mind-blowing pace of FSD improvements that Musk promised us in 2020 when Dojo 1.0 was only a year away.


Musk promising FSD by the end of the year - year after year - is one major thing that tarnished the Tesla image in my mind. I now assume that he has to believe it in order to avoid lawsuits. Maybe the same goes for the computing infrastructure: they need to build it to show that they honestly believed it could work, even though they have failed to deliver on their promises many times.

That said, they do make progress and the real reason why it's slow is probably because the problem is actually really hard and nobody has found the fast way to the solution yet.


Musk is absolutely the reason I would never consider buying another Tesla, having just sold mine.

Dude just straight up lies. He may not realize he's lying, but he lies constantly.

Edit, because I'm getting downvoted. Here's some examples: self driving, battery swaps, robotic snake chargers, cybertruck windows, Bitcoin won't be converted to fiat, starlink speeds will improve, he will sell his home, first mars mission 2024, Tesla solar roofs, power packs at every supercharger, brake pads on Tesla cars will never need to be replaced, the gigafactory will be 100% renewable powered by 2020, fixing the Flint water crisis, making bricks from boring company waste, founding a media credibility organization, and probably a TON more.

Oh man, I forgot he took money to fly tourists to the moon.

There are so many lies he's told. So many.


I can understand some of these 'lies', but most of them are not really lies. I can understand being angry about self driving; most of the rest of it is pretty absurd.

Most of these are simple changes in strategy, mostly good choices regarding internal investment and roadmap. Others are research projects that were never promised to be products. I really don't understand how anybody can be angry about any of those.

Some of these are products, but apparently they are not as perfect as they could be, according to you. This is a pretty absurd definition of 'lie'.

> first mars mission 2024

> Oh man, I forgot he took money to fly tourists to the moon.

This is maybe the most ridiculous thing I have ever read. They are experiencing delays on one of the most difficult engineering projects ever.

I guess we should execute him for not having flown people around the moon yet.

In aerospace, even simple LEO rockets often launch late; it's normal, and customers know it's a possibility. Contracts do handle these cases.

Most of this boils down to Musk saying 'this is our current plan' and then people get angry when plans change. I don't understand why. Did you personally sign a contract with Tesla for a battery swap station or something?

I for one really like getting updates on what SpaceX/Tesla is planning or working on. But I don't get upset when they change it. When I see a snake charger I don't go 'oh, they promised this and it will be in my garage in 3 months'. They have absolutely no obligation to me.


> Did you personally sign a contract with Tesla for a battery swap station or something?

I've found that a surprisingly large number of people are really committed to the idea that battery swapping is the only way that EVs could work. They are personally offended that Tesla abandoned it.

But doing battery swapping well is much more capital intensive than their supercharger strategy was. They could never have afforded it on their own anyway.


And people act like Tesla scammed California by not rolling out the technology widely. But this was never a requirement of the grant.

The whole point of such grants is to figure out if it's a commercially viable solution or not, and it wasn't. Or at least not the best one.


Or, they take both positions simultaneously: the beta was rolled out recklessly, but also they haven't delivered what they promised. Turns out self-driving is harder than people thought - and a few upper middle class folks like myself dropped $3k on something that isn't that useful. This occurs... millions of times a day with all sorts of products. What an absolute tragedy?

I'm actually glad they they've decided to slow down iteration on the self-driving stack - in the early days the self-driving would change behavior from patch to patch - it was extremely difficult to get an idea of what it would be thinking. It's obviously not the product that we all wished it was - but... does any product in the world hold up to consumer imagination?

Apple promises that the newest iPhone is a "new superpower". Am I supposed to believe that? Does anyone believe that? It never fails to crack me up that the folks most upset about Tesla's self-driving promise tend to not be customers and vow that they never will be. What in the world is so upsetting about a company over promising and under delivering? Are these people just constantly boiling with rage? Have they never experienced being a consumer before?


You were scammed for $3,000 and you like it? I think you need to cultivate some self respect.

These days Tesla wants to con people out of $15,000:

https://www.thedrive.com/tech/even-tesla-fans-think-fsds-pri...

Why should anyone "consume" this "product" when it simply doesn't work as claimed and doesn't meet any of Tesla's self declared deadlines?


I don’t like it - I regret the purchase. Don’t you ever buy things you’re excited about only to find out it’s not what you hoped?

I’m just not furious is all - and not nearly as furious as folks who… didn’t buy the product. Me being disappointed in a cool toy is hardly a problem worth discussing.


If I had been "scammed" out of $3k for "FSD", I'd be perfectly happy right now too. The existing highway features are worth about $3k to me.

I wouldn't pay $6k for EAP or $15k for "FSD" though.


I feel like there is some ambiguity here. Tesla only allowed a limited number to use it and there are some signs that they weren't serious about it. They may have taken advantage of a loophole to gain more than they credibly should.


“He may not realize he's lying, but he lies constantly.”

You have to know what you're saying is untrue for something to be a lie. Otherwise it is simply called being wrong.


When you keep being wrong in the same way for 9 years straight then it graduates from "wrong" to "lie":

https://jalopnik.com/elon-musk-promises-full-self-driving-ne...


Agreed. And I own a Tesla and am in the “FSD” beta. It’s nowhere near ready. There is absolutely no way in hell it was even a thing when he first started promising it given the current state.

And there's an incredibly small chance this system will actually work given the limited sensor set in the car, especially with the removal of the radar (which is not them "cleaning up the data"; it is entirely due to supply chain issues).

It’s a great car, and I applaud them for making various major advances in the automotive space including the push for EVs and OTA updates. But FSD is a pipe dream and won’t be anywhere near full self driving in the next decade. And other manufacturers are quickly catching up to the current public feature set.


Curious... did you sell your Tesla because he lies? or for other issues? What replacement car/company did you decide to go with? Volkswagen?


That's the tough thing. Tesla still has the early mover advantage in the EV space and despite all the dumb stuff, most other EVs can't yet compete in the basic driving experience for the price (assuming no FSD).

I just want a Toyota Corolla or Camry-style car, for a decent price, that is electric. The i3 was kinda close to what I wanted in spirit, but they had to make it look all "tech" and "future-y".


Not a real answer, but I got a RAV4 Prime. It's a hybrid, but I never engage the engine. I've driven 6,000 miles on two tanks of gas.


The way I would put it is that he's been attempting to erect a reality distortion field to distract from the fact that it will take much longer than he predicts to achieve some of his more distant goals. And that RDF basically means promising things that can't be delivered, for some time.

I almost bought a Tesla a year ago based on the idea that FSD was going to work at some point in the future, but after Elon announced some last-minute hardware changes to my model's radar (removing it), and I read about all the steps you should perform at pickup, I went and bought a Toyota instead.


Dear Moon was supposed to be 2022. But he was supposed to land a Dragon capsule on Mars in 2020, and before that 2018, plus passenger rocket flights from New York to Shanghai for between the price of a coach and a business class ticket by 2028 (he first made the claim in 2018, and later doubled down on the 2028 date around 2020).

https://www.theverge.com/2017/2/17/14652026/spacex-red-drago...

Starlink satellites were supposed to have laser interlinks from the beginning and enable low latency multiplayer gaming around the world. The Starlink latency is pretty bad for gaming right now and is variable.

The battery swap stations were a bigger scam than you might remember: California gave them almost a billion for "delivering" them.


>California gave them almost a billion for "delivering" them.

Source?


Edward Niedermeyer, "Ludicrous", Chapter 9, has a somewhat lower estimate:

“In 2013, California revised its Zero Emissions Vehicle credit system so that long-range ZEVs that were able to charge 80 percent of their range in under fifteen minutes earned almost twice as many credits as those that didn’t. [...] By demonstrating battery swap on just one vehicle, Tesla nearly doubled the ZEV credits earned by its entire fleet even if none of them actually used the swap capability. [...] For the 2015 to 2017 model years, CARB created a new rule requiring Tesla to actually document a certain number of battery swaps to prove that the capability was actually being used. This development just so happened to coincide with Tesla’s half-hearted “pilot program” and its subsequent decision that its customers had no interest in fast, convenient battery swaps. [...] By exploiting CARB’s fast-refueling rules, Tesla appears to have earned as much as $100 million in additional revenue by demonstrating and hyping a system it seems to never have intended to commercially deploy.”


The amount earned is unclear, but California was giving more credits to vehicles based upon battery swap capability: https://www.thetruthaboutcars.com/2015/03/tesla-battery-swap...


Half the things you mentioned happened. And most of the rest of the things you mentioned were never planned to happen (i.e. simply thinking out loud).


He lied about Hyperloop solely to convince CA to cancel HSR projects.



He is the modern-day pied piper telling his devoted fans to buy his Fools Self Driving (FSD) contraption, making lots of false promises so that he can keep the scam running and keep his customers believing his lies that it will be Level 5 'real soon'™ via these so-called 'updates'.

So far with this Fools Self Driving deception:

    Claim 1: 'Full Level 5 Autonomy by end of 2019' (2019) [0]
    Reality: As of this year 2022, it is still Level 2.

    Claim 2: '1 Million robo-taxis on the road by the end of 2020' (2019) [1] 
    Reality: As of this year 2022, ZERO Tesla robo-taxis on the road.

    Claim 3: 'Tesla's FSD tech will have Level 5 autonomy by the end of 2021' (2021) [2]
    Reality: As of this year 2022, it is admittedly Level 2.

    Claim 4: "I would be shocked if Tesla does not achieve FSD that is safer than human drivers this year" (2022) [3]
    Reality: Clearly it still wasn't any safer as of 8 months ago. [4]
So even if we have given them more time since those claims in 2019, nothing has changed.

Not only was it admitted to be Level 2, [5] still requiring the driver's full attention with their eyes on the road, but they have continuously raised the prices on a system that clearly doesn't work as advertised in order to get customers to FOMO into purchasing it.

A complete scam of a contraption that can only be perfectly described as a 'Fools Self Driving' system.

[0] https://www.motortrend.com/news/tesla-autonomous-driving-lev...

[1] https://www.engadget.com/2019-04-22-tesla-elon-musk-self-dri...

[2] https://www.cnet.com/roadshow/news/elon-musk-full-self-drivi...

[3] https://electrek.co/2022/01/31/elon-musk-tesla-full-self-dri...

[4] https://news.ycombinator.com/item?id=32401444

[5] https://www.news18.com/news/auto/teslas-full-self-driving-cl...


Don't forget that Tesla released a doctored video claiming the problem was solved, past tense, all the way back in 2016.

I do not understand how they continue to get away with it.


My Autopilot drives me on the highway just like he said it would. That was what he predicted about 5 years ago. FSD is taking longer, so what. It's progressing for the public far more than any other attempt.


[flagged]


??? Why keep bringing that up? The sub wasn't needed because of fortunate good weather and less cave flooding than expected, and the "advisor" who never even dived was a middle-aged white dude in SEA.


All of that is irrelevant to Musk calling the man a “pedophile” with zero evidence.


Pedophile was not the exact insult used, so why do you have it in quotes as if it were spoken verbatim?


You left out all the important details. Not only did he say "pedo guy" specifically to implicate the guy as a pedophile, he hired a sketchy guy to investigate him.

Also: """In emails to Buzzfeed News in September 2018, Musk also called him a "child rapist" and accused him of moving to Chiang Rai for a "child bride who was about 12 years old." He added that he "f---ing hope[d]" Unsworth would sue him. """

The person we're talking about (Unsworth) was the known expert on that cave, who was handling an emergency situation. Elon inserted himself into the situation unnecessarily, and when he was rebuffed, he resorted to saying some very serious things and took unwise actions (https://www.businessinsider.com/elon-musk-convicted-felon-in...)


I think the most important part was that Elon tried to feed the child-bride story to BuzzFeed News "off the record". https://www.buzzfeednews.com/article/ryanmac/elon-musk-thai-...


The sub was an insane idea by the account of all involved experts. Do you really think that being a middle aged white dude in SEA is enough to warrant pedophilia accusations?


> The sub was an insane idea by the account of all involved experts.

Please provide a source. The only person who ever commented on it, as far as I know, was not one of the main divers involved; it was a British guy who happened to be there as a loosely associated consultant, and by the tone of his comments, he seemed mainly angry that Musk got media attention.

The SpaceX engineers developed the sub in coordination with the diving team for one specific contingency case, and were happy when the sub didn't have to be used.

Then after that, one of the consultants who apparently doesn't like Musk started giving interviews, and because the media didn't really have anything to report about the actual rescue, they played up the drama and tried to find anybody willing to comment.

So I guess we are angry here that SpaceX invested a fair amount of resources to try something that could potentially help.

The comment by Musk was clearly stupid, but beyond that I don't really see what is wrong with what Musk/SpaceX did. The best case one can make is that it took attention away from what was important; OK, but that's not really in their control.


> Never forget the cave rescue ready submarine and pedo accusations.

I never forget that people like yourself continue to bring up this nonsense. He tried to help save the kids, for goodness' sake. All he gets is ridicule for trying to help. You people are insufferable.

Did no one ever teach you "don't look a gift horse in the mouth"?


>He tried to help save the kids for goodness's sake.

He tried, very loudly, to take over. He had absolutely no relevant knowledge or experience. He came in with a ridiculous solution that would kill everyone involved and refused to back off. When called on it, he responded with slander and harassment. With friends like this, who needs enemies?

For the record, I agree that he doesn't deserve ridicule. He deserves 50 years in prison. But that's for the FSD scam, not for the "pedo guy" incident.


> He tried, very loudly, to take over.

That is completely ahistorical.

> He had absolutely no relevant knowledge or experience. He came in with a ridiculous solution that would kill everyone involved and refused to back off.

He was requested, by the divers, to assist in any way possible. He was never asked to "back off". And what makes you think his team had no relevant knowledge or experience? You think being able to dive is that rare of a skill?

> He deserves 50 years in prison.

Good lord. It's people like you who are the reason progress freezes.


I half like/admire Musk and am half disappointed he's so thin-skinned. It wasn't a gift horse; it was a toddler having a tantrum, and doing something like that (saying the things he said) reflects poorly on his character.


I don't see them making progress. They have gone backwards. A few years back, when they still used radar, they had a ho-hum suite of driver assistance features. Now they have nothing I would trust.


+ the robot guy dancing in spandex


I don't understand why they need this. Tesla has been selling fully-autonomous cars with L5 automation for years. They totally cracked self driving cars back in 2016, to the extent that "the person in the driver’s seat is only there for legal reasons", and have had 1,000,000 fully autonomous robotaxis on the road since 2020. What work remains to be done?

Still, it is good to see powerful hardware being put to good use for once. I understand it is silly to feel sorry for inanimate objects, but it saddens me to see silicon squandered by some manchild at a national lab working on a dead-end vanity project when it could be crunching numbers for Tesla or—better yet—mining Bitcoin.



