He seems to open up more when talking to people he considers "on his level", or at least deeply interested in what he is interested in. Take, for example, photographer and journalist Tim Dodd. Since Tim has a huge passion for rockets and actually wants to dig into the details of rocket engine cycles, manufacturing scale-up - all the stuff Elon finds interesting - he really opens up.
That's actually pretty common among those "gifted with mild ASD/ADHD" types, they can't be assed to talk about anything that doesn't pique their interest. I struggled with it a lot until I learned to be more social with other folks' topics of discussion. I think Elon is the logical endpoint of what happens when you have zero pressure to socially accommodate.
It does seem like he's increasingly emboldened to act an ass since being crowned world's richest man.
On the other hand, he says outright stupid things all the time, and it's only when he gets into a topic you know a thing or two about that you realize how wrong he often is.
One thing that almost everybody on HN should be able to judge as completely wrong is his claim about "L5 autonomy very close / later this year" [1].
L5 autonomy is the equivalent of the halting problem. L5 is a goal that can't be achieved [2], just like no program can be written that determines if the input program will ever terminate. [3]
So what to make of this, if this apparently smart guy says entry-level stupid things?
Edit: Since so many replies here ask why I compare it to the halting problem: that comparison is invalid, as many of you pointed out. My reasoning was not in a strict mathematical sense, but more like this: even experienced humans can't drive in every condition. There are situations where you just need to stop. L5 autonomy will only work if we create AGI (https://en.wikipedia.org/wiki/Artificial_general_intelligenc...), so that the system can observe itself and think about itself. An AGI might be possible (in the far future), whereas the halting problem is mathematically impossible. Thanks for pointing this out.
The difficulty of creating L5 autonomy and the provable impossibility of the halting problem are not comparable at all.
There is absolutely nothing fundamental that makes L5 autonomy impossible, while the halting problem is provably impossible, as normally formulated.
I don't know how close or far away L5 autonomy is, but it's definitely theoretically reachable, while the halting problem is always going to be impossible.
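For anyone wondering why it's "provably impossible": the standard argument is a short self-reference trick. A rough Python sketch, purely illustrative - `halts` here is the hypothetical decider, and the whole point of the argument is that it can't actually be written:

    # Suppose, for contradiction, a perfect decider existed:
    # halts(prog, arg) returns True iff prog(arg) eventually terminates.
    def halts(prog, arg):
        ...  # hypothetical; cannot actually be implemented

    def contrarian(prog):
        # Do the opposite of whatever halts() predicts about prog run on itself.
        if halts(prog, prog):
            while True:   # predicted to halt -> loop forever
                pass
        else:
            return        # predicted to loop -> halt immediately

    # Now consider contrarian(contrarian): if halts() says it halts, it loops;
    # if halts() says it loops, it halts. Either way halts() is wrong,
    # so no such decider can exist.

None of that self-referential machinery maps onto driving, though - "handle the conditions a human can handle" is an engineering target, not a decision problem about all possible programs.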
Halting problem comparison is silly, but L5 autonomy as defined ("all conditions") is probably not attainable.
A car that would complete all the trips I'm willing to complete, and some more, but refuses under some circumstances (e.g. whiteout blizzard conditions) would be an L4 vehicle under the ISO definition.
When we actually have cars that are sufficiently close to what humans can do in terms of range of conditions, I suspect the level 5 definition will be updated to be more like:
Can drive in approximately the same set of conditions as a human (professional driver?), possibly being unwilling to drive in certain (seldom encountered) conditions where most humans would, but offset by being willing to drive in conditions few people would.
Importantly, the car must avoid completely giving up on driving mid-trip (as opposed to deciding "too dangerous, turn around and go back at the next opportunity"), unless conditions are comparable to those in which a human would give up mid-trip (which are pretty limited, as humans seldom just stop and give up on the road unless the car is broken down or fully stuck. At worst, in some really bad conditions, humans may pull over to wait for the storm/extreme-fog/etc to blow over.)
Or perhaps at that point we won't need the definition anymore. It becomes a bit arbitrary and market-y at that point. Assuming that we don't end up with a single vendor.
"My BMW still drove during the snows in February, but my neighbor's Tesla said it wouldn't drive 2 of the days." "Yah, but mine doesn't insist that the windows are perfectly clean and pristine before starting a trip!"
We probably should never have had level 5, but split L4 into a couple levels: heavily geofenced/restricted vs. relatively unlimited applicability with some restrictions.
L5 autonomy is absolutely a long way off (if ever practically achievable) and Elon is a bullshit artist, but I don't understand your comparison to the halting problem. Could you elaborate?
I interpret your comparison to the halting problem to mean that even if we ignore the feasibility of a particular solution, it is literally an impossible problem to solve.
My understanding of L5 is driving without external human intervention.
In one sense we already have the technology to do that - our brains. It would never be feasible (or ethical), but if we could put a human brain inside every Tesla, wouldn't that achieve L5?
I think equating it to the halting problem is silly, but the ISO levels are a bit of a mess.
* Level 5 is a vehicle that never needs a human to take over and can drive in "all conditions". Humans do not meet this driving standard.
* Level 4 is a vehicle that, within a set of vehicle-defined conditions, can drive without a human ever having to take over. It refuses to drive unless the conditions are met.
So a level 4 car could be a vehicle that can drive in a 1 block area of residential streets only... or something that can drive in way more conditions than I could safely attempt, but refuses to drive in say, whiteout blizzard conditions at night.
> In one sense we already have the technology to do that - our brains. It would never be feasible (or ethical), but if we could put a human brain inside every Tesla, wouldn't that achieve L5?
It would achieve L5 of a sort, but people don't usually mean "L5 autonomy" in the sense of "capable of crashing the vehicle deliberately to protest the horror of their existence".
I mean, he has a pretty clear financial incentive to say stuff like this given that he runs an auto manufacturer that heavily invests in vehicular automation.
Not saying he's right, but find me a company that doesn't polish their own turds, even just a little bit. Everyone trying to sell something is painting the best picture of their product possible.
Yeah, but even this doesn't seem very thoughtful. He recently acknowledged his predictions around full self-driving were wrong, and said it was because he'd failed to appreciate that it would essentially require artificial general intelligence.
Then he claimed we'd have that solved by 2023.
Not sure what the upside is of being known for repeatedly making non-rational predictions and being wrong.
> Not sure what the upside is of being known for repeatedly making non-rational predictions and being wrong.
It can significantly move markets in the short term, which he seems to have become adept at doing over the last few years. And unfortunately, the stock market isn't really interested in long-term thinking, it's largely about breaking news and twitter rumors nowadays.
Well, he's definitely mastered market manipulation. I take your point there.
But, the degree of absurdity across predictions undermines even that strategy over time. I think he does himself a disservice here. He should get out of his own way and allow his actual achievements to speak louder than irrational predictions (and other distractions).
I think he knows intellectually that a lot of his claims and predictions are bogus, but he also knows that his fanboys are all over it. Look at the number of preorders for cars that - as it turns out - were still years away, if they arrive at all (thinking of the new roadster, pick-up truck and big trailer truck at the moment). Look at Tesla's stock, which is based entirely on hype and less so on actual product, market share or financial results. Look at how many companies and universities around the world started developing a Hyperloop just because he mentioned it - I don't even know if there was like a grant for it or some other financial incentive.
Intellectually everyone can deduce that a long-distance hyperloop is science fiction, ridiculously expensive, complicated, and will likely face long outages at any incident (see the Channel Tunnel, but as if it were 10-100x as long and a vacuum). But because Musk says it with confidence, an army of fans jumps onto it.
Unfortunately you are doing the very thing that you accuse Musk of doing. L5 autonomy is not formalized (nor do I think it is able to be formalized) to the extent that would permit a rigorous proof showing it is isomorphic to the halting problem.
Your claim conflates a nebulous, squishy, human goal with a formally and rigorously proven mathematical problem. The only support offered is links to wikipedia and news articles, none of which help connect the two in an equally formal and rigorous fashion.
I know what the halting problem is, and I had to study why it cannot be solved.
However, for L5, you just have a quote saying it doesn't work. We know it is mathematically possible for L5 to work because, well, humans perform at that level. We know that our vision, our ears, our hands and senses are enough input to solve the problem.
Do you have a direct connection between them or are you just using it as a metaphor for an unsolvable problem?
I mean, they already ship it, don't they? How do their customers square these statements with what they were sold and can enable with the flick of a button?
Or is this FSD in the sense of "my car can drive itself home after dropping me off" type of thing?
Hmm... While I don't see any evidence that L5 autonomous driving is near, I don't follow your argument that L5 is equivalent to the halting problem. Can you explain?
I am not convinced that L5 is fundamentally impossible (unless we posit that humans are also not L5 autonomous, which I suppose one could argue, as they are prone to driving errors). Granted I subscribe to Universality, and assume that humans are not capable of hyper-computation.
That Waymo CEO quote is about feasibility of fully autonomous driving in snow / rain. Seems unreasonable to even expect that. But I don't think it's as intractable as the halting problem. There's no formal proof against feasibility of L5.
But as far as Musk, yes he lies and isn't shy about it. It's shameless and overt. Perhaps he justifies it as being part of his job.
Will never be achieved? Only if humanity dies out quickly. If, literally, even the dumbest people can learn how to drive, I'm sure with enough time we'll be able to replicate that autonomously.
I watched those interviews on his Everyday Astronaut YouTube channel.
Musk looked bored with Tim, was often evasive, gave the appearance of wanting to get away from a fanboy.
The few times he would "open up" it was more like a recitation from someone to a disinterested audience — or as though Musk's mind was somewhere else, not really engaged or focused on the interviewer.
EDIT: Skimming the two-part interview again, Musk seems to switch between seemingly being engaged to not. Maybe it is because the interview went on really long and appears to be uncut.
That's charitable. That was a really long interview/behind-the-scenes. I also interpreted it mostly as him being tired (he normally works obscene hours, and around this time he was particularly burning the candle at both ends).
I think it's telling that he didn't tell Tim to scram, and that Tim even got that close a look at all. If he wanted to get away, he could have easily done so.
The disparity is even more obvious in the pressers when reporters ask typical reporter questions, vs when someone (often Tim, but there are others) asks something technical.
> The disparity is even more obvious in the pressers when reporters ask typical reporter questions, vs when someone (often Tim, but there are others) asks something technical.
Can you even imagine being Musk, running an EV company and a rocket company, and having to field questions from your typical journalists? Like that Q from a journalist about why the new image of the black hole at the center of our galaxy is so blurry: https://news.ycombinator.com/item?id=31353677.
One thing about that interview I've always wondered about: there was a part where Musk was talking about the heat shield and lamenting about progress, and then watched some guy bang on a heat shield tile with their hands for about 10 seconds. After that, he picks up his phone to make a call, then the interview cuts to a different scene entirely. I get the feeling that phone call was not a pleasant one and SpaceX asked that it be removed from the footage.
Later on when he was giving that update at Starbase he mentioned the heat shield and thanked someone for a "robust" shield. I couldn't quite tell if he was being sarcastic or not.
Not trying to make excuses, but Musk mentioned at the end he was suffering from pretty bad back pain too. Back pain while on your feet really sucks and can destroy a coherent thought.
Having said that, Musk is pretty much the worst communicator I've ever heard at the C-suite level. I hope/pray he's better when talking to direct reports trying to get his crazy ass ideas and timelines done. Those poor, poor people if he's not...
> That's actually pretty common among those "gifted with mild ASD/ADHD" types, they can't be assed to talk about anything that doesn't pique their interest.
Despite people in tech conflating ASD/ADHD and using them as some kind of weird bragging right (and excuse for not considering others), I don't know that there is any public information about his having either condition.
> He seems to open up more when talking to people he considers "on his level" or at least deeply interested in what he is interested in.
That's just called childish behavior, and despite its name it is common in a lot of adults, not just Musk.
But most adults who act that way can't get away with it. When it is paired with wealth and a megaphone as it is with Musk and others like him, not only can they get away with it, but it can be amplified by a mass following of people who wish they could get away with it, and live vicariously through them. That is basically how cults work.
He openly admitted to having (deprecated term) Asperger's syndrome on SNL.
> That's just called childish behavior,
That's pretty judgy, IMHO.
> But most adults who act that way can't get away with it. When it is paired with wealth and a megaphone as it is with Musk and others like him, not only can they get away with it, but it can be amplified by a mass following of people who wish they could get away with it, and live vicariously through them.
Let’s not diagnose him sympathetically without evidence.
He’s posting his cold brew pics to Twitter. Caffeine is a psychoactive substance that can foster manic behavior. Lack of sleep can create cognitive stability issues. Been there with both.
Who knows if he’s taken other things here and there as Mr Private Plane bounces around socializing.
Despite Twitter, how much of Elon’s life we don’t see is significant.