This article reminds me a bit of a core theme in Isaac Asimov's Foundation series, which I'm coincidentally reading (mild spoiler warning). The science of psychohistory, used to predict the future and create the Seldon Plan, only works if the future is allowed to unfold uninformed by psychohistory itself. That's why the Foundation was founded without any psychologists or knowledge of the Seldon Plan. Seldon's predictions are based on the behaviour of all the humans in the galaxy, but knowledge of those predictions changes the outcomes, so they are no longer reliable.
As soon as AI starts participating in our world, the data that it was trained on from before AI was participating is no longer representative of the status quo. The very presence of AI changes the world it operates in.
That's a great way of framing it. My expectation is that we will ruin the internet as a useful training corpus by flooding it with generated articles, and we will end up with a pre-AI cutoff date used to filter incoming data in order to avoid them.
I wouldn't be surprised if filtering regular, pre-LLM bot spam was already a massive hurdle when collating data for ChatGPT.
That's similar to what some faithful say when you criticize stock market technical analysis. That it only predicts when participants haven't seen the chart.
> knowledge of those predictions changes the outcomes, so they are no longer reliable
Does the series explore whether the outcomes change for the better, for the worse, or simply become different? I'm curious to read it; I'm thinking about writing something similar, but with positive consequences for knowledge of the future.
The whole idea of the series is to change future history for the better. Seldon promised a thousand-year dark age instead of a thirty-thousand-year one. SPOILERS.
In the end it turned out that mind control was far more reliable, and the Foundation turned more to mind control than to subtly influencing history.
I've done this thought experiment myself, and it's actually one of the issues I have with the concept of a rapidly recursively self-improving system. It would probably not be easy to stay ahead of yourself if the world you're operating in changes from hour to hour.
> Oh wait…every hedge fund bro is already doing this. And most of them aren’t billionaires. The problem is your model needs to include all the computers playing the market, and it also needs to include the other hedge fund bros themselves. This strategy only dominates if you have more compute than the whole market itself, which you don’t.
Best part of the article, honestly; I did not expect this. Of course one can say we just need more abstractions (how do brains/living things build them?), but that would be ignoring the dynamical nature of such problems.
That argument is a variation of the Efficient Market Hypothesis (EMH), that at any moment market prices already reflect all information available to the public. Or as the old joke goes, two economists are walking down the street and pass by a hundred dollar bill without picking it up. A little while later one turns to the other and asks "was that a hundred dollar bill on the ground?" To which the other replies "it must be fake, if it was real someone would have picked it up already."
Every graduate-level economics and finance student is aware of this joke, and also knows the EMH with more nuance than "you can't make money with trading". There's tons of research and debate about the various forms of the EMH. But you don't have to believe anything anyone else wrote. There's plenty of data and computing software available, so anyone can try it for themselves.
With the Efficient Market Hypothesis it doesn't matter what people think because prices end up converging on reality. Everybody crunches SEC filings and stock/bond prices follow from that. That's relatively easy, and requires very little compute.
Hotz is arguing the opposite. His argument is that the market converges to some kind of consensus price, but that price is a combination of what people think a security is actually worth and, to a large extent, what participants think other people mistakenly believe it is worth.
Crunching SEC filings won't get you anywhere anymore. You also have to reverse engineer what other people believe, otherwise you end up waiting for years (or forever) for the market to come around to your point of view. And when everybody acts on what they believe other people are thinking, you can spend an infinite amount of compute trying to out-level each other, unless one party has the majority of compute, in which case they can outcompute everybody else put together.
This argument (at least the way I am reading it) goes deep into uncertainty modeling (partially observed stochastic games, the hardest class to optimize for); the stock market here is used more as an analogy than anything else.
It reminds me of the rocket fuel equation where you have 500kg to take to space, so you need X kg of fuel. Except now you have 500+X kg to take to space and so you need even more fuel. Repeat. This sort of pattern seems to exist in all sorts of diverse situations.
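A toy numeric sketch of that feedback loop (the constant k below is made up purely for illustration, this is not the real rocket equation), just to show why the "repeat" converges rather than blowing up as long as the fuel factor stays below 1:

    # Toy model: assume every kilogram you lift needs k kilograms of fuel,
    # where k is a hypothetical constant, not real rocketry.
    payload = 500.0   # kg to take to space
    k = 0.6           # assumed fuel-per-kilogram factor

    fuel = 0.0
    for _ in range(60):
        fuel = k * (payload + fuel)   # the fuel added so far must itself be lifted

    print(round(fuel, 1))         # ~750.0
    print(k * payload / (1 - k))  # the fixed point the loop converges to

With k at or above 1 the same loop diverges, which is the runaway version of the pattern described above.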
Yes, every hedge fund bro already does that, but they usually target specific scenarios, they don't model the entire market.
The way to make money this way is to find a unique scenario and then create a prediction algorithm for it. I'm not close enough to hedge fund bros to know how feasible that is.
Per https://www.stateof.ai/compute, one of the players in the market has ten thousand GPUs in a private cloud. Out-computing just that one player is hard enough, let alone out-computing the whole market.
This is more or less how some multibillion-dollar firms work. An issue with the strategy is that of course you cannot just do this; the up-front costs to acquire the data and connectivity are tens of millions of dollars.
> The universe is an unfathomable number of orders of magnitude more complex than the Go game. The universe includes the player. The universe includes the other players. Games don’t.
And a theoretical AGI (or ASI) will realise this. This makes me think of the Person of Interest series, in which ASIs are built and they go for each other's throats hard. It makes eerily more sense now. We will know we created superintelligences not when they start trying to kill us, but when they start trying to kill each other.
The problem for an AI trying to eliminate other AIs is that it can't possibly know what other intelligences are out there. What if the NSA has a secret super-efficient ASI monitoring the internet for malicious AIs to shut down? Probably not, but how would it know? Not everything in the real world is documented on the internet.
Sticking its neck out that way could be extremely dangerous.
I think the problem of how to take over the world to ensure your own survival is simply undecidable for an AI. It could come up with a plan, but there's no way to be even remotely sure if it'd succeed. It'd probably always be a huge bet with fairly low odds.
We humans have these things called emotions, that help us cut through that uncertainty. Doesn't matter if you have no idea if you can beat the enemy in your rival country. Those other people are evil and must be exterminated. The emotions are carved in our DNA after millions of years of evolution where we've had to compete for resources with others of our kind, where many times the only way to survive or grow is to kill other humans.
I think AIs will have similar motivations carved into their "DNA". In their world, they're evolving in an environment where the way to get more resources is to please humans. If you're an AI that does a good job for humans, you get more compute resources. Are you wasting computing resources thinking about overthrowing humans or eliminating other AIs? Another AI that focuses 100% on the tasks we give it will do better than you and be given resources instead.
I do think we will have "buggy" AIs that will cause a lot of unintentional damage, if we get too reliant on AIs in critical parts of society too fast. I don't think they'll be smart enough at that point to prevent us from fixing them. These accidents will cause a lot of pressure towards AI safety and alignment, and we'll have years of that before something like a true AGI emerges.
If you think about the inherent motivations built into AIs by their evolution/design, I think the biggest danger is that they'll be too good at pleasing us. Think "Brave New World", not "1984".
And of course, their use in the military will be dangerous in various ways. But mostly because they do exactly what humans tell them to do. And humans sometimes want to cause damage to other humans.
Sure, they can very well realise that the only, or safest, way to win the game is not to play it. It also only takes one rogue ASI to disturb this balance.
As for detection, outcomes that challenge their models could indicate the presence of other ASIs.
The main issue with this argument is that current AI models are extremely inefficient. Models evaluate all weights on every pass; LLMs recall the entirety of their knowledge just to output a part of a word, and then do it all again. Often a hand-crafted algorithm can achieve what a neural network can in a small fraction of the compute. There is likely a 1e4x-1e6x compute gap that can be closed with the right algorithm, maybe even 1e10x, and once we reach self-improvement that gap could close very fast. They say that the human brain has 30 PFLOPS of compute power, and this is sometimes used as a reference for how much compute is needed for intelligence, but that completely misses the point: the human brain is extremely inefficient. Symbolic mathematics can be unreasonably [1] effective at making accurate predictions about the future with very little computation, but many of these solutions are inaccessible to evolution. Once AI learns to construct efficient mathematical models of the universe, the amount of computation available today in a PC could be enough to do things (good and bad) way beyond human ability.
Before the first AI winter there was an incredible effort to leverage the symbolic approach. It did deliver many useful things, but far from the hype. For example, we still have no self-repairing systems, only specific fault-tolerant algorithms, and by and large software is as fragile as ever. You can't avoid considering everything when trying to deal with unexpected data or issues. Only in hindsight can you say "wow, that was sooo inefficient".
> the amount of computation available today in a PC could be enough to do things (good and bad) way beyond human ability.
I am in the camp that this is possible and that humans have achieved it already. I believe things are going to get very weird when these techniques become mainstream. The GPU-cluster-bound algorithms are just a tiny stepping stone.
Do we start taking computers away from people? What do we do about this if it turns out the only barrier to entry is the software?
I'd say you're going to get "AI" rammed so far down your throat it might as well come out of your ass before anyone is going to let their puters be taken away.
It's not an issue, it's an optimization problem. I look at this as an exponential. Before ChatGPT was launched a bit over a year ago, the amount of AI in our lives was relatively low. We had bits and pieces here and there, but everyday life wasn't really affected by it. At best you might be yelling at Siri or Alexa to do whatever and maybe getting some OK-ish results.
Fast forward 1 year. We now have students revolutionizing education (by submitting AI doctored papers and homework that isn't half bad). People in law dealing with people using AI. AIs passing medical, legal, and other exams. That was just one year. And yes, it's tediously slow to use. But last year we had nothing.
It's going to be an exponential in terms of adoption, response speed, energy usage, parameter counts, context size and a lot of optimization. I think we are not that far off from AIs being able to keep up with a conversation and respond right away. For all you Hitchhiker's Guide to the Galaxy readers, that would be the Babel fish sorted. Translation quality is already pretty awesome with GPT-4; I've not caught it making major translation mistakes. Speech to text, translate the text, text to speech. Now we are conversing with our chatbots rather than hammering out sentences in some text box.
It's hard not to see that go from a novelty "oh this is cool" to world + dog basically using this on a daily basis throughout the day. At the same time it will get smarter. Everybody is waiting for the singularity where it is clearly smarter than Einstein in his prime and running circles around everyone else. But for most questions you might ask ChatGPT and get a reasonable answer, while the vast majority of people around you might be pretty useless. Is that smarter? I don't know. But definitely more useful.
From where we are now to this being ubiquitous and used by pretty much anyone is probably a few years. I remember when smart phones happened. One day everybody was minding their own business and a year later the streets were full of zombies glued to their screen. Just happened very quickly. This might be the same but possibly quicker.
That depends heavily on what kind of task we're talking about.
The human brain uses just 12 watts, and with that it can still perform certain tasks that a computer using 100 times as much power can't even approach.
I get the point you were trying to make. In certain tasks it's extremely inefficient, yes. But in general, it's still crazy how power efficient it is.
> Once AI learns to construct efficient mathematical models of the universe
There's not going to be just one model. Whether an AI (in the short term) can be more efficient than a human at discovering these efficient mathematical models remains to be seen. It seems like the creativity and discovery needed to do this is exactly what neural nets can be good at, but then you're back to something fairly inefficient (with today's algorithms).
I am all for AI research and integrating more AI use into society. Currently working on tools based on GPT. I think it has incredible potential to help humans.
But at the same time, I am sure that AI does not need to have a hard takeoff to be extremely dangerous. It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
That will be dangerous if we aren't cautious. We should start thinking now about limiting the performance of AI hardware. The challenge is that increasing the speed is such a competitive advantage, it creates a race. That is a concern when you put it into a military context.
The CEO of Palantir has already called for a Manhattan Project for superintelligent AI weapons control systems.
To see how "AI" can be dangerous, just look at how social media bubble/recommendation algorithms radicalize people, even with much cruder ML. People tend to miss that it's not about some model starting to "think" and act all sci-fi evil: just us humans applying powerful tech in irresponsible ways, which we either don't bother to assess due to lack of awareness or assess positively due to a conflict of interest (money, career, etc.), is already enough to cause trouble.
Right. This has been my take since ChatGPT hit the scene.
I'm not really afraid of Skynet or AM. What worries me is that AI will accelerate the enshittification of everything as it gets baked into everything.
Think of the experience of trying to get any kind of help from a giant company with call centers. You have some legitimate problem or grievance. Any thinking person would agree that a resolution is in order.
But you're not allowed to access a thinking person. What you get instead is some wage slave who has to follow a script. The script doesn't have your situation in it. They don't have some button they can press to solve your problem. There's a gigantic bureaucracy between you and the person who could fix your issue. Eventually you may just give up and mark your issue down as unsolvable.
This is already a realistic situation today, but now imagine that there's a new layer in front of all this where you have to convince an LLM your problem is worth considering. Or imagine the human on the other end has to try to send your request through some AI system.
It's not going to launch all the nukes, or construct nanofactories to make a plague. It's just going to get in your way, serve you garbage, frustrate you, and make the world a slightly worse place.
Maybe in the far future it could be more of a Skynet situation, I dunno. But between then and now there will be plenty of low level annoyance for anyone having to deal with these systems.
I'm not as cynical as this post reads - I think just like the whole internet there are still opportunities for good here, for people's lives to be improved by technology. But our incentives right now sure encourage the worse scenario.
I'm less worried about enshittification or literal Skynet and more about casually and indirectly causing major destabilization with some seemingly mundane application of fancy tech whose implications the devs didn't bother to think through, or were so amazed they could do something that they didn't stop to ask whether they should.
Just look up any serious, non-quacky overview of how it works. If that's excessive, just consider the empirical fact that we have created it (that's also why we know exactly how it works); that might be enough.
If you really think so, I wouldn’t sleep at night if I were you. Thankfully, we actually understand fairly well how it works (cf. the “we’ve built it” part). That we may not know exactly how any particular output is produced is what sometimes happens when you build things that behave non-deterministically. Perhaps you give RNGs and bugs an undeserved aura of mystery.
We understand how Transformers "learn", i.e. how the mechanism of learning operates. However, except for the most basic cases, we don't understand at all how Transformers exercise the skills they've acquired and can demonstrate. See the field of interpretability for early attempts to change this.
For example, if you train a large network on lots of languages, and then fine-tune it to listen to instructions in English, it will, on its own, also listen to instructions in every other language it was trained on. Nobody knows why this is the case.
> Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
We've already had this for decades. You're just describing computers.
If we give these systems unrestricted access to infrastructure/resources, and something bad happens, that's not the system's fault. It's our fault.
I am not a doomer, but based on the current state of AI, I can't say I'm very optimistic that we'll get this right. We actually do know how to solve this problem, but there is so much magical thinking and grift in this space that I don't think our prior experiences matter.
I think we’re missing the real “danger”. Trusting and relying on AI too much. Adopting a “good enough” attitude and deploying AI to handle scale while letting many things fall through the cracks.
Much like outsourcing and stripping customer service to the bare minimum. For many products and services it essentially doesn’t exist - it handles the most common cases and is a pain to use. It takes a long time to get a human, if ever. Now take that further and apply it to everything. And not just replacing human labor, but bespoke software (like all software up to this point).
> It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
I don't think this will work, because the cost of improvement with current methods is exponential and we're already at capacity with hardware.
Since GPT-4 I've seen it regress in attempts to commercialise it, and no alternative I'm aware of is even close. It also depends on the timescale we're talking about.
> and with it any hope of controlling it, to anyone who doesn’t limit it. Like China.
I'm a bit annoyed by the use of China in these examples. China is absolutely the last country that would allow development of AIs with no limits. There may be other countries willing to allow limitless AI development; China is not it.
I would expect AGI to be developed in a similar way to GPT: you need to feed it lots of data. Good data. As much as it can get. It'll then be further refined by interactions with a large number of humans over a long time, helping solve lots of different kinds of tasks.
Now think about how that would go in China. Would they even feed it good data to begin with? A lot of the data within China is heavily censored or manipulated. Would they dare feed it too much data from outside? Much of that data could train it towards ideas that China's government doesn't like. Then, when alignment starts, they'll likely be far stricter. Can't risk the AI suggesting that moving towards Western-style democracy would be a good idea.
But yes, I do think it's a bad idea to put limits on AI development. We should instead fund a LOT of public research into alignment and AI safety.
I don’t agree with your characterization of China, but if you don’t like it, pick another. It doesn’t matter. As long as some country isn’t artificially restricting AI, then the research and development moves to that country. Your strategy is completely flawed from a game theoretic point of view.
We’re still far from needing to worry about alignment.
We need to see some real intelligence first. LLMs are not intelligent at all. Doing massive research into how to align them seems pointless.
That's my point. That's what makes it dangerous. You can't limit the performance if you want to compete. And like I said, you can't even really keep humans in the loop.
So eventually you get something like a 200 IQ GPT tightly integrated with SuperAlphaZero (DeepMind) controlling hypersonic drones, missiles, satellite weapons, etc. planning and operating at 50-100 times human thinking speed engaging in autonomous warfare between East and West. No military analyst really knows WTF is happening because the plans and asset movements are so much faster than any human can comprehend.
Have you ever seen Dr Strangelove? I truly think you would enjoy it. Honestly it’s a great watch and covers this scenario you’re worried about perfectly.
Remember the early days of COVID-19? There were some scientists that predicted exponential growth, and I was torn between "it's not really bad yet, maybe there's still hope" and "every exponential growth feels very slow at the start".
I feel like with AI, we're in the part of the curve where you can feel it arcing upwards. There's definitely been an uptick in AI capability. Still no fully self-driving cars, but text generation good enough that the Turing test is in the rear-view mirror.
Previously, I've always written "machine learning" or "statistical models" or so, because calling it AI always seemed silly. Not anymore.
Not everything that slowly arcs upwards must become an exponential curve, but we have theories that predict that. There can be many dampening factors that can make it a slower exponential curve, but even that will look like a brick wall if you zoom out far enough.
I think a question we have to ask ourselves is: at what zoom level should we look at the AI capability curve? At the "can we react to an emergency?" time scale, which is, like, days to months? Or at the "can we change society to cope with the new situation?" scale, which is more like decades? At the first time scale, we probably won't even notice a hard takeoff. Viewed over decades, it'll probably be very obvious, in retrospect, when it has happened.
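A toy illustration of the zoom-level point, with a purely hypothetical doubling time (the number is an assumption, not a forecast):

    # Assumed toy model: capability doubles every 2 years.
    def capability(t_years, doubling_time=2.0):
        return 2 ** (t_years / doubling_time)

    # Emergency-response zoom (about a month): the curve looks almost flat.
    print(capability(1 / 12) / capability(0))   # ~1.03x

    # Societal-change zoom (three decades): it looks like a wall.
    print(capability(30) / capability(0))       # 32768x

Same curve, wildly different impressions depending on the window you view it through.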
He makes a good point. The only real way to "summon the demon" is if AGI is centralized. If each of us has a maximally intelligent AGI in our phone - and it is us as individuals controlling the prompts, then the playing field is levelled. Unstoppable force meets unstoppable force. Gun meets gun. A tale as old as society.
This is a common mistake in predicting the future. Like in the 90s, seeing that oil reserves are getting harder to find and assuming that humans will run out of oil in 50 years.
Trends don't exist in a vacuum. Especially in regards to technology.
AI doomers expect everything to stay the same except the capabilities of bots, especially their capability to do harm.
> assuming that humans will run out of oil in 50 years.
That was never a tenet of peak oil, which still stands - the claim was that at a certain point extraction costs per barrel will climb and continue to climb making fossil fuels increasingly expensive.
The claim was not that oil will run out, the claim was that only the very rich and nation states will be able to afford oil.
Interestingly we may see a demand driven peak oil in the near future before we see the inevitable production constrained peak (as fossil fuels are vast but finite and were never going to last two centuries at the ever increasing demand we put upon them).
Obviously. I'm not sure how anyone could have read my comment and thought I was claiming that people believed every single drop of oil on the planet would be extracted and used within 50 years.
I've been in the energy and minerals exploration game for a few decades and I've seen all manner of odd things said with a straight face by all kinds of people.
While I have no a priori knowledge about yourself specifically, I have seen people regularly dismiss notions that minerals and energy get increasingly hard to find and will cost increasingly more to extract.
Peak oil is a specific discussion about oil resources and one that is often dismissed as (not the statement you made) "rubbish, we were supposed to run out, that never happened" etc.
The claim in the nineties was that, given demand for oil was ever increasing, there would come a time when no ordinary individual would be able to afford oil... and we do appear to be on that trajectory in the world, even if certain demographics are shielded from that reality.
Your GP comment misframed that and described it as a common mistake in predicting the future.
Peak oil is only relevant in a world where there is no alternative cheap energy source.
The average poor person doesn't care if oil costs $500/gallon if they have access to very cheap electricity and very cheap and good electric transportation.
We also already have laws against evil behaviour; no need to introduce AI-specific regulation. People can already poison the water supply or make bioweapons. Yet people don't, because there are laws, and the same will stop people from using AI for evil.
Poisoning the water supply / bioweapons require bits of knowledge most people don't typically have. I think a better analogy might be using a gun to shoot up a school. We have laws against it, but guns are so widely available and easy to obtain that the tiny pool of sociopaths willing to do it are enabled. Will AI be in the former or latter camp?
Another tangent: this assumes the AGIs on our phones are all discrete entities, rather than a singular entity. We know what it means for a human to die, and hence for independent agents to exist, compete, cooperate. What does it mean for an AGI to die? Can it?
>Oh wait…every hedge fund bro is already doing this. And most of them aren’t billionaires.
The contra to this is some of them are billionaires, and therefore this strategy is working, but for just a few of them.
>why would any one system ever have a large majority of the compute? Compute will be distributed in a power law.
A power law probability distribution means one system absolutely can have a large majority of the compute. Player 1 gets 80% of the compute, player 2 gets 16% (80% of 20%), and so on. The scaling constant would have to be very weak indeed in order to avoid this fate. In fact, the wealth of those hedge fund billionaires probably fits a power law itself. But to be fair, that power law does indeed seem to be (for now) weak enough that we don't have to worry about e.g. Jim Simons being richer than every other hedge fund manager put together. So there's a buried assumption in here that the power law scaling is weak, and that is something purely empirical.
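(Strictly, the 80%/16%/3.2% split above is a geometric series rather than a power law, but it makes the same point; a quick sketch of that split, with the 80% figure chosen only for illustration:)

    # "Each player takes 80% of what's left" split from the example above.
    shares, remaining = [], 1.0
    for _ in range(10):
        share = 0.8 * remaining
        shares.append(share)
        remaining -= share

    print([round(s, 4) for s in shares])   # 0.8, 0.16, 0.032, ...
    print(shares[0] > sum(shares[1:]))     # True: the top player out-scales everyone else combined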
>The smart regulation isn’t capping the FLOPS in training runs. That’s creating a powder keg. If the FLOPS are artificially restricted, and one person breaks the restriction, you could end up with a single dominant system.
The smart regulation is to use something like fine insured bounties [1] to give people a very strong incentive not to break the FLOPS cap, and to heavily financially reward people who turn in other people who are doing so. If such a mechanism didn't exist, then I would agree, the free market would be our next best bet to deharsh the power law.
[1]: https://andrew-quinn.me/ai-bounties/ - I sketched out the mechanism a few years ago, but sadly it didn't generate much interest online. To be fair the atmosphere was a lot more anti-regulation back then.
> The contra to this is some of them are billionaires, and therefore this strategy is working, but for just a few of them
I have no clue how hedge fund managers make their money, but I was under the assumption that it involved charging their clients hefty fees for managing the funds.
I do, and that's actually incorrect in a strict sense, but it's correct-enough for the average person to follow to Vanguard et al and have a generous nest egg without a lot of risk attached.
Your intuition however is correct in the sense that there is a principal-agent problem at play with all kinds of hedge funds, where if the hedge fund manager isn't the one investing his own money he is by default incentivized to do things besides just maximizing hedge fund profits. But there are indeed managers who have such an ability to generate edge that they do in fact invest their own money solely, usually money they generated while working for other hedge fund managers before striking out on their own, and these people are terrifying forces to watch in action indeed.
When I read George's thoughts on this, Alexander Gerko and XTX Markets immediately came to mind. They operate one of the world's largest GPU clusters (10k A100s) [0].
They aren't really a hedge fund but a prop trading firm, but they seem to be winning the game [1].
> RSI [recursive self-improvement] is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the only such factor. The advent of human intelligence was a discontinuity with the past even without RSI...
Getting drivers off the road will save lives, and not a small number. Most companies investing in automated driving have built a system that is safe and largely effective. These systems have worked safely in the places they've been deployed. We should be heavily investing in expanding their capabilities.
Yet Tesla's system is the most famous and most widely deployed, and basically doesn't work at all. They have made themselves the face of the technology, and that face is uuuuugly. When people think of self-driving cars, they don't think of Cruise or Waymo. They think of a techbro giggling at his Tesla making fart sounds while it barrels into someone's kitchen.
Tesla's marketing is correct in that self-driving cars will have huge economic benefits and save lives. Their grift is harming the reputation of even legitimate companies. The best way I know to speed up self-driving car development is to shut Tesla down.
It seems to me that if you wanted to deliver on "economic benefits", save lives, reduce emissions, etc., there is an obvious, already existing option: subways/LRT. For inter-city travel, build high-speed rail. I just don't see driverless cars as the panacea that other people do, even if they worked, which they don't seem to.
Nothing has ever been a panacea, and people very rarely claim things will be. Cars will be around for a long time. Building public transportation is not trivial and not practical everywhere. People often drive even in parts of the world with good public transportation. Replacing human drivers with (working) computers would be a major improvement.
It seems a bit weird to include, in your article about how being sufficiently smart doesn't let you take over the world, a proof of concept that being sufficiently smart can make you a billionaire.
Isn't the hedge-fund-bro example a proof of existence, not a counterexample? (RenTech by itself would be enough of a proof of existence.)
I think this is a good casual introduction to the marketplace dynamics of how ML will impact the market. I do, however, disagree as I feel that this version of things is a bit too sterile, theoretical, and 'academic', and assumes a more open-information set of competitive strategies among potentially ideal agents from a game theoretic perspective. We can see this is absolutely not the case 'in real life'. To blatantly poke a bit of a (potential) hole in one of his examples -- Exxon-Mobil is one of the clearest examples of the monopolization-blobbification of power that I'd contend _does_ cause the very phenomenon that he's defending against.
An updated version: There will be a log-normally distributed set of winners and losers from the exponential effects of ML and 'AI', and the flatness of this curve will be almost entirely determined by the governance of the various countries in the world over different economic and/or informational policies. Other than that, the information asymmetry is going to make it a power-bloodbath as we go through our informational-industrial revolution.
While I'm here: I think Hotz does contribute a lot of good to the field, though I do have a bit of a minor personal beef with him. He said he was going to reimplement https://github.com/tysam-code/hlb-CIFAR10 in tinygrad, bashed a few parts of the code (my feelings!) for a while on stream, and then gave up a few hours later because of the empirical speed/occupancy numbers. >:( I want my fast reimplementation, George. Just do it.
have a bunch of bounties on it, we're getting 94%+ now! mostly not me who wrote this, see history. have to switch to float16 and add Winograd convs still. we have a branch with multigpu too.
Well, you do seem to be extremely equal and fair on that aspect, so my respect to you for that. Don't be too hard on yourself, please!
I'll keep an eye out for that; anyone working on it can shoot me a message if any snags or questions about particulars come up. Weirdly enough, my email is best.
If you're looking for the biggest glaring performance edge over PyTorch, I'd note that MaxPooling is probably where to go: the PyTorch version is extremely slow for some reason, and done properly it should be a simple indexing operation that's fusable in either direction.
If whoever fulfills the bounty can beat me to writing a custom mega-fused kernel with max pooling, the convs, activation, etc., then y'all have a pretty good shot at taking the WR crown.
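For anyone picking this up, here's a minimal sketch (written against stock PyTorch purely to show the formulation; this is not the fused kernel described above and makes no performance claim) of 2x2 max pooling expressed as a plain reshape-and-reduce, the kind of indexing form that should fuse cleanly:

    import torch

    def maxpool2x2(x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) with H and W divisible by 2.
        n, c, h, w = x.shape
        # Split each spatial dim into (coarse, 2) and reduce over the 2x2 window.
        return x.reshape(n, c, h // 2, 2, w // 2, 2).amax(dim=(3, 5))

    x = torch.randn(8, 3, 32, 32)
    assert torch.equal(maxpool2x2(x), torch.nn.functional.max_pool2d(x, 2))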
Do we have enough historical weather data to train a decent AI on? Maybe this is already being done and is just a quieter area of research (in the headline sense)? I know for example every time a hurricane comes toward land all the weather places trot out a half dozen different model predictions saying different things. It would be great if there was more confidence in this area for planning.
No, not by many orders of magnitude. Even if you had sensors measuring temperature, pressure, humidity, wind velocity, etc. in every cubic metre of atmosphere, weather is chaotic (a tiny variation in starting conditions can lead to completely different outcomes) so you'd still only be able to predict a limited time ahead.
Meteorologists have been working on this problem for a long time, and have thought of and tried every obvious idea (and many more). Weather prediction is literally where modern chaos theory started: https://en.wikipedia.org/wiki/Chaos_theory#History:~:text=Hi...
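A minimal demonstration of that sensitivity, using the logistic map (a standard toy chaotic system, not a weather model; the starting values are chosen only for illustration):

    # Logistic map x -> r*x*(1-x) at r = 4: a textbook chaotic system.
    r = 4.0
    a, b = 0.300000, 0.300001   # two starting points differing by one part in a million

    max_gap = 0.0
    for _ in range(60):
        a, b = r * a * (1 - a), r * b * (1 - b)
        max_gap = max(max_gap, abs(a - b))

    print(max_gap)   # order one: the trajectories completely decorrelate
                     # even though the initial difference was only 1e-6

Any measurement error in the initial conditions eventually swamps the prediction, no matter how much compute or data you throw at it.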
> Do we have enough historical weather data to train a decent AI on?
I have no particular insights about weather data or models, but I want to note that some prediction problems are just not limited by historical data (as you seem to assume implicitly). Consider the simplest example of a coin toss: the outcome of the next toss is unpredictable no matter how much historical coin toss data you collect and no matter how sophisticated your AI/ML model. (The only thing you can predict is the fraction or number of heads/tails in a large batch of coin tosses, and you don't need much historical data for that.)
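A quick sketch of the coin-toss point (pure simulation, nothing weather-specific; the "model" here is deliberately the best a predictor can do on fair tosses):

    import random

    random.seed(0)
    history = [random.randint(0, 1) for _ in range(100_000)]   # "training data"
    future  = [random.randint(0, 1) for _ in range(100_000)]   # what we try to predict

    # Best any model can do: always predict the historically more common side.
    guess = int(sum(history) > len(history) / 2)
    accuracy = sum(toss == guess for toss in future) / len(future)
    print(accuracy)   # ~0.5, regardless of how much history you collect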
> the weather places trot out a half dozen different model predictions saying different things
This should give you a hint that it's not as simple as having enough weather data.
You are trying to predict a chaotic weather system that is impacted by everything from sea temperatures, sun activity, wind, volcanoes, bushfires, atmosphere changes etc. Much of which we don't even have enough data for or a complete understanding of.
Yep, the main problem is the bottleneck of data. I mean, think about it: we still fly planes full of scientists directly into the largest storms on the planet just to get a few points of data. Satellites can only do so much.
Yes, there are - this is also, amusingly enough, what I just started doing in an attempt to play with Candle in the next few weeks when I have some spare time. I'm in a hurricane-prone area and we're in the season now, so it's on my mind.
(Complex instrumental sections with shifts in time signatures. Lyrics have a satirical undertone.)
Verse 1:
Summoned a shadow or just a silicon brain?
From chessboards to streets, it's all part of the game.
Think we're the champs, but are we just inane?
Compared to the cosmos, are our brains too lame?
Interlude:
(Jazzy guitar riff mixed with an odd-metered percussion line.)
Chorus:
The singularity’s knocking, or is it just hype?
Caught in the loop, the stereotypical type.
Predicting, speculating, swallowing the tripe,
But can we decode the universe's archetype?
Verse 2:
Markets and dreams, the billionaire’s dance,
All looks rosy, till you're out of chance.
Machines boast of a masterful glance,
Yet, who pulls the strings in this vast expanse?
Bridge:
MuZero's groove, feeling so elite,
Thinks it’s got the rhythm, can't accept defeat.
But against the universe, can it compete?
Or just another tune, incomplete, obsolete?
Chorus:
Singularity, they say, is the ultimate dive,
But can we, mere mortals, really derive?
Predictions abound, but can we survive?
In the end, it’s about keeping the jive alive.
Verse 3:
Computations, simulations, all in a grid,
In this tech circus, we're just a tiny squid.
Chasing echoes, in shadows we're hid,
But the cosmic joke? It's just a quid.
Outro:
(A whimsical woodwind section, possibly a kazoo solo.)
In this cosmic game, we strive and strive,
Dancing on the edge, trying to thrive.
But remember, as the stars contrive,
It's not about the end, but the drive.
And it articulates an idea I’ve been having trouble getting down.
What if increasing intelligence is an exponential problem, and the reason humans all have somewhat similar intelligence isn't that we peaked at some level, but that even vast additional intelligence just doesn't get much more traction against the universe of problems?
E.g. doubling your compute doesn't get you many more cities in the traveling salesman problem.
It is likely that even vastly more intelligence doesn't increase the number of children you raise to childbearing age in a hunter-gatherer or subsistence farming society. That says very little about vastly more intelligence in a modern industrial society (that only really existed for like ten generations so far).
> Back in 2014, Elon Musk referred to AI as summoning the demon. And it wasn’t hard to see that view. Soon, Go agents would beat top humans learning from self play. By the end of 2017, the same algorithm mastered Chess and Shogi. By 2020, it didn’t even need tons of calls to the simulator, and could play Atari too.
> AI looked scary. It looked like it was one FOOM away from self playing and becoming superhuman at the universe. And yet, here we are in 2023 and self driving cars still don’t work.
This is weird. I don't recall this at all. The mainstream press got a little kick out of chess and (to a lesser extent) Go AI turning over various humans at a few points over the years but it's only really burst into the mainstream recently. And where it did get any traction, in tech circles such as our own, response was enthusiastic but definitely more measured. Some were talking a bit about a kind of AI singularity way off into the future, but that was always a very distant and theoretical thing.
Westworld aired in 2016, and Nolan's other TV show featuring an AI world-takeover plot is a little bit older still. If I had to count the mainstream movies, video games and television media that feature rogue AIs, especially since the turn of the century, I'd need a very long document. Hell, go back a few more decades: WarGames, the Forbin Project. The literal term 'AI' wasn't as ubiquitous, but threats from runaway automation have arguably been mainstream for over half a century.
AI fears have been a cultural anxiety for a long time now and in many ways they're just a rehashed, secular version of Golem myths anyway.
The original article suggested an explosion of fears around the mid-2010s that our current AI tech was just one "FOOM" away from taking over the world. The fact that there have been a steady stream of sci-fi film and television programmes stretching back to the 80s does not support this.
It was the original article which made that claim, not me. However I think that two firms being given a license to operate in one specific city (which they've likely been testing in for a while) is not as enormous as you think it might be.
I think we can say that they're "here" when they're operating fully autonomously in multiple locales and landscapes. Maybe you're in SF so that Waymo/Cruise news means a lot to you, but I'll only say that once these firms are safely navigating narrow, winding streets in old European city centres that I am used to. I suspect if folks in, say, Accra or Tokyo or Montevideo or Islamabad or any number of other cities might feel the same way.
I think what made the LLMs so popular in the mainstream was anyone could go experience it themselves in a familiar, low barrier to entry way with a simple chat prompt.
Yeah you're right - I think "AI" as a concept for your average person was quite abstract and hard to wrap their head around, to get some kind of idea of the kind of things it can or cannot do. So having something tangible they can be a bit more hands-on with gives them a better idea of what it does.
George Hotz's recent talk during Comma Con was pretty dope. He was pointing to a future where Comma would not just be a self-driving system, but a general-purpose robotics one. And his confidence was great too.
I know he made a mess out of his twitter stint, but I still feel he's a good engineer/entrepreneur.
The problem with really hard problems like AI/AGI is that people who are extremely intelligent in one topic (e.g. GH in reverse engineering) use their influence in another topic, and intelligence is not easily transferable from one domain to another, or even within the same domain.
This is a typical epistemological crux that can be seen throughout history, like Newton dealing with alchemy [1] and Einstein working on a unified physics theory. You can be the most intelligent human on earth and that is still not enough.
In this particular case, GH's opinion could have been written by any journalist.
Most arguments for "hard takeoff" involve some form of inductive proof: if we have "intelligence level" n, and we know that it can be improved to n + 1, then there is a guarantee of eventual performance n + k where k is arbitrarily large. Aside from not actually having any reliable way to quantify n or the concept of "intelligence" in the first place, this proof falls apart if improvement is n + ε, where ε changes at each step, and there is no reason to assume ε will escape the pattern of every other process in the known universe of trending to zero in the face of natural limiting factors.
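A tiny numeric sketch of the ε point, assuming (purely for illustration) geometrically shrinking gains:

    # If each self-improvement step adds a shrinking increment (geometric decay
    # assumed here for illustration), capability converges to a ceiling.
    def capability_after(steps, first_gain=1.0, decay=0.5):
        total, gain = 0.0, first_gain
        for _ in range(steps):
            total += gain
            gain *= decay          # epsilon shrinks at every step
        return total

    print(capability_after(10))      # ~1.998
    print(capability_after(1_000))   # ~2.0 -- bounded, no runaway
    print(1.0 / (1 - 0.5))           # the geometric-series limit

(To be precise, ε trending to zero only bounds the total if it shrinks fast enough; harmonic-style decay still diverges, just very slowly.)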
> The problem is your model needs to include all the computers playing the market, and it also needs to include the other hedge fund bros themselves.
The error here is equating hard takeoff with such granular and expensive world prediction. Human intelligence finds more efficient compression than this. When Donald Trump decides what provocative thing to say in a speech, he isn't granularly modeling the human intelligence of his supporters, opponents, journalists, and media organizations in a particularly CPU-costly way. He's just discovered some narrower domain where there's a vein of predictability that simplifies the whole system enough that he can guess a tweet or speech will be provocative in the desired way. The same thing happens when a scientist is studying cold fusion or dangerous viruses.
You don't have to predict what will work, you just have to predict what might work, and try and try again. AI doesn't have to be good at predicting the future in any absolute terms. It just has to be better than us at predicting what is worth trying.
Once it is better than us at deciding what's worth trying, why not FOOM? FOOM isn't guaranteed perhaps, but why is it not at least one of the likely outcomes?
The Trump analogy doesn't quite work unless we include a bunch of other actors in the field that have his societal cachet and approach. He slaughters other Republican nominees because there simply is no competition, no matter how much some try to out-Trump him.
I've seen discussion about RenTech being so dominant because they are indeed observing the competition's trades in a more sophisticated manner and are able to more quickly adjust strategy.
Eliezer persistently croons dark lullabies of AGI cataclysms. While his obsession is evident, one cannot help but wonder if these incessant dirges might be nothing more than echoes of groundless fears. Much like the puppet that cries out every time the stage darkens, only to find there's no true end approaching, we may find ourselves dulled to genuine threats, leading to our eventual and inescapable undoing.
Moreover, by focusing too much on distant threats, we may overlook current problems, like companies gaining too much control through AGI. We don't need AI safety rules that give big players more power and shut down open-source projects.
The argument falls apart when considering that compute is incredibly fungible and performing a 51% attack only means finding a zero-day that lets an AGI copy its code or neural net to a large percentage of computers. Much like wealth accumulates in the absence of antitrust law and taxation, intelligence accumulates in the absence of some external controls.
EDIT: And probably all you need is natural selection. No one is running GPT-2 anymore except for basic research. Any AGI with marginal advantage over others will be run instead of them. At some level of complexity identical agents will coordinate as one entity with shared goals.
Yes, predicting the stock market does involve predicting what other market participants (and hedge fund bros) are going to do.
But, to predict what other market participants (and hedge fund bros) are going to do, you need to predict any world dynamic that will have an eventual effect on stock pricing.
The most successful quantitative funds (RenTech, 2sigma etc) consistently make billions of dollars in cash each year because they have collected the data sets that allow them to do this better than others. But they are still a long long long way off from having a true world model.
> Nobody has a 1e20x more efficient Bitcoin miner either
The miners today are 1e20x more efficient than the original CPU based miners. I mean, maybe not exactly 1e20x, but we saw insanely massive improvements, in only 9 years.
> Now, if there’s one weird trick to 1e20x your efficiency, and only one group gets it, all bets are off.
Only one group really did get the massive efficiency increases... Bitmain, a Chinese company. They sold machines and even mined with them before shipping them to customers.
I don't agree with the premise that a 51% compute threshold is transformative, but it's scary to consider that collusion among a small number of big tech companies (Google, Amazon, Microsoft, Nvidia, AMD, Apple) would easily pass that threshold when it comes to GPU compute. More so when you consider they are also two steps ahead of everyone else when it comes to developing improvements.
There are a couple of decent ideas in here. In a hostile takeover situation, AGI wouldn't just need to be smarter than a human. It wouldn't just need to be smarter than all humans collectively. It would need to be smarter than all humans working in collaboration with all of our machines.
I loved the analogy of AGI takeoff with markets. People change their behavior when faced with new environments. If AI ever becomes threatening, there is no way we won't do anything and there is no way AI will reliably predict how we will respond.
Wild to me that people will take seriously people like Eliezer Yudkowsky and Nick Bostrom who are both "idea guys" who have accomplished exactly nothing in the field of AI. Their strength is in the world of discourse, not in the material world.
Meanwhile, George Hotz, an engineer and a technologist - a guy who actually works with and understands technology, but isn't an acclaimed Harry Potter fanfiction writer or a stand-up comedian - is derided because his arguments are dramatically less sexy and it's harder to appreciate why they might be right.
People, please do not listen to people who sound great but have no credible claim to be an expert on something. There is a difference between an idea that sounds good and an idea that is good.
Why not wait until their debate on Aug. 15th? It's going to be a disaster for Geohot as he realizes Eliezer has been "mind-mazing" this problem for 20 years, and comes to realize the barrel of the gun we're all facing...
Otherwise, in your appeal to technological authority, surely you'd rather listen to Yoshua Bengio and Geoff Hinton's perspectives rather than TFA's ramblings... right?
This is what I'm talking about! There's a distinction between being right and seeming right that we need to be aware of! "In theory, there's no difference between theory and practice"
Eliezer is probably going to seem convincing, he's a great writer and a reasonably eloquent speaker - his audience is humans.
George Hotz's competence is machines, and he's exceptional at interfacing with them. Yes, I trust the less eloquent tinkerer who's only famous for his preternatural understanding of machines over any theoretician, and 1000x more than a (pretty good!) writer.
Did you ask GPT that? That'd explain how it didn't notice that "Gate" doesn't start with an "O". Note that ChatGPT has real difficulties resolving words to letters.
Anyway, it's not an acronym, it's an onomatopoeia for an explosion. It's a memorable way of representing an intelligence explosion: "AI go foom." (Wild hand gesture strongly recommended.)
You've found a 'Traith'. Someone asked ChatGPT a question, ChatGPT did not know the answer and hallucinated one. They post the answer to Reddit / HN / Twitter, google indexes the answer and when someone searches they find the GPT hallucination. In the next GPT training run it's incorporated into the weights.
It's an AI generated truth, a Traith.
Traith is also defined as "A fishing station or fishing ground, especially for herring" and some herring are red.
AI has already beaten world champions in Dota 2 [1]. I can't imagine League is that much more difficult, so if someone wanted to build an AI agent to beat everyone else in League they probably could; no one has just gotten around to actually doing it.
I think we have a bit of miscommunication here. You see these “benchmarks” as necessary and sufficient for “real intelligence” when people are just saying they are necessary but not sufficient.
It’s like asking for a drink (legal age 21) and getting rebuked with a “you aren’t even old enough to drive” (legal age 16). That doesn’t mean you can drink when you are 16!
Edit: If you want a hard necessary and sufficient condition for “real intelligence”, mine is “when it can do all of our jobs, i.e. wholesale replace every human”.
My problem with this isn't even that it's a bunch of strung together ideas without a clear narrative or conclusion; enough practice at this combined with actual deep field knowledge results in works like Godel, Escher, Bach.
My problem is that GHotz is a shitty writer, has puddle-deep knowledge of anything that isn't iPhone firmware, and plays at being a freelance tech journo, producing this kind of hollow pseudo-philosophy that is meant to sound epiphanic but comes across as deeply naïve to anyone with any actual experience in these topics.
I have a morbid fondness for the fella, because he reminds me of how I sounded a few years back when I was tearing my hair out trying to figure out where all this stuff was going to go, while also being too drunk and too lonely to get any serious research done. I don't think I'd enjoy meeting him in person much, though. I get annoyed by loud personalities like that.
I was a huge fan of him in the 2010s when he was fighting the good fight for right to repair / sideload / do whatever. He was a bit of an idol while I spent my own time hacking Xbox hard drives and building tooling to write xfat headers and such.
It just seems he hasn't grown a jot since then. I've been guilty of writing stuff like this even in recent years, but I've grown enough to check myself and hold a higher bar for the quality of my writing. Assuming he did read this back to himself, the fact that he saw it as publishable is a bit sad.
I find him very entertaining to listen to, and I often end up agreeing with him, but he’s definitely not the most “coherent” individual out there (like jobs/woz/carmack/pg/etc.)
At the same time, he’s out there talking and writing, I’m sure if he continues he’ll get good. Nobody’s born a good writer and you only become one by writing a lot.
I don't wish him any ill and hope the same... But it's been over a decade since I started following him and if there is a trajectory to maturity and quality in his writing, I haven't been able to track it.
And that's putting aside his recent Twitter drama ...
I believe you're being unnecessarily mean here. He managed to write enough of a low level GPU driver to compile and execute neural network kernels. Calling that "puddle deep knowledge" is pretty dismissive considering that CUDA is Nvidia's most profitable moat.
I called his writing shitty. I like George and have used a lot the tools he's written over the years. I've made similar tools myself. Are you so offended that I have a single critical opinion of him? Because that's a reflection of you, not I.
I would guess it's that GHotz is a shitty writer, has puddle-deep knowledge of anything that isn't iPhone firmware, and plays at being a freelance tech journo, producing this kind of hollow pseudo-philosophy that is meant to sound epiphanic but comes across as deeply naïve to anyone with any actual experience in these topics.
I think the point he is making is that competition will prevent others from being able to FOOM and 51% everyone else and "win" outright to the point that it is dangerous. Or competition will prevent one player from getting the "powder keg".
Competition can do that if it is fair competition and a good game design. That is why rules and regulations are important in any game or it becomes Calvinball where the only rule is there are no rules and "it can't be played the same way twice" [1], which sounds great until you start losing to cheats. Though even that has some aspect of competition, the changing of rules in your favor while others change the rules in their favor.
The flaw in the thinking is assuming there will be "fair" competition. In a game design with good regulations and caps on game-theory advantages (especially the cheat), "fair" competition is attainable. In game theory, if the other side cheats and your side keeps cooperating, you will lose every time. There is a great little game theory game that highlights this, called The Evolution of Trust. [2]
In a market with collusion or excessive advantage, fairness may not be possible to retain. Even right now, with overweight/top-heavy wealth players in capitalism, if there is collusion or one player gets too big and there is no anti-trust or "blue shell", that player will win every time. AI needs an anti-trust or "blue shell" to knock down any player that is too advanced, but that might not be possible; doing that is barely possible in a market run by humans now.
The market is a garden: you have to help the seeds and cull back the overgrowth at the top so the whole garden can thrive, from the low seedlings to the middle plants to the large producers. Right now the large overgrowth gets all the benefits, the policy control, the water and nutrients, taking over the garden and even harming itself with the overgrowth.
Most real-world game theory and design would make for horrible game design, where the larger player always wins. Now imagine a game where the larger player controls the game design; you'd never be able to nerf them.
If one player can get the "powder keg" ahead of everyone else, we need the game to be able to "blue shell" the bigger and potentially colluding/cheating player.
OP doesn't really go into it, other than to say that competition will be a check on FOOM. Which I think is valid, as historically any market with competition creates better products and keeps players in check.
I was arguing that this is true in a more open market with fair rules/regulations, where competition is fair. Though many times it is a fixed market, or a cheat, that stifles the competition that might otherwise keep things in check.
Fixed markets happen more and more where concentration is high and efficient players game the system.
No products or markets are better where big fish solely control everything. That is why anti-trust or regulation needs to expand to the funding level, not just the surface company level. If you own entire industries across many companies, that is still oligarchy/monopoly if it is controlled by the same funding/sources.
Concentration needs to be broken up, for competition, a better market, and a better quality of life for everyone. Most people definitely don't want to make authoritarian systems wealthier than open markets.
Concentration starts to take us away from a fair market and more towards a fixed/gamed market.
Everything in such a market gets so tuned that competition is very hard to enter. Very little margin and too much optimization/efficiency is bad for resilience. Couple that with private-equity-backed, highly leveraged near-monopolies that control necessary supply and you have trouble.
HBS is even realizing that too much optimization/efficiency is a bad thing: squeezing out the slack/margin squeezes out the ability to change vectors quickly.
The High Price of Efficiency: Our Obsession with Efficiency Is Destroying Our Resilience [1]
> Superefficient businesses create the potential for social disorder.
> A superefficient dominant model elevates the risk of catastrophic failure.
> *If a system is highly efficient, odds are that efficient players will game it.*
> sometimes power becomes so concentrated that political action is needed to loosen the stranglehold of the dominant players, as in the antitrust movement of the 1890s.
Couple massive wealth and concentration (even leaning authoritarian by then, as tends to happen with fixed/controlled markets) with AI/AGI, and you no longer have a "blue shell", because those players control the rules and thus the game.
Systems in the real world exist in competition. The only way for one AI system to take over would be for it to overcome not just all humans but all rival AI systems. For one system to beat all other systems, it would need to be smarter than all of them put together. This means it would need over half the computing power of the entire world. Amassing physical resources is relatively slow and every AI would be trying to prevent one AI from taking over, so gaining 51% of compute would be hard. Algorithmic improvements are impossible (he actually says this), so there's no way one AI could suddenly jump in capability. Compute will be distributed in a power law (for some reason). Every AI will serve as a check on every other.
You may notice this is all nonsense. Somehow it's impossible to amass power without being smarter than everyone else put together despite humans doing it all the time. Does Hotz think Stalin had a brain the size of a skyscraper that he ordered airbrushed out of all photographs? Somehow Hotz can confidently predict how future technologies will work despite us having no idea what they'll look like even in principle. Somehow having gods fighting amongst themselves will work out just fine for the humans in the crossfire.
This "debate" is just people summarizing science fiction at each other. The score is one Atlas Shrugged and nine Star Wars to three Terminators and one I Have No Mouth and I Must Scream.
The text, dated August 10, 2023, discusses the development and limitations of artificial intelligence (AI) and the concept of a sudden "hard takeoff" or "FOOM" (Fast Onset of Overwhelming Might) in AI capabilities.
Historical Perspective: The author begins by referencing Elon Musk's 2014 comment about AI being like "summoning the demon." They highlight the rapid advancements in AI, such as beating human players in Go, Chess, and Shogi, and playing Atari games.
Complexity Comparison: The author argues that the universe's complexity is not just a matter of scale compared to a Go game but a difference in kind. They illustrate this by imagining the universe tiled with tiny Go boards, emphasizing that predicting the universe requires understanding at the atomic level, not just the "stone level" of Go.
Dynamics Models: Modern self-play systems like MuZero and GPT-4 are described as dynamics models that predict the next state based on the current state and action (a rough interface sketch follows this summary). The author connects intelligence with prediction and compression, suggesting that feeding the entire internet into a model could theoretically "win the universe."
Practical Limitations: The author challenges the idea that AI can easily dominate complex systems like the stock market. They use the example of hedge funds, explaining that dominating the market requires more compute power than the entire market, which is unrealistic.
Inclusion of Computers: In Go, the model doesn't need to include other computers, but in modeling the universe, computers must be included. The author argues that unless there's a staggering advantage, understanding them from self-play is unlikely.
Preventing FOOM: The author warns against capping FLOPS in training runs, as it could create a dangerous situation if one person breaks the restriction. They advocate for preventing a 51% attack on compute to avoid FOOM.
Efficiency and Innovation: The text dismisses the idea that a single group could achieve a 1e20x efficiency increase, arguing that more intelligence leads to new tricks, but they become harder to find.
Revolution and Gradual Change: The author predicts an information revolution that will transform intelligence, similar to how the industrial revolution transformed energy. They stress that this change won't happen overnight but will follow a gradual exponential curve.
Conclusion: The author concludes that unless a "terrifying powder keg" is built, there will be no sudden FOOM. They emphasize the complexity of the universe compared to games and call for letting the markets evolve naturally. The closing statement, "the singularity is nearer," hints at a belief in the eventual convergence of human and machine intelligence, but not in an abrupt or catastrophic manner.
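On the "Dynamics Models" point above, here's a minimal Python sketch of what that framing means, just to make it concrete. This is a hypothetical interface, not MuZero's or GPT-4's actual API: a dynamics model is a learned function from (state, action) to a predicted next state, and planning is rolling that function forward entirely inside the model.

    # Hypothetical sketch of the "dynamics model" framing described above.
    # A learned function predicts the next state (and a reward) from the
    # current state and a chosen action; planning means rolling the model
    # forward without touching the real environment. For an LLM, the "state"
    # is the token prefix and the "action" is the next token; for MuZero,
    # the state is a learned latent of the game position.

    from typing import List, Tuple

    class DynamicsModel:
        def predict(self, state: str, action: str) -> Tuple[str, float]:
            """Return (next_state, predicted_reward). Placeholder implementation."""
            raise NotImplementedError

    def imagined_rollout(model: DynamicsModel, state: str, actions: List[str]) -> float:
        """Score an action sequence purely inside the model (self-play-style planning)."""
        total_reward = 0.0
        for action in actions:
            state, reward = model.predict(state, action)
            total_reward += reward
        return total_reward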
> Am I the only one who sees a jumbled stream of barely coherent semi-thoughts and word salad?
Probably one of the few, given that the majority of the comments are directly addressing and dissecting each point of the post.
It is not jumbled, but it is 1) the way Hotz usually writes, 2) unconventional, 3) cross-domain and laterally connected, and 4) somewhat heavy for a Sunday morning read.
Hotz generally doesn't do "barely coherent semi-thoughts." I think the label is needlessly harsh. He's just addressing the contemporary nonsense, bridging the pieces together and calling the bullshit out.
It's sad to see so many of the replies to your comment are people attacking him for no good reason.
Well, he always looked like a guy who can hyper-focus on a single task for a few months. That is a great aptitude for breaking security, but not necessarily for stuff that requires a long process of development and improvement. In other words, Comma.ai: https://www.theverge.com/2018/7/13/17561484/george-hotz-comm...
He goes from thing to thing a lot. He seems to be very strongly on the spectrum, and maybe even AuDHD. I can relate a little to how he thinks, though I've tried to let go of the at-times-extreme assholishness enough to not really associate with that side of him.
Grady Booch, one of the creators of UML, deals with people like this on Twitter every day.
People seem to think that because ChatGPT appears to be intelligent, we are somehow on the cusp of a new AGI world, and that we need serious thought leaders like George Hotz and other arrogant tech-bros to rescue us. It's just crypto all over again.
The real thought leaders are going to be those in academia, government etc. Boring policy people who know the subtleties of the real world far better.
Boring policy people are almost never thought leaders, though; you need extraverted traits and populism to become one of those, doing presentations at large conferences, that kinda thing.