I wish I understood what Sam Altman is trying to accomplish here. Regardless of your opinion of LLMs and their potential impact, he seems to be talking weird nonsense.
The only thing that makes sense to me is that he's trying to distract the world from actual risks of LLMs by getting everyone worked up about imaginary risks of LLMs.
But I need to admit my bias. I considered Sam Altman sketchy and untrustworthy since before OpenAI existed, so I'm going to be very skeptical of anything he has to say and of his motives for saying it.
Regulation hugely benefits established companies.
Startups would have to deal with enormously complex legal problems just to exist, while the cost of compliance is insignificant for larger companies.
It is also an extremely good advertisement. Getting in front of the global press and telling them that the technology anyone can use right now on your website is dangerous is absolutely awesome marketing.
I’m curious how many people he’s actually convincing. It seems like arguing both that people should buy your product and that the same product category poses existential risks would raise doubts. Perhaps the “AI is dangerous, but not yet” line will work, though.
Edit: Upon further thought, I suppose that’s the appeal of the nuclear analogy? “Nuclear power is to nuclear weapons as ‘aligned’ superintelligent AI is to malevolent superintelligent AI?”
Given the enormous success of his company in acquiring users, I think he has done quite well.
Having the debate is advertisement enough. "Maybe this could kill us all, maybe not? Try it out at openai.com and see for yourself." Seems like a very good promotion.
And his stance obviously isn't that AI is so dangerous that his company should be shut down.
Agreed that the debate is an advertisement in itself.
I’m not sure their success shows that the strategy of warning of existential risk is working. Arguably, the success could be entirely due to word-of-mouth marketing from people posting their ChatGPT sessions and media talking about GPT’s capabilities with little skepticism. But it’s certainly not hurting it significantly, and on balance it probably is helping.
He is certainly sketchy, mendacious and untrustworthy. It doesn’t mean that he is necessarily wrong.
Achieving GPT-4 deserves a lot of credit. It’s true, it was built on stolen books and ideas, on reneging on the non-profit, and on technology brought in by xooglers, but that hardly matters. It’s a clear success and a realized vision of an AI.
So it is certainly worth listening to the predictions and ideas that are coming from the same team.
Creating a moat that slows or prevents other entrants through arbitrary regulation seems to be the goal; free and open source models would be at a huge disadvantage under the regulatory models being pitched today.
> The only thing that makes sense to me is that he's trying to distract the world from actual risks of LLMs by getting everyone worked up about imaginary risks of LLMs.
It's absolutely this.
He's selling us the threat of extinction while actively working to undermine the very idea of industry itself.
I'll take a page out of his book and turn my garage into a meth lab, then offer to protect the neighbors from the danger it poses by selling only the purest of meth to their kids so they don't try to buy it from some shady guy who'll lace it with fentanyl.
Taxi regulations were basically 100% effective as a moat. It literally took a popular revolt to break the taxi cartel's stranglehold. The majority of customers decided they would rather participate in an illegal business model -- one that involved getting into strangers' cars, no less -- than put up with taxis any longer.
Is that likely to happen here, when the moat is so much wider? As long as training a model takes exaflop (zettaflop?)-level computing resources, the moat is very real.
Are there any mechanisms right now that prevent a single actor from developing a human level intelligence or above? Think slightly better than GPT-4. What if that actor is malicious, misguided or mistaken? What if that actor misaligns that intelligence accidentally?
Imagine you’ve never seen modern technology; say you are a kid in a very poor country. And then a helicopter flies into your village and starts shooting air-to-ground missiles.
We might end up in the position of that kid when the “helicopters” built by such an AI fly in.
Don’t underestimate what you don’t know and what is achievable with the right knowledge and technology. The gap might be the difference between the level of a villager and the level of a technological civilization able to build a helicopter.
Seems very much like a science fiction story. I hate the whole AI debate, as it always seems to focus on some big "what if", but really the implications of AI running right now are already grave enough.
Targeted advertisement/content is already good enough to exert effective opinion control over large parts of a population. But who cares about that if matrix multiplications could become sentient...
The population has clearly been subject to opinion control by various factions: from the church to TV to the internet. And we’ve seen quite strong attempts before, including various nation-level propaganda machines. All of these do work to some degree. They can be life-threatening. But unless they result in a nuclear war, they are not civilization-ending events.
On the other hand, something slightly better (on a log scale) than GPT-4, can be a game changer.
I think it gets even better. You can target specific groups and push particular ideas to them. With that you can create an entire fake discourse.
You probably don't even want that many people believing your particular ideology; they might demand standards, and having enemies you control is about the best situation to be in.
Elon tweeted that we should all go watch this doc because it's the most important doc there is on the harms of AI[1]
I watched it... it's 17 minutes of smart people saying AI is going to destroy us but 0 minutes of anyone actually describing a situation succinctly.
The sort-of hodgepodge summary is: AI will in the near future be so smart that anything we think of to curtail it, it will have already thought of, because it's orders of magnitude smarter than us. Additionally, it is able to improve itself without us, such that it becomes smarter and smarter at some insane pace we can't even comprehend, solving its own problems, etc. And we are not working on making sure that any large-scale AI we develop is inherently aligned with us under all circumstances.
So we might say unplug the servers, but it's already exploited a flaw in the security system of the server rooms and re-programmed them such that we can't access them, or copied itself to IoT devices, distributed itself in such a way that it can heal itself, etc etc.
Nick Bostrom's Superintelligence is actually a pretty good book and covers some of this thinking.
The more I sit and think about these issues, the more I start to accept them as potential issues. I'm still very much on the fence, but I've gone from "there is no merit in this, it's just regulatory capture for business advantage" to "I suppose there might be some merit."
If this actually does get totally out of hand and some crazy AI system takes over, the only realistic mitigation I can think of is totally isolated EMPs near data centers and power plants. They would require human DNA/biology to activate, but even then the AI would have effectively become a cancer; I don't know how much you'd have to destroy to stop it. It's an interesting thought experiment anyway, and I'm sure there must already be some good sci-fi movies about this kind of thing.
It’s a fun thought experiment but it essentially begins by assuming “the computer becomes God in a box”. Yes, if you make this assumption, everything goes to shit.
In a world where problems have computational complexity and the halting problem is undecidable, I find it very unconvincing that once we design an AI that can improve itself, it will achieve that level of being a God in the Box and not like, a 20% better version of itself after racking up $100MM in AWS compute to work on the task of improving itself.
In the video transcript, the main focus is on the development and potential consequences of AI in terms of its intelligence and decision-making capabilities rather than physical manifestations. However, AI could still act as an extension of humans without manifesting in physical form, through advanced software systems embedded in existing technologies and human-computer interfaces.
These advanced software systems could provide personalized assistance, enhance cognitive abilities, offer instant access to vast knowledge, and allow for seamless communication between individuals and machines. In this way, AI would act as an extension of human cognition and decision-making, but not necessarily have a direct physical presence.
AI-enabled technological misuse: Advanced AI systems could be exploited by malicious actors for harmful purposes, such as creating sophisticated deepfakes, conducting cyber warfare, creating advanced biological weapons, or manipulating public opinion.
Consequential impacts on society: AI technologies have the potential to displace jobs, exacerbate existing inequalities, and contribute to surveillance and privacy concerns. If not properly regulated and managed, these social impacts could lead to unrest and negative consequences for humanity.
Existential risks: In the worst-case scenario, a superintelligent AI system could pose an existential threat to humanity if it outperforms human intelligence and becomes uncontrollable. This assumes it "jailbreaks" itself, takes control of systems, and we are unable to regain control. I've actually seen a project on GitHub that proposes linking models to cryptopayments, which may be one way to regulate this, in my opinion: https://github.com/ScaleChain/distai
If AI really has the potential to “take off” in the way that Bostrom etc fear, humanity doesn’t stand a chance.
Between us and AI takeover there is trillions of dollars in the form of automating all white collar work in the world. If that value can be captured, it will be captured, period.
Whether humanity survives depends on how close we need to get to AI takeover to capture that value.
Anyways, that’s almost certainly a problem for the next generation. In this generation, it takes a decade and billions of dollars of research just to make AI drive cars in limited contexts.
Besides the nuclear thing, I’ve also heard breathless hysterical comparisons with chemical weapons. Think of the _worst_ thing, and now imagine something MUCH WORSE! That’s how dangerous LLMs are according to these bridgetrolls.
They just want a moat, and the game is on to frighten the politicians and other unwashed masses into giving them one.
I’m sorry to be unable to provide ‘proof’ but through personal contacts am aware that this is theatre organised along with governments (in the plural).
It’s irrelevant that I find it highly irritating. The real question is whether and to what degree it’s necessary.
It seems to be the playbook: get huge, then ask for regulation. Sam Bankman-Fried was doing the same with FTX, asking for crypto to be regulated once FTX got huge. Granted, that didn't end the same way, but it's the same concept.
This might be the craziest sounding thing I'll say on HN, but it's not impossible to think counter-intuitive moves like this are actually in service of Roko's basilisk. What better than centralizing power in this way.
I wish high-profile tech CEOs would stick to their knitting. They usually end up embarrassing themselves as their ‘fame’ gets to their heads and they feel they need to opine on topics and issues they have little understanding of.