The path between A and B has enough tangled branches that I'm reminded of childhood maze puzzles, where you first have to work out which entrance even leads to the goal rather than to one of the bad endings.
The most positive take is: they want to build a general-purpose AI, to allow fully automated luxury for all; to build it with care, so that it can only be used for positive human flourishing and cannot (easily or at all) be used for nefarious purposes by someone who wants to sow chaos or take over the world; and to do so in public, so that the rest of us can prepare for it rather than wake up one day to a world that is alien to us.
Given that the mental image I have here is of a maze, you may well guess that I don't expect this to go smoothly. I think the origin in Silicon Valley and startup culture means OpenAI, quite naturally, has a bias towards optimism and towards the idea that economic growth and tech are a good thing by default. I think all of this is only really tempered by the memetic popularity of Eliezer Yudkowsky and the extent to which his fears are taken seriously; and his fears are focussed more on the existential threat of an optimising agent that does the optimising faster than we do than on the transitional dynamics of going from the current economy to whatever a "humans need not apply" economy looks like.
Covid does not hate you, nor does it love you; it simply follows an optimisation algorithm (that of genetic evolution) towards maximum reproductive success, and does so without regard to the damage it causes your body while it consumes you for parts.
Covid is pretty stupid, it's just a virus.
And yet: I've heard the mutation rate is about 3.8 × 10⁻⁶ per nucleotide per replication cycle, the genome is about 30,000 nucleotides, and an infected person carries somewhere between 10⁹ and 10¹¹ virions, so that's roughly 10⁸ to 10¹⁰ mutations per reproductive cycle in an infected person; the replication cycle itself takes about 10 hours. Such mutations are how it came to harm us in the first place and why vaccination isn't once-and-done, and the same logic applies to every other disease in the world (including bacterial ones, which is why people worry about antibiotic resistance).
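Sanity-checking that back-of-the-envelope in a few lines of Python (the rate, genome size, and virion counts are just the rough figures quoted above, not precise measurements):

```python
# Back-of-the-envelope: expected mutations per replication cycle
# across one infected person, using the rough figures quoted above.
mutation_rate = 3.8e-6                 # mutations / nucleotide / cycle
genome_length = 30_000                 # nucleotides, approx.
virions_low, virions_high = 1e9, 1e11  # virions per infected person, rough range

per_genome = mutation_rate * genome_length  # ~0.11 mutations per genome copy
low = per_genome * virions_low              # ~1e8
high = per_genome * virions_high            # ~1e10

print(f"~{per_genome:.2f} mutations per genome copy")
print(f"~{low:.1e} to {high:.1e} mutations per cycle in one infection")
```

Run it and you get roughly 0.1 mutations per genome copy; in other words, something like every tenth new virion carries a fresh mutation somewhere, every ten hours, across the whole infected population.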
As an existence proof, Covid shows how an agent going off and doing its own thing, if it does it well enough, doesn't even need to be smart to kill a percentage point or so of the human species *by accident*.
The hope is that AI will be smart enough that we can tell it: humans (and the things we value) are not an allowed source of parts. The danger is that the harm starts well before it's that smart… and that even when it is that smart, we may well not be smart enough to describe all the things we value accurately and without bugs or loopholes in our descriptions.
I understand how this applies to a virus, because a virus has a mechanism for action. In fact, a virus is all mechanism for action; its behaviour is emergent, simply whatever behaviour survived, rather than any grand strategising.
But AIs are all brains (or that's the claim) and no brawn. How do they act?
One answer is software: computer viruses already act in the world. Some of those viruses shut down hospitals, some damage uranium enrichment facilities, some mine or steal bitcoin, some steal ordinary money.
Polymorphic viruses have been around for decades, so it's not inconceivable that some almost entirely evolved virus (simulated evolution is a kind of AI training mechanism) was already loose on the internet before LLMs came along.
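To unpack that parenthetical: "simulated evolution" just means a mutate-and-select loop. A minimal toy sketch, with an arbitrary bit-counting fitness function and nothing virus-specific about it (all names here are illustrative):

```python
import random

GENOME_LEN = 64
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective: count of 1 bits. In real uses, fitness is whatever
    # behaviour the process rewards, which is the point of the analogy.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: keep the fitter half; reproduction: refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", max(fitness(g) for g in population))
```

No part of that loop plans anything; selection pressure does all the work, which is exactly the Covid point above restated in code.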
Right now, I'd expect an LLM to be fairly mediocre at writing a virus. But it doesn't need to be an LLM that does the writing, just as it doesn't need to be an LLM that is the payload once the infection has happened.