I don't think it simply raises awareness - it's a biased statement. Personally, I don't think the event it warns about is likely to happen. It feels a bit like the current trans panic in the US: you can 'raise awareness' of trans people doing this or that imagined bad thing, and then use that panic to push your own agenda. In OpenAI's case, they seem to be pushing to have themselves in control of AI, which runs counter to what, for example, the EU is pushing for.
In what sense is this a 'biased statement' exactly?
If a dozen of the top climate scientists put out a statement saying that fighting climate change should be a serious priority (even if they can't agree on one easy solution) would that also be 'biased'?
That's curiously the standard crackpot line. "They doubted Einstein! They doubted Newton! Now they doubt me!" As if an incantation of famous names automatically makes the crackpot legitimate.
The signatories on this are not crackpots. Hinton is incredibly influential, and he quit his job at Google so he could "freely speak out about the risks of A.I."
But that is the point. Just because the scientific community is in agreement does not guarantee that they are correct. It simply signifies that they agree on something.
Note the language shift from 'tinfoil hat' (because 'tinfoil hat' stopped being an appropriate insult after so many of their conspiracy theories - also a keyword - were proven true) to 'crackpot'.
In retrospect, you can find tangible proof from way back for anything that gets accepted as true. The comparison was with how climate change was discussed in the public sphere. However prominent the fossil fuel companies' influence on public discourse was at the time, the issues were not taken seriously (and still aren't by many). The industry's attempts to exert influence at the time were also obviously not widely known.
Rather than looking for similarities, I find the differences between the public discussions (about AI safety / climate change) quite striking. Rather than stonewalling and distracting, the companies involved are being proactive and letting the discussion happen. Of course, their motivation is some combination of attempted regulatory capture, virtue signaling, and genuine concern, the ratios of which I won't presume to guess. Nevertheless, so far this is playing out completely differently from e.g. tobacco, human cloning, CFCs, or oil.
>Extinction risk due to AI is not a generally accepted phenomenon
Why?
We, as a species, are the pinnacle of NI, natural intelligence. And with this power we've been given, we've driven the majority of large species, and countless smaller species, to extinction.
To think it outside the realm of possibility that we could develop an artificial species that is more intelligent than us is bizarre to me. It would be like saying "We cannot develop a plane that does X better than a bird, because birds are the pinnacle of natural flying evolution".
Intelligence is a meta-tool: it is the tool that drives tools. Humanity succeeded above all other species because of its tool-using ability. And now many of us are hell-bent on creating ever more powerful tool-using intelligences. To believe there is no risk here is odd in my eyes.
Perhaps open letters like this are an important step on the path to a phenomenon becoming generally accepted. I think this is called "establishing consensus".