
> it simply raises awareness

I don't think it simply raises awareness - it's a biased statement. Personally, I don't think the event it warns about is likely to happen. It feels a bit like the current trans panic in the US: you can 'raise awareness' of trans people doing this or that imagined bad thing, and then use that panic to push your own agenda. In OpenAI's case, they seem to be pushing for having themselves be in control of AI, which runs counter to what, for example, the EU is pushing for.




In what sense is this a 'biased statement' exactly?

If a dozen of the top climate scientists put out a statement saying that fighting climate change should be a serious priority (even if they can't agree on one easy solution) would that also be 'biased'?


Climate change is a generally accepted phenomenon.

Extinction risk due to AI is not a generally accepted phenomenon.


Now it is, but when climate scientists first sounded the alarm they got the same responses these people are getting now. For example:

"This signatory might have alterior motives, so we can disregard the whole statement"

"We haven't actually seen a superintelligent AI/manmade climate change due to CO2 yet, so what's the big deal?"

"Sure maybe it's a problem, but what's your solution? Best to ignore it"

"Let's focus on the real issues, like not enough women working in the oil industry"


That's curiously the standard crackpot line. "They doubted Einstein! They doubted Newton! Now they doubt me!" As if an incantation of famous names automatically makes the crackpot legitimate.


The signatories on this are not crackpots. Hinton is incredibly influential, and he quit his job at Google so he could "freely speak out about the risks of A.I."


Yet the lame rationalization was similar to that of a crackpot (see the previous comment).

The correct move, as you point out, is to appeal to the authority of the source.


But that is the point. Just because the scientific community is in agreement does not guarantee that it is correct. It simply signifies that they agree on something.

Note the language shift from 'tinfoil hat' (which stopped being an effective insult after so many so-called conspiracy theories - also a keyword - were proven true) to 'crackpot'.


We have had tangible proof of climate change for more than 80 years: predictions from 1896, with good data from the 1960s.

What you are falling for are fossil industry talking points.

We have not had any proof that AI will pose a threat as OpenAI and the OP's link outline; nor will we have any similar proof any time soon.


In retrospect, you can find tangible proof from way back for anything that gets accepted as true. The comparison was with how climate change was discussed in the public sphere. However prominent the fossil fuel companies' influence on public discourse was at the time, the issues were not taken seriously (and still aren't by many). The industry's attempts to exert influence at the time were also obviously not widely known.

Rather than looking for similarities, I find the differences between the public discussions (about AI safety / climate change) quite striking. Rather than stonewalling and distracting, the companies involved are being proactive and letting the discussion happen. Of course, their motivation is some combination of attempted regulatory capture, virtue signaling and genuine concern, the ratios of which I won't presume to guess. Nevertheless, this is playing out completely differently so far from e.g. tobacco, human cloning, CFCs or oil.


>Extinction risk due to AI is not a generally accepted phenomenon

Why?

We, as a species, are the pinnacle of NI, natural intelligence. And with this power that we've been given, we've driven the majority of large species, and countless smaller ones, to extinction.

To think it outside the realm of possibility that we could develop an artificial species more intelligent than us is bizarre to me. It would be like saying "We cannot develop a plane that does X better than a bird, because birds are the pinnacle of natural flying evolution".

Intelligence is a meta-tool: it is the tool that drives tools. Humanity succeeded above all other species because of its tool-using ability. And now many of us are hell-bent on creating ever more powerful tool-using intelligences. To believe there is no risk here is odd in my eyes.


Perhaps open letters like this are an important step on the path to a phenomenon becoming generally accepted. I think this is called "establishing consensus".


Yes it would? Why do you think it wouldn't?


It is technically biased. But biased towards truth.



