
According to a survey of ~700 ML researchers (not AI safety researchers), the median estimate of the chance that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. And 48% of respondents gave an answer of 10% or more.

Even if that estimate is two orders of magnitude too high, it's worth a lot of our time. And these are probably mainly normie researchers, not AI safety-ists. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#...




According to a survey of 700 ML researchers, we should throw money at their subject or we might all die...


With great power comes great responsibility. If we start plugging ML everywhere - which we are currently doing - we better be sure that it is behaving as expected.


So far we have seen no power at all from ML. So spending large amounts of money making it "safe" (not that anyone knows how, or even what that would look like) seems like an overreaction.

With no power, comes no responsibility.

But of course, if ML is a person's pet interest, they will likely think we should increase spending on it. That doesn't make it a smart decision...


I believe that you conflate “power” with capabilities wrt AGI.

I’d argue that power is about reach and the capacity to influence. In this regard, ML has quite a bit of power these days: from deciding whether you are allowed to take out a loan, to driving engagement on social media, to predicting and influencing what you will watch, to even the way our language is used.

There are also quite a few implicit ways that ML models have influenced society, e.g. GPGPUs with an emphasis on ML, and ML accelerators everywhere. The effectiveness of certain algorithms made us use them more often, which pushed NVDA and Google toward developing hardware to accelerate those common cases, thereby creating a feedback loop where our algorithms are chosen based on what works well on our GPUs (cf. transformers).

As such systems become more prevalent and influential, ensuring safety and explainability will help us prevent pitfalls that put humans at risk.


We can argue back and forth about the definition of power. Right now, ML has had very little actual impact. And it is very far from the sort of "skynet" general intelligence that people are afraid of. Those are just facts.

Personally, I view it as no different from the hype over crypto, or the fact that we get a huge breakthrough in fusion every 3 months but somehow never get any closer to an actual commercial reactor...

Meanwhile, we KNOW for a fact that climate change is going to be devastating. And there are numerous regions, from the Far East to Europe, where nuclear war is brewing.

I get that a super AI is a more interesting thing to worry about. But we need to deal with our actual, serious issues first, IMHO.


Given that ML has been exploited to drive the polarization that fuels those wars and causes instability, can you really claim it has had no impact on them?

Power is not just about being more intelligent than we are. An isolated, airgapped superintelligence is powerless.



