> Thus we get quotes from them like "overpopulation on Mars"

Context, please? :)




The point being that AGI is at least 50 years away, and probably 200 or more.

It is such a distant threat and yet such a focus, and all the while we sit in our lovely apartments 40 minutes from nuclear incineration.


It's also just a cherry-picked example. Surveys of experts show there isn't really a consensus, but they generally aren't that optimistic.

https://nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


Yeah, clearly Dr. Ng was addressing fears of AGI in that quote. The thing is that machine learning can be used for nefarious purposes, intentionally or even unintentionally, without AGI being a reality. Those are the more realistic fears.

The conflation of machine learning and AGI causes confusion yet again.


Agree - I think your point that the use of the tool, rather than the tool itself, should be the concern is really important. Clearly devices that can kill thousands (like nukes) must be regulated tightly, but extending the same level of control to the whole technology makes applications of great benefit (like medical scanners using nuclear tech) potentially untenable.





