Yudkowsky wants it all to be taken as seriously as Israel took Iraqi nuclear reactors in Operation Babylon.

This is rather more than "nationalise it", which he has convinced me isn't enough: there is demand in other nations and the research is multinational, so you also have to control the substrate. The US can't do that alone, since it comes nowhere near a monopoly on chip production, but it might get there via multilateral treaties. Except everyone has to be on board with that and not be tempted to respond to airstrikes against server farms with actual nukes. (Yudkowsky is of the opinion that actual global thermonuclear war is a much lower damage level than a paperclip-maximising ASI; in the hypothetical I agree, but I don't expect us to get as far as an ASI before we trip over shorter-term, smaller-scale AI-enabled disasters that look much like all existing industrial and programming incidents, only with more of them happening faster, because of all the people who try to use GPT-4 instead of hiring a software developer who knows how to use it.)

In my opinion, "nationalise it" is also simultaneously too much: companies like OpenAI have a long-standing policy of treating their models as if they might FOOM well before they're any good, just to set a precedent of caution, and nationalisation would mean we couldn't e.g. use GPT-4 for alignment research, such as having it label what the neurones in GPT-2 do, as per: https://openai.com/research/language-models-can-explain-neur...
