
General research into AI alignment does not require that models be controlled by a few corporations. On the contrary, the research would be easier with very capable, freely available models.

This is only helpful in the sense that a superintelligence well aligned to make Sam Altman money is preferable to a badly aligned superintelligence that ends up killing humanity.

It is entirely possible that a superintelligence well aligned with its creators is still a net negative for humanity.




If you consider the broader picture, unleashing a paperclip-style AI (aligned to maximizing $MEGACORP profit) on the Local Group is almost certainly worse for all Local Group inhabitants than annihilating ourselves and never building it.



