
Why did they release this model then?



Their public statements say that the only way to safely learn how to deal with what AI can do is to show what it can do and get feedback from society:

"""We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.""" - https://openai.com/index/planning-for-agi-and-beyond/

I don't know if they're actually correct, but the reasoning at least passes the sniff test for plausibility.


