
It was complete amateur hour for the board.

But that aside, how did so many clueless folks, who understood neither the technology nor the legalese, and who lacked the acumen to foresee the immediate impact of their actions, end up on the board of one of the most important tech companies?




I think when it started, it was not the most important tech company, just an open research effort.


Not many, and even fewer if you count only folks who have a good grasp of themselves: their psychology, their emotions and the ways those can mislead them, and their heart.

IME, most folks at Anthropic, OpenAI, or wherever who are freaking out about these things never defined the problem well, and they typically engaged with highly theoretical models rather than the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. That was too triggering for me to consider roles there in the past, given that these were typically the folks I knew working there.

Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.


Is this a way of saying that AI safety is unnecessary?


It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and to what may or may not make them more or less "safe".


Yes, with the addition that I do feel we deserve something better than what I perceive we've gotten so far, and that safety is super important; but also I don't work at OpenAI and am not Ilya, so idk.


Pretty sure that Sutskever understands the technology, and it looks like he persuaded the others.



