I don’t see it.

If anything, it seems to me that unlocking OpenAI and the broader market from what’s been an effective monopoly through more chip competition would be in line with the charter.

> Technical leadership

> To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.




>I don’t see it.

LOL!


Can you report back on which part of the charter you think he violated?

I’ll link it for you:

https://openai.com/charter


What, to you, is laugh out loud funny about the parent comment? They gave a counter argument with examples from the charter and you respond with "LOL!"? How about responding with a better argument?


>They gave a counter argument with examples from the charter

Examples is a very generous word. They merely quoted parts of the charter and assumed the argument would stand on its own.

Watch me literally do the same thing.

Here are all the parts of the charter Sam violated, and I'll even do one better and provide insight:

>Long-term safety

He has been very clear about his position that OpenAI should press forward, despite risks, and used faulty equivocation to justify said position. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” - Sam

>Technical leadership

Sam doesn't value technical leadership, as shown by his history at other companies, and he isn't technical himself. He will immediately pivot to cashing out the moment the time is right. OpenAI is a steward of the domain, not a profiteer. Attempting to solicit special arrangements with other vendors isn't going to move the domain forward; that happens through research, not special kickbacks.

>Cooperative orientation

The board clearly didn't believe he was being cooperative, with them and perhaps the larger AI community. Given his positions on safety and progress, it's not surprising to see him being ousted.

Since my comment is easily twice the effort of GP, and I have now baselined my comment with the standards you clearly see as valuable, I look forward to your constructive input.

I doubt there will be any, which is what was funny about the original comment. All it deserved was "LOL."


Allow me to counter:

> Ignoring risks

This is not even on topic. You seem to think it is literally about the risk of a magical incantation of AGI that someone was going to accidentally utter. Instead it is about working the conversation for support.

> Sam doesn’t value technical leadership

He doesn’t prioritize technical decisions above all else, which is what you want from an organizational leader. He has hired and enabled some of the best technical talent of a generation to do things no one thought were possible.

> The board didn’t believe he was being cooperative

“Being cooperative” as defined here is so naive as to be comical on its own. Internal politics are a constant presence. His job is not to be maximally cooperative without regard for strategy.

The only thing that is clear to me is that non-profit structures as presently conceived are totally inadequate for the use OpenAI has put them to, and in particular are not up to withstanding growth pressures.


>things no one thought were possible.

Yeah, no one... except Googlers.


You’re right, I shouldn’t have gone absolute. However many Googlers also thought many other things were possible that weren’t, so from here maybe we devolve into discussions of the relative cost/value of Type I vs Type II error.


Will you at least admit that this comment of yours, with quotes from the charter and thoughts about each quote, is contributing more to the conversation?



