This is pretty much directly in violation of OpenAI's charter, not to mention how concerning it is to have a CEO putting so much effort into side hustles.
Can you point to what's in violation? I don't see anything at all.
And "side hustles" are not inherently concerning at all. There are quite a number of well-known tech CEO's running multiple companies at once. The only things that matter are a) that the board is aware and feel like they're paying the right amount for the proportion of the CEO's time that they're getting, and b) that they don't involve a conflict of interest. (And generally speaking, being a potential supplier for your company isn't a conflict -- to the contrary, it's a pattern that's been successfully followed before.)
I love how, when a CEO runs 7 companies at once, he's seen as a titan of tech, master of multitasking, hulk of hustle. Everyone is in awe and points to him as an example of awesomeness. But if I, a worker bee, were to get a second full-time job as a software engineer at TechCompany2 unrelated to the work of my TechCompany1, I would be a traitor, disloyal, distracted, double dipping, deserving of being fired.
For the vast majority of CEOs, if they tried to start another company as a side-hustle then they'd also get fired (and potentially sued).
For the CEOs who can run multiple companies at once, there are several factors:
- they founded the company and hold a majority of voting rights, so they can't be fired
- they're so valuable to the company that the shareholders are willing to put up with them running multiple companies at once (and shareholders will definitely grumble about it)
You, on the other hand, aren't really that valuable to the company, so if you try to work multiple jobs at once they have no qualms about replacing you.
If you were a valuable enough IC then you could definitely get into a situation where you're working with multiple companies at once.
People do this with consulting arrangements, where they create a consulting company and are able to work with multiple entities at once because they've given themselves enough leverage to do so.
The issue isn't a double standard. The issue is that you haven't put yourself in a situation where you have enough leverage over the company to work multiple jobs.
They’re complaining about the emotional rhetoric around employees working multiple jobs, not the actual mechanics behind why companies disallow it.
It sometimes seems that they aren’t as harsh with executives.
However, IMO, when companies disallow executives from working multiple jobs, they use the same sort of rhetoric: “the CEO is distracted, directionless, uncommunicative, unable to see past this conflict of interest, etc.” The board will paint the CEO as some dilettante fop.
Look at what happened to Altman: he’s been fired, and they insinuated that he’s a lying, double-dealing dirtbag.
I know that sounds callous but it isn't rare at all for high level functionaries to hold positions in more than one entity, the board of directors of OpenAI is an excellent example of that. And some of those are arguably already conflicted.
Worker bees get paid to work, and companies would like to get a certain amount of time for their money, because that's what it says in your employment contract. You don't necessarily have to agree with that, but then you have to carve your own path rather than hitch your horse to someone else's wagon.
Most salaried engineering contracts don't specify anything about time besides an undefined "full-time". I've had several companies very happy with my production at <30 hours per week. But the contracts state very clearly that I should not take another job.
That's exceptional. Usually a minimum number of hours is specified, along with what your compensation is, any overtime arrangements, and so on. That's also where the 'butts in seats' post-COVID backlash comes from: it may not be specified, but there are clear expectations.
The exclusivity clause is fairly common but may not be enforceable depending on where you live, especially if it is for a part-time job (but in IT most jobs are full-time).
I don't think it's exceptional. I've signed probably a dozen engineering employment contracts and don't remember any of them specifying minimum hours or overtime. Isn't that the definition of salaried employee? I agree that "butt in seat" hours can be an expectation but usually it's not in the contract.
It depends on your goals and perspective. Cynically, it could also be true. Viewed cooperatively, he provides an environment for you to do what you expressed a preference for and trained to do.
That really depends on the company. In many companies, work and worth don’t correspond to the org chart. And everyone knows who’s actually doing the work and why the CEO is there/how they got there. Most companies aren’t large tech companies.
I think we are talking about different things in very different contexts but yes, there are exceptions to everything.
My point is not about relative contributions of value necessarily, which are difficult to compare, but about the structure of employment. The value of a CEO who is doing a good job is not measured by the number of hours they spend on a project. They cannot iterate on solutions the way an engineer can.
The skillset and behavior are different at the meta, macro and micro levels.
So, if a CEO was talking to investors about a new venture (or a set of new ventures) in a closely related field, perhaps even sharing some details about the existing company with those potential investors, would it be fair to say that said CEO was not being entirely candid with the board?
Those are some big "if" statements. I think we would have to see what information was disclosed by Altman to the board with respect to these discussions before making judgements. The precise details of such disclosures may be a matter of legal interpretation.
Or, more likely, all of this will get settled behind closed doors and we will never really know.
>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Concentration of power, mostly. No power greater than a fab for that right now.
Not a lawyer, but I don’t think it is allowed to take a company's intellectual property or research results and use those results for a side venture without proper and timely disclosure, even if there is no direct conflict of interest at the time it happens.
Unless you specifically have a contract that allows you to avoid disclosures. Or have a specific agreement that transfers intellectual property or research results to you.
No, they will just add it to their reserves. They don't distribute the profits, so no dividends (no shareholders), bonuses or stock buy-backs (no stock). But you can totally use the profits made in this year to fund next year's costs.
The obvious argument is that since we have not solved alignment, accelerating AGI is unsafe. I’d wager that is the general line of reasoning from Sutskever and the board.
“We are growing quickly enough and a GPU shortage gives everyone more time to catch up on safety research” seems like a logically consistent position to me.
If anything, it seems to me that unlocking OpenAI and the broader market from what’s been an effective monopoly through more chip competition would be in line with the charter.
> Technical leadership
> To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
What, to you, is laugh out loud funny about the parent comment? They gave a counter argument with examples from the charter and you respond with "LOL!"? How about responding with a better argument?
>They gave a counter argument with examples from the charter
"Examples" is a very generous word. They merely quoted parts of the charter and pretended the argument would stand on its own.
Watch me literally do the same thing.
Here are all the parts of the charter Sam violated, and I'll even do one better and provide insight:
>Long-term safety
He has been very clear about his position that OpenAI should press forward, despite risks, and used faulty equivocation to justify said position. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” - Sam
>Technical leadership
Sam doesn't value technical leadership, as his history at other companies shows, and he isn't technical himself. He will immediately pivot to cashing out the moment the time is right. OpenAI is a steward of the domain, not a profiteer. Attempting to solicit special arrangements with other vendors isn't going to move the domain forward; that happens through research, not special kickbacks.
>Cooperative orientation
The board clearly didn't believe he was being cooperative, with them and perhaps with the larger AI community. Given his positions on safety and progress, it's not surprising to see him being ousted.
Since my comment is easily twice the effort of GP, and I have now baselined my comment with the standards you clearly see as valuable, I look forward to your constructive input.
Which I doubt there will be, which is what was funny about the original comment. All that it deserved was "LOL."
This is not even on topic. You seem to think it is literally about the risk of a magical incantation of AGI that someone was going to accidentally utter. Instead it is about working the conversation for support.
> Sam doesn’t value technical leadership
He doesn’t prioritize technical decisions above all, which is what you want from an organizational leader. He has hired and enabled some of the best technical competency in a generation to do things no one thought were possible.
> The board didn’t believe he was being cooperative
“Being cooperative” as defined here is so naive as to be comical on its own. Internal politics are a constant presence. His job is not to be maximally cooperative without regard for strategy.
The only thing that is clear to me is that non-profit structures as presently conceived are totally inadequate for the use OpenAI has put them to, and in particular are not up to withstanding growth pressures.
You’re right, I shouldn’t have gone absolute. However many Googlers also thought many other things were possible that weren’t, so from here maybe we devolve into discussions of the relative cost/value of Type I vs Type II error.
Will you at least admit that this comment of yours, with quotes from the charter and thoughts about each quote, is contributing more to the conversation?
I'm not surprised the board had to act.