Hacker News

Why? If the product is useful (it is to me), then why do you care so much about the internal politics? If it ceases to be useful or something better comes along, sure. But this strikes me as being chronically online and involved in drama.



This internal drama can play out in the service. Frame the question as: do you want to build on an unstable platform?


Do you want to build on subpar technology?

Nothing beats OpenAI at the moment. Nothing is even close.


Phind is an example where they use their own model, and it is pretty good at its specialty. OpenAI is hard to beat “in general,” especially if you don’t want to fine-tune etc.


As long as you can outrun the technical debt, sure. Nothing lasts forever. Architect against lock-in; this is just good vendor/third-party risk management. Avoid paralysis, otherwise nothing gets built.
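Architecting against lock-in can be as simple as keeping a thin interface between application code and the vendor SDK. A minimal sketch using only the standard library; the class and function names here are hypothetical, not from any vendor's API:

```python
from typing import Protocol


class TextCompleter(Protocol):
    """Any LLM backend the application can talk to."""
    def complete(self, prompt: str) -> str: ...


class EchoCompleter:
    """Stand-in backend for testing; a real one would wrap a vendor SDK."""
    def complete(self, prompt: str) -> str:
        return prompt.upper()


def summarize(text: str, llm: TextCompleter) -> str:
    # Application code depends only on the interface, never on a vendor SDK,
    # so swapping providers is a change at the call site, not a rewrite.
    return llm.complete(f"Summarize: {text}")


print(summarize("quarterly report", EchoCompleter()))  # prints "SUMMARIZE: QUARTERLY REPORT"
```

The design choice is the usual one for third-party risk: the vendor-specific code lives in one adapter class, so a provider blowup costs you one file, not the whole codebase.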


I'm convinced embeddings are the ultimate vendor lock in of our time.
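The lock-in argument here: embedding vectors are only meaningful relative to the model that produced them, so switching providers means re-embedding the entire corpus. A toy illustration of the incompatibility, with made-up vector sizes:

```python
# Hypothetical illustration: stored vectors from provider A cannot even be
# compared against query vectors from provider B, let alone compared meaningfully.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    return sum(x * y for x, y in zip(a, b))


stored = [0.1] * 1536   # corpus vector from provider A's model (size illustrative)
query = [0.1] * 768     # query vector from provider B's model

try:
    dot(stored, query)
except ValueError as err:
    print(err)  # prints "dimension mismatch: 1536 vs 768"
```

Even when the dimensions happen to match, the two vector spaces are unrelated, so every stored embedding has to be regenerated with the new model before search works again.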


If OpenAI decides to change their business model, it might be bad for companies that use them, depending on how they change things. If they are looking unstable, might as well look around.


I despise the engineering instinct to derisively dismiss anything that involves humans as "politics".

The motivations of individuals, the trade-offs of organizations, the culture of development teams - none of those are "politics".

And neither is the fundamental hierarchical and governance structure of big companies. These things influence the stability of architectures, the design of APIs, and operational priorities. It is absolutely reasonable to let the shenanigans OpenAI went through shake one's confidence in depending on the company's technology.


It’s not about politics, it’s about stability and trust.

Same reason I’m hesitant to wire up my home with IoT devices (just a personal example). Nothing to do with politics; I’m just afraid companies will drop support and all the things I invested in will stop working.


Eventually you have to make a decision though? Even if it’s the wrong decision?

Our time is finite.


Not filling your home with more triangulating spyware is a decision.


Yes, but that's not the decision the person in this thread was struggling with - they were struggling with the idea that they may invest $$ into something that 2, 3, or 10 years down the road no longer works because a company went out of biz.

Sounds like they would like to have the devices but have a hard time pulling the trigger for fear of sinking money into something temporary.


Yeah, and the operational stability of a company is a factor that goes straight to its ability to continue as a going concern. So it's reasonable for many people to base their decision on this kind of drama (even if not everyone agrees on the importance of this factor).


You may want to go back and re-read the thread you are replying to. The person I replied to wasn't talking about drama; they made an "IoT home devices all spy on you" argument.


It’s possible to make a more charitable reading of their comment as being on topic.


Because you don't rely on a business that had 80% of its staff threaten to quit overnight?


> staff threaten to quit overnight

They didn't, though. They threatened to continue tomorrow!

It's called "walking across the street" and there's an expression for it because it's a thing that happens if governance fails but Makers gonna Make.

Microsoft was already running the environment, with rights to deliver it to customers, and added a paycheck for the people pouring themselves into it. The staff "threatened" to maintain continuity (and released the voice feature during the middle of the turmoil!).

Maybe relying on a business where the employees are almost unanimously determined to continue the mission is a safer bet than most.


>They didn't, though. They threatened to continue tomorrow!

Are you saying ~80% of OpenAI employees did not threaten to stop being employees of OpenAI during this kerfuffle?


They're saying that ~80% of OpenAI employees were determined to follow Sam to Microsoft and continue their work on GPT at Microsoft. They're saying this actually signals stability, as the majority of makers were determined to follow a leader to continue making the thing they were making, just in a different house. They're saying that while OpenAI had some internal tussling, the actual technology will see progress under whatever regime and whatever name they can continue creating the technology with/as.

At the end of the day, when you're using a good or service, are you getting into bed with the good/service? Or the company who makes it? If you've been buying pies from Anne's Bakery down the street, and you really like those pies, and find out that the person who made the pies started baking them at Joe's Diner instead, and Joe's Diner is just as far from your house and the pies cost about the same, you're probably going to go to Joe's Diner to get yourself some apple pie. You're probably not going to just start eating inferior pies, you picked these ones for a reason.


They showed they are hypocrites.

Blaming the board for hindering OpenAI's mission by firing Altman, while at the same time threatening to work for MS, which would kill that mission completely.


I don't think that's necessarily true or untrue, but to each their own. Their mission, which reads "... ensure that artificial general intelligence benefits all of humanity," leaves a LOT of leeway in how it gets accomplished. I think calling them hypocrites for trying to continue the mission with a leader they trust is a bit...hasty.


>Microsoft was already running the environment, with rights to deliver it to customers.

But they don't own it. If OpenAI goes down, they have rights to nothing.


> But they don't own it. If OpenAI goes down, they have rights to nothing.

This is almost certainly false.

As a CTO at some of the largest banks and hedge funds, and a serial founder of multiple Internet companies, I assure you that contracts for novel, "existential" technologies the buyer builds on top of are drafted with rights that protect the buyer in the event of the seller blowing up.

Two of the most common provisions are (a) code escrow with a perpetual license (you blow up, I keep the source code and the rights to continue it) and (b) key person (you fire whoever I did the deal with, that triggers the contract, we get the stuff). Those aren't ownership before the blowup; they turn into ownership in the event of anything that threatens stability.

I'd argue Satya's public statement on the Friday the news broke ("We have everything we need..."), without breaching confidentiality around the actual terms of the agreement, signaled that Microsoft has a contract of that nature.


They threatened to walk across the street to a service you aren’t using.


And if they walk across that street, I'll cancel my subscription on this side of the street, and start a subscription on that side of the street. Assuming everything else is about equal, such as subscription cost and technology competency. Seems like a simple maneuver, what's the hang up? The average person is just using ChatGPT in a browser window asking it questions. It seems like it would be fairly simple, if everything else is not about equal, for that person to just find a different LLM that is performing better at that time.


It's super easy to replace an OpenAI API endpoint with an Azure API endpoint. You're totally correct here. I don't see why people are acting like this is a risk at all.
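For the common REST path, the swap is mostly a URL and an auth header: Azure routes by deployment name plus an `api-version` query parameter, where OpenAI takes the model in the request body. A sketch of the difference; the resource name, deployment name, and api-version below are placeholder values:

```python
def chat_request_target(provider: str, deployment: str = "gpt-4"):
    """Return (url, auth_header) for a chat-completions call."""
    if provider == "openai":
        # OpenAI: fixed URL, model goes in the JSON body, Bearer auth.
        return ("https://api.openai.com/v1/chat/completions",
                {"Authorization": "Bearer $OPENAI_API_KEY"})
    if provider == "azure":
        # Azure OpenAI: hypothetical resource; deployments are named at creation.
        base = "https://my-resource.openai.azure.com"
        return (f"{base}/openai/deployments/{deployment}/chat/completions"
                "?api-version=2023-05-15",
                {"api-key": "$AZURE_OPENAI_KEY"})
    raise ValueError(f"unknown provider: {provider}")
```

The request and response JSON are close enough that many client libraries expose the switch as a couple of constructor arguments.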


Not that easy. MS can sell GPT as a service but doesn't own it.

No OpenAI, no GPT.


I was going on the assumption that MS would not have still been eager to hire them on if MS wasn't confident they could get their hands on exactly that.


That's not how contracts like this are written.

It's far more common that if I'm building on you, if you blow up, I automatically own the stuff.


It’s a bit like buying a Tesla.


Based on how their post is worded, I'm guessing they never needed OpenAI's products in the first place. For most people, OpenAI's offerings are still luxury products, and all luxury brands are vulnerable to bad press. Some of the things I learned in the press frenzy certainly made me uncomfortable.


You don’t believe that the non-profit’s stated mission is important enough to some people that it is a key part of them deciding to use the paid service to support it?


> why do you care so much about the internal politics?

Agree, and why did they go from internal politics -> external politics (large-scale external politics)?


It’s a dramatic story: the high-flying CEO of one of the hottest tech companies is suddenly fired without explanation or warning. Everyone assumes it’s some sort of dodgy personal behavior, so information leaks that it wasn’t that; it was something between the board and Sam.

Well, that’s better for Sam, sure, but that just invites more speculation. That speculation is fed by a series of statements and leaks and bizarre happenings. All of that is newsworthy.

The most consistently asked question I got from various family over thanksgiving beyond the basic pleasantries was “so what’s up with OpenAI?” - it went way outside of the tech bubble.


> why did they go from internal politics -> external politics (large scale external politics)

My guess is it has something to do with the hundreds of employees whose net worth is mostly tied up in OpenAI equity. It's hard to leverage hundreds of people in a bid for power without everyone and their mother finding out about it, especially in such a high-profile organization. This was a potentially life-changing event for a surprisingly large group of people.


The public drama is a red flag that the organization's leaders lack the integrity and maturity to solve their problems effectively and responsibly.

They are clearly not responsible enough to deal with their own internal problems maturely. They have proven themselves irresponsible. They are not trustworthy. I think it's reasonable to conclude that they cannot be trusted to deal with anybody or any issue responsibly.



