
I think OpenAI made two strategic blunders:

1. Publishing the DALL-E papers and pre-trained CLIP weights. This inspired Stable Diffusion and Midjourney, cutting the head start OpenAI's DALL-E service had to get established by maybe a year. During that time, they could've gained long-term customers, generated some revenue, and established partnerships with Adobe and other graphics software makers. Now they're second string.

2. Publishing the GPT-3 paper and releasing ChatGPT for free online. Instead, they should've improved it by adding things like references and released it already integrated into Bing/Word/Cortana. That would've been more valuable to Microsoft. Now Google has time to catch up and, if they do it right, to have their own model in their search engine not long after Bing. And Anthropic is working on a chat model and will soon have a good one as well.

One possible counterargument could be that these haphazard releases allowed OpenAI to gain mindshare among researchers. But this is much more vague and speculative.




> I think OpenAI made two strategic blunders

to quote the "Introducing OpenAI" article from December 11, 2015:

"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

https://openai.com/blog/introducing-openai/


I believe OpenAI has been for-profit since 2019. See https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...


ok, so?

The things mentioned are strategic blunders if their goal is to maximise profit (maybe). If they have some other goal, they might not be blunders. In fact, they might be right on track with what they want to achieve.

The legal form of the corporate structure is not the deciding factor here; what their goals are is.


They are acting as if they want to be acquired. Publishing results and making a splash with your technology is a good way to get acquired.


Hard disagree; I think 2 is actually a genius move after the major fail with the DALL-E 2 pricing model. In general, they're probably hoping for a Stable Diffusion-like adoption curve, but unless they have an offering people can use for free that is flexible enough (giving users just a prompt box is not flexible enough), someone else is going to eat their lunch on LLMs too, once you no longer need Colab to run them.


I don't understand: many more people would use it for free in Bing/Word/Cortana than on a website that nobody used previously.


True, they would. But there's a limit to how much you can get out of it with just a text box. Just look at DALL-E 2 vs Stable Diffusion: even though DALL-E is the more capable and clearly better of the two, SD gives much more consistent results if you know what you're doing. Because you have full access to the model, we now have novel ways of inlining state into pieces of text to assign importance, whole algebra done on the embeddings, etc. When a similar thing happens to LLMs, nobody will care about the text box in MS products.
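(To make the "algebra on embeddings" point concrete, here is a toy sketch, not real Stable Diffusion code: `embed()`, the made-up 8-dimensional word vectors, and the specific weights are all stand-ins for a real text encoder such as CLIP's. The idea is just that with full model access you can re-weight and combine prompt embeddings before they condition the model, instead of being limited to a literal text box.)

```python
import numpy as np

# Toy stand-in vocabulary: random 8-dim vectors instead of a real
# CLIP-style text encoder's output.
rng = np.random.default_rng(0)
VOCAB = {w: rng.standard_normal(8) for w in ["castle", "forest", "night"]}

def embed(prompt: str) -> np.ndarray:
    # Toy encoder: average the per-word vectors.
    # (Real encoders are transformers; this is only for illustration.)
    return np.mean([VOCAB[w] for w in prompt.split()], axis=0)

# Prompt weighting: emphasise "castle" over "forest" by scaling
# their embeddings before combining them.
e = 1.5 * embed("castle") + 0.5 * embed("forest")

# "Embedding algebra": nudge the combined prompt toward "night".
e = e + 0.8 * embed("night")

# In a real pipeline, e would now condition the diffusion model.
print(e.shape)  # (8,)
```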


>Publishing the Dall-E papers

Isn't the Stable Diffusion implementation based on the Imagen paper?


It gets a bit confusing between DALL-E, DALL-E 2, Stable Diffusion 1.x and 2.x. But Imagen uses some ideas introduced by OpenAI, and they also published the influential GLIDE paper. To greatly simplify, my impression is that OpenAI's big impact was proving that scale works, which got many other large-scale projects going.


I hope your ideas are never implemented and I'd rather that these ideas burn in hell.

OpenAI is already a misnomer, don't make them comically evil.

AI needs more democratization à la Hugging Face. Not more """""Open"""""AI


I assume it also allowed them to get a lot of testing with people pushing its limits in different ways.



