
I hate how OpenAI got a lot of their public name and attention through the original benefactors who started it and their pledge to open up AI research to the public, and now they are basically the opposite: monetized, opaque, controlled systems. Really off-putting.



They just chose a misleading name in the first place. It's pretty obvious that they couldn't properly pursue their core mission as a purely open research organization. The desired end result is too valuable.

And if they end up getting close to what they're really after, it's really undesirable to end up in an arms race where only OpenAI's contributions are known to everyone.


> They just chose a misleading name in the first place.

It wasn't just the name, it was the whole rationale they presented for creating OpenAI.

Nick Bostrom was having his 15 minutes of fame with "Superintelligence" [1], and scenarios like an "AI arms race" and the sudden emergence of a super-intelligent singleton [2] were legion. OpenAI was founded on the premise that making leading-edge AI research public was the best way to mitigate that risk [3].

It was presented to the world as a "research institution which can prioritize a good outcome for all over its own self-interest" where "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world" [4].

That lasted all of three years.

[1] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2] https://www.nickbostrom.com/fut/singleton.html

[3] https://en.wikipedia.org/wiki/OpenAI#Strategy

[4] https://openai.com/blog/introducing-openai/


I don’t disagree with your summary of the facts; part of the premise of my comment is that I disagree with that stated rationale for starting OpenAI. I rather think that always publicizing the bleeding-edge research would increase the risks.

Not sure if this is the main reason OpenAI went less open though, as I’m not an insider.


They do publish papers. I'm not sure they were ever about providing open source implementations? AI safety was their thing.


I could be misremembering but I believe the idea was that AI advancement was going to happen anyway so open it up and put it into the hands of the people instead of a few power brokers who would abuse it.


It’s a good thing our benefactors at OpenAI are looking out for “AI safety” so we can’t do things like ask a computer to make a graphically violent image or verbally racist string of text.


Their research remains open:

https://arxiv.org/abs/2102.12092

Their code/models are indeed closed, but there is no realistic alternative.

If they let the public have unrestricted access, deepfakes + child images would appear on Day 1, and OpenAI would get cancelled.

For OpenAI to survive, it has to be closed source.


In the middle of a reproducibility crisis in research, publications without data and models aren't really "open". Having read many ML papers, I've found that very few are easily reproducible.


How would you explain the dozens of other open source versions of computer vision and language models that didn't generate all those harms, even ones that were trained to recreate the exact models that OpenAI withheld due to those concerns?


The open source models can and do generate those harms.

The harms themselves are probably overblown. There are plenty of deepfakes of various celebrities. Mostly people can tell the difference or they just don't care.

I think the reality is that training these models and paying ML engineers is incredibly expensive. Not a good fit for the open source model, thus OpenAI had to convert to SaaS.


No, it doesn't. DALL-E 2 is purposely vague on model architecture to make it impossible to reproduce from the paper alone.


This is the problem with all advanced technology: it has to have control built in. Think of all the bad things you could do with self-driving cars, for example. Imagine we make interstellar travel possible; the amount of energy involved could destroy worlds. It's a very sad thing about the future.

In a way, censorship seeks to make the Human AI "safe".


Then no one should have it. Or is your chosen overlord so benevolent that they would never use it against their livestock?




