OpenAI Trains Language Model, Mass Hysteria Ensues (approximatelycorrect.com)
152 points by zackchase on Feb 17, 2019 | 114 comments



Ilya from OpenAI here. Here's our thinking:

- ML is getting more powerful and will continue to do so as time goes by. While this point of view is not unanimously held by the AI community, it is also not particularly controversial.

- If you accept the above, then the current AI norm of "publish everything always" will have to change

- The _whole point_ is that our model is not special and that other people can reproduce and improve upon what we did. We hope that when they do so, they too will reflect about the consequences of releasing their very powerful text generation models.

- I suggest going over some of the samples generated by the model. Many people react quite strongly, e.g., https://twitter.com/justkelly_ok/status/1096111155469180928.

- It is true that some media headlines presented our nonpublishing of the model as "OpenAI's model is too dangerous to be published out of world-taking-over concerns". We don't endorse this framing, and if you read our blog post (or even in most cases the actual content of the news stories), you'll see that we don't claim this at all -- we say instead that this is just an early test case, we're concerned about language models more generally, and we're running an experiment.

Finally, despite the way the news cycle has played out, and despite the degree of polarized response (and the huge range of arguments for and against our decision), we feel we made the right call, even if it wasn't an easy one to make.


> - The _whole point_ is that our model is not special and that other people can reproduce and improve upon what we did. We hope that when they do so, they too will reflect about the consequences of releasing their very powerful text generation models.

If this is your whole point, then I think you are missing something fundamental. Implementing these models doesn't require reflection, or introspection, or any sort of ethical or moral character whatsoever; and even if it did, all that will happen eventually is someone (without the technical background) will simply throw a lot of money at someone else (with the technical background, but who needs to, you know, eat, and pay rent, and so on) to implement it. You are fooling yourself if you think your stance makes a single mote of difference in this arms race.


>You are fooling yourself if you think your stance makes a single mote of difference in this arms race...

In fairness, if that's true, then no one has any need of their model.

More seriously speaking, why does anyone need, say, "training set x", or "model y", to make their implementation work? You don't. So I don't really understand why everyone is so worked up about them not releasing this stuff. If you want to do it, do it. If not, don't. But there's no need to say, "I demand everyone do it, and I'll have a meltdown if they don't."


No one is saying "I demand everyone do it." There are two points:

- If they are going to publish the research, and want to claim it as research (which they will, either by submitting it to a conference or putting it on arXiv for the citations), then they should publish the supporting material, because without the supporting material it is impossible for reviewers or other researchers to evaluate it. This is not just the model--they are also not publishing the training code or the dataset.

In short, they want to have it both ways: to have their work accepted as scientific research, while providing absolutely no way of determining whether the results are reproducible. That is a horrible, horrible standard. (Other companies are guilty of this as well, btw.) I mean, think about how absurd it is that they are saying "our scientific results are too good to publish. Trust us." Why is this acceptable? Because it sure as hell wouldn't be acceptable if it were a random person releasing a paper claiming incredible accomplishments while providing absolutely no evidence.

- The other criticism is that the justification for why they aren't publishing (which is that they are too concerned with the moral and ethical implications of their work) is, well, a load of crap. They aren't doing anything to contribute to the ethical or moral use of these tools by doing this and they aren't slowing research into the area one bit. If they really wanted to have an impact here they should have just not said anything (but of course, then the authors couldn't put this on their resume...).

Whether they are releasing the model is not the issue on its own, and I don't think anyone is throwing a fit because someone doesn't release their model. It's the _why_ and the implications that bother people.


Exactly. This is like withholding spam samples, or details of how spammers operate, from the people doing spam-detection work. That side (and the cultural discussion) needs all the head start it can get, rather than complacently trusting that some arbitrary "experts" will patronizingly "protect" them.


If you look at it as a PR stunt, it is almost certainly a good idea. If a bad actor can auto-generate text that is not really distinguishable from something written by a human, how does a community with open membership (e.g., HN) protect itself? I imagine this technology will enable interesting new attacks against online communities; we haven't seen that for a while.

OpenAI are extremely sensible to draw attention to the fact that AI is approaching a boundary that has practical implications. It is good that everyone is being alerted that that boundary might be crossed at any time in the foreseeable future.


But ... it's not novel. We could already generate convincing gibberish years ago.

Now the novelty is that this can be better targeted. But even simple Markov-chain based text generators were good enough to fool people for a bit.

And there have always been people with too much free time who write. A lot. (See for example the crackpots and conspiracy theorists who bombard physics forums. See the 9/11 and Zeitgeist movies, etc. See how much has been written about anti-vaxx, about quantum woo, etc.)

Reputation systems work pretty well for countering spammers.

And against APTs (advanced persistent threats, spearphishing attacks, etc.) there's no real "universal" protection anyway. (You need a competent security team to out-think and out-resource the attackers in every possible dimension.)

This AI is the same as the paid Russian trolls and the unpaid scammers, and so on.


The OpenAI samples are leaps and bounds ahead of traditional Markov-chain generated text. I don't think you can compare the two. It's the fluency and plausibility that gives pause around a public release.

I agree with your last point though - it falls into the same category as paid Russian trolls. I think that's exactly why they were hesitant to release the pre-trained models - they didn't want to make it easier/cheaper for a bad actor to replicate the 2016 election.

It remains to be seen whether their decision will make an iota of a difference. But I understand their motivation.


But ... it's not novel.

I work in this field, and yes, this is very novel (at least in terms of the quality).

It's the biggest improvement in quality I've ever seen. The long term coherence is so much better than anything else that has ever been built.


No, I'm sorry, I wasn't precise enough. Yes, it's an amazing feat of engineering, and a truly great peak of text generation. But it's that. Text generation.

Yes, it can serve as a great customized propaganda generator, and yes, people can be spun 'round and 'round with it. But they already can be, with pretty much anything, from the simplest of phrases like "make X great again" to the elaborate scams of new-age bullshit.

I simply disagree on the "virulence" or weaponization factor of this with others. (Especially when it comes to the possible "defenses", none can be "deployed" in 6 months. You can't teach critical thinking to billions of people overnight.)


I've worked in the computational propaganda field, and I tend to agree that there is no real known defense yet.

I don't have a strong opinion about if they should have released this model or not.

I do know it would make a great commercial spam generator though. Want a million product reviews which seem legitimate quickly? This is the thing..


Markov-chain generators are extremely lacking in long-term coherency. They rarely even make complete sentences, much less stay on topic! They were not convincing at all-- and many of the GPT-2 samples are as "human-like" as average internet comments.
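For concreteness, a word-level Markov generator is basically just a table of observed next words. Here's a toy sketch in Python (my own illustration, not any particular historical tool), which shows why the output is locally plausible but falls apart globally:

    import random
    from collections import defaultdict

    def build_chain(text):
        # map each word to the list of words observed to follow it
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=30):
        word, out = start, [start]
        for _ in range(length):
            nexts = chain.get(word)
            if not nexts:
                break
            word = random.choice(nexts)  # locally plausible, globally incoherent
            out.append(word)
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the cat"
    print(generate(build_chain(corpus), "the"))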

Conjecture: GPT-2 trained on reddit comments could pass a "comment turing test", where the average person couldn't distinguish whether a comment is bot or human with better than, say, 60% accuracy.


That's an indictment of reddit comments more than of AI. Remember that conditioned on the human-provided seed prompt, there is no statistical surprise (the definition of information) in the generated text. If all reddit comments are riffs on the OP based on second-hand information, well then they may as well be bot-generated already.

At this stage, these AIs can only help. Imagine we are given this tool that can generate samples from the "uninformative but realistic-looking text" distribution: we can then put it in a discriminator to filter out blabbering bots and humans together, or invert it to summarize the small kernel of information, and that would be a great thing. The better these models learn typical human behavior, the better off we are at identifying the truly exceptional. It's when AI starts to sense and incorporate novel information from the non-human environment that you really have to worry.


>That's an indictment of reddit comments more than AI.

Perhaps, but that's the world we live in. I suspect the average reddit commenter is already more articulate than the average person (citation needed, I know. But reddit skews highly educated young male in a first-world country. There's no way they do worse than a worldwide average).

Other than that, I agree with your comment.


I know they are extremely lacking; but compared to that, a hyper-fancy NN with layers and layers of the darkest black magic, trained at the zenith of night for thousands of man-years in the crypts of terror itself, the TPU... yeah, it's not surprising it's better.

But it's no symbolic reasoning. It's not constructing a counter-argument from your argument. It simply lives off previous epic rap battles of internet flamewar history about .. well, about anything, since it's the Internet, and people like to chat, talk, write essays on every topic there is. Satire too. So there is always something to build that lang model on.

Though that will come too. Eventually.


I'm not sure it has much in the way of implications.

There is no real profit to be made by generating realistic looking text. Spammers don't work that way, spammers haven't cared about realistic looking text for years. Nor have spam filters cared much about text for a long time, exactly because it's so easy to randomise. Anti-spam is not a good reason to hold back on language generation models, in my view.

As for HN, if bots can write posts as good as humans, great, why hold back?


You’re fooling yourself if you think there are no significant uses of text generation. Fake news, propaganda, advertising, fake reviews, fake everything. Fabricated email from friends, family, and colleagues. Whole online communities fabricated out of whole cloth. It is a weapon, and a powerful one.


No, it's useless, and I speak from experience dealing with spammers who forged mail from friends, family, and colleagues in the past.

People are not trivial automatons who can have their opinions rewritten on the fly by auto-generated text. If auto-generated text reaches into its giant grab-bag of learned expressions and produces something actually interesting or insightful, people might be interested in that new line of thinking, but if - like many of these examples - it's essentially rambling if coherent nonsense then it won't have any impact at all.

So I rather think it's you fooling yourself. You've been reading comments online for years without knowing who or what produced them. If you discovered half of them were artificial tomorrow, what difference would it make? The people around you are already judging arguments based on the content, not their volume or who wrote them.


No, a more effective PR stunt would be to release the model, and better ones, and make it so easy any idiot could use them. THAT would catch the attention of Congress, and THAT would result in funds and legislation to combat it. This won’t even register on a subcommittee staffer’s wet dream. It is not human nature to pay attention to far-off, hypothetical, abstract threats, only concrete and immediate ones. You could release a thousand papers like this and it wouldn’t do anything even approaching the effect of congressmen and their staff getting assloads of fake but convincing email/docs/etc, of the press being inundated with thousands of fake but convincing tips, of tens of thousands of people calling the police because some asshats are spamming them with convincing letters from their dead grandma or whatever, of convincing communications to banks or brokers, of letters to agencies claiming widespread danger (i.e., there is salmonella in half the food at XYZ), of kids sending forged letters to their school from their supposed parents to let them leave campus, and so on. I’m sure you can think of better examples.


I’m not entirely sure that a bad actor would get any more scalability from it than from a Mechanical Turk farm, at least as far as impact goes.

It seems that as far as information warfare goes, “less is more” works quite well, and they rely on the targeted people to spread the news for them.

When you want to drive an agenda you don’t need 100,000 unique comments, you need a good copypasta.

Overall I’m sick of this dramatization of the AI catastrophe until there is a proven path, with agency, for it to actually operate in the real world.

A chat bot isn’t a threat to anyone even if it turns homicidal.


But a Mechanical Turk is traceable and definitely not anonymous. Using a self contained model somewhere on a server/cluster/workstation could be.

Regarding an agenda, sure, good pasta is fine and all, and regular ol’ people are fine, but it is not cost effective. This is a million times cheaper, which means you can use it everywhere, not just the obvious places; you can be everywhere, and you can do more than just push a couple of big items, you could push tens of thousands of them, micro-targeted all the way down to the individual. Don’t dismiss it so easily: the potential scale is far, far larger than anything existing to date.

And I would note that the reason 100,000 comments aren’t effective now is precisely because they are too formulaic, too obviously fake when used on such a large scale. This has the potential to create real, live, seemingly active and believable online communities of millions of people, all at fractions and fractions and fractions of a penny compared to current methods. People read news, then comments (or reviews or whatever), because they use them to determine the validity of the content they just read; if it’s no longer possible to tell from the comments what’s a scam and what isn’t... well, you could do a lot of things with that.


Ok but isn’t this the opposite of OpenAI’s “nukes are safer when multiple actors have them” strategy wrt AI?

I’m also confused by the threat models earnestly put forth in your blog post. Are we really concerned about deep faking someone’s writing? The plain word already demands attribution by default: we look for an avatar, a handle, a domain name to prove the person actually said this.


> Ok but isn’t this the opposite of OpenAI’s “nukes are safer when multiple actors have them” strategy wrt AI?

It seems more like "nukes are safer when multiple rational state-level actors have them", rather than when anyone able to pull a git repo has them.


Yep. Maybe I misunderstood the subtler points of OpenAI’s “democratize AI” strategy, and this has been the plan all along. But AFAIK they haven’t put an “among a few rational state actors” asterisk on anything up until now.

Regardless, I agree with TFA that this is a silly and arbitrary time to yell “fire.” It’s PR.


> But AFAIK they haven’t put an “among a few rational state actors” asterisk on anything up until now.

True. On the PR side though, it'd be incredibly hard to say "we want to make replication moderately difficult, but not too difficult." Everyone would end up arguing exactly how much should be released, how it would prevent X,Y,Z folks from contributing to AI, etc.

> Regardless, I agree with TFA that this is a silly and arbitrary time to yell “fire.” It’s PR.

Alternatively, it does provide good insight into the reactions in the community as a whole, and continues the conversation on exactly how much should be released. Maybe I'm not far enough into the ML community, but the decision not to put the "keys to the kingdom" on github for every script kiddie to weaponize seems reasonable to me, especially as a precedent.


More like “nukes are safer when we control them and the rest of you cite them”


> some of the samples generated by the model

Mostly it's scary not because it's good - as writing goes, it's quite bad. It forms coherent sentences, but otherwise it's nonsense. I've seen similar nonsense producers in the early 90s, based on Markov chains and whatnot.

No, the scary part is how much it reminds me of what I am reading in the media all the time. My current pet concern is that AIs will start passing the Turing test not because AIs are getting so good but because humans are getting so bad. A bunch of nonsensical drivel can easily be passed off as a thoughtful analysis or a deep critical think-piece - and that's not my conjecture; it has been repeatedly proven by submitting such drivel to various academic journals and having it accepted and published. I'm not saying people are losing critical thinking skills - but they are definitely losing (or maybe never even had?) the habit of consistently applying them.


> I've seen similar nonsense producers in the early 90s, based on Markov chains and whatnot.

Exactly. When it comes to generating a large volume of apparently-good sentences, non-AI (or classical) approaches are still better than good. Those will be equally disruptive, since the defending side has yet to develop a proper countermeasure based on the "sensible"-ness of content. Plus, they will be much easier to customize and adapt to the situation, while ML-based solutions often need remodeling and retraining when repurposed.

> My current pet concern is that AIs will start passing the Turing test not because AIs are getting so good but because humans are getting so bad

AI will start deceiving the public even before it passes the Turing test. It's much harder to spot bots amidst people than in a 1-vs-1 chatroom.


> Exactly. When it comes to generating a large volume of apparently-good sentences, non-AI (or classical) approaches are still better than good.

Can you cite your source? I find this hard to believe.


> The _whole point_ is that our model is not special and that other people can reproduce and improve

Only people with a large amount of money and a lot of expertise. What you are doing is the opposite of democratizing AI.


Actually this shows why OpenAI matters. Google have been training and refining Transformer architectures for years; how unlikely is it that nobody there tried training a language model at this scale or larger with similar results?

Yet from Google we heard nothing. Which is the optimal decision for them - they only lose by blowing the whistle.


A lot of people have results similar to this - but most people generating a paragraph of slightly_weird_but_plausible_if_you_read_quickly text using a primped version of BERT one time out of 25 regarded it as more or less pointless. But journalists don't.

This would be OK if this were the first time the media had gone wild over an AI story. But this has already happened 10,000 times this year.


Seems like the way it worked is that the blog post was discussed here and on Twitter and many people thought it was interesting. Then some journalists picked it up and wrote about it.

That much is nothing out of the ordinary. It is interesting (at least to those of us who aren't natural language researchers) so why shouldn't we talk about it? Why shouldn't journalists write about it?

Inevitably their mildly controversial decision to hold some data back got a lot of people discussing whether it was necessary. Which is also perfectly okay.

So, in the end, the complaint is just about why people don't have smarter takes on things. I don't know what to tell you; that's just how social media works sometimes.


I'd shrug and move on, but the problem is that I believe that these flaps about AI are distracting attention from the real concerns and forces that are having a serious impact on people now.

The distortion of public debate caused by community exclusiveness on social platforms, by the curation and manipulation of social feeds and by the dynamics of online debate where the loudest and angriest voices dominate is one place that we could do with some focus.

Another place is the management of simple models - plain Jane stuff like a learned classifier - people are making these with Python and R and releasing them into infrastructures and apps and we don't know what they are and where they are and how they are interacting.

Instead we have wizard of oz style stories to distract us from who's actually hiding behind the curtain. If we fall for this then we may find ourselves living in a vicious totalitarian society with no obvious way out of it.

Journalists should write about it in an informed and professional way, that's fine. But they need to write about stories that are impactful and important, and if they were to write about this one in this way ("text scrambler makes a pretty good paragraph one out of 30 tries, has no idea of what is going on") they would get no clicks (there will now be a second wave of follow-ups like that to ride on the coattails of the story). Instead they have to make it sound like robots are going to take children from schools and experiment on them live on TV, and this makes them famous and rich.

There is no real revision of the story, because the follow-on stories disappear from view while search engines and other journalists keep using the original hysteria. Look at what happened with the two negotiating bots at Facebook (the game was to negotiate for books and balls; the bots tended to use a shorthand to negotiate rather than the English they were trained on). This was "Facebook researchers have to pull the plug on AI that they no longer understand", and that is the narrative that we will have on that story more or less forever.


I've just read, e.g., https://twitter.com/gdb/status/1096098366545522688 and even though it's "best of 25" (I guess cherry-picked by a human) - this is mind-blowing. I am actually having a very hard time believing this is legit generated text.


I couldn't be more disappointed with this bullshit honestly. The texts have almost zero coherence and keep repeating the same patterns (which they presumably learned from the data set) over and over again. If this is their best out of 25 samples then they aren't going to fool anyone.

>Recycling is NOT good for the world.

>It is bad for the environment,

>it is bad for our health,

>and it is bad for our economy.

>Recycling is not good for the environment.

>Recycling is not good for our health.

>Recycling is bad for our economy.

>Recycling is not good for our nation.

The first paragraph keeps repeating the <X> is <bad | not good> for the <Y> pattern 8 times.

>And THAT is why we need to |get back to basics| and |get back to basics| in our recycling efforts.

"get back to the basics" is repeated twice in the same sentence.

>Everything from the raw materials (wood, cardboard, paper, etc.),

>to the reagents (dyes, solvents, etc.)

>to the printing equipment (chemicals, glue, paper, ink, etc.),

>to the packaging,

>to the packaging materials (mercury, chemicals, etc.)

>to the processing equipment (heating, cooling, etc.),

>to the packaging materials,

>to the packaging materials that are shipped overseas and

>to the packaging materials that are used in the United States.

It literally repeated packaging 5 times in the same sentence and the overall structure was repeated 9 times. Also what type of packaging is based on mercury?


The parts you criticise are the parts I was most impressed with. These sorts of repetitions can be persuasive in writing/arguments, and it's impressive to me that a model learned this type of writing.


> These sorts of repetitions can be persuasive in writing/arguments

That is the saddest part. It's not because AI is good, it's because we count saying "X is good/bad" 3 times as a persuasive argument. It won't be hard to learn this kind of "arguing", it's just sad that's what we're teaching our AIs to do and get excited when they do it.


> saying "X is good/bad" 3 times as a persuasive argument

I didn't say that it's a persuasive argument, I said that it can be persuasive IN arguments. There's nothing sad about an AI learning it, or people being happy with it, it's very impressive.


Why? It is pretty much a well-juxtaposed mix of random internet comments. And it's the best of 25, which means the other 24 are even noisier, even more like regular internet banter.

(This of course doesn't make it an amazing feat of computer engineering.)

The overarching narrative is great, but that's probably driven by the great antithesis supplied by the experimenter.

It'd be interesting to know how this works, what happens if less or more is given as thesis/antithesis/assignment, and after how much output it turns into gibberish (or repeats).


Definitely impressive work, but the fact that this is hard to distinguish from human text, if true, is pretty sad for humans. Even sadder if anyone reading this could be swayed by such an argument.

Heck, maybe having to compete with this will raise human discourse (Joking).


It's impressive in terms of having a coherent flow - there is a clearly stated "opinion" in the beginning and everything that follows is in support of that opinion. However, the dead giveaway is that there is zero reasoning, just related statements linked together.


I read it and found it to be a bunch of walking in circles and repetitive baloney. It starts with a bunch of claims that are just the reversal of a pro-recycling poster, and then goes into a repetitive, meandering exploration of paper being made from materials, which are made from other materials. Probably something a model would regurgitate if fed with some popular literature about recycling. The most astonishing fact for me is that people actually think it's somehow surprisingly good.


> - I suggest going over some of the samples generated by the model. Many people react quite strongly, e.g., https://twitter.com/justkelly_ok/status/1096111155469180928.

Have you done a plagiarism search on that text to see how similar it is to the input corpus? I'm by no means an ML expert, but I've played around with models for random name generation and one thing I've noticed is that as the models become more accurate, they also become much more likely to just regurgitate existing names verbatim. So if you search the list of names and notice something that seems particularly realistic, it could be because it's literally taken in whole or in part from the training data set!


You're welcome to check out the samples [https://raw.githubusercontent.com/openai/gpt-2/master/gpt2-s...] and evaluate them for memorization yourself (I haven't found any so far).

(The talking unicorn example on their page is also meant to demonstrate that, no, it's not just memorizing, but I think it's a bit more compelling to check from the raw samples)


So a small number of individuals decided what's best for everybody?

How is that open?

How is that not centralization of power?


If they did release it, there would be an equivalent outcry about how OpenAI was contributing to fake news, etc.



What solutions are you proposing?

Here are a few that come to mind.

-Secrecy? But how will you continue to exist on the PR scene if you don't release anything?

-Are you willing to pay every developer who is able to replicate your paper, more than what the black market would pay?

-How are you working on incentive alignment to make sure that all people who can replicate your results have more incentive to do good than bad, especially in the current environment where users and valuable data are silo-ed by a few companies?

-Misdirection to keep an edge, i.e. planting bugs / not fixing bugs for the public; spreading false results; only working on problems that need high resources, to limit the number of actors who will be able to replicate?

-Tracking the people who have the competence to replicate and taking preemptive measures.

-Restrictions on GPU/CPU/silicon wafers.

Who can regulate? How can we regulate? What are the negative consequence of regulation? What happens if we don't, at what odds and time horizon?


This seems very reasonable to me. All the outcry seems... disproportionate.

That said, withholding the pretrained models probably won't make much difference, because bad actors with resources (e.g., certain governments) will be able to produce similar or better results relatively quickly.

All it will take is (1) one or two knowledgeable people with the willingness to tinker, (2) a budget in the hundreds of thousands to a few millions of dollars at most, and (3) a few months to a year. Nowadays a lot of people are familiar with Transformers and constructing and training models across multiple GPUs.


> - If you accept the above, then the current AI norm of "publish everything always" will have to change

Ok, accepting that premise, what people/organisations would you share the research with and based on what criteria?


I think you should at least release a small portion of the training data (e.g. anything recycling related) so people can measure to what extent the model is generating new sentences and to what extent it's just regurgitating training data.
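One crude way to run that check, assuming some slice of the training data were available: count what fraction of a sample's 8-grams appear verbatim in the training text. A sketch of my own (not OpenAI's actual overlap analysis; the inputs below are placeholders):

    def ngrams(text, n=8):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_fraction(sample, training_text, n=8):
        # high fraction -> mostly copied; low fraction -> mostly new sentences
        sample_ngrams = ngrams(sample, n)
        if not sample_ngrams:
            return 0.0
        return len(sample_ngrams & ngrams(training_text, n)) / len(sample_ngrams)

    print(overlap_fraction("recycling is not good for the world ...",
                           "some recycling-related chunk of the training data ..."))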


Hello Ilya. Great work.

One of the reasons Elon distanced himself was because of what the OpenAI team wanted to do. I am wondering if this new paper has anything to do with that? Or what is it in general that Elon doesn't agree with about what OpenAI is doing?

Thanks!


Deciding if feeding the media with fear was worth the attention you will get wasn't easy, ha. Let me tell you, you are the shame of the profession.


Yep, put out handpicked samples to stoke fear, then release nothing of the internals to stoke more fear and act like self-appointed gods.


>Elon Musk distances himself from OpenAI, group that built fake news AI tool

This is the worst headline on this matter, and it's from one of the leading media outlets in India. A language model being touted as a "fake news AI tool". This is like calling a car a "run-over machine by Ford".

https://www.hindustantimes.com/tech/elon-musk-distances-hims...


> This is like calling a car a "run-over machine by Ford".

That's a great dysphemism. Gonna start using that.


Hindustan Times is far from being a leading media outlet. It squarely falls in the same category as The National Enquirer of Jeff Bezos fame.


That's not quite fair. The sample output they're touting is really nothing other than false text (when it's coherent), almost all of which is in the style of news.

So for the Ford analogy to be apt, Ford would have to have designed a car nobody has ever seen, and released a video which is basically just hundreds of hours of the car running people over.

I mean, a car has lots of well understood non-running-people-over capabilities. But have they demonstrated that this model is useful for anything other than generating fake news-sounding spam text?


It seems disingenuous that this article fails to quote examples of GPT-2’s stunning results, or give any contrasting results from BERT to support the claim that this is all normal and expected progress.

Like many, I was viscerally shocked that the results were possible, the potential to further wreck the Internet seemed obvious, and an extra six months for security actors to prepare a response seemed like normal good disclosure practice. OpenAI warned everyone of an “exploit” in which text humans can trust to be human-generated, and then announced they would hold off on publishing the exploit code for 6 months. This is normal in computer security and I’m taken aback at how little the analogy seems to be appreciated.


> Like many, I was viscerally shocked that the results were possible.

Why? There was news about bots writing news articles ~5 years ago. Given a few simple facts, the AI generated the regular info-scarce but fluffy news piece.

Now OpenAI added better everything (better language models, more data, better "long-term memory" for overall text coherence), and we got better fluff.

It seems like a GAN and a simple Markov chain generator. (Even if it's not that simple of course.)

And maybe it's the equivalent of the "modern art meme" style transferred to AI/ML research. ( https://i.pinimg.com/236x/71/e1/21/71e12151f4b59d8433d32c126... )

What I'm trying to convey is that wrecking the net with auto-trolls was already possible, but for some reason Mechanical Turk was cheaper.

> OpenAI warned everyone of an “exploit” in which text humans can trust to be human-generated

Sokal already did that, and so did http://thatsmathematics.com/mathgen/ ... but of course this might be qualitatively different, because it can be targeted. (Weaponized, if you will.) Still, the defense/antidote is the same, and it takes a lot more than 6 months to make people better at critical thinking; maybe you've already heard about the difficulties of that :)


What's so shocking about this? Why do we trust this in the hands of a few self-appointed experts than anyone else? Are they supposed to be more moral than any others? What will security experts do in six months that wouldn't benefit from more security experts looking at it? Why do you care that garbage text is machine generated, from a spammer or influencer, or a mechanical turk? If it's volume you're concerned about, should we complain when search/recommendation engines already aggregate and reweight a tiny opinion into a continuous out-of-proportion stream that can last you a lifetime to consume? What is the practical difference to have more volume existing "out there"?


Many reactions across here / twitter / reddit seem totally out of proportion. And an odd mix of "stop acting so self-important, this research isn't special so you shouldn't have any qualms about releasing it" and "this research is super important, how dare you not release it".

The strongest counterargument I've seen to OpenAI's decision is that the decision won't end up mattering, because someone else will eventually replicate the work and publish a similar model. But it still seems like a reasonable choice on OpenAI's part–they're warning us that some language model will soon be good enough for malicious use (e.g. large-scale astroturfing/spam), but they're deciding it won't be theirs (and giving the public a chance to prepare).


In other fields such as infosec, responsible disclosure is a standard approach. You don't just throw a zero-day out there because you can. Whilst the norms for AI research needn't be identical, they should at least be informed by the history in related fields.

The lead policy analyst at OpenAI has already tried to engage the community in discussing the malicious use of AI, on many occasions, including this extremely well-researched piece with input from many experts: https://maliciousaireport.com/ . But until OpenAI actually published examples, the conversation didn't really start.

In the end, there's no right answer - both releasing the model, and not releasing the model, have downsides. But we need a respectful and informed discussion about AI research norms. I've written more detailed thoughts here: https://www.fast.ai/2019/02/15/openai-gp2/


> Namely, he argued that OpenAI is concerned that the technology might be used to impersonate people or to fabricate fake news.

This seems to be a particularly weak argument to make. How is their model going to impersonate someone in a way that a human can not?


Cheaper cost to put out a bigger volume of content.


Is volume really what dictates whether or not you can impersonate someone? It's never seemed that way to me.


It lets you impersonate a crowd, or various crowds.


"PR firms" already have an army of fake/paid accounts on every important platform.

This new AI could help them with that. They can let go of the paid writers and hire an IT guy/gal to operate the bots - and the VPNs. (Or they can just pay a lot less to the paid trolls just for their home ADSL/Cable/4G connection.)

But so far this AI is not going to pass a Turing test. Sure, maybe it can be integrated with a chatbot. And it'll be interesting to see how internet communities react.


More to the point, is volume what counts as danger? All these deepfake risks boil down to online (for now) sock puppetry. We've been dealing with that for the whole life of the internet. The only reason it's even a problem in recent years is the growth of uninoculated masses who haven't been on the internet that long, plus positive-feedback recommendation bots. That seems a qualitative issue, not a quantitative one.


It affects how many people you can impersonate, cheap enough for many authors with small readerships each. (I guess, it does seem overblown.)


Cheaper than paying "influencers". Paid blogging was huge during the .com era. I wonder if this could be adapted, with suitably good speech synth, to produce podcasts en masse.


It could. Adobe has a model to generate speech from arbitrary text. All it needs is typical samples from a speaker and transcripts for training. You could easily make it sound like Obama, for example. It will match intonation, timing, etc. and maybe insert the occasional "uhm" or "uh" when appropriate.


But cheaper cost for everyone else also, provided they own the tech. That seems an argument for wide distribution. (Do you want to be the lone human voice against the bots or do you want to have your own bots to amplify your voice?)


It's hype and marketing.


You might be able to tailor things to individual people very specifically based on what their views are and what might push their buttons. Like spearphishing, but for propaganda. Not with this exact tool, you'd probably need some more knobs, but with a similar one. This would be impractical to do at scale without computer assistance.


What if OpenAI didn’t write the piece? What if the research was announced by the machine, and the folks at OpenAI are all dead?


You joke, but there's a real point here -- many commenters in this thread are complaining that OpenAI's position on this is a marketing stunt. Presumably, if this stuff gets commercialized, it will probably be adept at a few domains first, and I feel like writing good marketing copy will be one of them. So perhaps the bot itself didn't do so here, but it wouldn't surprise me if a self-marketed bot exists in the near future.


p.s. I was kidding, but I was completely serious. If they can train a machine to write good copy, they can train the best Russian bots to troll people on Facebook, write New York Times pieces, and fake and influence pretty much anything done through a written text. Heck, they could write a business book and get it into the top-10 that year. Actually, that last part, they should, it would be amazing!


What if I am the machine and you the last human left alive?


You know how, when we broke the Enigma, we couldn't really let the Germans catch on to it, so we had to mask our knowledge of their positions by maintaining a statistically insignificant number of "accidental" wins? Much the same way, a good AI should make deliberate typos.


It’s amazing to me that no one has yet pointed out the blatant irony that their name is OpenAI, yet they are concealing far more than what is typical.


I assure you, people have pointed it out...


Elon Musk was kicked out because he poached Andrej Karpathy from OpenAI to lead Autopilot. Anyways, it was worth it, Andrej is doing an amazing job, and OpenAI is still alive :)


> Anyways, it was worth it, Andrej is doing an amazing job, and OpenAI is still alive :)

Tesla does not even offer their full self driving package anymore. No coast to coast drive yet. Hard to say that's an amazing job.

OpenAI abandons their open source GitHub repos after a year, is now not releasing code, and is always in DeepMind's shadow. Alive, yes. Successful, no.


Did you really expect Tesla to launch full self-driving? They started out five years behind Waymo, and without the lidars, high-resolution mapping, or precise GPS system that Waymo has... basically Elon wanted the impossible.

At the same time Andrej dropped the idea of a fully learned end-to-end model (that's just impossible with current deep learning technology), and started methodically replacing the somewhat-working heuristics with machine learning, one by one. He also ramped up the data-gathering pipeline.

He needs to build the full simulation, agent systems that can simulate other drivers/humans, implement inverse reinforcement learning... there's so much to do where Waymo is far ahead (but Tesla is ahead in data gathering).


To what extent is this not just finding text samples in its training set and regurgitating them near verbatim? (Non-ML guy)


If you look at their paper, section 4 is entirely devoted to this question. They present compelling evidence that it is generating original content, the simplest of which is its ability to write coherently about ridiculous things like talking unicorns, which nobody has ever written about in the training set.

https://d4mucfpksywv.cloudfront.net/better-language-models/l...


The talking unicorns piece was shockingly good. That is at least as coherent a news story as the average human could easily invent about it.

Reading that piece gives me the same weird feeling as watching AlphaStar playing through a StarCraft game.


You bring up a good point. Without seeing their code and training metrics, how do we know that this isn’t some extremely overfitted model?


From the paper:

"All models still underfit WebText and held-out perplexity has as of yet improved given more training time."


Does someone have a description of the network somewhere? Does it use LSTM for memory or what? Is there anything unusual about the size or structure of the network? Does it use an attention mechanism?


I would recommend reading the paper: https://d4mucfpksywv.cloudfront.net/better-language-models/l...

and the previous paper

https://s3-us-west-2.amazonaws.com/openai-assets/research-co...

It's a transformer, not LSTM, and it's very large but not structured in a particularly unusual way.
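If you want to poke at the released small checkpoint yourself, a minimal sampling sketch looks something like this (assuming the Hugging Face transformers package, which is my choice of tooling and not something OpenAI's release prescribes; the prompt is just an example):

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Recycling is NOT good for the world."
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # top-k sampling, similar in spirit to how the published samples were drawn
    output = model.generate(input_ids, max_length=200, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))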


Can you imagine if the teams that worked on the Internet had decided not to make it available to the public because of the potential misuses? OpenAI is a joke.


I think OpenAI should change org name to ClosedAI.


So an article about recycling generated by OpenAI's model (best out of 25) already makes more sense than presidential speeches or most of the ramblings of average politicians. Can we automate them away as well?


How do we know this article isn’t just fake news being written by an AI?


When a bug is caught on your palm, it pretends to be a dead bug. When a moose is scared, it plays dead moose. When AI wants to fool a human or a captcha filter, it impersonates a human.

Only when a human wants to fool a human, it impersonates whatever possible but a human, then suddenly charges a shitload of ape shit, and then behaves like it never happened.

Without decent natural language translation or automatic reasoning, which they don't have, this looks like an N-gram model where N equals the number of words in the language corpus.


It’s a great marketing hack. That’s the real accomplishment here.


Ms. Anandkumar nailed it; this is blatant hype bordering on hucksterism. Elon Musk may have left, but his influence remains, I guess.


First, it's clearly the goal of OpenAI to bring more public attention to advances in the field, specifically to help voters and policymakers consider potential ramifications well in advance of any "truly" groundbreaking work before it's too late. Of course they're "hyping" this technology.

Secondly, have you seen the results? I was dumbfounded and fascinated. I spent hours reading the samples.

Maybe I'm just out of the loop and this truly isn't anything significant, but then that only proves that OpenAI was successful: Now I am aware of the latest advances in NLP and hopefully so too are many more.


>Secondly, have you seen the results? I was dumbfounded and fascinated. I spent hours reading the samples.

Yes, I've seen the results. They're nice but, as the article points out, not extraordinary compared to state of the art, open NLP research.

OpenAI's behaviour here smells of Gibsonesque 'anti-marketing', using the misunderstanding of AI and its capabilities in the general population as a means to stir up publicity for their organisation.

This is unethical, misrepresents progress in the field, and produces confusion in the press.


> not extraordinary compared to state of the art, open NLP research

> misrepresents progress in the field

Can you point me to some examples of unsupervised learning with similar results? Not asking for rhetorical purposes; I just genuinely was shocked by how compelling their results were, especially given this was unsupervised.

> OpenAI's behaviour here smells of Gibsonesque 'anti-marketing'

I don't disagree that the ethics are questionable, but I think it's highly speculative to suggest that they didn't release the full model purely as a marketing ploy (I'm assuming this is the main objection to their marketing "tactics"). As you say, it "smells" this way, but I fail to see how it's really so clear-cut.


>Can you point me to some examples of unsupervised learning with similar results? Not asking for rhetorical purposes; I just genuinely was shocked by how compelling their results were, especially given this was unsupervised.

Model-wise this is just OpenAI's GPT with some very slight modifications (laid out in the paper).

Ilya has now commented in the thread and essentially made the same point: this is state-of-the-art performance, but reproducible by everyone because it uses a known architecture.

The secrecy and controversy make no sense if the model is open; even the methodology of data collection is laid out. There is no safety here, given that anybody who wants to rebuild the model can do so simply by putting enough effort into rebuilding the dataset, which is not an issue for a seriously malicious actor.


> Model-wise this is just OpenAI's GPT with some very slight modifications (laid out in the paper).

> The secrecy and controversy make no sense if the model is open; even the methodology of data collection is laid out.

This is exactly why I found the results so compelling: It suggests that this technology is already accessible to some big players: The odds that a Big Corp. or govt agency has already begun using the technology are high, which is precisely why the public needs to start thinking about it.

I cannot know exactly why OpenAI chose to withhold the model, especially given how easy it would be to recreate, but even if we assume that OpenAI withheld the full model purely to drum up controversy, the controversy is justified, as it's very likely that this technology is already in the hands of a few big players.


Interesting to think about whether state actors already have such technology.

If they did, I bet it would be used for automated "troll farms".

Like weaponized malicious ELIZA, it would have fake user profiles reacting to keywords, spinning suitable counter-argumentation and/or lies for as long as it takes to change opinions and perceptions, relentlessly, day and night.


>Yes, I've seen the results. They're nice but, as the article points out, not extraordinary compared to state of the art, open NLP research.

This isn't my impression.

It's not the best in many domains, but it's a single network that is moderately decent in many domains. You can use it to summarize by adding "TL;DR" to the end of the text. You can use it to translate by listing previous translations. And of course it blows away any state-of-the-art RNN text generation I've seen. RNNs tend to fall apart after one or two sentences, whereas this holds together for multiple paragraphs.
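Roughly, those zero-shot tricks are just prompt formats that the model continues. A sketch of what the prompts look like (illustrative strings of my own; the paper describes the translation setup as "english sentence = french sentence" pairs):

    article = "A long news article to be summarized ..."
    summarize_prompt = article + "\nTL;DR:"  # the model's continuation is the summary

    translate_prompt = (
        "good morning. = bonjour.\n"
        "the cat sat on the mat. = le chat s'est assis sur le tapis.\n"
        "where is the train station? = "     # the model continues in French
    )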


Have you been following NLP closely lately? It seems like most of the frustration and/or skepticism is coming from those closest to the field (i.e. researchers), so I'm pretty sure I'm missing a big part of the picture.

I'm trying to get a sense of just how quickly things have been advancing. I read a few NLP white papers about a year ago and never saw anything as compelling as this, but I am definitely an outsider possibly on the left hand side of the Dunning Kruger graph...


I have seen the results and I don't get why people think this is any more dangerous than journalists who selectively report to fit a predetermined agenda or make shit up on the spot. Which, today, is a lot of them.


It's bulk. Same reason why spam is a problem.


The problem for spammers, as well as for fake news writers, has never been in coming up with the text for the spam email or the fake news story. This is already cheap and easy enough. The problem is with distribution and getting enough eyeballs. This new and so very dangerous AI may enable you to come up with 1M fake news stories with the click of a button but it won't get any of those stories published in NYTimes.


>> get any of those stories published in NYTimes

I wouldn't be so sure about that. Take their reporting of Charlottesville events and Trump's comments about them. Here's what Trump _actually_ said: https://twitter.com/ZiaErica/status/1096572062196486144. Pretty reasonable point of view, all things considered. What was NYTimes "reporting"? That Trump is "defending white supremacists", of course. Don't believe me? See for yourself: https://www.google.com/search?q=trump+charlottesville+nytime.... Why was NYTimes doing that? It's either deliberate malice or incompetence, both of which would make NYTimes quite friendly to automatically generated fake news as long as they fit their narrative.

But there's a bigger issue with all of this. When people see this tech, they immediately think that it'll be used to generate fake news (which it will be, to be sure). BUT, it could also be used to do the exact opposite: take facts and summarize them without agenda-driven omissions, without "reading minds" or inventing "sources" "familiar with" someone's "thinking", or passing off uncorroborated dossiers or book chapters as gospel truth.


You can't do that for near 0 cost though, nor generate a different story per user on the fly.


> Fictitious state of emergency

Pretty dumb and disrespectful to politicize a blog post about OpenAI.



