Hacker News
Medium bans AI-generated content from its paid Partner Program (bleepingcomputer.com)
47 points by thunderbong 6 months ago | 60 comments



All publishers are grappling with this where they want generally higher quality content (defined as more engaging, intrinsic quality is something else) and less low effort spam or retreaded concepts.

AI-generated content has really dropped the floor on how cheap it is to produce niche blogspam, and they're saying: "Hey if you're pushing out 1000s of articles a day on a topic and we look at it and it's insane gibberish, we're just going to assume it's no good on principle and ban you."

Nobody is getting banned for the "Gray-AI" of assisted writing, where you get help with an outline or use Grammarly Pro to rewrite your sentences into coherence.


I believe this is the correct takeaway.

They don't actually want to ban AI content. They want to ban garbage content. If they claim that something is AI content, there's no good defense a writer can offer, short of meticulously recording themselves typing the article up on a typewriter, in real time, so everyone can see them going through revisions.

"This is AI and we're banning you" is simply a sufficiently socially acceptable reason to get rid of content that they otherwise don't want clogging up their site.


Well someone could easily whip up a prompt-to-video which, given a prompt, outputs an entire article AND a screenrecording of laboriously typing-and-revising from a draft to the final output.

It would probably take a weekend.


I would love to see something like that. I think it would be really neat. (Any takers? I hear it probably won't even take that long.)

[Of course, if you were creating and uploading these videos as you created your content over time, and the upload site was either relatively well known or sufficiently third-party that you could trust the timestamps, then I think it wouldn't matter if it was AI or not. "Hey, this person isn't using AI, they've been uploading videos of their process for years. Who in the world would do that just so they can get away with using AI five years later?"

It's not really proof so much as social proof. The person under fire needs a socially acceptable way to fight back against a socially acceptable bannable offense. At least in my experience, lots of paperwork normally does the trick.]


Indeed. xkcd 810 comes to mind.



The real issue isn't 'insane gibberish', because that's quickly and cheaply discernible. The real issue is eloquent, well written text that looks like a new contribution but is actually a re-hashed mixture of the same six talking points as the other 1000 articles published by the same entity, because you have to actually read and think about it to realise that it's wasted your time.

We're not fighting against random noise, we're fighting against manufactured pap.


Even so, if the paradigm shift occurs, most of us will be summarizing the manufactured pap via an LLM. Our time won't be wasted in that case.

No clue how this all ends, but I can’t see any reason for websites to exist anymore. We’ll be going from one black prompt screen LLM with a Reddit logo on the top left, to another tab with the same LLM with a Medium logo on the top left.

Seems pointless, just combine it and list the sources.


The problem is you're summarizing bland fake information, though.

There are entire categories of searches that I've now given up on. Want to remove <x> stain from <y> fabric? You can only find SEO-optimized results (not just on Google; DuckDuckGo and Qwant are the same), which will always tell you:

- use vinegar

- use baking soda

- use both!

- try lemon

- if all fails, try (insert guaranteed epic fail recipe eg ammonia mixed with salt)

There you go, I just "summarized" thousands of those websites for you. Now try it on your real-world stain?

I think we're going back full-circle to calling Mom and doing what she recommends.


>I think we're going back full-circle to calling Mom and do what she'll recommend.

In a sense, the old-fashioned way works so well because it relies on actual human connections and trust. Your mom isn't trying to sell you Miracle Cleaner 2000. Your friends don't get a commission for recommending a book they think you'd like.

On the internet, the whole world is trying to sell you something, and they have little incentive to STFU. There's no connection, only transaction for fractions of pennies.


>AI-generated content has really dropped the floor on how cheap it is to produce niche blogspam

For all the potential good utility AI has, my current outlook is that it's primarily going to be used as a noise-generator at scale. As long as there's money (or influence) to be had from clicks and advertisement, the near-zero cost of AI-generated content means people will take advantage of that arbitrage opportunity.


On the bright side we’ve desperately needed a noise filter for, like, a decade or so. At least these LLMs are a big noticeable discontinuity to make the problem obvious.


On the gloomier side, if we couldn't come up with a good noise filter in the relatively calm times of the pre-LLM era, are we going to find something now?


> Nobody is getting banned for the "Gray-AI" of assisted writing, where you get help with an outline

Maybe not but it's certainly worth worrying that this is what will happen just because the line has to be drawn somewhere and that's an easier way to draw it. Baby, bathwater, etc.


> use Grammarly Pro to rewrite your sentences into coherence

Is it just me or is this a bizarre practice for someone who is actively making money on writing?

Like, shouldn't you be expected to at least have competence in your craft? Writing with correct grammar should be the baseline for someone writing for profit. The same goes for "help with an outline": if you can't write an outline of what you're trying to communicate, and you need the averaging machine to help you, do you actually have a unique understanding of the topic? Or are you merely rehashing existing content that's been fed into the averaging machine?


It is only bizarre if you take it quite literally. In practice, it is possible, even for seasoned writers, to sometimes produce sentences that could use a bit of attention. Whether Grammarly is the right tool for that can be debated, but that is the case for any tool.

I feel like your response is along the lines of "programmers shouldn't use syntax highlighting" and similar arguments against tooling.

Then there are also the folks out there who want to publish their writings in English, but for whom English is not their native language. They might use sentence structures that don't work quite as well in English as they do in their native language.


I think I may have been taking it too literally.

I read "writer feeds incoherent sentence into a tool", when it was probably an exaggeration.

The multi-lingual thing is also a good point.


I guess it depends on what you call making your money on writing. At one point, most of my money came from writing emails. I can't really spell, and somehow my grammar is worse. LanguageTool, and whatever plug-in/extension/Google-search-copy-paste workflow I've previously used, lets me communicate the same message but without the spelling/grammar mistakes many people would see.

There is almost no chance I'm going to spend a significant amount of time relearning grammar and how to spell. But that has almost nothing to do with my ability to communicate or make money writing. The grammar tools just translate my sentences into the format we've all agreed is correct. Almost like a linter but for English.


This use-case makes a lot more sense.


I think it's just you. That's like saying you should know how to spell perfectly if you're a professional writer, and should not use spell check. LLMs are just spell check on steroids.


What about the argument of using the AI to create an outline? Surely spelling has no effect on the outline if you understand the concept? The outline isn't being published, it's just the author laying out their thoughts on a topic that they're claiming to have some authority in.

If the author cannot outline their ideas, even with bad grammar and spelling, do they actually understand the concept?


I am not a natural language professional so I don't fully understand how outlines are used by authors. If I map that to software, I might think of asking an LLM to generate pseudocode for something. Of course I could do this myself (assuming I understand the problem domain), but why not ask an LLM to help me? I will just use whatever it generates as a starting point and edit it to fit my goals and specific objectives. Maybe outline tools are used to do something similar?


I don't really care about grammar so long as the point is communicated. Some people are trying to write in a non-native language, or just need a bit of help. Whatever tooling that helps improve communication is great, I think the content is more important than the language used to communicate it.

Also your point could be made equally well about people using spellcheck.


I've written a number of books, 3 of them bestsellers, and I've always looked for outside input to help with the outline.

When you're close to a deadline, or too close to the subject, you need some outside guidance to make sure you're not missing some obvious step in getting your point across.

It's not always easy to find someone that knows how to be a great sounding board. They need to be very smart system thinkers, naive about the topic, engaged and interested. I have such friends, but I try to use their time sparingly.

So now I do use AI to either provide a first-draft of an outline for new content, or (my preferred MO) to review my own first draft suggest missing items I should cover.


Exactly. Tools like this help bring your writing up to "average"; if you're making a living writing, you should already be much better than that. It's kind of like a painter using a paint-by-numbers canvas.


> Is it just me or is this a bizarre practice for someone who is actively making money on writing?

No, it's not bizarre for paid writers to use tools that are a lower-overhead replacement for paid copy editors, when they're using a platform whose entire value is that it serves as a lower-overhead replacement for a publishing house, but which doesn't provide most of the things a publishing house would provide except distribution and payment processing.

It's really the most natural thing in the world. If AI were up to snuff for it, it wouldn't be surprising for Medium authors to use automated tools to function as editors (not merely copy editors) as well.


Applying your argument to coding, it would be like writing near-perfect code without autocomplete or boilerplate templates, and cleanly matching types without static analysis or the compiler yelling at us.

That's a nice ideal to strive for, but meeting it wouldn't make us all better programmers, and falling short of it doesn't make us unworthy of being paid to program.


I think I was taking the original comment too literally, but I still don't think this comparison applies.

Tools that rewrite basic sentence structure presuppose a lack of sentence structure in the first place. It's like needing a tool to make your foo-bar run.


How is this policy enforceable? There aren’t any reliable ways to detect LLM-generated text, are there?


The policy is enforceable through moderation. They're probably looking for multiple indicators and then doing a manual review. The only way to monetize Medium is through the partner program, which relies on a high volume of posts or on a post going viral. It's unlikely an AI-generated post will go viral, but it is very likely people will spam the system and start posting 3-5x a day to get a few dollars. I don't think they're as concerned with someone using AI as an assistant to write a few posts a month, but rather with AI slowly draining their payouts from real people.


Using common sense, probably. You don't need an LLM detector to tell that something is AI generated.


You don't need an LLM detector to tell that something is visibly AI generated. With some minor prompting and editing it can be indistinguishable from any other writing.


>> Using common sense, probably. You don't need an LLM detector to tell that something is AI generated.

This could quickly turn into a witch hunt.


Many people now believe things are AI generated the moment the writing says something they disagree with. That's fun!


There's money to make with a Chrome extension that flags comments contrary to your beliefs and tags them as "likely AI-generated misinformation" while highlighting the "report" button.


Maybe there is money to be made: crypto-lock the PCs of anyone petty enough to use such an extension. They deserve it. And boom, revenue.

Disclaimer: I'm joking - don't do this. It could land you in the big house.


That strategy only works against lazy publishers; with a bit of editorializing and prompt refinement you can make AI-generated text indistinguishable from human-written text. Believe me, it's easier than it has any business being.


It looks like you're getting downvoted, but my opinion is that your statement is correct.

Yes, technically, common sense doesn't work because some things written by AI won't be obviously written by AI and some things written by people can appear to be written by AI when it was not.

However, I suspect that medium doesn't actually care if something is written by AI or not. They want to stop the person writing hundreds of garbage articles a day. The person writing a nonstop deluge of poor quality articles sans AI is going to be erroneously banned. The person writing high quality articles 100% powered by AI won't be banned. AI will simply be the justification for culling what they do not want.


Exactly. People are getting too caught up on AI or not. If an article is good, it's good, regardless of AI.


This works until an entire generation of people, raised on reading LLM-generated text, starts writing like an AI.


They have years of data on how fast a regular account can generate content. If you are producing 100 times more content, or content 100 times faster...
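A minimal sketch of that rate heuristic in Python (the account counts, baseline, and multiplier below are made-up illustration, not Medium's actual numbers or method):

```python
# Hypothetical per-account post counts over a one-week window.
posts_last_week = {"alice": 2, "bob": 5, "spamco": 350}

HUMAN_BASELINE = 7   # assumed: even a prolific human manages ~1 post/day
MULTIPLIER = 10      # assumed: flag anyone 10x beyond that

def flag_fast_accounts(counts, baseline=HUMAN_BASELINE, multiplier=MULTIPLIER):
    """Return accounts posting far above the assumed human baseline rate."""
    threshold = baseline * multiplier
    return sorted(acct for acct, n in counts.items() if n > threshold)

print(flag_fast_accounts(posts_last_week))  # ['spamco']
```

The point isn't the exact threshold; it's that an outlier in posting rate is a cheap first filter before any manual review.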


Because the actual policy is "we can remove anything"; the positioning is just that they'll only use it on AI.


Read a TOS sometime. It's enforceable because they can choose to do absolutely whatever the hell they want. If they're wrong, well... tough nuts.


Is this story not AI-generated? Every sentence feels like it's just repeating the same thing and some of them are... like what is this: 'With the banning of AI-generated content from the Paid Partner program, Medium has further cracked down on distributing this type of content.'

With the posting of this post, I am making people more aware that there are posts on HN, further cracking down on not posting.


It's Medium that should be banned.


Both, really


Honest question... Why?

I understand that Medium is not absolutely necessary, and ideally we would have an open source tool that is easy to deploy. In the meantime, I use Medium quite a bit.


When we use medium, we tell our audience we don't care if they get bombarded with bad UX, like subscription popups. And we tell our accessibility activist audience that we don't want them to read us at all.

I immediately nope out of Medium articles. I blacklist Medium on my search engines. And I criticize Medium authors on HN instead of reading their articles. This is all with the hope of bringing back more accessible old-school blogs.

If I wrote a blog in 2024, I would use a free GitHub Pages site and a markdown-based blog generator. This distributes my message for free, without attacking my audience's attention with sign-up screens.
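If anyone's curious, the "markdown based blog generator" part can be a toy script. This Python sketch handles only `#` headings and blank-line-separated paragraphs (real generators like Jekyll, which GitHub Pages builds natively, do far more; all names here are illustrative):

```python
import html
import pathlib

PAGE = "<!doctype html><title>{title}</title>\n{body}"

def md_to_html(md_text):
    """Convert a tiny markdown subset: '# ' headings and paragraphs."""
    blocks = []
    for chunk in md_text.strip().split("\n\n"):
        chunk = chunk.strip()
        if chunk.startswith("# "):
            blocks.append(f"<h1>{html.escape(chunk[2:])}</h1>")
        else:
            blocks.append(f"<p>{html.escape(chunk)}</p>")
    return "\n".join(blocks)

def build_site(src_dir, out_dir):
    """Render every .md file in src_dir to an .html page in out_dir."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for md in pathlib.Path(src_dir).glob("*.md"):
        page = PAGE.format(title=md.stem, body=md_to_html(md.read_text()))
        (out / (md.stem + ".html")).write_text(page)

print(md_to_html("# Hello\n\nMy first post."))
```

Point the output directory at a GitHub Pages branch and you have a blog with zero popups.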


Or, for the non-technical audience, Ghost.org seems a solid recommendation, followed by Substack.


Yeah. Medium did help consolidate searching for blogs, but it also vastly increased the amount of extremely low-effort content, and encouraged bringing hot-garbage takes to the spotlight.

I have yet to find a technical writer on medium that isn’t basically musing about what they learned that day in boot camp, or just regurgitating some stupidity they just read about how “FP is totally actually extremely performant if you just squint really really hard while lying about numbers!”

The bar for “good” on medium is already so insanely low that I question their move here.


How can they tell? As far as I know, the state of the art can't reliably distinguish AI-generated content.


Hmm, I didn’t consider this. Anyone actually generating AI articles on these types of platforms and profiting?

Thanks for letting me know Medium lol.


Just like many of these schemes, the first person to do it will probably make a good amount of money, but every nth person after that will make much less. There has always been a large group of people trying to spam their way through the Medium partner program.


> Any story using AI help must clearly mention this within the first two paragraphs.

What’s AI here? Is the spelling correction tool in Google Docs "AI"? Are the suggestions from my iPhone "AI"? This is the kind of very vague sentence written by someone who doesn’t understand AI.


https://help.medium.com/hc/en-us/articles/22576852947223-Art...

> We define AI-generated writing as writing where the majority of the content has been created by an AI-writing program with little or no edits, improvements, fact-checking, or changes. This does not include AI writing tools such as AI outlining, or AI assisted fact, spelling, or grammar checkers

> Do I need to disclose the use of AI-assistance such as grammar or spell checkers, outline assistance, or fact verification?

> No, you do not need to disclose this use. You must disclose if you are using text or images generated by AI


This is not very clear about the case where the text is AI-generated but substantially edited, fact-checked, and improved by a human co-author.

I think that kind of writing is perfectly valid. AI contexts can now include primary references, and with human critique, AI can fix a lot of its own style and factuality issues.

A good author writing this way will end up writing in their own voice, but spend less time drafting and revising, since they can delegate that to their AI assistant.


I believe that would be, from my link, "AI-assistive." Such text would need "a disclosure at the beginning of the story (within the first two paragraphs)."

The reason given is, "in practice, AI-assist is often a way to insert small snippets of AI-generated content that often suffers from the same problems as fully AI-generated stories."

In other words, readers need a heads up that, no matter how well-edited the text is, there's a chance of hallucinated junk to be found within.

I'm sure I'm missing some edge cases, but this all seems pretty darn lucid to me!


I feel it's fair to filter out writing that is fully AI without human proofing. That's because that writing is awful.

However, assuming an author competently edits and proofs the final work, I don't need or want to know if they used AI. Just like I don't need to know if they used a human intern, editor, or other co-author - assuming the co-author did not demand attribution.

Let's stick to worrying about lazy and incompetent authors. There is no reason to penalize otherwise competent authors for co-authoring with AI.

Prediction: This is going to hurt medium, by driving off competent AI assisted writers. They will find inclusive platforms.

And frankly I'm glad. Medium is a plague on the internet, and needs to die horribly. This is the way.


Thanks; it’s not very clear that "AI help" excludes "AI-assistance".


They mean "entire posts generated by LLMs". They aren't going after AI-assistance. They are telling you not to set up your content spam farm on Medium.



