Hacker News

Startup idea: create a synthetic subset of natural language in which it's difficult to write annoying comments, to simplify social media moderation. Sell it as an API.

All extant moderation* is of the form "start with the set of all text strings, and remove unwanted things incrementally". What if we tried it the other way around -- start with *absolutely nothing*, and incrementally add words / production rules that are "probably safe"? You could build up a restrictive, stilted "internetspeak" that's much safer / cheaper to moderate, allowing text comments in places where the costs would otherwise exceed the benefits. It could also let big tech platforms, which have more text than they can effectively control (much of it unwanted), exert firmer control over it.
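A minimal sketch of the "default deny" idea: a toy context-free grammar whose production rules are the *only* source of allowed comments. The grammar, symbol names, and vocabulary here are all hypothetical illustrations, not a proposed real product.

```python
import itertools

# Hypothetical toy allowlist grammar: nothing is permitted unless a
# production rule explicitly generates it ("default deny").
GRAMMAR = {
    "S": [["GREETING"], ["OPINION"]],
    "GREETING": [["hello"], ["thanks", "for", "sharing"]],
    "OPINION": [["i", "MODIFIER", "agree"], ["i", "MODIFIER", "disagree"]],
    "MODIFIER": [["strongly"], ["mostly"], ["partly"]],
}

def expand(symbol):
    """Yield every token sequence the symbol can derive."""
    if symbol not in GRAMMAR:  # terminal word
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:
        # Cartesian product of each child symbol's expansions.
        for parts in itertools.product(*(expand(s) for s in production)):
            yield [tok for part in parts for tok in part]

def is_allowed(sentence):
    """A comment is allowed only if the grammar can generate it."""
    tokens = sentence.lower().split()
    return any(tokens == derivation for derivation in expand("S"))

print(is_allowed("I strongly agree"))   # True
print(is_allowed("you are an idiot"))   # False
```

Enumerating every derivation only works while the language is tiny; a real version would parse against the grammar instead of listing it out. But the contrast with badlists is the point: the unsafe string is rejected not because anything matched it, but because nothing generated it.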

I'm imagining the UX as something like autocorrect: it takes real-time text input and projects it onto the closest-matching strings in the subset language, which are shown as suggestions / prompts. But ideally it'd be a language users could quickly learn to master, without continuous assistance / nagging that disrupts the flow.
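The projection step above could be sketched as fuzzy matching against the subset language's strings. This assumes a small fixed inventory of allowed sentences (in practice it would be generated from the grammar) and uses `difflib` similarity as a stand-in for whatever matcher -- edit distance, embeddings -- the real thing would use.

```python
import difflib

# Hypothetical inventory of allowed sentences; assumed here to be
# pre-generated from the subset language's production rules.
ALLOWED = [
    "i strongly agree",
    "i mostly agree",
    "i strongly disagree",
    "thanks for sharing",
    "could you clarify this point",
]

def suggest(draft, n=3):
    """Project a free-form draft onto the n closest allowed strings,
    ranked by difflib's similarity ratio."""
    draft = draft.lower()
    scored = sorted(
        ALLOWED,
        key=lambda s: difflib.SequenceMatcher(None, draft, s).ratio(),
        reverse=True,
    )
    return scored[:n]

print(suggest("thanks for shering"))  # "thanks for sharing" ranks first
```

The user never submits raw text; they pick (or type their way into) one of the suggestions, so moderation cost collapses to auditing the inventory.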

Is this doable now, or is natural language just too insidiously nuanced?

*(Broadly interpreted: everything from "humans manually reading / removing things" to "word / regex badlists" to ML approaches -- they're all "default allow".)




I think there are two categories: counter-arguments and sarcasm.

- It's easy to filter counter-arguments using vocabulary.

- And I think it would be possible to filter sarcasm by cutting off the little insidious nuances, because those are the only things that distinguish sarcasm from approval.



