Hacker News

ML algorithms already constantly decide what you should and shouldn't be allowed to read by surfacing, amplifying, and suppressing content on Google, YouTube, every social media platform (and, I presume, baked-in Windows ads), based on each platform's own financial and/or political interests. And soon ML algorithms will increasingly decide what you can and can't read by vomiting out the "content" in the first place, according to those same interests.

Maybe I'm just fatigued from the constant erosion of user autonomy over information consumption, but as long as an ML ad blocker runs locally by the user's own decision, it doesn't seem that Big Brother-ish compared to server-side shovel-feeds. And as long as there's a single-click toggle to disable it, like in existing ad blockers (or, e.g., a side pane in the UI showing the content it's blocked), and it can be configured to be more or less aggressive based on content patterns, it hardly seems like censorship… as far as uses of "AI" go.
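The user controls described above (a local model, a one-click disable toggle, a visible record of what got blocked, and a tunable aggressiveness threshold) can be sketched roughly like this. Everything here is hypothetical: `BlockFilter`, `score_fn`, and `threshold` are illustrative names, not any real ad blocker's API, and the scoring function stands in for whatever local ML model would do the classification:

```python
# Hypothetical sketch of a local, user-controlled ML ad blocker.
# All names here are illustrative assumptions, not a real extension API.

class BlockFilter:
    """Filters page elements with a locally run scoring model.

    - `enabled` is the single-click toggle.
    - `threshold` tunes aggressiveness: lower values block more.
    - Blocked elements are retained so a side pane could display them.
    """

    def __init__(self, score_fn, threshold=0.8, enabled=True):
        self.score_fn = score_fn   # local model: element -> ad probability
        self.threshold = threshold
        self.enabled = enabled
        self.blocked = []          # kept around for the side-pane view

    def filter(self, elements):
        if not self.enabled:       # one-click disable: pass everything through
            return list(elements)
        kept = []
        for el in elements:
            if self.score_fn(el) >= self.threshold:
                self.blocked.append(el)
            else:
                kept.append(el)
        return kept


# Toy stand-in for a real local model.
def toy_score(el):
    return 0.95 if "sponsored" in el.lower() else 0.1

f = BlockFilter(toy_score, threshold=0.8)
print(f.filter(["Article text", "Sponsored: buy now"]))  # ['Article text']
print(f.blocked)                                         # ['Sponsored: buy now']
```

The point of the sketch is that both knobs the comment asks for stay in the user's hands: flipping `enabled` restores the page untouched, and raising or lowering `threshold` makes the filter more permissive or more aggressive without hiding what was removed.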



