Meta apologises over flood of gore, violence and dead bodies on Instagram (theguardian.com)
70 points by chrisjj 17 hours ago | hide | past | favorite | 72 comments





Instagram is borderline a porn/gore website if you open a new account and spend one day doom scrolling

The amount of nsfw content including women “breastfeeding” fake babies, wild accidents, weird ultra graphic shock videos is insane.

I have a few accounts that I use to post for work. My explore pages are wild compared to my personal account which is mostly tech, cars and architecture

I don’t “like” anything on my work accounts, ever, the algorithm just knows what to feed me because of what I click on the most (impossible not to click on those thumbnails)


You can say the same thing about Snapchat. My kid had this on their phone last year. I check the phone from time to time (they're 12) to make sure there is no weird stuff or messages in there. I opened Snapchat and saw the "recommended feed" with very "strange" thumbnails. I deleted the app and told them not to install it again. I set restrictions on their Apple account to explicitly allow only apps rated 4+. Somehow Snapchat is rated for 10+. Am I crazy?

On the "Threads" app I had a very similar experience: a lot of breastfeeding erotica and other weird adult content that made me feel like I was doing something illegal.

Also, their algorithm must be assessing human features too, because at some point Instagram started showing me women with ever larger breasts. I get that they try to understand what I'm into, but it's utterly ridiculous to push more and more extreme content.

I wonder how large the cohort for all this weird stuff is, considering that Instagram itself works as a "not a creep" credit score when you meet someone new.

Recently I gave in, started following some of those weird accounts and bookmarking deep fake scam ads out of fascination, probably adding fuel to the fire. It's just immensely weird.


I don't use Instagram or any social media apps--I do use webapps. For Youtube if I'm tempted to click something I wouldn't have searched for myself I use open in private. I mark watched videos with a Like. I also actively click "Not Interested" and even "Do not recommend channel". My recommendations are narrowly curated. I even got Youtube to 'run out' of recommendations showing me the same set on reloads.

Same here. I had to create a custom uBlock filter for YouTube because it ran out of recommendations and half of the suggested videos in my home page were from my Watch Later playlist.
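For anyone curious, a uBlock Origin cosmetic filter along these lines can hide such suggestions. The selectors below are illustrative guesses (YouTube's markup changes often), not the exact filter the commenter used:

```
! Hypothetical uBlock Origin cosmetic filter: hide YouTube home-page tiles
! whose link points at the Watch Later playlist (list=WL).
! Element names are guesses; inspect the page to find the real ones.
www.youtube.com##ytd-rich-item-renderer:has(a[href*="list=WL"])
```

The `:has()` procedural operator lets a cosmetic filter match a container by what it contains, which is what you need when the tile element itself carries no playlist marker.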

> For Youtube if I'm tempted to click something I wouldn't have searched for myself I use open in private.

Private browsing? I think that's not this kind of private.


X is similar for new accounts now. Except it's not just the timeline content, it's the advertisers too. My active account gets regular advertisers, but my inactive one gets all X-rated and scam advertisements, and tons of "18+" bots that go around auto-liking people's posts.

I have the complete opposite X experience

I have a new account, I only follow ai, coding and tech accounts

My following page is amazing! Not one dumb thing in my face. The for you feed is garbage but I never go there


I have no doubt that is your experience and it's good to hear some people are having good experiences with it.

My main account is fine but I have an account that follows very few other accounts (it's for a product I was building that never really took off) and I just signed in and scrolled until I hit 8 ads. The ads were in this order (completely unedited list... I'm not editorializing here):

- Robinhood

- A very explicit "male enhancement" ad

- A movie ad for a movie I've never heard of until now

- A "bedroom aid" (blue pill)

- A marijuana dispensary

- A scammy looking mobile app

- Another ad for a pill for men who are having trouble in the bedroom

- An affiliate marketing scam

... I stopped scrolling at that point because it felt like enough.


[flagged]


Shocking and enraging content activates minds. These products don't just count likes but also things like pausing your scroll over a piece of content. It is really hard to avoid the immediate lizard-brain response of pausing over shocking content, leading to more and more of this content. The net effect is a product that doesn't make people feel happy or enriched but a product that makes people feel angry and stressed but in a way that is difficult to pull away from.

I'm surprised the "crap" is there in the first place: without me clicking anything at all, on a brand-new account with zero prior interaction with any content.

As always, it's the users who are at fault. Nevermind the thousands of kids and teens who are exposed to this shit day in and day out.

Before social media you had sites like rotten. And if you were a teenager with access to internet you perused some of those.

There's been garbage online forever. But you had to seek it out and if you didn't visit those sites you weren't going to come across this content organically in the same way.

Now you just have an infinite feed that shows you this stuff organically and unless you are diligent in clicking "not interested" over and over, it'll show up in your feed more and more often.


You checked out rotten once in a while

A lot of users spend HOURS each day doom scrolling and are exposed to 1000s of pieces of such content weekly if not daily


You didn't stumble upon Rotten while scrolling past your cousin's prom photos like you do with "suggested reels" on Insta.

It's being deliberately pushed to the kids, that's my problem.


It is the user's fault when the user says in his post that he doesn't interact with the algorithm or give it anything to go on, except to click on content he doesn't like, and then, lo and behold, it gives him content he doesn't like.

Children should not be on social media at all.


So the algorithm should show an elephant stomping a man to death to anyone who starts a new account until they "correct" "their algorithm" by interacting with better content?

The issue is that users are being pushed towards violence, gore, and pornography by default.

The bar for these giant multi-billion-dollar tech companies, with hundreds of thousands of employees and the most advanced AI tech in the world, is truly as low as it's ever been.


this is why instagram is so fun. a lot of people have their friend group chats in imessage, wechat, fb msg etc... but the best group chats i've ever belonged to are on insta and the unhinged meme feed is glorious.

This was the original report from 404media[0]. They are doing really great work over there.

[0] https://www.404media.co/instagram-error-turned-reels-into-ne...


Thanks.

Seconded. They have been doing great work for a few years now.

If this is because of reduced moderation, does that mean the algorithm defaults to shock content? That gets the most attention, and they're artificially suppressing it most of the time?

If the attention economy is a race to the bottom, then the most popular content should be stuff that doesn't quite break the public decency rules.


> If the attention economy is a race to the bottom, then the most popular content should be stuff that doesn't quite break the public decency rules.

I'm not sure that follows, if that's indeed what gets people's attention and engagement, good or bad.


it certainly means they can successfully detect it, which makes it all the more abhorrent that they don't moderate it very well (or at all)

Ultimately it’s because users consume shock content; if they didn’t it wouldn’t be surfaced and instead they would get whatever generates views be it cats or rainbows.

No, stop blaming the users.

Look at this comment: https://news.ycombinator.com/item?id=43204869

It clearly shows that passively scrolling on a fresh account will get you gore in no time.

Yes, as humans, shock content makes us look but that doesn't mean users want this kind of content. They are being manipulated by these algorithms. Social networks are optimizing for "engagement", meaning ultimately profit, not what users actually want.

This is the lazy excuse of every company ever hurting society. Oh, the users want it. No, nobody asked for more gore on Instagram. You just want to save money on moderation.


"passively scrolling" is still sending a stream of user actions and choices. But you're right, I trained myself to swipe past animal cruelty and ragebait on FB as fast as possible and still get it recommended. Maybe in some indirect way making me feel bad still increases my engagement.

I remember when ISIS posted that beheading video of a journalist; I had to go out of my way to avoid seeing it. And that was before Instagram.

There are two things here. The company has an algorithm that tries to find out what makes users click, scroll onwards, or return soon.

And then there is what users like.

The company chooses to act as though the two are equal. That's not given. Nobody forces Insta to assume that.


No one is ever forced to be evil. Everything in the human world is the result of someone's choices.

"Shit rolls downhill."

And all of Zuck's works are a detriment to humanity.


What did you mean stop blaming the users?

You mean that the firms are incentivized to show outrage inducing content to gain attention?

If so, that's also NOT the problem with firms. You, me, and everyone with a phone is in some form of information and attention war. Or an over-farming situation.

- There is a competitive process for attention.

- What is crazy today is boring tomorrow, as people get desensitized.

- All content competes with all other content for the little working memory that people have.

Personally I describe it as a gearing ratio issue. We have one small gear which is everyone’s short term attention.

Meanwhile, we have a larger gear, which is the content cycle. This gear is spinning faster and faster as the velocity of content generation increases.

Anyone who takes a stand to slow down or make their content less attractive will get out-competed by more competitive content.

Any moderation of this process breaks the lay definition of free speech.

So you have a runaway effect where the content must get more outrageous.

This isn’t the fault of anyone, but a fundamental problem when free speech meets faster and faster content creation tools.

We're at the point where GenAI slop will simply out-compete everything, and grow everywhere like a weed.

I’m telling people I know that you have to give up on facts, the signal to noise ratio of the future is going to be hopelessly poor.

Instead, you will have to make do with whatever content you are looking at, and focus on the techniques which you use to interact with the content or people.

It's hard to blame a singular firm for this, when anyone who takes a stand will be crushed by the wheel of competition and free speech.


This is like talking about tobacco companies and saying there is a competition for bodily health, and that youngsters should make a better effort not to become addicted, because it isn't the fault of poor Philip Morris that they have to spend millions on aggressive advertising campaigns.

Competition? The social network market is split between a few big monopolists that will buy up or destroy any new upstart.

And even if we assume a free market, I don't follow your logic. Twitter has vastly more lax moderation since Elon Musk took over and people are LEAVING it. It clearly shows that many users actually prefer better moderated platforms.


The underlying competition I am referring to is the competition for attention.

——

Yet, if we are to look at firms, TikTok came on the scene and ate everyone's lunch, till they came up with clones.

Your example, Bluesky, is also competition.

——-

The fact that people are leaving Nazis doesn’t mean that people are somehow escaping the battle for attention.

See for example, the radicalization of the left which has finally accelerated enough to be its own force.

Given enough time, Bluesky will also find that its users must deal with the same forces. If you want content to succeed on Bsky, you need to get eyeballs, which means you need to compete with other things people will watch.

——

You also missed the part that moderation constitutes a violation of the popular definition of "free speech", and is thus censorship.

I know it’s nonsense.

I used to run a boring community; people eventually leave or start importing drama from the more active parts of the web. You have to tamp down on that, which results in the “violation of free speech”.

So you now have pressures to stop moderation. Heck, Wikipedia is being targeted, senators are bankrupting misinformation researchers, all to ensure that they are able to maintain the lowest level of moderation possible.

People need to wise up to the point of free speech, i.e. ensuring the functioning of a market place of ideas.


It’s somewhat more nuanced than that; a classic sampling problem.

When Instagram knows nothing about user X, it trends toward shocking content that user X may not want. Which is bad.

But the reason it does so is that the shocking content has higher engagement, on average, among users A - W. It is not right to blame user X, and it is very likely that users D, H, and M are dismayed at the shock content but can't help slowing down when they scroll past it.

So we can blame Meta to some degree, and users A - W (except D, H, and M) to some degree, and users D, H, and M to some degree.

But if the total ad revenue from users A-W was lower when shock content was featured, user X would not be presented with it.

Of course, Meta could just be decent. But so could the other users. The fact that neither are true is an emergent property of sociopathic corporations and an unfortunate side of human nature.


I read that comment's "doom scrolling" as implying that the commenter intentionally watched negative-emotion-producing-material. Is this an incorrect reading? If so, how does "doom scrolling" differ from "scrolling"?

I generally interpret "doomscrolling" as "compulsive scrolling", e.g. when someone is bored and just keeps going through one of the infinitely scrolling social media feeds.

Bored or tired or just "filling in time" at e.g. a bus stop. That is, times when your energy isn't completely there for making sure you do everything right to manage your micro-behaviours (things like not hovering too long or reading too many comments on a post) in order to curate your feed.

The difference is in the material that you end up consuming.

It's not necessarily an intentional act: it might be that...

- the algorithms are pushing you depressing/apocalyptic content, because it increases engagement

- you intentionally followed people because of their technical or political insights, and these people (often because of world events) end up propagating depressing/apocalyptic news (this doesn't imply any ill intent on their part; it might be genocide, or it might be something milder like layoffs, but if someone often writes about technical topics, mentioning layoffs in IT shouldn't be too surprising to anyone)

PS: also, of course, from the name itself (doomscrolling) there's the acknowledgement that this is an activity that doesn't make you feel better, which means that the non-intentionality interpretation should be considered more important. In fact, complaining about doomscrolling can be seen either as criticism of modern media algorithms or as criticism of the state of the world.


I want to answer your question, but i don’t know what “negative-emotion-producing-material” means, nor how to look up definitions.

Could you type out an explanation by hand for me?


Stuff that makes you feel bad about the state of the world or just life. Helpless, powerless. A story about how orphans died because a billionaire bought the building and kicked them out. Somebody's pet dying. A big corporation steamrolling little people. A friend getting cancer. War.

OK, doom scrolling is “scrolling compulsively” without intention, and with the vague feeling that time would be better spent elsewhere.

It doesn’t have to do with the content.


I had this experience where Facebook.com started to spam me with more and more car accident videos, probably since I hovered over them to block the sender.

I would guess that there could be a similar effect here. And I guess my eyes linger longer on a car getting stuck on a railroad crossing than on something that is not inherently dangerous, too.


It is hard to look away from somebody beating a dog with a belt. In our daily lives we are ethically trained not to look away from this sort of thing.

Even just pausing your scrolling over a video is enough to train your feed to show you more videos like that. So when you encounter something like animal cruelty in your feed and you, quite naturally, pause for a moment in shock you will start seeing more of this content. Again, as you pause in shock on this content you'll keep seeing more. And more.

And we don't permit this "it is because users consume this content" as a general rule. Users consume hardcore porn. Nevertheless, these platforms seek to not show people hardcore porn.


It only surfaces because the algorithm is set to a simple more engagement = more visibility. If it was removing outliers, then this would not only not be the case, but also arguably create a more pleasant experience for everyone.

Of course greed prevails, which is why we are where we are.


Which users do the algorithms most heavily weight for viral effects? The guy who looks at dogs and bikes, or the teen user who doesn't know their arse from their elbow?

> users

Survivorship bias: you aren't accounting for how many users were repulsed or shocked (read: shook up, not a momentary "wtf") and left the platform.


So what I don't understand is that it's perfectly able to categorise this content really well, but is perfectly happy to allow it on the network, even though it clearly violates the community guidelines.

Yes, moderation at scale is hard, but most of the battle is getting decent categorisation.


The recommendation algorithm isn't necessarily identifying this as shock content or gore, all it's doing is identifying it as having the hallmarks of something that a certain circle of people engage with, followed by it deciding that everyone engages with it and it should be the default. It's possible that internally it's tagging it as gore, it's also possible that it's simply categorising it as "fast moving video with an abundance of red".

I would assume that after such a successful AI-based categorization, one single human moderator could do wonders deciding whether it's an abundance of red or something else. But yeah, why would we censor the money-making?

The extreme content gets the most user interaction and leads to higher daily active users

Which is a very important metric.

As long as the content is not illegal, if it positively affects time on site/app, it stays on the platforms for as long as it can.


> The extreme content gets the most user interaction and leads to higher daily active users

For some people, yeah, but for a lot it causes the opposite. That's why it's quite heavily policed. It's not like thirst traps, which are liberally sprinkled in the first 5 minutes of creating an account.


They moderated it up until January of this year. The decision was politically motivated, not technical.

The most insane death and mutilation content I have ever seen has been just from scrolling my Instagram feed long enough. This stuff is not an accident. No matter how much you say you are not interested, a guy in India being run over by a train will be back within a few minutes of doomscrolling.

> "We have fixed an error that caused some users to see content on their Instagram Reels feed that should not have been recommended"

How did that stuff exist on their platform at all?


Users post it?

Shouldn't a multi-billion-dollar company have moderation in place to tag content like that?

I'm pretty sure if you start posting spoilers about Marvel properties or straight-up pirated content it'll be taken down as fast as possible.

But gore? Nah, why bother.


And nobody really moderates it... pretty sure Meta is breaking some laws, at least in some jurisdictions, but this is nothing new.

In recent years I have seen a massive increase of various fake posts from unknown groups on Facebook: made-up shorts, AI-generated fake photos of various weird outrageous claims, which all upon a side check turned out fake, but it was not obvious at first glance. I try to report them, mark them as not interesting etc., but they keep popping back without me ever being interested in them.

Meta is really shitting on its users and just milking the current status quo, or doesn't care how the content on their platforms degrades further, or even quietly allows it. I'd say it's the second choice, and it is really by choice. YouTube has similar potential but somehow they manage to sanitize it all well, so clearly it's possible.


Right? Not a fan of Meta, but the flip side of this is “how dare those censorious corporate overlords limit what I can post to my account”

Mine has become all things that look like genitalia at a glance but aren't...

I'm guessing some algorithm picked up on me stopping on and being like "what the hell?" as liking it...


A couple of months ago I heard a rumor that a certain account was planning a raid on Instagram for today (Feb 28 2025), saying "something will happen on that date". But I just thought it was some kind of weird advertising method from Meta; it kinda worked though, it got me to open the app (I'm too curious).

Never used it, but how about apologizing for the neo-Nazi items too? I have to assume they're there, since Meta now allows right-wing propaganda.

Mark Zuckerberg will never be happy, after all the horrors his businesses have helped perpetuate, the kinds of lifestyles his brands perpetuate, and the people he has helped put into power.

The look on his face at the inauguration is a testament to how awful he feels every minute of every day. Few people in history have caused as much misery to so many people.

We ALL reap what we sow, for good or ill, so I suggest doing things that make other people happy, and never cause them misery or grief.

Look at the face of Lebron James when he surprised the kids at his personally-funded school in Akron, OH. His smile demonstrates the joy that is possible when one spends their money on others' happiness.

"There's still time to change the road you're on." --Stairway to Heaven

Happiness requires making money in an ethical way, and then spending it ethically after having earned it.


Lebron also said there ain't no party like a Diddy party.

Had to delete mine a few weeks ago when I suddenly got a pro-book-burning reel showing Nazis burning books, with the caption "They tell you they burned books, but never which books."

It had 30,000 likes and the top comments were all along the lines of "The victors write history, they will never say the bad guys won WW2" and "Hitler was just fighting the Jewish transgender conspiracy in Germany. Trump will defeat it in America" etc.

I used Instagram to follow traditional musicians and people tailoring their own historical clothing. But I can't stay on a platform that is so happy to give a place to that kind of outright hate speech.


I haven't been served that type of content, but I feel the same way as you lately. It feels like it's shock content in a race to the bottom to get users' attention, instead of showing interesting content I would actually like.

Instagram is an ad-matching machine. Meta is not in the business of caring about people.

It seems short sighted. I know a lot of trans people who use the app, and I used to follow many content creators of colour etc. If we start normalising violently silencing them, paving the way to sending them to camps to work and die, then Meta loses a big part of their customer base.

We are all choosing sides, each and every day, between compassionate concern for our fellow human beings, and callous disregard for others' happiness.

Choose well, everyone. The karma you reap is always exactly what you deserve, for good or ill.

And the pleasure of having power over others is not a good replacement for peace of mind, joy, and happiness.


If I had to delete an app every time I see something I politically disagree with I would have no need for a smartphone.

That's a really unfair representation of what happened. I frequently see things I "disagree with" on most platforms and I am happy to ignore it or go into discussion about it. Fascism is not something I just "disagree with." Calling for people to be silenced by violent mobs is not something I just "disagree with." Holocaust denial is not something that is up for debate.

I felt terrified by what I saw, quite literally sickened by it. I closed my phone completely and struggled to sleep afterwards. It took me three days and some conversations with my family before I could open the app again just to delete my account.

I'm not just taking some moral stand against Instagram by leaving it for publishing views that differ to mine, I'm simply too disgusted to ever open it again, the thought of having an app like that on my phone makes me queasy.



