Interesting article, though I find some of the conclusions it reaches somewhat unexpected:
> The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.
> And these algorithms are optimized to serve the individual wants of individual users; it is much more difficult to optimize them for the collective benefit.
This seems to suggest that the psychological effects of recommenders and "engagement maximizers" are not problematic per se - but today are simply not used with the right objectives in mind.
I find this view problematic, because "what's good for the public" is so vaguely defined, especially if you divorce it from "what's good for the individual". In the most extreme cases, this could even justify actively driving people into depression or self-harm if you determined that they would otherwise channel their pain into political protests.
If we're looking for a metric, how about keeping it at an individual level but trying to maximize the long-term wellbeing?
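To make that a bit more concrete, here's a toy sketch (Python; all names like `wellbeing_proxy` and `gamma` are invented for illustration, nothing any platform actually exposes) of how an individual-level, long-horizon metric could score the same behaviour very differently from a raw engagement metric:

```python
# Illustrative only: compares a raw engagement objective with a
# discounted long-term wellbeing objective for a single user.
# All names (wellbeing_proxy, gamma, sessions) are invented for this sketch.

def engagement_score(sessions):
    """Roughly what engagement maximizers optimize: total time/clicks."""
    return sum(s["minutes_spent"] for s in sessions)

def long_term_wellbeing_score(sessions, gamma=0.95):
    """A hypothetical alternative: a discounted sum of some per-session
    wellbeing proxy (self-reported mood, healthy usage patterns, etc.)."""
    return sum(
        (gamma ** t) * s["wellbeing_proxy"]
        for t, s in enumerate(sessions)
    )

sessions = [
    {"minutes_spent": 120, "wellbeing_proxy": -0.8},  # long doomscrolling session
    {"minutes_spent": 15,  "wellbeing_proxy": 0.6},   # short, positive session
]

print(engagement_score(sessions))           # 135 -> looks "good" by the engagement metric
print(long_term_wellbeing_score(sessions))  # about -0.23 -> looks "bad" by the alternative
```

The point is only that the two metrics can rank the exact same behaviour in opposite ways; which proxy to use, and how to measure it, is the genuinely hard part.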
As the popular quip on HN goes, 'The majority of mobile web tech fortunes have been built on avoiding the costs of consequences and regulation, while retaining all the profits.'
> when what's good for the individual may not be good for the public as a whole
Is as good a summary of what Facebook has done wrong as anything I've read.
The problem is not that Facebook and its ilk are inherently evil, but that they seem willfully ignorant. Ignorant that past a certain scale they have an obligation to the public: an obligation very different from the laissez-faire world The Facebook started in.
The internet majors seem to be gradually awakening to this, but I'd argue that only Apple (with their stance on privacy) and Google (with their internal privacy counterparties) really grok the change. And to be fair, both have business models that can tolerate having principles.
When you've got a recommendation algorithm that could push someone to suicide or change an election outcome, you have a responsibility to optimize for more than corporate profit.
(And to be clear: I'm talking about corporate management with the ignorance comment. Employees at many companies have pushed to deal with these issues.)
Now, maybe I'm biased, having lived in a country that started policing the Internet by telling people they were fighting child pornography, quickly evolved into a black hole of censorship, and blocked Wikipedia a couple of years ago because it didn't fit its own narrative.
I see the Internet as a great force multiplier. Want to watch courses from top professors for free? Here you go. Want to buy a yacht? Here are some videos reviewing the 10 best yachts. Endless entertainment to last you a million years? Check. Want to slit your wrists? Here are five pro-tips to make it quick and painless. It certainly makes everything orders of magnitude easier, as it's supposed to.
If I'm seeking information or encouragement about suicide, technically an algorithm that provides me exactly that is just doing its job, and I don't see why we would want to change -or, god forbid, police- that. What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I've changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.
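To illustrate what "letting the user tweak the algorithm" might look like in practice, here's a toy re-ranker where the user owns a per-topic weight they can change whenever their mood does. The topic names and scores are made up; this is a sketch, not any platform's actual design:

```python
# Toy sketch of a user-adjustable recommender: the platform supplies a
# base relevance score, the user supplies per-topic weights they can
# change at any time (e.g. muting a topic when their mood changes).
# Topic names and scores are invented for illustration.

def rerank(candidates, user_weights):
    """Re-rank candidate posts by base relevance multiplied by the
    user's own per-topic weight (default 1.0; 0.0 mutes a topic)."""
    def score(post):
        return post["relevance"] * user_weights.get(post["topic"], 1.0)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"id": 1, "topic": "self_harm", "relevance": 0.9},
    {"id": 2, "topic": "cooking",   "relevance": 0.6},
    {"id": 3, "topic": "music",     "relevance": 0.5},
]

# The user decides they no longer want the first topic surfaced at all.
user_weights = {"self_harm": 0.0}

print([p["id"] for p in rerank(candidates, user_weights)])  # [2, 3, 1]
```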
> What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I've changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.
That absolutely would be the way forward. However, my impression from blog posts where engineers explain the rationale and iteration process behind recommenders and curation algorithms is that development is most often motivated by growth, with the metrics actually considered being "user engagement" and "user growth".
As such, I would argue that recommenders always had an "agenda" separate from that of the user, it was just commercial rather than political: Keeping the user on the site for as long as possible.
So I'm pessimistic that, in the current incentive structure, sites would make their algorithms adjustable by users just like that - doing so would simply be a bad business decision.
I don't get this line of reasoning. Humans are a LOT better than algorithms at creating horrible feedback loops, and we never hold those responsible.
Hell, if there's one thing you keep reading in psychology books about suicide, it's how institutions ostensibly meant to help people reinforce the suicidal thoughts. One way is by placing suicidal people together, at which point they also advise one another on how to go "painlessly" (hell, I remember discussing painless ways to commit suicide several times with a group of friends on the playground in high school; not at all often, once or twice in six years).
(I must say, now that I know a lot more about medicine, that what I remember, slitting wrists in the bath, is pretty bad advice. Peaceful? Sure. But it takes a very long time, and is easy to screw up in so many ways. Hell, just cold water is probably going to save you, and of course the bath will get cold.)
The second thing they do is even worse: making communication about it impossible. This is done through repression, for example by locking people in their room (or worse, in isolation rooms).
I've yet to hear a single story of people being held responsible. Why should Facebook face this sort of scrutiny?
Your local suicide prevention feedback loop really only encompasses your community, and revamping that system is left up to the people most affected by it. (The community)
Facebook/Google et al are everywhere, and are increasingly becoming everyone's problem. Google in particular has become unreliable at finding what I'm actually looking for unless I use an overly specific query, because it insists on pushing its idea of what it thinks I want rather than what I want.
Honestly, I'm almost at the point of figuring out how to write and provision my own web crawling and search indexing infrastructure, just because I find I simply cannot rely on other search engines to give me a true representation of the web anymore.
Facebook is not doing this on purpose. Facebook is allowing communication about this, which of course serves a purpose and actually mostly helps prevent people from carrying it out.
They are not leading people to problematic posts on purpose - however, from what we know, I think we can reasonably assume they are tuning the recommender to maximize engagement, which leads to more problematic and controversial posts being recommended.
I think you'll find in the psychiatry literature that if there's one thing that can help a lot AGAINST suicide, it's engagement. As long as you keep the patient engaged, there is little danger of suicide (with the significant exception of a patient that came in determined to commit suicide and is executing a plan). Which is why I'm saying that even when keeping people engaged with strategies for suicide, that still works against suicide.
Of course, engagement is expensive when it has to be done by humans and is therefore often explicitly not done in clinical settings. To put it differently: hospitals are surprisingly empty places for the patients staying in them, and psychiatric hospitals are no different.
Because you effectively can't provide it with humans, engagement, even discussing the suicide itself, is actually helpful in preventing the "slide towards suicide".
A very recurring element in descriptions of suicide tends to be a long history of the patient's reaction/interaction/engagement constantly dropping while "somberness", suicidal thoughts and discussions slowly increase, then suicide attempts. Then, days or sometimes less before the actual suicide, you see a sudden enormous spike in engagement with staff, and while we obviously can't ask, it seems deliberately designed to mislead. And staff often "fall for it". That spike is designed to make staff give the patient the means for suicide, to somehow prevent them from responding to it, or to get them information (essentially when they're not looking for some reason, such as during a shift-change meeting).
When push comes to shove, once enough will to commit suicide exists, nothing even remotely reasonable will prevent the suicide. So knowledge about suicide mechanics seems to me much less destructive than people obviously think.
Therefore, knowledge about suicide doesn't matter much. People see it as obviously associated and assume causation. Knowledge of suicide is not what causes suicides, and it is therefore not "dangerous knowledge".
> though I find some of the conclusions it reaches somewhat unexpected
> > The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.
Isn't this a good thing? It's very easy for politicians and bureaucrats to simply say "ban everything", so it's welcome that they're saying "it's complicated, we don't want to ban everything, we do want to make it harder for some people to access some content". It's a more honest discussion.
It sounds only slightly better. In essence it's still the government telling people that it knows better than they do. This kind of thing is bound to produce false positives, and we also know that the government doesn't care about false positives, because I almost never see a politician address abuse from involuntary commitment.
Why do we need algorithms, AI, etc.? Seems like over-engineering. Either don't put your kids online, or, if you do, the sites need to send the parents a report of all their activity -- if parents don't take responsibility to review or control what their children get up to, no AI can do it for them.
Instagram thinks you want to starve yourself to death when you search for #fasting and offers help... but has no problem with #bingeeating or #meth...
Maybe it's not only algorithms that have a problem: I think it's our very belief that Instagram should do something to curb certain ideas and push others that's wrong.
I think there are good and important reasons for unfettered social media and the internet in general, but no one should knowingly serve graphic content to kids, especially when they're actively targeting them.
Similarity and suitability are two radically different scores. But for various reasons - including, but not limited to, the ease of calculating a similarity score - we end up conflating the two in a lot of cases.
While calculating similarity scores is getting easier by the day across a lot of content formats (think image classifiers, sentiment scores, etc), the same is not true for suitability scores.
Technically, I'd think calculating suitability would need more than just matching patterns based on some selected criteria, which is how essentially all recommendation engines work today.
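A minimal sketch of why the two get conflated: similarity falls out of one formula over embeddings you already have, while suitability needs context about the viewer and an explicit policy, neither of which lives in the content itself. The embedding values and the suitability rule below are invented purely for illustration:

```python
import math

# Similarity is cheap: one formula over whatever embeddings you already have.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Suitability is not a property of the two items alone: it needs context
# about the viewer and a policy, both of which live outside the embeddings.
def is_suitable(item, viewer):
    # Invented, deliberately crude rule for illustration only.
    if item.get("sensitive") and viewer.get("age", 99) < 18:
        return False
    if item.get("sensitive") and viewer.get("recently_searched_for_help"):
        return False
    return True

post_a = {"embedding": [0.9, 0.1, 0.3], "sensitive": True}
post_b = {"embedding": [0.8, 0.2, 0.4], "sensitive": True}
viewer = {"age": 14, "recently_searched_for_help": True}

print(cosine_similarity(post_a["embedding"], post_b["embedding"]))  # high: ~0.98
print(is_suitable(post_b, viewer))  # False: similar is not the same as suitable
```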
This reminds me of the Person of Interest episode "Q&A", where a software company was specifically targeting people with mental illnesses and debt to satisfy their advertisers. Some serious effort needs to go into preventing algorithms from suggesting this stuff, lest a few negligent developers threaten the lives of millions.
Then again, that course of action could be a slippery slope: what happens if the algorithms start censoring things that could potentially upset us? We could end up in bubbles, completely unprepared and unwilling to face the hardships that life presents.
I think the problem with personalized advertising is that it often isn't personal, since the algorithms base their assumptions on data gathered by observing people who haven't lived through the same experiences. I mean, yes, we can average things out and disregard outliers in the hopes of maximizing our finances, but by doing so we'd be neglecting the individual circumstances that have befallen a person.
I suppose this is the million-dollar ethical dilemma that advertising companies are struggling with, right? Too much moderation makes content stale, but a lack of it makes things dangerous.
I would think that the impact of this content is overrated. You aren't unprepared for the hardships of life just because you are in an algorithmic filter bubble. People don't kill themselves just because they see graphic content. This reminds me of the discussion in the early 2000s about first-person shooter games. You don't turn into a school shooter because you played those.
>People don't kill themselves just because they see graphic content
This was never the point. The article describes how people who have a predisposition to self-harm get an above-average amount of content related to it, which just compounds the negative feelings they may already have and thus possibly accelerates their actually taking the step of harming themselves. Respectfully, you seem very insensitive to the subject matter.
Is there research on the matter? Can displaying graphic content actually nudge somebody into committing suicide? While this may sound insensitive to the matter, I'd rather opt for having data than join in the general HN hatred of algorithmic optimization of content.
> In Austria, "Media Guidelines for Reporting on Suicides", have been issued to the media since 1987 as a suicide-preventive experiment. Since then, the aims of the experiment have been to reduce the numbers of suicides and suicide attempts in the Viennese subway and to reduce the overall suicide numbers. After the introduction of the media guidelines, the number of subway suicides and suicide attempts dropped more than 80% within 6 months. Since 1991, suicides plus suicide attempts - but not the number of suicides alone - have slowly and significantly increased. The increase of passenger numbers of the Viennese subway, which have nearly doubled, and the decrease of the overall suicide numbers in Vienna (-40%) and Austria (-33%) since mid 1987 increase the plausibility of the hypothesis, that the Austrian media guidelines have had an impact on suicidal behavior.
It seems quite likely that the inevitable publicity of mass shootings encourages copycats. Occasionally the shooters even leave a note / manifesto explaining this.
I don't mean to be a weekend psychiatrist, but sending the girl who committed suicide pictures of hangings -- consistently -- seems unhealthy, or at least unethical to me. The father spoke about "helping to kill" his daughter, and the social media being "partly responsible". I agree with that phrasing: they're not wholly responsible, just as I don't believe that everyone who plays shooter games becomes a real-life killer. But it seems clear to me (again, not a subject matter expert) that there is a correlation, and if you get your social media in front of a few billion people, the numbers add up and you can't just absolve yourself of all responsibility. We have to look after each other, and take care of the weaker and more vulnerable.
Here in the UK, gambling and gambling advertising is legal. Gambling companies are licensed by the Gambling Commission and are required to follow a code of conduct with respect to social responsibility.
If a gambling company sent sales representatives to Gambling Anonymous meetings to offer free bets to recovering addicts, there would rightly be a public outcry and that company would likely be penalised or stripped of their license. I can open an incognito tab right now, visit an online support forum for gambling addiction and almost immediately start seeing advertising for gambling; thanks to ad tracking algorithms, those advertisements will start following me around the internet. That behaviour isn't any less antisocial simply because it's automated and online.
The idea that gambling companies should be allowed to specifically target gambling addicts is not a popular policy position, but it's the default behaviour of online advertising platforms. Personalisation and targeting algorithms are innately amoral; they only reflect the values of a company or a society if they are specifically engineered to do so.
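As a sketch of what "specifically engineered to do so" means: a purely click-optimizing ad selector will happily pick the recovering addicts unless someone explicitly bolts an exclusion policy onto it. Segment names and probabilities below are made up for illustration:

```python
# Sketch: a purely click-optimizing ad selector vs. one with an explicit
# exclusion policy bolted on. All categories and probabilities are invented.

EXCLUDED_AUDIENCES = {("gambling", "gambling_addiction_support")}

def pick_audience(ad_category, audiences, apply_policy=False):
    """Pick the audience segment with the highest predicted click rate,
    optionally skipping segments an explicit policy excludes."""
    eligible = [
        a for a in audiences
        if not (apply_policy and (ad_category, a["segment"]) in EXCLUDED_AUDIENCES)
    ]
    return max(eligible, key=lambda a: a["predicted_ctr"])["segment"]

audiences = [
    {"segment": "sports_fans",                "predicted_ctr": 0.02},
    {"segment": "gambling_addiction_support", "predicted_ctr": 0.08},
]

print(pick_audience("gambling", audiences))                     # the support-forum visitors
print(pick_audience("gambling", audiences, apply_policy=True))  # sports_fans
```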
This isn't a binary argument between "the internet should be a lawless free-for-all" or "the internet should be regulated until it resembles network television". It is primarily an argument about corporate social responsibility - companies should not be insulated against the externalities of their business practices. We're never going to completely agree on how social media companies should behave and moderating content at scale is immensely challenging, but that doesn't give them a free pass to ignore the risks and negative effects of their platform.
Given the gambling example, should we be mad at the advertisers or at the forum for making money off of allowing such ads?
If the Gambling Association made a bunch of posters about their new lotto and paid anyone who put them up in buildings, and the person running the Gambling Anonymous meeting came and picked up a few to put up during their meetings, do you blame the Gambling Association or the one who picked up the posters?
Ads are currently such a nightmare because almost everyone making money off them has chosen to go with services that handle everything, rather than filtering their ads and hosting them locally, since letting those services handle it all pays better. It allows for far more tracking and targeted ads, sometimes for better and sometimes for much worse.
He's the current Secretary of State for the Department of Health and Social Care. He's by far the most tech-orientated SoS we've had for years, doing a lot of work to push digital in health. He's rampantly pro-IT.
Sometimes when politicians make requests like this (make it harder to access images of self harm) people dismiss them as "think of the children". That would be a mistake here. He's not asking for all images to be removed; he is asking for the malgorithmic pushing of self harm content to vulnerable people to be fixed.
People sometimes complain about laws that appear out of the blue. His tweet above is the start of a long, slow process of building a law. It's a clear warning: get better at self-regulating, or we'll regulate you.
"Self-harm images on Instagram just part of problem we need to address. In our national study, 1/4 under 20s who died by suicide had relevant internet use & most common was searching for info on methods"
"Important change in political/social attititude. Just a few years ago, internet seen as free space, no restrictions, complete lack of interest in #suicideprevention from big companies. Now mood is for regulation, social responsibility, safety."
Finally, here's my example of malgorithmic ad placement. I've mentioned this example before, and I think it got fixed (so thank you if you fixed it!), but I search for suicide-related terms for my work, and sometimes the ads are terrible.
> People sometimes complain about laws that appear out of the blue. His tweet above is the start of a long, slow process of building a law. It's a clear warning: get better at self-regulating, or we'll regulate you.
You're absolutely right! One reading is that it's a request. Please fix this problem, before we have to regulate you into fixing it.
Is it possible that there may be an alternative reading? A cynic might suggest that humoring such a plea is a great way to demonstrate that content problems like this can be solved! Then regulators can require those very useful tools be applied to whatever they please in a much more general way.
The odds of whatever Secretary Hancock gets to solve the very real, pressing problem he has so wisely pointed to being completely inapplicable to literally anything else are virtually zero. I can think of a few places where safety and social responsibility means things like never disagreeing with The Party.
As technologists, it's on us to think through the consequences of our choices where we can. It's often not plausible - nobody thought TCP/IP would lead to malgorithmic ads! But tools designed to enforce arbitrarily defined social mores?
The problem with this approach is that you're going to be fixing this "leaky pipe" forever. It also sets another precedent that if you make enough noise then our internet is going to be curated "for our own good".
Social media seems to blur the lines between fantasy and reality for many individuals in a way that they don't seem able to deal with.
In times gone by we'd generally expect that children realise what happens in a movie or a videogame is fantastical.
By contrast, social media is treated as a set of interactions with real people, whether those be your friends or whoever else.
Even posting here on HN is an example. The platform guides me; my (and I assume your) viewpoint of what the development community thinks about things is swayed.
I don't think the platform creators are to blame as much as, well, the entire society we're in. We really need to push organic interactions with the communities we're in, the people around us, not online bubbles with incredible bias that aren't even necessarily made of real humans.
It's the same exact thing as with YouTube promoting pedophilia. These systems are content agnostic and will give you whatever they think you will click on. If somehow people really, really liked videos about the number 27, then clicking on one such video would make them start showing you more. It seems to me that this is a fundamental part of what these systems are. It's nigh impossible to say "do this but keep the bad stuff out" unless you have human moderation. I'm certainly not saying these companies aren't culpable. It just seems weird to talk about individual cases in the abstract as if Instagram were going out of its way to promote self-harm.
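A bare-bones sketch of what "content agnostic, gives you what it thinks you will click on" means mechanically: rank purely by predicted click probability given your history, with no notion of what the content actually is. The "model" here is a fake stand-in, not any real recommender:

```python
# Minimal sketch of a content-agnostic recommender: it only sees
# "how likely is this user to click this item", never "what is this item".
# predict_click_probability is a fake stand-in for a learned model.

def predict_click_probability(user_history, item):
    # Fake model: the more the user has clicked this item's topic before,
    # the higher the predicted probability. No judgment about the topic itself.
    prior_clicks = user_history.count(item["topic"])
    return min(0.95, 0.05 + 0.2 * prior_clicks)

def recommend(user_history, items, k=2):
    ranked = sorted(items,
                    key=lambda i: predict_click_probability(user_history, i),
                    reverse=True)
    return [i["topic"] for i in ranked[:k]]

items = [{"topic": "the_number_27"}, {"topic": "cooking"}, {"topic": "music"}]

# Click one video about the number 27 and the loop starts:
print(recommend(["the_number_27"], items))  # ['the_number_27', 'cooking']

# Every further click only raises the predicted probability for that topic:
print(predict_click_probability(["the_number_27"] * 4, {"topic": "the_number_27"}))  # 0.85
```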
What's pretty funny, I always thought, and difficult to openly discuss: click on a somewhat "typical" blonde on LinkedIn, and a rabbit hole of Baywatch-esque related recommendations pours in.
Algorithms are brutally efficient and this is what happens when their creators don't have any incentive to think about the end result of their work beyond "user engagement" or "more clicks".
As much as I don't like the whole "blame the algorithm" movement that's been going on, I think it's pretty clear this is a large part of the issue for me.
I know that I've been caught in YouTube's recommendation trap before - good luck getting out without deleting all previous history. Whilst I'm "wise" enough to notice this and clear my YouTube history, does that option even exist for Instagram, and would kids or the vulnerable think to do it?
Information systems are a force multiplier on gaining knowledge. They don't magically stop working if what you're interested in is not what "society" thinks you should think.
Machines are getting more and more intelligent. They find content for us, summarize things, generate speech and text.
Look at how complicated human society is. Trying to directly program out questionable and socially unacceptable behavior is next to impossible, since the border is far too thin.
There are loads of unwritten rules about minors, for example, that vary from culture to culture. About what is ok for what age.
So anyone who uses machine intelligence opens himself to liability. Filters are needed, and everyone needs the same set of filters.
This censorship slow motion train wreck started back when these third parties inserted themselves into what used to be person-to-person interaction. First, it was that they were simply providing convenience and hosting. Then editorializing, because simply following the DMCA wasn't draconian enough. Then algorithmically determining what content we should even see, because self-selection requires thought. Then going all-in on filter bubbles as search capability failed. Then gaming the recommendations for their perverse metric of human time wasted.
Unfortunately, from a business perspective there isn't really much other choice. If Youtube solely put itself out there as a commodity video hosting site (with no discovery), then people could switch at the drop of a hat. Whatever they were using for discovery would simply grow a video hosting feature, and we'd get Newtube. As in all media, what matters is the captive audience.
The only real path forward for freedom is to repudiate this corporate-mediated garbage and start seriously adopting software based on peer interaction. As long as third parties remain in the loop, the incentive to blame them for not doing our preferred magical thing is just too strong. Hopefully this can happen before these censorship calls grow loud enough to start targeting alternative protocols themselves.
In tandem we should start thinking of economic systems more capable of investing in such platforms.
We’ve more or less tried both extremes of public vs private owners of capital. Could we try a commons based economy next?
I mean, developers at least seem capable of producing massive wealth in a decentralized fashion, more or less motivated by the public good (or aligned interests), as open source, with very limited capital assets. What if we could make more capital available to that part of the economy?
The primary issue here is not censorship, but actively recommending content to a vulnerable person that may influence their behaviour towards a destructive or terminal outcome.
Behaviour influence for commercial purposes (via data-gathering and targeted-advertising) is one of the biggest topics that HN users are generally critical of regarding the big tech companies.
Surely we can be just as concerned about these mechanisms when they may lead someone towards serious/terminal harm to themselves or others.
Supplementing such images with suicide hotline numbers is much more effective than filtering them. If anything, censoring suicide-related content even further marginalizes suicidal behaviour and isolates those vulnerable and in need of help.
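For what it's worth, the difference between the two interventions is tiny in code terms: instead of dropping flagged posts, attach a help resource and keep them visible. The keyword "classifier" and banner text below are placeholders, not any platform's real pipeline:

```python
# Sketch of "supplement, don't filter": flagged posts get a help banner
# attached instead of being removed. The classifier is a crude placeholder;
# a real system would use a trained model and a locale-specific helpline.

HELPLINE_BANNER = "If you are struggling, help is available: <local helpline number>"

def looks_like_self_harm(post_text):
    return any(term in post_text.lower() for term in ("self-harm", "suicide"))

def filter_approach(posts):
    """Censorship approach: flagged posts simply disappear."""
    return [p for p in posts if not looks_like_self_harm(p)]

def supplement_approach(posts):
    """Alternative: flagged posts stay, but carry a help resource."""
    return [p + "\n" + HELPLINE_BANNER if looks_like_self_harm(p) else p
            for p in posts]

posts = ["new recipe I tried", "thinking about suicide again"]
print(filter_approach(posts))      # the second post just vanishes
print(supplement_approach(posts))  # the second post stays, with the banner attached
```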
I understand your point; parents and society in general are responsible for the people they create. However, please consider that the line between online and offline life is getting ever blurrier, especially for the younger generation.
When large media outlets* are not mindful of the kind of content they push to younger people, they're failing society and should be held responsible for it. This, of course, is not mutually exclusive with parents' responsibility for monitoring what kind of content their children are consuming.
* I am intentionally using the term media outlet instead of social media to draw the parallel with television and radio. Consider how you would react if TV or radio pushed graphic content to an audience they knew, as a matter of fact, to be kids.
You don't know anything about suicide. It's okay to be ignorant about a topic. It's less okay to be so ignorant while giving such strong opinions.
People die by suicide. For some of them the family all knew the person was suicidal and were powerless to prevent the death. For others nobody knew the person was suicidal, and we don't know if the death could have been prevented.
For the case talked about, it's likely that the culture of self-harm prevented her from seeking help. She was accessing content that gave advice about hiding self-harm from others, and about the futility of mental health treatment.
> then again, the most effective solution has to be found in education, in schooling, in the mindful usage of these tools and in taking responsibility from the ground up
This takes time, and is the reason why we need to treat kids differently and have legal and practical systems to protect them in their journey into mature adults.
> algorithms should be perfected, but accusing them of being the cause of death of a fragile, insecure, mentally unstable adolescent is utterly ridiculous.
There is no question that exposing people to certain kinds of content impacts their mental health and outlook. We also know that even a cursory look at some material online can end up with you being exposed to "related content" all over the internet, so it is possible that a perfectly healthy kid could stumble upon one or two such pieces of material and end up being bombarded by it everywhere.
But here is the important thing: the specifics of how or why these platforms are exposing kids to such content are irrelevant, implementation details in programmer speak, algorithm or not.
What matters is the simple fact that these platforms are exposing kids to dangerous content, and we should do something about it.
> if the father or the family of that girl was well aware of what was going on in the messed up head of that girl, she would not have committed suicide
How? Involuntary commitment to a psychiatric institution?
The act of growing up is fundamentally one of rebellion, of doing things one is not supposed to do, of running away from the nest and being pushed away from it. While the expectation of privacy is something that varies dramatically across the world, I am doubtful that there is any culture where parents know everything their children are up to. The internet has amplified the availability of this privacy. As a closeted transwoman, I am grateful for the opportunities this has given me, of being able to find solidarity, but still keep it as one of my deepest secrets. But... I am also thankful for having grown up just before the internet turned into a vast, self-amplifying panopticon.
In the pre-internet world, there was more human mediation in information consumption: one consulted libraries which were staffed by human librarians, who could distinguish access to books about self-harm from books about self-help, one read magazines delivered home by mail, which would presumably have been easier for parents to monitor, or one would socialize with friends, in the physical world, who presumably had a greater interest in your well-being, so that empathy and other human judgments could best help people in need.
In today's world, with ready access to information, anonymously to one's physical circle of family and friends, but entirely publicly to the vast tracking network of automatic recommendation systems, there is no such ready access to help. The systems serving information are essentially paperclip maximizers, providing access to articles and links that maximize their narrow-minded objective functions. Which leads to sub-optimal social outcomes.
Of course the algorithm is doing exactly what it was designed to do. But what it was designed to do is not what we really want it to do, as a human species, with complex, empathetic, and altruistic objectives. On a final sidenote, conveying emotions online is very tricky, but we are talking about a family who has just lost their daughter. Her parents are almost certainly devastated, questioning every little act of theirs. It would be nice to exercise restraint and temper one's comments.
We wanted platforms that connect the world - now we've got them. We've given everyone a voice: the pedophiles, the self-harm fetishists, the terrorists. Now we're reaping what we've sown.
I think that misses the step in between, where the platforms that connect the world started to prioritize "growth hacking" and keeping the user glued to the screen as long as possible above all else.
You do not need aggressive autoplay, recommendations plastered everywhere and "we miss you" emails every few hours to organise the world's information.