Now, maybe I'm biased, having lived in a country that started policing the Internet by telling people it was fighting child pornography, then quickly evolved into a black hole of censorship and blocked Wikipedia a couple of years ago because it doesn't fit its own narrative.
I see the Internet as a great force multiplier. Want to watch courses from top professors for free? Here you go. Want to buy a yacht? Here are some videos reviewing the 10 best yachts. Endless entertainment to last you a million years? Check. Want to slit your wrists? Here are five pro tips to make it quick and painless. It certainly makes everything orders of magnitude easier, as it's supposed to.
If I'm seeking information or encouragement about suicide, technically an algorithm that provides me exactly that is just doing its job, and I don't see why we would want to change -or god forbid, police- that. What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.
> What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.
That absolutely would be the way forward. However, my impression from blog posts where engineers explain the rationale and iteration process behind recommenders and curation algorithms is that development is most often motivated by growth, with the metrics actually considered being "user engagement" and "user growth".
As such, I would argue that recommenders have always had an "agenda" separate from that of the user; it was just commercial rather than political: keeping the user on the site for as long as possible.
So I'm pessimistic that, under the current incentive structure, sites would make their algorithms adjustable by users just like that - doing so would simply be a bad business decision.
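To make the difference concrete, here's a minimal sketch of the two objectives being contrasted. All names, fields, and weights are hypothetical illustrations, not taken from any real recommender:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_time: float   # minutes the model expects the user to spend
    predicted_relevance: float    # 0..1, how well it matches the user's stated interests
    sensitivity: float            # 0..1, e.g. self-harm or outrage-bait content

def engagement_score(post: Post) -> float:
    # What the "growth" incentive optimizes: time on site, nothing else.
    return post.predicted_watch_time

def user_tuned_score(post: Post, relevance_weight: float, sensitivity_penalty: float) -> float:
    # A ranker that lets the user decide how much relevance matters
    # and how strongly sensitive content should be demoted.
    return (relevance_weight * post.predicted_relevance
            + (1 - relevance_weight) * post.predicted_watch_time / 60
            - sensitivity_penalty * post.sensitivity)

posts = [
    Post("10 best yachts reviewed", 12.0, 0.2, 0.0),
    Post("Free lecture series", 45.0, 0.9, 0.0),
    Post("Outrage bait", 60.0, 0.1, 0.8),
]

# Pure engagement ranking puts the outrage bait first...
print([p.title for p in sorted(posts, key=engagement_score, reverse=True)])
# ...while a user who dials relevance up and sensitivity tolerance down gets a different feed.
print([p.title for p in sorted(posts, key=lambda p: user_tuned_score(p, 0.8, 1.0), reverse=True)])
```

The point is only that exposing the second function's knobs to users is technically trivial; the reason it rarely happens is the business incentive, not the engineering.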
I don't get this line of reasoning. Humans are a LOT better than algorithms at creating horrible feedback loops, and we never hold those responsible.
Hell, if there's one thing you keep reading in books on the psychology of suicide, it's how institutions ostensibly meant to help people end up reinforcing the suicidal thoughts. One way is by placing suicidal people together, at which point they also advise one another on how to go "painlessly" (hell, I remember discussing painless ways to commit suicide with a group of friends on the playground in high school. Not at all often, maybe once or twice in 6 years).
(I must say, now that I know a lot more about medicine, the advice I remember - slitting your wrists in the bath - is pretty bad. Peaceful? Sure. But it takes a very long time and is easy to screw up in so many ways. Hell, just the cold water is probably going to save you, and of course the bath will get cold.)
The second thing they do is even worse: making communication about it impossible. This is done through repression, like locking people in their room (or worse: isolation rooms).
I've yet to hear a single story of people being held responsible. Why should Facebook face this sort of scrutiny?
Your local suicide prevention feedback loop really only encompasses your community, and revamping that system is left up to the people most affected by it. (The community)
Facebook/Google et al are everywhere, and are increasingly becoming everyone's problem. Google in particular has become so unreliable at finding what I'm actually looking for without an overly specific query, because it insists on pushing its idea of what I want rather than what I'm actually asking for.
Honestly, I'm almost at the point of figuring out how to write and provision the infrastructure for web crawling and search indexing myself, just because I find I simply cannot rely on other search engines to give me a true representation of the web anymore.
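The core of that is surprisingly small, even if operating it at scale is not. A toy sketch using only the Python standard library (no politeness delays, robots.txt handling, or deduplication - just the shape of a crawl plus an inverted index; the seed URL is a placeholder):

```python
import re
import urllib.request
from collections import defaultdict
from html.parser import HTMLParser

class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from a page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

def crawl(seed_urls, max_pages=10):
    """Breadth-first crawl; returns {url: page_text}."""
    seen, queue, pages = set(), list(seed_urls), {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen or not url.startswith("http"):
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip unreachable pages in this toy version
        parser = LinkAndTextParser()
        parser.feed(html)
        pages[url] = " ".join(parser.text)
        queue.extend(parser.links)
    return pages

def build_index(pages):
    """Inverted index: token -> set of URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            index[token].add(url)
    return index

if __name__ == "__main__":
    pages = crawl(["https://example.com"])
    index = build_index(pages)
    print(index.get("example", set()))
```

The hard parts - storage, ranking, freshness, spam - are exactly where a personal index could diverge from what the big engines choose to show, which is the whole appeal.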
Facebook is not doing this on purpose. Facebook is allowing communication about this, which of course serves a purpose and actually mostly helps prevent people from carrying it out.
They are not leading people to problematic posts on purpose - however, from what we know, I think we can reasonably assume they are tuning the recommender to maximize engagement, which leads to more problematic and controversial posts being recommended.
I think you'll find in the psychiatric literature that if there's one thing that helps a lot AGAINST suicide, it's engagement. As long as you keep the patient engaged, there is little danger of suicide (with the significant exception of a patient who came in determined to commit suicide and is executing a plan). That's why I'm saying that even keeping people engaged with strategies for suicide still works against suicide.
Of course, engagement is expensive when humans have to provide it, and is therefore often explicitly not done in clinical settings. To put it differently: hospitals feel surprisingly empty to the patients staying in them, and psychiatric hospitals are no different.
Because you effectively can't do it with humans, engagement - even discussing the suicide itself - is actually helpful in preventing the "slide towards suicide".
A recurring element in descriptions of suicide is a long history in which the patient's reaction/interaction/engagement constantly drops while "somberness", suicidal thoughts and discussions, and then suicide attempts slowly increase. Then, days or sometimes less before the actual suicide, you see a sudden enormous spike in engagement with staff, and while we obviously can't ask, it seems deliberately designed to mislead. And staff often "fall for it". That spike serves to get staff to hand the patient the means for suicide, to somehow keep them from responding to it, or to gather information (essentially, when staff aren't looking for some reason, such as a shift-change meeting).
When push comes to shove, once enough will to commit suicide exists, nothing even remotely reasonable will prevent the suicide. So knowledge about suicide mechanics seems to me much less destructive than people obviously think.
Therefore, knowledge about suicide doesn't matter much. People see the two as obviously associated and assume causation, but knowledge of suicide is not what causes suicides. It is not "dangerous knowledge".