I ran a website for youngsters several years ago.
One of the duties in maintaining it was moderating the discussion boards.
Some kids were difficult to manage, would not accept being banned (by email, IP, or whatever other means), and would keep recreating profiles.
Ultimately I dealt with those people by “greylisting” them: I added a sleep() of 5 to 25 seconds before each page render (actually it was more sophisticated and would stream chunks over TCP, so the feeling of slowness was even more real).
Worked like a charm. Within a few days the recalcitrants would no longer come to the website.
I called this “moderation by degradation of user experience”, and it was pretty effective, much like the solution described in your post.
Think about page load times if you need to restrain visits.
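If anyone wants to play with the idea, here's a minimal sketch in Python using only the stdlib WSGI server; the greylisted IP, delay range, and chunk sizes are all made up for illustration:

    import random
    import time
    from wsgiref.simple_server import make_server

    GREYLIST = {"203.0.113.7"}   # hypothetical IPs of repeat offenders

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/html")])
        body = b"<html><body>the usual page content...</body></html>"
        if environ.get("REMOTE_ADDR") in GREYLIST:
            time.sleep(random.uniform(5, 25))   # the up-front stall
            # dribble the page out in tiny chunks so it feels like a bad
            # connection rather than an obvious server-side sleep()
            for i in range(0, len(body), 8):
                yield body[i:i + 8]
                time.sleep(random.uniform(0.2, 1.0))
        else:
            yield body

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()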
This reminds me of the old VBulletin plugin "Miserable Users"[0].
We also had a community suffering from this problem (during the early 2000s). Bans would take care of a lot of problem users, but would just give energy to those truly out for blood: the trolls, the bored, or the very immature.
We had one user get banned over a dozen times while we tried banning IPs, name regexes, and anything else we could think of. Finally, like you, we found that if we annoyed them first, they got bored and shuffled off to some other, lower-barrier place.
Some of the nice features of that plugin (per the site) were the following; a rough sketch of how they might combine appears after the list:
1. Slow response (time delay) on every page (20 to 60 seconds default).
2. A chance they will get the "server busy" message (50% by default).
3. A chance that no search facilities will be available (75% by default).
4. A chance they will get redirected to another preset page (25%, to the homepage, by default).
5. A chance they will simply get a blank page (25% by default).
6. Post flood limit increased by a defined factor (10 times by default).
7. If they get past all this okay, then they will be served up their proper page.
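Out of curiosity, here's how those defaults might combine per request, as a minimal Python sketch; every name is hypothetical, and the flood-limit multiplier (feature 6) would live in the posting path rather than the page path:

    import random
    import time

    def error_page(code, msg):    # stand-in helpers, just for the sketch
        return f"{code} {msg}"

    def redirect(url):
        return f"302 -> {url}"

    def miserable_gauntlet(render_page):
        # run one request through the plugin's default odds
        time.sleep(random.uniform(20, 60))   # 1. delay every page 20-60s
        if random.random() < 0.50:           # 2. 50% fake "server busy"
            return error_page(503, "Server busy")
        if random.random() < 0.25:           # 4. 25% bounce to the homepage
            return redirect("/")
        if random.random() < 0.25:           # 5. 25% blank page
            return ""
        # 3. search unavailable 75% of the time; 7. survivors get their page
        return render_page(search_enabled=random.random() >= 0.75)

    # e.g.: miserable_gauntlet(lambda search_enabled: "the proper page")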
> So, how do we escape this parasitical leech without triggering his vindictive rage? Gray Rock is primarily a way of encouraging a psychopath, a stalker or other emotionally unbalanced person, to lose interest in you. It differs from No Contact in that you don’t overtly try to avoid contact with these emotional vampires. Instead, you allow contact but only give boring, monotonous responses so that the parasite must go elsewhere for his supply of drama. When contact with you is consistently unsatisfying for the psychopath, his mind is re-trained to expect boredom rather than drama. Psychopaths are addicted to drama and they can’t stand to be bored. With time, he will find a new person to provide drama and he will find himself drawn to you less and less often. Eventually, they just slither away to greener pastures. Gray Rock is a way of training the psychopath to view you as an unsatisfying pursuit; you bore him and he can’t stand boredom.
It worked really well -- although not perfectly -- when I was being constantly annoyed by someone. I explicitly told them I didn't want to deal with them, but they kept coming. They stopped once I replied to their "how are you?" with how I really was.
Psychopath doesn't have a clinical definition. It's strictly pop-sci bollocks. (EDIT: "psychopath" does not appear as a diagnosis in ICD-10, DSM-IV, or DSM-5. Other disorders have some overlap with the concept of psychopathy, but there isn't any diagnosis that has a clear mapping.)
That's a good thing, because it avoids stigmatising a real diagnosis that's given to a group of people who are already deeply stigmatised.
Can confirm this worked with an ex-friend. She was such an emotional drain that simply not responding generated more ire. In the end, I just gave a lot of boring responses and she grew tired, eventually allowing me to go no-contact simply and easily.
> his mind is re-trained to expect boredom rather than drama
> he will find a new person to provide drama and he will find himself drawn to you less and less often
> he can’t stand boredom
Unnecessarily gendered language is jarring for the reader and also (possibly unintentionally) sexist. The singular they/them/their is generally acceptable to use in cases such as these.
Should all pre-existing content on the internet be rewritten to be PC? This is from 2012, and as far as I've noticed, this gender-neutral thingy only started coming up a few years after that.
In the same way that you wouldn't quote somebody on the whimsical nature of the feeble woman, you shouldn't propagate ideas that paint men as perpetrators.
In 2012, this was already well-known. The language is jarring and sexist for no apparent reason. It is not undue revisionism.
Perhaps we need a browser plugin for people to detect when ‘he’ is being used as the default gender and convert it?
‘He’ worked as the default pronoun for a long time in many languages all over the world. (And still does in many languages.) I find it curious how many people can be sold on the idea that a language can be considered sexist.
I noticed the same thing. Maybe I should have got a quote from elsewhere as it's not particularly well written. Nevertheless, it still gets the concept across.
This sounds pretty terrible when you think about how arbitrarily websites can determine you're a "bad user", but then again I like the idea of doing this programmatically for sites I myself want to stop using.
Either that or a plugin to encourage productivity.
Don't block social media sites, just turn them all into slow ineffective shitty websites.
If you don't already, you should browse HN with ShowDead turned on. There are people who have been banned for years and blissfully post comments several times per week, completely unaware that almost nobody is seeing them.
Often, the comments they're posting into the black hole are indicative of why they were banned in the first place. But if I see a dead comment that does add to the conversation, I vouch for it, which un-deads it.
I don't think many people are unaware. Sometimes when I see dead comments I trace their comment history back to the point they were banned. Cause I'm curious how those subterranean dwellers got that way. Almost invariably they were reprimanded by dang, repeatedly, and refused to change their ways, or to write to HN saying they'd change their ways. I don't recall seeing someone just mysteriously banned without an obvious refusal to behave decently/put some effort into their comments. The white dead comments don't start at some random point, they start with the person being told they are banned, and what to do about it.
Although I did see a fairly new account the other day with every comment dead, from the very first one, with no visible mod intervention; I figured it was from being a sockpuppet or someone with a track record.
p.s. I wouldn't want showdead on all the time! I use this "HN ungrey" bookmarklet to see whited-out comments on a page. Also it's useful to read comments that have just been voted down enough to be hard to read:
For other sites, on the topic of useful bookmarklets, this one, "Kill Element", is a lifesaver. On any site with an annoying fixed window thing on the screen, like a "cookies?" popup etc., just click the bookmarklet and then the offending element, and it disappears! Hugely satisfying, and instant. Enjoy!
> 'Almost invariably they were reprimanded by dang, repeatedly, and refused to change their ways, or to write to HN saying they'd change their ways.'
'Almost invariably'? Dude, I created my first account here a few months ago and was "shadow banned" within 15 comments, simply for expressing honest opinions, which somebody somewhere took anonymous offense to and then SHADOW BANNED ME. Is that fair? Do you expect to attract new faces here with such an attitude? I'm a Slashdot poster going way back, with a mid-6-digit UID there.
Do you have evidence for that? I was sceptical about the "simply for expressing honest opinions" part. I looked at your comments on this account. It seems your most recent comment was downvoted, then you replied to yourself beginning with
> Who the fuck downvoted this post, and what is your issue, asswipe? Seriously.
I stopped reading it at that point.
Your comment before that ends with
> Take me, for instance. Made my first account a few months ago, posting my opinions innocently. Shadow banned in less than 15 POSTS. Holy shit! Talk about being shocked. Did you care, when I brought this to your attention? Does it matter to you that you are driving good people away from here every day? Of course not. You're too self-absorbed to care.
> And you even sit around having these long discussions, thinking up clever and creative ways to be even more of a hellspawn against these people whose opinions you hate so much. New ways to separate people from society and trap them inside a cage where you don't have to hear their thoughts or opinions. Has there ever been a more sick or contemptible society in the history of man?
> Go ahead and ban/shadowban this account also, please. I will just make another. And another. And another. And another...........and another......... from now until the end of time. Eventually I will automate the process, and submit my comments in batches, at a far higher rate than your mods can find and hide them. In my irritation at your unchecked fascism, my comments will ALL be as sharp as a knife, to keep driving the truth home as frequently as possible, until these "mysteries" you "just don't understand" are finally and definitely solved in your mind.
Ok, so... that sounds pretty insane. Please find something better to do with your life and/or get help.
I browse HN with showdead. I encounter very few hidden comments and I regularly vouch for comments by new accounts that were apparently caught in some kind of spam filter.
I've had it on for a while. IMO, there aren't that many dead comments, and most are clearly dead for good reason. I suppose I like it, as a check that the mods aren't doing anything funny.
Then again, it's always possible that there's another, deeper "dead" in there, and only the really lousy spam shows up on "showdead". But that way lies paranoia.
I just checked, and I have ShowDead on, so I must have had it on for a rather long time since I don’t at all remember hearing about it or turning it on. I also don’t recall seeing a dead comment, or perhaps I don’t know how to identify them.
Another sinister way of doing this is to have users solve captchas in order to comment while keeping a badness score for the troublemakers, then pretend they failed the captcha at a rate proportional to their score.
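The logic itself is tiny; a sketch in Python, with a made-up badness table (scores in [0, 1]):

    import random

    badness = {"troll42": 0.8, "newcomer": 0.0}   # hypothetical scores

    def captcha_passes(user, answered_correctly):
        # fail even a *correct* answer with probability equal to the
        # user's badness score, so troublemakers mostly "fail"
        if not answered_correctly:
            return False
        return random.random() >= badness.get(user, 0.0)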
That already happens if you use a VPN and have anti-tracking features enabled in your browser. Actually, it's sometimes impossible to win the puzzle; they just keep serving you new ones to keep you busy. Maybe it's effective against third-world click farms.
Is it just me or has the web's compatibility with Firefox taken a nosedive recently? It used to just be my employer's HR software that was chrome-only, but in the last year my power utility website, apartment complex website, and even major websites like https://www.deviantart.com/ (which I was trying to visit just 10 minutes ago) have broken in Firefox but not chrome. Badly, too. These aren't "the layout is different in FF and nobody noticed" bugs, they're "site infinitely redirects" bugs or "login button doesn't submit" bugs.
Debug steps: turn off bitwarden, my only extension. Never helps. Ctrl+Shift+Del cookies. Never helps. Sigh, open chrome. Works first time.
Is it just me or did the web up and dump firefox just when it started to get good?
I've noticed some of this lately - in a significant fraction of cases, it comes from Firefox honoring X-Frame-Options while Chrome ignores it, so e.g. payments work on Chrome on sites that don't work on FF.
At my current employer, the web apps are only ever tested on chrome. If it works on chrome, it ships. I think I’m the only one using Firefox and making sure it works there before chrome.
We recently had some "FE devs" make a spiffy new SPA for some internal product. When I got to testing it on Firefox cause that's my main browser, I got a blank white page.
I asked them and they're like "yeah, it only works on chrome-based browsers". Or something to that effect. It's not like some CSS was wonky, or a bug somewhere... No, the default process of them building the SPA somehow yielded a completely non-functioning app for Firefox.
Services with absent engineers should be breaking left and right this month due to changes to the SameSite attribute on cookies that hit browsers in early Feb. The intention of the change is to give cookie defaults a long-overdue privacy improvement.
This is a change that’s been underway for years but came as a surprise when it actually shipped. I coordinated updates to ~40 packages owned by 5 different teams at my company, and had to put aside a good amount of other critical product work for about a week to ensure we didn’t encounter any customer issues.
The crux of the issue for maintainers is that auth flows that require cookies to be sent across different origins (e.g. OAuth with form_post) will no longer work unless the cookies are explicitly updated to SameSite=None and Secure. Chrome led the pack on shipping the changes, but also implemented a special timeout rule that temporarily allows cookies that don't meet the new spec to be set anyway, to try to ensure auth flows don't break. Eventually they will lift this timeout. Firefox has shipped support but has not implemented such a timeout.
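For concreteness, a cross-site cookie now has to carry both attributes together. Python 3.8+ can emit the updated header from the stdlib (the cookie name and value here are placeholders):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "opaque-token"        # placeholder value
    cookie["session"]["samesite"] = "None"    # allow cross-origin sends
    cookie["session"]["secure"] = True        # mandatory with SameSite=None
    print(cookie["session"].output())
    # Set-Cookie: session=opaque-token; SameSite=None; Secure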
At one place I worked, people were completely aware, but Firefox issues were always deprioritized because analytics showed a low percentage of users affected. I wouldn't be surprised if a higher proportion of Firefox users also run adblock, which further skews those usage stats.
I've unexpectedly had precisely the opposite experience; as of recent changes to cookie handling and 3rd party content in Chrome, several sites / webapps have either stopped working at all in chrome, or have serious issues -- while rendering and performing just fine in FF.
Some tech demo sites are Chrome only but I’ve yet to encounter a broken site on Firefox. The only issues I have are mostly due to adblock or my Pi Hole. I haven’t used Chrome in years.
I also added about 30 seconds of latency to every page I visit, but for completely different reasons than the OP. Switching to Brave and blocking all cookies and JS by default meant I had to manually enable them for nearly every site I actually wanted to use.
About a week later, Chrome was reinstalled. Maybe I'll try it again once I level up my willpower.
I'm using nextdns.io and NoScript with Firefox; it works quite smoothly once you've accumulated the settings.
You can export/import the NoScript settings and merge them with meld to keep the settings in sync between your PCs and laptops.
That explains a lot... I frequently have to solve 10+ captchas when I'm using Firefox, many of them rate-limited. It feels like a punishment for resisting surveillance. These things should be illegal due to the accessibility problems they cause, if not the fact they're a nuisance.
Why should people be punished for using VPNs, Tor, an ISP with CGNAT? All of these should be supported regardless of how much abuse originates from them.
"Oh, you dare to oppose our surveillance? You want to block tracking scripts, fingerprinting and use VPN? You're a baaaad consuumer, we're going to correct your behavior by making your browsing experience miserable or submit to our rules and switch to Chrome"
There's also the double standards involved. It's totally fine when they run their abusive javascript on my computer but if I even so much as scrape their website suddenly it's abuse just because they don't like it.
Everything is okay and justified when rich corporations do it. "Normal" people just have to accept it without fighting back in any way. Company directly and openly transmits malware to people's browsers, collects all personal information and creates detailed profiles of people in order to sell to interested parties? If I did that, I'd no doubt get charged with some sort of crime. They just make it part of their terms of service which nobody ever reads much less agrees to and somehow everything is justified. Suddenly it's not malware but "surveillance capitalism", a totally legitimate activity. And if we try to resist in any way, they use the lack of tracking to say we're indistinguishable from the networks of bots spamming them or DDoSing them or whatever. Since it's part of their terms of service, any attempt on our part to circumvent their fingerprinting is abuse.
> we're going to correct your behavior by making your browsing experience miserable
Hopefully the only thing they'll achieve is the death of their own online community. Imagine if HN forced people to solve a captcha before every single post.
Should be, but unfortunately we're still trying to invent a better abuse-resistance system than a captcha. Invent a better one and the world will throw money at you. Telemarketing calls are an example where better abuse-resistant systems would be awesome.
> we're still trying to invent a better abuse-resistance system than a captcha.
> Invent a better one and the world will throw money at you.
It already exists.
The abuse stems from the fact that servers connected to the wider internet are designed to respond to anyone who tries to talk to them. That's the fundamental problem with internet security today: computers talk to strangers they don't know, much less trust.
What if computers dropped all packets by default and networked only with authorized users? The risk of exploitation and abuse becomes negligible because to unauthorized users it's like the computer is not even there to begin with.
This can be done with single packet authorization. The internet would lose its mass market appeal but it's much better than normalized widespread surveillance.
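A toy version of single packet authorization, to make the idea concrete (fwknop is the mature real-world implementation); the pre-shared key, knock port, and firewall command are all placeholders:

    import hashlib
    import hmac
    import socket
    import subprocess
    import time

    SHARED_KEY = b"out-of-band-shared-secret"   # hypothetical pre-shared key
    WINDOW = 30                                 # seconds of clock skew allowed

    def valid_knock(payload):
        # payload = 8-byte big-endian unix time || HMAC-SHA256 over it
        if len(payload) != 40:
            return False
        ts, mac = payload[:8], payload[8:]
        if abs(time.time() - int.from_bytes(ts, "big")) > WINDOW:
            return False   # stale packet; blunts naive replay
        expected = hmac.new(SHARED_KEY, ts, hashlib.sha256).digest()
        return hmac.compare_digest(mac, expected)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 62201))   # everything else is dropped by default
    while True:
        payload, (ip, _) = sock.recvfrom(64)
        if valid_knock(payload):
            # hypothetical hole-punch: let this IP reach SSH, nothing else
            subprocess.run(["iptables", "-I", "INPUT", "-s", ip,
                            "-p", "tcp", "--dport", "22", "-j", "ACCEPT"])

To everyone without the key, the box never answers, so it might as well not exist.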
I can see that it could be effective against brute force attacks. A real user would assume they fat fingered their password and try it again, a brute force attack would miss the password and carry on forever.
No, he calls it slowbanning. Hellbanning is when you let them keep posting but other people can’t see the posts. Hacker News hellbans as well, and you can see the comments from the deplorables if you turn on “showdead” in your profile.
IIRC hellbanning is a variant of shadowbanning that prevents easy discovery of the banned status by putting all suspect accounts into the same "hell" invisible to normal users.
This works well if you have a network of fake accounts from a single "persona" or ring of personas - by all their indications they can't see their own posts are being ignored.
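The whole mechanism boils down to one visibility check; a sketch with hypothetical User/Post types:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:
        name: str
        shadow_banned: bool = False

    @dataclass
    class Post:
        author: User

    def post_is_visible(post: Post, viewer: Optional[User],
                        shared_hell: bool = True) -> bool:
        if not post.author.shadow_banned:
            return True        # normal posts are visible to everyone
        if viewer is None:
            return False       # logged-out readers never see banned posts
        if viewer.name == post.author.name:
            return True        # authors always see their own posts
        # the hellban twist: banned accounts share one "hell", so a ring
        # of sockpuppets keeps seeing each other's posts and suspects nothing
        return shared_hell and viewer.shadow_banned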
Notice: it's almost the exact response to the persona management software problem [1] (aka bots).
This also works incredibly well for cheaters in videogames.
Give them their own queue with their own games with other cheaters to play against, and as long as nobody is cheating in a way that breaks the servers, they can play their own version of the game if they want without ruining the game for those who don't cheat.
This reminds me of someone telling me that they are using cheat sites for an online Scrabble game because they suspected their opponent (a “friend”) was cheating. It’s hilarious to think that two humans are watching two instances of a likely optimal bot play against each other and rooting for their instance of the bot.
It doesn't work for logged-out users. If I can just look at e.g. the Internet Archive's copy of reddit and see if my accounts are in there, it defeats the purpose.
Neither reddit nor HN make any attempts to make it hard for sophisticated users to figure out they're shadowbanned.
Forcing you to query IA at least reduces the frequency of feedback, as they're taking snapshots instead of giving a live feed. You could also shadowban IA. You can also do things like guess based on IP address or browser fingerprinting, or require a login from various IP ranges.
Of course your main point - that this is all terribly imperfect and won't stop a determined, sophisticated user, who has realized what's happening - is spot on. That, however, is perhaps a rare combination, rare enough to simply continue dealing with manually.
The feedback doesn't have to be very fast. If I'm botting correctly, my accounts will almost never be banned. Even once every 24 hours will be more than enough.
IA was just an example, and Tor would be easier. But anyway, I think it shows the flaw in doing so:
> You could also shadowban IA.
If the spammer manages to get all the IPs hellbanned just by looking at things, he gets more eyeballs on his spam.
My point is, you can't get much better than normal shadowbans, which are trivially detectable for moderately sophisticated users (just log out and try to check your profile) but not anyone else. "Hellbanning" is a stupid extension of this concept which only works in video games.
Also, shadowbanning is a spineless and deeply unethical move. If I get banned, I know what I did wrong and can reflect on that. If I get shadowbanned, I'm just screaming into the ether. That is not a Good Thing™, it is atrocious.
> Even once every 24 hours will be more than enough.
Depends on the use case. Once every 24 hours is a lot easier to moderate than a minute by minute spam wave.
> IA was just an example, and Tor would be easier.
Tor would indeed be easier... assuming it's not already blocked through other means, as it frequently is. There's a whole ecosystem around blocking Tor and other proxy mechanisms - imperfect and permeable though they may be.
> My point is, you can't get much better than normal shadowbans
Not sure I agree, or that you've supported your point - however, even shadowbans are often unnecessary. The goal is never perfect moderation, merely to stack the deck in the moderators' favor on the time-effectiveness of blocking problematic content, until either the available moderation effort can handle it or the spammer moves on to easier, more cost-effective targets (which even basic shadowbanning can achieve, mooting the need for better tools even if they're available).
> Also, shadowbanning is a spineless and deeply unethical move.
As a first line of defense against mere rules breakers, I might agree. As a second, third, or nth line of defense against particularly problematic ban evaders and spambots, I will gladly resort to such tools - or worse - and sleep soundly at night.
> 'As a first line of defense against mere rules breakers, I might agree. As a second, third, or nth line of defense against particularly problematic ban evaders and spambots, I will gladly resort to such tools - or worse - and sleep soundly at night.'
So then you will have no problem recognizing where I'm coming from when I say that the Hacker News 'mods' are some of the most evil Nazis on the planet. After all, I created my first account here only a few months ago, and was shadowbanned in less than 15 posts despite posting with honest intentions. Is that what decent people do to other people?
I mean, I wouldn't use your hyperbole, but I get the anger and frustration and the disagreement about how communities should be run. I'd certainly like to think I'd run any of my communities differently, given some of the outlier cases of automatic shadowbans I've heard of.
On the other hand, I also realize just how badly the moderation team is outnumbered. Once you get to a certain scale, you only have bad options at your disposal. Showdead and vouching at least add a twist to it that make it not quite as bad as some of the other options out there.
But back to the first hand - I've quit other communities over less. That's my preferred form of retaliation when I don't merely disagree with moderator decisions, but feel strongly that they're outright behaving poorly, mistreating those I care about, and not being reined in - let them reap what they sow. I'm happy to take my knowledge, advice, technical chops, and general aid elsewhere. The internet is vast and infinite, and there are communities out there with moderation styles to my liking, where my contributions will be appreciated.
And they are appreciated, at least from time to time. I've had people reach out to me in another community over a decade old abandoned and archived github project for questions and appreciation, to say nothing of my more current projects. I've had people credit me for teaching them the programming language they used to enter the industry - or perhaps blame me ;). I've helped more people find and understand their bugs than I can easily count. Which isn't to say I'm a perfect, always behaving individual, or that I won't cut moderators some slack for honest mistakes or just generally being human. But I will aim to aid the communities I appreciate, and withhold from those I don't, by voting with my feet.
We only do that when an account doesn't have much history on HN and there's evidence of spamming or trolling. For established accounts, we tell people we've banned them and why.
I've always thought people might catch on, since no one is engaging with their takes. I'm curious whether letting bots post some Markov-chain responses might keep them in the dark a little longer.
I see it far less frequently now (thanks @dang!), but a few years ago it wasn't uncommon to see shadow banned HN users continuing to post for years, talking into the void. Sometimes I'd look into their post history and so many of them had been banned for utterly trivial reasons, it was pretty sad.
Would you have a profile or two handy that I can take a look at? I left showdead on for a while but found it useless in terms of coming across interesting comments from such users. Thx.
I seem to remember being pretty upset when I found out Terry Davis of Temple OS was being shadowbanned. Maybe it was just temporary for a short period or a single comment, but I didn't like it.
Terry was permanently shadow-banned, the subject came up quite often. He suffered from schizophrenia and would make racist and paranoid comments about 80% of the time, but his remaining posts were often pretty technical / insightful. As the victim of an illness it seemed unkind that he was banned, but at the same time a lot of his output was obviously very offensive.
Cases like that are why we introduced vouching, which allows the community to restore the good posts made by a banned account.
Incidentally, if a banned account is making only good posts, we're happy to unban it. I often look at the recent commenting history of banned accounts in the hope of finding such cases, and users sometimes email us about them (as mirmir mentioned elsewhere in this thread). That's super helpful!
One strange phenomenon is that there are banned accounts that post good comments, but revert to posting bad comments that break the site guidelines as soon as we unban them. Then we ban them again and their comments get good again...go figure. Any large-enough population sample includes a long tail of behaviors.
> One strange phenomenon that comes up occasionally is that there are banned accounts that post good comments, but revert to posting bad comments that break the site guidelines as soon as we unban them. Then we ban them again and their comments get good again...go figure. Any large-enough population sample includes a long tail of behaviors.
It would be somewhat ironic if re-enabling interaction with the community is what's driving them back to bad behaviour. You know, HN as a bad influence.
That's one model; I've come up with others over the years. But there's no easy way to test any of them. We can't simply ask, either, because asking a question like that is enough of a perturbation to significantly affect what one is asking about—and who knows if people are even aware that they're doing this to begin with.
The truth is right in front of your face. You stare at it in the mirror every time you brush your teeth.
The problem here is your BONDAGE AND DISCIPLINE moderation system, as I explained to you. You make up all of these bullshit rules and then stand around with tasers and shotguns forcing people to follow them "OR ELSE." You do not exercise any discretion or any moderation whatsoever, but just go around shooting people in the face simply for opening their mouth to speak. And then you're surprised as shit when this behavior breeds nothing but contempt for you and your idiotic rules. "How could this be happening?"
Take me, for instance. Made my first account a few months ago, posting my opinions innocently. Shadow banned in less than 15 POSTS. Holy shit! Talk about being shocked. Did you care, when I brought this to your attention? Does it matter to you that you are driving good people away from here every day? Of course not. You're too self-absorbed to care.
And you even sit around having these long discussions, thinking up clever and creative ways to be even more of a hellspawn against these people whose opinions you hate so much. New ways to separate people from society and trap them inside a cage where you don't have to hear their thoughts or opinions. Has there ever been a more sick or contemptible society in the history of man?
Go ahead and ban/shadowban this account also, please. I will just make another. And another. And another. And another...........and another......... from now until the end of time. Eventually I will automate the process, and submit my comments in batches, at a far higher rate than your mods can find and hide them. In my irritation at your unchecked fascism, my comments will ALL be as sharp as a knife, to keep driving the truth home as frequently as possible, until these "mysteries" you "just don't understand" are finally and definitely solved in your mind.
I've seen people like this on message boards before, and I always just assumed they handle disagreement poorly. Specifically, they can make an original comment that is fine, but if someone responds in a way they don't like, they kind of fly off the handle. So I guess if people never respond to them, they seem fine.
If I see an account that's been shadowbanned for years, and has consistently posted appropriately for maybe several months, I report it via the contact address.
I think nowadays you might catch on, because the culprit might not be identified to your site on both their phone and their PC, or something along those lines.
That's not true. Software filters sometimes kill such comments, but the accounts themselves are unaffected, and moderators review the killed comments and unkill the clearly good ones, and mark such accounts legit to immunize their future posts from those filters. Also, users often vouch for comments that have been killed in this way, which restores them.
My main issue was with the deplorables comment. I actually quite like the HN system of vouching.
> and mark such accounts legit to immunize their future posts from those filters
I remember seeing you restore a post from someone who made their account via tor and their comments were auto-deleted. Their next post was autodeleted in the same way, so I presume that this feature is buggy (or was, as this was quite a while ago).
There might be other explanations. For example, if an account shows signs of being connected to previous banned accounts, we might unkill a good post, but not immunize the account overall, until it establishes more of a track record.
I'm glad you like the vouching system! I still feel like it's the best single change we've made to HN since pg retired.
“Tarpitting” is a specific type of network-layer defence, which is not the same thing as general degradation of service. A tarpit typically stretches out response times for network-layer and some application-layer communications in order to waste spammers' wallclock time.
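A toy tarpit in that spirit (Endlessh does this for real against SSH scanners): accept the connection, then drip out an endless pre-banner so the client's wallclock burns while the server spends almost nothing:

    import socket
    import threading
    import time

    def tarpit(conn):
        try:
            # SSH clients wait for a banner line starting with "SSH-";
            # these hex lines never qualify, so a naive scanner hangs here
            while True:
                conn.sendall(b"%x\r\n" % int(time.time()))
                time.sleep(10)
        except OSError:
            conn.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 2222))   # decoy port, not the real SSH port
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=tarpit, args=(conn,), daemon=True).start()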
We did the same thing at reddit. If someone was abusing the site we would redirect them at the load balancer to a single server with an extra sleep in it.
Parent comment was about implementing a dark nudge to encourage behavior. Next comment was from someone who worked for a company that does it. Next comment was about another poorly implemented dark pattern at the same site. Comment is on point. Relevance identified.
I'm pretty sure it has at times even said browsing will be faster in the app than in the mobile browser. It wouldn't be surprising if that very dark pattern was applied as part of pushing the app.
Adding dark patterns and then telling the users to decide for themselves is itself a dark pattern: it's not actually letting users decide for themselves, it's instead punishing users who actively chose not to do what the site wanted them to do.
I am not talking about asshole design. Users know that there is an app store (proof: the huge usage of messengers like Whatsapp). If they want the app they know where to find it. There is no need to apply asshole (or braindead) design to the site.
You can easily game reddit and other similar popularity-contest sites, or at least sort of, by playing their game. E.g. if you delete your own messages as soon as you notice they're gaining negative momentum, you will quickly be considered a model citizen (because eventually you'll only show a positive impact on the site, and it won't even take long: even a couple of days of posting may give that impression).
'jedberg hasn't worked at reddit for years; it's pretty ridiculous to use "you" and "your" there. Further, he was never in marketing; he was basically just a sysadmin. There's really no charitable interpretation of your comment that isn't just "Hey, let's yell at a guy who isn't currently and has literally never been in a position where he might have been responsible for Annoyance!"
I just wanted a jab at reddit, cuz this week I can't even browse it because "this community is available in the app" (after years of marke... mental abuse of asking me to use an app). Nothing against the guy; he used "we", I used "you". I'm sure he is a cool guy, and I hope that's the charitable interpretation.
FWIW, I feel your pain as a daily user of the site myself. I hold no ill will against you for taking it out on me. When I left I specifically said that users can continue to blame me for all problems. :)
I wasn't aware that some communities are now limited to the app only. I haven't run into any yet.
> I do wonder if this scheme was introduced to monetize users
Probably. As a shareholder I get it. The site needs to make money so it can keep providing the service it provides. And ads are the best way to do that.
I also get it from a development standpoint. It takes effort to maintain a frontend, and maintaining two of them takes twice as much effort. With limited resources, I can see why it makes sense to focus on the mobile interface and let the web interface fall by the wayside.
I'm not entirely sure I'd have made a different decision if I still worked there. I don't know enough about the internal structure or costs or revenue to say for sure.
I can say that I know the people in charge, and they are good people, and if this is the choice they made, it was probably for good reason.
They want to redirect users from a relatively open platform they can't control (e.g. adblockers, browser extensions) to one they can control but for some reason can't manage properly (mobile app).
Seriously, the number of times the Reddit app hasn't worked but Apollo has is kind of ridiculous.
I wouldn't even mind using the app if it didn't chew data like it does.
When I had it installed (up to last week) it was using more than 10x anything else on my iphone. I was definitely not using reddit enough to justify that.
That's pretty neat. It seems like it would be a perfect response to trolling, in particular, since a big part of that is the emotional high they get from getting a rise out of people. Slowing down that process would be sure to dull the dopamine hit.
I can even imagine it tamping down "reply wars" and long arguments since you get more decompression time between impressions.
Pretty sure this was used years ago here since it happened to me. If it happened to me now I would slowloris the hell out of the site. But years ago I didn't know better. :P
This sort of stealthy manipulation reminds me of an idea I had while reading 'Linked: The New Science of Networks.' It's a dangerous idea, I think. The book discusses what would be necessary to take the network of film participants and effectively 'break' the '6 degrees of Kevin Bacon' game. Most would presume the way you'd go about it is by finding the nodes connected to the most people and removing them. That's wrong. Because of clustering, removing the most-connected nodes results in almost no change to the general connectivity of the other nodes.
Nodes which are actually important are 'bridge' nodes that provide a means of moving between mostly-disconnected groups. I started wondering what these ideas looked like in an actual social graph, like society. What would 'bridge' nodes look like, what would eliminating the connection to them look like, and what effect would it have? I think a social bridge node would be something like a biker whose main social group is his motorcycle gang, but who also participates in his elderly aunt's knitting circle once a month. He provides a means through which ideas and concepts and information can flow from biker gangs, and those connected to them, to a group of elderly ladies and those they are connected to. They are, almost by definition, tenuous links. Ones which, if someone had influence over the communication networks they were using, it might be very easy to disrupt. What consequences would there be to breaking those links on a large scale? In the '6 degrees of Kevin Bacon' situation, you can get the average number of links needed to get to Kevin Bacon up over a dozen by removing only a couple handfuls of bridge nodes.
I think doing such a thing on a real social graph could be very quiet, possibly undetectable (drop messages from rarely-connecting pairs of users... they rarely connect, so how many of them will go through the trouble to re-establish contact? Have bridge nodes have something go haywire and they have to be issued a new phone number, 'their facebook got hacked', etc). And the consequence would be to freeze most things in place, or at least radically slow down any kind of large-scale social change. Disruption of the status quo on the scale of regime change in a government, say, requires buy-in from large and very mostly-disconnected segments of the population. If only pockets of people are interested in change, it doesn't matter how intensely they want the change to happen, it only matters if they can join forces with very disparate compatriots. If you had high-level control of communication networks and a vested interest in guarding the status quo against large-scale social upheaval, you could probably do it very quietly and without really needing anything more than the metadata of connectivity. No need to find out what ideas are being spread, you could just make sure ALL ideas remain trapped in their own little bubbles or that their spread is greatly contained.
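The Kevin Bacon claim is easy to replicate on a toy graph with networkx: two tight clusters joined by one low-degree bridge node (the numbers in the comments are approximate):

    import networkx as nx

    left = nx.complete_graph(10)                              # nodes 0..9
    right = nx.relabel_nodes(nx.complete_graph(10), lambda n: n + 10)
    G = nx.union(left, right)
    G.add_node("bridge")        # the biker at the knitting circle
    G.add_edge(0, "bridge")
    G.add_edge(10, "bridge")

    print(nx.average_shortest_path_length(G))   # ~2.4

    H = G.copy(); H.remove_node(1)              # drop a highly connected hub
    print(nx.average_shortest_path_length(H))   # ~2.4 -- barely moves

    B = G.copy(); B.remove_node("bridge")       # drop the tenuous link
    print(nx.is_connected(B))                   # False: the halves split apart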
Thank you for the book recommendation. Your description stands out to me because it feels like that’s the way “Russian” (as often claimed; sorry for a political example) influence on American and other societies via Internet manifests.
For the past few years specifically it feels like a story gets a suspicious amount of immediate and very widespread reach when they’re on the topic of an outrageous member of some certain political or other identity group. Any group, as in this is occurring in all directions simultaneously. I felt this way just yesterday when I saw a Reddit thread about some transgender sports participation drama and the “Other Discussions” tab had fifty other identical threads making sure the “link breakage” you describe is broadcast as widely as possible. Jessica Yaniv is another recent example. I don’t doubt that those divisive people themselves are genuine, but the absolute fervor around these topics just feels so fake. I could see the argument that it’s a natural feedback loop of people becoming more aware of and attuned to certain topics, but the truly scary thing is there’s no way to know.
This is a clever and effective approach, but it has a downside: it's like the death penalty. You're giving the person no chance to improve, and there's a risk of ruining the experience for someone who didn't deserve it.
But it's not life and death, it's like a private club. There's not much to be gained by assuming someone will change their ways and then become a member of the club who gives back.
There isn't much to be gained by simply hoping someone will magically decide to change their ways, but private clubs often have complex initiation rituals with the goal of pushing possible new members to conform to their standards of behavior. In this analogy, though, HN "proceedings" are public, and membership is hardly exclusive, as the barrier to registering a new, undead account is low, so it's not like a private club either.
Not necessarily assuming, but giving a chance. A person may have a bad period in his life. Silently banning someone is as cool as reviewing a person for a job then giving him no feedback. It's ghosting. Or gossiping behind my back. Or downvoting me without explanation. You can do dick moves against me, but if I find out I'm going to badmouth you and tell everyone what a loser you are.