
I used Usenet quite extensively in its heyday. At one point I even ran a Usenet site with dozens of peers and thousands of users, all on a very beefy (for its time) Sun E450.

In my opinion, Usenet's greatest strength was also its downfall. It was a distributed system without central authority over who could connect to the network or firm control over groups and their contents. To connect to it as a peer, you just had to find at least one Usenet site willing to exchange messages (peer) with you. Usually that wasn't a problem: ISPs, universities, and organizations of all kinds were running their own servers, offering Usenet client access to their customers and members at no additional charge.

In a distributed system without central authority, though, innovation is exceedingly hard. The protocols and features were basically set in stone. Moving the concept of offering discussions among users to a centralized or closed system made it possible to innovate very fast and offer superior features to users.

As the article pointed out, that wasn't the only protocol which got picked up that way and innovated upon until it displaced its origins.

Since Usenet is still frozen, feature-wise, where it was 30 or more years ago, I don't see a revival coming anytime soon.




This is the key problem with decentralised systems. They simply can't innovate as quickly as centralised systems.

We saw this with Usenet and we also saw it with IRC. IRC really was Slack, 30 years ahead of it. However, IRC was missing so many essential features that it never caught on: push notifications, saving your state when offline, admin controls, etc. I know some of these were solved with extensions and workarounds (having a shell connect to IRC and then connecting to the shell yourself for persistence, for example), but it's a huge hack. Compare this to Slack or Teams, where they can push a back-end and front-end update to millions/billions(?) of users in a very short space of time to add additional functionality.

This is another reason why all the decentralised 'blockchain' things would have struggled (on top of many others).


I think this argument is seductive but wrong because it ignores an invisible elephant in the room: funding.

Decentralization is a business problem, not a technical problem. Engineers tend not to see this because we're engineers and so we see technical problems first.

Usenet had no economic model. All the problems you list are solvable if there were funding available to solve them.

Free volunteer developer work tends to stop at the level of polish with which developers are comfortable, which is usually command line interaction and fairly manual processes. Developers generally have to be paid to develop new features and polish those features for the general audience, which is why there are precious few open source systems used by anyone other than developers.

Those that do exist tend to be subsidized by huge companies for the purpose of "commoditizing your complements" or as tools to herd users into an ecosystem that has upsell opportunities built into it. Examples: Chrome, any open source client for a SaaS service, etc.

Non-profits can fund to some extent, but the truth is that polished, feature-rich, easy-to-use software is extraordinarily expensive to produce. A system that a developer can create in their spare time might cost millions to render usable to non-developers. Computers are actually very hard to use. We just don't see this because we're accustomed to it. Making them easy to use is a gigantic undertaking and is often far more difficult and complex than making something work at the algorithmic level.

Centralized systems with built-in economic models like SaaS or commercial software tend to triumph because they can fund the polish necessary to reach a wider audience. A wider audience means exponentially larger network effects. See Metcalfe's Law.

Cryptocurrency could have offered an alternative model but failed for entirely different reasons: perverse incentives that attract scammers. In crypto by far the most profitable thing to do is build a fake project that can appear just credible enough to attract retail buyers onto whom you can dump your tokens. There is no structural incentive to stick with a project and really develop it because all the money is made up front through the initial offering. This also ruins the ecosystem because "the bad chases away the good." Scammers make legitimate people not want to go anywhere near crypto, transforming the whole ecosystem into a "bad neighborhood."


> Decentralization is a business problem, not a technical problem. Engineers tend not to see this because we're engineers and so we see technical problems first.

> Usenet had no economic model. All the problems you list are solvable if there were funding available to solve them.

You hit the nail on the head. Usenet was very much a "back channel" in its early history, with school and corporate IT people setting up feeds on the quiet. Indeed, it began outside the Internet, using dialup UUCP links to exchange news and mail.

Email addresses were "bang paths" where you had to route your mail to its destination. If your outside mail gateway was foovax, an outgoing address might look like "foovax!decwrl!ihnp4!uiucuxa!example" instead of "example@uiucuxa.uiuc.edu". In posts, it was a convention to specify your email address as a path from one of the famous well-connected sites.
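For anyone who never had to read one, here's a tiny Python sketch (reusing the example path above; the parsing is just for illustration) of how a bang path was interpreted: each name is a relay hop, and the last component is the user on the final machine.

    # Illustration only: split a UUCP bang path into relay hops and the user.
    path = "foovax!decwrl!ihnp4!uiucuxa!example"
    *hops, user = path.split("!")
    print(" -> ".join(hops))  # foovax -> decwrl -> ihnp4 -> uiucuxa (relay order)
    print(user)               # example (the account on the final machine)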

One of the biggest early players was (surprise) Bell Labs. If the name "ihnp4" rings a bell to you, you might be as old or older than me. :) That was a central hub for the UUCP network, both news and mail. I wonder if any AT&T bean counters knew about it back in the day, or, getting back to the original point, how many bean counters found out about Usenet in their respective companies and forced it out?


I'd add, polish often means different things to different people. For example, failure planning and risk management often only happen when there is funding.

There also was no content moderation.


> However, IRC was missing so many essential features that it never caught on

IRC never caught on? IRC was extremely popular throughout the '90s and into the 2000s.


You're right; but I read that as IRC didn't implement ("catch on to") those essential features as technical feasibility marched forward.


> extremely popular

Email and the web were extremely popular. Facebook is extremely popular. Very few people have ever known what IRC is.


Decentralised systems solve this problem with a BDFL, which many have used successfully. Look at Linux, for example.


> without central authority over who could connect to the network or firm control over groups and their contents

Moderation. In public forums, everything else is trivial and moderation is the invisible stinking hairy elephant in the room. To the question "why isn't there a decentralized open protocol free software X" the answer is moderation. Think of it as the discreet but efficient bouncers without which a sufficiently popular public place cannot maintain a welcoming atmosphere. Newsgroups became sufficiently popular, and death by inadequate moderation ensued.


"Moderation" is a terrible excuse used by the corporate types as a reason to stick in their safe walled gardens.

We had moderation on Usenet. It was simple, but damn, did it work. We had plonk lists (client-selected ignored users), keyword scoring, server-level scoring, and some anti-spam. There was EARLY work on Bayesian filters built into clients, but it was too late for the "big event" (when ISPs everywhere killed Usenet servers, for what is pretty surely assumed to be anti-piracy reasons).
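To make that concrete, here's a rough Python sketch of the kind of client-side killfile/scorefile filtering a newsreader did. The header names are real Usenet headers, but the specific rules, addresses, and thresholds are invented for illustration; this isn't the syntax of any particular newsreader.

    # Sketch of client-side killfile/score filtering; rules are invented examples.
    KILL_THRESHOLD = -100

    def score_article(headers, body):
        score = 0
        if headers.get("From", "").endswith("@spam.example"):
            score -= 200   # "plonk": this poster never shows up again
        if "MAKE MONEY FAST" in headers.get("Subject", "").upper():
            score -= 150   # keyword rule
        if "gardening" in body.lower():
            score += 20    # boost on-topic articles (arbitrary example)
        return score

    def visible(headers, body):
        return score_article(headers, body) > KILL_THRESHOLD

Server-level scoring was the same idea, just applied before articles ever reached the reader.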

But let's look at corporate moderation. It's done in secret, with vague rules that may or may not be stated, and by content moderation farms that use and abuse people: https://www.wired.co.uk/article/facebook-content-moderators-... . Or worse yet, you fool people into faux-ownership (Reddit) and get them to do your moderation for you. Well, that is unless you do a bad job and get your subreddit banned for "low/no moderation".

I'd much rather see everything and craft my own blocklists as I see fit. You know, just like Mastodon. Sure, there are some fediblocks at the admin level over pervasive content from a single server. But aside from admin tasks, it's pretty damn sweet. And it's loads better than the shit that happens over on Twitter, Reddit, or Facebook.


We, the technologically literate, are not the target audience. Client-side filtering requires awareness of client-side filtering. How many Reddit users want that? Facebook users? Oh, maybe they want to fine-tune their scoring rules library or tweak their Bayesian filters - so much fun! And lacking social intelligence, client-side filtering fails against organized harassment. By the way, who is liable for infractions of local law?

Enter the party, meet regulars, meet new faces, enjoy a drink and don't even think about safety. Sure, you can manage it yourself for a birthday party at your home but at larger scales even the wild rave party thrown in an abandoned hangar out there will have some form of institutionalized enforcement, even if they are not corporate security.

Yes, that means that successful large public places tend towards bland - the same reason why successful edgy underground dives are small affairs and their character can't survive growth.


There's no reason a private/paid service addon to a client can't work for filtering. I've been thinking similarly for Mastodon, in that you could pretty easily have a gated experience, and get top ranking for Android/iOS client use, with a monthly service that includes filtering with relatively sane defaults, but toggle-able filtering/reporting.

You can use F-Droid, web apps, or side-load the "full" unfiltered client, but the experience for the normies would be that paid-app experience. And I say paid app to avoid the pitfall that is advertising-fed results.


> There's no reason a private/paid service addon to a client can't work for filtering.

There should be no reason why a service can't provide that feature natively.


Cost.


> Client-side filtering requires awareness of client-side filtering. How many Reddit users want that? Facebook users?

Facebook[1] and Reddit[2] both have a block user feature. It's a very rudimentary version of the killfile that usenet clients have. If users didn't want features like that, then they wouldn't be available on either platform.

> client-side filtering fails against organized harassment.

Usenet killfiles were advanced enough that you could easily filter out messages even from organized campaigns, for example by filtering on a number of conditions such as the group header, the From header, a unique header, certain keywords in the body, etc.

[1] https://www.facebook.com/help/168009843260943

[2] https://support.reddithelp.com/hc/en-us/articles/214548323-H...


> We, the technologically literate, are not the target audience.

Highly disagree. It's the early adopters of ANY platform, hardware, or whatever that drive real organic demand and adoption. And those early adopters ARE exactly that: technologically literate.

Alienating that group is how you get relegated to the "Digg v4" area of history.

> How many Reddit users want that? Facebook users? Oh, maybe they want to fine-tune their scoring rules library or tweak their Bayesian filters - so much fun!

How many Facebook or Reddit users even HAVE the capability of client-side anything? The big threads right now are about Reddit killing most API access unless you pay a pound of flesh. And Facebook does enough horrible things in their code that it's nearly impossible even to screen-scrape; the text "Sponsored" shows up as shittery like "essprnodo".

For those gated islands, you'd need their permission to really do any sort of client-side anything.

Places like Mastodon make blocking trivial. I've shared that link, and no, you don't need to know about Bayesian weights or whatever new mod scheme you want to make. It's there, the initial functions are easy-peasy, and the API hooks are there to add whatever you want.

> And lacking social intelligence, client-side filtering fails against organized harassment.

And that already is a major problem on the corporate side. Reddit calls it "brigading". HN has chill-down scripts to defuse flamewars. Twitter, Facebook, and Reddit all have mass-reporting as harassment; there are even current 4chan threads organizing just that. Sure seems like an unsolved problem in general, not just "unsolved for federated systems".

> By the way, who is liable for infractions of local law?

Irrelevant. Someone in another area could read content that is illegal for them. That's *their* responsibility, not mine. As long as I follow the laws in my jurisdiction and where my server is located, I'm fine. And 17 USC 512 covers me as long as I follow a good-faith DMCA process.

> <example of comparing physical real world with raves/alcohol/drugs with online "dangers">

And in your latter half, your other problem is that you're equating online "safety" with in-person bodily safety around alcohol and drugs. Those two aren't even remotely the same, and it's laughable that you'd try to compare them.

If someone is coming on to you in a sexual manner and your "No" isn't effectively heard or acknowledged, you have a big problem. If someone uploads a bad image (say, animal torture, for something abhorrent), you can block the user, block the server, or report the user to their/your server admins. You can also choose not to show images. It's terrible, but physical assault is no comparison to a bad image or terrible text.


> Someone in another area could read content that is illegal for them. That's their responsibility, not mine

Unless you serve people from the EU or sell things like bitcoin to US customers.


Moderation isn't only about removing bad content, it's about removing content that's not necessarily bad but is unwanted. /r/gaming and /r/games are entirely different subreddits, with different content focuses and different communities because of their moderation. If I really wanted junk food content consumption I can go to /r/gaming. If I want to see news about video games I can go to /r/games. How do you accomplish that with client side filtering? /r/games has Indie Sunday where developers can post their own games on Sunday without competition from news about the massive AAA games everyone's already heard of. How do you accomplish that with client side filtering?


Good point. The majority of my Usenet memories revolve around meta discussions about what is and is not supposed to be posted. Into the mid-2000s people still had flamewars about whether a four-line signature was too heavy.


Most meta discussions (beyond the basic etiquette and moderation, which were mostly worked out in the 1980s already) don't have objective answers, just subjective preferences. The solution is to keep aggressively iterating on splitting subreddits/communities/categories of users, until quality interaction is maximized. Obvious example: one category of user wants to discuss politics via memes, another category wants to discuss in long-form essays, another in threaded discussions, Canadians want to discuss Canadian politics without being flooded with US stuff, and plenty of others not at all i.e. they want any political posts off-topic and banned. The obvious way to keep all these groups maximally happy is to split into different groups.


Corporate/algorithmic moderation is almost entirely about removing "bad" content. You're talking about community moderation, which works exactly the same in decentralized systems.


> the "big event". (When ISP's everywhere killed Usenet servers, for what is pretty surely assumed as anti-piracy.)

I thought this was due to the deal[1] that Andrew Cuomo (New York's attorney general at the time) made with major ISPs to restrict access to child pornography. Many major ISPs discontinued their usenet service in that time period.

[1] https://www.cnet.com/tech/tech-industry/n-y-attorney-general...


There were plenty of well moderated newsgroups actually.

A moderated newsgroup had all messages going through the moderator's system, with their programs deciding whether or not the message should be forwarded to the rest of the newsgroup. This doesn't necessarily mean that a human read all messages either.

Some of the oldest flamewars I've participated in are about moderated vs unmoderated USENET newsgroups. It amuses me that we're still debating the issue today.


What kind of moderation programs were people using then? Word filters? I can't imagine anything that isn't both easy to circumvent and likely to catch innocent posts merely quoting a "bad" word.


It wasn't like programmers were less creative or productive than they are now. The languages were fully capable of doing anything on a server you could do now. It was mostly C. Programmers were creative and smart, just like now, and could implement anything, albeit from the ground up, since there were fewer existing building blocks to reuse.

A lot of sophisticated anti-spam software depended on some sophisticated anti-anti-spam showing up first, and there is a lot more of that now for sure.


Even now forums rely on manual after-the-fact moderation and not just algorithmic pre-moderation.


Several programs are available to do it: https://www.big-8.org/wiki/Moderated_Newsgroups#Moderation_S...

The type and quality of moderation is up to the moderation team. Might be automatic, might involve a review by a human for each post.

As someone else already mentioned, there are groups which were configured to be moderated but have no moderators at all to approve posts; either they disappeared or there never were any. To participate in these groups you have to set the "Approved:" header in your post yourself. Obviously that would also be the way to circumvent the moderation process for any other moderated group. The countermeasure is that the moderators send out a cancel control message, which instructs all the news servers receiving it (or at least those that honor cancel messages) to delete that specific message.
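For illustration, the headers involved look roughly like this (the header names - Newsgroups, Approved, Message-ID, Control - are the real Usenet ones; the group, addresses, and message ID are invented):

    Newsgroups: comp.foo.moderated
    From: someone@site.example
    Subject: A post to a moderated group
    Message-ID: <1234@site.example>
    Approved: moderator@mod-site.example

and the countermeasure, a cancel control message referencing that article:

    Newsgroups: comp.foo.moderated
    From: moderator@mod-site.example
    Control: cancel <1234@site.example>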


I wasn't "tech awakened" at this time, so I simply didn't know which programs were used.

What I can say is that plenty of spammers would have one or two posts on Newsgroups, before the mods would catch them and ban their email address.

Depending on the newsgroup, I'm guessing sometimes "new users" would have all of their posts reviewed before their posts were forwarded to everyone else. But after being added to some kind of whitelist, they could say whatever they wanted (well, until a flamewar caused them to go over the line and get banned).

So similar tools as moderators use today on Reddit / IRC / whatever. Or at least, it felt like that to me. Maybe it differed in implementation, but the overall effect was the same.


People used Killfiles [1]

[1] https://en.wikipedia.org/wiki/Kill_file


I don't think a "kill file" was the same as "moderation".

A kill file is "User X is tired of seeing User Y, so X stops reading posts from Y". Everyone else can still read Y.

A moderator is "Moderator is tired of seeing User Y, so Y cannot post to Newsgroup anymore". A much stronger action.

---------

It's been a few decades since I was part of any Usenet group, though. So maybe I'm misremembering the lingo. Or are you saying that the moderators used killfiles that somehow applied to the whole newsgroup?

Because... the latter would make sense. But I really don't know how things worked back then.


There's a third layer: the servers. Newsgroups relied on servers propagating messages to other servers as well as delivering them to end users, and they could have various types of filters.

The same sort of model exists in modern distributed systems like Mastodon/ActivityPub. I can block a user from another server or an entire server individually, or my server admin can block them from communicating with anyone on my server.

What's missing that I think will be required for anything that gets popular enough is a means of sharing blocklists automatically, preferably with some machine-readable details so that they're useful even when servers have different rules (e.g. I want to subscribe to bans from Foo if the ban is for hate speech, but not for porn).
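As a sketch of what I mean, a shared blocklist with machine-readable reasons might look something like this. This is a purely hypothetical format and a toy Python consumer; nothing like it is standardized in ActivityPub or built into Mastodon today.

    import json

    # Hypothetical shared blocklist with machine-readable reasons.
    shared = json.loads("""
    [
      {"domain": "awful.example", "reason": "hate_speech"},
      {"domain": "lewd.example",  "reason": "porn"}
    ]
    """)

    # My server's policy: import bans only for reasons I also ban for.
    SUBSCRIBE_REASONS = {"hate_speech", "spam"}

    to_block = [entry["domain"] for entry in shared
                if entry["reason"] in SUBSCRIBE_REASONS]
    print(to_block)   # ['awful.example']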


The machinery for a moderated newsgroup works like this: the news server you post an article to has a list of moderator email addresses for moderated groups. Instead of posting your article, the server emails it to the moderator. The mod will then either just discard it, or else post it to their own news server with an Approved header to say it's moderated (and, these days, some crypto-signing stuff). Then the post gets propagated as usual.

The "approve/deny/edit" stuff is handled by moderation software, which in principle can do anything you like.


No, killfiles were at an individual level. But they were more than "I'm tired of seeing User Y", as you could also plonk text patterns and the like.

IIRC group-wide moderation was handled per-post in that every post was approved by hand.


> IIRC group-wide moderation was handled per-post in that every post was approved by hand.

Definitely not. The kinds of "mistakes" I've seen in moderation back then suggests that the process was highly automated.

Moderated Usenet is way more similar to modern Reddit than people probably realize.


There was moderation. There was even a newsgroup that probably has some overlap with here that required you to circumvent it in order to post.


[flagged]


Your summary is incorrect, and insulting. That is not how it happened. The old academic usenet was more than content to discuss sensitive topics. The problem with post-September usenet was spam.

Small groups were OK. As soon as a group had any reasonable traffic, the spammers moved in. Since the spammers didn't care about conversation, they didn't need to maintain a stable identity and evaded killfiles. They could quickly drown out the topic with 10x ads. Not wanting this is not about being uncomfortable; it was about swimming in 90% garbage ads.


> Your summary is incorrect, and insulting. That is not how it happened. The old academic usenet was more than content to discuss sensitive topics.

I see no insult towards you, or anyone else in particular. In fact, quite the opposite: others here are saying Usenet died due to lack of content moderation - aka other people enforcing their will on others.

> The problem with post-September usenet was spam.

Now I know you're just guessing. The 'September that never ended' was when AOL peered with internet gateways and brought everyone on the AOL network onto the internet, most of whom knew little about computers or anything else. "Me too" was around then as a derogatory reference to what an AOLer would say. Even Weird Al in "All About the Pentiums" had a line to put them down like Old Yeller.

Usenet really died in winter 2007 to spring 2008, when all major ISPs killed their subscriber Usenet servers. They never officially stated why, but the grumbles that made it out from the systems engineering folk were that alt.binaries was eating up loads of bandwidth, and much of it was 100% piracy.

Spam was a thing, but most of us used good NNTP clients that removed a good 90+% of the spam. And having lived through when Usenet was ubiquitous, I fully reject your hypothesis.

And you might want to consider taking conversations here more impersonally. Feeling "insulted" over a comment that was directed to nobody in particular isn't healthy.


> Usenet really died in winter 2007 to spring 2008, when all major ISPs killed their subscriber Usenet servers. They never officially stated why

It was because the attorney general of New York (Andrew Cuomo) made a deal with major ISPs to restrict access to child pornography[1]. ISPs decided to drop their usenet service in response.

[1] https://www.cnet.com/tech/tech-industry/n-y-attorney-general...


Towards the late 2000s, alt.binaries was irrelevant to consumer ISPs etc, as it was not available. It was trivial for any systems engineer to exclude alt.binaries.* from their news server, and most did.


This is the main thing IMO.

Usenet went from students posting at a modest number of elite schools, .mil sites, and a relative handful of tech companies--often under their real names--to the Eternal September and beyond. Like many problems with the Internet over the years, this happened because the systems grew up on the assumption of mostly trustworthy actors.


This is level 5 of the Content Moderation Speedrun.

https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you...


More that it was overwhelmed with bad actors spamming everything, with no way to stop that.

Free speech is not about letting you post your conspiracy theory in the wrong group


I can see you are an "I support free speech, but [except]" type who then proceeds to explain that you do not actually support free speech.


What if someone followed you around and just kept saying "You're a cunt. You're a cunt. You're a cunt." over and over?

Any time you're not on private property. "You're a cunt".

Is that free speech?

I mean, I'm allowed to be on public property. I'm allowed to say whatever I want. Exercising those two rights means I'm allowed to follow you around and call you a cunt.

You'd probably try to say "No, that's harassment". Aye, but now you've agreed there should be limits on speech.


When people say that there should be no restrictions on speech, it's usually about content, not manner. No free speech absolutist would think it's acceptable to tell someone "you're a wonderful person" through a 600 dB speaker (if it was possible to construct such a device). If I scream at you so loudly that it liquefies your organs, I don't think that the fact the sound happened to form words makes the act any less harmful. Why should it be any different with harassment?


> If I scream at you so loudly that it liquefies your organs

At that point, the "speech" is objectively harmful.

> Why should it be any different with harassment?

Free speech absolutists act as if words on their own are not harmful, as if there's no such thing as verbal harassment.

Look at the sibling comment [0]. phpisthebest actually claims that being followed and told "You're a cunt" repeatedly would be amusing.

[0] https://news.ycombinator.com/item?id=36202392


If they have that kind of time and no life, I would find that rather amusing...

I am sure they would tire of it long before I would get offended by it. And I can assure you I could, and have, responded with far more decisive and penetrating insults...


If there's one thing we've learned from the last two-decades-odd of the Internet being ubiquitous, it's that one cannot, in fact, outlast the harassers. They organize and they take shifts. They can keep the harassment going as long as it takes. They can keep it up until you are dead. And they can do it at scale.

That's why cutting off their ability to do it through various channels is so important.


The internet reduces the cost of being an ass, both in terms of reach and in terms of the risk of being punched in the face for harassment.


>What if someone followed you around and just kept saying "You're a cunt. You're a cunt. You're a cunt." over and over?

Free Speech is just that. It's not freedom from consequences too.


Not sure how the "Freedom from consequences" bit enters the conversation here.

"Free Speech" that has consequences from the government isn't free speech. The "Freedom from consequences" has to do with private entities reacting to your speech. For example, asking you to leave the premises or getting banned.


Is it possible to separate free speech from consequences? I'm not sure it can be. From your house, to government, to nature, doesn't every person/place have their own set of compromises/laws/rules for every set of conditions? Are you saying there should be no laws or no moderation, anywhere, at all? I don't think you mean to. Also, should 'online' be distinct/exempt from all other human endeavours, where rules don't apply? Anywhere there is a line between individuals and governments regarding free speech, and indeed law, neither free speech nor law truly exists. But it is difficult to give an example of a country where this is not so.

I am reminded of the other hot potato 'privacy'. The 'Solid Project' has an interesting way of dealing with that, which may also have a positive effect on how free speech evolves.


> Are you saying there should be no laws or no moderation, anywhere, at all?

I wanted to address this first. Nowhere did I mean to imply that I think there shouldn't be limits on speech or no moderation. There should certainly be some speech that is illegal (Calls to violence, harassment, etc), and web platforms absolutely have a right to moderate.

But, I think everyone can agree that the right to free speech granted by the First Amendment does mean that you can criticize the government, for example. If someone tries to claim "Yes, you have the right to criticize the government, but the government can arrest you for it! Freedom of speech does not mean freedom from consequences!", then I would have to ask that person what they think "Freedom of speech" means, and what a "right" means, because to me, a "right" means either "You are allowed to do the thing and the government must not interfere or retaliate" (ie, free speech) or "You must be allowed access to the thing, and the government must provide it for you if you can't provide it yourself" (ie, a lawyer when you are on trial).

The notion of "You have the right to something, but the government will punish you for it" is just completely nonsensical.

But...all this only applies to government. Private entities are another matter entirely. They have the right to remove content and users however they want. Your freedom of speech does not trump their right to decide what gets displayed on their platform.

Private entities can also react however they want. If you go on a racist tirade on Twitter, your employer has a right to fire you, because having an extreme racist in the workplace is just asking for trouble.


Billy Joel's rule: "You are free to speak your mind, but not on my time."

That is, sure, you can speak to whoever wants to listen. You can't make anyone want to listen, though - including, you can't make a forum want to listen. It wants to talk about gardening, and you want to talk about the latest Covid conspiracy theories? Great, knock yourself out, talk about your conspiracy theories - but not on the gardening forum.

See, I support free speech. I support their freedom to talk about gardening without having to wade through a bunch of conspiracy theory posts. "Post anything anywhere" reduces actual communication - willing speakers finding willing listeners - because the listeners give up rather than wade through a bunch of garbage that they don't care about. I support their freedom to talk about gardening without getting drowned out.


But here the question is who is doing the controlling

Usenet is an open protocol, not a forum that someone owns. So if you support moderation of an open protocol, then you most likely support systemic, or worse, government censorship, not someone controlling their own property.

With Usenet there were plenty of tools (just like with email today) to block spam, content, users, etc. at the client level if an individual did not want to see certain speech.

With time those tools would have improved, just like they have with email.


> So if you support moderation of an open protocol, then you most likely support systemic, or worse, government censorship.

Feel free to stop telling me what I support. You're lousy at it.

If a protocol is open, that doesn't mean that my servers are. I'm moderating my servers, not the protocol or the traffic carried using the protocol. (In the same way, you don't control who drives by your house, but you control who's allowed in the door.)

I don't want systemic censorship of speech (including by ISPs). I don't want government censorship of speech. But I support the right of servers to say "I'm not carrying that". And I support the right of a community to have a space that is public to that community, and yet to keep stuff that they don't want out of it. And I support the right of those who don't agree to go form their own community, and to decide for themselves what the rules are for their community. And all of this can happen on open protocols without systemic or government censorship.

Specifically on usenet, you could have, say, alt.hobby.gardening, and have it get overrun by spam, and have alt.hobby.gardening_moderated start up with moderation to keep the spam out. And if you don't like it, and want to post your conspiracy rants, go post them on alt.hobby.gardening with all the spam that nobody reads. Knock yourself out. Or go start alt.hobby.gardening_conspiracy, or whatever. But let the gardening nuts have their space without your rants.


I, for one, own that label.

I support free speech except in some circumstances where curtailing it makes almost everyone's lives better.

Where I support it to the most extreme: criticizing the government.

Where I want it most throttled: my own living room, so I can have some peace and quiet.

I think most functioning adults have a similar spectrum of allow / deny and aren't absolutists on the topic.

And, I mean... I don't think we need to look deep into the history of the Internet to see that becoming the dominant pattern. USENET failed. Email without spamblockers failed. Every forum either eventually adopts moderation policies or goes underground where barely any casual user hears of its existence.

The user interprets lack of censorship as damaged and routes away from it.


Here we have a fundamental problem.

It is often NOT the users of a platform who want censorship; it is users of another platform targeting something they do not like on the internet and then going after the underlying structures, be it ISPs, DDoS-mitigation services, domain registrars, hosting providers, credit card processors, etc.

We have plenty of examples of this on the internet, where it is not the users of the sites that want something censored, but 3rd party actors using systemic pressure on critical infrastructure to effect the censorship

You have this utopia where someone puts up a gardening forum and the users, or even the owner of the forum, say "We do not allow talk about politics here"...

Whereas in reality someone sets up a political forum, someone posts something, then someone on Twitter, Facebook, Insta, etc. gets offended and starts a campaign to have CloudFlare or AWS ban them.


Isn't campaigning to have something banned also free speech? If they start blowing up data centres, that's a different question.


Yep. It's a pretty clever approach, and I for one am in hindsight surprised that it took this long for people to realize that the system is a network and is therefore vulnerable to network dependency attacks.

This is not necessarily a bad thing. Nazis, for example, deserve no platform and if companies decide to revoke service when they discover they're serving Nazis that's their prerogative. It can be abused, especially given that the largest players in the space are going to trend sensitive to controversy... But the internet has become embedded deeply enough in our lives that people can no longer enjoy the pretty fantasy of the neutral service provider; IBM got away with it in World War 2 and that's generally pointed to as a moral failing these days.

If people are calling for enforced neutrality, they're really calling for government regulation (who else would enforce the neutrality?) which is its own can of worms.


>Nazis, for example,

Then you have the problem of labeling everyone that disagrees with you a "nazi", for example, just because they favor strong border laws and immigration controls, or oppose social justice and ESG "capitalism", or other such things.

Or the true crime of having a difference of opinion with a trans person...

>>This is not necessarily a bad thing.

Yes, it is a bad thing. It will always be a bad thing, even if done to actual Nazis, not the new-age definition of "Nazis", which seemingly is anyone that happened to vote for Trump or holds conservative political positions.

>>If people are calling for enforced neutrality, they're really calling for government regulation

yes and no, depends on what we are talking about

Are we talking about an already heavily regulated industry like banking? Then yes, 100% I am calling for enforced neutrality. Neither JP Morgan Chase nor MasterCard should be able to refuse business to someone over protected political speech.

Services like AWS or CloudFlare... no, I would not call for or support regulation, but then we come full circle back to the loss of cultural support for free expression.

There was a time when the ACLU would defend the KKK's right to speak. We should return to those days, when the axiom was "I disagree with you, but I will defend your right to say it".

If we are at the point where the population is pushing for AWS to de-platform competitors of Twitter because of disagreements over politics, we are not far removed from government censorship.

The reality is you would likely support both, as you clearly abhor actual free speech.

This is also why Elon's purchase of Twitter was so critical, and I 100% support his efforts.


> The reality is you would likely support both, as you clearly abhor actual free speech

Broadly speaking, I'm a fan of the marketplace of ideas. That marketplace includes boycotts, callouts, and choices by private individuals to provide or revoke a platform for someone else to speak in their space. Freedom of the press has never implied that everyone gets a free press.

> which seemingly is anyone that happened to vote for Trump or holds conservative political positions.

Hacker News isn't really a great venue for that kind of discussion. But there's a reason the pushback on him and his supporters has been so strong relative to previous politicians. He and his supporters actually do represent something fundamentally caustic to the American body politic. https://www.bu.edu/articles/2022/are-trump-republicans-fasci...


>>That marketplace includes boycotts, callouts, and choices by private individuals to provide

Which means I am sure you support the ongoing boycotts happening to BudLight and Target? Or are boycotts only for one side?

> Freedom of the press has never implied that everyone gets a free press.

The problem here is that when the government subsidizes the people that make the press, regulates the people that make the press, and limits who can produce presses, it ceases to be a free market and becomes a regulated market.

One can make the case that, since the internet was started by the government (DoD), regulated for decades by the government (Dept. of Commerce via ICANN), and given the tons of subsidies ISPs have gotten and continue to get, the locations where the internet is a government service, and tons of other factors, the idea that the internet is a purely free market is clearly a false narrative.

One often used for convenience when it meets people's political goals, and then dropped the second that freedom is not aligned with their goals anymore.

Further, when it comes to consumer boycotts, you have to actually be a consumer. 99% of the people that have effected policy changes at CloudFlare and AWS have never spent a single dime at either of those services.

Today cancel culture is something more than just a boycott; it is something new, not seen before in human civilization, and it is a huge threat not only to freedom of expression but to freedom in general.

I am unclear why people do not get that.


> Which means I am sure you support the ongoing boycotts happening to BudLight and Target? Or are boycotts only for one side?

Of course I do. Nobody should feel compelled to buy a beer they don't want to buy.

> 99% of the people that have effected policy changes at CloudFlare and AWS have never spent a single dime at either of those services.

I think I need a citation for that. I used to work for a cloud company and there is a lot of back room negotiation that people don't realize happens. My default suspicion is that product advocates and product managers received real clear signal from people with money to spend that they were going to spend it elsewhere.

> Today cancel culture is something more than just a boycott, it is something new not seen before in human civilization

No, shunning is actually very old. It is the way communities small and large dealt with unacceptable behavior in their midst. Nobody is compelled to associate with somebody they don't want to.

What did change is that for a brief period of time, the existence of the internet, its relative obscurity, and the pseudonymity it provided convinced a generation that they had subverted the old cultural norms. What we are witnessing is a reification of those cultural norms onto the new technology. The USENET day is dead. The techno-utopia was tried and found very wanting. Old patterns are reestablishing themselves, though with more voices at the table, because ultimately you can't actually stop the signal.

The former president was kicked off of somebody's microblogging service and responded by creating his own. That's how it's supposed to work.


> Where I support it to the most extreme: criticizing the government.

> Where I want it most throttled: my own living room, so I can have some peace and quiet.

This is a category error. What if someone wants to criticize the government in your living room?


They are free to buy a living room and criticize away. My living room, I am allowed to determine whom to allow in and whom not to. And I would like to find online spaces where the loser spam (both commercial and ideological) is kept out, so I can find the speech I am interested in. Why is this desire felt as oppressive by so many?


> Why is this desire felt as oppressive by so many?

Because they're the spam being kept out.


Before you wrap yourself too tightly in the 1st Amendment to the Constitution of the U. S., perhaps it would be helpful if you defined what “free speech” means to you. Because repeatedly typing “no true Scotsman…” doesn’t do much to move the conversation along, and definitions obviously differ.

For example, I feel that you can say what you like, just not in my living room. Free speech does not guarantee an audience.


That is the often cited copout that has no meaning.

"I feel that you can say what you like, just not in my living room" is a pointless axiom that provides nothing of value to the conversation.

The debate becomes infinitely more complex when you step one inch outside the "living room" context and start talking about public and private companies operating platforms on a global scale, all under an infinitely complex set of regulations, subsidies, and other factors.

" I feel that you can say what you like, just not in my living room" seems to me then you want to translate that into "Reddit, twitter, etc can choose what people say in their living room", the problem is they do not have a living room, and attempting to make that analog is ridiculous


Why should a company be forced to carry anything anyone wants to say?

If you want to publish your own speech, buy your own server


What you are calling free speech is boring to the audience. Usenet died because the endless unfiltered spam and shitposting was boring compared to Slashdot, and then Reddit, and then here.


>Lots of people like to claim they support the idea of free speech, only to then list hundreds of things they want to ban...

Sounds just like Elon Musk!


Obviously, this is why 4chan died back in 2012. If only they had moderation, they might still be around today.


They're a special niche, and I believe they've been forked multiple times over moderation disasters resulting in 8chan etc.

If 4chan "breaks cover" into the mainstream in the way that, say, Facebook has, it will make for some very entertaining Congressional hearings.


4chan has a million jannies, it's heavily moderated...


4chan has moderation.


The old chestnut "The Net interprets censorship as damage and routes around it" referred specifically to USENET if I recall correctly; it is often mis-applied to the Internet itself or to other services. But it referred to the notion that since USENET servers just routed their stash of stories to any node that requested them, censorship was functionally a non-starter; if you (the node owner) wanted content that some other node owner was squelching, you could request it from another node.

But it turns out that model had some major flaws. Nowadays, one could say the 'net interprets lack of censorship as offensive and migrates away from it. If there's one thing the experiments after USENET (from Slashdot to Mastodon) suggest, it's that people on average want some control (and don't on average want the burden of being their own moderator for every damn bitstream that comes down the pipe).


The net interprets hate as noise and filters it out.

At least, that's the goal.


Interested to see if we find these negatives paralleled in the fediverse.


Relevant: The ecosystem is moving (Challenges for distributed and decentralized technology from the perspective of Signal development)

https://media.ccc.de/v/36c3-11086-the_ecosystem_is_moving



