Usenet, authentication, and engineering: early design decisions for Usenet (columbia.edu)
170 points by fanf2 on Feb 27, 2018 | hide | past | favorite | 54 comments



I'm so happy to see a post about Usenet. In some ways, Usenet is doing just fine, thank you very much. It's been forgotten by most of the world, and even as a method of sharing illegal binaries it was surpassed by BitTorrent years ago. But there's a small core of users who still participate on Usenet the way it was intended: anonymous conversation with like-minded individuals, each armed with a kill file to weed out the abusers. In fact, I just posted a link to this article to comp.misc. If you're nostalgic for the old experience, go get yourself a free Usenet account at http://news.solani.org (there are other free portals, but because they are more popular they suffer bandwidth issues and struggle to support their users) and fire up Thunderbird, slrn, Xnews, tin, alpine, pan, knode, or another client. Have fun!


Meh. In the 1990s I was active on a Usenet newsgroup for a science field. Initially, it was mainly academics who participated (and a few interested laymen like myself). But then the group began to attract cranks – and obviously mentally ill people – who claimed that they had all the answers and mainstream scientists were wrong. These cranks would post all day, every day, poking their nose into every thread, and eventually they drowned out most other discussions in the group.

Killfiles didn’t seem to work. Even if you blocked a crackpot, one of the non-crackpots would still stupidly respond to him – a lot of people never understood “Don’t feed the trolls”. So you’d click on a post only to find out the discussion was people pointlessly trying to argue with the crackpot. The crackpot’s own posts might have been hidden, but everyone just quoted his posts in their own replies anyway.

A few years ago, after someone somewhere claimed, like you, that Usenet is still healthy, I had a look back at the group to see what it was like: still dominated by those same crackpots.


> These cranks would post all day, every day, poking their nose into every thread, and eventually they drowned out most other discussions in the group.

> Killfiles didn’t seem to work.

What I used to do was to killfile crossposts to certain groups, the trolls themselves, and subthreads started by the trolls. It worked reasonably well at the time.


Yeah, I figured that there might be more power to killfiles than I knew about. Still, even if you block the cranks, all those non-killfile-using good people continue to pointlessly argue with the crank, and that poisons the group spirit and it drives people away. As a member of the community, you eventually suffer from the crank’s activity even if you don’t see his posts directly.


That happens, but it doesn't always happen. Some groups persevere while others succumb to the abuse. Usenet is still so huge that it's really not fair to write it off completely.

Usenet, much like SMTP (but arguably unlike XMPP), built enough momentum before the abuse exploded that there's no reason it can't continue in perpetuity, even if individual groups come and go. But unlike SMTP, it's much more at risk of collapse if good people walk away unnecessarily.


> A few years ago, after someone somewhere claimed, like you, that Usenet is still healthy, I had a look back at the group to see what it was like: still dominated by those same crackpots.

Usenet's not really a single entity. I used it fairly heavily from about 1997-2005; most of the groups I read then are now either tumbleweed or overrun by spam. But a couple of them are still running along happily (if a little smaller than they were). So I think 'still healthy' is pretty contextually dependent.


I run a public server, http://csiph.com. There's a salt formula linked there if anyone is interested in starting their own feeder. I'd be happy to provide initial peerage.

The most novel thing about Usenet is that it's effectively an overlay network, completely agnostic to transport. That mattered a lot in the old days: you could run a modem pool at night, for instance, and download the spools. It didn't even need a network transport; you could use batch files and sneakernet.

It would be kind of cool to see a blockchain integration with existing usenet. For instance a header that clients could optionally verify like a Basic Attention Token or other reputation system. That would cut down on spam significantly. Maybe even implement up and down votes and revenue sharing in the fashion of Steemit.


Any set of netnews servers not owned by cooperating entities is more supportive of free speech than any single point of censorship/chokepoint like a website. If a website's owners don't want to carry your post, or if a sufficiently organized set of readers doesn't want others to read it, it's easy to score the post low and effectively keep others from reading it by default. That's not so easy with netnews, where servers share posts with each other: a server that carries a post will typically spread it to its peers. There are multiple points at which one can inject a post and let the normal store-and-forward mechanism do its work. An individual can score that post low or mark it read, but that won't keep others from reading it.

I think this is a big part of the reason why centralization is always censorship and, in turn, why organizations desire that power when setting up yet another discussion site.


Usenet always struck me as having come the closest to the centralized/decentralized happy medium.


Tin! That brings back memories. I read some magnificent flamewars with tin.


Wait, it was supposed to be anonymous? :)

Joke aside, in the German parts of the internet there was a widespread culture of using real names, especially on usenet - at least by the late 90s.


"If you do not wan't to publish your e-mail address, you can use pseudo.address@example.invalid or any other address ending with the top level domain .invalid. This namespace has been reserved for this purpose."

Tried to register with x@x.invalid and got "Too bad, we won't be able to send the password to that address." Am I missing something? Do you first have to register with a valid email and then change it to a .invalid address?


I bit the bullet and registered with my real email address. The email from solani reiterates "If you don't want to use your real E-Mail-address, please use an address from the top-level-domain ".invalid""

Where/how do I do that now that I've registered? They don't seem to understand that, to me, giving them my real email address just to sign up is the same as "publishing" it.


Here's one you don't need to register/sign up to use:

https://news.aioe.org/

As for Solani:

> The email from solani reiterates "If you don't want to use your real E-Mail-address, please use an address from the top-level-domain ".invalid""

They mean on posts you make to Usenet. Put that ".invalid" address in the From field on your Newsreader.

You should probably check with Solani if they display your IP address on posts though (NNTP-Posting-Host), because some news servers do.
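Concretely, the headers on a post following that advice might look something like this (the name, address, and subject are invented for illustration; the `.invalid` TLD is the point):

```
From: J. Random Lurker <jrl@nowhere.invalid>
Newsgroups: comp.misc
Subject: Re: Usenet, authentication, and engineering
```

Replies to the group still work; only direct e-mail replies bounce, which is the intended effect.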


Hmmm, I wonder if the Scary Devil Monastery still exists...


It still gets posts, though few and far between. https://groups.google.com/forum/#!forum/alt.sysadmin.recover... . That's one aspect of "if no one is there, new people don't stick around when they show up."

As that one slowly became luserish, the bofh hierarchy was created with a restricted feed: http://bofh.taronga.com/bofh.html . That one is harder to find, and since I've been out of sysadmin roles for some time, where I could have asked the right person for either a feed or access to their server... I can't say anymore.

I recovered many years ago ( https://groups.google.com/forum/#!searchin/alt.sysadmin.reco... )... crap... that's two decades.


Hmmm, some familiar names there, but yeah - quiet...

I wonder if I've still got working credentials to The Other Place?


I tried using solani, but I keep getting "connection refused". Is it down right now?


you have to sign up for an account! :)


Never mind, I tried connecting from school WiFi not realizing most ports are blocked.


What I loved about Usenet, as the author described it, was that the "barriers to entry" were pretty high. Because only researchers or highly technically competent people could generally gain access, the quality of discussion was really high. Fewer cat pictures, more learning. HN, I think, has retained some of the great spirit of the original Usenet forums, though... I hope we can keep it.


...so we need a uucp interface to HN?

Sarcasm aside, HN could have worked well as private newsgroups. I really do think there is a lot to be said for store and forward, especially when you want to file something away for reference.

We lost something worthwhile with the rise of the newsfeed model. Especially now search engines give extra marks for recency.


> ...so we need a uucp interface to HN?

There's no quicker way to the front page...


And the bar for alt.hackers was even higher: that was a moderated USENET group, but there was no moderator.


Same with the scary devil monastery. You could post, but you needed to know the "chicken."


There aren't many free Usenet relays around these days, which combined with Usenet's low profile has helped limit abuse and deterred the obnoxious. I've been a paid Giganews subscriber since 2001--so long that I think I pay twice the current plan pricing; and so long that I can't be bothered to care.

Since universities and ISPs began dropping Usenet service, the most popular free gateway is Google Groups, but I filtered out most Google Groups posts years ago when they became the primary gateway for abusers. Maybe things are better now, but I wouldn't know as I usually only see replies. Anyhow, the way that Google Groups formats posts is borderline abusive itself.

HN is great, but on the scale of decades I can't imagine HN outlasting Usenet. There's been a very slow changing of the guard in the technical groups I read, but except for a lull a couple of years ago there's still strong technical discourse and occasional new blood. (Newcomers always chafe when regulars rebuke off-topic discussion, but eventually they see the light, or at least become less reactionary.)

The big exception among the groups I read is sci.crypt, which didn't survive a series of flooding campaigns. After one campaign, posters like DJB disappeared completely. There were still regulars from the professional and academic communities. (Not me; I was always a lurker incapable of adding substance.) IIRC, after another flooding campaign (circa 2010-2013?) most of those people drifted away, and sci.crypt was never the same.

There are still knowledgeable posters there, but there's some threshold below which strong, challenging discourse can't be sustained. You need enough collective intelligence in a group to keep everybody honest and engaged; otherwise the uninformed dominate discussion.

Some people would argue that Stack Overflow has replaced Usenet. But I disagree. Stack Overflow simply doesn't have the consistency and collective intelligence that Usenet groups had and, in many cases, still have. Which isn't to say there aren't amazing contributors on Stack Overflow, but the signal-to-noise ratio, on the one hand, and the absolute substance, on the other, just aren't comparable. I've rarely come across a Stack Overflow thread where an answer was better than what a less lazy person could have found by reading primary sources (specifications, easily discovered technical papers, etc.). Usenet discourse, by contrast, often provides insights you couldn't easily find anywhere else, if at all. This was especially true in its heyday, when some Usenet posts might rightfully be considered the primary, definitive source of truth on some matter.

I suppose part of the reason is the rules for discourse on Stack Overflow, which prevent it from becoming a forum where people can bounce ideas off each other, and explore and develop them. The rules and structure of HN similarly prevent it from consistently harboring the same kind of discourse that happens on Usenet, though I think it does so to a greater extent than Stack Overflow.

In any event, please return. Some of your favorite groups may be lost causes, but others are probably worth your while and surely could benefit from greater participation. Unless and until a real replacement comes along, one might say it's a civic responsibility. Even though for many individual subjects there are better forums (e.g. web bulletin boards), fragmented and proprietary forums have real costs.


> The latest figures, per Wikipedia, puts traffic at about 74 million posts per day, totaling more than 37 terabytes.

Just yesterday, I spoke to someone who mentioned Usenet as a means of pirating movies. I cry a little inside whenever I hear that people only know of the once great decentralized discussion system as a dumb pipe.


The problem was the excessive spam and the fact that people preferred central moderation versus doing everything client-side. The groups I used to frequent have essentially been abandoned at this point (though they were active as recently as 6 or 7 years ago).

Personally, I feel a lot was lost in terms of capability of online discussion by moving it to web forums that lack actual message threading and closed source platforms. Even people who prefer email lists seem to not consider NNTP as a viable solution to online discussions.


NNTP clients didn't just have threading, but also lots of other advanced features (like kill files; client-side tagging, scoring, and filtering; advanced search, etc) that even now, 20 or 30 years later, web forums still lack.

I also dread the day when some popular web forums finally kick the bucket. It's not at all clear that the years or decades of posts on them will still be accessible. With Usenet, everyone could easily keep their own archive of all the messages or groups they were interested in, if they cared to. Trying to do the same for web forums is much more of a pain.


It may be feasible to export the comment history of HN, reddit, slashdot, and other popular forums to usenet. I don't know if that would violate some policy or TOS though.


Usenet was marvellous before the first ads showed up (and before Endless September of course).

So were the many private news feeds that companies would set up for developer support.


I (hazily) recall at sgi there was a news-bug tracker gateway.

New post? New bug. Reply to post? Add a comment.

This was also how sgi exported its bugs to partners. sgi.bugs.legato was a feed that legato picked up and could then use with standard software (Netscape had a built in news reader) to be able to interact with the bug tracking system.


There's also a public nntp access point via gmane that one can use to browse various project mailing list archives.


What I don't quite understand is that the binary groups in usenet seem to be exclusively used to distribute encrypted files, while the decryption keys are shared in invitation-only web forums and chat groups.

So, everyone who says they're using usenet for pirating stuff is in at least one of those as well? Wouldn't the fact that you have to, essentially, join "a conspiracy to commit copyright infringement in an organized manner" make the legal situation of even a passive consumer much worse compared to someone who goes to public sites?


There are still a good number of public NZB indexers out there. The vast majority put hard limits on the number of NZBs you can grab as a free user however.

But even if what you're saying were true, wouldn't the same apply to private torrent trackers? As far as I know, even when a popular private tracker is taken down, authorities only pursue operators and heavy content uploaders.


> Just yesterday, I spoke to someone who mentioned Usenet as a means of pirating movies. I cry a little inside whenever I hear that people only know of the once great decentralized discussion system as a dumb pipe.

To be fair, this was true even in the mid-90s. A lot of my college peers knew of Usenet only as a source for porn JPGs & GIFs.


I would be curious how much is text vs video/images/audio?


I'm not sure how to calculate it accurately, but the top 1000 Usenet servers page maintains statistics on the number of articles each listed server has seen [1]. The contributors list for the previous month [2] shows the server names along with the number of articles they've processed that month (I believe).

The eternal-september news server which only serves text newsgroups processed 7,029,191 articles. usenetexpress, on the other hand, processed 1,654,703,028 articles (since it serves both text and binary newsgroups). Assuming that the text articles are a strict subset of the total number of articles, the text traffic is about 0.4% of total traffic.

On top of that, if the average size of a text article is around 3 kilobytes and the average size of a binary article is about 750 kilobytes, the percentage by size is even lower (about 0.0017%).

[1] http://top1000.anthologeek.net/

[2] http://top1000.anthologeek.net/participants.txt
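The back-of-the-envelope numbers above are easy to reproduce; note that the 3 KB and 750 KB average article sizes are assumptions, not measured values:

```python
# Rough estimate of text vs. binary Usenet traffic, using the
# top1000 article counts quoted above. The average article sizes
# (3 KB text, 750 KB binary) are assumptions, not measurements.
text_articles = 7_029_191          # eternal-september (text-only feed)
total_articles = 1_654_703_028     # usenetexpress (text + binary)

by_count = text_articles / total_articles
print(f"text share by article count: {by_count:.2%}")   # ~0.42%

avg_text_kb, avg_binary_kb = 3, 750
text_kb = text_articles * avg_text_kb
binary_kb = (total_articles - text_articles) * avg_binary_kb
by_size = text_kb / (text_kb + binary_kb)
print(f"text share by volume: {by_size:.4%}")           # ~0.0017%
```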


You can look at my innreports for a full text feeder at http://csiph.com/.


It is good to see a post about netnews/usenet. I thought things started going to hell when HTML-encoding became a significant part of the feed. Half of a group's subscribers would plonk the sender, another half would flame the sender, and the third half would try to muddle through and treat the article as plaintext with a line break somewhere in the 70-80 character line range. That might have been the start of endless September.

The article didn't touch on the suggested way to authenticate control messages that reconfigure the INN groups file: PGP-signing the body of a control message along with selected, identified headers. The PGP signature was stuffed into a header, making it unobtrusive. It was fairly obscure; perhaps that's why the article missed it. The technique is described at https://ftp.isc.org/pub/pgpcontrol/README.html and an example signed control message is at ftp://ftp.isc.org/pub/pgpcontrol/sample.control

I worked with NoCeM, nocem-on-spool, and the cancel moose (tm) back in the day. Applying retroactive cancels to the spool was and remains controversial.


> What we did not know was how to authenticate a site's public key. Today, we'd use a certificate issued by a certificate authority.

He's correct, of course, that we'd use a CA, but I don't know if we ought to. Why should I trust dozens or hundreds of companies worldwide to certify that I'm talking to my local university?

> The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn't work, either. For one thing, it was trivial to impersonate a site that appeared to be further away.

I'm actually much more confident that neighbour authentication could have worked: each message could have been signed by the originating user, by his site, and by each site in the path it took to reach its destination. Keys could have been exchanged when setting up links between sites.

This wouldn't have been able to fix the Sybil problem (e.g. my local university's news admin would have been able to create as many fake sites claiming to be on the other side of the university from me), but it would have enabled admins to trace the source of bad messages, and potentially cut off misbehaving sites, in a way that Usenet ultimately didn't really support.
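A sketch of what that hop-by-hop scheme might have looked like, using MACs over pre-shared link keys (site names and message structure are invented for illustration; real Usenet never implemented anything like this):

```python
import hmac, hashlib

# Toy model of neighbor authentication: each pair of peering sites
# shares a link key, and every hop stamps the article with a MAC over
# the body plus the recorded path so far. A receiving site can verify
# the stamp of the neighbor it got the article from, and the path gives
# admins something to trace when a bad message shows up.
LINK_KEYS = {  # exchanged when each peering link was set up
    ("unc", "duke"): b"key-unc-duke",
    ("duke", "mit"): b"key-duke-mit",
}

def link_key(a, b):
    return LINK_KEYS.get((a, b)) or LINK_KEYS.get((b, a))

def stamp(article, sender, receiver):
    """Sender adds itself to the path and MACs body+path for this link."""
    path = article.setdefault("path", [])
    path.append(sender)
    data = article["body"].encode() + "!".join(path).encode()
    article["mac"] = hmac.new(link_key(sender, receiver), data,
                              hashlib.sha256).hexdigest()
    return article

def verify(article, sender, receiver):
    """Receiver checks the MAC made by its direct neighbor."""
    data = article["body"].encode() + "!".join(article["path"]).encode()
    want = hmac.new(link_key(sender, receiver), data,
                    hashlib.sha256).hexdigest()
    return hmac.compare_digest(want, article["mac"])

art = stamp({"body": "hello, net"}, "unc", "duke")
assert verify(art, "unc", "duke")      # duke trusts its neighbor unc
art["path"][0] = "forged-site"         # tampering with the path is caught
assert not verify(art, "unc", "duke")
```

Note that each site only authenticates its direct neighbor; trusting the full recorded path still requires trusting every intermediate site, which is exactly where the Sybil problem above comes back in.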


> Why should I trust dozens or hundreds of companies worldwide to certify that I'm talking to my local university?

Couldn't the university provide you a copy of the certificate chain it uses such that you can import it into your browser (or other client) certificate authority store? Then you personally have verified the university's identity and can tell your client to trust them as an authority.


What would a modern authentication on a peer to peer system look like?

In a large chat, where you're basically playing a game of telephone, you could sign your messages. At scale it would be infeasible for every user to retrieve a copy of every other user's key to validate messages. Perhaps validation could be optional: you see an important message and choose to validate it, which reaches out to the user, gets their key, then checks the signature.

In smaller chats it would be feasible to hold every user's public keys and retrieve them directly. Then allow the messages to be relayed between participants, or encrypted and relayed along the larger network.
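The "validate on demand" idea sketches out as a lazy key cache, so a client fetches a peer's key at most once. Everything here is illustrative: `fetch_key` stands in for a real network lookup, and HMAC stands in for a real public-key signature scheme just to keep the sketch self-contained.

```python
import hmac, hashlib

class LazyVerifier:
    """Fetch a user's key only when one of their messages is actually
    checked, then cache it for all later verifications."""
    def __init__(self, fetch_key):
        self.fetch_key = fetch_key   # callable: user id -> key bytes
        self.cache = {}
        self.fetches = 0             # counts the expensive lookups

    def key_for(self, user):
        if user not in self.cache:
            self.cache[user] = self.fetch_key(user)
            self.fetches += 1
        return self.cache[user]

    def verify(self, user, message, sig):
        want = hmac.new(self.key_for(user), message, hashlib.sha256).digest()
        return hmac.compare_digest(want, sig)

# One fetch covers any number of verifications for the same user.
v = LazyVerifier(lambda user: b"key-" + user.encode())
msg = b"an important post"
sig = hmac.new(b"key-alice", msg, hashlib.sha256).digest()
assert v.verify("alice", msg, sig)
assert v.verify("alice", msg, sig)
assert v.fetches == 1
```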


This may not be exactly what you are looking for, but you might be interested in how signed messages work in OStatus (salmons) and ActivityPub (LD Signatures). The technical details are borderline terrible (especially for ActivityPub), but the upshot is that when a server receives a message from another server that it federates with, it checks the message signature to be sure it's from who it claims to be.


I am apparently stupid when it comes to this but what is blocking that sort of key exchange at scale?


My thought on this: say there is a network of 1 million peers. You join the network and post a signed message to the chat. Now there are 1 million clients who want to verify that message, and they all hit you up for your key at the same time. I could see that effectively turning into a DDoS, especially for people on slower connections. I haven't tested it, though, so I could be making a wrong assumption.


new thing -> meeting place for specific topic -> wider adoption -> laypeople -> "assertive" people with weird points of view -> arguments -> drama -> original people seek new meeting place -> repeat

usenet, forums, message boards, reddit, twitter, the internet.

edit: this happens everywhere. hopefully the original people can sneak back into these places after the nutjobs have finished gloating over the empty battlefield and have gone off to annoy someone else.


Long ago I set up an NNTP peering system at a major Usenet provider. This article made me go look at the distribution of articles across the top providers (this is what I lived by back in the day): http://top1000.anthologeek.net/#stats

My feeder is still doing well, and it looks like Usenet is still doing well. 32 billion articles in January. That's a lot of article reading to do. ;)


> What we did not know was how to authenticate a site's public key.

Naive question-- why not "trust on first use?"


You're not talking to other sites directly.

Alice can tell you, "Here's a message from Bob. His public key is 0x1234," when Bob's public key is actually 0x5678. This opens a new avenue for trolling: publish a fake key for someone else and watch all their messages get dropped.
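Mechanically, trust-on-first-use here would look like SSH's known_hosts file: pin whatever key you see first for a user, and flag any later mismatch. A minimal sketch (names and key values are made up):

```python
# Toy trust-on-first-use (TOFU) key store, in the style of SSH's
# known_hosts: the first key seen for a user is pinned; any later,
# different key is treated as a possible impersonation.
class KeyMismatch(Exception):
    pass

class TofuStore:
    def __init__(self):
        self.pinned = {}

    def check(self, user, key):
        if user not in self.pinned:
            self.pinned[user] = key      # first use: trust and pin
            return "pinned"
        if self.pinned[user] != key:
            raise KeyMismatch(f"key for {user} changed!")
        return "ok"

store = TofuStore()
assert store.check("bob", "0x5678") == "pinned"
assert store.check("bob", "0x5678") == "ok"
try:
    store.check("bob", "0x1234")   # Alice relays a fake key for Bob
except KeyMismatch:
    pass
```

The catch is exactly the relay problem above: if Alice's fake key is the first one you see, TOFU pins the wrong key, and from then on it's the real Bob whose messages look forged.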


But then you can't replace the key before it expires (how can you tell whether the key has legitimately changed or you're being MITMed?).


Can't I just send a signed message to revoke my key?

Seems like that would cover every case except the one where a node cannot recover its own key.


If you're trusting on first use, and have been man-in-the-middled, then the true source won't have the original key to revoke the false key with.


Random aside - I took a few classes many years ago with Prof. Bellovin - he's still my favorite CS prof.



