jducoeur ([personal profile] jducoeur) wrote 2023-06-18 10:23 pm

Thoughts about Nusenet

For a while now, I've been pondering the idea of, "What would / should Usenet look like if we were to rebuild it today?" As Reddit tries to go full Twitter, that topic is getting a little more timely.

So let's take the question seriously, and kick it off with some initial requirements analysis.

(I'm going to post this on both Mastodon and Dreamwidth; comments solicited on both.)


Personal context: Usenet was basically my introduction to the Internet, back in '87: I was one of the founding members of the Rialto, the SCA newsgroup (rec.org.sca), and pretty much lived on Usenet for about five years.

I've been contemplating this "what would a new Usenet be?" question for a fairly long time. (I actually own nusenet.org, specifically to provide a home domain if this ever goes anywhere.)


For those going, "WTF is Usenet?", it was the original distributed forum system. Conversations on hundreds of topics, copied from server to server around the world. The tech was primitive by today's standards, but it was fairly cutting-edge then.

So let's think about requirements from a Usenet lens. What did it do well? (+) What were its problems? (-) What were we not even thinking about then? (?)


+ Usenet was topic oriented, not person oriented. That's an important niche, and surprisingly poorly served nowadays.

+ "Topics" could include communities. Some of my favorite newsgroups were for particular niche communities (like the SCA).

+ The topic namespace was hierarchical; you could easily split rec.humor.funny out of rec.humor.
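
(To make the hierarchy point concrete, here's a minimal sketch -- Python, with example group names -- of why splitting was cheap: a dot-separated name is just a path, so a subtree subscription automatically picks up new child groups.)

```python
# Sketch: hierarchical group names are dot-separated paths, so a
# "split" like rec.humor.funny is just a new name under rec.humor,
# and a subtree subscription picks it up with no extra work.

def in_subtree(group: str, subtree: str) -> bool:
    """True if `group` is `subtree` itself or anything beneath it."""
    return group == subtree or group.startswith(subtree + ".")

groups = ["rec.humor", "rec.humor.funny", "rec.org.sca", "comp.lang.c"]
print([g for g in groups if in_subtree(g, "rec.humor")])
# -> ['rec.humor', 'rec.humor.funny']
```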


- The Usenet namespace (the list of groups) was controlled by centralized mechanisms that scaled fairly poorly. This worked for hundreds of topics; it wouldn't work for tens of thousands.

(The community quickly devised a workaround, in the form of unofficial "alt" newsgroups for topics that were too new or controversial. These weren't necessarily distributed as widely, but the workaround generally worked.)

IMO, folks should be able to devise whatever groups they want: it shouldn't be centralized.


+ Other than the namespace, the system was highly distributed. Not only wasn't it centrally controlled, it was architecturally almost impossible to control.

(This didn't seem radical at the time, since the other major system was email. Now, it seems kind of radical.)


+ Conversations were explicitly threaded, and threads could branch as needed. No, this isn't obvious, and there are both pros and cons to it. (See the sketch below.)

+ It was defined by the protocol, not by the specific client: more like email, less like Facebook. (Again, this isn't obvious, especially nowadays.)
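
(To illustrate both of these points: the protocol gave every article a Message-ID header and a References header listing its ancestors, so any client could rebuild the branching tree on its own. A rough sketch, with invented articles:)

```python
# Sketch: rebuild a branching thread from protocol headers alone.
# Each article has a unique Message-ID; References lists its
# ancestors oldest-first, so the last entry is the direct parent.
from collections import defaultdict

articles = [
    {"id": "<1@a>", "refs": [], "subject": "Period shoes?"},
    {"id": "<2@b>", "refs": ["<1@a>"], "subject": "Re: Period shoes?"},
    {"id": "<3@c>", "refs": ["<1@a>"], "subject": "Re: Period shoes?"},
    {"id": "<4@d>", "refs": ["<1@a>", "<2@b>"], "subject": "Re: Period shoes?"},
]

children = defaultdict(list)
roots = []
for art in articles:
    parent = art["refs"][-1] if art["refs"] else None
    (children[parent] if parent else roots).append(art)

def show(art, depth=0):
    print("  " * depth + art["subject"] + "  " + art["id"])
    for child in children[art["id"]]:
        show(child, depth + 1)

for root in roots:
    show(root)
# Period shoes?  <1@a>
#   Re: Period shoes?  <2@b>
#     Re: Period shoes?  <4@d>
#   Re: Period shoes?  <3@c>
```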


+ You could block individual posters. For the time, that was a bit radical.

- I suspect the moderation tools weren't nearly good enough for modern requirements, although they were evolving pretty rapidly.

? I'm not entirely sure what moderation means for this sort of medium. Getting this right is important, and not simple. (This is a big topic.)


? While you could avoid reading the messages from a toxic poster, there was no way to prevent a toxic poster from seeing you.

(This was a concept that just plain didn't exist, and still doesn't exist in many systems. But a lot of folks in the Fediverse care about it, so it's worth mentioning and thinking about.)


- Spam was (and is) a problem. Usenet was where we really learned how much of a problem spam could be.

(Yes, this ties into the moderation problem, but is a different problem than bad behavior or toxicity, and probably needs to be looked at separately.)


Okay, that's an initial list, to start the conversation. What have I missed? Do I have some of the plusses and minuses wrong?

For now, let's focus on requirements rather than architecture -- "what do we want?" rather than "how should it be built?" (Or "does this already exist?") Those can come later.

Thoughts?


moderation

[personal profile] dsrtao 2023-06-19 09:29 am (UTC)
Self-moderation, the killfile, is a feature that every social system needs but is usually implemented worse than Usenet's -- which is weird, because Usenet does it client-side.
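
To illustrate how small that client-side mechanism is, here's a sketch -- the rule format is invented; real killfiles typically matched on From and Subject, often with regexes:

```python
# Sketch of a client-side killfile: filtering happens entirely in
# the newsreader, so no server cooperation is needed and each
# user's choices affect only their own view. Rules are invented.
KILLFILE = {
    "from": {"spammer@example.com"},
    "subject_contains": {"MAKE MONEY FAST"},
}

def visible(article: dict) -> bool:
    if article["from"] in KILLFILE["from"]:
        return False
    subject = article["subject"].upper()
    return not any(s in subject for s in KILLFILE["subject_contains"])

articles = [
    {"from": "friend@example.org", "subject": "Re: garb advice"},
    {"from": "spammer@example.com", "subject": "MAKE MONEY FAST"},
]
print([a["subject"] for a in articles if visible(a)])
# -> ['Re: garb advice']
```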

Two forms of moderation exist on Usenet. The well-known one is the 'moderated group', where every post passes through a human gateway before being allowed into the group. It's a lot of work, and restricts traffic significantly. It works very well, at high cost and low throughput.
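
Mechanically, that gateway amounts to checking for a moderator-added Approved header. A sketch, with the moderator directory and the mail/inject steps as stand-ins:

```python
# Sketch of the moderated-group gateway: a post to a moderated group
# is only injected into the feed if it carries an Approved header;
# otherwise it is mailed to the human moderator for review. The
# moderator directory and the mail/inject calls are stand-ins.
MODERATORS = {"rec.humor.funny": "moderator@example.org"}  # hypothetical

def handle_post(article: dict) -> str:
    group = article["newsgroup"]
    if group in MODERATORS and "approved" not in article:
        # mail_to(MODERATORS[group], article)  -- off to the human gateway
        return "sent to moderator"
    # inject_into_feed(article)
    return "posted"

print(handle_post({"newsgroup": "rec.humor.funny", "body": "A joke"}))
# -> sent to moderator
```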

The other one is the anti-spam mechanism, which uses public-key infrastructure to sign third-party cancel messages (technically forgeries, since they don't come from the original poster) that chase the spam around the distribution network. It generally did not work well.
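
In outline, something like this sketch -- the trusted-issuer list and the signature step stand in for the real PGP-based verification:

```python
# Sketch of handling a cancel message arriving in the feed: a
# "Control: cancel <id>" article asks every server to delete the
# target. The trusted-issuer set and the signature check stand in
# for the real PGP-based verification.
TRUSTED_ISSUERS = {"spam-canceller@example.net"}  # hypothetical

def process_cancel(cancel: dict, spool: dict) -> bool:
    """Delete the targeted article if the cancel checks out."""
    if cancel["issuer"] not in TRUSTED_ISSUERS:
        return False  # ignore untrusted or unsigned cancels
    # verify_signature(cancel)  -- stand-in for PGP verification
    spool.pop(cancel["target"], None)  # the cancel "chases" the spam
    return True

spool = {"<spam1@x>": "BUY NOW!!!"}
process_cancel({"issuer": "spam-canceller@example.net",
                "target": "<spam1@x>"}, spool)
print(spool)  # -> {}
```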

Re: moderation

[personal profile] brooksmoses 2023-06-19 06:54 pm (UTC)
There was a third form of moderation that was less-visible, because it likewise was primarily an anti-spam mechanism: Server-side filtering of incoming cross-server traffic. IIRC, most servers had extensive blocklists and filters, often based on the metadata of which other servers the message had passed through but also including things like "does the message contain binary data posted to a non-binaries group?"

My guess is that this filtering was also largely responsible for implementing the "moderated group" form of moderation, in that unsigned messages to moderated groups would also be filtered out. (Ideally, they would be rejected by the server where one tried to post them, but relying on that requires trusting everyone else's servers, which generally wasn't the way things were done.)
Edited 2023-06-19 18:57 (UTC)
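
A sketch of what such an incoming-feed filter might have looked like -- the peer blocklist and the binary heuristic here are invented:

```python
# Sketch of server-side filtering of incoming cross-server traffic:
# check the Path header (every server the article passed through)
# against a peer blocklist, and drop binary posts outside the
# binaries groups. The blocklist and heuristic are invented.
BLOCKED_PEERS = {"spam-relay.example.com"}  # hypothetical

def accept(article: dict) -> bool:
    if any(hop in BLOCKED_PEERS for hop in article["path"].split("!")):
        return False
    # Crude binary check: a uuencode header outside *.binaries.*
    if article["body"].startswith("begin 644") and \
            ".binaries." not in article["newsgroup"]:
        return False
    # The moderated-group check fits here too: no Approved header, no entry.
    return True

art = {"path": "news.example.edu!spam-relay.example.com!poster",
       "newsgroup": "rec.org.sca",
       "body": "begin 644 junk.zip"}
print(accept(art))  # -> False
```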

Re: moderation

[personal profile] alexxkay 2023-06-20 03:20 am (UTC)
Seen elsenet:

"One of the (exceedingly boring) pillars of oppression is to keep oppressed people busy. AKA "just block the person who upset you."

Like if Nazis spend no time blocking and I spend half my time blocking that means the system is serving the Nazis."

Re: moderation

[personal profile] etherial 2023-06-20 11:18 pm (UTC)
I wonder if a tool that is somewhere between spam filters and content moderation could be developed where certain users, topics, or keywords could be automagically (and adaptively) put behind a spoiler/community content tag.

Re: moderation

[personal profile] dsrtao 2023-06-21 12:18 am (UTC)
That would fall into the general bucket of content moderation (except for users, that's user moderation).

Here's the thing: if you have a generally cooperative userbase, then making tools that help them stay cooperative is good and useful. "You used three keywords we associate with politics: would you like to apply the politics tag to this post?": peachy.
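
That cooperative nudge is a tiny amount of code. A sketch, with invented keyword lists and threshold:

```python
# Sketch of the cooperative keyword-to-tag nudge: count matches
# against per-tag keyword lists and suggest the tag once a threshold
# is crossed. Keywords and threshold are invented.
TAG_KEYWORDS = {"politics": {"election", "senate", "ballot"}}
THRESHOLD = 3

def suggest_tags(post: str) -> list:
    words = set(post.lower().split())
    return [tag for tag, keys in TAG_KEYWORDS.items()
            if len(words & keys) >= THRESHOLD]

post = "The senate election ballot rules changed again"
for tag in suggest_tags(post):
    print(f"You used {THRESHOLD} keywords we associate with {tag}: "
          f"would you like to apply the '{tag}' tag to this post?")
```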

If you have griefers -- and every sufficiently large group will -- they will learn which specific words they can't use and substitute others. If one account is shut down, they open three more -- unless there's an effective tool to prevent that.

Re: moderation

[personal profile] etherial 2023-06-21 06:58 pm (UTC)
To be clear, I am talking about an automated tool capable of escalating degrees of new-user moderation. Both Facebook and Reddit already have tools that "greylist" new members: Facebook can hold messages from new users until they are approved by a moderator, and Redditors can set a maximum number of downvotes beyond which a post is hidden. So if a group's userbase is frequently moderating new posts, the group can apply automatic downvotes (or whatever) to new posts, and *also* escalate the number of automatic downvotes if the problem persists.

A good-faith actor would eventually be in the clear, having only suffered some difficult-to-measure delays in their initial posts, whilst a bad-faith actor faces an increasingly uphill battle to build enough credibility just to get *one* bad post through to the end users.
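
For concreteness, the escalating scheme might look like this sketch -- all the names and numbers are invented:

```python
# Sketch of an escalating greylist: every new user starts with an
# automatic score penalty, and the penalty grows while moderators
# keep rejecting their posts. All names and numbers are invented.
class Greylist:
    def __init__(self, base=2, step=2, probation=5):
        self.base, self.step, self.probation = base, step, probation
        self.rejected = {}  # user -> posts moderators rejected
        self.approved = {}  # user -> posts that survived moderation

    def penalty(self, user: str) -> int:
        if self.approved.get(user, 0) >= self.probation:
            return 0  # in the clear: no automatic downvotes
        return self.base + self.step * self.rejected.get(user, 0)

    def record(self, user: str, ok: bool) -> None:
        bucket = self.approved if ok else self.rejected
        bucket[user] = bucket.get(user, 0) + 1

g = Greylist()
g.record("griefer", ok=False)
g.record("griefer", ok=False)
print(g.penalty("newbie"))   # -> 2 (baseline for any new account)
print(g.penalty("griefer"))  # -> 6 (escalated after rejections)
```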