There's been a lot of discussion recently about the asymmetry implicit in "free speech" online. Many services naively subscribe to the principle that More Speech is Better, and that the way to defend yourself against harassment is through more speech. In practice, that's largely bullshit. (See this article from Yonatan Zunger for one good exploration of the topic; it's what got me thinking about the problem more concretely today, on top of [personal profile] siderea's related article a little while ago.)

At the moment, none of this is really a concern for Querki: I started with the hyper-safe approach that *only* members of a Space can comment in that Space. This is precisely the opposite of most websites' approach: it means that you at least aren't going to get harassment from outside the community, and you can always boot somebody out of the Space if they turn into a problem.

But in the medium term, that's too limiting: many Querki Spaces are public, and their use cases are going to *want* to allow public commentary. (Part of the inspiration here is that Querki is about to become a viable blogging platform, and public comments are, I believe, necessary for many blogging use cases.) The plan has always been to put moderation front-and-center in such cases, but as I get closer to actually implementing this (it's only a couple of steps down the Roadmap now), I'm starting to chew on this asymmetry, and its implications.

Oh, and keep in mind: while I'm framing everything below in terms of comments, the same policies are likely going to apply to *contributing* to a Space. That is, we're eventually going to provide ways for non-members to suggest new Things in a Space. I *think* the considerations are the same as for comments, since "Things" tend to be text-heavy, so the potential for abuse is the same.

Here are some thoughts and plans, but I welcome your ideas and comments -- this is *not* easy stuff, and I want to get it right, at least in the long run. The following is a combination of long-planned features and thinking out loud about how to make them better.

(NB: Yes, I'm probably overthinking this, and not all of the below will be implemented immediately. But I think it's best to take the problem seriously from the outset.)

Assumptions and Theories

First, a hypothesis (not quite a firm assumption, but I think it's true): if public comments are allowed, there *will* be harassment. I would love to believe otherwise, but in practice it's hard to imagine there won't be. Even if a specific Space is non-controversial, sometimes the owner of that Space will be, and there will be people looking for ways to get at them.

Second: I disbelieve that Querki will be able to simply filter out harassment in an automated way. Even if that is technically possible (which *might* be true in a limited way today, although I'm not confident of it), we certainly don't have the resources. And even if we had the resources to pro-actively police discussions, it would be both ethically and legally dicey at best, not least because different communities, quite appropriately, have different standards. So our priority has to be to provide individuals and communities with as much tooling as possible to decide and enforce their own standards, and to defend themselves against harassment.

Third: a primary focus should be on flipping those asymmetries. Harassers should get as *little* reward as possible for their efforts; it should be as *easy* as possible to address harassment; and it should be as *inconvenient* as possible for harassers to continue. We probably can't prevent harassment entirely, but there's some hope that we can change the cost/benefit ratio of doing it. (Yes, this is coldly practical, but I think that's necessary here.)

Fourth: pseudonymous and anonymous posting are qualitatively different. I don't agree with the airy assumption of Google and Facebook that people will somehow be magically more polite if their name is on something (all evidence says otherwise), but there *is* a major difference in whether a given comment has a persistent identity attached to it. Anonymous commenting is particularly prone to trolling precisely because it is so *cheap*: if all I have to do is post something nasty and run away scot-free, and nobody can track me down, I might as well do it. Whereas the more effort I have put into an identity, the more "expensive" it is if I damage that identity.

Designs and Features

So how do we put all this into practice? Here's a bunch of preliminary design. Note that when I say "member", I mean a member of this specific Space.


Distinguish pseudonymous from anonymous in the tooling: at some level, an anonymous commenter isn't *that* different from a random Querki user who isn't a member of this community. But given that "effort" equation, it's worth letting the owner of a Space separately decide whether to allow comments from logged-in users vs. anonymous comments from the Internet. (Personally, I would tend to allow the former but not the latter, but not everyone will necessarily agree; in the medium term I suspect we need to allow both.)
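To make that distinction concrete, here's a minimal sketch of what the per-Space setting might look like. All of these type and permission names are hypothetical, not Querki's actual API:

```scala
// Hypothetical sketch; none of these names are Querki's real API.
sealed trait CommentPolicy
case object MembersOnly   extends CommentPolicy // Querki's current behavior
case object LoggedInUsers extends CommentPolicy // pseudonymous: any Querki account
case object AnyoneAtAll   extends CommentPolicy // includes anonymous visitors

sealed trait Commenter
final case class SpaceMember(id: String) extends Commenter
final case class QuerkiUser(id: String)  extends Commenter
case object Anonymous                    extends Commenter

def mayComment(policy: CommentPolicy, who: Commenter): Boolean =
  (policy, who) match {
    case (_, SpaceMember(_))            => true // members can always comment
    case (LoggedInUsers, QuerkiUser(_)) => true
    case (AnyoneAtAll, _)               => true
    case _                              => false
  }
```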


Pre-moderation: at least initially, Querki is going to have a hard rule that, if a non-member posts a comment, it is *always* moderated before it becomes publicly visible. (In LJ/DW terms: if you aren't pre-approved, you get screened.) This is specifically so that abusers get the minimum possible ego-boo -- nobody except the moderators will ever see the comment before it gets deleted.

It is *possible* that I might allow post-moderation of comments by logged-in Querki users, if you turn that on specifically for this Space, although somebody's going to have to convince me that there are important use cases for it. I'm disinclined to ever allow post-moderation of anonymous comments -- that just seems like a recipe for pain.

Yes, I plan to be hard-assed about this. I want Querki to be a place for building good communities. To some degree, that means we have to prevent people from doing dumb things -- and IMO all the evidence says that allowing anonymous, unmoderated comments is a bad idea in the long run. I'm open to counter-arguments, but they're going to have to be persuasive.
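In code terms, the hard rule is a constraint on the comment lifecycle: a non-member's comment can never start out visible. A minimal sketch, again with hypothetical types:

```scala
// Hypothetical sketch of the comment lifecycle under pre-moderation.
sealed trait CommentState
case object Screened extends CommentState // visible only to moderators
case object Visible  extends CommentState // published in the Space
case object Rejected extends CommentState // deleted; never shown publicly

final case class Comment(authorId: Option[String], text: String, state: CommentState)

// Non-members *always* start Screened, regardless of per-Space settings;
// only a moderator's explicit approval moves a comment to Visible.
def submit(authorId: Option[String], text: String, isMember: Boolean): Comment =
  Comment(authorId, text, if (isMember) Visible else Screened)
```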


Easy Whitelisting: that said, if pre-moderation is the rule, it needs to be *extremely* easy to say, "this person is okay, let them post". (In practice, this will probably be implemented as adding this person as a member with very limited permissions.) But the key is that this should be as easy as possible -- likely a button to the effect of "Post this and let this person comment from now on".


Easy to Reject and Ban: similarly, it needs to be extremely easy to boot somebody permanently. This is part of the economics -- we want it to be easier to reject a harasser, and make it stick, than it was to commit the offense in the first place. It's part of why I am likely to encourage folks to allow pseudonymous comments but not anonymous ones: there's an identity you can ban.
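Taken together, these two items boil down to a pair of one-click moderator actions. Here's a rough sketch, with the whitelist modeled as the limited-permission membership described above (all names invented):

```scala
// Hypothetical sketch of the two one-click moderation actions.
final case class Member(id: String, permissions: Set[String])

final case class Space(members: Map[String, Member], banned: Set[String]) {

  // "Post this and let this person comment from now on": add the author
  // as a member whose only permission is commenting.
  def approveAndWhitelist(authorId: String): Space =
    copy(members = members + (authorId -> Member(authorId, Set("CanComment"))))

  // Reject the comment and permanently ban this identity from the Space.
  def rejectAndBan(authorId: String): Space =
    copy(banned = banned + authorId, members = members - authorId)
}
```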


Easy to Report Abuse: not every rejected comment -- not even every ban -- constitutes "abuse". I expect some bans to be simply "I don't like your style of rhetoric": totally a valid reason to ban somebody from commenting in this Space, but not a reason to kick them off the site.

That said, bans often *will* be a signal of abuse, and it should be very easy for the person doing the banning to say so. Again, this is part of the economics: it should be cheap to fight back. (We might even use some number of bans as a signal for us to investigate pro-actively, but I need to understand the legalities of that first.)
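As a rough illustration of the "bans as a signal" idea -- every name here is invented, and the threshold is a placeholder -- the aggregation could be as simple as counting abuse-flagged bans per user across Spaces:

```scala
// Hypothetical sketch: bans that were flagged as abuse, aggregated per user.
final case class Ban(spaceId: String, userId: String, reportedAsAbuse: Boolean)

// Users whose abuse-flagged bans meet the threshold get a human review --
// a signal to investigate, not an automatic ejection.
def usersNeedingReview(bans: Seq[Ban], threshold: Int): Set[String] =
  bans
    .filter(_.reportedAsAbuse)
    .groupBy(_.userId)
    .collect { case (userId, reports) if reports.size >= threshold => userId }
    .toSet
```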


Abuse Must be *Clear* Grounds for Ejection: I'll need to look at Querki's Terms of Service and make sure that we're properly setting this up. I believe that we need to state things such that there is *some* ground for judging things case-by-case, but it should be rock-solid-clear that if you are reported for abuse, we may, within our judgement, kick you off and delete your account. (And then we're going to have to evolve clear internal policies for how to evaluate these cases. I'm sure this is going to be *loads* of fun to figure out, but you just have to look at Facebook's history to see how badly things can go if the ejection policies aren't reasonably clear and well-thought-out.)

Similarly, we need to make clear that astroturfing, especially astroturfing for purposes of harassment, is grounds for ejection, and that we retain the right to decide that you are doing so. I wouldn't want to use this particular hammer at all casually -- it's challenging to get right, especially since having multiple identities is not just legit but designed into Querki's architecture -- but it's a common tactic of abusers, especially in pseudonymous environments.


Easy Moderation Assistance: Querki is aimed principally at communities, so moderation should *not* just be the responsibility of a Space's owner. It should be easy for them to share the job with willing volunteers. This is especially important when facing a determined trollstorm, but IMO it's good practice in general.
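In practice, that probably just means modeling moderation as a grantable role, rather than an owner-only power. A tiny sketch (names hypothetical):

```scala
// Hypothetical sketch: moderation as a role the owner can hand out.
final case class Role(name: String, permissions: Set[String])

val Moderator: Role = Role("Moderator", Set("CanApproveComments", "CanBan"))

def canModerate(rolesOfUser: Set[Role]): Boolean =
  rolesOfUser.exists(_.permissions.contains("CanApproveComments"))
```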


Higher Bar for New Account?: this one is hypothetical, but worth mulling over. It's related to the astroturfing problem: if I get banned from a Space, what's to stop me from creating a new account and just starting right in again? (The one time I got stalked on LJ, my stalker created new accounts twice before finally going away.) So the question is, can we inject a little bit of sand in these gears, without making it too inconvenient for genuine new users?

One possibility would be something vaguely akin to the "reputation" notion on sites like StackOverflow: I can set a minimum threshold for commenters in my Space. You gain points by participating in normal ways, and lose bigtime from anything that smacks of abuse.

Possibly even better would be some sort of formal "web of trust" approach, that contextualizes reputation within *this* community. One advantage we have over SO is our strong focus on community and relationships; we might be able to do useful things with that. (This needs research.)

It's easy to build something like this naively; it's much harder to make it resistant to people gaming the system. But if we could get it right, it might provide a good backstop that allowed people to leave the comment door semi-open while auto-rejecting anything that was outside their desired bounds. Very advanced -- certainly a long-term project -- but might be worth exploring once we have the cycles to do so.
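As a starting point, here is a deliberately naive sketch of the reputation-threshold idea -- exactly the naive, gameable version warned about above, with invented names and numbers:

```scala
// Deliberately naive sketch of the reputation-threshold idea; the hard,
// unsolved part is making something like this resistant to gaming.
final case class Reputation(points: Int)

sealed trait Event
case object NormalParticipation extends Event
case object ConfirmedAbuse      extends Event

def update(rep: Reputation, event: Event): Reputation = event match {
  case NormalParticipation => Reputation(rep.points + 1)
  case ConfirmedAbuse      => Reputation(rep.points - 50) // "lose bigtime"
}

// Each Space sets its own minimum threshold for would-be commenters.
def mayCommentHere(rep: Reputation, spaceThreshold: Int): Boolean =
  rep.points >= spaceThreshold
```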


What Else? Ideas? Comments? This is a challenging and important topic, and one I care passionately about getting right. I'm open to any brainstorming folks might be interested in here...