jducoeur: (querki)
[personal profile] jducoeur
There's been a lot of discussion recently about the asymmetry implicit in "free speech" online. Many services naively subscribe to the principle that More Speech is Better, and that the way to defend yourself against harassment is through more speech. In practice, that's largely bullshit. (See this article from Yonatan Zunger for one good exploration of the topic; it's what got me thinking about the problem more concretely today, on top of [personal profile] siderea's related article a little while ago.)

At the moment, none of this is really a concern for Querki: I started with the hyper-safe approach that *only* members of a Space can comment in that Space. This is precisely the opposite of most websites: it means that you at least aren't going to get harassment from outside the community, and you can always boot somebody out of the Space if they turn into a problem.

But in the medium term, that's too limiting: many Querki Spaces are public, and their use cases are going to *want* to allow public commentary. (Part of the inspiration here is that Querki is about to become a viable blogging platform, and public comments are, I believe, necessary for many blogging use cases.) The plan has always been to put moderation front-and-center in such cases, but as I get closer to actually implementing this (it's only a couple of steps down the Roadmap now), I'm starting to chew on this asymmetry, and its implications.

Oh, and keep in mind: while I'm framing everything below in terms of comments, the same policies are likely going to apply to *contributing* to a Space. That is, we're eventually going to provide ways for non-members to suggest new Things in a Space. I *think* the considerations are the same as for comments, since "Things" tend to be text-heavy, so the potential for abuse is the same.

Here are some thoughts and plans, but I welcome your ideas and comments -- this is *not* easy stuff, and I want to get it right, at least in the long run. The following is a combination of long-planned features and thinking out loud about how to make them better.

(NB: Yes, I'm probably overthinking this, and not all of the below will be implemented immediately. But I think it's best to take the problem seriously from the outset.)

Assumptions and Theories

First, a hypothesis (not quite a firm assumption, but I think it's true): if public comments are allowed, there *will* be harassment. I would love to believe otherwise, but in practice it's hard to imagine there won't be. Even if a specific Space is non-controversial, sometimes the owner of that Space will be, and there will be people looking for ways to get at them.

Second: I disbelieve that Querki will be able to simply filter out harassment in an automated way. Even if that is technically possible (which *might* be true in a limited way today, although I'm not confident of it), we certainly don't have the resources. And even if we had the resources to pro-actively police discussions, it would be both ethically and legally dicey at best, not least because different communities, quite appropriately, have different standards. So our priority has to be to provide individuals and communities with as much tooling as possible to decide and enforce their own standards, and to defend themselves against harassment.

Third: a primary focus should be on flipping those asymmetries. Harassers should get as *little* reward as possible from doing so; it should be as *easy* as possible to address harassment; and it should be as *inconvenient* as possible for harassers to continue. We probably can't prevent harassment entirely, but there's some hope that we can change the cost/benefit ratio of doing it. (Yes, this is coldly practical, but I think that's necessary here.)

Fourth: pseudonymous and anonymous posting are qualitatively different. I don't agree with the airy assumption of Google and Facebook that people will somehow be magically more polite if their name is on something (all evidence says otherwise), but there *is* a major difference in whether a given comment has a persistent identity attached to it. Anonymous commenting is particularly prone to trolling precisely because it is so *cheap*: if all I have to do is post something nasty and run away scot-free, and nobody can track me down, I might as well do it. Whereas the more effort I have put into an identity, the more "expensive" it is if I damage that identity.

Designs and Features

So how do we put all this into practice? Here's a bunch of preliminary design. Note that when I say "member", I mean a member of this specific Space.


Distinguish pseudonymous from anonymous in the tooling: at some level, an anonymous commenter isn't *that* different from a random Querki user who isn't a member of this community. But given that "effort" equation, it's worth letting the owner of a Space separately decide whether to allow comments from logged-in users vs. anonymous comments from the Internet. (Personally, I would tend to allow the former but not the latter, but not everyone will necessarily agree; in the medium term I suspect we need to allow both.)
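To make that setting concrete, here is a minimal Scala sketch of what separately toggling the two classes of outsider might look like. (Purely illustrative; all of the names are invented, not Querki's actual API.)

```scala
// Hypothetical sketch, not Querki's real model: a Space-level setting that
// separately controls pseudonymous (logged-in) and anonymous commenters.
sealed trait Commenter
case object SpaceMember extends Commenter   // a member of this specific Space
case object LoggedInUser extends Commenter  // any Querki account, i.e. a pseudonym
case object Anonymous extends Commenter     // no persistent identity at all

case class CommentPolicy(allowLoggedIn: Boolean, allowAnonymous: Boolean) {
  def mayComment(who: Commenter): Boolean = who match {
    case SpaceMember  => true            // members can always comment
    case LoggedInUser => allowLoggedIn
    case Anonymous    => allowAnonymous
  }
}

// The stance suggested above: pseudonymous yes, anonymous no.
val cautiousDefault = CommentPolicy(allowLoggedIn = true, allowAnonymous = false)
```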


Pre-moderation: at least initially, Querki is going to have a hard rule that, if a non-member posts a comment, it is *always* moderated before it becomes publicly visible. (In LJ/DW terms: if you aren't pre-approved, you get screened.) This is specifically so that abusers get the minimum possible ego-boo -- nobody except the moderators will ever see the comment before it gets deleted.

It is *possible* that I might allow post-moderation of comments by logged-in Querki users, if you turn that on specifically for this Space, although somebody's going to have to convince me that there are important use cases for it. I'm disinclined to ever allow post-moderation of anonymous comments -- that just seems like a recipe for pain.

Yes, I plan to be hard-assed about this. I want Querki to be a place for building good communities. To some degree, that means we have to prevent people from doing dumb things -- and IMO all the evidence says that allowing anonymous, unmoderated comments is a bad idea in the long run. I'm open to counter-arguments, but they're going to have to be persuasive.
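To illustrate that hard rule (again with invented names, not Querki's real code), the comment lifecycle might look roughly like this:

```scala
// Hypothetical sketch: a non-member's comment is born screened, and nobody
// except moderators ever sees it unless it is explicitly approved.
sealed trait CommentState
case object Screened extends CommentState  // visible to moderators only
case object Approved extends CommentState  // publicly visible
case object Rejected extends CommentState  // deleted without ever being shown

case class Comment(author: String, text: String, state: CommentState) {
  def publiclyVisible: Boolean = state == Approved
}

def submit(author: String, isMember: Boolean, text: String): Comment =
  if (isMember) Comment(author, text, Approved)  // members post directly
  else Comment(author, text, Screened)           // everyone else waits for a mod
```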


Easy Whitelisting: that said, if pre-moderation is the rule, it needs to be *extremely* easy to say, "this person is okay, let them post". (In practice, this will probably be implemented as adding this person as a member with very limited permissions.) But the key is that this should be as easy as possible -- likely a button to the effect of "Post this and let this person comment from now on".


Easy to Reject and Ban: similarly, it needs to be extremely easy to boot somebody permanently. This is part of the economics -- we want it to be easier to reject a harasser, and make it stick, than it was to commit the offense in the first place. It's part of why I am likely to encourage folks to allow pseudonymous comments but not anonymous ones: there's an identity you can ban.
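Taken together, this item and the previous one might boil down to something like the following sketch: one cheap operation per decision, which is the whole point of flipping the asymmetry. (Invented names; the real version would sit behind one-click moderation buttons.)

```scala
// Hypothetical sketch of the two one-click moderator actions described above.
import scala.collection.mutable

class SpaceModeration {
  private val whitelisted = mutable.Set.empty[String] // may now post unscreened
  private val banned      = mutable.Set.empty[String] // may never post again

  // "Post this and let this person comment from now on."
  def approveAndWhitelist(identity: String): Unit = {
    whitelisted += identity
    // ...also publish the pending comment here...
  }

  // One click to make a rejection stick, optionally flagging it as abuse.
  def rejectAndBan(identity: String, reportAbuse: Boolean = false): Unit = {
    banned      += identity
    whitelisted -= identity
    // ...also delete the pending comment, and file an abuse report if asked...
  }

  def mayPostUnscreened(identity: String): Boolean =
    whitelisted.contains(identity) && !banned.contains(identity)
}
```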


Easy to Report Abuse: not every rejected comment -- not even every ban -- constitutes "abuse". I expect some bans to be simply "I don't like your style of rhetoric": totally a valid reason to ban somebody from commenting in this Space, but not a reason to kick them off the site.

That said, bans often *will* be a signal of abuse, and it should be very easy for the banner to say so. Again, this is part of the economics: it should be cheap to fight back. (We might even use some number of bans as a signal for us to investigate pro-actively, but I need to understand the legalities of that first.)


Abuse Must be *Clear* Grounds for Ejection: I'll need to look at Querki's Terms of Service and make sure that we're properly setting this up. I believe that we need to state things such that there is *some* ground for judging things case-by-case, but it should be rock-solid-clear that if you are reported for abuse, we may, at our discretion, kick you off and delete your account. (And then we're going to have to evolve clear internal policies for how to evaluate these cases. I'm sure this is going to be *loads* of fun to figure out, but you just have to look at Facebook's history to see how badly things can go if the ejection policies aren't reasonably clear and well-thought-out.)

Similarly, we need to make clear that astroturfing, especially astroturfing for purposes of harassment, is grounds for ejection, and that we retain the right to decide that you are doing so. I wouldn't want to use this particular hammer at all casually -- it's challenging to get right, especially since having multiple identities is not just legit but designed into Querki's architecture -- but it's a common tactic of abusers, especially in pseudonymous environments.


Easy Moderation Assistance: Querki is aimed principally at communities, so moderation should *not* just be the responsibility of a Space's owner. It should be easy for them to share the job with willing volunteers. This is especially important when facing a determined trollstorm, but IMO it's good practice in general.
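Mechanically, "sharing the job" might be as simple as making moderation a grantable permission, as in this hypothetical sketch (names invented):

```scala
// Hypothetical sketch: the owner can grant and revoke moderator rights,
// so volunteers can share the load across timezones and trollstorms.
case class SpaceRoles(owner: String, moderators: Set[String] = Set.empty) {
  def addModerator(id: String): SpaceRoles    = copy(moderators = moderators + id)
  def removeModerator(id: String): SpaceRoles = copy(moderators = moderators - id)
  def canModerate(id: String): Boolean        = id == owner || moderators(id)
}

val roles = SpaceRoles(owner = "alice").addModerator("bob")
assert(roles.canModerate("bob"))   // the volunteer can now work the queue
assert(!roles.canModerate("eve"))  // everyone else still can't
```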


Higher Bar for New Account?: this one is hypothetical, but worth mulling over. It's related to the astroturfing problem: if I get banned from a Space, what's to stop me from creating a new account and just starting right in again? (The one time I got stalked on LJ, they did this to me twice before finally going away.) So the question is, can we inject a little bit of sand in these gears, without making it too inconvenient for genuine new users?

One possibility would be something vaguely akin to the "reputation" notion on sites like StackOverflow: I can set a minimum threshold for commenters in my Space. You gain points by participating in normal ways, and lose bigtime from anything that smacks of abuse.

Possibly even better would be some sort of formal "web of trust" approach, that contextualizes reputation within *this* community. One advantage we have over SO is that strong focus on community and relationships; we might be able to do useful things with that. (This needs research.)

It's easy to build something like this naively; it's much harder to make it resistant to people gaming the system. But if we could get it right, it might provide a good backstop that allowed people to leave the comment door semi-open while auto-rejecting anything that was outside their desired bounds. Very advanced -- certainly a long-term project -- but might be worth exploring once we have the cycles to do so.
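For what it's worth, the naive version of that threshold really is only a few lines; all of the difficulty lives in choosing the numbers and resisting gaming. A sketch, with every number invented out of thin air:

```scala
// Hypothetical sketch of a reputation threshold; the hard part (resistance
// to gaming) is entirely elided here.
case class Reputation(points: Int) {
  def afterNormalParticipation: Reputation = copy(points = points + 1)   // slow gain
  def afterAbuseFinding: Reputation        = copy(points = points - 100) // "lose bigtime"
}

// Each Space picks its own minimum threshold for commenters.
case class CommentGate(minPoints: Int) {
  def admits(rep: Reputation): Boolean = rep.points >= minPoints
}

val gate = CommentGate(minPoints = 0)
assert(gate.admits(Reputation(5)))                     // normal participant
assert(!gate.admits(Reputation(5).afterAbuseFinding))  // auto-rejected after abuse
```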


What Else? Ideas? Comments? This is a challenging and important topic, and one I care particularly passionately about doing well. I'm open to any brainstorming folks might be interested in here...

(no subject)

Date: 2017-05-16 08:03 pm (UTC)
alexxkay: (Default)
From: [personal profile] alexxkay
I don't understand what you mean by "post-moderation".

(no subject)

Date: 2017-05-16 08:21 pm (UTC)
alexxkay: (Default)
From: [personal profile] alexxkay
What if the content is not just offensive, but actively dangerous and/or illegal? "I think people should go to [insert real address] and attack [real person]."

It seems to me like, at minimum, you want moderators to be able to make it harder to find such information. One approach is the Wikipedia one, where there's lots of controversy in the History and Talk sections of some pages, but few people go there. Another is the Making Light approach of "disemvowelling".

(no subject)

Date: 2017-05-17 04:28 am (UTC)
alexxkay: (Default)
From: [personal profile] alexxkay
I can think of at least one, though it's fairly out on the long tail.

* A Querki community forms around niche topic Foo.
* The community becomes moribund, unmaintained and unmoderated.
* Person X wants to ask a question about Foo.

Given my current avocation, it's not at all uncommon for me to leave a question on a blog post from a decade ago. Similarly, I've gotten legitimate comments/questions on ancient posts of mine.

Now, that's not a "use case" for the people who (used to) run the now-dead community, and not clearly one for Querki itself. With no (extant) moderators, there'd be no way to catch spam.

As I think more about this, I guess the real way to 'solve' this situation is probably to have a mechanism for requesting "Hey, this Space seems super-moribund; can I take it over?"

(no subject)

Date: 2017-05-17 11:24 am (UTC)
dsrtao: dsr as a LEGO minifig (Default)
From: [personal profile] dsrtao
That's the sort of feature that might not be needed until much later, but working it out in the terms of service early is important.

(no subject)

Date: 2017-05-16 10:09 pm (UTC)
dsrtao: dsr as a LEGO minifig (Default)
From: [personal profile] dsrtao
Lots of meat here. I'm going to make several comments, and limit each one to one topic.


"Real Names", Pseudos and Anons. The default for a Querki accountholder is one or more pseudos, yes? To manage multiple pseudos, accountholders need tools that always show them what pseudo they are currently using, assign pseudos to Spaces so that they need to make an active decision to post as a different pseudo in a Space where they have already commented, and then on the other side...

One of the default settings for a Space should be "accept only one pseudo from an accountholder in this Space". That prevents the easiest forms of sockpuppeting.
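A minimal sketch of how that default might be enforced at posting time (purely hypothetical; all names invented):

```scala
// Hypothetical sketch: once an account has commented under one pseudo in a
// Space, attempts to post under a second pseudo are refused.
import scala.collection.mutable

class OnePseudoPerSpace {
  // accountholder -> the pseudo first used in this Space
  private val firstPseudo = mutable.Map.empty[String, String]

  def mayPostAs(account: String, pseudo: String): Boolean =
    firstPseudo.getOrElseUpdate(account, pseudo) == pseudo
}

val space = new OnePseudoPerSpace
assert(space.mayPostAs("acct-1", "wolf"))   // first pseudo is registered
assert(!space.mayPostAs("acct-1", "raven")) // sockpuppet attempt refused
assert(space.mayPostAs("acct-2", "raven"))  // a different account is fine
```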

(no subject)

Date: 2017-05-17 02:31 pm (UTC)
dsrtao: dsr as a LEGO minifig (Default)
From: [personal profile] dsrtao
> a policy that you simply can't (at the technical level) have more than one Identity belong to a given Space

That would work pretty well, I think. Yes, you have the Hypnotized Spy Oracle (spies are hypnotized to never reveal that they are spies; therefore, you line up all your people and ask them to say "I am not a spy." The one who can't say it is a spy.) but the negatives are probably not as consequential as the positives of avoiding sockpuppets agreeing with each other.

Hrm. This brings up invitation systems. Sam sends an out-of-band message to Ana asking her to join his Space; Ana can create a new pseudo or use an existing one. But if Sam sends an in-Querki message to one of Ana's pseudos inviting it to join, Ana has to use that pseudo or reveal another one.

I'm guessing that most of the time, most people will want just one or two pseudos (perhaps reflecting a public persona and a special-interest one). The more interesting stuff is how to prevent undesired disclosures.

If people grow to depend on this, enumerating an account's pseudos will be a prime target.

(no subject)

Date: 2017-06-07 10:08 pm (UTC)
etherial: Earthdawn Logo (earthdawn)
From: [personal profile] etherial
What is the likelihood that Querki is a suitable platform for running online RPGs? In situations like that, it will be not only common but actively desirable for GMs (and sometimes players) to be able to post using multiple identities.

(no subject)

Date: 2017-06-07 10:45 pm (UTC)
etherial: Galliard Moon (galliard)
From: [personal profile] etherial
I currently play in One World By Night, a network of interconnected World of Darkness LARPs. Every game has its own "forum", varying wildly in quality, many of them far, far less advanced than a Facebook group because (surprise, surprise) not everyone can get a web developer to play their LARP and work on their back-end in their free time. But this means that if you have N characters in M games, you could have as many as N*M accounts just for your PCs as they visit other games in the area (or not so nearby). I would *love* to see if we could migrate the entire Network onto a Querki Metaspace, enabling instantaneous sharing of character sheets, characters visiting other games, cross-game plot and connections, etc., etc. And then when my character advances in Rank or dies, I can update their profile on all the games simultaneously and simply create a new identity in my home game for my new character.

(no subject)

Date: 2017-06-08 02:13 pm (UTC)
etherial: Galliard Moon (galliard)
From: [personal profile] etherial
An individual conversation usually starts out with a werewolf howling an introduction at the edge of another's territory, or describing a scene where they are working on a project, or describing their character actively looking for someone.

BUT

A lot of the game forums are heavily subdivided by geographic location, so another translation might be to construct the Things in the databases to be in-game locations, and scenes would then erupt from there. Also, private subforums usually exist for the differing sects and factions in the game, and each of those could be a Thing.

Regardless, each character would have their own Thing where they could hang their character sheet, record their downtime and experience, and record conversations with the GMs.

(no subject)

Date: 2017-05-16 10:17 pm (UTC)
dsrtao: dsr as a LEGO minifig (Default)
From: [personal profile] dsrtao
New Accounts / Reputation: Here's an approach: just as all anons get auto-pre-moderated, every pseudo should be auto-pre-moderated for the first few comments in each Space. At least as a default. That way everyone starts out equal.

The Space owner could whitelist pseudos, possibly even across all their Spaces.
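A sketch of that suggestion (invented names, and the cutoff of three is arbitrary): a pseudo's first few comments in each Space get screened, no matter how established the account is elsewhere.

```scala
// Hypothetical sketch: auto-pre-moderate each pseudo's first n comments
// per Space, so everyone starts out equal.
import scala.collection.mutable

class NewPseudoScreen(n: Int = 3) {
  private val approved = mutable.Map.empty[(String, String), Int] // (pseudo, space) -> count

  def needsScreening(pseudo: String, space: String): Boolean =
    approved.getOrElse((pseudo, space), 0) < n

  def recordApproval(pseudo: String, space: String): Unit = {
    val key = (pseudo, space)
    approved(key) = approved.getOrElse(key, 0) + 1
  }
}
```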

I really like the way that Dreamwidth has separated "do you want to see what this person posts?" from "do you want to give this person access to non-public posts?"

(no subject)

Date: 2017-05-16 10:27 pm (UTC)
dsrtao: dsr as a LEGO minifig (Default)
From: [personal profile] dsrtao
Moderation Assistance: I'm part of the moderation team for a well-known SF author. They have a fairly popular comment section on their blog; there are moderators in three or four well-separated timezones.

Handing over mod authority is sharing control: it's necessary for certain traffic levels, and scary. It's best if there's an undo available, preferably along with an easily reviewable history list.

And that brings me to two related topics: writing an entry and scheduling it for later publication, and changing the visibility of entries after the fact. If you've got a group blog with, say, a major writer and many invited guests, the ability to schedule posts to not arrive on top of each other is great. So is the ability to make a post visible only to the moderation/editor group, and then later spring it open for all.

(Surprise third thing: editing another person's post. Editors want to edit in place, which is different from moderators zapping a post and inserting a behavior warning.)

Yes, but

Date: 2017-05-17 08:54 pm (UTC)
drwex: (Default)
From: [personal profile] drwex
I think that Zunger's point is not a bad one, but it's also not original and it runs aground on this question: who gets to define what "hate speech" is?

These sorts of arguments look fine when we're all sitting in our comfortable white Western Enlightened worlds where Nazis are bad and that's easy. But what do you say to the person who says that "insulting Islam" or "slandering the King" are hate speech and must be prohibited for exactly the reasons given? The King, after all, is a holy person, the actual embodiment of god on Earth, and it is not only damaging to that person, but their family and to all adherents of this religion for you to point out that the King has no clothes.

There are lots of environments (e.g. Turkey, Thailand) whose views we don't share about what makes something "hate speech." This is why US law has evolved to call out particularized threats, which Zunger seems to miss.

If I say "Jews are scum and the only thing wrong with the concentration camps is that it didn't kill all of them" that's hate speech by most definitions - it's a categorical attack on a group of persons based on a protected designation (religion). I can't discriminate in that way in employment, service provision, and so on. But I can still utter that sentence in public and have it be First Amendment-protected. However if I say, "You, Joe Jew, should have been put to death in a gas chamber" that is particularized and generally not protected speech.

(There are some exceptions for public figures but leave that off for now.)

By the same token we prohibit Westboro Baptist members from picketing funerals of specific gay soldiers while permitting them to continue promulgating "god hates fags" beliefs.

Finally, Zunger's approach ignores the reality that the language of damaging speech has been appropriated by the empowered groups to further their aims, Men's Rights and reverse discrimination being just two prominent examples. If you carve out speech exceptions for people who are harmed by hate speech then you have to figure out who is allowed to claim these exceptions. The UK, for example, is notorious for having libel laws that allow the powerful to suppress speech they don't like, and in the US we've had to invent anti-SLAPP statutes to stop powerful entities such as corporations from using these sorts of laws to suppress speech they don't like.

tl;dr Zunger's piece is a nice bit of passion for people who are all like-minded but once you start admitting the wide world it breaks down, and in fast and nasty ways. I'll stick with my antiquated liberal constitutionalisms, thanks.

In communities such as you seem to care about the issue goes most often to particularized threats. These are and should be blockable. In addition, people need to understand that Querki spaces are not public fora. First Amendment rules simply don't apply there and anyone claiming so is entirely missing the point. What rules you establish to create the desired sort of civil discourse are entirely up to you; free speech doesn't enter into it.

Re: Yes, but

Date: 2017-05-17 08:57 pm (UTC)
drwex: (Default)
From: [personal profile] drwex
ETA: I could make a similar argument about the snide "implicit incitement to violence" claim. Incitement to violence is not protected and nobody claimed it was. The "implicit" part is a weasely way of saying "speech we agree is icky" and it sounds great right up until you get to the question of "what counts as implicit and who gets to decide that?" See the recent debates over trigger warnings as a good example of where this can get very complicated.

Re: Yes, but

Date: 2017-05-18 01:41 am (UTC)
drwex: (Default)
From: [personal profile] drwex
> he's not wrong in his diagnosis of where many sites fall down: the belief that More Speech is the way to deal with Bad Speech looks increasingly naive

I vehemently disagree. The notion that 'do nothing and allow the bad actors to dominate' is equivalent to "more speech" is pernicious nonsense. I'll agree that lots of places fall down on creating more equitable speech environments but a bad implementation doesn't make the belief wrong. That's like saying "you used a crappy RNG so your crypto algorithm is bad."

You cannot improve the general air of speech by ruling some viewpoints illegitimate because someone always has to make that ruling. Viewpoint-neutral wasn't invented on a whim. It's like the scientific method for speech - we can't prove it's the best; we just haven't found anything better.

Re: Yes, but

Date: 2017-05-18 02:41 pm (UTC)
drwex: (Default)
From: [personal profile] drwex
> do you believe that communities (not the government -- individual communities) *should not have the right or power* to set their own internal standards of speech?

Of course. That's not even a question because "do nothing" is itself setting a standard. Unfortunately, it's one that too many communities take. Communities regulate membership, and members' conduct. That's what makes them communities rather than random assemblages of persons.

"Oh, just shout over the bad guys and they'll go away". That is the viewpoint that Zunger is decrying.

That's certainly not the impression I got from his piece. He didn't talk about setting standards of conduct; he talked about the damaging effects of hate speech. Furthermore he (and I think maybe you) seem to think that louder shouting is the only approach, or is the only way to interpret "more speech." There are lots of other ways to make "more speech" happen. Just off the top of my head:
- Make sure that your community moderators come from diverse backgrounds and represent different segments of your community.
- Have a method whereby members of the community, particularly those disadvantaged by an exchange, can bring in their preferred speakers.
- Create fora devoted to the expression of views that hate speech disadvantages.
- Provide robust, well-publicized, and vigorously enforced standards of personal safety in the community.
- Be transparent in reporting incidents and their resolutions.
- Have a development team that enacts and adheres to strong affirmative action principles in hiring and promotion, and let that team actively review the ongoing status of how the community code and features are performing.
- Encourage the developers to participate in the fora mentioned above.

I'm sure I can think of more, but none of the above says anything about banning hate speech. Each of them should (and some have been shown to, in practice) increase the amount, variety, and responsiveness of speech. Some of that comes from bringing in people who can speak, because not everyone is a good speaker or wants to be a poster child, and some of that comes from creating an environment where those who don't enjoy background social privileges nevertheless feel safe and able to have themselves be heard when they choose.

> I agree with you that trying to do so through regulation is inappropriate

I didn't say anything about regulation. What I said was that a point of view that says "the answer to hate speech is banning hate speech" (which is what I read Zunger to be espousing) is required to answer the two troubling questions of "what is hate speech?" and "who gets to decide what is hate speech?"

I pointed out that Zunger's handwave works well as long as we stay within the comfortable American bubble but it breaks down as soon as you step even a little bit outside of it.

(no subject)

Date: 2017-05-21 10:38 pm (UTC)
cellio: (Default)
From: [personal profile] cellio
I have a lot of experience on the Stack Exchange network (though not specifically its 800-pound gorilla, Stack Overflow), including years of being a moderator on multiple sites. Here are some unordered thoughts.

It sounds like you have the notion that each Space is its own community and can set its own rules for what is considered offensive, abusive, just plain not welcome, etc. This is good; there are few *global* rules for this. In addition, a community might need to set more-restrictive-than-average rules for its members to feel safe in discussing certain topics.

On Stack Exchange, if an account gets destroyed for abusive behavior (including spamming) and the person comes right back to try again on a new account, that account is auto-suspended. I won't talk in public about how that works, but I think you can work out some of the ways that could be done. :-)

For a (personal) blog it makes sense that there's one moderator -- there's a clear owner. For something that's more of a community space from the start, it's helpful to have multiple moderators -- so you have someone to confer with if you're unsure about something, to (possibly) increase availability (time zones, vacations, crazy week at work...), and to share the load of an active site. If you're going to have multiple moderators, don't do two -- if they disagree there's no clear way forward. I think that's why SE starts communities with three.

Do Spaces have (sub)Spaces? Will you ever need to have the notion of "whitelist *here* but not *there*"? That seems like something that would be hard to retrofit later, so if you think you might need it, think about what hooks you'll need to put in place to add it later.

Is account creation easy -- as easy as signing in with an OpenID credential? If so, and given that pseudonymity is fine, I can't think of a reason to allow anonymous comments at all.

Defaulting to moderated comments is wise. Moderators can then decide to whitelist globally (in that Space), whitelist in this thread only, whitelist except in some subspace/thread, keep moderating this user, or ban. (I don't know how broad a Space is; if your scope is food, for example, you can imagine people who are reasonable in most regards but just have a *thing* about veganism that you want to contain. Or, you know, politics-related spaces...)

You have flagging, right? So if moderators approve, or a whitelisted user posts, something that shouldn't have made it through, it can be brought to moderators' attention? Well-intentioned moderators can still make mistakes and good users can go bad (maybe on a specific issue only), so it's good if that can be corrected.

Keeping a per-user history of important moderation-related events is important. Don't rely on moderators' memory. If you think that "delete account, make new one to try again" is something that could happen, history needs to survive account deletion so mods can later answer the question "wait, is this the same user who did that other thing?".
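A sketch of what that kind of durable record might look like, assuming events are keyed by some identity key that outlives the account itself (hypothetical names throughout):

```scala
// Hypothetical sketch: an append-only moderation log that survives
// delete-account-and-try-again, because it is not keyed to the live account.
import java.time.Instant

case class ModEvent(durableKey: String, what: String, when: Instant)

class ModHistory {
  private var events = Vector.empty[ModEvent] // append-only, never pruned

  def record(durableKey: String, what: String): Unit =
    events :+= ModEvent(durableKey, what, Instant.now())

  // Still answerable after the account is gone: "same user as before?"
  def historyFor(durableKey: String): Vector[ModEvent] =
    events.filter(_.durableKey == durableKey)
}
```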

I don't think you can auto-detect abuse. Spam, yes, to an extent (SE's spam-prevention means most of it doesn't come through at all, and the rest gets swatted quickly), and even *they* don't try to detect other kinds of abuse beyond filters involving some specific words. (Which can be circumvented, of course, though when people do, that makes it that much clearer that they knew what they were doing was wrong and did it anyway.)

BTW, there's an SE community about communities (sounds meta, I know): https://communitybuilding.stackexchange.com/. It's mostly populated by SE folks, but also people with experience moderating Reddit, games, forums, and other communities. (I just handwaved around games, I know, because I don't know very much about online gaming myself.) I think of the community as small but mighty. :-) As you know, SE is for Q&A, not discussion, but consider trying more-specific questions there as you have them.

(no subject)

Date: 2017-05-23 03:37 am (UTC)
cellio: (Default)
From: [personal profile] cellio
> [Each space is its own community?]
> Yeah, I think that's necessary.

I think that's a feature. Let the users of a particular corner of the web decide how it should be managed, what kind of content is welcome, and so on. At the extreme, you posited Querki as a blogging platform, and the owner of a blog certainly should be able to control that space.

> I don't rule out the possibility of sub-Spaces, but I haven't come across any use cases yet.

Committees and subcommittees. I sit on a board of trustees. Among its members is the smaller executive committee, whose deliberations are not necessarily available to the full board (e.g. personnel matters). The board has other committees where access isn't restricted but interests are, enough that the rest of the board might want us law wonks to go form a separate Space for our bylaws discussions. That sort of thing.

I'm not lobbying for this use case, mind -- just sharing one that came immediately to mind.

> There is a plan for *super*-Spaces down the line (in the form of "App Communities"), but those are qualitatively different: they're really voluntary aggregations of data amongst folks who share the same App. (Basically Querki's version of crowdsourcing.)

That makes sense. You want to allow bottom-up formation of shared spaces.

> Not yet. OAuth2 is planned for the nearish future -- it's a lousy protocol, but it's the big dog. I might support OpenID later, although honestly there isn't an awful lot of demand for it that I've seen -- I'm not sure that many people outside the LJ/DW world are even aware of it any more.

Perhaps I was accidentally over-precise. I don't know about OAuth2 in particular. What I meant was: can I create an account using my Google or Twitter or Facebook or Yahoo or (insert others here) credentials, like I can on DW or Stack Exchange or Medium? I called that OpenID because DW does, but if there's another technology that makes that sort of credential-reuse work, I didn't mean to exclude it.

> And I've had people come up with reasonable use cases for anonymity. So I'm inclined to allow it, although on a strictly opt-in basis.

Ok, fair enough. It should definitely be opt-in, and you might want to make some IP tracking and blocking available to moderators on spaces that allow anonymous commenting. Trolls gotta troll, and all that -- at least make 'em work a little.

> Hmm -- whitelist-per-thread is a new idea that hadn't occurred to me. It would be a fair amount of work to implement (it's qualitatively different from whitelisting across the Space), so I'm not going to do it upfront, but I'll keep that in mind as a possible enhancement.

Its utility depends a lot on how Spaces tend to get structured. (Also, where I said thread, possibly I meant Page. I'm sorry; there's a lot of basic Querki stuff I don't know, and you're having to explain things for probably the hundredth time.) I was thinking of things like: if your Space represents a set of (say) articles on some topic, you might want to allow the author of any particular article to engage freely in comments on that article, without giving him free rein over the entire Space.

Flagging: glad to help. :-) Yes, definitely private; the purpose is to get moderator attention, not create public drama.

Per-user moderation history: also glad to help. :-) This sort of record-keeping has been very helpful in dealing with problem users on Stack Exchange. Before certain things were automatically annotated for us, moderators had to resort to keeping notes in private (moderator-only) chat rooms, which only worked to the extent that the same or different moderators in the future remembered to search for hints -- big pain in the neck. Much easier when you can see all the relevant history in one place.

> And I'd be happy to give you more of an introduction to Querki, if you're ever interested -- I'd love to get your thoughts on it more broadly...

I am interested, and happy to kibbitz in my copious free time. :-) I wish we lived in the same city so we could just chat over our beverages of choice and shoot the breeze for a couple hours. Pennsic?


