Currently, reputation is the primary line of defense against comment abuse. You can't post comments until you earn at least 50 rep - but after that, you have free rein; there's very little practical oversight of comments. As a result, if we want to explore opening up commenting to a broader audience, or experiment with making voting (and thus gaining reputation) more accessible, we need to make sure that reputation isn't the sole line of defense. If those changes are made without any changes to comment moderation, we open up huge vectors for abuse.

With that in mind, here's a proposal for a system to put us in a better position to detect, handle, and prevent comment abuse. This proposal is based on my experience as a moderator and experience with community-developed tools for moderating comments and other content. I'm including all of the different aspects in this one post, since they're rather interconnected, even though in theory parts could be implemented without the others.

Split "No longer needed" back into "chatty" and "obsolete"

This is relatively minor, but it'll be relevant later on.
A while back, we had separate flags for "chatty" and "obsolete" comments. At some point, they were rolled into a single flag, "No longer needed". Splitting this back into two flags would make it clearer which types of comments are supposed to be flagged, and give handling moderators better context as to why a comment was flagged. The distinction also allows for automated action - more on that later.

I wouldn't necessarily be opposed to merging "rude or abusive" and "unfriendly or unkind" into a single "violates the Code of Conduct" flag, but that's less relevant for this discussion.

A new comments dashboard

There's currently no built-in way for anyone to view new comments posted on the site. That needs to change.

I'm envisioning a dashboard that presents all new comments posted on the site - essentially a onebox of each comment with a link to the parent post, much like the "new answers to old questions" tool. This would let people trawl through the list of new comments and spot abuse, making oversight possible at least in theory. For instance, it would surface abusive (or just chatty) comments on old posts that wouldn't otherwise be noticed, and help in spotting a single user making a number of problematic comments in a short amount of time.

Within this dashboard, comments that might need attention should be marked in some way. Comments on posts that have not been active in, say, 30 days, should be marked with a little notice along the lines of "new comment on inactive post". Short comments that contain certain keywords such as "thanks" should be marked as "possibly chatty".
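
To make the marking concrete, here's a minimal sketch of the kind of detection I have in mind - the keyword list, the 30-day window, and the length cutoff are all placeholders, not a real spec:

```python
from datetime import datetime, timedelta

# Placeholder values - the exact thresholds and keyword list
# would need tuning per site.
CHATTY_KEYWORDS = ("thanks", "thank you", "awesome", "this worked")
INACTIVE_AFTER = timedelta(days=30)
SHORT_COMMENT_CHARS = 80

def dashboard_notices(comment_text: str,
                      comment_posted: datetime,
                      post_last_activity: datetime) -> list:
    """Return the notice labels a dashboard entry should carry."""
    notices = []
    if comment_posted - post_last_activity > INACTIVE_AFTER:
        notices.append("new comment on inactive post")
    text = comment_text.lower()
    if len(text) <= SHORT_COMMENT_CHARS and any(k in text for k in CHATTY_KEYWORDS):
        notices.append("possibly chatty")
    return notices
```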

I'm not sure access to this dashboard needs to be gated at all. Getting more eyes on new comments, and more people participating in curation, seems like a good thing. There's an argument that people shouldn't get involved in comment moderation without being familiar with the specific site's culture and policies, but even so, I think 200 rep is the highest reputation gate I'd be comfortable with; it shouldn't be too hard to get involved.

However, once you do have access, a plain list isn't the most useful view...

Filters for the dashboard

You should also be able to filter explicitly for comments that are detected as a certain type, such as new comments on inactive posts, possible "thanks" comments, or comments containing external links (particularly important if commenting is ever allowed for 1-rep users). This would allow curators to easily go through comments that are possibly in need of flags.
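
As a rough sketch of what such a filter might look like - the type names and the comment record shape here are hypothetical, building on the notice labels above:

```python
import re

LINK_PATTERN = re.compile(r"https?://\S+")

def matches_filter(comment: dict, wanted: str) -> bool:
    """Hypothetical dashboard filter predicate.

    `wanted` is one of the detected types, e.g. "new comment on inactive post",
    "possibly chatty", or "contains external link"; comment["notices"] holds
    the labels computed when the comment entered the dashboard.
    """
    if wanted == "contains external link":
        return bool(LINK_PATTERN.search(comment["text"]))
    return wanted in comment.get("notices", [])

# Usage: [c for c in new_comments if matches_filter(c, "possibly chatty")]
```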

Moderators should also have an option to include deleted comments in the full list, and to view recently deleted comments. There are cases where deleted comments are symptoms of larger issues; better spotting them would help get attention to the root issue sooner.

Automated action based on flags

Once curators have identified and flagged comments that should be deleted, those flags should result in action. Currently, it takes at least three non-CoC-violation flags to delete a comment without moderator intervention; the higher a comment's score, the more flags it takes. There should be a practical cap on the number of flags it takes to remove a comment, even a highly-scored obsolete one, so that the community can handle these cases. (Having moderators able to view recently deleted comments also helps prevent abuse in this arena.)
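
Concretely, with the existing (3 + score / 3) formula, the proposal is just a cap on top. A sketch, with a made-up cap value:

```python
FLAG_CAP = 6  # placeholder - the right cap is up for discussion

def flags_needed(comment_score: int) -> int:
    """Flags required to delete a comment without moderator intervention:
    the existing (3 + score / 3) formula (integer division), capped so that
    even highly-scored obsolete comments stay within the community's reach."""
    return min(3 + comment_score // 3, FLAG_CAP)
```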

Furthermore, enough helpful "chatty" flags on comments by a single user within a certain time period should trigger an automated message to the user whose comments are being flagged, saying something to the effect of "Our system has detected that many of your comments have been removed for being chatty. We wanted to remind you that comments are to be used for the purpose of improving the post; you're welcome to check out chat [and Discussions, if it still exists] for some less restrained discussion". If the chatty comments continue after that point, an automated mod flag is probably a good idea. (See, this is why we need the distinction between "chatty" and "obsolete".)
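
A sketch of that escalation logic, assuming a hypothetical seven-day window and made-up thresholds:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)  # placeholder time period
WARN_AFTER = 5              # helpful chatty flags before the automated message
MOD_FLAG_AFTER = 10         # helpful chatty flags before an automated mod flag

def chatty_escalation(flag_times: list, now: datetime,
                      already_warned: bool) -> str:
    """Return "warn", "mod-flag", or "none" for one user's recent chatty flags.

    `flag_times` holds the datetimes of helpful chatty flags on the user's
    comments.
    """
    recent = sum(1 for t in flag_times if now - t <= WINDOW)
    if already_warned and recent >= MOD_FLAG_AFTER:
        return "mod-flag"
    if not already_warned and recent >= WARN_AFTER:
        return "warn"
    return "none"
```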

It should also be possible for comments to be moved to chat automatically, via chatty flags, once a conversation on the post has already been moved to a chat room. Comments currently can't be moved to chat more than once; that should be made possible, either through moderator action ("move comments to existing chat room") or through comments accumulating enough flags after a conversation has already been moved.
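
Roughly, the routing I'm imagining - the threshold and record shapes here are hypothetical:

```python
MOVE_THRESHOLD = 3  # placeholder - chatty flags before an auto-move

def route_chatty_comment(comment: dict, post: dict, chatty_flags: int) -> str:
    """Once a post has a linked chat room, further chatty-flagged comments
    can be moved there instead of just deleted."""
    if post.get("chat_room_id") is not None and chatty_flags >= MOVE_THRESHOLD:
        return f"move comment {comment['id']} to chat room {post['chat_room_id']}"
    return "leave in place"
```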

Gradually open up commenting to users just earning the privilege

Instead of allowing brand-new users to post an unlimited number of comments immediately, there should be rate limits that loosen gradually for users who have just earned the privilege. Users should start with a limited pool of comments - I'm open to ideas about the exact numbers - that then expands or stays the same based on the reception of their comments: users who are receiving upvotes on their comments should get a larger pool, while users whose comments are receiving chatty or CoC-violation flags shouldn't have their pool increased. It would make sense to remove the limitation entirely once you hit a certain rep point or other milestone (Pundit badge?), at which point commenting would behave the way it does for everyone over 50 rep today.
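
As a sketch of the pool mechanics - all numbers here are placeholders, since as I said, I'm open to ideas on the exact values:

```python
STARTING_POOL = 10       # placeholder initial comment allowance
UNLIMITED_AT_REP = 1000  # placeholder milestone; could be a badge instead

def adjusted_pool(pool: int, comment_upvotes: int, helpful_flags: int) -> int:
    """Expand the pool for well-received commenters; hold it flat for users
    whose comments draw chatty or CoC-violation flags."""
    if helpful_flags > 0:
        return pool                  # no growth while flags are coming in
    return pool + comment_upvotes    # upvoted comments earn a bigger allowance
```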


All together, I believe this system would put us in a much better position to handle comment abuse, and is a step toward moving away from reputation being the sole defender of the site. Once we move away from reputation being the primary method to prevent abuse and misuse of the system, we can be a lot more flexible in experimenting with concepts such as voting and awarding privileges.

I'm interested in hearing thoughts on this proposal - issues, things I haven't considered, etc. I'm tagging this accordingly to encourage that discourse. How can we turn this into a robust comment moderation ecosystem?

  • It's already (3 + Score / 3) flags to delete a comment without moderator intervention, so 3 flags for the vast majority of comments, and only more flags if multiple users feel the comment has value (i.e., they upvoted the comment). It's 1 flag to delete a comment if the comment text matches the various regular expressions indicating that it's a more problematic comment. These are the limits that have been in place for years. See: "Who can delete comments?" in "How do comments work?". Commented Feb 27 at 19:28
  • I'm aware of the 1-flag deletion, @Makyen; that system involving auto-deletion and auto-mod-flags for multiple R/A comments works, and so this is more directed at other aspects of the system. I did misremember the exact number of flags needed to delete a comment, but that's been addressed and isn't the primary point here - it does take more flags to remove upvoted comments, which, let's be honest, chatty and obsolete comments often are. Commented Feb 27 at 19:31
  • "There should be a practical cap on the amount of flags it takes to remove a comment, even highly-scored obsolete comments, to allow the community to handle these cases." I see no reason for this. Moderators are expected to be human exception handlers. Deleting a highly scored comment is reasonably exceptional. Thus, it's reasonable to have a moderator involved. An argument for such a change would be evidence that deletion of such highly scored comments was a significant load on moderators. Is there evidence of such a load on a systemic level (i.e. enough to justify a change)? Commented Feb 27 at 19:33
  • What do you think about the unfriendly comments detection robot? Is it not used or updated anymore? Could one maybe use it to automatically remove comments or feed a review queue? Maybe suspensions should simply be handed out more often for abusing comments? Commented Feb 27 at 19:47
  • charcoal.se/blaze is a way to view new comments Commented Feb 27 at 19:50
  • @NoDataDumpNoContribution - It was never rolled out to the network and I don't believe it's in use anymore Commented Feb 27 at 19:51
  • @NoDataDumpNoContribution It's called HeatDetector and you can find it in SOBotics Commented Feb 27 at 20:19
  • Just wondering: I have developed an AI bot to detect NLN comments (it works pretty well, in a private chat room per staff request to avoid interfering with the comment experiment). Perhaps this bot could be integrated into your "filtered dashboard". Commented Feb 27 at 20:21
  • It would be nice if the triggers were adjustable per community. Some communities, e.g. tex.se, appreciate thank-you comments. Commented Feb 28 at 10:28
  • Can I also suggest that it should not be possible for someone to comment on literally every post at a site? It's particularly problematic at small sites where someone comments negatively on every single question, so no one wants to ask anything. Commented Mar 2 at 3:48

1 Answer

I like the ideas in the question post here.

I should probably write this up as a separate feature request, but I've mulled over some sort of system that grants access (or higher access) to privileges based on a user's ability to recognize proper and improper usage in a simulated environment - essentially, passing a review queue that's all audits, such as maintaining some percentage-accuracy streak.

One way to test whether a user understands which comments are or aren't appropriate is to put them through a queue of "audits": show them comments and test whether they can correctly select "OK" or an appropriate flag reason. When they can demonstrate that they get it, let them comment, or comment more. If the system notices that they seem to have forgotten, put them through the test again.

It's not a fully baked idea, but one I'm interested in. It's kind of a double win: if it works as I imagine, you get someone who understands how to use a feature and has been trained to take appropriate action if they see misuse.
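
A minimal sketch of the streak check I'm imagining - the streak length is a placeholder; I haven't thought hard about the right number:

```python
def passes_training(responses: list, required_streak: int = 10) -> bool:
    """`responses` is True/False for each audit item the user handled,
    in order; the user passes once they hit an unbroken run of correct
    answers of the required length."""
    streak = 0
    for correct in responses:
        streak = streak + 1 if correct else 0
        if streak >= required_streak:
            return True
    return False
```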

  • Audits are easy to get around though Commented Feb 28 at 10:53
  • @Starship I didn't suggest that such a training queue should be based on the existing queues, with their flaws. List some of those workarounds for me and let's see if I can design them out. I feel some confidence. (Chat might be a better place for discussing this.) Commented May 28 at 2:57
  • Let's start with the obvious flaw: you can easily figure out if a post is an audit and then click it to see what action the audit wants you to take Commented May 28 at 12:07
  • @Starship I'm not necessarily of the position that a training queue like I'm proposing would need to provide items where the correct response is "looks okay"; maybe training on what isn't okay is enough. And maybe in such a training queue, it should be relatively easy to know what the correct answer is (don't give tricky cases). If it used already-deleted comments based on flags (and there's plenty where that came from), there'd be nothing "live" to click through to see whether it's deleted or not (assuming that's the particular workaround you're referring to). Commented May 28 at 16:15
  • Why? Knowing what isn't problematic is also important. Commented May 28 at 21:14
