Appeals of moderator actions that cannot be validated need to be shown to the moderator team

When taking action against a user, we need to have strong evidence that we are correct. That evidence should be, whenever possible, documented in a form that allows a second person to double-check that the action was correct.

When reviewing appeals, the standard should not be whether a moderator action can be proven wrong. We hold ourselves to a higher standard than that. If the reviewer can't find sufficient evidence to demonstrate that the action was correct, then we should be contacted to supply any additional details and the results of our original investigation. If the reviewer then still feels the action hasn't been demonstrated to be correct, then the action should be reversed.

I can find only three instances in which the Stack Overflow moderator team was contacted by Community Managers about appeals related to ChatGPT/AI suspensions. Of those:

  • One poster admitted that they had used ChatGPT, but wanted some answers undeleted that they had written themselves and posted among the GPT answers. (To be clear, those posts were recognized as non-GPT but still deleted for another reason. The Community Manager had been informed in detail about that, and did not object.)
  • One set of posts was affirmatively confirmed to be ChatGPT to the satisfaction of the CM.
  • One set of posts was not specifically confirmed to be AI-generated, but a CM agreed that it was unlikely that this user was writing their answers specifically for each question they answered. That suggests that there were, at the very least, quality issues (the handling moderator noted that there was a lot of dupe answering).

However, your post strongly implies that there are many appeals that we have not seen. More specific numbers were discussed internally, leading us to believe that we have not seen the vast majority of appeals that could not be validated. We need to know if we're making mistakes so that we can figure out how they happened and prevent them from happening again. And if we're not making mistakes, then perhaps we need to document our work better. Either way, we'll have more information on possible root causes for the problem you're trying to solve.

Please share with the moderator team these and any future appeals that cannot be validated (regardless of the reason for the action).
