
Witnessing Removal Explanations Influences Observers' Future Activity

Shagun Jhaver, Himanshu Rathi, and Koustuv Saha (2024), “Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations,” In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA.


Abstract

Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations transcends those who are moderated to the bystanders who witness such explanations. We conduct a quasi-experimental study on two popular Reddit communities (r/askreddit and r/science) by collecting their data spanning 13 months—a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels as compared to their matched control set of users. In line with previous applications of Deterrence Theory on digital platforms, our findings highlight that understanding the rationales behind sanctions on another user significantly shapes observers’ behaviors. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more efforts in post-removal explanations can help build thriving online communities.
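
The core comparison described in the abstract—bystanders versus a matched control set of users, before and after witnessing a removal explanation—can be illustrated with a minimal difference-in-differences style sketch. This is only an illustration of the study design, not the authors' actual analysis pipeline; the DataFrame layout, column names, and numbers are hypothetical.

# Illustrative sketch only -- not the paper's actual causal-inference pipeline.
# Assumes a hypothetical DataFrame `activity` with one row per user and period:
#   user_id, group ("bystander" or "control"), period ("pre" or "post"), posts
import pandas as pd

def did_estimate(activity: pd.DataFrame) -> float:
    """Difference-in-differences estimate of the change in posting activity
    for bystanders relative to their matched controls."""
    means = activity.groupby(["group", "period"])["posts"].mean()
    bystander_change = means["bystander", "post"] - means["bystander", "pre"]
    control_change = means["control", "post"] - means["control", "pre"]
    return bystander_change - control_change

# Toy example (entirely hypothetical numbers):
activity = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":   ["bystander"] * 4 + ["control"] * 4,
    "period":  ["pre", "post"] * 4,
    "posts":   [5, 9, 3, 6, 4, 5, 6, 6],
})
print(did_estimate(activity))  # positive value => bystanders increased activity more than controls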

BibTeX citation


@inproceedings{jhaver2024bystanders,
    author = {Jhaver, Shagun and Rathi, Himanshu and Saha, Koustuv},
    title = {Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations},
    year = {2024},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    booktitle = {Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
    numpages = {15},
    location = {Honolulu, HI, USA},
    series = {CHI '24}
}