Due to travel disruptions, this panel has been canceled. However, Jasmine will be presenting a lightning talk about her work on Wednesday.
Scientific research and engineering to support content moderation are crucial. Yet supporting moderation is challenging, in large part because nuanced differentiation between types of moderation tools runs up against the reductive quality of policymaking for the broader whole. To understand the challenges, advantages, and failures in the practice of moderation, we need the contextual specificities of the platforms and the communities they serve. One-size-fits-all technological and policy solutions to content moderation problems won’t cut it. In this listening and learning panel, we’ll hear from community content moderators and scholars aiming for more nuanced discussions of the data behind the policy discourse. Through their perspectives, we’ll frame opportunities for a more complete picture of the different kinds of moderation and trust and safety experiences. And we’ll try to place AI-driven content moderation, the expertise of moderators, and scholarship in perspective, so as to more effectively merge policy and practice.