This talk is part of a two-presentation session running from 1:30 PM to 2:20 PM, with the presentations delivered back-to-back and Q&A following each one.
Misinformation is a threat to the health and well-being of all users. Yet current approaches to misinformation mitigation are hampered by ambiguity surrounding cognate terms such as disinformation, fake news, rumors, and propaganda. This ambiguity hinders the development of effective policies, valid and performant models, and reliable moderation processes. This presentation explores an alternative approach centered on narrative identification, its alignment with existing Trust & Safety (T&S) policies and processes, and how it relates to these cognate terms. It shares results from misinformation narrative detection models trained with distant supervision on large-scale user-generated content. Finally, the presentation covers how this narrative-centric approach to defining and detecting misinformation can help platforms craft policy, integrate with content moderation operations, and drive enforcement to improve information integrity.