Generative AI has taken the tech world by storm, with many observers prognosticating about how tools like ChatGPT and Midjourney tip the scales of content creation, making it easier and cheaper than ever to produce social media posts, journalistic articles, and realistic-looking images and videos. This shift has implications for trust and safety enforcement, as T&S teams face an onslaught of AI-generated content. Can existing machine learning systems and other enforcement mechanisms keep up? Could the same breakthroughs in AI yield improved detection systems for content moderation? In this panel, hear from experts on the new risks that generative AI introduces, as well as how both AI developers and social platforms can mitigate those risks. By discussing the new capabilities available both to would-be attackers and to the integrity workers aiming to stop them, we aim to provide a comprehensive understanding of the issues and solutions at hand.