The Trust & Safety profession has grown largely separately from the ML communities working on Responsible AI, large language models (LLMs), recommendation, and search. With the race to adopt and release LLMs, and the fierce debate over the societal impacts of generative AI and recommender systems, these communities must rapidly get to know each other’s domains. This siloing leads to foundational misunderstandings and misalignment within teams, and industry standards are lacking. AI teams investigating datasets may not be aware of existing T&S processes for handling illegal content, while ML engineering teams may have little experience with policy or governmental processes. Concerns can be lost in translation, resulting in the inclusion of problematic content in foundation models and bias in AI decisions. This panel will highlight areas of complementary knowledge, identify where communication breakdowns are likely to occur, and bring together perspectives from content policy, Responsible AI, and the use of LLMs in Trust & Safety contexts.