The bulk of Trust & Safety work involves content moderation: scanning images, videos or text for specific policy violations. Leveraging meta-data and other behavioural signals, however, can expand both content and actor detection. With network analysis techniques and other behaviour-based metrics, platforms can detect bad actors, surface more violating content, and address recidivism and users with multiple accounts more efficiently.

This talk will cover behavioural detection approaches and use cases through four case studies:

1. Trading of illegal content in group messaging
2. Using viewing behaviour to find bad actors, users at risk and more violating content
3. Leveraging network analysis to detect fake or fraudulent profiles
4. Finding repeat offenders through network analysis and meta-data analysis

Each case study will be summarised with a framework for applying these types of analysis and detection to broader use cases, the policy changes required, and how to operationalise them across the T&S workflow. The talk will also address the pros and cons of these approaches with respect to over-enforcement, and how to trade off precision against recall in these areas.
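To give a flavour of the third case study, the sketch below shows one common form of network analysis: linking accounts that share registration meta-data and flagging unusually large clusters for review. It is an illustrative example only, not the approach presented in the talk; the library (networkx), the field names (device_id, signup_ip) and the cluster-size threshold are all assumptions.

# Illustrative sketch: cluster accounts that share registration meta-data
# (hypothetical fields device_id / signup_ip) and surface large clusters
# for human review rather than automatic enforcement.
from collections import defaultdict
from itertools import combinations
import networkx as nx

# Hypothetical account records; in practice these would come from a
# signup or profile meta-data store.
accounts = [
    {"id": "a1", "device_id": "d-100", "signup_ip": "10.0.0.1"},
    {"id": "a2", "device_id": "d-100", "signup_ip": "10.0.0.2"},
    {"id": "a3", "device_id": "d-200", "signup_ip": "10.0.0.1"},
    {"id": "a4", "device_id": "d-300", "signup_ip": "10.0.0.9"},
]

def build_shared_metadata_graph(accounts, keys=("device_id", "signup_ip")):
    # Link two accounts whenever they share a value for any meta-data key.
    graph = nx.Graph()
    graph.add_nodes_from(acc["id"] for acc in accounts)
    by_value = defaultdict(list)
    for acc in accounts:
        for key in keys:
            by_value[(key, acc[key])].append(acc["id"])
    for (key, _value), ids in by_value.items():
        for a, b in combinations(ids, 2):
            graph.add_edge(a, b, reason=key)
    return graph

def flag_suspicious_clusters(graph, min_size=3):
    # Return connected components large enough to warrant manual review.
    return [c for c in nx.connected_components(graph) if len(c) >= min_size]

graph = build_shared_metadata_graph(accounts)
for cluster in flag_suspicious_clusters(graph):
    print("review queue:", sorted(cluster))  # route to reviewers, do not auto-ban

Routing flagged clusters to reviewers rather than auto-enforcing is one way to favour precision over recall and limit over-enforcement, which the talk discusses in more depth.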