At Bitly, we've built a robust Abuse Detection system designed to identify and block abusive content shared through links on our platform. Hundreds of millions of Bitly links are created each month. Our Abuse Detection system evaluates each link to determine the likelihood that it falls into one of the familiar harmful content categories, including malware, phishing, and CSAM, among others. This talk will cover the architectural components of that system from an engineering perspective. How do we handle the volume? How do the pieces fit together? What happens between the shortening of a link and the presentation of a "Sorry, this link points to harmful content" page? We'll cover how we coalesce data from disparate sources (partner APIs, abuse reports, partner data streams, as well as our own detection algorithms) to form a central Abuse Detection database used to make final determinations about each individual piece of content. We'll cover the system architecture, microservices, algorithms, and datastores we use to run the system.
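To make the coalescing idea concrete, here is a minimal sketch of merging per-source signals into a single per-link verdict. Everything here is illustrative: the `Signal` shape, the category names, the max-score merge rule, and the threshold are assumptions for the example, not Bitly's actual data model or decision logic.

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical category labels for illustration only.
CATEGORIES = {"malware", "phishing", "csam", "spam"}

@dataclass(frozen=True)
class Signal:
    """One piece of evidence about a link from a single source."""
    source: str    # e.g. "partner_api", "abuse_report", "internal_model"
    category: str  # which harmful-content category it suggests
    score: float   # 0.0 (benign) .. 1.0 (certainly abusive)

def coalesce(signals: Iterable[Signal], threshold: float = 0.8) -> dict:
    """Merge signals from disparate sources into one verdict,
    keeping the strongest score seen for each category."""
    scores: dict[str, float] = {}
    for s in signals:
        if s.category in CATEGORIES:
            scores[s.category] = max(scores.get(s.category, 0.0), s.score)
    flagged = {c for c, v in scores.items() if v >= threshold}
    return {"scores": scores, "blocked": bool(flagged), "categories": sorted(flagged)}

# A link with a strong partner-API phishing signal and a weaker
# internal malware signal ends up blocked for phishing only.
verdict = coalesce([
    Signal("partner_api", "phishing", 0.95),
    Signal("abuse_report", "phishing", 0.60),
    Signal("internal_model", "malware", 0.30),
])
print(verdict["blocked"], verdict["categories"])
```

In a real deployment the merged verdict would be persisted to the central Abuse Detection database so that the redirect path can consult it cheaply at click time; how scores are actually weighted per source is a design decision the talk itself addresses.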