How does a regulator know whether an algorithm complies with existing anti-discrimination law? This is an urgent question, as algorithmic decision-making tools play an increasingly significant role in people's lives, especially at critical junctures such as gaining admission to college, getting a job, or obtaining a mortgage, housing, or insurance. In each of these regulated situations, moreover, the legal meaning of unlawful discrimination is different and context-dependent. Regulators lack consensus on how to audit algorithms for discrimination. Recent legal precedent offers some clarity for review and forms the basis of the framework for algorithmic auditing outlined in this article. The article provides a review of that precedent; a novel framework that explicitly decouples technical data science questions from legal and regulatory questions; and an exploration of the framework's relationship to disparate impact. The framework promotes algorithmic accountability and transparency by focusing on explainability to regulators and the public. Through case studies in student lending and insurance, we demonstrate how audits can be operationalized to enforce fairness standards. Our goal is an adaptable, robust framework to guide anti-discrimination auditing of algorithms until legislative interventions emerge. As an ancillary benefit, the framework is easily explainable and readily implementable, with immediate impact for many public and private stakeholders.