Artificial intelligence (AI) tools are increasingly being integrated into decision-making processes in high-risk settings, including employment, credit, health care, housing, and law enforcement. Given the harms that poorly designed systems can cause, including matters of life and death, there is a growing sense that policies for using AI responsibly must include, at a minimum, assurances about the technical accuracy and reliability of the model design.

Because AI auditing is still in its early stages, many questions remain about how best to conduct audits. While many people are optimistic that valid and effective standards and procedures will emerge, some civil rights advocates are skeptical of both the concept and the practical use of AI audits. These critics are reasonably concerned about audit-washing: bad actors exploiting loopholes and ambiguities in audit requirements to demonstrate compliance without actually providing meaningful reviews.

This chapter aims to explain why AI audits are often regarded as essential tools within an overall responsible governance system and how they are evolving toward accepted standards and best practices. We focus most of our analysis on these explanations, including recommendations for conducting high-quality AI audits. Nevertheless, we also articulate the core ideas of the skeptical civil rights position, an intellectually and politically sound view that the AI community should take seriously. To be well informed about AI audits is to comprehend their positive prospects and be prepared to address their most serious challenges.