Citation

Auditing Government AI: How to assess ethical vulnerability in machine learning

Authors:
Kennedy, Alayna A.; Coates, Daphne; Lindquist, Katelyn

Governments are increasingly using algorithmic systems to make important decisions that affect large populations, yet they struggle to thoroughly audit, monitor, regulate, and evaluate these systems. This leaves the systems vulnerable to ethical infractions such as biased outcomes, disparate impacts on different populations, and solutions that are unexplainable and unaccountable to citizens. This paper outlines a tool that quickly assesses a machine learning project's vulnerability to ethical infractions by translating data scientists' technical expertise into a high-level risk score. The score allows governments to quickly identify high-risk projects across their portfolios and, as a result, to appropriately allocate resources and schedule continuous monitoring, governance, and audit cycles to regulate and mitigate ethical concerns.
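To make the idea of translating practitioner expertise into a portfolio-level risk score concrete, the sketch below shows one minimal way such a scoring step could work: weighted yes/no questions answered by a project's data scientists are aggregated into a 0-100 score. This is not the authors' published instrument; the question wording and weights are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: float  # relative importance of this ethical risk factor (assumed)

# Hypothetical risk factors; a real assessment tool would define its own set.
QUESTIONS = [
    Question("Does the model make decisions about individuals?", 3.0),
    Question("Are protected attributes (or close proxies) present in the data?", 2.5),
    Question("Is the model a black box that cannot be explained to citizens?", 2.0),
    Question("Is there no human review before decisions take effect?", 2.0),
    Question("Has the model not been tested for disparate impact?", 1.5),
]

def risk_score(answers: list[bool]) -> float:
    """Translate yes/no answers into a 0-100 score (weighted share of flagged factors)."""
    total = sum(q.weight for q in QUESTIONS)
    flagged = sum(q.weight for q, a in zip(QUESTIONS, answers) if a)
    return round(100 * flagged / total, 1)

if __name__ == "__main__":
    # Example: a hypothetical benefits-eligibility model with several risk factors present.
    project_answers = [True, True, False, True, False]
    print(f"Risk score: {risk_score(project_answers)} / 100")
```

In a setup like this, projects whose scores exceed a chosen threshold would be flagged for deeper audit and more frequent monitoring, while lower-scoring projects could follow a lighter governance cycle.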