ML4H Auditing: From Paper to Practice

Author:
Oala, Luis; Fehr, Jana; Gilli, Luca; Balachandran, Pradeep; Leite, Alixandro Werneck; Calderon-Ramirez, Saul; Li, Danny Xie; Nobis, Gabriel; Alvarado, Erick Alejandro Muñoz; Jaramillo-Gutierrez, Giovanna; Matek, Christian; Shroff, Arun; Kherif, Ferath; Sanguinetti, Bruno; Wiegand, Thomas
Year:
2020

Abstract:
Healthcare systems are currently adapting to digital technologies, producing large quantities of novel data. Based on these data, machine-learning algorithms have been developed to support practitioners in labor-intensive workflows such as the diagnosis, prognosis, triage, or treatment of disease. However, their translation into medical practice is often hampered by a lack of careful evaluation across different settings. Efforts have started worldwide to establish guidelines for evaluating machine learning for health (ML4H) tools, highlighting the necessity to evaluate models for bias, interpretability, robustness, and possible failure modes. Yet testing and adopting these guidelines in practice remains an open challenge. In this work, we target the paper-to-practice gap by applying an ML4H audit framework proposed by the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) to three use cases: diagnostic prediction of diabetic retinopathy, diagnostic prediction of Alzheimer’s disease, and cytomorphologic classification for leukemia diagnostics. The assessment comprises dimensions such as bias, interpretability, and robustness. Our results highlight the importance of fine-grained and case-adapted quality assessment, provide support for incorporating the proposed quality assessment considerations of ML4H during the entire development life cycle, and suggest improvements for future ML4H reference evaluation frameworks.
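
The abstract names robustness as one audit dimension without detailing how such a check is carried out. As a purely illustrative sketch, and not the FG-AI4H framework or the paper's actual protocol, the snippet below shows one common way a robustness probe can be framed: comparing a classifier's accuracy on clean versus noise-perturbed inputs. The names `predict`, `images`, `labels`, and the noise levels in `sigmas` are all hypothetical placeholders.

```python
import numpy as np

def robustness_check(predict, images, labels, sigmas=(0.0, 0.05, 0.1)):
    """Illustrative robustness probe: accuracy under additive Gaussian noise.

    predict : callable mapping a batch of images to predicted class labels
              (hypothetical; stands in for any trained ML4H classifier).
    images  : float array with values in [0, 1], shape (n, h, w, c).
    labels  : int array of ground-truth classes, shape (n,).
    sigmas  : noise standard deviations to sweep over.
    """
    rng = np.random.default_rng(0)  # fixed seed so the audit run is reproducible
    results = {}
    for sigma in sigmas:
        # Perturb the inputs and keep pixel values in the valid range.
        noisy = np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)
        results[sigma] = float(np.mean(predict(noisy) == labels))
    return results  # e.g. {0.0: 0.91, 0.05: 0.88, 0.1: 0.79}
```

In an audit of this style, a sharp accuracy drop between adjacent noise levels would flag a potential failure mode, the kind of finding that the fine-grained, case-adapted assessment advocated above is meant to surface.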