Systematic auditing is essential to debiasing machine learning in biology.

Eid FE(#)(1)(2), Elmarakeby HA(#)(3)(4)(5), Chan YA(#)(3), Fornelos N(3), ElHefnawi M(6), Van Allen EM(3)(5), Heath LS(7), Lage K(8)(9)(10).
Author information:
(1)Broad Institute of MIT and Harvard, Cambridge, MA, USA. [Email]
(2)Department of Systems and Computer Engineering, Al-Azhar University, Cairo, Egypt. [Email]
(3)Broad Institute of MIT and Harvard, Cambridge, MA, USA.
(4)Department of Systems and Computer Engineering, Al-Azhar University, Cairo, Egypt.
(5)Dana-Farber Cancer Institute, Boston, MA, USA.
(6)Informatics and Systems Department, Division of Engineering Research, National Research Centre, Giza, Egypt.
(7)Virginia Polytechnic Institute and State University, Blacksburg, VA, USA.
(8)Broad Institute of MIT and Harvard, Cambridge, MA, USA. [Email]
(9)Department of Surgery, Massachusetts General Hospital, Boston, MA, USA. [Email]
(10)Harvard Medical School, Boston, MA, USA. [Email]
(#)Contributed equally

Abstract

Biases in data used to train machine learning (ML) models can inflate their prediction performance and confound our understanding of how and what they learn. Although biases are common in biological data, systematically auditing ML models to identify and eliminate them is not yet standard practice when applying ML in the life sciences. Here we devise a systematic, principled, and general approach to auditing ML models in the life sciences. We apply this auditing framework to three ML applications of therapeutic interest and identify unrecognized biases that hinder the ML process and substantially reduce model performance on new datasets. Ultimately, we show that ML models tend to learn primarily from data biases when the data contain too little genuine signal to learn from. We provide detailed protocols, guidelines, and code examples to enable tailoring of the auditing framework to other biomedical applications.
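The authors' released protocols and code are not reproduced here. As a minimal, hypothetical sketch of the kind of audit the abstract describes, the snippet below uses synthetic data and scikit-learn to contrast a naive random cross-validation score with a bias-aware grouped score and a permuted-label control; the batch confounder, variable names, and model choice are illustrative assumptions, not the paper's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, GroupKFold, cross_val_score

    # Synthetic data: the label is confounded with a nuisance "batch"
    # variable, and the batch leaks into one feature -- a common bias
    # pattern in biological datasets.
    rng = np.random.default_rng(0)
    n_samples, n_batches = 600, 6
    groups = rng.integers(0, n_batches, size=n_samples)  # batch per sample
    y = (groups % 2).astype(int)                         # label tracks batch
    X = rng.normal(size=(n_samples, 20))
    X[:, 0] += groups                                    # batch leaks into feature 0

    model = RandomForestClassifier(n_estimators=200, random_state=0)

    # Naive estimate: random folds mix batches, so the model can exploit
    # the leak and the score is inflated.
    naive = cross_val_score(model, X, y,
                            cv=KFold(5, shuffle=True, random_state=0))

    # Bias-aware estimate: holding out whole batches (a proxy for testing
    # on "new datasets") removes the shortcut and the score collapses.
    grouped = cross_val_score(model, X, y, groups=groups,
                              cv=GroupKFold(n_batches))

    # Control: permuted labels estimate the chance-level baseline (~0.5).
    permuted = cross_val_score(model, X, rng.permutation(y),
                               cv=KFold(5, shuffle=True, random_state=0))

    print(f"random-split CV accuracy : {naive.mean():.2f}")
    print(f"batch-held-out accuracy  : {grouped.mean():.2f}")
    print(f"permuted-label baseline  : {permuted.mean():.2f}")

A large gap between the random-split and batch-held-out scores flags performance driven by a dataset bias rather than by biology; in a real audit the grouping variable would be a suspected confounder such as study, collection site, or protein family.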