Mayo Clinic launches digital referee for spotting potential bias in healthcare AI programs

Mayo Clinic has launched a new product that helps healthcare tech developers sniff out potential biases in their artificial intelligence models.

Dubbed Platform_Validate, the program is designed to put an algorithm's credibility to the test, acting as a third party that confirms the AI's efficacy in meeting its intended clinical purpose.

By generating standardized reports on specificity and sensitivity, Mayo Clinic said it aims to address some of the skepticism about implementing AI programs in healthcare and diagnostics, especially where programs may inadvertently reinforce inequities in the current system by reproducing disparities present in the potentially poor-quality data used to construct and train the algorithm in the first place.
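
For context, sensitivity and specificity are the standard measures of how often a diagnostic model catches true cases and correctly clears unaffected ones. The short Python sketch below shows the textbook calculation from binary confusion-matrix counts; it is purely illustrative, since Mayo has not published Platform_Validate's internals, and the function name and example counts are hypothetical.

```python
# Illustrative only: how sensitivity and specificity are computed from
# binary confusion-matrix counts. Not Mayo's actual implementation.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for binary classification counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate: share of actual positives caught
    specificity = tn / (tn + fp)  # true-negative rate: share of actual negatives cleared
    return sensitivity, specificity

# Hypothetical example: 90 cases caught, 10 missed, 80 correctly cleared, 20 false alarms
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.90, specificity=0.80
```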

An analysis published earlier this year in the journal Health Affairs explored several areas where bias can persist in machine learning tools: for example, predictive tools used by health insurers to forecast medication adherence, whose skewed predictions have resulted in lower prescription rates among racial and ethnic minorities. Biases within algorithms can also lead to incorrect diagnoses and other serious patient risks.

Mayo's Platform_Validate measures model bias across categories including age and ethnicity. The clinic said it hopes that, by providing information similar to the nutrition and ingredient labels found on food products, it can describe how an AI algorithm would perform in different scenarios, such as when faced with varying demographics across race, gender and socioeconomic status.
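
To make the label analogy concrete, one common validation technique is to compute the same performance metric separately for each demographic subgroup and flag large gaps. The minimal sketch below assumes toy records with hypothetical group, y_true and y_pred fields; it illustrates the general approach, not Mayo's actual methodology.

```python
# Illustrative sketch of a subgroup "performance label": computing
# sensitivity per demographic group so gaps become visible.
# An assumed approach, not Mayo's published method.
from collections import defaultdict

def subgroup_sensitivity(records: list[dict]) -> dict[str, float]:
    """records: dicts with hypothetical 'group', 'y_true' (0/1), 'y_pred' (0/1) keys."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for r in records:
        if r["y_true"] == 1:
            pos[r["group"]] += 1
            if r["y_pred"] == 1:
                tp[r["group"]] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Toy data: the model catches every case in one age band but misses half in another
data = [
    {"group": "18-40", "y_true": 1, "y_pred": 1},
    {"group": "18-40", "y_true": 1, "y_pred": 1},
    {"group": "65+",   "y_true": 1, "y_pred": 0},
    {"group": "65+",   "y_true": 1, "y_pred": 1},
]
print(subgroup_sensitivity(data))  # {'18-40': 1.0, '65+': 0.5} -> a gap worth flagging
```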

The new product joins Mayo’s previous Platform_Discover offering, which provides AI engineers with curated and deidentified health data for model development, training and testing. Platform_Discover includes real-world records from more than 10 million patients spanning geographies and therapeutic areas, as well as over 111 million electrocardiograms, 1.2 billion lab test results and 9 billion pathology reports.