How to ensure fairness in machine learning models for diagnosing illness
Oct 10, 2022


Machine learning can enhance the information in medical imaging, but biases stemming from disparities in training databases could reduce these models’ effectiveness.

Physicians and medical experts are starting to incorporate algorithms and machine learning in many parts of the health care system, including experimental models to analyze images from X-rays and brain scans.

The goal is to use computers to improve detection and diagnosis of patients’ ailments. Such models are trained to identify tumors, skin lesions and more, using databases full of reference scans or images.

But there are also potential biases within the data that could result in skewed diagnoses from these machine learning models.

Marketplace’s Kimberly Adams spoke to María Agustina Ricci, a biomedical engineer who is pursuing a Ph.D. at the Hospital Italiano de Buenos Aires in Argentina. She has studied how disparities between low-income and high-income countries can create or worsen these biases.

The following is an edited transcript of their conversation.

María Agustina Ricci: Databases developed in high-income countries tend to underrepresent dark-skinned individuals or patients. This is an issue that concerns us to a great extent because we are Latin American researchers. When models generated from public databases from First World countries are evaluated on our populations, they tend to underperform. There are structural barriers for low-income countries and for specific populations in accessing the health system. In some countries, there are profound economic inequalities, or even a lack of funding for research or unaffordable fees for publishing open-access articles or databases.

Kimberly Adams: What are the consequences of these disparities in who’s represented in these databases, feeding the algorithms that shape the future of medical technology?

Agustina Ricci: The impact is that these algorithms may perform worse. For example, a patient who is underdiagnosed may leave the hospital without a correct diagnosis, or there are false positives, [which] mean the algorithm is saying a subject is ill when they are healthy. Both kinds of errors may have a very important impact on that patient.
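Those two error types, missed diagnoses (false negatives) and healthy patients flagged as ill (false positives), are what fairness audits typically measure separately for each demographic group. As a hypothetical illustration, not taken from Ricci’s work, here is a minimal Python sketch of that kind of per-group comparison; the data, group labels, and function name are made up:

```python
# Minimal sketch (hypothetical): comparing false-negative and false-positive
# rates across demographic groups for a binary diagnostic classifier.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return per-group false-negative and false-positive rates."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        fn = np.sum((yt == 1) & (yp == 0))  # missed diagnoses
        fp = np.sum((yt == 0) & (yp == 1))  # healthy flagged as ill
        rates[g] = {
            "fnr": fn / max(np.sum(yt == 1), 1),
            "fpr": fp / max(np.sum(yt == 0), 1),
        }
    return rates

# Made-up example: a model that misses more true positives in group "B"
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_error_rates(y_true, y_pred, groups))
```

If the rates differ sharply between groups, the model is underperforming for one population in exactly the way Ricci describes.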

Adams: What can be done to mitigate or prevent these biases?

Agustina Ricci: Well, some of the options are to create diverse international databases. This is a huge challenge and requires a lot of ethical and legal considerations regarding data sharing, for example. We can also use machine learning methods to generate synthetic data to compensate for the lack of representation of minorities in a database. This is a continuously growing field, so I’m sure new methods will emerge in the near future. And in fact, our future work includes developing methods to mitigate those biases.
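Generating data to fill representation gaps usually involves generative models (for example, GANs or diffusion models producing synthetic scans), which is beyond a short sketch. A much simpler, related mitigation, not Ricci’s method, is to oversample underrepresented groups so the training set is balanced. Here is a minimal Python sketch of that idea, with made-up data and a hypothetical helper function:

```python
# Minimal sketch (hypothetical): naive random oversampling so an
# underrepresented group appears as often as the majority group during
# training. Real synthetic-data approaches generate new images rather
# than repeating existing ones.
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X, groups):
    """Duplicate samples from smaller groups until all group counts match."""
    labels, counts = np.unique(groups, return_counts=True)
    target = counts.max()
    idx = []
    for g, c in zip(labels, counts):
        members = np.where(groups == g)[0]
        extra = rng.choice(members, size=target - c, replace=True)
        idx.extend(members)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], groups[idx]

# Made-up example: 6 samples from group "A", only 2 from group "B"
X = np.arange(8).reshape(8, 1)          # stand-in for image features
groups = np.array(["A"] * 6 + ["B"] * 2)
X_bal, g_bal = oversample_minority(X, groups)
print(np.unique(g_bal, return_counts=True))  # both groups now count 6
```

Oversampling only recycles existing scans, which is why researchers like Ricci look to synthetic generation instead: it can add genuinely new examples for the underrepresented group.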

You can read Agustina Ricci’s research on this topic here.

The Food and Drug Administration has, as of Wednesday, reviewed and given varying degrees of authorization to over 170 medical devices that use algorithms or machine learning, including a few that focus on imaging.

The list includes a kidney test, brain imaging software, a pulse monitor that can detect an irregular heart rhythm and software that examines images of the heart to help doctors make more informed diagnoses.


The team

Daniel Shin, Producer
Jesús Alvarado, Associate Producer