Race-detecting AI


Doctors cannot tell a person’s race from medical images such as X-rays and CT scans. But a team including MIT researchers was able to train a deep learning model to identify patients as white, Black, or Asian (as self-reported) just by analyzing those images, and they still can’t figure out how the computer does it.

After examining variables such as differences in anatomy, bone density, and image resolution, the research team “couldn’t come close to identifying a good proxy for this task,” says paper co-author Marzyeh Ghassemi, PhD ’17, an assistant professor in EECS and the Institute for Medical Engineering and Science (IMES).

That’s troubling, the researchers say, because doctors use algorithms to help with decisions such as whether patients are candidates for chemotherapy or an intensive care unit. The findings raise the possibility that algorithms are “looking at your race, ethnicity, sex, whether you’re incarcerated or not, even if all that information is hidden,” says co-author Leo Anthony Celi, SM ’09, a principal research scientist at IMES and associate professor at Harvard Medical School.

Celi believes that doctors and computer scientists should turn to social scientists for information. “We need another group of experts to evaluate and provide input and feedback on how we design, develop, implement and evaluate these algorithms,” he says. “We also need to ask data scientists, before any exploration of the data: Are there disparities? Which patient groups are marginalized? What are the drivers of those disparities?”

Algorithms often have access to information that humans don’t, and that means experts must work to understand unintended consequences. Otherwise, there is no way to prevent algorithms from perpetuating existing biases in medical care.

