This MIT AI can accurately predict your race using X-rays. Scientists have no idea how it works.

Artificial intelligence has a problem with racism. Look no further than bots that go on racist rants, or facial recognition technology that refuses to see black people, or discriminatory HR bots that don’t hire people of color. It’s a pernicious problem plaguing the world of neural networks and machine learning that not only reinforces existing biases and racist thinking, but also compounds the effects of racist behavior towards communities of color everywhere.

And when coupled with existing racism in the medical world, it can be a recipe for disaster.

That’s what’s so concerning about a new study published in The Lancet last week by a team of researchers from MIT and Harvard Medical School, who created an AI capable of accurately identifying a patient’s self-reported race from medical images such as X-rays alone. As if that weren’t scary enough, the researchers behind the model don’t know how it comes to its conclusions.

Marzyeh Ghassemi, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and co-author of the paper, told The Daily Beast in an email that the project originally set out to discover why an AI model was more likely to underdiagnose women and minorities. “We wanted to establish the extent to which this bias could be removed from the models, which led us to wonder how much information about the patient’s self-reported race could be detected from these images,” she said.

To do this, they created a deep learning model trained on X-rays, CT scans, and mammograms of patients who self-reported their race as Asian, Black, or White. While the images contained no mention of the patient’s race, the team found that the model was able to correctly identify race with around 90% accuracy, a feat virtually impossible for a human doctor to achieve when looking at the same images.
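For a sense of what a setup like this looks like in practice, here is a minimal sketch of how one might fine-tune an off-the-shelf image classifier to predict self-reported race labels from chest X-rays. This is not the study’s actual code; the directory layout, model choice, and hyperparameters below are all illustrative assumptions.

```python
# Minimal sketch (NOT the authors' code): fine-tuning a standard CNN to predict
# self-reported race from X-ray images, assuming a hypothetical folder layout
# like xrays/train/{asian,black,white}/*.png containing de-identified images.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)           # could also start from ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 3)   # 3 self-reported race labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Nothing about the image itself tells the network what to look for; it simply finds whatever pixel-level patterns best separate the labels, which is exactly why the researchers can’t easily say what those patterns are.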

Of course, this poses a number of important and hairy ethical issues with terrifying implications. For one, research like this could give ammunition to so-called race realists and other conspiracy theorists who peddle pseudoscience claiming there are inherent medical differences between racial groups, even though this is, of course, complete and absolute BS.

There’s also the fact that a model like this could be extremely harmful if deployed at scale in hospitals and other practices. The medical industry continues to grapple with an incredibly dark history of medical racism and resulting malpractice, which has irrevocably shaped how communities of color interact (or don’t interact) with the healthcare system. If an AI that can somehow detect a person’s race from a simple X-ray were introduced into that system, it could further damage this already strained relationship.

To their credit, however, that is not the goal of the study’s authors. In fact, they hope to strengthen the guardrails that protect communities disproportionately impacted by practices such as medical racism, especially when hospitals and medical providers are using neural networks.

“The reason we decided to publish this article is to draw attention to the importance of evaluating, auditing and regulating medical AI,” Leo Anthony Celi, principal investigator at MIT and co-author of the paper, told The Daily Beast. “The FDA does not require that model performance in non-medical settings be reported by subgroups, and commercial AI often does not report subgroup performance either.”

However, there’s still the huge deep learning elephant in the room: the researchers have no idea how the AI determines a patient’s race from an X-ray. The opaque nature of the model is disconcerting, but not uncommon when it comes to AI; scientists have long struggled to understand some of the world’s most advanced machine learning algorithms, and MIT’s model is no exception. In this case, though, the opacity is made all the more troubling by the dark implications of how the model could be used and weaponized to harm people of color.

At the heart of the mystery is proxy discrimination, a term describing a fundamental problem with large AI models: they can be unwittingly trained to identify race using some proxy feature rather than race itself. In the past, for example, we’ve seen home loan algorithms that disproportionately rejected black and brown applicants based on their zip code. Because America is so segregated, zip code correlates very strongly with race.
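To make the zip-code example concrete, here is a toy, fully synthetic sketch (not drawn from any real lending data) showing how a model that is never given race can still recover it from a single correlated proxy feature.

```python
# Toy, synthetic illustration of proxy discrimination: race is never given to the
# model as a feature, but a correlated proxy (zip code) lets it recover race anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                   # hidden attribute, 0 or 1
# Simulated segregation: each group lives mostly in a different set of zip codes.
zip_code = np.where(race == 1,
                    rng.integers(0, 50, n),    # group 1: mostly zips 0-49
                    rng.integers(40, 100, n))  # group 0: mostly zips 40-99

X = zip_code.reshape(-1, 1)                    # the model only ever sees zip code
X_tr, X_te, y_tr, y_te = train_test_split(X, race, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"race recovered from zip code alone: {clf.score(X_te, y_te):.0%} accuracy")
```

The point of the toy is that removing the protected attribute from the data does not remove it from the model’s reach; any feature strongly correlated with it can carry the same information.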

Disconcertingly, while the study authors looked at some proxies the model might use to determine patient race, such as bone density, they couldn’t find the one it was using.

“There were no obvious statistical correlations that humans could rely on,” Brett Karlan, a postdoc studying cognitive science, ethics, and AI at the University of Pittsburgh who was not involved in the study, told The Daily Beast. “It was just a feature of the opaque network itself, and it’s really scary.”

According to Karlan, the reason this is scary is simple: we deserve to know how AI, especially when it’s used to manage our physical health, reaches its conclusions. Without that explanation, we don’t know whether it puts us at risk of harm through racist, sexist, or otherwise biased behavior. “You would want to know whether an algorithm suggesting a specific diagnosis or a specific course of medical treatment for you was treating you as a member of a racial category,” Karlan explained. “You could ask your doctor why you’re on a specific type of treatment, but you might not be able to ask your neural network.”

While exactly how the AI reaches its conclusions remains a big question mark, the researchers behind the paper believe that patients’ melanin, the pigment that gives black and brown people their skin color, could be the cause.

“We hypothesize that melanin levels in human skin alter very slight patterns in all parts of the frequency spectrum during medical imaging,” Ghassemi said. “This hypothesis cannot be verified without pairing images of patients’ skin tone with their chest X-rays, which we did not have access to for this study.”

She added that similar medical devices are known to be poorly calibrated for darker skin tones, and that their work “can be seen as a further result in that direction.” So it could simply be a case of the AI detecting very subtle differences between X-ray images that the human eye can’t discern. In other words, they may have created a glorified melanin detector. If so, there would at least be a proxy we could pinpoint as the cause of these startling findings. More research is needed, however, before any firm conclusion can be drawn.
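One way a researcher might begin to probe a frequency-based hypothesis like Ghassemi’s is to strip the fine-grained spatial frequencies out of an image and then check whether a trained classifier can still recover race from what remains. The sketch below is illustrative only, not the study’s protocol, and assumes you already have a 2-D grayscale X-ray array and a trained classifier to feed the filtered image into.

```python
# Illustrative sketch (not the study's protocol): low-pass filter an X-ray in
# Fourier space, keeping only coarse spatial structure, so a trained classifier
# can be re-run on the result to see whether the race signal survives filtering.
import numpy as np

def low_pass(image: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Keep only the lowest spatial frequencies of a 2-D grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask = np.zeros_like(spectrum, dtype=bool)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = True
    filtered = np.where(mask, spectrum, 0)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Hypothetical usage: run low_pass(xray) through the classifier at several
# keep_fraction values and compare accuracy against the unfiltered image.
```

If accuracy held up even on heavily filtered images, that would suggest the signal lives in broad, low-frequency patterns; if it collapsed, the model would more likely be relying on fine detail invisible to the human eye.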

For now, the team plans to unveil similar results in another study, in which they found that an AI was able to identify patients’ race from clinical notes that had all mentions of race redacted. “Similar to the medical imaging example, we found that human experts are not able to accurately predict patient race from the same redacted clinical notes,” Ghassemi said.

As with medical imaging AI, it is clear that proxy discrimination can and will continue to be a pervasive problem in medicine. And that’s something that, unlike an X-ray, we can’t always see through so easily.
