Although machine learning has enhanced clinical diagnosis, the technology also risks exhibiting racial bias, according to a new paper published in JAMA Dermatology.
Paper authors Adewole S. Adamson, MD, dermatologist and assistant professor at the University of Texas Dell Medical School, and Avery Smith explain that mole detection algorithms may be less accurate for people of color because they were not developed with images from diverse populations. Melanoma presents differently on white skin than on dark skin, yet most machine learning programs are trained primarily on examples of white skin. Dr. Adamson and Smith attribute these discrepancies to geographic convenience: the International Skin Imaging Collaboration: Melanoma Project, for example, drew its images from the U.S., Europe, and Australia, countries with predominantly white populations.
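The mechanism the authors describe can be illustrated with a toy simulation (not from the paper; the distributions, group labels, and single "lesion feature" below are all hypothetical). A simple threshold classifier is fit to a training set that is 95% one skin-tone group, then evaluated on each group separately; because melanoma's appearance is modeled as shifted for the underrepresented group, the learned threshold serves that group poorly:

```python
import random
import statistics

random.seed(0)

def sample(mean, n):
    """Draw n values of a hypothetical lesion feature, unit variance."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Assumed feature means: melanoma "presents differently" on light vs.
# dark skin, modeled as a shift in one synthetic feature.
LIGHT = {"benign": 0.0, "malignant": 3.0}
DARK = {"benign": 2.0, "malignant": 5.0}

# Training set skewed 95% / 5% toward the light-skin group, mirroring
# datasets drawn mostly from the U.S., Europe, and Australia.
train_benign = sample(LIGHT["benign"], 950) + sample(DARK["benign"], 50)
train_malignant = sample(LIGHT["malignant"], 950) + sample(DARK["malignant"], 50)

# Minimal classifier: flag lesions above the midpoint of the class means.
threshold = (statistics.mean(train_benign) + statistics.mean(train_malignant)) / 2

def accuracy(means, n=1000):
    """Balanced per-group test accuracy under the shared threshold."""
    benign = sample(means["benign"], n)
    malignant = sample(means["malignant"], n)
    correct = sum(x <= threshold for x in benign) + sum(x > threshold for x in malignant)
    return correct / (2 * n)

acc_light = accuracy(LIGHT)
acc_dark = accuracy(DARK)
print(f"light-skin accuracy: {acc_light:.2f}")
print(f"dark-skin accuracy:  {acc_dark:.2f}")
```

In this sketch the underrepresented group's benign lesions frequently land above the threshold, producing both the false positives and the missed cancers that the authors warn about. The numbers are illustrative only; real dermatology models and image data are far more complex.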
Research has historically focused on white subjects, which could explain why people of color have higher rates of late-stage melanoma detection and lower chances of survival than white patients. In an article for The Atlantic, Dr. Adamson and Smith warn that the lack of diversity in machine learning could also result in people of color being falsely diagnosed with cancer, or in the algorithm failing to detect the cancer at all.
“Part of the problem is that people are in such a rush,” said Dr. Adamson. “This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences.”