Dermatology has a race problem — AI might make it worse
Artificial Intelligence is transforming medicine at an astonishing rate. You’ll find AI right on the front lines, in clinics and in hospitals. Smart systems are interacting with patients, interpreting case files, and helping clinicians to make a diagnosis.
AI learns everything from us, which can be both good and bad. On the positive side of things, AI can absorb an astonishing amount of patient data, making it possible for digital diagnosis tools to work with a vast range of conditions.
There’s also a downside, however. In particular, if there are biases and inequities in our medical system, AI will absorb them, which could make matters worse.
One of the most promising applications for diagnostic AI is in the field of dermatology. Unfortunately, dermatology has a history of being one of the most racially biased disciplines in medicine.
Why is AI such a good fit for dermatology?
Dermatology is the study of conditions of the skin, ranging from mild-to-moderate disorders such as acne and psoriasis to much more severe conditions such as skin cancer. This field also plays a vital role in recognizing rashes that indicate a high-risk ailment, such as meningitis or Covid-19.
While other specialists have various sources of diagnostic data, dermatologists usually have to do things the old-fashioned way: they take a look at the patient’s skin and check for rashes, lesions, or other abnormalities.
To be a great dermatologist, you have to see a lot of patients and observe many different skin disorders. Over time, you build an extensive body of knowledge, so you can immediately tell who has a serious condition and who just needs a topical treatment.
Dermatologists have certain diagnostic tools that they use to identify conditions. For example, when looking at a possible melanoma (skin cancer), they’ll use the ABCD rule:
Melanomas tend to be asymmetrical.
Melanomas usually have an irregularly shaped border.
Melanomas often have several shades of color, while moles and freckles tend to be uniformly colored.
Melanomas are usually bigger than 6mm.
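The ABCD rule can be sketched as a simple screening checklist. Here is a minimal illustration in Python, assuming the four features have already been measured from an image by some earlier analysis step. The function name, thresholds, and scoring are illustrative assumptions, not clinical guidance:

```python
def abcd_flags(asymmetry_score, border_irregularity, num_colors, diameter_mm):
    """Count how many ABCD warning signs a lesion shows.

    Inputs are assumed to come from prior image analysis:
    - asymmetry_score: 0.0 (symmetric) to 1.0 (highly asymmetric)
    - border_irregularity: 0.0 (smooth) to 1.0 (very irregular)
    - num_colors: number of distinct shades detected
    - diameter_mm: longest diameter in millimetres
    """
    flags = 0
    if asymmetry_score > 0.5:        # A: asymmetry
        flags += 1
    if border_irregularity > 0.5:    # B: irregular border
        flags += 1
    if num_colors >= 3:              # C: several shades of colour
        flags += 1
    if diameter_mm > 6.0:            # D: bigger than 6 mm
        flags += 1
    return flags

# A lesion meeting several criteria warrants specialist review.
print(abcd_flags(0.8, 0.7, 4, 7.5))  # all four criteria met -> 4
```

A real diagnostic system would weigh these signs far more subtly, which is exactly where machine learning comes in.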
No two melanomas look alike. Dermatologists have to study many potential skin cancer cases before they become experts at identifying potentially malignant tumors.
Artificial Intelligence is the perfect tool to help improve diagnosis. Contemporary AI uses a technique called Machine Learning. ML algorithms study massive sets of data and “learn” by detecting recurring patterns. The algorithm can then use these patterns to predict future behaviors.
So, you can give an AI a training data set that consists of thousands of photos of skin cancers. The AI will study the photos and assess them according to the ABCD rule. Eventually, the algorithm will learn to flag potential cancer risks with a high degree of accuracy.
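The learning loop described above can be sketched in miniature. The toy classifier below "learns" by averaging labelled feature vectors, then labels a new lesion by whichever learned average it sits closer to. This is a nearest-centroid sketch with invented numbers, nothing like a production system:

```python
import math

def centroid(rows):
    """Average each feature across a set of labelled examples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Training data: [asymmetry, border, colours, diameter_mm], invented values.
malignant = [[0.8, 0.7, 4, 8.0], [0.9, 0.8, 3, 7.0]]
benign    = [[0.1, 0.2, 1, 3.0], [0.2, 0.1, 1, 4.0]]

# "Learning": summarise each class as the average of its examples.
m_centre, b_centre = centroid(malignant), centroid(benign)

def classify(lesion):
    """Label a new lesion by its nearest learned pattern."""
    return "flag" if distance(lesion, m_centre) < distance(lesion, b_centre) else "clear"

print(classify([0.7, 0.9, 3, 6.5]))  # close to the malignant pattern -> "flag"
```

The crucial point for what follows: the classifier can only recognise patterns that actually appear in its training rows.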
But, for this to work, you have to give the AI the right data.
Why is there a race problem in dermatology?
In 2011, a survey of US dermatologists found that almost half of them felt their medical training was inadequate for treating Black skin. The problem is a lack of data. In medical textbooks, only 4.5% of pictures show Black skin, while disparities in health insurance coverage mean that Black patients are less likely to see a doctor.
More recently, researchers discovered that there was a racial blind spot in literature around Covid-19. Since a rash is one of the symptoms of the disease, it’s important that dermatologists know how to identify a Covid rash on sight. Yet, despite 30% of Covid patients being Black, there were zero photographs of Black skin in published literature.
Representation has a practical impact on patient care, as skin color affects the appearance of skin conditions. For example:
- Malignant melanomas look darker against white skin.
- Common conditions such as eczema and psoriasis are less visible on skin of color.
- Diseases like lupus can cause the skin to darken, which is more noticeable on light-colored skin.
- People with skin of color often experience hyperpigmentation later in life, where patches of skin become darker.
If dermatologists don’t know what to look for, they can’t offer the proper treatment. This has a substantial effect on patient outcomes. Black patients, for example, are substantially less likely to receive treatment for psoriasis than other patients, even when they have health insurance.
How AI struggles with skin color
Artificial Intelligence is a long way from being truly intelligent.
A Machine Learning tool will examine the training data and extrapolate sophisticated patterns, but it will never question the quality of the data.
In other words, if we provide biased inputs, we’ll get biased outputs.
Here’s an example of how that can happen:
1. Researchers prepare a training set of data, consisting of photographs of scarlet fever rashes. All of the patients photographed have light skin.
2. The ML algorithm studies the data and identifies recurring patterns in the photographs.
3. The algorithm notices that the scarlet fever rash always has a distinctive pink hue.
4. Later, a clinician uses the ML algorithm to help diagnose a patient with skin of color. This patient has scarlet fever and a rash, but their rash is not pink.
5. The algorithm does not observe the pink hue that it learned to associate with scarlet fever.
6. The algorithm declares that the patient is negative for scarlet fever symptoms.
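The failure mode in these six steps is easy to reproduce in code. Below, a toy "model" learns the hue range of scarlet fever rashes from light-skin-only training photos, then misses the same rash on darker skin because its measured hue falls outside the learned range. All numbers are invented for illustration:

```python
# Each training sample: the measured hue of a scarlet fever rash.
# The training set only contains photos of light skin, so the rash
# always appears in a narrow pink band (invented values).
training_hues = [0.92, 0.95, 0.90, 0.94, 0.93]

# "Learning": the model simply memorises the observed range plus a margin.
low = min(training_hues) - 0.02
high = max(training_hues) + 0.02

def looks_like_scarlet_fever(hue):
    """Positive only if the hue falls inside the learned range."""
    return low <= hue <= high

# A patient with skin of colour has the same rash, but against darker
# skin it measures a different hue -- outside everything the model saw.
print(looks_like_scarlet_fever(0.93))  # light-skin presentation -> True
print(looks_like_scarlet_fever(0.60))  # darker-skin presentation -> False (missed)
```

The model is behaving exactly as trained; the error was baked in the moment the training set excluded darker skin.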
From the AI’s perspective, this is the correct diagnosis. It is simply making an observation based on the training data.
So, obviously, we have to ensure that AI has a diverse body of data to analyze. But what else can we do to ensure that everyone gets an accurate diagnosis from their robot doctor?
How can we create unbiased dermatology AI?
LEO Pharma recently held a Hacking Dermatology challenge, with some of the finest minds in technology and medicine working to solve this very problem. Some ideas that emerged were innovative, such as developing specialist sunblock for people with skin of color, or training barbers to identify scalp disorders.
Three particular ideas stand out, however, as being relevant to medical AI:
Open source dermatology
- A public repository of dermatological images, with a focus on skin of color. This library could provide medical students with a useful resource and improve the quality of published materials.
- A specialist photo box that produces high-quality images of dermatological conditions for all skin types. Typical cameras don’t always capture different skin tones in detail.
- A Big Data project to assemble a massive library of dermatological images. This repository could be the basis for future AI and machine learning projects.
To create diverse AI, we need access to diverse data. Projects like this will go a long way towards supplying researchers with the data that they need.
But we have to take other steps too. To build the dermatology tools of the future, we need:
Diversity in data
Data-gathering projects like those above are a great first step, but medical AI development teams have to make sure they’re using this data. Diversity should be one of the basic quality checks to ensure that the AI training data is suitable.
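One way to make diversity a concrete quality gate is to audit the training set before any model sees it. The sketch below checks the share of each Fitzpatrick skin-type label against a minimum threshold; the labels, counts, and the 10% threshold are illustrative assumptions, and a real pipeline would set targets with clinical input:

```python
from collections import Counter

def audit_skin_types(labels, minimum_share=0.10):
    """Return the skin-type groups that fall below a minimum share of
    the dataset, so the team can collect more data before training."""
    counts = Counter(labels)
    total = len(labels)
    return sorted(
        group for group, n in counts.items() if n / total < minimum_share
    )

# Illustrative dataset: one Fitzpatrick type (I-VI) per training image.
dataset = ["I"] * 40 + ["II"] * 30 + ["III"] * 20 + ["IV"] * 6 + ["V"] * 3 + ["VI"] * 1

print(audit_skin_types(dataset))  # darker types are under-represented
```

Running a check like this at data-ingestion time turns "is the data diverse enough?" from an afterthought into an automatic gate, in the same way teams already gate on image quality or label completeness.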
Diversity in development
There’s a lack of diversity in tech: only 22% of AI professionals are women, and only 2.5% of Google’s entire workforce is Black. Diverse teams are an excellent defense against accidental bias creeping into algorithms. When there is a range of voices on the team, there’s a better chance that someone will flag potential discrimination issues.
Diversity in testing
Clinical-grade dermatology AI is already in the testing stage. Stanford ran initial trials on a deep-learning system in 2017, which equaled the performance of a panel of 21 qualified dermatologists. While results like these are encouraging, researchers must ensure that control groups contain people of color. That way, we can be sure that the AI works for everyone.
Patients have been complaining for years that there is a diversity issue in dermatology. If there are similar issues with AI tools, patients will be the first ones to raise the alarm. Healthcare providers have an obligation to listen to patients, explore their concerns, and take action when something’s not right.
In summary, diversity isn’t something that you review at the end of the process. If you’re creating any kind of medical technology, you have to ensure representation at every step along the way.
Whenever we’re discussing the issue of diversity and AI, we have to remind ourselves of one thing:
AI can’t fix our cultural problems.
Artificial Intelligence can help provide fast, efficient, precise healthcare. But no computer will be able to tackle discrimination or question our commitment to inclusiveness. That job will always be our responsibility.
But we can build great digital medical tools if we’re mindful of the challenges. With the right people and the right data, we can build AI tools that deliver effective healthcare for everyone.