Diversity is Key to the Next Generation of AI
Artificial intelligence is going to shape our future, transforming the way we live and the way we work. Many people hope that machines will also help resolve a problem as old as time: institutional bias. After all, a machine can’t be racist, sexist, or otherwise prejudiced.
Can it?
But as AI systems go live, we're seeing that machines all too often reflect their creators' biases.
How AI exposed our invisible biases
In September 2020, Twitter went into meltdown when users discovered what appeared to be systemic bias.
The issue was with Twitter’s image cropping system, which trims images to fit a standard preview size. The system uses a machine learning algorithm that focuses on what it deems the most interesting part of the image, such as text or human faces.
The problem: when the algorithm had to choose between a white person and a person of color, it regularly chose the white person.
On closer investigation, this turned out not to be quite true. The algorithm works by identifying the most information-rich parts of the image file, which tend to be the brightest or busiest. Lighting and background are more likely to influence the image cropping system than the subject’s race. But this was still a humbling PR disaster for the social media giant.
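For readers who want to see the mechanism, here is a minimal sketch of a saliency-style auto-crop in Python. The variance-based scoring and window sizes are illustrative assumptions, not Twitter’s actual model, but they show why bright, busy regions tend to win the crop.

```python
# Toy illustration of saliency-style cropping: score fixed-size windows by
# local pixel variance (a crude stand-in for "information-rich"), then crop
# to the highest-scoring window. Twitter's real system is a trained neural
# network; this sketch only shows why bright, busy regions tend to win.
import numpy as np

def crop_most_interesting(image: np.ndarray, crop_h: int, crop_w: int, stride: int = 16) -> np.ndarray:
    """Return the crop_h x crop_w window with the highest pixel variance."""
    best_score, best_crop = -1.0, None
    h, w = image.shape[:2]
    for top in range(0, h - crop_h + 1, stride):
        for left in range(0, w - crop_w + 1, stride):
            window = image[top:top + crop_h, left:left + crop_w]
            score = float(window.var())  # busier, higher-contrast regions score higher
            if score > best_score:
                best_score, best_crop = score, window
    return best_crop

# Example: a mostly dark image with one bright, high-contrast patch
# gets cropped to that patch, regardless of what else is in the frame.
img = np.zeros((400, 600))
img[50:150, 400:550] = np.random.rand(100, 150)  # the "interesting" region
print(crop_most_interesting(img, 100, 150).shape)  # (100, 150)
```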
Bias can creep into AI in much more disturbing ways. In 2018, Amazon was publicly embarrassed when the AI recruiting tool it had been building was found to systematically downgrade female candidates. In the US justice system, many people have raised grave concerns about COMPAS, an automated system that assesses the risk of recidivism among offenders. An investigation by ProPublica found that black defendants were nearly twice as likely as white defendants to be wrongly flagged as likely re-offenders.
One of the oldest acronyms in computing is GIGO: Garbage In, Garbage Out. GIGO is a reminder that computers can solve many problems, but they cannot overcome bad programming or bad data. This is true even of AI, which is designed by humans with human inclinations, and trained on data sets that reflect society’s prejudices.
A report by McKinsey puts it in plain language: “AI can help reduce bias, but it can also bake in and scale bias.”
How diversity can save AI
Building fairness into algorithms is both a logical and philosophical puzzle. Arvind Narayanan, a computer science professor at Princeton, has catalogued 21 different definitions of fairness, and even he admits they don’t cover every eventuality.
But it seems like there’s one simple step that can help a great deal: greater diversity in all levels of AI development.
It’s a movement that’s starting right now. Stanford’s Human-Centered Artificial Intelligence team recently blogged about the great initiatives happening across the country to encourage women and minorities to become involved in AI development.
At the heart of their mission is the AI4ALL summer program. This initiative operates 16 residential summer camps at universities across the country, with up to 30 high-performing students from diverse groups, including women, low-income students, under-represented ethnic groups, and LGBTQIA and non-binary people.
Many of these students have a limited computer science background, but all of them have the drive and vision to become the next generation of leaders in AI. The program manager, Jonathan Garcia, says, “We need to foster a community of diverse leaders in the field, because we understand that artificial intelligence impacts our daily lives — everything from who gets a job, to who gets a loan, to who can enter the country. These tools are made by people, and we’re not including some of the most under-represented and marginalized people in the making of these tools and in the creation of this technology.”
There have been 500 graduates of the program so far, all of whom are part of a network known as Changemakers. This network offers support and advice to program graduates as they progress in their careers.
While there are many national programs to encourage diversity in STEM, AI4ALL is one of the few focused entirely on artificial intelligence. As the program grows, it will hopefully produce a new generation of developers, data scientists, and project managers who can build fair and objective AI applications.
What diversity means for your AI
AI is rapidly becoming part of everyday life, even for those who don’t work in tech. For example, finance professionals rely on AI tools for credit scoring and fraud prevention. HR teams use AI to scan resumes and identify the most promising candidates.
AI tools can, as McKinsey pointed out, help to reduce bias, but they can also help to bake it in. Take HR as an example. Multiple studies show that recruiters favor candidates with white-sounding names over otherwise identical candidates whose names sound African-American or foreign. An AI-powered candidate selection system is less likely to show this kind of bias.
However, there are other ways that bias can creep in. For example, the Amazon algorithm that downgraded female candidates did so because it was trained on historical recruitment data, which sounds reasonable. But high-level recruitment has historically been biased towards men.
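A deliberately simplified sketch makes the point. If a model is trained on historical hiring decisions that penalized women, it learns that penalty as if it were a legitimate signal. The data below is synthetic and the model is a plain logistic regression, not Amazon’s system.

```python
# Sketch of how historical bias gets baked in: if past hiring decisions
# favored one group, a model trained on those decisions learns to do the
# same, even though "gender" looks like just another input column.
# The data here is synthetic; this is not Amazon's system or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female
skill = rng.normal(0, 1, n)           # skill is distributed identically across groups
# Historical outcome: hires driven by skill, plus a built-in advantage for men.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill, differing only in gender:
print(model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1])  # the male candidate scores markedly higher
```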
The only way to avoid this kind of problem is to take positive steps now.
Build a diverse team
Bias is often the result of a blind spot. An all-male team might not understand women’s requirements, just as an all-white team can’t fully appreciate what people of color need.
The only way to fill in those blind spots is to include a range of voices on your team. Try to include people from outside IT, too. Departments like customer service, marketing, finance, operations, and HR should all have a voice.
Also, everyone must have an equal voice. When someone raises a diversity concern, the rest of the team should take it seriously and ask how to do better.
Ensure training data sets have a broad range of data
AI and machine learning systems learn from training data sets. For example, Twitter’s image cropping algorithm was trained on the enormous corpus of images uploaded to the site. And yet, even with a library of billions of image files, the algorithm still displayed signs of bias.
Most companies don’t have access to such a big data set, which means there’s a greater chance of bias emerging. If you’re building a training set for a machine learning tool, you need to ask questions like:
- Where is this data coming from?
- Is this data representative?
- Does the way the data is structured or weighted favor some records over others?
- If so, who decides what’s important, and how did they decide?
Again, a diverse team can help you flag obvious data quality issues, and even a simple representation audit, like the one sketched below, can surface gaps before training begins.
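The sketch below assumes a pandas DataFrame with hypothetical "gender" and "ethnicity" columns; adapt the column names and threshold to your own data. It only checks headcounts, not label quality, but it is a cheap first pass at the questions above.

```python
# Minimal representation audit for a training set. The column names and the
# 10% threshold are illustrative assumptions, not a standard.
import pandas as pd

def audit_representation(df: pd.DataFrame, columns: list[str], min_share: float = 0.10) -> None:
    """Print the share of each group in each column and flag groups below min_share."""
    for col in columns:
        shares = df[col].value_counts(normalize=True)
        print(f"\n{col}:")
        for group, share in shares.items():
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"  {group:<20} {share:6.1%}{flag}")

# Example with toy data:
candidates = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "F", "M", "M"],
    "ethnicity": ["A", "B", "B", "B", "B", "B", "B", "B"],
})
audit_representation(candidates, ["gender", "ethnicity"])
```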
Include a range of people in user experience testing
Facial recognition hit the mainstream in 2017 with Apple’s Face ID, but it immediately ran into one big problem: it struggled with black faces. The situation is much better now, but it was still an embarrassing failure. Clearly, Apple’s pre-release testing wasn’t diverse enough; otherwise, someone would have flagged the issue.
If you’re adopting any kind of AI tool, the first step is a full user test. Whether that testing involves your staff or your customers, you need to ensure that you include a wide range of people. This is the only way to assess all possible outcomes.
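One practical way to structure that testing is to report results separately for each group rather than as a single overall score. The sketch below assumes you can tag each test case with a group label and call your system through a predict function; both are placeholders for your own setup.

```python
# Sketch of disaggregated testing: instead of one overall accuracy number,
# report the error rate for each group in your test set. The predict()
# callable and the group labels are placeholders, not a real system.
from collections import defaultdict

def error_rates_by_group(samples, predict):
    """samples: iterable of (input, true_label, group); predict: the system under test."""
    errors, totals = defaultdict(int), defaultdict(int)
    for x, truth, group in samples:
        totals[group] += 1
        if predict(x) != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Example: a face-matching stub that fails more often for one group.
test_set = [("img1", True, "lighter-skinned"), ("img2", True, "darker-skinned"),
            ("img3", True, "lighter-skinned"), ("img4", True, "darker-skinned")]
stub_predict = lambda x: x != "img2"   # stands in for the system being tested
print(error_rates_by_group(test_set, stub_predict))
# {'lighter-skinned': 0.0, 'darker-skinned': 0.5}
```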
Listen to public feedback
Nobody likes to think that they’re biased. Accusations of bias can hurt, even if your AI tools are the accused party.
But most companies recognize the need to apologize and move on. Google issued a mea culpa when its natural language processing API was found to discriminate against certain religions. Twitter begged forgiveness for the perceived bias in their image algorithm. Both companies went back to the drawing board and found ways to fix the issues.
And both companies will benefit from this public feedback. Bias is not only damaging to the subject, it’s also damaging to your company. AI bias leads to inaccurate or misleading data that impacts your analytics and ultimately undermines your ability to do business. By responding to feedback and fixing your AI, you’re doing what’s best for your company.
The future of AI
Artificial Intelligence is here to stay. Experts predict that AI could add as much as $13 trillion to global economic output by 2030, boosting global GDP by around 1.2% a year.
AI will impact life at every level. AI is already becoming part of our homes, in the form of Siri and Alexa. AI helps deliver our healthcare and recommends what to watch next on Netflix. At work, we’ll team up with “co-bots” – collaborative robot partners. At the executive level and in government, analytics-powered bots will steer us in our most crucial decisions.
Which means that now is the time to start asking the big questions about AI. How do we build and train these systems? How do we make sure that they treat everyone fairly? It’s going to be a challenge. But the first step is to make sure that we have diversity at every stage of the AI development process.
#SmartCities #AI #diversity #STEM #Stanford #HAI