Is artificial intelligence (AI) biased? This isn’t a new question—five years ago, a company abandoned its idea of using AI to narrow down job candidates because it discovered its system was biased against women in technical roles.1 More recently, an Asian student at the Massachusetts Institute of Technology (MIT) used an AI tool to create a new business headshot. The generated image gave her blue eyes, pale skin, and lighter hair.2 While efforts to combat bias have been ongoing, AI’s recent surge in popularity is prompting a deeper look into the question.
Why do these AI systems seem to exhibit bias? The simplest answer is that they were trained on data sets that contain bias. In the previously mentioned hiring tool, programmers trained the AI using ten years of hiring data to identify suitable candidates. As is common in many tech companies, the majority of people in technical roles were men. Because more men were hired for those roles, the AI learned that “male” was a desirable trait and, therefore, gave candidates with “women” on their resume (such as a women’s college or extracurricular activities with “women” in the name) a lower score.
In this case, selecting training data with equal representation might have avoided bias, but it could also add an extremely time-consuming data analysis step to the process. Another possibility is to use a fair ranking algorithm, which can help elevate the scores of people from underrepresented groups to keep them in consideration. However, a report from the National Institute of Standards and Technology (NIST) found that biased training data isn’t the only challenge to overcome.3
According to NIST, human and systemic biases are often overlooked when it comes to AI. For example, human guidance is one means of supervising AI when it comes to bias and hallucination, but we humans are also prone to implicit and explicit bias. The teams who developed an AI likely brought their individual and group cognitive biases into the system, including which data should be used, which AI models should be developed, etc. NIST has grouped bias into three categories: computational, human, and systemic. Developers must consider all three to reduce bias.
A fair and impartial AI system requires addressing multiple forms of bias. Part of that process involves better identification of bias in AI systems. This year’s DEF CON hacker conference held a Generative Red Team challenge to uncover flaws in generative AI. One of the many challenges issued to participants was to get the AI to make discriminatory statements, such as racial stereotypes.4 Findings will be reported back to the AI developers to make improvements.
The World Economic Forum also recommends collaboration in the form of open-source data science to help combat bias.5 This approach allows data scientists from different organizations and backgrounds to work together to solve the issue of bias, rather than having each AI developer solve the problem alone. Collaboration can also make more resources available for those trying to identify and remediate bias, as well as create greater transparency around AI.
While today’s AI systems are more consistent than humans, they aren’t yet completely impartial. However, that very consistency may make it easier to eradicate AI bias, according to industry analysts.6 Through oversight and testing, organizations can begin to pinpoint and correct sources of bias. For now, understanding the current limitations of AI means we can still take advantage of its immense speed and scale while making decisions that are informed and fair.
- Reuters, Amazon scraps secret AI recruiting tool that showed bias against women, October 2018
- Business Insider, An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes, August 2023
- NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, March 2022
- TechRepublic, DEF CON Generative AI Hacking Challenge Explored Cutting Edge of Security Vulnerabilities, August 2023
- World Economic Forum, Open source data science: How to reduce bias in AI, October 2022
- World Economic Forum, Why AI bias may be easier to fix than humanity’s, June 2023