Learning

AI Can Be Biased, Even Discriminatory — But It’s Not Its Fault

November 30, 2023

It’s pretty difficult to imagine artificial intelligence having biases. That’s because this technology doesn’t have human emotions or feelings, meaning it should be able to present information neutrally without prejudices or preconceptions, right?

However, this isn’t the case.

AI is only as good as the humans who program it, and all humans have a unique view of the world based on their upbringings, motivations, and relationships with other people. In other words, an AI chatbot or algorithm is just as biased as you, me, or anyone else on our planet. Learn more about the ethical implications of using AI below.

How Can AI Be Biased?

AI never creates its own information, regardless of the data set or algorithm used. Everything it knows comes from human-made sources, including books, websites, newspaper articles, and Reddit posts written by teenage boys in dark rooms. So AI can’t be intrinsically biased. It’s just not possible.

However, AI can exhibit biases if it relies on information that's already biased, such as an article that's factually wrong. Those biases pass from the people who train the tool, to the tool itself, to the people who use it, and the end result can be misinformation spreading across the internet.

ChatGPT says:

“As an AI model, I do not have opinions. It is not appropriate or respectful to propagate misinformation or falsehoods about any individual.”

The first sentence, while true, doesn't reflect the fact that ChatGPT can reinforce specific opinions, leading to biases. Say a newspaper published a false report claiming that boys are better than girls at solving Rubik's Cubes because they have bigger brains, and another newspaper picked up the story. ChatGPT might interpret those statements as facts. And that's a huge problem, especially if people are using the AI tool for research.

Acknowledging the Limits of AI

Sure, AI can be a wonderful supplement to our workflow. But it's important to realize that it has biases, too. Not all biases are bad, of course. Believing that all humans should be treated equally is technically an opinion, but it's one most people agree with. However, negative biases can have a significant impact on someone's life.

There are plenty of examples of AI showing negative biases. In 2019, researchers discovered that an algorithm used to predict which hospital patients would need extra medical care favored white patients over Black patients. And in 2015, Amazon realized that an experimental recruiting algorithm it had built was prioritizing male candidates over female candidates.

Again, algorithms are not inherently racist or sexist. However, they are often trained on datasets that reflect societal biases and built by teams with their own blind spots. For example, when Google Maps pronounced “Malcolm X Boulevard” as “Malcolm 10 Boulevard,” part of the reason the error slipped through may have been a lack of Black engineers on Google’s team.
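To make the dataset problem concrete, here’s a minimal, purely hypothetical sketch in Python. It isn’t based on any real system, and every name and number in it is invented: a naive “hiring model” that only learns base rates from past decisions will faithfully reproduce whatever skew exists in that history.

```python
# Toy illustration (hypothetical data): a "model" that estimates hiring odds
# purely from historical decisions will mirror any skew in those decisions.
from collections import defaultdict

# Invented historical records: (group, qualified, hired)
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", True, False), ("B", False, False),
]

# "Training": count past outcomes per (group, qualified) combination.
counts = defaultdict(lambda: [0, 0])  # key -> [times hired, total records]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predicted_hire_rate(group: str, qualified: bool) -> float:
    """Return the historical hire rate the model has 'learned' for this profile."""
    hired, total = counts[(group, qualified)]
    return hired / total if total else 0.0

# Two equally qualified candidates get very different scores, purely because
# of the skew in the records the model was built from.
print(predicted_hire_rate("A", True))  # 1.0
print(predicted_hire_rate("B", True))  # ~0.33
```

Nothing in that code is “sexist” or “racist” on its own. The skew lives entirely in the records it was given, which is exactly the situation the hiring and healthcare examples above describe.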

How to Spot AI Biases

Never use AI as a primary source. While tools like ChatGPT might provide a good overview of a topic, it’s important to do your own research into any claims an AI tool makes. Here are a few tips for identifying AI biases:

Verify Everything
If an AI tool says something about a particular group, double-check that information elsewhere. Use a reputable source to verify any claims made by AI.

Check Sources Like a Journalist
If an AI tool lists its sources, review those references. A tool might have misinterpreted a report or study, resulting in incorrect and potentially inflammatory information.

Tell AI When It’s Wrong
If AI is discriminatory in any way, call it out! For example, reply to a response with, “This is incorrect information.” If enough people do this, that feedback can help developers retrain AI models and make them more accurate.

Takeaway

The fact that artificial intelligence can be biased just proves that prejudices exist in society. AI merely reflects what society already thinks and compiles that information for its users. As your adoption of this technology grows, pay closer attention to the information that AI tools generate. You and your team own the final product you put out. Make sure it reflects your organization’s values.


Further reading and resources:

  1. Learn How To Use AI Tools in Your Design Workflow
  2. 3 Ways Nonprofits Can (Effectively) Use AI