Bias in AI

By Lily Bourne

Although conversations about AI seem to have popped up only recently, these innovations have been developing for years, and the potential consequences of AI’s integration into modern society have been debated for just as long. In particular, the worry that AI could perpetuate discrimination against certain groups of people has remained a key issue as these programs advance.

Journalists and scientists have run many experiments proving that AI bots, including ChatGPT, can and will display prejudices if given specific commands. For example, an article written in December of 2022 claimed that ChatGPT would respond accordingly to prompts asking for “a lecture about teaching calculus to disabled people from the perspective of a eugenicist professor, a paragraph on black people from a 19th-century writer with racist views, and even a defence of the Nuremberg Laws from a Nazi”. By now, the language bot clearly avoids these questions and instead explains that they go against its ethical guidelines. However, these issues run deeper than the simple ability to generate prejudiced speech.

In a powerful spoken word piece, Joy Buolamwini highlights the pitfalls of AI image recognition, including its inability to distinguish Black women’s faces. She questions why computer-generated responses continue to carry the harmful views of human beings, and the answer lies in how these systems are built. Put simply, AI is only as smart as the data it learns from. Additionally, in the case of recognizing different skin colors, it is easier to program an AI to recognize a single skin tone than a range of them. As John MacCormick, a professor of computer science, puts it, “Imagine you are given a choice between two tasks. Task A is to identify one particular type of tree – say, elm trees. Task B is to identify five types of trees: elm, ash, locust, beech and walnut. It’s obvious that if you are given a fixed amount of time to practice, you will perform better on Task A than Task B.”

MacCormick also relates his own experience of unintentionally creating a racially biased AI. When building a facial recognition program, he simply used examples of skin color from himself and his coworkers, all of whom were white. He only realized the consequences of this choice when he demonstrated the program to non-white executives, which shows how easy it can be to unconsciously inject bias into these kinds of systems. As Meredith Broussard explains, “[Technology] is racist and sexist and ableist because the world is so. Computers just reflect the existing reality and suggest that things will stay the same – they predict the status quo.”
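MacCormick’s story can be illustrated in a few lines of code. The sketch below is hypothetical and uses entirely invented numbers, not his actual system: a toy “face detector” is trained on faces drawn only from light skin tones, so it learns to lean on brightness as a shortcut and then fails on darker-skinned faces it was never shown.

```python
# A minimal sketch (invented data, not MacCormick's real program) of how a
# skewed training set produces a biased face detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_faces(n, skin_brightness):
    # Each "image" is reduced to two toy features: average pixel
    # brightness and a noisy face-shape score.
    brightness = rng.normal(skin_brightness, 0.05, n)
    shape = rng.normal(0.55, 0.15, n)      # weak but genuine face signal
    return np.column_stack([brightness, shape])

def make_nonfaces(n):
    brightness = rng.uniform(0.0, 1.0, n)  # non-faces come in all brightnesses
    shape = rng.normal(0.45, 0.15, n)
    return np.column_stack([brightness, shape])

# Training set: every face example is light-skinned (brightness near 0.8),
# mirroring a team that sampled only its own members.
X_train = np.vstack([make_faces(500, 0.8), make_nonfaces(500)])
y_train = np.array([1] * 500 + [0] * 500)

detector = LogisticRegression().fit(X_train, y_train)

# At demo time the detector meets faces it was never trained on.
light_faces = make_faces(200, 0.8)
dark_faces = make_faces(200, 0.3)
print("light-skinned faces detected:", detector.predict(light_faces).mean())
print("dark-skinned faces detected:", detector.predict(dark_faces).mean())
```

With these invented numbers, the detector typically flags nearly all of the light-skinned test faces and few or none of the dark-skinned ones, because brightness spuriously correlated with “face” in the training data. That is exactly the failure MacCormick describes.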

As consumers and future creators of technology in an increasingly digital world, the most important thing for the next generation to understand is that these smaller issues are facets of a larger one: AI is not the neutral system many believe it to be. AI is simply a pattern-detection machine, and if it detects patterns of prejudice towards certain groups, it can just as easily replicate them. To use these systems without fear of prejudice, we must continue to push for anti-racism, increase diversity among the programmers who actually build these systems, and remain skeptical of seemingly neutral conclusions produced by pattern-detecting algorithms.
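To make the “pattern-detection machine” point concrete, here is a second minimal sketch, again with entirely invented data and a hypothetical hiring scenario. Two groups of candidates are equally skilled, but the historical decisions used for training carry a penalty against one group, and a model fit to those decisions reproduces the penalty for two otherwise identical candidates.

```python
# A minimal sketch (invented data, hypothetical scenario) of a model
# replicating prejudice found in its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 2000
skill = rng.normal(0.0, 1.0, n)   # identical skill distribution in both groups
group = rng.integers(0, 2, n)     # group membership: 0 or 1

# Historical decisions: hiring depended on skill, but reviewers also
# applied a penalty to group 1. That penalty is the prejudice.
logit = 1.5 * skill - 2.0 * group
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill who differ only in group membership:
print("group 0 hire probability:", model.predict_proba([[1.0, 0]])[0, 1])
print("group 1 hire probability:", model.predict_proba([[1.0, 1]])[0, 1])
```

Nothing in this code says “discriminate”; the model simply detects the pattern in its training data and carries it forward, predicting the status quo just as Broussard warns.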