AI models found to show language bias by recommending Black defendents be 'sentenced to death'

Large Language Models (LLMs) showed a bias when it detected dialects.
Large Language Models (LLMs) showed a bias when it detected dialects. Copyright Canva
Copyright Canva
By Anna Desmarais
Share this articleComments
Share this articleClose Button

Large language models (LLMs) are more likely to criminalise users that use African American English, the results of a new Cornell University study show.

ADVERTISEMENT

The dialect of the language you speak decides what artificial intelligence (AI) will say about your character, your employability, and whether you are a criminal.

That’s the latest result from a Cornell University pre-print study into the "covert racism" of large language models (LLM), a deep learning algorithm that’s used to summarise and predict human-sounding texts.

OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA2, and French Mistral 7B are all examples of large language models. Euronews Next reached out to OpenAI and Meta for comment.

The study did matched guise probing, where researchers put in prompts in both African American English and Standardised American English and asked the LLMs to identify properties of the people who would speak in both types of English.

Researcher Valentin Hofmann, from the Allen Institute for AI, said that, among the results, GPT-4 technology was more likely to "sentence defendants to death" when they speak English often used by African Americans, without ever disclosing their race.

"Our findings reveal real and urgent concerns as business and jurisdiction are areas for which AI systems involving LLMs are currently being developed or deployed,” Hofmann said in a post on the social media platform X (formerly Twitter).

The LLMs also assumed that speakers of African American English had less prestigious jobs than those who spoke Standardised English, even though the algorithms are not told that the speakers are Black.

The bigger the LLM, the better it will understand African American English, and they will be more likely to avoid overtly racist wording, the study continued. The size though doesn’t affect their covert prejudice.

Hofmann said that, because overt racism is decreasing in LLMs, there could be a risk that those interpreting the study are taking it as "a sign that racism has been solved," instead of showing that the way LLMs show racial bias is changing.

The regular way of teaching LLMs new patterns of retrieving information, by giving human feedback, doesn’t help counter covert racial bias, the study showed.

Instead, it found that it could teach language models to "superficially conceal the racism they maintain on a deeper level".

Share this articleComments

You might also like