Studies share insights into our AI biases

11:22 AM October 06, 2023

Two recent studies explain how human intelligence reacts to artificial intelligence. The first, from Spain’s Deusto University, found that people can inherit an AI system’s biases and repeat its errors in their own decisions. The second, from Arizona State University and MIT, discovered that our perception of an AI tool determines whether its outputs appear positive or negative.

The human mind constantly adapts to everything around it, even when we don’t realize or intend it. We control the latest AI programs like ChatGPT, but we must understand how they can affect our minds. Otherwise, we may suffer unintended consequences from prolonged use. More importantly, that understanding will help us hone these tools as they become part of daily life.

This article will discuss the two recent studies that explored AI biases. I will start with the Deusto University study and then cover the one from MIT and Arizona State.

Do we catch biases from AI bots?

[Image: AI bot surrounded by question marks. Photo credit: nature.com]

TechXplore reported on the study from Deusto University’s Helena Matute and Lucia Vicente, explaining the importance of their work by discussing the flaws of artificial intelligence.

The website reminds readers that ChatGPT and similar programs generate results based on their training. Consequently, errors and other issues in that training will inevitably manifest in their outputs.

We use those results for numerous applications, so we risk reproducing those errors if we’re not careful. Matute and Vicente confirmed this assumption by conducting three experiments.

They had participants perform a medical diagnosis task. One group received assistance from a biased AI system that exhibited systematic errors, while the control group received no AI assistance.

The researchers used a fictitious AI, a fictitious diagnosis task, and a fictitious disease to prevent interference with real situations. They discovered the first group made the same errors as the AI, but the control group did not.

That means the AI recommendations influenced their decisions. To confirm that observation, the researchers then removed the AI assistance and had the participants perform another medical diagnosis task.

Those volunteers continued to mimic the bot’s systematic errors even without it, while the control group continued to avoid them.

TechXplore says these results indicate AI biases can have a negative impact on human decisions. However, more research is needed to confirm that people can inherit AI biases.

Do our biases alter our perception of AI?

[Image: Human silhouette encountering AI in a distorted mirror. Photo credit: rand.org]

The MIT and Arizona State University study approaches the issue from the opposite direction of Deusto University’s: it explores our preconceptions about artificial intelligence instead of the biases we pick up from AI.

“AI is a mirror,” said MIT Media Lab researcher Pat Pataranutaporn. “We wanted to qualify the effect of AI placebo. We wanted to see what happened if you have a certain imagination of AI: How would that manifest in your interaction?”

Pataranutaporn and his team tested that hypothesis by dividing 300 volunteers into three groups. Then, the scientists gave each group a different primer about the same chatbot:

  1. They told the first group the AI had no ulterior motives and was only a mundane text completion program.
  2. The researchers told the second group the AI had empathy.
  3. In contrast, they told the third group the bot was manipulative and only wanted to sell a service.

However, all groups used the same program. After the participants spoke with the bot for 10 to 30 minutes, the scientists asked them whether it was an effective mental health companion.

The scientists surveyed the respondents and found only 44% of those who received the negative primer believed it. Meanwhile, 79% of the neutral group believed the bot was neutral, and 88% of those in the positive group thought it was empathetic.

The study’s senior author, Pattie Maes, said, “Maybe we should prime people more to be careful and to understand that AI agents can hallucinate and are biased. How we talk about AI systems will ultimately have a big effect on how people respond to them.”

Conclusion

Recent studies shared new insights regarding AI biases. The first showed we tend to adopt the errors in AI outputs. As a result, we might carry those errors into our daily tasks.

Another, from the Massachusetts Institute of Technology and Arizona State University, found our preconceived notions about AI programs can change how we perceive their outputs. Consequently, we should be careful about how these programs are framed and described.

Get more information about these findings on the MIT and TechXplore websites. Learn more about the latest digital tips and trends at Inquirer Tech.
