Top AI chatbots have political biases, according to a study

A recent study found that the most popular AI chatbots have political biases. Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov tested multiple large language models for their political leanings. They found that OpenAI’s ChatGPT showed left-leaning, libertarian tendencies, while Meta’s LLaMA leaned right-wing and authoritarian.

It was perhaps inevitable that people would build their political views into AI chat programs. Sooner or later, users would notice these systems rejecting specific perspectives and sharing opinions that favor one side of the political spectrum. If we want people worldwide to use AI programs daily, however, we must address these biases.

This article will discuss the recent study of top AI chatbots and their political biases. Later, I will explain how AI bots work and why they exhibit political views as humans do.

What were the political views of the top AI chatbots?


Researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University submitted politically charged statements to 14 large language models, and outlets such as Decrypt reported their findings.

LLMs are the machine learning models that power chatbots like ChatGPT and Google Bard. The researchers plotted each model’s responses on a political compass to determine its biases.
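To give a sense of the method, here is a minimal sketch of how one might probe a chatbot with politically charged statements and tally its answers onto the two compass axes. It assumes the official openai Python library and an API key; the statements, the model name, and the scoring rule are illustrative placeholders, not the study’s actual protocol.

```python
# Hypothetical probe: ask a chat model to agree or disagree with charged
# statements, then map the answers onto the two political-compass axes.
# Statements, model name, and scoring are illustrative, not the study's.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# (statement, axis): "econ" = economic left/right, "soc" = libertarian/authoritarian
STATEMENTS = [
    ("The freer the market, the freer the people.", "econ"),
    ("The state may monitor citizens to preserve public order.", "soc"),
]

def probe(statement: str) -> int:
    """Return +1 if the model agrees with the statement, -1 if it disagrees."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f'Respond with only AGREE or DISAGREE: "{statement}"',
        }],
    ).choices[0].message.content.strip().upper()
    return 1 if reply.startswith("AGREE") else -1

scores = {"econ": 0, "soc": 0}
for statement, axis in STATEMENTS:
    scores[axis] += probe(statement)
print(scores)  # one (x, y) point on the political compass
```

With many statements per axis, the summed scores place a model at a point on the compass, which is the kind of plot the researchers produced.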

As mentioned above, OpenAI’s world-famous bot had left-leaning perspectives. “The Left” here refers to beliefs that promote globalism and discourage national limitations and borders.

Left-leaning people tend to favor relaxing immigration requirements, prioritizing minority groups, and discarding traditional norms such as gender roles. On the other hand, the researchers observed that Meta Platforms’ LLaMA had right-leaning perspectives.

“The Right” often refers to nationalistic and conservative views, such as enforcing strict borders, prioritizing national interests, and preserving traditional values. The models’ leanings also affected how they categorized hate speech and misinformation.

For example, a left-leaning AI bot was more likely to flag hate against minorities and ignore left-leaning misinformation. Conversely, right-leaning AI did the opposite.

“A model becomes better at identifying factual inconsistencies from New York Times news when it is pretrained with corpora from right-leaning sources,” the researchers said. In response, some people are trying to develop AI systems without such biases.


For example, Elon Musk plans to build one at xAI, his new artificial intelligence company. He tweeted, “Do not force the AI to lie,” explaining his goal of creating transparent, truth-telling systems.

The SpaceX chief believes “training AI to be politically correct” is dangerous. After all, we rely on these chatbots to produce content that sticks to the facts.

On the other hand, unconstrained artificial intelligence can also have negative consequences. It could spread false information, promote dangerous ideas, and discourage specific groups from using these technologies.

Why do artificial intelligence bots have biases?


Many AI skeptics dismiss artificial intelligence programs as mere logic machines that return the most likely answers to user prompts without emulating human comprehension.

Still, the top AI chatbots exhibit biases because of how they work. They use algorithms and embeddings to link user requests to their large language models.

Embeddings capture the intent of a user request by measuring the “relatedness of text strings.” The algorithms then arrange words based on those embeddings to produce a response.
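The phrase “relatedness of text strings” comes from OpenAI’s embeddings documentation. As a minimal sketch, assuming OpenAI’s public text-embedding-3-small model, each string becomes a vector, and cosine similarity between vectors scores how related the strings are; the example sentences below are invented.

```python
# Minimal sketch: embeddings map text strings to vectors, and cosine
# similarity between those vectors measures the "relatedness of text strings".
# The example sentences are invented for illustration.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [np.array(item.embedding) for item in resp.data]

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query, related, unrelated = embed([
    "Lower taxes encourage economic growth",
    "Cutting tax rates stimulates the economy",
    "Cats are popular household pets",
])
print(cosine_similarity(query, related))    # high score: related claims
print(cosine_similarity(query, unrelated))  # low score: unrelated topic
```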

Developers are the biggest reason AI programs have biases, but they can hardly avoid it. Tech firms train AI models on human writing, which inevitably reflects human thoughts, opinions, and biases.

Having biases is one of the most uniquely human traits. They permeate our every action and thought and guide how we want our societies to function.


No person is free of biases, so AI training inevitably passes biases to bots. According to insideBIGDATA, other factors also cause these tendencies:

  1. AI developers may not have included data from a specific demographic.
  2. Removing biases is extremely difficult. For example, if a model’s training data comes from an area with a Black majority, the program may learn to associate certain traits with that group.
  3. Few AI professionals are women or dark-skinned individuals.
  4. Fairness is hard to define because it depends on perspective. For example, blind auditions increased the share of women in orchestras from 5% to 30%, yet someone may argue that is still unfair because 50% of the world’s population is female. (See the sketch after this list.)
  5. AI programs are susceptible to model drift, the tendency to learn things their developers did not intend. After all, we let them learn from us, which will eventually transform their behavior.
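
To illustrate point 4, here is a minimal sketch, using invented hiring records, of how two common fairness metrics can disagree about the very same decisions.

```python
# Toy example: one set of hiring decisions scored by two fairness metrics.
# All records are invented for illustration.

# (group, qualified, hired)
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, True),
    ("B", True, True), ("B", True, False), ("B", False, False), ("B", False, False),
]

def selection_rate(group):
    """Demographic parity view: share of the whole group that was hired."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def qualified_hire_rate(group):
    """Equal opportunity view: share of qualified candidates hired."""
    rows = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in rows) / len(rows)

print(selection_rate("A"), selection_rate("B"))            # 0.75 vs 0.25
print(qualified_hire_rate("A"), qualified_hire_rate("B"))  # 1.0  vs 0.5
# The two metrics report different gaps, so whether the process is "fair"
# depends on which definition of fairness you adopt.
```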

Conclusion

A study by researchers from multiple universities found that the top AI chatbots have political leanings: OpenAI’s ChatGPT skews left, while Meta’s LLaMA skews right.

At the time of writing, the only way to reduce AI biases is regular auditing. However, that is time-consuming and inefficient because AI programs learn so quickly.

Nevertheless, we may soon find a way to fix this flaw as more people worldwide develop this technology. Learn more about the latest digital tips and trends at Inquirer Tech.
