
AI doesn’t perceive the world like humans do

08:00 AM November 07, 2023

MIT neuroscientists discovered that AI neural networks perceive the world differently from humans. We connect ideas through color, sound, and other sensory characteristics. In contrast, artificial intelligence systems see only the statistical connections in the data they were trained on. Consequently, they may form strange associations that seem nonsensical to humans.

The global AI revolution aims to create artificial intelligence systems that think like humans. To get there, we must check how closely these programs emulate the way we see the world, so that they can understand our instructions more accurately. More importantly, such findings could be key to creating AI that promotes human well-being.

This article will discuss how MIT researchers discovered the unique way artificial intelligence perceives our world. Later, I will give an overview of how modern AI systems work.


How did MIT experts uncover AI perception?

Experts model artificial intelligence systems after the human brain. That is why many of their components have parallels to our minds. For example, modern AI uses deep neural networks to identify concepts. 
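To make that parallel concrete, here is a minimal sketch of a deep neural network’s forward pass in plain Python with NumPy. The two-layer design, the layer sizes, and the random weights are illustrative assumptions, not the architecture of any model in the MIT study.

```python
import numpy as np

def relu(x):
    # Activation: a unit "fires" only for positive input, loosely
    # analogous to a biological neuron's firing threshold.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Illustrative two-layer network: 8 input features -> 16 hidden units -> 3 concepts.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def forward(x):
    hidden = relu(x @ W1 + b1)   # each layer re-represents the input
    return hidden @ W2 + b2      # raw scores for each concept

x = rng.normal(size=8)           # a toy input "stimulus"
print(forward(x))                # training would shape these scores
```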


People spend countless hours and resources training these computer programs, ensuring they link concepts and words correctly. However, AI programs view the world based on the words and other media they receive in training. 

Josh McDermott, an associate professor of brain and cognitive sciences at MIT, said the latest study could help researchers evaluate AI perception. “This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” said McDermott, the study’s senior author.

“This test should become part of a battery of tests that we as a field are using to evaluate models,” he added. They discovered artificial intelligence tends to connect concepts that seem nonsensical to humans.

It turns out that AI programs may disregard features irrelevant to an object’s core identity. SciTechDaily calls this characteristic “invariance”: regarding objects as the same despite differences in their less important features.


Jenelle Feather, the study’s lead author, tested whether neural networks develop human-like invariances by making AI models generate new stimuli that produce the same internal response as a given reference stimulus.


The researchers then showed these model-matched images and sounds to human volunteers and found that most of them were unintelligible to people.
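In spirit, this procedure, called “model metamers” in the paper, starts from a reference stimulus, records the model’s internal activations, and then optimizes a random input until it triggers the same activations. The sketch below illustrates that matching loop with a toy linear feature map and a hand-derived gradient; the study itself used deep vision and audio networks, so the feature map, step size, and loop count here are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": a fixed linear feature map standing in for a network layer.
W = rng.normal(size=(32, 100)) / np.sqrt(100)   # 100-dim stimulus -> 32 activations

def activations(x):
    return W @ x

reference = rng.normal(size=100)   # a natural "stimulus"
target = activations(reference)    # internal response to reproduce

x = rng.normal(size=100)           # start from random noise
lr = 0.1
for _ in range(2000):
    err = activations(x) - target  # mismatch in the model's own space
    x -= lr * (W.T @ err)          # gradient step on 0.5 * ||W x - target||^2

# The synthetic stimulus now matches the model's internal response...
print(np.allclose(activations(x), target, atol=1e-4))
# ...yet can remain far from the reference itself -- the paper's key finding.
print(np.linalg.norm(x - reference))
```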

“They’re really not recognizable at all by humans. They don’t look or sound natural, and they don’t have interpretable features that a person could use to classify an object or word,” Feather said. Her team’s findings could soon help enhance AI programs.

How do AI programs work?

Image: Workflow of AI programs explained (photo credit: forbes.com)

Understanding how modern artificial intelligence models work can help explain this AI perception study. ChatGPT and similar tools rely on algorithms and embeddings.

Algorithms are rules computers follow to execute tasks. Meanwhile, Microsoft defines embeddings as “a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information-dense representation of the semantic meaning of a piece of text.” 

ChatGPT is arguably the most famous AI chatbot at the time of writing, so I will use it to explain embeddings and large language models. The latter contain vast numbers of words classified into many categories.

For example, an LLM may contain the words “penguin” and “polar bear.” Both would belong under a “snow animals” group, but the former is a “bird,” and the latter is a “mammal.”

Enter those words into ChatGPT, and the embeddings guide how its algorithms form results. Here are their most common functions, with a toy code sketch after the list:


  • Search: Embeddings rank queries by relevance.
  • Clustering: Embeddings group text strings by similarity.
  • Recommendations: OpenAI embeddings recommend related text strings.
  • Anomaly detection: Embeddings identify words with minimal relatedness.
  • Diversity measurement: Embeddings analyze how similarities spread among multiple words.
  • Classification: OpenAI embeddings classify text strings by their most similar label.
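As a rough sketch of how the search and clustering functions above operate, here are hand-made three-dimensional vectors for the penguin and polar bear example. Real embeddings are produced by a trained model and have hundreds or thousands of opaque dimensions, so these values and their “meanings” are purely illustrative.

```python
import numpy as np

# Hypothetical 3-d embeddings; imagine the axes loosely mean
# [lives-in-snow, is-a-bird, is-a-mammal].
embeddings = {
    "penguin":    np.array([0.9, 0.8, 0.1]),
    "polar bear": np.array([0.9, 0.1, 0.8]),
    "sparrow":    np.array([0.1, 0.9, 0.1]),
}

def cosine_similarity(a, b):
    # 1.0 means same direction (similar meaning); 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Search": rank every word by similarity to a query.
query = embeddings["penguin"]
for word, vec in embeddings.items():
    print(f"{word:10s} {cosine_similarity(query, vec):.2f}")
```

In this toy space, “penguin” sits slightly closer to “sparrow” (a fellow bird) than to “polar bear” (a fellow snow animal), showing how overlapping categories pull similarity scores in different directions.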

These features can make AI bots seem cold and robotic, but recent findings suggest they can show more emotional awareness than people. Zohar Elyoseph and his colleagues had human volunteers and ChatGPT describe emotional scenarios, then graded the responses with the Levels of Emotional Awareness Scale (LEAS).

ChatGPT, not the volunteers, earned the higher marks: it posted Z-scores of 2.84 and 4.26 across the two consecutive trials, significantly above human norms, and psychologists rated the accuracy of its responses at 9.7 out of 10.

Conclusion

MIT researchers discovered that artificial intelligence systems may treat unrelated objects and ideas as the same. Uncovering this flaw can guide experts in improving AI further.

We could improve AI perception with better training data or algorithms. Either way, artificial intelligence research will progress as the world uses the technology more widely.


Learn more about the AI invariance study on its Nature Neuroscience webpage. Moreover, follow more digital tips and trends at Inquirer Tech. 
