AI systems are not a catastrophic threat to humans – study

07:30 AM August 26, 2024

Artificial intelligence has been advancing rapidly, causing many to worry about its possible devastating consequences for humanity.

However, a study from the United Kingdom’s University of Bath and Germany’s Technical University of Darmstadt says AI systems aren’t as dangerous as most believe.

The researchers found that large language models (LLMs), the AI programs that power ChatGPT and similar tools, pose no existential threat on their own.

Training them on larger datasets makes them more effective at their tasks, but it is unlikely to give them complex reasoning skills.

Why are AI systems safer than previously thought?

The University of Bath website reports that Professor Iryna Gurevych of the Technical University of Darmstadt led the collaborative research team.

The group ran experiments to test the ability of LLMs to complete tasks they had never encountered before, which the university refers to as "emergent abilities."

For example, LLMs can answer questions about social situations without related training or programming.

The researchers attributed this to LLMs' well-known ability to complete tasks based on a few examples, known as "in-context learning" (ICL).
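In-context learning can be illustrated with a few-shot prompt. The sketch below (the task and example reviews are invented for illustration) shows how a model is given a handful of worked examples in the prompt itself; a capable LLM completing the final line would infer the task from those examples alone, with no retraining.

```python
# A few-shot (in-context learning) prompt: the model infers the task
# from the examples embedded in the prompt; its weights are not updated.
prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Great acting and a moving story.' -> positive\n"
    "Review: 'Dull plot, wooden performances.' -> negative\n"
    "Review: 'I would happily watch it again.' -> "
)

# An LLM asked to complete this prompt would be expected to answer
# "positive", having learned the task purely from the two examples above.
print(prompt)
```

The key point, and the one the study stresses, is that this kind of example-following is pattern completion, not independent reasoning or planning.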

Thousands of experiments led the researchers to conclude that a combination of these abilities accounts for both the models' capabilities and their limitations.

Dr. Harish Tayyar Madabushi, University of Bath computer scientist and study co-author, stated: 

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities, including reasoning and planning.” 

“Our study shows that the fear that a model will go away and do something completely unexpected, innovative, and potentially dangerous is not valid.” 

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats.” 

For her part, Gurevych reiterated that their findings do not suggest AI is completely safe.

However, users should remember that they can control how AI learns, which helps address proven risks such as the generation of misinformation.


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
