AI systems are not a catastrophic threat to humans – study
Artificial intelligence has been advancing rapidly, prompting widespread worry about potentially devastating consequences for humanity.
However, a study from the United Kingdom’s University of Bath and Germany’s Technical University of Darmstadt says AI systems aren’t as dangerous as most believe.
Their researchers found that large language models (LLMs), the AI programs that power ChatGPT and similar tools, do not pose the existential safety risks many fear.
Training them on larger datasets makes them perform their tasks more effectively, but it is unlikely to give them complex reasoning skills.
Why are AI systems safer than previously thought?
According to the University of Bath website, Professor Iryna Gurevych of the Technical University of Darmstadt led the collaborative research team.
The group ran experiments to test the ability of LLMs to complete tasks they had never encountered before, which the university calls emergent abilities.
For example, LLMs can answer questions about social situations without related training or programming.
The researchers said LLMs do this through their well-known ability to complete tasks based on a few examples supplied in the prompt, known as "in-context learning" (ICL).
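To illustrate the idea, in-context learning means the task is demonstrated inside the prompt itself rather than through retraining or reprogramming. The short Python sketch below builds a hypothetical few-shot prompt for a politeness-labeling task; the task, examples, and labels are illustrative assumptions, not material from the study.

```python
# Minimal sketch of in-context learning (ICL): instead of retraining the
# model, the task is demonstrated with a few worked examples inside the
# prompt, and the model is expected to continue the pattern.
# The task and examples below are hypothetical, chosen for illustration.

few_shot_prompt = """Decide whether each remark is polite or impolite.

Remark: "Would you mind passing the salt?"
Label: polite

Remark: "Give me the salt. Now."
Label: impolite

Remark: "Could you possibly help me with this form?"
Label:"""

# A large language model given this prompt would typically complete it
# with "polite" -- inferring the task from the examples shown, rather
# than from any task-specific training.
print(few_shot_prompt)
```

The point is that the model's apparent grasp of a new task comes from pattern completion over the examples it is shown, not from independently acquired reasoning.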
Across thousands of experiments, the researchers found that a combination of such abilities, rather than any emergent reasoning, accounts for both the capabilities and the limitations of LLMs.
Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, stated:
“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities, including reasoning and planning.”
“Our study shows that the fear that a model will go away and do something completely unexpected, innovative, and potentially dangerous is not valid.”
“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats.”
For her part, Gurevych reiterated that the findings do not suggest AI is completely safe.
However, users remain in control of what AI learns, which helps address proven risks such as the generation of misinformation.