New training lets AI learn words like humans

Scientists have developed a new AI training method that gives artificial intelligence an essential human skill: combining two learned concepts into a new one. Their Meta-learning for Compositionality (MLC) approach has the AI practice word-composition tasks over and over. Eventually, it may remove the need to retrain the machine each time it encounters a new concept. 

AI companies like OpenAI say their work will eventually lead to a machine that can think and behave like humans. Their research has already given the world generative AI tools that can produce nearly any media we want. Soon, artificial intelligence may produce new ideas without human intervention, paving the way for more opportunities. 

This article will discuss this revolutionary AI training technique. Later, I will explain how much artificial intelligence has progressed since ChatGPT became mainstream.

How did scientists create their new AI training?

Let’s briefly discuss how generative artificial intelligence tools work before we tackle MLC. ChatGPT and similar tools rely on large language models, which are neural networks trained on vast amounts of text so they learn how words relate to one another. 

The model matches the user’s words against what it has learned and combines them into coherent, relevant answers. For example, once it knows the word “jump,” it can handle phrases like “jump twice” or “jump around right twice.”

However, the program struggles when it receives a word it has never seen, such as “spin.” Traditionally, you would have to retrain the entire LLM to handle that word, which is a painstaking, expensive process.
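To make the idea of composition concrete, here is a minimal Python sketch of a toy interpreter that expands commands such as “jump twice” into actions. The vocabulary and modifier rules below are illustrative assumptions for this sketch, not the researchers’ actual task definitions.

```python
# Toy illustration of compositional commands, loosely in the spirit of the
# "jump twice" examples above. The word lists and rules are made up for
# this sketch, not taken from the study.

PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN", "look": "LOOK"}
TURNS = {"left": "LTURN", "right": "RTURN"}
REPEATS = {"twice": 2, "thrice": 3}

def interpret(command: str) -> list[str]:
    """Expand a command like 'jump around right twice' into a list of actions."""
    words = command.lower().split()
    repeat = 1
    if words and words[-1] in REPEATS:   # trailing repeat modifier
        repeat = REPEATS[words.pop()]

    verb = words[0]
    if verb not in PRIMITIVES:           # an unknown word like "spin"
        raise ValueError(f"Unknown word: {verb!r}")

    if len(words) == 3 and words[1] == "around" and words[2] in TURNS:
        # "around <direction>": turn and act four times to complete a circle
        unit = [TURNS[words[2]], PRIMITIVES[verb]] * 4
    elif len(words) == 2 and words[1] in TURNS:
        unit = [TURNS[words[1]], PRIMITIVES[verb]]
    else:
        unit = [PRIMITIVES[verb]]

    return unit * repeat

print(interpret("jump twice"))               # ['JUMP', 'JUMP']
print(interpret("jump around right twice"))  # turn-and-jump circle, done twice
try:
    interpret("spin")                        # a word the system never learned
except ValueError as err:
    print(err)
```

A hand-written interpreter like this only knows the words it was built with, which mirrors the retraining problem described above: anything outside its fixed vocabulary fails.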

That is why scientists created a new way of training artificial intelligence: Meta-learning for Compositionality. As mentioned, it trains the AI tool to apply composition rules to newly learned words. 

The training also gives the model feedback on whether it applied the rules correctly. The researchers used the following steps to test their AI training method: 

  1. They had human volunteers complete the same word-composition tasks under the same rules. 
  2. Then, they recorded the errors the volunteers made. 
  3. Afterward, they trained the AI to learn as it completed tasks (see the sketch after this list), rather than having it follow a static data set as in conventional training. 
  4. The experts compared AI and human performance, factoring in the error patterns the volunteers produced.
  5. The AI program produced answers almost identical to those from the human volunteers. 
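To illustrate what “learning as it completes tasks” can look like, here is a minimal, hypothetical Python sketch of how meta-learning episodes might be constructed: each episode assigns fresh meanings to made-up words, shows a few study examples, and then asks about a new combination. The pseudowords, actions, and episode format are assumptions for illustration only; the actual MLC setup differs in its details.

```python
import random

# Hypothetical sketch of meta-learning episodes in the spirit of MLC.
# Because each episode reassigns word meanings, a model trained across
# many episodes must learn the *skill* of composing from a few study
# examples rather than memorizing fixed word meanings.

PSEUDOWORDS = ["dax", "wif", "lug", "zup"]
ACTIONS = ["RED", "BLUE", "GREEN", "YELLOW"]
MODIFIERS = {"twice": 2, "thrice": 3}

def make_episode(seed: int) -> dict:
    rng = random.Random(seed)
    # Fresh, random word-to-action mapping that holds for this episode only.
    mapping = dict(zip(PSEUDOWORDS, rng.sample(ACTIONS, len(PSEUDOWORDS))))

    # Study examples: show what each bare word means in this episode.
    study = [(word, [action]) for word, action in mapping.items()]

    # Query: a novel combination the learner has never seen spelled out.
    word = rng.choice(PSEUDOWORDS)
    modifier, count = rng.choice(list(MODIFIERS.items()))
    query = (f"{word} {modifier}", [mapping[word]] * count)

    return {"study": study, "query": query}

episode = make_episode(seed=0)
print("Study examples:", episode["study"])
print("Query:", episode["query"][0], "->", episode["query"][1])
```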

“For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” New York University scientist Brenden Lake said. 

“We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

Elia Bruni, a specialist in natural language processing at the University of Osnabrück in Germany, said the study could make AI programs more efficient learners. 

What are the capabilities of modern AI?

ChatGPT opened the world’s eyes to the capabilities of artificial intelligence. It amazed everyone, generating nearly any text imaginable, from jokes to research papers.

However, it has become so advanced that scientists are struggling to tell human-written research papers from AI-generated ones. AI-generated studies are increasingly slipping past expert reviewers, presenting a new challenge for peer review.

Dr. Catherine Gao, a physician-scientist from Northwestern University, tested whether ChatGPT could create a convincing research paper. “I wondered if it could write scientific abstracts,” Gao said. 

“I asked it to write an abstract about a hypothetical machine-learning study focusing on pneumonia in the intensive care unit,” she added. The program surprised her with a “scarily good abstract.”

Even before the Meta-learning for Compositionality study, ChatGPT displayed surprisingly high emotional awareness in a separate experiment, despite being unable to exhibit or report emotions at the time of writing. 

Zohar Elyoseph, Dorit Hadar-Shoval, Kfir Asraf, and Maya Lvovsky presented ChatGPT with scenarios from the Levels of Emotional Awareness Scale. 

The scale usually asks human respondents to imagine themselves in various scenarios and describe what “you” would feel. The researchers replaced “you” with “human” in the prompts, since a machine learning model has no feelings of its own to report.

Two separate testing sessions helped the experts validate the results. The first produced a Z-score of 2.84, and the second 4.26. A Z-score is a statistical measure of how far a value lies from the average, expressed in standard deviations.

Z-scores well above 1 indicate performance far above the human average, meaning ChatGPT exhibited higher emotional awareness than most people. Moreover, its responses were judged highly accurate, earning a 9.7 out of 10.
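For readers unfamiliar with the statistic, here is a minimal worked example in Python showing how a Z-score is computed. The mean, standard deviation, and score below are made-up numbers for illustration, not figures reported in the study.

```python
# Minimal Z-score example. All values here are illustrative assumptions.
human_mean = 70.0      # hypothetical average human score on the scale
human_sd = 10.0        # hypothetical standard deviation of human scores
chatgpt_score = 98.4   # hypothetical score for ChatGPT

z = (chatgpt_score - human_mean) / human_sd
print(z)  # 2.84 -> the score sits 2.84 standard deviations above the mean
```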

Conclusion

Researchers have created a new AI training method that gives artificial intelligence a crucial human skill. It enables AI tools to combine learned concepts into new ones. 

Previously, we had to retrain AI models each time humans invented a new idea or object. Soon, artificial intelligence could create new concepts by itself.

Imagine if we applied that capability to AI programs that generate other media, such as videos and images! Learn more about the latest digital tips and trends at Inquirer Tech. 
