
ChatGPT Hallucinations Could Be Eliminated With New OpenAI Solution

12:01 AM June 02, 2023

One of the biggest limitations of ChatGPT is its tendency to cite made-up facts. OpenAI acknowledges this flaw and announced a potential solution on May 31, 2023: process supervision. The AI company plans to change how its world-renowned chatbot checks its information. As a result, the update could reduce ChatGPT “hallucinations” and make the chatbot easier to regulate.

ChatGPT has significantly impacted the world despite being publicly available for only six months. Yet OpenAI knows that its bot still has room for improvement. More importantly, the company claimed that reducing AI factual errors would be a major step toward creating artificial intelligence that can reason like humans.

This article will discuss how OpenAI could reduce ChatGPT hallucinations by implementing a new data assessment system. Then, I will cover expert opinions both supporting and doubting this proposal.


How would OpenAI solve ChatGPT hallucinations?


AI chatbots carry out user commands using large language models (LLMs). These models use patterns learned from vast amounts of training data to connect the words of a prompt and generate a response.
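To make that concrete, here is a toy Python sketch of next-token generation. The probability table and function names are purely illustrative assumptions, not OpenAI's actual model, but they show how a response gets assembled one likely token at a time, and how an unlucky low-probability pick can slip in.

```python
import random

# Pretend next-token probabilities "learned" from training data (purely illustrative).
NEXT_TOKEN_PROBS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.7, "Japan": 0.3},
    ("of", "France"): {"is": 1.0},
    ("France", "is"): {"Paris": 0.8, "Lyon": 0.2},  # a low-probability pick here reads like a "hallucination"
}

def generate(prompt_tokens, max_new_tokens=4):
    """Repeatedly predict a likely next token given the text so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        context = tuple(tokens[-2:])              # condition on the last two tokens
        probs = NEXT_TOKEN_PROBS.get(context)
        if not probs:                             # no learned pattern for this context
            break
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights)[0])  # sample the next token
    return " ".join(tokens)

print(generate(["the", "capital"]))  # e.g. "the capital of France is Paris"
```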

Most bots like ChatGPT are trained with feedback only on their final outputs, a method called outcome supervision. However, that is also part of why the AI bot cites made-up information.


Outcome supervision does not check how the chatbot derived its answer. Consequently, the AI program can produce made-up facts whenever the patterns it follows lead its reasoning astray.


On May 31, OpenAI announced a proposed solution: process supervision. The company would train its artificial intelligence with feedback on each step of its reasoning instead of only the end result.
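As a rough illustration of the difference, the toy Python sketch below grades a short arithmetic solution two ways: an outcome-style check that only looks at the final answer, and a process-style check that grades every step. The example solution and the checkers are hypothetical stand-ins for OpenAI's learned reward models, not its actual system.

```python
# Toy grader for a short arithmetic "chain of thought" (hypothetical stand-in
# for a learned reward model).
solution_steps = [
    "6 * 4 = 25",   # faulty reasoning step (6 * 4 is actually 24)
    "25 - 1 = 24",  # arithmetically fine, but it only papers over the earlier mistake
    "Answer: 24",
]
CORRECT_ANSWER = "24"

def outcome_supervision(steps):
    # Feedback on the final result only: flawed reasoning slips through
    # as long as the last line happens to match the expected answer.
    final = steps[-1].split(":")[-1].strip()
    return {"final answer correct": final == CORRECT_ANSWER}

def process_supervision(steps):
    # Feedback on every intermediate step, so the faulty one is pinpointed.
    feedback = {}
    for step in steps[:-1]:
        lhs, rhs = step.split("=")
        feedback[step] = abs(eval(lhs) - float(rhs)) < 1e-9  # eval is fine for this toy demo
    return feedback

print(outcome_supervision(solution_steps))  # {'final answer correct': True}  -- error goes unnoticed
print(process_supervision(solution_steps))  # {'6 * 4 = 25': False, '25 - 1 = 24': True}
```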


Consequently, it could significantly reduce ChatGPT hallucinations. More importantly, process supervision could encourage its LLM to follow a human-approved process and produce interpretable reasoning.

In other words, people could understand how the chatbot arrives at its solutions more easily than ever. This upgrade has numerous potential benefits for ChatGPT and its users.


As mentioned, ChatGPT could reduce errors in its results. Moreover, process supervision could enable the bot to solve more complicated tasks that require multi-step reasoning.

In addition, people may find it easier to understand how ChatGPT works. Also, making it follow a “human-approved process” means we could steer how it reasons.


People could find it easier to make chatbots that suit their needs. As a result, more folks could create products and services with this technology.

Process supervision could bring ChatGPT closer to human-like reasoning. Consequently, OpenAI believes it could be another step toward creating artificial general intelligence, or AGI.

However, beyond its tests on mathematical problems, the company has not provided evidence of the method's effectiveness. Thus, we do not know whether the new system would enable the AI bot to produce better results elsewhere.

What do experts say about the OpenAI proposal?


So far, the AI company has only demonstrated the effects of process supervision on mathematical problems. Ben Winters, senior counsel at the Electronic Privacy Information Center, told CNBC that he wants to see more evidence.

“I just don’t think that this alone does any significant mitigation of concerns about misinformation and incorrect results when it’s actually being used in the wild,” Winters said.

Also, he said, “It definitely matters whether they plan on implementing whatever they have found through their research here [into their products], and if they’re not, that does bring some fairly serious questions about what they are willing to release into the public.”


Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University, says the process supervision paper is merely a preliminary observation. “This will need to shake out in the research community before we can say anything about this.”

“In this world, there are a lot of results that come out very regularly, and because of the overall instability in how large language models work, what might work in one setting, model, and context may not work in another setting, model, and context.”

“Some of the hallucinatory stuff that people have been concerned about is [models] making up citations and references. There is no evidence in this paper that this would work for that. It’s not that I’m saying it won’t work; I’m saying that this paper does not provide that evidence.”

Conclusion

OpenAI proposed using process supervision to reduce ChatGPT hallucinations. However, the paper explaining this process has not undergone peer review.

OpenAI researcher Karl Cobbe said the AI firm “will likely submit [the paper] to a future conference for peer review.” However, OpenAI has not specified when it would apply process supervision to ChatGPT.


Time will tell whether the new approach will deliver the expected results. In the meantime, follow the latest tips and trends in artificial intelligence, gadgets, social media, and more.

TOPICS: AI, ChatGPT, interesting topics, Trending


