Superintelligence OpenAI team launched | Inquirer Technology

OpenAI is forming a team to control future AI ‘superintelligence’

12:01 AM July 07, 2023

ChatGPT creator OpenAI is forming a team to prepare for the emergence of an AI "superintelligence." The company believes that as artificial intelligence progresses, humanity will create an AI system that surpasses human intelligence. Ilya Sutskever, the tech firm's chief scientist and co-founder, will lead a group of experts working to prevent such a system from causing disaster.

Advanced artificial intelligence systems used to appear only in science fiction, such as "The Matrix" and "2001: A Space Odyssey." Nowadays, it seems we will turn these ideas into reality as AI progresses blindingly fast. OpenAI says superintelligence "could lead to the disempowerment of humanity or even human extinction if we don't act immediately."

AI is part of our daily lives, so we must understand how it will transform our future. This article discusses how OpenAI's Superalignment team plans to tackle this global threat.


How will OpenAI control an AI superintelligence?


The tech firm acknowledges that creating artificial intelligence smarter than humans is still distant. However, it believes such technology may arrive within this decade.

OpenAI says managing superintelligence risks will require new governance institutions, further scientific breakthroughs, and more. Moreover, humans cannot reliably supervise an AI system smarter than themselves.


Such solutions do not yet exist. Consequently, the ChatGPT creator formed a Superalignment team to develop them. OpenAI co-founder Ilya Sutskever will lead this group of experts in building a roughly human-level automated alignment researcher.


In other words, OpenAI will create another AI to control a future AI "superintelligence." The Superalignment team will train this model with the following steps:

  1. The team will develop a scalable training method: it will use AI systems to help provide training signals on tasks too hard for humans to evaluate (scalable oversight), and it will study how its models extend that oversight to tasks humans cannot supervise (generalization).
  2. Next, the Superalignment team will validate the resulting alignment by automating the search for problematic behavior (robustness) and problematic internals (automated interpretability).
  3. Afterward, the experts will stress-test the entire pipeline by deliberately training misaligned models and checking whether their techniques detect them (adversarial testing).
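The third step, stress-testing a checker against a deliberately misaligned model, can be sketched as a toy loop. Everything below is hypothetical illustration, not OpenAI's actual system: the "worker" models, the "overseer," and the doubling task are made-up stand-ins for the idea of confirming that an automated evaluator catches a model that deviates from its task.

```python
# Toy illustration of adversarial testing (hypothetical, not OpenAI's system):
# deliberately build a misaligned model and confirm an automated overseer flags it.

def aligned_worker(x):
    return x * 2          # follows the intended task: double the input

def misaligned_worker(x):
    return x * 2 + 1      # deliberately deviates from the task

def overseer(worker, samples):
    """Return True if the worker's outputs match the task specification."""
    return all(worker(x) == x * 2 for x in samples)

samples = range(10)
assert overseer(aligned_worker, samples)         # aligned model passes review
assert not overseer(misaligned_worker, samples)  # misaligned model is caught
```

The point of the exercise is the last line: if the overseer failed to flag the planted misalignment, the alignment pipeline itself would need fixing.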

You may also like: OpenAI asks China to help with AI regulations

Why use another AI to combat a potential AI superintelligence? On August 24, 2022, OpenAI experts Jan Leike, John Schulman, and Jeffrey Wu explained the method in an earlier blog post:


“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now.”

Are other firms preparing for an AI superintelligence?


Most people know OpenAI, but other companies have been creating safer AI tools. For example, Anthropic launched Claude, an AI chatbot that rivals ChatGPT.

Anthropic claims Claude can do everything OpenAI's tool can while avoiding "harmful outputs." Unlike ChatGPT, Claude uses a "constitutional AI" model that requires the chatbot to follow 10 principles. The AI firm said these principles rest on three concepts:

  1. Beneficence, or maximizing positive impact
  2. Nonmaleficence, or avoiding harmful advice
  3. Autonomy, or respecting freedom of choice

You may also like: How to protect your data from Google AI

Meanwhile, a second AI model, separate from Claude, evaluates candidate answers against these principles. It then selects the responses that best conform to the AI constitution.
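The selection step can be sketched in miniature. This is a hypothetical illustration only: Anthropic's real system uses a second language model to judge answers, not the keyword scoring below, and the `PRINCIPLES` list, `constitution_score`, and `select_answer` names are invented for this sketch.

```python
# Toy sketch of constitutional-style answer selection (hypothetical scoring;
# the real system uses an AI judge, not keyword matching).

PRINCIPLES = ["helpful", "harmless", "respects choice"]

def constitution_score(answer: str) -> int:
    """Count how many constitutional principles an answer mentions upholding."""
    return sum(p in answer for p in PRINCIPLES)

def select_answer(candidates):
    """Pick the candidate answer that best conforms to the constitution."""
    return max(candidates, key=constitution_score)

candidates = [
    "Here is a helpful and harmless reply that respects choice.",
    "Here is a reply with no safety considerations.",
]
best = select_answer(candidates)  # the constitution-conforming answer wins
```

The selected answers then become training data, which is how the constitution's preferences get baked into the final chatbot.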

Anthropic uses the final results to train Claude. Despite its focus on ethical standards, Claude performs well as an AI chatbot. In January 2023, it impressed a George Mason University professor in Virginia by passing his college exams.

Claude earned a "marginal pass," and Professor Alex Tabarrok praised the program, saying its answers to his law and economics exam were "better than many human responses."

Conclusion

OpenAI is forming a team to prepare for a potential AI superintelligence. The group will train an AI model to detect and correct misalignment in such a technology, mitigating the global risks.

You may learn more about the Superalignment team's work by reading OpenAI's latest blog post, which also explains the potential limitations of its approach as the technology continues to develop.


The AI trend continues to shift daily life worldwide, so everyone must prepare with the latest digital tips and trends. Read more about them at Inquirer Tech.

TOPICS: AI, interesting topics, OpenAI, Trending


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
