Former OpenAI researcher shares the next 10 years of AI


07:11 AM June 10, 2024

Business Insider reported on a 165-page essay from fired OpenAI employee Leopold Aschenbrenner. The outlet says he worked on the AI firm’s Superalignment team, which works to mitigate AI risks.

The former employee says the tech giant fired him for leaking information about the company’s readiness for artificial general intelligence. AGI refers to an AI system that can think like humans. 


Aschenbrenner noted that he based his report on “publicly available information, my own ideas, general field knowledge, or SF gossip.” His findings may offer the world a glimpse of our AI-dominated future as it unfolds.


What did the former OpenAI researcher say?

Business Insider summarized the report using GPT-4, the large language model behind ChatGPT. It said, “Leopold Aschenbrenner’s essay discusses the transformative potential of artificial general intelligence (AGI) and superintelligence, and forecasts significant advancements in AI technology in the near future.”

More importantly, the essay claims AI companies may create AGI faster than anticipated, based on current technological advancements. Here are the other crucial points from the former OpenAI researcher:

  1. AGI by 2027: Leopold Aschenbrenner says the world may create artificial general intelligence by 2027, based on the pace of progress from GPT-2 to GPT-4. 
  2. Superintelligence following AGI: The former Superalignment team member predicts an “intelligence explosion,” where AI goes from human-level to superhuman capabilities. He says this transition is likely because AI can automate and accelerate its own research and development.
  3. Trillion-dollar AI clusters: Aschenbrenner says the AI sector will receive more corporate and government funding as more institutions worldwide prepare for AGI and superintelligence.
  4. National and global security dynamics: Countries may enact stricter national security measures to manage and control AI developments. However, international competition, especially between the US and China, may intensify and lead to an “all-out war.”
  5. Superalignment challenges: Aschenbrenner says we may struggle to ensure AI systems act in line with human values and interests.
  6. Societal and economic transformations: The former AI researcher believes AI will significantly affect society and the economy. Specifically, it may restructure industries and the job market as the technology takes over numerous jobs previously done by humans.
  7. US government involvement: The US government may take a direct role in AI development by 2027 or 2028 due to the technology’s strategic importance.
  8. Technological mobilization: Countries may mobilize infrastructure and resources to support artificial intelligence as part of their national policies. 

These trends may arrive sooner, as OpenAI has already moved from GPT-4 to GPT-4o as its flagship model. Follow Inquirer Tech to stay up to date.


TOPICS: AI, technology
