Sam Altman admits OpenAI doesn’t understand how AI works

Tech giants OpenAI, Microsoft, Google, and others want to transform the world with artificial intelligence (AI). They are rapidly building and launching AI-powered products and services that are reshaping many parts of daily life.

Most would assume these companies know how their technologies work. Surprisingly, however, Sam Altman, the head of the company that started the AI revolution, admitted he doesn’t know how AI operates.

Last week, The Atlantic CEO Nicholas Thompson asked Altman about AI’s nuts and bolts. Altman could not give a clear answer.

What did Sam Altman say about AI?

Innovation news outlet Observer reported on Altman’s remarks from a May 30 interview, which Thompson hosted at the International Telecommunication Union (ITU) AI for Good Global Summit.


The AI leader discussed AI safety and the technology’s potential benefits. However, he did not provide a satisfactory answer when asked how GPT works.

“We certainly have not solved interpretability,” Altman responded. Interpretability, also called explainability, is the ability to understand how an AI system arrives at its decisions.

“If you don’t understand what’s happening, isn’t that an argument to not keep releasing new, more powerful models?” asked Thompson.

Altman meandered until he said, “These systems [are] generally considered safe and robust.” 

“We don’t understand what’s happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules and can ask you to explain why you think something,” he added.


AI experts call this the “black box problem”: even the creators of an AI system often cannot explain how it works.

For example, Google said its Bard chatbot learned Bengali despite not being specifically trained on the language.

Altman nevertheless emphasized that OpenAI prioritizes safety and security, noting that the company has begun training its next flagship model on its “path to AGI.”

Artificial general intelligence, or AGI, refers to an AI with human-like cognitive abilities, such as self-awareness and the capacity to learn new things.

“It does seem to me that the more we can understand what’s happening in these models, the better,” Altman declared, despite admitting that he does not fully understand AI.

“I think that can be part of this cohesive package to how we can make and verify safety claims,” the AI leader added.
