OpenAI lobbied to weaken EU regulations on AI
TIME has released an exclusive report on OpenAI CEO Sam Altman lobbying the European Union to “water down” its AI Act. The news outlet cites documents on the company’s engagement with officials from the European Commission and says many of the AI firm’s proposals made it into the final text of the EU law.
Sam Altman has spent several months meeting world leaders about AI regulation. However, his lobbying in the EU stands in stark contrast to his recent worldwide campaign for oversight. As artificial intelligence systems like his ChatGPT become more prevalent, it is worth following the issues that shape their impact on daily life.
This article will discuss OpenAI’s lobbying on the European Union’s AI Act. Later, I will cover the CEO’s other efforts to shape artificial intelligence regulation, especially in the United States.
How did OpenAI lobby for reduced AI regulations?
In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems to be “high risk” https://t.co/R3yfjzro7e
— TIME (@TIME) June 20, 2023
TIME says it obtained the documents via freedom of information requests. According to its report, OpenAI repeatedly argued to European officials that the AI Act should be more lenient toward its general-purpose AI programs.
These include GPT-3, the large language model behind ChatGPT’s ability to understand and generate text. The company also cited its image generator, DALL-E.
The company suggested removing these systems from the AI Act’s “high risk” classification, a designation that carries strict legal obligations such as traceability, transparency, and human oversight.
OpenAI argued that compliance with the law’s most stringent requirements should apply only to companies that deliberately apply AI to high-risk use cases, not to companies that merely build general-purpose AI systems.
TIME cited the firm’s statements in a previously unpublished seven-page document sent to EU Commission and Council officials in September 2022, titled “OpenAI White Paper on the European Union’s Artificial Intelligence Act”:
“By itself, GPT-3 is not a high-risk system. But [it] possesses capabilities that can potentially be employed in high-risk use cases.” TIME has published the full White Paper on its website.
The publication notes that OpenAI’s lobbying efforts in Europe had not been reported before. However, Altman has been mentioning the legislation more often. In May, he told reporters in London that OpenAI might “cease operating” in Europe if the company could not comply with the regulation.
He also said he had “a lot” of criticisms of the draft law. Altman later walked back the warning, saying his AI firm has no intention of leaving and will cooperate with the European Union.
What were OpenAI’s AI discussions in other countries?
TIME contrasted the company’s recent EU lobbying with its appeals to strengthen AI regulation in the United States. On May 17, 2023, Altman testified at the country’s first-ever AI Senate hearing.
Surprisingly, the public and private sectors found common ground on regulating artificial intelligence. Senator Dick Durbin noted, “I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.”
Altman told the US Senate that Section 230 needs amending. That provision of the 1996 Communications Decency Act states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The ChatGPT creator said this law does not apply to generative AI, so the US needs a new framework to hold companies liable for the artificial intelligence systems they offer. On June 10, he spoke at a conference in Beijing hosted by the Beijing Academy of Artificial Intelligence, where he asked China to help create AI guidelines:
“With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher. China has some of the best AI talents in the world, and fundamentally, given the difficulties in solving alignment for advanced AI systems, this requires the best minds from around the world.”
He delivered a similar message in another country a day before the Beijing event, calling for international generative AI regulations during a high-profile trip to South Korea.
Conclusion
TIME recently reported on OpenAI’s lobbying of the European Union for weaker AI regulations. According to the report, many of the company’s proposals made it into the final text of the AI Act.
Nevertheless, artificial intelligence continues to improve, and governments must keep up with its rapid advancement. Interestingly, OpenAI itself has suggested some such measures.
Soon, you will live with these new regulations in your country. Learn more about how artificial intelligence and other technologies shape our world by following Inquirer Tech.