G7 creates a voluntary AI code of conduct
On October 30, 2023, the G7 countries announced the International Code of Conduct for Organizations Developing Advanced AI Systems. It builds on the earlier Hiroshima AI Process to promote AI safety and trustworthiness. The voluntary code arrived the same day the United States issued its AI executive order.
Artificial intelligence use continues to spread worldwide, so it is no surprise that countries are collaborating on global guidelines. These rules will soon shape how you use AI programs daily, so it pays to understand how they work. That way, you can avoid negative repercussions and get the most out of the technology.
This article will discuss the G7’s new AI code of conduct. Later, I will cover the latest US AI executive order, which could also affect artificial intelligence use worldwide.
What are the parts of the AI code of conduct?
The latest AI code of conduct comes from the G7 members (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) along with the European Union. VentureBeat says it consists of the following points:
- Take appropriate measures throughout development to identify, evaluate, and mitigate risks. Developers should enable traceability with datasets, processes, and decisions.
- Identify and mitigate vulnerabilities, incidents, and patterns of misuse after deployment. This can include monitoring for emerging risks and making it easy for third parties and users to discover and report incidents.
- Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use. This should include transparency reporting that is supported by “robust documentation processes.”
- Work towards responsible information-sharing and reporting of incidents.
- Develop, implement, and disclose AI governance and risk management policies. This applies to personal data, prompts, and outputs.
- Invest in and implement security controls, including physical security, cybersecurity, and insider threat safeguards.
- Develop and deploy reliable content authentication and provenance mechanisms such as watermarking. Provenance data should include an identifier of the service or model that created the content, and disclaimers should inform users that they are interacting with an AI system (a minimal sketch of such a provenance record follows this list).
- Prioritize research to mitigate societal, safety, and security risks. This can include conducting, collaborating on, and investing in research and developing mitigation tools.
- Prioritize the development of AI systems to address “the world’s greatest challenges,” including the climate crisis, global health, and education. Also, organizations should support digital literacy initiatives.
- Advance the development and adoption of international technical standards. This includes contributing to shared best practices as well as to the standards themselves.
- Implement appropriate data input measures and protections for personal data and intellectual property. This should include appropriate transparency of training datasets.
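The content authentication point is the most concrete technical item on the list. The code of conduct does not prescribe any particular format, so what follows is only a minimal Python sketch of what a provenance record could contain: a model identifier, an AI disclosure notice, and a tamper-evident signature. Every name in it (build_provenance_manifest, verify_manifest, the example-model-v1 identifier, and the signing key) is a hypothetical illustration, not anything specified in the G7 text.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; a real deployment would use proper key management.
SIGNING_KEY = b"example-provenance-key"


def build_provenance_manifest(content: str, model_id: str) -> dict:
    """Build the kind of provenance record the code of conduct describes:
    an identifier of the model that created the content, an AI disclosure,
    and a signature so tampering with the record is detectable."""
    record = {
        "model_id": model_id,  # which service/model produced the content
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "disclosure": "This content was generated by an AI system.",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_manifest(content: str, record: dict) -> bool:
    """Recompute the signature and content hash to confirm the record
    matches the content and has not been altered."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content.encode()).hexdigest()
    )


if __name__ == "__main__":
    text = "An AI-generated summary of today's news."
    manifest = build_provenance_manifest(text, model_id="example-model-v1")
    print(json.dumps(manifest, indent=2))
    print("verified:", verify_manifest(text, manifest))
```

In practice, provenance schemes also need to survive copying and format conversion, which is why the code of conduct mentions watermarking alongside metadata-style records like this one.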
What is the other influential set of AI regulations?
As mentioned, the United States recently announced its AI executive order. It includes clear, comprehensive goals, such as creating AI safety standards and promoting innovation and cooperation.
White House Deputy Chief of Staff Bruce Reed said it is "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust." Meanwhile, White House Senior Advisor for AI Ben Buchanan said the executive order would likely evolve as the technology progresses.
“I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do,” Buchanan told PopSci via phone call. “We have to do safety and security, and we have to do civil rights and equity.”
“We have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”
“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development and that they share the tests of those systems in accordance with those standards,” stated the senior advisor. “Before it goes out to the public, it needs to be safe, secure, and trustworthy.”
On the other hand, critics say the White House launched its executive order too late and with limited impact. “A lot of the AI tools on the market are already illegal,” said Albert Fox Cahn, executive director of the tech privacy advocacy nonprofit Surveillance Technology Oversight Project.
“[M]any of these proposals are simply regulatory theater, allowing abusive AI to stay on the market,” Cahn added. “The White House is continuing the mistake of over-relying on AI auditing techniques that can be easily gamed by companies and agencies.”
Conclusion
The G7 created an AI code of conduct the same day the United States issued its new AI executive order, so similar rules will likely appear in more countries.
More nations need AI rules as more of their citizens use the technology, and drawing on a global standard could prove more practical and effective than drafting divergent national rules from scratch.
The wave of new legislation reflects how widely people now use artificial intelligence. In response, everyone should learn how to use it to maximize the benefits and minimize the risks. Learn more at Inquirer Tech.