There is no denying that artificial intelligence (AI) is becoming part of our daily lives. However, we need proper rules to reduce the risks this technology poses.
Many countries have introduced AI laws, but until now none offered concrete, specific measures.
The European Union (EU) has released the world's first comprehensive AI regulations. Because artificial intelligence is used around the world, these rules may influence AI legislation in other countries.
That likely includes your own, so it is worth understanding how these regulations could change the way you use AI.
How does the AI law work?
The AI regulations classify AI systems into minimal, high, and unacceptable risk categories, then impose obligations based on that classification.
High-risk AI systems fall into two categories:
- AI systems used in products covered by the EU's product safety legislation, such as toys, cars, medical devices, and aviation equipment.
- AI systems in specific areas, such as education, employment and worker management, and law enforcement, which must be registered in an EU database.
The AI law states that tools posing unacceptable risks are a threat to people. These include the following:
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, such as facial recognition
- Cognitive behavioral manipulation, such as voice-activated toys that encourage dangerous behavior in children
- Programs that classify people based on behavior, socioeconomic status, or personal traits
The law also sets special requirements for generative AI such as ChatGPT. These systems are not classified as high-risk, but they must comply with EU copyright law and transparency requirements:
- Disclosing that AI created specific content
- Designing the AI model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
Thierry Breton, the European commissioner for the internal market, hailed the new AI law. “Europe is NOW a global standard-setter in AI,” he said on X.
However, Yahoo Finance reported that civil society groups raised concerns about the new EU regulations. Specifically, Corporate Europe Observatory (CEO) and LobbyControl wrote the following in a statement:
“This one-sided influence meant that ‘general purpose AI,’ was largely exempted from the rules and only required to comply with a few transparency obligations.”