OpenAI’s ChatGPT breaches privacy rules, says Italy watchdog
MILAN — Italy’s data protection authority has told OpenAI that its artificial intelligence chatbot application ChatGPT breaches data protection rules, the watchdog said Monday as it presses ahead with an investigation started in 2023.
The authority, known as Garante, is one of the European Union’s most proactive in assessing AI platform compliance with the bloc’s data privacy regime. Last year it banned ChatGPT over alleged breaches of EU privacy rules.
The service was subsequently reactivated after OpenAI addressed issues concerning, among other things, the right of users to decline to consent to its use of personal data to train its algorithms.
At the time, the regulator said it would continue its investigations. It has since concluded that there are elements indicating one or more potential data privacy violations, it said in a statement, without providing further detail.
OpenAI did not immediately respond to a request for comment.
The Garante on Monday said that Microsoft-backed OpenAI has 30 days to present defense arguments, adding that its investigation would take into account work done by a European task force comprising national privacy watchdogs.
Italy was the first Western European country to curb ChatGPT, whose rapid development has attracted attention from lawmakers and regulators.
Under the EU’s General Data Protection Regulation (GDPR) introduced in 2018, any company found in breach of the rules faces fines of up to 4 percent of its global turnover.
In December EU lawmakers and governments agreed provisional terms for regulating AI systems such as ChatGPT, moving a step closer to setting rules governing the technology.