VIENNA—A Vienna-based privacy campaign group said on Monday it would file a complaint against ChatGPT in Austria, claiming the “hallucinating” flagship AI tool invents wrong answers that its creator, OpenAI, cannot correct.
NOYB (“None of Your Business”) said there was no way to guarantee the program provided accurate information.
“ChatGPT keeps hallucinating—and not even OpenAI can stop it,” the group said in a statement.
The company has openly acknowledged it cannot correct inaccurate information produced by its generative AI tool and has failed to explain where the data comes from and what ChatGPT stores about individuals, said the group.
‘Not the other way around’
Such errors are unacceptable for information about individuals because EU law stipulates that personal data must be accurate, NOYB argued.
“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals,” said Maartje de Graaf, data protection lawyer at NOYB.
“The technology has to follow the legal requirements, not the other way around.”
ChatGPT “repeatedly provided incorrect information” about the birth date of NOYB founder Max Schrems “instead of telling users that it doesn’t have the necessary data,” said the group.
Even though the information was incorrect, OpenAI refused Schrems’ request to rectify or erase it, saying correction was impossible, NOYB added.
It also “failed to adequately respond” to his request to access his personal data, again in violation of EU law, said NOYB, and the firm “seems to not even pretend that it can comply.”
The campaign group, which has emerged as a fierce critic of tech giants since its creation in 2018, said it was asking Austria’s data protection authority to investigate and fine OpenAI to bring it in line with EU law.
Tech frenzy
Bursting onto the scene in November 2022, ChatGPT sparked a frenzy among tech users dazzled by its ability to reel off dissertations, poems or translations in mere seconds.
But criticism of the technology has since prompted legal action in some countries.
Italy temporarily blocked the program in March 2023, while France’s regulatory authority began an investigation after a series of complaints.
A European working group has also been set up to improve coordination, although NOYB remains skeptical about the authorities’ efforts to regulate AI. —AFP