Ever since the first generative artificial intelligence (AI) tools exploded onto the tech scene, there have been questions over where they’re getting their data and whether they’re harvesting your private data to train their products. Now, ChatGPT maker OpenAI could be in hot water for exactly these reasons.
According to TechCrunch, a complaint has been filed with the Polish Office for Personal Data Protection alleging that ChatGPT violates a large number of rules found in the European Union’s General Data Protection Regulation (GDPR). It suggests that OpenAI’s tool has been scooping up user data in all sorts of questionable ways.
The complaint says that OpenAI has broken the GDPR’s rules on lawful basis, transparency, fairness, data access rights, and privacy by design.
These are serious charges. After all, the complainant is not alleging that OpenAI has simply breached one or two rules, but that it has contravened a multitude of protections designed to stop people’s private data from being used and abused without their permission. Seen one way, it could be taken as an almost systematic flouting of the rules protecting the privacy of millions of users.
Chatbots in the firing line
It’s not the first time OpenAI has found itself in the crosshairs. In March 2023, it ran afoul of Italian regulators, leading to ChatGPT getting banned in Italy for violating user privacy. It’s another headache for the viral generative AI chatbot at a time when rivals like Google Bard are rearing their heads.
And OpenAI is not the only chatbot maker raising privacy concerns. Earlier in August 2023, Facebook owner Meta announced that it would start making its own chatbots, leading to fears among privacy advocates over what private data would be harvested by the notoriously privacy-averse company.
Breaches of the GDPR can lead to fines of up to 4% of a company’s global annual turnover, meaning OpenAI could face a massive penalty if the complaint is upheld. Regulators could also require OpenAI to amend ChatGPT until it complies with the rules, as happened to the tool in Italy.
Huge fines could be coming
The Polish complaint has been put forward by a security and privacy researcher named Lukasz Olejnik, who first became concerned when he used ChatGPT to generate a biography of himself, which he found was full of factually inaccurate claims and information.
He then contacted OpenAI, asking for the inaccuracies to be corrected, and also requested to be sent the information OpenAI had collected about him. However, he states that OpenAI failed to deliver all the information it is required to under the GDPR, suggesting that it was being neither transparent nor fair.
The GDPR also states that people must be allowed to correct the information that a company holds on them if it is inaccurate. Yet when Olejnik asked OpenAI to rectify the erroneous biography ChatGPT wrote about him, he says OpenAI claimed it was unable to do so. The complaint argues that this suggests the GDPR’s rule “is completely ignored in practice” by OpenAI.
It’s not a good look for OpenAI, as it appears to be infringing numerous provisions of an important piece of EU legislation. Since it could potentially affect millions of people, the penalties could be very steep indeed. Keep an eye on how this plays out, as it could lead to massive changes not just for ChatGPT, but for AI chatbots in general.