A central allegation by Noyb is that individuals whose data appears in ChatGPT's output cannot exercise their rights under the General Data Protection Regulation (GDPR). In particular, the organization complains that OpenAI does not adequately handle requests for correction or deletion of data: even when the data is inaccurate, correcting or removing it proves difficult. While OpenAI says it can filter or block certain data upon request, the complaint argues this is inadequate, as the company appears unable to filter specific information about individual complainants without searching through all of the available data.
Noyb finds OpenAI’s inability to correct false information and the lack of transparency regarding the origin and content of stored data extremely concerning. Maartje de Graaf, a data protection lawyer at Noyb, emphasizes the potentially serious consequences that can arise from the dissemination of false information about individuals. She underscores the importance of accuracy and transparency in data processing and argues that technology must adhere to legal requirements, not vice versa.
Under Article 16 of the GDPR, individuals have the right to have inaccurate data corrected, and under Article 17 they can demand that false information be erased. Furthermore, under Article 15, companies processing personal data are obliged to grant individuals access to their data and to disclose what information is stored and where it comes from.
Since the widespread introduction of ChatGPT at the end of 2022, OpenAI has repeatedly faced criticism over data protection issues. The company has already run into problems in several countries, including Italy, where the service was temporarily unavailable due to privacy concerns. The debate over the use of generative language models and the risks posed by the hallucinations they can produce remains ongoing and contentious.