EU Taskforce Evaluates ChatGPT's Privacy Compliance
A taskforce in the European Union (EU) has spent over a year analysing how the EU's data protection rules apply to OpenAI's popular chatbot, ChatGPT. On Friday, it shared its preliminary findings but has still not reached a conclusion on major legal issues, such as the legality and fairness of OpenAI's data processing.

The Stakes for OpenAI
This issue is crucial because violations of the EU's privacy laws can lead to hefty fines, up to 4% of a company's global annual turnover. Regulators can also demand that any non-compliant data processing be stopped. Therefore, OpenAI faces significant regulatory risks in Europe, especially since specific AI laws are still in the works and won’t be fully implemented for years.

Without clear guidance from EU data protection authorities on how current laws apply to ChatGPT, OpenAI might continue operating as usual despite increasing complaints about potential violations of the General Data Protection Regulation (GDPR). For example, Poland’s data protection authority (DPA) began investigating a complaint that ChatGPT had fabricated information about a person and then refused to correct the mistake. A similar complaint was filed in Austria.

GDPR Challenges and Potential Solutions
The GDPR applies whenever personal data is collected and processed. Large language models like ChatGPT do this extensively by scraping data from the internet, including social media posts. This could potentially put OpenAI in violation of GDPR if data protection authorities decide to enforce it strictly.

Last year, Italy’s privacy watchdog temporarily banned ChatGPT from processing data of Italian users. OpenAI had to make changes to its system to comply with Italian demands before resuming operations in Italy. However, the Italian investigation into the legal basis for OpenAI’s data processing is ongoing, leaving ChatGPT under a legal cloud in the EU.

Under GDPR, any entity processing personal data must have a legal basis. Most options are not suitable for OpenAI, leaving it with consent or a broad basis called legitimate interests (LI). Since Italy's intervention, OpenAI has claimed LI as its legal basis for data processing. However, the Italian DPA's draft decision found OpenAI had violated GDPR, though details are not yet public.

The taskforce's report emphasizes that ChatGPT needs a valid legal basis for all data processing stages, from data collection to the use of data for training models. They suggest that AI companies should be more careful about data collection to minimize privacy risks, potentially deleting or anonymizing data before training AI models.

Moving Forward
OpenAI seeks to rely on LI for processing user prompts for model training, but the taskforce stresses that users must be clearly informed about this. Individual DPAs will decide if OpenAI meets the requirements for using LI. If not, OpenAI would need to obtain consent from users, a complex task given the vast amount of data involved.

The taskforce also addressed transparency and fairness, noting that OpenAI cannot shift privacy risks to users and must provide accurate information about ChatGPT’s outputs, which may sometimes be unreliable or biased.

The taskforce’s work is influencing how quickly DPAs act on complaints about ChatGPT. Some DPAs may delay enforcement, waiting for the taskforce's final report. OpenAI has recently set up an EU operation in Ireland, likely to benefit from Ireland's business-friendly approach to GDPR enforcement.

OpenAI has not yet responded to the taskforce's preliminary report.

Source: TechCrunch