OpenAI Fined €15 Million for GDPR Violations in Italy
In December 2024, the Italian Data Protection Authority (Garante) fined OpenAI €15 million after an investigation into the company's use of personal data to train its AI chatbot, ChatGPT. The authority found that OpenAI lacked an adequate legal basis for processing that data and breached the GDPR's principles of transparency and its information obligations toward users. The investigation also found that OpenAI had no adequate age verification system to keep children under 13 from being exposed to inappropriate AI-generated content.
Beyond the fine, the Garante ordered OpenAI to run a six-month public awareness campaign in Italian media. The campaign must explain to users and non-users alike how their personal data is collected and what rights individuals hold under the General Data Protection Regulation (GDPR), including the right to object to the use of their personal data for training generative AI.
This is not the Garante's first action against the company: it had previously suspended ChatGPT's availability in Italy over privacy concerns while investigating a suspected data breach. OpenAI has called the fine "disproportionate," saying it amounts to nearly 20 times the revenue the company generated in Italy over the relevant period. The company says it remains committed to working with privacy authorities worldwide to ensure its AI products respect users' privacy rights.
Regulators in both the United States and Europe are scrutinizing OpenAI and other AI companies, and governments are drafting legislation to mitigate the risks posed by AI technologies, with the European Union's AI Act the most prominent example. As that regulatory landscape takes shape, compliance with data protection law is becoming a baseline requirement for AI developers.
Source: Italy hands OpenAI €15 million fine after ChatGPT data privacy probe