ChatGPT Has a Big Privacy Problem

Short Title ChatGPT Has a Big Privacy Problem
Location ChatGPT (online platform)
Date 31st March 2023

Solove Harm Secondary Use, Exclusion
Information Computer Device, Professional, Preference, Knowledge and Belief
Threat Actors OpenAI

Individuals Affected Millions of people
High Risk Groups Consumers
Tangible Harms

Italy's data regulator ordered OpenAI, the creator of ChatGPT, to immediately stop using personal data from millions of Italians in ChatGPT's training, citing GDPR violations. This raised concerns about exclusion and secondary use of data. Italy temporarily blocked ChatGPT access, marking the first major regulatory challenge against such AI models in the Western world. OpenAI's defense, based on "legitimate interests," was rejected. This highlights growing scrutiny of AI models regarding privacy and ethical concerns.

Description

On 31 March 2023, Italy's data protection regulator, the Garante per la Protezione dei Dati Personali, issued an emergency order against OpenAI, the organization responsible for ChatGPT, demanding that it stop processing the personal data of millions of Italians used in ChatGPT's training data.

The case involves OpenAI and the millions of Italians whose personal information was used in ChatGPT's development without their consent or a proper legal justification, a normative harm categorized under Solove's taxonomy as exclusion (lack of informed consent) and secondary use (repurposing of data for a new purpose). The alleged violation centers on the GDPR, Europe's data protection regulation: the Italian regulator asserted that OpenAI failed to implement age controls, potentially provided inaccurate information about individuals, neglected to inform people that their data was being collected, and, most critically, lacked a legal basis for amassing personal information for ChatGPT's massive training dataset.

Italy's regulator took immediate action by temporarily blocking access to ChatGPT in Italy. The incident is noteworthy as the first regulatory challenge against a large AI model like ChatGPT in the Western world, and it may set a precedent that other European countries follow. OpenAI's defense, which relies on "legitimate interests" as the legal basis stated in its privacy policy, was deemed inadequate by the Garante. The case underscores the growing scrutiny of AI models' compliance with data protection law, reflecting broader concerns about privacy and ethics in AI development.

Laws and Regulations

GDPR

Sources

https://www.wired.com/story/italy-ban-chatgpt-privacy-gdpr/