ChatGPT violated European privacy laws, Italy tells chatbot maker OpenAI

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, on March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

LONDON – Italian regulators said they told OpenAI that its ChatGPT artificial intelligence chatbot has violated the European Union's stringent data privacy rules.

The country's data protection authority, known as the Garante, said Monday that it notified San Francisco-based OpenAI of breaches of the EU rules, known as the General Data Protection Regulation.

The watchdog started investigating ChatGPT, which can produce text, images and sound in response to users' questions, last year, when it temporarily banned the chatbot within Italy.

Based on the results of its “fact-finding activity,” the watchdog said it “concluded that the available evidence pointed to the existence of breaches of the provisions” in the EU privacy rules.

OpenAI, which has 30 days to reply to the allegations, said it would work constructively with Italian regulators.

“We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy,” a company statement said. “We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”

The company said last year that it fulfilled a raft of conditions that the Garante demanded to get the ChatGPT ban lifted.

The watchdog had imposed the ban after finding that some users’ messages and payment information were exposed and because ChatGPT didn't have a system to verify users’ ages, allowing children to get answers from the AI tool that were inappropriate for their age.

It also questioned whether there was a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT’s algorithms and raised concerns that the system could sometimes generate false information about individuals.

The growing popularity of generative AI systems like ChatGPT is also drawing increasing scrutiny from regulators on both sides of the Atlantic.

The U.S. Federal Trade Commission opened an inquiry last week into the relationships between AI startups OpenAI and Anthropic and the tech giants that have bankrolled them — Amazon, Google and Microsoft. Competition regulators in the 27-nation EU and Britain, meanwhile, are also examining Microsoft's OpenAI investments.

AI systems also face broader oversight in the EU, which is finalizing its groundbreaking AI Act, the world's first comprehensive rulebook for artificial intelligence. The bloc's 27 member states are expected to endorse a version of the legislation Friday.
