A Norwegian man has filed a privacy complaint against OpenAI after ChatGPT falsely described him as a convicted murderer.

Arve Hjalmar Holmen discovered that when he asked ChatGPT about himself, it falsely claimed he had been convicted of murdering two of his children and attempting to kill a third, and that he was serving a 21-year prison sentence in Norway. The chatbot mixed real details, such as his hometown and family structure, with entirely fabricated allegations.

The Austrian privacy advocacy group Noyb has now filed a complaint with Datatilsynet, Norway's data protection authority, arguing that OpenAI violated the GDPR's accuracy requirements by spreading false personal data. The group is calling for OpenAI to be fined and ordered to improve ChatGPT so that such errors do not recur. Noyb's data protection lawyer, Joakim Söderberg, emphasized that a small disclaimer about potential mistakes is not enough: under EU law, false claims must be corrected or removed.

The query that produced the defamatory response can no longer be reproduced, as OpenAI has since updated ChatGPT to draw on web search results. This is Noyb's second GDPR complaint against OpenAI; its first, filed in April 2024, concerned an incorrect date of birth for a public figure. At the time, OpenAI claimed that errors could only be blocked, not corrected, a stance Noyb argues directly violates the GDPR's "right to rectification."

This latest complaint raises further concerns about AI-generated misinformation and whether OpenAI can adequately comply with global data privacy laws.
