Consumer groups from 13 European countries on Tuesday called on regulators to launch urgent investigations into the risks of generative AI, such as ChatGPT, and to enforce existing legislation to protect consumers.

The call to investigate generative AI risks accompanied a new report that sheds light on the various risks of generative AI, the existing rules that can protect consumers, and the rules that still need to be developed.

“Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people. They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud,” said Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC).

“We call on safety, data, and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action. These laws apply to all products and services, be they AI-powered or not, and authorities must enforce them,” Pachl added.


BEUC wrote to consumer safety and consumer protection authorities in April calling on them to start investigations because of the breadth and speed of the rollout of generative AI models, such as ChatGPT, and the possible harm resulting from their deployment.

The European Data Protection Board has already created a task force to look into ChatGPT.


“All AI systems, including generative AI, need public scrutiny, and public authorities must reassert control over them. Lawmakers must require that the output from any generative AI system is safe, fair, and transparent for consumers,” Pachl said in a statement.
