As countries around the world prepare for elections this year, OpenAI has outlined its plan to combat misinformation, with a focus on promoting transparency around the source of information.

The company said that its teams are working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.

“We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse,” OpenAI said in a blog post on Monday.

The company said that it is working to prevent relevant abuse, such as misleading “deepfakes”, scaled influence operations, and chatbots impersonating candidates.

“Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm,” OpenAI said.

To provide transparency around AI-generated content, the company said that it is working on several provenance efforts.

Early this year, it will implement digital credentials from the Coalition for Content Provenance and Authenticity (C2PA), an approach that uses cryptography to encode details about a piece of content’s provenance, for images generated by DALL·E 3.
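To make the idea concrete, here is a minimal sketch of how a cryptographically signed provenance credential works in general: a manifest describing the content’s origin is signed, and anyone holding the public key can detect later tampering. This is an illustration of the principle only, not the actual C2PA manifest format or OpenAI’s implementation, and it assumes the third-party Python `cryptography` package.

```python
# Illustrative only: sign a provenance manifest so tampering is detectable.
# Not the real C2PA format. Requires: pip install cryptography
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The generator signs a manifest describing the content's claimed origin.
manifest = json.dumps(
    {"generator": "DALL-E 3", "created": "2024-01-15T12:00:00Z"},
    sort_keys=True,
).encode()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest)

# Anyone with the public key can later verify the credential.
public_key = private_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("provenance credential verified")
except InvalidSignature:
    print("manifest was altered after signing")
```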

OpenAI is also experimenting with a provenance classifier — a new tool for detecting images generated by DALL·E.
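OpenAI has not published how this classifier works or how it would be exposed, so the sketch below is purely hypothetical: it shows the typical downstream shape of such a detector, a model score thresholded into a label, with `classify` standing in for the real model.

```python
# Hypothetical sketch of consuming a provenance classifier's output;
# "classify" is a placeholder for a trained model OpenAI has not released.
from dataclasses import dataclass


@dataclass
class ClassifierResult:
    probability_ai_generated: float  # model confidence the image came from DALL·E


def classify(image_bytes: bytes) -> ClassifierResult:
    # Placeholder scoring so the sketch runs; a real detector would analyze pixels.
    return ClassifierResult(probability_ai_generated=0.97)


def label_image(image_bytes: bytes, threshold: float = 0.9) -> str:
    score = classify(image_bytes).probability_ai_generated
    return "likely DALL·E-generated" if score >= threshold else "inconclusive"


print(label_image(b"\x89PNG..."))  # -> likely DALL·E-generated
```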

As the US gears up for its presidential election later this year, the maker of ChatGPT said it is working with the National Association of Secretaries of State (NASS), the nation’s oldest nonpartisan professional organization for public officials.

“ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election-related questions — for example, where to vote,” the company explained.
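OpenAI has not described the mechanics of this routing, but the behavior amounts to intent-based redirection: recognize a procedural voting question and respond with a pointer to the authoritative source instead of a generated answer. The keyword matcher below is a hypothetical stand-in for that logic.

```python
# Hypothetical sketch of routing procedural election questions to an
# authoritative source; keyword matching stands in for whatever intent
# detection ChatGPT actually uses, which OpenAI has not detailed.
PROCEDURAL_PHRASES = (
    "where to vote",
    "polling place",
    "register to vote",
    "voting deadline",
)


def route(question: str) -> str:
    q = question.lower()
    if any(phrase in q for phrase in PROCEDURAL_PHRASES):
        # Defer to the authoritative source rather than generating an answer.
        return "For US voting information, see https://www.canivote.org"
    return "(answer the question normally)"


print(route("Where to vote in Ohio?"))  # -> directs the user to CanIVote.org
```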