A telecom company responsible for transmitting a deepfake robocall imitating President Joe Biden’s voice has agreed to pay $1 million as part of a settlement with the Federal Communications Commission (FCC), the agency announced.
In January, Lingo Telecom was involved in broadcasting a fraudulent message to voters in New Hampshire, discouraging them from participating in the Democratic primary. The FCC identified political consultant Steve Kramer as the mastermind behind the AI-generated calls and had earlier proposed a separate $6 million fine for him.
As part of the settlement, Lingo Telecom must comply with strict caller ID authentication protocols, including adhering to “know your customer” standards. Additionally, the company must thoroughly verify the accuracy of information provided by its clients and upstream providers, as stated in an FCC press release. Lingo has not yet commented on the settlement.
“Everyone deserves to trust that the voice on the other end of the line is who they claim to be,” FCC Chair Jessica Rosenworcel said. “If AI is involved, that should be transparently communicated to any consumer, citizen, or voter. The FCC will take action when the integrity of our communications networks is at risk.”
The settlement follows the FCC’s February decision to ban the use of AI-generated voices in robocalls without the recipient’s consent, a rule adopted in the wake of the New Hampshire incident. The agency has also proposed new rules requiring political advertisers to disclose the use of generative AI in radio and TV ads.
Bijay Pokharel