Google has released its 2024 Ads Safety Report, revealing it blocked 5.1 billion ads and suspended 39.2 million advertiser accounts last year.

The surge in enforcement activity reflects the growing misuse of AI-generated content, impersonation scams, and other ad platform abuses.

A major concern cited in the report is the rise of AI-generated deepfake videos, particularly those impersonating celebrities and public figures to promote investment and cryptocurrency scams. Google says it permanently suspended over 700,000 advertiser accounts linked to these scams after updating its Misrepresentation policy and deploying a specialized team of more than 100 experts, a crackdown it credits with a 90% drop in reports of such scam ads.

While AI has enabled more sophisticated scams, Google also leveraged it to improve ad safety. In 2024, the company rolled out over 50 machine-learning enhancements designed to detect emerging threats before users reported them.

Google also cracked down across several ad categories, blocking:

  • 193.7 million financial services ads
  • 146 million gambling-related ads
  • 122.5 million adult content ads
  • 104.8 million healthcare-related ads

Additionally, Google took action against publishers, enforcing its policies on 1.3 billion web pages and more than 220,000 publisher sites. In a year marked by heightened political activity, Google also verified 8,900 new election advertisers and removed 10.7 million election ads from unverified sources.
