Weeks after unveiling its powerful new AI model, Gemini 2.5 Pro, Google has released a technical safety report — but experts say the document lacks key details, raising concerns about transparency and risk assessment.

The report, published Thursday, outlines Google’s internal safety evaluations but omits specifics on “dangerous capabilities” and makes no mention of the company’s own Frontier Safety Framework (FSF), introduced last year to address emerging AI risks. Experts argue the absence of this data makes it difficult to assess whether Google is meeting its public commitments to AI safety.

“This report is very sparse and came out weeks after the model’s release,” said Peter Wildeford, co-founder of the Institute for AI Policy and Strategy. “It’s impossible to verify if Google is living up to its public commitments.”

While Google claims to conduct safety testing and adversarial red-teaming before releasing models, the company only publishes technical reports after it considers a model to be out of its “experimental” phase. Critics also note that Google hasn’t yet published a report for Gemini 2.5 Flash, a smaller model introduced last week, although the company says one is “coming soon.”

The broader AI community is increasingly concerned that major labs, including Meta and OpenAI, are cutting corners on safety reporting as they race to bring new models to market. Experts warn that this trend risks eroding public trust and undermining regulatory compliance.

As governments push for stronger AI accountability, the pressure is mounting on Google and its peers to deliver consistent, detailed, and timely transparency about their AI systems.
