Multiple state-backed hacking groups are experimenting with Google’s AI-powered Gemini assistant to boost their productivity and conduct research for cyber operations.

According to Google’s Threat Intelligence Group (GTIG), these advanced persistent threat (APT) actors primarily use Gemini for reconnaissance, vulnerability research, and scripting assistance rather than launching AI-driven cyberattacks.

GTIG has identified Gemini activity linked to APT groups from over 20 countries, with Iranian and Chinese actors accounting for the most usage. Iranian hackers have leveraged the AI for reconnaissance on defense organizations, phishing campaigns, and cybersecurity-related translations.

Meanwhile, Chinese actors have used it to research vulnerabilities, study U.S. military infrastructure, and analyze security tools like Carbon Black EDR. North Korean groups have exploited Gemini for malware development and cover letter drafting to infiltrate Western companies, while Russian hackers have mainly used it for scripting, encryption, and translation.

Google also reported that some APTs attempted to bypass Gemini’s security restrictions through jailbreak techniques but were unsuccessful. The findings align with OpenAI’s previous disclosure about cybercriminals’ misuse of ChatGPT. With the rise of AI models that lack robust security measures—such as DeepSeek R1 and Alibaba’s Qwen 2.5—concerns about AI-fueled cyber threats are growing. Cybersecurity firms warn that weaker protections in emerging AI models could enable malicious activity on a larger scale.
