Cybersecurity researcher Jeremiah Fowler recently discovered an unprotected database containing nearly 100,000 records from GenNomis by AI-NOMIS, a South Korean AI company specializing in face-swapping and AI-generated adult content.
The database contained 93,485 images and JSON files totaling 47.8 GB of AI-generated explicit material, some of it deeply disturbing. While no personally identifiable user data was found, the exposure raises serious concerns about privacy and the misuse of AI-generated imagery.
Fowler immediately notified GenNomis and AI-NOMIS, and public access to the database was restricted shortly afterward. The company never acknowledged his disclosure, however, and it remains unclear whether a third party managed the database or how long it had been exposed. Notably, before the database was taken down, Fowler observed troubling content, including AI-generated depictions of children and of celebrities rendered as minors.
This incident highlights the growing risks of AI-generated deepfake pornography. Studies indicate that 96% of deepfake content online is pornographic, and that 99% of it targets women without their consent. Such images can be used for extortion and harassment and can inflict severe reputational damage. Governments worldwide are increasingly grappling with the technology's legal and ethical challenges.
In March 2025, a global law enforcement operation led to the arrest of 25 individuals linked to AI-generated child abuse material. In the U.S., a teacher was arrested for creating fake explicit videos of students, and in South Korea, a perpetrator of deepfake sex crimes was sentenced to ten years in prison.
Although GenNomis’ policies prohibit illegal content, the presence of such material in the exposed database raises concerns about enforcement and oversight. Fowler stresses the urgent need for AI service providers to implement stricter safeguards, such as identity verification and content moderation, to prevent abuse. Stronger regulations and accountability measures are crucial to preventing the misuse of AI technology.
Victims of AI-generated extortion should report incidents to law enforcement. The “Take It Down Act,” currently awaiting approval in the U.S. House of Representatives, aims to criminalize the distribution of non-consensual intimate images, including AI-generated ones. While Fowler does not accuse GenNomis of wrongdoing, his findings underscore the critical need for better data security and ethical AI practices to prevent future abuses.