
Gennomis Data Leak: Exposing Underage Deepfake Concerns and AI Misuse

  • Writer: Nox90 Engineering
  • Apr 5
  • 2 min read

Detailed Analysis Report on Gennomis Exposure of Underage Deepfakes

Introduction

A significant data leak from Gennomis, an AI image-generation platform operated by the South Korean firm AI-NOMIS, recently came to light, exposing sensitive content including deepfakes depicting minors. This report details the incident, its implications, and the broader concerns it raises within the field of AI-generated content.

Incident Overview

  • Date of Discovery: April 3, 2025
  • Discovered by: Cybersecurity researcher Jeremiah Fowler
  • Data Exposed: 47.8 GB of data, comprising 93,485 images and JSON files
  • Nature of Content: Explicit AI-generated material, face-swapped images, depictions of underage individuals, and celebrities portrayed as children

Technical Details

  • Database Configuration: The database was publicly accessible and lacked essential security measures such as password protection or encryption; a minimal probe for this class of misconfiguration is sketched after this list.
  • Data Content: Included command prompts, links to generated images, and explicit imagery raising concerns about the exploitation of minors.
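The report does not name the underlying database technology, but exposures of this kind are typically found by probing for services that answer HTTP requests without credentials. Below is a minimal illustrative sketch of such a check, assuming an Elasticsearch-style endpoint; the address, port, and path are hypothetical stand-ins and are not taken from the incident.

```python
import requests

# Hypothetical endpoint for illustration only (a TEST-NET documentation
# address), not the Gennomis host. The /_cat/indices path assumes an
# Elasticsearch-style service; adjust for other database types.
ENDPOINT = "http://203.0.113.10:9200/_cat/indices"

def is_publicly_readable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service hands back data with no credentials."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or connection refused
    # 401/403 responses mean authentication is enforced; a 200 with a
    # non-empty body means anyone on the internet can read the data.
    return resp.status_code == 200 and bool(resp.content)

if __name__ == "__main__":
    print("publicly readable:", is_publicly_readable(ENDPOINT))
```

A password-protected or network-isolated instance fails this check; by the report's description, the exposed Gennomis database would have passed it.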

Exploitation in the Wild

The incident highlights the growing misuse of AI-generated content to create deepfake pornography. According to Fowler, a staggering 96% of deepfakes online are pornographic, overwhelmingly depicting women without their consent. The data exposed in this incident underscores how such material can be weaponized for extortion and reputational damage.

Aftermath and Response

  • Company Response: Upon receiving a responsible disclosure notice, AI-NOMIS took the affected database offline. Notably, some data folders, such as "Face Swap," had already been removed before the notice was sent.
  • Platform Status: The Gennomis website was taken offline following the incident.

Mitigation and Recommendations

  • Detection Systems: AI platforms urgently need detection systems that identify and block explicit deepfakes, particularly those involving minors (a generation-gate sketch follows this list).
  • Developer Responsibility: Developers must be held accountable and should implement identity verification and watermarking technologies to deter misuse (a minimal watermarking sketch also appears below).
  • Regulatory Compliance: Platforms must adhere to strict guidelines prohibiting explicit content involving children and ensure robust security measures to protect user data.
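To make the detection recommendation concrete, here is a minimal sketch of a generation-time gate: every output passes through a safety classifier before it is stored or returned. All names in it (ModerationResult, score_image, the 0.5 threshold) are hypothetical stand-ins for a real moderation model or vendor API, not a documented Gennomis interface.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    explicit: float  # estimated probability the image is sexually explicit
    minor: float     # estimated probability the image depicts a minor

def score_image(image_bytes: bytes) -> ModerationResult:
    """Placeholder scorer that always returns benign scores; a real
    deployment would call a trained safety classifier or a vendor
    moderation API here."""
    return ModerationResult(explicit=0.0, minor=0.0)

BLOCK_THRESHOLD = 0.5  # assumed policy threshold, not from the report

def gate_output(image_bytes: bytes) -> bytes:
    """Release an image only if the classifier flags nothing.
    Blocked images should be logged and dropped, never stored."""
    result = score_image(image_bytes)
    if result.explicit >= BLOCK_THRESHOLD or result.minor >= BLOCK_THRESHOLD:
        raise PermissionError("generation blocked by content policy")
    return image_bytes
```

The watermarking recommendation can be sketched just as simply. The example below embeds a payload in the least significant bits of an image's red channel using Pillow and NumPy. LSB marks are fragile (they do not survive re-encoding), so production systems prefer robust schemes such as DCT/DWT-based watermarks or C2PA provenance metadata; this sketch only illustrates the basic mechanism of tying a generated image to an accountable identifier.

```python
import numpy as np
from PIL import Image

def embed_watermark(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide `payload` in the least significant bits of the red channel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].flatten()  # flatten() returns a writable copy
    if bits.size > red.size:
        raise ValueError("payload too large for this image")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def extract_watermark(img: Image.Image, length: int) -> bytes:
    """Recover `length` bytes embedded by embed_watermark."""
    arr = np.array(img.convert("RGB"))
    bits = arr[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()
```

A platform could embed a per-job identifier at generation time (for example, embed_watermark(img, b"job:0001"), where the identifier format is purely hypothetical) so that abusive images found in the wild can be traced back to the account that produced them.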

Conclusion

This incident serves as a critical reminder of the potential for abuse within the AI image generation industry. It highlights the necessity for stringent security practices and the ethical responsibility of developers to prevent the creation and distribution of harmful content.

This comprehensive report aims to inform stakeholders about the risks associated with AI-generated content and the importance of proactive measures to safeguard against similar incidents in the future.

