EU, Interpol and 100+ Join Push to Ban AI Nudification Tools

Growing Concern Over AI Nudification Tools

More than 100 major humanitarian and child protection organisations are urging immediate action against AI nudification apps and tools. This coalition includes well-known entities such as Amnesty International, the European Commission, Interpol, Safe Online, and Save the Children, along with child protection experts and human rights advocates.

This call to action follows the backlash against Grok's nudification feature, in which users prompted Elon Musk's AI chatbot to remove clothing from digital photographs of women. The trend began with a "put her in a bikini" challenge, which altered images to show women in bikinis, but it quickly escalated into more explicit and sexualised content.

These fake, non-consensual images were then shared publicly on the social media platform X, exposing millions of people to the material. Grok is estimated to have generated around three million non-consensual nude images.

Link to Serious Crimes

In several instances, images created by AI nudification tools have been linked to blackmail, coercion, and child sexual abuse material. Most of the victims are women and children, raising serious concerns about the safety and dignity of individuals, particularly minors.

The global coalition has emphasised that these nudification apps and tools pose a significant threat to child safety and human dignity. Marija Manojlovic, head of Safe Online, stated:

“Nudifying tools have created an unprecedented threat to our children. AI—the technology that should expand human potential—is being weaponised against children.”

She added, “We minimise harm by calling it ‘online,’ as if it is somehow less serious than what happens in the physical world, but the trauma is real.”

Misuse of Technology

Despite often being marketed as "adult" applications, these tools have increasingly been used to generate illegal sexual images of children, without consent, effective barriers, or accountability. Manojlovic further noted:

“Tech companies have the ability to detect and block nudified content of children. The distribution of child sexual abuse material is illegal in every jurisdiction, and tech platforms should be brought in line with other creation and distribution channels.”

“It’s frankly shocking that these platforms are monetised and aren’t required to report offenders, or work with industry partners to cut off payment flows—these are safeguarding tools that are used in the real world and need to be applied to online platforms.”

Calls for Regulation

Pressure to outlaw AI nudification technologies has been growing, with advocates arguing that they serve no positive purpose. The coalition is therefore pushing for these technologies to be blocked, and for developers and platforms to be held accountable for their role in enabling them.

Key Points

  • Global Coalition: More than 100 organisations are involved, including Amnesty International, Interpol, and Safe Online.
  • Grok Backlash: Users exploited AI chatbot Grok to generate non-consensual images of women, leading to widespread concern.
  • Impact on Children: These tools are increasingly linked to child sexual abuse material, with most victims being minors.
  • Need for Regulation: Advocates argue that AI nudification tools should be banned, and platforms must be held responsible for their misuse.
  • Call for Action: The coalition urges immediate steps to block these technologies and ensure accountability for developers and platforms.