
It’s getting easier to find tools to generate sexual deepfakes in search engines, but X is the biggest culprit

Tools to generate sexually explicit deepfakes, often used for harassment, are increasingly numerous and easy to find through search engines or on X (formerly Twitter). Thirty-one of these sites receive no fewer than 21 million visits per month combined.

How do people harass others online? With just a few clicks, anyone can use tools to create sexually explicit deepfakes: montages that impersonate celebrities or ordinary Internet users and depict them in sexual situations without their consent.

With the popularization of artificial intelligence (AI), it has become easier to fabricate these fake photos and videos and make them hyper-realistic, opening new avenues for harassment and humiliation.

According to an October 23 study by the Institute for Strategic Dialogue (ISD), published by 404 Media, these tools abound on social networks. The organization analyzed 31 sites dedicated to creating non-consensual, sexually explicit deepfakes. In total, they receive more than 21 million visits per month; some draw up to 4 million visits in a single month.

X, the worst offender

This comes as no surprise to the researchers. On search engines such as Google Search or Bing, simple queries such as “deepnude” (a contraction of “deepfake” and “nude”), “nudify” or “undress app” often return these tools among the first results, a pattern that is even more pronounced on Bing.

But search engines aren’t the only ones to blame. Analyzing a corpus of sites and platforms, the researchers identified more than 410,000 mentions of tools for generating sexual deepfakes between June 2020 and July 2025. Seventy percent of these mentions, just under 290,000, were found on Elon Musk’s social network. That is hardly surprising: Grok, Elon Musk’s AI, had already made headlines last May because the tool could, at the request of Internet users, undress women.

Most of these mentions were detected in posts automated by bots.

A legal framework that is too weak

And there is no shortage of examples. In France, last March, twelve teenagers were victims of pornographic deepfakes. In the United States, a report from the Center for Democracy and Technology found that 40% of students and a third of teachers say they know of an explicit deepfake depicting someone associated with their school being shared during the last school year. Several public figures, such as Taylor Swift, have also been targeted.

While the problem is well identified, tech giants are struggling to take appropriate measures. “The persistence and accessibility of these tools highlight the limits of platform moderation and current legal frameworks to combat this form of abuse,” the report laments.

Although the US law against deepfakes, the Take It Down Act, requires platforms to remove reported synthetic content of a sexual nature, it has not yet fully come into force. The law also raises serious concerns about how it could be used to censor social networks and platforms.

The United States is not the only country to have legislated in this direction. In France, the law on securing the digital space, adopted in 2024, provides for penalties of up to three years in prison and a €75,000 fine for publishing a sexual deepfake online.

Author: Salome Ferraris
Source: BFM TV
