The scenes depicted are fake, but the danger is real. Internet users are using image-generation AI such as Stable Diffusion to create child pornography, according to the US media outlet Bloomberg.
In the first four months of 2023 alone, 68 sets of AI-generated child pornography images were posted on a forum dedicated to such content, according to Avi Jager, a manager at the moderation company ActiveFence. That figure stood at just 25 in the last four months of 2022, according to the expert quoted by Bloomberg, who declined to name the forum in question for security reasons.
Imperfect (or non-existent) moderation
Image-generating AIs like Midjourney can produce increasingly realistic images. These capabilities open up new artistic possibilities, but they also raise serious concerns about the increased risk of misinformation and abuse.
Many generative AI companies, such as Stability AI (the maker of Stable Diffusion), have safeguards in place to prevent the creation of illegal content, for example by blocking queries containing certain terms. But these safeguards are not foolproof: according to Avi Jager, interested netizens exchange tips for circumventing these limits, such as using languages other than English, which are less heavily moderated.
Some organizations choose not to place any limits on their AI, even for potentially illegal requests. This is the case, for example, of the AI Openjourney, which blocks no words at all. Some users claim to have generated gory or child pornography images with it without difficulty.
Finally, some of these users share models created specifically for producing pornographic images, partially trained on images of children, along with instructions for using them, according to an ActiveFence report published on Tuesday, May 23.
“Just Beginning”
Stability AI, the startup behind Stable Diffusion, “strictly prohibits any use of [its] platforms for illegal or immoral purposes,” a spokesperson told Bloomberg. The company says it has taken numerous measures against the generation of child pornography content, including blocking certain words and training its models on datasets from which such images have been removed.
At this time, the US National Center for Missing & Exploited Children has not detected a massive increase in AI-generated child pornography images, but “we anticipate that it will increase,” its legal director, Yiota Souras, told Bloomberg.
The organization already receives more than 32 million reports a year, almost all involving possession of child pornography images. It says it is in talks with US lawmakers and platforms to discuss how to proceed if AI-generated images increase.
Such artificial images are not considered illegal in the United States. They are, however, illegal in many other countries, including France, the United Kingdom, Australia, and Canada.
Source: BFM TV
