Was Emmanuel Macron really seen picking up trash in Paris? Was Donald Trump really arrested? Was the Pope photographed wearing a down jacket? According to certain images widely disseminated on social networks, yes.
All these viral photos were actually generated by an artificial intelligence: Midjourney. In seconds, it can produce stunningly realistic images from a simple written description. But how can you tell these images apart from real photos? Here are a few tips to avoid being fooled by AIs.
Look at the hands
At first glance, most of the viral images of the past few days seem real. But the first way to tell the difference is to simply stop and take a closer look.
For example, this image was supposed to show the arrest of Donald Trump. Scrolling past it quickly on social media, you could take it for a shot captured on the spur of the moment. But if you zoom in, some details quickly raise suspicion.
Take the hands on the right of the image: some have only four fingers or look strangely translucent.
And this is not an isolated case. Although they have improved lately, AIs like Midjourney still struggle with hands and teeth, even in the much-talked-about image of Pope Francis. If you want to know whether an image was generated by AI, this is the first thing to look at.
Illogical texts
Another current weakness of these AIs is text. When they try to reproduce billboards, posters, or labels, they often create non-existent words or completely illogical strings of symbols, as in this image supposed to show Donald Trump as a prisoner, reading an indecipherable book.
Identify inconsistent details
But some AI-created images feature neither weird text nor missing hands. You then have to take out the magnifying glass and hunt for the inconsistent detail.
Because there’s almost always one: an odd shape, like in this image that’s supposed to show a policeman hugging a protester, with a two-tone traffic light in the background…
People in the background who are poorly defined or whose faces seem to have melted…
Or unreal textures, with “the grain of the photo giving it a flat texture”, as Tina Nikoukhah, a PhD in image processing at the École Normale Supérieure Paris-Saclay, explained to Le Figaro. Enough to make the central figures look like wax statues.
Search by image
Another simple method: run a reverse image search. The person who originally posted the image may have specified that it was generated by an AI. If it appears almost nowhere else, or only in posts that never explain where it came from, that should give you pause. And if articles already identify it as the creation of an AI, there is little doubt about its origin.
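For readers who want to script this step, here is a minimal Python sketch that builds reverse-search links for an image that is already hosted online. The exact URL formats for Google Lens, Bing, and Yandex shown below are assumptions (search engines change them from time to time) and are not part of the original article.

```python
from urllib.parse import quote


def reverse_search_urls(image_url: str) -> dict:
    """Build reverse-image-search URLs for a publicly hosted image.

    The endpoints below are illustrative assumptions; engines may
    change these URL formats at any time.
    """
    encoded = quote(image_url, safe="")
    return {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "Bing": f"https://www.bing.com/images/search?view=detailv2&iss=sbi&q=imgurl:{encoded}",
        "Yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }


if __name__ == "__main__":
    # Hypothetical image URL used only for illustration.
    for engine, url in reverse_search_urls("https://example.com/suspect.jpg").items():
        print(f"{engine}: {url}")
```

Opening these links in a browser shows where else the image has appeared, which is often enough to trace it back to its first publication.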
Finally, some AIs deliberately leave a trace behind. Dall-E, for example, always adds a signature multicolored band to the bottom right of its creations.
But just because that band is missing doesn’t mean the image is authentic: other AIs like Midjourney don’t leave such a noticeable trail, and the image creator can resize or edit the image to hide that signature. Dall-E isn’t the most commonly used AI for creating realistic “fakes” anyway.
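As an illustration only, here is a small Python sketch (using Pillow) that checks whether the bottom-right corner of an image contains an unusually colorful band of the kind Dall-E adds. The band dimensions and the color threshold are assumptions chosen for the example, and, as noted above, the absence of such a band proves nothing.

```python
from PIL import Image  # pip install pillow


def has_colorful_corner_band(path: str,
                             band_width: int = 80,
                             band_height: int = 16,
                             min_distinct_colors: int = 4) -> bool:
    """Heuristic check for a multicolored band in the bottom-right corner.

    Band size and color threshold are illustrative assumptions; a missing
    band does not make an image authentic (it can be cropped or edited out).
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    corner = img.crop((max(0, w - band_width), max(0, h - band_height), w, h))
    # Quantize to a coarse palette so compression noise does not inflate the count.
    coarse = corner.quantize(colors=16).convert("RGB")
    distinct_colors = set(coarse.getdata())
    return len(distinct_colors) >= min_distinct_colors


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder file name for this example.
    print(has_colorful_corner_band("suspect.jpg"))
```

Treat the result as a hint at best: a colorful corner can occur naturally in a real photo, and a cropped image loses the band entirely.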
Always stay alert
These already limited methods may soon become obsolete. Given the impressive progress of these AIs in recent months, defects in fingers or teeth could quickly become ancient history.
The researcher explains that the ideal solution would be to require all AIs to place a signature, more or less visible to the naked eye, on their creations. But the challenge is daunting: such a signature would have to survive basic retouching, and the measure would have to be imposed on foreign-based AI giants as well as on the ever-growing number of “open source” tools that anyone can modify as they please. For now, then, the most effective method remains human vigilance.
Source: BFM TV
