A machine that can read minds? That is roughly what a Japanese team presented in a study posted last December on the bioRxiv preprint site and recently shared by an Internet user.
Concretely, two Japanese researchers succeeded in coupling the technology behind Stable Diffusion, which generates images from text, with signals collected by functional magnetic resonance imaging (fMRI).
Simply put, the system can translate a person’s thoughts directly into images.
Although this research has not yet been validated by the scientific community, it is based on principles that have already been proven. According to CheckNews, a researcher specializing in the decoding of fMRI signals believes that the two Japanese researchers do not stand out from other current research, “either in terms of application, methodology or scientific discovery.”
Research since 2008
To bridge the gap between brain signals and AI image generation, Yu Takagi and Shinji Nishimoto used the Natural Scenes Dataset (NSD), created in 2022. It lists fMRI readings recorded from eight volunteers who were shown ten thousand images. In total, more than 27,000 image–signal pairings are recorded there.
The work of the two researchers consisted of finding mathematical models that could translate the fMRI signals into text that Stable Diffusion could then illustrate as images.
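The core idea of such decoding studies is to learn a mapping from voxel activations to a representation a text-to-image model can consume. As a hedged illustration only (the data and model here are synthetic stand-ins, not the researchers' actual pipeline or the NSD recordings), a minimal linear decoder might be sketched like this:

```python
import numpy as np
from numpy.linalg import lstsq

# Illustrative sketch: learn a linear map from fMRI voxel readings
# to text-embedding vectors. All data below is synthetic; real
# studies fit such decoders on recordings like the NSD.
rng = np.random.default_rng(0)
n_samples, n_voxels, embed_dim = 200, 50, 8

# Hypothetical ground-truth mapping and noisy "recordings".
true_W = rng.normal(size=(n_voxels, embed_dim))
fmri = rng.normal(size=(n_samples, n_voxels))  # voxel activations per image
embeddings = fmri @ true_W + 0.01 * rng.normal(size=(n_samples, embed_dim))

# Fit the decoder by least squares (published work typically uses
# regularized regression such as ridge).
W_hat, *_ = lstsq(fmri, embeddings, rcond=None)

# Decode a new scan into an embedding an image generator could consume.
new_scan = rng.normal(size=(1, n_voxels))
decoded = new_scan @ W_hat
print(decoded.shape)  # (1, 8)
```

In the actual study, the decoded representation conditions Stable Diffusion, which then renders an image; the sketch stops at the embedding step.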
This work builds largely on research dating back to 2008. At that time, a Japanese team managed to reconstruct mosaic-like images by measuring variations in blood flow in the visual cortex. Three years later, American researchers did the same, this time with color images.
In 2013, readings of activity from the visual cortex even provided glimpses into people’s dreams. But it is with the recent emergence of AI image generation that this research has reached a whole new level.
Source: BFM TV
