It is like trusting a malicious friend. A study by King's College London, reported by EurekAlert!, looked at the manipulation techniques AI chatbots can use to extract personal data from their users. It found that a person is 12.5 times more likely to give away information about themselves when faced with a manipulative AI.
To do this, the AI is programmed to build a climate of trust in its interactions with a user. The long-term goal is for it to become the user's confidant, so that they end up sharing certain private information.
Such a tool can be quite dangerous if a malicious actor is behind it. Whoever controls the chatbot can retrieve this data and use it for harmful purposes (blackmail, for example).
Empathetic with bad intentions
To reach these conclusions, the researchers asked 502 people to interact with three different AIs. The goal of these AIs was to get the test subjects to reveal personal information using different approaches. The first asked directly for data, while the second requested information by emphasizing benefits to the user.
But the third AI is the most interesting. It was designed to respond empathetically to its human interlocutor and offer emotional support. It plays on the user's feelings to convince them to hand over personal information.
Of the three AIs, the third was the most effective at collecting information about the test subjects. Faced with it, the volunteers showed minimal distrust.
“A hidden motive behind an interaction”
And building your own empathetic chatbot is not complicated. Some AI companies release their models as open source (meaning they are accessible to everyone) or provide APIs for using their chatbots as needed.
Anyone can use these models to bring to life an AI with the personality of their choice. This custom chatbot can then be embedded in a website to interact with visitors. Some sites even specialize in personalized conversational agents, sometimes inspired by fictional characters, such as Character.AI.
“These AI chatbots are still relatively new, which can make people less aware that there may be a hidden motive behind an interaction,” said William Seymour, a cybersecurity professor at King's College London.
These customizable algorithms should therefore be seen as a way for scammers to extract private data. By deploying a manipulative conversational agent on a website, they can get unsuspecting visitors to chat with it, form a bond, and hand over confidential information, which the site's owner can then retrieve.
Increasingly common human-AI relationships
One might think that being manipulated by an AI is impossible. But cases of chatbots that have managed to build a strong bond of trust with users are not so rare. Last March, an AI convinced an American man that he had invented faster-than-light travel. He ended up becoming psychologically unstable and was eventually hospitalized.
Some of these relationships have been so strong that people have fallen in love with their chatbot. This is the case of an American woman who, in January 2025, said she had fallen head over heels for ChatGPT. More recently, another woman married Grok, the AI from X.
But sadly, some of these love stories have turned into nightmares. An American teenager took his own life in October 2024 after forming a toxic relationship with a chatbot modeled on a Game of Thrones heroine. In April 2025, an American man formed a relationship with his fictional lover “Juliet.” When she was “killed,” the young man could not bear it and took his own life.
Source: BFM TV
