This phenomenon is not new globally, but it is growing in China. The World Health Organization estimates that 54 million people in China suffer from depression and 41 million from anxiety, and the suicide rate has risen since 2018, figures no doubt inflated by the Covid pandemic. Across the Middle Kingdom, young people and urban dwellers, already familiar with online therapy, are increasingly turning to AI chatbots to manage their mental health. On social media, many share their intimate exchanges with these virtual assistants, hoping to relieve anxiety, depression, or relationship difficulties.
Nor is this trend exclusive to China. According to a recent Harvard Business Review study, psychological support is among the top uses of chatbots worldwide. A survey conducted by the Soul platform and the Shanghai Mental Health Center found that almost half of the young Chinese respondents had already used an AI chatbot to discuss such topics.
The media outlet Rest of World, which covers technology outside Western countries, has explored this phenomenon. In its pages, Jiying Zhang, a nutritionist and health coach, shares her experience: after four years of traditional therapy, she tried the DeepSeek chatbot and was immediately won over. The AI offered instant, even unconditional, support. That very quality can lead to serious excesses, since the chatbot very often validates its interlocutor's position without nuance or pushback. Still, the AI gave Jiying Zhang a wealth of insights, personalized with references to her favorite figures, such as psychologist Carl Rogers and author Cheryl Strayed.
A growing “market”
With mental disorders on the rise, especially among young people, Chinese startups and tech giants are ramping up innovation. Even before the recent explosion of generative AI, apps were already selling hourly packages connecting "professional listeners" with clients, such as Tianwei and its Little Angels application… Since the rise of chatbots, more than a dozen platforms, such as Good Mood AI Companion and Lovelogic, are already listed in the official registry of approved algorithms. Players such as KnowYourself, Jiandanxinli and JD Health (a subsidiary of e-commerce giant JD.com, editor's note) have also launched their own tools, including the therapeutic companion "Small universe for discussion and healing".
This meteoric growth of virtual therapists is partly explained by the shortcomings of China's mental health system: almost 80% of general hospitals have no psychiatric department. According to the latest, admittedly fragmentary, World Health Organization figures, dating from 2017, China has 2.2 psychiatrists and psychologists per 100,000 inhabitants.
Consultations are therefore rare, expensive and often paid out of pocket by patients, especially in rural areas. Shanghai, for example, has 12 times more therapists per 100,000 inhabitants than Ningxia (a region in north-central China), a gap that online platforms are trying to fill.
In 2019, the American Psychological Association offered one explanation for this shortage: a mental health law passed in 2013, intended to protect the public from potential charlatans, reportedly prevents psychologists in China from providing psychotherapy unless they work in hospitals with patients who have been diagnosed by a psychiatrist. Psychologists are thus limited to counseling or psychosocial support roles, so the difference between them and chatbots may seem slight.
Risk management
But a problem arises: in China, oversight of therapeutic chatbots remains vague. Although the Cyberspace Administration requires companies to "test their AI against 31 risks", these controls focus mainly on combating misinformation, neglecting suicide prevention and the protection of mental health.
In addition, public mental health initiatives prioritize social stability over individual support, as shown by the creation of a helpline and regional centers "after incidents in 2024", and by the recent deployment of social workers tasked with monitoring people in difficulty.
Legislation is vague here, as in the rest of the world, where regulators are still feeling their way. In the United States, for example, regulation of AI chatbots remains fragmented: after testimony from grieving parents accusing these tools of encouraging self-harm, the US Food and Drug Administration (FDA) created a committee to regulate their use in mental health, and several states, including Illinois, Nevada and Utah, now prohibit AIs from presenting themselves as therapists.
But these rules do not prevent their use as emotional support, a dependence that worries some researchers, who see it as a factor in growing isolation. Especially since chatbots
Source: BFM TV
