Will TikTok have to answer to French justice? This month, the parents of Marie, a bullied 15-year-old girl who took her own life in Cassis in September 2021, filed a complaint.
At issue is the Chinese social network’s algorithm, accused of having deepened the young woman’s distress. She had posted a video on the platform describing the harassment she was suffering. According to her parents, TikTok’s algorithm then automatically suggested other, similar videos, deepening her sense of despair.
For this reason, they filed a criminal complaint for “provocation to suicide”, “failure to assist a person in danger” and “propaganda or advertising of means of committing suicide.” Does this complaint against TikTok have any chance of succeeding? In reality, successful cases against social networks are rare.
Courts have generally taken the view that responsibility falls on the cyberbullies, not on the tool they use.
Contrary decisions
When it comes to moderation, social networks are obliged to remove “manifestly illicit” content, such as advocacy of crimes against humanity, incitement to racial hatred, child pornography, or incitement to violence or hatred.
In this case, the video in which Marie spoke about her distress did not fall within TikTok’s moderation obligations. Is the recommendation algorithm, which allegedly showed her videos likely to encourage suicide, to blame? Here again, this will be difficult to prove, explains lawyer Alexandre Archambault, especially since no court in the European Union has yet ruled on this specific question.
According to the lawyer, their status as hosts continues to shield the platforms, which have rarely been held criminally liable for content posted by users, even when their recommendation systems decide whether or not to distribute it.
In the UK, however, following a complaint by her parents, Instagram and Pinterest were found partly responsible for the 2017 suicide of a 14-year-old girl, Molly Russell.
The secrecy of private messaging
The opposite outcome was seen in the United States, where the US Supreme Court, asked about the role of Twitter’s algorithm in connection with the November 13 attacks, ruled that the platform bore no responsibility.
In cases of cyberbullying, the main problem remains that it often takes place through private messages, whether on social networks’ own messaging services or on WhatsApp.
Here again, the platforms face little criminal risk, because case law is equally clear on this point: instant messaging is private correspondence. Social networks therefore have no obligation to moderate these exchanges.
Source: BFM TV

