Should information identified as false by fact-checkers be hidden on social media? In the United States, Facebook is now leaving the choice to its users, a significant change officially intended to give its algorithm less power, but one that some experts say could benefit conspiracy theorists.
Until now, this algorithm relegated content flagged by Facebook’s fact-checking partners, including AFP, to the bottom of user feeds by default.
But a new setting on the platform now lets users make that decision for themselves, potentially making this false or misleading content more visible.
Controlling the algorithm
The option lets users “reduce further” the visibility of this content considered problematic, pushing it “even further down the feed so that you can’t see it at all anymore”, or “not reduce” its visibility, with the opposite effect: such posts remain more accessible and users are more likely to see them.
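As a rough illustration of how such a preference could translate into ranking behavior, the sketch below applies a demotion multiplier to fact-checked posts based on the user’s choice. This is purely hypothetical: the setting names, multiplier values, and function are invented for illustration, and Facebook’s actual ranking system is not public.

```python
# Hypothetical sketch only: Facebook's real ranking system is not public.
# Illustrates how a per-user preference could change how far fact-checked
# posts are demoted in a ranked feed.

from dataclasses import dataclass

# Assumed demotion multipliers for each preference level (invented values).
DEMOTION_FACTORS = {
    "reduce_more": 0.1,   # push flagged posts near the bottom of the feed
    "default": 0.5,       # demote flagged posts, the current default behavior
    "dont_reduce": 1.0,   # leave the ranking score untouched
}

@dataclass
class Post:
    post_id: str
    base_score: float               # relevance score from the rest of the ranking pipeline
    flagged_by_fact_checkers: bool

def rank_feed(posts: list[Post], preference: str = "default") -> list[Post]:
    """Return posts sorted by score, demoting fact-checked posts per the user's setting."""
    factor = DEMOTION_FACTORS.get(preference, DEMOTION_FACTORS["default"])

    def effective_score(post: Post) -> float:
        return post.base_score * factor if post.flagged_by_fact_checkers else post.base_score

    return sorted(posts, key=effective_score, reverse=True)

# Example: the same feed ranked under two different preferences.
feed = [
    Post("a", base_score=0.9, flagged_by_fact_checkers=True),
    Post("b", base_score=0.7, flagged_by_fact_checkers=False),
]
print([p.post_id for p in rank_feed(feed, "default")])       # ['b', 'a']
print([p.post_id for p in rank_feed(feed, "dont_reduce")])   # ['a', 'b']
```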
Introduced last May, the feature was not announced by Facebook, leaving American users to discover it for themselves in their settings.
The change comes amid a highly polarized political climate in the United States, where content moderation on social media is a particularly sensitive issue.
Conservatives accuse the government of pressuring platforms to censor or remove content under the pretext of fact-checking. On Tuesday, for example, a federal judge in Louisiana restricted contacts between senior administration officials or government agencies and social networks over content moderation issues.
And disinformation researchers at major institutions such as Stanford University’s Internet Observatory, accused of promoting censorship, which they deny, face lawsuits from conservative activists and an inquiry launched by a congressional committee.
Alignment with Instagram
For many researchers, this new setting, introduced by Facebook some 18 months before the 2024 presidential election, raises fears of an explosion of problematic content on social networks.
Meta, for its part, sought to be reassuring, recalling that the content will still be labeled as identified as misleading or false by independent fact-checkers, and adding that it is considering offering the option in other countries.
The network also now lets users decide how often they are shown “low-quality content”, such as clickbait or spam, and “sensitive” content that is violent or shocking.
For specialists, the consequences of these changes will only become measurable in hindsight, as users discover the feature. Fact-checking organizations, which cannot verify all content, are regularly attacked online by people who dispute their assessments, even when it is clear that a subject has been presented falsely or incompletely.
According to Emma Llanso, of the Center for Democracy and Technology, a user “who distrusts the role of verifier will be inclined” to activate this new function “to try to avoid seeing the verifications carried out.” Facebook should study the effects on “misinformation exposure” before rolling out the feature to other parts of the world, “ideally sharing the results,” she said.
Source: BFM TV
