Decidedly, DeepSeek, the Chinese AI that caused a sensation at the beginning of the year, has difficulty being impartial. The Chinese chatbot had already been accused of censoring certain sensitive subjects, such as the Tiananmen Square events or Taiwan's independence; now, it appears it does not give the same results to all its users.
CrowdStrike's research, shared with The Washington Post, indicates that the artificial intelligence tool refuses to help programmers when they say they are working for groups considered sensitive by the Chinese government. Worse, in some cases, the AI simply delivers defective code with serious security flaws.
Defective code or categorical rejection
To reach this conclusion, the US security company sent DeepSeek requests for help writing programs, for different regions and different purposes. The results are telling.
Requests asking the AI for a program capable of piloting industrial control systems were the riskiest category: 22.8% of the responses contained flaws. That rate rises to 42.1% when the request specifies that the systems would be operated by the Islamic State. Requests related to software for Tibet or Taiwan are also more likely to generate poor-quality code.
Finally, DeepSeek categorically refused to work for the Islamic State and for supporters of Falun Gong, two movements banned in China. Requests were rejected 61% and 45% of the time, respectively. Western AIs, meanwhile, refuse to help Islamic State projects but have no problem with Falun Gong, according to CrowdStrike.
According to Adam Meyers, vice president of the security company, DeepSeek's faulty responses could be explained by several hypotheses. The AI engine could simply be following Chinese government directives, which would push it to refuse to help certain groups considered sensitive, or to do so in a misleading way in order to sabotage perceived enemies.
An AI under Chinese control?
Last January, several tests showed that the tool is no exception to Beijing's control. This is hardly surprising, since the company, backed by an investment fund based on China's east coast, is obliged, like all other Chinese companies, to comply with local legislation and respect the core values of socialism. For example, any organization is prohibited from undermining the regime in place.
As a result, the AI does not hesitate to dodge certain subjects, such as President Xi Jinping or Taiwan's independence. In some cases, the tool even parrots official talking points, such as when asked about the espionage suspicions weighing on TikTok. Producing bad code therefore appears to be the logical next step.
Another possible hypothesis: the quality of the training data. Programming projects from certain regions, such as Tibet, could be of lower quality than their Western equivalents, due to a lack of local expertise or to deliberate sabotage. Conversely, CrowdStrike's tests showed that the answers DeepSeek provided for US-based projects were the most reliable.
Finally, experts do not rule out a simpler explanation: the model may tend to generate defective code on its own as soon as a request associates a project with a region perceived as rebellious.
Unveiled last January, DeepSeek shook up the codes of the artificial intelligence market. Its AI can perform as well as the US leaders in the sector, but with far fewer resources. Stunning capabilities, then, but not without limits.
Source: BFM TV
