Google announced Wednesday that it will sign the European Union's code of practice on the supervision of artificial intelligence (AI) models, unlike Meta.
“We will join several other companies (…) in signing the European Union's code of practice for general-purpose AI,” said Kent Walker, Google's President of Global Affairs.
OpenAI, the creator of ChatGPT, and the French start-up Mistral have already announced they will sign this code of conduct, while Meta (Facebook, Instagram…), a vocal critic of European digital rules, has said it will not.
Excluding sites known for repeated acts of piracy
Published on July 10, these European recommendations on the most advanced AI models, such as ChatGPT, place particular emphasis on copyright issues.
The EU calls for excluding from AI the sites known for repeated acts of piracy, and asks signatories to commit to verifying that their models do not relay insulting or violent content. These recommendations are aimed at general-purpose AI models, such as ChatGPT, Grok from the platform X, or Google's Gemini.
Grok recently made headlines by relaying extremist and abusive comments. xAI, Elon Musk's new company that operates Grok, apologized for the “horrible behavior” of its chatbot.
This “code of good practice” is not binding. However, signatory companies will benefit from a “reduced administrative burden” when demonstrating that they comply with European AI law, the European Commission promises.
Limiting AI excesses
This upcoming regulation, known as the “AI Act”, has drawn the wrath of the tech giants, which continue to call for the law to be postponed. On Wednesday, Google said the European rules “run the risk of curbing the development of AI in Europe.”
The Commission is sticking to its timetable, with entry into force on August 2 and most obligations applying a year later. The European executive says it wants to limit AI excesses while avoiding stifling innovation. That is why it classifies systems according to their level of risk, with restrictions proportional to the danger.
High-risk applications, used for example in critical infrastructure, education, human resources or law enforcement, will be subject, from 2026, to reinforced requirements before any marketing authorization in Europe.
Source: BFM TV
