The European Union reached agreement on Friday on unprecedented legislation to regulate artificial intelligence (AI), a global first, after three days of intense negotiations between member states and the European Parliament.
EU co-legislators have reached a “political agreement” on a text that should promote innovation in Europe, while limiting the possible excesses of these highly advanced technologies.
The European Commission first proposed the text in April 2021. Since then, discussions have continued. The latest round of negotiations, which began on Wednesday afternoon, lasted almost 35 hours…
The emergence of ChatGPT
The process was upended at the end of last year by the emergence of ChatGPT, the text generator from the Californian company OpenAI, which can produce essays, poems or translations in a matter of seconds.
This system, like those capable of generating sounds or images, revealed to the general public the immense potential of AI, but also some of its risks. The wide circulation of fake photographs on social networks has, for example, highlighted the danger of opinion manipulation.
Generative AI was brought into the negotiations at the request of MEPs, who insisted on specific oversight for this type of high-impact technology. In particular, they called for greater transparency about the algorithms and the vast databases at the heart of these systems.
Member states feared that excessive regulation would kill off their emerging champions, such as Aleph Alpha in Germany and Mistral AI in France, by making development costs prohibitive.
Criticism within the technology sector
The political agreement reached on Friday night must be complemented by technical work to finalize the text.
However, criticism has already been heard within the technology sector.
According to one industry critic, “technical work” on crucial details is now “necessary.”
Regarding generative AI, the agreement provides for a two-tier approach. Rules will apply to all developers, requiring them to ensure the quality of the data used to train their algorithms and to verify that it does not violate copyright law.
Developers will also need to ensure that the sounds, images and texts they produce are clearly identified as artificial. Stricter requirements will apply only to the most powerful systems.
The text builds on the principles of existing European product-safety regulations, which rely mainly on the companies themselves to carry out controls.
Rules for “high risk” systems
The heart of the project consists of a list of rules imposed only on systems considered “high risk”, essentially those used in sensitive areas such as critical infrastructure, education, human resources, law enforcement, etc.
These systems will be subject to a series of obligations, such as providing for human oversight of the machine, drawing up technical documentation, and putting in place a risk management system.
The legislation also provides for specific oversight of artificial intelligence systems that interact with humans: they will have to inform users that they are dealing with a machine.
Bans will be rare. They will concern applications contrary to European values, such as the citizen-scoring and mass-surveillance systems used in China, as well as remote biometric identification of people in public places, a ban intended to prevent mass surveillance of populations.
However, on this last point, member states obtained exemptions for certain law-enforcement tasks, such as the fight against terrorism.
Unlike the voluntary codes of conduct adopted in some countries, the European legislation will come with means of monitoring and sanctions, through the creation of a European AI Office within the European Commission. It will be able to impose fines of up to 7% of turnover, with a floor of 35 million euros, for the most serious violations.
Source: BFM TV
