For almost two years, Europe has been trying to set limits on artificial intelligence. After the European Commission presented a draft regulation in April 2021, it is now the European Parliament's turn to take up the issue. The co-rapporteurs of the future regulation on artificial intelligence, Romania's Dragos Tudorache and Italy's Brando Benifei, released a first version of their compromise text on Tuesday, March 14.
The text, revealed by the Franco-Belgian outlet Context, sets out the guidelines that will shape the final arrangement. It establishes a definition of AI systems as those "trained on vast data at scale, designed for generality of results, and adaptable to a wide range of tasks."
Regulating ChatGPT and facial recognition
This definition therefore does not apply to tools developed for specific tasks, such as components, modules, or simple artificial intelligence systems.
The regulation aims to cover so-called "high-risk" systems, such as ChatGPT or its even more capable new version, GPT-4. It also targets artificial intelligence models used in facial recognition, transportation, and education.
In this first version of the compromise text, users, importers, distributors, and other third parties will also be considered providers of high-risk AI, Context reveals.
To be as safe to use as possible, artificial intelligence systems must be tested before being placed on the market. Europe insists on the need to identify a system's potential risks and to ensure sound data management.
The compromise text is to be examined at a technical meeting on March 20, Context specifies. It will then be discussed by Parliament's full negotiating team.
Source: BFM TV
