The European Commission on Wednesday proposed rules to regulate artificial intelligence. The policy proposal discusses, among other things, the use of facial recognition and ‘social scoring’.
In the policy proposal, use cases for artificial intelligence are divided into four different risk categories, each with its own set of rules. Among other things, the Commission refers to a category of AI systems that would pose an ‘unacceptable’ risk to people’s fundamental rights and safety.
AI systems in that category would be banned outright if the proposal is passed. The EU mentions ‘social scoring’ systems as an example, with which governments give individual residents a score based on their behaviour. The Commission also considers “toys that may encourage dangerous behaviour by minors through speech assistance” unacceptable.
One level down is a category of AI systems deemed high-risk. In this group, the Commission mentions, among other things, AI used in ‘critical infrastructure’ such as public transport. The EU also mentions software used in employment, such as managing employees and sorting CVs in selection procedures, AI for robot-assisted surgery, and software for managing migration, asylum applications or border control.
Artificial intelligence used by public authorities also falls into this category, as do all forms of biometric identification such as facial recognition. Their use in public spaces by authorities would in principle be prohibited, with limited exceptions, for example to trace a missing child or to prevent a specific and imminent terrorist threat.
AI systems classified as high-risk would be tightly regulated in every respect. The datasets used to train them must be of ‘high quality’ to minimize the risk of discrimination. In addition, the system’s activity must be logged to guarantee the traceability of its results, appropriate measures for human oversight must be in place, and the systems must come with detailed documentation so that authorities can assess whether they comply with the requirements.
Chatbots and spam filters
The third category covers AI systems that pose a limited risk. Here the European Commission refers to artificial intelligence subject to ‘transparency obligations’, such as chatbots. The risk is low, although “users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.”
Rounding out the list is a category of AI systems that pose minimal risk to people, which will include “most AI systems.” As examples, the Commission mentions AI-enabled video games and spam filters for email services. The proposed regulations will not affect these AI systems, according to the policy proposal.
Implementation of the policy proposal
With regard to the implementation of these proposed rules, the European Commission proposes that the competent national market surveillance authorities within Member States monitor compliance with the new rules. A European Artificial Intelligence Board should facilitate the implementation of the rules and drive the development of artificial intelligence standards. In addition, voluntary codes of conduct are proposed for non-high-risk AI, alongside regulatory sandboxes to facilitate “responsible innovation” in AI.
“AI is a means, not a goal. It has been around for decades, but has now achieved new capabilities fueled by computing power,” said European Commissioner Thierry Breton in an explanation of the proposal. “Today’s proposals aim to strengthen Europe’s position as a global centre of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
It is not yet clear whether and when the policy proposal will come into force. The proposal has yet to be assessed by the European Parliament and the Council of the European Union, and the EU will probably need several years to debate and implement the eventual rules.