The European Union has released draft legislation on the regulation of artificial intelligence, which will become a key framework for AI suppliers and distributors operating in the EU market.
The legislation divides AI systems into three risk categories: unacceptable risk, high risk, and limited or minimal risk. In most cases, AI systems in the limited or minimal risk category will be able to operate as before, with the legislation specifically targeting AI systems that could compromise the safety or privacy of EU citizens.
European Commission President Ursula von der Leyen said: “Artificial intelligence is a great opportunity for Europe, and citizens deserve technology they can trust. Today, we propose new rules for trustworthy AI. They set high standards based on different levels of risk.”
Limited or minimal-risk AI systems include chatbots, spam filters, video and computer games, inventory management systems, and most other impersonal AI systems already deployed around the world.
High-risk AI systems include most AI with real-world impact, such as consumer credit scoring, recruitment, and safety-critical infrastructure. These are not banned, but the legislation aims to impose stricter requirements and oversight on them, along with steeper fines for those who fail to properly protect their data.
The EU intends to review the high-risk list every year, adding new AI systems or downgrading systems that, while previously high risk, have become normalized in society or whose risk profile has changed.
Once the AI regulation passes, the EU will not allow AI systems that present unacceptable risks. These include systems that use subliminal, manipulative, or exploitative techniques; rather than a single category, this is a general ban on forms of AI such as targeted political advertising or AI that infers emotions from facial expressions.
Remote biometric identification systems will also be prohibited under the regulation, particularly when used by law enforcement to identify individuals.
For organizations operating or distributing AI systems within the EU or the wider economic bloc, this legislation is the first clear sign of what is to come. It will take the EU 12 to 24 months to agree on the finer details, but the legislation is unlikely to change much from this first draft.
This gives organizations in the high-risk category a short window to recalibrate and hire to ensure their AI programs remain viable in the EU. Additional human oversight, transparency, and risk management will be required for AI systems to pass inspection, and penalties for non-compliance are currently set at €30 million or 6% of global revenue, so larger organizations that cannot re-equip their AI systems may be forced to exit the EU.