Brussels – The Council of the EU approved the Artificial Intelligence Act, harmonising AI rules with a risk-based approach. This pioneering law fosters safe AI development, ensures respect for fundamental rights, and encourages innovation within the EU market.
The Council of the EU agreed a ground-breaking law seeking to harmonise rules on artificial intelligence, the so-called Artificial Intelligence Act. The flagship legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules. It is the first law of its kind in the world and could set a global standard for AI regulation.
What are the key objectives of the AI Act?
According to the Council, the new law seeks to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it seeks to ensure respect for the fundamental rights of EU citizens and to encourage investment and innovation in artificial intelligence in Europe. The AI Act applies only to areas within EU law and provides exemptions, such as for systems used exclusively for military and defence purposes as well as for research.
How does the AI Act classify AI systems?
The new law classifies AI systems according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorised but subject to a set of requirements and obligations in order to gain access to the EU market.
Which AI systems does the AI Act prohibit?
AI systems such as, for example, cognitive behavioural manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorise people according to specific characteristics such as race, religion, or sexual orientation.
The AI Act also regulates the use of general-purpose AI (GPAI) models. GPAI models that do not pose systemic risks will be subject to limited requirements, for example with regard to transparency, while those posing systemic risks will have to comply with stricter rules.
What governing bodies does the AI Act establish?
To ensure proper enforcement, several governing bodies are established: an AI Office within the Commission to enforce the common rules across the EU; a scientific panel of independent experts to support the enforcement activities; an AI Board with member states’ representatives to advise and assist the Commission and member states on the consistent and effective application of the AI Act; and an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.
What are the penalties for AI Act violations?
The penalties for infringements of the AI Act are set as a percentage of the offending company’s global annual turnover in the preceding financial year or a predetermined amount, whichever is higher. SMEs and start-ups are subject to proportionate administrative fines.