Brussels (Brussels Morning Newspaper) – MEPs Michael McNamara and Brando Benifei will co-chair the EU Parliament’s AI monitoring group, which will also include members of the EP’s Legal Affairs Committee.
The European Parliament’s AI monitoring group, entrusted with overseeing the implementation of the AI Act, will be headed by Michael McNamara (Ireland, Renew) and Brando Benifei (Italy, S&D), according to Euronews.
How will McNamara and Benifei oversee AI regulation efforts?
Euronews has reported that MEP McNamara will co-chair the group on behalf of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and MEP Benifei on behalf of the Committee on the Internal Market and Consumer Protection (IMCO). Benifei led the Parliament’s work on the AI Act in the previous mandate as one of the co-rapporteurs. McNamara, a former member of the Irish parliament, became an MEP in July after the EU election.
What is the focus of the European Parliament’s AI Act?
The AI Act – which regulates AI systems according to the risk they present to society – entered into force in early August. The rules on general-purpose AI will apply one year after entry into force, and the obligations for high-risk systems in three years’ time. No date has been set for the group’s first meeting, and most of its debates are likely to be closed to the public.
What are the expected outcomes of the AI monitoring group’s work?
According to EU sources, similar working groups were formed in the EU Parliament’s last mandate for the Digital Services Act (DSA) and the Digital Markets Act (DMA), and these will continue under the incoming Parliament. Further work on AI is anticipated in both the Parliament and the European Commission’s new term, with additional rules on AI in the workplace and on copyright expected.
In addition, the EU Commission last month unveiled a list of independent experts from the EU, US and Canada authorised to lead the drafting of a Code of Practice on General Purpose Artificial Intelligence, a category that covers language models such as ChatGPT and Google Gemini.
The Code is intended to ease companies’ compliance with the AI Act’s requirements, covering transparency and copyright-related rules, systemic-risk taxonomy, risk assessment, and mitigation measures.