Belgium, (Brussels Morning Newspaper) The forthcoming Artificial Intelligence Act (AI Act) is one step closer to adoption by the European Parliament. In mid-March, as rapporteur, I presented the draft opinion of the Committee on Culture and Education (CULT). Three significant gaps in the draft AI Act make it clear that remote biometric recognition, e-proctoring, and artificial intelligence in media are priorities that must not be forgotten in the upcoming negotiations.
Ban of facial recognition without exceptions
At present, we can see how the Russian regime abuses facial recognition systems to detect protesters. In Moscow, at least 180,000 cameras were installed two years ago, a system abused by the authorities there to identify and persecute participants of anti-war demonstrations. Even before, as reported by Amnesty International, Russian authorities used the same technology to monitor and detain activists and journalists involved in rallies supporting Alexei Navalny.
The European Commission has proposed a ban on remote biometric identification, except where there is court or emergency authorisation. However, we must not give a blank cheque to any European government that might be tempted to abuse such a technology in order to track and persecute its citizens. The recent revelations of the Pegasus scandal provided clear proof of how willing the Polish and Hungarian governments are to spy on journalists and opposition politicians.
In light of the danger that the deployment of remote biometric identification systems in publicly accessible places poses to citizens’ fundamental rights, to freedom of assembly, and to the work of investigative journalists, activists, and political representatives, I propose banning the deployment of such technologies without exception.
We cannot give free rein to governments operating on the edge of democracy to abuse technology to spy on the opposition, on journalists, and on the ordinary people. That, after all, is not the Europe we want to live in.
Extension of the definition of high-risk AI applications
I also focused on the definition of high-risk AI applications in areas of education, media, and culture and on the modification of certain provisions related to banned practices. The reason is the increasing deployment of AI technologies in education and training facilities.
It is also essential not to forget the e-proctoring systems used to monitor students during tests, and the applications used to determine the subject matter or programme a student should study. For example, if a student is taking a remote test in a student dormitory, where both audio and video are recorded, extraneous noise might disrupt the monitoring process and be misinterpreted as an attempt to cheat.
As for media, I propose adding to the high-risk list the creation and dissemination of machine-generated news articles, as well as recommendation and ranking algorithms for audio-visual content. A misused AI system can contribute to the spread of disinformation.
No social scoring for companies
Another dangerous loophole in the Commission’s proposal is the absence of a ban on social scoring by private companies. The concept of social scoring is at odds with European values, and we should therefore say a clear no. It poses a real threat of discrimination and of the exclusion of certain groups or individuals. For this reason, we must extend the ban on the deployment of social scoring systems to cover both public and private entities.
Moving closer to adoption
I welcome the fact that we are getting closer to adopting the AI Act, with the Committee on Culture and Education’s proposal to be voted on in April. The final vote in the plenary is due to take place this autumn, during the Czech Republic’s presidency of the Council. For now, let us keep fighting to ensure that there are no loopholes in the final proposal!