Brussels (Brussels Morning) George Orwell’s novel “1984” might be the most iconic reference to a society living under surveillance. Its depiction of a dystopian future, where a totalitarian government continuously surveils the population – monitoring not only their behaviour but also burrowing into their minds – is a hyperbolic, yet chilling, illustration of what could happen if a government with enough resources decided to control every aspect of society.
There is an obvious exaggeration in the narrative, meant to portray a caricature of post-WWII society. However, in one respect reality is getting closer than ever to fiction: the ever-growing technical ability to recreate the surveillance depicted in the book.
Mass surveillance, or systematic monitoring, can take many forms, depending on the activities or technologies being observed. Commonly, governments deploying it justify its use with the need to fight terrorism, prevent crime or protect national security. Possibly the most famous (or infamous) case of surveillance was uncovered in 2013, when Edward Snowden revealed that the NSA’s PRISM programme systematically collected and decrypted internet communications from various U.S. internet companies.
Regarding video surveillance, many examples can be found worldwide, with China leading in the number of cameras per 1,000 inhabitants (between 27 and 117 in its major cities). In Europe, London remains the city with the densest camera coverage, at around 73 cameras per 1,000 people.
However, developments in artificial intelligence (AI) have equipped video surveillance systems with video recognition capabilities. Systems whose utility was once limited to recording footage and detecting movement or changes in light were suddenly endowed with new features such as recognition of objects, identification of patterns (or deviations from patterns) and biometric recognition. (Biometric technologies can translate body characteristics of human beings, such as fingerprints, skin colour or body movement traits, into measurable and comparable data.)
On 21 April 2021, the European Commission presented its proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (also called the AIR). Article 5 of the AIR proposal prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. However, that same article foresees three exceptions that enable their use: the search for potential victims of a crime, the prevention of a threat to life or of a terrorist attack, and the prosecution of a perpetrator or suspect of a crime.
Although valid, these exceptions seem vague enough to justify continuous surveillance of the public space.
Remote biometric identification of individuals poses a high risk of intrusion into individuals’ private lives, mostly because it is deeply unfair towards the data subjects (i.e. the individuals who are monitored).
Firstly, it is difficult to ensure that data subjects entering a monitored area are properly informed beforehand, even when warning signs are installed at every access point.
Secondly, data subjects cannot be sure what kind of data processing might be taking place (or might take place in the future) with the footage that was recorded.
Could there be an AI algorithm analysing people’s physical traits to guess their sexual orientation?
Or maybe an analysis of the garments of passers-by to infer their political inclination?
These questions lead to a hotly debated aspect of AI: its potential subjectivity. Because AI algorithms are developed by human beings, there is a risk that certain assumptions, misconceptions or biases of their creators will contaminate the way a system behaves. Likewise, the quality of the data provided to an AI system during the learning process has a direct effect on the decisions the system produces.
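The effect of unrepresentative training data can be seen even in a deliberately simplistic sketch: a toy “model” that merely predicts whichever label it saw most often during training. Everything below is invented for illustration and bears no resemblance to a real recognition system.

```python
from collections import Counter

def train(samples):
    """A toy 'model': predict the most frequent label seen in training."""
    counts = Counter(label for _, label in samples)
    return counts.most_common(1)[0][0]

# Skewed training set: 95 examples from one group, only 5 from another.
training_data = [("subject", "group A")] * 95 + [("subject", "group B")] * 5

prediction = train(training_data)
print(prediction)  # every future subject gets labelled "group A"
```

No real classifier is this crude, but the same mechanism operates in genuine systems: whatever is over-represented in the training data dominates the output.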
An AI system will be as biased as its creators.
Another issue has to do with accuracy. Biometric systems compare stored data (e.g. a photo of a suspect) with data collected from the physical environment (e.g. footage of a person), but the quality of the two may differ significantly. A system considers that a match occurs when the resemblance rises above “a certain percentage of certainty”.
This percentage is known as the acceptance threshold, and the system administrator can usually change it. Lowering the threshold makes the system less rigorous and increases the number of matches, including incorrect ones (“false positives”). Raising the threshold results in a more rigorous system which, in certain cases, may fail to match the biometric features of a genuine subject (a “false negative”). There is little assurance that, in a situation where system administrators are pressed to increase detection (for instance, because the match rate is low), they will not resort to lowering the acceptance threshold, with consequences for the rights and freedoms of individuals.
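This trade-off can be shown with a minimal numerical sketch. All the similarity scores and threshold values below are invented for illustration and are not taken from any real biometric system.

```python
# Hypothetical similarity scores (0-100) between a stored template and
# live captures: 'genuine' are captures of the actual subject,
# 'impostor' are captures of unrelated passers-by.
genuine_scores = [88, 91, 76, 83]
impostor_scores = [62, 71, 58, 79]

def count_matches(scores, threshold):
    """How many scores the system would accept as a match."""
    return sum(score >= threshold for score in scores)

for threshold in (85, 75, 65):
    false_positives = count_matches(impostor_scores, threshold)
    false_negatives = len(genuine_scores) - count_matches(genuine_scores, threshold)
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
```

With these toy numbers, the strict threshold of 85 produces no false positives but misses two genuine captures; dropping it to 65 recovers every genuine capture at the cost of two false positives, which is precisely the temptation described above.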
For this reason, in their joint opinion on the proposed regulation on AI, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces in any context.
Moreover, the EDPB and the EDPS also recommend a ban, for both public authorities and private entities, on AI systems categorising individuals from biometrics (for instance, from face recognition) into clusters according to ethnicity, gender, political or sexual orientation, or other grounds for discrimination prohibited under Article 21 of the Charter of Fundamental Rights of the EU.
The EDPB and EDPS agree that societies should resist the urge to deploy mass surveillance infrastructures, as the consequences for the rights and freedoms of populations might be permanent and impossible to undo.
In “1984”, the population’s otherwise unthinkable acceptance of such oppressive surveillance rested on a premise: every citizen was indoctrinated to believe that Big Brother acted on their behalf and for “the greater good”.
Unquestioning acceptance of continuous surveillance poses a risk to democratic societies, as it paves the way for a gradual erosion of fundamental rights.
Democratic governments are not supposed to apply totalitarian measures. Theirs is the burden of withstanding criticism (typically from the opposition) of being unresponsive towards crime or insecurity, when the only measure proposed by those detractors, besides being an attack on civil liberties, is not a solution in itself.
*The views expressed by the author in this article are personal and do not represent the views of the EDPS.