As advancements in technology affect all areas of our lives, law enforcement agencies and private companies are also testing the use of artificial intelligence (AI) for the purpose of public safety. Advanced Remote Biometric Identification (RBI), specifically in the form of Facial Recognition Technology (FRT), is currently at the centre of discussion. RBI refers to the use of artificial intelligence to identify individuals from a distance. Identification is possible because AI matches the biometric features stored in a database against features recorded by a device capable of remotely capturing said data. FRT is a type of RBI that focuses on unique facial features, comparing them to data from a digital image or video (e.g. CCTV footage).
What does this mean around the world?

Countries such as the United States and the United Kingdom are increasingly moving towards reliance on these technologies. Countries in the EU are also recording findings from trial projects involving Facial Recognition Technology. However, as the technology continues to evolve and becomes increasingly widespread, concerns arise about the potential consequences of its use. A majority of these concerns focus on biases and on consequences in the context of law enforcement. In addition, concerns regarding all individuals' privacy rights are at the forefront of the discussion, including:
- Whether the indiscriminate recording of all individuals captured by cameras is aligned with the principle of data minimisation;
- Concerns on the lawfulness and transparency of the use of said technology, as further discussed below; and
- Appropriate processing of special categories of personal data in accordance with legal requirements.
Both the GDPR and its UK equivalent (the ‘UK-GDPR’) provide a legal framework setting standards for the use of this technology. However, the departure of the UK from the EU in 2020 means that the two jurisdictions are now implementing entirely different approaches to the use of artificial intelligence. This blog post analyses said differences, and the implications thereof, with a focus on FRTs.
The history of public surveillance systems in the EU and the UK
Looking at the history of the implementation of public surveillance systems in the EU and in the UK sets the stage for highlighting the difference in the frameworks that apply to this day.
Public authorities and private actors have implemented video surveillance as one of the measures to ensure security since the middle of the 20th century. Camera systems such as CCTV have been increasingly appearing in UK cities since the 1950s, and have progressively evolved technologically. As a result, we are now at the point where South London will be installing its first permanent facial recognition cameras.
Similarly, Germany saw its first shift in the usage of cameras for public security reasons in the 1960s. By the 2000s, the majority of large European cities were deploying CCTV systems.
However, based on this history and according to researchers, the evolution in the technical capabilities of CCTV, and its use in the EU, has always lagged behind that of the UK. One of the reasons for this was the UK's lack of constitutional protections for the right to privacy. Meanwhile, EU countries have demonstrably taken a stricter approach to privacy, even prior to the Data Protection Directive passed in 1995. The EU has implemented further protective measures since, such as the AI Act.
How does the use of facial recognition change between the EU and the UK?
While both jurisdictions use Facial Recognition Technology with the goal of enhancing public and national security, they differ vastly in how extensively they have applied it in practice.
The main difference lies in its application, which is in turn related to the current regulatory differences. In the EU, current deployments of RBI systems are primarily experimental and localised. Case studies include the facial recognition cameras at Brussels Airport, facial recognition at the Hamburg G20 summit, and the DragonFly Project in Hungary. There is currently no example of a fully implemented, permanent FRT or RBI system in the EU.
By contrast, the UK's implementation of such systems is a current point of discourse across the country. As an example, the Met Police's deployment policy for the overt use of Live Facial Recognition to locate people on a watchlist allows deployment in “hotspots” for a number of crimes, ranging from theft and drugs to terrorism and human trafficking.
Additionally, use of the technology has extended to private companies in sectors such as retail and hospitality, which take advantage of it to enhance security and prevent theft and revenue loss.

Regulatory similarities
In the EU and the UK respectively, the GDPR and the UK-GDPR regulate the use of all data processing technologies, including Facial Recognition Technology; the UK implemented the regulation at national level through the Data Protection Act 2018. As a result, a number of legal requirements and issues of public concern are common to both jurisdictions:
- Data needs to be processed lawfully, fairly and in a transparent manner. Public interest can be an applicable legal basis for public authorities and law enforcement (albeit not without justification), whereas private companies are required to jump through more hurdles to justify the necessity, proportionality, and outright lawfulness of the use of FRTs, typically under legitimate interest;
- Processing of biometric data means that special categories of personal data under Art. 9 are being processed, adding an extra layer to the lawfulness argument. Such categories of data can only be processed pursuant to one of the exceptions listed in Article 9. Again, reliance on substantial public interest could be an option, but not without a balancing exercise. This in turn triggers the requirement to carry out a Data Protection Impact Assessment in accordance with Art. 35(3), as the use of said technology arguably meets all three criteria;
- Further considerations and concerns include breaches to the principles of purpose and storage limitation, and data minimisation.
What is the regulatory approach to facial recognition in the EU?

In the EU, the newly adopted AI Act specifically regulates the use of real-time remote biometric identification systems in its Article 5. The article outright bans AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, as well as the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, although the latter ban comes with exceptions. These include:
- Search for abducted individuals, and victims of human trafficking and sexual exploitation;
- Prevention of a specific, substantial and imminent threat to life or threat of terrorism; and
- Localisation of a person suspected of having committed a criminal offence listed in Annex II of the Act (which does not include property damage, theft and/or burglary).
Said exceptions, however, must still take into account the rights and freedoms of the individuals involved. Additionally, Article 27 of the AI Act requires a fundamental rights impact assessment, and law enforcement authorities must register the system in the EU database in accordance with Article 49.
How does the regulation framework differ in the UK?
Since the UK's departure from the EU, its regulation of such technologies has diverged entirely. There is currently no AI-specific regulation in place. The only related legislation, the Data Protection and Digital Information Bill, is currently under discussion in the UK Parliament.
Importantly, the draft of this bill demonstrates how the UK's approach is opposite to that of the EU and may lead to less regulation; one example is the proposed abolition of the Biometrics and Surveillance Camera Commissioner (BSCC). The underlying argument is that removing this office, in a period of fast technological change, will loosen safeguards designed to raise standards and protect citizens, and may ultimately result in the deployment of technologies that are not in the public interest.
That is not to say that the use of said technologies will go entirely unchecked. The Information Commissioner's Office has made a statement calling for the responsible and lawful use of Facial Recognition Technology, and has published guidance on the appropriate use of biometric recognition systems. However, the guidance still relies mostly on GDPR-based principles and rules. It adds nothing new to the conversation on the increased use of FRTs by law enforcement agencies or private companies, which might have legal implications for individuals. Therefore, the status quo remains that, in comparison with the EU, the UK is a regulatory sandbox for the use of such technologies. As a result, concerns arise about the ways this compliance vacuum could be exploited and about the resulting risks for individuals.
Looking forward
Despite the technology being substantially more regulated in the EU, there is still criticism of the general use of FRTs, even with the GDPR and AI Act rules in place. This criticism centres on the vagueness of the definitions in the AI Act, the shift in the AI Act draft from an outright ban on these technologies to an approach with “exceptions”, and the lack of clarity on the implementation of these technologies by private companies outside of law enforcement.