Conclusions

Author: European Union Agency for Fundamental Rights (EU body or agency)
Pages: 33-34
FRA Focus
Using facial recognition technology – a technology that has developed quickly in recent years and is increasingly used by multiple actors – affects a range of fundamental rights. However, there is limited information about the way and extent to which law enforcement uses the technology, and about the impact of its use on fundamental rights. Working with new AI-driven technologies, which are not yet fully understood and with which little experience has yet been gathered, requires the involvement of all relevant stakeholders and experts from different disciplines.
EU law recognises facial images as biometric data, as they can be used to identify individuals.
Facial recognition technology can be used in many different ways, such as verifying the identity of a person, checking whether a person is on a list of people, and even categorising people according to different characteristics. Live facial recognition technology detects all faces in video footage and then compares them against watch lists – a use potentially deployed in public spaces.
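The verification and identification uses described above can be sketched in code. The following Python sketch is purely illustrative and not drawn from the paper: the embedding vectors, names and similarity threshold are hypothetical stand-ins for the numeric face representations a real system would compute from images.

```python
# Illustrative sketch only: verification (1:1) vs identification (1:N)
# matching of face "embeddings", the numeric vectors a real facial
# recognition system would extract from images. All vectors, names and
# the threshold below are hypothetical, not taken from the paper.
import math

THRESHOLD = 0.8  # assumed similarity cut-off

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def verify(probe, reference):
    """1:1 check: do two facial images appear to show the same person?"""
    return cosine_similarity(probe, reference) >= THRESHOLD

def identify(probe, watchlist):
    """1:N check: return the best-matching watch-list name, or None."""
    best_name = max(watchlist, key=lambda n: cosine_similarity(probe, watchlist[n]))
    if cosine_similarity(probe, watchlist[best_name]) >= THRESHOLD:
        return best_name
    return None

# Identification compares the probe against every entry, so the chance
# of a spurious match grows with the size of the watch list.
probe = [0.9, 0.1, 0.2]
watchlist = {"person_a": [0.88, 0.12, 0.2], "person_b": [0.1, 0.9, 0.3]}
match = identify(probe, watchlist)
```

Because identification runs one comparison per watch-list entry, every additional entry is a further opportunity for a false match – one way to see why the paper argues that identification calls for a stricter necessity and proportionality test than verification.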
While not much information is available about the
actual use of facial recognition technology in the
EU, several Member States are considering, test-
ing or planning the use of the technology for law
enforcement purposes. Most actively, the police in
the United Kingdom carried out several trials in real-life situations, such as sports events, even using real watch lists. Other law enforcement agencies, such as the police in Berlin, Germany, and Nice, France, tested the technology’s accuracy in larger trials with volunteers. The lack of more comprehensive
information about the actual use of the technology
limits the opportunities to analyse its fundamental
rights implications. In particular, there are no laws
or other guidance or information on who will be
included in potential watch lists.
The fundamental rights implications of using facial
recognition technology vary considerably depending
on the purpose, context and scope of the use. Some
of the fundamental rights implications stem from the
technology’s lack of accuracy. Accuracy has improved considerably, but the technology still comes with a certain rate of error, which can negatively impact fundamental rights. Moreover, importantly,
several fundamental rights concerns would remain
even if there were a complete absence of errors.
Notwithstanding the varying context, purpose and
scope of the use of facial recognition technology,
several fundamental rights considerations apply. The
way facial images are obtained and used – poten-
tially without consent or opportunities to opt out
– can have a negative impact on people’s dignity.
Relatedly, the rights to respect for private life and
protection of personal data are at the core of fun-
damental rights concerns when using facial recog-
nition technology. In addition, any use of the tech-
nology needs to be thoroughly assessed in terms of
its potential impact on non-discrimination and rights
of special groups, such as children, older persons
and persons with disabilities, because of the (some-
times unknown) varying accuracy of the technology
for these groups and according to other protected
characteristics. Moreover, freedom of expression,
association and assembly must not be undermined
by the use of the technology.
Lastly, the paper highlights that it is essential to
consider procedural rights when facial recognition
technology is used by public administrations, includ-
ing the right to good administration and the right
to an effective remedy and fair trial.
Given the novelty of the technology as well as the
lack of experience and detailed studies on the impact
of facial recognition technologies, multiple aspects
are key to consider before deploying such a sys-
tem in real life applications:
- Following the example of the large-scale EU IT systems, a clear and sufficiently detailed legal framework must regulate the deployment and use of facial recognition technologies. Determining when the processing of facial images is necessary and proportionate will depend on the purpose for which the technology is used and on the safeguards in place to protect individuals whose facial images are subjected to automated processing from possible negative consequences.
- Forms of facial recognition that involve a very high degree of intrusion into fundamental rights, compromising the inviolable essential core of one or more fundamental rights, are unlawful.
- A distinction must be made between the processing of facial images for verification purposes, when two facial images are compared to verify whether they belong to the same person, and their processing for identification purposes, when a facial image is run against a database or watch list of facial images. The risk of interference with fundamental rights is higher in the second case, and therefore the necessity and proportionality test must be stricter.
- So-called “live facial recognition technologies” – when facial images are extracted from video cameras deployed in public spaces – are particularly challenging. Such a use triggers different
