Facial recognition technology and fundamental rights: setting the scene

Author: European Union Agency for Fundamental Rights (EU body or agency)
Pages: 2-4
Published in: Facial recognition technology: fundamental rights considerations in the context of law enforcement

1. Facial recognition technology and fundamental rights: setting the scene
This focus paper explores fundamental rights implications that should be taken into account when developing, deploying, using and regulating facial recognition technologies. It draws on recent analyses and data (Section 3 and Section 4) and evidence from interviews conducted with experts and representatives of national authorities who are testing facial recognition technologies (Section 5).1 The last sections (Section 6 and Section 7) provide a brief legal analysis summarising applicable European Union (EU) and Council of Europe law.
The paper forms part of FRA’s larger research project on artificial intelligence, big data and fundamental rights.2 It is the first paper to focus on the uses of facial recognition technology, and builds on the agency’s extensive past work on the fundamental rights implications of the use of biometric data in large-scale EU information systems in the field of migration, asylum and borders.3
Facial recognition technology (FRT) allows the automatic identification of an individual by matching two or more faces from digital images. It does this by detecting and measuring various facial features, extracting these from the image and, in a second step, comparing them with features taken from other faces.4
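The two-step process described above (extract facial features, then compare them) can be sketched as a toy similarity check. Everything in this sketch is an illustrative assumption rather than material from the paper: real systems derive high-dimensional feature vectors from face images with trained models, and the cosine metric and 0.9 threshold below stand in for whatever comparison function and decision threshold a given system uses.

```python
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def is_match(features_a, features_b, threshold=0.9):
    """Declare a match when similarity exceeds the threshold.

    The threshold value is illustrative: raising it reduces false
    positives (wrong matches) at the cost of more false negatives
    (missed matches), and vice versa.
    """
    return cosine_similarity(features_a, features_b) >= threshold


# Toy vectors standing in for measurements a real system would
# extract from face images (e.g. distances between facial landmarks).
probe = [0.8, 0.1, 0.4, 0.7]
reference_same = [0.79, 0.12, 0.41, 0.68]   # near-identical features
reference_other = [0.1, 0.9, 0.2, 0.1]      # very different features

print(is_match(probe, reference_same))   # similar vectors -> True
print(is_match(probe, reference_other))  # dissimilar vectors -> False
```

The choice of threshold is the point where accuracy statistics (discussed later in the paper) become a policy question: any fixed value trades wrong identifications against missed ones.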
In the private sector, facial recognition technology is widely used for advertisement, marketing and other purposes, with individual customers profiled and identified to predict their preferences towards products based on their facial expressions.5 Other examples from the private sector include a football club using it in their stadium to identify people who have been banned from attending the club’s matches;6 using facial recognition technology to analyse facial expressions of job candidates in interviews;7 and major internet and social media companies, such as Facebook, deploying facial recognition technologies to improve their systems, by tagging faces.8

1 FRA carried out eleven interviews between March and May 2019, in EU Member States such as Germany, France and the United Kingdom, to gain better insight into current testing, and the potential use, of facial recognition technology.
2 The following have been published so far as part of the research project: FRA (2018), #BigData: Discrimination in data-supported decision making, Luxembourg, Publications Office, May 2018; FRA (2019), Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights, Luxembourg, Publications Office, June 2019. For more on the project, consult FRA’s webpage on the project.
3 See, for example, FRA (2018), Under watchful eyes: biometrics, EU IT systems and fundamental rights, Luxembourg, Publications Office, March 2018; FRA (2018), Interoperability and fundamental rights implications – Opinion of the European Union Agency for Fundamental Rights, FRA Opinion – 1/2018 [Interoperability], Vienna, 11 April 2018.
4 For more detail on how facial recognition technology works, see e.g. Introna, L. and Nissenbaum, H. (2010), Facial Recognition Technology: A Survey of Policy and Implementation Issues, Lancaster University Management School Working Paper 2010/030.
The recent evolution of artificial intelligence (AI)-powered facial recognition technology is attractive not only to the private sector. It also opens new possibilities for public administration, including law enforcement and border management. A considerable increase in accuracy achieved in the past few years has prompted many public authorities and private businesses across the world to start using, testing or planning the use of facial recognition technologies.
This, in turn, has sparked an intense debate on the technology’s potential impact on fundamental rights. For example, the large-scale use of facial recognition technology in combination with surveillance cameras in the People’s Republic of China has led to many discussions and concerns about potential human rights violations, particularly with respect to detecting members of certain ethnic minorities.9 Following an increased use of facial recognition in the US, a national survey published in September 2019 by the Pew Research Center finds that, while slightly more than half of Americans (56 %) trust law enforcement agencies to use these technologies responsibly, smaller shares of the public say they
5 See, for example: Italy, Garante per la protezione dei dati personali, Installazione di apparati promozionali del tipo “digital signage” (definiti anche Totem) presso una stazione ferroviaria, 21 December 2017.
6 See EDRi, “Danish DPA approves Automated Facial Recognition”, 19 June 2019.
7 See The Telegraph, “AI used for first time in job interviews in UK to find best applicants”, 27 September 2019.
8 See Wired, “Facebook can now find your face, even when it’s not tagged”, 19 December 2017.
9 Human Rights Council (2019), Surveillance and human rights. Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, A/HRC/41/35; New York Times, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority”, 14 April 2019.
