Artificial Intelligence in Law Enforcement Settings

Date: 01 June 2023
Year: 2023
Authors: Prof. Dr. Dimitrios Kafteranis, Prof. Dr. Athina Sachoulidou, Prof. Dr. Umut Turksen
Pages: 62
DOI: https://doi.org/10.30709/eucrim-2023-006
I. Introduction

Rooted in popular culture, the catchphrase “follow the money” is often invoked in the context of investigations aimed at uncovering financial malfeasance.1 As Europol notes: “To effectively disrupt and deter criminals involved in serious and organised crime, law enforcement authorities need to follow the money trail as a regular part of their criminal investigation with the objective of seizing criminal profits”.2

This is particularly true for investigating money laundering, which involves disguising the proceeds of criminal activity (predicate offences) to make them appear legitimate. By following the money trail, namely identifying individuals, companies, or transactions that require closer scrutiny, law enforcement authorities (LEAs) are able to seize criminal assets and profits, and bring offenders to justice.3

The European Union (EU) and its Member States are not immune from cross-border financial crime, including but not limited to money laundering. To address this phenomenon, the EU has taken various legislative measures and is currently negotiating a new anti-money laundering and countering the financing of terrorism legislative package that was first proposed in July 2021.4 The creation of the European Public Prosecutor’s Office (EPPO) consolidated the EU’s institutional framework in this regard.5 While the EU is thus taking steps towards a more efficient legal framework for combatting financial crime, new technologies, such as crypto-assets and fast internet connections, have simultaneously opened up new opportunities for criminals to exploit.6 Notwithstanding the above, such technologies may also revolutionise the way LEAs gather and evaluate evidence in order to assist criminal justice authorities in prosecuting crime effectively, particularly to the extent that borderless crime requires cross-border cooperation.

Combining expertise in computer engineering, law, and social sciences from academia, policy makers, and law enforcement agencies, the TRACE project has embarked on exploring illicit money flows (IMFs) in the context of six use cases: terrorist financing, web forensics, cyber extortion, use of cryptocurrency in property market transactions, money laundering in arts and antiquities, and online gambling.7 Its ultimate goal is to equip European LEAs with the tools and resources necessary to identify, track, document, and disrupt IMFs in a timely and effective manner. This can involve, among other things, the analysis and visualisation of financial data (in virtually any language), the identification of suspicious financial activity patterns, and collaboration with other agencies to share information. These tools are developed with the help of cutting-edge technologies, such as artificial intelligence (AI) and machine learning (ML). As a consequence, they should constitute trustworthy solutions adhering to the rule of law, fundamental rights, and ethical design principles. For this purpose, the TRACE project has a dedicated work package (WP8) on the ethical, legal, and social impact of the AI solutions it develops.8

Informed by the research conducted for the TRACE project, this article outlines some of the key findings on the use of AI in law enforcement settings as follows: Firstly, it provides a conceptual framework, including a definition of AI (Section II). Secondly, it explains how AI systems may reshape law enforcement with an emphasis on crime analytics (Section III), and which law governs such uses of AI (Section IV). In doing so, the article employs EU law as a system of reference and sheds light on the AI governance model included in the European Commission’s Proposal for a Regulation laying down harmonised rules on AI (EU AIA).9 Finally, by critically analysing the EU legal regime for AI, the article identifies key shortcomings and offers suggestions and recommendations (Section V).

II. Conceptual Framework: Definition of AI and Data Informing AI Systems

Although there is (still) no unanimously accepted definition of AI,10 the past two decades have been marked by the exponential development of AI systems based on algorithms, statistical models, and other techniques. With the help of advances in computing power, these systems are used to analyse and interpret large amounts of data (originating from various sources and often referred to as “big data”) and to make predictions or decisions based on that analysis.11 This goes hand in hand with the diversification of AI applications, including natural language processing, image and voice recognition, autonomous vehicles, and predictive analytics.

At policy-making level, as early as 2018, the UK government referred to AI in its Industrial Strategy White Paper as: “[t]echnologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition and language translation”.12 In the same year, the European Commission, in its Communication on AI for Europe, emphasised not only the element of intelligent behaviour, but also the degree of autonomy AI systems may present.13 Furthermore, the Commission set up a multi-disciplinary expert group, namely the High-Level Expert Group on AI (AI HLEG), to clarify the definition of AI and to develop ethics guidelines for trustworthy AI.14 The findings of this group have informed the first attempt to regulate AI at EU level, i.e. the EU AIA, which includes a proposal for the first formal legal definition of AI. In particular, art. 3 nr. 1 EU AIA defines an “AI system” as: “software that […] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.15 Designed to capture AI as a sociotechnical concept, this definition has also been used by the TRACE project.

AI applications are data-driven applications. The data used to train an AI system, and the data it processes, depend on the type of tasks the system is designed to perform. AI systems intended to be employed for law enforcement purposes are no exception: they require various types of data, whether personal16 or not, to become effective. This may include, for instance: 1) data on past criminal activity that can be used to train AI systems to forecast criminal activity, 2) social media data that can be analysed to identify behavioural patterns that correlate with suspicious activity, 3) demographic data, such as age, gender, or race, that can be used to inform decisions about the allocation of law enforcement resources, or 4) travel, communication, and financial data, the combination of which can decode the specifics of past criminal activity.
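
To make the interplay of these data types more tangible, the following minimal sketch in Python illustrates how records from different sources might be linked into a single feature vector for a hypothetical transaction-risk model. All class names, fields, and values are invented for illustration only; they are not components of the TRACE project or of any actual law enforcement system.

  from dataclasses import dataclass, asdict

  @dataclass
  class TransactionRecord:
      amount_eur: float
      counterparty_country: str  # financial data
      channel: str               # e.g. "wire", "crypto", "cash_deposit"

  @dataclass
  class SubjectProfile:
      prior_convictions: int          # data on past criminal activity
      social_media_risk_score: float  # derived from behavioural-pattern analysis
      age: int                        # demographic data (legally and ethically sensitive)

  def build_feature_vector(tx: TransactionRecord, profile: SubjectProfile) -> dict:
      """Merge transaction and subject data into one flat feature dictionary."""
      return {**asdict(tx), **asdict(profile)}

  tx = TransactionRecord(amount_eur=9500.0, counterparty_country="XX", channel="crypto")
  profile = SubjectProfile(prior_convictions=1, social_media_risk_score=0.42, age=34)
  print(build_feature_vector(tx, profile))

The technical ease of such linkage is precisely why the safeguards discussed in the next paragraph matter: each merged field carries its own collection context, error rate, and potential for bias.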

Gathering and processing data for developing, training, and using AI systems may raise significant ethical and legal issues, including but not limited to privacy, data protection, bias, and due process.17 To capitalise on the benefits of data-driven applications in a law enforcement environment, it is therefore imperative that the respective algorithms are trained and supplied with accurate data, previously collected in appropriate contexts, and that this data is properly linked, in order to avoid false negatives and, more importantly, false positives.18 What is more, the data used to train an algorithm may reflect discriminatory practices and entrench biases.19 One danger of algorithmic bias is the generation of a bias “feedback loop”, in which the analysis or predictions of an ML-based system influence how the same system is validated and updated.20 In other words, this is a case of algorithms influencing algorithms, because their analysis then influences the way LEAs act on the ground.21 If the algorithmic output were to be used in law enforcement decisions or even as evidence in a courtroom, this reality could adversely affect the rights of the defence and lead to severe consequences, including deprivation of a person's freedom.22 This suggests that high-quality and accurate data is needed to ensure that the resulting predictions, decisions, or actions are also accurate, fair, and unbiased. In fact, the respective AI systems should be tested and audited for accuracy and fairness on a regular basis.23
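
By way of illustration, a regular fairness audit of the kind just mentioned could, at its simplest, compare the false-positive rate of a flagging model across demographic groups. The Python sketch below uses invented data; the group labels, the flagging scenario, and the function are hypothetical and do not describe any actual TRACE or law enforcement tool.

  from collections import defaultdict

  def false_positive_rate_by_group(records):
      """records: iterable of (group, flagged, offender) tuples.
      FPR per group = false positives / (false positives + true negatives),
      i.e. the share of innocent persons in each group who were wrongly flagged."""
      fp = defaultdict(int)  # innocent but flagged
      tn = defaultdict(int)  # innocent and correctly not flagged
      for group, flagged, offender in records:
          if not offender:   # only innocent persons enter the false-positive rate
              if flagged:
                  fp[group] += 1
              else:
                  tn[group] += 1
      return {g: fp[g] / (fp[g] + tn[g])
              for g in set(fp) | set(tn) if fp[g] + tn[g] > 0}

  # Hypothetical audit sample: (group, model_flagged, ground_truth_offender)
  sample = [
      ("A", True, False), ("A", False, False), ("A", False, False),
      ("B", True, False), ("B", True, False), ("B", False, False),
  ]
  print(false_positive_rate_by_group(sample))  # group B is wrongly flagged twice as often

A gap of the size shown here would be a red flag for the bias feedback loop described above. A production audit would, of course, track further metrics (false negatives, calibration, drift over time) and would require a defensible ground truth, which is itself a contested notion in policing data.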

III. Use of AI in Law Enforcement Settings

The use of AI for law enforcement purposes has already been challenged by legal scholars, with the focus placed predominantly on predictive policing, on facial recognition, which allows for the automatic identification or authentication of individuals, and on AI applications employed in criminal proceedings to calculate the risk of recidivism.24 The EU AIA covers the use of AI in law enforcement settings in two scenarios. Firstly, it prohibits the use of real-time remote biometric identification systems in publicly accessible spaces unless this is strictly necessary for achieving the purposes set out in art. 5 (1) lit. d.25 Secondly, the EU AIA classifies other AI systems employed for law enforcement purposes as high-risk (art. 6) – based on the risks they may pose to fundamental rights (recital 38) – and imposes a series of legal obligations on their providers (see Section IV). In particular, point 6 of Annex III to the EU AIA introduces a typology of high-risk automated law enforcement, including AI systems intended to be used:

  • For individual risk assessments of natural persons in order to assess the risk of (re-)offending or the risk for potential victims of criminal offences (lit. a);

  • As polygraphs and similar tools or to detect the emotional state of a natural person (lit. b);

  • To detect deep fakes (lit. c);

  • To evaluate the reliability of evidence in the course of criminal investigations or crime prosecution (lit. d);

  • For predicting the (re-)occurrence of an actual or potential crime based on profiling of natural persons (art. 3 (4) Directive (EU) 2016/680) or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups (lit. e).
