“Error 404 – Match not found”

Date: 01 June 2023
Year: 2023
Author: David Hadwick
Pages: 57
DOI: https://doi.org/10.30709/eucrim-2023-005
I. Why AI Tax Enforcement Systems Are “High-Risk” Systems

In EU Member States, tax administrations are the public organs that make the most use of artificial intelligence (AI) and machine learning (ML) systems to perform State prerogatives. Publicly available data alone reveals at least 70 AI systems leveraged by national tax administrations, unequally spread over 18 EU Member States.1 Even the EU itself, through Eurofisc members, has developed its own ML model: Transaction Network Analysis – a data-matching model meant to detect missing trader intra-Community fraud.2 Accordingly, in certain areas of taxation, AI and ML are already used throughout the EU for the enforcement of taxation rules.
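
The mechanics of such data matching can be illustrated in miniature. The actual design of Transaction Network Analysis is not public; the sketch below is merely an assumed, toy version of how cross-border invoice data might be assembled into a trading graph and scanned for the circular supply chains characteristic of missing trader (carousel) fraud. All trader identifiers and transactions are invented.

# Toy illustration of network analysis for carousel-fraud detection.
# The real TNA system is undisclosed; this is a hypothetical sketch.
from collections import defaultdict

def find_cycles(transactions):
    """Return trading cycles: chains of cross-border sales that loop
    back to their starting trader, a classic MTIC red flag."""
    graph = defaultdict(set)
    for seller, buyer in transactions:
        graph[seller].add(buyer)

    cycles, seen = [], set()

    def dfs(node, path):
        for nxt in graph[node]:
            if nxt == path[0]:
                members = frozenset(path)
                if members not in seen:  # report each cycle once
                    seen.add(members)
                    cycles.append(path + [nxt])
            elif nxt not in path:
                dfs(nxt, path + [nxt])

    for trader in list(graph):
        dfs(trader, [trader])
    return cycles

# Invented invoices, as might be reported under VAT information exchange:
sales = [("A_NL", "B_DE"), ("B_DE", "C_FR"), ("C_FR", "A_NL"), ("D_BE", "E_IT")]
print(find_cycles(sales))  # flags the circular A -> B -> C -> A chain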

These AI tax enforcement systems perform a relatively broad range of tasks, reflecting the wide array of prerogatives of the administration itself. Generally, these different systems can be categorised into two archetypes. Some AI systems are leveraged by EU tax administrations for non-coercive purposes, including chatbots3, nudging systems4, and jurisprudence analysis5. These non-coercive systems constitute a minority of the models used by tax administrations in the EU, albeit a significant one.6 The remaining AI systems are leveraged for coercive purposes, i.e. for tax enforcement tasks such as web scraping7, the detection of statistical risk indicators8,9, and risk scoring to screen and select taxpayers for audit.10 In little more than a decade, predictive analysis has radically transformed tax enforcement and tax administrations in the EU. Currently, the use of statistics and ML underpins all coercive prerogatives involved in selecting a taxpayer for audit: data is collected and processed through ML, and taxpayers are algorithmically selected on the basis of risk indicators inferred from ML predictions. The transformative power of AI is also reflected in the human resources of tax administrations, which are increasingly composed of data scientists and ever fewer tax law experts.11
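
To make the notion of algorithmic audit selection concrete, the following is a minimal sketch, assuming a supervised risk-scoring setup trained on past audit outcomes; the features, figures, and choice of logistic regression are illustrative assumptions, not any administration’s documented pipeline.

# Hypothetical risk-scoring sketch: train on past audit outcomes, then
# rank the current filing population so the riskiest cases are audited first.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per taxpayer: mismatch ratio between declared income
# and third-party reports, number of amended returns, cash-intensity.
X_train = np.array([[0.02, 0, 0.1], [0.40, 3, 0.9], [0.05, 1, 0.2],
                    [0.55, 2, 0.8], [0.01, 0, 0.3], [0.30, 4, 0.7]])
y_train = np.array([0, 1, 0, 1, 0, 1])  # past audit outcomes (fraud found?)

model = LogisticRegression().fit(X_train, y_train)

X_new = np.array([[0.03, 0, 0.2], [0.45, 2, 0.85]])
risk_scores = model.predict_proba(X_new)[:, 1]  # inferred risk indicator
audit_queue = np.argsort(risk_scores)[::-1]     # highest risk first
print(risk_scores, audit_queue)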

Some of these models were used by tax administrations in the EU as far back as 2004. This is, for instance, the case of XENON, a web scraping model leveraged by the Dutch tax administration (Belastingdienst).12 Tax administrations were thus pioneering public algorithmic governance long before debates arose over other popular buzzwords in predictive policing, such as facial recognition, biometric surveillance, and social scoring. The primary reason for the prominence of AI systems in tax administration is the immense documentary burden placed on tax officials. Each year, tax administrations must process billions of documents13, answer millions of queries, and spend several million minutes on the phone.14 Processing such volumes of data manually with the human resources of national tax administrations is simply impossible. Accordingly, long before the advent of AI, tax administrations were already using traditional statistical approaches and heuristics to perform their fiscal prerogatives. The transition from traditional statistics to automated statistics and machine learning thus did not constitute a major scale-up.

1. The risks of AI tax enforcement systems

The EU AI Act follows a risk-based approach, meant to strike a proportional balance between the two policy goals of the instrument, namely the promotion of innovation and the protection of citizens’ fundamental rights. Accordingly, the Regulation outlines four levels of risk, ranging from minimal risk to prohibited. Minimal-risk systems (level 1) generally escape the scope of the instrument, aside from the invitation to self-regulate through codes of conduct; limited-risk systems (level 2) are bound only by minimum transparency requirements in specific use cases, particularly chatbots and deep fakes. Models deemed to bear an unacceptable risk (level 4) are prohibited. By sheer number of articles, the majority of obligations in the instrument are imposed on high-risk systems (level 3). According to the current draft proposal, organisations with high-risk systems must comply with strict requirements such as certification, data governance, transparency, human oversight, record-keeping, and cybersecurity. Compared to the other levels of risk, the obligations imposed on high-risk systems are numerous and substantively detailed, often requiring granular control of specific externalities. Hence, the risk-based approach seeks to ensure that the obligations imposed on an AI system are proportional to the risks it generates.

In that regard, AI tax enforcement systems should be viewed as “high-risk” because these systems have been shown to contain various sources of conflict with EU citizens’ rights, documented in jurisprudence and doctrine. This is less true for non-coercive AI tax systems; in fact, some of these models are a genuine net plus both for the administration and for taxpayers. Chatbots, for example, enable taxpayers to request information from the administration at any time of the day and year. Processing little to no taxpayer personal data,15 these systems have opened up a new channel of communication with tax officials while alleviating their substantial administrative burden. Reports indicate that chatbots reduce the number of queries sent directly to the administration by a margin of up to 90%, with very high satisfaction rates amongst taxpayers.16 The same can be said of nudging: simply by adapting the language of default letters sent to taxpayers, e.g. referring to a taxpayer by his or her first name or adding references to the benevolent purpose of tax collection, the speed and rate of compliance increase in noteworthy ways.17

Conversely, coercive AI tax systems used for tax enforcement bring about serious risks of conflict with taxpayers’ fundamental rights and with tax procedure as a whole. These risks have already materialised in a number of cases. Coercive AI systems can conflict with the principle of legality because they disrupt procedures to such an extent that these no longer reflect procedural codes. For instance, in eKasa18, the Slovak Constitutional Court ruled that machine learning bolstered surveillance to such an extent that it required a specific framework and tailored safeguards to negate the risks of abuse. Currently, the majority of tax administrations in the EU use coercive AI systems without a specific legal basis to that effect and without safeguards to negate the demonstrated risks of such systems.19 This is problematic in terms of legality, as the different externalities these systems generate cannot be systematically captured by existing procedural rules.20 Most notably, these systems entail risks of conflict with the right to a private life and the right to data protection, as seen for example in SyRI21 or in the French Council of State’s (Conseil d’État) decision on the use of web scraping22. The primary source of friction lies in the fact that tax administrations have adopted tools that increase their surveillance capability on the basis of procedural rules that pre-date the internet. Through web scraping, tax administrations are capable of surveilling the internet, e-commerce platforms, social media, or satellite images without differentiating between compliant and non-compliant taxpayers. As these data processing activities are generally regarded as an administrative process, tax officials do not have to secure any form of prosecutorial assent to use web scraping systems and collect taxpayer personal data.23 These tools collect data in bulk and match the data to individual taxpayers at a speed unrivalled by any human tax official, drastically increasing the scope of data collected and the number of taxpayers surveilled by the administration. In spite of these apparent interferences with privacy, the use of web scraping by tax administrations in the EU – the scope of data collected, the sources of data collection, the limits and safeguards, etc. – remains largely unregulated.24
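
The matching step that makes bulk collection consequential can likewise be sketched. Assuming simple name-based record linkage – the actual matching criteria of these tools are undisclosed, and every record below is invented – a few lines suffice to link scraped platform listings to a taxpayer registry, and to show how sensitive the outcome is to an arbitrary similarity threshold, the “match not found” errors this article’s title alludes to.

# Hypothetical record-linkage sketch matching scraped listings to a registry.
from difflib import SequenceMatcher

taxpayer_registry = {"NL001": "Jansen, Maria", "NL002": "de Vries, Pieter"}
scraped_listings = [("maria jansen", "sells 40 items/month on platform X"),
                    ("p. devries", "rents two apartments via platform Y")]

def normalise(name):
    # crude normalisation: lowercase, drop commas, sort name tokens
    return "".join(sorted(name.lower().replace(",", "").split()))

for scraped_name, activity in scraped_listings:
    for tin, registered in taxpayer_registry.items():
        score = SequenceMatcher(None, normalise(scraped_name),
                                normalise(registered)).ratio()
        # arbitrary cut-off: too loose mislabels homonyms, too strict
        # misses matches (here "p. devries" goes unlinked)
        if score > 0.8:
            print(tin, registered, "<-", activity, f"(similarity {score:.2f})")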

Moreover, predictive models such as risk detection and risk scoring tools are prone to errors, statistical biases, and discrimination. Being predictive, these systems merely forecast a probable outcome based on what is statistically likely – a process that by nature involves a great deal of uncertainty, error, and deviation from objective reality. For these reasons, predictive models have already resulted in serious scandals such as Robodebt25 in Australia and the toeslagenaffaire26 in the Netherlands. The latter is perhaps the best illustration of the devastating consequences that AI tax enforcement systems may occasion, particularly when they are not sufficiently regulated.
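
A short back-of-the-envelope calculation shows why such error rates are structural rather than accidental: when the behaviour being predicted is rare, even a seemingly accurate classifier flags mostly innocent taxpayers. All figures below are illustrative assumptions, not the actual statistics of any administration’s model.

# Base-rate illustration with invented figures.
population = 1_000_000
fraud_rate = 0.005           # assume 0.5% of claims are fraudulent
sensitivity = 0.90           # share of fraudsters correctly flagged
false_positive_rate = 0.05   # share of honest taxpayers wrongly flagged

true_pos = population * fraud_rate * sensitivity                  # 4,500
false_pos = population * (1 - fraud_rate) * false_positive_rate   # 49,750
precision = true_pos / (true_pos + false_pos)
print(f"Share of flagged taxpayers actually fraudulent: {precision:.1%}")
# -> roughly 8%, i.e. over 90% of flags are errors despite a "90% accurate" model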

2. The toeslagenaffaire, a stark example of the risks of AI tax enforcement

In the toeslagenaffaire, the Dutch tax administration (Belastingdienst) attempted to automate the assessment of childcare allowance (kinderopvangtoeslag) fraud with a predictive model. The model had the power, without any human input, to discontinue the allowances of welfare recipients and request the reimbursement of all aid ever received. Parents labelled as fraudsters by the AI system were made to pay back large sums of money (€35,000 on average – up to €250,000), testimony to childcare costs in the Netherlands that are among the highest in the OECD.27 As the label was disclosed to other public and private actors, following so-called “linkage of records”28, parents were denied credit cards, bank accounts, loans, other means of public assistance, etc. In some cases, child protective services visited children’s schools or homes to forcibly separate them from their parents.29 Later inquiries by the State Secretary revealed that the predictions of the model were erroneous in 94% of cases.30 A substantial part of these errors was the result of discrimination induced by historical biases in the administration’s data, data inaccuracies, and the processing of data on nationality and ethnicity by the...
