IMPACT OF CURRENT USE OF AI ON
SELECTED FUNDAMENTAL RIGHTS
Deploying AI systems engages a wide range of fundamental rights. As
seen in Chapter 2, the use cases presented in this report involve a range
of technologies of varying levels of complexity and automation. They are
in different phases of development and applied in different contexts, for
different purposes and at different scales. While the rights affected depend on these factors, a number of horizontal and sector-specific fundamental rights issues emerge.
The chapter begins with a general overview of risks perceived by interviewees,
and their general awareness of fundamental rights implications when using
AI. The chapter then highlights selected fundamental rights affected by AI-related technologies, with reference to the four use cases analysed.
The analysis takes into account and presents the views, practices and
awareness of these issues expressed in the interviews conducted for this
report. Interviewees were first asked about the general risks they see when using AI. They were then asked about their general awareness of fundamental rights when using AI, and about more concrete fundamental rights implications, which were mostly linked to data protection, non-discrimination and the availability of complaints mechanisms.
3.1. PERCEIVED RISKS
It is important to recognise that many issues cut across different rights. For
example, a potentially biased decision made by an algorithm could involve the right to non-discrimination, the right to protection of personal data and the right to an effective remedy. Similarly, a particular issue can be seen from the perspective of different rights. For instance, a good explanation of a decision made by an algorithm is required under the right to protection of personal data, the right to good administration, and the right to an effective remedy and to a fair trial.
When asked about general risks when using AI, the interviewees did not always
mention fundamental rights as the main risks, although some highlighted
related topics. Private sector representatives most often mentioned inaccuracy
as a risk of using AI, followed by potential bias and concerns about the proper legal basis for processing personal data. One respondent from an international retail company stated that one business risk is linked to European customers being extremely knowledgeable about their rights: people do not hesitate to ask about data storage and automated decision making. If customers are not properly informed, they might complain and the company may lose them as clients. In addition, the interviewee continued, breaching the law, and the possible fines linked to a breach, is another major business risk.
With respect to public administration, bias was most often highlighted as a risk associated with using AI. In addition, public authorities often discussed inaccuracy and data re-identification as risks of using AI. For example, interviewees working on social benefits algorithms stated that incorrect results in general are a risk. These can occur due to rare cases, which the algorithm does not identify well, or due to errors in the input data. They also highlighted the difficulties associated with moving from testing to deploying a system, including technical challenges, the resources required and potentially different results once deployed.
Respondents working on targeted advertising also highlighted business
risks – for example, when offering irrelevant or inappropriate content. One
respondent mentioned potentially losing control over automated systems.
In addition, interviewees indicated challenges linked to the difficulty of interpreting results and outputs from AI systems. One interviewee from the consultancy sector feared that a lack of sufficient AI knowledge and understanding can cause ongoing projects to be halted, because a company is unable to explain clearly what an algorithm does and for what purpose.
Another interviewee from the law enforcement sector, looking into the possible use of AI to support decisions about licence applications, explained that there are inherent risks in how and why such a system proposes a certain response. For example, when potentially using AI to support decisions about licence applications for firearms, the respondent asserted that it would be critical to understand the reasoning behind not only negative decisions but also positive ones. Several interviews showed that a major concern is assigning properly trained staff with sufficient expertise to trace, explain and interact with the AI system.
This finding is also corroborated by the results of the European Commission survey among companies in the EU. In that survey, 85 % indicated the difficulty of hiring new staff with the right skills as an obstacle to adopting AI technologies; 80 % mentioned the complexity of algorithms as an obstacle.1
With respect to the ability to explain decisions based on algorithms, an
interviewee working in public administration mentioned that there are no
alternatives to being completely transparent when making decisions. There
should not be any room for doubt. In a similar vein, a respondent working in the area of health for the private sector mentioned that ‘self-learning’ algorithms are forbidden in their area of work, because only fixed algorithms can be traced.
Other risks that interviewees reported without providing much additional information include cyber-security, data quality, excessive monitoring of people due to the use of data and algorithms, job loss due to automation, and profiling.
3.2. GENERAL AWARENESS OF FUNDAMENTAL RIGHTS
AND LEGAL FRAMEWORKS IN THE AI CONTEXT
Not everyone in the EU is aware of their fundamental rights. FRA’s
Fundamental Rights Survey shows that slightly more than every second
person in the EU (aged 16 or older) has heard about the Charter. Slightly
more people, two out of three, have heard about the ECHR and the Universal
Declaration of Human Rights. This might be because the ECHR is older and
more established in people’s common knowledge.2
The majority of people interviewed for this project acknowledged that using AI can generally affect fundamental rights. Only very few mentioned that their use of AI does not have a potential impact on fundamental rights or …
“The use of AI can bring many benefits, but also risks, it is like nuclear energy.”
(Interviewee working in private sector, Spain)
“[Our use of AI] does not impact [human rights] in any way. In terms of the decision process, it does not matter whether the decision is made by machine or a human.”
(Interviewee working for public administration, Estonia)

To continue reading

Request your trial

VLEX uses login cookies to provide you with a better browsing experience. If you click on 'Accept' or continue browsing this site we consider that you accept our cookie policy. ACCEPT