Ethical principles and AI facts

In a previous chapter we set out the ethical principles on which AI should
be based in order to be trustworthy, according to the European Union. We
have also presented the key characteristics of deep learning (DL), the most
promising technique, which has aroused great interest in large technology
companies as a business opportunity and in governments as a way of
drastically increasing productivity, streamlining or automating processes
reserved for human beings and making unbiased decisions. In this chapter
we will see how DL affects the ethical principles of the Ethics Guidelines
for Trustworthy AI (Ethics Guidelines):
Respect for human autonomy
Prevention of harm
Fairness
Explicability
The vision conveyed by these principles is as if an algorithm were a medicine
whose problems could be objectively delimited through clinical trials.
Nothing could be further from the truth. The data used, what they are used
for and how they are processed, among other factors, condition the answers.
In AI there are no objective test subjects, as guinea pigs are in medicine.
AI adds an […] long-term harmed, even to suspect it. Let us now look at each
of these ethical principles in detail.
Respect for human autonomy
The fundamental rights on which the EU is based are aimed at ensuring
respect for the freedom and autonomy of human beings. Humans interacting
with AI systems must be able to maintain full and effective self-determination
over themselves. AI systems should not unjustifiably subordinate, coerce,
deceive, manipulate, condition or herd humans. Rather, they should be
designed to augment, complement and enhance the cognitive, social and
cultural abilities of humans. Human control must be ensured in AI systems.
AI systems should support humans in the work environment and aim for the
creation of meaningful work. (European Commission, 2019)
TRUSTWORTHY AI ALONE IS NOT ENOUGH
Comments
In elaborating on the meaning of human autonomy, philosophers speak of
personal identity, personal autonomy, human rights and moral autonomy12,
which is much more than being free from coercion, deception or manipulation.
Throughout the history of thought, different visions of human autonomy have
been developed, which are beyond the scope of this book.
AI makes it possible to create tools with hitherto unthinkable capabilities.
It not only serves to automate tasks whose solutions we intuitively sense
but cannot formalize; it can also make it possible to create artefacts with
unpredictable behaviour, whether intentionally or unintentionally. This
introduces a great deal of uncertainty about its impact on people's lives
and on nature in general.
One of the misconceptions about AI is that it can solve humanity's old problems, but
actually the COVID-19 crisis has shown us that AI has limitations and can come up
with very good pattern recognition and even with unexpected patterns of new
proposals for new drugs. Ultimately it is humans who have to follow up on these
hypotheses. Only humans assisted by AI can solve these problems. (Neppel, 2020)
Autonomy is a key value and a fundamental requirement in liberal democracies.
AI systems open up new opportunities to increase personal autonomy, but at
the same time pose risks of interference with it. A clear case in point is
that of Cambridge Analytica, which attempted to manipulate voters during the
2016 U.S. presidential election (BBC, 2018).
Human autonomy is presented in the Ethics Guidelines among the key ethical
principles: "AI systems must not coerce, subordinate, deceive, […]". It is
understood as maintaining the ability to decide and control. What
differentiates […] AI can influence people's decision making because it can
specify nudging tactics for each person […] your opportunity to think for
yourself and takes away your autonomy.
12 See, for example, https://plato.stanford.edu/search/searcher.py?query=personal+autonomy
