Ethical requirements and AI facts

Pages 72-92
Ethical principles of the Ethics Guidelines for Trustworthy AI (Ethics
Guidelines) are a long-term goal, a strategy. They are put into action through
short-term tactical mechanisms. The Ethics Guidelines cite several of the
most relevant ones, which are discussed below.
 -
ternatively we can look at some indicators usually related to it, in this case we
could count how many times we laugh. However, not every time we laugh is
due to a happy state, and we can be happy without laughing. Consequently,
meaningful requirements must be chosen appropriately for each case. In the
chapter Case Studies we will see how ethical principles are translated into

Human agency and oversight
AI systems must support human autonomy and human decision-making. According to the Guidelines, they should act as enablers to a democratic, flourishing and equitable society.
Fundamental rights: Like many technologies, AI systems can equally enable and hamper fundamental rights. They can benefit people, for instance, by helping them track their personal data, or by increasing the accessibility
of education, hence supporting their right to education. However, given the
reach and capacity of AI systems, they can also negatively affect fundamental
rights. In situations where such risks exist, a fundamental rights impact assessment should be undertaken. This should be done prior to the system's development and include an evaluation of whether those risks can be reduced or justified as necessary in order to respect the rights and freedoms of others. Moreover, mechanisms should be put into place to
receive external feedback regarding AI systems that potentially infringe on
fundamental rights.
Human agency: Users should be able to make informed autonomous
decisions regarding AI systems. They should be given the knowledge and
tools to comprehend and interact with AI systems to a satisfactory degree
and, where possible, be enabled to reasonably self-assess or challenge the system. AI systems should support individuals in making better, more informed
choices in accordance with their goals. AI systems can sometimes be deployed to shape and influence human behaviour through mechanisms that may be difficult to detect, since they may harness sub-conscious processes, including various forms of unfair manipulation, deception, herding and conditioning,
all of which may threaten individual autonomy. The overall principle of user
autonomy must be central to the system’s functionality. Key to this is the right
not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them.

Human oversight: Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. HITL refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention during the design cycle of the system and monitoring of the system's operation. HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by the system. Moreover, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate.
Oversight mechanisms can be required in varying degrees to support other
safety and control measures, depending on the AI system’s application area
and potential risk. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required (European Commission, 2019).
Comments
-
ishing and equitable society seems a pretentious claim, as if the equitable,
     
distillation of fundamental rights, the creation of a just society, is the fruit of
centuries of evolution of ideas. A technology that seeks optimal outcomes by
reinforcing commonalities and eliminating differences cannot be considered
progressive.
