Should we trust AI?

It becomes apparent that a concrete context is required to delimit the scope of this statement. We have collected and presented below the positions of a number of renowned experts on the question of whether it is really possible to trust an AI system.
Thomas Metzinger’s (2019) critique of trustworthy AI as a narrative constructed by the HLEG is supported by his examination of trustworthy AI as a concept:

The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy).
Similarly, Joanna Bryson has bluntly stated: “No one should trust Artificial Intelligence.” For Bryson, trust is a relationship between equals, in which the trusting party, even without complete certainty, believes in the promises made by the trusted party. AI, however, is a set of development techniques that allow machines to compute actions or knowledge from data. Bryson therefore argues that the only peers AI could have are other software development techniques, and since techniques do not trust, no one can really trust AI. More than that, she believes that no human should trust an AI system, because it is both possible and desirable to design AI to be accountable. She concludes that we do not need to trust an AI system, since we can know the probability that it will perform the assigned task, and only that task. What matters is that when a system using AI causes harm, we can hold the humans behind that system accountable.
Accountability of AI systems is often regarded as nearly impossible to achieve because of their complexity. Bryson counters that we have long held non-human entities, such as banks and governments, to account; all we need to do is keep proper records. She gives the example of accidents involving autonomous cars. When such an incident occurs, within a week we know what the car sensed, what it took those sensor readings to mean, and why it acted the way it did. This is accountability on the part of the companies that built the car. She concludes by pointing out, however, that not all AI systems are built to these standards.
