Conclusion

AI has the potential to address challenges facing our world, yet it can also deepen divisions, which exacerbate existing forms of inequality.
AI is a heterogeneous set of very powerful and very useful techniques for
discovering complex connections, but that does not make it an intelligent
subject. The appearance of intelligence does not make a model intelligent; intelligence remains with the specialists who design these models, use them and are able to interpret their results, as a doctor does when making a diagnosis.
The opacity of AI systems, rather than engendering trust, erodes the trust
humans have in machines, especially as machines move towards autonomous decision-making (all the more so if they are self-learning). But ethics in AI
should not be limited to the development of transparent and fair systems.
Sustainable AI solutions must take into account other human dimensions
such as the personal, social, legal, organisational and even institutional.
The proposition of Trustworthy AI is a distraction that can cover up the hidden work of underpaid employees, blur responsibilities and justify the delegation of decisions to machines.
An AI that can be trusted in areas previously reserved for human judgement, merely by meeting certain established prerequisites, leaves humanity out of the equation. The most important human activity, asking “how should I act to be good?”, remains ours to answer. As Coeckelbergh notes:
Moreover, other sorts of questions also need to be answered in policy proposals: not
only what should be done, but also why it should be done, when it should be done, how
much should be done, by whom it should be done, and what the nature, extent, and
urgency of the problem are. (Coeckelbergh, 2020b)
Some authors propose Virtue Ethics for AI: not as an ingenious combination of algorithmic requirements, but as a practice for those involved in AI research, development, deployment and operation. Decision-makers across the life cycle of AI systems must be virtue ethicists, i.e. people with habits of acting appropriately.