AI and fundamental rights - why it is relevant for policymaking

Author: European Union Agency for Fundamental Rights (EU body or agency)
Artif‌icial intelligence(AI) is increasingly used in the private and public sectors,
affecting daily life. Some see AI as the end of human control over machines.
Others view it as the technology that will help humanity address some of its
most pressing challenges. While neither portrayal may be accurate, concerns
about AI’s fundamental rights impact are clearly mounting, meriting scrutiny
of its use by human rights actors.
Examples of potential problems with using AI-related technologies in relation
to fundamental rights have increasingly emerged. These include:
- a recruitment algorithm was found to generally prefer men over women;1
- an online chatbot2 became 'racist' within a couple of hours;3
- machine translations showed gender bias;4
- facial recognition systems detect gender well for white men, but not for black women;5
- a public administration's use of algorithms to categorise unemployed people did not comply with the law;6
- a court stopped an algorithmic system supporting social benefit decisions for breaching data protection laws.7
These examples raise profound questions about whether modern AI systems
are fit for purpose and how fundamental rights standards can be upheld
when using or considering using AI systems.
This report addresses these questions by providing a snapshot of the current
use of AI-related technologies in the EU – based on selected use cases – and
their implications for fundamental rights.