Author: Michael O'Flaherty
Did you know that artificial intelligence already plays a role in deciding
what unemployment benefits someone gets, where a burglary is likely to
take place, whether someone is at risk of cancer, or who sees that catchy
advertisement for low mortgage rates?
We speak of artificial intelligence (AI) when machines do the kind of things
that only people used to be able to do. Today, AI is more present in our lives
than we realise – and its use keeps growing. The possibilities seem endless.
But how can we fully uphold fundamental rights standards when using AI?
This report presents concrete examples of how companies and public
administrations in the EU are using, or trying to use, AI. It discusses the
potential implications for fundamental rights and shows whether and how
those using AI are taking rights into account.
FRA interviewed just over a hundred public administration officials and private
company staff, as well as a diverse range of experts – including lawyers and
representatives of supervisory and oversight authorities and non-governmental
organisations – who work in the AI field.
Based on these interviews, the report analyses how fundamental rights are
taken into consideration when using or developing AI applications. It focuses
on four core areas – social benefits, predictive policing, health services and
targeted advertising. The AI uses differ in terms of how complex they are,
how much automation is involved, their potential impact on people, and how
widely they are being applied.
The findings underscore that a lot of work lies ahead – for everyone.
One way to foster rights protection is to ensure that people can seek remedies
when something goes awry. To do so, they need to know that AI is being
used. It also means that organisations using AI need to be able to explain
their AI systems and how they deliver decisions based on them.
Yet the systems at issue can be truly complex. Both those using AI systems,
and those responsible for regulating their use, acknowledge that they do not
always fully understand them. Hiring staff with technical expertise is key.
Awareness of potential rights implications is also lacking. Most interviewees
know that data protection can be a concern, and some refer to non-discrimination.
They are less aware that other rights – such as human dignity, access to justice
and consumer protection – can also be at risk. Not surprisingly,
when developers review the potential impact of AI systems, they tend to
focus on technical aspects.
To tackle these challenges, let’s encourage those working on human rights
protection and those working on AI to cooperate and share much-needed
knowledge – about tech and about rights.