Fundamental rights in AI: What to consider

Artificial intelligence (AI) already plays a role in many decisions that affect our daily lives. It helps determine what unemployment benefits someone receives, where a burglary is likely to take place, whether someone is at risk of cancer, and who sees that catchy advertisement for low mortgage rates.

Its use keeps growing, presenting seemingly endless possibilities. But fundamental rights standards must be fully upheld whenever AI is used.

A report from the EU Fundamental Rights Agency (FRA) on “Getting the future right – Artificial Intelligence and fundamental rights” presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas – social benefits, predictive policing, health services and targeted advertising.

Drawing on this report, FRA presents a number of key considerations to help businesses and public administrations respect fundamental rights when using AI:


Is it compliant?

  • Design and use must comply with relevant laws
  • Any data processing must respect data protection laws
  • The wider impact on other rights must be considered

Is it fair?

  • Does not discriminate on grounds such as ethnicity, age, disability, sex and sexual orientation
  • Respects the rights of children, older people and people with disabilities

Can it be challenged?

  • People are aware AI is being used
  • People can complain about AI decisions
  • Decisions based on the system can be explained

Can it be checked?

  • Assess and regularly review use of AI for fundamental rights issues
  • People applying AI can describe the system, its aim and data used

Are external experts involved?

  • Consult with experts and stakeholders
  • Ensure expert oversight


Source: EU Fundamental Rights Agency (FRA)
