Brussels (Brussels Morning) A report published by the European Union Agency for Fundamental Rights (FRA) warns that indiscriminate and irresponsible use of Artificial Intelligence (AI) could undermine fundamental human rights, German public broadcaster DW reported on Monday.
The FRA report notes that the careless use of AI could lead to discrimination and miscarriages of justice, making it vital to pay close attention to its potential negative effects on fundamental rights.
AI is as fallible as the people who make it, FRA Director Michael O’Flaherty points out, which is why the EU should clarify how its rules apply to AI. Meanwhile, “organizations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.”
AI could discriminate
FRA calls on the EU to create safeguards and set up mechanisms for holding entities accountable for their use of AI, taking into account the fact that it is widely used in many sectors.
David Reichel, an FRA Research and Data Unit staff member, cites discrepancies in the accuracy of facial recognition software as a case in point, noting these deficiencies could result in wrongful targeting of different societal groups.
We risk blindly adopting “new technologies without assessing their impact before actually using them,” the FRA report warns, hence the need to draw up regulations to prevent “potentially discriminatory effects of AI.” The agency is urging that EU funds be made available to help ensure that AI does not discriminate and that it is made more transparent.
With the EU scheduled to adopt rules for AI next year, the FRA is cautioning against using AI in targeted advertising, predictive policing and medical diagnoses, Reuters reports. The European Commission is to present regulations on so-called high-risk sectors, including energy, healthcare, transport and segments of the public sector.
The FRA insists that companies must be able to explain how their AI systems arrive at decisions, and that safeguards must include provisions for people to challenge those decisions in the interests of protecting all fundamental rights.
The report is based on more than 100 interviews with organisations that use AI in Estonia, Finland, France, Spain and the Netherlands. Rights groups warn that, while AI is widely used by law enforcement bodies, it is also deployed for mass surveillance by authoritarian regimes.