Artificial Intelligence & Human Rights: Friend or Foe?

By Alberto Quintavalla and Jeroen Temperman

Image (Creative Commons licence). Source: https://www.techslang.com/ai-and-human-rights-are-they-related/


The problem

Artificial intelligence (‘AI’) applications can have a significant impact on human rights, and this impact can cut both ways. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine, so that patients receive better care. On the other hand, it can pose an obvious risk to the respect of human rights. Unfortunately, examples abound; perhaps the most obvious is the use of algorithms that discriminate against ethnic minorities and women.

The call

It is in this context that international and national institutions are calling for further reflection on the prospective impact of AI. These calls are especially advanced at the European level, including through the active involvement of the Council of Europe. The time is ripe to start mapping the risks that AI applications pose to human rights and, subsequently, to develop an effective legal and policy framework in response to these risks.

The event

On 28 October 2021, the hybrid workshop ‘AI & Human Rights: Friend or Foe?’ took place. On this occasion, several researchers from around the world met to discuss the prospective impact of AI on human rights. The event was organized by the Erasmus School of Law, and benefitted from the sponsorship of both the Netherlands Network for Human Rights Research and the Jean Monnet Centre of Excellence on Digital Governance.

Zooming out: the common theme(s)

The workshop consisted of various presentations, each addressing specific instances of the complex interaction between AI and human rights. Nonetheless, the discussion with the audience highlighted two common challenges in dealing with the prospective impact of AI on human rights. Firstly, recourse to market mechanisms or to regulatory instruments that aim to change individuals’ economic incentives (and, accordingly, their behaviour) is not sufficient to address the issues raised by the use of AI. Regulation laying down a comprehensive set of rules applicable to the development and deployment of all AI applications is necessary to fill the existing regulatory gaps and safeguard fundamental rights. This is in line with the European Commission’s recent proposal setting out harmonized rules for AI, including the requirement that so-called high-risk AI systems be subject to strict obligations before they can enter the market. Secondly, and relatedly, international measures alone are not enough to manage local issues effectively and to produce regulation that is responsive to particular circumstances. Society should regularly look at the context in which the emerging issues unfold: AI systems are deployed in culturally different environments, each with specific local features.

Zooming in: the various panels

The remaining part of this blog post provides a short overview of the more specific arguments and considerations presented during the workshop. The workshop consisted of five panels. The first panel revolved around questions of AI and content moderation, biometric technologies, and facial recognition. The discussion emphasized major privacy concerns as well as the chilling effects on free speech and the freedom of association in this area. The second panel, among other issues, continued the content moderation discussion by arguing that the risks of deploying AI-based technologies can be complemented by their human rights potential in terms of combating hateful speech. Moreover, the dynamics between AI and human rights were assessed through the lens of data analytics, machine learning, and regulatory sandboxes. The third panel aimed to complement the conventional discussions on AI and human rights by focusing on the contextual and institutional dimensions. Specifically, it stressed: the relevance of integrating transnational standards into regulatory environments at lower governance levels, since these tend to take more heed of citizens’ preferences; the expanding role of automation in administrative decision-making and the associated risk that individuals will not receive an effective remedy; the ever-increasing role of AI-driven applications in business practices and the need to protect consumers from, for example, distortion of their personal autonomy or indirect discrimination; and the impact that AI applications can have on workers’ human rights in the workplace. These presentations yielded a broader discussion on the need to ensure a reliable framework of digital governance that protects human beings in their vulnerability as they take on specific roles (i.e., citizens, consumers, and workers).
The fourth panel further expanded the analysis of how AI may expose individuals and groups to other risks in particular situations that have so far been overlooked by current scholarship. Specifically, it discussed the right to freedom of religion or belief, the right to be ignored in public spaces, and the use of AI during the pandemic and its impact on the implementation of human rights. All three presentations stressed that AI surveillance is an important facet that regulatory efforts should target. Lastly, the fifth panel ventured into a number of specific human rights and legal issues arising from the interplay between AI and the rights of minority groups such as refugees, LGBTQI persons, and women. The discussion mostly revolved around the serious discriminatory harm that the use of AI applications can cause. References were made, in particular, to bias in the training data employed by AI systems as well as to the underrepresentation of minority groups in the technology sector.

A provisional conclusion

The discussion during the workshop showed that the startling increase in AI applications poses significant threats to several human rights. These threats, however, have not yet been fully spelled out. Policymakers and academic researchers should therefore direct their efforts towards pinpointing the specific threats that would emerge as a result of AI deployment. Only then will it be possible to develop a legal and policy framework that responds to those threats and ensures sufficient protection of fundamental rights. Admittedly, this framework will need to grant a measure of discretion to lower governance levels so that context-specific factors can be integrated. On a more positive note, the presentations from the workshop emphasized that AI applications can also be employed as a means of protecting fundamental rights.

Bios:

 

Jeroen Temperman is Professor of International Law and Head of the Law & Markets Department at Erasmus School of Law, Erasmus University, Rotterdam, Netherlands. He is also the Editor-in-Chief of Religion & Human Rights and a member of the Organization for Security and Cooperation in Europe’s Panel of Experts on Freedom of Religion or Belief. He has authored, among other books, Religious Hatred and International Law (Cambridge: Cambridge University Press, 2016) and State–Religion Relationships and Human Rights Law (Leiden: Martinus Nijhoff, 2010) and edited Blasphemy and Freedom of Expression (Cambridge: Cambridge University Press, 2017) and The Lautsi Papers (Leiden: Martinus Nijhoff, 2012).


Alberto Quintavalla is Assistant Professor at the Department of Law & Markets at Erasmus School of Law (Erasmus University Rotterdam) and an affiliated researcher at the Jean Monnet Centre of Excellence on Digital Governance. He received his doctoral degree from Erasmus University Rotterdam in 2020 with research on water governance, conducted at the Rotterdam Institute of Law & Economics and the Department of International and European Union Law. He has been a visiting researcher at the Hebrew University of Jerusalem and the European University Institute. His research interests lie at the intersection of environmental governance, human rights, and digital technologies. He is admitted to the Italian Bar.
