“The computer said it was OK!”: human rights (and other) implications of manipulative design (Part 2/2)


By Dr. Silvia De Conca


Credit: Silvia De Conca


This is Part 2 of a two-part series.

On November 19th, 2021, the “Human Rights in the Digital Age” working group of the NNHRR held a multidisciplinary workshop on the legal implications of ‘online manipulation’.

Manipulative design, autonomy, and human rights

By turning individuals into means to an end, manipulative design infringes on their dignity, because it undermines their intrinsic value as human beings. Manipulative design constrains individual autonomy, whether it is used for ‘paternalistic’ policymaking or by companies for profit.

Artificial Intelligence & Human Rights: Friend or Foe?



By Alberto Quintavalla and Jeroen Temperman


Image source: https://www.techslang.com/ai-and-human-rights-are-they-related/ (Creative Commons licence)


The problem

Artificial intelligence (‘AI’) applications can have a significant impact on human rights. This impact can be twofold. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine so that patients receive better care. On the other hand, AI can pose an obvious risk to human rights. Unfortunately, there are countless examples. Perhaps the most obvious is the use of algorithms that discriminate against ethnic minorities and women.

The call

It is in this context that international and national institutions are calling for further reflection on the prospective impact of AI. These calls are especially advanced at the European level, including through the active involvement of the Council of Europe. The time is ripe to start mapping the risks that AI applications pose to human rights and, subsequently, to develop an effective legal and policy framework in response to these risks.

The event

On 28 October 2021, the hybrid workshop ‘AI & Human Rights: Friend or Foe?’ took place. On this occasion, several researchers from around the world met to discuss the prospective impact of AI on human rights. The event was organized by the Erasmus School of Law, and benefitted from the sponsorship of both the Netherlands Network for Human Rights Research and the Jean Monnet Centre of Excellence on Digital Governance.

Zooming out: the common theme(s)

The workshop consisted of various presentations, each addressing specific instances of the complex interaction between AI and human rights. Nonetheless, the discussion with the audience highlighted two common challenges in dealing with the prospective impact of AI on human rights. Firstly, recourse to market mechanisms, or to regulatory instruments that aim at changing individuals’ economic incentives (and, accordingly, behaviour), is not sufficient to address the issues raised by the use of AI. Regulation laying down a comprehensive set of rules applicable to the development and deployment of all AI applications is necessary to fill the existing regulatory gaps and safeguard fundamental rights. This is in line with the EU Commission’s recent proposal setting out harmonized rules for AI, including the need to subject so-called high-risk AI systems to strict obligations prior to their market entry. Secondly, and relatedly, international measures alone are not enough to ensure the effective management of local issues and to produce regulation that is responsive to particular circumstances. Society should regularly look at the context in which emerging issues unfold: AI systems are deployed in culturally different environments, each with specific local features.

Zooming in: the various panels

The remaining part of this blog post provides a short overview of the more specific arguments and considerations presented during the workshop. The workshop consisted of five panels. The first panel revolved around questions of AI and content moderation, biometric technologies, and facial recognition. The discussion emphasized major privacy concerns as well as the chilling effects on free speech and freedom of association in this area. The second panel, among other issues, continued the content moderation discussion by arguing that the risks of deploying AI-based technologies are complemented by their human rights potential in combating hateful speech. Moreover, the dynamics between AI and human rights were assessed through the lenses of data analytics, machine learning, and regulatory sandboxes. The third panel aimed to complement the conventional discussions on AI and human rights by focusing on contextual and institutional dimensions. Specifically, it stressed the relevance of integrating transnational standards into regulatory environments at lower governance levels, since these tend to take more heed of citizens’ preferences; the expanding role of automation in administrative decision-making and the associated risk that individuals will not receive an effective remedy; the ever-increasing role of AI-driven applications in business practices and the need to protect consumers from, for example, distortion of their personal autonomy or indirect discrimination; as well as the impact that AI applications can have on workers’ human rights in the workplace. These presentations yielded a broader discussion on the need to ensure a reliable framework of digital governance that protects human beings in their vulnerability as they adopt specific roles (i.e., citizens, consumers, and workers).
The fourth panel further expanded the analysis of how AI may expose individuals and groups to other risks when they are in particular situations that have so far been overlooked by current scholarship. Specifically, it discussed the right to freedom of religion or belief, the right to be ignored in public spaces, and the use of AI during the pandemic and its impact on the implementation of human rights. All three presentations stressed that AI surveillance is an important facet that should be targeted by regulatory efforts. Lastly, the fifth panel ventured into a number of specific human rights and legal issues raised by the interplay between AI and the rights of minority groups such as refugees, LGBTQI persons, and women. The discussion mostly revolved around the serious discriminatory harm that the use of AI applications can cause. References were made, in particular, to bias in the training data used by AI systems as well as to the underrepresentation of minority groups in the technology sector.

A provisional conclusion

The discussion during the workshop showed that the startling increase in AI applications poses significant threats to several human rights. These threats are, however, not yet entirely spelled out. The efforts of policymakers and academic researchers should therefore be directed at pinpointing the specific threats that would emerge as a result of AI deployment. Only then will it be possible to develop a legal and policy framework that responds to those threats and ensures sufficient protection of fundamental rights. Admittedly, this framework will need to grant a degree of discretion to lower governance levels so that context-specific factors can be integrated. On a more positive note, the presentations from the workshop emphasized that AI applications can also be employed as a means of protecting fundamental rights.



Jeroen Temperman is Professor of International Law and Head of the Law & Markets Department at Erasmus School of Law, Erasmus University, Rotterdam, Netherlands. He is also the Editor-in-Chief of Religion & Human Rights and a member of the Organization for Security and Cooperation in Europe’s Panel of Experts on Freedom of Religion or Belief. He has authored, among other books, Religious Hatred and International Law (Cambridge: Cambridge University Press, 2016) and State–Religion Relationships and Human Rights Law (Leiden: Martinus Nijhoff, 2010) and edited Blasphemy and Freedom of Expression (Cambridge: Cambridge University Press, 2017) and The Lautsi Papers (Leiden: Martinus Nijhoff, 2012).



Alberto Quintavalla is Assistant Professor at the Department of Law & Markets at Erasmus School of Law (Erasmus University Rotterdam) and affiliated researcher at the Jean Monnet Centre of Excellence on Digital Governance. He received his doctoral degree from Erasmus Universiteit in 2020 with research on water governance, conducted at the Rotterdam Institute of Law & Economics and the Department of International and European Union Law. He has been a visiting researcher at the Hebrew University of Jerusalem and the European University Institute. His research interests lie at the intersection of environmental governance, human rights, and digital technologies. He is admitted to the Italian Bar.


Doctoral Research Forum Blog Series: Part VIII

Eclipsing Human Rights: Why the International Regulation of Military AI is not Limited to International Humanitarian Law

By Taylor Woodcock

Source: Freepik

Much has been written on the transformative potential of artificial intelligence (AI) for society. The surge in recent technological advancements seeking to leverage the benefits of AI and machine learning techniques has raised a host of questions about the adverse impacts of AI on human rights. Yet, when it comes to the debate on military applications of AI, the framework of international human rights law (IHRL) tends to receive rather cursory treatment. Greater examination of the relevance of IHRL is therefore necessary in order to more comprehensively address the legality of the development, acquisition and use of AI-enabled military technologies under international law.

AI and human rights

A number of concerns about the potential of AI technologies to interfere with human rights have been raised in recent years. Problems relating to the opacity and unpredictability of AI systems, biases in training data and the resulting output, risks of discrimination and breaches of privacy, adverse effects on human dignity, and the difficulty of identifying whom to hold responsible for these harms have all been highlighted regarding the use of AI in a number of different domains. Amongst these are the use of AI for detecting welfare fraud, as a tool in the criminal justice system or for policing, in the management of borders and migration, and in facial recognition and surveillance technologies, to name but a few. This has led to calls for the use of IHRL as a broad overarching framework for the governance of AI, ensuring respect for rights at all stages in the development and use of these technologies. Reliance on such a framework would have the benefit of robust human rights enforcement mechanisms, as well as the availability of well-developed best practices in areas such as human rights impact assessments and due diligence. Yet, whilst these issues may hold equal relevance for the use of AI in the military domain, at present this appears to be an underexplored issue.

IHL eclipsing debates on military AI

It is commonly recognised in debates on military AI that the legality of these technologies engages a number of bodies of international law, IHRL amongst them. Nevertheless, in these debates recourse is typically made to international humanitarian law (IHL) as the primary regime regulating military applications of AI, with a few exceptions. Of course, in this context IHL remains crucial, and reliance on this body of law makes sense given the intrinsic connection between military technologies and the laws governing the means and methods of warfare. Additionally, political debates on autonomous weapons take place under the auspices of the Convention on Certain Conventional Weapons, which forms part of the corpus of IHL treaties. However, the application of IHL to military AI does not eclipse the relevance of IHRL in this context. Debates about the interplay of IHL and IHRL have persisted in recent decades, yet regardless of the theoretical approach adopted, it is now generally accepted that IHRL continues to apply during armed conflict. Rather than assuming that human rights protections will be displaced by IHL, it is vital to examine more closely the implications IHRL holds, on a norm-by-norm basis, for the development and use of military AI.

Human rights and military AI

There are a number of circumstances in which the use of AI-enabled military technologies engages IHRL, including when States conduct surveillance, engage in counter-terrorism and other security operations, employ anticipatory military strategies or operate in the margins of existing armed conflicts. The complex nature of contemporary conflicts highlights the need for States to account for both IHL and IHRL when military applications of AI are deployed, depending on how the circumstances on the ground inform the applicable paradigm. In one of the most extensive assessments to date of how autonomous weapons may interfere with human rights, Brehm frequently uses sentry systems as an illustrative example of when States may be bound by human rights standards on the use of force during armed conflict. Less often addressed in debates is the question of what role IHRL plays when applied alongside IHL during active hostilities.

During the conduct of hostilities IHRL obligations will often be interpreted in light of IHL. As put by the International Court of Justice in its Advisory Opinion of 1996 on the Legality of the Threat or Use of Nuclear Weapons, the right to life continues to apply during hostilities and what constitutes an ‘arbitrary’ deprivation of life will be determined with reference to the IHL rules on targeting. It therefore follows that a violation of the IHL targeting rules will also constitute an interference with human rights. More generally, the use of AI on the battlefield may impact a number of human rights protections, including but not limited to the right to life, the right to liberty, the prohibition on torture or cruel, inhuman or degrading treatment, the right to privacy, the right to respect for property and the prohibition on discrimination. Nevertheless, derogations, the limitations of extraterritorial jurisdiction and the interpretation of IHRL norms in light of prevailing IHL standards all raise questions about the additional practical significance of IHRL in the active hostilities context. However, it is arguably the case that the key relevance of IHRL here rests on the procedural obligations, such as the duty to investigate, that will be triggered as a result of the violation of IHL and IHRL.

The duty to investigate

As a threshold issue, the application of IHRL during an armed conflict occurring outside of a State’s territory depends upon the establishment of extraterritorial jurisdiction. Under the International Covenant on Civil and Political Rights, this may be relatively straightforward where States exert control over individuals’ rights. Whilst a more restrictive approach to extraterritorial jurisdiction has been adopted by the European Court of Human Rights, for the obligation to conduct investigations specifically, recent case law suggests that special features of a case may support the finding of a jurisdictional link, even if the State’s extraterritorial jurisdiction cannot be established for the substantive violation alleged.

Whilst the duty to investigate exists under both IHL and IHRL, the latter nevertheless provides a significantly more detailed set of standards on conducting effective investigations. Though these standards likely require adaptation in the context of armed conflict, this does not obviate the need for States to conduct effective investigations capable of identifying whether or not the conduct causing an alleged violation was justified. This raises the question of whether reliance on AI technologies, renowned for a lack of transparency and predictability, will impede the ability of States to conduct effective investigations. For instance, in order to assess the reasonableness of a commander’s decision to launch a particular attack in the course of an investigation, it is necessary to understand the basis on which that decision was made. The integration of inherently opaque AI-enabled technologies into military arsenals – for instance in target recognition software – complicates this picture, as there is a lack of transparency around which factors influence the algorithm’s output. As such, States must consider whether the technical specificities and design of AI technologies acquired by militaries are sufficient to meet standards set by international law, including the duty to conduct investigations. 

The development and acquisition of military AI

It is often repeated in international discussions on military AI that respect for international law needs to be ensured throughout the entire ‘life cycle’ of a system. Whilst there is a tendency in debates to limit consideration of the pre-deployment stage to the duty to conduct weapons reviews under Article 36 of the First Protocol Additional to the Geneva Conventions, IHRL may also hold relevance for understanding the duties on States that develop and acquire these technologies. The emergence of the business and human rights framework may be instructive for understanding the obligations on States to regulate corporate conduct to prevent abuses under the pre-existing duty to protect human rights. Debates on military AI should further consider IHRL to determine what is specifically required of States in regulating the corporations that play a key role in driving forward technological developments in military AI.


Though the applicability of IHRL to military AI is often accepted, meaningful discussion of its implications has been eclipsed by reliance on IHL, which only partially accounts for the applicable international legal framework regulating AI in the military domain. With respect to the primary international law obligations on States that seek to develop, acquire and use AI-enabled military technologies, human rights also have a role to play. The duties on States acquiring and deploying military AI to investigate and to regulate corporate behaviour are only two examples that highlight the implications of IHRL in this context. This demonstrates the need for more rigorous engagement with human rights alongside IHL in order to determine how these technologies may be developed and used in accordance with international law.

Bio: These issues and more will be taken up in further research by Taylor Woodcock, a PhD Researcher in public international law at the Asser Institute. Taylor conducts research in the context of the DILEMA Project on Designing International Law and Ethics into Military Artificial Intelligence, which is funded by the Dutch Research Council (NWO) Platform for Responsible Innovation (NWO-MVI). Her work relates to applications of AI in the military domain, reflecting on the implications of these emergent technologies for the fulfilment of obligations flowing from international humanitarian law and international human rights law.
