Imagining the AI We Want: Towards a New AI Constitutionalism

Jun-E Tan

Artificial intelligence (AI) technologies promise vast benefits to society but also bring unprecedented risks when abused or misused. As such, a movement towards AI constitutionalism has begun, as stakeholders come together to articulate the values and principles that should inform the development, deployment, and use of AI. This essay outlines the current state of AI constitutionalism. It argues that existing discourses and initiatives centre on non-legally binding AI ethics that are overly narrow and technical in their substance, and overlook systemic and structural concerns. Most AI guidelines and value statements come from small and privileged groups of AI experts in the Global North and reflect their interests and priorities, with little or no input from those affected by these technologies. This essay suggests three principles for an AI constitutionalism rooted in societal and local contexts: viewing AI as a means instead of an end, with an emphasis on clarifying the objectives and analyzing the feasibility of the technology in providing solutions; emphasizing relationality in AI ethics, moving away from an individualistic and rationalistic paradigm; and envisioning an AI governance that goes beyond self-regulation by the industry, and is instead supported by checks and balances, institutional frameworks, and regulatory environments arrived at through participatory processes.

Illustration by Jahnavi Koganti

1. Introduction

The ability of machines to learn from the past and make predictions about the future promises vast improvements to our individual and collective lives. With artificial intelligence (AI), we are able to rapidly detect patterns and anomalies in data, discover new insights, and inform decision-making. Better public health and transportation, more efficient and accessible services, and climate change mitigation and adaptation are just some of the potential benefits of AI.

Governments and companies, eager to deploy and employ these technologies, often cite these potential benefits to frame the adoption of AI as a matter of inevitable progress. The possibilities of ‘AI for good’ are endless, we are told, as long as we provide the machines with enough data to churn. The technology is neutral, we are assured, and AI experts are working on perfecting these systems, complete with ethical considerations, so that negative impacts are minimized. Yet, as more AI-enabled systems are rolled out and adopted, accounts of unintended consequences and intentional abuse continue to accumulate at an alarming pace. Cautionary tales of the unintended consequences of AI abound – machines exacerbating racial biases,1 exam grading algorithms turning out to be hugely erroneous,2 and automated social protection schemes failing society’s most vulnerable, leading to death by starvation in extreme cases.3 Then there are egregious cases of intentional abuse – state and non-state actors leveraging AI capabilities to surveil entire populations,4 manipulate voter behavior,5 or produce highly realistic manipulated audio-visual content (also known as deepfakes) that can undermine the foundations of trust in society.6

Amidst these promises and anxieties, a movement towards AI constitutionalism has begun in recent years, as stakeholders from the market, state, and civil society put forth visions of what ethical AI should constitute and how these technologies should be governed. By AI constitutionalism, we mean the process of norm-making or the articulation of key values and principles which guide the design, construction, deployment, and usage of AI technologies. The concept is inspired by the more established body of work on digital constitutionalism, defined by Dennis Redeker and his colleagues as “a constellation of initiatives [including declarations, magna cartas, charters, bills of rights, etc.] that have sought to articulate a set of political rights, governance norms, and limitations on the exercise of power on the Internet”,7,8,9 which are not only important for political and symbolic reasons, but also for shaping laws and regulations in the digital era.

Indeed, the process of shaping norms is exceedingly important as it entails a reckoning with our collective values. Norms are a sort of moral compass that guide us towards an imagined future. Especially in the context of AI, a nascent technology whose direction and implications are not yet fully known, some big picture questions need to be discussed. What are our goals and principles as a society? Where do we draw the line between possible trade-offs and values that are sacred and must be protected at all costs? What behaviors do we reward or sanction? And depending on the answers to these questions, what types of AI should we build (or not build) to aid our progress as a civilization?

In this essay, I outline the current state of AI constitutionalism, and argue that existing discourses and initiatives in this space will not lead us towards a future that upholds human dignity and sustainable development. Based on these arguments, I imagine a new AI constitutionalism that imbues technological discourses with socio-political relevance, thus opening up discussions rooted in specific applications and contexts. Finally, I put forth three principles that should guide future initiatives in AI constitutionalism:

1) AI must be viewed as a ‘means’ instead of an ‘end’,
2) AI ethics must emphasize relationality and context, and
3) AI governance must go beyond self-regulation by the industry.

2. AI ethics: Why it is not enough

In the last five years, the area of AI ethics has become increasingly active, with stakeholders at various levels and in different geographic locations issuing policy statements or guidelines on what ethical AI is or should be. Together, these provide a fertile ground for analyzing the underlying priorities and assumptions that mark the current state of AI constitutionalism and shape the character of norm-making in the field.

Anna Jobin and her colleagues at ETH Zurich gathered at least 84 institutional reports or guidance documents on ethical AI in their 2019 analysis of the global landscape of AI ethics guidelines and principles.10 Most of these documents come from private companies (22.6 percent), government agencies (21.4 percent), academic and research institutions (10.7 percent), and intergovernmental or supranational organizations (9.5 percent). Prominent examples at the government level include the OECD AI Principles and the European Commission’s Ethics Guidelines for Trustworthy AI. Corporations, civil society, and other multistakeholder groups have also come up with their own non-legally binding positions and manifestos. Examples include Google’s AI principles,11 the Universal Guidelines for Artificial Intelligence developed by The Public Voice,12 the Tenets of Partnership on AI to Benefit People and Society,13 and the Beijing AI Principles.

There is some convergence in the values or principles that emerge as paramount in these ethical AI guidelines and statements. In Jobin and her colleagues’ analysis, the most commonly articulated principles are those of transparency, justice and fairness, non-maleficence (causing no harm), responsibility, and privacy. Six others appear less frequently, and in the following order: beneficence (promoting good), freedom and autonomy, trust, dignity, sustainability, and solidarity. However, despite the convergence in the values that are prioritized by existing AI policy documents, the picture becomes increasingly complex when we look beyond the terms themselves, and focus on their interpretation and implementation. At this point, some divergence or lack of consensus begins to emerge.

Most articulations on AI ethics tend to focus on narrow technical problems and fixes. An evaluation of 22 ethical AI guidelines by Thilo Hagendorff from the University of Tübingen14 finds that the most popular values (such as accountability, explainability, and privacy) tend to be the easiest to operationalize mathematically, while more systemic problems are overlooked. These systemic problems, Hagendorff suggests, include the weakening of social cohesion (through filter bubbles and echo chambers, for instance), the political abuse of AI systems, the environmental impacts of the technology, and trolley problems (in which there is no clearly ethical choice; for instance, an autonomous vehicle having to choose between killing a pedestrian and killing its driver). Moreover, very little attention is paid to the ethical dilemmas plaguing the industry itself – the lack of diversity within the AI community and the invisible and precarious labor that goes into enabling AI technologies, such as dataset labeling and content moderation.

Discussions on AI ethics are also based on certain assumptions and framings – “moral backgrounds” according to Daniel Greene and his colleagues15 – which set the scope and direction of AI constitutionalism. Greene and his colleagues’ critical review of seven high-profile value statements in ethical AI finds that the discourse is in line with conventional business ethics but sidesteps the imperatives of social justice and considerations of human flourishing. Technology is framed as an inevitable step towards progress; its application is taken for granted regardless of the context. In other words, being ethical only entails “building better”; “not building” is not an option. Furthermore, scrutiny of the ethicality of AI technologies is restricted to the design level, and does not extend to the business level. A design-level approach to ethical AI, for instance, looks only at reducing the racial bias of facial recognition software, without questioning the ethics of deploying this technology for mass surveillance in the first place. Another implicit assumption is that ethical design is the exclusive domain of experts within the AI community (for instance, tech companies, academics, lawyers). Product users and buyers are just stakeholders who “have AI happen to them”. Seemingly ironclad values and principles start to show cracks when these assumptions are questioned. What can we expect from ethical AI that is techno-deterministic and does not take a critical view of what the technology is used for? For whom and in whose interest are AI technologies being built and deployed?

More challenges emerge as we move away from the substantive content of AI ethics discourses and start putting principles into practice. First, AI ethics is, at best, seen as good intentions with no guarantee of good actions, and at worst, criticized as a deliberate attempt to ward off hard regulation. Ethics whitewashing is a real concern as corporations eschew regulations and put forth self-formulated ethical guidelines as sufficient for AI governance. In practice, ethical considerations come in only after the top priorities of profit margins, client requirements, and project constraints have been resolved.16 It is difficult to rely on the goodwill of corporations which have arguably co-opted the academic field of AI ethics in an attempt to delay regulations.17 The existence of ethical guidelines does not guarantee that companies will be ethical. There are well-documented instances of companies resorting to ethics dumping and shirking wherever convenient, most obviously in the precarious working conditions of content moderation workers in the Global South.18

Second, mainstream discussions on AI ethics assume that technologies exist in a vacuum, devoid of context. These assumptions are often made by a very small and privileged group of people in the Global North,19 who do not see the need to engage people outside of their own community even though the tools they build significantly impact the world at large. When AI technologies are designed and deployed without attention to context, systemic harms are amplified, and entire populations, especially in the Global South, can be rendered more vulnerable.20 Above all, discussions on ethics remain just that – discussions – neither legally binding nor enforceable. AI ethics, in its current state, does not lead to ethical AI. If we are serious about making technology work for the people and the planet, our efforts towards AI constitutionalism need to look beyond dominant discourses. This is what I attempt to do in the following section.

3. Towards a new AI constitutionalism

Already, there is mounting resistance against corporations and their maneuvering of ethical self-regulation. Carly Kind, Director of the Ada Lovelace Institute, observes a “third wave” of AI ethics, following a first wave comprising principles and philosophical debates, and a second wave focusing on narrow, technical fixes. Kind argues that the third wave of AI ethics is less conceptual, more focused on applications, and takes into account structural issues. Research institutes, activists, and advocates have mobilized to effect changes in AI design and use, with some successes such as legislation and moratoria on the use of algorithms for applications like test grading and facial recognition.21 An emerging body of work on “radical AI” aims to expose the power imbalances exacerbated by AI and offer solutions.22

The Covid-19 pandemic has laid bare these structural imbalances and triggered a renewed rush towards digitalization, with its associated concerns. Against this backdrop, we have also seen a shift towards a more critical view of AI and its implementation. It is precisely at this point that a new AI constitutionalism, or at least a significantly upgraded one, is needed and possible. We must seize this moment to take control of the narrative and determine what is important for our collective future, and how AI can help us achieve this vision. This is particularly urgent for communities that lie outside of the AI power centres, whose views remain underrepresented in global norm-making and standards-setting, and whose contexts may not be understood by those building the technologies and making the ethical decisions that underpin them. Some groups have already rallied together to collect and compile principles important to their communities, such as the Digital Justice Manifesto put together by the Just Net Coalition23 (a global network of civil society actors based mostly in the Global South), and the CARE Principles for Indigenous Data Governance by the Global Indigenous Data Alliance.24

Societal constitutionalism is a process of constitutional rule-making that starts from social groups like civil society, representatives from the business community, or multistakeholder coalitions. As noted by Redeker et al.,25 the process can be seen in three phases: “an initial phase of coming to an agreement about a set of norms by a specific group; a second phase in which these norms become law; and a third phase in which reflection about this builds up to achieving constitutional character”. Thus far, most norm-making in AI has been top-down, coming from high-level policymakers, transnational Big Tech firms, or small groups of elites at national levels, and reflecting the priorities of these groups. This is insufficient not only from a democratic point of view, but also because the vast range of AI applications across different fields, from agriculture to zoology, necessitates the input of field experts who understand local contexts and implications.

A reimagination of AI constitutionalism should move the discourse from a purely technological approach to take societal considerations into account. It needs to move from the realm of the abstract to focus on application. Governance norms, political rights, and limitations of power within the field of AI should be democratically deliberated at different levels of a nested societal system and within different political jurisdictions (e.g. city, state, national, regional, international levels). This would allow all stakeholders and interest groups (e.g. professional associations, business associations, civil society networks, grassroots communities) to contribute meaningfully to the governance of AI from their own vantage points. This collective bottom-up approach, I propose, should be underpinned by the following considerations:

3.1. AI as a means to an end (and not an end in itself)

One prevalent assumption about AI is that it is an inevitable step towards progress, and that AI technologies, if built well, can solve any problem. The tech industry’s optimism in this regard is echoed by the state. As a result, AI becomes an end in itself instead of a means to an end. This technological determinism is reflected in the willingness of governments to keep the AI regulatory environment minimalist so as not to stifle innovation. In the rush to remain competitive in a high-tech, machine-enabled future, governments have outlined national AI strategies to promote research, talent, and investments in the sector, while remaining noncommittal about safeguarding against potential human rights violations.26 The possibilities of ‘AI for good’ begin to fall flat when seen from this perspective. If the objective of AI is indeed to bring social and economic benefits to the people, governments need to prioritize human rights over the needs of the industry and address the thorny issues that result from these technologies, including mass job displacement and the rapid concentration of wealth in the hands of a few.

For AI to be the means to an end, we need to first clarify our objectives and then critically assess whether using AI is the best way to achieve them. In this, we can follow the lead of vision statements such as the UN Sustainable Development Goals and the Universal Declaration of Human Rights, which have clearly specified objectives arrived at through extensive international consultations, negotiations, and agreements. The UN SDGs also come with a specific timeline (by 2030) as well as established indicators to help evaluate whether the objectives have been met. Additionally, we can draw on relevant national27 and sectoral policies,28 or even organizational vision and mission statements, which have often gone through contestation and consensus-building among multiple stakeholders. The use of AI needs to be grounded in such clearly stated visions and blueprints for a better society.

Furthermore, it needs to be acknowledged that AI is only one tool in a full range of options, and not all problems can or should be solved by such technologies. In a presentation at Princeton University titled ‘How to recognize AI snake oil’, Arvind Narayanan argued that while AI has become highly accurate in applications of perception (e.g. content identification, speech to text, facial recognition), and is improving in applications that automate judgment (e.g. spam detection, detection of copyrighted material, content recommendation), applications that promise to predict social outcomes (e.g. predicting criminal recidivism, job performance, terrorist risk) remain “fundamentally dubious”. Justifying the use of the term ‘AI snake oil’, Narayanan pointed to existing studies showing that AI backed by thousands of data points is not substantially better at predicting social outcomes than manual scoring using only a few data points. Discussions on AI constitutionalism should, therefore, be grounded in clearly stated objectives and feasibility studies, and should allow room for rejecting AI usage altogether, especially when there are potential risks for stakeholder communities.

3.2. AI ethics to emphasize relationality and context

According to Sabelo Mhlambi from Harvard University, Western ethical traditions tend to emphasize “rationality” as a prized quality of personhood – along the lines of “I think, therefore I am” – where humanness is defined by the individual’s ability to arrive at the truth through logical deduction.29 Not only is this an inherently individualistic worldview, it has also been used to justify colonial and racial subjugation based on the belief that certain groups are not rational enough, and therefore, do not deserve to be treated as humans. An AI framework that prioritizes rationality and individualism ignores the interconnectedness of our globalized and digitalized world, and serves to exacerbate historical injustices and perpetuate new forms of digital exploitation. The failure to recognize the relationality of people, objects, and events has left us hurtling towards countless crises and avoidable tragedies (such as man-made climate change exacerbated by nations’ inability to coordinate a multilateral response).

Scholars of technology and ethics have offered diverse philosophies anchored in relationality – such as Ubuntu,30 Confucianism,31 and indigenous epistemologies (e.g. Hawai’i, Cree, and Lakota)32 – that view ethical behavior in the context of social relationships and relationships with non-human entities such as the environment, or even sentient AI in the future. The moral character of AI must be judged based on its impacts on social relationships and the overall context and environment it interacts with. For example, evaluating AI-powered automated decision-making systems through the ethical lens of Ubuntu, Mhlambi points to a range of ethical risks. These include the exclusion of marginalized communities because of biases and non-participatory decision-making, societal fragmentation as a result of the attention economy and its associated features, and inequalities resulting from the rapid concentration of data and resources in the hands of a powerful few.33 In contrast, current ethical AI frameworks say very little about extractive business models of surveillance capitalism or the heavy carbon footprint of training AI.

The development and deployment of AI technologies take place in a complex, networked world. Discussions on AI constitutionalism thus need a paradigmatic shift in ethics from the individual to the relational, and must consider issues as diverse as collective privacy and consent, power and decolonization, invisible labor and environmental externalities in AI supply chains, as well as unintended consequences (for instance, when systems interact in unpredictable ways with their particular environments).

3.3. AI governance to go beyond self-regulation by the industry

The tech ethos of “move fast and break things” becomes much less persuasive once we make the connection that an algorithmic tweak at Facebook can lead to (or prevent) a genocide in Myanmar.34 Some friction in the system, by way of checks and balances, is necessary to make sure that any technology released is safe for society, and to guard against AI exceptionalism. Beyond safety, AI presents significant systems-level opportunities and threats. An AI Security Map drawn by Jessica Newman at the University of California, Berkeley proposes 20 such areas across the digital/physical (e.g. malicious use of AI and automated cyberattacks, secure convergence/integration of AI with other technologies), political (e.g. disinformation and manipulation, geopolitical strategy, and international collaboration), economic (e.g. reduced inequalities, promotion of AI research and development), and social (e.g. privacy and data rights, sustainability and ecology) domains.35 It is difficult to imagine that self-regulation in the AI industry would carry us through all of these different areas, across different sectoral and geographical contexts.

The World Economic Forum defines governance as “making decisions and exercising authority in order to guide the behavior of individuals and organizations”.36 As AI constitutionalism is ultimately about governance of technology, discussions should not stop at AI ethics or be left to experts. Instead, we should explore other mechanisms such as institutional frameworks and regulatory environments to bridge principles and practice. Under the broad ambit of AI constitutionalism, diverse governance issues can be debated at various policy levels – for example, cross-border data flows and data sovereignty can be discussed at the international level; hard limits against malicious use of AI and data governance frameworks can be discussed at a national level; data privacy, especially in sensitive sectors such as finance and health, can be taken up at a sectoral level.

Broad participation in AI governance can have positive spillover effects such as trust-building, the pooling of multidisciplinary knowledge, and capacity-building across different domains. For this, a new AI constitutionalism needs to push for stakeholder participation at various levels. Underrepresented nations need to be invited and supported in norm-making initiatives at the international level; civil society must be consulted and engaged at national and city levels. These discussions should not focus only on the technical, and the onus should be on the AI community to make the information accessible to all. As a recent report by Upturn and Omidyar Network puts it, non-technical properties of an automated system, such as clarity about its existence, purpose, constitution,37 and impact, can be “just as important, and often more important” than its technical artifacts (its policies, inputs and outputs, training data, and source code).38

4. End reflections

AI constitutionalism needs to be squarely rooted in societal contexts and must make the connections between technology and the traditional fault lines of power and privilege. The resulting discourses will be complex and contested, reflecting the messy realities that the technology is embedded in, rather than the neat lists of values and principles that see the technology in a vacuum. The values of AI ethics (such as fairness, accountability, and transparency) will take on different, more consequential meanings when applied at a societal level, challenging actors in the Global North to explore ways to decolonize AI and distribute its benefits based on solidarity, not paternalism.

By lifting AI constitutionalism from its narrow, technological focus to the societal and application level, we will find opportunities for greater participation and a more diverse range of perspectives to shape governance norms, power structures, and political rights in the field of AI. This will make space for actors in the Global South to deliberate on our own AI-enabled future, drawing from our cultural philosophies, and governing AI through our own laws and institutional frameworks. It is critical that we claim this space to govern technology, as the unprecedented advances promised by AI can only be realized if the technology is carefully governed. Forfeiting this space would leave us with a vastly different outcome: being controlled by technology and those who wield it.

Notes

Jun-E Tan is an independent researcher based in Malaysia, currently working on the topic of AI governance in Southeast Asia. Her research interests are broadly anchored in the areas of sustainable development, human rights, and digital communication. More information on her research and projects can be found on her website, https://jun-etan.com.