Lessons From a Pandemic: Three Provocations for AI Governance

Amba Kak


Illustration by Mansi Thakkar

Introduction

What, if anything, can the global pandemic teach us about regulating artificial intelligence (AI)? Through three provocations (AI as abstraction; AI as distraction; AI policy as infrastructure policy), this essay explores how the data-driven responses to – and the technology-related impacts of – the Covid-19 pandemic hold crucial insights for the emergent policy terrain around algorithmic accountability and the political economy of AI systems.

First, just as abstract and decontextualized data visualizations and statistics about the pandemic have enabled the proliferation of narratives claiming that the “pandemic doesn’t discriminate”, I argue that abstraction in the discourse around artificial intelligence or AI systems plays a similarly pernicious role. For those engaged in advocacy around the social harms of AI systems, a definitional exercise could be a key way to rescue AI from the abstract, and foreground social and material concerns around these systems.

Second, contact-tracing apps deployed during the pandemic are a good entry point to understanding ‘AI as distraction’. If contact-tracing apps were at the peak of the hype cycle in the early months of the pandemic, they now appear to be in the “trough of disillusionment”.1 It’s a good time, then, to ask: What was lost in the hype? Distraction is a useful way to understand the real function of many AI and algorithmic decision-making system (ADS) tools, which often disguise the underlying motivations behind their deployment and distract from deeper inequities and governance failures. Process-focused regulatory mechanisms like algorithmic impact assessments (AIA) hold promise, but they need to be structured to combat distraction and reveal the motivations driving these projects before they are implemented.

Finally, the pandemic has popularized the comparison of platforms to public utilities and brought a renewed focus to their “infrastructural” power. I argue that the “infrastructural turn”2 in AI policy is well underway too, although this is sometimes obscured by the lack of consensus around what counts as policy “about AI” versus broader data governance norms or industrial and competition regulation. AI policy should, in fact, be understood as an assemblage of these various policy trends aimed at democratizing, or at least diversifying, access to the inputs that sustain this new computing landscape: data, software, compute, and expertise.

AI as abstraction

“The number of such laborers died/injured during migration to their native places due to such lockdown, State-wise?

Government Response: No such data is available.”

The Indian government’s response to a recent question on migrant workers who died as a consequence of the nationwide Covid-19 lockdown, announced on March 24, 2020 with barely four hours’ notice, touched a raw nerve in public discourse.3 It came at a moment when statistics and data visualizations about the spread and impact of the pandemic had become normalized as a key mode of managing the pandemic, often referred to as “data-driven governance”.4 The government’s response – no data available – was a reminder that the picture the data paints is one that is palatable and indeed beneficial to those who construct it. In other words, “data on the impact of Covid” is not a neutral container: Who decides what counts as impact? Why isn’t there data on deaths due to the economic or governance impacts of Covid? Or data on the socio-economic profiles of those infected, and those who succumbed? As Rashida Richardson notes, “To exercise sovereignty is the power to authorize and enforce what information is relevant and necessary to govern.”5

As mentioned earlier, abstract and decontextualized data visualizations and statistics about the pandemic have enabled the proliferation of narratives claiming that the “pandemic doesn’t discriminate”, thereby erasing the stark disparities in how different demographics have been impacted. These data stories (like the lack of data on migrant deaths) can legitimize similarly abstract policy decisions that fail to take into account the immediate and urgent needs of particular demographic groups or localities.6 In response, counter data-narratives too have begun to emerge. Data for Black Lives and the COVID Racial Data Tracker in the US collected confirmed case data by race.7 In India, the Criminal Justice and Police Accountability Project (CPAP) studied 34,000 arrest records and 500 First Information Reports filed in Madhya Pradesh during the pandemic to understand the patterns of policing and locate the socio-economic profiles of the individuals policed.8 They produced a “countermap” that demonstrated that arbitrary and disproportionate criminalization of marginalized communities had only amplified during the pandemic.

Abstraction plays a similarly pernicious role in the discourse around AI systems. The term AI is ubiquitous in public discourse about technology but remains notoriously underspecified; it is hard to pinpoint precisely what kinds of systems are being referred to under this umbrella term.9 The moniker ‘artificial intelligence’ connotes the replacement of humans with machine thinking. It has an aura of futurism and magic10 routinely reinforced by the images of robots11 that often accompany articles about AI. This imagination of AI has only served to create and foster ‘AI hype’, which has ironically benefited a range of routine systems with vastly different functionality and levels of computational intensity. From content filters on social media and fraud detection tools in welfare systems, to facial and other forms of biometric recognition, to “smart” refrigerators and self-driving cars, an ever-expanding spectrum of systems is enveloped under the rubric of “AI”. This has led to heated “boundary wars” in the technical research and business communities over where to pinpoint a definitional threshold.12 For these groups, the stakes are high: the definitional threshold will determine which programs benefit from the ever-expanding pool of funding for AI research, and which new ventures appear more appealing to investors.13


For those engaged in advocacy around the social harms of AI systems, however, a definitional exercise could be a key way to rescue AI from the abstract and foreground social and material concerns around these systems. Just as glossy data visualizations can obscure the unequal impacts and governance failures of the pandemic, AI as an abstract buzzword can be brandished against complex social problems as if it were a neutral and external ‘solution’ rather than a sociotechnical system14 designed and developed to make value-laden choices and trade-offs.15 These abstract narratives of so-called autonomous systems also obscure the material infrastructure and distributed global workforce that undergird the AI economy.

There has been a growing shift toward using the term ‘algorithmic decision-making systems’ or ADS to describe some of the most ubiquitous and worrying algorithmic systems in use today. This change is being propelled by advocacy organizations, and there are already multiple official policy documents, and now legislation, that use this framing, primarily in the context of government use of ADS.16 Identifying these as “decision systems” shifts the emphasis from an abstract notion of mimicking or replacing human intelligence to systems that make decisions, allocate resources, create priorities, and engage in value trade-offs. A growing body of research has clarified the various choices or trade-offs that are made at every step in the lifecycle of these systems: from the data used to train them and the choice of algorithmic models (and the causal logics they deploy), to the complex ways in which those ‘supervising’ these systems interpret and apply their results.

In fact, concentrating on the human labor involved at multiple steps in the life cycle of algorithmic systems has been another key tactic in de-abstracting the idea of ‘autonomous AI’. Policy solutions like ‘human-in-the-loop’, which envision human supervision as an antidote to concerns about algorithmic opacity, have also largely failed, leading to calls for a more nuanced exploration of this relationship and a change of lens to the “algorithm-in-the-loop”.17 Other research focuses on the large, globally distributed workforce that prepares the foundational datasets required for many of the most ubiquitous text and image processing systems.18

AI as distraction

Earlier this year, as most of the world was confronted with a rapidly spreading pandemic with no end in sight, contact-tracing apps developed by governments and some of the world’s largest technology companies were a prominent (and arguably central) part of both official and popular narratives about the response to Covid-19.19 In the policy space, these technology-oriented solutions to the public health crisis prompted heated debates and rapid civil society responses highlighting concerns about privacy, transparency, and efficacy. In countries with low internet penetration or smartphone coverage, the overwhelming reliance on technological measures raised serious concerns about exclusion and, relatedly, about the efficacy of using data derived from these apps to guide policy decisions.

Several months into the pandemic, as many countries grapple with a second wave of high infection rates, there is now markedly less buzz around technological solutions to the global public health crisis.20 While contact-tracing apps are still available in most countries, they appear peripheral (if at all) in news and official accounts of the Covid-19 response. Recent download rates of such apps in Europe, where they are strictly optional, have been very low, ranging from 20 percent of the population in Germany to just 3 percent in France.21 In India, where it is effectively mandatory, the Aarogya Setu app has gone from being a key part of the Prime Minister’s Covid-19 address to one mired in controversy.22 It also remains effectively unworkable for large parts of the population that lack a smartphone and internet access, or have low digital literacy.

If contact-tracing apps were at the peak of the hype cycle in the early months of the pandemic, they now appear to be in the “trough of disillusionment”.23 It’s a good time, then, to ask: what was lost in the hype? What was the opportunity cost of the focus on these kinds of consumer technology in a time of crisis? In the Indian context, I argued along with my coauthor that “these technology-based responses to the pandemic obscure that the country still lacks the foundational infrastructure for analyzing digital health information”.24 In other words, the focus on apps distracted from the more foundational lack of digitized information about the public health system, such as the number of hospital beds, disease incidence, and death tolls. Such data would have been invaluable for government agencies making decisions about how to ration hospital resources and testing facilities, but most of it was not available to policy and planning authorities. In the US, Cathy O’Neil argued that the app hype was distracting from the glaring lack of testing and of clear official messaging around masks and other precautionary measures.25


Distraction has been a key function of many AI/ADS tools in two primary ways. First, similar to the example of app-hype during the pandemic, AI systems are typically proposed as a magic bullet to solve complex social problems. In reality, they can inhibit progress on broader reforms. The buzz around using AI “to solve poverty” is a stark example of this.26 Data-driven forms of financial technology have been promoted as a form of inclusion to bring the poorest within the net of the formal banking and digital payments ecosystem.27 However, these technology-driven programs distract from the economic reality that these individuals lack the means and assets to participate in these systems and are particularly vulnerable to exploitative and predatory lending schemes.28

Secondly, algorithmic systems can also distract from the underlying political or economic values being pursued by the institutions that introduce them. A 2013 case from Michigan serves as one instance of how algorithms can be used to disguise austerity measures or other forms of neoliberal governance.29 In October 2013, Michigan implemented a new automated unemployment insurance system to reduce operating costs and target fraud in unemployment insurance claims. When the Michigan Integrated Data Automated System (MiDAS) was implemented, the Unemployment Insurance Agency laid off 432 employees – roughly one third of its staff. After hundreds of people started complaining about being unfairly fined for fraud, the Auditor General found that MiDAS was “in error” 92 percent of the time. This error can be explained in terms of technical parameters, but that would distract from the fact that the system was embedded in broader and ongoing cutbacks to unemployment insurance and other forms of social welfare benefits under Governor Rick Snyder. The political values of the administration were reflected in the way the algorithm functioned to severely limit the number of recipients and discipline or demonize those reliant on state aid.30 A recent attempt at using facial recognition technologies in a housing complex in New York led to protests from resident groups, who argued that it was in fact “a form of tenant harassment, designed to evict rent-stabilized residents” at a time of rapid gentrification in the neighborhood.31

Process-focused regulatory mechanisms like algorithmic impact assessments (AIA) could be one way to combat distraction, reveal the motivations driving these projects, and engage in a meaningful cost-benefit analysis. Requiring entities to conduct AIAs is increasingly being proposed as a tool to ensure accountability and transparency when using algorithmic decision-making systems. While AIAs are an active field of research, they are already beginning to find mention as a requirement in regulatory instruments like the Directive on Automated Decision-Making in Canada and the proposed Algorithmic Accountability Act of 2019 in the United States. These instruments delegate many of the specifics of AIAs to future executive rulemaking, and there is an active debate on how best to identify the types of effects that count as impact, when these assessments should be conducted (ex ante and/or ex post), and who should be consulted or invited to participate in them. In addition to focusing on potential impacts, it will be critical to structure AIAs to ensure that the broader political and economic motivations of these uses are illuminated. This can only happen through consultations that not only include the perspectives of those directly impacted but also deliberately decenter the technical components of these projects in favor of the social and economic contexts in which they will be used.

AI policy as infrastructure policy

“The pandemic has many losers but it already has one clear winner: big tech”, declared an Economist headline in March 2020.32 The indispensability of large scale multinational technology companies was both revealed and entrenched at the height of the pandemic as virtual platforms for communication and market exchange became key to maintaining normalcy in the economy and social spheres like education.

As mentioned earlier, various governments collaborated and negotiated with private actors as part of the technological responses to mitigating the spread of the virus, as well as in creating systems to govern society after lockdowns were lifted. In China, despite popular accounts that the state has unrestricted access to data held by companies, reports suggest a more complex picture of state-private sector negotiation over access to data. Chinese authorities put considerable pressure on companies like Alibaba and Tencent to share their data infrastructure for the geolocation and other data required for the government’s flagship Health Code apps; it emerged that the data held by state-owned telecom companies did not compare with the GPS and other data held by platforms like Alipay and WeChat. The British Prime Minister famously invited senior representatives from four of the largest Silicon Valley technology companies in an emergency effort to tap into the resources of big tech.33 The cloud platforms of other companies, like Microsoft and Amazon, hosted a range of government data dashboards and technology tools, including those of the United States Centers for Disease Control and Prevention (CDC), which also used Microsoft’s customizable healthcare chatbots.34 Google pledged ad grants to the World Health Organization (WHO) to play a key role in sharing factual information on how to prevent the spread of the virus. Previously low profile, Google’s life sciences company Verily was suddenly in the news for carrying out large-scale drive-through testing in the US.35 These instances underscore the ways in which big tech companies could leverage network effects, data linkages, and large amounts of available capital to expand their footprint across multiple social domains, from communication to finance to healthcare, while the pandemic swept across the world.

The Apple and Google partnership on a Bluetooth-powered contact-tracing framework, launched in April 2020, also holds critical insights about the nature of power these platforms exert. The two companies were going to make a contact-tracing toolkit available as part of their operating systems, which could then be leveraged by state-sanctioned health apps. This provided a potential solution for the dozens of governments grappling with the challenge of Bluetooth-related restrictions on smartphones that limited the efficacy of these apps. However, in order to use these features, the governments’ apps would have to play by Google and Apple’s rules on how their apps would be designed. This was a significant boon for individual privacy and security because it mandated a decentralized architecture and therefore restricted data sharing with centralized government servers. Soon enough, however, tensions emerged as the governments of France and the UK, among others, bristled at the idea that Google and Apple would dictate how states designed their technological response to Covid-19.36 It was a reminder that the smartphone itself contains crucial social infrastructure controlled by a handful of companies globally. As Michael Veale notes, “It’s great for individual privacy, but the kind of infrastructural power it enables should give us sleepless nights.”37
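The decentralized design at the heart of this dispute can be made concrete with a small, purely illustrative sketch in Python. This is not the actual Apple–Google Exposure Notification API: the `Phone` class, its rotating identifiers, and the `check_exposure` step are hypothetical names, but they capture why exposure matching happens on the device, so no central server ever learns who met whom.

```python
import secrets
from dataclasses import dataclass, field


@dataclass
class Phone:
    """Hypothetical sketch of a decentralized exposure-notification client.

    Illustrative only; not the real Apple-Google Exposure Notification API.
    """
    broadcast_ids: list = field(default_factory=list)  # random IDs this phone has broadcast
    heard_ids: set = field(default_factory=set)        # IDs overheard nearby, stored locally

    def new_broadcast_id(self) -> bytes:
        # A fresh random identifier, rotated frequently so it cannot be linked to a person.
        rid = secrets.token_bytes(16)
        self.broadcast_ids.append(rid)
        return rid

    def hear(self, rid: bytes) -> None:
        # Bluetooth proximity: remember identifiers overheard nearby, on the device only.
        self.heard_ids.add(rid)

    def check_exposure(self, published_positive_ids: set) -> bool:
        # Matching happens on the phone: the server only publishes IDs uploaded by
        # diagnosed users (with consent) and never learns who was near whom.
        return bool(self.heard_ids & published_positive_ids)


# Toy usage: two phones meet; one user later tests positive and publishes their IDs.
alice, bob = Phone(), Phone()
bob.hear(alice.new_broadcast_id())    # Bob's phone overhears Alice's rotating ID
published = set(alice.broadcast_ids)  # Alice consents to publish her IDs after diagnosis
print(bob.check_exposure(published))  # True: computed locally, no central contact graph
```

The design choice the two companies enforced sits in that last step: because matching happens on the device, a health authority’s app can notify exposed users without ever assembling a centralized contact graph on government servers.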

In fact, this focus on the infrastructural power of platforms has taken on renewed prominence in policy circles over the last year, sometimes expressed in comparisons of these companies with public utilities. The infrastructural lens is an important tool for understanding the business logics that have created these forms of platform power, as well as the material infrastructure (data centers, submarine cables, smartphones, chipsets) that sustains it and inhibits competition. More broadly, the infrastructural lens is helpful for understanding the impacts of being excluded from the use of these platforms, which has been a key concern with the shift to virtual learning during the pandemic.

The “infrastructural turn”38 in AI policy is well underway too, although this is sometimes obscured by the lack of consensus around what counts as policy “about AI” versus broader data governance norms or industrial and competition regulation. AI policy should, in fact, be understood as an assemblage of these various policy trends that respond to and anticipate the ongoing shift toward a computing landscape of high-intensity computational tasks, typically involving large amounts of data. It is a landscape dominated by internet companies like Google, Facebook, Amazon, Microsoft, and Apple in the US, and Alibaba and Tencent in China, which have been able to leverage their access to data, computational power, algorithmic expertise, and capital to build cutting-edge algorithmic tools that have, in turn, served to expand the scale, reach, and monetization potential of these platforms. The US and Chinese economies have disproportionately benefited from the wealth generated by these companies, despite the fact that Global South countries like India and Brazil are some of the largest markets for Silicon Valley companies by sheer number of users.39 Dominance in the AI marketplace is also deeply intertwined with the development of cutting-edge military and cybersecurity technologies. As a result, a combination of economic and security anxieties is fueling a range of policy developments aimed more explicitly at promoting domestic enterprises and creating “national champions”. This kind of rhetoric has been most evident in policy developments at the European Union level as well as in several recent policy moves by the Indian government.40

The infrastructural turn in AI policy involves disaggregated and targeted legal and policy interventions aimed at democratizing, or at least diversifying, access to the inputs that sustain this new computing landscape: data, software, compute, and expertise. The Indian government has prominently made “access to data” for Indian companies and the state a key lever to enhance domestic competitiveness, although a broadly stated mandatory data access proposal in recent policy documents has raised more questions than answers around the legal and technical frameworks that would facilitate such a regime. Data localization, or the legal requirement to store data on servers within the geographical territory of the country, has been another site of heated policymaking, with the draft Personal Data Protection Bill of 2019 including a requirement to keep a copy of personal data in India. One of the key official justifications for data localization has been the need to bring foreign companies firmly within Indian jurisdiction. Data localization can then be understood as a foundational step toward a more aggressive data-access regulatory regime, and one that is likely to invite stiff opposition from a range of stakeholders. Access to computing resources, as well as diversifying the players providing cloud computing services, has been another key theme in recent policy documents in both the EU and India. India’s draft e-commerce policy specifically states the need to create domestic cloud computing companies and flags government subsidies for such companies as a potential route to consider. Other efforts like public research clouds and data trusts are also experiments in creating pooled computation and data resources that can reduce barriers to entry for smaller and medium-sized companies as well as research organizations. Finally, while access to data and computing has been most prominent, these policy documents also note the need to cultivate and fund research centers of excellence in order to retain talent and compete with Silicon Valley and Chinese R&D.


Rather than be dismissed as digital protectionism, these developments should be taken seriously for their explicit acknowledgement of data governance as a form of industrial policy. This is not to say that the fundamental rights rationale for enacting data protection and surveillance regulation is facetious, but rather that there are additional and intersecting geopolitical and geoeconomic drivers for all these forms of data governance policy, which need to be understood and engaged with by the AI policy community. In other words, the contours of AI policy should not be limited to the axes of accountability, discrimination, and privacy; its scope should also expand to recognize the data governance and competition policies that attempt to influence the global political economy of AI.

Notes

Amba is the Director of Global Policy & Programs at New York University’s AI Now Institute where she develops and leads the institute’s global policy engagement, programs, and partnerships, and is a fellow at the NYU School of Law. She is also on the Strategy Advisory Committee of the Mozilla Foundation. Trained as a lawyer, Amba graduated from NUJS, and then read for the BCL and an MSc at the University of Oxford on the Rhodes Scholarship.