Intelligent but Gendered: Lessons from Welfare Automation in the Global South

This think piece was written by Shehla Rashid as part of our ‘Re-wiring India's Digitalising Economy for Women's Rights and Well-being’ project, supported by the European Union and FES.

Abstract

This paper draws on examples of automation at welfare interfaces to develop theoretical takeaways, especially concerning the gendered experience of digitality. Examples from various countries are discussed, and three purposively selected case studies from the Global South are elaborated to illustrate specific points. The paper argues that while artificial intelligence (AI) holds the promise of improving human lives through its emphasis on ‘augmenting’ human capabilities, this does not appear to be the priority of welfare automation systems, which are deployed by private entities at the behest of governments with an overt emphasis on cost-saving. Automation can mean the deployment of machine learning (ML) algorithms and/or automated decision-making (ADM), or the profiling of welfare recipients based on the integration of various databases. AI as an approach today includes ML (both supervised and unsupervised), deep learning, neural networks, and related techniques, as distinct from an earlier generation of rule-based AI systems. Owing to the inductive nature of reasoning in ML models, there is inductive bias both in their output and in the process of framing questions or ‘tasks’ according to ‘what is possible’. Further, large and very large datasets necessitate huge computational capabilities, upskilling of personnel, cybersecurity measures, and constant upgradation of equipment. Hence, the costs of AI-based means-testing might offset much of the purported cost savings of targeted welfare delivery using AI. While digitisation can be rule-based, automated models tend to introduce arbitrariness, which is the opposite of justice. Digitisation is a requirement today, but automation is a Big Data-enabled affordance, implying that algorithms need data more than welfare needs algorithms. This explains the current push for ‘smart’ governance across the Global South, which offers huge real-life datasets and, often, a regulatory vacuum. The paper highlights the risks of diverting resources from welfare toward digitisation and automation; of private capture of public data; and of the use of public data and public infrastructure to build private capabilities without any improvement in welfare. It argues that while consent is an important issue, it is internal to the logic of datafication and is often vitiated in digital welfare initiatives.

Read the paper here.

 
