Digital Intelligence: Challenges of the Times

In an interview with Anita Gurumurthy and Nandini Chami of IT for Change, Sally Burch (ALAI) explores the concept of “digital intelligence” as a broader conception of AI that takes into account the interaction between human and digital systems. This interview was originally published in Spanish by the digital magazine Internet Ciudadana, N° 10. The English version of the interview is below.

1. What is the difference between the terms “artificial intelligence” and “digital intelligence”? In what ways does “digital intelligence” help us to better understand the new technical age?

The term artificial intelligence ignores the social origins of the intelligence that a particular technology is producing. It mystifies the machine. The term digital intelligence, by contrast, is more system-oriented. It emphasises the interplay between human and digital systems in problem-solving and decision-making that is increasingly becoming commonplace in the 4IR world. The term digital intelligence also seems more historically grounded – it does not have a machine fetish; it seems to recognise the AI revolution as part of a longer evolution of computing, internet, and Big Data technologies. Such a systems logic – where intelligence is embedded within the techno-social relationships comprising the system – helps us ensure that we never lose sight of how social knowledge and human labour are the raw material for the intelligence revolution made possible by new digital tech affordances, particularly AI technologies.

2. There is an international debate underway about the implications of AI, particularly since GPT-4 was launched. What do you see as the main threats (and/or advantages) of this kind of technology, and what can we do about it, from a perspective of digital justice and community?

The miracles of AI – including the ChatGPT phenomenon – are indeed epochal. This is a historical conjuncture quite like the Gutenberg moment, when the mass production of books through the printing press changed civilizational institutions. AI can augment human creativity and change the social division of labour to empower and transform. This can serve individual emancipation or realise the Keynesian dream of a better life for everyone. However, the status quo is not geared towards this potential at all. AI today is firmly entrenched within the logic of financialisation on steroids, rooted in an unabashed disregard for human dignity and societal wellbeing.

The biggest threat posed by the current trajectories of AI development is an exacerbation of the environmental crisis. Emerging evidence suggests that AI may be more of a problem than a solution in our struggle against climate change, water shortages, and high energy consumption. Some estimates suggest that the water consumed in training OpenAI’s large language model GPT-3 was equivalent to the amount needed to fill a nuclear reactor’s cooling tower. Even start-ups and technology developers working for a more ethical and transparent AI industry are struggling to address the sustainability challenge. The start-up Hugging Face managed to train its own large language model, BLOOM, on a French supercomputer powered by nuclear energy, producing a lower emissions footprint than most other models of similar size. But once training was completed, in the pre-deployment stage, BLOOM’s carbon footprint was equivalent to that of 60 flights between London and Paris.

The generative AI technological loop has also opened up a Pandora’s box of labour exploitation. As the Sama controversy in Kenya demonstrated, language models and content moderation tools can be perfected only through the labour of countless content workers wading through the toxic waste of hateful and violent content, which leads to psychological trauma. Workers’ wellbeing and mental health are a casualty when protections for performing such high-risk jobs are woefully absent in the AI value chain.

A third concern that has gained centre stage in the months since ChatGPT took the world by storm is the long-term impact of the AI revolution on the future of work. Recent studies from the OECD and the ILO suggest that the labour force in developed countries is more immediately at risk of losing jobs to automation enabled by generative AI, but that in the long term this leap is expected to deliver higher productivity and GDP growth. The labour force in the global South will not be affected immediately – but this is not good news for its livelihood prospects and wellbeing in the long term. If their countries are bypassed by generative AI and other AI technological leaps, and they remain stuck in the low-value segments of the economy – rendered foot soldiers and menial workers of the new 4IR, like the indigo-growing farmers of Britain’s industrial revolution – then a neo-colonial economic future that limits the choices of the majority world is what stares us in the face.

Data extractivism from the majority world is what powers the AI revolution. And just as the public commons of Web 2.0 was cannibalised for corporate profit in the platformisation of the internet, thwarting shared knowledge production and possibilities for peer-to-peer commoning, we are now at a similar moment in the digital revolution. Generative AI in particular threatens to co-opt the public knowledge commons without any licensing obligations to share or give back to society. Imagine a situation where government health records – open government data – are used by pharmaceutical companies for proprietary research on epidemiological predictions that the government is then forced to buy or license back in a health crisis! The Big Pharma patent monopolies that impeded the fight against COVID-19 should demonstrate to us that this is a very real possibility.

We should also shift the focus back from generative AI to foundational AI. The majority world’s population is engaged in agriculture, animal husbandry, and allied livelihoods dependent on forests and natural resource commons – can they be helped to flourish in an AI age, especially vis-à-vis their climate change adaptation and mitigation needs? How can we enable localised models for diagnostics and prediction, both for early-warning triggers and for long-term strategies? Why are we simply pushing for more and more data sharing in directions that only seem to help Big Agri and Big Tech integrate people into the hyper-capitalist AI market on extremely adverse terms? Developing countries need to find a way to harness their data resources for their autonomous development in the intelligence revolution, much as countries like Thailand came back after the Asian financial crisis of the 1990s and rebuilt their economies.

3. Major concerns are being expressed about intellectual property theft by AI that sweeps up and reuses data, such as the work of artists, without recognition of the source. How do you propose we frame this debate?

Generative AI that is able to generate text and visual images and clone voices has indeed brought the question of intellectual property theft to the fore. Policymakers are dealing with it in different ways: China wants to control information flows to generative AI; Japan first wanted to remove the application of copyright to datasets used for generative AI training, and subsequently reversed its stance; EU and US policy remain ambivalent about when fair use covers generative AI training. The balance between creator rights and the use of the public resource of the knowledge commons for technological development is still evolving.

Now, coming to the creator’s perspective: authors find themselves living in the nightmarish plot of Roald Dahl’s story ‘The Great Automatic Grammatizator’, where the machinic imitation engine mimics their styles and voices better than they can, and creation is reduced to a production assembly line! The moral rights of the author or creative performer are in jeopardy when works are cannibalised for training generative AI. There are also questions of cultural appropriation – like Indian Warli art being auctioned at Sotheby’s without recognition of the cultural context of its production by forest tribes; concerns that the Maori community has in fact raised and tried to address in the use of its language and linguistic resources for training model development. Collective licensing – recognition of the cultural commons of literature, art, and human cultural heritage – seems important. A fiduciary mechanism could be created to prevent cannibalisation or re-use in violation of the cultural commons.

In terms of literature and art, the balance between the intellectual commons – as public heritage and the common heritage of all humanity – and the moral rights of the author also needs to be maintained. The Authors’ Guild’s collective licensing proposal seems useful in this regard. The proposal states: “The Authors’ Guild proposes to create a collective license whereby a collective management organisation (CMO) would license out rights on behalf of authors, negotiate fees with the AI companies, and then distribute the payment to authors who register with the CMO. These licenses could cover past uses of books, articles, and other works in AI systems, as well as future uses. The latter would not be licensed without a specific opt-in from the author or other rights holders.”

4. What do you consider the main issues and proposals relating to AI that should be taken up in multilateral spaces such as the United Nations, in order to further digital justice and counteract the excessive power of the large digital corporations?

There is an ongoing debate, including in India, about whether AI governance is appropriately addressed on the global stage or whether we need national-level responses. Western democracies and the majority world calibrate the balance between individual rights and social good differently – this is well acknowledged even in the human rights debate, as contextual interpretation of rights is extremely important. As a recent UNCTAD survey of G20 countries demonstrates, what counts as sensitive personal data is defined differently in different societies. On questions of human-centric innovation, market transparency and accountability, and desired trajectories of AI development, we need a multi-scalar governance model: the rights of people at the margins must be protected by bottom-line rights protections, while at the same time every national community is able, through a deliberative process backed by justiciable, human rights-based legislation of AI development, to determine how it must seize the AI revolution and integrate into the global economy. A hyper-liberalisation of data services markets may not work for every country – some countries may actually benefit by limiting their integration into the global digital economy.

To clarify: a productive interplay is needed between an international constitutionalism on AI grounded in universal principles of human rights, on the one hand, and pluralistic visions rooted in contextual public morality about justice in the digital world, on the other. The publics here must include voices from the social margins. International law across domains – climate, food, health, trade, intellectual property, etc. – needs re-articulation for this AI moment.

We need supra-liberal conceptions of rights that can challenge the structures of political-economic power in the international arena. We also need grounded perspectives on the purpose and meaning of machine intelligence, coming from dissenting worldviews and the lived experience of those who question the essentialising logic of AI determinism.

The right to challenge public decision-making on the grounds of social harm, even where direct personal harm is not involved, needs to be recognized in AI governance frameworks. Such a societal right to shape AI development and deployment cannot be achieved without reining in transnational digital corporations, without pinning them down for their gross misconduct.
