IT for Change's Comments on UNESCO's Consultation Paper on AI Regulation

On behalf of IT for Change, Merrin Muhammed Ashraf submitted comments on the recently released UNESCO Consultation Paper on Regulatory Approaches for Artificial Intelligence (AI). The Consultation Paper is part of a broader effort by UNESCO, the Inter-Parliamentary Union, and the Internet Governance Forum’s Parliamentary Track to engage parliamentarians globally and enhance their capacities in evidence-based policymaking for AI. The Consultation Paper, which will be published as a policy brief, aims to inform legislators about the different regulatory approaches to AI that are being considered worldwide by legislative bodies.

The nine regulatory approaches identified by the paper are: (i) Principles-based Approach; (ii) Standards-based Approach; (iii) Agile and Experimentalist Approach; (iv) Facilitating and Enabling Approach; (v) Adapting Existing Laws Approach; (vi) Access to Information and Transparency Mandates Approach; (vii) Risk-based Approach; (viii) Rights-based Approach; and (ix) Liability Approach. Additionally, the paper offers suggestions to parliamentarians on three key questions to address before adopting AI regulations: why regulate, when to regulate, and how to regulate.

In our submission, we cautioned against any one regulatory approach being adopted exclusively to regulate AI. A holistic approach to AI governance that is rights-respecting, and that fosters a healthy democracy, economy, and ecological environment for all, cannot be realized by adopting only one of the approaches mentioned in the consultation paper. AI regulation should be grounded in the precautionary principle, which requires preventive action in the face of uncertainty and shifts the burden of proof to innovators to show that their technology causes no harm. Key considerations for parliamentarians should include addressing public problems such as the concentration of AI resources and environmental impact, safeguarding human rights, promoting desirable futures, and ensuring the interpretability and accountability of AI systems.

Key points from our submission are as follows:

1. On Regulatory Approaches to AI:

  • While standard-setting bodies are crucial to fostering quality of service and quality of experience, as well as the safety and security of digital technologies, they should be accompanied by overarching policy guidance from the State that provides mechanisms for the oversight and enforcement of these standards in the public interest.  
  • Access to information and transparency mandates are important regulatory tools for AI governance, but they cannot be a standalone regulatory approach without a concomitant regulatory framework that provides for regulatory action based on the information made transparent. Further, access to information and transparency mandates for AI should go beyond merely disclosing the use of AI in decision-making processes and user interactions; they should enable people to understand how an AI system is developed, trained, operated, and deployed in the relevant application domain, and help those affected by an AI system's outcomes understand how those decisions were made. In other words, transparency mandates should facilitate participatory and democratic governance of AI.
  • In a risk-based approach to AI regulation, a crucial factor that should guide risk mapping is the context of deployment, intended use, and sector. The use of AI in certain contexts and sectors, such as criminal justice, credit, and health, requires specialized attention, as these areas involve sensitive and high-stakes decision-making, where the misuse of AI or bias in AI could lead to serious ethical and legal consequences, with impacts on individuals and societies as a whole.   
  • A rights-based approach to AI governance should address historical and contextual injustices. This would involve a cross-cutting, cross-sectoral effort to redefine AI-related rights regimes in areas such as social communications, food sovereignty, health, environment, gender equality, welfare delivery, and work/employment, to ensure the agency and well-being of individuals and communities.   

2. On Key Considerations for Parliamentarians:

  • An assessment of when to regulate AI should be grounded in the precautionary principle, which emanates from international environmental law. The precautionary principle will require the government to take preventive action in the face of uncertainty from AI developments, shifting the burden of proof to those who want to undertake an innovation to show that it does not cause harm.  
  • The Consultation Paper highlights three key reasons for regulating AI: addressing public problems, safeguarding human rights, and achieving desirable futures, and gives a few examples of each. We submitted that the public problems to be addressed should include the concentration of AI resources, labor market disruptions, environmental impact, disinformation, and the lack of diversity in AI models. Human rights considerations should focus on promoting equality, protecting workers' and communities' rights, ensuring inclusivity, and preserving cultural heritage. Finally, desirable futures could include directing AI innovation toward vulnerable populations, developing public AI infrastructure, promoting cultural and geographical diversity in AI, and adopting sustainable practices to reduce environmental harm.    
  • On the question of how to regulate, some additional considerations for parliamentarians (beyond those mentioned in the paper) include: (i) protect collective rights in AI; (ii) call for the interpretability of AI models in high-stakes decisions; (iii) check the power of transnational digital corporations; (iv) develop and invest in open compute paradigms; (v) provide for a strong accountability framework; and (vi) address the environmental impact of large AI models.

Read our full comments here.
