IT for Change’s Submission to the FID Working Group on AI and its Implications for the Information & Communication Space

IT for Change made a submission* to the Working Group on Artificial Intelligence constituted by the Forum on Information and Democracy.

The Working Group invited inputs on three critical areas: (i) the development and deployment of AI systems, (ii) accountability regimes, and (iii) the governance of AI. In a detailed submission, we highlighted key issues and offered recommendations on the implications of AI for the information and communication space in relation to (a) the development and deployment of AI systems; (b) accountability and liability in AI systems; and (c) global cooperation and international governance of AI.

The key points from our submission are outlined below:

(i) Classification, Assessment, and Mitigation of Risks from AI Systems: We recommended a life-cycle approach to assessing the risks of AI systems. That said, a risk-based approach should not be an alternative to a rights-based one, and must treat rights as non-negotiable regardless of the risk level posed by an AI system. Further, in keeping with the precautionary principle, the burden of proving that an AI system does not violate fundamental human rights should rest on the provider or deployer of the system, as the case may be.

(ii) Intellectual Property: To safeguard indigenous and traditional art and knowledge from being reduced to mere training data for AI models and appropriated by the private actors that control those models, AI-generated work should not be classified as copyrightable. The social and economic value accruing from work produced by AI trained on datasets that include indigenous art and traditional knowledge should be fairly distributed to the relevant community. Further, the proposed ethical standard of 'fair learning' for evaluating AI models that use copyrighted material should account for traditional and indigenous knowledge as well.

(iii) Integrity, Fairness, and Public Nature of AI Datasets: To mitigate bias, discrimination, and exclusion in the datasets used to train AI models, it is necessary to institute mechanisms for audit and oversight by an independent regulatory authority to verify the accuracy and integrity of datasets. Further, to prevent the creation of data enclaves or enclosures controlled by a few large digital technology firms, we recommend a commons approach to the management of publicly available datasets. This should entail strong institutional safeguards to protect social sector datasets; purpose limitations and sunset clauses for access to public domain and open government datasets; and reciprocity guarantees.

(iv) Responsibility of Providers and Deployers for AI-generated Content: Providers and deployers should bear joint responsibility, with a nuanced approach to assessing which duties lie with which actor at what point in time, and who is empowered to make changes, whether legally or through practical control, power, or access to data and models.

(v) Responsibility of Platforms Hosting AI-generated Content: Digital platforms such as social media, search engines, and media hosting platforms should not benefit from safe harbor protection where they continue to host unlawful or harmful AI-generated content as a result of systematic or deliberate failure or gross negligence on their part.

(vi) Rights and Duties of Users and Subjects of AI Systems: All fundamental human rights of users and subjects of AI systems should be safeguarded against the risks these systems pose; in the context of the information and communication space, the freedom of expression and the rights to personal autonomy, privacy, and dignity need special protection. While the individual rights of users are important, they must also be balanced against the public's right to access plural and diverse information and knowledge, the right to democratic participation, and the interest in safeguarding information integrity and public trust in the information ecosystem.
Additionally, particular classes of users, such as journalists, researchers, and highly influential actors, bear specific duties and a high degree of responsibility when creating or distributing AI-generated content, and should practice full disclosure. This duty could be enforced through sectoral regulators or industry bodies.

(vii) International Governance of AI Systems: To ensure that citizens have equal rights and opportunities across territories, countries must ensure that AI providers based in their jurisdiction whose systems impact people outside it are subject to the same requirements as those applying domestically, or to higher standards where the other jurisdiction guarantees stronger protections. It is also important to develop binding global benchmarks and standards on due diligence and risk assessment, as well as on the acceptable margin of error for different AI models, with full disclosure of the same. Further, joint platforms for regulatory sandboxes could also be explored.

*This submission was authored by Merrin Muhammed Ashraf with direction and review from Anita Gurumurthy.

Read the full submission here.


Update — February 2024

In February 2024, the Forum on Information and Democracy launched its Policy Framework, AI as a Public Good: Ensuring Democratic Control of AI in the Information Society, which calls for a paradigm shift to treat AI as a public good and make it work for society and democracy. Its recommendations are addressed to actors including AI developers, deployers, and governments, calling upon them to implement concrete and practicable measures to build more ethical, inclusive, and responsible AI systems. Some of the key recommendations include:

(i) Setting up an inclusive and participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming.

(ii) Providing users with an easy and user-friendly opportunity to choose alternative recommender systems, and implementing a policy where both content and users must acquire a “right of recommendability” before getting promoted or seen in feeds.

(iii) Conducting systemic risk assessments, assessing risks to the information space pre-release, and undergoing third-party conformity assessment for medium- and high-risk systems.

(iv) Establishing standards governing content authenticity and provenance, including for author authentication, and using these standards in government communication and media.

(v) Considering the provision of public funding to support the development and maintenance of public infrastructure for trustworthy AI systems in the communication space.

(vi) Implementing a fault-based liability regime for AI developers and deployers with respect to the outputs of their systems; a strict liability regime for developers and deployers of AI systems used to microtarget users based on protected characteristics; and a rebuttable presumption that platforms are liable for illegal content they host and the harm it causes, unless they can prove compliance with certain due diligence requirements.

(vii) Developing a comprehensive legal framework that clearly defines the rights of individuals, including the rights to be informed, to receive an explanation, to challenge an outcome, and to non-discrimination, and mandating complaint-handling procedures for AI systems.

(viii) Establishing a tax on AI companies and entities to address the societal impact of AI. 

In recognition of her contribution to the Policy Framework through her written submission, Merrin Muhammed Ashraf has been acknowledged as an expert contributor by the Forum.
 
