IT for Change’s Submission to the FID Working Group on AI and its Implications for the Information & Communication Space

IT for Change made a submission* to the Working Group on Artificial Intelligence constituted by the Forum on Information and Democracy.

The Working Group invited inputs on three critical areas: (i) development and deployment of AI systems, (ii) accountability regimes, and (iii) governance of AI. In a detailed submission, we highlighted key issues and provided recommendations addressing the implications of AI for the information and communication space in relation to (a) the development and deployment of AI systems; (b) accountability and liability in AI systems; and (c) global cooperation and international governance of AI.

The key points from our submission are outlined below:

(i) Classification, Assessment, and Mitigation of Risks from AI Systems: We recommended a life-cycle approach to the assessment of risks of AI systems. That being said, a risk-based approach should not be an alternative to a rights-based one, and should treat rights as non-negotiable regardless of the risk level posed by an AI system. Further, adopting a precautionary principle, the burden of proving that the AI system in question does not violate fundamental human rights should be on the provider or deployer of the system, as the case may be.

(ii) Intellectual Property: To safeguard indigenous and traditional art and knowledge from being reduced to mere training data for AI models and appropriated by private actors that control them, AI-generated work should not be classified as copyrightable. The social and economic value accrued from work produced by AI trained on datasets that include indigenous art and traditional knowledge should be fairly distributed to the relevant community. Further, the new ethical standard of 'fair learning' that is proposed to evaluate AI models that use copyrighted material should account for traditional and indigenous knowledge as well.

(iii) Integrity, Fairness, and Public Nature of AI Datasets: To mitigate the problem of bias, discrimination, and exclusion in the datasets used to train AI models, it is necessary to institute mechanisms for audit and oversight by an independent regulatory authority to verify the accuracy and integrity of datasets. Further, to prevent the creation of data enclaves or enclosures controlled by a few large digital technology firms, we recommend a commons approach to the management of publicly available datasets. This should entail strong institutional safeguards to protect social sector datasets; purpose limitations and sunset clauses for access to public domain and open government datasets; and reciprocity guarantees.

(iv) Responsibility of Providers and Deployers for AI-generated Content: Providers and deployers should bear joint responsibility, with a nuanced approach to assessing which duties should lie with whom at what point in time, based on who is empowered, either legally or through practical control, power, or access to data and models, to make changes.

(v) Responsibility of Platforms Hosting AI-generated Content: Digital platforms such as social media, search engines, and media hosting platforms should not benefit from safe harbor protection where there is systematic or deliberate failure, or gross negligence, on their part in continuing to host AI-generated unlawful or harmful content.

(vi) Rights and Duties of Users and Subjects of AI Systems: All fundamental human rights of users and subjects of AI systems should be safeguarded against the risks posed by AI systems; in the context of the information and communication space, freedom of expression, the right to personal autonomy, and the rights to privacy and dignity need to be specially safeguarded. While the individual rights of users are important, it is also important to balance them with the public's right to access plural and diverse information and knowledge, the right to democratic participation, and the interest in safeguarding information integrity and public trust in the information ecosystem.
Additionally, particular classes of users, such as journalists, researchers, and highly influential actors, have certain duties and a high degree of responsibility when creating or distributing AI-generated content, and should fully disclose their use of AI. This duty could be enforced through sector regulators or industry bodies.

(vii) International Governance of AI Systems: To ensure that citizens have access to equal rights and opportunities across territories, countries must ensure that AI providers based in their jurisdiction whose systems impact people outside of it are subject to the same requirements as those that apply within their jurisdiction, or to higher standards where the other jurisdiction guarantees stronger protections. It is also important to develop binding global benchmarks and standards on due diligence and risk assessment, as well as on the margin of error acceptable for different AI models, with full disclosure of the same. Further, joint platforms for regulatory sandboxes could also be explored.

*This submission was authored by Merrin Muhammed Ashraf with direction and review from Anita Gurumurthy.

Read the full submission here.
