IT for Change responded to the Department of Telecommunications’ (DoT) consultation paper on the Artificial Intelligence Stack, released by its AI Standardization Committee in September 2020.
ITfC appreciates the DoT’s initiative in artificial intelligence (AI) governance, and its recognition that standards are important for addressing the challenges that may emerge from the deployment of AI solutions across the country. The development of a broad framework for these standards will help frame future discussions on the creation of guidelines or regulations by the Government of India as the technology matures.
However, we found several instances where the paper’s articulation of issues and recommendations lacked clarity. The key points of our submission are as follows:
1. The paper does not accurately articulate the potential challenges of AI adoption in various sectors, and does little to link them to the framework being proposed. Furthermore, the scope of enforcement of the framework is unclear, and the paper does not describe how the Committee plans to take the idea forward, or the governance structures required to do so.
2. The paper fails to effectively outline the overarching institutional framework that will ensure these concerns are addressed in the development of the five layers of the Indian AI Stack (the infrastructure, storage, computational, application, and data/information exchange layers, along with the cross-cutting security and governance layer).
3. It is unclear how the Indian AI Stack will be built, and whether it is intended as a digital public good or as an enabler for new AI solutions to be developed in the future. The Paper also does not clarify whether the Indian AI Stack is to be built and subsequently used by any and all organizations engaging with AI technologies, or whether it is a proposed model only for such organizations in India.
4. The Committee recognizes the need for standards to be defined across distinct ‘layers’ of the AI solution value chain, but the technical prescriptions made within the framework are poorly justified, making it difficult to understand the rationale behind technical design choices and to engage with them meaningfully.
5. The Paper alludes to the inclusion of ‘private’ data and industry standards, but it is unclear how the framework would interface with private sector entities and their internal development practices, and how its recommendations would be implemented in the public sector. This raises the questions of which stakeholders the framework will include within its ambit, and what enforcement mechanisms will be used to implement the framework and the accompanying standards.
6. While the Committee’s efforts and intentions in laying down standards are appreciated, the Paper makes no mention of how the standards of the envisaged Indian AI Stack will be reconciled with AI standards being developed by organizations such as the Bureau of Indian Standards (BIS) and the Institute of Electrical and Electronics Engineers (IEEE).
7. The Paper states that “In the absence of a clear data protection law in the country, EU's General Data Protection Regulation (GDPR) or any of the laws can be applied. This will serve as an interim measure until Indian laws are formalised.” This is highly problematic. It is imperative that personal data protection legislation is in place before the Indian AI Stack is operationalized; without an enacted law, privacy guarantees cannot be enforced.