Open Decentralized AI-based Assessments for English government school teachers

English in India is a passport to socio-economic mobility. However, English is not taught successfully except in a few elite schools that are inaccessible to most students, who rely instead on the public (government) school system. This project has implemented AI-driven software to support English language teachers in the public school system, who are burdened with high pupil-teacher ratios. The software aids them in oral assessments, provides actionable recommendations, and enables them to monitor learners' progress over time. It thereby promotes inclusion, as students acquire English competencies that support their learning and socio-economic mobility.

The software promotes collaborative innovation and evolution because it is implemented as Free and Open Source Software (FOSS). It thus avoids the problems of comparable proprietary Ed-Tech AI offerings: vendor lock-in, price increases, opaque data practices (amounting to surveillance) and closed algorithms. With a decentralized architecture, the software can run on school desktops without an internet connection, promoting inclusivity and the ability to scale and sustain; this contrasts with Ed-Tech models deployed centrally on resource-intensive servers that require reliable internet connectivity.

This software has been deployed as a Proof-of-Concept (PoC) project in six government-run schools with the support and collaboration of the Kerala Education Department. The results of the PoC, in terms of supporting teachers to improve English teaching, have been encouraging. Implementing the technology in the government school system is an important step towards overall scalability of the solution, because more than 70% of Indian schools are in the government system. The scalability and sustainability of this inclusive, decentralized AI model for English teaching is also an important step towards the greening of Ed-Tech. By enabling children from socio-economically disadvantaged sections of society to acquire English competencies across the entire public school system, the application supports inclusive and equitable education.

Design Principles, Pedagogical Approach and Software Scope

Listening, Speaking, Reading and Writing (LSRW) are the four skills of language learning. Traditionally, English language teaching in Indian schools has focused on Reading and Writing, with Writing given greater weightage because written work is easier to assess. Listening comprehension and spoken language ability are hardly assessed because such assessments are difficult to conduct, especially when the pupil-teacher ratio is high, as is usually the case in Indian schools.

The goal of the AI software, therefore, is to support the teacher in assessing English comprehension and spoken language ability. Assessments of reading ability and written work are optional add-on capabilities.

The following principles drive the design of the AI software.

  • Pedagogy drives technology (and not the other way round)
  • Story-telling as a pedagogical tool for language learning
  • AI shall assist the teacher, not the student
  • Equity, Scalability and Sustainability

Components of AI

The software performs assessments in a three-step process.

1. Convert student speech to digital text. This is accomplished using Automatic Speech Recognition (ASR), an AI technology closely related to natural language processing (NLP) that converts human speech to digital text. We use Whisper, an MIT-licensed FOSS model, for this step; specifically, its 'medium' English-only model.
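Step 1 can be sketched in a few lines of Python (a minimal illustration, assuming the openai-whisper package is installed; the function name and audio path below are hypothetical, not the project's actual code):

```python
def transcribe_response(audio_path: str) -> str:
    """Convert a student's recorded speech to text using Whisper's
    English-only 'medium' model. An illustrative sketch only."""
    import whisper  # pip install openai-whisper; imported lazily

    # Loads the model from the local cache (downloaded once, so the
    # deployed stack can run fully offline afterwards).
    model = whisper.load_model("medium.en")
    result = model.transcribe(audio_path)
    return result["text"].strip()
```

In the deployed stack, the model weights are bundled into the Docker image so that no download is needed at the school.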

2. Process the digital text. This step processes the digital text and produces output elements that are meaningful for language proficiency analysis. Outputs include various vocabulary-related metrics and the results of syntax and grammar checks. We use spaCy, an MIT-licensed FOSS library for NLP tasks, and Vennify's T5-based FOSS grammar-correction model, released under CC BY-NC-SA 4.0, supplemented by ERRANT, a Python package for grammatical error classification released under the MIT License.
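As a simplified illustration of the kind of vocabulary metrics this step can produce (a toy sketch in plain Python; the actual pipeline uses spaCy tokenization and tagging, and the metric names here are hypothetical):

```python
def vocabulary_metrics(text: str) -> dict:
    """Compute simple vocabulary-related metrics from transcribed text.
    An illustrative sketch, not the project's actual spaCy pipeline."""
    tokens = [t.lower().strip(".,!?;:") for t in text.split()]
    tokens = [t for t in tokens if t]
    types = set(tokens)
    return {
        "token_count": len(tokens),
        "type_count": len(types),
        # Type-token ratio: a rough indicator of vocabulary diversity
        "type_token_ratio": round(len(types) / len(tokens), 2) if tokens else 0.0,
    }

print(vocabulary_metrics("The cat sat on the mat."))
# {'token_count': 6, 'type_count': 5, 'type_token_ratio': 0.83}
```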

3. Analyze and assess the text. Assessment rules are custom-developed in Python by the in-house team of technologists and language experts. The rules analyze the various metrics from the NLP step and produce outputs in the form of meaningful, actionable insights and recommendations for the teacher. They are carefully crafted to accommodate essential learner contexts and are grounded in language learning pedagogy, and they are configurable by the teacher to suit their classroom. The FOSS nature of the software allows anyone to inspect the rules transparently and to evolve them freely.
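A minimal sketch of how such configurable, rule-based assessment might look (the rule names, metric keys and thresholds below are hypothetical, not the project's actual rules):

```python
def assess(metrics: dict, config=None) -> list:
    """Turn NLP metrics into actionable recommendations for the teacher.
    Thresholds are teacher-configurable; the defaults are illustrative."""
    cfg = {"min_ttr": 0.4, "max_errors_per_100_words": 10}
    cfg.update(config or {})

    recommendations = []
    if metrics["type_token_ratio"] < cfg["min_ttr"]:
        recommendations.append("Encourage the learner to use a wider vocabulary.")
    errors_per_100 = 100 * metrics["grammar_errors"] / max(metrics["token_count"], 1)
    if errors_per_100 > cfg["max_errors_per_100_words"]:
        recommendations.append("Revisit basic sentence patterns with the learner.")
    return recommendations

# A learner with repetitive vocabulary but few grammar errors:
print(assess({"type_token_ratio": 0.3, "grammar_errors": 1, "token_count": 50}))
# ['Encourage the learner to use a wider vocabulary.']
```

Because the rules are plain, inspectable Python, a teacher (or any contributor) can adjust the thresholds for their classroom, in line with the FOSS and configurability goals described above.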

The entire stack has been proven to work end to end on a modern desktop configuration (8 GB RAM, one CPU with 4-8 cores) as a Docker image. The stack works fully offline, with an internet connection required only for the one-time installation of the Docker software. The Docker image itself can be loaded once from an external storage device such as a USB disk.
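The offline distribution workflow described above can be sketched with standard Docker commands (the image name, tag and file paths here are hypothetical; this is an illustrative command fragment, not the project's actual deployment script):

```shell
# On a machine with internet access: build the image and export it to a file
docker build -t english-assessment:poc .
docker save -o english-assessment.tar english-assessment:poc

# On the school desktop: load the image from a USB disk and run it fully offline
docker load -i /media/usb/english-assessment.tar
docker run -d -p 8080:8080 english-assessment:poc
```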

