The Intelligent Language Learning Assistant (InteLLA) uses dialogue system technology to adapt its conversational strategies to the language learner's abilities and comprehension, eliciting appropriate speech samples and assessing the learner's language proficiency effectively.
Adaptive Interview Strategy
InteLLA was developed to draw out a learner's full conversational competence and to assess language proficiency effectively by tailoring the conversation to the learner's skills and comprehension. Users can begin a conversation with InteLLA simply by launching a videoconference in a web browser. Like a human interviewer, the agent sustains natural conversation through natural speech-timing control, nonverbal interaction, and adaptive dialogue strategies. By behaving like a language tutor, InteLLA encourages learners to demonstrate their conversational abilities to their full potential.
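To make the idea of an adaptive dialogue strategy concrete, here is a minimal sketch in Python. The difficulty tiers, prompts, and update rule are hypothetical illustrations, not InteLLA's actual interview logic: the agent keeps a running estimate of the learner's level and chooses the next question accordingly.

```python
import random

# Hypothetical prompt bank keyed by difficulty tier; the tiers and
# prompts are illustrative, not InteLLA's actual interview items.
PROMPTS = {
    "basic": ["What did you do last weekend?",
              "What is your favorite food?"],
    "intermediate": ["Describe a place you would like to visit and explain why.",
                     "Tell me about a book or film that impressed you."],
    "advanced": ["Should universities require students to study abroad? Argue your position.",
                 "How might remote work change cities over the next decade?"],
}

def next_prompt(proficiency_estimate: float) -> str:
    """Pick the next interview question from the tier matching the
    current proficiency estimate (0.0 = beginner, 1.0 = advanced)."""
    if proficiency_estimate < 0.4:
        tier = "basic"
    elif proficiency_estimate < 0.7:
        tier = "intermediate"
    else:
        tier = "advanced"
    return random.choice(PROMPTS[tier])

def update_estimate(estimate: float, turn_score: float, rate: float = 0.3) -> float:
    """Exponential moving average of per-turn scores, so question
    difficulty adapts as evidence about the learner accumulates."""
    return (1 - rate) * estimate + rate * turn_score
```

The moving average keeps the estimate responsive to recent turns while smoothing over one-off slips, so a single mistake does not abruptly drop the interview to easier questions.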
Many AI-led English tests focus solely on read-aloud tasks, which limits their validity as measures of language proficiency. InteLLA enables natural conversation through fluid turn-taking and nonverbal interaction, encouraging learners to demonstrate the full extent of their conversational abilities. We believe a learner's language skills can only be measured meaningfully in the context of real-life conversation. InteLLA's language proficiency reports provide a multidimensional assessment aligned with the Common European Framework of Reference for Languages (CEFR), an international standard for language proficiency evaluation.
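As a rough illustration of what a multidimensional, CEFR-aligned report might contain, the sketch below uses speaking criteria commonly associated with the CEFR (fluency, vocabulary, grammar, pronunciation, interaction). The dimensions and the median-based aggregation are assumptions for illustration; InteLLA's actual report may differ.

```python
from dataclasses import dataclass

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

@dataclass
class ProficiencyReport:
    # Illustrative CEFR-style speaking dimensions; InteLLA's actual
    # report dimensions may differ.
    fluency: str
    vocabulary: str
    grammar: str
    pronunciation: str
    interaction: str

    def overall(self) -> str:
        """Aggregate per-dimension levels into one overall CEFR level
        (here: the median of the dimension indices)."""
        idx = sorted(CEFR_LEVELS.index(v) for v in
                     (self.fluency, self.vocabulary, self.grammar,
                      self.pronunciation, self.interaction))
        return CEFR_LEVELS[idx[len(idx) // 2]]

report = ProficiencyReport("B1", "B2", "B1", "A2", "B1")
print(report.overall())  # -> "B1"
```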
InteLLA received the Bronze Award in the Learning Assessment Category of the QS-Wharton Reimagine Education Award 2021, one of the world’s largest awards programs for innovative pedagogies. Our agent is currently being considered for use in English conversation classes at Waseda University, as well as at other major universities, cram schools, and English conversation schools.
An Explainable Deep Learning Model
In general, evaluating English language proficiency relies heavily on the knowledge and experience of the examiner. To address this limitation, we propose a framework in which humans and AI collaborate and improve together. Our assessment process embeds an English conversational ability assessment framework directly into the neural network structure, and supplements it with active learning, in which experienced human raters finalize assessments. A further advantage of active learning is that it prioritizes cases where the model reports low confidence, enabling continual fine-tuning on the examples where human judgment is most valuable.
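The sketch below shows one standard form of low-confidence prioritization, uncertainty sampling over the model's predicted distribution across CEFR levels. The entropy criterion and all names here are assumptions for illustration, not InteLLA's published method.

```python
import numpy as np

def uncertainty_sampling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Select the `budget` responses whose predicted CEFR-level
    distribution has the highest entropy, i.e. where the model is
    least confident, and route them to experienced human raters.

    probs: (n_samples, n_levels) softmax outputs of the scoring model.
    Returns the indices of samples to send for human rating.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:budget]

# Example: 4 responses scored over 6 CEFR levels (A1..C2).
probs = np.array([
    [0.05, 0.05, 0.70, 0.15, 0.03, 0.02],  # confident -> keep AI score
    [0.15, 0.20, 0.20, 0.20, 0.15, 0.10],  # uncertain -> human rater
    [0.01, 0.02, 0.05, 0.80, 0.10, 0.02],  # confident -> keep AI score
    [0.10, 0.25, 0.25, 0.20, 0.10, 0.10],  # uncertain -> human rater
])
print(uncertainty_sampling(probs, budget=2))  # -> [1 3]
```

Labels collected this way feed back into training, so each round of human rating is spent on the responses where the model would otherwise be guessing.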