Assessing Grammatical Correctness in Language Learning

Permalink

http://hdl.handle.net/10138/330272

Citation

Katinskaia, A. & Yangarber, R. 2021, 'Assessing Grammatical Correctness in Language Learning', in Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, The Association for Computational Linguistics, Stroudsburg, pp. 135-146, 16th Workshop on Innovative Use of NLP for Building Educational Applications, 20/04/2021.

Title: Assessing Grammatical Correctness in Language Learning
Author: Katinskaia, Anisia; Yangarber, Roman
Contributor: University of Helsinki, Department of Digital Humanities
Publisher: The Association for Computational Linguistics
Date: 2021-04
Language: eng
Number of pages: 12
Belongs to series: Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications
ISBN: 9781954085114
URI: http://hdl.handle.net/10138/330272
Abstract: We present experiments on assessing the grammatical correctness of learners' answers in a language-learning System (references to the System, and links to the released data and code, are withheld for anonymity). In particular, we explore the problem of detecting alternative-correct answers: cases in which more than one inflected form of a lemma fits syntactically and semantically in a given context. We approach the problem with methods for grammatical error detection (GED), since we hypothesize that models for detecting grammatical mistakes can assess the correctness of potential alternative answers in a learning setting. Because of the paucity of training data, we explore the ability of pre-trained BERT to detect grammatical errors and then fine-tune it on synthetic training data. In this work, we focus on errors in inflection. Our experiments show (a) that pre-trained BERT performs worse at detecting grammatical irregularities for Russian than for English; (b) that fine-tuned BERT yields promising results in assessing the correctness of grammatical exercises; and (c) that it establishes a new benchmark for Russian. To further investigate its performance, we compare fine-tuned BERT with one of the state-of-the-art models for GED (Bell et al., 2019) on our dataset and on RULEC-GEC (Rozovskaya and Roth, 2019). We release the manually annotated learner dataset, used for testing, for general use.
Subject: 113 Computer and information sciences
Rights:


Files in this item


File: 2021.bea_1.15.pdf (474.5 KB, PDF)
