Assessing text readability and quality with language models

Title: Assessing text readability and quality with language models
Author: Liu, Yang
Contributor: University of Helsinki, Faculty of Science
Publisher: University of Helsinki
Date: 2020
Language: eng
Permanent link (URI): http://urn.fi/URN:NBN:fi:hulib-202003191584
http://hdl.handle.net/10138/313475
Level: Master's thesis
Degree programme: Master's Programme in Computer Science
Study track: Algorithms
Subject: none
Abstract: Automatic readability assessment is considered a challenging task in NLP due to its high degree of subjectivity. The majority of prior work on readability assessment has focused on identifying the level of education necessary for comprehension, without considering text quality, i.e., how naturally the text flows from the perspective of a native speaker. In this thesis, we therefore aim to use language models, trained on well-written prose, to measure not only text readability in terms of comprehension but also text quality. We developed two word-level metrics based on the concordance of article text with predictions made by language models to assess text readability and quality. We evaluate both metrics on a set of corpora used for readability assessment or automated essay scoring (AES) by measuring the correlation between the scores assigned by our metrics and those assigned by human raters. According to the experimental results, our metrics correlate strongly with text quality, achieving correlations of 0.4-0.6 on 7 out of 9 datasets. We demonstrate that GPT-2 surpasses other language models, including a bigram model, an LSTM, and a bidirectional LSTM, on the task of estimating text quality in a zero-shot setting, and that a GPT-2 perplexity-based measure is a reasonable indicator of text quality.
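The abstract refers to a GPT-2 perplexity-based measure as an indicator of text quality. As a rough illustration of that general idea (not the thesis's actual word-level metrics, which the record does not specify), the sketch below computes GPT-2 perplexity for a passage using the Hugging Face transformers library; the "gpt2" checkpoint and the helper name gpt2_perplexity are assumptions for this example.

```python
# Minimal sketch of a perplexity-based quality signal, assuming the public
# Hugging Face "gpt2" checkpoint; the thesis's own metrics and model setup may differ.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def gpt2_perplexity(text: str) -> float:
    """Return GPT-2 perplexity of `text`; lower values suggest more fluent prose."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the average
        # next-token cross-entropy over the sequence; its exponential is perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(gpt2_perplexity("The quick brown fox jumps over the lazy dog."))
print(gpt2_perplexity("Dog lazy the over jumps fox brown quick the."))  # scrambled text: expect higher perplexity
```

In a zero-shot setting of this kind, lower perplexity on well-formed prose would be read as a sign of higher text quality, which is consistent with the abstract's claim about GPT-2.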


Files in this item


Yang_s_MSc.pdf (1.179 MB, PDF)
