Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations


Citation

Talman, A., Suni, A., Celikkanat, H., Kakouros, S., Tiedemann, J. & Vainio, M. 2019. Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations. In M. Hartmann & B. Plank (eds.), 22nd Nordic Conference on Computational Linguistics (NoDaLiDa): Proceedings of the Conference. Linköping Electronic Conference Proceedings, no. 167; NEALT Proceedings Series, no. 42. Linköping: Linköping University Electronic Press, pp. 281–290. Nordic Conference on Computational Linguistics, Turku, Finland, 30 September 2019.

Title: Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations
Author: Talman, Aarne; Suni, Antti; Celikkanat, Hande; Kakouros, Sofoklis; Tiedemann, Jörg; Vainio, Martti
Other contributor: Hartmann, Mareike; Plank, Barbara
Contributor organization: Department of Digital Humanities; Language Technology; Phonetics; Phonetics and Speech Synthesis; Mind and Matter
Publisher: Linköping University Electronic Press
Date: 2019-09-30
Language: eng
Number of pages: 10
Belongs to series: 22nd Nordic Conference on Computational Linguistics (NoDaLiDa)
Belongs to series: Linköping Electronic Conference Proceedings - NEALT Proceedings Series
ISBN: 978-91-7929-995-8
ISSN: 1650-3686
URI: http://hdl.handle.net/10138/311873
Abstract: In this paper, we introduce a new natural language processing dataset and benchmark for predicting prosodic prominence from written text. To our knowledge, this is the largest publicly available dataset with prosodic labels. We describe the dataset construction and the resulting benchmark dataset in detail, and we train a number of models, ranging from feature-based classifiers to neural network systems, for the prediction of discretized prosodic prominence. We show that pre-trained contextualized word representations from BERT outperform the other models, even with less than 10% of the training data. Finally, we discuss the dataset in light of the results, and we point to future research and plans for further improving both the dataset and the methods for predicting prosodic prominence from text. The dataset and the code for the models are publicly available.
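
Note: the abstract frames the task as predicting a discretized prominence label for each word from text alone with a pre-trained BERT encoder. The following is a minimal, hypothetical sketch of that setup as token-level classification using the Hugging Face transformers library; it is not the authors' released code, and the three-class discretization and label meanings are assumptions made here for illustration (the head is untrained, so predictions are random until fine-tuned on the prominence labels).

import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Assumed discretization: 0 = non-prominent, 1 = prominent, 2 = highly prominent.
NUM_CLASSES = 3

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=NUM_CLASSES)
model.eval()

# One pre-tokenized sentence; each word should receive one prominence label.
words = ["and", "there", "was", "a", "big", "difference"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, sequence_length, NUM_CLASSES)

# Map subword predictions back to words by keeping the first subword of each word.
pred = logits.argmax(-1)[0].tolist()
word_level = {}
for position, word_id in enumerate(enc.word_ids(0)):
    if word_id is not None and word_id not in word_level:
        word_level[word_id] = pred[position]

print(list(zip(words, [word_level[i] for i in range(len(words))])))

Fine-tuning this classification head on the released word-level prominence labels corresponds to the BERT-based models compared against the feature-based baselines in the paper.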
Subject: 113 Computer and information sciences; Natural language processing; 6121 Languages
Peer reviewed: Yes
Rights: cc_by
Usage restriction: openAccess
Self-archived version: publishedVersion


Files in this item


File: W19_6129.pdf (572.0 KB, PDF)
