Projecting named entity recognizers without annotated or parallel corpora

Citation

Hou, J., Koppatz, M., Hoya Quecedo, J. M. & Yangarber, R. 2019, 'Projecting named entity recognizers without annotated or parallel corpora', in M. Hartmann & B. Plank (eds), 22nd Nordic Conference on Computational Linguistics (NoDaLiDa): Proceedings of the Conference, Linköping Electronic Conference Proceedings, no. 67, NEALT Proceedings Series, no. 42, Linköping University Electronic Press, Linköping, pp. 232-241, Nordic Conference on Computational Linguistics, Turku, Finland, 30/09/2019.

Title: Projecting named entity recognizers without annotated or parallel corpora
Author: Hou, Jue; Koppatz, Maximilian; Hoya Quecedo, Jose María; Yangarber, Roman
Editor: Hartmann, Mareike; Plank, Barbara
Contributor: University of Helsinki, Department of Computer Science
University of Helsinki, Department of Digital Humanities
Publisher: Linköping University Electronic Press
Date: 2019-10
Language: eng
Number of pages: 10
Belongs to series: 22nd Nordic Conference on Computational Linguistics (NoDaLiDa): Proceedings of the Conference
Belongs to series: Linköping Electronic Conference Proceedings - NEALT Proceedings Series
ISBN: 978-91-7929-995-8
URI: http://hdl.handle.net/10138/306000
Abstract: Named entity recognition (NER) is a well-researched task in the field of NLP, which typically requires large annotated corpora for training usable models. This is a problem for languages which lack large annotated corpora, such as Finnish. We propose an approach to create a named entity recognizer with no annotated or parallel documents, by leveraging strong NER models that exist for English. We automatically gather a large amount of chronologically matched data in two languages, then project named entity annotations from the English documents onto the Finnish ones, by resolving the matches with limited linguistic rules. We use this “artificially” annotated data to train a BiLSTM-CRF model. Our results show that this method can produce annotated instances with high precision, and the resulting model achieves state-of-the-art performance.
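The abstract describes an annotation-projection pipeline: English NER output is transferred onto chronologically matched Finnish documents, and the resulting silver-standard data trains a BiLSTM-CRF. The snippet below is a minimal, hypothetical sketch of the projection step only; the Entity class, the project_annotations function, and the prefix heuristic used to cope with Finnish inflection are illustrative assumptions standing in for the paper's "limited linguistic rules", not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    text: str   # entity surface form found by the English NER model
    label: str  # e.g. PER, ORG, LOC

def project_annotations(english_entities, finnish_tokens, min_prefix=4):
    """Assign BIO tags to Finnish tokens by matching them against entity
    surface forms taken from the matched English document. A crude prefix
    rule approximates handling of Finnish case endings (an assumption made
    for this sketch)."""
    tags = ["O"] * len(finnish_tokens)
    for ent in english_entities:
        ent_words = ent.text.split()
        for i in range(len(finnish_tokens) - len(ent_words) + 1):
            window = finnish_tokens[i:i + len(ent_words)]
            # Match each Finnish token against the entity word by prefix,
            # so "Marinin" still matches "Marin".
            if all(tok.lower().startswith(w.lower()[:min_prefix])
                   for tok, w in zip(window, ent_words)):
                tags[i] = f"B-{ent.label}"
                for j in range(i + 1, i + len(ent_words)):
                    tags[j] = f"I-{ent.label}"
    return tags

if __name__ == "__main__":
    # English NER found "Sanna Marin" (PER) in a matched English article;
    # project it onto an inflected Finnish sentence.
    ents = [Entity("Sanna Marin", "PER")]
    fi = "Pääministeri Sanna Marinin mukaan päätös tehdään huomenna .".split()
    print(list(zip(fi, project_annotations(ents, fi))))
```

Sentences tagged this way would then serve as training instances for a standard BiLSTM-CRF sequence labeller, which is why the abstract emphasizes projection precision over recall.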
Subject: 113 Computer and information sciences
6121 Languages
Rights:


Files in this item

File Size Format
W19_6124.pdf 184.6 kB PDF
