Mapping small watercourses with deep learning – impact of training watercourse types separately

Title: Mapping small watercourses with deep learning – impact of training watercourse types separately
Author: Koski, Christian; Kettunen, Pyry; Poutanen, Justus; Oksanen, Juha
Contributor organization: National Land Survey of Finland (Maanmittauslaitos)
Publisher: Copernicus Publications
Date: 2022
Language: en
Belongs to series: Proceedings of the 25th AGILE Conference on Geographic Information Science
Belongs to series: AGILE: GIScience Series
ISSN: 2700-8150
Abstract: Deep learning methods for semantic segmentation have shown great potential for automating the mapping of geospatial features, including small watercourses such as streams and ditches. Small watercourses come in a variety of types, and in many use cases users are interested only in specific types. However, the impact on results of training neural networks with only some types of small watercourses, rather than all types, is not well known. We trained four deep learning models to semantically segment watercourses from an elevation model. One model was trained with all small watercourses labelled as a single class, while each of the other three models was trained with a single type of watercourse in the label data. The results show that training the network with a single watercourse type yields worse recall for all three watercourse types than training with all of them together. This indicates that if the goal is to extract as complete a set of features as possible, it is better to include all watercourse types in the training data. Future studies could use multi-class network output to determine how well networks can automatically classify features when trained with all small watercourses in an area.
Subject: deep learning
digital elevation model
Rights: CC BY 4.0
