Uncovering the structure of clinical EEG signals with self-supervised learning

Banville, H., Chehab, O., Hyvarinen, A., Engemann, D.-A. & Gramfort, A. 2021, 'Uncovering the structure of clinical EEG signals with self-supervised learning', Journal of Neural Engineering, vol. 18, no. 4, 046020. https://doi.org/10.1088/1741-2552/abca18

Title: Uncovering the structure of clinical EEG signals with self-supervised learning
Author: Banville, Hubert; Chehab, Omar; Hyvarinen, Aapo; Engemann, Denis-Alexander; Gramfort, Alexandre
Contributor organization: Department of Biochemistry and Developmental Biology
Department of Computer Science
Date: 2021
Language: eng
Number of pages: 22
Belongs to series: Journal of Neural Engineering
ISSN: 1741-2560
DOI: https://doi.org/10.1088/1741-2552/abca18
URI: http://hdl.handle.net/10138/342193
Abstract:
Objective. Supervised learning paradigms are often limited by the amount of labeled data that is available. This problem is particularly acute for clinically relevant data such as electroencephalography (EEG), where labeling can be costly in terms of specialized expertise and human processing time. Consequently, deep learning architectures designed to learn on EEG data have yielded relatively shallow models, with performance at best similar to that of traditional feature-based approaches. However, in most situations, unlabeled data is available in abundance. By extracting information from this unlabeled data, it might be possible to reach competitive performance with deep neural networks despite limited access to labels.
Approach. We investigated self-supervised learning (SSL), a promising technique for discovering structure in unlabeled data, to learn representations of EEG signals. Specifically, we explored two tasks based on temporal context prediction, as well as contrastive predictive coding, on two clinically relevant problems: EEG-based sleep staging and pathology detection. We conducted experiments on two large public datasets with thousands of recordings and performed baseline comparisons with purely supervised and hand-engineered approaches.
Main results. Linear classifiers trained on SSL-learned features consistently outperformed purely supervised deep neural networks in low-labeled-data regimes, while reaching competitive performance when all labels were available. Additionally, the embeddings learned with each method revealed clear latent structures related to physiological and clinical phenomena, such as age effects.
Significance. We demonstrate the benefit of SSL approaches on EEG data. Our results suggest that self-supervision may pave the way to a wider use of deep learning models on EEG data.
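The abstract's "temporal context prediction" idea can be illustrated with a minimal self-supervised sketch: sample pairs of signal windows, label them by whether they are close or far apart in time, and train a linear classifier on embedding differences to solve that pretext task. Everything below is an assumption for illustration only (a synthetic drifting signal in place of EEG, a toy two-feature encoder, and hypothetical parameters such as `tau_pos` and `tau_neg`); it is not the authors' implementation or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(signal, win_len):
    """Split a 1-D signal into non-overlapping windows."""
    n = len(signal) // win_len
    return signal[: n * win_len].reshape(n, win_len)

def sample_pairs(n_windows, n_pairs, tau_pos, tau_neg):
    """Sample (anchor, other, label) triplets for the pretext task.

    label 1: windows at most tau_pos apart ("close in time")
    label 0: windows at least tau_neg apart ("far in time")
    """
    anchors, others, labels = [], [], []
    while len(labels) < n_pairs:
        i = rng.integers(n_windows)
        if rng.random() < 0.5:  # positive pair
            j = i + rng.integers(-tau_pos, tau_pos + 1)
            y = 1
        else:  # negative pair
            j = i + rng.choice([-1, 1]) * rng.integers(tau_neg, 2 * tau_neg)
            y = 0
        if 0 <= j < n_windows and j != i:
            anchors.append(i); others.append(j); labels.append(y)
    return np.array(anchors), np.array(others), np.array(labels)

def embed(windows):
    """Toy 'encoder': log-variance and mean absolute amplitude per window."""
    return np.stack([np.log(windows.var(axis=1) + 1e-8),
                     np.abs(windows).mean(axis=1)], axis=1)

# Synthetic stand-in for an EEG channel whose amplitude drifts slowly,
# so nearby windows look alike and distant windows differ.
t = np.linspace(0, 100, 50_000)
signal = np.sin(2 * np.pi * 10 * t) * (1 + t / t[-1])
signal += 0.1 * rng.standard_normal(t.size)

windows = make_windows(signal, win_len=250)           # 200 windows
a, b, y = sample_pairs(len(windows), n_pairs=2000, tau_pos=2, tau_neg=50)

z = embed(windows)
x = np.abs(z[a] - z[b])                               # contrast features
x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)     # standardize

# Linear classifier (logistic regression) trained on the pretext labels.
w = np.zeros(x.shape[1]); bias = 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(x @ w + bias)))
    g = p - y
    w -= 0.5 * x.T @ g / len(y)
    bias -= 0.5 * g.mean()

acc = ((1 / (1 + np.exp(-(x @ w + bias))) > 0.5) == y).mean()
print(f"pretext accuracy: {acc:.2f}")
```

Because the synthetic signal's amplitude drifts over time, close windows have nearly identical statistics while distant ones do not, so the pretext task is learnable; in the paper's setting, an encoder trained this way would yield features reusable for downstream tasks such as sleep staging.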
Subject: self-supervised learning
representation learning
machine learning
sleep staging
pathology detection
clinical neuroscience
3112 Neurosciences
Peer reviewed: Yes
Rights: cc_by_nc_nd
Usage restriction: openAccess
Self-archived version: acceptedVersion

Files in this item


Files: EEG.pdf (5.530 MB, PDF)
