Graph embedding with data uncertainty

F. Laakom, J. Raitoharju, N. Passalis, A. Iosifidis and M. Gabbouj, "Graph Embedding With Data Uncertainty," in IEEE Access, vol. 10, pp. 24232-24239, 2022.

Files in this item

Files: Laakom et al. 2 ... with data uncertainty.pdf (1.177 MB, PDF)
Title: Graph embedding with data uncertainty
Author: Laakom, Firas; Raitoharju, Jenni; Passalis, Nikolaos; Iosifidis, Alexandros; Gabbouj, Moncef
Contributor organization: Finnish Environment Institute (Suomen ympäristökeskus)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022
Language: en
Belongs to series: IEEE Access
ISSN: 2169-3536
Abstract: Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines. Its main aim is to learn a meaningful low-dimensional embedding of the data. However, most subspace learning methods do not take into account possible measurement inaccuracies or artifacts, which can lead to data with high uncertainty; learning directly from such raw data can be misleading and can negatively impact accuracy. In this paper, we propose to model artifacts in training data using probability distributions: each data point is represented by a Gaussian distribution centered at the original data point, with a variance modeling its uncertainty. We reformulate the Graph Embedding framework to make it suitable for learning from distributions, and we study Linear Discriminant Analysis and Marginal Fisher Analysis as special cases. Furthermore, we propose two schemes for modeling data uncertainty based on pairwise distances, in an unsupervised and a supervised context.
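The abstract mentions two schemes that derive per-point uncertainty from pairwise distances. As a minimal sketch only (not the paper's exact formulation), the following assigns each training point an isotropic Gaussian variance equal to the mean squared distance to its k nearest neighbours; the function name `estimate_uncertainty` and the parameter `k` are hypothetical choices introduced here for illustration.

```python
import numpy as np

def estimate_uncertainty(X, k=5):
    """Hypothetical unsupervised scheme: give each point a variance
    equal to the mean squared Euclidean distance to its k nearest
    neighbours (dense points -> low uncertainty, isolated -> high)."""
    # Pairwise squared Euclidean distances via the expansion
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    d2 = np.maximum(d2, 0.0)          # guard against tiny negative round-off
    np.fill_diagonal(d2, np.inf)      # exclude each point's self-distance
    knn = np.sort(d2, axis=1)[:, :k]  # k smallest squared distances per row
    return knn.mean(axis=1)           # one variance per data point

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))          # toy data: 20 points in 3 dimensions
sigma2 = estimate_uncertainty(X)      # shape (20,), strictly positive
```

Under the paper's Gaussian model, such per-point variances would then enter the graph-embedding scatter terms in expectation: for independent isotropic Gaussians, E[(x_i − x_j)(x_i − x_j)^T] = (μ_i − μ_j)(μ_i − μ_j)^T + (σ_i² + σ_j²) I, so each pairwise term picks up an additive variance correction.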
Subject: uncertainty
113 Computer and information sciences
Subject (yso): uncertainty
data models
principal component analysis
Gaussian distribution
eigenvalues and eigenfunctions
training data
feature extraction
graph theory
learning (artificial intelligence)
data uncertainty
common data preprocessing step
machine learning pipelines
meaningful low-dimensional embedding
subspace learning methods
measurement inaccuracies
raw data
probability distributions
original data point
graph embedding framework
graph embedding
subspace learning
dimensionality reduction
uncertainty estimation
spectral learning
models (objects)
modelling (creation related to information)
normal distribution
machine learning
113 Computer and information sciences
Rights: CC BY 4.0
