Browsing by Subject "Feature extraction"


Now showing items 1-4 of 4
  • O'Toole, John M.; Boylan, Geraldine B.; Lloyd, Rhodri O.; Goulding, Robert M.; Vanhatalo, Sampsa; Stevenson, Nathan J. (2017)
    Aim: To develop a method that segments preterm EEG into bursts and inter-bursts by extracting and combining multiple EEG features. Methods: Two EEG experts annotated bursts in individual EEG channels for 36 preterm infants with gestational age < 30 weeks. The feature set included spectral, amplitude, and frequency-weighted energy features. Using a consensus annotation, feature selection removed redundant features, and a support vector machine combined the remaining features. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa (K) evaluated performance within a cross-validation procedure. Results: The proposed channel-independent method improves AUC by 4-5% over existing methods (p < 0.001, n = 36), with a median (95% confidence interval) AUC of 0.989 (0.973-0.997) and sensitivity/specificity of 95.8%/94.4%. Agreement rates between the detector and the experts' annotations, K = 0.72 (0.36-0.83) and K = 0.65 (0.32-0.81), are comparable to the inter-rater agreement, K = 0.60 (0.21-0.74). Conclusions: Automating the visual identification of bursts in preterm EEG is achievable with a high level of accuracy. Multiple features, combined using a data-driven approach, improve on existing single-feature methods.
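As a rough illustration of the feature-combination idea this abstract describes, here is a minimal Python sketch (not the authors' code): the feature definitions, sampling rate, and data are illustrative assumptions standing in for the paper's spectral, amplitude, and frequency-weighted energy features.

```python
# Minimal sketch (assumptions throughout): combine per-segment EEG features
# with an SVM for burst/inter-burst classification, evaluated with CV AUC.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate (Hz)

def segment_features(x):
    """Spectral power, amplitude, and a frequency-weighted energy proxy."""
    f, pxx = welch(x, fs=FS, nperseg=min(len(x), FS))
    band = pxx[(f >= 0.5) & (f <= 10)].sum()   # low-frequency spectral power
    amp = np.percentile(np.abs(x), 95)         # amplitude-envelope proxy
    fwe = np.mean(np.gradient(x) ** 2)         # frequency-weighted energy proxy
    return [band, amp, fwe]

# Toy data standing in for expert-annotated one-second EEG segments.
rng = np.random.default_rng(0)
segments = [rng.normal(scale=s, size=FS) for s in rng.uniform(1, 50, 200)]
labels = (np.array([np.abs(s).mean() for s in segments]) > 10).astype(int)  # burst = 1

X = np.array([segment_features(s) for s in segments])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV AUC:", cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean())
```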
  • Zhu, Yongjie; Liu, Jia; Cong, Fengyu (2023)
    The human brain can be described as a complex network of functional connections between distinct regions, referred to as the brain functional network. Recent studies show that the functional network is a dynamic process and that its community structure evolves with time during continuous task performance. Developing dynamic community detection techniques for such time-varying functional networks is therefore important for understanding the human brain. Here, we propose a temporal clustering framework based on a set of network generative models which, perhaps surprisingly, can be linked to Block Component Analysis, to detect and track the latent community structure in dynamic functional networks. Specifically, the temporal dynamic networks are represented within a unified three-way tensor framework that simultaneously captures multiple types of relationships between a set of entities. The multi-linear rank-(Lr,Lr,1) block term decomposition (BTD) is adopted to fit the network generative model and directly recover the underlying community structures, together with their specific temporal evolution, from the temporal networks. We apply the proposed method to study the reorganization of dynamic brain networks from electroencephalography (EEG) data recorded during free music listening. We derive several network structures (Lr communities in each component) with specific temporal patterns (described by BTD components) significantly modulated by musical features, involving subnetworks of the frontoparietal, default mode, and sensory-motor networks. The results show that the brain functional network structures are dynamically reorganized and that the derived community structures are temporally modulated by the music features. The proposed generative modeling approach can be an effective tool for describing community structures in brain networks beyond what static methods capture, and for detecting the dynamic reconfiguration of modular connectivity elicited by continuous naturalistic tasks.
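To make the rank-(Lr,Lr,1) model concrete, the sketch below constructs the generative form the abstract describes: a node x node x time tensor built as a sum of R terms, each a rank-Lr spatial community pattern (A_r B_r^T) scaled over time by a temporal signature c_r. This only builds the model (fitting a BTD would typically use an alternating least squares scheme, which is not shown); all sizes are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' pipeline): a dynamic network
# tensor generated by a rank-(Lr,Lr,1) block term model.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_time, R, Lr = 32, 100, 3, 2  # assumed sizes

T = np.zeros((n_nodes, n_nodes, n_time))
for r in range(R):
    A = rng.normal(size=(n_nodes, Lr))       # spatial factors of community r
    B = rng.normal(size=(n_nodes, Lr))
    c = np.abs(rng.normal(size=n_time))      # temporal signature of community r
    T += np.einsum("ij,t->ijt", A @ B.T, c)  # (A_r B_r^T) outer-product c_r

print(T.shape)  # (32, 32, 100): node x node x time connectivity tensor
```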
  • Ruotsalainen, Laura; Morrison, Aiden; Mäkelä, Maija; Rantanen, Jesperi; Sokolova, Nadezda (2022)
    Collaborative navigation is the most promising technique for infrastructure-free indoor navigation for a group of pedestrians, such as rescue personnel. Infrastructure-free navigation means a system that can localize itself independently of any equipment pre-installed in the building, using various sensors that monitor the motion of the user. The most feasible navigation sensors are inertial sensors and a camera that provides motion information through a computer vision method called visual odometry. Collaborative indoor navigation poses challenges for computer vision: the navigation environment is often poor in trackable features, other pedestrians in front of the camera interfere with motion detection, and size and cost constraints prevent the use of high-quality cameras, resulting in measurement errors. We have developed an improved computer-vision-based collaborative navigation method that addresses these challenges using a depth (RGB-D) camera and a deep-learning-based detector, which avoids using features found on other pedestrians and controls the inconsistency of object depth detection that would otherwise degrade the accuracy of the visual odometry solution. Our analysis shows that our method improves the visual odometry solution using a low-cost RGB-D camera. Finally, we present results computed by fusing visual odometry and inertial sensors for individual navigation and UWB ranging for collaborative navigation.
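The core idea of rejecting features on other pedestrians can be sketched as follows (a minimal illustration, not the authors' system): keypoints for visual odometry are extracted only outside detected-person regions. The detector output here is a hypothetical bounding box; any person detector, such as a deep-learning model, could supply it.

```python
# Minimal sketch (assumptions throughout): mask person detections before
# keypoint extraction so visual odometry tracks only the static scene.
# Requires opencv-python.
import numpy as np
import cv2

def masked_orb_keypoints(gray, person_boxes, n_features=500):
    """ORB keypoints computed only outside detected-person regions."""
    mask = np.full(gray.shape, 255, dtype=np.uint8)
    for x, y, w, h in person_boxes:      # (x, y, width, height) boxes
        mask[y:y + h, x:x + w] = 0       # exclude dynamic pedestrians
    orb = cv2.ORB_create(nfeatures=n_features)
    return orb.detectAndCompute(gray, mask)

# Toy frame with one hypothetical person detection.
frame = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(np.uint8)
kps, desc = masked_orb_keypoints(frame, person_boxes=[(200, 100, 120, 300)])
print(len(kps), "static-scene keypoints retained")
```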
  • Su, Peifeng; Liu, Yongchun; Tarkoma, Sasu; Rebeiro-Hargrave, Andrew; Petäjä, Tuukka; Kulmala, Markku; Pellikka, Petri (2022)
    Retrieving atmospheric environmental parameters, such as atmospheric horizontal visibility and the mass concentration of aerosol particles with a diameter of 2.5 or 10 μm or less (PM2.5 and PM10, respectively), from digital images provides new tools for horizontal environmental monitoring. In this study, we propose a new end-to-end convolutional neural network (CNN) for the retrieval of multiple atmospheric environmental parameters (RMEP) from images. In contrast to other retrieval models, RMEP can retrieve a suite of atmospheric environmental parameters, including atmospheric horizontal visibility, relative humidity (RH), ambient temperature, PM2.5, and PM10, simultaneously from a single image. Experimental results demonstrate that: 1) it is possible to simultaneously retrieve multiple atmospheric environmental parameters; 2) the spatial and spectral resolutions of the images are not the key factors for retrieval on the horizontal scale; and 3) RMEP achieves the best overall retrieval performance compared with several classic CNNs such as AlexNet, ResNet-50, and DenseNet-121, based on experiments on images extracted from webcams located on different continents (test R2 values of 0.63, 0.72, and 0.82 for atmospheric horizontal visibility, RH, and ambient temperature, respectively). The experimental results show the potential of utilizing webcams to help monitor the environment. Code and more results are available at https://github.com/cvvsu/RMEP.
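The multi-parameter retrieval idea can be illustrated with a small PyTorch sketch (not the published RMEP architecture; see the linked repository for the real code): a shared convolutional trunk feeds one regression head that predicts all five targets jointly from a single image, trained with a single loss.

```python
# Minimal sketch (layer sizes and data are illustrative assumptions): one
# shared CNN trunk with a joint regression head predicting five parameters
# (visibility, RH, temperature, PM2.5, PM10) from one image.
import torch
import torch.nn as nn

class MultiParamCNN(nn.Module):
    def __init__(self, n_outputs=5):
        super().__init__()
        self.trunk = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_outputs)   # joint regression head

    def forward(self, x):
        return self.head(self.trunk(x).flatten(1))

model = MultiParamCNN()
imgs = torch.randn(8, 3, 224, 224)   # a toy batch standing in for webcam images
targets = torch.randn(8, 5)          # visibility, RH, temperature, PM2.5, PM10
loss = nn.functional.mse_loss(model(imgs), targets)  # one loss over all outputs
loss.backward()
print(model(imgs).shape)  # torch.Size([8, 5])
```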