Browsing by Subject "convolutional neural network"


Now showing items 1-6 of 6
  • Koivisto, Maria (Helsingin yliopisto, 2020)
    Immunohistochemistry (IHC) is a widely used research tool for detecting antigens and can be used in medical and biochemical research. The co-localization of two separate proteins is sometimes crucial for analysis, requiring a double staining. This comes with a number of challenges, since staining results depend on the pre-treatment of samples, the host species in which the antibodies were raised, and the spectral differentiation of the two proteins. In this study, the proteins GABAR-α2 and CAMKII were stained simultaneously to study the expression of the GABA receptor in hippocampal pyramidal cells. This was performed in PGC-1α transgenic mice, which possibly express GABAR-α2 excessively compared to wildtype mice. Staining optimization was performed with regard to primary and secondary antibody concentration, section thickness, antigen retrieval and detergent. Double staining was performed successfully, and the proteins of interest were visualized using a confocal microscope, after which image analyses were performed using two different methods: 1) a traditional image analysis based on the intensity and density of stained dots, and 2) a novel convolutional neural network (CNN) machine learning approach. The traditional image analysis did not detect any differences in the stained brain slices, whereas the CNN model achieved an accuracy of 72% in categorizing the images correctly as transgenic/wildtype brain slices. The results from the CNN model imply that GABAR-α2 is expressed differently in PGC-1α transgenic mice, which might impact other factors such as behaviour and learning. This protocol and the novel method of using a CNN as an image analysis tool can be of future help when performing IHC analyses in neuronal brain studies.
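The CNN-as-image-classifier idea above can be illustrated with a minimal NumPy sketch of the forward pass of a one-layer convolutional classifier. Everything here is an assumption for illustration: the kernel and output weights are random stand-ins, not parameters trained on IHC images, and the real model would have many more layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_forward(img, kernel, w_out, b_out):
    """One conv layer -> ReLU -> global average pool -> logistic unit.
    Returns a probability for the 'transgenic' class of one grayscale patch."""
    feat = np.maximum(conv2d(img, kernel), 0.0)  # convolution + ReLU
    pooled = feat.mean()                         # global average pooling
    logit = w_out * pooled + b_out
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid probability

rng = np.random.default_rng(0)
img = rng.random((16, 16))                 # stand-in for a stained-slice patch
kernel = rng.standard_normal((3, 3))       # untrained filter, illustration only
p = tiny_cnn_forward(img, kernel, w_out=1.5, b_out=-0.2)
```

In a trained network the kernel would respond to staining patterns that differ between transgenic and wildtype slices; here it only demonstrates the data flow.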
  • Hokkinen, Lasse; Mäkelä, Teemu; Savolainen, Sauli; Kangasniemi, Marko (2021)
    Background: Computed tomography perfusion (CTP) is the mainstay for determining eligibility for endovascular thrombectomy (EVT), but there is still a need for alternative methods in patient triage. Purpose: To study the ability of a computed tomography angiography (CTA)-based convolutional neural network (CNN) method to predict final infarct volume in patients with large vessel occlusion successfully treated with endovascular therapy. Materials and Methods: The accuracy of the CTA source image-based CNN in final infarct volume prediction was evaluated against follow-up CT or MR imaging in 89 patients with anterior circulation ischemic stroke successfully treated with EVT, as defined by Thrombolysis in Cerebral Infarction category 2b or 3, using Pearson correlation coefficients and intraclass correlation coefficients. CNN performance was also compared to a commercially available CTP-based software (RAPID, iSchemaView). Results: A correlation with final infarct volumes was found for both the CNN and CTP-RAPID in patients presenting 6-24 h from symptom onset or last known well, with r = 0.67 (p < 0.001) and r = 0.82 (p < 0.001), respectively. Correlations with final infarct volumes in the early time window (0-6 h) were r = 0.43 (p = 0.002) for the CNN and r = 0.58 (p < 0.001) for CTP-RAPID. Compared to CTP-RAPID predictions, the CNN estimated eligibility for thrombectomy according to ischemic core size in the late time window with a sensitivity of 0.38 and a specificity of 0.89. Conclusion: A CTA-based CNN method had moderate correlation with final infarct volumes in the late time window in patients successfully treated with EVT.
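The evaluation described above boils down to two measurements: a Pearson correlation between predicted and final infarct volumes, and a sensitivity/specificity for an eligibility call derived from core size. A small sketch of both, on toy volumes: the 70 ml cutoff and all numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two volume series."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

def eligibility_confusion(pred_core_ml, ref_core_ml, threshold_ml=70.0):
    """Sensitivity/specificity of flagging a core as 'too large'
    (>= threshold) from predicted volumes, against the reference call.
    The 70 ml cutoff is an illustrative assumption, not from the paper."""
    pred = np.asarray(pred_core_ml) >= threshold_ml
    ref = np.asarray(ref_core_ml) >= threshold_ml
    tp = np.sum(pred & ref)
    fn = np.sum(~pred & ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum(pred & ~ref)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return float(sens), float(spec)

# toy volumes (ml): model predictions vs follow-up imaging
pred = [12, 35, 80, 150, 20, 60, 90, 10]
ref = [15, 30, 95, 120, 25, 75, 85, 12]
r = pearson_r(pred, ref)
sens, spec = eligibility_confusion(pred, ref)
```

The same two-step evaluation (continuous agreement, then a thresholded clinical decision) applies to any volume-predicting model.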
  • Ouattara, Issouf; Hyyti, Heikki; Visala, Arto (Elsevier, 2020)
    IFAC-PapersOnLine, Proceedings of the 21th IFAC World Congress, Berlin, Germany, 12-17 July 2020
    We propose a novel method to locate spruces in a young stand with a low-cost unmanned aerial vehicle. The method has three stages: 1) the forest area is mapped and a digital surface model and a digital terrain model are generated, 2) the locations of trees are found from a canopy height model using local maximum and watershed algorithms, and 3) these locations are used in a convolutional neural network architecture to detect young spruces. Our results for detecting young spruce trees among other vegetation using only color images from a single RGB camera were promising: the proposed method achieves a detection accuracy of more than 91%. As low-cost unmanned aerial vehicles with color cameras are widely available today, the proposed method enables low-cost forest inventory for automated forest management.
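Stage 2 of the pipeline, finding treetop candidates as local maxima of a canopy height model (CHM), can be sketched in a few lines of NumPy. This is a simplified stand-in for the paper's local-maximum step (the watershed refinement is omitted), and the window size and minimum-height cutoff are illustrative assumptions.

```python
import numpy as np

def local_maxima(chm, win=1, min_height=2.0):
    """Treetop candidates: pixels that are the strict maximum of their
    (2*win+1)^2 neighbourhood and exceed a minimum height (metres).
    Window size and height cutoff are illustrative assumptions."""
    h, w = chm.shape
    tops = []
    for i in range(win, h - win):
        for j in range(win, w - win):
            patch = chm[i - win:i + win + 1, j - win:j + win + 1]
            if (chm[i, j] >= min_height
                    and chm[i, j] == patch.max()
                    and np.sum(patch == patch.max()) == 1):
                tops.append((i, j))
    return tops

# toy canopy height model with two synthetic paraboloid tree crowns
chm = np.zeros((12, 12))
for (ci, cj, height) in [(3, 3, 5.0), (8, 9, 4.0)]:
    for i in range(12):
        for j in range(12):
            d2 = (i - ci) ** 2 + (j - cj) ** 2
            chm[i, j] = max(chm[i, j], height - 0.5 * d2)

tops = local_maxima(chm)
```

Each detected (row, col) pair would then seed a watershed segmentation of the crown and crop an image patch for the CNN classifier in stage 3.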
  • Rönnholm, Petri; Vaaja, Matti; Kauhanen, Heikki; Klockars, Tuomas (2020)
    In this paper, we illustrate how convolutional neural networks and voxel-based processing, together with voxel visualizations, can be utilized for the selection of unaimed images for a photogrammetric image block. Our research included the detection of an ear from images with a convolutional neural network, computation of image orientations with a structure-from-motion algorithm, visualization of camera locations in a voxel representation to assess the goodness of the imaging geometry, rejection of unnecessary images with an XYZ buffer, the creation of 3D models in two different example cases, and the comparison of the resulting 3D models. Two test data sets of an ear were captured with the video recorder of a mobile phone. In the first test case, special emphasis was placed on ensuring good imaging geometry. In contrast, in the second test case the trajectory was limited to approximately horizontal movement, leading to poor imaging geometry. A convolutional neural network together with an XYZ buffer managed to select a useful set of images for the photogrammetric 3D measuring phase. The voxel representation illustrated the imaging geometry well and has potential for early detection of whether data are suitable for photogrammetric modelling. The comparison of 3D models revealed that the model from poor imaging geometry was noisy and flattened. The results emphasize the importance of good imaging geometry.
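The XYZ-buffer idea of rejecting redundant video frames can be sketched as a voxel occupancy filter over camera centres: once a voxel contains one camera, later frames falling in the same voxel add little new geometry. This is a minimal interpretation of the abstract, assuming the buffer works on camera positions from structure-from-motion; the voxel size is an illustrative assumption.

```python
import numpy as np

def voxel_filter(cam_positions, voxel_size=0.05):
    """Keep at most one camera per voxel (an 'XYZ buffer'): later frames
    whose camera centre falls in an already occupied voxel are rejected
    as redundant. Voxel size (scene units) is an illustrative assumption."""
    occupied = set()
    kept = []
    for idx, p in enumerate(np.asarray(cam_positions, float)):
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in occupied:
            occupied.add(key)
            kept.append(idx)
    return kept, occupied

# toy trajectory: near-duplicate video frames around an object
cams = [(0.00, 0.0, 0.3), (0.01, 0.0, 0.3),  # two frames in one voxel
        (0.10, 0.0, 0.3), (0.10, 0.1, 0.3),
        (0.00, 0.1, 0.3)]
kept, occupied = voxel_filter(cams)
```

The occupied-voxel set doubles as a coarse visualization of imaging geometry: cameras spread over many voxels around the object indicate a good configuration, while a thin line of voxels indicates the flattening-prone horizontal trajectory described above.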
  • Rasse, Tobias M.; Hollandi, Reka; Horvath, Peter (2020)
    Various pre-trained deep learning models for the segmentation of bioimages have been made available as developer-to-end-user solutions. They are optimized for ease of use and usually require neither knowledge of machine learning nor coding skills. However, individually testing these tools is tedious and success is uncertain. Here, we present the Open Segmentation Framework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge in Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and postprocessing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of parameters for pre- and postprocessing such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts in selecting the most promising CNN architecture, in which the biomedical user might invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently.
The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods (the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose) have been integrated within OpSeF. Adding new networks requires little coding; adding new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models can be shared, evaluated, and reused with ease.
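The core OpSeF idea, running multiple segmentation backends on the same standardized input and scoring them side by side, can be sketched with a tiny registry of interchangeable model callables. The model names and the threshold backends below are illustrative stand-ins, not the real U-Net/StarDist/Cellpose integrations.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

def benchmark(models, image, reference):
    """Run every registered segmentation backend on the same preprocessed
    input and score it against a reference mask, mirroring the standard
    inputs/outputs idea. Backends here are toy thresholders, not CNNs."""
    return {name: iou(fn(image), reference) for name, fn in models.items()}

# toy image: one bright "cell" with a brighter core
image = np.zeros((8, 8))
image[2:6, 2:6] = 0.8
image[3:5, 3:5] = 1.0
reference = image > 0.5  # ground-truth mask for the whole cell

models = {
    "threshold_0.5": lambda im: im > 0.5,  # stand-in backend A
    "threshold_0.9": lambda im: im > 0.9,  # stand-in backend B
}
scores = benchmark(models, image, reference)
```

Because every backend shares one input/output contract, swapping a thresholder for a real CNN wrapper changes one dictionary entry, which is the interoperability the framework aims for.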
  • Franzese, Giulio; Linty, Nicola; Dovis, Fabio (MDPI, 2020)
    Applied Sciences
    This work focuses on machine-learning-based detection of ionospheric scintillation events affecting Global Navigation Satellite System (GNSS) signals. We extend recent detection results based on decision trees by designing a semi-supervised detection system built on the recently proposed DeepInfomax approach. The paper shows that it is possible to achieve good classification accuracy while reducing the amount of time that human experts must spend manually labelling datasets for the training of supervised algorithms. The proposed method is scalable and reduces the percentage of annotated samples required to achieve a given performance, making it a viable candidate for realistic deployment of scintillation detection in software-defined GNSS receivers.
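The semi-supervised premise, good accuracy from a small labelled fraction plus a large unlabelled pool, can be illustrated with a deliberately simple substitute for DeepInfomax: a nearest-centroid classifier fitted on a few labels, then refined by one round of self-training on pseudo-labels. The 2-D features, class separation, and 5% labelling fraction are all toy assumptions; the real system learns representations from raw GNSS signal data.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 2-D features standing in for scintillation indices:
# class 0 = quiet signal, class 1 = scintillation event
n = 200
X = np.vstack([rng.normal(0.0, 0.5, (n, 2)), rng.normal(2.0, 0.5, (n, 2))])
y = np.repeat([0, 1], n)

def centroid_fit(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def centroid_predict(centroids, X):
    """Assign each sample to the class of its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# label only 10 samples per class (5% of the data) ...
labelled = np.concatenate([rng.choice(n, 10, replace=False),
                           n + rng.choice(n, 10, replace=False)])
cent = centroid_fit(X[labelled], y[labelled])

# ... then one self-training round: pseudo-label the full pool
# and refit the centroids on everything
pseudo = centroid_predict(cent, X)
cent = centroid_fit(X, pseudo)

acc = float((centroid_predict(cent, X) == y).mean())
```

The point the sketch shares with the paper is structural: the unlabelled pool does most of the work, so expert annotation effort can shrink without a matching drop in accuracy.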