Browsing by Subject "Optimization"

  • Vuoriheimo, Tomi (Helsingfors universitet, 2017)
    Accelerator mass spectrometry (AMS) is a technique developed from mass spectrometry that can measure single, very rare isotopes in samples, with a detection capability down to one atom in 10^16. It uses an accelerator system to break molecular bonds, enabling precise single-isotope detection. This thesis describes the optimization of the University of Helsinki's AMS system to detect the rare radioactive isotope 14C in CO2 gas samples. Using AMS to detect radiocarbon is a precise and fast way to conduct radiocarbon dating with minimal sample sizes. Solid graphite samples have been used previously, but as the ion source has been adapted to also accept gaseous CO2 samples, optimizations must be made to maximize the carbon current and ionization efficiency for efficient 14C detection. The optimized parameters include the cesium oven temperature, the CO2 flow, the helium carrier-gas flow, and their mutual dependencies (a schematic parameter-scan sketch follows below). Both the carbon current and the ionization efficiency are considered in the optimizations. The results are analyzed and discussed with a view to further optimizations and actual gas measurements; they also improve the understanding of ionization in the ion source. Standard CO2 samples were measured to determine the background and precision of the AMS system in gas use by comparing the results with the literature. The current system was found to have a tolerable background of 1.5% of the standard, and the fraction modern value of an actual sample was 2.4% higher than literature values. Ideas for reducing the background are discussed. A new theory of negative-ion formation in a cesium sputtering ion source by John S. Vogel is reviewed and taken into account in the discussion of the optimization. Building on this theory, possible future upgrades for improving the ionization efficiency are presented, such as cathode material choices to reduce competitive ionization and cesium excitation by laser.
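    The thesis reports tuning of the ion source parameters; purely as an illustration of the kind of two-parameter scan described above, the sketch below sweeps a cesium oven temperature against a CO2 flow over a grid and picks the setpoint with the highest response. The toy_current() response surface and all numbers are invented stand-ins for actual instrument readouts, not values from the thesis.

    ```python
    import itertools

    def toy_current(temp_c, co2_flow):
        # Toy response surface standing in for the measured carbon ion
        # current; a real scan would read this from the ion source instead.
        return -((temp_c - 140) / 30) ** 2 - ((co2_flow - 2.0) / 1.0) ** 2

    def grid_scan(temps, flows, response):
        # Exhaustive scan of the parameter grid; returns the best setpoint.
        best = max(itertools.product(temps, flows), key=lambda tf: response(*tf))
        return best, response(*best)

    temps = range(100, 181, 10)        # hypothetical oven temperatures (deg C)
    flows = [1.0, 1.5, 2.0, 2.5, 3.0]  # hypothetical CO2 flows (uL/min)
    (best_temp, best_flow), value = grid_scan(temps, flows, toy_current)
    print(f"best setpoint: {best_temp} degC, {best_flow} uL/min (score {value:.3f})")
    ```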
  • Somervuo, Panu; Koskinen, Patrik; Mei, Peng; Holm, Liisa; Auvinen, Petri; Paulin, Lars (2018)
    Background: Current high-throughput sequencing platforms provide the capacity to sequence multiple samples in parallel. Different samples are labeled by attaching a short sample-specific nucleotide sequence, a barcode, to each DNA molecule prior to pooling them into a mix containing a number of libraries to be sequenced simultaneously. After sequencing, the samples are binned by identifying the barcode sequence within each sequence read. In order to tolerate sequencing errors, barcodes should be sufficiently far apart from each other in sequence space. An additional constraint, due to both nucleotide usage and basecalling accuracy, is that the proportions of different nucleotides should be in balance at each barcode position. The number of samples to be mixed in each sequencing run may vary, which raises the problem of how to select the best subset of available barcodes at a sequencing core facility for each run. Plenty of tools are available for de novo barcode design, but they are not suitable for subset selection. Results: We have developed a tool which can be used for three different tasks: 1) selecting an optimal barcode set from a larger set of candidates, 2) checking the compatibility of a user-defined set of barcodes, e.g. whether two or more libraries with existing barcodes can be combined in a single sequencing pool, and 3) augmenting an existing set of barcodes. In our approach the selection process is formulated as a minimization problem: we define a cost function and a set of constraints and use integer programming to solve the resulting combinatorial problem (a toy illustration follows below). Based on the desired number of barcodes to be selected and the set of candidate sequences given by the user, the necessary constraints are generated automatically and the optimal solution can be found. The method is implemented in the C programming language, and a web interface is available at http://ekhidna2.biocenter.helsinki.fi/barcosel. Conclusions: The increasing capacity of sequencing platforms raises the challenge of mixing barcodes. Our method allows the user to select a given number of barcodes from a larger existing barcode set so that sequencing errors are tolerated and the nucleotide balance is optimized. The tool is easy to access via a web browser.
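    Barcosel itself formulates the selection as an integer program in C; the toy Python sketch below only illustrates the two ingredients named in the abstract: pick k barcodes from a candidate set so that every pair is at least min_dist mismatches apart (error tolerance) while minimizing a per-position nucleotide-imbalance cost (base balance). The candidate sequences are invented, and the exhaustive search is suitable for tiny inputs only.

    ```python
    from itertools import combinations

    def hamming(a, b):
        # Number of mismatching positions between two equal-length barcodes.
        return sum(x != y for x, y in zip(a, b))

    def imbalance(subset):
        # Sum over positions of the deviation from a uniform A/C/G/T split.
        n, length = len(subset), len(subset[0])
        cost = 0.0
        for pos in range(length):
            column = [bc[pos] for bc in subset]
            for base in "ACGT":
                cost += abs(column.count(base) / n - 0.25)
        return cost

    def select(candidates, k, min_dist):
        # Feasible = every pair of chosen barcodes is >= min_dist apart;
        # among feasible subsets, return the one with the best base balance.
        feasible = (s for s in combinations(candidates, k)
                    if all(hamming(a, b) >= min_dist
                           for a, b in combinations(s, 2)))
        return min(feasible, key=imbalance, default=None)

    candidates = ["ACGT", "TGCA", "CATG", "GTAC", "AAGG", "CCTT"]
    print(select(candidates, k=3, min_dist=3))
    ```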
  • Koponen, Lari M.; Nieminen, Jaakko O.; Mutanen, Tuomas P.; Stenroos, Matti; Ilmoniemi, Risto J. (2017)
    Background: Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses that can be delivered with the same coil before its temperature rises too high. Objective: To develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. Methods: We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation (a schematic sketch of the underlying constrained minimization follows below). Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. Results: We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with a capacitor voltage below 600 V and a peak current below 3000 A. Conclusion: The described method allows designing practical TMS coils with considerably higher efficiency than conventional figure-of-eight coils. © 2017 Elsevier Inc. All rights reserved.
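    As a hedged illustration of the optimization step: if the surface current is expanded in a basis with weights x, the magnetic field energy is a quadratic form x^T M x, and the induced electric field at selected cortical points is linear in x. Minimizing the energy subject to prescribed field values is then an equality-constrained quadratic program, solvable via its KKT system. The matrices below are random stand-ins; in the paper, the energy matrix comes from the semi-analytical integration scheme and the field operator from the boundary element method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 3                        # basis functions, field constraints
    B = rng.standard_normal((n, n))
    M = B @ B.T + n * np.eye(n)         # toy symmetric positive-definite energy matrix
    A = rng.standard_normal((m, n))     # toy E-field "lead field" at 3 cortical points
    e_target = np.array([1.0, 0.0, 0.0])  # prescribed field: focus at point 1, nulls elsewhere

    # Minimize x^T M x subject to A x = e_target via the KKT linear system:
    # [2M  A^T] [x  ]   [0       ]
    # [A    0 ] [lam] = [e_target]
    K = np.block([[2 * M, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), e_target])
    x = np.linalg.solve(K, rhs)[:n]
    print("field energy of optimal weights:", x @ M @ x)
    ```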
  • Holma, Maija; Lindroos, Marko; Romakkaniemi, Atso; Oinonen, Soile (2019)
  • Premsankar, Gopika; Ghaddar, Bissan (2022)
    Edge computing is a promising solution for hosting artificial intelligence (AI) applications that enable real-time insights on user-generated and device-generated data. This requires edge computing resources (storage and compute) to be widely deployed close to end devices. Such edge deployments require a large amount of energy to run, as edge resources are typically overprovisioned to flexibly meet the needs of time-varying user demand with low latency. Moreover, AI applications rely on deep neural network (DNN) models of increasingly large size to support high accuracy. These DNN models must be efficiently stored and transferred so as to minimize their energy consumption. In this article, we model the problem of energy-efficient placement of services (namely, DNN models) for AI applications as a multiperiod optimization problem. The formulation jointly places services and schedules requests such that the overall energy consumption is minimized and latency is low. We propose a heuristic that efficiently solves the problem while taking into account the impact of placing services across time periods (a simplified greedy sketch follows below). We assess the quality of the proposed heuristic by comparing its solution to a lower bound of the problem, obtained by formulating and solving a Lagrangian relaxation of the original problem. Extensive simulations show that our proposed heuristic outperforms baseline approaches, achieving low energy consumption by packing services on a minimal number of edge nodes while keeping the average latency of served requests below a configured threshold in nearly all time periods.
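    The paper's heuristic is not specified in the abstract; the sketch below is a deliberately simplified greedy stand-in that captures the stated goals: in each period, pack the requested DNN models onto as few edge nodes as possible (first-fit decreasing by model size), preferring nodes that already hold a model from the previous period so cross-period transfers are avoided. All model names, sizes, and capacities are invented.

    ```python
    def place(periods, node_capacity, num_nodes):
        # periods: list of dicts {model_name: size} requested per period.
        prev = [set() for _ in range(num_nodes)]  # models held per node last period
        plan, transfers = [], 0
        for demand in periods:
            free = [node_capacity] * num_nodes
            cur = [set() for _ in range(num_nodes)]
            # First-fit decreasing: largest models first.
            for model, size in sorted(demand.items(), key=lambda kv: -kv[1]):
                # Prefer a node that already holds the model, else lowest index.
                order = sorted(range(num_nodes),
                               key=lambda i: (model not in prev[i], i))
                for i in order:
                    if free[i] >= size:
                        free[i] -= size
                        cur[i].add(model)
                        transfers += model not in prev[i]  # count new transfers
                        break
            plan.append(cur)
            prev = cur
        return plan, transfers

    periods = [{"resnet": 4, "bert": 6}, {"resnet": 4, "yolo": 3}]
    plan, transfers = place(periods, node_capacity=8, num_nodes=3)
    print(plan, "transfers:", transfers)
    ```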
  • Pozza, Matteo; Rao, Ashwin; Lugones, Diego. F.; Tarkoma, Sasu (2021)
    Network function (NF) developers have traditionally prioritized performance when creating new packet processing capabilities. This was usually driven by market demand for highly available solutions with differentiating features running at line rate, even at the expense of flexibility, leading to tightly coupled monolithic designs. Today, however, market advantage is achieved by providing more features in shorter development cycles and quickly deploying them in different operating environments. In fact, network operators are increasingly adopting continuous software delivery practices as well as new architectural styles (e.g., microservices) to decouple functionality and accelerate development. A key challenge in revisiting NF design is state management, which is usually highly optimized for a given deployment by carefully selecting the underlying data store. Migrating to a data store that suits a different use case is therefore time-consuming, as it requires code refactoring and adaptation to new application programming interfaces (APIs). As a result, refactoring NF software for different environments can take up to months, reducing the pace at which new features and upgrades can be deployed in production networks. In this paper, we demonstrate experimentally that it is feasible to introduce an abstraction layer that decouples NF state management from the adopted data store while still approaching line-rate performance. We present FlexState, a state management system that exposes data store functionality as configuration options, reducing code refactoring effort (a minimal sketch of the idea follows below). Experiments show that FlexState achieves significant flexibility in optimizing state management and accelerates deployment in new scenarios while preserving performance and scalability.
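    A minimal sketch of the abstraction the paper argues for, assuming nothing about FlexState's actual API: NF code programs against a narrow state interface, and the backing data store is selected by configuration rather than baked into the code, so swapping stores needs no refactoring. The interface and backend names here are illustrative.

    ```python
    from abc import ABC, abstractmethod

    class StateStore(ABC):
        # Data-store-agnostic state API used by network function code.
        @abstractmethod
        def get(self, key: str): ...
        @abstractmethod
        def put(self, key: str, value) -> None: ...

    class InMemoryStore(StateStore):
        # One pluggable backend; a Redis- or Cassandra-backed class with the
        # same interface could be registered alongside it.
        def __init__(self):
            self._data = {}
        def get(self, key):
            return self._data.get(key)
        def put(self, key, value):
            self._data[key] = value

    BACKENDS = {"memory": InMemoryStore}

    def make_store(config: dict) -> StateStore:
        # Swapping the data store becomes a config change, not a code change.
        return BACKENDS[config["backend"]]()

    store = make_store({"backend": "memory"})
    store.put("flow:10.0.0.1", {"pkts": 1})
    print(store.get("flow:10.0.0.1"))
    ```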
  • Li, Tong; Braud, Tristan; Li, Yong; Hui, Pan (2021)
    The current explosion of video traffic compels service providers to deploy caches at edge networks. Nowadays, most caching systems store data with a high programming voltage corresponding to the largest possible ‘expiry date’, typically on the order of years, which maximizes the cache damage. However, popular videos rarely exhibit lifecycles longer than a couple of months. Consequently, the programming voltage can instead be adapted to fit the lifecycle and mitigate the cache damage accordingly. In this paper, we propose LiA-cache, a Lifecycle-Aware caching policy for online videos. LiA-cache finds both near-optimal caching retention times and cache eviction policies by jointly optimizing the traffic delivery cost and the cache damage cost. We first investigate temporal patterns of video access in a real-world dataset covering 10 million online videos, collected by one of the largest mobile network operators in the world. We next cluster the videos based on their access lifecycles and integrate the clustering into a general model of the caching system. Specifically, LiA-cache analyzes videos and caches them depending on their cluster label (a toy sketch follows below). Compared to other popular policies in real-world scenarios, LiA-cache can reduce cache damage by up to 90% while keeping a cache hit ratio close to that of a policy relying purely on video popularity.
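    As a toy sketch of lifecycle-aware retention: each cached video is written with a retention time taken from its lifecycle cluster instead of a years-long maximum, so short-lived content expires (and stops wearing the cache) quickly. The cluster labels and retention values are invented for illustration; LiA-cache derives them from its access-pattern clustering.

    ```python
    import time

    # Hypothetical lifecycle clusters mapped to retention times (days).
    RETENTION_DAYS = {"flash": 7, "seasonal": 30, "evergreen": 365}

    class LifecycleCache:
        def __init__(self):
            self._items = {}  # video_id -> (expiry_timestamp, payload)

        def put(self, video_id, payload, cluster):
            # Retention follows the video's lifecycle cluster, not a fixed maximum.
            ttl = RETENTION_DAYS[cluster] * 86400
            self._items[video_id] = (time.time() + ttl, payload)

        def get(self, video_id, now=None):
            now = time.time() if now is None else now
            item = self._items.get(video_id)
            if item is None or item[0] < now:  # miss, or retention expired
                self._items.pop(video_id, None)
                return None
            return item[1]

    cache = LifecycleCache()
    cache.put("v1", b"...", cluster="flash")
    print(cache.get("v1") is not None)
    ```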
  • Mäkelä, Teemu; Kortesniemi, Mika; Kaasalainen, Touko (2022)
    Purpose: To determine the effects of patient vertical off-centering when using organ-based tube current modulation (OBTCM) in chest computed tomography (CT), with a focus on breast dose. Materials and methods: An anthropomorphic adult female phantom with two different breast attachment sizes was scanned on GE Revolution EVO and Siemens Definition Edge CT systems using clinical chest CT protocols and anterior-to-posterior scouts. Scans with and without OBTCM were performed at different table heights (GE: centered, ±6 cm, and ±3 cm; Siemens: centered, −6 cm, and ±3 cm). The dose effects were studied with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters, with complementary Monte Carlo simulations to determine full dose maps. Changes in image noise were studied using the standard deviations of subtraction images from repeated acquisitions without dosimeters (a minimal sketch of this estimate follows below). Results: Patient off-centering affected both the behavior of the normal tube current modulation and the extent of the OBTCM. Generally, both OBTCM techniques provided a substantial decrease in breast dose (up to a 30% local decrease). Lateral breast regions may, however, in some cases receive higher doses when OBTCM is enabled; this effect becomes more prominent when the patient is centered too low in the CT gantry. Changes in noise roughly followed the expected inverse relation to the change in dose. Conclusions: Patient off-centering was shown to affect the outcome of OBTCM in chest CT examinations, on some occasions resulting in higher exposure. The use of modern dose-optimization tools such as OBTCM emphasizes the importance of proper centering when positioning patients for CT scans.
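    The subtraction-image noise estimate mentioned above is a standard technique and easy to illustrate: for two repeated acquisitions of the same object, the anatomy cancels in the difference image while the independent noise variances add, so the per-pixel noise is the standard deviation of the subtraction image divided by sqrt(2). The arrays below are synthetic stand-ins for repeated CT acquisitions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    anatomy = rng.uniform(0, 100, size=(256, 256))   # shared signal (cancels out)
    sigma_true = 5.0
    scan1 = anatomy + rng.normal(0, sigma_true, anatomy.shape)
    scan2 = anatomy + rng.normal(0, sigma_true, anatomy.shape)

    # var(scan1 - scan2) = 2 * sigma^2, hence the sqrt(2) normalization.
    noise = np.std(scan1 - scan2) / np.sqrt(2)
    print(f"estimated noise: {noise:.2f} (true {sigma_true})")
    ```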