Browsing by Title


  • Veikkolainen, Toni (Helsingin yliopisto, 2014)
    A branch of science concentrating on the evolution of the Earth's magnetic field has emerged in the last half century. This is called paleomagnetism, and its applications include calculations of past field directions and intensities, plate tectonic reconstructions, variations in the conditions of the Earth's deep interior, and climatic history. With the increasing quantity and quality of observations, it has even been possible to construct models of conterminous continent blocks, or supercontinents, of pre-Pangaea times. These are crucial for understanding the evolution of our planet from the Archean to today. Paleomagnetists have traditionally relied heavily on the theory that, when averaged over a sufficiently long period, the Earth's magnetic field can be approximated as equivalent to that generated by a magnetic dipole located at the center of the Earth and aligned with the axis of rotation. The credibility of this GAD (Geocentric Axial Dipole) hypothesis is strongest in the geologically most recent eras, such as most of the Phanerozoic and notably the last 400 million years. Attempts to obtain an adequate view of the magnetic field in the Earth's earlier history have long been challenged by the reliability limitations of Precambrian paleomagnetic data. In the absence of marine magnetic anomalies, observational data need to be gathered from terrestrial rocks, notably those formed within cratonic nuclei, the oldest and most stable parts of continents. To answer the call for a concise and comprehensive compilation of paleomagnetic data from the early history of the Earth, this dissertation introduces a unique database of over 3300 Precambrian paleomagnetic observations worldwide. The data are freely available on a University of Helsinki server and can be accessed via an online query form.
All database entries have been coded according to their terranes, rock formation names, ages, rock types and paleomagnetic reliabilities. A new modified version of the commonly applied Van der Voo (MV) classification criteria for filtering paleomagnetic data is also presented, along with a novel method for binning the entries cratonically, revising the previously employed approach of binning via a simple evenly spaced geographic grid. Besides compiling data, tests of the validity of the GAD hypothesis in the Precambrian have been conducted using inclination frequency analysis and the asymmetries of magnetic field reversals. Results from two self-contained tests of the GAD hypothesis suggest that the time-averaged Precambrian geomagnetic field may include geocentric axial quadrupole and geocentric axial octupole contributions, but both with strengths less than 10% of the geocentric axial dipole, with the quadrupole perhaps being smaller than the octupole. In no other study has a model so close to GAD been reasonably fitted to Precambrian paleomagnetic data. The weakness of the required non-dipolar coefficients also implies that no substantial adjustments need to be made to the novel models of Precambrian continental assemblies (supercontinents), such as the Paleo-Mesoproterozoic Columbia (Nuna) or the Neoproterozoic Rodinia. Although supercontinent science still involves plenty of uncertainty, this is more plausibly caused by the geological incoherence of the data and the lack of precise age information than by long-lived non-dipolar geomagnetic fields.
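The GAD hypothesis and the small zonal quadrupole/octupole contributions described above can be made concrete with the standard zonal spherical-harmonic expression for field inclination as a function of latitude. The sketch below is a textbook relation, not code from the dissertation, and the coefficient values are purely illustrative.

```python
import numpy as np

def inclination(lat_deg, g1=1.0, g2=0.0, g3=0.0):
    """Inclination I (degrees) of a time-averaged zonal geomagnetic field
    with dipole (g1), quadrupole (g2) and octupole (g3) Gauss coefficients,
    evaluated at the Earth's surface."""
    theta = np.radians(90.0 - lat_deg)   # colatitude
    c, s = np.cos(theta), np.sin(theta)
    # vertical (down) and horizontal (north) field components
    Z = 2*g1*c + 1.5*g2*(3*c**2 - 1) + 2*g3*(5*c**3 - 3*c)
    H = g1*s + 3*g2*s*c + 1.5*g3*s*(5*c**2 - 1)
    return np.degrees(np.arctan2(Z, H))

# Pure GAD recovers the classic dipole formula tan(I) = 2 tan(latitude)
lat = 30.0
assert abs(inclination(lat)
           - np.degrees(np.arctan(2*np.tan(np.radians(lat))))) < 1e-9
```

With, say, g2 = g3 = 0.1 (10% of the dipole, the upper bound suggested above), the curve departs only slightly from the pure GAD prediction, which is why such small terms are hard to detect in inclination frequency analyses.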
  • Pervilä, Mikko (Helsingin yliopisto, 2013)
    Within the field of computer science, data centers (DCs) are a major consumer of energy. A large part of that energy is used for cooling down the exhaust heat of the servers contained in the DCs. This thesis describes both the aggregate numbers of DCs and key flagship installations in detail. We then introduce the concept of Data Center Energy Retrofits, a set of low-cost, easy-to-install techniques that may be used by the majority of DCs for reducing their energy consumption. The main contributions are a feasibility study of direct free air cooling, two techniques that explore air stream containment, a wired sensor network for temperature measurements, and a prototype greenhouse that harvests and reuses the exhaust heat of the servers for growing edible plants, including chili peppers. We also project the energy savings attainable by implementing the proposed techniques, and show that global savings are possible even when very conservative installation numbers and payback times are modelled. Using the results obtained, we make a lower-bound estimate that direct free air cooling could reduce global greenhouse gas (GHG) emissions by 9.4 MtCO2e relative to the year 2005 footprint of the DCs. Air stream containment could reduce GHG emissions by a further 0.7 MtCO2e, and finally heat harvesting can turn the waste heat into additional profits. Much larger savings are already possible, since the DC footprint has increased considerably since 2005.
  • Junninen, Heikki (Helsingin yliopisto, 2014)
    In this thesis the concept of a data cycle is introduced. The concept itself is general and only acquires real content when the field of application is defined. Applied to the field of atmospheric physics, the data cycle includes measurements, data acquisition, processing, analysis and interpretation. The atmosphere is a complex system in which everything is in a constantly moving equilibrium. The scientific community agrees unanimously that it is human activity which is accelerating climate change. Nevertheless, a complete understanding of the process is still lacking. The biggest uncertainty in our understanding is connected to the role of nano- to micro-scale atmospheric aerosol particles, which are emitted to the atmosphere directly or formed from precursor gases. The latter process has only recently been discovered in the long history of science and links nature's own processes to human activities. The incomplete understanding of atmospheric aerosol formation and the intricacy of the process have motivated scientists to develop novel ways to acquire data, new methods to explore already acquired data, and unprecedented ways to extract information from the examined complex systems - in other words, to complete a full data cycle. Until recently it has been impossible to directly measure the chemical composition of the precursor gases and clusters that participate in atmospheric particle formation. However, with the arrival of the so-called atmospheric pressure interface time-of-flight mass spectrometer we are now able to detect atmospheric ions that are taking part in particle formation. The amount of data generated by on-line analysis of atmospheric particle formation with this instrument is vast and requires efficient processing. For this purpose dedicated software was developed and tested in this thesis. When combining processed data from multiple instruments, the information content increases, which requires special tools to extract useful information.
Source apportionment and data mining techniques were explored as well as utilized to investigate the origin of atmospheric aerosol in urban environments (two case studies: Krakow and Helsinki) and to uncover indirect variables influencing the atmospheric formation of new particles.
  • Tripathi, Abhishek (Helsingin yliopisto, 2011)
    The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach for exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples between the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as the matching of metabolites between humans and mice and the matching of sentences between documents in two languages.
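The multi-view matching idea - inferring a one-to-one correspondence between samples in two views - can be posed as an assignment problem. The toy example below is my own simplified illustration, not the thesis's algorithm; it assumes both views already live in the same feature space and recovers a hidden permutation by minimising total pairwise distance with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_views(X, Y):
    """Recover a one-to-one correspondence between the rows of two views
    X and Y by minimising total pairwise Euclidean distance (an
    assignment problem solved exactly by the Hungarian algorithm)."""
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols  # cols[i] = index of the Y row matched to X row i

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
perm = rng.permutation(20)
Y = X[perm] + 0.01 * rng.normal(size=(20, 5))  # shuffled, noisy copy of X
recovered = match_views(X, Y)
assert np.array_equal(perm[recovered], np.arange(20))  # permutation recovered
```

In the actual multi-view setting the views have different feature spaces, so a shared representation (e.g. via canonical correlations) has to be learned before such a matching step can be applied.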
  • Hätönen, Kimmo (Helsingin yliopisto, 2009)
    Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data are monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
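The frequent sets underlying the summarisation methods above can be illustrated with a toy miner of my own (not the thesis's method): count every combination of field=value pairs appearing in log entries and keep those whose support exceeds a threshold - the frequent combinations then stand in for the bulk of the log.

```python
from collections import Counter
from itertools import combinations

def frequent_sets(logs, min_support):
    """Toy frequent-itemset miner over log entries (dicts): counts every
    combination of (field, value) pairs per entry and keeps combinations
    occurring at least min_support times."""
    counts = Counter()
    for entry in logs:
        items = sorted(entry.items())  # canonical order for counting
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}

logs = [{'host': 'a', 'code': 500}, {'host': 'a', 'code': 500},
        {'host': 'b', 'code': 200}]
freq = frequent_sets(logs, min_support=2)
assert freq[(('code', 500), ('host', 'a'))] == 2  # a frequent pattern
assert (('host', 'b'),) not in freq               # below support threshold
```

A real miner would use Apriori-style pruning instead of enumerating all combinations, but the output - patterns that summarise many entries - is the same in spirit.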
  • Kontinen, Juha (Helsingin yliopisto, 2004)
  • Laaksonen, Tiina (Helsingin yliopisto, 2015)
    The detection of enantiomeric purity is an important part of synthetic chemistry. Especially when developing medicinal compounds, the determination of the amount of enantiomeric impurities is important, as one of the enantiomers may be poisonous or lethal to humans. Various methods exist for the study of enantiomeric purity, and NMR spectroscopy has been intensively studied as a tool for this purpose. As NMR is fast, readily available and easy to use, it provides an attractive way to study enantiomeric purity. In NMR, chiral discrimination is obtained by using chiral derivatising agents (CDAs) or chiral solvating agents (CSAs). CSAs have more potential in enantiomeric excess (ee) studies than CDAs, as they lack the disadvantages of CDAs (e.g. kinetic resolution and racemisation). As chiral carboxylic acids are important in the synthesis of medicinally attractive compounds, natural products and their metabolites, easily available and cheap CSAs that can be used for determining the enantiomeric purity of carboxylic acids are valuable. The present study mainly focuses on the development of CSAs suitable for the discrimination of non-ionic and ionic chiral carboxylic acids. (+)-Dehydroabietylamine was used as the chiral building block for these new CSAs, as it has several beneficial features: easy availability, low price, a structure amenable to CSA construction, and a known ability to resolve chiral carboxylic acids via crystallisation. Three different series of non-ionic and ionic CSAs were developed from (+)-dehydroabietylamine: 1) ammonium, 2) secondary amine and 3) imidazolium based CSAs. Their enantiomeric discrimination abilities were examined with Mosher's acid and its tetrabutylammonium salt. The best resolution was obtained with a non-ionic substrate and a non-ionic CSA, and with an ionic substrate and an ionic CSA. Ionic CSAs were also able to resolve non-ionic substrates, but the enantiomeric resolution remained poor.
The best performing CSAs were subjected to more detailed investigation. The stoichiometry of the diastereomeric complex formed between the CSA and the substrate was studied by titration experiments; CSA-substrate complexes were generally formed in a 1:1 ratio. The applicability of the CSAs to ee determination was studied, and they were able to detect the enantiomeric purities of samples with excellent reliability. Finally, their ability to resolve various α-substituted carboxylic acids was studied, showing that (+)-dehydroabietylamine-based CSAs are suitable for chiral carboxylic acids containing an electronegative α-substituent. The effect of measurement conditions and sample preparation when using cationic CSAs in enantiomeric discrimination was also investigated. Lower temperatures and low-polarity solvents were observed to increase enantiomeric discrimination, as was a high CSA concentration. Delocalisation of the negative charge in the counter anion of the CSA, as well as the use of an organic counter cation for the substrate, was also observed to increase enantiomeric discrimination.
  • Nurmi, Ville (Helsingin yliopisto, 2009)
    This thesis is a study of a rather new logic called dependence logic and its closure under classical negation, team logic. In this thesis, dependence logic is investigated from several aspects. Some rules are presented for quantifier swapping in dependence logic and team logic. Such rules are among the basic tools one must be familiar with in order to gain the required intuition for using the logic for practical purposes. The thesis compares Ehrenfeucht-Fraïssé (EF) games of first order logic and dependence logic and defines a third EF game that characterises a mixed case where first order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics. Also, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other logic. The thesis makes preliminary investigations into proof theory of dependence logic. Attempts focus on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem is identified and addressed in adapting a known proof system of classical propositional logic to become a proof system for the fragment, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system is suggested for the fragment and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics called 1-semantics is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and other semantics of dependence logic. 1-semantics is derived from first order semantics by a natural type shift. Therefore 1-semantics reflects an established semantics in a coherent manner. 
Negation in 1-semantics is a semantic operation and satisfies the law of the excluded middle. A translation is provided from unrestricted formulas of existential second order logic into 1-semantics. Game-theoretic semantics is also considered in the light of 1-semantics.
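The team semantics that dependence logic is built on can be sketched concretely: a team is a set of assignments, and the dependence atom =(x; y) holds in a team when any two assignments agreeing on x also agree on y. The snippet below is a minimal illustration of this standard satisfaction condition, not code from the thesis.

```python
from itertools import product

def satisfies_dep(team, xs, ys):
    """Team semantics of the dependence atom =(xs; ys): every pair of
    assignments in the team that agrees on all variables in xs must
    also agree on all variables in ys."""
    for s, t in product(team, repeat=2):
        if all(s[x] == t[x] for x in xs) and any(s[y] != t[y] for y in ys):
            return False
    return True

# A team is a set of assignments (here: dicts from variables to values).
team = [{'x': 0, 'y': 1}, {'x': 0, 'y': 1}, {'x': 1, 'y': 0}]
assert satisfies_dep(team, ['x'], ['y'])                        # y depends on x
assert not satisfies_dep(team + [{'x': 0, 'y': 0}], ['x'], ['y'])  # broken by new row
```

This set-valued evaluation is exactly what separates dependence logic from first order logic, where formulas are checked against one assignment at a time.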
  • Grönholm, Tiia (Helsingin yliopisto, 2012)
    Dry and wet deposition are removal mechanisms of atmospheric aerosol particles. Historically, scientific publications reporting experimentally determined dry deposition values for the ultra-fine size range have been very scarce. The physics of deposition is studied here both using micrometeorological field measurements conducted at the SMEAR II site in Hyytiälä, Southern Finland, and by modelling approaches. Dry deposition velocity depends mainly on particle size and on the magnitude of atmospheric surface layer turbulence. We present experimentally determined dry deposition velocity (vd) as a function of particle size for the ultra-fine aerosol size range (10 - 150 nm) using relaxed eddy accumulation and eddy-covariance (EC) methods accompanied by particle number size distribution measurements. The highest vd was found for 10 nm particles, and in all size classes vd increased with increasing friction velocity. By combining two-layer (above- and sub-canopy) EC measurements and a new multi-layer canopy deposition model, we addressed how dry deposition is distributed within the forest canopy and between the canopy and the underlying ground. According to the measurements, about 20 - 30 % of particles penetrated the canopy and deposited on the forest floor. The model results showed that turbophoresis, when accounted for at the leaf scale in vertically resolved models, could increase vd for 0.1 - 2 µm particles and explain why observations over forests generally do not support the pronounced minimum of deposition velocity for particles of that size. The developed multi-layer model was further used to study the effect of canopy structure (leaf-area shape and density) on vd. Scavenging coefficients for rain and snow deposition were calculated based on measurements of particle size distribution and precipitation. Parameterizations for both rain and snow wet deposition were derived, for example to be applied in air quality and global models.
Also, a model including both in-cloud and below-cloud wet deposition was developed and compared with the field measurements. Both snow and rain scavenging efficiency increased with increasing precipitation intensity. We also found that the effectiveness of snow scavenging depends on the crystal or snowflake structure and on the relative humidity of the air. Wet deposition was found to be an order of magnitude more effective as an "air cleaner" than dry deposition.
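The eddy-covariance estimate of dry deposition velocity used above follows the standard flux definition vd = -⟨w'c'⟩/⟨c⟩, where w' and c' are the turbulent fluctuations of vertical wind speed and particle concentration. The sketch below is an illustrative minimal implementation, not the thesis's actual processing chain (which includes despiking, coordinate rotation and averaging conventions omitted here).

```python
import numpy as np

def deposition_velocity(w, c):
    """Eddy-covariance dry deposition velocity: vd = -<w'c'> / <c>.
    w: vertical wind speed time series (m/s); c: particle concentration.
    A downward flux (negative covariance) yields a positive vd."""
    wp = w - w.mean()                 # fluctuations about the mean
    cp = c - c.mean()
    flux = np.mean(wp * cp)           # turbulent flux <w'c'>
    return -flux / c.mean()

# Synthetic series: concentration anti-correlated with updrafts
w = np.array([1.0, -1.0, 1.0, -1.0])
c = np.array([9.0, 11.0, 9.0, 11.0])  # mean 10, covariance -1
vd = deposition_velocity(w, c)        # -(-1)/10 = 0.1 m/s
assert abs(vd - 0.1) < 1e-12
```

Real half-hour EC averaging periods contain tens of thousands of samples; the four-point series here only makes the sign convention easy to verify by hand.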
  • Pohjoispää, Monika (Helsingin yliopisto, 2014)
    Lignans are naturally occurring compounds, polyphenolic secondary plant and mammalian metabolites. Due to their ubiquitous presence and biological activity, lignans have attracted the interest of scientists from different areas, such as nutrition scientists, pharmaceutical researchers and synthetic chemists. The research is very active, and the number of lignan-related publications has proliferated. Lignans vary widely in structure, and the present work focuses mainly on the (hydroxy)lignano-9,9’-lactones, their rearranged products, and 9,9’-epoxylignanes. The literature review introduces the stereochemistry and the assignment of the absolute configuration of these lignans. In addition, stable isotope labelling of lignans is reviewed. The experimental part focuses on the deuteration of lignans and on rearrangement and stereochemistry studies. The deuteration reaction utilising acidic H/D exchange within the lignan skeleton was investigated. The relative reactivity of various aromatic sites, the stability of the deuterium labels and the isotopic purity of the labelled compounds were examined. Experimental observations and results were compared to computational studies. Several stable, isotopically pure polydeuterated lignano-9,9’-lactones and 9,9’-epoxylignanes were synthesised. Alongside the deuteration experiments, unexpected reactivity in the electrophilic aromatic deuteration of methylenedioxy-substituted compounds was observed and further studied. In addition to deuteration, the stereochemistry of certain rearranged lignanolactones was a central subject of this study. Our findings allowed us to clarify some mechanistic aspects of the rearrangement reactions of 7’-hydroxylignano-9,9’-lactones and to revise certain disputable structural data in the literature. Furthermore, the X-ray structures of 7’-hydroxylignano-9,9’-lactones and rearranged 9’-hydroxylignano-9,7’-lactones were obtained for the first time.
  • Salminen, Susanna (Helsingin yliopisto, 2009)
    In this work, separation methods have been developed for the analysis of the anthropogenic transuranium elements plutonium, americium, curium and neptunium from environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found at both Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and Sodankylä (Finland). The origin of the plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylä, plutonium in the surface air originated from nuclear weapons testing, conducted mostly by the USSR and the USA before the sampling year 1963. The variation in americium, curium and neptunium concentrations was also great in peat samples collected in southern and central Finland in 1986, immediately after the Chernobyl accident. The main source of transuranium contamination in the peats was global nuclear test fallout, although there are wide regional differences in the fraction of Chernobyl-originated activity (of the total activity) for americium, curium and neptunium.
  • Jernström, Jussi (Helsingin yliopisto, 2006)
    Radioactive particles from three locations were investigated for elemental composition, oxidation states of matrix elements, and origin. Instrumental techniques applied to the task were scanning electron microscopy, X-ray and gamma-ray spectrometry, secondary ion mass spectrometry, and synchrotron radiation based microanalytical techniques comprising X-ray fluorescence spectrometry, X-ray fluorescence tomography, and X-ray absorption near-edge structure spectroscopy. Uranium-containing low activity particles collected from Irish Sea sediments were characterized in terms of composition and distribution of matrix elements and the oxidation states of uranium. Indications of the origin were obtained from the intensity ratios and the presence of thorium, uranium, and plutonium. Uranium in the particles was found to exist mostly as U(IV). Studies on plutonium particles from Runit Island (Marshall Islands) soil indicated that the samples were weapon fuel fragments originating from two separate detonations: a safety test and a low-yield test. The plutonium in the particles was found to be of similar age. The distribution and oxidation states of uranium and plutonium in the matrix of weapon fuel particles from Thule (Greenland) sediments were investigated. The variations in intensity ratios observed with different techniques indicated more than one origin. Uranium in particle matrixes was mostly U(IV), but plutonium existed in some particles mainly as Pu(IV), and in others mainly as oxidized Pu(VI). The results demonstrated that the various techniques were effectively applied in the characterization of environmental radioactive particles. An on-line method was developed for separating americium from environmental samples. The procedure utilizes extraction chromatography to separate americium from light lanthanides, and cation exchange to concentrate americium before the final separation in an ion chromatography column. 
The separated radiochemically pure americium fraction is measured by alpha spectrometry. The method was tested with certified sediment and soil samples and found to be applicable for the analysis of environmental samples containing a wide range of Am-241 activity. Proceeding from the on-line method developed for americium, a method was also developed for separating plutonium and americium. Plutonium is reduced to Pu(III), and separated together with Am(III) throughout the procedure. Pu(III) and Am(III) are eluted from the ion chromatography column as anionic dipicolinate and oxalate complexes, respectively, and measured by alpha spectrometry.
  • Adamov, Alexey (Helsingin yliopisto, 2012)
    This study focused on the development and evaluation of ion mobility instrumentation with various atmospheric pressure ionization techniques and includes the following work. First, a high-resolution drift tube ion mobility spectrometer (IMS), coupled with a commercial triple quadrupole mass spectrometer (MS), was developed. This drift tube IMS is compatible with the front-end of commercial Sciex mass spectrometers (e.g., Sciex API-300, 365, and 3000) and also allows easy installation (only minor modifications are needed) between the original atmospheric pressure ion source and the triple quadrupole mass spectrometer. Performance characteristics (e.g., resolving power, detection limit, transmission efficiency of ions) of this IMS-MS instrument were evaluated. Development of the IMS-MS instrument also led to a study proposing that tetraalkylammonium ions can be used as chemical standards for ESI-IMS. Second, the same drift tube design was also used to build a standalone ion mobility spectrometer equipped with a Faraday plate detector. For this high-resolution (resolving power of about 100 shown) IMS device, a multi-ion source platform was built, which allows the use of a range of atmospheric pressure ionization methods, such as corona discharge chemical ionization (CD-APCI), atmospheric pressure photoionization (APPI), and radioactive atmospheric pressure chemical ionization (R-APCI). The multi-ion source platform provides easy switching between ionization methods, and both positive and negative ionization modes can be used. Third, a simple desorption/ionization on silicon (DIOS) ion source set-up for use with the developed IMS and IMS-MS instruments was built and its operation demonstrated. Fourth, a prototype of a commercial aspiration-type ion mobility spectrometer was mounted in front of a commercial triple quadrupole mass spectrometer.
The set-up, which is simple, easy to install, and requires no major modifications to the MS, provides the possibility of gathering fundamental information about aspiration mobility spectrometry.
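The resolving power quoted for the standalone drift-tube instrument (about 100) follows the usual single-peak definition for drift-tube IMS: the drift time divided by the peak's full width at half maximum. A minimal sketch with illustrative numbers (not measurements from this work):

```python
def resolving_power(drift_time_ms, fwhm_ms):
    """Drift-tube IMS resolving power: Rp = t_d / w_1/2, the drift time
    of a peak divided by its full width at half maximum."""
    return drift_time_ms / fwhm_ms

# A 20 ms drift time with a 0.2 ms wide peak gives Rp = 100, the level
# reported for the standalone instrument described above.
assert resolving_power(20.0, 0.2) == 100.0
```

Higher drift voltages and longer drift tubes narrow the relative peak width, which is the usual route to higher resolving power in this instrument class.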
  • Tarvainen, Virpi (Helsingfors universitet, 2008)
    The volatile organic compounds (VOCs) emitted by vegetation, especially forests, can affect local and regional atmospheric photochemistry through their reactions with atmospheric oxidants. Their reaction products may also participate in the formation and growth of new particles, which affect the radiation balance of the atmosphere, and thus climate, by scattering and absorbing shortwave and longwave radiation and by modifying the radiative properties, amount and lifetime of clouds. Globally, anthropogenic VOC emissions are far surpassed by biogenic ones, making biogenic emission inventories an integral element in the development of efficient air quality and climate strategies. This thesis focuses on the VOC emissions of the boreal forest, the largest terrestrial biome, with characteristic vegetation patterns and strong seasonality. The isoprene, monoterpene and sesquiterpene emissions of Scots pine, the most prevalent boreal tree species in Finland, have been measured, and their seasonal variation and dependence on temperature and light have been studied. The measured emission data and other available observations of the emissions of the principal boreal trees have been used in a biogenic emission model developed for the forests in Finland. The model utilizes satellite land cover information, the Finnish forest classification and hourly meteorological data to calculate isoprene, monoterpene, sesquiterpene and other VOC emissions over the growing season. The main compounds emitted by the boreal forest throughout the growing season in Finland are alpha- and beta-pinene and delta-carene, with a strong contribution of sabinene from the deciduous trees in summer and autumn. The emissions follow the course of the temperature and are highest in the southern boreal zone, with a steady decline towards the north.
The isoprene emissions from the boreal forest are fairly low - the main isoprene emitters are the low-emitting Norway spruce and the high-emitting willow and aspen, whose foliage, however, represents only a very small percentage of the boreal leaf biomass. This work also includes the first estimate of sesquiterpene emissions from the boreal forest. The sesquiterpene emissions initiate after midsummer and are of the same order of magnitude as the isoprene emissions. At the annual level, the total biogenic emissions from the forests in Finland are approximately twice the anthropogenic VOC emissions.
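Emissions that "follow the course of the temperature" are commonly described with the exponential pool algorithm of Guenther et al. for monoterpenes. The sketch below uses that standard parameterisation with the typical β = 0.09 K⁻¹ and a 30 °C standard temperature; it is an assumed textbook form, not necessarily the exact model configuration of this work.

```python
import math

def monoterpene_emission(T, E_s, beta=0.09, T_s=303.15):
    """Temperature-dependent monoterpene emission (Guenther-style pool
    algorithm): E = E_s * exp(beta * (T - T_s)), where T is leaf/air
    temperature in kelvin and E_s is the emission potential at T_s."""
    return E_s * math.exp(beta * (T - T_s))

# At the standard temperature the emission equals the emission potential;
# with beta = 0.09 K^-1 it roughly doubles for every ~7.7 K of warming.
assert abs(monoterpene_emission(303.15, 1.0) - 1.0) < 1e-12
```

This exponential temperature response is what concentrates the modelled emissions in the warm southern boreal zone and in the summer months.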
  • Vastamäki, Pertti (Helsingin yliopisto, 2014)
    This research work focused on the development of the instrumentation, operation and approximate theory of a new continuous two-dimensional thermal field-flow fractionation (2D-ThFFF) technique for the separation and collection of macromolecules and particles. The separation occurs in a thin disk-shaped channel, where a carrier liquid flows radially from the center towards the perimeter of the channel, and a steady stream of the sample solution is introduced continuously at a second inlet close to the center of the channel. Under the influence of the thermal field, the sample components are separated in the radial direction according to the analytical ThFFF principle. Simultaneously, the lower channel wall rotates with respect to the stationary upper wall, and the shear-driven flow profile deflects the separated sample components into continuous trajectories that strike off at different angles over the 2D surface. Finally, the sample components are collected at the outer rim of the channel, and the sample concentrations in each fraction are determined with analytical ThFFF. The samples were polystyrene polymer standards; the carrier solvents were cyclohexane and a cyclohexane-ethylbenzene mixture in continuous 2D-ThFFF, and tetrahydrofuran in analytical ThFFF. The thermal field had a positive effect on the sample deflection, although broadening of the sample zone was observed. Decreasing the channel thickness and the radial and angular flow rates of the carrier significantly reduced the zone broadening. Systematic variation of the experimental parameters allowed determination of the conditions required for the continuous fractionation of polystyrene polymers according to their molar mass. As an example, almost baseline separation was achieved with two polystyrene samples of different molar masses.
Meanwhile, an approximate theoretical model was developed for predicting the trajectory of the sample component zone and its angular displacement under various operating conditions. The trends in the deflection angles, without and with a thermal gradient, were qualitatively in agreement with the predictions of the model, but significant quantitative differences were found between the theoretical predictions and the experimental results. The reasons for the discrepancies between theory and experiment could be the following: relaxation of the sample already at the sample inlet, the effect of solvent partition when a binary solvent is used as the carrier, dispersion of the sample, limitations of the instrument, and geometrical imperfections. Despite its incompleteness, the theoretical model provides guidelines for future interpretation and optimization of separations by the continuous 2D-ThFFF method.