Faculty of Science


Recent Submissions

  • Parviainen, Arno (Helsingin yliopisto, 2016)
    As the consumption of natural resources increase with the exponential world population growth, the food industry needs to answer the demand. This means that farming has to be increased and optimized from its current level. The problem is summoned from the fact that the amount of arable land is decreasing. This eventually leads to prioritizing the land for food crops and the downscaling the production of i.e. palm oil and cotton. Cotton is ~90% pure cellulose and is used for textile commodities for its properties over synthetic fibers. The same cellulose can be found all around in nature, from the structure of trees to algae. Cellulose is the world s most commonly found polymer and it is generated annually in nature enough to stop cotton farming altogether. The problem is the low solubility of cellulose to commonly used solvents. The extensive hydrogen bonding network of cellulose gives this biopolymer its strong features. The structure of cellulose and the chemical features has been known for a century and a half to this date, but solubilization of cellulose has evaded a more systemic, yet pragmatic approach. There have been introductions of various types of solvent systems for cellulose dissolution, from which ionic liquids have been the most successful class of solvents. The research performed in this thesis has been focusing on the research and development of new cellulose dissolving ionic liquids. A class of imidazolium based ionic liquids was used as the starting point for the development, since they exhibit high dissolutive power and relatively low viscosities. The chemical stability of the solvent system needs to sustain various kinds of chemical and physical stress without compromising process safety, ecology or economy. Our research indicated that the acidity-basicity of the ionic liquid components was correlating with the chemical-physical stability of the solvents. 
The higher the basicity, the less stable the ionic liquid became; at the same time, it was found that ionic liquids synthesized from less basic components were not able to dissolve cellulose in the first place. We calculated the gas-phase basicities (proton affinities) of bases of various types and strengths using a simple and efficient computational method. After the calculations were done, we combined the bases with acetic acid to form acetate ionic liquids, and correspondingly with propionic acid to form propionates. On examining the cellulose dissolution capability, we discovered a threshold basicity at which the ability to dissolve cellulose emerged. In collaboration with Aalto University, we developed an ionic liquid that could be used in industrial-scale production of cellulose fibers. The research was then steered towards investigating the chemical stability and recyclability of this new ionic liquid.
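The screening logic described above can be sketched in a few lines. The proton affinity values, base names and the threshold below are illustrative placeholders, not data from the thesis:

```python
# Hypothetical sketch of the basicity screening described above: tabulate
# computed gas-phase proton affinities (PA) for candidate bases, then keep
# those at or above an assumed dissolution threshold. All numbers are made up.

PA_THRESHOLD_KJ_MOL = 950.0  # assumed threshold basicity (placeholder value)

computed_pa = {  # base -> computed gas-phase proton affinity, kJ/mol (made up)
    "1-methylimidazole": 960.0,
    "pyridine": 930.0,
    "DBN": 1000.0,
}

def dissolving_candidates(pa_table, threshold=PA_THRESHOLD_KJ_MOL):
    """Return the bases whose proton affinity meets the dissolution threshold."""
    return sorted(base for base, pa in pa_table.items() if pa >= threshold)

print(dissolving_candidates(computed_pa))  # ['1-methylimidazole', 'DBN']
```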
  • Söderlund, Mervi (Helsingin yliopisto, 2016)
    In biosphere safety assessment of spent nuclear fuel, the importance of radionuclides increases with their possibility to induce radiation dose for humans and other organisms in the future. The surface environment migration and sorption of 135Cs, 129I, 93m,94Nb and 79Se is of great importance since these radionuclides have been assessed to contribute to the potential radiation dose in the most realistic biosphere calculation cases. This doctoral thesis aimed to investigate the retention and behaviour of cesium (Cs+), iodine (I- and IO3-), niobium (Nb(OH)5) and selenium (SeO32- and SeO42-) in humus and rather undeveloped mineral soil of boreal forest on Olkiluoto Island when abiotic factors affecting the sorption reactions were varied. Factors affecting species transformations of iodine and selenium were also examined for the same soil samples and under the same experimental conditions. Cesium retention was affected by e.g. incubation conditions, soil depth, pH, humus and mineralogy. Humus exhibited lower sorption of cesium than mineral soil, which was caused by mineral soil s relatively high muscovite content and the presence of Cs selective FES sites in muscovite interlayer spaces. Formation of slightly reducing soil conditions decreased soil retention of cesium, presumably caused by the formation of NH4+ ions and arisen competition of the FES sites. Increase in soil pH accelerated the retention of cesium on negatively charged surface sorption sites. The highest retention of inorganic iodine forms iodide and iodate were observed in humus, as caused by sorption processes and speciation changes leading to presumable formation of organo iodine compounds in microbially mediated reactions. Iodine sorption on mineral soil was very low in aerobic and anaerobic soil conditions, even though acidic pH values increased the retention. Decrease in pH had similar effect for selenium (selenite) and niobium. 
For these two elements, inorganic soil components, especially weakly crystalline aluminium and iron oxides, are considered important retentive phases due to inner-sphere complexation with surface Al and Fe atoms. The speciation of iodine showed considerable dependence on the soil environment. Iodate was reduced to iodide especially in anaerobic soil conditions, at low pH, and in the presence of organic matter and microbial activity. No oxidation of iodide to iodate was detected. The formation of unidentified, presumably organo-iodine compounds was observed in humus and in mineral soil at low pH or with varying incubation time. The inorganic selenium forms selenite and selenate proved to be persistent under the experimental conditions, as no changes in selenium liquid-phase speciation were observed irrespective of variation in incubation time, pH or redox potential.
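Batch sorption experiments of the kind summarized above are conventionally reported through the distribution coefficient Kd, the sorbed amount per gram of soil divided by the equilibrium solution concentration. The term and the numbers below are a generic illustration, not quantities taken from the thesis:

```python
def distribution_coefficient(c_init, c_eq, volume_ml, mass_g):
    """Kd (mL/g) for a simple batch experiment: tracer sorbed per gram of
    soil, divided by the equilibrium solution concentration."""
    return (c_init - c_eq) / c_eq * volume_ml / mass_g

# Illustrative batch: 10 mL of solution on 0.5 g of soil, with the tracer
# concentration dropping from 100 to 10 units (90% sorbed).
kd = distribution_coefficient(100.0, 10.0, 10.0, 0.5)
print(kd)  # 180.0
```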
  • Zhao, Junlei (Helsingin yliopisto, 2016)
    "There's Plenty of Room at the Bottom.", the lecture by Prof. Richard Feynman on December, 29th, 1959 at Caltech, USA, describes the field, which is "not quite the same as the others in that it will not tell us much of fundamental physics but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations." This simple inspiring idea has often been referred to as the first "seed" of one of the most promising interdisciplinary branches of science, nanoscience. Nanoparticles (NPs), one of the primary building blocks for nanostructures and its application, have been incidentally synthesized and used by ancient Romans when manufacturing beautiful cups. Modern technology requires the synthesis of NPs to be precise for specific application. The composition, structure, morphology and size are four parameters which dominate the properties of NPs. How to develop a method which can control these parameters accurately and precisely is an essential question for the researchers of nanoscience. Among the wide range of existing synthesis methods, magnetron sputtering inert gas condensation has been commonly used during recent years. The method allows simultaneous control of composition, magnetron power, inert gas pressure, NP drift velocity, and aggregation zone length. To achieve a reliable control of the fabricated NPs, it is essential to understand how the nano-scale growth is influenced by these experimental conditions. In this thesis, the growth mechanisms of Si, NiCr and Fe nanoparticles are studied using multi-scale simulation methods. We investigate the effects of the macro-scaled experimental parameters on the structural properties of nanoparticles. The work presented here is a step towards the understanding of the growth process of NPs in inert gas condensation chambers and the precise control of NP properties.
  • Karadzhinova-Ferrer, Aneliya (Helsingin yliopisto, 2016)
    The purpose of this thesis is to develop, establish and apply novel quality assurance (QA) methods for nuclear and high-energy physics particle detectors. The detectors should be maintenance-free since devices can only be replaced during long technical shut-downs. Furthermore, the detector modules must endure handling during installation and withstand heat generation and cooling during operations. Longevity in a severe radiation environment must also be assured. Visual inspection and electrical characterisation of particle detectors are presented in this work. The detector studies included in this thesis, while based on different technologies, were united by the demand for reliable and enduring particle detectors. Four major achievements were accomplished during the the Gas Electron Multiplier (GEM) foil studies: a software analysis capable of precise foil inspection was developed, a rigorous calibration procedure for the Optical Scanning System was established, a detailed 3D GEM foil hole geometry study was performed for the first time and an impact of the hole geometry on the detector gain was confirmed. Promising results were also achieved during the solid-state detectors studies. A new technique for assuring the height uniformity of the chip interconnections in the pixel detector modules was proposed and implemented. Two semiconductor detectors (Si and GaAs) were designed, microfabricated and tested. The consistency of the QA results demonstrated the detectors reliability and preparedness to serve the needs of future particle and nuclear physics experiments. During the performed studies, strict calibration techniques and measurement uncertainties were applied to guarantee the trustworthy accuracy of the used measurement tools. Thus, all quality assurance techniques presented in this thesis were held in clean conditions at monitored temperature and humidity. 
The combined results of this thesis demonstrate the importance of adequate quality assurance for guaranteeing accurate data collection and a long detector operating life.
  • Hoikkala, Antti (Helsingin yliopisto, 2016)
    Isoflavonoid phytoestrogens are secondary plant metabolites, which structurally or functionally resemble 17β-estradiol and they originally received attention due to breeding problems affecting ewes grazing on subterranean clover. Later research of phytoestrogens has been focusing on the possible beneficial effects as oestrogen agonists or anti-estrogens. Due to their health promoting effects, the knowledge of the occurrence of isoflavonoids and their metabolites in food and biological fluids as well as the better understanding of their metabolic pathways have been the main aspects in the research field. The literature review introduces the biological significance of isoflavonoids in plants along with various analytical techniques used for the determination of these compounds in biological matrices. This is followed by a discussion of the isoflavone metabolism in humans, rodents, and ruminants. The experimental part focuses on the synthetic methods used for the preparation of the isoflavonoids, and on four studies in which they were used. In the first two studies cow milk was analysed. It was shown that commercial organic cow milk contains high levels of equol along with much lower levels of the other isoflavonoids typically found in milk and milk products. The levels of equol detected in organic milk were significantly higher than the levels found in normal milk which corresponds to the fodder that the cows are fed. In the following study five Finnish Ayrshire cows were subjected to a red clover rich diet in order to study the metabolism of the isoflavones futher. Equol and a hitherto unquantifed metabolite, 3 ,7-dihydroxyisoflavan was detected and quantitatively measured in milk samples. In another study, the metabolisation of genistein fatty acid ester was studied after (oral or parenteral) administration to adult female rhesus monkeys. 
It turned out that genistein fatty acid ester levels depended on the form of administration, and that it may be possible to introduce intact genistein ester molecules into plasma by parenteral, but not oral, administration. The last study focuses on the metabolism of the soy isoflavones daidzein, genistein, and glycitein in humans. After oral administration of these isoflavones through the ingestion of soy-enriched food, daily urine samples from the volunteers were analysed. This led to the identification of several metabolites and to a proposal of the metabolic pathways of the ingested isoflavones.
  • Athukorala, Kumaripaba (Helsingin yliopisto, 2016)
    We use information retrieval (IR) systems to meet a broad range of information needs, from simple ones involving day-to-day decisions to complex and imprecise information needs that cannot be easily formulated as a question. In consideration of these diverse goals, search activities are commonly divided into two broad categories: lookup and exploratory. Lookup searches begin with precise search goals and end soon after reaching of the target, while exploratory searches center on learning or investigation activities with imprecise search goals. Although exploration is a prominent life activity, it is naturally challenging for users because they lack domain knowledge; at the same time, information needs are broad, complex, and subject to constant change. It is also rather difficult for IR systems to offer support for exploratory searches, not least because of the complex information needs and dynamic nature of the user. It is hard also to conceptualize exploration distinctly. In consequence, most of the popular IR systems are targeted at lookup searches only. There is a clear need for better IR systems that support a wide range of search activities. The primary objective for this thesis is to enable the design of IR systems that support exploratory and lookup searches equally well. I approached this problem by modeling information search as a rational adaptation of interactions, which aids in clear conceptualization of exploratory and lookup searches. In work building on an existing framework for examination of adaptive interaction, it is assumed that three main factors influence how we interact with search systems: the ecological structure of the environment, our cognitive and perceptual limits, and the goal of optimizing the tradeoff between information gain and time cost. 
This thesis contributes three models developed in research proceeding from this adaptive interaction framework, to 1) predict evolving information needs in exploratory searches, 2) distinguish between exploratory and lookup tasks, and 3) predict the emergence of adaptive search strategies. It concludes with the development of an approach that integrates the proposed models for the design of an IR system that provides adaptive support for both exploratory and lookup searches. The findings confirm that information search can be modeled as adaptive interaction. The models developed in the thesis project have been empirically validated through user studies, with an adaptive search system that emphasizes the practical implications of the models for supporting several types of searches. The studies conducted with the adaptive search system further confirm that IR systems can improve information search performance by dynamically adapting to the task type. The thesis thus contributes an approach that could prove fruitful for future IR systems in efforts to offer more efficient and less challenging search experiences.
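The assumed tradeoff between information gain and time cost can be illustrated by a rational searcher picking the interaction strategy with the highest expected gain per unit time. The strategy names and numbers below are hypothetical, not taken from the thesis:

```python
# Hypothetical illustration of adaptive strategy selection: maximize expected
# information gain per unit time cost. All strategies and values are made up.

strategies = {
    # strategy: (expected information gain, expected time cost in seconds)
    "skim many snippets": (4.0, 20.0),
    "read one document deeply": (9.0, 60.0),
    "reformulate the query": (2.0, 8.0),
}

def best_strategy(options):
    """Return the strategy with the highest gain-to-cost ratio."""
    return max(options, key=lambda s: options[s][0] / options[s][1])

print(best_strategy(strategies))  # reformulate the query
```

Under this toy model the cheapest action wins despite its small absolute gain, which is the kind of rate-optimal behaviour the adaptive interaction framework predicts.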
  • Miihkinen, Santeri (Helsingin yliopisto, 2016)
    The topic of this dissertation lies at the intersection of analytic function theory and operator theory. In the thesis, compactness and structural properties of a class of Volterra-type (integral) operators acting on analytic function spaces are investigated. The Volterra-type operator is obtained by integrating a product of two analytic functions, where one of these functions, the so-called symbol of the operator, is fixed and the other one is considered to be a variable. This integral operator was introduced by C. Pommerenke in 1977 in connection to exponential integrability of BMOA-functions. A systematic research of Volterra-type operators was initiated by Aleman and Siskakis in the mid-1990s when they characterized the boundedness and compactness of these operators on the Hardy spaces and weighted Bergman spaces. In the first article of the thesis, we derive estimates for the essential and weak essential norms of a Volterra-type operator in terms of its symbol when the operator is acting on the Hardy spaces, BMOA and VMOA. The essential and weak essential norms of a linear operator are its distances from compact and weakly compact operators respectively. In particular, it follows from our estimates that the compactness and weak compactness of Volterra-type operator coincide when its domain is the non-reflexive Hardy space, BMOA, or VMOA. In the second article, a notion of strict singularity of a linear operator is investigated in the case of the Volterra-type operator acting on the Hardy spaces. An operator between Banach spaces is strictly singular if its restriction to any closed infinite-dimensional subspace is not a linear isomorphism onto its range. We construct an isomorphic copy M of the sequence space of p-summable sequences and show that a non-compact Volterra-type operator restricted to M is a linear isomorphism onto its range. This implies that the strict singularity and compactness of this operator coincide in the Hardy space case. 
In the third article, we provide estimates for the operator norms and essential norms of the Volterra-type operator acting between weighted Bergman spaces, where the weight function satisfies a doubling condition.
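In the notation standard in this literature, the operator described above, with fixed symbol g acting on an analytic function f on the unit disc, reads:

```latex
% Volterra-type operator with symbol g, acting on analytic f on the unit disc:
(T_g f)(z) = \int_0^z f(\zeta)\, g'(\zeta)\, \mathrm{d}\zeta, \qquad z \in \mathbb{D}.
```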
  • Hakala, Jani (Helsingin yliopisto, 2016)
    Atmospheric aerosols are omnipresent. They affect health via inhalation or skin and eye contact, and reduce visibility. They also contribute to climate directly by absorbing and scattering solar radiation, and indirectly by acting as cloud condensing nuclei, thus affecting the cloud formation process. The climate has been affected by human activity since the preindustrial times. Both anthropogenic aerosol particle and greenhouse gas emissions have seen a drastic increase since the industrial revolution. As far as is known, most of the anthropogenic aerosol particles excluding black carbon or soot particles have a cooling effect on climate, partially negating the warming effect of increased greenhouse gas emissions. All in all, the effects of aerosol particles, be their origin anthropogenic on not, are considered to cause the highest uncertainty in climate models, making aerosol studies crucial for more accurate future climate predictions. The particle size is one of the most important properties to know, as it has a great impact on the effects and fate of aerosol particles in the atmosphere, or inside our respiratory system. The hygroscopicity of aerosol particles, or their ability to absorb water, determines the size of the particles in different relative humidity (RH) conditions. Dry water-soluble salt particles can double their size in diameter at the RH of 90%, whereas soot particles and fresh organics experience little to no growth. By studying the hygroscopic growth of aerosol particles, we gain important knowledge on the particle size and phase state in varying RH conditions, chemical composition, and mixing state both external and internal. This thesis is focused on measuring the hygroscopic properties of aerosol particles. Most of the hygroscopicity studies contained here were conducted using the volatility-hygroscopicity tandem differential mobility analyzer (VH-TDMA) that we built within our group in University of Helsinki. 
The main conclusions we arrived at are: 1) The VH-TDMA we built is indeed an accurate and versatile tool for aerosol hygroscopicity and volatility studies. It is capable of determining the external mixing state of aerosol particles (in terms of hygroscopicity and volatility) and serves as a good indirect method for estimating their chemical composition. 2) Hygroscopicity studies conducted at sub- and supersaturation conditions may yield significantly different results when measuring organic aerosols. The hygroscopic growth measured in supersaturation may greatly overestimate the growth in subsaturation, which in turn overestimates the scattering and the cooling effect of aerosols on climate. 3) The lensing effect of refractive material on the surface of soot particles and its absorption enhancement may have been exaggerated in previous studies. Our field measurements showed an average enhancement of 6%, while previous estimates have been as high as 200%. Lastly, one of the key points of this thesis is to promote the use of the H-TDMA technique in the field of aerosol science. The technique has largely been replaced by the use of cloud condensation nuclei counters (CCNC). The H-TDMA technique is far more accurate and versatile, and, in my opinion, it is easier to measure in subsaturation and predict the outcome in supersaturation than vice versa.
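The hygroscopic growth discussed above is conventionally quantified by the growth factor GF(RH), the ratio of the humidified to the dry particle diameter. A minimal sketch, using only the example numbers from the abstract itself:

```python
def growth_factor(d_wet_nm, d_dry_nm):
    """Hygroscopic growth factor GF(RH) = humidified diameter / dry diameter."""
    return d_wet_nm / d_dry_nm

# The example from the text: a water-soluble salt particle roughly doubling
# its diameter at 90% RH, versus a soot particle with little to no growth.
print(growth_factor(200.0, 100.0))  # 2.0  (strongly hygroscopic salt)
print(growth_factor(101.0, 100.0))  # 1.01 (nearly hydrophobic soot)
```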
  • Kelaranta, Anna (Helsingin yliopisto, 2016)
    The organ dose is the radiation energy absorbed from ionizing radiation by an organ, divided by the organ mass. Organ doses cannot be measured directly in the patient; their determination requires dose measurements in anthropomorphic patient models, i.e. phantoms, or Monte Carlo simulations. Monte Carlo simulations can be performed, for example, using computational phantoms or the patient's computed tomography (CT) images. Organ doses can be estimated from measurable dose quantities, such as the air kerma, kerma-area product and volume-weighted CT dose index, by using suitable conversion coefficients. The conversion coefficient is the organ dose divided by the measured or calculated examination-specific dose quantity. According to current knowledge, the probability of radiation-induced stochastic effects, which include cancer risk and the risk of hereditary effects, increases linearly as a function of the radiation dose. The organ dose is a better quantity for estimating patient-specific risk than the effective dose, which is meant to be used only for populations and does not consider patient age or gender. Moreover, the tissue weighting factors used in the effective dose calculation are based on whole-body irradiations, whereas in X-ray examinations only a part of the patient is exposed to radiation. The phantoms used in medical dosimetry are either computational or physical, and computational phantoms are further divided into mathematical and voxel phantoms. Phantoms ranging from simplified to as realistic as possible have been developed to simulate different targets, but the organ doses determined with them can differ considerably from the real organ doses of the patient. There are also standard and reference phantoms in use, which offer a dose estimate for a so-called average patient. Due to the considerable variation in patient anatomies, the real dose might differ from the dose to a standard or reference phantom.
The aim of this thesis was to determine organ doses based on dose measurements and Monte Carlo simulations in four X-ray imaging modalities: general radiography, CT, mammography and dental radiography. The effect of patient and phantom thickness and radiation quality on the organ doses in a projection X-ray examination of the thorax was studied via Monte Carlo simulations using both mathematical phantoms and patient CT images. The effect of breast thickness on the mean glandular doses (MGDs) was determined based on measurements with phantoms of different thicknesses and on diagnostic and screening data collected from patient examinations, and the radiation qualities used in the patient and phantom exposures were studied. For fetal dose estimation, fetal dose conversion coefficients were determined based on phantom measurements in CT and dental radiography examinations. Additionally, the effect of lead shields on fetal and breast doses was determined in dental examinations. The difference between Monte Carlo simulated organ doses in patients and in mathematical phantoms was large, up to 55% for the examined organs in projection imaging. In mammographic examinations, the difference between MGDs calculated from collected patient data and from phantom measurements was up to 30%; in mammography, patient dose data cannot be replaced by phantom measurements. The properties and limitations of the phantoms must be known when they are used. Estimating the fetal dose from conversion coefficients requires an understanding of the cases in which the conversion coefficients are applicable. When used correctly, they provide a simple method of dose estimation in which the application-specific dose quantity can be taken into account. The conversion coefficients determined in this thesis can be used to estimate the fetal dose in CT examinations based on the volume-weighted CT dose index (CTDIvol), and in dental examinations based on the dose-area product (DAP).
In projection imaging, the lung and breast doses decreased as the patient's anterior-posterior thickness increased, but in mammography the MGDs increased with increasing compressed breast thickness. In CT examinations, the fetal dose remained almost constant in examinations where the fetus was entirely within the primary radiation beam. When the fetus was outside the primary beam, the fetal dose increased exponentially with decreasing distance of the fetus from the scan range. As a function of the half-value layer (HVL), the conversion coefficients in the studied projection imaging examination were more convergent than as a function of the tube voltage; the HVL alone describes the radiation quality better than the tube voltage alone, which also requires knowledge of the total filtration. In mammography, it is possible for a phantom and a patient of the same equivalent thickness to be irradiated with different radiation qualities when automatic exposure control is used. Despite the relatively large shielding effect achievable with lead shielding in dental imaging, the fetal dose without lead shielding and the related exposure-induced increase in the risk of childhood cancer death are minimal (less than 10 µGy and 10^-5 %, respectively), so there is no need for abdominal shielding. The exposure-induced increase in the risk of breast cancer death is of the same order of magnitude as the increase in the risk of childhood cancer death, so breast shielding was also considered unnecessary. Most importantly, a clinically justified dental radiographic examination must never be avoided or postponed because of a pregnancy.
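The conversion-coefficient approach described above amounts to a single multiplication of the measured dose quantity by the coefficient. The coefficient and dose values below are illustrative placeholders, not the coefficients determined in the thesis:

```python
def organ_dose(conversion_coefficient, dose_quantity):
    """Organ dose = conversion coefficient x examination-specific dose quantity
    (e.g. CTDIvol for CT examinations, DAP for dental examinations)."""
    return conversion_coefficient * dose_quantity

# Illustrative only: a CT scan with CTDIvol = 10 mGy and an assumed fetal dose
# conversion coefficient of 0.3 (mGy of fetal dose per mGy of CTDIvol).
fetal_dose_mGy = organ_dose(0.3, 10.0)
print(fetal_dose_mGy)  # 3.0
```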
  • Marcozzi, Matteo (Helsingin yliopisto, 2016)
    By time dependent stochastic systems we indicate efffective models for physical phenomena where the stochasticity takes into account some features whose analytic control is unattainable and/or unnecessary. In particular, we consider two classes of models which are characterized by the different role of randomness: (1) deterministic evolution with random initial data; (2) truly stochastic evolution, namely driven by some sort of random force, with either deterministic or random initial data. As an example of the setting (1) in this thesis we will deal with the discrete nonlinear Schrödinger equation (DNLS) with random initial data and we will mainly focus on its applications concerning the study of transport coefficients in lattice systems. Since the seminal work by Green and Kubo in the mid 50 s, when they discovered that transport coefficients for simple fluids can be obtained through a time integral over the respective total current correlation function, the mathematical physics community has been trying to rigorously validate these predictions and extend them also to solids. In particular, the main technical difficulty is to obtain at least a reliable asymptotic form of the time behaviour of the Green-Kubo correlation. To do this, one of the possible approaches is kinetic theory, a branch of the modern mathematical physics stemmed from the challenge of deriving the classical laws of thermodynamics from microscopic systems. Nowadays kinetic theory deals with models whose dynamics is transport dominated in the sense that typically the solutions to the kinetic equations, whose prototype is the Boltzmann equation, correspond to ballistic motion intercepted by collisions whose frequency is order one on the kinetic space-time scale. 
Referring to the articles in the thesis by Roman numerals [I]-[V]: in [I] and [II] we build technical tools, namely Wick polynomials and their connection with cumulants, to pave the way towards the rigorous derivation of a kinetic equation, the Boltzmann-Peierls equation, from the DNLS model. The paper [III] belongs to the same framework of kinetic predictions for transport coefficients. In particular, we consider the velocity flip model, which belongs to family (2) of our classification above, since it consists of a particle chain with harmonic interactions and a stochastic term that flips the velocities of the particles. In [III] we perform a detailed study of the position-momentum correlation matrix via two different methods and obtain an explicit formula for the thermal conductivity. Moreover, in [IV] we consider the Lorentz model perturbed by an external magnetic field, which can be placed in class (1): it is a gas of non-interacting particles colliding with obstacles located at random positions in the plane. Here we show that under a suitable scaling limit the system is described by a kinetic equation in which the magnetic field affects only the transport term, but not the collisions. Finally, in [V] we study a generalization of the famous Kardar-Parisi-Zhang (KPZ) equation, which falls into category (2), being a nonlinear stochastic partial differential equation driven by space-time white noise. Spohn has recently introduced a generalized, vector-valued KPZ equation in the framework of nonlinear fluctuating hydrodynamics for anharmonic particle chains, a research field that is again closely connected to the investigation of transport coefficients. The problem with the KPZ equation is that it is ill-posed. However, in 2013 Hairer succeeded in giving a rigorous mathematical meaning to solutions of the KPZ equation via an approximation scheme involving the renormalization of the nonlinear term by a formally infinite constant.
In [V] we tackle a vector-valued generalization of the KPZ equation and prove local-in-time well-posedness using a technique inspired by the so-called Wilsonian Renormalization Group.
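For orientation, the two formulas at the heart of this abstract, written here in their standard scalar forms (normalization constants vary by convention, and the thesis itself treats a vector-valued generalization of the second):

```latex
% Green--Kubo prediction: the transport coefficient is a time integral of the
% total current correlation function (up to a conventional prefactor):
\kappa \;\propto\; \int_0^\infty \langle J(t)\, J(0) \rangle \,\mathrm{d}t .

% Scalar KPZ equation for a height field h(x,t), driven by
% space-time white noise \xi:
\partial_t h \;=\; \nu\, \partial_x^2 h \;+\; \tfrac{\lambda}{2}\, (\partial_x h)^2 \;+\; \xi .
```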
  • Tukiainen, Simo (Helsingin yliopisto, 2016)
    Measurements of the Earth's atmosphere are crucial for understanding the behavior of the atmosphere and the underlying chemical and dynamical processes. Adequate monitoring of stratospheric ozone and greenhouse gases, for example, requires continuous global observations. Although expensive to build and complicated to operate, satellite instruments provide the best means for the global monitoring. Satellite data are often supplemented by ground-based measurements, which have limited coverage but typically provide more accurate data. Many atmospheric processes are altitude-dependent. Hence, the most useful atmospheric measurements provide information about the vertical distribution of the trace gases. Satellite instruments that observe Earth's limb are especially suitable for measuring atmospheric profiles. Satellite instruments looking down from the orbit, and remote sensing instruments looking up from the ground, generally provide considerably less information about the vertical distribution. Remote sensing measurements are indirect. The instruments observe electromagnetic radiation, but it is ozone, for example, that we are interested in. Interpreting the measured data requires a forward model that contains physical laws governing the measurement. Furthermore, to infer meaningful information from the data, we have to solve the corresponding inverse problem. Atmospheric inverse problems are typically nonlinear and ill-posed, requiring numerical treatment and prior assumptions. In this work, we developed inversion methods for the retrieval of atmospheric profiles. We used measurements by Optical Spectrograph and InfraRed Imager System (OSIRIS) on board the Odin satellite, Global Ozone Monitoring by Occultation of Stars (GOMOS) on board the Envisat satellite, and ground-based Fourier transform spectrometer (FTS) at Sodankylä, Finland. For OSIRIS and GOMOS, we developed an onion peeling inversion method and retrieved ozone, aerosol, and neutral air profiles. 
From the OSIRIS data, we also retrieved NO2 profiles. For the FTS data, we developed a dimension reduction inversion method and used Markov chain Monte Carlo (MCMC) statistical estimation to retrieve methane profiles. The main contributions of this work are the retrieved OSIRIS and GOMOS satellite data sets, and the novel retrieval method applied to the FTS data. Long satellite data records are useful for trend studies and for distinguishing between anthropogenic effects and natural variations. Before this work, GOMOS daytime ozone profiles were missing from scientific studies because the operational GOMOS daytime occultation product contains large biases. The GOMOS bright limb ozone product vastly improves the stratospheric part of the GOMOS daytime ozone. On the other hand, the dimension reduction method is a promising new technique for the retrieval of atmospheric profiles, especially when the measurement contains little information about the vertical distribution of gases.
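Although the operational retrievals are far more elaborate, the onion peeling idea mentioned above is simple: a limb ray with a given tangent height only samples atmospheric shells at or above that height, so the system relating slant columns to shell densities is triangular and can be solved shell by shell from the top down. A minimal Python sketch under invented assumptions (the shell spacing, the exponential test profile and the absence of noise are all illustrative, not the OSIRIS/GOMOS processing):

```python
import numpy as np

# Hypothetical shell geometry: radii of shell boundaries (km from Earth's centre)
R_EARTH = 6371.0
edges = R_EARTH + np.arange(10.0, 61.0, 5.0)   # shell boundaries at 10-60 km
tangent = edges[:-1]                            # one ray tangent at each lower edge

def path_matrix(edges, tangent):
    """Geometric path length of each limb ray through each spherical shell."""
    n = len(tangent)
    L = np.zeros((n, n))
    for i, rt in enumerate(tangent):
        for j in range(i, n):                   # ray i only crosses shells j >= i
            lo, hi = edges[j], edges[j + 1]
            L[i, j] = 2.0 * (np.sqrt(hi**2 - rt**2)
                             - np.sqrt(max(lo**2 - rt**2, 0.0)))
    return L

def onion_peel(y, L):
    """Retrieve shell densities from slant columns, outermost shell first."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract contributions of shells already solved above this one
        resid = y[i] - L[i, i + 1:] @ x[i + 1:]
        x[i] = resid / L[i, i]
    return x

# Synthetic truth: exponentially decaying density profile, noise-free columns
true_x = np.exp(-(tangent - R_EARTH) / 15.0)
L = path_matrix(edges, tangent)
y = L @ true_x
x_hat = onion_peel(y, L)
assert np.allclose(x_hat, true_x)
```

In this noise-free case the triangular back-substitution recovers the profile exactly; with real, noisy limb radiances the same structure is embedded in a regularized statistical inversion.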
  • Gozaliasl, Ghassem (Helsingin yliopisto, 2016)
    Galaxy formation is one of the most active and evolving fields of research in observational astronomy and cosmology. While we know today which physical processes qualitatively regulate galaxy evolution, the precise timing and behaviour of these processes and their relations to host environments remain unclear. Many interesting questions are still debated: What regulates galaxy evolution? When do massive galaxies assemble their stellar mass, and how? Where does this mass assembly occur? This thesis studies the formation and evolution of central galaxies in groups and clusters over the last 9 billion years in an attempt to answer these questions. Two important properties of galaxy clusters and groups make them ideal systems for studying cosmic evolution. First, they are the largest structures in the Universe that have undergone gravitational relaxation and reached virial equilibrium. By comparing mass distributions among nearby- and early-Universe clusters, we can measure the rate of structure growth and formation. Second, the gravitational potential wells of clusters are deep enough that they retain all of the cluster material, despite outflows driven by supernovae (SNe) and active galactic nuclei (AGN). Thus, the cluster baryons can provide key information on the essential mechanisms related to galaxy formation, including star formation efficiency and the impact of AGN and SNe feedback on galaxy evolution. This thesis reports the identification of a large sample of galaxy groups, including their optical and X-ray properties. It includes several refereed journal articles, of which five have been included here. In the first article (Gozaliasl et al. 2014a), we study the distribution and development of the magnitude gap between the brightest group galaxies and their brightest satellites in our well-defined mass-selected sample of 129 X-ray galaxy groups at 0.04 < z < 1.23 in XMM-LSS. 
We investigate the relation between the magnitude gap and the absolute r-band magnitude of the central group galaxy and its brightest satellite. Our observational results are compared to the predictions of three semi-analytic models (SAMs) based on the Millennium simulation. We show that the fraction of galaxy groups with large magnitude gaps (e.g. fossils) increases significantly, by a factor of ∼ 2, with decreasing redshift. In contrast to the model predictions, we show that the intercept of the relation between the absolute magnitude of the brightest group galaxies (BGGs) and the magnitude gap becomes brighter with increasing redshift. We attribute this evolution to the presence of a younger population among the observed BGGs. In the second article (Gozaliasl et al. 2016), we study the distribution and evolution of the star formation rate (SFR) and the stellar mass of BGGs over the last 9 billion years, using a sample of 407 BGGs selected from X-ray galaxy groups at 0.04 < z < 1.3 in the XMM-LSS, COSMOS, and AEGIS fields. We find that the mean stellar mass of BGGs grows by a factor of 2 from z = 1.3 to the present day and that the stellar mass distribution evolves towards a normal distribution with cosmic time. We find that BGGs are not completely inactive systems, as the SFR of a considerable number of BGGs ranges from 1 to 1000 M_sun/yr. In the third article (Gozaliasl et al. 2014b), we study the evolution of halo mass, magnitude gap, and the composite (stacked) luminosity function of galaxies in groups classified by the magnitude gap (as fossils, normal/non-fossils, and random groups) using the Guo et al. (2011) SAM. We find that galaxy groups with large magnitude gaps, i.e. fossils (∆M1,2 ≥ 2 mag), form earlier than non-fossil systems. We measure the evolution of the Schechter function parameters, finding that M∗ for fossils grows by at least +1 mag in contrast to non-fossils, decreasing the number of massive galaxies with redshift. 
The faint-end slope (α) of both fossils and non-fossils remains constant with redshift. However, φ∗ grows significantly for both types of groups, changing the number of galaxies with cosmic time. We find that the number of dwarf galaxies in fossils shows no significant evolution in comparison to non-fossils and conclude that the changes in the number of galaxies (φ∗) in fossils are mainly due to changes in the number of massive (M∗) galaxies. Overall, these results indicate that the giant central galaxies in fossils form by multiple mergers of massive galaxies. In the fourth article (Khosroshahi et al. 2014), we analyse the observed X-ray, optical, and spectroscopic data of four optically selected fossil groups at z ∼ 0.06 in 2dFGRS to examine the possibility that a galaxy group which hosts a giant luminous elliptical galaxy with a large magnitude gap can be associated with diffuse X-ray radiation, similar to that of fossil groups. The X-ray and optical properties of these groups indicate the presence of extended X-ray emission from the hot intra-group gas. We find that one of them is a fossil group, and the X-ray luminosity of two groups is close to the defined threshold for fossil groups. One of the groups is ruled out due to optical contamination in the input sample. In the fifth paper (Khosroshahi et al. 2015), we analyse data from multiwavelength observations of galaxy groups to probe statistical predictions from the SAMs. We show that the magnitude gap can be used as an observable parameter to study groups and to probe galaxy formation models.
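The Schechter-function evolution described above has a compact analytic form in absolute magnitudes, phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x) with x = 10^(0.4 (M* - M)). The sketch below uses purely illustrative parameter values (not the fitted values from the articles) to show how brightening M* raises the bright-end number density while the faint-end slope alpha is held fixed:

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function per unit absolute magnitude:
    phi(M) = 0.4 ln(10) * phi* * x^(alpha+1) * exp(-x), x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

# Illustrative (hypothetical) parameters: brightening M* by 1 mag boosts the
# number density of bright galaxies at fixed M, while alpha is unchanged.
bright = schechter_mag(-23.0, phi_star=1.0, M_star=-21.0, alpha=-1.2)
brighter = schechter_mag(-23.0, phi_star=1.0, M_star=-22.0, alpha=-1.2)
assert brighter > bright
```

This is why a growth of M* for fossils maps directly onto a change in the counts of massive galaxies, while changes in phi* rescale the counts at all magnitudes.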
  • Gao, Yao (Helsingin yliopisto, 2016)
    Interactions between the land surface and climate are complex, as a range of physical, chemical and biological processes take place. Changes in the land surface or the climate can affect the water, energy and carbon cycles in the Earth system. This thesis discusses a number of critical issues concerning land-atmosphere interactions in the boreal zone, which is characterised by vast areas of peatlands, extensive boreal forests and a long snow cover period. Regional climate modelling and land surface modelling were used as the main tools for this study, in conjunction with observational data for evaluation. First, to better describe the present-day land cover in the regional climate model, we introduced an up-to-date, high-resolution land cover map to replace the inaccurate and outdated default land cover map for Fennoscandia. Second, in order to provide background information for future forest management actions for climate change mitigation, we studied the biogeophysical effects on the regional climate of peatland forestation, which has been the dominant land cover change in Finland over the last century. Moreover, climate variability can influence the land surface. Although drought is uncommon in northern Europe, an extreme drought occurred in the summer of 2006 in Finland and induced visible drought symptoms in boreal forests. Thus, we assessed a set of drought indicators against drought impact data from boreal forests in Finland to identify summer drought in boreal forests. Finally, the impacts of summer drought on the water use efficiency of boreal Scots pine forests were studied to gain a deeper understanding of carbon and water dynamics in boreal forest ecosystems. In summary, the key findings of this thesis include: 1) the updated land cover map led to a slight decrease in biases of the simulated climate conditions. It is expected that model performance could be improved by further development of the model physics. 
2) Peatland forestation in Finland can induce a warming effect in spring of up to 0.43 K and a slight cooling effect in the growing season of less than 0.1 K, due to decreased surface albedo and increased evapotranspiration, respectively. Corresponding to the spring warming, the snow clearance day was advanced by up to 5 days in the 15-year mean. 3) Of the assessed drought indicators, the soil moisture index (SMI) was the most capable of capturing the spatial extent of observed forest damage induced by the extreme drought of 2006 in Finland. Thus, a land surface model capable of reliable predictions of regional soil moisture is important for future drought predictions in the boreal zone. 4) The inherent water use efficiency (IWUE) showed an increase during drought at the ecosystem level, and IWUE was found to be more appropriate than the ecosystem water use efficiency (EWUE) for indicating the impacts of drought on ecosystem functioning. The combined effects of soil moisture drought and atmospheric drought on stomatal conductance have to be taken into account in land surface models at the global scale when simulating drought effects on plant functioning.
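The contrast between EWUE and IWUE in finding 4) comes down to whether atmospheric dryness is factored in: EWUE is commonly defined as GPP/ET, while IWUE multiplies by the vapour pressure deficit, GPP*VPD/ET. A minimal sketch with invented flux values (the numbers and units are illustrative, not data from the thesis) showing how the two metrics can move in opposite directions during drought:

```python
def ewue(gpp, et):
    """Ecosystem water use efficiency: carbon uptake per unit water lost."""
    return gpp / et

def iwue(gpp, et, vpd):
    """Inherent water use efficiency: EWUE scaled by vapour pressure deficit,
    accounting for the extra atmospheric demand driving transpiration."""
    return gpp * vpd / et

# Hypothetical daily means (g C m-2 d-1, mm d-1, hPa), chosen for illustration:
# drought lowers GPP and ET somewhat but raises VPD strongly.
normal = dict(gpp=8.0, et=3.0, vpd=8.0)
drought = dict(gpp=6.0, et=2.5, vpd=20.0)

assert ewue(drought['gpp'], drought['et']) < ewue(normal['gpp'], normal['et'])
assert iwue(drought['gpp'], drought['et'], drought['vpd']) > iwue(normal['gpp'], normal['et'], normal['vpd'])
```

With these numbers EWUE decreases under drought while IWUE increases, which is why a VPD-aware metric is the more informative indicator of how drought alters ecosystem functioning.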
  • Korpisalo, Arto Leo (Helsingin yliopisto, 2016)
    The purpose of this thesis is to present the essential issues concerning the radio imaging method (RIM) and attenuation measurements. Although most of the issues discussed in this thesis are in no sense novel, the thesis provides an overview of the fundamental aspects of RIM and presents novel results from the combination of RIM with other borehole methods. About 2.6 million years ago, early humans perhaps accidentally discovered that sharp stone flakes made it easier to cut the flesh from around bones. From sharp flakes to the first handaxes took hundreds of thousands of years, and development was thus extremely slow. Alessandro Volta's invention of the voltaic pile (battery) in 1800 started a huge journey, and only one hundred years later humans had all the necessary means to start examining the Earth's subsurface. Since then, development has been rapid, resulting in numerous methods (e.g. magnetic, gravimetric, electromagnetic and seismic) and techniques to resolve the Earth's treasures. The theoretical basis for the radio imaging method was established long before the method was utilized for exploration purposes. RIM is a geotomographic electromagnetic method in which the transmitter and receivers are placed in different boreholes to delineate electric conductors between the boreholes. It is a frequency domain method, and the continuous wave technique is usually utilized. One of the pioneers was L.G. Stolarczyk in the USA in the 1980s. In the former Soviet Union, interest in RIM was high in the late 2000s, and our present device is also Russian-based. Furthermore, in South Africa and Australia, a considerable amount of effort has been invested in RIM. The RIM device is examined only briefly here. It is the essential part of our RIM system, referred to as electromagnetic radiofrequency echoing (EMRE). The idea behind the device is excellent. However, several poor solutions have been utilized in its construction. 
Many of them have possibly resulted from the lack of good electronic components. The overall electronic construction of the whole device is very complicated. At least two essential properties are lacking, namely circuits for measuring the input impedances of the antennas and the return loss to obtain the actual output power. Of course, the digitalization of data in the borehole receiver could give additional benefits in data handling. The measurements can be monitored in real time on a screen, allowing the operator to gain initial insights into the subsurface geology at the site and to modify the measurement plan if necessary. Even today, no practical forward modelling tool for examining the behaviour of electromagnetic waves in the Earth's subsurface is available for the RIM environment, and interpretation is thus traditionally based on linear reconstruction techniques. Assuming low-contrast and straight-ray conditions can generally provide good and rapid results, even during the measurement session. Electrical resistivity logging is usually one of the first methods used in a new borehole. Comparing the logging data with measured amplitude data can reveal situations where a nearby and relatively limited conductive formation is mostly responsible for the high attenuation levels between boreholes, which can hence be taken into account in the interpretation. The transient electromagnetic method (TEM) functions in the time domain. TEM is also a short-range method and can very reliably reveal nearby conductors. RIM and TEM data from the ore district coincide well. These issues are considered in detail in Publication I. The functioning of an antenna is highly dependent on the environment in which it is placed. The primary task of the antenna is to radiate and receive electromagnetic energy; that is, the antenna is a transducer between the generator and the environment. 
A simple bare wire can serve as a diagnostic probe to detect conductors in the borehole vicinity. However, borehole antennas are generally highly insulated to prevent the leakage of current into the borehole, and at the same time the insulation reduces the sensitivity of the antenna current to the ambient medium, especially as the electric properties of the insulation and the surrounding material differ significantly. Monitoring the input impedance of the antenna could nevertheless help in estimating its effectiveness in the borehole; this property is lacking in the present device. The scattering parameter s11 defines the relationship between the reflected and incident voltage; that is, it provides information on the impedance matching chain. The impedance behaviour of the insulated antennas in different borehole conditions was estimated using simple analytical methods, such as the models of Wu, King and Giri (WKG) and Chen and Warne (CHEN), and highly sophisticated numerical software such as FEKO from EM Software & Systems (Altair). According to the results, our antennas maintain their effectiveness and feasibility over the whole frequency band (312.5−2500 kHz) utilized by the device. However, the highest frequency (2500 kHz) may suffer from differing ambient conditions. Resolution is closely related to frequency: higher frequencies result in better resolution, but at the expense of range. These issues are clarified in Publication II. Electromagnetic methods are based on the fact that earth materials may have large contrasts in their electrical properties. A geotomographic RIM survey can have several benefits over ground-level EM sounding methods. When the transmitter is in the borehole, boundary effects due to the ground surface and the strong attenuation arising in soils are easily eliminated. A borehole survey also brings the survey closer to the targets, and higher frequencies can be used, which means better resolution. 
Viewing the target from different angles and directions also means better reconstruction results. The fundamental principles of electromagnetic fields are explained to distinguish diffusive movement (strongly attenuating propagation) from wave propagation and to give a good conception of the possible transillumination depths of RIM. Transillumination depths of up to 1000 m are possible in a highly resistive environment using the lowest measurement frequency (312.5 kHz). In this context, one interesting and challenging case study is also presented from the area of a repository for spent nuclear fuel in Finland. The task was to examine the usefulness of RIM in the area and to determine how well the apparent resistivity could be associated with the structural integrity of the rock. The measurements were successful and the results convinced us of the potential of RIM. Publication III is related to these issues. In Finland, active use of RIM started in 2005, when Russian RIM experts jointly with GTK carried out RIM measurements at Olkiluoto. The results are presented in Publication IV. In this pioneering work, extensive background information (e.g. versatile geophysical borehole logging, optical imaging, 3D vertical seismic profiling (VSP) and single-hole radar reflection measurements) was available from the site. The comparability of the results was good; e.g. low-resistivity or highly attenuating areas near boreholes from the RIM measurements coincided well with resistivity logging and radar results. Electric mise-à-la-masse and high-frequency electromagnetic RIM displayed even better comparability. The comparability of the surface electromagnetic sounding data and the RIM data was also good; however, the tomographic reconstruction is much more detailed. In overall conclusion, the attenuation measurements were well suited to the recording of subsurface resistivity properties and continuity information between boreholes at Olkiluoto. 
To date, we have utilized RIM in two quite different environments: Olkiluoto, a spent nuclear fuel repository area in Finland with solid crystalline bedrock, and Pyhäsalmi, an ore district with a massive sulphide deposit. While Pyhäsalmi is the more ideal research target for RIM, the method has proven successful in both cases.
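The impedance-matching diagnostics whose absence in the device is noted above reduce to standard transmission-line relations: the reflection coefficient s11 = (Z_L - Z_0)/(Z_L + Z_0) compares the antenna feed impedance to the system impedance, and the return loss -20 log10 |s11| tells how much of the generator power actually reaches the antenna. A short sketch with hypothetical feed impedances (the 50-ohm system impedance and the example loads are assumptions for illustration, not measured EMRE values):

```python
import math

def s11(z_load, z0=50.0):
    """Complex reflection coefficient at the antenna feed."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load, z0=50.0):
    """Return loss in dB; larger values mean less power reflected back."""
    return -20.0 * math.log10(abs(s11(z_load, z0)))

# Hypothetical feeds: one well matched, one detuned by the borehole medium
matched = complex(48.0, 5.0)
detuned = complex(20.0, -60.0)
assert return_loss_db(matched) > return_loss_db(detuned)
```

Measuring s11 downhole would thus reveal directly when the ambient medium has detuned the insulated antenna, which is exactly the information the present device cannot provide.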
  • Marnela, Marika (Helsingin yliopisto, 2016)
    The Arctic Ocean and its exchanges with the Nordic Seas influence the north-European climate. The Fram Strait, with its 2600 m sill depth, is the only deep passage between the Arctic Ocean and the other oceans. Not only do all the deep water exchanges between the Arctic Ocean and the rest of the world's oceans take place through the Fram Strait; a significant amount of cold, low-saline surface water and sea ice also exits the Arctic Ocean through the strait. Correspondingly, part of the warm and saline Atlantic water flowing northward enters the Arctic Ocean through the Fram Strait, bringing heat into the Arctic Ocean. The oceanic exchanges through the Fram Strait, as well as the water mass properties and the changes they undergo in the Fram Strait and its vicinity, are studied from three decades of ship-based hydrographic observations collected between 1980 and 2010. The transports are estimated from geostrophic velocities. The main section, composed of hydrographic stations, is located zonally at about 79 °N. For a few years of the observed period it is possible to combine the 79 °N section with a more northern section, or with a meridional section along the Greenwich meridian, to form quasi-closed boxes and to apply conservation constraints on them in order to estimate the transports through the Fram Strait as well as the recirculation in the strait. In a similar way, zonal hydrographic sections in the Fram Strait and along 75 °N crossing the Greenland Sea are combined to study the exchanges between the Nordic Seas and the Fram Strait. The transport estimates are adjusted with drift estimates based on Argo floats in the Greenland Sea. The mean net volume transports through the Fram Strait, averaged over the various approaches, range from less than 1 Sv to about 3 Sv. The heat loss to the atmosphere from the quasi-closed boxes both north and south of the Fram Strait section is estimated at about 10 TW. 
The net freshwater transport through the Fram Strait is estimated at 60-70 mSv southward. The insufficiently known northward transport of Arctic Intermediate Water (AIW), originating in the Nordic Seas, is estimated using data from the 2002 Oden expedition. At the time of data collection, excess sulphur hexafluoride (SF6) was available, a tracer that, besides its anthropogenic background, derives from a mixing experiment in the Greenland Sea in 1996. The excess SF6 can be used to distinguish AIW from the upper Polar Deep Water originating in the Arctic Ocean. It is estimated that 0.5 Sv of AIW enters the Arctic Ocean. The deep waters in the Nordic Seas and in the Arctic Ocean have become warmer, and in the Greenland Sea also more saline, during the three decades studied in this work. The temperature and salinity properties of the deep waters found in the Fram Strait, of both Arctic Ocean and Greenland Sea origin, have become more similar and continue to do so. How these changes will affect the circulation patterns will be seen in the future.
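A freshwater transport figure like the one above is obtained by integrating, over the section, the velocity times the freshwater fraction relative to a reference salinity, (S_ref - S)/S_ref. A minimal Python sketch with invented section values (the reference salinity, the two-cell geometry and the velocities are all illustrative assumptions, not the thesis calculation):

```python
import numpy as np

S_REF = 34.9  # hypothetical reference salinity used to close the budget

def freshwater_transport(v, s, area):
    """Net liquid freshwater transport (m^3/s) through a section:
    sum over cells of velocity * freshwater fraction * cell area."""
    v, s, area = map(np.asarray, (v, s, area))
    return float(np.sum(v * (S_REF - s) / S_REF * area))

# Illustrative two-cell section: fresh Polar Water flowing south (negative v)
v = [-0.10, -0.10]        # m/s, southward
s = [33.0, 33.0]          # practical salinity
area = [6.0e6, 6.0e6]     # m^2 per cell
fw_mSv = freshwater_transport(v, s, area) / 1.0e3   # 1 mSv = 1e3 m^3/s
```

With these made-up numbers the result lands near -65 mSv (southward), of the same order as the estimate quoted above; the actual estimate of course rests on full geostrophic velocity and salinity fields across the strait.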