Browsing by Issue Date


Now showing items 21-40 of 871
  • Tonttila, Juha (Helsingin yliopisto, 2015)
    Clouds, aerosols and the interactions between them are among the most important sources of uncertainty in climate modelling. The scales of spatial variability related to clouds are generally too small to be resolved at a typical climate model grid resolution. This work comprises studies of the small-scale variability of the vertical wind component, which contributes significantly to the process of cloud droplet formation. In addition, more elaborate methods for describing the small-scale variability of cloud properties in climate models are developed. The key questions investigated include: 1) What are the statistical properties of the turbulent vertical wind variability in the boundary layer, and can they be represented accurately by atmospheric models? 2) How does parameterizing the small-scale variability in cloud microphysical processes affect the simulated cloud properties in climate models? 3) How does accounting for the small-scale variability in cloud properties affect model-based estimates of the aerosol indirect radiative effects? The most important tool used in this work was the ECHAM5-HAM2 aerosol-climate model. The model simulates not only the atmospheric circulation and thermodynamics, but also the global distribution of aerosols and the physical processes that shape the aerosol particle population. This allows the model to represent the interactions between clouds and aerosols. In addition, parts of this work make use of measurement data based on remote sensing methods as well as high-resolution output from a numerical weather prediction model. The results show that the small-scale variability of the vertical wind associated with cloud droplet formation must be parameterized even in models with relatively high grid resolution. This especially highlights the importance of such methods for lower-resolution climate models.
The variability of the vertical wind can be described using a probability density function (PDF), whose shape may vary significantly depending on the atmospheric conditions. The details of the PDF involve many uncertainties, which can only be reduced by more extensive observations. Using a simplified representation of the vertical velocity PDF, a new version of the climate model is constructed in this work, which can be used to study the climate effects of the small-scale variability in vertical wind and clouds. It is noted that earlier methods for accounting for the variability in vertical velocity and cloud formation are somewhat insufficient. More attention should be paid to treating the small-scale variability self-consistently for entire chains of processes rather than separately for individual processes. This was accomplished in this work with the newly developed method, which covers the chain of processes from cloud formation to radiative transfer. The new method has a strong impact on the number of cloud droplets and on drizzle formation as compared to the default model version, where the small-scale variability of clouds is less accurately accounted for. Moreover, the response of the model-simulated cloud properties to anthropogenic changes in aerosol emissions is found to be considerably weaker in the new model version than in the default version. In effect, the aerosol indirect radiative effect estimated with the new model version is closer to the best observation-based estimate than that of the default version. The results of this work contribute to improving our understanding of aerosol-cloud interactions and guide the work towards further reducing the uncertainties related to modelling clouds and climate.
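The role of the vertical velocity PDF in droplet activation can be illustrated with a toy calculation. All numbers and the activation law below are invented for illustration, not taken from the thesis: because activation responds nonlinearly to updraft speed and only updrafts activate droplets, evaluating the activation at the grid-mean vertical wind gives a very different droplet number than integrating it over the full PDF.

```python
import math
import random

# Toy activation law: droplet number rises sublinearly with updraft speed w.
# The coefficient (100) and exponent (0.4) are illustrative, not thesis values.
def n_activated(w):
    return 100.0 * max(w, 0.0) ** 0.4  # droplets per cm^3

random.seed(0)
# Assumed Gaussian PDF of w: mean 0 m/s, standard deviation 0.5 m/s.
w_samples = [random.gauss(0.0, 0.5) for _ in range(100_000)]

w_mean = sum(w_samples) / len(w_samples)
n_from_mean_w = n_activated(w_mean)                                # grid-mean w only
n_from_pdf = sum(n_activated(w) for w in w_samples) / len(w_samples)

print(f"N using grid-mean w: {n_from_mean_w:.1f} cm^-3")
print(f"N integrated over the w PDF: {n_from_pdf:.1f} cm^-3")
```

The grid-mean vertical wind (near zero here) strongly underestimates the droplet number compared with integrating over the PDF, whose positive tail does the activating; this is why the sub-grid variability must be parameterized.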
  • Olenius, Tinja (Helsingin yliopisto, 2015)
    Formation of aerosol particles from condensable vapors is a ubiquitous phenomenon in the atmosphere. Aerosols can affect regional and global climate, as well as visibility and human health. The work of this thesis contributes to the numerous efforts made to build understanding of atmospheric particle formation mechanisms. The focus is on the first molecular-level steps, where clustering of individual gas-phase molecules initiates the process, and the applied method is dynamic cluster population modeling. Sets of sub-2 nm molecular clusters are simulated in conditions relevant to the atmosphere or laboratory, considering vapor production, external sinks for clusters and vapors, cluster collision and evaporation processes, and in some cases also ionization and recombination by generic ionizing species. Evaporation rates are calculated from the cluster formation free energies computed with quantum chemical methods. As sulfuric acid has been shown to be the key component in particle formation in most boundary layer locations, the majority of the work presented here concentrates on simulating sulfuric acid-containing clusters in the presence of potentially enhancing species, namely ammonia and amines. In laboratory experiments, these base compounds have been found to be capable of enhancing sulfuric acid-driven particle formation to produce formation rates around the magnitude observed in the atmosphere. This result is reproduced by the cluster model. In this work, the performance of the modeling tools is also validated against experimental data by comparing simulated concentrations of charged sulfuric acid-ammonia clusters to those measured with a mass spectrometer in a chamber experiment. Examination of clustering pathways in simulated sulfuric acid-ammonia and sulfuric acid-dimethylamine systems shows that the clustering mechanisms and the role of ions may be very different depending on the identity of the base.
In addition to predictions related to cluster formation from different precursor vapors, the model is applied to study the effects of varying conditions on the qualitative behavior of a cluster population and quantities that have been deduced from measured cluster concentrations. It is demonstrated that the composition of the critical cluster corresponding to the maximum free energy along the growth pathway cannot be reliably determined from cluster formation rates by commonly used methods. Simulations performed using a simple model substance show that cluster growth rates determined from the fluxes between subsequent cluster sizes are likely to differ from the growth rates deduced from the time evolution of the concentrations as in experiments, with the difference depending on the properties of the substance as well as ambient conditions. Finally, the effect of hydration and base molecules on sulfuric acid diffusion measurement is assessed by mimicking an experimental setup. Applications of cluster population simulations are diverse, and the development of these types of modeling tools provides useful additions to the palette of theoretical approaches to probe clustering phenomena.
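The dynamic cluster population approach can be sketched as a set of birth-death equations: each cluster size gains members from collisions of smaller clusters with monomers, and loses them to evaporation and external sinks. Below is a minimal single-component sketch; all rate constants are invented for illustration, whereas real applications compute the evaporation rates from quantum chemical free energies as described above.

```python
# Minimal single-component cluster birth-death ("population dynamics") model.
# Q, beta, gamma and sink are illustrative values, not from the thesis.
N = 5                 # track clusters of 1..N molecules
Q = 1.0               # monomer production rate
beta = 1e-3           # collision rate coefficient (size-independent, for brevity)
gamma = [0.0, 0.0, 0.5, 0.2, 0.05, 0.0]   # evaporation rate of an i-mer
sink = 1e-2           # external loss rate (e.g. to pre-existing particles)
dt, steps = 0.01, 100_000

c = [0.0] * (N + 1)   # c[i] = concentration of i-mers (c[0] unused)
for _ in range(steps):
    dc = [0.0] * (N + 1)
    for i in range(2, N + 1):
        growth = beta * c[1] * c[i - 1]   # (i-1)-mer + monomer -> i-mer
        evap = gamma[i] * c[i]            # i-mer -> (i-1)-mer + monomer
        dc[i] += growth - evap - sink * c[i]
        dc[i - 1] += evap - growth
        dc[1] += evap - growth            # monomer is consumed/released too
    dc[1] += Q - sink * c[1]
    for i in range(1, N + 1):
        c[i] += dt * dc[i]

print("quasi-steady concentrations c_1..c_5:", ["%.2f" % x for x in c[1:]])
```

Integrating these coupled equations to a quasi-steady state is, in simplified form, how cluster concentrations and formation rates are obtained for comparison with chamber measurements.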
  • Hildén, Timo (Helsingin yliopisto, 2015)
    Gas Electron Multiplier (GEM) detectors are a special type of position-sensitive gas-filled detector used in several particle physics experiments. They are capable of sub-millimeter spatial resolution and an energy resolution (FWHM) of the order of 20%. GEM detectors can operate at rates up to 50 kHz/mm2, withstand radiation excellently and can be manufactured up to square-meter sizes. This thesis describes the Quality Assurance (QA) methods used in the assembly of 50 GEM detectors for the TOTEM T2 telescope at the LHC at CERN. Further development of the optical QA methods used in T2 detector assembly led to the development of a unique large-area scanning system capable of sub-µm resolution. The system, its capabilities and the software used in the analysis of the scans are described in detail. A correlation was found between one of the main characteristics of the detector, the gas gain, and the results of the optical QA method. It was shown that a qualitative estimation of the gain can be made based on accurate optical measurement of the microscopic features of the detector components. The ability to predict the performance of individual components of the detectors is extremely useful in large-scale production of GEM-based detectors.
  • Ilinov, Andrey (Helsingin yliopisto, 2015)
    Nanotechnology has become an emerging field during the last few decades. The possibility to create elements with sizes in the nanometer range provides new opportunities for medical applications, various sensors and detectors, and composite materials technologies. However, at the nanoscale the basic physical properties, including chemical, mechanical, optical and electronic properties, may change unexpectedly. There is still no clear understanding of all possible consequences of miniaturization for the behavior of nanostructures. This thesis is focused on the analysis of the mechanical and structural properties (including sputtering under irradiation) of nanorods. By nanorods we mean structures like beams or rods, with a cross-sectional diameter measured in nanometers and a length several times larger than the diameter. At such sizes it becomes possible to simulate the structures atom by atom using the molecular dynamics (MD) method. In the first part of the thesis, we analyze the elastic properties of Si nanorods: how the variation in size may change the elastic moduli, and the effects of oxidation and intrinsic stresses. We also check the validity of the classical continuum mechanics approach by modeling the same nanorods with the finite element method (FEM). In the second part we investigate sputtering from Au nanorods under ion irradiation. Recent experiments have shown a large enhancement of sputtering yields from Au nanorods compared with those from a flat surface; the yields can be as high as 1000 per individual impact. MD gives us an opportunity to analyze the sputtering process with femtosecond resolution, which is impossible with any of the existing experimental methods. We find that explosive ejection of nanoclusters is the main factor causing such large sputtering yields.
  • Mäkelä, Hanna (Helsingin yliopisto, 2015)
    Roughly three-quarters of Finland's area is covered by forests. Any climatological changes influencing the danger of forest fire are therefore important to evaluate and consider. The objective of this thesis is to study long-term past and future changes in climatically-driven forest fire danger in Finland based on the summertime mean temperature and precipitation sum. The work is composed of two parts. In the first part, long-term gridded datasets of observed monthly mean temperatures and precipitation sums for Finland are developed. In the second part, these gridded datasets are used together with calculated values of the Finnish Forest Fire Index and probabilistic climate model simulations (from the ENSEMBLES project) to estimate the number of forest fire danger days during the summer season (June-August). The long-term variation of Finland's climatological forest fire danger is studied roughly 100 years backwards and into the future. One of the main achievements of this thesis is that it explores the possibility of quantifying past and future fire weather using a relatively limited database with regard to both weather variables and their spatial coverage. This enables a wider exploitation of scattered data series from earlier times and can also provide opportunities for projections using data with a low resolution. The climatological forest fire danger in Finland varies considerably from year to year. There have not been any significant increasing or decreasing trends in the number of fire danger days during the 20th century (1908-2011). On average, the highest probability of forest fire danger occurs in June and July, when a fire hazard exists on roughly 35-40% of all days. The intra-seasonal variation of fire danger has been large enough to enable the occurrence of conflagrations even when the fire danger for the season as a whole has been at an average level.
Despite the projected increase in average summertime precipitation, the Finnish climate will provide more favourable conditions for the occurrence of forest fires in the future than today. This is due to increases in the mean temperature. The probability of an increase in the number of fire danger days is 56-75% in the near future (2010-2029) and 71-91% by the end of the current century (2080-2099), depending on the region. This would indicate an increase of 1-2 and 7-10 days, respectively. It is thus clearly important to further develop existing tools for the forecasting of fire danger, and to maintain the capabilities of the fire prevention, surveillance and suppression services. Future projections of all relevant meteorological variables (temperature, precipitation, humidity, evaporation and wind speed) at higher temporal and spatial resolutions, in addition to information on the type of the summertime precipitation and the length of the dry periods, would notably improve the assessment of the future climatological forest fire danger.
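A probabilistic statement such as the 56-75% probability above can be obtained from an ensemble by counting the members that project an increase relative to the baseline. The sketch below uses invented member values purely for illustration; the actual analysis uses the ENSEMBLES simulations and the Finnish Forest Fire Index.

```python
# Sketch: probability of an increase in fire danger days from an ensemble.
# The baseline and the per-member projections below are invented numbers.
baseline_days = 33.0  # fire danger days per summer in the control period

# Projected fire danger days per summer, one value per ensemble member.
members_2010_2029 = [31.5, 34.0, 36.2, 32.8, 35.1, 30.9, 34.6, 33.9, 35.5, 32.1]

n_increase = sum(1 for m in members_2010_2029 if m > baseline_days)
probability = n_increase / len(members_2010_2029)
print(f"P(increase) = {probability:.0%}")
```

With these invented values, 6 of 10 members project more fire danger days than the baseline, giving a 60% probability of an increase.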
  • Snellman, Jan (Helsingin yliopisto, 2015)
    The mathematical description of turbulence is one of the greatest unresolved problems of modern physics. Many targets of astrophysical research, such as stellar convection zones and accretion discs, are very turbulent. In particular, understanding stellar convection zones is important for the theory of stellar evolution. It is therefore necessary to use approximate descriptions of turbulence when modelling these objects. One approximate method for describing turbulence is to divide the quantities under study into mean and fluctuating parts, the latter of which represent the small-scale changes present in turbulence. This approach is known as the Reynolds decomposition, and it makes it possible to derive equations for the mean quantities. The equations acquired depend on correlations of the fluctuating quantities, such as the correlations of the fluctuating velocity components, known as the Reynolds stresses, and the turbulent heat and passive scalar fluxes. A mathematically precise way of handling these correlations is to derive equations for them as well, but the resulting equations will depend on new, higher-order correlations. If one derives equations for these new correlations, a new set of even higher-order correlations appears, and the equation system is never closed. This is called the closure problem. The closure problem can be circumvented by using approximations known as closure models, which work by replacing the higher-order correlations with lower-order ones, thereby creating a closed system. Second-order closure models, in which the third-order correlations have been replaced by relaxation terms of second order, are studied in this thesis by comparing their results with those of direct numerical simulations (DNS). The two closure models studied are the minimal tau approximation (MTA) and the isotropising variable relaxation time (IVRT) closure.
The physical phenomena to which the closures were applied included homogeneous isotropically forced turbulence with rotation and shear, compressible as well as homogeneous Boussinesq convection, decaying turbulence, and passive scalar transport. In the case of homogeneous isotropic turbulence it was found that MTA is capable of reproducing the DNS results with Strouhal numbers of about unity. It was also found that the Reynolds stress components contributing to angular momentum transport in accretion discs can change sign depending on the rotation rate, which was also seen in studies of compressible convection, meaning that convection can potentially contribute to the accretion of matter. Studies of decaying turbulence indicated that the relaxation time scales occurring in the relaxation closures tend to constant values at high Reynolds numbers, and this was also observed when studying passive scalar transport. However, in studies concerning Boussinesq convection no asymptotic behaviour was found as a function of the Rayleigh and Taylor numbers. Correspondence of the closure models with direct numerical simulations is found to be generally achievable, but with varying quality depending on the physical situation. Given the asymptotic behaviour of the optimum closure parameters for forced turbulence, they can be considered universal in this case. For rotating Boussinesq convection the same conclusion cannot be drawn with respect to the Rayleigh and Taylor numbers.
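The basic idea of a relaxation closure such as MTA can be shown with a toy equation: the unclosed higher-order terms in the evolution equation of a second-order correlation R are replaced by a relaxation term -R/tau, so that under a steady forcing F the correlation relaxes to the equilibrium value F*tau. The numbers below are illustrative only, not taken from the simulations in the thesis.

```python
# Toy relaxation (MTA-type) closure: dR/dt = F - R/tau, where -R/tau stands
# in for the unclosed triple correlations. F and tau are illustrative values.
F = 2.0      # steady forcing of the correlation, arbitrary units
tau = 0.5    # relaxation time (its nondimensional form is a Strouhal number)
dt, t_end = 1e-3, 5.0

R, t = 0.0, 0.0
while t < t_end:
    dRdt = F - R / tau      # closed evolution equation
    R += dt * dRdt
    t += dt

print(f"R(t={t_end}) = {R:.3f}, equilibrium F*tau = {F * tau:.3f}")
```

After a few relaxation times R settles at F*tau; fitting tau (or the corresponding Strouhal number) against DNS is, in caricature, how the closure parameters discussed above are calibrated.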
  • Sibaouih, Ahlam (Helsingin yliopisto, 2015)
    Catalytic transformation of carbon dioxide into useful organic compounds has attracted much attention due to its economic and environmental benefits. Further motivation comes from the possible utilization of CO2 as a renewable chemical feedstock and the growing concern over the greenhouse effect. CO2 is an abundant, cheap, and safe C1 building block in organic synthesis. However, due to the inert nature of CO2, efficient catalytic processes for its chemical fixation remain a significant challenge. In this work, we have studied a possible pathway for the practical utilization of CO2: the reaction of CO2 with epoxides giving cyclic carbonates. New cobalt-based catalyst systems capable of catalyzing this chemical transformation of carbon dioxide are described in detail. Oxygen is a cheap, readily available and environmentally friendly natural oxidant. The catalytic activation of molecular oxygen has great potential in a variety of applications, and catalysis and reactions based on molecular oxygen can be considered ecologically benign processes. Moreover, catalytic reactions in water are highly desirable in terms of green chemistry. In this context, our purpose was to develop environmentally friendly catalytic systems suitable for the oxidation of alcohols with molecular oxygen in water solution. In this part of the work, efficient catalysts based on copper complexes have been synthesized and studied in the presence of TEMPO for the oxidation of benzyl and aliphatic alcohols with molecular oxygen in aqueous and nonaqueous media.
  • Fager-Jokela, Erika (Helsingin yliopisto, 2015)
    The Pauson-Khand reaction (PKR) is a very efficient method of synthesising cyclopentenones. In the reaction, an alkene, an alkyne and carbon monoxide combine to form a cyclopentenone ring in one pot, mediated or catalysed by a transition metal complex. In the cyclisation, three new carbon-carbon bonds are created. This thesis concentrates on the intermolecular variant of the cobalt(0)-mediated Pauson-Khand reaction. The development of the intermolecular cyclisation has been slow over the past decade, due to the lack of reactive alkenes and the lack of regioselectivity for substituted alkynes. Despite the publication of numerous studies, the electronic effects involved are not yet completely understood. In this study, our purpose was to gain a greater understanding of the interplay between steric and electronic factors in determining the regioselectivity of the Pauson-Khand reaction. The electronic effects on alkyne regioselectivity in the Pauson-Khand reaction were studied with both conjugated aromatic alkynes and non-conjugated propargylic alkynes. It was demonstrated that, in the absence of steric effects, alkyne polarisation dictates the regiochemical selectivity of PKR. In conjugated systems, like diarylalkynes, Hammett values can be utilised in estimating the polarisation of the alkyne. With non-conjugated alkynes, on the other hand, the electronegativity of the substituent group determines the major regioisomer, as the charge differences are created via an inductive effect. In addition to investigating regioselectivity, additive-free methods for promoting the Pauson-Khand reaction were developed and utilised, and the Pauson-Khand reaction was applied in the synthesis of an estrone E-ring extension. With microwave (MW) promotion, heat was effectively transferred to the reaction, saving energy and time without affecting the selectivity of the reaction.
  • Tala, Suvi (Helsingin yliopisto, 2015)
    A central part of the enculturation of new scientists in the natural sciences takes place in poorly understood apprentice-master settings: potential expert researchers learn about success in science by doing science as members of research groups. What makes learning in such settings challenging is that a central part of the expertise they are attempting to achieve is tacit: the ideas guiding scientific knowledge-building are embodied in its practices and are nowadays rarely articulated. This interdisciplinary study develops a naturalistic view of scientific knowledge construction and justification, and of what is learned in those processes, in close cooperation with practitioners and through reflection on their actual practices. Such a viewpoint guides the development of the expertise education of scientists. Another goal of the study is to encourage science education at every level to reflect as much as possible those epistemological aspects of doing science that practising scientists can also agree upon. The theoretical part of the dissertation focuses on those features of experimentation and modelling that the viewpoint of scientific practice suggests are essential, but which are not addressed in the traditional views of science studies and, as a consequence, in science education. The theoretical ideas are tested and deepened in the empirical part, which concerns nanoscience. The contextualized method developed here supports scientists in reflecting on their shared research practices and articulating those reflections in a questionnaire and interview. Contrary to traditional views, physical knowledge is understood to progress through a technoscientific design process, aiming at tightening the mutually developing conceptual and material control over the physical world.
The products of the design process are both understanding of scientific phenomena and the means to study them. This means constructing and controlling a laboratory phenomenon, created in the laboratory in the same design process that produces the understanding of its functioning. These notions suggest a revision of what exactly is achieved by science and on what kind of basis, which indeed moves the epistemological views of science towards a viewpoint recognizable to its practitioners. Nowadays, technoscientific design is increasingly embodied in simulative modelling, mediating between the experimental reality and its theoretical framework. Such modelling is neither a part nor a continuation of theorizing, as most literature considers modelling to be, nor is it only a bare means to analyse experimental data, but a partly independent and flexible method of generating our understanding of the world. Because the rapid development of modelling technology alters the evidential basis of science, a new kind of expertise is needed. The access to physical reality provided by generative modelling differs epistemologically and cognitively from traditional methodological approaches. The expertise developed in such modelling provides scientists with new kinds of possibilities. For young scientists' success, and for scientific and technological progress, this expertise is worth understanding.
  • Rusak, Stanislav (Helsingin yliopisto, 2015)
    Grounded in the increasingly accurate astronomical observations of the past few decades, the study of cosmology has produced a comprehensive account of the history of the universe. This account is contained in the Hot Big Bang cosmological model, which describes the expansion of a hot and dense state to become the universe as we observe it today. While the Big Bang model has been extremely successful in accounting for a wide array of cosmological data, it leaves unexplained the special initial conditions that are required in order to produce the universe we find ourselves in. Such initial conditions are, however, a natural consequence of a period of quasi-exponential expansion of the universe known as inflation. Such a period of expansion can be realized if the universe is dominated by a scalar field, the inflaton, which is slowly rolling down the slope of its potential. Inflation also provides a natural mechanism for the production of the primordial seeds of structure in the universe, through the growth of quantum fluctuations in the inflaton field to super-horizon scales. Together, inflation and the subsequent Big Bang evolution form the backbone of modern cosmology. However, the transition between the inflationary epoch and the thermal state which characterizes the initial conditions of the Big Bang evolution is not well understood. This process, dubbed reheating, involves the decay of the inflaton field into the particles of the Standard Model of particle physics, and may be highly non-trivial, with non-perturbative resonant processes playing a major role. Spectator fields, light scalar fields which are subdominant during inflation, may also play an important role during this epoch. The aim of this thesis is to showcase aspects of the non-perturbative decay of scalar fields after inflation, focusing in particular on the role of spectator fields.
This includes the modulation of the non-perturbative decay of the inflaton by a spectator field, the non-perturbative decay of a spectator into the Standard Model Higgs, as well as the non-perturbative decay of the Higgs field itself.
  • Jääskinen, Väinö (Helsingin yliopisto, 2015)
    In various fields of knowledge we can observe that the availability of potentially useful data is increasing fast; a prime example is DNA sequence data. This increase is both an opportunity and a challenge, as new methods are needed to benefit from the big data sets. This has sparked a fruitful line of research in statistics and computer science that can be called machine learning. In this thesis, we develop machine learning methods based on the Bayesian approach to statistics. We address a fairly general problem called clustering, i.e. dividing a set of objects into non-overlapping groups based on their similarity, and apply it to models with Markovian dependence structures. We consider sequence data in a finite alphabet and present a model class called the Sparse Markov chain (SMC). It is a special case of the Markov chain (MC) model and offers a parsimonious description of the data-generating mechanism. The Variable length Markov chain (VLMC) is a popular sparse model presented earlier in the literature, and it has a representation as an SMC model. We develop Bayesian clustering methodology for learning the SMC and other Markovian models. Another problem that we study in this thesis is causal inference. We present a model and an algorithm for learning causal mechanisms from data. The model can be considered a stochastic extension of the sufficient-component cause model that is popular in epidemiology. In our model there are several causal mechanisms, each with its own parameters, and a mixture distribution gives the probability that an outcome variable is associated with a mechanism. The applications considered in this thesis come mainly from computational biology. We cluster states of Markovian models estimated from DNA sequences. This gives an efficient description of the sequence data when compared to methods reported in the literature.
We also cluster DNA sequences with Markov chains, which results in a method that can be used, for example, in the estimation of bacterial community composition in a sample from which DNA is extracted. The causal model and the related learning algorithm are able to estimate mechanisms from fairly challenging data. We have developed the learning algorithms with big data sets in mind; still, there is a need to develop them further to handle ever larger data sets.
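The flavor of clustering DNA sequences with Markov chains can be sketched as follows. This is a deliberately simplified, non-Bayesian illustration with invented sequences (the thesis develops full Bayesian clustering of SMC and related models): estimate a transition matrix per cluster, then assign a new sequence to the cluster under which its transitions are most likely.

```python
import math
from collections import defaultdict

# Estimate first-order Markov transition probabilities from a sequence,
# with a pseudocount so unseen transitions keep nonzero probability.
def transition_probs(seq, alphabet="ACGT", pseudo=1.0):
    counts = defaultdict(float)
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] += 1.0
    probs = {}
    for a in alphabet:
        total = sum(counts[(a, b)] for b in alphabet) + pseudo * len(alphabet)
        for b in alphabet:
            probs[(a, b)] = (counts[(a, b)] + pseudo) / total
    return probs

# Log-likelihood of a sequence under a fitted transition matrix.
def log_likelihood(seq, probs):
    return sum(math.log(probs[(a, b)]) for a, b in zip(seq, seq[1:]))

# Two "clusters" trained on invented sequences with different composition.
group_at = transition_probs("ATATATATTTAATATATTAAATATAT")   # AT-rich
group_gc = transition_probs("GCGCGGCCGCGCGGGCCGCGCGCGGC")   # GC-rich

test_seq = "ATATTAATAT"
best = ("AT-rich" if log_likelihood(test_seq, group_at)
        > log_likelihood(test_seq, group_gc) else "GC-rich")
print("test sequence assigned to:", best)
```

In a community-composition setting, the same likelihood comparison (made probabilistic and Bayesian) lets each read in a sample vote for the cluster, and hence the taxon, it most plausibly came from.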
  • Ruusuvuori, Kai (Helsingin yliopisto, 2015)
    New particle formation is an important process in the atmosphere. As ions are constantly produced in the atmosphere, the behaviour and role of charged particles in atmospheric processes need to be understood. In order to gain insight into the role of charge in atmospheric new particle formation, the electronic structure of the molecules taking part in this process needs to be taken into account using quantum chemical methods. Quantum chemical density functional theory was employed in an effort to reproduce an experimentally observed sign preference. While the computational results on molecular structures agreed well with results obtained by other groups, the computationally obtained sign preference was opposite to the experimentally observed one. Possible reasons for this discrepancy were found in both the computational results and the experiments. Simulations of clusters containing water, pyridine, ammonia and a proton were performed using density functional theory. The clusters were found to form a core consisting of an ammonium ion and water, with the pyridine molecule bonding to the ammonium ion. However, the solvation of the ammonium ion was observed to affect the possibility of proton transfer. Calculations of the proton affinities and gas-phase basicities of several compounds, which can be considered candidates for forming atmospheric ions in the boreal forest, were performed. The generally small differences between the calculated gas-phase basicities and proton affinities implied only small entropy changes in the protonation reaction. Comparison with experiments led to the conclusion that the largest experimentally observed peaks of atmospheric ions most likely corresponded to pyridine and substituted pyridines. Furthermore, a combination of low proton affinity and high observed cation concentration was concluded to imply a high concentration of neutral parent molecules in the atmosphere.
A combination of quantum chemistry and a code for modelling cluster dynamics was employed to study the use of protonated acetone monomers and dimers as the ionization reagent in a chemical ionization atmospheric pressure interface time-of-flight mass spectrometer (CI-APi-TOF). The results showed that the ionization reagents successfully charged dimethylamine monomers. However, there were discrepancies between the simulated and measured cluster distributions, and possible reasons for these were found in both the measurements and the modelling parameters.
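For reference, the quantities compared in the abstract are related through the standard thermodynamics of the protonation reaction: writing the proton affinity as PA = -ΔH and the gas-phase basicity as GB = -ΔG for M + H+ → MH+, the Gibbs relation ΔG = ΔH - TΔS gives

```latex
\mathrm{PA} - \mathrm{GB} = -\,T\,\Delta S ,
```

so a small difference between the calculated proton affinity and gas-phase basicity corresponds directly to a small entropy change of protonation, as concluded above.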
  • Lindberg, Sauli (Helsingin yliopisto, 2015)
    The dissertation deals with the Jacobian equation in the plane. R.R. Coifman, P.-L. Lions, Y. Meyer and S. Semmes proved in their seminal paper from 1993 that when a mapping from n-space to n-space belongs to a suitable homogeneous Sobolev space, its Jacobian determinant belongs to a real-variable Hardy space. Coifman, Lions, Meyer and Semmes proceeded to ask the following famous open problem: can every function in the Hardy space be written as the Jacobian of some Sobolev mapping? It follows from the work of G. Cupini, B. Dacorogna and O. Kneuss that the range of the Jacobian operator is dense in the Hardy space. As a consequence, solving the Jacobian equation reduces to proving that every so-called energy-minimal solution satisfies a certain natural a priori estimate. In the dissertation we use Lagrange multipliers in Banach spaces to prove the sought-after a priori estimate for a large class of energy-minimal solutions. It remains unclear whether this class is large enough to imply the surjectivity of the Jacobian operator, but we present many partial results on the properties of the class. To cite an example, when the Hardy space is endowed with a particular norm that is well suited to the study of the Jacobian equation, all the extreme points of the unit ball are Jacobians. Furthermore, the energy-minimal solutions for the extreme points satisfy the wanted a priori estimate. As one of the main results of the dissertation we reduce solving the Jacobian equation to a fairly concrete finite-dimensional problem. The main tools of the dissertation are Banach space geometry, harmonic analysis in the plane and methods from the theory of incompressible elasticity.
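In symbols, the planar case of the problem described above asks for which data the Jacobian equation admits a Sobolev solution:

```latex
J u := \det Du = h, \qquad u \in \dot{W}^{1,2}(\mathbb{R}^2,\mathbb{R}^2),
\qquad h \in \mathcal{H}^1(\mathbb{R}^2),
```

where the Coifman-Lions-Meyer-Semmes theorem guarantees that det Du always lies in the Hardy space H^1(R^2) for such u, and the open problem is whether every h in H^1(R^2) arises this way.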
  • Leinonen, Lasse (Helsingin yliopisto, 2015)
    Supersymmetry is a proposed new symmetry that relates bosons and fermions. If supersymmetry is realized in nature, it could provide a solution to the hierarchy problem, and one of the new particles it predicts could explain dark matter. In this thesis, I study supersymmetric models in which the lightest supersymmetric particle can be responsible for dark matter. I discuss a scenario in which the supersymmetric partner of the top quark called stop is the next-to-lightest supersymmetric particle in the constrained Minimal Supersymmetric Standard Model. Mass limits and various decay branching fractions are considered when the allowed parameter space for the scenario is determined. If the mass of stop is close to the mass of the lightest supersymmetric particle, one can obtain the observed dark matter density. The scenario leads to a novel experimental signature consisting of high transverse momentum top jets and large missing energy, which can be used to probe the model at the LHC. I also discuss an extended supersymmetric model with spontaneous charge-parity (CP) violation and a right-handed neutrino. When CP is spontaneously violated, a light singlet scalar appears in the particle spectrum, which provides new annihilation channels for the lightest supersymmetric particle. In the model, a neutralino or a right-handed sneutrino can produce the observed dark matter density. Dark matter direct detection limits are found to be especially constraining for right-handed sneutrinos.
  • Aalto, Juha (Helsingin yliopisto, 2015)
    Climate, Earth surface processes and soil thermal and hydrological conditions drive landscape development, ecosystem functioning and human activities in high-latitude regions. These systems are at the focal point of current global change studies, as the ongoing shifts in climate regimes have already changed the dynamics of fragile and highly specialized environments across the pan-Arctic. This thesis aimed to 1) analyze and model extreme air temperatures, soil thermal and hydrological conditions, and the main Earth surface processes (ESPs) (cryoturbation, solifluction, nivation and palsa mires) controlling the functioning of high-latitude systems under current and future climate conditions; 2) identify the key environmental factors driving the spatial variation of the studied phenomena; and 3) develop methodology for producing novel high-quality datasets. To accomplish these objectives, spatial analyses were conducted across geographical scales by utilizing multiple statistical modelling approaches, such as regression, machine learning techniques and ensemble forecasting. This thesis was based on unique datasets from northern Fennoscandia: climate station records from Finland, Sweden and Norway, state-of-the-art climate model simulations, fine-scale field measurements collected in arctic-alpine tundra, and remotely sensed geospatial data. In paper I, accurate extreme air temperature maps were produced, which were notably improved after incorporating the influence of local factors such as topography and water bodies into the spatial models. In paper II, the results showed extreme variation in soil temperature and moisture over very short distances, while revealing the factors controlling the heterogeneity of ground thermal and hydrological conditions.
Finally, the modelling outputs in papers III and IV provided new insights into the determination of geomorphic activity patterns across arctic-alpine landscapes, while stressing the need for accurate climate data in predictive geomorphological distribution mapping. Importantly, Earth surface processes were found to be extremely sensitive to climate, and drastic changes in geomorphic systems can be expected towards the end of the 21st century. An increase of 2 °C over current temperature conditions was projected to cause a near-complete loss of active ESPs in the high-latitude study area. This thesis demonstrated the applicability of spatial modelling techniques as a useful framework for multiple key challenges of contemporary physical geography. Moreover, with the utilized model ensemble approach, modelling uncertainty can be reduced while presenting the local trends in response variables more robustly. In future Earth system studies, it is essential to further assess the dynamics of arctic-alpine landscapes under changing climatic conditions and to identify potential tipping points of these sensitive systems.
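The ensemble idea referred to above can be sketched minimally (the models and data here are hypothetical placeholders, not the thesis's actual model suite): several statistical models are fitted to the same observations and their predictions are averaged, which tends to damp the error of any single member.

```python
# Minimal ensemble-forecasting sketch with two toy statistical models.
# All numbers and model choices are illustrative assumptions.

def fit_constant(xs, ys):
    """Member 1: predict the mean response everywhere."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(xs, ys):
    """Member 2: ordinary least-squares line through the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def ensemble_predict(models, x):
    """Ensemble forecast: unweighted mean of the member predictions."""
    return sum(m(x) for m in models) / len(models)

# Synthetic 'response vs. predictor' observations (made-up values).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [10.0, 8.0, 6.1, 3.9]
models = [fit_constant(xs, ys), fit_linear(xs, ys)]
pred = ensemble_predict(models, 2.0)  # lies between the two member predictions
```

In practice the members would be heavier models (regressions, machine-learning methods) and the averaging could be weighted, but the smoothing effect on member-specific error is the same.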
  • Backman, John (Helsingin yliopisto, 2015)
    Aerosol particles are part of the Earth's climatic system and can significantly impact the climate. Their ability to do so depends mainly on the size, concentration and chemical composition of the particles. Aerosol particles can act as cloud condensation nuclei (CCN) and can therefore mediate cloud properties; through clouds, they can thus perturb the energy balance of the Earth. Aerosol particles can also interact directly with solar radiation through scattering, absorption, or both. The climatic implications of aerosol-radiation interactions depend on the Earth's surface properties and on the amount of light scattering in relation to light absorption. Light-absorbing aerosol particles, in particular, can alter the vertical temperature structure of the atmosphere and inhibit the formation of convective clouds. The net change in the energy balance imposed by perturbing agents, such as aerosol particles, results in a radiative forcing. Globally, aerosol particles have a net cooling effect on the climate, but not necessarily on a local scale. Accurate measurements of the optical properties of aerosol particles are needed to estimate the climatic effects of aerosols. A widely used means of measuring light absorption by aerosol particles is the filter-based measurement technique, which is based on measuring light transmission through a filter while the aerosol sample is drawn through it and particles deposit onto it. As the sample deposits, it inevitably interacts with the fibres of the filter, and these interactions need to be taken into account. This thesis investigates different approaches to dealing with filter-induced artefacts and how they affect the aerosol light absorption derived with this technique. In addition, the articles included in the thesis report aerosol optical properties at sites that have not been reported in the literature before.
The locations range from an urban environment in the city of São Paulo, Brazil, and an industrialised region of the South African Highveld to a rural station in Hyytiälä, Finland. In general, sites distant from urban areas tend to scatter more light in relation to light absorption, whereas in urban areas the optical properties show the aerosol particles to be darker.
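The core of the filter-based technique can be illustrated with the generic attenuation formulas (a hedged sketch with made-up numbers; the specific correction schemes evaluated in the thesis are not reproduced here):

```python
import math

# Light transmission through the particle-laden filter spot is measured
# relative to a clean reference spot. As particles deposit, transmission
# drops and the attenuation grows; the rate of growth, scaled by spot
# area and sample flow, gives an (uncorrected) attenuation coefficient.

def attenuation(i_sample, i_reference):
    """ATN = ln(I_ref / I_sample): zero for a clean filter, grows with loading."""
    return math.log(i_reference / i_sample)

def attenuation_coefficient(atn_start, atn_end, dt_s, spot_area_m2, flow_m3_s):
    """Uncorrected sigma_ATN [1/m] = (A / Q) * dATN/dt.
    Filter-loading and scattering artefacts still require a correction scheme."""
    return (spot_area_m2 / flow_m3_s) * (atn_end - atn_start) / dt_s

# Illustrative numbers: transmission drops from 1.00 to 0.98 over 60 s,
# with a 2e-5 m^2 spot and a flow of 1.67e-5 m^3/s (about 1 L/min).
atn0 = attenuation(1.00, 1.00)
atn1 = attenuation(0.98, 1.00)
sigma = attenuation_coefficient(atn0, atn1, 60.0, 2e-5, 1.67e-5)
```

The filter-induced artefacts discussed in the abstract enter as corrections to `sigma` (e.g. for multiple scattering in the filter matrix and for filter loading), and the different correction approaches are what the thesis compares.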
  • Karesoja, Mikko (Helsingin yliopisto, 2015)
    In this study, several inorganic-organic hybrids and multiresponsive hybrid polymers were prepared and characterised in detail. The focus was especially on stimuli-responsive materials, but also on nanocomposites based on modified montmorillonite clay. Furthermore, thin SiO2 capillaries were modified for electrophoretic separations. In all cases, different controlled radical polymerisation techniques were used. The modification of montmorillonite clay was conducted by surface-initiated atom transfer radical polymerisation. The clay was grafted with a random copolymer of butyl acrylate and methyl methacrylate, and the modified clay was further mixed with a matrix polymer of the same chemical composition to create nanocomposite films. The relation of the nanocomposite structure to its mechanical properties was the main focus: the extent of exfoliation of the clay in the composite films clearly affected the mechanical properties. Montmorillonite clay was also grafted with pH- and thermoresponsive poly(2-dimethylaminoethyl methacrylate), and the thermoresponsive properties of the resulting hybrid materials were compared to those of the corresponding homopolymer. The inner walls of thin silica capillaries were grafted with a cationic polymer, poly([2-(methacryloyl)oxyethyl]trimethylammonium chloride) (PMOTAC). These capillaries were further used in capillary electrophoresis to separate standard proteins, different β-blockers, and low-density as well as high-density lipoproteins. The separation of the analytes was not possible with bare SiO2 capillaries, but good separation was achieved with the polymer-coated capillaries. Hybrid materials based on mesoporous silica particles grafted with poly(N-vinylcaprolactam)-b-poly(ethylene oxide) (PVCL-b-PEO) were also synthesised. The challenging synthesis of these hybrids was performed as a combination of surface-initiated atom transfer radical polymerisation and click reactions.
The thermal behaviour and the colloidal stability of these hybrid particles were studied; the role of the PEO block in the colloidal stability of the particles was crucial. Finally, multiresponsive hybrid block copolymers based on N-vinylcaprolactam and 2-dimethylaminoethyl methacrylate were prepared. The thermal properties of these block copolymers can be tuned by varying the chain length of the PVCL block. On the other hand, the thermal behaviour of the PDMAEMA block is highly dependent on environmental conditions such as pH and ionic strength.
  • Pohjola, Valter (2014)
    This thesis deals with various aspects of the inverse boundary value problem for the magnetic Schrödinger operator. The first paper extends earlier uniqueness results to the case where the domain is a half space. The two main features of this problem are that the domain is not a bounded set and that the DN maps are known only on parts of the boundary. The results in this paper extend known results for the slab geometry to the half-space case and moreover give some improvements on the conditions for the measurement sets. The second paper (joint work with Pedro Caro) deals with the problem of stability. Its main aim is to show that log-type stability holds for rougher classes of potentials A and q than were previously known. We prove stable determination for an inverse boundary value problem associated to a magnetic Schrödinger operator, assuming that the magnetic and electric potentials are essentially bounded and the magnetic potentials admit a certain Hölder-type modulus of continuity. The third paper deals with the convection-diffusion equation, which is another first-order perturbation of the Laplacian. This equation is closely related to the magnetic Schrödinger equation, and we use this relationship to show that a certain scale of Hölder continuous velocity fields can be recovered from the DN map. A common theme in the second and third papers is that of lowering regularity requirements, i.e. extending known results so that they apply to larger and more irregular classes of potentials. This is a central research topic in inverse problems.
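For concreteness, the operator and the boundary data in question take the following standard form (notation assumed here, not quoted from the thesis): on a domain $\Omega$ with magnetic potential $A$ and electric potential $q$,

```latex
L_{A,q}\,u \;=\; -(\nabla + iA)^{2}u + qu, \qquad
\Lambda_{A,q}\colon\; u\big|_{\partial\Omega} \;\longmapsto\;
\bigl(\partial_{\nu}u + i\,(A\cdot\nu)\,u\bigr)\big|_{\partial\Omega}
\quad\text{for solutions of } L_{A,q}u = 0 \text{ in } \Omega,
```

and the inverse problem asks whether the Dirichlet-to-Neumann (DN) map $\Lambda_{A,q}$, possibly restricted to parts of $\partial\Omega$, determines $q$ and the magnetic field $\mathrm{d}A$ (the potential $A$ itself is determined only up to a gauge transformation).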
  • Liao, Li (Helsingin yliopisto, 2014)
    Atmospheric aerosol particles influence the Earth's climate system, affect air visibility, and harm human health. Aerosol particles originate from both anthropogenic and biogenic sources, either through direct emissions or through secondary particle formation. Secondary particle formation from gas-phase precursors constitutes the largest fraction of the global aerosol budget, yet large uncertainties remain in its mechanisms. This thesis studied the sources, the formation mechanisms, and the sinks of secondary particles based on data analysis of field measurements and chamber experiments. In addition, numerical simulations were performed to model the processes of secondary particle formation observed in the chamber experiments. We summarized our findings into five main conclusions: 1) Monoterpenes originating from anthropogenic sources (e.g. the forest industry) can significantly elevate local average concentrations and result in a corresponding increase in local aerosol loading; 2) Monoterpenes from biogenic emissions show a direct link to secondary particle production: the secondary aerosol masses correlate well with the accumulated monoterpene emissions; 3) Temperature influences biogenic monoterpene emissions, resulting in an indirect effect on biogenic secondary particle production and the corresponding cloud condensation nuclei (CCN) formation; 4) Both data analysis and numerical simulation suggested that nucleation involving the oxidation products of biogenic volatile organic compounds (VOCs) and H2SO4 better explains the nucleation mechanism, yet the specific VOCs participating in the nucleation process remain uncertain; 5) The numerical simulation showed evidence of a vapor wall-loss effect on the yield of secondary particles in the chamber experiments; a reversible gas-wall partitioning had to be considered to properly capture the observed temporal evolution of the particle number size distribution during the chamber experiments.
The results of this thesis contribute to the understanding of the role of monoterpenes in secondary particle formation. The thesis urges caution in parameterizing the temperature dependence of biogenic secondary particle formation when predicting aerosol production potential under the rising temperatures expected in the future. This work also points out a way to improve comprehensive numerical models so as to better understand secondary particle formation processes and related climatic effects.
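The reversible gas-wall partitioning mentioned in conclusion 5 can be sketched with a generic first-order rate model (the rate constants below are made-up illustrative values, not the ones used in the thesis): vapour is lost to the chamber walls at one rate and re-evaporates at another, so the gas phase relaxes towards an equilibrium instead of decaying to zero.

```python
# Hedged sketch of reversible gas-wall partitioning in a smog chamber:
#   dC_gas/dt  = -k_on * C_gas + k_off * C_wall
#   dC_wall/dt = +k_on * C_gas - k_off * C_wall
# integrated with a simple explicit Euler step. Units and rate constants
# are arbitrary placeholders.

def step(c_gas, c_wall, k_on, k_off, dt):
    """One explicit Euler step; the flux moves mass between the two reservoirs."""
    flux = (-k_on * c_gas + k_off * c_wall) * dt
    return c_gas + flux, c_wall - flux

c_gas, c_wall = 100.0, 0.0   # initial vapour entirely in the gas phase
k_on, k_off = 1e-3, 2e-4     # 1/s, hypothetical wall-loss and re-evaporation rates
for _ in range(36000):       # 10 h of chamber time with dt = 1 s
    c_gas, c_wall = step(c_gas, c_wall, k_on, k_off, 1.0)

# The gas phase approaches the equilibrium fraction k_off / (k_on + k_off)
# of the total vapour; with irreversible loss (k_off = 0) it would vanish.
```

This difference between reversible and irreversible wall loss is exactly what changes the apparent secondary-particle yield inferred from chamber data.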
  • Saarinen, Juha (Helsingin yliopisto, 2014)
    The climatic cooling during the Cenozoic (65 Ma–present) culminated in the Pleistocene Ice Ages (ca. 2.6 Ma–10 000 BP), during which the global climate oscillated between relatively warm climatic phases and very cold and dry glacial phases, when extensive continental glaciers formed in the Northern Hemisphere. The oscillation between the cold and warm climatic stages caused dramatic cyclic changes in the structure of vegetation, varying at its extreme between relatively humid forests and very dry and cold mammoth steppes in Europe. These constantly changing and harsh climatic and environmental conditions placed strong extinction and evolution pressures on mammal species. In this thesis I discuss how two major ecometric variables, body size and diet, of large herbivorous land mammals have varied during the Pleistocene, and how these patterns are connected with climate, environmental conditions and competing mammal species. Mammals diversified and started to occupy the niches of large vertebrates after the Late Cretaceous mass extinction, which caused the extinction of the large non-avian dinosaurs. The frequency of maximum body size in archaic mammal orders shows a significant global peak in the Middle Eocene (ca. 40 Ma) as a result of the diversification and niche filling after the Late Cretaceous mass extinction, but after that the maximum size frequency in mammal orders was low until it peaked significantly again during the Pleistocene Ice Ages. This indicates that the Pleistocene climatic and environmental conditions favoured particularly large body sizes in mammals. The overall harshness of the Ice Age climate (seasonal, mostly cold and dry conditions and often rapid climatic changes) could have favoured large body sizes in large terrestrial mammals through mechanisms that are more complicated than the often-cited benefit of large size for heat conservation (Bergmann's rule).
Large size increases the ability to survive seasonal shortages of resources such as food and water, and enables long-distance migrations to areas of better resource availability. On the other hand, strong erosional processes caused by glaciers produced fertile soils, and harsh climates reduced the chemical defences of plants; the result was seasonally high primary production and plant quality, which would have enabled herbivorous mammals to grow to large sizes during seasons of high productivity. Several studies have shown that the main factor driving fine-scale body size variation in ungulate populations is resource availability, which is regulated by primary productivity, plant quality, the population densities of the ungulate species (intraspecific resource competition) and interspecific resource competition. Comparisons of ungulate body sizes from the Middle and Late Pleistocene of Britain and Germany with vegetation openness (percentages of non-arboreal pollen in associated pollen records) show that species with different ecological strategies have different body size patterns in relation to vegetation structure. This connection between body size patterns and ecological strategies could explain the differing responses of body size to vegetation openness. Species which tend to have relatively small group sizes (e.g. deer) show on average larger body sizes in environments where the vegetation structure is open, whereas gregarious, open-adapted species (e.g. horses) tend to have smaller average body sizes in open habitats.
I suggest that this is because open habitats favour large body size in ecologically flexible species with small group sizes, owing to high resource availability and quality per individual (relatively low population densities), less size-restricted manoeuvrability and an enhanced capability to escape predators. In open-adapted, gregarious species, which are efficient open-vegetation feeders and form large groups in open habitats, the resource limitation imposed on each individual by high population densities can instead become a limiting factor for individual body size. In closed environments, the body size of the open-adapted, gregarious species is not limited by high population density, which enables them to attain larger individual sizes. Dietary signals of the key ungulate species in Middle and Late Pleistocene Europe, based on mesowear analyses, are on average significantly positively correlated with vegetation openness (non-arboreal pollen percentages) at the locality level. However, there are significant interspecific differences. While most of the species show positive correlations between their mesowear signal and non-arboreal vegetation, others, especially the red deer (Cervus elaphus), do not show any correlation. Instead, the mesowear signal of the red deer is significantly more abrasion-dominated when other browse-dominated feeders, especially the roe deer (Capreolus capreolus), are present. This indicates that interspecific competition can obscure the effect of available plant material on the diet of ecologically flexible species. This should be taken into account when interpreting the feeding ecology of key species in palaeocommunities, and especially when attempting to reconstruct palaeoenvironmental conditions from dietary proxies of mammals. Such attempts should ideally be based on dietary analyses of fossil herbivore faunas that are as complete as possible.
In order to extend the palaeodietary and palaeoecological analyses based on mesowear signals of herbivorous mammals, a new tooth-wear-based dietary analysis method was developed for elephants and other lamellar-toothed proboscideans, based on measuring the occlusal relief of their molar teeth as angles. The benefits of this approach compared with other available methods are that it is easy to apply, fast and robust, and it gives consistent and comparable results for species with different dental morphologies. The preliminary results of that study indicate that the angle measurement method is a powerful tool for reconstructing proboscidean diets from the fossil record.