Browsing by Issue Date


Now showing items 1-20 of 930
  • Peltola, Timo Hannu Tapani (Helsingin yliopisto, 2016)
    Position-sensitive silicon particle detectors are widely used in the tracking systems of high-energy physics experiments such as CMS at the LHC, the world's largest particle accelerator, at CERN. The foreseen upgrade of the LHC to its high-luminosity (HL) phase will allow the maximal physics potential of the facility to be exploited. However, after 10 years of operation the expected fluence will result in a radiation environment that is beyond the capacity of the present tracking system design. The required upgrade of the all-silicon central trackers will include higher granularity and radiation-hard sensors that can tolerate the increased occupancy and the higher radiation levels. To address this, extensive measurement and simulation studies have been performed to investigate different designs and silicon materials. The work in this thesis has been carried out within the CMS Tracker Upgrade Project and the multi-experiment RD50 Collaboration. Simulations serve a vital role in device structure optimization and in predicting the electric fields and trapping in the silicon sensors. The main objective of the device simulations is to use professional software to develop an approach that both models and predicts the performance of irradiated silicon detectors. In the course of this thesis, an effective non-uniform defect model is developed using the Sentaurus TCAD simulation framework. The model reproduces both the observed bulk and surface properties and can predict the performance of strip detectors up to HL-LHC fluences. When applied to measurements of the position dependence of Charge Collection Efficiency, the model provides a means for parametrizing the oxide charge accumulation at the detector's SiO2/Si interface as a function of irradiation dose. TCAD simulations are also applied to a comparative study of a thin p-on-p pixel sensor and a more conventional p-on-n pixel sensor.
The simulations are used to explain the measured charge collection behavior and to investigate in detail the electrical properties of the two sensor types. Finally, the scope of the TCAD simulations is extended to GaAs, a compound semiconductor material. By implementing the observed deep donor defect level in the simulation, the resulting electrical properties are in close agreement with measurements of an epitaxial GaAs radiation detector. Also, the transferred-electron effect observed in the transient current measurements is reproduced by the simulation. The combined results of this thesis demonstrate the versatility and power of TCAD simulations of semiconductor detectors as a tool to bridge the gap from observation to parametrization.
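One basic quantity that such sensor simulations must get right is the full depletion voltage of a planar silicon sensor, which in the simple abrupt-junction depletion approximation is V_fd = q·N_eff·d²/(2ε). A minimal sketch of this textbook estimate, with a hypothetical effective doping concentration and a typical sensor thickness (not values from the thesis):

```python
# Full depletion voltage of a planar silicon sensor in the
# abrupt-junction depletion approximation: V_fd = q * Neff * d^2 / (2 * eps)
q = 1.602e-19            # elementary charge, C
eps = 11.9 * 8.854e-12   # permittivity of silicon, F/m
Neff = 1e18              # effective doping concentration, m^-3 (hypothetical)
d = 300e-6               # sensor thickness, m (typical for a strip sensor)

V_fd = q * Neff * d**2 / (2 * eps)   # a few tens of volts for these values
```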
  • Peltola, Olli (Helsingin yliopisto, 2016)
    Methane (CH4) is a strong greenhouse gas, and its surface mixing ratio has increased by 150 % since the pre-industrial era. The aggregated atmospheric CH4 budget is relatively well constrained; however, the contribution of different sources/sinks to the overall budget is not. The exchange of matter and energy between the atmosphere and different ecosystems can be studied with the eddy covariance (EC) technique. Recently, instrumentation suitable for EC measurements of CH4 fluxes has become available; however, measurement and data processing methodologies are yet to be standardised. By including instrument and software intercomparisons, this thesis aims to advance the harmonisation of EC CH4 flux measurement and data processing methodologies. Data from two sites are utilized: Siikaneva fen in Southern Finland and Cabauw agricultural peatland in the Netherlands. The improvement in CH4 instrumentation was exemplified in this work by the decrease in signal noise: the new CH4 gas analysers showed approximately 10 times lower noise levels than the older models. Cumulative CH4 emissions agreed within 7 %, which suggests that there was no significant bias between the instruments. Another possible source of uncertainty is EC data processing. Two widely used EC data processing programs computed comparable CH4 fluxes for different instrument and data processing combinations, and thus the data processing routines were implemented similarly. The significance of careful EC data processing was demonstrated by the fact that occasionally the flux corrections contributed over 100 % of the measured signal. EC CH4 fluxes showed high spatial variability in an agricultural peatland ecosystem, considerably higher than that of the other fluxes. This variability hinders the upscaling of EC CH4 fluxes to larger spatial scales, and such scaling is needed if the CH4 balance of the whole landscape is to be evaluated.
Therefore, the usability of a tall flux tower for measuring the landscape fluxes directly was also explored. While the results from this exercise were encouraging, the morning and evening transition periods proved to be difficult for the tall flux tower system. This thesis sets a benchmark for the precision and accuracy of EC CH4 data by evaluating instrumentation and data processing tools. Further, the thesis raises awareness of possible problems when upscaling short-tower EC CH4 measurements due to flux variability within the landscape. Finally, the findings can be used by researchers in the future to evaluate the reliability of their EC CH4 data, and thus the thesis contributes to the harmonisation of EC CH4 methodologies.
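The covariance at the heart of the EC technique mentioned above can be sketched on synthetic data: after Reynolds decomposition, the flux is the mean product of the vertical-wind and concentration fluctuations over an averaging period. All numbers below are invented for illustration, and real processing adds the corrections discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10 * 60 * 30                     # 10 Hz samples over a 30-min period
w = rng.normal(0.0, 0.3, n)          # vertical wind speed, m s^-1
# make CH4 density fluctuate partly in phase with updrafts -> upward flux
c = 1.30 + 0.05 * w + rng.normal(0.0, 0.02, n)   # CH4 density, mg m^-3

# Reynolds decomposition: flux = mean of the fluctuation product w'c'
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)    # mg m^-2 s^-1, before corrections
```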
  • Lehtomaa, Jaakko (Helsingin yliopisto, 2016)
    This book is about heavy-tailed random variables and the phenomena they induce in mathematical models. It has become clear that the classical models employing light-tailed variables have an inherent tendency to underestimate the magnitude of extremal events. Recent developments in the financial world (financial crises) suggest that such shortcomings are not merely theoretical curiosities, but game-changing phenomena that can single-handedly determine the fate of an agent. To overcome the obstacles set by the classical lines of thought, an agent must consider ways to improve how risk is modelled and assessed. The book proposes two ways to do this. Firstly, the existing models are reconfigured to include the effects of different types of risks. Secondly, heavy-tailed effects such as the principle of a single big jump are made more transparent by a detailed investigation. The mathematical theory of this book is based on the theory of random walks and their generalisations. Special attention is paid to randomly weighted random walks and randomly stopped random walks, which are commonly encountered in the field of insurance mathematics.
  • Fedi, Giacomo (Helsingin yliopisto, 2016)
    The B0s-B0sbar system was investigated using the J/ψ(μ+μ−)φ(K+K−) decay channel. Using 2010 CMS data, corresponding to an integrated luminosity of 40 1/pb, the B0s invariant mass peak was reconstructed and the B0s differential cross section was measured as a function of its transverse momentum and rapidity. Using 2011 CMS data, corresponding to an integrated luminosity of 5 1/fb, the difference of the decay widths between the two B0s mass eigenstates, ∆Γs, was measured. With the 2012 CMS data, corresponding to an integrated luminosity of 20 1/fb, the CP-violating weak phase φs and the decay width difference ∆Γs of the B0s were measured. The most important result of this thesis is the measurement of the CP-violating phase φs, which was found to be φs = −0.075 ± 0.097 (stat.) ± 0.031 (syst.).
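When a result carries separate statistical and systematic uncertainties, as φs does above, a single combined uncertainty is conventionally obtained by adding the two in quadrature, assuming they are independent:

```python
import math

stat, syst = 0.097, 0.031             # uncertainties on phi_s from the text
total = math.sqrt(stat**2 + syst**2)  # quadrature sum, assumes independence
print(round(total, 3))                # -> 0.102
```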
  • Inkinen, Juho (Helsingin yliopisto, 2016)
    Non-resonant inelastic x-ray scattering can provide information on various atomic-scale properties and phenomena by probing the spectrum of electronic excitations. The technique allows bulk-sensitive measurements and relatively freely tunable sample conditions, even for excitations with energies in the soft x-ray range. Non-dipolar excitations are also accessible by virtue of the ability to impose a finite momentum transfer in the scattering process. This thesis comprises studies that apply non-resonant inelastic x-ray scattering to questions of chemistry and physics in the gas and solid phases. More specifically, the works focus on selected cases of gas-phase samples at elevated temperatures and pressures, as well as on a solid-state chemical reaction of an organic compound. The electronic excitation spectra of gas-phase molecules exhibit a temperature dependence due to molecular vibrations, which affect both the intensities and the energies of the electronic transitions. Spectra measured at varied sample temperatures and with varied momentum transfers help give insight into the vibrational effects when interpreted using spectrum simulations. In particular, vibrations that distort the molecular symmetry from that of the equilibrium geometry are demonstrated to be important. A classic example of topochemical reactions is the dimerization of crystalline cinnamic acid, which is usually induced by ultraviolet illumination. The study presented here shows that the reaction also takes place under x-ray irradiation. The in situ time-resolved spectra allow reaction kinetics data to be obtained, and an imaging method is used to follow the spatial progress of the reaction as well. These novel experiments using non-resonant inelastic x-ray scattering, together with the applied analysis methods, demonstrate the versatility of the technique and help to envision future studies.
  • Ihalainen, Toni (Helsingin yliopisto, 2016)
    Quality control methods and test objects were developed and used for structural magnetic resonance imaging (MRI), functional MRI (fMRI) and diffusion-weighted imaging (DWI). Emphasis was put on methods that allow objective quality control for organizations that use several MRI systems from different vendors and with different field strengths. Notable increases in the number of MRI studies and of novel MRI systems, the fast development of MRI technology, and international discussion about the quality and safety of medical imaging have motivated the development of objective, quantitative and time-efficient methods for quality control. Quality control methods need to keep up to date with the most modern MRI methods, including parallel imaging, parallel transmit technology and new diffusion-weighted sequences. The methods also need to be appropriate for organizations that use MRI for quantitative measurements or that participate in multicenter studies. Two different test object methods for structural MRI were evaluated in a multi-unit medical imaging organization: the Eurospin method and the American College of Radiology (ACR) method. The Eurospin method was originally developed as part of a European Concerted Action, and five standardized test objects were used to create a quality control protocol for six MRI systems. Automatic software was written for image analysis. In contrast, a single multi-purpose test object was used for the ACR method, and image quality for both standard and clinical imaging protocols was measured for 11 MRI systems. A previously published method for fMRI quality control was applied to the evaluation of 5 MRI systems and was extended to simultaneous electroencephalography (EEG) and fMRI (EEG fMRI). The test object results were compared with human data obtained from two healthy volunteers.
A body-diameter test object was constructed for DWI testing, and apparent diffusion coefficient (ADC) values and the level of artifacts were measured using conventional and evolving DWI methods. The majority of the measured MRI systems operated at an acceptable level when compared with published recommended values for structural and functional MRI. In general, the measurements were repeatable. The test object study revealed information about the extent of superficial artifacts (15 mm) and the magnitude of the signal-to-noise ratio (SNR) reduction (15%) in the simultaneous EEG fMRI images. The observations were in accordance with the data of the healthy human volunteers. The agreement between the ADC values for the different methods used in DWI was generally good, although differences of up to 0.1 x10^-3 mm^2/s were observed between different acquisition options and different field strengths, and along the slice direction. Readout-segmented echo-planar imaging (EPI) and zoomed EPI, in addition to efficient use of the parallel transmit technology, resulted in lower levels of artifacts than the conventional methods. Other findings included geometric distortions at the edges of the MRI system field of view, minor instability of the image center of mass in fMRI, and an amplifier difference that affected the EEG signal of EEG fMRI. The findings showed that although the majority of the results were within acceptable limits, MRI quality control was capable of detecting inferior image quality and of revealing information that supported clinical imaging. A comparison between the different systems, and also with international reference values, was feasible within the reported limitations. Automated analysis methods were successfully developed and applied in this study. A possible future direction for MRI quality control would be to further develop its relevance for clinical imaging.
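The ADC values compared above follow, in the simplest two-point case, from the signal decay between two diffusion weightings: ADC = ln(S(b0)/S(b1))/(b1 − b0). A sketch with hypothetical ROI signals (not measurements from the study):

```python
import math

b0, b1 = 0.0, 1000.0    # diffusion weightings (b-values), s mm^-2
S0, S1 = 1000.0, 410.0  # hypothetical mean ROI signals at b0 and b1

# two-point ADC estimate from the mono-exponential decay model
adc = math.log(S0 / S1) / (b1 - b0)   # mm^2 s^-1, about 0.89e-3 here
```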
  • Kettula, Kimmo (Helsingin yliopisto, 2016)
    As galaxy clusters are the most massive bound objects in the Universe, their number and evolution can be used for constraining cosmological parameters. This requires knowledge of cluster masses, which is typically achieved by calibrating scaling relations, where an observable is used as a mass proxy. Clusters can be efficiently detected through the X-ray emission of the hot intracluster gas, whereas weak gravitational lensing provides the most accurate mass measurements. This thesis studies the X-ray emission of galaxy clusters, the cross-calibration of X-ray instruments and the scaling between X-ray observables and weak lensing mass. We characterise the thermal bremsstrahlung X-ray emission of the Ophiuchus cluster with XMM-Newton and use INTEGRAL to detect non-thermal hard X-ray excess emission. We model the excess emission, assuming that it is due to inverse-Compton scattering of CMB photons by a population of relativistic electrons, derive the pressure of the relativistic electron population and give limits on the magnetic field. We also study the cross-calibration of the XIS detectors onboard the Suzaku satellite and show that discrepancies can be explained by the modelling of the optical blocking filter contaminant. We conclude that XIS0 is more accurately calibrated than XIS1 and XIS3 and show that soft-band cluster temperatures measured with XIS0 are approximately 14 % lower than those measured with XMM-Newton/EPIC-pn. We study the scaling of the X-ray luminosity L and temperature T of the intracluster gas to weak lensing mass for galaxy groups and low-mass clusters. These samples are combined with high-mass samples from the literature, include corrections for survey biases and define the current limitations of L and T as mass proxies. Studying the residuals, we find the first observational evidence for a mass dependence in the scaling relations based on weak lensing masses.
We also study the hydrostatic mass bias in X-ray mass estimates and find indications of an increased bias in low-mass systems. Our results on scaling relations are limited by our understanding of sample selection and by the number of observations of low-mass systems. Calibration against e.g. weak lensing can help to address cross-calibration discrepancies, and forthcoming X-ray observatories will significantly improve our understanding of non-thermal phenomena in clusters.
  • Airas, Annika (Helsingin yliopisto, 2016)
    Urban waterfront redevelopment is a global trend. Since the 1960s and the advent of containerization, new commercial and residential developments have been replacing the industrial operations that once characterized the waterfronts of port cities. Research to date has largely focused on the redevelopment of seaports in large coastal cities, primarily in a North American context, yet significant changes are also taking place in smaller locations around the globe. In this study, two empirical examples are given of smaller cities, one in Finland and one in Canada, both of which historically served the woodworking industry. As these industries declined and reorganized, the waterfronts they occupied have been redeveloped primarily into residential districts, particularly since the late 1980s. This study takes a new, multidisciplinary approach to waterfront research by advancing the concept of historical distinctiveness and revealing the ways in which it is expressed within waterfront planning. While the term distinctiveness is often used in planning documents to refer to the waterfront's historical past, the term remains poorly defined. This study presents the novel concept of historical distinctiveness and introduces a research framework through which it can be understood. In particular, the study pays attention to the content of historical distinctiveness and examines how it is expressed in the contemporary built environment of two formerly industrial waterfronts: Lake Vesijärvi in Lahti, Finland, and Queensborough in New Westminster, Canada. Historical distinctiveness as defined in this study consists of six interlocking and constantly evolving elements: international historical influences, historic uses of the waterfront and their reflection in local built environments, the waterfront's relation to the city, the multiple historic layers in the built form of the waterfront, comparative differences in architectural history, and varying values.
The concept of historical distinctiveness enables local histories and development trajectories to be revealed while widening the understanding of contemporary waterfront cities. Both Lahti and Queensborough are changing quickly and dramatically, which makes it difficult to identify the remaining vestiges of their woodworking past. Furthermore, the appearance and design of new developments reflect a narrow appreciation of their industrial legacy. Planning processes that aim to promote the distinctiveness of historical waterfronts are instead, ironically, ignoring and at times actually erasing truly unique urban histories. This study demonstrates how new rebuilt environments are becoming more similar across sites, while also becoming more similar to non-waterfront areas in cities. Such developments may limit or destroy the use value of these areas while ignoring cultural histories and local identities, thereby limiting options for creating diverse cities. By taking historical distinctiveness into account, cities can increase historical awareness and create possibilities for the future, thereby creating truly distinctive waterfronts.
  • Kuosmanen, Niina (Helsingin yliopisto, 2016)
    In this work, the Holocene history of the western taiga forests, at the modern western range limit of Siberian larch (Larix sibirica) in northern Europe, is investigated using fossil pollen and stomata records from small forest hollow sites. The relative importance of the potential drivers of long-term boreal forest composition is quantitatively assessed using novel approaches in a palaeoecological context. The statistical method of variation partitioning is employed to assess the relative importance of climate, forest fires, local moisture conditions and human population size for long-term boreal forest dynamics at both regional (lake records) and local scales (small hollow records). Furthermore, wavelet coherence analysis is applied to examine the significance of individual forest fires for boreal forest composition. The results demonstrate that Siberian larch and Norway spruce (Picea abies) have been present in the region since the early Holocene. The expansion of spruce at 8000-7000 cal yr BP caused a notable change in forest structure towards dense spruce-dominated forests, and appears to mark the onset of the migration of spruce into Fennoscandia. The mid-Holocene dominance of spruce and the constant presence of Siberian larch suggest that taiga forest persisted throughout the Holocene at the study sites in eastern Russian Karelia. Climate is the main driver of long-term changes in boreal vegetation at the regional scale. However, at the local scale the role of local factors increases, suggesting that intrinsic site-specific factors have an important role in stand-scale dynamics in the boreal forest. When the whole 9000-year study period is considered, forest fires explain relatively little of the variation in stand-scale boreal forest composition.
However, forest fires have a significant role in stand-scale forest dynamics when observed over shorter time intervals, and the results suggest that fires can have a significant effect on short-term changes in individual tree taxa as well as a longer, profound effect on forest structure. The relative importance of human population size for the variation in long-term boreal vegetation was statistically assessed for the first time using this type of human population size data. The results show an unexpectedly low importance of human population size as a driver of changes in long-term boreal vegetation, but they may be biased because of the difference in spatial representativeness between the human population size data and the pollen-derived forest composition data. Although the results strongly suggest that climate is the main driver of long-term boreal forest dynamics, local disturbances such as fires, species interactions and local site-specific characteristics can dictate the importance of climate for stand-scale boreal forest dynamics.
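The variation partitioning used above decomposes the explained variance in a response into pure and shared fractions of the predictor groups, using the R² values of nested models. A minimal two-group sketch on synthetic data (the variable names and effect sizes are invented for illustration):

```python
import numpy as np

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
climate = rng.normal(size=(200, 1))          # hypothetical predictor group 1
fire = rng.normal(size=(200, 1))             # hypothetical predictor group 2
y = (3 * climate + 1 * fire).ravel() + rng.normal(size=200)

r2_climate = r2(climate, y)                  # climate alone (incl. shared part)
r2_fire = r2(fire, y)                        # fire alone (incl. shared part)
r2_both = r2(np.hstack([climate, fire]), y)  # full model

pure_climate = r2_both - r2_fire             # variance explained only by climate
pure_fire = r2_both - r2_climate             # variance explained only by fire
shared = r2_climate + r2_fire - r2_both      # jointly explained fraction
```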
  • Mäkelä, Valtteri (Helsingin yliopisto, 2016)
    NMR spectroscopy is an invaluable tool for structure elucidation in chemistry and molecular biology, able to provide unique information not easily obtained by other analytical methods. However, performing quantitative NMR experiments and mixture analysis is considerably less common due to constraints in sensitivity and resolution and the fact that NMR observes individual nuclei, not molecules. Advances in instrument design over the last 25 years have substantially increased the sensitivity of NMR spectrometers, diminishing the main weakness of NMR, while increases in field strength and ever more intricate experiments have improved the resolving power and expanded the attainable information. The minimal need for sample preparation and its non-specific nature make quantitative NMR suitable for many applications ranging from quality control to metabolome characterization. Furthermore, the development of automated sample changers and fully automated acquisition has made high-throughput NMR acquisition a more feasible and attractive, yet expensive, possibility. This work discusses the fundamental principles and limitations of quantitative liquid-state NMR spectroscopy, and puts together a summary of its various aspects scattered across the literature. Many of these more subtle features can be neglected in simple routine spectroscopy, but become important when extracting quantitative data and/or when trying to acquire and process vast amounts of spectra consistently. The original research presented in this thesis provides improved methods for the acquisition of quantitative 13C-detected NMR spectra in the form of modified INEPT-based experiments (Q-INEPT-CT and Q-INEPT-2D), while software tools for the automated processing and analysis of NMR spectra are also presented (ImatraNMR and SimpeleNMR). The application of these tools is demonstrated in the analysis of complex hydrocarbon mixtures (base oils), plant extracts and blood plasma samples.
The increased capability of NMR spectroscopy, the rising interest in metabolomics and for example the recent introduction of benchtop NMR spectrometers are likely to expand the future use of quantitative NMR in the analysis of complex mixtures. For this reason, the further development of robust, accurate and feasible analysis methods and tools is essential.
  • Virkki, Anne (Helsingin yliopisto, 2016)
    Planetary radar can be considered humankind's strongest instrument for the post-discovery characterization and orbital refinement of near-Earth objects. After decades of radar observations, extensive literature describing the radar properties of various objects of the Solar System is currently available. At the same time, there is a shortage of work on what the observations imply about the physical properties of the planetary surfaces. The goal of my thesis is to fill part of this gap. Radar scattering as a term refers to the alterations experienced by electromagnetic radiation in the backscattering direction when interacting with a target particle. In the thesis, I investigate by numerical modeling what role different physical properties of planetary surfaces, such as the electric permittivity, the size of the scatterers, or their number density, play in radar scattering. In addition, I discuss how radar observations can be interpreted based on modeling. Because all codes have their own limitations, it is crucial to compare results obtained with different methods. I use the Multiple Sphere T-matrix method (MSTM) for clusters of spherical particles to understand scattering by closely packed regolith particles. I use the discrete-dipole approximation code ADDA to comprehend the single-scattering properties of inhomogeneous or irregular regolith particles at the wavelength scale. And finally, I use a ray-optics algorithm with radiative transfer, Siris, to simulate radar scattering by large irregular particles that mimic planetary bodies. The simulations for clusters of spherical particles reveal a polarization enhancement at certain bands of sizes and refractive indices in the backscattering direction. The results from computations using MSTM and ADDA imply that the electric permittivity plays a strong part in terms of circular polarization. From the results of the ray-optics computations for large, irregular particles, I derive a novel semi-analytic form for the radar scattering laws.
And, by including diffuse scattering using wavelength-scale particles with laboratory-characterized geometries, we are able to simulate the effect of numerous physical properties of a realistic planetary surface on radar scattering. Our model using Siris is among the most quantitative models for radar scattering by planetary surfaces. The results support and improve the current understanding of the effects of the surface geometry, the electric permittivity, and the coherent-backscattering mechanism, and can be used to interpret radar observations. Furthermore, I underscore that the roles of absorption and scatterer geometry must not be underestimated, although determining realistic values for the variables can be challenging.
  • Koski, Aleksis (Helsingin yliopisto, 2016)
    The subject of this thesis is elliptic PDEs that appear in the fields of geometric analysis and the calculus of variations, such as the Beltrami equation and its generalizations. The main results concern the existence and uniqueness of solutions in function spaces such as the Sobolev spaces, as well as the regularity and properties of solutions. The thesis contains four scientific articles on the subject. The first two articles contain results on generalized Beltrami equations, where the solvability is investigated using functional-analytic methods. New results for the corresponding singular integral operators are also found, such as the L^2-norm of the Beurling transform for the Dirichlet problem. The third and fourth papers cover properties of solutions to Euler-Lagrange and Hopf-Laplace equations for certain energy functionals. One of the main results is the generalization of the classic Radó-Kneser-Choquet theorem to the p-harmonic energy in the plane. The proof is based on a new subharmonicity result for the Jacobian of a solution, and other similar new subharmonicity results are also obtained in the thesis.
  • Sarnet, Tiina (Helsingin yliopisto, 2015)
    Materials are crucial to the technological advances of society. The never-ending need for data storage and new energy sources pushes research towards clear goals. Perhaps some of today's solutions can in the future be replaced or augmented with phase change memories and thermoelectric materials. Phase change materials store data in their amorphous and crystalline phases, which have great differences in their electrical and optical properties. Thermoelectric materials can utilize waste heat and produce electricity from temperature differences. They can also be utilized in temperature control, as they can create a temperature difference when driven with electricity. Shrinking device sizes and increasing device complexity require that deposition methods such as atomic layer deposition (ALD) are used. ALD is based on sequential, saturative surface reactions. Precursors are brought to the surface one at a time, separated by purges. Because of the saturative reactions, each ALD cycle deposits a constant amount of material, up to a monolayer, making film thickness control very simple. ALD of chalcogenides has focused mainly on sulfides, and the chemistries for selenide and telluride depositions have been limited. Pnictides are in a similar situation: the ALD chemistries for arsenides include only a few combinations of precursors, and antimonides are barely demonstrated. This is why a new group of precursors was needed. The alkylsilyl non-metal precursors react very efficiently with metal halides in a dehalosilylation reaction. These types of reactions have now been utilized in both chalcogenide and pnictide thin film growth. In this thesis, several chalcogenide and pnictide ALD processes were studied in detail by utilizing the appropriate alkylsilyl non-metal precursors. In general, typical ALD characteristics were found.
Growth rates saturated with respect to precursor pulse lengths; film thicknesses increased linearly with the number of deposition cycles; and the films were stoichiometric with low impurity contents. Application-wise, the ALD chalcogenide and pnictide films had the required properties. The phase of the phase change materials could be changed repeatably and quickly, and the thermoelectric films showed a proper response to a temperature gradient.
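The two ALD signatures noted above (growth per cycle saturating with pulse length, thickness linear in cycle count) can be condensed into a toy model; the saturated growth-per-cycle value and the time constant below are invented for illustration:

```python
import math

def ald_thickness(cycles, pulse_length, gpc_sat=0.05, tau=0.2):
    """Film thickness (nm) after a number of ALD cycles.

    Growth per cycle saturates with precursor pulse length (self-limiting
    surface reactions); thickness then scales linearly with cycle count.
    gpc_sat (nm/cycle) and tau (s) are hypothetical illustration values.
    """
    gpc = gpc_sat * (1.0 - math.exp(-pulse_length / tau))  # saturation curve
    return cycles * gpc

# doubling the cycle count doubles the thickness...
t1 = ald_thickness(500, pulse_length=2.0)
t2 = ald_thickness(1000, pulse_length=2.0)
# ...while lengthening an already-saturating pulse barely changes it
t3 = ald_thickness(1000, pulse_length=4.0)
```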
  • Arola, Teppo (Helsingin yliopisto, 2015)
    The increase of greenhouse gas concentrations in the atmosphere, the limits of conventional energy reservoirs and the instability risks related to energy transport have forced nations to promote the utilisation of renewable energy reservoirs. Groundwater can be seen as an option for renewable energy utilisation, and not only as a source of individual or municipal drinking water. Finland has multiple groundwater reservoirs that are easily exploitable, but groundwater energy is not commonly used for renewable energy production. The purpose of this thesis study was to explore the groundwater energy potential in Finland, a region with low-temperature groundwater. Cases at three different scales were investigated to provide a reliable assessment of the groundwater energy potential in Finland. Firstly, the national groundwater energy potential was mapped for aquifers classified for water supply purposes that are under urban or industrial land use. Secondly, the effect of urbanisation on the peak heating and peak cooling power of groundwater was investigated for three Finnish cities, and finally, the long-term groundwater energy potential was modelled for 20 detached houses, 3 apartment buildings and a shopping centre. The thesis connects scientific information on hydro- and thermogeology with the energy efficiency of buildings to produce accurate information concerning groundwater energy utilisation. Hydrological and thermogeological data were used together with accurate data on the energy demands of the buildings. The heating and cooling power of groundwater was estimated based on the groundwater flux, temperature and heat capacity and the efficiency of the heat transfer system. The power producible from groundwater was compared with the heating and cooling demands of the buildings to calculate the concrete groundwater energy potential.
Approximately 20% to 40% of annually constructed residential buildings could be heated utilising groundwater from classified aquifers that are already under urban land use in Finland. These aquifers contain approximately 40 to 45 MW of heating power. In total, 55 to 60 MW of heat load could be utilised with heat pumps. Urbanisation increases the heating energy potential of groundwater. This is due to the anthropogenic heat flux into the subsurface, which increases groundwater temperatures in urbanised areas. The average groundwater temperature was 3 to 4 °C higher in city centres than in rural areas. Approximately 50% to 60% more peak heating power could be utilised from urbanised areas than from rural areas. Groundwater maintained its long-term heating and cooling potential during 50 years of modelled operation in an area where the natural groundwater temperature is 4.9 °C. Long-term energy utilisation created a cold groundwater plume downstream, in which the temperature decreased by 1 to 2.5 °C within a distance of 300 m from the site. Our results demonstrate that groundwater can be effectively utilised down to a temperature of 4 °C. Groundwater can form a significant local renewable energy resource in Finland. It is important to recognise and utilise all renewable energy reservoirs to achieve the internationally binding climatological targets of the country. Groundwater energy utilisation should be noted as one easily exploitable option to increase the use of renewable energy resources in a region where the natural groundwater temperature is low. The methods presented in this thesis can be applied when mapping and designing groundwater energy systems in nationwide- to property-scale projects. Accurate information on hydro- and thermogeology together with the energy demands of buildings is essential for successful system operation.
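The power estimate described in the abstract combines groundwater flux, the temperature change imposed by the heat-transfer system, and the heat capacity of water. A minimal sketch of that relation (the function names, the COP figure and the example numbers are illustrative assumptions, not values from the thesis):

```python
# Thermal power extractable from a groundwater flow (illustrative sketch).
# P = Q * (rho * c_p) * dT, where Q is the volumetric flux, rho * c_p is
# the volumetric heat capacity of water, and dT is the temperature drop
# imposed on the water by the heat-transfer system.

RHO_CP_WATER = 4.18e6  # volumetric heat capacity of water, J/(m^3 K)

def groundwater_thermal_power(flux_m3_per_s: float, delta_t_k: float) -> float:
    """Thermal power (W) carried by a groundwater flux cooled by delta_t_k."""
    return flux_m3_per_s * RHO_CP_WATER * delta_t_k

def heat_pump_load(thermal_power_w: float, cop: float = 3.0) -> float:
    """Total heat load (W) deliverable when a heat pump with the given COP
    adds its compressor work on top of the heat extracted from the ground."""
    return thermal_power_w * cop / (cop - 1.0)

# Example: 10 l/s of groundwater cooled by 3 K yields roughly 125 kW of
# ground heat; with a heat pump the deliverable load is larger still.
p_ground = groundwater_thermal_power(0.010, 3.0)
p_load = heat_pump_load(p_ground)
```

The heat-pump step mirrors the abstract's distinction between the heating power contained in the aquifers (40 to 45 MW) and the larger heat load deliverable with heat pumps (55 to 60 MW), since the pump's electrical work is added to the extracted ground heat.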
  • Hailu, Binyam Tesfaw (Helsingin yliopisto, 2015)
    Remote sensing provides land-cover information on a variety of temporal and spatial scales. The increasing availability of remote sensing data is now a major factor in land-change analysis and in understanding its impact on ecosystem services and biodiversity. This wider accessibility is also leading to improvements in the methods used to integrate these data into land-cover modelling and change analysis. Despite these developments in technology and data availability, however, there are still questions to be addressed regarding the dynamics of land cover and its impact, particularly in areas such as Ethiopia where the human population is expanding and there is a need for improvement in the management of natural resources. Multi-scale approaches (from the national to the local) were used in this thesis to assess change in land cover and ecosystem services in Ethiopia, specifically in terms of provisioning (the production of food, i.e. cash crops) and regulating (climate regulation by vegetation cover). These assessments were based on multi-scale remote sensing (very high spatial resolution aerial remote sensing, high-resolution SPOT 5 satellite imaging and products of medium-resolution satellite remote sensing) and climate data (e.g., precipitation, temperature). The main focus in this thesis is on mapping and modelling the spatial distribution of vegetation. This includes: (i) forest mapping (indigenous and exotic forests), (ii) modelling the probabilistic presence of understory coffee, (iii) Coffea arabica species distribution modelling and mapping, and (iv) simulating pre-agricultural-expansion vegetation cover in Ethiopia. The results of the applied predictive modelling were robust in terms of: (i) identifying and mapping past vegetation cover and (ii) mapping understory shrubs such as coffee.
I present a reconstruction of earlier vegetation cover that mainly comprised broadleaved evergreen and deciduous forest but was replaced in the course of agricultural expansion. Given the spatial scale of the latter, the environmental modelling was complemented with high spatial resolution satellite (2.5 m) and aerial (0.5 m) images. The results of the Object-Based Image Analysis show that indigenous forests could be distinguished from exotic forests. Current and future suitable locations that are environmentally favourable for the growth of understory coffee were identified and mapped in the coffee-growing areas of Ethiopia. In conclusion, the information presented in this thesis, based on the multi-scale assessment of land changes, should lead to the better-informed management of natural resources and conservation, and the restoration of major areas affected by human population growth.
  • Franchin, Alessandro (Helsingin yliopisto, 2015)
    This thesis focuses on the experimental characterization of secondary atmospheric nanoparticles and ions during their formation. The work was developed on two distinct and complementary levels: a scientific level, aimed at advancing the understanding of particle formation, and a more technical level, dedicated to instrument development and characterization. Understanding and characterizing aerosol formation is important, as the formation of aerosol particles from precursor gases is one of the main sources of atmospheric aerosols. Elucidating how aerosol formation proceeds in detail is critical to better quantify the aerosol contribution to the Earth's radiation budget. Experimentally characterizing the first steps of aerosol formation is the key to understanding this phenomenon. Developing and characterizing suitable instrumentation to measure clusters and ions in the sub-3 nm range, where aerosol formation starts, is necessary to clarify the processes that lead to aerosol formation. This thesis presents the results of a series of experimental studies of sub-3 nm aerosol particles and ions. It also shows the results of the technical characterization and instrument development that were made in the process. Specifically, we describe three scientific results achieved from chamber experiments. Firstly, the relative contributions of sulfuric acid, ammonia and ions to nucleation processes were quantified experimentally, supporting the conclusion that sulfuric acid alone cannot explain atmospheric observations of nucleation rates. Secondly, the chemical composition of cluster ions was directly measured for a ternary system, where sulfuric acid, ammonia and water were the condensable vapors. In these measurements we observed a decreasing acidity of the clusters with increasing concentration of gas-phase ammonia, with the sulfuric acid to ammonia ratio staying closer to that of ammonium bisulfate than to that of ammonium sulfate.
Finally, in a series of chamber experiments the ion-ion recombination coefficient was quantified under different conditions. The ion-ion recombination coefficient is a basic physical quantity for modeling ion-induced and ion-mediated nucleation. We observed a steep increase in the ion-ion recombination coefficient with decreasing temperature and with decreasing relative humidity. This thesis also reviews technical results of: 1) laboratory verification, characterization and testing of different aerosol and ion instruments measuring in the sub-3 nm range; 2) the development of new inlets for such instruments to improve the detection of sub-3 nm particles and ions.
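The recombination coefficient appears in the standard small-ion balance equation, dn/dt = q − αn², whose steady state fixes the ambient ion concentration. A minimal numerical sketch of that textbook relation (the example values for q and α are illustrative, not measurements from the thesis):

```python
import math

def steady_state_ion_concentration(q: float, alpha: float) -> float:
    """Steady-state small-ion concentration n = sqrt(q / alpha), from the
    balance equation dn/dt = q - alpha * n**2 with dn/dt = 0 (losses to
    pre-existing aerosol neglected)."""
    return math.sqrt(q / alpha)

# Illustrative ground-level values: ion production rate q ~ 10 cm^-3 s^-1
# (cosmic rays plus radon) and a commonly quoted recombination coefficient
# alpha ~ 1.6e-6 cm^3 s^-1.
n = steady_state_ion_concentration(10.0, 1.6e-6)  # 2500 ions cm^-3
```

Because n scales as α^(-1/2), the temperature and humidity dependence of the recombination coefficient reported in the thesis directly shifts the modeled ion concentration available for ion-induced nucleation.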
  • Fagerholm, Fabian (Helsingin yliopisto, 2015)
    Human factors have been identified as having the largest impact on performance and quality in software development. While production methods and tools, such as development processes, methodologies, integrated development environments, and version control systems, play an important role in modern software development, the largest sources of variance and opportunities for improvement can be found in individual and group factors. The success of software development projects is highly dependent on cognitive, conative, affective, and social factors among individuals and groups. When success is considered to include not only fulfilment of schedules and profitability, but also employee well-being and public impact, particular attention must be paid to software developers and their experience of the software development activity. This thesis uses a mixed-methods research design, with case studies conducted in contemporary software development environments, to develop a theory of software developer experience. The theory explains what software developers experience as part of the development activity, how an experience arises, how the experience leads to changes in software artefacts and the development environment through behaviour, and how the social nature of software development mediates both the experience and outcomes. The theory can be used both to improve software development work environments and to design further scientific studies on developer experience. In addition, the case studies provide novel insights into how software developers experience software development in contemporary environments. In Lean-Agile software development, developers are found to be engaged in a continual cycle of Performance Alignment Work, where they become aware of, interpret, and adapt to performance concerns on all levels of an organisation. 
High-performing teams can successfully carry out this cycle and also influence performance expectations in other parts of the organisation and beyond. The case studies show that values arise as a particular concern for developers. The combination of Lean and Agile software development allows for a great deal of flexibility and self-organisation among developers. As a result, developers themselves must interpret the value system inherent in these methodologies in order to inform everyday decision-making. Discrepancies in the understanding of the value system may lead to different interpretations of what actions are desirable in a particular situation. Improved understanding of values may improve decision-making and understanding of Lean-Agile software development methodologies among software developers. Organisations may wish to clarify the value system for their particular organisational culture and promote values-based leadership for their software development projects. The distributed nature and use of virtual teams in Open Source environments present particular challenges when new members are to join a project. This thesis examines mentoring as a particular form of onboarding support for new developers. Mentoring is found to be a promising approach which helps developers adopt the practices and tacit conventions of an Open Source project community, and to become contributing members more rapidly. Mentoring could also have utility in similar settings that use virtual teams.
  • Punkka, Ari-Juhani (Helsingin yliopisto, 2015)
    Mesoscale convective systems (MCSs) are common in Finland and nearby regions. These conglomerates of cumulonimbus clouds have a diameter in excess of 100 km and a lifetime of at least four hours. About 200 MCSs are detected every year, of which roughly 80 are classified as intense MCSs (maximum radar reflectivity exceeding 50 dBZ for two consecutive hours). MCSs occur most frequently during the afternoon hours in July and August, whereas in the wintertime they are very few in number. The most extreme forms of MCSs, such as derechos, also occur in Finland, but only infrequently. The average duration of MCSs in Finland is 10.8 hours, and the most common direction of movement is toward the northeast. In the light of earlier MCS research, a local peculiarity is the limited population of MCSs that have a motion component towards the west. The synoptic-scale weather pattern affects the MCS motion direction. An area of low pressure and an upper-level trough are located west of Finland during many MCS situations, which leads to the onset of southerly air flow and an increase in low-tropospheric temperature and humidity. Based on the case studies in this thesis, the area of low pressure occasionally travels to the southwest of Finland, enabling southeasterly air flow and, further, an MCS motion component towards the west. During thunderstorm days with sub-MCS deep moist convection, a northwesterly air flow and a ridge of high pressure west of Finland are frequently observed. As opposed to many earlier MCS studies, mid-level lapse rate does not distinguish between the MCS and sub-MCS environments in Finland. Instead, convective available potential energy (CAPE), low-tropospheric water vapour mixing ratio and deep-layer mean wind are able to distinguish between the aforementioned environments. Moreover, mean wind parameters are among the best discriminators between days with significant and insignificant wind damage.
Unlike in many earlier investigations, no evidence is found that cases with dry low- or mid-tropospheric air would be more prone to the occurrence of significant convective winds than cases with moister environments. These results and the case studies suggest that in the presence of low instability, dry air dampens deep moist convection and convective downdrafts. However, in the presence of high instability the effect of dry air may be reversed, as the derecho case of 5 July 2002 (Unto) suggests.
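The MCS criteria quoted in the abstract (diameter in excess of 100 km, lifetime of at least four hours, and "intense" when maximum radar reflectivity exceeds 50 dBZ for two consecutive hours) can be sketched as a simple classifier; the data structure and field names are illustrative assumptions, not the thesis's actual analysis code:

```python
from dataclasses import dataclass

@dataclass
class ConvectiveSystem:
    diameter_km: float          # maximum diameter of the radar echo cluster
    lifetime_h: float           # total lifetime of the system
    hourly_max_dbz: list        # hourly maximum radar reflectivity, dBZ

def is_mcs(s: ConvectiveSystem) -> bool:
    """MCS: diameter in excess of 100 km and lifetime of at least 4 hours."""
    return s.diameter_km > 100.0 and s.lifetime_h >= 4.0

def is_intense_mcs(s: ConvectiveSystem) -> bool:
    """Intense MCS: maximum reflectivity exceeds 50 dBZ in two
    consecutive hourly observations."""
    if not is_mcs(s):
        return False
    return any(a > 50.0 and b > 50.0
               for a, b in zip(s.hourly_max_dbz, s.hourly_max_dbz[1:]))

# A system with the average Finnish MCS duration of 10.8 hours and two
# consecutive hours above 50 dBZ would be flagged as intense.
storm = ConvectiveSystem(250.0, 10.8, [42, 48, 53, 55, 47])
```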
  • Tomperi, Päivi (Helsingin yliopisto, 2015)
    Both nationally and internationally, teachers' professional development is a current research topic. According to the Teaching and Learning International Survey (TALIS), Finnish teachers' interest in participating in long-lasting in-service teacher training programs, focusing on professional development, is decreasing. In order to implement inquiry-based practical work into classroom practice, new in-service training models are needed. This thesis examines the design and development process of a professional training course that implemented the SOLO taxonomy. The training course was meant for chemistry teachers working at the upper secondary school and it focused on inquiry-based chemistry instruction. The research was done using design research. The main research questions were formed according to the three central areas of design research (Edelson, 2002): 1) Problem analysis: What kind of challenges does inquiry-based practical chemistry bring to chemistry teachers at the upper secondary school? 2) Design process: What kind of possibilities and challenges does the SOLO taxonomy offer for the support of inquiry-based practical chemistry instruction? 3) Design solution: What are the characteristics of teachers' professional development that promotes inquiry-based practice in chemistry at the upper secondary school? The eight-phase design research employed qualitative research methods, including observations, surveys and interviews. The data were analyzed using content analysis. From this data, two main research results were obtained. First, information was obtained on the implementation of inquiry-based chemistry into practice, and about teachers' professional development using the SOLO taxonomy. Second, information on the characteristics of a research-based training model promoting inquiry-based practical chemistry instruction was obtained.
The findings show that inquiry is challenging for teachers due to its constructivist view of learning, teachers' inexperience in acting in modern learning environments, and a lack of practice in implementing inquiry in the classroom during training. The findings also show that using the SOLO taxonomy supported professional development in many ways. For example, it worked as a tool in designing and modifying written instructions, it motivated teachers to develop their practices, it increased teachers' ownership of the produced written instructions, it supported teachers' understanding of inquiry and it acted as a model to support higher-order thinking skills. The created research-based training model, meant to promote inquiry in practical chemistry instruction, was based on a theoretical and empirical problem analysis. The main features incorporated into the training model are (i) personalized learning which considers the teacher's current knowledge, (ii) expanding the teacher's role from merely a dispenser of knowledge to the roles of a researcher and a learner, (iii) using a theoretical framework to support research-based instruction, higher-order thinking skills and interaction-based sharing of ideas, (iv) creating meaningful inquiry-based material using the SOLO taxonomy, (v) peer support, (vi) reflection and incorporation of action research, and (vii) practicing the implementation of inquiry-based practical work, which is collaborative and cognitive in nature, increasing understanding of the nature of science. The research results show that teachers need training models of various durations. If the teacher's view of learning is congruous with the inquiry-based approach, they can begin to practice the implementation of inquiry already during a short training. However, if the teacher's view of learning does not support constructive learning methods, the accommodation process requires more time.
The research results of this doctoral dissertation can be applied (i) in the implementation of the new national core curriculum, (ii) in planning and designing new learning material for inquiry-based practical chemistry, (iii) in training that supports teachers' life-long learning, and (iv) in the international exportation of education. Keywords: design research, professional development, SOLO taxonomy, research-based training, inquiry-based practical chemistry
  • Peltola, Jari (Helsingin yliopisto, 2015)
    This thesis is based on four experimental spectroscopic studies in which novel, highly sensitive laser absorption spectrometers are developed and used for trace gas detection and precision spectroscopy. Most of the studies are carried out in the mid-infrared region between 3 and 4 µm, where a homebuilt continuous-wave singly resonant optical parametric oscillator is used as a light source. In addition, one study has been performed in the visible region using a commercial green laser at 532 nm. Two of the developed spectroscopic applications are based on cavity ring-down spectroscopy. In this thesis, the first off-axis re-entrant cavity ring-down spectrometer in the mid-infrared is demonstrated and utilized for highly sensitive detection of formaldehyde. The second study presents an optical frequency comb referenced mid-infrared continuous-wave singly resonant optical parametric oscillator, which is applied to high-precision cavity ring-down spectroscopy of nitrous oxide and methane. Furthermore, this study presents a new method for referencing a mid-infrared optical parametric oscillator to a near-infrared optical frequency comb. This new method allows large mode-hop-free frequency tuning ranges in the mid-infrared region. The other two experiments are based on cantilever-enhanced photoacoustic spectroscopy, presenting the first reported studies of cantilever-enhanced trace gas detection in the mid-infrared and visible regions. These studies show the great potential of cantilever-enhanced photoacoustic detection for substantially enhancing the sensitivity of trace gas detection. For instance, the best nitrogen dioxide detection limit ever reported using photoacoustic spectroscopy is presented in this thesis.