Browsing by Issue Date


Now showing items 21-40 of 946
  • Fedi, Giacomo (Helsingin yliopisto, 2016)
    The B0s-B0sbar system was investigated using the J/ψ(μ+μ−)φ(K+K−) decay channel. Using 2010 CMS data, corresponding to an integrated luminosity of 40 pb^-1, the B0s invariant mass peak was reconstructed and the B0s differential cross section was measured as a function of its transverse momentum and rapidity. Using 2011 CMS data, corresponding to an integrated luminosity of 5 fb^-1, the difference of the decay widths between the two B0s mass eigenstates, ∆Γs, was measured. With the 2012 CMS data, corresponding to an integrated luminosity of 20 fb^-1, the CP-violating weak phase φs and the decay width difference ∆Γs of the B0s were measured. The most important result of this thesis is the measurement of the CP-violating phase φs, which was found to be φs = −0.075 ± 0.097 (stat.) ± 0.031 (syst.).
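When a single overall uncertainty is wanted for a result quoted with separate statistical and systematic errors, the two are conventionally combined in quadrature if they are independent. A minimal sketch using the quoted values (the combination itself is an illustration, not a number reported in the thesis):

```python
import math

# phi_s = -0.075 +/- 0.097 (stat.) +/- 0.031 (syst.), as quoted above.
# Combining independent uncertainties in quadrature (illustrative only):
phi_s = -0.075
stat, syst = 0.097, 0.031

sigma_total = math.sqrt(stat**2 + syst**2)
print(f"phi_s = {phi_s} +/- {sigma_total:.3f} rad")
```

The statistical term clearly dominates, consistent with the measurement being statistics-limited.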
  • Ihalainen, Toni (Helsingin yliopisto, 2016)
    Quality control methods and test objects were developed and used for structural magnetic resonance imaging (MRI), functional MRI (fMRI) and diffusion-weighted imaging (DWI). Emphasis was put on methods that allow objective quality control for organizations that use several MRI systems from different vendors and with different field strengths. Notable increases in the numbers of MRI studies and novel MRI systems, the fast development of MRI technology, and international discussion about the quality and safety of medical imaging have motivated the development of objective, quantitative and time-efficient methods for quality control. Quality control methods need to keep pace with the most modern MRI methods, including parallel imaging, parallel transmit technology, and new diffusion-weighted sequences. The methods also need to be appropriate for organizations that use MRI for quantitative measurements or participate in multicenter studies. Two different test object methods for structural MRI were evaluated in a multi-unit medical imaging organization: the Eurospin method and the American College of Radiology (ACR) method. The Eurospin method was originally developed as part of a European Concerted Action, and five standardized test objects were used to create a quality control protocol for six MRI systems. Automatic software was written for image analysis. In contrast, a single multi-purpose test object was used for the ACR method, and image quality for both standard and clinical imaging protocols was measured for 11 MRI systems. A previously published method for fMRI quality control was applied to the evaluation of 5 MRI systems and was extended to simultaneous electroencephalography (EEG) and fMRI (EEG-fMRI). The test object results were compared with human data obtained from two healthy volunteers.
A body-diameter test object was constructed for DWI testing, and apparent diffusion coefficient (ADC) values and levels of artifacts were measured using conventional and evolving DWI methods. The majority of the measured MRI systems operated at an acceptable level when compared with published recommended values for structural and functional MRI. In general, the measurements were repeatable. The test object study revealed the extent of superficial artifacts (15 mm) and the magnitude of the signal-to-noise ratio (SNR) reduction (15%) in simultaneous EEG-fMRI images. The observations were in accordance with the data of the healthy human volunteers. The agreement between the ADC values for the different DWI methods was generally good, although differences of up to 0.1 × 10^-3 mm^2/s were observed between different acquisition options and field strengths, and along the slice direction. Readout-segmented echo-planar imaging (EPI) and zoomed EPI, together with efficient use of parallel transmit technology, resulted in lower levels of artifacts than the conventional methods. Other findings included geometric distortions at the edges of the MRI system field-of-view, minor instability of the image center-of-mass in fMRI, and an amplifier difference that affected the EEG signal in EEG-fMRI. The findings showed that although the majority of the results were within acceptable limits, MRI quality control was capable of detecting inferior image quality and revealing information that supported clinical imaging. A comparison between the different systems, and with international reference values, was feasible within the reported limitations. Automated analysis methods were successfully developed and applied in this study. A possible future direction for MRI quality control is the further development of its relevance for clinical imaging.
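The ADC values compared above come from the standard mono-exponential model of diffusion-weighted signal decay, S(b) = S0 · exp(−b · ADC). A minimal two-point sketch of that calculation; the signal values and b-values below are hypothetical, not measurements from the thesis:

```python
import math

def adc(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient (mm^2/s) from a two-point fit:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S_low / S_high) / (b_high - b_low).
    """
    return math.log(s_low / s_high) / (b_high - b_low)

# Hypothetical phantom signals at b = 0 and b = 1000 s/mm^2:
value = adc(s_low=1000.0, s_high=330.0, b_low=0.0, b_high=1000.0)
print(f"ADC = {value:.2e} mm^2/s")
```

Against an ADC on the order of 1 × 10^-3 mm^2/s, the 0.1 × 10^-3 mm^2/s differences reported between acquisition options amount to roughly a 10% discrepancy.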
  • Kettula, Kimmo (Helsingin yliopisto, 2016)
    As galaxy clusters are the most massive bound objects in the Universe, their number and evolution can be used to constrain cosmological parameters. This requires knowledge of cluster masses, which is typically achieved by calibrating scaling relations in which an observable is used as a mass proxy. Clusters can be efficiently detected through the X-ray emission of the hot intracluster gas, whereas weak gravitational lensing provides the most accurate mass measurements. This thesis studies the X-ray emission of galaxy clusters, the cross-calibration of X-ray instruments and the scaling between X-ray observables and weak lensing mass. We characterise the thermal bremsstrahlung X-ray emission of the Ophiuchus cluster with XMM-Newton and use INTEGRAL to detect non-thermal hard X-ray excess emission. We model the excess emission, assuming that it is due to inverse-Compton scattering of CMB photons by a population of relativistic electrons, derive the pressure of the relativistic electron population and set limits on the magnetic field. We also study the cross-calibration of the XIS detectors onboard the Suzaku satellite and show that discrepancies can be explained by the modelling of the optical blocking filter contaminant. We conclude that XIS0 is more accurately calibrated than XIS1 and XIS3 and show that soft-band cluster temperatures measured with XIS0 are approximately 14% lower than those measured with XMM-Newton/EPIC-pn. We study the scaling of the X-ray luminosity L and temperature T of the intracluster gas to weak lensing mass for galaxy groups and low-mass clusters. These samples are combined with high-mass samples from the literature and corrected for survey biases, and they define the current limits of L and T as mass proxies. Studying the residuals, we find the first observational evidence, based on weak lensing masses, for a mass dependence in the scaling relations.
We also study the hydrostatic mass bias in X-ray mass estimates and find indications of an increased bias in low-mass systems. Our results on scaling relations are limited by our understanding of sample selection and by the number of observations of low-mass systems. Calibration against, e.g., weak lensing can help to address cross-calibration discrepancies, and forthcoming X-ray observatories will significantly improve our understanding of non-thermal phenomena in clusters.
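Mass-observable scaling relations of the kind calibrated above are conventionally fit as power laws, i.e. as straight lines in log-log space. A minimal sketch on synthetic data (the temperatures, masses, slope and scatter below are invented for illustration, not the thesis's samples):

```python
import numpy as np

# Fit a power-law scaling relation M = A * (T / T_pivot)^alpha in log-log
# space, as is standard for cluster mass-observable calibration.
rng = np.random.default_rng(0)
T = np.array([2.0, 3.0, 4.5, 6.0, 8.0])                # keV (synthetic)
M_true = 1.0e14 * (T / 5.0) ** 1.5                      # solar masses
M_obs = M_true * np.exp(rng.normal(0, 0.05, T.size))    # lognormal scatter

alpha, logA = np.polyfit(np.log(T / 5.0), np.log(M_obs), 1)
print(f"slope alpha ~ {alpha:.2f}, normalization ~ {np.exp(logA):.2e} Msun")
```

A mass dependence in the relation, as reported in the thesis, would show up as a systematic trend in the residuals of such a fit rather than as pure scatter.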
  • Airas, Annika (Helsingin yliopisto, 2016)
    Urban waterfront redevelopment is a global trend. Since the 1960s and the advent of containerization, new commercial and residential developments have begun to replace the industrial operations that once characterized the waterfronts of port cities. Research to date has largely focused on the redevelopment of seaports in large coastal cities, primarily in a North American context, yet significant changes are also taking place in smaller locations around the globe. In this study, two empirical examples are given of smaller cities, one in Finland and one in Canada, both of which historically served the woodworking industry. As these industries declined and reorganized, the waterfronts they occupied have been redeveloped primarily into residential districts, particularly since the late 1980s. This study takes a new, multidisciplinary approach to waterfront research by advancing the concept of historical distinctiveness and revealing the ways in which it is expressed within waterfront planning. While the term distinctiveness is often used in planning documents to refer to the waterfront's historical past, the term remains poorly defined. This study presents the novel concept of historical distinctiveness and introduces a research framework through which it can be understood. In particular, the study pays attention to the content of historical distinctiveness and examines how it is expressed in the contemporary built environment of two formerly industrial waterfronts: Lake Vesijärvi in Lahti, Finland, and Queensborough in New Westminster, Canada. Historical distinctiveness as defined in this study consists of six interlocking and constantly evolving elements: international historical influences, historic uses of the waterfront and their reflection in local built environments, the waterfront's relation to the city, the multiple historic layers in the built form of the waterfront, comparative differences in architectural history, and varying values.
The concept of historical distinctiveness enables local histories and development trajectories to be revealed while widening the understanding of contemporary waterfront cities. Both Lahti and Queensborough are changing quickly and dramatically, which makes it difficult to identify the remaining vestiges of their woodworking past. Furthermore, the appearance and design of new developments reflect a narrow appreciation of their industrial legacy. Planning processes that aim to promote the distinctiveness of historical waterfronts are instead, ironically, ignoring and at times actually erasing truly unique urban histories. This study demonstrates how new rebuilt environments are becoming more similar across sites, while also becoming more similar to non-waterfront areas in cities. Such developments may limit or destroy the use value of these areas while ignoring cultural histories and local identities, thereby limiting options for creating diverse cities. By taking historical distinctiveness into account, cities can increase historical awareness and create possibilities for the future, thereby creating truly distinctive waterfronts.
  • Virkki, Anne (Helsingin yliopisto, 2016)
    Planetary radar can be considered humankind's strongest instrument for the post-discovery characterization and orbital refinement of near-Earth objects. After decades of radar observations, extensive literature describing the radar properties of various objects of the Solar System is available. At the same time, there is a shortage of work on what these observations imply about the physical properties of planetary surfaces. The goal of my thesis is to fill part of this gap. Radar scattering, as a term, refers to the alterations experienced by electromagnetic radiation in the backscattering direction when interacting with a target particle. In the thesis, I investigate by numerical modeling what role different physical properties of planetary surfaces, such as the electric permittivity, the size of scatterers, or their number density, play in radar scattering. In addition, I discuss how radar observations can be interpreted based on modeling. Because all codes have their own limitations, it is crucial to compare results obtained with different methods. I use the Multiple Sphere T-matrix method (MSTM) for clusters of spherical particles to understand scattering by closely packed regolith particles. I use the discrete-dipole approximation code ADDA to study the single-scattering properties of inhomogeneous or irregular regolith particles at the wavelength scale. Finally, I use a ray-optics algorithm with radiative transfer, Siris, to simulate radar scattering by large irregular particles that mimic planetary bodies. The simulations for clusters of spherical particles reveal polarization enhancement in the backscattering direction at certain bands of sizes and refractive indices. The results from computations using MSTM and ADDA imply that the electric permittivity plays a strong part in circular polarization. From the results of ray-optics computations for large, irregular particles, I derive a novel semi-analytic form for the radar scattering laws.
By including diffuse scattering using wavelength-scale particles with laboratory-characterized geometries, we are able to simulate the effect of numerous physical properties of a realistic planetary surface on radar scattering. Our model using Siris is among the most quantitative models for radar scattering by planetary surfaces. The results support and improve the current understanding of the effects of the surface geometry, the electric permittivity, and the coherent-backscattering mechanism, and they can be used to interpret radar observations. Furthermore, I underscore that the roles of absorption and scatterer geometry must not be underestimated, although determining realistic values for these variables can be challenging.
  • Kuosmanen, Niina (Helsingin yliopisto, 2016)
    In this work, the Holocene history of the western taiga forests, at the modern western range limit of Siberian larch (Larix sibirica) in northern Europe, is investigated using fossil pollen and stomata records from small forest hollow sites. The relative importance of the potential drivers of long-term boreal forest composition is quantitatively assessed using novel approaches in a palaeoecological context. The statistical method of variation partitioning is employed to assess the relative importance of climate, forest fires, local moisture conditions and human population size for long-term boreal forest dynamics at both the regional (lake records) and local scale (small hollow records). Furthermore, wavelet coherence analysis is applied to examine the significance of individual forest fires for boreal forest composition. The results demonstrate that Siberian larch and Norway spruce (Picea abies) have been present in the region since the early Holocene. The expansion of spruce at 8000–7000 cal yr BP caused a notable change in forest structure towards dense spruce-dominated forests, and appears to mark the onset of the migration of spruce into Fennoscandia. The mid-Holocene dominance of spruce and the constant presence of Siberian larch suggest that taiga forest persisted throughout the Holocene at the study sites in eastern Russian Karelia. Climate is the main driver of long-term changes in boreal vegetation at the regional scale. However, at the local scale the role of local factors increases, suggesting that intrinsic site-specific factors play an important role in stand-scale dynamics in the boreal forest. When the whole 9000-year study period is considered, forest fires explain relatively little of the variation in stand-scale boreal forest composition.
However, forest fires play a significant role in stand-scale forest dynamics when observed over shorter time intervals, and the results suggest that fires can have a significant effect on short-term changes in individual tree taxa as well as a longer-lasting effect on forest structure. The relative importance of human population size for variation in long-term boreal vegetation was statistically assessed for the first time using this type of population data. The results show an unexpectedly low importance of human population size as a driver of long-term vegetation change, but they may be biased by the difference in spatial representativeness between the human population data and the pollen-derived forest composition data. Although the results strongly suggest that climate is the main driver of long-term boreal forest dynamics, local disturbances such as fires, species interactions and site-specific characteristics can dictate the importance of climate for stand-scale boreal forest dynamics.
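Variation partitioning of the kind used above decomposes the explained variance of a response among groups of predictors by comparing the R² of partial and full regression models. A minimal numpy sketch on synthetic data (the "climate" and "fire" series and their effect sizes are invented for illustration, not the thesis's pollen data):

```python
import numpy as np

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Synthetic response driven mostly by "climate" with a smaller "fire"
# contribution, mirroring the structure of the partition in the thesis.
rng = np.random.default_rng(1)
n = 200
climate = rng.normal(size=(n, 1))
fire = rng.normal(size=(n, 1))
y = (2.0 * climate + 0.5 * fire + rng.normal(0, 1, (n, 1))).ravel()

r2_clim = r2(climate, y)
r2_fire = r2(fire, y)
r2_both = r2(np.hstack([climate, fire]), y)
unique_clim = r2_both - r2_fire   # variation explained only by climate
unique_fire = r2_both - r2_clim   # variation explained only by fire
shared = r2_clim + r2_fire - r2_both
print(f"unique climate: {unique_clim:.2f}, unique fire: {unique_fire:.2f}, "
      f"shared: {shared:.2f}")
```

The unique fractions are what allow a statement such as "climate is the main driver at the regional scale" to be made quantitatively.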
  • Mäkelä, Valtteri (Helsingin yliopisto, 2016)
    NMR spectroscopy is an invaluable tool for structure elucidation in chemistry and molecular biology, able to provide unique information not easily obtained by other analytical methods. However, performing quantitative NMR experiments and mixture analysis is considerably less common due to constraints in sensitivity and resolution and the fact that NMR observes individual nuclei, not molecules. Advances in instrument design over the last 25 years have substantially increased the sensitivity of NMR spectrometers, diminishing the main weakness of NMR, while increases in field strength and ever more intricate experiments have improved the resolving power and expanded the attainable information. The minimal need for sample preparation and its non-specific nature make quantitative NMR suitable for many applications, ranging from quality control to metabolome characterization. Furthermore, the development of automated sample changers and fully automated acquisition has made high-throughput NMR acquisition a more feasible and attractive, yet expensive, possibility. This work discusses the fundamental principles and limitations of quantitative liquid-state NMR spectroscopy and draws together a summary of its various aspects scattered across the literature. Many of these more subtle features can be neglected in simple routine spectroscopy, but they become important when extracting quantitative data and/or when trying to acquire and process vast amounts of spectra consistently. The original research presented in this thesis provides improved methods for the acquisition of quantitative 13C-detected NMR spectra in the form of modified INEPT-based experiments (Q-INEPT-CT and Q-INEPT-2D), while software tools for the automated processing and analysis of NMR spectra are also presented (ImatraNMR and SimpeleNMR). The application of these tools is demonstrated in the analysis of complex hydrocarbon mixtures (base oils), plant extracts and blood plasma samples.
The increased capability of NMR spectroscopy, the rising interest in metabolomics and for example the recent introduction of benchtop NMR spectrometers are likely to expand the future use of quantitative NMR in the analysis of complex mixtures. For this reason, the further development of robust, accurate and feasible analysis methods and tools is essential.
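The quantitative use of NMR described above rests on the fact that, under suitable acquisition conditions, a peak's integral is proportional to the number of nuclei producing it, so an analyte concentration follows from the integral ratio against an internal standard of known concentration. A minimal sketch of that relation; all numbers below are hypothetical:

```python
def concentration_from_integrals(i_analyte, n_analyte, i_std, n_std, c_std):
    """Analyte concentration from the basic qNMR relation
        c_a = (I_a / N_a) / (I_s / N_s) * c_s,
    where I is a peak integral and N the number of equivalent nuclei per
    molecule giving rise to that peak (s = internal standard).
    """
    return (i_analyte / n_analyte) / (i_std / n_std) * c_std

# Hypothetical example: a 3-proton analyte peak against a 9-proton
# internal standard peak at 10 mM.
c = concentration_from_integrals(i_analyte=1.2, n_analyte=3,
                                 i_std=2.0, n_std=9, c_std=10.0)
print(f"analyte concentration ~ {c:.2f} mM")
```

In practice the proportionality only holds when relaxation delays, decoupling sidebands and polarization-transfer efficiencies are controlled, which is precisely what experiments such as Q-INEPT-CT are designed to address.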
  • Koski, Aleksis (Helsingin yliopisto, 2016)
    The subject of this thesis is elliptic PDEs that appear in the fields of geometric analysis and the calculus of variations, such as the Beltrami equation and its generalizations. The main results concern the existence and uniqueness of solutions in function spaces such as Sobolev spaces, as well as the regularity and further properties of solutions. The thesis contains four scientific articles on the subject. The first two articles contain results on generalized Beltrami equations, where solvability is investigated using functional-analytic methods. New results for the corresponding singular integral operators are also found, such as the L^2-norm of the Beurling transform for the Dirichlet problem. The third and fourth papers cover properties of solutions to the Euler-Lagrange and Hopf-Laplace equations for certain energy functionals. One of the main results is a generalization of the classic Radó-Kneser-Choquet theorem to the p-harmonic energy in the plane. The proof is based on a new subharmonicity result for the Jacobian of a solution, and other similar subharmonicity results are also obtained in the thesis.
  • Sarnet, Tiina (Helsingin yliopisto, 2015)
    Materials are crucial to the technological advances of society. The never-ending need for data storage and new energy sources pushes research towards clear goals. Perhaps some of today's solutions can in the future be replaced or augmented with phase change memories and thermoelectric materials. Phase change materials store data in their amorphous and crystalline phases, which have great differences in their electrical and optical properties. Thermoelectric materials can utilize waste heat and produce electricity from temperature differences. They can also be utilized in temperature control, as they can create a temperature difference by using electricity. Shrinking device sizes and increasing device complexity require deposition methods such as atomic layer deposition (ALD). ALD is based on sequential, saturative surface reactions. Precursors are brought to the surface one at a time, separated by purges. Because of the saturative reactions, each ALD cycle deposits a constant amount of material, up to a monolayer, making film thickness control very simple. ALD of chalcogenides has focused mainly on sulfides, and the chemistries for selenide and telluride deposition have been limited. The situation for pnictides is similar: the ALD chemistries for arsenides include only a few combinations of precursors, and antimonides have barely been demonstrated. This is why a new group of precursors was needed. The alkylsilyl non-metal precursors react very efficiently with metal halides in a dehalosilylation reaction. These types of reactions have now been utilized in both chalcogenide and pnictide thin film growth. In this thesis, several chalcogenide and pnictide ALD processes were studied in detail by utilizing the appropriate alkylsilyl non-metal precursors. In general, typical ALD characteristics were found.
Growth rates saturated with respect to precursor pulse lengths; film thicknesses increased linearly with the number of deposition cycles; and the films were stoichiometric with low impurity contents. In terms of applications, the ALD chalcogenide and pnictide films had the required properties: the phase of the phase change materials could be changed quickly and repeatably, and the thermoelectric films showed a proper response to a temperature gradient.
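The linear thickness-versus-cycles behaviour noted above is what makes ALD thickness control simple: the slope of a linear fit is the growth per cycle (GPC). A minimal sketch on synthetic thickness data (the cycle counts, thicknesses and ~0.05 nm/cycle GPC below are invented for illustration, not measurements from the thesis):

```python
import numpy as np

# ALD film thickness should scale linearly with the number of cycles;
# the slope of a linear fit gives the growth per cycle (GPC).
cycles = np.array([200, 400, 600, 800, 1000])
thickness = np.array([10.1, 19.8, 30.3, 40.0, 49.9])  # nm, hypothetical

gpc, offset = np.polyfit(cycles, thickness, 1)
print(f"growth per cycle ~ {gpc:.3f} nm/cycle, intercept ~ {offset:.2f} nm")
```

A near-zero intercept indicates immediate, substrate-independent nucleation, while a large intercept would point to a growth delay or enhancement during the first cycles.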
  • Arola, Teppo (Helsingin yliopisto, 2015)
    The increase of greenhouse gas concentrations in the atmosphere, the limits of conventional energy reservoirs and the instability risks related to energy transport have forced nations to promote the utilisation of renewable energy sources. Groundwater can be seen as an option for renewable energy utilisation, not only as a source of individual or municipal drinking water. Finland has multiple groundwater reservoirs that are easily exploitable, but groundwater energy is not commonly used for renewable energy production. The purpose of this thesis was to explore the groundwater energy potential in Finland, a region with low-temperature groundwater. Cases at three different scales were investigated to provide a reliable assessment of the groundwater energy potential in Finland. Firstly, the national groundwater energy potential was mapped for aquifers classified for water supply purposes that are under urban or industrial land use. Secondly, the effect of urbanisation on the peak heating and peak cooling power of groundwater was investigated for three Finnish cities. Finally, the long-term groundwater energy potential was modelled for 20 detached houses, 3 apartment buildings and a shopping centre. The thesis connects scientific information on hydro- and thermogeology with the energy efficiency of buildings to produce accurate information concerning groundwater energy utilisation. Hydrological and thermogeological data were used together with accurate data on the energy demands of buildings. The heating and cooling power of groundwater was estimated based on the groundwater flux, temperature and heat capacity and the efficiency of the heat transfer system. The power producible from groundwater was compared with the heating and cooling demands of buildings to calculate the concrete groundwater energy potential.
Approximately 20% to 40% of annually constructed residential buildings could be heated utilising groundwater from classified aquifers that are already under urban land use in Finland. These aquifers contain approximately 40 to 45 MW of heating power. In total, 55 to 60 MW of heat load could be utilised with heat pumps. Urbanisation increases the heating energy potential of groundwater. This is due to the anthropogenic heat flux into the subsurface, which increases groundwater temperatures in urbanised areas. The average groundwater temperature was 3 to 4 °C higher in city centres than in rural areas, and approximately 50% to 60% more peak heating power could be utilised in urbanised areas than in rural areas. Groundwater maintained its long-term heating and cooling potential during 50 years of modelled operation in an area where the natural groundwater temperature is 4.9 °C. Long-term energy utilisation created a cold groundwater plume downstream, in which the temperature decreased by 1 to 2.5 °C within a distance of 300 m from the site. Our results demonstrate that groundwater can be effectively utilised down to a temperature of 4 °C. Groundwater can form a significant local renewable energy resource in Finland. It is important to recognise and utilise all renewable energy reservoirs to achieve the internationally binding climatological targets of the country. Groundwater energy should be noted as one easily exploitable option for increasing the use of renewable energy resources in a region where the natural groundwater temperature is low. The methods presented in this thesis can be applied when mapping and designing groundwater energy systems in nationwide- to property-scale projects. Accurate information on hydro- and thermogeology, together with the energy demands of buildings, is essential for successful system operation.
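The heating power estimated above from groundwater flux, temperature and heat capacity follows the standard thermal-power relation P = Q · ρ · c_p · ΔT. A minimal sketch; the flux and temperature drop below are illustrative, not the thesis's site data:

```python
# Thermal power extractable from a groundwater flux:
#   P = Q * rho * c_p * dT
# Q: volumetric flux (m^3/s), rho: water density (kg/m^3),
# c_p: specific heat of water (J/(kg K)), dT: usable temperature drop (K).

def groundwater_power(q_m3_per_s, delta_t_k, rho=1000.0, c_p=4186.0):
    """Thermal power (W) carried by a groundwater flux over a usable dT."""
    return q_m3_per_s * rho * c_p * delta_t_k

# Illustrative example: 10 L/s of groundwater cooled by 2 K, a modest drop
# of the kind feasible where the natural groundwater temperature is ~4-5 °C.
p_watts = groundwater_power(0.010, 2.0)
print(f"{p_watts / 1000:.1f} kW of heat")
```

The small usable ΔT in cold-groundwater regions is why the thesis's finding that groundwater remains exploitable down to about 4 °C matters: the extractable power scales linearly with the allowed temperature drop.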
  • Hailu, Binyam Tesfaw (Helsingin yliopisto, 2015)
    Remote sensing provides land-cover information on a variety of temporal and spatial scales. The increasing availability of remote sensing data is now a major factor in land-change analysis and in understanding its impact on ecosystem services and biodiversity. This wider accessibility is also leading to improvements in the methods used to integrate these data into land-cover modelling and change analysis. Despite these developments in technology and data availability, however, there are still questions to be addressed regarding the dynamics of land cover and its impact, particularly in areas such as Ethiopia, where the human population is expanding and there is a need to improve the management of natural resources. Multi-scale approaches (from the national to the local) were used in this thesis to assess change in land cover and ecosystem services in Ethiopia, specifically in terms of provisioning services (the production of food, i.e. cash crops) and regulating services (climate regulation through vegetation cover). These assessments were based on multi-scale remote sensing (very high spatial resolution aerial remote sensing, high-resolution SPOT 5 satellite imaging and products of medium-resolution satellite remote sensing) and climate data (e.g., precipitation, temperature). The main focus of this thesis is on mapping and modelling the spatial distribution of vegetation. This includes: (i) forest mapping (indigenous and exotic forests), (ii) modelling the probabilistic presence of understory coffee, (iii) Coffea arabica species distribution modelling and mapping, and (iv) simulating pre-agricultural-expansion vegetation cover in Ethiopia. The results of the applied predictive modelling were robust in terms of: (i) identifying and mapping past vegetation cover and (ii) mapping shrubs, such as coffee, that grow as understory.
I present a reconstruction of earlier vegetation cover that mainly comprised broadleaved evergreen and deciduous forest but was replaced in the course of agricultural expansion. Given the spatial scale of the latter, the environmental modelling was complemented with high spatial resolution satellite (2.5 m) and aerial (0.5 m) images. The results of the Object-Based Image Analysis show that indigenous forests could be separated from exotic forests. Current and future locations that are environmentally favourable for the growth of understory coffee were identified and mapped in the coffee-growing areas of Ethiopia. In conclusion, the information presented in this thesis, based on the multi-scale assessment of land changes, should lead to better-informed management of natural resources and conservation, and to the restoration of major areas affected by human population growth.
  • Fagerholm, Fabian (Helsingin yliopisto, 2015)
    Human factors have been identified as having the largest impact on performance and quality in software development. While production methods and tools, such as development processes, methodologies, integrated development environments, and version control systems, play an important role in modern software development, the largest sources of variance and opportunities for improvement can be found in individual and group factors. The success of software development projects is highly dependent on cognitive, conative, affective, and social factors among individuals and groups. When success is considered to include not only fulfilment of schedules and profitability, but also employee well-being and public impact, particular attention must be paid to software developers and their experience of the software development activity. This thesis uses a mixed-methods research design, with case studies conducted in contemporary software development environments, to develop a theory of software developer experience. The theory explains what software developers experience as part of the development activity, how an experience arises, how the experience leads to changes in software artefacts and the development environment through behaviour, and how the social nature of software development mediates both the experience and outcomes. The theory can be used both to improve software development work environments and to design further scientific studies on developer experience. In addition, the case studies provide novel insights into how software developers experience software development in contemporary environments. In Lean-Agile software development, developers are found to be engaged in a continual cycle of Performance Alignment Work, where they become aware of, interpret, and adapt to performance concerns on all levels of an organisation. 
High-performing teams can successfully carry out this cycle and also influence performance expectations in other parts of the organisation and beyond. The case studies show that values arise as a particular concern for developers. The combination of Lean and Agile software development allows for a great deal of flexibility and self-organisation among developers. As a result, developers themselves must interpret the value system inherent in these methodologies in order to inform everyday decision-making. Discrepancies in the understanding of the value system may lead to different interpretations of what actions are desirable in a particular situation. Improved understanding of values may improve decision-making and understanding of Lean-Agile software development methodologies among software developers. Organisations may wish to clarify the value system for their particular organisational culture and promote values-based leadership for their software development projects. The distributed nature and use of virtual teams in Open Source environments present particular challenges when new members are to join a project. This thesis examines mentoring as a particular form of onboarding support for new developers. Mentoring is found to be a promising approach which helps developers adopt the practices and tacit conventions of an Open Source project community, and to become contributing members more rapidly. Mentoring could also have utility in similar settings that use virtual teams.
  • Punkka, Ari-Juhani (Helsingin yliopisto, 2015)
    Mesoscale convective systems (MCSs) are common in Finland and nearby regions. These conglomerates of cumulonimbus clouds have a diameter in excess of 100 km and a lifetime of at least four hours. About 200 MCSs are detected every year, of which roughly 80 are classified as intense MCSs (maximum radar reflectivity exceeding 50 dBZ for two consecutive hours). MCSs occur most frequently during the afternoon hours in July and August, whereas in the wintertime they are very few in number. The most extreme forms of MCSs, such as derechos, also occur in Finland, but only infrequently. The average duration of MCSs in Finland is 10.8 hours, and the most common direction of movement is toward the northeast. In the light of earlier MCS research, a local peculiarity is the small population of MCSs with a motion component towards the west. The synoptic-scale weather pattern affects the direction of MCS motion. In many MCS situations, an area of low pressure and an upper-level trough are located west of Finland, which leads to the onset of southerly air flow and an increase in low-tropospheric temperature and humidity. Based on the case studies in this thesis, the area of low pressure occasionally travels to the southwest of Finland, enabling southeasterly air flow and, further, an MCS motion component towards the west. During thunderstorm days with sub-MCS deep moist convection, a northwesterly air flow and a ridge of high pressure west of Finland are frequently observed. In contrast to many earlier MCS studies, the mid-level lapse rate does not distinguish between MCS and sub-MCS environments in Finland. Instead, convective available potential energy (CAPE), low-tropospheric water vapour mixing ratio and deep-layer mean wind are able to distinguish between these environments. Moreover, mean wind parameters are among the best discriminators between days with significant and insignificant wind damage.
Unlike in many earlier investigations, no evidence is found that cases with dry low- or mid-tropospheric air are more prone to the occurrence of significant convective winds than cases with moister environments. These results and the case studies suggest that, in the presence of low instability, dry air dampens deep moist convection and convective downdrafts. In the presence of high instability, however, the effect of dry air may be reversed, as the derecho case of 5 July 2002 (Unto) suggests.
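The classification criteria quoted in the abstract (diameter over 100 km, lifetime of at least four hours, and, for intense MCSs, maximum radar reflectivity exceeding 50 dBZ for two consecutive hours) can be sketched as a simple classifier. The function and data layout below are illustrative only; the thesis itself does not publish code, and only the numeric thresholds come from the abstract.

```python
def classify_mcs(diameter_km, lifetime_h, max_dbz_by_hour):
    """Classify a convective system using the thresholds quoted above.

    diameter_km: diameter of the cumulonimbus conglomerate (km)
    lifetime_h: system lifetime (hours)
    max_dbz_by_hour: hourly maximum radar reflectivity values (dBZ)
    """
    if diameter_km < 100 or lifetime_h < 4:
        return "sub-MCS"
    # "Intense" requires reflectivity above 50 dBZ
    # in two consecutive hourly observations.
    for a, b in zip(max_dbz_by_hour, max_dbz_by_hour[1:]):
        if a > 50 and b > 50:
            return "intense MCS"
    return "MCS"
```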
  • Franchin, Alessandro (Helsingin yliopisto, 2015)
    This thesis focuses on the experimental characterization of secondary atmospheric nanoparticles and ions during their formation. The work was developed on two distinct and complementary levels: a scientific level, aimed at advancing the understanding of particle formation, and a more technical level, dedicated to instrument development and characterization. Understanding and characterizing aerosol formation is important, as the formation of aerosol particles from precursor gases is one of the main sources of atmospheric aerosols. Elucidating in detail how aerosol formation proceeds is critical to better quantify the aerosol contribution to the Earth's radiation budget. Experimentally characterizing the first steps of aerosol formation is the key to understanding this phenomenon, and developing and characterizing suitable instrumentation to measure clusters and ions in the sub-3 nm range, where aerosol formation starts, is necessary to clarify the processes that lead to aerosol formation. This thesis presents the results of a series of experimental studies of sub-3 nm aerosol particles and ions, together with the technical characterization and instrument development carried out in the process. Specifically, we describe three scientific results achieved from chamber experiments. Firstly, the relative contributions of sulfuric acid, ammonia and ions to nucleation were quantified experimentally, supporting the finding that sulfuric acid alone cannot explain atmospheric observations of nucleation rates. Secondly, the chemical composition of cluster ions was directly measured for a ternary system in which sulfuric acid, ammonia and water were the condensable vapors. In these measurements we observed a decreasing acidity of the clusters with increasing gas-phase ammonia concentration, with the sulfuric acid/ammonia ratio staying closer to that of ammonium bisulfate than to that of ammonium sulfate.
Finally, in a series of chamber experiments the ion-ion recombination coefficient was quantified under different conditions. The ion-ion recombination coefficient is a basic physical quantity for modeling ion-induced and ion-mediated nucleation. We observed a steep increase in the ion-ion recombination coefficient with decreasing temperature and with decreasing relative humidity. This thesis also reviews technical results on: 1) laboratory verification, characterization and testing of different aerosol and ion instruments measuring in the sub-3 nm range; and 2) the development of new inlets for such instruments to improve the detection of sub-3 nm particles and ions.
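Although the thesis quantifies the ion-ion recombination coefficient experimentally, its role in modeling can be illustrated with the standard steady-state small-ion balance, dn/dt = q - alpha*n^2, which yields n = sqrt(q/alpha). This is a textbook relation, not the thesis's own analysis, and the example values below are typical tropospheric numbers chosen for illustration.

```python
import math

def steady_state_ion_concentration(q, alpha):
    """Steady-state small-ion concentration n (cm^-3) from the standard
    balance equation dn/dt = q - alpha * n**2, i.e. n = sqrt(q / alpha).

    q: ion-pair production rate (cm^-3 s^-1)
    alpha: ion-ion recombination coefficient (cm^3 s^-1)
    """
    return math.sqrt(q / alpha)

# Illustrative values (not from the thesis): q ~ 10 ion pairs cm^-3 s^-1,
# alpha ~ 1.6e-6 cm^3 s^-1 give n = 2500 cm^-3.
n = steady_state_ion_concentration(10, 1.6e-6)
```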
  • Tomperi, Päivi (Helsingin yliopisto, 2015)
    Both nationally and internationally, teachers' professional development is a current research topic. According to the international Teaching and Learning International Survey (TALIS), Finnish teachers' interest in participating in long-lasting in-service teacher training programs focusing on professional development is decreasing. In order to implement inquiry-based practical work in classroom practice, new in-service training models are needed. This thesis examines the design and development process of a professional training course that implemented the SOLO taxonomy. The training course was meant for chemistry teachers working at the upper secondary school and focused on inquiry-based chemistry instruction. The research was done using design research. The main research questions were formed according to the three central areas of design research (Edelson, 2002): 1) Problem analysis: What kind of challenges does inquiry-based practical chemistry bring to chemistry teachers at the upper secondary school? 2) Design process: What kind of possibilities and challenges does the SOLO taxonomy offer for the support of inquiry-based practical chemistry instruction? 3) Design solution: What are the characteristics of teachers' professional development that promotes inquiry-based practice in chemistry at the upper secondary school? The eight-phase design research employed qualitative research methods, including observations, surveys and interviews. The data was analyzed using content analysis. From this data, two main research results were obtained. First, information was obtained on the implementation of inquiry-based chemistry in practice, and on teachers' professional development using the SOLO taxonomy. Second, information was obtained on the characteristics of a research-based training model promoting inquiry-based practical chemistry instruction.
The findings show that inquiry is challenging for teachers due to its constructivist view of learning, teachers' inexperience in acting in modern learning environments, and the lack of practice in implementing inquiry in the classroom during training. The findings also show that using the SOLO taxonomy supported professional development in many ways: it worked as a tool for designing and modifying written instructions, it motivated teachers to develop their practices, it increased teachers' ownership of the produced written instructions, it supported teachers' understanding of inquiry, and it acted as a model to support higher-order thinking skills. The created research-based training model, meant to promote inquiry in practical chemistry instruction, was based on a theoretical and empirical problem analysis. The main features incorporated into the training model are (i) personalized learning that considers the teacher's current knowledge, (ii) expanding the teacher's role from merely a dispenser of knowledge to the roles of a researcher and a learner, (iii) using a theoretical framework to support research-based instruction, higher-order thinking skills and interaction-based sharing of ideas, (iv) creating meaningful inquiry-based material using the SOLO taxonomy, (v) peer support, (vi) reflection and the incorporation of action research, and (vii) practicing the implementation of inquiry-based practical work, which is of a collaborative and cognitive nature and increases understanding of the nature of science. The research results show that teachers need training models of various durations. If the teacher's view of learning is congruous with the inquiry-based approach, they can begin to practice the implementation of inquiry already during a short training. However, if the teacher's view of learning does not support constructive learning methods, the accommodation process requires more time.
The research results of this doctoral dissertation can be applied (i) in the implementation of the new national core curriculum, (ii) in planning and designing new learning material for inquiry-based practical chemistry, (iii) in training that supports teachers' life-long learning, and (iv) in the international exportation of education. Keywords: design research, professional development, SOLO taxonomy, research-based training, inquiry-based practical chemistry
  • Peltola, Jari (Helsingin yliopisto, 2015)
    This thesis is based on four experimental spectroscopic studies in which novel, highly sensitive laser absorption spectrometers are developed and used for trace gas detection and precision spectroscopy. Most of the studies are carried out in the mid-infrared region between 3 and 4 µm, where a home-built continuous-wave singly resonant optical parametric oscillator is used as the light source. In addition, one study was performed in the visible region using a commercial green laser at 532 nm. Two of the developed spectroscopic applications are based on cavity ring-down spectroscopy. In this thesis, the first off-axis re-entrant cavity ring-down spectrometer in the mid-infrared is demonstrated and utilized for highly sensitive detection of formaldehyde. The second study presents an optical-frequency-comb-referenced mid-infrared continuous-wave singly resonant optical parametric oscillator, which is applied to high-precision cavity ring-down spectroscopy of nitrous oxide and methane. Furthermore, this study presents a new method for referencing a mid-infrared optical parametric oscillator to a near-infrared optical frequency comb; the method allows large mode-hop-free frequency tuning ranges in the mid-infrared region. The other two experiments are based on cantilever-enhanced photoacoustic spectroscopy, presenting the first reported studies of cantilever-enhanced photoacoustic trace gas detection in the mid-infrared and visible regions. These studies show the great potential of cantilever-enhanced photoacoustic detection for substantially enhancing the sensitivity of trace gas detection. For instance, the best nitrogen dioxide detection limit ever reported using photoacoustic spectroscopy is presented in this thesis.
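As background to the cavity ring-down measurements described above, the sample absorption coefficient is conventionally obtained from the ring-down times of the cavity with and without the absorber, alpha = (1/c) * (1/tau - 1/tau0). The helper below sketches that textbook relation; it is illustrative only and is not code or data from the thesis.

```python
C = 2.99792458e10  # speed of light (cm/s)

def crds_absorption(tau, tau0):
    """Sample absorption coefficient alpha (cm^-1) from ring-down times.

    tau: ring-down time with the absorber present (s)
    tau0: ring-down time of the empty cavity (s)
    Uses the standard relation alpha = (1/c) * (1/tau - 1/tau0).
    """
    return (1.0 / C) * (1.0 / tau - 1.0 / tau0)
```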
  • Lavinto, Mikko (Helsingin yliopisto, 2015)
    The science of cosmology relies heavily on interpreting observations in the context of a theoretical model. If the model does not capture all of the relevant physical effects, the interpretation of observations is on shaky ground. The concordance model of cosmology is based on the homogeneous and isotropic Friedmann-Robertson-Walker metric with small perturbations. One long-standing question is whether the small-scale details of the matter distribution can modify the predictions of the concordance model, or whether the concordance model can describe the universe to high precision. In this thesis, I discuss some potential ways in which inhomogeneities may change the interpretation of observations relative to the predictions of the concordance model. One possibility is that small-scale structure affects the average expansion rate of the universe via a process called backreaction. In such a case the concordance model fails to describe the time-evolution of the universe accurately, leading to the misinterpretation of observations. Another possibility is that the paths light rays travel on are curved in such a way that they do not cross all regions with equal probability. If some regions are favoured and others disfavoured, the average description of the concordance model gives incorrect results. My collaborators and I investigated the effects of voids on the CMB using second-order perturbation theory and the exact Lemaître-Tolman-Bondi solution. A void has been detected in the direction of the CMB Cold Spot, but we found that, contrary to claims made in the literature, it is not large and deep enough to explain the Cold Spot. The results from perturbation theory and the exact calculation agreed to high precision, which was not surprising, as the void is fairly shallow.
We have studied a toy model of the universe, called the Swiss Cheese model, to see whether it can produce observational signals that deviate significantly from the predictions of the concordance model. We studied backreaction in such models and concluded that in physically motivated Swiss Cheese models its impact on the expansion rate must be small. We also considered an unphysical model that was constructed so that the holes expand independently of the background. Even though the inhomogeneities change the expansion rate completely, the backreaction contribution to the total average expansion rate today was only at the 1% level. We also studied weak lensing in a more realistic Swiss Cheese model to see how the structures change the brightness and shape of sources. We found that the simplest assumption, no change in the average flux, appeared to be violated with a probability of 98.6%. Our results agree on the magnitude of the effect, in that it should be very small, but the exact value is significantly different. There are many possible reasons for this; one is that the structures alter the area of the constant-redshift surface around the observer. However, to find conclusive proof, the calculation should be redone at higher resolution.
  • Tuovinen, Hanna (Helsingin yliopisto, 2015)
    Northern Fennoscandia has experienced an unparalleled mineral exploration boom since around 2005. At the same time, there has been increasing awareness of the potential environmental impact of non-nuclear industries that extract and/or process ores containing naturally occurring radionuclides. Industrial activities may result in significant environmental problems if the waste generated during ore processing is not adequately managed. In 2010, a new project was launched with the objective of studying the mobility of uranium-series radionuclides from diverse mill tailings in a northern boreal environment in Finland. Three sites were investigated: the Talvivaara Ni-Cu-Zn-Co mine in central Finland, a former phosphate mine at Sokli, Finnish Lapland, and a former pilot-scale uranium mine at Paukkajanvaara, eastern Finland. The mobility of radionuclides from the mill tailings at Sokli was examined in order to assess the potential environmental impact of past and future mining activities. Mineralogical studies did not indicate that uranium or thorium has been mobilized from altered pyrochlore-group minerals in the Sokli ore or tailings. In the tailings pond, no clear trends were observed in the activity concentrations of uranium, radium or thorium isotopes in the surface layers of the mill tailings. In subsurface samples, an increase in the concentration of these isotopes can be seen when approaching the pond at the distal end of the sludge field. However, this increase is most likely a consequence of compositional changes in the discharged material. The results of the sequential extraction tests suggested that neither uranium nor thorium is in an exchangeable form that could readily be released to the environment. Uranium (4% of the total concentration) was partly soluble under weakly acidic conditions, whereas thorium was tightly bound in mineral phases.
At the former Paukkajanvaara uranium mine in Eno, the aim of the study was to examine the potential for further mobilization of radionuclides after remediation of the site in the early 1990s. There are two primary sources of contamination at the site: the waste rock pile and the tailings. The results indicate that Ra-226 has been leached from the waste rock pile and has accumulated in the surrounding soil. In run-off sediment samples collected from a dry stream bed near the waste rock pile, the activity concentrations of Ra-226 and U-238 are higher than in soil samples. From the tailings, radionuclides can leach directly to the lake and to another small stream, which flows to the east of the waste rock pile. The results from soil samples collected between the tailings area and the stream indicate leaching of U-238 and Ra-226 with the surface flow. Sediment samples collected from the bottom of the lake display pronounced uranium-series disequilibrium, with fractionation of Pb-210 and Ra-226 relative to the parent U-238. The results therefore indicate that leaching and accumulation of at least Ra-226 from the waste rock pile, and possibly the tailings, is still ongoing. At Talvivaara, the aim of the study was to generate new data leading to a better understanding of the fate of the radiotoxic uranium daughter nuclides, primarily Ra-226, Pb-210 and Po-210, in the mining process. In heap leaching, uranium is dissolved from uraninite into the pregnant leach solution (PLS). Uranium is probably transported as uranyl ions and uranyl sulfate complexes in the acidic PLS, and finally ends up in the precipitates of the gypsum pond tailings via the iron removal and final neutralization processes during the removal of residual metals. In terms of radiation safety, the U-238 activity concentration in the gypsum pond is partly above the exemption value (1000 Bq/kg) for natural radionuclides of the U-238 series. Radium and thorium mostly stay in the heaps during heap leaching.
In addition, Pb-210 and Po-210 mainly stay in the heaps, although slight mobilization of these nuclides was indicated. Secondary sulfate minerals, such as gypsum and jarosite, precipitate from the sulfate-rich, acidic PLS at Talvivaara. These minerals can incorporate radium into their crystal lattices, limiting Ra-226 mobility. It can therefore be assumed that most of the radium, and possibly part of the Pb-210 and Po-210, is co-precipitated with poorly soluble sulfates in the Talvivaara heaps.
  • Ding, Yi (Helsingin yliopisto, 2015)
    Due to the popularity of smartphones and mobile streaming services, the growth of traffic volume in mobile networks is phenomenal. This puts huge investment pressure on mobile operators' wireless access and core infrastructure, while profits do not necessarily grow at the same pace. It is therefore urgent to find a cost-effective solution that can scale to the ever-increasing traffic volume generated by mobile systems. Among many visions, mobile traffic offloading is regarded as a promising mechanism that uses complementary wireless communication technologies, such as WiFi, to offload data traffic away from overloaded mobile networks. The current trend of equipping mobile devices with an additional WiFi interface also supports this vision. This dissertation presents a novel collaborative architecture for mobile traffic offloading that can efficiently utilize context and resources from both networks and end systems. The main contributions include a network-assisted offloading framework, a collaborative system design for energy-aware offloading, and a software-defined networking (SDN) based offloading platform. Our work is the first in this domain to integrate energy and context awareness into mobile traffic offloading from an architectural perspective. We have conducted extensive measurements on mobile systems to identify hidden issues of traffic offloading in operational networks. We implement the offloading protocol in the Linux kernel and develop our energy-aware offloading framework in C++ and Java on commodity machines and smartphones. Our prototype systems for mobile traffic offloading have been tested in a live environment. The experimental results suggest that our collaborative architecture is feasible and provides reasonable improvement in terms of energy saving and offloading efficiency. We further adopt the programmable paradigm of SDN to enhance the extensibility and deployability of our proposals.
We release the SDN-based platform under open-source licenses to encourage future collaboration with the research community and standards-developing organizations. As one of the pioneering works in this area, our research stresses the importance of collaboration in mobile traffic offloading. The lessons learned from our protocol design, system development, and network experiments shed light on future research and development in this domain.
  • Stén, Johan Carl-Erik (Springer / Birkhäuser, 2015)
    The Finnish mathematician and astronomer Anders Johan Lexell (1740-1784) was a long-time close collaborator and the academic successor of Leonhard Euler at the Imperial Academy of Sciences in Saint Petersburg. Lexell was invited in 1768 from his native town of Åbo (Turku) in Finland to Saint Petersburg to assist in the laborious mathematical processing of the astronomical data from the forthcoming transit of Venus of 1769. A few years later he became an ordinary member of the Academy. Lexell was the first mathematician and astronomer of international renown from Finland. This thesis is the first full-length intellectual biography devoted to Lexell and his prolific scientific output. Using his numerous publications, we trace the development of his scientific thought. In close collaboration with Euler, he contributed especially to infinitesimal calculus and geometry. In astronomy his work pertains mainly to the parallax and longitude problems, as well as to orbit calculations. He is known for having recognised that Herschel's new "comet" of 1781 moves in a nearly circular orbit and must therefore be a planet. Lexell also predicted the extraordinary motion of the comet of 1770 ("Lexell's comet"), which constitutes an example of a restricted three-body problem. Lexell had wide scientific interests. Being internationally minded and well-connected, he entertained a rich correspondence not only with astronomers and mathematicians but also with natural historians and administrators. His detailed letters, especially from his grand tour to Germany, France and England in 1780-1781, reveal him as a lucid observer of the intellectual landscape of enlightened Europe.