Faculty of Science


Recent Submissions

  • Waltari, Otto (Helsingin yliopisto, 2020)
    Over the past decade Internet connectivity has become an increasingly essential feature of modern mobile devices. Many use-cases representing the state of the art depend on connectivity. Smartphones, tablets, and similar devices can even be seen as access devices to Internet services and applications. Getting a device connected requires either a data plan from a mobile network operator (MNO) or, alternatively, connecting over Wi-Fi wherever feasible. Data plans offered by MNOs vary in price, quota size, and service quality due to regional factors. Expensive data, poor cell coverage, or a limited quota has driven many users to look for free Wi-Fi networks in hopes of finding a decent connection to satisfy the ever-growing transmission needs of modern Internet applications. The standard for wireless local area networks (WLAN, IEEE 802.11) specifies a network discovery protocol for wireless devices to find surrounding networks. The principle behind this discovery protocol dates back to the early days of wireless networking. However, the scale at which Wi-Fi is deployed and utilized today is orders of magnitude larger than it used to be. More recently it was realized that this primitive network discovery protocol, combined with such large-scale deployment, can be exploited for privacy violations. Device manufacturers have acknowledged the issue and developed mechanisms, such as MAC address randomization, for preventing, for example, user tracking based on Wi-Fi background traffic. These mechanisms have, however, proven ineffective. The contributions of this thesis are twofold. First, this thesis exposes problems in the 802.11 network discovery protocol. It presents a highly efficient Wi-Fi traffic capturing system, through which we can show distinct characteristics in how mobile devices of various brands and models scan for available networks.
This thesis also examines the potentially privacy-compromising elements in these queries and provides a mechanism to quantify the information leak. The collected information, combined with public crowdsourced data, can pinpoint locations of interest, such as home, workplace, or affiliation, without user consent. Second, this thesis proposes a novel mechanism, WiPush, for delivering messages over Wi-Fi without association, avoiding network discovery entirely. This mechanism leverages the existing, yet mostly inaccessible, Wi-Fi infrastructure to serve a wider scope of users. Lastly, this thesis provides a communication system for privacy-preserving, opportunistic, and lightweight Wi-Fi communication without association. This system is built around an inexpensive companion device, which makes the concept adaptable to various opportunistic short-range communication systems, such as smart traffic and delay-tolerant networks.
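The information leak from probe requests can be illustrated with a toy calculation (a sketch only: the SSID popularity figures and the assumption that probed SSIDs are independent are mine, not the thesis's measured data or its actual quantification method): a rarely probed SSID identifies its owner far more strongly than a common one.

```python
from math import log2

def pnl_information_bits(probed_ssids, ssid_popularity):
    """Estimate identifying bits leaked by a device's probed SSID list.

    ssid_popularity maps SSID -> fraction of devices that probe for it.
    Assumes, simplistically, that SSIDs are probed independently."""
    bits = 0.0
    for ssid in probed_ssids:
        p = ssid_popularity.get(ssid, 1e-6)  # rare/unknown SSIDs leak the most
        bits += -log2(p)
    return bits

# Toy popularity figures (illustrative, not measured):
popularity = {"eduroam": 0.30, "HomeNet-5G": 0.001, "CoffeeCo Free WiFi": 0.05}
print(round(pnl_information_bits(["eduroam", "HomeNet-5G"], popularity), 2))
```

Under this toy model, probing for one common and one rare SSID already yields over 11 bits of identifying information, enough to single a device out of thousands.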
  • Niskanen, Andreas (Helsingin yliopisto, 2020)
    Argumentation in artificial intelligence (AI) is a prominent research area situated in the field of knowledge representation and reasoning. Abstract argumentation frameworks (AFs) constitute a central formalism for argumentation in AI. AFs model argumentative scenarios as directed graphs, with nodes representing arguments and edges representing conflicts, or attacks, between arguments. For reasoning about arguments, and in particular about the acceptability of arguments, several different argumentation semantics identify jointly acceptable subsets of arguments called extensions. Argumentation is inherently a dynamic process involving changes with respect to both arguments and the attacks between them. Dynamics give rise to various representational and computational challenges. In this thesis, we study three related themes involving dynamics and uncertainty in AFs from a computational perspective: extension and status enforcement in AFs, the AF synthesis problem, and two formalisms specifically designed to accommodate uncertainty arising from dynamics, namely, incomplete AFs and control AFs. Extension enforcement is the problem of finding the smallest possible change to a given AF such that a given subset of arguments is (included in) an extension, while in status enforcement the goal is to make given arguments accepted or rejected. The AF synthesis problem, proposed in this thesis, seeks to construct an optimal AF in terms of representing given examples of extensions and non-extensions. AF synthesis generalizes the fundamental concept of realizability in AFs, and is more applicable in dynamic settings involving incomplete or inconsistent information. Incomplete AFs generalize standard AFs by distinguishing between definite and uncertain arguments and attacks, making it possible to reason about the acceptance of arguments by quantifying over the uncertain part.
Control AFs additionally include control arguments, allowing an agent to choose which arguments to put forward in order to reach its goal regardless of the state of the uncertain part, giving rise to the problem of controllability. We provide complexity results and practical algorithms for each of the three problem settings. We show that the computational complexity of these problems varies from polynomial-time solvability to completeness for the second or third level of the polynomial hierarchy, depending largely on the problem variant, restrictions on the input instance, and the choice of semantics. Motivated by the success of Boolean satisfiability (SAT) based declarative methods for NP-complete problems and of SAT-based counterexample-guided abstraction refinement (CEGAR) algorithms for problems beyond NP, we develop algorithms that employ SAT and maximum satisfiability solvers both directly and in an iterative CEGAR loop. For the CEGAR algorithms, we develop so-called strong refinement steps that reduce the number of redundant CEGAR iterations, and show that these are essential to solving the problems in practice. All of the proposed algorithms are implemented, released as open-source software, and subjected to extensive empirical evaluation.
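The extension-based semantics described above are easy to state programmatically. Below is a minimal brute-force sketch for stable semantics; it only fixes the definitions and is nothing like the SAT/MaxSAT machinery the thesis actually develops (the example AF is invented for illustration):

```python
from itertools import combinations

def is_conflict_free(S, attacks):
    """S is conflict-free iff no argument in S attacks another in S."""
    return not any((a, b) in attacks for a in S for b in S)

def is_stable(S, args, attacks):
    """S is stable iff it is conflict-free and attacks every outside argument."""
    return is_conflict_free(S, attacks) and all(
        any((a, b) in attacks for a in S) for b in args - S)

def stable_extensions(args, attacks):
    """Enumerate all stable extensions by brute force (exponential!)."""
    return [set(S) for r in range(len(args) + 1)
            for S in combinations(sorted(args), r)
            if is_stable(set(S), args, attacks)]

# Toy AF: a and b attack each other, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(stable_extensions(args, attacks))  # [{'b'}, {'a', 'c'}]
```

The exponential enumeration is exactly what the declarative SAT-based encodings avoid in practice.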
  • Kuusela, Linda (Helsingin yliopisto, 2020)
    Magnetic resonance imaging (MRI) technology is rapidly developing in acquisition, reconstruction and post-processing. When introducing novel methods into clinical routine, there may be aspects of the method that hinder its application. These aspects comprise safety issues, restrictions on the use of equipment, long scan times or time-consuming data post-processing, or combinations of these. Obviously, these issues must be eliminated or managed. The work presented in this thesis was driven by clinical needs at HUS Helsinki Medical Imaging Center, Finland. In simultaneous electroencephalography (EEG) and functional MRI (fMRI), the equipment must be MRI-compatible and only certain low specific absorption rate (SAR) imaging sequences are allowed. The requirements for performing a simultaneous EEG-fMRI study are safety, good signal stability, an acceptable signal-to-noise ratio (SNR) and no significant image artifacts. Both temperature measurements and image quality assessments were carried out. The highest temperature changes were observed for the sequence with the highest SAR; these were, however, within acceptable limits for safe scanning. A decrease in SNR was observed with the fMRI sequence. In craniosynostosis imaging, the aim is to diagnose prematurely closed sutures of the growing skull. The gold standard of craniosynostosis imaging is computed tomography (CT), but a non-ionizing modality enabling anatomical imaging in the same imaging session was clinically desired. Thus, a black bone MRI (BB-MRI) sequence was developed based on research reported by others and further optimized for the specific needs of our hospital. To produce the 3D rendered image, a segmentation algorithm based on a bias field-corrected fuzzy c-means algorithm was used. To verify the reliability of BB-MRI, a comparison study with CT was conducted, in which sutures and intracranial impressions were rated.
For the assessment of sutures, the inter-rater reliability was observed to be high with both BB-MRI and CT. For the assessment of intracranial impressions, the inter-rater reliability was low with both modalities. Gradient Echo Plural Contrast Imaging (GEPCI) is a post-processing technique that produces quantitative information as well as several image contrasts. The issue with GEPCI is the relatively long scan time: 8-12 minutes, depending on the resolution and stack coverage. The usability of the partial Fourier (PF) technique was studied with both phantom and volunteer measurements. A PF factor should be applied in the phase direction only, yielding a 24% reduction in scan time.
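The scan-time saving from partial Fourier acquisition follows from simple line counting: with one phase-encode line acquired per repetition time (TR), sampling only a fraction of the phase direction shortens the scan roughly proportionally. A sketch under assumed, illustrative parameters (the TR, matrix size, and number of retained centre lines are my assumptions, not values from the thesis):

```python
import math

def scan_time_s(tr_s, n_phase, pf=1.0, extra_center_lines=0):
    """Approximate 2D spin-warp scan time: one phase-encode line per TR.

    pf is the partial Fourier fraction in the phase direction;
    extra_center_lines models fully sampled centre-of-k-space lines
    kept for phase correction (an assumption, not the exact protocol)."""
    lines = math.ceil(pf * n_phase) + extra_center_lines
    return tr_s * lines

full = scan_time_s(tr_s=0.05, n_phase=256)
pf68 = scan_time_s(tr_s=0.05, n_phase=256, pf=6/8, extra_center_lines=4)
print(f"reduction: {100 * (1 - pf68 / full):.1f}%")  # ~23.4% here
```

With a 6/8 PF factor and a few retained reference lines, the reduction lands near the 24% reported above.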
  • Säppi, Saga (Helsingin yliopisto, 2020)
    This Thesis covers research conducted at the University of Helsinki in the field of thermal field theory, a framework for describing quantum fields in a medium, in particular at finite temperature and chemical potential. In the included Publications, there is a strong emphasis on describing field theories in a dense medium, at large chemical potentials, as well as on thermal resummation methods. The central focus and inspiration for the research is the study of elementary particle physics, in the realm of relativistic quantum fields. Specific motivations include the desire to better understand the behaviour of dense, strongly interacting matter possibly present in the cores of neutron stars, to push forward high-order perturbative calculations in thermal field theory, and to gain some analytic insight into nonperturbative physics by studying a simple low-dimensional model. The main results emerging from the research carried out for this Thesis include a method for reducing zero-temperature finite-density Feynman loop integrals into a sum of vacuum integrals, together with its proof; the determination of a new high-order contribution to the weak-coupling perturbative expansion of the pressure of cold and dense Quantum Chromodynamics; and a study of a three-dimensional thermal quantum field theory using a novel nonperturbative method. The research also paved the way for a future determination of the full next-to-next-to-next-to-leading-order pressure, which is currently well under way. All of this research was theoretical, and involved primarily analytic and in some cases numerical calculation methods. In addition to the peer-reviewed Publications, the Thesis contains an Introduction that builds the foundation of some key concepts (gauge theory and eventually Quantum Chromodynamics, as well as thermal field theory, in particular in the imaginary-time formalism) required for understanding the included research.
It also includes a more focussed Chapter on Quantum Chromodynamics at finite density, also covering Hard Thermal Loop theory.
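The imaginary-time formalism mentioned above can be summarised by its central substitution rule (standard textbook material rather than a result of this Thesis): zero-temperature energy integrals become sums over discrete Matsubara frequencies,

```latex
\int \frac{\mathrm{d}p_0}{2\pi}\, f(p_0) \;\longrightarrow\; T \sum_{n=-\infty}^{\infty} f(i\omega_n),
\qquad
\omega_n =
\begin{cases}
2n\pi T & \text{(bosons)},\\[2pt]
(2n+1)\pi T & \text{(fermions)},
\end{cases}
```

with a finite chemical potential entering, in a common convention, through the shift $p_0 \to i\omega_n + \mu$ in the fermionic propagators.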
  • Donvil, Brecht (Helsingin yliopisto, 2020)
    Recent developments in experimental methods allow for the study of thermodynamic properties of quantum systems. In quantum integrated circuits, quantum systems are elements in an electric circuit that can straightforwardly be coupled to other elements. This manipulability allows one to construct quantum heat engines, Maxwell demons, and so on. Quantum integrated circuits are also one of the main potential settings for realising a working quantum computer. Calorimetric measurements in integrated circuits serve as a promising technique to probe thermodynamic laws in the quantum regime and to study the inner workings of quantum devices. Due to this experimental accessibility, the theoretical study of open quantum systems in the context of quantum integrated circuits is highly relevant. Open quantum systems are typically small systems, e.g. qubits or oscillators, in contact with one or more reservoirs. The research on which this thesis is based can roughly be divided into two parts. The first part is concerned with the thermodynamics of a driven qubit in contact with a thermal bath. This system is the archetype of a quantum out-of-equilibrium system. In one case the qubit is strongly driven by a semiclassical driving field. Building on earlier works, the thermodynamic relations of the system are found by proving its equivalence with an easier-to-study qubit-oscillator system. In the other case, the qubit is driven by being in contact with two baths with a temperature gradient. The full generating function is derived within an appropriate approximation scheme and a fluctuation-dissipation relation is found. The second part focusses on a specific experimental scheme to perform calorimetric measurements. The scheme relies on coupling a quantum system to a finite reservoir and performing fast temperature measurements on the reservoir. Doing so allows one to infer energy changes in the reservoir and therefore to obtain the heat exchanged with the system.
The dynamics of this system are modelled for weak system-reservoir coupling and concrete experimental predictions are made. In new work, the dynamics of a toy model of a system interacting with a finite reservoir are derived from first principles. The first-principles derivation matches the earlier modelled dynamics in the weak-coupling regime and also allows strong system-reservoir coupling to be considered.
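The weak-coupling dynamics of a small system in a thermal bath can be caricatured by a two-level rate equation with detailed-balance rates. A minimal sketch with illustrative parameters (this is a generic textbook picture, not the thesis's model, which treats a finite reservoir and its temperature dynamics):

```python
import math

def excited_population(beta_omega, gamma=1.0, p0=1.0, dt=0.001, steps=20000):
    """Euler-integrate the qubit rate equation dp/dt = g_up*(1-p) - g_down*p.

    Detailed balance fixes g_up/g_down = exp(-beta*omega), so the steady
    state is the Gibbs population 1/(1 + exp(beta*omega)). All parameter
    values here are illustrative."""
    g_down = gamma
    g_up = gamma * math.exp(-beta_omega)
    p = p0
    for _ in range(steps):
        p += dt * (g_up * (1 - p) - g_down * p)
    return p

beta_omega = 2.0  # level splitting in units of k_B * T
p_final = excited_population(beta_omega)
p_gibbs = 1 / (1 + math.exp(beta_omega))
print(abs(p_final - p_gibbs) < 1e-6)  # True: relaxes to the thermal population
```

In a calorimetric setup, each down (up) transition of the qubit would deposit (extract) one quantum of energy in the reservoir, which is what the fast temperature measurements aim to resolve.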
  • Happonen, Konsta (Helsingin yliopisto, 2020)
    According to theory, the functional traits of species dictate how environmental selection affects them, and also the functioning of the ecosystems that those species form. However, we lack a general understanding of how exactly environmental selection affects the trait composition of communities and, consequently, ecosystem functions. In this thesis, I study how the effects of environmental selection manifest in the functional composition of field-layer plant communities in the tundra and in boreal forests. My aims are 1) to sharpen our understanding of the effects of trait-based selection on plant communities by accounting for the microenvironment in models of trait composition, 2) to elucidate the effects of that selection on tundra carbon cycling, and 3) to reveal how forestry and reindeer husbandry, two forms of human land use, modulate long-term vegetation changes by favouring certain trait combinations over others. The study areas span four tundra landscapes in Finnish Lapland, Greenland, Svalbard, and the southern Indian Ocean, and hundreds of herb-rich boreal forest patches in Northern Finland. I use linear modelling to study how the results of vegetation surveys, visual, sensor-based and laboratory measurements of traits and the environment, and carbon flux chamber measurements relate to each other. My results suggest the following. 1) The environment strongly determines the functional composition of plant communities when microenvironmental conditions are accounted for. Warm, ungrazed and unshaded conditions favour larger plants. Leaf traits that confer fast returns on invested resources are favoured in conditions of high soil resource availability, in ungrazed areas, and in the shade. 2) In the tundra, communities consisting of larger plants cycle carbon more rapidly and have larger above-ground carbon stocks.
Communities with “fast” leaf traits also cycle carbon more intensively, but they have lower above-ground carbon stocks than communities with “slow” leaf traits. 3) In boreal forests, forestry modifies the functional composition of understory communities by decreasing the amount of light in the long term. While forestry seems to accelerate vegetation change, reindeer husbandry can be seen to counteract it by inhibiting the increase in average plant size that is observed in areas without reindeer. These results show that the functional traits of plants dictate how they are affected by environmental selection pressures. The effects of this selection are consistent at the community level across locations up to 15,000 km apart. Furthermore, human land use is an important control on the functional composition of communities alongside natural environmental variation. This information will be useful in predicting which species will suffer and which will benefit from global change, and what the consequences for ecosystem functioning will be.
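The functional composition analysed above is commonly summarised with community-weighted mean (CWM) traits. A minimal sketch (the species, cover values, and trait values below are invented for illustration; the thesis's actual trait measurements and models are far richer):

```python
def community_weighted_mean(abundances, traits):
    """Community-weighted mean of a trait: the abundance-weighted average,
    a standard summary of the functional composition of a plot."""
    total = sum(abundances.values())
    return sum(abundances[sp] * traits[sp] for sp in abundances) / total

# Hypothetical plot: cover (%) and plant height (cm) for three species.
cover = {"Betula nana": 40, "Empetrum nigrum": 50, "Salix herbacea": 10}
height = {"Betula nana": 30.0, "Empetrum nigrum": 10.0, "Salix herbacea": 3.0}
print(community_weighted_mean(cover, height))  # 17.3
```

Regressing such CWM values against microclimate and land-use predictors is one common way the trait-environment relationships above are quantified.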
  • Akujärvi, Anu (Helsingin yliopisto, 2020)
    The carbon (C) cycle of forests and croplands contributes to human wellbeing by regulating climate, producing food, timber and energy, and providing habitats for species. In the future, climate change and the increasing use of natural resources may threaten the availability of these ecosystem services (ES). Sustainable environmental management requires spatially explicit information on the impacts of human activity on ES. Mapping C stocks and changes using overly simplified, land-cover-based proxies can introduce inaccuracy into the ES estimates. This dissertation introduces different approaches to quantifying the C budget of terrestrial ecosystems in boreal and temperate regions. The overall objectives were to couple the estimates of C sequestration with ES assessments and to investigate the spatial variation of climate regulation in relation to other ES indicators. The specific objectives were 1) to examine the drivers of C sequestration of forests and croplands using process-based models, 2) to develop a framework for mapping the current status of the forest C budget across boreal landscapes and 3) to identify and map synergies and trade-offs between regulating and provisioning ES in response to alternative forest management practices and climate change. Reasons for the observed decline in the C concentration of Finnish croplands on mineral soils in 1974-2009 were investigated in paper I. The soil C model applied was able to reliably estimate the changes in the soil C stock based on information about the climatic conditions and the chemical composition of litter. The soil C stock of Finnish croplands declined in 1974-2009 because they produced less litter than the pre-cropland forests and this agricultural litter decomposed more rapidly. According to the sensitivity analysis, climate warming has not yet been a significant cause of the observed C loss.
The effects of different climate change and forest management scenarios on the growth and C budget of forests were examined across a long latitudinal gradient in Europe in paper II. The simulated productivity of forests increased substantially in 2005-2095 throughout the studied gradient. Whole-tree harvesting caused a loss of soil C independent of the model used, demonstrating this pattern to be robust. Biomass growth was unexpectedly enhanced as a result of harvest residue extraction, revealing that the post-harvest microbial controls of stand productivity require further research. The results indicated that in the short term, forest management affected the C budget more than climate change. An approach to quantify the C budget of boreal forested landscapes was developed in paper III by combining simulation modelling with extensive information on stand characteristics. The mapping framework produced reliable estimates of the current status of the C budget in the study region in southern Finland. It was developed further in paper IV to map projections of climate regulation, biomass production and dead wood production in response to alternative forest management practices. Regular harvesting, affecting the stand age class distribution, was a key driver of the C stock changes in the studied catchment during the simulation period 2012-2100. Extracting branches and stumps enhanced energy-wood production but caused trade-offs for climate regulation, dead wood production and, consequently, forest biodiversity. The mapping framework developed in this dissertation allows for visualizing ES related to C cycling as high-resolution maps to support sustainable land use planning. It contributes to bridging the gap between ecosystem service assessments and simulation modelling. In addition, the simple structure of the approach is an advantage in comparison with some detailed simulation models.
The modular structure of the mapping framework enables its flexible development with new data and models in the future.
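The litter-input explanation for the cropland C decline in paper I can be caricatured by a one-pool soil carbon model. A deliberately minimal sketch (the thesis applies a more detailed multi-pool soil C model, and all numbers below are invented):

```python
def soil_carbon(c0, litter_in, k, years):
    """One-pool soil carbon model, dC/dt = I - k*C, in annual Euler steps.

    I is the annual litter input and k a first-order decomposition rate;
    the steady-state stock is I/k."""
    c = c0
    for _ in range(years):
        c += litter_in - k * c
    return c

# Forest soil at its steady state (I/k = 100), then conversion to cropland:
# litter input roughly halves and decomposition speeds up (illustrative).
c_forest = soil_carbon(c0=100.0, litter_in=2.0, k=0.02, years=50)
c_cropland = soil_carbon(c0=c_forest, litter_in=1.0, k=0.03, years=35)
print(c_forest > c_cropland)  # True: cropland soil loses carbon, as observed
```

Even this crude model reproduces the qualitative finding: lowering the input and raising the decomposition rate drives the stock toward a new, lower steady state I/k.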
  • Niittynen, Pekka (Helsingin yliopisto, 2020)
    The Arctic is warming two to three times faster than the global average. However, climate change is proceeding at a different pace between seasons, and the warming has been most prominent in winter. For most of the year, the majority of arctic organisms are covered by an insulating snowpack. Snow protects arctic plants, bryophytes and lichens from weather events in the free atmosphere and may provide relatively warm and stable overwintering conditions. The importance of snow has been widely acknowledged, but snow information is rarely utilized in the climate change impact models that predict the future state of arctic vegetation. This is largely due to missing wintertime datasets and harsh winter conditions that limit field work in the Arctic. Therefore, a question has remained largely unanswered: what is the role of snow conditions in the spatial redistribution of arctic species and vegetation under a rapidly warming climate? In this thesis, I address these gaps in knowledge and methodology. I utilise extensive plot-scale vegetation datasets and link them to detailed microclimatic measurements covering both summer and winter conditions and to satellite-borne snow information at fine spatial scales. I use a suite of statistical modelling methods to explore snow-vegetation relationships in species pools consisting of several hundred arctic, alpine and boreal vascular plant, bryophyte and lichen species in northern Fennoscandia, Svalbard and western Greenland. These models are further used to predict patterns in species distributions, community and functional trait compositions and biodiversity in space and time, and to test the sensitivity of these vegetation properties to concurrent and separate changes in snow conditions and temperatures. I found that snow and winter conditions have a fundamental role in arctic ecosystems by mediating the effects of climate change at local and regional scales.
Snow information improves the accuracy of arctic vegetation models and reveals possible future trajectories that remain hidden from climate change impact models when the effects of snow are not quantified. Heterogeneous snow accumulation is one of the main drivers of taxonomic and functional diversity in the tundra, and losing the late-melting snowbed environments may lead to homogenisation of the tundra and regional extinctions among snow-specialist species. It is evident that ignoring the effects of snow can produce biased projections of the future status of arctic vegetation. Given the high ecological importance of snow in the Arctic, it is alarming that the uncertainties in snow projections for the second half of the century are so high. In the upcoming years, the scientific community should pay more attention to plant-snow relationships and interactions and improve the predictions of future snow conditions at fine spatial and temporal scales.
  • Iivonen, Tomi (Helsingin yliopisto, 2020)
    The focus of this thesis is the development and optimization of atomic layer deposition (ALD) processes for cobalt oxide and copper oxide thin films. Emphasis is also placed on the characterization of the chemical and physical properties of the obtained films. As materials, cobalt oxides and copper oxides are semiconducting, and they also absorb visible light. These materials are therefore potentially useful in various electronic, optical and catalytic applications. ALD is a chemical gas-phase thin film synthesis technique with several advantageous features, such as the ability to produce films with exceptional conformality on three-dimensional high aspect ratio structures, excellent uniformity of film thickness over large-area substrates and accurate control of film thickness in the sub-nanometer range. The origin of these features is the unique film growth mechanism based on sequential and self-limiting gas-to-solid chemical reactions. In order to enable all the useful features of ALD in thin film deposition, the precursor chemistry must be studied, developed and, above all, understood. Studies of cobalt and copper ALD precursors have largely focused on the deposition of metallic thin films due to their applicability in the microelectronics industry. ALD of cobalt oxide and copper oxide, on the other hand, has received significantly less attention. The contribution of this PhD thesis toward cobalt oxide and copper oxide thin film deposition is four ALD process development studies on these materials. The Co(BTSA)2(THF) + H2O process could be used to deposit CoO films at temperatures of 75-250 °C. However, the films deposited using this precursor combination contained elevated amounts of H, C and Si impurities originating from the BTSA ligands. The amount of impurities increased with increasing deposition temperature, which suggests that Co(BTSA)2(THF) is not an ideal precursor for cobalt oxide film deposition with ALD.
In-situ reaction mechanism studies provided evidence that film growth occurs via a ligand exchange mechanism. The Co(tBu2DAD)2 cobalt precursor was used together with O3 to deposit cobalt oxide films. The optimal deposition temperature for this process was 120 °C, at which polycrystalline and phase-pure Co3O4 thin films were obtained. The formation of mixed-valence Co3O4 films from a Co(II) precursor occurred due to the high oxidative power of O3. The Co3O4 films deposited at 120 °C contained only low amounts of impurities, of which H was the most prominent at approximately 5 at-%. In photoelectrochemical studies, cobalt oxide nanoparticles were discovered to be efficient catalysts for the photoelectrochemical oxygen evolution reaction. The Cu(OAc)2 + H2O process produced crystalline Cu2O thin films at temperatures close to 200 °C. During the process development study, it was found that Cu(OAc)2 is reduced to the volatile copper(I) acetate (CuOAc) when heated to its source temperature under ALD conditions. According to in-situ reaction mechanism studies and post-deposition film characterization, film growth proceeds via a ligand exchange route and results in the release of acetic acid as the reaction by-product. Elemental analysis revealed that the Cu:O ratio of the films is close to the stoichiometric value of 2.0 and that the films contain exceptionally low amounts of impurities, 0.4 at-% H and ≤ 0.2 at-% C. The Cu(dmap)2 copper precursor was used at deposition temperatures of 80-140 °C together with O3. This ALD chemistry produced polycrystalline and phase-pure CuO thin films with relatively low amounts of impurities, ≤ 3.0 at-% H, C and N, at the optimal deposition temperature for this process, 120 °C.
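The sub-nanometer thickness control mentioned above follows directly from self-limiting growth: film thickness scales linearly with the number of precursor cycles. A trivial sketch (the growth-per-cycle value is illustrative, not one of the measured values from these processes):

```python
import math

def cycles_for_thickness(target_nm, gpc_nm):
    """Cycles needed to reach a target ALD film thickness, given the
    growth per cycle (GPC). Each cycle is self-limiting, so thickness
    is simply cycles * GPC."""
    return math.ceil(target_nm / gpc_nm)

# With an assumed GPC of 0.04 nm/cycle, a 50 nm film needs 1250 cycles:
print(cycles_for_thickness(target_nm=50.0, gpc_nm=0.04))  # 1250
```

This digital, per-cycle control is why ALD achieves the thickness accuracy and conformality that the process development studies above rely on.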
  • Räsänen, Matti (Helsingin yliopisto, 2020)
    Understanding the interaction between precipitation and vegetation growth in water-limited ecosystems is vital for the various livelihoods that depend on water resources. Precipitation is the primary driver of vegetation growth in dry ecosystems, while fog deposition is essential for the microclimate in dry coastal ecosystems and cloud forests. The analysis of soil moisture, which integrates the action of climate, soil, and vegetation, is the key to understanding carbon and water relations and the interaction between precipitation and vegetation. This thesis examines the impacts of precipitation variability on carbon and water relations in African savannas and the similarities between rainfall and fog deposition. Ecosystem-scale transpiration was estimated from eddy covariance measurements based on annually fitted water use efficiency and an optimality hypothesis. The soil moisture measurements were analyzed using a hierarchy of soil moisture models with precipitation, NDVI, and potential evapotranspiration (PET) variability. The statistics of fog and rainfall were analyzed using an analogy with self-organized criticality. The annual evapotranspiration (ET) was comparable to the annual precipitation at the grazed savanna grassland. While the annual precipitation was highly variable, the estimated annual transpiration was a nearly constant 55% of ET. Transpiration (T) was reduced only during the drought year, due to grass dieback and regrowth and possibly due to other changes in soil surface properties that enhanced evaporation. The annual net CO2 exchange (NEE) varied widely, ranging from –58 (sink) to 198 (source) g C m^-2 yr^-1. The annual NEE was related to the maximum of the remotely sensed vegetation index (NDVI), and the annual ecosystem respiration was strongly correlated with early-season rainfall amount.
The analysis of measured soil moisture across savannas showed that NDVI and PET adjustments to daily maximum ET are necessary for modeling depth-averaged soil moisture. The soil moisture memory timescale, a rough measure of the time it takes for a soil column to forget its initial soil moisture state, was linearly related to daily mean precipitation intensity at semi-arid savannas. Both rainfall and fog time series showed approximate power-law relations for dry-period and event-size distributions, consistent with the predictions of self-organized criticality. The spectra of the on-off time series of fog and rainfall exhibited an approximate f^-0.8 scaling, but the on-off switching was not entirely independent of the amplitude intermittency in fog and rainfall. The results show the role of short- and long-term variability in precipitation and its consequences for the carbon and water cycle of semi-arid savannas with significant tree cover. These findings can be used to develop minimalist water balance models to understand how vegetation state affects water resources.
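The soil moisture memory timescale mentioned above can be estimated in several ways. One simple sketch, assuming AR(1) dynamics, converts the lag-1 autocorrelation of a daily series into an e-folding time (both the synthetic data and this estimator are my illustration, not the thesis's analysis):

```python
import math
import random

def memory_timescale_days(series, dt_days=1.0):
    """e-folding soil moisture memory from the lag-1 autocorrelation,
    assuming the series behaves like an AR(1) process: tau = -dt/ln(r1)."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    var = sum(v * v for v in x)
    r1 = sum(x[i] * x[i + 1] for i in range(n - 1)) / var
    return -dt_days / math.log(r1)

# Synthetic daily soil moisture: AR(1) decay toward a mean, driven by
# random rainfall-like forcing; true e-folding time is -1/ln(0.9) ~ 9.5 d.
random.seed(1)
s, series = 0.3, []
for _ in range(5000):
    s = 0.9 * s + 0.01 * random.random()
    series.append(s)
print(round(memory_timescale_days(series), 1))
```

The linear relation to mean precipitation intensity reported above concerns how this timescale varies across sites, which a sketch like this can only hint at.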
  • Salmi, Leo (Helsingin yliopisto, 2020)
    Inorganic–organic hybrids represent a class of materials consisting of inorganic and organic components mixed at the molecular level. This offers not only the possibility to combine the material properties of the constituents, but also to discover completely new characteristics. Because of this, hybrid materials have become an important part of materials research. Atomic layer deposition (ALD) is a gas-phase thin film deposition method with the ability to deposit conformal films with good control over film thickness and composition. Furthermore, ALD offers large-area uniformity and excellent step coverage. Molecular layer deposition (MLD), used for depositing organic polymers, is a method derived directly from ALD. The combination of ALD and MLD offers a convenient way of depositing inorganic–organic hybrid thin films for applications such as electronics and optics, where ultimate precision is needed. In this thesis, ALD/MLD was used to deposit hybrid nanolaminates, metal–organic frameworks, and zinc glutarate. Nanolaminates of Ta2O5 and polyimide were deposited using tantalum ethoxide, water, pyromellitic dianhydride, and diaminohexane as the precursors. By layering the materials, leakage currents could be greatly reduced compared to bare Ta2O5 and polyimide. It was also shown that the mechanical properties could be improved by the introduction of the organic layers. MOF-5 and IRMOF-8 thin films were deposited using zinc acetate, 1,4-benzenedicarboxylic acid, and 2,6-naphthalenedicarboxylic acid as the precursors. The deposition process combined ALD/MLD with a two-step post-deposition crystallization in moist air and in an autoclave with dimethylformamide. Despite the need for a liquid-phase crystallization, the conformality and continuity of the films could be preserved. ALD/MLD of zinc glutarate thin films was demonstrated for the first time, using zinc acetate and glutaric acid as the precursors.
The films were crystalline as deposited, with a structure matching that of zinc glutarate. The catalytic activity of the films was demonstrated by copolymerizing propylene oxide and CO2 in the presence of zinc glutarate-coated glass wool and steel mesh.
  • Luomaranta, Anna (Helsingin yliopisto, 2020)
    In northern countries such as Finland, winter climate conditions affect the functionality of society in many ways. Due to climate warming, winter conditions are changing. Changes in snow and ice act as an indicator of the climate conditions in a region. The aim of this thesis is to examine what winters in Finland are like in a changing climate. The main results of this work are based on gridded observations, FMIClimGrid and E-OBS, and CMIP5 global climate model simulations. Using these, the observed snow, temperature, and precipitation conditions in 1961-2014 were analyzed, and the future changes in Baltic Sea ice cover were projected for the ongoing century. In addition, two modeling studies were performed: the first assessed the performance of the ECHAM5 atmospheric general circulation model in simulating the timing of snow melt in spring, and the second studied the ability of the convection-permitting numerical weather prediction model HARMONIE to simulate a sea-effect snowfall case. The results showed that, in Finland, snow depth has decreased throughout the year and the snow season has shortened. Increasing liquid precipitation in winter was one of the main reasons for these changes; in spring, increasing air temperature has also contributed. The annual maximum sea ice extent and sea ice thickness in the Baltic Sea were projected to decrease during the ongoing century. However, the Baltic Sea is unlikely to become totally ice-free during typical winters in the coming decades. When climate models are used to predict future climate conditions, it is essential that they describe the snow cover realistically, since it is an important element of the climate system. The ECHAM5 climate model generally reproduced the timing of snow melt in Northern Eurasia quite well when compared to satellite observations, but regional differences were also found.
The discrepancies turned out to stem from simplifications in the calculation of the model's surface energy budget. The HARMONIE model also simulated a known sea-effect snowfall case reasonably well, and the simulation results improved when radar reflectivities were assimilated into the model. As climate warming proceeds, winter conditions will continue to change. The results of this thesis highlight the importance of continuous monitoring of climate conditions in the northern areas.
  • Lehtinen, Sami (Helsingin yliopisto, 2020)
    This article-based dissertation investigates the evolution of various predator-prey interactions through mathematical modelling. It utilises a 'mechanistic' approach to modelling by deriving ecological population equations from elemental individual interactions. The dissertation demonstrates how this approach can reveal novel and simple explanations for complex ecological and evolutionary phenomena. Among several other topics, this thesis studies the coevolution of predator cannibalism and prey defence mechanisms, such as refuge use and counter-attacks against younger predators. Using the theoretical frameworks of adaptive dynamics and bifurcation analysis, we find several intriguing ecological and evolutionary outcomes. When cannibalism and prey counter-attacks are present, the ecological dynamics tend to increase in complexity, including the possibility of alternative ecological states and abrupt regime shifts between them. The thesis demonstrates that the evolution of these behavioural traits can continue indefinitely in long-term evolutionary cycles, or the evolution can bring the predator species to extinction. Notably, the predator's own evolution can lead to its extinction, a phenomenon known as 'evolutionary suicide'. Investigation of coevolution also allows us to explain the emergence of cannibalism: it may emerge as an evolutionary response to prey adaptation. In addition to these more general topics, this thesis also demonstrates the merits of mechanistic model derivation in specific applications. The thesis includes the derivation and analysis of the first mathematical cost/benefit model for prey capture in the carnivorous plant Venus flytrap (Dionaea muscipula). By fitting the model to the available data about the plant, we gain new information about several of its ecological aspects, such as the frequency of prey captures and the optimal feeding behaviour.
In particular, we provide theoretical evidence for Charles Darwin's long-standing hypothesis about prey selection in the Venus flytrap, which states that the plant deliberately allows all small prey to escape and captures only larger prey. In the light of these results, this thesis argues that the mechanistic approach is a strong tool for revealing the core of many phenomena, whose scales may range from individual behaviour to drastic ecological shifts and species extinction.
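The kind of ecological model underlying such analyses can be illustrated with a minimal sketch (illustrative only, not the thesis's actual equations): a Lotka-Volterra-type system in which a fraction u of the prey population uses a refuge and is safe from predation. All parameter values below are hypothetical.

```python
# Illustrative sketch, not the thesis model: Lotka-Volterra-type dynamics
# in which a fraction u of the prey population hides in a refuge and is
# safe from predation. All parameter values are hypothetical.

def simulate(u, steps=200_000, dt=0.001):
    """Euler-integrate prey (N) and predator (P) densities for refuge use u."""
    r, K, a, c, m = 1.0, 10.0, 1.0, 0.5, 0.4  # growth, capacity, attack, conversion, mortality
    N, P = 5.0, 1.0
    for _ in range(steps):
        exposed = (1.0 - u) * N               # only non-hiding prey can be attacked
        dN = r * N * (1.0 - N / K) - a * exposed * P
        dP = c * a * exposed * P - m * P
        N = max(N + dt * dN, 1e-12)
        P = max(P + dt * dP, 1e-12)
    return N, P

N0, P0 = simulate(u=0.0)  # no refuge: prey equilibrium near m/(c*a) = 0.8
N1, P1 = simulate(u=0.5)  # half the prey hidden: prey equilibrium roughly doubles
```

In the adaptive-dynamics framework, the evolution of the trait u would then be followed by computing the invasion fitness of rare mutant strategies against the resident equilibrium found by such a simulation.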
  • Siljamo, Niilo (Helsingin yliopisto, 2020)
    Snow cover plays a significant role in the weather and climate system, in ecosystems, and in many human activities, such as traffic. Weather station snow observations (snow depth and state of the ground) do not provide high-resolution continental or global snow coverage data. Satellite observations complement the in situ observations from weather stations. Geostationary weather satellites provide observations at high temporal resolution, but their spatial resolution is low, especially in polar regions. Polar-orbiting weather satellites provide better spatial resolution in polar regions but with limited temporal resolution. The best detection resolution is provided by the optical and infrared radiometers onboard weather satellites. Snow cover in itself is highly variable. In addition, the variability of surface properties (such as vegetation, water bodies, and topography) and changing light conditions make satellite snow detection challenging. Much of this variability occurs at subpixel scales, and this uncertainty creates additional challenges for the development of snow detection methods. Thus, an empirical approach may be the most practical option when developing algorithms for automatic snow detection. In this work, which is part of the EUMETSAT-funded H SAF project, two new empirically developed snow extent products for the EUMETSAT weather satellites are presented. The geostationary MSG/SEVIRI H32 snow product has been in operational production since 2008, and the polar-orbiting Metop/AVHRR H32 product has been available since 2015. In addition, validation results based on weather station snow observations between 2015 and 2019 are presented. The results show that both products achieve the requirements set by the H SAF.
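As an illustration of what a threshold-based snow test on radiometer channels looks like, the sketch below applies a MODIS-heritage NDSI rule to a single cloud-free pixel. This is a generic textbook example: the actual H SAF algorithms are empirical rule sets of their own, and the channel names and thresholds here are assumptions.

```python
# Generic illustration of threshold-based snow detection from radiometer
# channels; NOT the H SAF algorithm. Thresholds are MODIS-heritage values
# chosen for illustration only.

def ndsi(vis_refl, swir_refl):
    """Normalized Difference Snow Index: snow is bright at visible
    wavelengths but strongly absorbing near 1.6 um."""
    return (vis_refl - swir_refl) / (vis_refl + swir_refl)

def classify_pixel(vis_refl, swir_refl, t11_kelvin,
                   ndsi_threshold=0.4, max_temp=283.0):
    """Label a cloud-free pixel as 'snow' or 'snow-free'."""
    if t11_kelvin > max_temp:          # surface too warm for snow
        return "snow-free"
    if ndsi(vis_refl, swir_refl) >= ndsi_threshold:
        return "snow"
    return "snow-free"

classify_pixel(0.8, 0.1, 265.0)   # bright, cold pixel -> 'snow'
classify_pixel(0.2, 0.15, 290.0)  # warm vegetated pixel -> 'snow-free'
```

A real product would add cloud masking, sun-elevation checks, and surface-type-dependent rules, which is where the empirical tuning against weather-station observations enters.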
  • Herranen, Joonas (Helsingin yliopisto, 2020)
    Radiative torques arise from the interaction between particles and electromagnetic radiation, a process more commonly referred to as electromagnetic scattering. Radiative effects can dominate the behavior of small particles, such as cosmic dust. Radiative torques on small irregular shapes have been found to be a key candidate for aligning spinning cosmic dust grains, which in turn polarizes light passing through dust clouds and light emitted by the dust, an effect first observed over 70 years ago. Numerical methods of electrodynamics have evolved with the available computing power to become the main tool for understanding the dynamics due to the scattering process, or scattering dynamics, which is the focus of this thesis. Efficient analysis of scattering dynamics involves contemporary numerical methods, which provide numerically exact solutions of electromagnetic scattering by irregular particles. In this thesis, an overview of electromagnetic scattering and scattering dynamics is presented. In addition, the applications of scadyn, a software package developed for the solution and analysis of scattering dynamics, are discussed. The main applications include radiative torque alignment and optical tweezers.
  • Nieminen, Juuso (Helsingin yliopisto, 2020)
    This doctoral thesis adds to the theoretical understanding of the interplay of agency and power in self-assessment in the context of undergraduate mathematics education. This is achieved by utilising the Foucauldian notion of subject positioning, referring to the positions that assessment constructs for students. The thesis addresses summative self-assessment (SSA), which involves the element of self-grading, and the disruptive nature of such a practice. The four substudies of this thesis investigate the reflective space that SSA opens for students to renegotiate their positioning as “the assessee” in the examination-driven context of undergraduate mathematics. This doctoral thesis was conducted within the Digital Self-Assessment (DISA) project, in which the SSA model was created for large undergraduate mathematics courses. SSA is an assessment model that includes transparent learning objectives, various forms of feedback regarding those objectives, and formative self-assessment practices. At the end of the process, students decide their own grade. In this study, the SSA model is examined from the perspective of students. This empirical doctoral thesis consists of four substudies and draws on theoretical and methodological triangulation. Studies I, III and IV were based on an experimental study in which the participants in an undergraduate mathematics course were randomly divided into two groups: half of the students attended a course exam and half graded themselves; both groups took part in a formative self-assessment process. After the course, 41 students were interviewed (26 from the summative and 15 from the formative self-assessment group). Furthermore, a survey study (N = 299) was conducted. The data for Study II was collected through a survey in another adaptation of the summative self-assessment model (N = 113).
Studies I and II drew on quantitative methodology to examine the quality of studying within the SSA model to shed light on the positioning processes on a broader scale. Study I drew on latent profile analysis to investigate student subgroups in terms of deep and surface approaches to learning. Four profiles were identified and compared between the formative and summative self-assessment groups. Study II, leaning on cluster analysis, examined student subgroups after another course implementation of the SSA model. Both studies connected SSA with a deep approach to learning, while Study I also identified a connection with a higher reported level of self-efficacy. Study III drew on the concept of student agency, aiming to understand the affordances that the self-assessment model offers for agentic learning. The findings of Study III implied that the summative self-assessment model was connected with future-driven agentic behavior. Study IV introduced three different theoretical frameworks for power to understand the socio-cultural nature of SSA as a political practice. As Study III examined pedagogical opportunities for agentic learning, Study IV sought to critically examine whether students could make use of these opportunities in spite of the complex power relations. Both studies drew on interview data. Finally, Studies I-IV were reinterpreted and synthesised through a discursive-deconstructive reading. What was deconstructed was students’ positioning as “the assessee” and whether, and how, SSA disrupted this position. Overall, this thesis raises concerns about non-agentic positions that mathematics assessment tends to produce, calling for teachers and researchers to engage with disruptive practices.
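As a loose analogy to the subgroup analyses in Studies I and II (which used latent profile analysis and cluster analysis, not the toy method below), the sketch clusters synthetic "deep approach" and "surface approach" scores with a plain two-cluster k-means. All scores and parameters are made up for illustration.

```python
import random

# Toy analogy (not the thesis's latent profile analysis): cluster synthetic
# (deep approach, surface approach) scores on a 1-5 scale into two profiles.

def two_means(points, iters=20):
    """Plain two-cluster k-means on 2-D points; returns the final centres."""
    centers = [points[0], points[-1]]  # deterministic, well-separated start
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d0 = (p[0] - centers[0][0]) ** 2 + (p[1] - centers[0][1]) ** 2
            d1 = (p[0] - centers[1][0]) ** 2 + (p[1] - centers[1][1]) ** 2
            groups[0 if d0 <= d1 else 1].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else c
                   for g, c in zip(groups, centers)]
    return centers

rng = random.Random(1)
deep_profile    = [(rng.gauss(4.2, 0.3), rng.gauss(1.8, 0.3)) for _ in range(60)]
surface_profile = [(rng.gauss(2.0, 0.3), rng.gauss(4.0, 0.3)) for _ in range(60)]
centers = two_means(deep_profile + surface_profile)
```

Latent profile analysis differs in fitting a probabilistic mixture model and choosing the number of profiles by fit indices, but the underlying idea of recovering student subgroups from score patterns is the same.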
  • Väisänen, Timo (Helsingin yliopisto, 2020)
    Physical characterization of planetary objects would be accelerated by the capability of simulating light scattering from an arbitrary dense multiparticle medium. Even though exact methods that solve the Maxwell equations exist, such as the superposition T-matrix method (STMM), they are too compute-intensive to be applied to large macroscale objects such as an asteroid or a planetary surface. In this thesis, radiative transfer (RT) based tools are developed, studied, and offered as an approximation for simulating light scattering from dense particulate media. The RT theory has been derived for sparse random media, and it fails when applied to dense random media. In order to extend its applicability to dense random media, we have been working with an incoherent volume-element treatment for the RT called radiative transfer with reciprocal transactions (R²T²). Instead of using a single particle as the diffuse scatterer in the RT, the properties of incoherent volume elements are used. These properties are computed from the incoherent electric fields, extracted by subtracting the coherent part from the free-space scattered electric fields. The R²T² is validated by simulating various dense random media for which the STMM is still applicable. In the geometric optics regime, the generalized Snel's law and Fresnel matrices can be used to simulate light scattering from large objects. For dense particulate media the computation can be slow, so diffuse scattering is studied as a tool to speed up the computation. Previous studies have included surface roughness with approximate functions, but here a layer of particles is added on top of the diffusely scattering medium. We replace the classical extinction mean free path with a more informative extinction distance distribution that is gathered numerically.
The comparison between the RT model, our model, and the "ground truth", in which only the generalized Snel's law and Fresnel matrices are used, reveals that our model works better than the RT model. Even though the computational methods are validated against each other, they need to be validated experimentally against controlled samples with well-known physical properties in order to be a reliable source of information. For the validation of the R²T², we computationally simulated a well-controlled sample whose light-scattering characteristics have been measured. Although the phase function of the simulation and the measurement match well, the other scattering characteristics reveal small discrepancies between the model and the measurements. Still, the various computational validations and this experimental validation show that the R²T² works well and can be used in the near future as a characterization tool.
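The replacement of the exponential free-path law by a numerically gathered extinction distance distribution can be sketched as follows. This is illustrative only: the table values and interpolation scheme are assumptions, not the thesis's implementation, where such a table would be gathered by ray tracing in the dense medium.

```python
import bisect
import math
import random

def sample_exponential(mfp, rng):
    """Classical RT step: exponential with extinction mean free path mfp."""
    return -mfp * math.log(1.0 - rng.random())

# Hypothetical numerically gathered table (distance, cumulative probability).
distances = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
cdf       = [0.0, 0.30, 0.55, 0.80, 0.95, 1.0]

def sample_tabulated(rng):
    """Inverse-CDF sampling with linear interpolation between table nodes."""
    u = rng.random()
    i = max(1, bisect.bisect_left(cdf, u))
    frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return distances[i - 1] + frac * (distances[i] - distances[i - 1])

rng = random.Random(7)
steps = [sample_tabulated(rng) for _ in range(10_000)]
mean_step = sum(steps) / len(steps)
```

In a Monte Carlo RT solver, `sample_tabulated` would simply replace `sample_exponential` when drawing the distance to the next scattering event.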
  • Norberg, Johannes (Helsingin yliopisto, 2020)
    The ionosphere is the partly ionised layer of Earth's atmosphere caused by solar radiation and particle precipitation. The ionisation can start at about 60 km and extend up to 1000 km in altitude. Often the interest in the ionosphere lies in the quantity and distribution of the free electrons. The electron density is related to the ionospheric refractive index, and thus sufficiently high densities affect electromagnetic waves propagating in the ionised medium. This is the reason HF radio signals can reflect from the ionosphere, allowing broadcasts over the horizon, but it is also an error source in satellite positioning systems. The ionospheric electron density can be studied, for example, with dedicated radars and satellite in situ measurements. These instruments can provide very precise observations, but typically only in the vicinity of the instrument. To make observations on regional and global scales, indirect satellite measurements and imaging methods are required, due to the volume of the domain and the price of the aforementioned instruments. Mathematically, ionospheric imaging suffers from two main complications. First, due to the very sparse and limited measurement geometry between satellites and receivers, it is an ill-posed inverse problem: the measurements do not contain enough information to reconstruct the electron density, and thus additional information is required in some form. Second, to obtain sufficient resolution, the resulting numerical model can become computationally infeasible. In this thesis, the Bayesian statistical background for ionospheric imaging is presented. The Bayesian approach provides a natural way to account for different sources of information with corresponding uncertainties and to update the estimated ionospheric state as new information becomes available. Most importantly, Gaussian Markov random field (GMRF) priors are introduced for the application of ionospheric imaging.
The GMRF approach makes the Bayesian approach computationally feasible through sparse prior precision matrices. The Bayesian method is indeed practicable, and many of the widely used methods in ionospheric imaging can be traced back to the Bayesian approach. Unfortunately, the approach cannot escape the inherent lack of information provided by the measurement set-up, and, similarly to other approaches, it is highly dependent on the additional subjective information required to solve the problem. It is shown here that the use of GMRF priors provides a genuine improvement for the task, as this subjective information can be understood and described probabilistically in a meaningful and physically interpretable way while keeping the computational costs low.
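A one-dimensional toy version of the GMRF idea (illustrative only, not the thesis implementation) shows the mechanics: a handful of integrated "ray" measurements alone cannot determine an electron-density profile, but a smoothness prior with a sparse precision matrix makes the posterior mean well defined and cheap to compute. All sizes and parameter values are made up.

```python
import numpy as np

# Toy 1-D analogue of GMRF-regularized ionospheric imaging (illustrative).
n = 50
x_true = np.exp(-0.5 * ((np.arange(n) - 25.0) / 6.0) ** 2)  # bump-like profile

# Each row of A integrates x over one "ray" (here: a contiguous window),
# giving far fewer measurements than unknowns -> ill-posed problem.
rng = np.random.default_rng(0)
A = np.zeros((8, n))
for k, start in enumerate(range(0, 40, 5)):
    A[k, start:start + 10] = 1.0
sigma = 0.05
y = A @ x_true + sigma * rng.normal(size=8)

# GMRF prior penalizing first differences: Q = lam * D^T D is tridiagonal.
D = np.diff(np.eye(n), axis=0)
lam = 50.0
Q = lam * D.T @ D

# Posterior precision and mean; in real imaging Q and A are sparse, which
# keeps this linear solve feasible even for large grids.
Q_post = A.T @ A / sigma**2 + Q
x_map = np.linalg.solve(Q_post, A.T @ y / sigma**2)
```

The prior precision Q encodes the subjective smoothness assumption explicitly and probabilistically, which is the interpretability advantage discussed above.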
  • Hella, Olli (Helsingin yliopisto, 2020)
    The central limit theorem is one of the most fundamental results in probability theory. It has also been actively studied in the field of dynamical systems. In the first article of this thesis, an adaptation of Stein's method, introduced by Charles Stein in 1972, is presented. Our adaptation gives new correlation-decay conditions for both univariate and multivariate observables under which the central limit theorem holds for time-independent dynamical systems. When these conditions are satisfied, this adaptation also yields estimates for convergence rates without much extra work. We also present a scheme for checking these conditions and apply it to two example models. In the second article, the scope of this adaptation is extended to time-dependent dynamical systems. The applicability of the method is shown for a model of time-dependent expanding circle maps and also for quasistatic dynamical systems, a new research area introduced recently by Dobbs and Stenlund. The third article considers a model of time-dependent compositions of Pomeau-Manneville-type intermittent maps. In this model we also establish central limit theorems with a rate of convergence. This article uses the results of the second article and the earlier work of Juho Leppänen on functional correlation bounds for Pomeau-Manneville maps with time-dependent parameters. Quasistatic systems are studied further, and we present general conditions under which a multivariate CLT for quasistatic systems holds. In the fourth article we study random compositions of transformations. We prove a theorem on the almost sure convergence of the variance of normalized and fiberwise centered Birkhoff sums. This, in combination with other results, can be used to establish quenched central limit theorems with a rate of convergence for random dynamical systems. Two examples that use the theorem of the fourth article are proved in the second and third articles.
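The flavour of such results can be illustrated numerically with a generic chaotic map (not one of the models treated in the thesis): normalized Birkhoff sums of a centred observable over the logistic map are approximately Gaussian. For this particular map and observable, the successive values of the observable are uncorrelated under the invariant measure, so the asymptotic variance is exactly 1/8.

```python
import random

# Generic numerical illustration of a dynamical CLT (not a thesis model):
# Birkhoff sums of f(x) = x - 1/2 over the chaotic logistic map
# x -> 4x(1-x), normalized by sqrt(n), behave like centred Gaussian draws.

def normalized_birkhoff_sum(x0, n):
    """S_n(f) / sqrt(n) along the orbit of x0 under the logistic map."""
    x, s = x0, 0.0
    for _ in range(n):
        s += x - 0.5                      # centred observable
        x = 4.0 * x * (1.0 - x)
    return s / n ** 0.5

rng = random.Random(42)
samples = [normalized_birkhoff_sum(rng.random(), 2000) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
# The sample variance should be close to the asymptotic variance 1/8.
```

Stein's method goes further than such simulations: it turns correlation-decay bounds into explicit rates of convergence toward the Gaussian limit.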
  • Peräkylä, Otso (Helsingin yliopisto, 2020)
    Concurrently with greenhouse gases, humankind has been emitting aerosol particles and their precursors into the atmosphere. These solid or liquid particles, tiny enough to float in the air, cause adverse health effects as well as a net cooling effect on the Earth's climate, counteracting part of the warming caused by greenhouse gases. The magnitude of this effect is uncertain, leading to uncertainties in projections of future climate. One of the main causes of the uncertainty is our limited knowledge of natural, pre-industrial aerosol particles. A major source of aerosol particles is the oxidation of volatile organic compounds (VOCs). VOCs are emitted into the atmosphere in large quantities, with biogenic emissions dominating globally over anthropogenic ones. In the atmosphere, VOCs such as monoterpenes, the main group of VOCs emitted by boreal forests, undergo oxidation reactions, producing vapours of lower volatility. Some of the products condense on pre-existing aerosol particles, or may even form new particles altogether. The conversion of monoterpenes into condensible vapours is the main topic of this thesis. In this thesis, I aimed to 1) determine which oxidants are important for monoterpene oxidation in the context of new particle formation, 2) quantify the volatilities of a group of VOC oxidation products, highly oxygenated organic molecules (HOMs), and 3) develop new data analysis methods to gain new insights into the formation of condensible vapours. To address these aims, I utilized mass spectrometric methods for measuring VOCs and their oxidation products, in both field and laboratory conditions. First, we found that oxidation of monoterpenes by the hydroxyl radical was likely very important for the growth of newly formed particles. Our results also suggest that multi-generation oxidation reactions are important.
Second, we found that monoterpene-derived HOMs are predominantly of low volatility, although semi-volatile behaviour was also observed for HOMs containing eight or fewer oxygen atoms. Our estimates for the volatilities lie between earlier parametrizations and recent computations. Finally, we developed a new data analysis method for mass spectrometric measurements, based on a novel factorization technique. Our method efficiently uses the high-resolution information in the measured spectra, avoiding many of the time-consuming and subjective procedures commonly used. It also allowed us to separate new HOM formation processes that could not be found using traditional methods.
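The idea behind such factorization methods can be sketched with textbook nonnegative matrix factorization (the thesis develops its own technique; the Lee-Seung updates and synthetic data below are only an illustration): a nonnegative data matrix X (time x m/z) is approximated as W @ H, where the rows of H play the role of factor "spectra" and the columns of W their time behaviour.

```python
import numpy as np

# Illustrative textbook NMF, not the thesis's factorization method.
rng = np.random.default_rng(0)
n_times, n_mz, n_factors = 40, 30, 2

# Synthetic data built from two known nonnegative factors plus small noise.
W_true = rng.random((n_times, n_factors))
H_true = rng.random((n_factors, n_mz))
X = W_true @ H_true + 0.01 * rng.random((n_times, n_mz))

# Lee-Seung multiplicative updates for the Frobenius-norm objective;
# they preserve nonnegativity of W and H at every step.
W = rng.random((n_times, n_factors)) + 0.1
H = rng.random((n_factors, n_mz)) + 0.1
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In practice, methods of this family differ in how they weight measurement errors and, as in this thesis, in how they exploit the high-resolution structure of the spectra rather than pre-binned peaks.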
