Faculty of Science


Recent Submissions

  • Leppänen, Leena (Helsingin yliopisto, 2019)
    Information on snow water equivalent (SWE) of seasonal snow is used for various purposes, including long-term climate monitoring and river discharge forecasting. Global monitoring of SWE is made feasible through remote sensing. Currently, passive microwave observations are utilized for SWE retrievals. The main challenges in the interpretation of microwave observations include the spatial variability of snow characteristics and the inaccurate characterization of snow microstructure in retrieval algorithms. Even a minor variability in snow microstructure has a notable impact on microwave emission from the snowpack. This thesis work aims to improve snow microstructure modelling and measurement methods, and to improve understanding of the influence of snow microstructure on passive microwave observations, in order to enable more accurate SWE estimation from remote sensing observations. The thesis work applies two types of models: physical snow models and radiative transfer models that simulate microwave emission. The physical snow models use meteorological driving data to simulate physical snow characteristics, such as SWE and snow microstructure. These models are used for different purposes, such as hydrological simulations and avalanche forecasting. Microwave emission models, on the other hand, use physical snow characteristics to predict microwave emission from a snowpack. Microwave emission models are applied, for example, in the interpretation of spaceborne passive microwave remote sensing observations. In this study, physical snow model simulations and microwave emission model simulations are compared with field observations to investigate problems in characterizing snow for microwave emission models. An extensive set of manual field measurements of snow characteristics is used for the comparisons. The measurements are collected from taiga snow in Sodankylä, northern Finland.
The representativeness of the measurements is assessed by investigating the spatial and temporal variability of snow characteristics. The work includes studies on microwave emission modelling from natural snowpacks and from excavated snow slabs. Radiometric observations of microwave emission from natural snowpacks are compared with simulations from three microwave emission models coupled with three physical snow models. Additionally, homogeneous snow samples are excavated from the natural snowpack during the Arctic Snow Microstructure Experiment, and their snow characteristics and microwave emission characteristics are measured with an experimental set-up developed for this study. Predictions of two microwave emission models are compared with the radiometric observations of the collected snow samples. The results indicate that none of the model configurations can accurately simulate the microwave emission from the natural snowpack or the snow samples. The results also suggest that the characterization of microstructure in the applied microwave emission models is not adequate.
  • Herranen, Jaana (Helsingin yliopisto, 2019)
    As the world is constantly changing and there are concerns over a sustainable future, educating teachers for sustainability is crucial, as education is one of the most effective means to improve sustainability. Science, such as chemistry, plays a significant role in addressing sustainability issues, because chemistry can contribute both to solving and to causing the challenges through the knowledge and products it produces. Science and sustainability are inherently connected, as are the discussions over their education. In both of these fields, discussions over the role of the students have emerged. In science education there has been a growing interest in educating scientifically literate students who can use scientific thinking in their own lives and in society. This requires active participation of the students in their own learning. Sustainability education has been advocating transformative learning so that students could take action in their own lives towards sustainability. Moreover, teacher education could be developed in a direction in which student teachers are given possibilities to make decisions concerning the learning and teaching methods used and the contents chosen, and to develop their action-competence through active participation. However, in order to reach sustainability, all citizens should be considered as learners, not only students in schools and universities. Discussion over the learners' roles has led to the use of terms such as learner-centred and learner-driven learning. What these terms actually entail is, however, not always clear. In science education, learner-driven approaches are usually practiced in the form of open inquiry – an inquiry that starts with the students' questions. Addressing and using the students' questions is important in science education, but also in sustainability education, to activate learners to think and act for sustainability.
The aim of this thesis is to understand the possibilities and challenges of learner-centred and learner-driven science teacher education for sustainability. The research questions are: i) Which possibilities do learner-centred and learner-driven science teacher education for sustainability offer? and ii) What are the challenges for learner-centred and learner-driven science teacher education for sustainability? For this purpose, two types of approaches are studied: inquiry-based education as a typical approach in science teacher education, from the point of view of learner-centred and learner-driven inquiry, and sustainability education as a part of science teacher education for sustainability, from the viewpoint of learner-centred and learner-driven sustainability education. This is a qualitative multi-method research with one systematic review and three case studies applying grounded theory and discourse analysis. The thesis consists of four articles: i) Inquiry as a context-based practice – A case study of pre-service teachers' beliefs and implementation of inquiry in context-based science teaching, ii) Student-question-based inquiry in science education, iii) From learner-centred to learner-driven sustainability education, and iv) Challenges and tensions in collaborative planning of a student-led course on sustainability education. Data for the studies was derived from three sources, including higher education student groups and peer-reviewed articles. Study I utilised data from five student teachers who participated in a course "inquiry-based chemistry teaching" in 2015. Their beliefs about inquiry were studied by interviewing them, and their implementations of inquiry were studied from their reports. Data in study II consisted of 30 articles reviewed using systematic review.
In studies III and IV, the research data consisted of the planning process of higher education students (student teachers and students interested in teaching) who planned and ran a course "sustainable development in education" in 2015. Their planning meetings and two semi-structured interviews were analysed using discourse analysis and grounded theory. As a result, understanding of the differences between learner-centred and learner-driven sustainability education was obtained. This thesis reveals that learner-driven and learner-centred education are different constructs, especially with respect to the learners' roles. Student-led planning of sustainability education was found to be challenging, as the students had to discuss several interrelated issues on sustainability and sustainability education, as well as their own roles and ways to work as a group. However, the challenges in learner-driven approaches can sometimes be viewed as part of the process. In addition, possibilities for learner-centred and learner-driven practices were revealed, such as using students' questions in inquiries and context-based inquiry as a humanistic approach. For science education, a student-question-based inquiry model was created, which the teacher can use to support students in their question asking. The study also revealed challenges related to the ownership of students' questions. The results from this thesis are relevant when planning teacher education for sustainability. This thesis points out that higher education in particular has the potential to involve students more in teaching by promoting action-competence among students through learner-driven education. Science teacher education could focus more on using learner-centred and learner-driven approaches, because the studied higher education students could plan and carry out teaching that mirrors central aspects of science and sustainability education.
Moreover, in order to be able to use learner-driven approaches, there is a need to use extra-situational knowledge, to improve students’ ownership of their own questions, to redefine expertise, and to work with non-predefined goals and with the whole community.
  • Sarnela, Nina (Helsingin yliopisto, 2019)
    Atmospheric aerosols are small liquid or solid particles suspended in the air. These small particles have significant effects on our climate and health. Approximately half of the particles that grow to cloud condensation nuclei size are primary particles emitted directly into the atmosphere, whereas the other half are secondary particles formed in the atmosphere. In new particle formation, molecular clusters form from atmospheric low-volatility vapors by condensation and/or chemical reactions. Atmospheric oxidation is a key phenomenon enhancing atmospheric particle formation, since oxidized compounds condense more readily due to their lower vapor pressure. So far, two oxidation processes have been identified as relevant for new particle formation: the oxidation of sulfur dioxide to sulfuric acid and the oxidation of volatile organic compounds to highly oxygenated compounds. The most significant atmospheric oxidants have previously been thought to be ozone, the hydroxyl radical and the nitrate radical. Recently, the importance of stabilized Criegee intermediates in atmospheric oxidation has been brought into the discussion. In this thesis, we used a Chemical Ionization Atmospheric Pressure interface Time-of-Flight mass spectrometer together with different particle measurements in order to widen the understanding of the first steps of new particle formation. We also developed new mass spectrometric measurement techniques to fill gaps in our current methods. We developed an indirect method to measure non-OH oxidants of sulfur dioxide, to better understand the role of stabilized Criegee intermediates and other non-OH oxidants of sulfur dioxide in sulfuric acid formation. We also developed a new technique to determine the concentration of ambient dimethylamine at the sub-pptV level. We used both of these new techniques to measure ambient concentrations in the boreal forest at the SMEAR II station (Station for Measuring Ecosystem-Atmosphere Relations II, Hyytiälä, Finland).
Furthermore, we measured new particle formation in different environments and in a chamber study and tried to identify the condensing vapors. We studied the ozonolysis of α-pinene, the most abundant monoterpene in the atmosphere, in controlled chamber measurements in order to follow the formation of highly oxygenated organics and the oxidation of sulfur dioxide purely by stabilized Criegee intermediates, and to compare the results with kinetic model results. We studied new particle formation near an oil refinery and found that a significant fraction of the growth during new particle formation events was due to sulfuric acid condensation. In our studies on the Atlantic coast, we identified the molecular steps involved in new particle formation in an iodine-rich environment and could follow the growth of molecular clusters by the subsequent addition of iodic acid molecules. We also carried out field measurements at Arctic and Antarctic sites and showed that the occurrence of high iodic acid concentrations is not limited to coastal areas with macroalgae beds. Keywords: mass spectrometry, atmospheric aerosols, low-volatility vapors, ozonolysis, new particle formation
  • Saikko, Paul (Helsingin yliopisto, 2019)
    Computationally hard optimization problems are commonplace not only in theory but also in practice in many real-world domains. Even determining whether a solution exists can be NP-complete or harder. Good, ideally globally optimal, solutions to instances of such problems can save money, time, or other resources. We focus on a particular generic framework for solving constraint optimization problems, the so-called implicit hitting set (IHS) approach. The approach is based on a theory of duality between solutions and sets of mutually conflicting constraints underlying a problem. Recent years have seen a number of new instantiations of the IHS approach for various problems and constraint languages. As the main contributions, we present novel instantiations of this generic algorithmic approach to four different NP-hard problem domains: maximum satisfiability (MaxSAT), learning optimal causal graphs, propositional abduction, and answer set programming (ASP). For MaxSAT, we build on an existing IHS algorithm with a fresh implementation and new methods for integrating preprocessing. We study a specific application of this IHS approach to MaxSAT for learning optimal causal graphs. In particular we develop a number of domain-specific search techniques to specialize the IHS algorithm for the problem. Furthermore, we consider two optimization settings where the corresponding decision problem is beyond NP, in these cases Σᴾ₂-hard. In the first, we compute optimal explanations for propositional abduction problems. In the second, we solve optimization problems expressed as answer set programs with disjunctive rules. For each problem domain, we empirically evaluate the resulting algorithm and contribute an open-source implementation. These implementations improve or complement the state of the art in their respective domains.
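The implicit hitting set loop described above can be sketched in a few lines. The following is a minimal illustration, not the thesis implementation: it uses brute-force subroutines for the satisfiability check and the minimum hitting set (a real IHS solver delegates these to a SAT solver and an integer programming solver), and it handles unweighted soft clauses only. Clauses are lists of non-zero integers in the usual DIMACS convention.

```python
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try every assignment (illustration only)."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def min_hitting_set(cores, universe):
    """Smallest set of soft-clause indices intersecting every core (brute force)."""
    for k in range(len(universe) + 1):
        for hs in combinations(universe, k):
            if all(set(hs) & core for core in cores):
                return set(hs)

def ihs_maxsat(hard, soft, n_vars):
    """Implicit hitting set loop for unweighted MaxSAT.

    Returns the minimum number of soft clauses that must be falsified."""
    cores = []
    while True:
        hs = min_hitting_set(cores, range(len(soft)))
        kept = [c for i, c in enumerate(soft) if i not in hs]
        if satisfiable(hard + kept, n_vars):
            return len(hs)  # the hitting set is an optimal removal set
        # Otherwise extract a core: a minimal unsatisfiable subset of the
        # kept soft clauses (deletion-based shrinking; a real solver would
        # obtain the core directly from the SAT call).
        core = set(i for i in range(len(soft)) if i not in hs)
        for i in sorted(core):
            if not satisfiable(hard + [soft[j] for j in core - {i}], n_vars):
                core.discard(i)
        cores.append(core)
```

Each iteration either terminates with an optimal solution or adds a new core, so the loop makes progress; the duality between cores and their hitting sets is the theory the abstract refers to.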
  • Pönni, Arttu (Helsingin yliopisto, 2019)
    This thesis consists of four research papers and an introduction covering the most important concepts appearing in the papers. The papers deal with applications of gauge/gravity dualities in the study of various physical quantities and systems. Gauge/gravity dualities are equivalences between certain quantum field theories and classical theories of gravity. These dualities can be used as computational tools in a wide range of applications across different fields of physics, and as such they have garnered much attention in the last two decades. The great promise of these new tools is the ability to tackle difficult problems in strongly interacting quantum field theories by translating them to problems in classical gravity, where progress is much easier to make. Quantum information theory studies the information contained in quantum systems. Entanglement is the fundamental property of quantum mechanics that sets it apart from classical theories of physics. Entanglement is commonly quantified by entanglement entropy, a quantity which is difficult to compute in interacting quantum field theories. Gauge/gravity dualities provide a practical way of computing the entanglement entropy via the Ryu-Takayanagi formula. The primary focus of this thesis is to use this formula for computing various entanglement measures in strongly interacting quantum field theories via their gravity duals. The introductory part of this thesis presents the quantum information theory concepts that have been important in our research. When applicable, quantities of interest are first introduced in the classical setting in order to build intuition about their behaviour. Quantum properties of entanglement measures are discussed in detail, along with their holographic counterparts, and remarks are made concerning their applications.
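For reference, the Ryu-Takayanagi formula mentioned above computes the entanglement entropy of a boundary region A from the area of a minimal surface in the dual bulk geometry:

```latex
S_A = \min_{\gamma_A} \frac{\operatorname{Area}(\gamma_A)}{4 G_N}
```

where the minimum is taken over bulk surfaces γ_A homologous to A and anchored on its boundary, and G_N is the bulk Newton constant. The formula generalizes the Bekenstein-Hawking black hole entropy and reduces a hard field-theory entanglement computation to a geometric minimization problem.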
  • Santala, Eero (Helsingin yliopisto, 2020)
    Nanostructures are structures where at least one dimension is in the nanoscale, which typically ranges from 1 to 100 nm. A 1D nanostructure is an object where two dimensions are in the nanometer scale and one dimension is larger; examples include carbon nanotubes and electrospun fibers. Due to their very small size, nanostructured materials have different properties than they have in bulk form; for example, chemical reactivity increases as the size becomes smaller. Electrospinning is a very simple but versatile and scalable method for preparing micro- and nanosized fibers. In an electrospinning process, an electrical charge is used to spin very fine fibers from a polymer solution or melt. By changing electrospinning parameters, for example the voltage and the spinneret-collector distance, fibers of different diameters can be obtained. With different electrospinning setups it is also possible to prepare hollow fibers, and even macroscopic objects with fiber walls can be obtained. This work concentrated on A) constructing different electrospinning setups and verifying their operation by electrospinning various materials, and B) preparing 1D nanostructures, such as inorganic nanofibers directly by electrospinning, and nanotubes by combining electrospinning and atomic layer deposition (ALD) in the so-called Tubes by Fiber Template (TUFT) process. The electrospinning setup was constructed successfully, and its operation was verified. Several materials were electrospun. Polymers (PVP, PVA, PVAc, PEO, PMMA, PVB and chitosan) were electrospun directly from polymer/solvent solutions, and ceramic materials like TiO2, BaTiO3, SnO2, CuO, IrO2, ZnO, Fe2O3, NiFe2O4, CoFe2O4, SiO2 and Al2O3 were electrospun from polymer solutions containing the corresponding metal precursor(s). In the case of the ceramic fibers, the electrospinning was followed by calcination to remove the polymer part of the fibers.
Metallic fibers were obtained by a reduction treatment of the corresponding oxides; for example, Ir fibers were prepared by reducing IrO2 fibers. A combination of electrospinning and ALD was used for TUFT processing of ceramic nanotubes. In the TUFT process, electrospun template fibers were coated with the desired material (Al2O3, TiO2, IrO2, Ir, PtOx and Pt), and after coating the template fibers were removed by calcination. The inner diameter of the resulting tubes was determined by the template fiber and the tube wall thickness by the thickness of the ALD-deposited film. Promising results were obtained in the search for new applications for electrospun fibers. For the first time, by combining electrospinning and ALD, the TUFT process was used to prepare reusable magnetic photocatalyst fibers. These fibers have a magnetic core fiber and a photocatalytic shell around it. After a photocatalytic purification was completed, the fibers could be collected from the solution with a strong magnet and reused in cleaning the next solution. The most commercially and environmentally valuable application developed in this study was the use of electrospun ion-selective sodium titanate nanofibers for the purification of radioactive wastewater. These fibers were found to be more efficient than commercial granular products, and they require much less space in final disposal.
  • Heikkilä, Jaana (Helsingin yliopisto, 2019)
    A search for a pseudoscalar Higgs boson A is performed, focusing on its decay into a standard model-like Higgs boson h and a Z boson. Decays of the h boson into a pair of tau leptons are considered, along with Z boson decays into a pair of light leptons (electrons or muons). A data sample of proton-proton collisions collected by the CMS experiment at the LHC at √s = 13 TeV is used, corresponding to an integrated luminosity of 35 inverse femtobarns. The search uses the reconstructed mass distribution of the A boson as the discriminating variable. This analysis is the first of its kind to utilise the svFit algorithm while exploiting the possibility to apply a mass constraint of 125 GeV in the h→ττ four-vector reconstruction. The resolution of the reconstructed mass of the A boson is improved compared to the mass resolution obtained in previous analyses targeting the same final state. No excess above the standard model expectation is observed in data. Model-independent as well as model-dependent upper limits in the mA–tanβ plane for two minimal supersymmetric standard model benchmark scenarios are set at the 95% confidence level. The model-independent upper limit on the product of the gluon fusion production cross section and the branching fraction for the A→Zh→llττ decay ranges from 27 fb at 220 GeV to 5 fb at 400 GeV. The observed model-dependent limits on the process σ(gg→A+bbA)B(A→Zh→llττ) for the hMSSM (low-tb-high) scenario exclude tanβ values from 1.6 (1.8) at mA = 220 GeV to 3.7 (3.8) at mA = 300 GeV, respectively.
  • Wiikinkoski, Elmo (Helsingin yliopisto, 2019)
    Each nuclear energy country has its own strategy for handling spent nuclear fuel: direct disposal, recycling, or a combination of both. Advances in nuclear fuel partitioning enhance the safety of both approaches. The spent fuel contains fissionable material that could be used in modern and future reactors. Its re-use, however, requires separating the fissionable material from the neutron poisons that are also present in the spent fuel. Time-proven separation technologies exist for the recovery of uranium and plutonium, but for the trivalent actinides americium and curium, such technologies are still young. The majority of current separation technologies in nuclear fuel partitioning, such as solvent extraction, are based on the recovery of target nuclides from liquids by organic extractants. Their application can be limited by the high radiation doses during the separation process. Ion exchange with inorganic materials offers a robust supportive role in these separation challenges. The materials are stable at high temperatures, at high acidity and under extreme radiation, and they are ion selective. By altering their structure, the desired ion selectivity can be further improved. Throughout the dissertation, a solid inorganic ion exchanger, α-zirconium phosphate, was investigated, developed and applied in column operation with one goal in mind: the application of ion exchange in the column separation of trivalent actinides from lanthanides. α-Zirconium phosphate proved suitable for americium-europium separation. The material was modified, and the connections between synthesis, properties and ion selectivity of the various products were investigated and discussed. Numerous characterization techniques were applied in the investigation of material properties. Radioactive materials and radiochemical methods were used in the investigation of ion selectivities for europium and americium.
The materials were up to 400 times more selective towards europium over americium. For an application in nuclear fuel management, this order of selectivity is advantageous, as americium can be readily recovered from the material for fissioning, while europium is retained in the solid, a suitable matrix for nuclear waste disposal. In column operation, highly pure americium, up to 99.999 mol-%, could be separated from europium with high recovery at low pH. The effects of multiple factors on the separation, such as europium concentration, salt concentration and pH, were investigated throughout the dissertation. Ion exchange can excel in such specific and demanding tasks, as the structures of the materials can be engineered to enhance the desired separation properties. Whereas well-established solvent extraction based separation processes are already applied in many areas of nuclear fuel management, I believe that ion exchange can play a supportive role where they fall short.
  • Heinilä, Kirsikka (Helsingin yliopisto, 2019)
    Optical snow monitoring methods have a tendency to underestimate snow cover beneath the evergreen forest canopy due to the masking effect of trees. There is a need to develop methods that provide more reliable snow products and enhance their use, e.g. in hydrological and climatological models. The main objective of this thesis is to provide information to improve the accuracy of snow mapping through algorithm development and its regional parameterization. This thesis exploits reflectance data derived from ground-based, mast-borne, airborne and space-borne sensors. Each datatype, with its different ground resolution, has specific strengths and weaknesses. Together this dataset provides valuable information to advance knowledge of the reflectance properties of snow-covered forests and supports the interpretation of satellite-borne reflectance observations. Improvement of satellite-based snow cover mapping is essential because it is the only way to monitor snow cover in a spatially, temporally and economically effective manner. To obtain information about a geophysical variable using satellite data, a model for interpreting the satellite signal must be developed. The feasibility of satellite-borne observations in describing geophysical variables depends on the reliability of the model used. Here, simple reflectance models based on the zeroth-order radiative transfer equation and linear mixing models are investigated. They are found to reliably describe the observed surface reflectances from snow-covered terrain, both in forests and in open areas. Additionally, to improve methods for seasonal snow cover monitoring in forests, high spatial resolution observations are required to describe the spectral properties, and their temporal behaviour, of the different targets inside the investigated scene. It is also important to combine these target-specific reflectances with in situ data to describe the characteristics of the target area.
In this thesis the datasets complement each other: the mast-borne data provides information on the temporal behaviour of the scene reflectance of a specific location where the measurement conditions are well known, while the airborne data provides information, during a very short time (~1 hour), on the spatial variation of scene reflectance over areas where the land cover, forest characteristics and snow conditions are well defined. The results demonstrate the notable effect of forest on observed reflectance on both the temporal scale (changes in illumination geometry) and the spatial scale (changes in forest structure). The presence of a tree canopy also weakens the capability of the Normalized Difference Snow Index (NDSI) to detect snow-covered areas. Additionally, the effect of melting snow cover on reflectances and NDSI is significant in all land cover types, also producing high variation within individual land cover types.
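The NDSI and the linear mixing picture discussed above can be made concrete with a small sketch. The endmember reflectance values below are illustrative assumptions, not measurements from the thesis: snow is bright in the green band and dark in the shortwave infrared (SWIR), while a conifer canopy is dark in both, so increasing canopy fraction pulls the scene NDSI down and can push it below a typical snow-detection threshold of about 0.4.

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR band reflectances."""
    return (green - swir) / (green + swir)

def scene_reflectance(f_canopy, r_canopy, r_ground):
    """Linear (area-weighted) mixing of canopy and ground endmember reflectances."""
    return f_canopy * r_canopy + (1.0 - f_canopy) * r_ground

# Illustrative endmember reflectances (assumed values, not thesis data).
snow = {"green": 0.80, "swir": 0.10}
canopy = {"green": 0.08, "swir": 0.12}

open_snow_ndsi = ndsi(snow["green"], snow["swir"])
dense_forest_ndsi = ndsi(
    scene_reflectance(0.8, canopy["green"], snow["green"]),
    scene_reflectance(0.8, canopy["swir"], snow["swir"]),
)
```

With these numbers the open-area NDSI is well above 0.4, while the 80% canopy scene falls below it, reproducing the canopy masking effect that the thesis quantifies with real data.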
  • Leinonen, Juho (Helsingin yliopisto, 2019)
    Data collected from the learning process of students can be used to improve education in many ways. Such data can benefit multiple stakeholders of a programming course. Data about students' performance can be used to detect struggling students, who can then be given additional support, benefiting the student. If data shows that students have to read a certain section of the material multiple times, it could indicate either that the section is more important than others, or that it is unclear and could be improved, which benefits the teacher. Data collected through surveys can yield insight into students' motivations for studying. Ultimately, data can increase our knowledge of how students learn, benefiting educational researchers. Different kinds of data can be collected in online courses. In programming courses, data is typically collected from tools that are specifically made for learning programming. These tools include Integrated Development Environments (IDEs), program visualization tools, automatic assessment tools, and online learning materials. The granularity of data collected from such tools varies. Fine-grained data is collected frequently, while coarse-grained data is collected less frequently. In a programming course, coarse-grained data might include students' submissions to exercises, whereas fine-grained data might include students' actions within the IDE, such as editing source code. An example of extremely fine-grained data is keystroke data, which typically includes each key pressed while typing together with a timestamp that tells exactly when the key was pressed. In this work, we study the benefits of collecting keystroke data in programming courses. We explore different aspects of keystroke data that could be useful for research and to students and educators.
This is studied by conducting multiple quantitative experiments where information about students’ learning or the students themselves is inferred from keystroke data. Most of the experiments are based on examining how fast students are at typing specific character pairs. The results of this thesis show that students can be uniquely identified solely based on their typing whilst they are programming. This information could be used in online courses to verify that the same student completes all the assignments. Excessive collaboration can also be detected automatically based on the processes students take to reach a solution. Additionally, students’ programming experience and future performance in an exam can be inferred from typing, which could be used to detect struggling students. Inferring students’ programming experience is possible even when data is made less accurate so that identifying individuals is no longer feasible.
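The character-pair timing features underlying most of these experiments can be sketched as follows. The data representation here is a simplifying assumption on my part: a keystroke log of (timestamp, key) pairs, reduced to the mean down-down latency for each consecutive key pair (digraph), which is the kind of feature vector commonly used for typist identification and performance inference.

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """Mean down-down latency (ms) for each consecutive key pair.

    keystrokes: list of (timestamp_ms, key) tuples in typing order."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for (t1, k1), (t2, k2) in zip(keystrokes, keystrokes[1:]):
        pair = k1 + k2
        sums[pair] += t2 - t1  # time between the two key presses
        counts[pair] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}
```

Feature vectors like this, computed per student, can then be compared across sessions (for example by a distance over shared digraphs) to identify typists, or correlated with outcomes such as exam scores.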
  • Toivonen, Jarkko (Helsingin yliopisto, 2019)
    In this thesis we aim to learn models that describe the sites in DNA that a transcription factor (TF) prefers to bind to. We concentrate on probabilistic models that give each DNA sequence of fixed length a probability of binding. The probability models used are inhomogeneous 0th and 1st order Markov chains, which in our terminology are called the Position-specific Probability Matrix (PPM) and the Adjacent Dinucleotide Model (ADM), respectively. We consider both the case where a single TF binds to DNA in isolation, and the case where two TFs bind to proximal locations in DNA, possibly with interactions between the two factors. We use two algorithmic approaches to this learning task. Both approaches utilize data that is assumed to contain an enriched number of binding sites of the TF(s) under investigation. The binding sites in the data then need to be located and used to learn the parameters of the binding model. Both methods also assume that the length of the binding sites is known beforehand. We first introduce a combinatorial approach where we count l-mers that are either binding sites, background noise, or belong partly to both of these categories. The most common l-mer in the data and its Hamming neighbours are declared to be binding sites. Then an algorithm to align these binding sites in an unbiased manner is introduced. To avoid false binding sites, the fraction of signal in the data is estimated and used to subtract the counts that arise from the background. The second approach has the following additional benefits. The division into signal and background is done in a rigorous manner using a maximum likelihood method, thus avoiding the problems due to the ad hoc nature of the first approach. Second, the use of a mixture model allows learning multiple models simultaneously. Subsequently, this mixture model is extended to include dimeric models as combinations of two binding models. We call this reduction of dimers to monomers modularity.
This allows investigating the preference for each distance (even negative distances, in the overlapping case) and for each relative orientation between the two models. The most likely mixture model explaining the data is optimized using an EM algorithm. Since all the submodels belong to the same mixture model, their relative popularity can be directly compared. The mixture model gives an intuitive and unified view of the different binding modes of a single TF or a pair of TFs. Implementations of all the introduced algorithms – SeedHam and MODER for learning PPM models and MODER2 for learning ADM models – are freely available on GitHub. In validation experiments, ADM models were observed to be slightly but consistently better than PPM models in explaining binding-site data. In addition, learning modularic mixture models confirmed many previously detected dimeric structures and gave new biological insights into different binding modes and their compact representations.
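The two model families can be made concrete with a small sketch (a simplified illustration, not the MODER implementation; the parameter dictionaries are hypothetical). A PPM scores each position of a candidate site independently, while an ADM conditions each base on its predecessor:

```python
def ppm_probability(seq, ppm):
    """0th-order (PPM) model: product of per-position base probabilities.

    ppm[i][base] = probability of `base` at position i of the site."""
    p = 1.0
    for i, base in enumerate(seq):
        p *= ppm[i][base]
    return p

def adm_probability(seq, init, cond):
    """1st-order (ADM) model: the first base is drawn from `init`, and each
    later base is conditioned on its predecessor via cond[i][prev][base]."""
    p = init[seq[0]]
    for i in range(1, len(seq)):
        p *= cond[i - 1][seq[i - 1]][seq[i]]
    return p
```

For the same site length an ADM has more parameters and strictly generalizes a PPM, which is consistent with the observation that ADM models explain binding-site data slightly but consistently better.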
  • Talvitie, Topi (Helsingin yliopisto, 2019)
    Bayesian networks are probabilistic models that represent dependencies between random variables via directed acyclic graphs (DAGs). They provide a succinct representation for the joint distribution in cases where the dependency structure is sparse. Specifying the network by hand is often infeasible, and thus it would be desirable to learn the model from observed data over the variables. In this thesis, we study computational problems encountered in different approaches to learning Bayesian networks. All of the problems involve counting or sampling DAGs under various constraints. One important computational problem in the fully Bayesian approach to structure learning is the problem of sampling DAGs from the posterior distribution over all the possible structures for the Bayesian network. From the typical modeling assumptions it follows that the distribution is modular, which means that the probability of each DAG factorizes into per-node weights, each of which depends only on the parent set of the node. For this problem, we give the first exact algorithm with a time complexity bound exponential in the number of nodes, and thus polynomial in the size of the input, which consists of all the possible per-node weights. We also adapt the algorithm such that it outperforms the previous methods in the special case of sampling DAGs from the uniform distribution. We also study the problem of counting the linear extensions of a given partial order, which appears as a subroutine in some importance sampling methods for modular distributions. This problem is a classic example of a #P-complete problem that can be approximately solved in polynomial time by reduction to sampling linear extensions uniformly at random. We present two new randomized approximation algorithms for the problem. The first algorithm extends the applicable range of an exact dynamic programming algorithm by using sampling to reduce the given instance to an easier instance.
The second algorithm is obtained by combining a novel, Markov chain-based exact sampler with the Tootsie Pop algorithm, a recent generic scheme for reducing counting to sampling. Together, these two algorithms speed up approximate linear extension counting by multiple orders of magnitude in practice. Finally, we investigate the problem of counting and sampling DAGs that are Markov equivalent to a given DAG. This problem is important in learning causal Bayesian networks, because distinct Markov equivalent DAGs cannot be distinguished based on observational data alone, yet they differ from the causal viewpoint. We speed up the state-of-the-art recursive algorithm for the problem by using dynamic programming. We also present a new, tree decomposition-based algorithm, which runs in linear time if the size of the maximum clique is bounded.
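The exact dynamic programming approach to linear extension counting mentioned above can be sketched as a recursion over the sets of already-placed elements (the downsets of the partial order). The hypothetical minimal Python version below is exponential in the number of elements and therefore practical only for small instances; the input encoding (a set of precedence pairs) is an assumption made for the example.

```python
from functools import lru_cache

def count_linear_extensions(n, precedes):
    """Count linear extensions of a partial order on elements 0..n-1.
    `precedes` is a set of pairs (a, b) meaning a must come before b.
    Dynamic programming over downsets: O(2^n * n^2) time."""
    preds = [frozenset(a for (a, b) in precedes if b == v) for v in range(n)]

    @lru_cache(maxsize=None)
    def count(placed):
        # `placed` is a bitmask of elements already in the extension.
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for v in range(n):
            if placed & (1 << v):
                continue
            # v may come next only if all its predecessors are placed
            if all(placed & (1 << u) for u in preds[v]):
                total += count(placed | (1 << v))
        return total

    return count(0)
```

A chain yields exactly one extension and an antichain of n elements yields n!, which makes the memoised recursion easy to sanity-check.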
  • Hemminki, Samuli (Helsingin yliopisto, 2019)
    Motion sensing is one of the most important sensing capabilities of mobile devices, enabling monitoring of the device's physical movement and associating the observed motion with predefined activities and physical phenomena. The present thesis is divided into three parts covering different facets of motion sensing techniques. In the first part, we present techniques to identify the gravity component within three-dimensional accelerometer measurements. Our technique is particularly effective in the presence of sustained linear acceleration events. Using the estimated gravity component, we also demonstrate how the sensor measurements can be transformed into descriptive motion representations able to convey information about sustained linear accelerations. To quantify sustained linear acceleration, we propose a set of novel peak features designed to characterize movement during mechanized transportation. Using the gravity estimation technique and peak features, we then present an accelerometer-based transportation mode detection system able to distinguish between fine-grained automotive modalities. In the second part of the thesis, we present crowd replication, a novel sensor-assisted method for quantifying the usage of a public space. As a key technical contribution within crowd replication, we describe the construction and use of pedestrian motion models to accurately track detailed motion information. Fusing the pedestrian models with a positioning system and annotations of visual observations, we generate enriched trajectories able to accurately quantify the usage of public spaces. Finally, in the third part of the thesis, we present two exemplary mobile applications leveraging motion information. The first is a persuasive mobile application that uses transportation mode detection to promote sustainable transportation habits.
The second application is a collaborative speech monitoring system, where motion information is used to monitor changes in physical configuration of the participating devices.
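The gravity estimation problem described in the first part of the thesis can be illustrated with the common low-pass filter baseline: gravity varies slowly relative to user motion, so an exponential moving average of the accelerometer signal approximates it, and subtracting it yields linear acceleration. This is a sketch of the baseline only, not the thesis's technique, which is specifically designed to stay accurate during sustained linear accelerations that the plain filter below misattributes to gravity. The smoothing factor `alpha` is an assumed illustrative value.

```python
def estimate_gravity(samples, alpha=0.9):
    """Track the gravity component of 3-axis accelerometer readings
    with a first-order low-pass (exponential moving average) filter."""
    gx, gy, gz = samples[0]          # initialise from the first sample
    gravity = []
    for ax, ay, az in samples:
        gx = alpha * gx + (1 - alpha) * ax
        gy = alpha * gy + (1 - alpha) * ay
        gz = alpha * gz + (1 - alpha) * az
        gravity.append((gx, gy, gz))
    return gravity

def linear_acceleration(samples, alpha=0.9):
    """Subtract the estimated gravity to obtain linear acceleration."""
    return [(ax - gx, ay - gy, az - gz)
            for (ax, ay, az), (gx, gy, gz)
            in zip(samples, estimate_gravity(samples, alpha))]
```

With a stationary device the filter converges to the constant gravity vector and the linear acceleration goes to zero; the interesting failure mode, and the motivation for the thesis's method, is a long, smooth vehicle acceleration that leaks into the gravity estimate.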
  • Soikkeli, Maiju (Helsingin yliopisto, 2019)
    Magnetic resonance imaging (MRI) is one of the most important medical imaging methods due to its noninvasiveness and its superior versatility and resolution. In order to improve image quality and to make different tissues more distinguishable, MRI is often used together with contrast agents. Contrast agents are most commonly based on gadolinium. However, during recent decades, metal-free contrast agents have become a major area of development. One striking group of potential metal-free contrast agents are the nitroxides, stable organic radicals. In this thesis, two fully organic, metal-free nitroxides were designed and synthesized. The compounds consisted of a nitroxide moiety bearing the contrast-enhancing properties and a targeting moiety aimed to invoke specificity of the agent towards tumor tissue. Their stability and relaxation-enhancing properties were determined in order to evaluate their potential as novel contrast agents for MRI. Both compounds proved to be highly stable, maintaining their contrast- and relaxation-enhancing properties for several hours in harsh conditions. They also displayed effective relaxation time shortening in MRI experiments. Therefore, these organic radical contrast agents are expected to bring a noteworthy addition to established MRI-based diagnostics by joining the growing group of metal-free contrast agents for MRI. Another medical diagnostic method based on magnetic resonance is magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI). In the latter part of the thesis, a novel organic marker for MRS and MRSI with no existing equivalent was developed. Although the phantom MRS studies seemed promising, the in vivo animal studies unfortunately did not give the desired outcome, leaving room for improvement.
  • Ahonen, Lauri (Helsingin yliopisto, 2019)
    New particle formation (NPF) is a dominant source of atmospheric aerosol particles in terms of their number concentration, and a major contributor to the number of cloud condensation nuclei globally. Atmospheric aerosol particles have an impact on Earth's climate via direct and indirect effects. In addition to climate, aerosol particles have an impact on human health. In polluted environments, airborne pollutants, especially particulate matter, shorten life expectancy by several years. Understanding the processes of NPF plays a key role, for example, when identifying the most effective actions to improve air quality in megacities or assessing the role of anthropogenic emissions in climate change. An NPF event consists of the formation of molecular clusters and their subsequent growth to larger particle sizes by condensable vapors and/or coagulation. In order to quantify NPF events, measurements of the particle number size distribution close to the size where gas-to-particle conversion takes place are necessary. The gas-to-particle conversion takes place in the 1-2 nm size range, where both electrically charged and neutral molecular clusters exist. In most environments, such clusters are present even in the absence of NPF events; the growth of the small clusters to the 2-3 nm size range, on the other hand, is indicative of an NPF event. In this thesis, we gather knowledge on the concentration of sub-3 nm aerosol particles by conducting both long-term and campaign-like measurements with a particle size magnifier (PSM; Airmodus Ltd.). Our results were compared with the other available PSM data from sites around the world and presented in a compilation study. At all sites, the sub-3 nm particle concentration had a daytime maximum. Generally, the highest concentrations were observed at the sites with the highest anthropogenic influence.
In this thesis, we also conducted a campaign to observe particle formation in a cleanroom environment, where a PSM was used for the first time to monitor the concentration of nanoparticles in such an environment. Sub-2 nm clusters were always present in the cleanroom at relatively low concentrations, and short periods of high concentrations were observed during active manufacturing processes. Instrumental development was another important aspect of this thesis. We experimented with the possibility of using two commercial condensation particle counters (CPCs), with nominal lower limits close to 10 nm, for the detection of sub-3 nm particles. Optimized operating temperatures and flow rates were tested in laboratory conditions and using simulation tools. We showed that commercially available CPCs can be optimized down to sub-3 nm detection. In addition, a differential mobility particle sizer (DMPS) was specially built to measure particle number size distributions in the sub-10 nm size range using a PSM and a half-mini differential mobility analyzer (DMA). Due to the improved overall transmission of our system, the counting uncertainty compared to a harmonized DMPS was reduced by half in the sub-10 nm size range. Ion mobility-mass spectrometry was utilized to investigate the structures and hydration of iodine pentoxide-iodic acid clusters, similar to those observed during coastal nucleation events. The number of water molecules in hydrated clusters was sufficient to convert iodine pentoxide into iodic acid, but water sorption beyond this amount was limited.
  • Järvinen, Ilpo (Helsingin yliopisto, 2019)
    Transmission Control Protocol (TCP) has served as the workhorse for transmitting Internet traffic for several decades already. Its built-in congestion control mechanism has proved reliable in ensuring the stability of the Internet, and congestion control algorithms borrowed from TCP are also widely applied by other transport protocols. TCP congestion control has two main phases for increasing the sending rate. Slow Start is responsible for starting up a flow by seeking the sending rate the flow should use. Congestion Avoidance then takes over to manage the sending rate for flows that last long enough. In addition, the flow is booted up by sending the Initial Window of packets prior to Slow Start. There is a large difference in the magnitude of sending rate increase during Slow Start and Congestion Avoidance: Slow Start increases the sending rate exponentially, whereas with Congestion Avoidance the increase is linear. If congestion is detected, a standard TCP sender reduces the sending rate heavily. It is well known that most Internet flows are short, which implies that flow startup is a rather frequent phenomenon. Also, many traffic types exhibit an ON-OFF pattern, with senders remaining idle for varying periods of time. As flow startup under Slow Start causes an exponential sending rate increase, the link load is often subject to exponential load transients that escalate in a few round trips into overload if not controlled properly. This is especially true near the network edge, where traffic aggregation is limited to a few users. Traditionally, much of the congestion control research has focused on behavior during Congestion Avoidance and uses large aggregates during testing. To control router load, Active Queue Management (AQM) is recommended. The state-of-the-art AQM algorithms, however, are designed with little attention to Slow Start. This thesis focuses on congestion control and AQM during flow startup.
We explore what effect the Initial Window has on competing latency-sensitive traffic during a flow startup consisting of multiple parallel flows typical of Web traffic, and investigate the impact of increasing the Initial Window from three to ten TCP segments. We also highlight the shortcomings of the state-of-the-art AQM algorithms and formulate the challenges AQM algorithms must address to properly handle flow startup and exponential load transients. These challenges include the horizon problem, RTT (round-trip time) uncertainty, and rapidly changing load. None of the existing AQM algorithms are prepared to handle these challenges. Therefore, we explore whether an existing AQM algorithm called Random Early Detection (RED) can be altered to control exponential load transients effectively, and propose the necessary changes to RED. We also propose an entirely new AQM algorithm called Predict, the first AQM algorithm designed primarily for handling exponential load transients. Our evaluation shows that, because of shortcomings in handling exponential load transients, the state-of-the-art AQM algorithms often respond too slowly or too fast depending on the actual RTT of the traffic. In contrast, the Predict AQM algorithm performs timely congestion indication without compromising throughput or latency unnecessarily, yielding low latency over a large range of RTTs. In addition, the load estimation in Predict is designed to be fully compatible with pacing, and the timely congestion indication allows relaxing the large sending rate reduction on congestion detection.
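The difference in growth magnitude between the two phases described above can be made concrete with an idealised per-RTT model of the congestion window: Slow Start doubles the window each round trip, and Congestion Avoidance adds one segment per round trip once the Slow Start threshold is reached. This is a textbook simplification (it ignores delayed ACKs, losses, and pacing), not the behaviour of any specific TCP implementation studied in the thesis.

```python
def cwnd_growth(initial_window, ssthresh, rounds):
    """Per-RTT congestion window of an idealised TCP sender, in segments.
    Below ssthresh the window doubles every RTT (Slow Start);
    at or above ssthresh it grows by one segment per RTT
    (Congestion Avoidance)."""
    cwnd = initial_window
    history = [cwnd]
    for _ in range(rounds):
        if cwnd < ssthresh:
            cwnd = min(2 * cwnd, ssthresh)   # exponential growth phase
        else:
            cwnd += 1                        # linear growth phase
        history.append(cwnd)
    return history
```

Starting from an Initial Window of three segments with `ssthresh=16`, five round trips give 3, 6, 12, 16, 17, 18: the load quadruples in the first two RTTs, which is exactly the kind of exponential transient the thesis argues AQM algorithms must be prepared to handle.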
  • Koivunen, Niko (Helsingin yliopisto, 2019)
    The Standard Model of particle physics (SM) has been enormously successful in explaining the experimental signals coming from particle physics experiments. However, it leaves behind some puzzling questions. One of these is the flavour problem. The SM describes three generations of fermions. The everyday world is made of the fermions of the first generation: the electron, the electron neutrino, the up quark and the down quark. The fermions of the second generation are the muon, the muon neutrino, the charm quark and the strange quark. The third generation consists of the tau lepton, the tau neutrino, the top quark and the bottom quark. Mathematically each generation is treated identically, so one would expect similar masses for each generation. This is, however, not the case: the first generation is the lightest and the third is the heaviest. For example, the top quark is five orders of magnitude heavier than the up quark. The SM offers no explanation for this huge span in the fermion masses. This is called the fermion mass hierarchy problem. The fact that the fermions in the SM come in three generations is supported by experiments, and the existence of a fourth generation seems to be excluded. Yet the SM places each generation into an identical representation, and one could in principle have any number of fermion generations and still have an internally consistent model. Therefore the SM does not answer the question: why are there exactly three fermion generations in nature? This is called the fermion family number problem. The fermion mass hierarchy problem and the fermion family number problem are together known as the flavour problem. This thesis concentrates on possible solutions to the flavour problem. The Froggatt-Nielsen mechanism is one of the most popular methods of generating the fermion mass hierarchy. It introduces a new complex scalar field called the flavon and a new global flavour symmetry that forbids the SM Yukawa couplings.
When the flavon acquires a non-zero vacuum expectation value (VEV), the Yukawa couplings are generated as effective couplings. The hierarchy of the Yukawa couplings, and therefore of the fermion masses, is determined by the charge assignment under this flavour symmetry. The flavon will inevitably have flavour-violating couplings and can mediate processes that have not yet been seen experimentally. In the traditional 331-models, the gauge anomalies cancel only if the number of fermion families is three. The 331-models thus explain the number of fermion generations in nature. The cancellation of gauge anomalies requires that one of the quark generations be placed into a different representation than the other two. This inevitably leads to scalar-mediated tree-level flavour-changing neutral currents for quarks, which are heavily constrained experimentally. This is a problem for the traditional 331-models, as they offer no natural suppression mechanism. Finally, the thesis deals with the FN331-model, which economically incorporates the Froggatt-Nielsen mechanism into the 331-setting. The FN331-model is thus capable of explaining both the fermion mass hierarchy problem and the fermion family number problem simultaneously, solving the flavour problem.
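Schematically, the Froggatt-Nielsen suppression described above can be written in its standard form from the literature (the notation below is the generic textbook one, not necessarily the thesis's conventions): a Yukawa coupling forbidden by the flavour symmetry is generated by a higher-dimensional operator carrying powers of the flavon field suppressed by a heavy scale,

```latex
\mathcal{L}_{\mathrm{eff}} \supset
  c_{ij}\left(\frac{\phi}{\Lambda}\right)^{n_{ij}}
  \bar{\psi}_{L,i}\, H\, \psi_{R,j}
\quad\Longrightarrow\quad
y_{ij}^{\mathrm{eff}} = c_{ij}\,\epsilon^{\,n_{ij}},
\qquad
\epsilon = \frac{\langle\phi\rangle}{\Lambda} < 1,
```

where the integer powers $n_{ij}$ are fixed by the flavour charges of the fermions and $c_{ij}$ are order-one coefficients. Different powers of the single small parameter $\epsilon$ then reproduce the large observed spread of fermion masses, which is the sense in which the charge assignment determines the hierarchy.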
  • Fedorets, Grigori (Helsingin yliopisto, 2019)
    Astrometry, i.e., the study of the positions of star-like celestial bodies, is the basis for all astronomical research. Each generation of new astronomical surveys delivers new insights into the structure of the Universe, including our Solar System. The small bodies of the Solar System, asteroids and comets, are the populations which reveal the initial conditions, overall structure, and past processes shaping the Solar System. As impressive survey programmes develop incrementally, certain types of objects will always be on the threshold of discovery, and therefore only a small amount of data for these objects will ever be available. In this thesis, two such Solar System populations are investigated: Earth's temporary natural satellites, and asteroids discovered by the Gaia mission. The statistical properties and steady-state population of two sub-populations of Earth's natural satellites, temporarily-captured orbiters and flybys, are assessed. The challenges for detection and the prospects for future investigation of Earth's natural satellites are discussed. The detectability of Earth's temporarily-captured orbiters with the upcoming Large Synoptic Survey Telescope is also investigated, highlighting the importance of dedicated treatment of small fast-moving objects in the data processing. One of the many fields of astronomy where ESA's Gaia mission makes an important contribution is the discovery of new asteroids. Candidate new asteroids are processed daily and distributed to follow-up observers. A new statistical orbital inversion method, random-walk ranging, is developed. Additionally, a method to improve follow-up predictions by lowering the effect of systematic errors is introduced. This thesis gives an overview of the phenomenon of the temporary capture of asteroids by planets. The statistical ranging-based orbital inversion methods are discussed.
The advancements in stellar and asteroid astronomy over the ages, and the respective breakthroughs in the related fields of astronomy, are also assessed.
  • Enckell, Vera-Maria (Helsingin yliopisto, 2019)
    The earliest stage in the history of the universe is successfully modelled by cosmic inflation, a period of nearly exponential expansion. Due to inflation, the universe became spatially flat, old, and statistically homogeneous, with small inhomogeneities in the energy density that later acted as seeds of structure. In the simplest scenario, inflation is driven by a scalar field, the inflaton. In the Standard Model (SM) of particle physics, the Higgs boson is the only fundamental scalar field, which makes it an interesting candidate for the inflaton. However, the pure SM Higgs potential does not produce the required amount of inflation. Instead, successful inflation can be obtained by adding a large non-minimal coupling between the Higgs and gravity, which effectively flattens the potential and allows for an extended period of inflation. This is known as the Higgs inflation model. The effective theory of the non-minimally coupled Higgs and gravity is non-renormalisable and breaks perturbative unitarity at an energy scale below the inflationary regime. This prevents the use of perturbative quantum field theory methods in running the couplings up to the inflationary scales. It has been proposed, however, that the effects of the non-perturbative or non-renormalisable physics below the inflationary scale could be parametrised by threshold corrections, which amount to undetermined jumps in the couplings of the model. This leaves essentially three parameters determining the Higgs inflation potential: the jumps in the Higgs self-interaction and top Yukawa couplings, and the strength of the non-minimal coupling between the Higgs and gravity. In addition to these free parameters, the choice of the gravitational degrees of freedom, i.e. the choice between the metric and Palatini formulations, affects the predictions of Higgs inflation. This thesis consists of three articles investigating the robustness of the predictions of Higgs inflation.
By varying the three aforementioned parameters in both the metric and Palatini formulations, one can construct different kinds of features in the inflationary potential which widen the range of predictions of Higgs inflation. We also consider the combined Higgs-Starobinsky model of inflation, which is motivated by quantum corrections; this analysis is performed in the metric formalism. A detailed understanding of the predictions of Higgs inflation is crucial when contrasting the scenario against future observations of the Cosmic Microwave Background and gravitational waves, which may favour some realisations of Higgs inflation and rule out others. This may help to understand the microscopic mechanism of inflation and, if the Higgs really is the inflaton, also shed new light on the high-energy behaviour of the SM coupled to gravity.
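The non-minimal coupling at the heart of the model can be written in the form standard in the literature (a generic sketch; sign and normalisation conventions vary between papers, so this should not be read as the thesis's exact action):

```latex
S = \int \mathrm{d}^4x\, \sqrt{-g}
  \left[ \frac{M_{\mathrm{P}}^2 + \xi h^2}{2}\, R
       - \frac{1}{2}\, g^{\mu\nu}\, \partial_\mu h\, \partial_\nu h
       - \frac{\lambda}{4}\left( h^2 - v^2 \right)^2 \right],
```

where $h$ is the Higgs field in the unitary gauge, $\xi$ the non-minimal coupling, and $\lambda$ the Higgs self-coupling. In the metric formulation the Ricci scalar $R$ is built from the Levi-Civita connection of $g_{\mu\nu}$, whereas in the Palatini formulation the connection is an independent variable; the two choices lead to different Einstein-frame kinetic terms for $h$ and hence to the different inflationary predictions discussed above.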
  • Mohan, Nitinder (Helsingin yliopisto, 2019)
    Cloud computing has radically expanded the reach of application usage and has emerged as the de facto method of providing low-cost and highly scalable computing services to users. The existing cloud infrastructure is a composition of large-scale networks of datacenters spread across the globe. These datacenters are carefully installed in isolated locations and are heavily managed by cloud providers to ensure reliable performance for their users. In recent years, novel applications, such as the Internet of Things, augmented reality, and autonomous vehicles, have proliferated across the Internet. The majority of such applications are time-critical and enforce strict computational delay requirements for acceptable performance. Traditional cloud offloading techniques are inefficient for handling such applications due to the additional network delay incurred while uploading prerequisite data to distant datacenters. Furthermore, as computations involving such applications often rely on sensor data from multiple sources, simultaneous data upload to the cloud also results in significant congestion in the network. Edge computing is a new cloud paradigm which aims to bring existing cloud services and utilities near end users. Also termed edge clouds, this emerging platform's central objective is to reduce the network load on the cloud by utilizing compute resources in the vicinity of users and IoT sensors. Dense geographical deployment of edge clouds in an area not only allows for optimal operation of delay-sensitive applications but also provides support for mobility, context awareness, and data aggregation in computations. However, the added functionality of edge clouds comes at the cost of incompatibility with the existing cloud infrastructure.
For example, while datacenter servers are closely monitored by cloud providers to ensure reliability and security, edge servers aim to operate in unmanaged, publicly shared environments. Moreover, several edge cloud approaches aim to incorporate crowdsourced compute resources, such as smartphones, desktops, and tablets, near the location of end users to support stringent latency demands. The resulting infrastructure is an amalgamation of heterogeneous, resource-constrained, and unreliable compute-capable devices that aims to replicate cloud-like performance. This thesis provides a comprehensive collection of novel protocols and platforms for integrating edge computing into the existing cloud infrastructure. At its foundation lies an all-inclusive edge cloud architecture which allows for the unification of several co-existing edge cloud approaches in a single logically classified platform. This thesis further addresses several open problems in three core categories of edge computing: hardware, infrastructure, and platform. For hardware, the thesis contributes a deployment framework which enables interested cloud providers to effectively identify optimal locations for deploying edge servers in any geographical region. For infrastructure, the thesis proposes several protocols and techniques for efficient task allocation, data management, and network utilization in edge clouds, with the end objective of maximizing the operability of the platform as a whole. Finally, the thesis presents a virtualization-dependent platform that lets application owners transparently utilize the underlying distributed infrastructure of edge clouds, in conjunction with other co-existing cloud environments, without much management overhead.
