Faculty of Science (Matemaattis-luonnontieteellinen tiedekunta)


Recent Submissions

  • Wilkman, Olli (Helsingin yliopisto, 2016)
    Understanding the light scattering properties of Solar System bodies is important, especially in the case of small bodies. For these objects, most of our data is photometric, i.e. measurements of the brightness of light in broad spectral bands in the visible and near-infrared. Though limited in many ways, these data can be used to derive physical properties that provide constraints on the structure and material composition of the objects. These atmosphereless bodies are almost always covered with a blanket of loose material called the regolith. The planetary regoliths consist of a range of grain sizes from micrometres to tens of metres, and have a complex geological history and chemical composition. We study two models for the reflectance of planetary surfaces. One is the Lommel-Seeliger model, which is mathematically simple but not truly applicable to particulate media such as regoliths. However, an analytical form exists for the integrated brightness of an ellipsoid with the Lommel-Seeliger scattering model. Ellipsoids are useful as crude shape models for asteroids. Some applications of Lommel-Seeliger ellipsoids are studied in the development of faster software for the inversion of rotational state and rough shape from sparse asteroid lightcurves. The other scattering model is a semi-numerical one, developed to model the reflectance of dark particulate surfaces, such as the lunar regolith and the surfaces of many asteroids. The model term representing the shadowing effects in the medium is computed numerically and is computationally expensive to produce, but after being computed once, it can be saved and reused. The model is applied to disk-resolved photometry of the lunar surface, as well as laboratory measurements of a dark volcanic sand. The lunar surface is the best known extraterrestrial material, while volcanic sands can be used as analogues for basaltic regoliths such as the lunar mare surfaces.
These studies are still early steps in both of the model applications mentioned above. The results show promising avenues for further research. In the case of the Lommel-Seeliger ellipsoids, a statistical inversion scheme is used to gain information on the spin and shape of sparsely observed asteroids. In the studies with the particulate-medium (PM) scattering model, it was found to provide good fits to data, and though the interpretation of the model parameters is not clear, they are qualitatively reasonable. Some limitations of the current implementation of the model were found, with clear lines of future improvement. On the whole, the model has potential for many applications in disk-resolved photometry of regolith surfaces.
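For reference, the Lommel-Seeliger reflectance mentioned above has a simple closed form; a common way to write it (standard notation assumed here, not necessarily the thesis's exact convention) is

```latex
r_{\mathrm{LS}}(\mu_0,\mu,\alpha) \;=\; \frac{\tilde{\omega}}{4\pi}\,P(\alpha)\,\frac{\mu_0}{\mu_0+\mu},
```

where \tilde{\omega} is the single-scattering albedo, P(\alpha) the single-particle phase function at phase angle \alpha, and \mu_0, \mu the cosines of the incidence and emergence angles. It is this simplicity that allows the disk-integrated brightness of an ellipsoid to be written analytically.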
  • Merisalo, Maria (Helsingin yliopisto, 2016)
    Digitalization, the social, economic and cultural process where individuals, organizations and societies access, adopt, use and utilize digital technologies, is expected to produce comprehensive societal benefits. Here, the spillover effects of the utilization of digital technologies such as e-government, teleworking and social media are examined in order to explore the added value that can potentially be gained from digitalization. Moreover, the study advances the conceptual understanding of how, where and to whom digitalization produces added value. The research applies Bourdieusian neo-capital theory, which emphasizes the significance of tangible and intangible forms of capital in understanding the social world. This dissertation addresses digitalization questions through four papers. The first paper is conceptual in nature. It redefines and introduces the concept of e-capital as another form of intangible capital, which emerges from the possibilities, capabilities and willingness of individuals, organizations and societies to invest in, utilize and reap benefits from digitalization and thus create added value. All forms of capital (physical, economic, human, social and cultural) are both required and produced in this process. The second paper exposes spatial and social disparities in the use of social media in the Helsinki Metropolitan Area (HMA), and the third paper shows the connection between teleworking, knowledge intensity and creativity of work and e-capital. Both of these papers draw on a survey of 971 inhabitants of the HMA conducted in 2010. The fourth paper examines the national e-government programme E-services and e-democracy (SADe) by exploiting 15 stakeholder interviews conducted in 2012. The paper indicates that the programme was mainly driven by a technological paradigm.
The study demonstrates that the basic, primary motivation for advancing digitalization in societies is the fact that it matters: digitalization can provide e-capital and produce added value that cannot be gained, or would be significantly more difficult to gain, without digital technologies. The benefits do not materialize solely through the production of new innovative technological solutions; rather, they arise from comprehensive implementation by individuals, organizations and societies. These actors possess varying amounts of different forms of capital and thus vary in terms of their possibilities, capabilities and willingness to implement new digital tools. Since different forms of capital are needed in order to create e-capital from digitalization, e-capital is most likely to emerge in the same locations as other forms of capital. However, the conceptualisation of e-capital demonstrated that entering the e-capital conversion process gives access to other forms of capital. This should motivate individuals, organizations and societies (including the public bodies supporting them) in their digitalization process.
Keywords: e-capital, social media, teleworking, e-government, digitalization, Pierre Bourdieu
  • Peltola, Eveliina (Helsingin yliopisto, 2016)
    This thesis concerns questions motivated by two-dimensional critical lattice models of statistical mechanics, and conformal field theory (CFT). The core idea is to apply algebraic techniques to solve questions in random geometry, and reveal the algebraic structures therein. We consider the interplay between braided Hopf algebras (quantum groups) and CFT, with applications to critical lattice models and conformally invariant random curves, Schramm-Loewner evolutions (SLE). In the first article, a quantum group method is developed to construct explicit expressions for CFT correlation functions using a hidden Uq(sl2) symmetry. The quantum group method provides tools to directly read off properties of the functions from representation theoretical data. The correlation functions are analytic functions of several complex variables satisfying linear homogeneous partial differential equations known as the Benoit & Saint-Aubin PDEs. Such PDEs emerge in CFT from singular vectors in representations of the Virasoro algebra. The correlation functions of conformal field theory are believed to describe scaling limits of correlations in critical lattice models, expected to exhibit conformal invariance in the scaling limit. The second article contains applications to questions in the theory of SLEs: the pure partition functions of multiple SLEs, and the chordal SLE boundary visit probability amplitudes, also known as Green’s functions. The relevant solutions to the PDEs are found by imposing certain natural boundary conditions given by specified asymptotic behavior. Loosely speaking, the appropriate boundary conditions can be deduced from the qualitative properties of the associated stochastic processes, or alternatively, by CFT fusion arguments. More general solutions to the PDEs are constructed in the fourth article, in the spirit of fusion of CFT. 
The above type of solutions emerge also from critical lattice models, as (conjectured) scaling limits of renormalized probabilities of crossing and boundary visit events of interfaces. In the third article, such questions for the loop-erased random walk and the uniform spanning tree are studied. Explicit formulas for the probabilities are found, and their convergence in the scaling limit to solutions of second and third order PDEs of Benoit & Saint-Aubin type is proved. Furthermore, these functions are related to the conformal blocks of CFT, by certain combinatorial structures.
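To illustrate (in the normalisation common in the multiple-SLE literature, which may differ from the articles' conventions), the second-order members of the Benoit & Saint-Aubin family are the null-state equations satisfied by a multiple SLE_kappa partition function Z, one for each boundary point x_i:

```latex
\left[\frac{\kappa}{2}\,\partial_{x_i}^{2}
 + \sum_{j\neq i}\left(\frac{2}{x_j-x_i}\,\partial_{x_j}
 - \frac{2h}{(x_j-x_i)^{2}}\right)\right] Z(x_1,\dots,x_N) \;=\; 0,
\qquad h=\frac{6-\kappa}{2\kappa}.
```

The boundary conditions mentioned above then single out particular solutions of this system, such as the pure partition functions.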
  • Hanhijärvi, Kalle (Helsingin yliopisto, 2016)
    Viruses are the most abundant form of life on Earth. They cause serious disease, significant suffering, and economic losses, but viruses are also important to the general balance of the ecosystem. Understanding the details of the viral lifecycle is therefore essential from the point of view of basic research. This thesis work expands the basis formed by traditional microbiology. Single-molecule biophysics techniques open a unique perspective into the inner workings of viruses. The physics point of view provides a quantitative, predictive, and descriptive mathematical basis to help one understand the basic processes of life. Furthermore, single-molecule methods reveal heterogeneity and process variability which are unresolvable in bulk studies. This thesis work employs single-molecule biophysical experiments to study two aspects of the viral lifecycle: genome packaging and ejection. DNA ejection is a method of infection employed by many double-stranded DNA (dsDNA) bacteriophages. Their viral genome is packaged under high pressure within a small volume comparable in linear dimension to the persistence length of dsDNA. The study of viruses infecting archaea is a new and emerging field, which benefits from the single-molecule perspective. This thesis presents the first single-molecule study of dsDNA ejection from the archaeal virus His1, which has a dsDNA genome packaged in a lemon-shaped capsid. Osmotic suppression experiments are carried out and the results are compared to those of established dsDNA phages. Results obtained with total internal reflection fluorescence microscopy indicate that DNA ejection from His1 is modulated by external salt concentration and osmotic pressure, as is common to many bacteriophages. These findings qualitatively agree with the predictions given by the continuum theory of dsDNA packaging. In contrast to DNA ejection, genome packaging is essential to the assembly of virus particles.
Here the focus is on Pseudomonas phage phi6 which has a three-part dsRNA genome, of which only positive sense ssRNA-segments are packaged into the preformed procapsid. This thesis presents the first optical tweezers experiment of single-stranded RNA (ssRNA) packaging by phi6. The results show that packaging alternates between fast and slow sections suggesting that the secondary structure of the ssRNA segment is opened as the RNA is packaged. Single molecule-level results obtained using the two model systems reveal previously unseen heterogeneity in the ejection and packaging processes. Such results cannot be obtained by bulk methods alone.
  • Kopperi, Matias (Helsingin yliopisto, 2016)
    The flux of emerging organic contaminants into the environment is a global threat, which is widely studied and monitored. However, current regulation is not able to keep up with the increasing variety of new compounds released to the environment. More efficient and comprehensive analytical methodologies are required to enable sufficient monitoring of these compounds for legislative purposes. Non-targeted analytical approaches are able to identify previously unknown contaminants, which is not possible with conventional targeted methods. Therefore, the development of novel non-target methodologies is important. The goal of the thesis was to look for new ways to utilize non-targeted data for environmental applications with a special emphasis on wastewater analysis. The developed methodologies focused on chemometric quantification of non-target compounds, identification of steroidal transformation products, statistical cross-sample analysis of wastewater and atmospheric particles, as well as non-targeted approaches to quantify the selectivity of adsorbents employed in sample preparation. The samples were analyzed by comprehensive two-dimensional gas chromatography ‒ time-of-flight mass spectrometry, utilizing mass spectral libraries and retention indices for compound identification. Different solid-phase extraction procedures were applied to aqueous samples, and ultrasound-assisted extraction to solid samples. The study also included the synthesis of novel polymeric adsorbents with increased selectivity towards steroidal compounds. Modern statistical software was utilized for data handling and chemometrics. The multidimensional system enabled the analysis of complex wastewater samples, and several steroids and their transformation products were identified from the samples. It was concluded that hydrophobic steroids were efficiently removed from wastewater by adsorption to sewage sludge.
However, elimination from sludge was less efficient, and steroids were also found in the processed sludge destined for agricultural purposes. The chemometric model for the prediction of concentrations of non-target compounds with steroidal structure demonstrated good accuracy. Non-targeted approaches allowed the arithmetic comparison of adsorbent selectivity, whereas previously only relative methods had been used. Fast comparison of different wastewater and aerosol samples was possible through cross-sample analysis with non-targeted data. The non-targeted approaches presented in this thesis can also be applied to other groups of contaminants and thus broaden the available knowledge of environmental pollution. The new ways of utilizing non-targeted methodologies and cross-sample analyses demonstrated their value in this thesis and will hopefully inspire future studies in the field.
  • Suur-Uski, Anna-Stiina (Helsingin yliopisto, 2016)
    The cosmic microwave background carries a wealth of cosmological information. It originates from the roughly 380,000-year-old Universe, and has traversed through the entire history of the Universe since. The observed radiation contains temperature anisotropies at the level of a few times 10^−5 across the sky. By mapping out these temperature anisotropies, and their polarisation, over the full sky, we can unravel some fundamental mysteries of our Universe. The cosmic microwave background was discovered by chance in 1965. Ever since the first discovery, cosmologists have strived to measure its properties with increasing precision. The latest satellite mission to map the cosmic microwave background has been the European Space Agency's Planck satellite. Planck measured the cosmic sky for four years from 2009 to 2013 with an unprecedented combination of sensitivity, angular resolution and frequency coverage. Planck's observations confirm the basic ΛCDM model, which is the most elementary model explaining the observed properties of the Universe. As the precision of measurements has improved, the data volumes have grown rapidly. Modern-sized datasets require sophisticated algorithms to tackle the challenges in the data analysis. In this thesis we discuss two data analysis themes: map-making and residual noise estimation. Specifically, we answer the following questions: how to produce sky maps, and low-resolution maps with corresponding noise covariance matrices, from the Planck Low Frequency Instrument data. The low-resolution maps and the noise covariance matrices are required in the analysis of the largest structures of the microwave sky. The sky maps for the Stokes I, Q, and U components from the Planck Low Frequency Instrument data are built using the Madam map-maker. Madam is based on the generalised destriping principle. In destriping, the correlated part of the instrumental noise is modelled as a sequence of constant offsets, called baselines.
Further, a generalised destriper is able to employ prior information on the instrumental noise properties to enhance the accuracy of noise removal. We achieved nearly optimal noise removal by destriping the data at the HEALPix resolution of Nside = 1024 using 0.25 s baselines for the 30 GHz channel, and 1 s baselines for the 44 and 70 GHz channels, provided that a destriping mask, horn-uniform weighting and horn-uniform flagging were applied. For the low-l analysis we also provide maps at the resolution of Nside = 16. These low-resolution maps were downgraded from the high-resolution maps using a noise-weighted downgrading scheme combined with Gaussian smoothing for the temperature component. The resulting maps were adequate for the Planck 2015 data release, but for the final round of Planck data analysis the downgrading scheme will need to be revised. The estimated maps will always contain some degree of residual noise. The analysis steps after map-making, namely component separation and power spectrum estimation, require a solid understanding of those residuals. We have three complementary methods at our disposal: half-ring noise maps, noise Monte Carlo simulations, and the noise covariance matrices. The half-ring noise maps characterise the residual noise directly at the map level, while the other methods rely on noise estimates. The noise covariance matrices describe pixel-pixel correlations of the residual noise, and they are needed especially in the low-l likelihood analysis. Hence, it is sufficient to calculate them at the highest feasible resolution of Nside = 64, and subsequently downgrade them to the target resolution of Nside = 16 using the same downgrading scheme as for the maps. The different residual noise estimates seem to show good agreement.
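The destriping principle can be illustrated with a toy simulation (all names and numbers below are illustrative; this is not Madam's actual algorithm or interface): time-ordered data are modelled as a sky signal read through a pointing sequence plus constant baseline offsets and white noise, and the map and the baselines are estimated by alternating least-squares steps.

```python
import numpy as np

rng = np.random.default_rng(0)

npix = 32                  # toy 1-D "sky" with 32 pixels
nbase = 40                 # number of constant-offset baselines
base_len = 50              # samples per baseline
nsamp = nbase * base_len

sky = rng.normal(0.0, 1.0, npix)            # true sky map
pointing = rng.integers(0, npix, nsamp)     # pixel observed at each sample
baselines = rng.normal(0.0, 5.0, nbase)     # correlated-noise offsets
tod = sky[pointing] + np.repeat(baselines, base_len) \
      + rng.normal(0.0, 0.1, nsamp)         # time-ordered data

def bin_map(data):
    """Bin samples into pixels (noise-unweighted averaging)."""
    hits = np.bincount(pointing, minlength=npix)
    return np.bincount(pointing, weights=data, minlength=npix) / np.maximum(hits, 1)

# Destriping: alternate between binning a map from baseline-subtracted
# data and re-estimating the baselines from the map-subtracted residual.
b = np.zeros(nbase)
for _ in range(50):
    m = bin_map(tod - np.repeat(b, base_len))
    resid = (tod - m[pointing]).reshape(nbase, base_len)
    b = resid.mean(axis=1)
    b -= b.mean()           # the absolute offset is unobservable

m = bin_map(tod - np.repeat(b, base_len))
err = np.max(np.abs((m - m.mean()) - (sky - sky.mean())))
print(f"max map error after destriping: {err:.3f}")
```

In Madam the same idea is posed as a generalised least-squares problem with an optional noise prior on the baselines; the toy above corresponds to the prior-free, uniform-weighting case.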
  • Toivanen, Jukka Mikael (Helsingin yliopisto, 2016)
    Computational creativity is an area of artificial intelligence that develops algorithms and simulations of creative phenomena, as well as tools for performing creative tasks. In this thesis, we present various computational methods and models of linguistic and musical creativity. The emphasis is on developing methods that are maximally unsupervised, i.e. methods that require a minimal amount of hand-crafted linguistic, world, or domain knowledge. This thesis consists of an introductory part and five original research articles. The introductory part outlines computational creativity as a research field and discusses some of the philosophical foundations underlying the current work. The research articles present specific methods and algorithms for automatic composition of poetry and songs. The first article proposes a corpus-based poetry generation method that relies on statistical language modelling and morphological analysis and synthesis. In the second article, we expand that basic model with constraint programming techniques to handle more aspects of the poetic structure and style. The third article presents a method for mining document-specific word associations and proposes using them in poetry generation to produce poems based, for instance, on a specific news story. The fourth article presents a song composition system that utilises constraint programming to produce songs with matching lyrics and music in a transformational way, i.e. it is able to modify its own search space and preferences. Transformationality of the system is achieved with a metalevel component that can modify the system's internal constraints, leading to new conceptual spaces. Finally, the fifth article discusses possibilities of combining personal biosignal measurements, especially electroencephalography, with techniques of computational creativity and presents an art installation called Brain Poetry based on these ideas.
The current work relies heavily on the use of unsupervised data mining techniques to automatically build models of specific creative domains such as poetry. The proposed methods and models are flexible and they are to a large extent independent of language and style. Thus, they provide a general framework for computational or synthetic creativity in linguistic and musical domains that can be easily expanded in many ways. Applications of this work include pedagogical tools, computer games, and artistic results.
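The corpus-based statistical language modelling used as a starting point for poetry generation can be sketched with a toy bigram model (a deliberately minimal illustration, not the thesis's actual generator, which also uses morphological analysis and synthesis and, in later articles, constraint programming):

```python
import random
from collections import defaultdict

# A tiny stand-in corpus; a real system would use a large text collection.
corpus = [
    "the moon sails over the quiet lake",
    "the lake remembers the pale moon",
    "a cold wind writes on the water",
]

# Bigram model: map each word to the list of successors observed in the corpus.
successors = defaultdict(list)
for line in corpus:
    words = line.split()
    for w1, w2 in zip(words, words[1:]):
        successors[w1].append(w2)

def generate_line(start, max_words, rng):
    """Random walk through the bigram graph, starting from a given word."""
    words = [start]
    while len(words) < max_words:
        options = successors.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate_line("the", 6, random.Random(1)))
```

Every adjacent word pair in the output is one observed in the corpus, which is what makes such models trainable without hand-crafted linguistic knowledge.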
  • Tsona Tchinda, Narcisse (Helsingin yliopisto, 2016)
    Sulfur oxidation products are involved in the formation of acid rain and atmospheric aerosol particles. The formation mechanism of these sulfur-containing species is often complex, especially when ions are involved. The work of this thesis uses computational methods to explore reactions of sulfur dioxide with some atmospheric ions, and to examine the effect of humidity on the stability and electrical mobilities of sulfuric acid-based clusters formed in the first steps of atmospheric particle formation. Quantum chemical calculations are performed to provide insights into the mechanism of the reaction between sulfur dioxide (SO2) and the superoxide ion (O2-) in the gas phase. This reaction was investigated in various experimental studies based on mass spectrometry, but the structure of the product remained disputed. The performed calculations indicate that the peroxy SO2O2- molecular complex is formed upon collision of SO2 and O2-. Due to a high energy barrier, SO2O2- is unable to isomerize to the sulfate radical ion (SO4-), the most stable form of the singly charged tetraoxysulfurous ion. It is suggested that SO2O2- is the major product of the SO2 and O2- collision. The gas-phase reaction between SO2 and SO4- is further explored. From quantum chemical calculations and transition state theory, it is found that SO2 and SO4- cluster effectively to form SO2SO4-, which reacts fast at low relative humidity to form SO3SO3-. This species has never been observed in the atmosphere, and it most likely decomposes upon collisions with other atmospheric species. First-principles molecular dynamics simulations are used to probe the decomposition by collisions with ozone (O3). The most frequent reactive collisions lead to the formation of SO4-, SO3, and O2. This implies that SO4- acts as an effective catalyst in the oxidation of SO2 by O3 to SO3.
The lowest-energy structures and the thermochemistry of the stepwise hydration of clusters containing the bisulfate ion, sulfuric acid and a base (ammonia or dimethylamine) are determined using quantum chemical calculations. The results indicate that ammonia-containing clusters are more hydrated than dimethylamine-containing ones. The effect of humidity on the mobilities of different clusters is further examined, and humidity is found to have a negligible effect on the electrical mobilities of these clusters.
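For context, the transition state theory rate constants mentioned above are conventionally evaluated from the Eyring equation (standard form; the thesis may apply additional corrections, e.g. for tunneling):

```latex
k(T) \;=\; \frac{k_{\mathrm{B}}T}{h}\,
\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_{\mathrm{B}}T}\right),
```

where \Delta G^{\ddagger} is the Gibbs free energy of activation obtained from the quantum chemical calculations, k_B the Boltzmann constant and h the Planck constant.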
  • Kupiainen-Määttä, Oona (Helsingin yliopisto, 2016)
    A large fraction of atmospheric aerosol particles are formed from condensable vapors in the air. This particle formation process has been observed to correlate in many locations with the sulfuric acid concentration, but the very first steps of cluster formation have remained beyond the reach of experimental investigation until recently. Charged clusters can now be detected and characterized starting from the smallest sizes and even neutral clusters consisting of only a few molecules can be detected, although their composition cannot be fully characterized. However, measuring the concentrations of different cluster types does not tell the full story of how the clusters were formed, and detailed simulations are needed in order to get a full understanding of the cluster formation pathways. Cluster formation is described by a set of nonlinear differential equations that cannot be solved analytically in any realistic situation. The best way to understand the complex behavior of cluster populations is by cluster kinetics simulations. The focus of this Thesis is on developing tools for simulating cluster formation, and using the simulation results to improve the detailed understanding of atmospheric aerosol particle formation. As sulfuric acid has been identified as the main driving force of cluster formation in many locations, it is also the main compound in the simulations of this Thesis. It cannot explain the observed atmospheric particle formation rates alone, and other possible participating species considered in this Thesis are ammonia, dimethylamine and water. In the first two papers of the Thesis, theoretical values are used for the collision and evaporation rates, and simulated cluster concentrations and formation rates are compared to experimental observations. The simulation results agree well with experimental findings from two very different studies. 
The third and fourth papers assess existing methods for interpreting cluster measurements and point out details that should be taken into account: the effect of dipole moments on the chemical ionization of neutral molecules and clusters, and the conditions for the widely used nucleation theorem to be valid. The last paper introduces a new method for extracting cluster evaporation rates from measured cluster distributions.
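The kind of birth-death equations that such cluster kinetics simulations solve can be sketched with a toy monomer-addition chain (illustrative, order-of-magnitude rate values; not the thesis's simulation code): clusters grow by collisions with monomers, shrink by evaporation, and the collision flux past the largest tracked size defines a formation rate J.

```python
import numpy as np

nmax = 5                                        # track clusters up to 5 molecules
c1 = 1.0e7                                      # fixed monomer concentration (cm^-3)
beta = 5.0e-10                                  # collision coefficient (cm^3 s^-1), size-independent here
gamma = np.array([0.0, 10.0, 1.0, 0.1, 0.01])   # evaporation rate (s^-1) of the i-mer, i = 1..5

c = np.zeros(nmax)                              # c[i-1] = concentration of the i-mer
c[0] = c1
dt = 1.0e-3
for _ in range(300_000):                        # forward-Euler integration towards steady state
    dc = np.zeros(nmax)
    for i in range(1, nmax):
        grow = beta * c1 * c[i - 1]             # i-mer + monomer -> (i+1)-mer
        evap = gamma[i] * c[i]                  # (i+1)-mer -> i-mer + monomer
        dc[i - 1] += evap - grow
        dc[i] += grow - evap
    dc[0] = 0.0                                 # monomers held fixed by an external source
    c += dt * dc

J = beta * c1 * c[-1]                           # flux out of the largest tracked size
print(c, J)
```

Real simulations track multi-component clusters (acid, base, water), take collision rates from kinetic gas theory and evaporation rates from quantum-chemical free energies, and include charged channels; the structure of the equations is the same.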
  • Määttä, Jussi (Helsingin yliopisto, 2016)
    Model selection is the task of selecting from a collection of alternative explanations (often probabilistic models) the one that is best suited for a given data set. This thesis studies model selection methods for two domains, linear regression and phylogenetic reconstruction, focusing particularly on situations where the amount of data available is either small or very large. In linear regression, the thesis concentrates on sequential methods for selecting a subset of the variables present in the data. A major result presented in the thesis is a proof that the Sequentially Normalized Least Squares (SNLS) method is consistent, that is, if the correct answer (i.e., the so-called true model) exists, then the method will find it with probability that approaches one as the amount of data increases. The thesis also introduces a new sequential model selection method that is an intermediate form between SNLS and the Predictive Least Squares (PLS) method. In addition, the thesis shows how these methods may be used to enhance a novel algorithm for removing noise from images. For phylogenetic reconstruction, that is, the task of inferring ancestral relations from genetic data, the thesis concentrates on the Maximum Parsimony (MP) approach that tries to find the phylogeny (family tree) which minimizes the number of evolutionary changes required. The thesis provides values for various numerical indicators that can be used to assess how much confidence may be put in the phylogeny reconstructed by MP in various situations where the amount of data is small. These values were obtained by large-scale simulations and they highlight the fact that the vast number of possible phylogenies necessitates a sufficiently large data set. The thesis also extends the so-called skewness test, which is closely related to MP and can be used to reject the hypothesis that a data set is random, possibly indicating the presence of phylogenetic structure.
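The prequential idea behind PLS (and, with an extra normalization step, SNLS) can be sketched as follows: each candidate subset of variables is scored by the sum of its squared one-step-ahead prediction errors, the model being refit on past observations at every step. This is a plain PLS sketch on synthetic data; SNLS's sequential normalization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 3))
# The data-generating model uses only the first two variables.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.5, n)

def pls_criterion(X, y, cols, warmup=10):
    """Accumulated squared one-step-ahead prediction errors (PLS)."""
    err = 0.0
    for t in range(warmup, len(y)):
        # Fit by least squares on the data seen so far, predict the next point.
        beta, *_ = np.linalg.lstsq(X[:t, cols], y[:t], rcond=None)
        err += float(y[t] - X[t, cols] @ beta) ** 2
    return err

# Lower accumulated error indicates a better-supported variable subset.
print(pls_criterion(X, y, [0, 1]))     # the true subset
print(pls_criterion(X, y, [2]))        # an irrelevant variable only
```

Consistency, in the sense proved for SNLS in the thesis, means that as n grows the criterion picks out the true subset with probability approaching one.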
  • Mod, Heidi (Helsingin yliopisto, 2016)
    The effects of co-occurring species, namely biotic interactions, govern performance and assemblages of species along with abiotic factors. They can emerge as positive or negative, with the outcome and magnitude of their impact depending on species and environmental conditions. However, no general conception of the role of biotic interactions in the functioning of ecosystems exists. Implementing correlative spatial modelling approaches, combined with extensive data on species and environmental factors, would complement the understanding of biotic interactions and biodiversity. Moreover, the modelling frameworks themselves, conventionally based on abiotic predictors only, could benefit from incorporating biotic interactions and their context-dependency. In this thesis, I study the influence of biotic interactions in ecosystems and examine whether their effects vary among species and environmental gradients (sensu stress gradient hypothesis = SGH), and consequently, across landscapes. Species traits are hypothesized to govern the species-specific outcomes, while the SGH postulates that the frequency of positive interactions is higher under harsh environmental conditions, whereas negative interactions dominate at benign and productive sites. The study applies correlative spatial models utilizing both regression models and machine-learning methods, and fine-scale (1 m2) data on vascular plant, bryophyte and lichen communities from Northern Finland and Norway (69°N, 21°E). In addition to conventional distribution models of individual species (SDM), also species richness, traits and fitness are modelled to capture the community-level impacts of biotic interactions. The underlying methodology is to incorporate biotic predictors into the abiotic-only models and to examine the impacts of biotic interactions and their dependency on species traits and environmental conditions.
Cover values of the dominant species of the study area are used as proxies for the intensity of their impact on other species. The results show, firstly, that plant–plant interactions consistently and significantly affect species performance and richness patterns. Secondly, the results make evident that the impacts of biotic interactions vary between species, and, more importantly, that the guild, geographic range and traits of species can indicate the outcome and magnitude of the impact. For instance, vascular plant species, particularly competitive ones, respond mainly negatively to the dominant species, whereas lichens tend to show more positive responses. Thirdly, as proposed, the manifestation of biotic interactions also varies across environmental gradients. Support for the SGH is found, as the effect of the dominant species is more negative under benign conditions for most species and guilds. Finally, simulations of species richness, where the cover of the dominant species is modified, demonstrate that biotic interactions exhibit a strong control over landscape-level biodiversity patterns. These simulations also show that even a moderate increase in the cover of the dominant species can lead to drastic changes in biodiversity patterns. Overall, all analyses consistently demonstrate that taking biotic interactions into account improves the explanatory power and predictive accuracy of the models. There are global demands to understand species-environment relationships to enable predictions of biodiversity changes with regard to a warming climate or altered land use. However, uncertainties in such estimates exist, especially due to the uncertain influence of biotic interactions. This thesis complements the understanding of biotic interactions in ecosystems by demonstrating their fundamental, yet species-specific and context-dependent, role in shaping species assemblages and performance across landscapes.
From an applied point of view, our study highlights the importance of recognizing biotic interactions in future forecasts of biodiversity patterns.
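The core modelling step of adding a biotic predictor (the cover of a dominant species) to an otherwise abiotic model and checking whether the fit improves can be sketched on synthetic data (an illustrative OLS setup; the thesis itself applies regression and machine-learning methods to field data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
temperature = rng.normal(size=n)            # abiotic predictor
dominant_cover = rng.uniform(0.0, 1.0, n)   # biotic predictor: cover of a dominant species
# Synthetic performance of a focal species: an abiotic response plus a
# negative (competitive) effect of the dominant species.
perf = 1.5 * temperature - 2.0 * dominant_cover + rng.normal(0.0, 0.5, n)

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

_, r2_abiotic = fit_ols(temperature[:, None], perf)
beta_full, r2_full = fit_ols(np.column_stack([temperature, dominant_cover]), perf)
print(r2_abiotic, r2_full, beta_full[2])    # last coefficient = estimated biotic effect
```

In-sample R² always improves when a predictor is added, so a real analysis would compare predictive accuracy under cross-validation; the sign of the biotic coefficient here recovers the simulated competitive effect.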
  • Alves Antunes Soares, Joana Soares (2016)
    Atmospheric aerosols are subject to extensive research due to their effects on air quality, human health and ecosystems, and they hold a pivotal role in the Earth's climate. The first focus of this study is to improve the modelling of aerosol emissions and their dispersion in the atmosphere on both spatial and temporal scales; the second is to integrate the dispersion modelling with population activity data to derive exposure metrics. The mathematical models used in this study are fully or partially developed by the Finnish Meteorological Institute: a global-to-mesoscale chemical transport model, SILAM; a local-scale point/line-source dispersion model, UDM/CAR-FMI; and a human exposure and intake fraction assessment model, EXPAND. One of the outcomes of this work was the refinement of the emissions modelling for the global-to-mesoscale dispersion model. Firstly, a new parameterisation for bubble-mediated sea salt emissions has been developed by combining and re-assessing widely used formulations and datasets. This parameterisation takes into account the effects of wind speed and of seawater salinity and temperature, and is applicable to particles with dry diameters ranging between 0.01 and 10 µm. The parameterisation is valid for low-to-moderate wind speeds, seawater salinity ranging between 0 and 33, and seawater temperature ranging between -2 and 25 °C. Secondly, the near-real-time fire estimation system IS4FIRES, based on Fire Radiative Power (FRP) data from MODIS, was refined to reduce the overestimation of particulate matter (PM) emissions by including more vegetation types, improving the diurnal variation, removing highly energetic sources and recalibrating the emission factors. Applying dynamic emission modelling brought more insight into the spatial distribution of these emissions, their contribution to the atmospheric budget, and their possible impact on air quality and climate.
The modelling shows that sea salt aerosol (SSA) can be transported far over land and contribute up to 6 µg m⁻³ to PM10 at the annual level, and indicates that the Mediterranean has sharp concentration gradients, making it an interesting area to analyse for climate considerations. For fires, the simulations show the importance of meteorology and vegetation type for the intensity of the emissions. The simulations also show that MODIS FRP attributes highly energetic sources to wildland fires, causing up to an 80% overestimation in AOD close to the misattributed sources. The second outcome is related to urban-scale modelling. The emissions for the Helsinki Metropolitan Area (HMA) were revised to bring the traffic- and energy-sector emissions used in urban-scale modelling up to date. The EXPAND model was revised to combine concentrations and activity data in order to compute parameters such as population exposure and intake fraction. The improvements include the associated urban emission and dispersion modelling system, the time use of the population, and infiltration coefficients from outdoor to indoor air. This refinement showed that PM2.5 in the HMA originates mainly from long-range transport, the largest local contributors being vehicular emissions and shipping (in harbours and their vicinity). At the annual level, the population, living mostly indoors (at home and at work), is mainly exposed to PM2.5 there, with an acutely increased exposure while commuting.
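The exposure calculation described above can be illustrated with a minimal sketch: population exposure is a time-weighted sum of concentrations over the microenvironments people occupy, with an infiltration coefficient applied to indoor air. This is a generic illustration of the idea, not the EXPAND implementation; all names and numbers below are illustrative assumptions.

```python
def population_exposure(concentration, time_fraction, infiltration):
    """Time-weighted PM2.5 exposure (µg/m³) summed over microenvironments."""
    exposure = 0.0
    for env in concentration:
        c_out = concentration[env]          # outdoor concentration near env
        f_inf = infiltration.get(env, 1.0)  # 1.0 means fully outdoor
        exposure += c_out * f_inf * time_fraction[env]
    return exposure

# Illustrative example: most time spent indoors, a short outdoor commute.
conc = {"home": 8.0, "work": 10.0, "commute": 25.0}  # outdoor µg/m³
time = {"home": 0.6, "work": 0.3, "commute": 0.1}    # fractions of the day
infl = {"home": 0.6, "work": 0.5, "commute": 1.0}    # infiltration factors
print(round(population_exposure(conc, time, infl), 2))
```

Even in this toy setting, the short outdoor commute contributes a share of total exposure comparable to the much longer time spent indoors, which mirrors the "acutely increased exposure while commuting" noted above.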
  • Tapiola, Olli (Helsingin yliopisto, 2016)
    Different dyadic techniques are an inseparable part of modern-day harmonic analysis, both in Euclidean space and in metric spaces. In this dissertation, we improve adjacent and random dyadic techniques in metric spaces and apply these and previously known techniques to questions related to metric, Euclidean and vector-valued analysis. The dissertation consists of an introductory part and four research articles. In the first article, we present a general randomization procedure for dyadic systems in metric spaces which can be used for constructing both random and adjacent dyadic systems. As an application of the new random systems, we improve the continuity properties of the metric wavelets of P. Auscher and T. Hytönen by exploiting the improved "smallness of boundary" property of our random cubes. In the third article, we establish additional properties of our adjacent dyadic systems and use them to prove a decomposition result for dyadic systems in metric spaces. With its help, we give an alternative proof of the quantitative bound for the L^p norm of shift operators acting on vector-valued functions in metric spaces. In the second article, we explore certain properties of the Muckenhoupt weight classes, the class of Reverse Hölder weights and their weakened versions in spaces of homogeneous type. In the Euclidean setting, the Muckenhoupt weight classes have numerous equivalent definitions, but in spaces of homogeneous type some of those equivalences break down. We show that although certain definitions are no longer equivalent in this context, their weakened versions still define the same weight classes. We also show that every weak Reverse Hölder weight has a self-improving property. In the literature, these types of weak weights appear especially in the theory of partial differential equations.
In the fourth article, we prove quantitative weighted bounds for so-called rough homogeneous singular integrals by combining older techniques with a quantitative version of M. Lacey's recent extension of the A2 theorem. The proof of this extension is based on a domination technique which provides a way to dominate Calderón-Zygmund operators pointwise by a finite number of simple sparse operators associated with adjacent dyadic systems.
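The sparse domination principle mentioned above can be stated in a standard form (the notation here is ours, not quoted from the article): for a Calderón-Zygmund operator T there exist sparse families S_1, ..., S_K, each drawn from one of finitely many adjacent dyadic grids, such that

```latex
% Pointwise sparse domination (standard formulation):
% T is dominated by sparse operators over adjacent dyadic grids D^1, ..., D^K.
|Tf(x)| \le C_T \sum_{j=1}^{K} \sum_{Q \in \mathcal{S}_j}
    \langle |f| \rangle_Q \, \mathbf{1}_Q(x),
\qquad
\langle |f| \rangle_Q := \frac{1}{|Q|} \int_Q |f(y)| \, dy ,
```

where each family S_j is sparse in the usual sense: every cube Q in S_j contains a measurable subset E_Q with |E_Q| ≥ η|Q|, and the sets E_Q are pairwise disjoint. Weighted bounds such as the A2 theorem then reduce to proving the same bounds for the much simpler sparse operators.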
  • Koivisto, Juhani (Helsingin yliopisto, 2016)
    The dissertation Amenability of metric measure spaces and fixed point properties of groups consists of three articles, revolving around amenability and property (T) in different contexts, and a summary. In the first article, the (non-)amenability of hyperbolic metric spaces is considered. In it, we prove that a uniformly coarsely proper hyperbolic cone of a connected bounded metric space containing at least two points is non-amenable. In particular, this implies that any uniformly coarsely proper visual Gromov hyperbolic space with a connected boundary containing at least two points is non-amenable. In the second article, the degree of amenability of metric measure spaces is considered in general. Here, we prove a homological characterisation of global weighted Sobolev inequalities for quasiconvex uniform metric measure spaces that support a local weak (1,1)-Poincaré inequality, using methods from large-scale algebraic topology. Returning to the topic of the first article, we show that a quasiconvex visual Gromov hyperbolic uniform metric measure space that supports a local weak (1,1)-Poincaré inequality and has a connected boundary containing at least two points satisfies a global Sobolev inequality. In the third article, fixed point conditions for uniformly bounded group actions on Hilbert spaces are considered. In the article, we establish a spectral condition for the vanishing of the first cohomology group of the complex of square-integrable cochains twisted by a uniformly bounded representation of an automorphism group of a 2-dimensional simplicial complex. In particular, if the automorphism group acts properly discontinuously and cocompactly on the complex, this implies that every affine action of the automorphism group on the Hilbert space with linear part given by the representation has a fixed point. In the summary, the results of the articles are further explained and placed in a larger context, both mathematically and historically.
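For orientation, the local weak (1,1)-Poincaré inequality assumed above can be written in a standard form (our notation, not quoted from the articles): for every ball B = B(x, r), every integrable function u and every upper gradient g of u,

```latex
% Local weak (1,1)-Poincare inequality (standard form).
% lambda >= 1 is the "weak" dilation constant; C and lambda do not
% depend on the ball B or on the function u.
\frac{1}{\mu(B)} \int_B |u - u_B| \, d\mu
  \le C \, r \, \frac{1}{\mu(\lambda B)} \int_{\lambda B} g \, d\mu ,
\qquad
u_B := \frac{1}{\mu(B)} \int_B u \, d\mu .
```

The word "weak" refers to the enlarged ball λB on the right-hand side; for λ = 1 one recovers the strong form of the inequality.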
  • Gross, Oskar (Helsingin yliopisto, 2016)
    In order to analyse natural language and gain a better understanding of documents, a common approach is to build a language model that creates a structured representation of language, which can then be used for further analysis or for generation. This thesis focuses on a fairly simple language model based on associations between words that appear together in the same sentence. We revisit the classic idea of analysing word co-occurrences statistically and propose a simple parameter-free method for extracting common word associations, i.e. associations between words that are often used in the same context (e.g., Batman and Robin). Additionally, we propose a method for extracting associations that are specific to a document or a set of documents. The idea behind the method is to take the common word associations into account and highlight word associations that co-occur in the document unexpectedly often. We show empirically that these models can be used in practice for at least three tasks: generating creative combinations of related words, document summarization, and creating poetry. First, the common word association language model is used to solve a test of creativity, the Remote Associates Test. Observations of the properties of the model are then used to generate creative combinations of words: sets of words which are mutually unrelated but share a common related concept. Document summarization is a task where a system has to produce a short summary of a text within a limited number of words. In this thesis, we propose a method which utilises the document-specific associations and basic graph algorithms to produce summaries with competitive performance across various languages. The document-specific associations are also used to produce poetry related to a given document or set of documents.
The idea is to use documents as inspiration for generating poems which could potentially serve as commentary on news stories. Empirical results indicate that both the common and the document-specific associations can be used effectively in different applications. This provides us with a simple language model that can be used across different languages.
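The core of such a sentence-level co-occurrence model can be sketched in a few lines: count how often word pairs appear in the same sentence and score each pair against what independent occurrence would predict. The scoring statistic below (pointwise mutual information over sentences) is an illustrative stand-in, not the parameter-free measure proposed in the thesis.

```python
# Minimal sketch of sentence-level word association scoring.
# Assumption: PMI is used as the association measure for illustration.
import math
from collections import Counter
from itertools import combinations

def association_scores(sentences):
    """Score word pairs that co-occur within a sentence (higher = stronger)."""
    word_counts, pair_counts = Counter(), Counter()
    n_sent = len(sentences)
    for sent in sentences:
        words = set(sent.lower().split())       # each word counted once per sentence
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    scores = {}
    for pair, c_xy in pair_counts.items():
        x, y = tuple(pair)
        # PMI over sentence occurrences: log of observed vs. expected co-occurrence
        scores[pair] = math.log(c_xy * n_sent / (word_counts[x] * word_counts[y]))
    return scores

sents = ["batman and robin fight crime",
         "robin helps batman in gotham",
         "crime rises in gotham"]
s = association_scores(sents)
# "batman"/"robin" always co-occur, so they score higher than "crime"/"gotham"
print(s[frozenset({"batman", "robin"})] > s[frozenset({"crime", "gotham"})])
```

Document-specific associations would then be obtained by contrasting such scores computed on one document against those computed on a background corpus, highlighting pairs that co-occur unexpectedly often in the document.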