Browsing by Title


Now showing items 302-321 of 900
  • Mäkelä, Hanna (Helsingin yliopisto, 2015)
    Roughly three-quarters of Finland's area is covered by forests. Any climatological changes influencing the danger of forest fire are important to evaluate and consider. The objective of this thesis is to study the long-term past and future changes in climatically-driven forest fire danger in Finland based on the summertime mean temperature and precipitation sum. The work is composed of two parts. In the first part, long-term gridded datasets of observed monthly mean temperatures and precipitation sums for Finland are developed. In the second part, these gridded datasets are used together with calculated values of the Finnish Forest Fire Index and probabilistic climate model simulations (from the ENSEMBLES project) to estimate the number of forest fire danger days during the summer season (June-August). The long-term variation of Finland's climatological forest fire danger is studied roughly 100 years into the past and into the future. One of the main achievements of this thesis is that it explores the possibility of quantifying past and future fire weather using a relatively limited database with regard to both weather variables and their spatial coverage. This enables a wider exploitation of scattered data series from earlier times and can also provide opportunities for projections using data with a low resolution. The climatological forest fire danger in Finland varies considerably from year to year. There have not been any significant increasing or decreasing trends in the number of fire danger days during the 20th century (1908-2011). On average, the highest probability of forest fire danger occurs in June and July, when a fire hazard exists on roughly 35-40% of all days. The intra-seasonal variation of fire danger has been large enough to enable the occurrence of conflagrations even when the fire danger for the season as a whole has been at an average level.
Despite the projected increase in average summertime precipitation, the Finnish climate will provide more favourable conditions for the occurrence of forest fires in the future than today. This is due to increases in the mean temperature. The probability of an increase in the number of fire danger days is 56-75% in the near future (2010-2029) and 71-91% by the end of the current century (2080-2099), depending on the region. This would indicate an increase of 1-2 and 7-10 days, respectively. It is thus clearly important to further develop existing tools for the forecasting of fire danger, and to maintain the capabilities of the fire prevention, surveillance and suppression services. Future projections of all relevant meteorological variables (temperature, precipitation, humidity, evaporation and wind speed) at higher temporal and spatial resolutions, in addition to information on the type of the summertime precipitation and the length of the dry periods, would notably improve the assessment of the future climatological forest fire danger.
  • Viskari, Toni (Helsingin yliopisto, 2012)
    Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physical research. Current methods to combine different measurements require assumptions concerning the overlapping measurement ranges and have difficulties in accounting for measurement uncertainties. In this thesis, the Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by the EKF includes information from prior particle number size distributions, as propagated by a dynamical model, and is weighted by the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and over particle size. The method was tested with measurements from a Differential Mobility Particle Sizer (DMPS), an Aerodynamic Particle Sizer (APS) and a nephelometer. The particle number concentration was chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both the measurements and the dynamical model in that size range.
The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden large changes in size distribution.
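The estimation cycle described in this abstract can be sketched with a minimal linear Kalman filter fusing two instruments that observe overlapping size ranges. Everything below is an illustrative toy: the bin count, the persistence dynamics, the observation operators and the noise levels are assumptions, not the thesis' actual aerosol model.

```python
import numpy as np

def kalman_step(x, P, F, Q, z, H, R):
    """One predict/update cycle: propagate the size-distribution state
    with the dynamical model, then correct it with an instrument reading."""
    x_pred = F @ x                       # predicted state
    P_pred = F @ P @ F.T + Q             # predicted error covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain: weights by reliability
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy setup: 5 size bins; two hypothetical instruments each see an
# overlapping subset of the bins (like DMPS and APS covering different ranges).
n = 5
x = np.full(n, 100.0)                    # prior number concentrations (cm^-3)
P = np.eye(n) * 50.0                     # prior error covariance
F = np.eye(n)                            # persistence stand-in for the dynamics
Q = np.eye(n) * 5.0                      # model error added at each step
H1, H2 = np.eye(n)[:3], np.eye(n)[2:]    # instrument 1: bins 0-2; 2: bins 2-4
x, P = kalman_step(x, P, F, Q, np.array([110.0, 95.0, 105.0]), H1, np.eye(3) * 10.0)
x, P = kalman_step(x, P, F, Q, np.array([102.0, 98.0, 90.0]), H2, np.eye(3) * 10.0)
```

After the two updates, every bin's error variance has fallen below its prior value, including the overlap bin observed by both instruments; this is the sense in which the filter weights each information source by its reliability.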
  • Pushkina, Diana (Helsingfors universitet, 2007)
    Short- and long-term environmental changes and variations in climate and vegetation during the late Neogene shaped the geographical ranges of large terrestrial mammals by allowing the origination, distribution and dispersal of the species that make up faunas. Climatic fluctuations intensified during the latest Neogene, the Pleistocene (1.8-0.01 Ma), at the end of which human presence also became more conspicuous. Both climate and humans have been linked to extensive alteration and extinctions in mammalian faunas across the world. This dissertation consists of a set of papers that examine different periods of the Neogene and the associated faunas in northern Eurasia. Major trends in changing environments and climate were studied by means of tooth crown height (hypsodonty) and dietary structure in herbivorous terrestrial mammals and/or species commonness (locality coverage, a proxy for abundance). This study was also intended to bring to light the great deal of information contained in the Russian literature, filling the gap between the European literature and untranslated Russian records. Since the middle Miocene (~15-11 Ma), central Asia has been the focal point of the transformation in Eurasia towards a more open and dry environment. The drying of the central part of Eurasia hampered the spread of temperate or mesophilic species between the western and eastern sides of the continent, and created conditions for the origination of the cold- and arid-adapted grazing fauna in north-eastern Eurasia. Throughout the climatically unstable late middle and late Pleistocene, Europe, which was more maritime during interglacials than Siberia, experienced the most drastic faunal alternations between the interglacial Palaeoloxodon antiquus and glacial Mammuthus primigenius assemblages that permanently inhabited the Mediterranean and Siberia, respectively.
During the more climatically equable middle part of the middle Pleistocene (the Holsteinian interglacial), which was climatically similar to the current Holocene, the interglacial species could have spread eastwards. The origins, dispersal and cohesiveness of the Palaeoloxodon antiquus assemblage in Eurasia are examined. During the latest Weichselian Glaciation (Late Glacial, 15 000-10 500 yr BP, latest late Paleolithic) and the Holocene (last 10 000 yr), rapid warming initiated the fragmentation of dry and cold tundra-steppes, as increased temperature and humidity produced boggy tundra in the north and forests in the south over most of northern Eurasia. The most significant change took place in central Asia, influencing the decline of the glacial mammoth fauna, as seen in southern Siberia in decreased mean hypsodonty and a shift in dietary preferences from grazing towards browsing in herbivorous ungulates, along with decreased mean body size in large mammals. It is difficult to disentangle the role of humans from the effect of climate in the large mammal extinctions in Eurasia at the Weichselian-Holocene boundary, because the two largely coincided. The study is consistent with the idea that the Eurasian late Pleistocene extinctions were first climatically driven, after which the impact of rapidly expanding humans must have become more manifest and crucial, either through direct hunting or via indirect activities. Only the data for the extinct steppe bison may indicate disproportionate selection by humans, although more extensive and recently updated data are needed. Key words: Pleistocene, Neogene, Paleolithic, interglacial, glacial, large mammals, distribution, hypsodonty, aridity, precipitation, body size, commonness, extinction, human influence.
  • Pohjola, Mia (Helsingin yliopisto, 2006)
    This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic-exhaust-originated particulate matter on the local street scale, studied by combining a dispersion model with an aerosol process model; 3) analysis of some high particulate matter concentration situations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed a clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional or long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate the total number concentrations, while condensation and coagulation had only a minimal effect on the Aitken mode number concentrations.
The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked predominantly to stable atmospheric conditions with high atmospheric pressure and low wind speeds, in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
  • Kohonen, Jukka (Helsingin yliopisto, 2015)
    Clustering is a central task in computational statistics. Its aim is to divide observed data into groups of items based on the similarity of their features. Among the various approaches to clustering, Bayesian model-based clustering has recently gained popularity; many existing works are based on stochastic sampling methods. This work is concerned with exact, exponential-time algorithms for the Bayesian model-based clustering task. In particular, we consider the exact computation of two summary statistics: the number of clusters, and the pairwise incidence of items in the same cluster. We present an implemented algorithm for computing these statistics substantially faster than would be achieved by direct enumeration of the possible partitions. The method is practically applicable to data sets of up to approximately 25 items. We apply a variant of the exact inference method to graphical models in which a given variable may have up to four parent variables. The parent variables can then have up to 16 value combinations, and the task is to cluster them and find combinations that lead to similar conditional probability tables. Further contributions of this work relate to number theory. We show that a novel combination of addition chains and additive bases provides the optimal arrangement of multiplications when the task is to use repeated multiplication starting from a given number or entity, but only a certain kind of function of the successive powers is required. This arrangement speeds up the computation of the posterior distribution for the number of clusters. The same arrangement method can be applied to other multiplicative tasks, for example matrix multiplication. We also present new algorithmic results on finding extremal additive bases. Before this work, the extremal additive bases were known up to length 23. We have computed them up to length 24 in the unrestricted case, and up to length 41 in the restricted case.
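The two summary statistics named above can be computed exactly, at toy scale, by brute-force enumeration of set partitions, which is precisely the baseline the thesis' algorithm improves upon. The scoring below is entirely hypothetical: a per-cluster factor of 0.1 and a Gaussian-style tightness score stand in for a proper Bayesian marginal likelihood.

```python
import math

def partitions(items):
    """Recursively enumerate all set partitions of `items` (Bell(n) of them)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):            # put `first` into an existing cluster
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]                # or open a new cluster

def cluster_score(cluster, data):
    """Hypothetical per-cluster score rewarding tight clusters."""
    vals = [data[i] for i in cluster]
    mean = sum(vals) / len(vals)
    return math.exp(-sum((v - mean) ** 2 for v in vals))

data = [0.1, 0.2, 0.15, 5.0, 5.1]             # two obvious groups
total = 0.0
k_post = {}                                   # posterior over number of clusters
pair = [[0.0] * len(data) for _ in data]      # pairwise same-cluster weights
for part in partitions(list(range(len(data)))):
    w = (0.1 ** len(part)) * math.prod(cluster_score(c, data) for c in part)
    total += w
    k_post[len(part)] = k_post.get(len(part), 0.0) + w
    for c in part:
        for i in c:
            for j in c:
                pair[i][j] += w
k_post = {k: v / total for k, v in k_post.items()}
p_same_01 = pair[0][1] / total                # P(items 0 and 1 share a cluster)
```

With these toy weights the posterior mode is two clusters and items 0 and 1 very likely share a cluster. Direct enumeration like this scales as the Bell numbers, which is why the faster exact algorithm of the thesis matters.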
  • Orponen, Tuomas (Helsingin yliopisto, 2012)
    The dissertation Exceptional sets in projection and slicing theorems treats two classical topics in fractal geometry: projections and slicing. The thesis consists of an introductory chapter and two scientific articles; the new results extend a long line of research originated by J. M. Marstrand in 1954. The first paper deals with projecting a planar set K onto lines. The fractal geometer is interested in the following question: what is the relation between the dimensions of K and its projections? In 1954, Marstrand proved that if the dimension of K lies between zero and one, then the projections tend to preserve dimension; for almost every line, the dimension of the projection equals dim K. During its nearly 60 years of existence, this theme has spawned countless variations. In the thesis, special attention is given to scrutinizing the words "almost every line" in Marstrand's theorem. The words cannot be entirely omitted (an illustrative example is given by projecting the y-axis onto the x-axis), but they can be sharpened in many cases. The definition used by Marstrand allows for a fairly large set of "exceptional lines", the projection onto which fails to preserve the dimension of K. It turns out that better bounds for the size of this exceptional set can be obtained through a more intricate analysis. The second paper is thematically closely akin to the first; it takes up another 1954 result by Marstrand, the "slicing theorem", and examines the exceptional set estimates therein. To explain the slicing theorem, fix a planar set K with dimension greater than one. This time, the set K is intersected with various planar lines. What is the dimension of these slices of K? In general, one cannot expect to find a single constant answering the question: if K is bounded, many lines evade K altogether, and the corresponding slices have dimension zero. However, not all slices of K can be so small.
Marstrand showed that in almost every direction, many lines meet K in a set of dimension dim K - 1. In Marstrand's original formulation, the same definition of the words "almost every" was used as in the projection theorem, and, again, the bounds for the size of the exceptional set can be improved with new techniques. In the thesis, similar estimates are also derived in a variant of the theorem where planar lines are replaced by more complicated curves.
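For reference, the two 1954 results discussed above are usually stated as follows (a standard modern formulation, with dim denoting Hausdorff dimension and \pi_\theta the orthogonal projection in direction \theta; the thesis sharpens the "almost every" in both):

```latex
% Projection theorem: for a Borel set K in the plane with \dim K \le 1,
\dim \pi_\theta(K) \,=\, \min\{\dim K,\ 1\}
\qquad \text{for almost every } \theta \in [0, \pi).

% Slicing theorem: if \dim K > 1, then for almost every direction \theta,
% a positive-measure family of lines \ell parallel to direction \theta satisfies
\dim\,(K \cap \ell) \,=\, \dim K - 1 .
```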
  • Paananen, Tomi (Helsingin yliopisto, 2009)
    This is a study of ultra-cold Fermi gases in different systems. The thesis focuses on exotic superfluid states, for example the three-component Fermi gas and the FFLO phase in optical lattices. In the two-component case, superfluidity is studied mainly for spin-population-imbalanced Fermi gases, and the phase diagrams are calculated from mean-field theory. Different methods to detect the different phases in optical lattices are suggested. In the three-component case, we also studied the uniform gas and the harmonically trapped system; here, BCS theory is generalized to three-component gases. It is also discussed how to achieve the conditions for an SU(3)-symmetric Hamiltonian in optical lattices. The thesis is divided into chapters as follows: Chapter 1 is an introduction to the field of cold quantum gases. Chapter 2 discusses optical lattices and their experimental characteristics. Chapter 3 deals with two-component Fermi gases in optical lattices and the paired states in lattices. Chapter 4 explores three-component Fermi gases with and without a harmonic trap and studies the pairing mechanisms; it also discusses three-component Fermi gases in optical lattices. Chapter 5 is devoted to higher-order correlations and what they can tell about the paired states. Chapter 6 concludes the thesis.
  • Asmi, Eija (Helsingin yliopisto, 2010)
    Atmospheric aerosol particles have significant climatic effects. Secondary new particle formation is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapours participating in this process are not fully understood. The recently developed Neutral cluster and Air Ion Spectrometer (NAIS) was widely used in field studies of atmospheric particle formation. The NAIS was calibrated and found to be in adequate agreement with the reference instruments, and it was concluded that the NAIS can be reliably used to measure ions and particles near the sizes where atmospheric particle formation begins. The main focus of this thesis was to study new particle formation and the participation of ions in this process. To attain this objective, particle and ion formation and growth rates were studied in various environments - at several field sites in Europe, at previously rarely studied sites in Antarctica and Siberia, and also in an indoor environment. New particle formation was observed at all the studied sites, and the observations were used as indicators of the particle formation mechanisms. Particle size-dependent growth rates and nucleation mode hygroscopic growth factors were examined to obtain information on particle growth. It was found that atmospheric ions participate in the initial steps of new particle formation, although their contribution was minor in the boundary layer. The highest atmospheric particle formation rates were observed at the most polluted sites, where the role of ions was the least pronounced. Furthermore, the increase of particle growth rate with size suggested that enhancement of the growth by ions was negligible. The participation of organic vapours in particle growth was supported by laboratory and field observations. It was also shown that secondary new particle formation can be a significant source of indoor air particles.
These results, extending over a wide variety of environments, support previous observations and increase understanding of new particle formation on a global scale.
  • Schobesberger, Siegfried (Helsingin yliopisto, 2014)
    Atmospheric aerosols have important effects on health and climate. An important source is the formation of aerosol particles from gas-phase precursors. The goal of this thesis was to improve our understanding of how exactly this atmospheric particle formation proceeds. Attempts have been made to describe aerosol particle formation by classical nucleation theory. To test this theory, the heterogeneous nucleation of n-propanol vapor on 4-11 nm seed particles was investigated. The choice of seed particle material was found to determine whether the classical theories could be applied or not, probably because of material-specific inter-molecular interactions between the vapor and the seed particle. The classical theories fail to describe these interactions, which can be crucial in microscopic systems. The critical processes of atmospheric particle formation occur at sizes below 2 nm. In this thesis, novel techniques were employed to access this size range, primarily the atmospheric pressure interface time-of-flight (APi-TOF) mass spectrometer, which can directly measure the composition of ions and ionic clusters up to a size of about 2 nm. APi-TOFs were employed at the CLOUD facility at CERN during experiments that focused on exploring particle formation from various systems of vapors. The results of the APi-TOF measurements were key in revealing the detailed mechanisms of how clusters were initially formed by which vapors, and how these clusters grew to sizes > 2 nm. Clusters of sulfuric acid + ammonia and sulfuric acid + dimethylamine were shown to form and grow via strong hydrogen bonds. The APi-TOF measurements also showed that certain large monoterpene oxidation products, some of them very highly oxidized, can directly bind with bisulfate ions and with sulfuric acid molecules. The clusters then grow by the addition of more of these large oxidized organics and sulfuric acid molecules.
Similarities with results from measurements in the boreal forest suggest that large oxidized organics indeed play a crucial role in ambient particle formation events. A light airplane was used to explore how the mechanisms of aerosol particle formation vary throughout the atmosphere above the boreal forest, from the canopy up into the free troposphere. These airborne measurements confirmed the extent of boundary-layer new particle formation events and showed indications of an important role of dynamical processes at the top of the boundary layer. Local enhancements of particle formation were observed in connection with clouds. The goal of this thesis was achieved chiefly by using state-of-the-art experimental techniques, both in high-quality laboratory experiments and in the field, and by taking ambient measurements aloft. The hope is that this work will prove to be an important contribution to advancing our knowledge of the detailed mechanisms of atmospheric aerosol particle formation.
  • Hyvärinen, Antti-Pekka (Helsingin yliopisto, 2006)
    Aerosol particles play a role in the Earth's ecosystem and affect human health. A significant pathway for producing aerosol particles in the atmosphere is new particle formation, in which condensable vapours nucleate and the newly formed clusters grow by condensation and coagulation. However, this phenomenon is still not fully understood. This thesis offers insight into new particle formation from an experimental point of view. Laboratory experiments were conducted both on the nucleation process and on physicochemical properties related to new particle formation. Nucleation rate measurements are used to test nucleation theories, which, in turn, are used to predict nucleation rates in atmospheric conditions. However, nucleation rate measurements have proven difficult to conduct, as different devices can yield nucleation rates differing by several orders of magnitude for the same substances. In this thesis, work has been done to gain a greater understanding of nucleation measurements, especially those conducted in a laminar flow diffusion chamber. Systematic studies of nucleation were also made for future verification of nucleation theories. Surface tensions and densities of substances related to atmospheric new particle formation were measured. The ternary system sulphuric acid + ammonia + water is a proposed candidate for participating in atmospheric nucleation. Surface tensions of an alternative candidate to nucleate in boreal forest areas, sulphuric acid + dimethylamine + water, were also measured. Binary systems consisting of organic acids + water are possible candidates for participating in the early growth of freshly nucleated particles. All the measured surface tensions and densities were fitted with equations, thermodynamically consistent where possible, so that they can easily be applied in atmospheric model calculations of nucleation and the subsequent evolution of particle size.
  • Eresmaa, Reima (Helsingin yliopisto, 2007)
    Data assimilation provides an initial atmospheric state, called the analysis, for Numerical Weather Prediction (NWP). This analysis consists of pressure, temperature, wind, and humidity on a three-dimensional NWP model grid. Data assimilation blends meteorological observations with the NWP model in a statistically optimal way. The objective of this thesis is to describe methodological development carried out in order to allow data assimilation of ground-based measurements of the Global Positioning System (GPS) into the High Resolution Limited Area Model (HIRLAM) NWP system. Geodetic processing produces observations of tropospheric delay. These observations can be processed either for vertical columns at each GPS receiver station or for the individual propagation paths of the microwave signals. These alternative processing methods result in Zenith Total Delay (ZTD) and Slant Delay (SD) observations, respectively. ZTD and SD observations are of use in the analysis of atmospheric humidity. A method is introduced for estimation of the horizontal error covariance of ZTD observations. The method makes use of observation minus model background (OmB) sequences of ZTD and conventional observations. It is demonstrated that the ZTD observation error covariance is relatively large at station separations shorter than 200 km, but non-zero covariances also appear at considerably larger separations. The relatively low density of radiosonde observing stations limits the ability of the proposed estimation method to resolve the shortest length-scales of error covariance. SD observations are shown to contain a statistically significant signal on the asymmetry of the atmospheric humidity field. However, the asymmetric component of SD is found to be nearly always smaller than the standard deviation of the SD observation error. SD observation modelling is described in detail, and other issues relating to SD data assimilation are also discussed.
These include the determination of error statistics, the tuning of observation quality control, and means of taking local observation-error correlations into account. The experiments made show that the data assimilation system is able to retrieve the asymmetric information content of hypothetical SD observations at a single receiver station. Moreover, the impact of real SD observations on the humidity analysis is comparable to that of other observing systems.
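The covariance-estimation idea, comparing OmB series pairwise and binning by station separation, can be sketched on synthetic data. The network geometry, the exp(-d/L) error structure and the 100 km bins below are illustrative assumptions, not the thesis' data or method details:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical GPS network: 40 stations in a 400 km x 400 km domain.
# OmB (observation minus background) errors are generated with a
# distance-dependent covariance exp(-d/L), purely for illustration.
n_sta, n_obs, L = 40, 500, 150.0
coords = rng.uniform(0.0, 400.0, size=(n_sta, 2))
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
true_cov = np.exp(-dists / L) + 0.1 * np.eye(n_sta)   # small "nugget" term
omb = rng.multivariate_normal(np.zeros(n_sta), true_cov, size=n_obs).T

# Estimate the horizontal error covariance: average the sample covariances
# of station pairs, binned by station separation.
edges = np.arange(0.0, 500.0, 100.0)                   # 0-100, ..., 300-400 km
cov_sum = np.zeros(len(edges) - 1)
cov_cnt = np.zeros(len(edges) - 1)
for i in range(n_sta):
    for j in range(i + 1, n_sta):
        k = np.searchsorted(edges, dists[i, j], side="right") - 1
        if 0 <= k < len(cov_sum):
            cov_sum[k] += np.cov(omb[i], omb[j])[0, 1]
            cov_cnt[k] += 1
binned_cov = cov_sum / cov_cnt
```

The binned estimate recovers the imposed decay: large covariance for station pairs closer than 100 km, much smaller beyond 300 km, qualitatively mirroring the behaviour reported above.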
  • Nordbo, Annika (Helsingin yliopisto, 2012)
    Surface-atmosphere exchange of momentum, energy and atmospheric constituents affects the atmosphere, from alterations in local microclimates and mesoscale weather to climate modification. These exchange processes can be studied using direct eddy-covariance (EC) measurements of vertical turbulent transport, but the technique has not yet been readily applied in non-prevailing ecosystems. Thus, the aim of this thesis is to extend the applicability of the EC technique in two ways: to non-standard sites, and by further developing the technique itself. To reach this aim, EC measurements were performed over a boreal lake and at three urban sites in Helsinki. Long-term measurements over the lake revealed that the water below the thermocline was decoupled from the atmosphere and thus not important for atmospheric vertical turbulent fluxes. The energy exchange between the lake and the atmosphere differs from that of vegetated surfaces especially due to large nocturnal evaporation fuelled by lake-water heat storage. Long-term measurements at a semi-urban site in Helsinki showed that surface-atmosphere exchange is altered by anthropogenic activity: changes in surface cover and an additional anthropogenic heat release (13 W m-2) led to an altered surface energy balance, and anthropogenic CO2 emissions led to a large positive annual CO2 balance (1.8 kg C m-2). Intra-site and intra-city variations in surface cover led to differences in atmospheric stability and CO2 emissions. The evaluation of the EC technique demonstrated that (i) the 'energy imbalance problem' in EC measurements is not primarily surface-cover dependent, and that (ii) common errors in EC calculations can amount to almost 30% of the flux. Water vapour flux measurements with a closed-path analyser were affected by sorption: the signal arrives delayed and attenuated. A new spectral-correction method based on wavelet analysis was developed to automatically correct for this attenuation of constituent signals.
The conclusions of this thesis improve the understanding of surface-atmosphere exchange over non-standard ecosystems. The lake measurements will continue to be used for improving weather forecasts, and the results of the urban studies can be used in city planning. The EC technique itself is advanced by offering guidance on flux calculations at urban sites and by introducing a new correction algorithm.
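At its core, the EC technique referred to above computes a flux as the covariance of turbulent fluctuations of the vertical wind and a scalar (Reynolds decomposition). The synthetic 10 Hz record below is invented for illustration; real processing adds despiking, detrending, coordinate rotation and spectral corrections such as the wavelet method introduced in this thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 10 Hz eddy-covariance record (30 min): vertical wind w (m s-1)
# and a scalar concentration c. A shared updraft component makes the
# fluctuations correlated, producing an upward turbulent flux.
n = 10 * 60 * 30
updraft = rng.normal(size=n)
w = 0.3 * updraft + 0.1 * rng.normal(size=n)            # m s-1
c = 15.0 + 0.5 * updraft + 0.2 * rng.normal(size=n)     # mmol m-3

# Core of the EC method: the turbulent flux is the covariance of the
# fluctuations around the averaging-period means.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)                        # mmol m-2 s-1
```

Here the shared updraft component was constructed to give a true covariance of 0.15, and with 18 000 samples the computed flux lands close to that value.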
  • Pirazzini, Roberta (Helsingin yliopisto, 2008)
    The polar regions are an energy sink of the Earth system, as the Sun's rays do not reach the Poles for half of the year and hit them only at very low angles for the other half. In summer, solar radiation is the dominant energy source for the polar areas; therefore even small changes in the surface albedo strongly affect the surface energy balance and, thus, the speed and amount of snow and ice melting. In winter, the main heat sources for the atmosphere are the cyclones approaching from lower latitudes, and the atmosphere-surface heat transfer takes place through turbulent mixing and longwave radiation, the latter dominated by clouds. The aim of this thesis is to improve knowledge of the surface and atmospheric processes that control the surface energy budget over snow and ice, with particular focus on albedo during the spring and summer seasons, and on horizontal advection of heat, cloud longwave forcing and turbulent mixing during the winter season. The critical importance of a correct albedo representation in models is illustrated through an analysis of the causes of the errors in surface and near-surface air temperature produced in a short-range numerical weather forecast by the HIRLAM model. The daily and seasonal variability of snow and ice albedo has then been examined by analysing field measurements of albedo carried out in different environments. On the basis of the data analysis, simple albedo parameterizations have been derived, which can be implemented in thermodynamic sea ice models as well as in numerical weather prediction and climate models. Field measurements of radiation and turbulent fluxes over the Bay of Bothnia (Baltic Sea) also allowed an examination of the impact of a large albedo change during the melting season on the surface energy and ice mass budgets.
When high contrasts in surface albedo are present, as in the case of snow-covered areas next to open water, the effect of surface albedo heterogeneity on the downwelling solar irradiance under overcast conditions is very significant, although it is usually not accounted for in single-column radiative transfer calculations. To account for this effect, an effective albedo parameterization based on three-dimensional Monte Carlo radiative transfer calculations has been developed. As a potentially relevant application, the performance of the effective albedo parameterization in the ground-based retrieval of cloud optical depth is illustrated. Finally, the factors causing the large variations of surface and near-surface temperatures over the central Arctic during winter were examined. The relative importance of cloud radiative forcing, turbulent mixing and lateral heat advection for the Arctic surface temperature was quantified through the analysis of direct observations from Russian drifting ice stations, with the lateral heat advection calculated from reanalysis products.
  • Donadini, Fabio (Helsingin yliopisto, 2007)
    The geomagnetic field is one of the most fundamental geophysical properties of the Earth and has contributed significantly to our understanding of the Earth's internal structure and evolution. Paleomagnetic and paleointensity data have been crucial in shaping concepts such as continental drift and magnetic reversals, as well as in estimating when the Earth's core and the associated geodynamo processes began. The work of this dissertation is based on reliable Proterozoic and Holocene geomagnetic field intensity data obtained from rocks and archeological artifacts. New archeomagnetic field intensity results are presented for Finland, Estonia, Bulgaria, Italy and Switzerland. The data were obtained using sophisticated laboratory setups as well as various reliability checks and corrections. Inter-laboratory comparisons between three laboratories (Helsinki, Sofia and Liverpool) were performed to check the reliability of different paleointensity methods. The new intensity results fill considerable gaps in the master curves for each region investigated. In order to interpret the paleointensity data of the Holocene period, a novel and user-friendly database (GEOMAGIA50) was constructed. This provided a new tool to independently test the reliability of the various techniques and materials used in paleointensity determinations. The results show that archeological artifacts, if well fired, are the most suitable materials. Lavas also yield reliable paleointensity results, although these appear more scattered. This study also shows that reliable estimates are obtained using the Thellier methodology (and its modifications) with reliability checks. Global paleointensity curves for the Paleozoic and Proterozoic have several time gaps with few or no intensity data. To define the global intensity behavior of the Earth's magnetic field during these times, new rock types (meteorite impact rocks) were investigated. Two case histories are presented. 
The Ilyinets (Ukraine) impact melt rocks yielded a reliable paleointensity value at 440 Ma (Silurian), whereas the results from the Jänisjärvi impact melts (Russian Karelia, ca. 700 Ma) may be biased towards high intensity values because of non-ideal magnetic mineralogy. The features of the geomagnetic field at 1.1 Ga are not well defined due to problems related to reversal asymmetries observed in Keweenawan data of the Lake Superior region. In this work, new paleomagnetic, paleosecular variation and paleointensity results are reported from coeval diabases from central Arizona, helping to constrain the asymmetry. The results confirm earlier preliminary observations that the asymmetry is larger in Arizona than in the Lake Superior area. Two of the mechanisms proposed to explain the asymmetry remain plausible: plate motion and a non-dipole influence.
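The Thellier-type experiments mentioned above recover the ancient field intensity from the proportionality between the natural remanent magnetization (NRM) lost and the laboratory thermoremanence (TRM) gained in a known applied field: B_ancient = |slope| × B_lab on the so-called Arai plot. The sketch below illustrates that arithmetic on ideal synthetic data; all numbers are invented for illustration and are not values from the thesis.

```python
# Minimal sketch of the Arai-plot slope used in Thellier-type
# paleointensity estimates. Ideal (noise-free) synthetic data:
# in real experiments, pTRM checks guard against alteration.
B_LAB = 50.0        # microtesla, applied laboratory field
B_ANCIENT = 35.0    # microtesla, the "true" field we try to recover

# Fraction of the blocking-temperature spectrum unblocked at each step
steps = [0.0, 0.1, 0.25, 0.45, 0.7, 0.9, 1.0]

# NRM remaining is proportional to the ancient field, TRM gained to the lab field
nrm = [B_ANCIENT * (1.0 - f) for f in steps]
trm = [B_LAB * f for f in steps]

# Least-squares slope of NRM versus TRM (exactly linear for ideal data)
n = len(steps)
mx, my = sum(trm) / n, sum(nrm) / n
slope = sum((x - mx) * (y - my) for x, y in zip(trm, nrm)) \
        / sum((x - mx) ** 2 for x in trm)

b_est = abs(slope) * B_LAB
print(f"estimated paleointensity: {b_est:.1f} uT")  # -> 35.0 uT
```

With perfect data the slope is simply -B_ANCIENT/B_LAB, so the ancient intensity is recovered exactly; the reliability checks discussed in the abstract exist precisely because real samples deviate from this ideal line.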
  • Soininen, Aleksi (Helsingin yliopisto, 2001)
  • Kangas, Kaisa (Helsingin yliopisto, 2015)
    The starting point for this dissertation is the question of whether the concept of a Zariski geometry, introduced by Hrushovski and Zilber, can be generalized to the context of non-elementary classes. This leads to the axiomatization of Zariski-like structures. As our main result, we prove that if the canonical pregeometry of a Zariski-like structure is not locally modular, then the structure interprets either an algebraically closed field or a non-classical group. This is a counterpart to the result by Hrushovski and Zilber which states that an algebraically closed field can be found in a non-locally-modular Zariski geometry. It demonstrates that the concept of a Zariski-like structure captures one of the most essential features of a Zariski geometry. Finally, we give a non-trivial example by showing that the cover of the multiplicative group of an algebraically closed field of characteristic zero is Zariski-like. We define a Zariski-like structure as a quasiminimal pregeometry structure with certain properties. Instead of assuming underlying topologies, we formulate the axioms for a countable collection C of Galois definable sets that have some of the properties of the irreducible closed sets from the Zariski geometry context. Quasiminimal classes are abstract elementary classes (AECs) that arise from a quasiminimal pregeometry structure. They are uncountably categorical and have both the amalgamation property (AP) and the joint embedding property (JEP), and thus also a model-homogeneous universal monster model, which we denote by M. To adapt Hrushovski's and Zilber's proof to our setting, we first generalize Hrushovski's Group Configuration Theorem to the context of quasiminimal classes. For this, we develop an independence calculus that has all the usual properties of non-forking and works in our context. 
We then prove the group configuration theorem and apply it to find a 1-dimensional group, assuming that the canonical pregeometry obtained from the bounded closure operator is non-trivial. A field can be found under the further assumptions that M does not interpret a non-classical group and that the canonical pregeometry is not locally modular. Finally, we show that the cover of the multiplicative group of an algebraically closed field, studied by Boris Zilber and Lucy Burton among others, provides a non-trivial example of a Zariski-like structure. Burton obtained a topology on the cover by taking the sets definable by positive, quantifier-free first-order formulae as the basic closed sets. This is called the PQF-topology, and the sets that are closed with respect to it are called PQF-closed. We show that the cover becomes Zariski-like after adding names for a countable number of elements to the language. The axioms for a Zariski-like structure are then satisfied if the collection C is taken to consist of the PQF-closed sets that are definable over the empty set.
  • Sahlsten, Tuomas (Helsingin yliopisto, 2012)
    The main goal of this dissertation is to study the local distribution and irregularities of measures, mostly in the Euclidean setting. The research belongs to the field of geometric measure theory. The thesis consists of an overview and three refereed research articles. The first article concerns the relationship between the Hausdorff and packing dimensions of measures and the local distribution of measures. There are many ways to quantify local distribution; here we consider local homogeneity, conical densities and porosity. Many results for these notions of local distribution already exist, but our contribution is to generalize and simplify many of the earlier results and, most importantly, to provide a unified framework in which such results can be proved. This framework is based on local entropy averages, a recently introduced way to calculate dimensions of measures inspired by dynamical systems. In the second and third articles we consider another notion that describes the local irregularities of measures: tangent measures. Tangent measures were rigorously defined and studied by D. Preiss in 1987, and they provided a powerful tool in the study of rectifiability. In this thesis we consider the possible relationship between tangent measures and the original measure. Our motivation is to strengthen the heuristic that, in general, one cannot deduce information about the underlying measure from its tangent measures alone without further assumptions on the measure. In the second paper we construct a highly singular non-doubling measure for which every tangent measure is equivalent to Lebesgue measure. The existence of such a measure provides a natural extension of a previous result by Preiss, and it also provides a direct counterexample to the characterisation of porosity with tangent measures for general measures, which was previously unknown. 
In the third paper we prove that for a typical measure in Euclidean space, in the sense of Baire category, the set of tangent measures consists of all non-zero measures at almost every point with respect to the underlying measure. This result was already proved by T. O'Neil in his PhD thesis from 1994, but we provide another self-contained proof using different techniques. Moreover, we record previously unknown corollaries and sharpen O'Neil's result. Furthermore, we are able to use similar ideas in the setting of micromeasures, a symbolic way to define tangent measures on trees, and prove an analogous result in that setting.
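For orientation, the tangent measures discussed above can be defined, following Preiss (1987), via the blow-up maps T_{x,r}(y) = (y - x)/r; this is the standard definition, recalled here for the reader rather than taken from the thesis text.

```latex
% Blow-up of \mu at x at scale r: the push-forward satisfies
% (T_{x,r})_{\#}\mu(A) = \mu(x + rA).
% A non-zero Radon measure \nu is a tangent measure of \mu at x,
% written \nu \in \mathrm{Tan}(\mu, x), if there exist radii
% r_i \downarrow 0 and normalising constants c_i > 0 such that
\[
  c_i \,(T_{x,r_i})_{\#}\,\mu \;\rightharpoonup\; \nu
  \qquad \text{in the weak-\(*\) sense as } i \to \infty .
\]
```

The typicality result of the third paper thus says that for a generic measure, this limiting procedure loses essentially all information: every non-zero measure arises as such a limit at almost every point.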
  • Kesälä, Meeri (Helsingin yliopisto, 2006)
    The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AECs), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
  • Uusi-Simola, Jouni (Helsingin yliopisto, 2009)
    Boron neutron capture therapy (BNCT) is a form of chemically targeted radiotherapy that utilises the high neutron capture cross-section of the boron-10 isotope to achieve a preferential dose increase in the tumour. BNCT dosimetry poses a special challenge because the radiation dose absorbed by the irradiated tissues consists of several different dose components. Dosimetry is important because the effect of the radiation on the tissue is correlated with the radiation dose. Consistent and reliable radiation dose delivery and dosimetry are thus basic requirements for radiotherapy. The international recommendations for radiotherapy dosimetry are not directly applicable to BNCT. The existing dosimetry guidance for BNCT provides recommendations but also calls for the investigation of complementary methods for comparison and improved accuracy. In this thesis the quality assurance and stability measurements of the neutron beam monitors used in dose delivery are presented. The beam monitors were found to be unaffected by the presence of a phantom in the beam, and the effect of the reactor core power distribution was less than 1%. The weekly stability test with activation detectors has generally been reproducible within the recommended tolerance value of 2%. An established toolkit for the determination of the dose components in epithermal neutron beams is presented and applied in an international dosimetric intercomparison. The quantities (neutron flux, fast neutron dose and photon dose) measured by the participating groups were generally in agreement within the stated uncertainties. However, the uncertainties were large, ranging from 3% to 30% (1 standard deviation), emphasising the importance of dosimetric intercomparisons if clinical data are to be compared between different centers. Measurements with the Exradin type 2M ionisation chamber have been repeated in the epithermal neutron beam in the same measurement configuration over the course of 10 years. 
The presented results exclude the severe sensitivity changes to thermal neutrons that have been reported for this type of chamber. Microdosimetry and polymer gel dosimetry are studied as complementary methods for epithermal neutron beam dosimetry. For microdosimetry, comparison with ionisation chamber results and computer simulation showed that the photon dose measured with microdosimetry was lower than with the two other methods, although the disagreement was within the uncertainties. For the neutron dose, the simulation and microdosimetry results agreed within 10%, while the ionisation chamber technique gave 10-30% lower neutron dose rates than the two other methods. The response of the BANG-3 gel was found to be linear for both photon and epithermal neutron beam irradiation. The dose distribution, normalised to the dose maximum, measured by the MAGIC polymer gel agreed well with the simulated result near the dose maximum, while the spatial difference between the measured and simulated 30% isodose lines was more than 1 cm. In both the BANG-3 and MAGIC gel studies, interpretation of the results was complicated by the presence of high-LET radiation.
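The "several different dose components" mentioned above are commonly combined in BNCT into a single photon-equivalent quantity by weighting each physical component with a biological effectiveness factor. The sketch below shows only the arithmetic of that combination; the weight values and dose numbers are illustrative assumptions, not the factors or data used in the thesis.

```python
# Sketch of the photon-equivalent ("weighted") dose used in BNCT:
# each physical dose component (boron capture, thermal-neutron capture
# in nitrogen, fast neutrons, photons) is scaled by an effectiveness
# factor. The weights below are illustrative placeholders.

def weighted_dose(d_boron, d_thermal, d_fast, d_photon,
                  w_boron=3.8, w_thermal=3.2, w_fast=3.2, w_photon=1.0):
    """Return the total weighted dose, often quoted in Gy(W)."""
    return (w_boron * d_boron + w_thermal * d_thermal
            + w_fast * d_fast + w_photon * d_photon)

# Example: physical dose components (Gy) for a single point in tissue
total = weighted_dose(d_boron=5.0, d_thermal=0.5, d_fast=0.3, d_photon=1.0)
print(f"weighted dose: {total:.2f} Gy(W)")  # -> 22.56 Gy(W)
```

Because each component carries its own measurement uncertainty and weight, errors in any single component propagate directly into the weighted total, which is why the intercomparisons and complementary methods described in the abstract matter clinically.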