Faculty of Science


Recent Submissions

  • Athukorala, Kumaripaba (Helsingin yliopisto, 2016)
    We use information retrieval (IR) systems to meet a broad range of information needs, from simple ones involving day-to-day decisions to complex and imprecise information needs that cannot be easily formulated as a question. In consideration of these diverse goals, search activities are commonly divided into two broad categories: lookup and exploratory. Lookup searches begin with precise search goals and end soon after the target is reached, while exploratory searches center on learning or investigation activities with imprecise search goals. Although exploration is a prominent life activity, it is naturally challenging for users because they lack domain knowledge; at the same time, information needs are broad, complex, and subject to constant change. It is also difficult for IR systems to support exploratory searches, not least because of the complex information needs and the dynamic nature of the user, and exploration itself is hard to conceptualize distinctly. In consequence, most popular IR systems are targeted at lookup searches only. There is a clear need for better IR systems that support a wide range of search activities. The primary objective of this thesis is to enable the design of IR systems that support exploratory and lookup searches equally well. I approached this problem by modeling information search as a rational adaptation of interactions, which aids in clear conceptualization of exploratory and lookup searches. Building on an existing framework for the examination of adaptive interaction, it is assumed that three main factors influence how we interact with search systems: the ecological structure of the environment, our cognitive and perceptual limits, and the goal of optimizing the tradeoff between information gain and time cost. This thesis contributes three models developed within this adaptive interaction framework, to 1) predict evolving information needs in exploratory searches, 2) distinguish between exploratory and lookup tasks, and 3) predict the emergence of adaptive search strategies. It concludes with the development of an approach that integrates the proposed models for the design of an IR system that provides adaptive support for both exploratory and lookup searches. The findings confirm that information search can be modeled as adaptive interaction. The models developed in the thesis have been empirically validated through user studies with an adaptive search system that demonstrates the practical implications of the models for supporting several types of searches. The studies conducted with the adaptive search system further confirm that IR systems can improve information search performance by dynamically adapting to the task type. The thesis thus contributes an approach that could prove fruitful for future IR systems in offering more efficient and less challenging search experiences.
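As a toy illustration of what distinguishing between the two task types can look like in practice, the hypothetical sketch below thresholds a weighted combination of interaction features. It is not the model developed in the thesis; every feature name, weight, and threshold is invented for illustration.

```python
# A minimal, hypothetical sketch of separating lookup from exploratory
# sessions using interaction features; features and weights are invented.
from dataclasses import dataclass

@dataclass
class Session:
    queries_issued: int   # query reformulations within the session
    mean_dwell_s: float   # mean time spent per inspected result (seconds)
    scroll_depth: float   # fraction of the result list inspected (0..1)

def classify(s: Session) -> str:
    # Exploratory sessions tend to show many reformulations, long dwell
    # times and deep inspection of results; lookup sessions are short
    # and targeted. Weights and threshold are made up for illustration.
    score = (0.4 * min(s.queries_issued / 5.0, 1.0)
             + 0.3 * min(s.mean_dwell_s / 60.0, 1.0)
             + 0.3 * s.scroll_depth)
    return "exploratory" if score > 0.5 else "lookup"

print(classify(Session(1, 8.0, 0.1)))    # -> lookup
print(classify(Session(7, 45.0, 0.8)))   # -> exploratory
```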
  • Miihkinen, Santeri (Helsingin yliopisto, 2016)
    The topic of this dissertation lies at the intersection of analytic function theory and operator theory. In the thesis, compactness and structural properties of a class of Volterra-type (integral) operators acting on analytic function spaces are investigated. The Volterra-type operator is obtained by integrating a product of two analytic functions, where one of these functions, the so-called symbol of the operator, is fixed and the other one is the variable. This integral operator was introduced by C. Pommerenke in 1977 in connection with the exponential integrability of BMOA functions. Systematic research on Volterra-type operators was initiated by Aleman and Siskakis in the mid-1990s, when they characterized the boundedness and compactness of these operators on the Hardy spaces and weighted Bergman spaces. In the first article of the thesis, we derive estimates for the essential and weak essential norms of a Volterra-type operator in terms of its symbol when the operator acts on the Hardy spaces, BMOA and VMOA. The essential and weak essential norms of a linear operator are its distances from the compact and weakly compact operators, respectively. In particular, it follows from our estimates that the compactness and weak compactness of the Volterra-type operator coincide when its domain is the non-reflexive Hardy space, BMOA, or VMOA. In the second article, the notion of strict singularity of a linear operator is investigated in the case of the Volterra-type operator acting on the Hardy spaces. An operator between Banach spaces is strictly singular if its restriction to any closed infinite-dimensional subspace is not a linear isomorphism onto its range. We construct an isomorphic copy M of the sequence space of p-summable sequences and show that a non-compact Volterra-type operator restricted to M is a linear isomorphism onto its range. This implies that the strict singularity and compactness of this operator coincide in the Hardy space case. In the third article, we provide estimates for the operator norms and essential norms of the Volterra-type operator acting between weighted Bergman spaces, where the weight function satisfies a doubling condition.
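For reference, the operator discussed above has the following standard form (a textbook definition, not quoted from the thesis): for a fixed analytic symbol g on the unit disc,

```latex
% Volterra-type operator T_g with fixed analytic symbol g on the unit disc
T_g f(z) = \int_0^{z} f(\zeta)\, g'(\zeta)\, \mathrm{d}\zeta ,
\qquad z \in \mathbb{D}.
```

In this notation, the Aleman-Siskakis characterization mentioned above reads: T_g is bounded on the Hardy space H^p exactly when g belongs to BMOA, and compact exactly when g belongs to VMOA.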
  • Hakala, Jani (Helsingin yliopisto, 2016)
    Atmospheric aerosols are omnipresent. They affect health via inhalation or skin and eye contact, and reduce visibility. They also contribute to climate directly by absorbing and scattering solar radiation, and indirectly by acting as cloud condensation nuclei, thus affecting the cloud formation process. The climate has been affected by human activity since preindustrial times. Both anthropogenic aerosol particle and greenhouse gas emissions have increased drastically since the industrial revolution. As far as is known, most anthropogenic aerosol particles, excluding black carbon or soot particles, have a cooling effect on climate, partially negating the warming effect of increased greenhouse gas emissions. All in all, the effects of aerosol particles, be their origin anthropogenic or not, are considered to cause the largest uncertainty in climate models, making aerosol studies crucial for more accurate future climate predictions. Particle size is one of the most important properties to know, as it has a great impact on the effects and fate of aerosol particles in the atmosphere or inside our respiratory system. The hygroscopicity of aerosol particles, or their ability to absorb water, determines the size of the particles in different relative humidity (RH) conditions. Dry water-soluble salt particles can double their diameter at an RH of 90%, whereas soot particles and fresh organics experience little to no growth. By studying the hygroscopic growth of aerosol particles, we gain important knowledge of the particle size and phase state in varying RH conditions, the chemical composition, and the mixing state, both external and internal. This thesis focuses on measuring the hygroscopic properties of aerosol particles. Most of the hygroscopicity studies contained here were conducted using the volatility-hygroscopicity tandem differential mobility analyzer (VH-TDMA) that we built within our group at the University of Helsinki. The main conclusions we arrived at are: 1) The VH-TDMA we built is indeed an accurate and versatile tool for aerosol hygroscopicity and volatility studies. It is capable of determining the external mixing state of aerosol particles (in terms of hygroscopicity and volatility) and is a good indirect method for estimating the chemical composition of aerosol particles. 2) Hygroscopicity studies conducted at sub- and supersaturation conditions may yield significantly different results when measuring organic aerosols. The hygroscopic growth measured in supersaturation may greatly overestimate the growth in subsaturation, which in turn overestimates the scattering and the cooling effect of aerosols on climate. 3) The lensing effect of refractive material on the surface of soot particles and its absorption enhancement may have been exaggerated in previous studies. Our field measurements showed an average enhancement of 6%, while previous estimates have been as high as 200%. Lastly, one of the key points of this thesis is to promote the use of the H-TDMA technique in the field of aerosol science. The technique has largely been replaced by the use of cloud condensation nuclei counters (CCNC). The H-TDMA technique is far more accurate and versatile, and, in my opinion, it is easier to measure in subsaturation and predict the outcome in supersaturation than vice versa.
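To make the growth-factor language above concrete, here is a minimal sketch relating an H-TDMA measurement to the single-parameter hygroscopicity kappa of Petters and Kreidenweis (2007), a standard parameterization rather than anything taken from the thesis; the Kelvin (curvature) correction is deliberately neglected and the example numbers are invented.

```python
# Minimal sketch: H-TDMA growth factor -> kappa (Petters & Kreidenweis
# 2007 single-parameter framework), Kelvin correction neglected.

def growth_factor(d_wet_nm: float, d_dry_nm: float) -> float:
    """GF = humidified diameter / dry diameter."""
    return d_wet_nm / d_dry_nm

def kappa_from_gf(gf: float, rh: float) -> float:
    """Invert kappa-Koehler with water activity approximated by RH/100."""
    aw = rh / 100.0
    return (gf**3 - 1.0) * (1.0 - aw) / aw

gf = growth_factor(d_wet_nm=200.0, d_dry_nm=100.0)  # salt-like doubling
print(f"GF = {gf:.2f}, kappa ~ {kappa_from_gf(gf, rh=90.0):.2f}")
```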
  • Kelaranta, Anna (Helsingin yliopisto, 2016)
    Organ dose is the absorbed radiation energy from ionizing radiation to an organ, divided by the organ mass. Organ doses cannot be measured directly in the patient; their determination requires dose measurements in anthropomorphic patient models, i.e. phantoms, or Monte Carlo simulations. Monte Carlo simulations can be performed, for example, by using computational phantoms or the patient's computed tomography (CT) images. Organ doses can be estimated from measurable dose quantities, such as air kerma, kerma-area product and the volume-weighted CT dose index, by using suitable conversion coefficients. The conversion coefficient is the organ dose divided by the measured or calculated examination-specific dose quantity. According to current knowledge, the probability of radiation-induced stochastic effects, which include cancer risk and the risk of hereditary effects, increases linearly as a function of the radiation dose. The organ dose is a better quantity for estimating patient-specific risk than the effective dose, which is meant to be used only for populations and does not consider patient age or gender. Moreover, the tissue weighting factors used in the effective dose calculation are based on whole-body irradiations, whereas in X-ray examinations only a part of the patient is exposed to radiation. The phantoms used in medical dosimetry are either computational or physical, and computational phantoms are further divided into mathematical and voxel phantoms. Phantoms ranging from simplified to as realistic as possible have been developed to simulate different targets, but the organ doses determined with them can differ considerably from the real organ doses of the patient. There are also standard and reference phantoms in use, which offer a dose estimate for a so-called average patient. Due to the considerable variation in patient anatomies, the real dose might differ from the dose to a standard or reference phantom. The aim of this thesis was to determine organ doses based on dose measurements and Monte Carlo simulations in four X-ray imaging modalities: general radiography, CT, mammography and dental radiography. The effect of patient and phantom thickness and radiation quality on the organ doses in a projection X-ray examination of the thorax was studied via Monte Carlo simulations using both mathematical phantoms and patient CT images. The effect of breast thickness on the mean glandular doses (MGDs) was determined based on measurements with phantoms of different thicknesses and on diagnostic and screening data collected from patient examinations, and the radiation qualities used in patient and phantom exposures were studied. For fetal dose estimation, fetal dose conversion coefficients were determined based on phantom measurements in CT and dental radiography examinations. Additionally, the effect of lead shields on fetal and breast doses was determined in dental examinations. The difference between Monte Carlo simulated organ doses in patients and in mathematical phantoms was large, up to 55% for the examined organs in projection imaging. In mammographic examinations, the difference between MGDs calculated from the collected patient data and from phantom measurements was up to 30%. In mammography, patient dose data cannot be replaced by phantom measurements. The properties and limitations of the phantoms must be known when they are used. The estimation of the fetal dose based on conversion coefficients requires an understanding of the cases where the conversion coefficients are applicable. When used correctly, they provide a simple method for dose estimation in which the application-specific dose quantity can be taken into account. The conversion coefficients determined in this thesis can be used to estimate the fetal dose in CT examinations based on the volume-weighted CT dose index (CTDIvol), and in dental examinations based on the dose-area product (DAP). In projection imaging, the lung and breast doses decreased as the patient's anterior-posterior thickness increased, but in mammography, the MGDs increased as the compressed breast thickness increased. In CT examinations, the fetal dose remained almost constant in examinations where the fetus was completely within the primary radiation beam. When the fetus was outside the primary beam, the fetal dose increased exponentially with decreasing distance of the fetus from the scan range. As a function of the half value layer (HVL), the conversion coefficients in the studied projection imaging examination were more convergent than as a function of the tube voltage. The HVL alone describes the radiation quality better than the tube voltage alone, which also requires the total filtration to be specified. In mammography, it is possible to irradiate a phantom and a patient with the same equivalent thickness using different radiation qualities when automatic exposure control is used. Despite the relatively large shielding effect achieved with lead shielding in dental imaging, the fetal dose without lead shielding and the related exposure-induced increase in the risk of childhood cancer death are minimal (less than 10 µGy and 10^-5 %), so there is no need for abdominal shielding. The exposure-induced increase in the risk of breast cancer death is of the same order of magnitude as the increase in the risk of childhood cancer death, so breast shielding was also considered unnecessary. Most importantly, a clinically justified dental radiographic examination must never be avoided or postponed because of pregnancy.
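The conversion-coefficient method defined above reduces to a single multiplication once the coefficient is known. A minimal sketch with purely hypothetical coefficient and scan values (the actual coefficients are the ones determined in the thesis, not reproduced here):

```python
# Illustration of the conversion-coefficient approach: organ dose =
# coefficient x measured examination-specific dose quantity.
# All numbers below are hypothetical placeholders.

def organ_dose(conversion_coefficient: float, dose_quantity: float) -> float:
    """e.g. fetal dose [mGy] from CTDIvol [mGy], or from DAP for dental."""
    return conversion_coefficient * dose_quantity

ctdi_vol = 10.0   # mGy, volume-weighted CT dose index of the scan
c_fetus = 0.9     # mGy per mGy, hypothetical coefficient, fetus in beam
print(f"fetal dose ~ {organ_dose(c_fetus, ctdi_vol):.1f} mGy")
```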
  • Marcozzi, Matteo (Helsingin yliopisto, 2016)
    By time-dependent stochastic systems we indicate effective models for physical phenomena where the stochasticity takes into account some features whose analytic control is unattainable and/or unnecessary. In particular, we consider two classes of models which are characterized by the different role of randomness: (1) deterministic evolution with random initial data; (2) truly stochastic evolution, namely driven by some sort of random force, with either deterministic or random initial data. As an example of setting (1), in this thesis we deal with the discrete nonlinear Schrödinger equation (DNLS) with random initial data, and we mainly focus on its applications concerning the study of transport coefficients in lattice systems. Since the seminal work by Green and Kubo in the mid-1950s, when they discovered that transport coefficients for simple fluids can be obtained through a time integral over the respective total current correlation function, the mathematical physics community has been trying to rigorously validate these predictions and extend them also to solids. In particular, the main technical difficulty is to obtain at least a reliable asymptotic form of the time behaviour of the Green-Kubo correlation. One possible approach to this is kinetic theory, a branch of modern mathematical physics that stemmed from the challenge of deriving the classical laws of thermodynamics from microscopic systems. Nowadays kinetic theory deals with models whose dynamics is transport dominated, in the sense that the solutions to the kinetic equations, whose prototype is the Boltzmann equation, typically correspond to ballistic motion interrupted by collisions whose frequency is of order one on the kinetic space-time scale. Referring to the articles in the thesis by Roman numerals [I]-[V], in [I] and [II] we build some technical tools, namely Wick polynomials and their connection with cumulants, to pave the way towards the rigorous derivation of a kinetic equation called the Boltzmann-Peierls equation from the DNLS model. The paper [III] can be contextualized in the same framework of kinetic predictions for transport coefficients. In particular, we consider the velocity flip model, which belongs to family (2) of our previous classification, since it consists of a particle chain with harmonic interaction and a stochastic term which flips the velocity of the particles. In [III] we perform a detailed study of the position-momentum correlation matrix via two different methods and we obtain an explicit formula for the thermal conductivity. Moreover, in [IV] we consider the Lorentz model perturbed by an external magnetic field, which can be categorized in class (1): it is a gas of non-interacting particles colliding with obstacles located at random positions in the plane. Here we show that under a suitable scaling limit the system is described by a kinetic equation where the magnetic field affects only the transport term, but not the collisions. Finally, in [V] we study a generalization of the famous Kardar-Parisi-Zhang (KPZ) equation, which falls into category (2), being a nonlinear stochastic partial differential equation driven by a space-time white noise. Spohn has recently introduced a generalized vector-valued KPZ equation in the framework of nonlinear fluctuating hydrodynamics for anharmonic particle chains, a research field which is again closely connected to the investigation of transport coefficients. The problem with the KPZ equation is that it is ill-posed. However, in 2013 Hairer succeeded in giving a rigorous mathematical meaning to the solution of the KPZ equation via an approximation scheme involving the renormalization of the nonlinear term by a formally infinite constant. In [V] we tackle a vector-valued generalization of the KPZ equation and prove local-in-time well-posedness by using a technique inspired by the so-called Wilsonian Renormalization Group.
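For orientation, the Green-Kubo prediction mentioned above has the following schematic form (the standard textbook shape; the prefactor depends on the particular transport coefficient and on conventions, so it is omitted here):

```latex
% Transport coefficient as the time integral of the equilibrium
% autocorrelation function of the associated total current J(t).
\kappa \;\propto\; \int_{0}^{\infty}
  \big\langle J(t)\, J(0) \big\rangle \, \mathrm{d}t .
```

The technical difficulty named in the abstract is precisely that the long-time decay of the correlation under the integral must be controlled for the integral to make sense.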
  • Tukiainen, Simo (Helsingin yliopisto, 2016)
    Measurements of the Earth's atmosphere are crucial for understanding the behavior of the atmosphere and the underlying chemical and dynamical processes. Adequate monitoring of stratospheric ozone and greenhouse gases, for example, requires continuous global observations. Although expensive to build and complicated to operate, satellite instruments provide the best means for global monitoring. Satellite data are often supplemented by ground-based measurements, which have limited coverage but typically provide more accurate data. Many atmospheric processes are altitude-dependent. Hence, the most useful atmospheric measurements provide information about the vertical distribution of trace gases. Satellite instruments that observe the Earth's limb are especially suitable for measuring atmospheric profiles. Satellite instruments looking down from orbit, and remote sensing instruments looking up from the ground, generally provide considerably less information about the vertical distribution. Remote sensing measurements are indirect. The instruments observe electromagnetic radiation, but it is ozone, for example, that we are interested in. Interpreting the measured data requires a forward model that contains the physical laws governing the measurement. Furthermore, to infer meaningful information from the data, we have to solve the corresponding inverse problem. Atmospheric inverse problems are typically nonlinear and ill-posed, requiring numerical treatment and prior assumptions. In this work, we developed inversion methods for the retrieval of atmospheric profiles. We used measurements by the Optical Spectrograph and InfraRed Imager System (OSIRIS) on board the Odin satellite, Global Ozone Monitoring by Occultation of Stars (GOMOS) on board the Envisat satellite, and a ground-based Fourier transform spectrometer (FTS) at Sodankylä, Finland. For OSIRIS and GOMOS, we developed an onion peeling inversion method and retrieved ozone, aerosol, and neutral air profiles. From the OSIRIS data, we also retrieved NO2 profiles. For the FTS data, we developed a dimension reduction inversion method and used Markov chain Monte Carlo (MCMC) statistical estimation to retrieve methane profiles. The main contributions of this work are the retrieved OSIRIS and GOMOS satellite data sets and the novel retrieval method applied to the FTS data. Long satellite data records are useful for trend studies and for distinguishing between anthropogenic effects and natural variations. Before this work, GOMOS daytime ozone profiles were missing from scientific studies because the operational GOMOS daytime occultation product contains large biases. The GOMOS bright limb ozone product vastly improves the stratospheric part of the GOMOS daytime ozone. The dimension reduction method, in turn, is a promising new technique for the retrieval of atmospheric profiles, especially when the measurement contains little information about the vertical distribution of gases.
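The onion peeling idea mentioned above exploits the triangular structure of limb geometry: the ray with the highest tangent altitude sees only the topmost shell, so the profile can be solved top-down, one shell at a time. A self-contained toy version follows (illustrative spherical-shell geometry and synthetic numbers; not the OSIRIS/GOMOS retrieval code):

```python
# Toy onion peeling: concentric shells with boundary radii
# r[0] > r[1] > ... > r[n]; ray i is tangent to r[i+1], so the
# path-length matrix is triangular and solvable by forward substitution.
import numpy as np

def path_matrix(r: np.ndarray) -> np.ndarray:
    n = len(r) - 1
    L = np.zeros((n, n))
    for i in range(n):                      # ray tangent at r[i+1]
        for k in range(i + 1):              # shells above the tangent point
            L[i, k] = 2.0 * (np.sqrt(r[k]**2 - r[i + 1]**2)
                             - np.sqrt(r[k + 1]**2 - r[i + 1]**2))
    return L

def onion_peel(L: np.ndarray, y: np.ndarray) -> np.ndarray:
    x = np.zeros_like(y)
    for i in range(len(y)):                 # topmost ray first
        x[i] = (y[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

r = np.linspace(6471.0, 6371.0, 11)         # shell boundaries [km], top down
x_true = np.linspace(1.0, 5.0, 10)          # synthetic density profile
L = path_matrix(r)
print(np.allclose(onion_peel(L, L @ x_true), x_true))  # -> True
```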
  • Gozaliasl, Ghassem (Helsingin yliopisto, 2016)
    Galaxy formation is one of the most active and evolving fields of research in observational astronomy and cosmology. While we know today which physical processes qualitatively regulate galaxy evolution, the precise timing and behaviour of these processes and their relations to host environments remain unclear. Many interesting questions are still debated: What regulates galaxy evolution? When do massive galaxies assemble their stellar mass, and how? Where does this mass assembly occur? This thesis studies the formation and evolution of central galaxies in groups and clusters over the last 9 billion years in an attempt to answer these questions. Two important properties of galaxy clusters and groups make them ideal systems for studying cosmic evolution. First, they are the largest structures in the Universe that have undergone gravitational relaxation and reached virial equilibrium. By comparing mass distributions among nearby and early-Universe clusters, we can measure the rate of structure growth and formation. Second, the gravitational potential wells of clusters are deep enough that they retain all of the cluster material, despite outflows driven by supernovae (SNe) and active galactic nuclei (AGN). Thus, the cluster baryons can provide key information on the essential mechanisms related to galaxy formation, including star formation efficiency and the impact of AGN and SNe feedback on galaxy evolution. This thesis reports the identification of a large sample of galaxy groups, including their optical and X-ray properties. It includes several refereed journal articles, of which five have been included here. In the first article (Gozaliasl et al. 2014a), we study the distribution and development of the magnitude gap between the brightest group galaxies and their brightest satellites in a well-defined mass-selected sample of 129 X-ray galaxy groups at 0.04 < z < 1.23 in XMM-LSS. We investigate the relation between the magnitude gap and the absolute r-band magnitude of the central group galaxy and its brightest satellite. Our observational results are compared to the predictions of three semi-analytic models (SAMs) based on the Millennium simulation. We show that the fraction of galaxy groups with large magnitude gaps (e.g. fossils) increases significantly with decreasing redshift, by a factor of ∼ 2. In contrast to the model predictions, we show that the intercept of the relation between the absolute magnitude of the brightest group galaxies (BGGs) and the magnitude gap becomes brighter with increasing redshift. We attribute this evolution to the presence of a younger population among the observed BGGs. In the second article (Gozaliasl et al. 2016), we study the distribution and evolution of the star formation rate (SFR) and the stellar mass of BGGs over the last 9 billion years, using a sample of 407 BGGs selected from X-ray galaxy groups at 0.04 < z < 1.3 in the XMM-LSS, COSMOS, and AEGIS fields. We find that the mean stellar mass of BGGs grows by a factor of 2 from z = 1.3 to the present day and that the stellar mass distribution evolves towards a normal distribution with cosmic time. We find that the BGGs are not completely inactive systems, as the SFR of a considerable number of BGGs ranges from 1 to 1000 M_sun/yr. In the third article (Gozaliasl et al. 2014b), we study the evolution of the halo mass, magnitude gap, and composite (stacked) luminosity function of galaxies in groups classified by the magnitude gap (as fossils, normal/non-fossils, and random groups) using the Guo et al. (2011) SAM. We find that galaxy groups with large magnitude gaps, i.e. fossils (∆M1,2 ≥ 2 mag), form earlier than non-fossil systems. We measure the evolution of the Schechter function parameters, finding that M∗ for fossils grows by at least +1 mag, in contrast to non-fossils, decreasing the number of massive galaxies with redshift. The faint-end slope (α) of both fossils and non-fossils remains constant with redshift. However, φ∗ grows significantly for both types of groups, changing the number of galaxies with cosmic time. We find that the number of dwarf galaxies in fossils shows no significant evolution in comparison to non-fossils and conclude that the changes in the number of galaxies (φ∗) in fossils are mainly due to changes in the number of massive (M∗) galaxies. Overall, these results indicate that the giant central galaxies in fossils form by multiple mergers of massive galaxies. In the fourth article (Khosroshahi et al. 2014), we analyse the observed X-ray, optical, and spectroscopic data of four optically selected fossil groups at z ∼ 0.06 in 2dFGRS to examine the possibility that a galaxy group which hosts a giant luminous elliptical galaxy with a large magnitude gap can be associated with diffuse X-ray radiation, similar to that of fossil groups. The X-ray and optical properties of these groups indicate the presence of extended X-ray emission from the hot intra-group gas. We find that one of them is a fossil group, and the X-ray luminosity of two groups is close to the defined threshold for fossil groups. One of the groups is ruled out due to optical contamination in the input sample. In the fifth paper (Khosroshahi et al. 2015), we analyse data from multiwavelength observations of galaxy groups to probe statistical predictions of the SAMs. We show that the magnitude gap can be used as an observable parameter to study groups and to probe galaxy formation models.
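For reference, the Schechter function whose parameters (φ∗, M∗, α) are tracked above has the following standard form in absolute magnitudes (a textbook formula, not quoted from the articles):

```latex
% Schechter luminosity function in absolute magnitudes:
% phi* sets the normalisation, M* the characteristic magnitude,
% alpha the faint-end slope.
\phi(M)\,\mathrm{d}M = 0.4\ln(10)\,\phi^{*}
  \left[10^{\,0.4\,(M^{*}-M)}\right]^{\alpha+1}
  \exp\!\left[-10^{\,0.4\,(M^{*}-M)}\right]\mathrm{d}M .
```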
  • Gao, Yao (Helsingin yliopisto, 2016)
    Interactions between the land surface and climate are complex, as a range of physical, chemical and biological processes take place. Changes in the land surface or the climate can affect the water, energy and carbon cycles in the Earth system. This thesis discusses a number of critical issues concerning land-atmosphere interactions in the boreal zone, which is characterised by vast areas of peatlands, extensive boreal forests and a long snow cover period. Regional climate modelling and land surface modelling were used as the main tools for this study, in conjunction with observational data for evaluation. First, to better describe the present-day land cover in the regional climate model, we introduced an up-to-date, high-resolution land cover map to replace the inaccurate and outdated default land cover map for Fennoscandia. Second, in order to provide background information for future forest management actions for climate change mitigation, we studied the biogeophysical effects on the regional climate of peatland forestation, which has been the dominant land cover change in Finland over the last century. Moreover, climate variability can influence the land surface. Although drought is uncommon in northern Europe, an extreme drought occurred in the summer of 2006 in Finland and induced visible drought symptoms in boreal forests. Thus, we assessed a set of drought indicators against drought impact data from boreal forests in Finland to indicate summer drought in boreal forests. Finally, the impacts of summer drought on the water use efficiency of boreal Scots pine forests were studied to gain a deeper understanding of carbon and water dynamics in boreal forest ecosystems. In summary, the key findings of this thesis include: 1) The updated land cover map led to a slight decrease in the biases of the simulated climate conditions. It is expected that the model performance could be improved by further development of the model physics. 2) Peatland forestation in Finland can induce a warming effect in spring of up to 0.43 K and a slight cooling effect in the growing season of less than 0.1 K, due to decreased surface albedo and increased evapotranspiration, respectively. Corresponding to the spring warming, the snow clearance day was advanced by up to 5 days in the 15-year mean. 3) Of the assessed drought indicators, the soil moisture index (SMI) was the most capable of capturing the spatial extent of the observed forest damage induced by the extreme drought of 2006 in Finland. Thus, a land surface model capable of reliable predictions of regional soil moisture is important for future drought predictions in the boreal zone. 4) The inherent water use efficiency (IWUE) showed an increase during drought at the ecosystem level, and IWUE was found to be more appropriate than the ecosystem water use efficiency (EWUE) for indicating the impacts of drought on ecosystem functioning. The combined effects of soil moisture drought and atmospheric drought on stomatal conductance have to be taken into account in land surface models at the global scale when simulating drought effects on plant functioning.
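For reference, the two efficiency measures compared in point 4 are commonly defined as follows (standard flux-literature definitions, with GPP the gross primary production, ET the evapotranspiration and VPD the vapour pressure deficit; exact conventions vary slightly between studies):

```latex
\mathrm{EWUE} \;=\; \frac{\mathrm{GPP}}{\mathrm{ET}}, \qquad
\mathrm{IWUE} \;=\; \frac{\mathrm{GPP}\cdot\mathrm{VPD}}{\mathrm{ET}} .
```

Multiplying by VPD is what makes IWUE "inherent": it compensates for the direct effect of atmospheric dryness on transpiration, which is why IWUE tracks drought impacts on ecosystem functioning more faithfully than EWUE.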
  • Korpisalo, Arto Leo (Helsingin yliopisto, 2016)
    The purpose of this thesis is to present the essential issues concerning the radio imaging method (RIM) and attenuation measurements. Although most of the issues discussed in this thesis are in no sense novel, the thesis provides an overview of the fundamental aspects of RIM and presents novel results from the combination of RIM with other borehole methods. About 2.6 million years ago, early humans perhaps accidentally discovered that sharp stone flakes made it easier to cut the flesh from around bones. From sharp flakes to the first handaxes took hundreds of thousands of years, and development was thus extremely slow. Alessandro Volta's invention of the voltaic pile (battery) in 1800 started a huge journey, and only one hundred years later humans had all the necessary means to start examining the Earth's subsurface. Since then, development has been rapid, resulting in numerous methods (e.g. magnetic, gravimetric, electromagnetic and seismic) and techniques for resolving the Earth's treasures. The theoretical basis for the radio imaging method was established long before the method was utilized for exploration purposes. RIM is a geotomographic electromagnetic method in which the transmitter and receivers are placed in different boreholes to delineate electric conductors between the boreholes. It is a frequency-domain method, and the continuous wave technique is usually utilized. One of the pioneers was L.G. Stolarczyk in the USA in the 1980s. In the former Soviet Union, interest in RIM was high in the late 2000s. Our present device is also Russian based. Furthermore, in South Africa and Australia, a considerable amount of effort has been invested in RIM. The RIM device is examined briefly. It is the essential part of our RIM system, referred to as electromagnetic radiofrequency echoing (EMRE). The idea behind the device is excellent. However, several poor solutions have been utilized in its construction, many of which have possibly resulted from the lack of good electronic components. The overall electronic construction of the whole device is very complicated. At least two essential properties are lacking, namely circuits for measuring the input impedances of the antennas and the return loss needed to obtain the actual output power. Of course, the digitalization of data in the borehole receiver could give additional benefits in data handling. The measurements can be monitored in real time on a screen, thus allowing the operator to gain initial insights into the subsurface geology at the site and to modify the measurement plan if necessary. Even today, no practical forward modelling tool for examining the behaviour of electromagnetic waves in the Earth's subsurface is available for the RIM environment, and interpretation is thus traditionally based on linear reconstruction techniques. Assuming low contrast and straight-ray conditions can generally provide good and rapid results, even during the measurement session. Electrical resistivity logging is usually one of the first methods used in a new borehole. Comparing the logging data with measured amplitude data can reveal situations where a nearby and relatively limited conductive formation is mostly responsible for the high attenuation levels between boreholes, which can hence be taken into account in the interpretation. The transient electromagnetic method (TEM) functions in the time domain. TEM is also a short-range method and can very reliably reveal nearby conductors. RIM and TEM data from the ore district coincide well. These issues are considered in detail in Publication I. The functioning of an antenna is highly dependent on the environment in which the antenna is placed. The primary task of the antenna is to radiate and receive electromagnetic energy; in other words, the antenna is a transducer between the generator and the environment. A simple bare wire can serve as a diagnostic probe to detect conductors in the borehole vicinity. However, borehole antennas are generally highly insulated to prevent the leakage of current into the borehole, and at the same time the insulation reduces the sensitivity of the antenna current to the ambient medium, especially as the electric properties of the insulation and the surrounding material differ significantly. Monitoring the input impedance of the antenna could nevertheless help in estimating its effectiveness in the borehole; this property is lacking in the present device. The scattering parameter s11 defines the relationship between the reflected and incident voltage, or in other words it provides information on the impedance matching chain. The behaviour of the impedance of the insulated antennas in different borehole conditions was estimated using simple analytical methods, such as the models of Wu, King and Giri (WKG) and Chen and Warne (CHEN), and highly sophisticated numerical software such as FEKO from EM Software & Systems (Altair). According to the results, our antennas maintain their effectiveness and feasibility over the whole frequency band (312.5−2500 kHz) utilized by the device. However, the highest frequency (2500 kHz) may suffer from varying ambient conditions. Resolution is closely related to frequency: higher frequencies give better resolution but at the expense of range. These issues are clarified in Publication II. Electromagnetic methods are based on the fact that earth materials may have large contrasts in their electrical properties. A geotomographic RIM survey can have several benefits over ground-level EM sounding methods. When the transmitter is in the borehole, boundary effects due to the ground surface and the strong attenuation arising from soils are easily eliminated. A borehole survey also brings the survey closer to the targets, and higher frequencies can be used, which means better resolution. Viewing the target from different angles and directions also means better reconstruction results. The fundamental principles of electromagnetic fields are explained in order to distinguish diffusive movement (strongly attenuating propagation) from wave propagation and to give a good conception of the possible transillumination depths of RIM. Transillumination depths of up to 1000 m are possible in a highly resistive environment using the lowest measurement frequency (312.5 kHz). In this context, one interesting and challenging case study is also presented from the area of a repository for spent nuclear fuel in Finland. The task was to examine the usefulness of RIM in the area and to determine how well the apparent resistivity could be associated with the structural integrity of the rock. The measurements were successful and the results convinced us of the potential of RIM. Publication III is related to these issues. In Finland, active use of RIM started in 2005, when Russian RIM experts jointly with GTK carried out RIM measurements at Olkiluoto. The results are presented in Publication IV. In this pioneering work, extensive background information (e.g. versatile geophysical borehole logging, optical imaging, 3D vertical seismic profile (VSP) and single-hole radar reflection measurements) was available from the site. The comparability of the results was good; for example, low-resistivity or highly attenuating areas near boreholes identified by the RIM measurements coincided well with the resistivity logging and radar results. Electric mise-à-la-masse and high-frequency electromagnetic RIM displayed even better comparability. The comparability of the surface electromagnetic sounding data and the RIM data was also good; however, the tomographic reconstruction is much more detailed. In overall conclusion, the attenuation measurements were well suited to recording subsurface resistivity properties and continuity information between boreholes at Olkiluoto. To date, we have utilized RIM in two quite different environments: Olkiluoto, a spent nuclear fuel repository area in Finland with solid crystalline bedrock, and Pyhäsalmi, an ore district with a massive sulphide deposit. Although Pyhäsalmi is the more ideal research target for RIM, the utilization of the method proved successful in both cases.
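Linear straight-ray reconstruction of the kind described above can be sketched in a few lines: each transmitter-receiver ray contributes one row of path lengths through a grid spanning the inter-borehole plane, and a damped least-squares solve recovers a cell-wise attenuation image. Everything below (geometry, grid size, damping value) is an invented toy example, not the EMRE interpretation software:

```python
# Toy crosshole straight-ray attenuation tomography under the
# low-contrast assumption; line integrals approximated by point sampling.
import numpy as np

def ray_row(p_tx, p_rx, nx, nz, extent, samples=200):
    """Row of the system matrix: path length of one ray in each grid cell."""
    x0, x1, z0, z1 = extent
    t = np.linspace(0.0, 1.0, samples)
    pts = np.outer(1 - t, p_tx) + np.outer(t, p_rx)   # points along the ray
    step = np.linalg.norm(np.asarray(p_rx) - np.asarray(p_tx)) / samples
    row = np.zeros(nx * nz)
    ix = np.clip(((pts[:, 0] - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iz = np.clip(((pts[:, 1] - z0) / (z1 - z0) * nz).astype(int), 0, nz - 1)
    np.add.at(row, iz * nx + ix, step)                # accumulate lengths
    return row

# Transmitters in one borehole, receivers in the other (coordinates in m).
nx, nz, extent = 10, 10, (0.0, 100.0, 0.0, 100.0)
tx = [(0.0, z) for z in np.linspace(5, 95, 12)]
rx = [(100.0, z) for z in np.linspace(5, 95, 12)]
A = np.array([ray_row(t_, r_, nx, nz, extent) for t_ in tx for r_ in rx])
mu_true = np.full(nx * nz, 0.01)
mu_true[44:46] = 0.05                                 # a conductive zone
y = A @ mu_true                                       # synthetic attenuations
# Damped (Tikhonov) least squares for the mildly ill-posed linear system.
mu = np.linalg.solve(A.T @ A + 1e-3 * np.eye(nx * nz), A.T @ y)
```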
  • Marnela, Marika (Helsingin yliopisto, 2016)
    The Arctic Ocean and its exchanges with the Nordic Seas influence the north-European climate. The Fram Strait, with its 2600 m sill depth, is the only deep passage between the Arctic Ocean and the other oceans. Not only do all the deep-water exchanges between the Arctic Ocean and the rest of the world's oceans take place through the Fram Strait, but a significant amount of cold, low-saline surface water and sea ice also exits the Arctic Ocean through the strait. Correspondingly, part of the warm and saline Atlantic water flowing northward enters the Arctic Ocean through the Fram Strait, bringing heat into the Arctic Ocean. The oceanic exchanges through the Fram Strait, as well as the water mass properties and the changes they undergo in the Fram Strait and its vicinity, are studied from three decades of ship-based hydrographic observations collected from 1980 to 2010. The transports are estimated from geostrophic velocities. The main section, composed of hydrographic stations, is located zonally at about 79 °N. For a few years of the observed period it is possible to combine the 79 °N section with a more northern section, or with a meridional section along the Greenwich meridian, to form quasi-closed boxes and to apply conservation constraints on them in order to estimate the transports through the Fram Strait as well as the recirculation in the strait. In a similar way, zonal hydrographic sections in the Fram Strait and along 75 °N crossing the Greenland Sea are combined to study the exchanges between the Nordic Seas and the Fram Strait. The transport estimates are adjusted with drift estimates based on Argo floats in the Greenland Sea. The mean net volume transports through the Fram Strait, averaged over the various approaches, range from less than 1 Sv to about 3 Sv. The heat loss to the atmosphere from the quasi-closed boxes both north and south of the Fram Strait section is estimated at about 10 TW. The net freshwater transport through the Fram Strait is estimated at 60-70 mSv southward. The insufficiently known northward transport of Arctic Intermediate Water (AIW) originating in the Nordic Seas is estimated using data from the 2002 Oden expedition. At the time of the data collection, excess sulphur hexafluoride (SF6) was available, a tracer that, besides a background anthropogenic origin, derives from a mixing experiment in the Greenland Sea in 1996. The excess SF6 can be used to distinguish AIW from the upper Polar Deep Water originating in the Arctic Ocean. It is estimated that 0.5 Sv of AIW enters the Arctic Ocean. The deep waters in the Nordic Seas and in the Arctic Ocean have become warmer, and in the Greenland Sea also more saline, during the three decades studied in this work. The temperature and salinity properties of the deep waters found in the Fram Strait, of both Arctic Ocean and Greenland Sea origin, have become similar and continue to do so. How these changes will affect the circulation patterns remains to be seen.
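The geostrophic velocities mentioned above are conventionally obtained from the thermal wind relation (standard oceanographic practice, not a formula quoted from the thesis): the vertical shear of the velocity component v normal to the section follows the horizontal density gradient along it,

```latex
% Thermal wind: g gravity, f Coriolis parameter, rho_0 reference density,
% x distance along the section, z the vertical coordinate.
\frac{\partial v}{\partial z} \;=\; -\,\frac{g}{f\rho_{0}}\,
  \frac{\partial \rho}{\partial x} ,
```

integrated vertically from an assumed reference level. The resulting volume transports are reported in sverdrups, 1 Sv = 10^6 m^3 s^-1, which is the unit used in the estimates above.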
  • Tuomikoski, Laura (Helsingin yliopisto, 2016)
    The rapid development of different imaging modalities related to radiation therapy (RT) has greatly affected the entire RT process, from the planning phase of the treatment to the final treatment delivery. Treatment planning requires accurate anatomical information, which can be provided by computed tomography (CT) and magnetic resonance imaging (MRI). Additional functional information about tissues and organs can be obtained by functional MRI or nuclear medicine imaging techniques such as single-photon emission tomography or positron emission tomography. The introduction of cone-beam computed tomography (CBCT) imaging into the RT delivery process has also opened new possibilities for RT treatment. In the past, mainly bony anatomy was visualized with planar imaging, which was used for the localization of the treatment. With CBCT, the prevailing soft tissue anatomy in addition to the bones can be verified on a daily basis. By taking advantage of the growing amount of information obtainable by imaging, RT treatment plans can be customized further to suit the individual anatomical and physiological properties of patients. The focus of this thesis is on advanced methods for taking the individual variation in patients' physiology into account during RT treatment planning. Two particular cases of variation are investigated: bladder filling and deformation during RT of urinary bladder cancer, and radiation-induced changes in salivary gland function related to RT of head and neck cancer. In both cases, pre-treatment imaging is used to create a patient-specific model to estimate the changes that would take place during RT. The aim is to take these predicted changes into account in the treatment planning process, with the goal of protecting normal tissues. At Helsinki University Central Hospital (HUCH), a method of adaptive radiation therapy (ART) was designed and clinically implemented for the treatment of urinary bladder cancer. In the applied workflow, four consecutive CT scans for RT treatment planning were used to capture the changes in bladder shape and size while the bladder was filling. Assuming that a similar bladder filling pattern applies during the course of RT, four treatment plans corresponding to the different bladder volumes were prepared and stored in a plan library. Before each treatment fraction, a CBCT scan was performed. The treatment plan that was the closest match to the bladder shape and size of the day was selected from the library and delivered accordingly. The use of ART enabled better conformity of the treatment. It considerably reduced the absorbed dose to the intestinal cavity compared to the non-adaptive approach. Furthermore, the dose coverage in the urinary bladder was not compromised, while the treatment margins were substantially reduced. Overall, the method was found to be feasible, and it was rapidly taken into clinical practice. A model for predicting post-RT salivary flow was constructed and evaluated for the treatment of head and neck cancer. The model was based on pre-RT quantitative 99mTc-pertechnetate scintigraphy, direct measurement of the total salivary flow, and population-based dose-response behaviour. A good correlation was found between the modelled and the measured values of the saliva flow rate. Hence, the model can be used as a predictive tool for risk-adapted treatment planning. One possible explanation for the remaining discrepancies between the predicted and the measured saliva flow rate values may be patients' individual responses to radiation.
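The plan-of-the-day selection described above can be illustrated with a minimal sketch; in clinical use the match is judged on the bladder's shape as well as its volume and is made by trained staff, so the volume-only rule and all numbers below are simplifications:

```python
# Toy plan-of-the-day selection from a four-plan library; volumes and
# plan names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    bladder_volume_cc: float   # bladder volume on the planning CT

library = [Plan("P1-empty", 120.0), Plan("P2-small", 220.0),
           Plan("P3-medium", 330.0), Plan("P4-full", 450.0)]

def plan_of_the_day(cbct_volume_cc: float) -> Plan:
    """Pick the library plan closest to the bladder volume seen on CBCT."""
    return min(library,
               key=lambda p: abs(p.bladder_volume_cc - cbct_volume_cc))

print(plan_of_the_day(290.0).name)   # -> P3-medium
```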
  • Kekkonen, Hanne (Helsingin yliopisto, 2016)
    My dissertation focuses on convergence rates and uncertainty quantification for continuous linear inverse problems. The problem is studied from both deterministic and stochastic points of view. In particular, I consider regularisation and Bayesian inversion with large noise in infinite-dimensional settings. The first paper of my thesis investigates convergence results for continuous Tikhonov regularisation in appropriate Sobolev spaces. The convergence rates are obtained by using microlocal analysis for pseudodifferential operators. In the second paper, variational regularisation is studied using convex analysis. In this paper we define a new kind of approximate source condition for large noise and for the unknown solution to guarantee the convergence of the approximated solution in the Bregman distance. The third paper approaches Gaussian inverse problems from the statistical perspective. In this article we study posterior contraction rates and credible sets for Bayesian inverse problems; frequentist confidence regions are also examined. The analysis of the small noise limit in statistical inverse problems, also known as the theory of posterior consistency, has attracted a lot of interest in the last decade. Developing a comprehensive theory is important, since posterior consistency justifies the use of the Bayesian approach in the same way that convergence results justify the use of regularisation techniques.
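Schematically, the Tikhonov-regularised solution studied in the first paper minimises a penalised least-squares functional of the following standard form (a sketch of the general set-up; the Sobolev penalty norm below is an illustrative choice consistent with, but not copied from, the thesis):

```latex
% Data model m = Ax + noise; regularised estimate with parameter delta.
x_{\delta} \;=\; \operatorname*{arg\,min}_{x}\;
  \lVert A x - m \rVert_{L^{2}}^{2}
  \;+\; \delta\, \lVert x \rVert_{H^{s}}^{2} ,
```

where the penalty norm encodes the assumed smoothness of the unknown, and the convergence-rate question is how fast x_delta approaches the true solution as the noise and delta tend to zero together.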
  • Kontro, Inkeri (Helsingin yliopisto, 2016)
    Elastic X-ray scattering is a probe that provides information on the structure of matter at nanometre length scales. Structure on this size scale determines the mechanical and functional properties of materials, and in this thesis, small- and wide-angle X-ray scattering (SAXS and WAXS) have been used to study the structure of biological and biomimetic materials. WAXS gives information on structures at atomistic scales, while SAXS provides information in the important range above the atomistic but below the microscale. SAXS was used together with dynamic light scattering and zeta potential measurements to study protein and liposome structures. The S-layer protein of Lactobacillus brevis ATCC 8287 was shown to reassemble on cationic liposomes. The structure of the reassembled crystallite differed from that of the S-layer on native bacterial cell wall fragments, and the crystallite was more stable in the direction of the larger lattice constant than in the direction of the shorter one. Liposomes were also used as a biomembrane model to study the interactions of phosphonium-based ionic liquids with cell membrane mimics. All of the studied ionic liquids penetrated multilamellar vesicles and caused a thinning of the lamellar distance that depended on the ionic liquid concentration. The ability of the ionic liquids to disrupt membranes was, however, dependent on the length of the hydrocarbon chains in the cation. In most cases, ionic liquids with long hydrocarbon chains in the cation induced disorder in the system, but in one case, selective extraction of lipids and their reassembly into lamellae were also observed. The effects depended on the ionic liquid type and concentration, and on the lipid composition of the vesicle. WAXS was used as a complementary technique to provide information on the structure-function relationship of a novel biomimicking material composed of a genetically engineered protein, chitin and calcium carbonate, and of films composed of hydroxypropylated xylan. The presence of calcium carbonate and its polymorph (calcite) was determined for the biomimetic material. For the xylan films, crystallinity was assessed. In both cases, the crystallite size was also determined. These parameters influence the mechanical properties of the developed materials. In all cases, X-ray scattering provided information on the nanostructure of biological or biomimetic materials. Over a hundred years after the principle behind X-ray scattering was first explained, it still provides information about the properties of matter that is not available by other means.
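Two standard relations underlie the quantities extracted above (textbook formulas, not taken from the thesis): the repeat distance d of a lamellar phase follows from the position q of the SAXS peak, and the Scherrer equation estimates the crystallite size τ from the WAXS peak broadening β (full width at half maximum, in radians) at Bragg angle θ, with shape factor K ≈ 0.9:

```latex
d = \frac{2\pi}{q}, \qquad
\tau = \frac{K\,\lambda}{\beta \cos\theta} ,
```

with λ the X-ray wavelength and q the magnitude of the scattering vector.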
  • Paolini, Gianluca (Helsingin yliopisto, 2016)
    The subject of this doctoral thesis is the mathematical theory of independence and its various manifestations in logic and mathematics. The topics covered in this doctoral thesis range from model theory and combinatorial geometry to database theory, quantum logic and probability logic. This study has two intertwined centres: classification theory, independence calculi and combinatorial geometry (papers I-IV); and new perspectives in team semantics (papers V-VII). The first topic is a classical topic in model theory, which we approach from different directions (implication problems, abstract elementary classes, unstable first-order theories). The second topic is a relatively new logical framework in which to study non-classical logical phenomena (dependence and independence, uncertainty, probabilistic reasoning, quantum foundations). Although these two centres may seem far apart, we will see that they are linked to each other in various ways, under the guiding thread of independence.
  • Kallonen, Aki (Helsingin yliopisto, 2016)
    X-ray tomography is a widely used and powerful tool; its significance to diagnostics was recognized with a Nobel Prize, and tomographic imaging has also become a major contributor to several fields of science, from materials physics to the biological and palaeontological sciences. Current technology enables tomography on the micrometre scale, microtomography, in the laboratory. This provides a non-destructive three-dimensional microscope for probing the internal structure of radiotranslucent objects, which has obvious implications for its applicability. Further, X-rays may be utilized for scattering experiments, which probe material properties on the ångström scale. Crystallographic studies on various forms of matter, famously including the DNA molecule, have also been awarded Nobel Prizes. In this thesis, the construction of a combined experimental set-up for both X-ray microtomography and X-ray scattering is documented. The device may be used to characterize materials on several levels of their hierarchical structure, and the microtomography data may be used as a reference for targeting the crystallographic experiment. X-ray diffraction tomography is demonstrated. An algorithm for X-ray tomography from sparse data is presented. In many scenarios, the amount of data collected for a tomogram is not sufficient for traditional algorithms, and reconstruction would benefit from more robust computational schemes. Real X-ray data were used to compute a tomographic reconstruction from a data set two orders of magnitude smaller than what is conventionally used with set-ups such as the one presented in the thesis. Additionally, X-ray microtomography was utilized for morphological studies in developmental and evolutionary biology, evo-devo for short. The fossil record shows vast changes in morphology as more complex forms of life evolved, while the morphology of any given individual organism is the product of its developmental process. Understanding both evolution and development is essential for a comprehensive view of the history of life. In this thesis, two studies on teeth and their development are discussed. In both, dental morphology was investigated with high-resolution X-ray tomography.
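Sparse-data tomography of the kind discussed above is commonly attacked with iterative schemes; the sketch below uses the classical Kaczmarz (ART) iteration as a generic stand-in, since the specific algorithm developed in the thesis is not reproduced here. The system matrix and data are synthetic.

```python
# Generic Kaczmarz / ART iteration for an underdetermined tomographic
# system: far fewer measurements (rows) than unknowns (pixels).
import numpy as np

def kaczmarz(A: np.ndarray, y: np.ndarray, sweeps: int = 200) -> np.ndarray:
    """Cyclically project the estimate onto each measurement hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (y[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
A = rng.random((20, 100))          # 20 "rays" vs. 100 "pixels"
x_true = np.zeros(100)
x_true[40:45] = 1.0                # a small bright feature
x = kaczmarz(A, A @ x_true)
print(np.linalg.norm(A @ x - A @ x_true))  # residual: near zero when converged
```

For consistent data the iteration converges to the minimum-norm solution; practical sparse-data methods add prior information (e.g. regularization) on top of this basic scheme.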