Browsing by Title

  • Mattsson, Maria (Helsingin yliopisto, 2012)
    During the last decade, cosmological observations have indicated that homogeneous and isotropic Friedmann models with linear perturbations fail to describe our universe at late times unless a dominant energy component with negative pressure, called dark energy, is introduced. In this thesis, we study the implications of the nonlinear nature of general relativity for cosmological model building beyond the standard Friedmann models. Despite the well-established observational status of cosmic structures, their effects gained wider attention only with the dark energy debate. In particular, the fact that the supposed dark energy domination begins at the time when nonlinear inhomogeneities started to form on larger scales motivates the study of the dynamics of cosmic structures. In cosmology, the implication of the nonlinearity of gravity is that averages of inhomogeneous quantities do not evolve in time like the corresponding homogeneous quantities - a phenomenon referred to as backreaction. Thanks to the new precision observations of recent years, evaluating the backreaction in our universe is a topical, but complex, task. In this thesis, rather than trying to fully quantify the backreaction, the emphasis is on model building. We explicitly demonstrate the importance of exact matching conditions in the solutions representing cosmic structures in the context of backreaction evaluation. Indeed, the cosmic web of structures is made of very differently behaving regions, and the shear on the interface between these regions seems to play an important role. The backreaction term emerging from averaging the Einstein equation is not the only effect that cosmic structures can have on observations. Indeed, we also demonstrate that even if the backreaction remains small, large effects can arise from the choice of the smoothing scale and, perhaps surprisingly, from perturbative models as well.
We find that at least the supernova data can be explained within a linearly perturbed Friedmann model - without dark energy. The key point is to take into account the effects of structures on the observable distance measures, which are ignored in standard cosmological perturbation theory. Further inspection shows that the model is actually equivalent to a nonperturbative inhomogeneous solution, confirming that the supernova data do not necessarily imply additional nonperturbative corrections. Considering physical quantities such as the expansion rate of space and the matter density, there are large local variations in the cosmic web. The main question to answer is whether (and to what extent) the effects of the local variations average out or accumulate in the observables. It appears likely that when combining all the cosmological data, more sophisticated models than perturbed Friedmann models or the simplest spherically symmetric exact inhomogeneous solutions are required to fully quantify the effects of structures on cosmological observations.
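The averaging effect described above is usually formalized via spatial averaging of the Einstein equations. As a sketch (the Buchert formalism is the standard choice for this; whether the thesis uses exactly this form is an assumption), the averaged Raychaudhuri equation for irrotational dust reads

```latex
% Averaged Raychaudhuri equation (Buchert) for irrotational dust in a domain D
3\,\frac{\ddot{a}_{\mathcal D}}{a_{\mathcal D}}
  = -4\pi G\,\langle\rho\rangle_{\mathcal D} + \mathcal{Q}_{\mathcal D},
\qquad
\mathcal{Q}_{\mathcal D}
  = \tfrac{2}{3}\left(\langle\theta^{2}\rangle_{\mathcal D}
      - \langle\theta\rangle_{\mathcal D}^{2}\right)
    - 2\,\langle\sigma^{2}\rangle_{\mathcal D}
```

Here θ is the expansion scalar and σ² the shear scalar; the backreaction term Q_D vanishes for exactly homogeneous models, which is why averages of inhomogeneous quantities need not follow Friedmann evolution.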
  • Mangs, Johan (Helsingin yliopisto, 2004)
  • Dannenberg, Alia (Helsingin yliopisto, 2011)
    In the thesis I study various quantum coherence phenomena and lay some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics and the terminology related to them. It is worth noticing that quantum coherence is a conserved quantity that can be exactly defined. I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence, since their entropy is small and coherence is large, and thus they possess strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate mainly on collective association from atoms to molecules, Rabi oscillations and decoherence. It appears that collective association and oscillations do not depend on the spin-statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept also in systems that may experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: to study the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum physical universe appears very classical to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem of logic.
According to this theorem, it is impossible to describe a closed system completely from within the system, and the inside and outside descriptions of the system can be remarkably different. Understanding this feature may make it possible to comprehend coarse-graining better and to define uniquely the mutual entanglement of quantum systems.
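The proposal to define coherence from the density matrix can be illustrated with a small numerical sketch. The l1-norm measure below is a standard textbook quantifier and an assumption of this example, not necessarily the definition used in the thesis:

```python
import numpy as np

def l1_coherence(rho):
    """Sum of the absolute values of the off-diagonal elements of a
    density matrix - a simple basis-dependent coherence quantifier."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# maximally coherent qubit state |+><+| with |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(plus, plus)
# fully decohered state: same populations, off-diagonals erased
rho_mixed = np.diag(np.diag(rho_pure))

print(l1_coherence(rho_pure))   # close to 1, maximal for a qubit
print(l1_coherence(rho_mixed))  # 0: a diagonal state carries no coherence
```

Decoherence in a spin-model setting corresponds to the decay of exactly these off-diagonal elements.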
  • Blåsten, Eemeli (Helsingin yliopisto, 2013)
    We prove uniqueness and stability for the inverse boundary value problem of the two-dimensional Schrödinger equation. We do not assume the potentials to be continuous or even bounded. Instead, we assume that some of their positive fractional derivatives are in a specific Lorentz space. These spaces are a natural generalization of the usual fractional Sobolev spaces. The thesis consists of two parts. In the first part, we define the generalized fractional Sobolev spaces and prove some of their properties, including embeddings and interpolation identities. In particular, we sharpen the usual Sobolev embedding into the space of Hölder-continuous functions by showing that a particular kind of space embeds into the space of continuous functions without any modulus of continuity. The inverse problem is considered in the second part of the thesis. We prove a new Carleman estimate for ∂. This estimate has a fast decay rate, which allows us to consider potentials with very low regularity. After that we use Bukhgeim's oscillating exponential solutions, Alessandrini's identity and stationary phase to get information about the difference of the potentials from the difference of the Cauchy data. The stability estimate is of logarithmic type, but works for potentials of low regularity.
  • Tähtinen, Vesa (Helsingin yliopisto, 2010)
    This PhD thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view, one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of "m particles and n anti-particles". Geometrically, the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of an infinite-dimensional Grassmannian manifold one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H its subgroup. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation for loop groups (of compact groups), which in turn are vitally important in string theory / conformal field theory. The thesis consists of three chapters: the first chapter is an introduction to the background material, and the other two chapters are individually written research articles.
The first article deals in a new way with a well-known question in Yang-Mills theory: when can one lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second quantized Dirac operator? In general there is an obstruction to this (called the Mickelsson-Faddeev anomaly), and various geometric interpretations of this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence outside the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sum makes sense as a well-defined operator acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
  • Yli-Juuti, Taina (Helsingin yliopisto, 2013)
    Atmospheric aerosol particles affect visibility, damage human health and influence the Earth's climate by scattering and absorbing radiation and by acting as cloud condensation nuclei (CCN). Considerable uncertainties are associated with the estimates of aerosol climatic effects, and the extent of these effects depends on the particles' size, composition, concentration and location in the atmosphere. Improved knowledge of the processes affecting these properties is of great importance in predicting future climate. A significant fraction of the atmospheric aerosol particles are formed in the atmosphere from trace gases through a phase change, i.e. nucleation. The freshly nucleated secondary aerosol particles are about a nanometer in diameter, and they need to grow by tens of nanometers through condensation of vapors before they affect the climate. During the growth, the nanoparticles are subject to coagulational losses, and their survival to CCN sizes depends greatly on their growth rate. Therefore, capturing nanoparticle growth correctly is crucial for representing aerosol effects in climate models. A large fraction of nanoparticle growth in many environments is expected to be due to organic compounds. However, a full identification of the compounds and processes involved in the growth is lacking to date. In this thesis the variability in atmospheric nanoparticle growth rates with particle size and ambient conditions was studied based on observations at two locations, a boreal forest and a Central European rural site. The importance of various organic vapor uptake mechanisms and particle phase processes was evaluated, and two nanoparticle growth models were developed to study the effect of acid-base chemistry on the uptake of organic compounds by nanoparticles. Further, the effect of inorganic solutes on the partitioning of organic aerosol constituents between the gas and particle phases was studied based on laboratory experiments.
Observations of the atmospheric nanoparticle growth rates supported the hypothesis that organic compounds control the particle growth. The growth rates of particles with diameters smaller than 20 nm vary with particle size, and the processes governing the uptake of organic vapors and limiting the nanoparticle growth were concluded to be size dependent. Formation of organic salts in the particle phase is likely to play a role in nanoparticle growth; however, according to the model predictions, it does not entirely explain the uptake of semi-volatile organic compounds. A small amount of inorganic salt does not seem to affect the volatility of organic acids; however, with an increased inorganic content the case is less clear.
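The gas-particle partitioning studied above is commonly described with Pankow-type absorptive partitioning theory. A minimal sketch (the choice of framework and the numerical values are illustrative assumptions, not taken from the thesis):

```python
def particle_fraction(c_star, c_oa):
    """Equilibrium particle-phase fraction of a semi-volatile organic
    compound under absorptive partitioning.

    c_star -- effective saturation concentration of the compound (ug m^-3)
    c_oa   -- absorbing organic aerosol mass concentration (ug m^-3)
    """
    return 1.0 / (1.0 + c_star / c_oa)

# illustrative values: a semi-volatile compound (C* = 1 ug m^-3)
# in a moderately polluted boundary layer (C_OA = 10 ug m^-3)
print(particle_fraction(1.0, 10.0))    # ~0.91: mostly in the particle phase
print(particle_fraction(100.0, 10.0))  # ~0.09: mostly in the gas phase
```

Inorganic solutes, as studied in the thesis, can shift this balance by changing the effective saturation concentration of the organic acids.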
  • Hienola, Anca (Helsingin yliopisto, 2008)
    The conversion of a metastable phase into a thermodynamically stable phase takes place via the formation of clusters. Clusters of different sizes are formed spontaneously within the metastable mother phase, but only those larger than a certain size, called the critical size, will end up growing into a new phase. There are two types of nucleation: homogeneous, where the clusters appear in a uniform phase, and heterogeneous, where pre-existing surfaces are available and clusters form on them. The nucleation of aerosol particles from gas-phase molecules is connected not only with inorganic compounds, but also with nonvolatile organic substances found in the atmosphere. The question is which of the myriad organic species have the right properties and are able to participate in nucleation phenomena. This thesis discusses both homogeneous and heterogeneous nucleation, using as its theoretical tool classical nucleation theory (CNT), which is based on thermodynamics. Different classes of organics are investigated. The members of the first class are four dicarboxylic acids (succinic, glutaric, malonic and adipic). They can be found in both the gas and particulate phases, and represent good candidates for aerosol formation due to their low vapor pressure and solubility. Their influence on the nucleation process has not been investigated much in the literature, and it is not fully established. The accuracy of the CNT predictions for binary water-dicarboxylic acid systems depends significantly on accurate knowledge of the thermophysical properties of the organics and their aqueous solutions. A large part of the thesis is dedicated to this issue. We have shown that homogeneous and heterogeneous nucleation of succinic, glutaric and malonic acids in combination with water is unlikely to happen under atmospheric conditions. However, it seems that adipic acid could participate in the nucleation process under conditions occurring in the upper troposphere.
The second class of organics is represented by n-nonane and n-propanol. Their thermophysical properties are well established, and experiments on these substances have been performed. The experimental data on binary homogeneous and heterogeneous nucleation have been compared with the theoretical predictions. Although the n-nonane - n-propanol mixture is far from ideal, CNT seems to behave fairly well, especially when calculating the cluster composition. In the case of heterogeneous nucleation, it has been found that better characterization of the substrate-liquid interaction by means of line tension and microscopic contact angle leads to a significant improvement of the CNT prediction. Unfortunately, this cannot be achieved without well-defined experimental data.
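The CNT predictions discussed above hinge on the critical cluster size, which for a single-component vapor follows from the Kelvin relation. A minimal sketch for water (the property values are rough room-temperature assumptions, for illustration only):

```python
import math

def kelvin_radius(surface_tension, molec_volume, temperature, saturation_ratio):
    """Critical cluster radius r* = 2*sigma*v / (k_B * T * ln S) from
    classical nucleation theory (SI units)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return 2.0 * surface_tension * molec_volume / (
        k_b * temperature * math.log(saturation_ratio))

# rough values for water at ~293 K (illustrative assumptions)
sigma = 0.0728            # surface tension, N/m
v_mol = 18e-6 / 6.022e23  # molecular volume, m^3
r_star = kelvin_radius(sigma, v_mol, 293.0, 3.0)
print(f"critical radius at S = 3: {r_star * 1e9:.2f} nm")  # ~1 nm
```

Clusters larger than r* grow spontaneously into the new phase; smaller ones tend to evaporate, which is exactly the critical-size picture in the abstract.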
  • Lindberg, Sauli (Helsingin yliopisto, 2015)
    The dissertation deals with the Jacobian equation in the plane. R.R. Coifman, P.-L. Lions, Y. Meyer and S. Semmes proved in their seminal paper from 1993 that when a mapping from the n-space to the n-space belongs to a suitable homogeneous Sobolev space, its Jacobian determinant belongs to a real-variable Hardy space. Coifman, Lions, Meyer and Semmes proceeded to ask the following famous open problem: can every function in the Hardy space be written as the Jacobian of some Sobolev mapping? It follows from the work of G. Cupini, B. Dacorogna and O. Kneuss that the range of the Jacobian operator is dense in the Hardy space. As a consequence, solving the Jacobian equation reduces to proving that every so-called energy-minimal solution satisfies a certain natural a priori estimate. In the dissertation we use Lagrange multipliers in Banach spaces to prove the sought-after a priori estimate for a large class of energy-minimal solutions. It remains unclear whether the class is large enough to imply the surjectivity of the Jacobian operator, but we present many partial results on the properties of the class. To cite an example, when the Hardy space is endowed with a particular norm that is well suited to the study of the Jacobian equation, all the extreme points of the unit ball are Jacobians. Furthermore, the energy-minimal solutions for the extreme points satisfy the desired a priori estimate. As one of the main results of the dissertation we reduce solving the Jacobian equation to a fairly concrete finite-dimensional problem. The main tools of the dissertation are Banach space geometry, harmonic analysis in the plane and methods from the theory of incompressible elasticity.
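For concreteness, the open problem mentioned above can be stated compactly; the notation below is assumed, a sketch rather than a quotation from the dissertation:

```latex
% Planar Jacobian problem (Coifman-Lions-Meyer-Semmes): given f in the
% real-variable Hardy space, find a homogeneous Sobolev mapping u with
\det Du = f, \qquad
u \in \dot{W}^{1,2}(\mathbb{R}^{2};\mathbb{R}^{2}), \quad
f \in \mathcal{H}^{1}(\mathbb{R}^{2}).
```

The 1993 theorem gives the forward direction (det Du always lands in the Hardy space); surjectivity of the operator u ↦ det Du is the open question.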
  • Wang, Keguang (Helsingin yliopisto, 2007)
    Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are the air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate the sea ice dynamics in two small basins, the whole Gulf of Riga and, within it, Pärnu Bay, focusing on the calibration of the compressive strength for thin ice. These two basins are on the scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can basically be divided into three groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, where the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules.
The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of forming LKFs but also has the advantage of avoiding the overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule of pack ice during deformation.
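The force balance listed at the start of the abstract is conventionally written as the sea ice momentum equation; the notation below is the standard one (an assumption, not quoted from the thesis):

```latex
% Sea ice momentum balance: air drag + water drag + Coriolis
% + sea surface tilt + internal ice stress
m\,\frac{D\mathbf{v}}{Dt}
  = \boldsymbol{\tau}_{a} + \boldsymbol{\tau}_{w}
    - m f\,\mathbf{k}\times\mathbf{v}
    - m g\,\nabla H
    + \nabla\cdot\boldsymbol{\sigma}
```

Here m is the ice mass per unit area, τ_a and τ_w the air and water drag, f the Coriolis parameter, ∇H the sea surface tilt and σ the internal ice stress; the yield curve and flow rule studied in the thesis determine how σ depends on the deformation.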
  • Vakkari, Ville (Helsingin yliopisto, 2013)
    An aerosol is defined as solid or liquid particles suspended in a gas lighter than the particles, which means that the atmosphere we live in is itself an aerosol. Although aerosol particles are only a trace component of the atmosphere, they affect our lives in several ways. Aerosol particles can cause adverse health effects and deteriorate visibility, but they also affect the Earth's climate, directly by scattering and absorbing solar radiation and indirectly by modulating the properties of clouds. Anthropogenic aerosol particles have a net cooling effect on the climate, but the uncertainty in the amount of cooling is presently as large as the heating effect of carbon dioxide. To reduce the uncertainty in the aerosol climate effects, spatially representative reference data of high quality are needed for the global climate models. To be able to capture the diurnal and seasonal variability, the data have to be collected continuously over time periods that cover at least one full seasonal cycle. Until recently such data have been nearly non-existent for continental Africa, and hence one aim of this work was to establish a permanent measurement station measuring the key aerosol particle properties at a continental location in southern Africa. In close collaboration with North-West University in South Africa, this aim has now been achieved at the Welgegund measurement station. The other aims of this work were to determine the aerosol particle concentrations, including their seasonal and diurnal variation, and to study the most important aerosol particle sources in continental southern Africa. In this thesis the aerosol size distribution and its seasonal and diurnal variation are reported for different environments ranging from a clean rural background to an anthropogenically heavily influenced mining region in continental southern Africa.
Atmospheric regional-scale new particle formation has been observed at a world-record frequency, and it dominates the diurnal variation except in the vicinity of low-income residential areas, where domestic heating and cooking are a stronger source. The concentration of aerosol particles in sizes that can act as cloud condensation nuclei was found to increase during the dry season because of reduced wet removal and increased aerosol production from incomplete combustion, which can be either domestic heating or savannah and grassland fires depending on location. During the wet season, new particle formation was shown to be an important source of particles in the size range of cloud condensation nuclei.
  • Saltikoff, Elena (Helsingin yliopisto, 2011)
    Mesoscale weather phenomena, such as the sea breeze circulation or lake effect snow bands, are typically too large to be observed at one point, yet too small to be caught in a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system with many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values are different for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions in order to produce better applications that can efficiently support decision making in weather- and safety-related service duties for modern society in northern conditions. When a new application is developed, it must be tested against "ground truth". Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information that can be distilled from unconventional data sources, such as newspapers and holiday snapshots, is welcome. However, just as important as obtaining data is obtaining estimates of data quality, and judging to what extent the two disparate information sources can be compared.
The presented new applications do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information, especially when used effectively together with other meteorological data.
  • Hannula, Miika (Helsingin yliopisto, 2015)
    Dependence logic is a novel logical formalism that has connections to database theory, statistics, linguistics, social choice theory, and physics. Its aim is to provide a systematic and mathematically rigorous tool for studying notions of dependence and independence in different areas. Recently many variants of dependence logic have been studied in the contexts of first-order, modal, and propositional logic. In this thesis we examine independence and inclusion logic, variants of dependence logic that extend first-order logic with so-called independence or inclusion atoms, respectively. The work consists of two parts, in which we study either axiomatizability or expressivity hierarchies regarding these logics. In the first part we examine whether there exist natural parameters of independence and inclusion logic that give rise to infinite expressivity or complexity hierarchies. Two main parameters are considered: the arity of a dependency atom and the number of universal quantifiers. We show that for both logics, the notion of arity gives rise to strict expressivity hierarchies. With respect to the number of universal quantifiers, however, strictness or collapse of the corresponding hierarchies turns out to be relative to the choice of semantics. In the second part we turn our attention to axiomatizations. Due to their complexity, dependence and independence logic cannot have a complete recursively enumerable axiomatization. Hence, restricting attention to partial solutions, we first axiomatize all first-order consequences of independence logic sentences, thus extending an analogous result for dependence logic. We also consider the class of independence and inclusion atoms, and show that it can be axiomatized using implicit existential quantification. For relational databases this implies a sound and complete axiomatization of embedded multivalued dependencies and inclusion dependencies taken together.
Lastly, we consider keys together with so-called pure independence atoms and prove both positive and negative results regarding their finite axiomatizability.
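For readers unfamiliar with the atoms discussed above, the inclusion atom has a compact team semantics; the formulation below is the standard one (an assumption, not quoted from the thesis):

```latex
% A team X (a set of assignments) satisfies the inclusion atom
% \vec{x} \subseteq \vec{y} iff every value taken by \vec{x} in X
% also occurs as a value of \vec{y} in X:
X \models \vec{x} \subseteq \vec{y}
  \iff
  \forall s \in X\;\exists s' \in X : s(\vec{x}) = s'(\vec{y})
```

Read over a database table, this is exactly an inclusion dependency: the values in columns x form a subset of the values in columns y.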
  • Nikitin, Timur (Helsingin yliopisto, 2013)
    Silicon nanocrystals (Si-nc) embedded in a SiO₂ matrix are a promising system for silicon-based photonics. We studied the optical and structural properties of Si-rich silicon oxide SiOₓ (x < 2) films annealed in a furnace at temperatures up to 1200 °C and containing Si-nc. The measured optical properties of the SiOₓ films are compared with values estimated using the effective medium approximation and X-ray photoelectron spectroscopy (XPS) results. Good agreement is found between the measured and calculated refractive index. The results for absorption suggest high transparency of the nanoscale suboxide. The extinction coefficient for elemental Si is found to be between the values for crystalline and amorphous Si. Thermal annealing increases the degree of Si crystallization; however, the Si–SiO₂ phase separation is not complete even after annealing at 1200 °C. The 1.5-eV photoluminescence probably originates from small (~1 nm) oxidized Si grains or oxygen-related defects, but not from Si-nc with sizes of about 4 nm. The SiOₓ films prepared by molecular beam deposition and ion implantation are structurally and optically very different after preparation but become similar after annealing at ~1100 °C. The laser-induced thermal effects found for SiOₓ films on silica substrates illuminated by focused laser light should be taken into account in optical measurements. Continuous-wave laser irradiation can produce very high temperatures in free-standing SiOₓ and Si/SiO₂ superlattice films, which changes their structure and optical properties. The center of a laser-annealed area is very transparent and consists of amorphous SiO₂. Large Si-nc (up to 300 nm) are observed in the ring around the central region. These Si-nc produce high absorption and are typically under compressive stress, which is connected with crystallization from the melt phase. Some of the large Si-nc exhibit surface features, which is interpreted in terms of eruption of pressurized Si from the film.
Some of the large Si-nc are removed from the film, forming holes of similar sizes. The presence of oxygen in the laser-annealing atmosphere decreases the amount of removed Si-nc. The structure of the laser-annealed areas is explained by thermodiffusion, which leads to macroscopic Si–SiO₂ phase separation. Comparison of the structure of the central regions for laser annealing in oxygen, air, and inert atmospheres excludes a dominating effect of Si oxidation in the formation of the laser-annealed area. By using a strongly focused laser beam, the structural changes in the free-standing films can be confined to submicron areas, which suggests a concept of nonvolatile optical memory with high information density and superior thermal stability.
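The effective medium estimate mentioned in the abstract can be illustrated with the Bruggeman mixing rule for a two-phase Si/SiO₂ composite. Whether the thesis uses the Bruggeman flavor specifically is an assumption, and the permittivities below are illustrative and real-valued (absorption ignored):

```python
import math

def bruggeman_permittivity(eps1, eps2, f1):
    """Effective permittivity of a two-phase mixture from the Bruggeman
    effective medium approximation for non-absorbing phases. Solves
        f1*(eps1 - e)/(eps1 + 2e) + (1 - f1)*(eps2 - e)/(eps2 + 2e) = 0
    in closed form (positive root of the resulting quadratic)."""
    b = (3.0 * f1 - 1.0) * eps1 + (2.0 - 3.0 * f1) * eps2
    return (b + math.sqrt(b * b + 8.0 * eps1 * eps2)) / 4.0

# illustrative permittivities: crystalline Si (~12.1) in SiO2 (~2.1),
# 30 % Si volume fraction
eps_eff = bruggeman_permittivity(12.1, 2.1, 0.3)
print(f"effective refractive index: {math.sqrt(eps_eff):.2f}")
```

The Si volume fraction would in practice come from the XPS-derived film composition, which is how the measured and calculated refractive indices can be compared.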
  • Backman, John (Helsingin yliopisto, 2015)
    Aerosol particles are part of the Earth's climatic system and can significantly impact the climate. Their ability to do so depends mainly on the size, concentration and chemical composition of the particles. Aerosol particles can act as cloud condensation nuclei (CCN) and can therefore mediate cloud properties; they can thus perturb the energy balance of the Earth through clouds. Aerosol particles can also directly interact with solar radiation through scattering, absorption, or both. The climatic implications of aerosol-radiation interactions depend on the Earth's surface properties and on the amount of light scattering in relation to light absorption. Light-absorbing aerosol particles, in particular, can alter the vertical temperature structure of the atmosphere and inhibit the formation of convective clouds. The net change in the energy balance imposed by perturbing agents, such as aerosol particles, results in a radiative forcing. Globally, aerosol particles have a net cooling effect on the climate, but not necessarily on a local scale. Accurate measurements of the optical properties of aerosol particles are needed to estimate the climatic effects of aerosols. A widely used means of measuring light absorption by aerosol particles is the filter-based measurement technique. The technique is based on light-transmission measurements through a filter as the aerosol sample is drawn through the filter and particles deposit onto it. As the sample deposits, it inevitably interacts with the fibres of the filter, and these interactions need to be taken into account. This thesis investigates different approaches to dealing with filter-induced artefacts and how they affect aerosol light absorption measured with this technique. In addition, the articles included in the thesis report aerosol optical properties at sites that have not been reported in the literature before.
The locations range from an urban environment in the city of São Paulo, Brazil, and an industrialised region of the South African Highveld to a rural station in Hyytiälä, Finland. In general, it can be said that sites distant from urban areas tend to scatter more light in relation to light absorption. In urban areas, the optical properties show the aerosol particles to be darker.
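Before any filter-artefact corrections, the filter-based technique reduces to converting a change in light transmission into an attenuation coefficient. A generic sketch (the symbol names, numbers and the single correction factor are assumptions for illustration, not a specific instrument's algorithm):

```python
import math

def attenuation_coefficient(i0, i1, spot_area, flow_rate, dt, c_ref=1.0):
    """Uncorrected aerosol light-attenuation coefficient (m^-1) from a
    filter transmission change over one time step.

    i0, i1    -- relative filter light transmission at start/end of the step
    spot_area -- filter spot area (m^2)
    flow_rate -- sample flow (m^3 s^-1)
    dt        -- time step (s)
    c_ref     -- generic filter correction factor (placeholder assumption)
    """
    d_atn = math.log(i0 / i1)  # change in attenuation, ATN = ln(I0/I)
    return spot_area * d_atn / (flow_rate * dt * c_ref)

# illustrative numbers: 1e-5 m^2 spot, 5 L/min flow, 5 min step,
# 0.1 % transmission decrease
b_atn = attenuation_coefficient(1.000, 0.999, 1e-5, 5.0 / 60000.0, 300.0)
print(f"{b_atn * 1e6:.1f} Mm^-1")
```

The filter-induced artefacts studied in the thesis (multiple scattering in the fibres, shadowing as the filter loads) enter precisely through corrections applied to this raw quantity.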
  • Rasmus, Kai (Helsingin yliopisto, 2009)
    The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system plays an important part in the global climate due to its size, its high-latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and due to the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modeling studies related to specific phenomena found in the system. During the summer of 1997-1998, measurements of beam absorption and beam attenuation coefficients, and of downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. Using the Kirk formula, the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients for PAR (KPAR) were found to vary between 0.03 and 0.09 1/m. Using the values for KPAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited. Variability in the spectral and total albedo of snow was studied in the Queen Maud Land region of Antarctica during the summers of 1999-2000 and 2000-2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4 and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83 (clear skies) and 0.86 (overcast skies) at Aboa, and between 0.81 and 0.83 at SANAE 4. 
The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other, and the variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte-Carlo model was developed to study the spectral albedo and to develop a novel nondestructive method for measuring the diffuse attenuation coefficient of snow. The method is based on the decay of upwelling radiation with horizontal distance from a source of downwelling light, which was assumed to be related to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland, and they correlated reasonably well. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the low heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of subsurface ponds and the water circulation within them. The model results show that, for a physically reasonable parameter set, the formation of liquid water within the ice can be reproduced. The results are, however, sensitive to the chosen parameter values, whose exact values are not well known. Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by an increase in air temperature of 3 degrees per 100 years, leads to a general increase in subsurface water volume. The ice did not disintegrate due to the air temperature increase over the 50-year integration.
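The Kirk formula mentioned in the abstract relates the diffuse attenuation coefficient Kd to the inherent optical properties: Kd = (1/μ0)·sqrt(a² + G(μ0)·a·b), where a is absorption, b is scattering, μ0 is the cosine of the refracted solar zenith angle, and G(μ0) = 0.425·μ0 − 0.19 in Kirk's (1984) Monte-Carlo fit. A minimal sketch, with the constants taken from that published fit rather than from this thesis:

```python
import math

def kirk_kd(a, b, mu0):
    """Diffuse attenuation coefficient Kd (1/m) from Kirk's formula,
    given absorption a (1/m), scattering b (1/m), and mu0, the cosine
    of the refracted solar zenith angle."""
    g = 0.425 * mu0 - 0.19     # Kirk's (1984) empirical G(mu0)
    return math.sqrt(a * a + g * a * b) / mu0
```

With no scattering (b = 0) the formula reduces to Kd = a/μ0, and adding scattering increases Kd, consistent with the measured KPAR range of 0.03-0.09 1/m being larger than absorption alone would give.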
  • Wallin, Anders (Helsingin yliopisto, 2011)
    Molecular machinery on the micro-scale, believed to constitute the fundamental building blocks of life, involves forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and the mechanisms of protein/nucleic-acid interactions on the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results of the study indicate that the presented models, describing the signal-to-noise ratio in constant-force experiments and feedback-control experiments in optical tweezers, agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved. 
The developed instrument and presented results on feedback control serve as a stepping stone for future contributions to the growing field of single molecule biology.
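The proportional position clamp mentioned above can be illustrated with a toy discrete-time loop: each cycle, the detector reads the bead position, the controller moves the trap centre to counteract the error, and the overdamped bead relaxes toward the trap. This is a generic sketch of proportional feedback, not the thesis's controller; the gain, relaxation constant, and function name are illustrative assumptions.

```python
def position_clamp(x0, setpoint, kp, n_steps, k_over_gamma_dt=0.1):
    """Toy proportional position clamp for an optical trap.

    x0           -- initial bead position
    setpoint     -- desired bead position held by the clamp
    kp           -- proportional feedback gain on the trap centre
    n_steps      -- number of feedback cycles to simulate
    k_over_gamma_dt -- trap stiffness / drag, times the time step
                       (discretised overdamped relaxation factor)
    """
    x, u = x0, 0.0                        # bead position, trap centre
    for _ in range(n_steps):
        u += kp * (setpoint - x)          # controller shifts the trap centre
        x += k_over_gamma_dt * (u - x)    # bead relaxes toward the trap
    return x
```

Because the trap centre keeps chasing the error, the bead is held at the setpoint far more stiffly than the optical trap alone would hold it, which is the sense in which feedback raises the effective trap stiffness.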
  • Suomela, Jukka (Helsingin yliopisto, 2009)
    This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; this leads to the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. 
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs: geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
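To make the dominating-set connection concrete, here is the textbook centralised greedy approximation: repeatedly pick the vertex that covers the most not-yet-dominated vertices. This is the classical ln-n-approximation used as a baseline in the literature, not one of the local or PTAS algorithms contributed by the thesis.

```python
def greedy_dominating_set(adj):
    """Greedy approximation for minimum dominating set.

    adj maps each vertex to the set of its neighbours. A vertex
    dominates itself and its neighbours; we repeatedly choose the
    vertex dominating the most still-undominated vertices."""
    undominated = set(adj)
    chosen = []
    while undominated:
        v = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        chosen.append(v)
        undominated -= {v} | adj[v]
    return chosen
```

For a surveillance application, `chosen` is a set of sensor locations from which every site is watched; the identifying-code and locating-dominating-code formulations additionally require that the set of sensors observing each site be distinct, so the intruder's position can be inferred.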
  • Lindström, Jan (Helsingin yliopisto, 2003)
  • Kaasalainen, Touko (Helsingin yliopisto, 2015)
    The number of computed tomography (CT) examinations has increased in recent years due to developments in scanner technology and the increased diagnostic capabilities of CT. Nowadays, CT has become a major contributor to accumulated radiation doses from radiological examinations, accounting for approximately 60% of the overall medical radiation dose in Western countries. Ionizing radiation is generally considered harmful to health, and current knowledge suggests that the risk of stochastic effects increases linearly with radiation dose. Minimizing patient doses in CT requires effective optimization practices, including both technical and clinical approaches. CT optimization aims to reduce patients' exposure to radiation without compromising image quality for diagnosis. The aim of this dissertation was to explore the feasibility of using anthropomorphic phantoms and metal-oxide-semiconductor field-effect transistors (MOSFETs) in CT optimization and patient dose measurements, and to study CT optimization in versatile clinical situations. Specifically, this thesis focused on studying the effects of patient centering on the CT scanner isocenter by determining changes in patient dose and image quality. Additionally, as a part of this thesis, we constructed and optimized ultralow-dose CT protocols for craniosynostosis imaging, and explored different optimization methods for reducing radiation exposure to the eye lenses. Moreover, fetal radiation doses were assessed in the most typical CT examinations of pregnant women, which also place the fetus at the highest risk of ionizing radiation-induced health detriments. Anthropomorphic phantoms and MOSFET dosimeters proved feasible in CT optimization even at ultralow dose levels. 
Patient vertical off-centering posed a common and serious problem in chest CT, as a majority of the scanned patients were positioned below the isocenter of the CT scanner, which significantly affected both radiation dose and image quality. This exposes the radiosensitive anterior surface tissues, including the breasts and thyroid gland, to greater risk. Special attention should be paid to pediatric patients in particular, as they were typically miscentered lower than adults. The use of the constructed ultralow-dose CT protocols with model-based iterative reconstruction can enable craniosynostosis CT imaging with sufficient image quality for diagnosis at an effective dose of less than 20 μSv. This dose level was approximately 85% lower than the level used in the hospital's routine craniosynostosis CT protocols, and was comparable to the radiation exposure of a plain-skull radiography examination. The most efficient method for reducing the dose to the eye lens proved to be gantry tilting, which leaves the eye lenses outside the primary radiation beam, thereby reducing the absorbed dose by up to 75%. However, measurements with two different anthropomorphic head phantoms showed that patient geometry significantly affects the dose-reduction capability. If the lenses can only partially be cropped outside the primary beam, organ-based tube current modulation or bismuth shields may also be used to reduce the dose to the lenses. Based on the absorbed doses measured in this thesis, the radiation dose to the fetus poses no obstacle to an optimized CT examination with a medically necessary indication. The volumetric CT dose index (CTDIvol) provides a rough estimate of the fetal dose when the uterus is in the primary radiation beam, although the extent of the scan range has a substantial effect on the fetal dose. 
The results support the notion that when the fetus or uterus is not in the scan range, the fetal dose is affected mainly by the distance from the scan range.