Browsing by Title

  • Karvi, Timo (Helsingin yliopisto, 2000)
  • Hakulinen, Ville (Helsingin yliopisto, 2002)
  • Arponen, Heikki (Helsingin yliopisto, 2009)
    This thesis consists of three articles on passive vector fields in turbulence. The vector fields interact with a turbulent velocity field, which is described by the Kraichnan model. The effect of the Kraichnan model on the passive vectors is studied via an equation for the pair correlation function and its solutions. The first paper is concerned with the passive magnetohydrodynamic equations. Emphasis is placed on the so-called "dynamo effect", which in the present context is understood as an unbounded growth of the pair correlation function. The exact analytical conditions for such growth are found in the cases of zero and infinite Prandtl numbers. The second paper contains an extensive study of a number of passive vector models. Emphasis is now on the properties of the (assumed) steady state, namely anomalous scaling, anisotropy, and small- and large-scale behavior with different types of forcing or stirring. The third paper in many ways completes the previous one in its study of the steady-state existence problem. Conditions for the existence of the steady state are found in terms of the spatial roughness parameter of the turbulent velocity field.
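In the standard formulation (generic notation, not necessarily the thesis's own), the Kraichnan model referred to above is a Gaussian velocity field that is white in time, with an incompressible spatial covariance of the form:

```latex
% Kraichnan ensemble: delta-correlated in time
\langle u_i(x,t)\,u_j(y,t')\rangle = \delta(t-t')\,D_{ij}(x-y),
\qquad
d_{ij}(r) \equiv D_{ij}(0)-D_{ij}(r)
\propto r^{\xi}\Bigl[(d-1+\xi)\,\delta_{ij}-\xi\,\frac{r_i r_j}{r^2}\Bigr],
```

where 0 < ξ < 2 is the spatial roughness parameter in which the existence conditions of the third paper are expressed (ξ → 2 corresponds to a smooth velocity field, small ξ to a very rough one).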
  • Kiljunen, Timo (Helsingin yliopisto, 2008)
    Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To be able to keep the diagnostic benefit versus radiation risk ratio as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability for late radiation effects, since longer life expectancy is combined with the higher radiation sensitivity of the developing organs. The experience with particular paediatric examinations may be very limited and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient, which takes individual patient size into account in patient dosimetry, was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared to multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation.
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provided tools for reducing the patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enabled future risk assessments to be done. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
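One way a thickness-dependent DRL curve like the one described above could be set up is sketched below. This is an illustrative recipe, not the thesis's graphical method: the log-linear fit and the third-quartile convention are assumptions.

```python
import numpy as np

def drl_curve(thickness_cm, dose, quantile=0.75):
    """Fit log(dose) = a + b * thickness by least squares, then shift the
    fitted line upward so that the chosen quantile of the survey data
    lies below it. Returns a function mapping projection thickness to a
    candidate DRL value (illustrative convention, not an official one)."""
    t = np.asarray(thickness_cm, float)
    logd = np.log(np.asarray(dose, float))
    b, a = np.polyfit(t, logd, 1)               # slope, intercept
    shift = np.quantile(logd - (a + b * t), quantile)
    return lambda x: np.exp(a + shift + b * np.asarray(x, float))
```

By construction about 75% of the surveyed doses fall below the returned curve at their own projection thickness, mimicking the usual third-quartile definition of a DRL.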
  • Toroi, Paula (Helsingin yliopisto, 2009)
    The methods for estimating patient exposure in x-ray imaging are based on the measurement of radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected. Therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (confidence level 95%). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibration is performed with a reference KAP meter. This method provides a practical way to calibrate field KAP meters. However, the reference KAP meters require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities. These qualities do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter. The accuracy of the tandem method can be improved by using this meter type as a reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation produced by the tissue.
This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose. The recommended accuracy can be achieved with the studied method. New optimization of radiation qualities and dose level is needed when other detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended and appropriate image quality was achieved by increasing the low dose level of the system.
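The tandem idea above can be sketched numerically. The function names, and the convention of quoting the quality-to-quality spread with a coverage factor of 2, are illustrative assumptions rather than the thesis's protocol:

```python
import numpy as np

def tandem_calibration(ref_kap, field_readings):
    """Calibrate a field KAP meter against a reference KAP meter.

    ref_kap        -- reference-meter KAP values, one per radiation quality
    field_readings -- field-meter readings for the same exposures
    Returns (mean calibration coefficient, relative spread at coverage
    factor 2, i.e. an ~95% confidence-level energy-dependence term)."""
    ref = np.asarray(ref_kap, float)
    fld = np.asarray(field_readings, float)
    coeffs = ref / fld                  # one coefficient per quality
    k_mean = coeffs.mean()
    u_rel = 2.0 * coeffs.std(ddof=1) / k_mean
    return k_mean, u_rel
```

A relative spread approaching 0.07 would already exhaust the recommended 7% (95% confidence level) uncertainty before any other contribution is added, which is why a reference meter with weak energy dependence matters.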
  • Vilo, Jaak (Helsingin yliopisto, 2002)
  • Junttila, Esa (Helsingin yliopisto, 2011)
    Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore, we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, the division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
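Two of the ingredients above can be sketched in a few lines of generic code (not the thesis's algorithms): checking the consecutive-ones property for rows under a fixed column order, and the per-row flip distance to a single contiguous block of 1s, computed by brute force.

```python
def consecutive_ones_rows(matrix):
    """Check the consecutive-ones property for rows under the given
    column order: every row's 1s must form one contiguous block."""
    for row in matrix:
        ones = [j for j, v in enumerate(row) if v == 1]
        if ones and ones[-1] - ones[0] + 1 != len(ones):
            return False
    return True

def flips_to_contiguous(row):
    """Minimum number of 0<->1 flips turning `row` into a row whose 1s
    form one contiguous block (brute force over all blocks, O(n^2))."""
    n, total_ones = len(row), sum(row)
    best = total_ones               # flipping every 1 away (empty block)
    for i in range(n):
        ones_in = 0
        for j in range(i, n):
            ones_in += row[j]
            zeros_in = (j - i + 1) - ones_in
            best = min(best, zeros_in + (total_ones - ones_in))
    return best
```

Finding the column order that minimizes the total flip distance is the hard part; as the abstract notes, for most of these patterns that minimization is NP-complete, which is why the thesis turns to heuristics.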
  • Kähkipuro, Pekka (Helsingin yliopisto, 2000)
  • Webb, Christian (Helsingin yliopisto, 2013)
    The thesis is about random measures whose density with respect to the Lebesgue measure is the exponential of a Gaussian field with a short-range logarithmic singularity in its covariance. Such measures are a special case of Gaussian multiplicative chaos. Measures of this type arise in a variety of physical and mathematical models. In physics, they arise as the area measure of two-dimensional Liouville quantum gravity and as Gibbs measures in certain simple disordered systems. From a mathematical point of view, they are related to extreme value statistics of random variables with logarithmic correlations and are interesting as such from the point of view of random geometry. The questions addressed in the thesis are how to properly define such measures and what some geometric properties of these measures are. Defining these measures is non-trivial since, due to the singularity in the covariance, the field can only be interpreted as a random distribution and not as a random function. It turns out that after a suitable regularization of the field and normalization of the measure, a limiting procedure yields a non-trivial limit object. This normalization is a delicate procedure, and at a certain value of the variance of the field the behavior of this normalization changes drastically: a phase transition occurs. Once the measure is constructed, some simple geometric and probabilistic properties of these measures are considered. Relevant questions are, for example: does the measure possess atoms; if not, what is its modulus of continuity; and what is the probability distribution of the measure of a set?
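In standard Gaussian multiplicative chaos notation (generic conventions; the thesis's own may differ), the regularize-and-normalize construction described above reads:

```latex
% X_eps: mollified version of the log-correlated field X
\mu_\varepsilon(\mathrm{d}x)
 = \exp\!\Bigl(\gamma X_\varepsilon(x)
   - \tfrac{\gamma^{2}}{2}\,\mathbb{E}\bigl[X_\varepsilon(x)^{2}\bigr]\Bigr)\,\mathrm{d}x,
\qquad
\mu = \lim_{\varepsilon\to 0}\mu_\varepsilon.
```

The limit is non-trivial in the subcritical regime $\gamma^{2} < 2d$; at $\gamma^{2} = 2d$ this simple normalization no longer suffices and an extra deterministic factor is required, which is the phase transition the abstract refers to.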
  • Kalliomäki, Anna (Helsingin yliopisto, 2003)
  • Keceli, Asli (Helsingin yliopisto, 2015)
    The Standard Model of particle physics (SM) is a gauge field theory that provides a very successful description of the electromagnetic, weak and strong interactions among the elementary particles. It is in very good agreement with precision measurements, and the list of all the fundamental particles predicted by the model was completed with the discovery of the last missing piece, the Higgs boson, at the LHC in 2012. However, it is believed to be valid only up to a certain energy scale and is widely considered a low-scale approximation of a more fundamental theory, due to some theoretical and phenomenological issues appearing in the model. Among many alternatives, supersymmetry is considered the most prominent candidate for new physics beyond the SM. Supersymmetry relates the two different classes of particles known as fermions and bosons. The simplest supersymmetrization of the SM is called the minimal supersymmetric Standard Model (MSSM), where a minimal set of new supersymmetric particles is introduced as superpartners of the Standard Model particles. It is the most studied low-scale supersymmetric model, since it has very appealing features such as containing a dark matter candidate and providing a solution to the naturalness problem of the SM. After the Higgs discovery, the parameter space of the model was investigated in great detail, and it was observed that the measured Higgs mass can be achieved only in parameter regions which generate severe fine-tuning. Such large fine-tuning can be alleviated by extending the minimal field content of the model via a singlet and/or a triplet. In this thesis, we discuss the triplet extension of the supersymmetric Standard Model, where the MSSM field content is enlarged by introducing a triplet chiral superfield with zero hypercharge. The first part of the thesis contains an overview of the SM, and the second part is dedicated to the general features of supersymmetry.
After discussing aspects of the MSSM in the third part, we discuss the triplet-extended supersymmetric Standard Model, where we investigate the implications of the triplet for Higgs phenomenology. We show that the measured mass of the Higgs boson can be achieved in this model without requiring heavy third-generation squarks and/or large squark mixing parameters, which reduces the amount of fine-tuning required. Afterwards, we study the charged Higgs sector, where a triplet scalar field with a non-zero vacuum expectation value leads to an hᵢ±ZW∓ coupling at tree level. We discuss how this coupling alters the charged Higgs decay and production channels at the LHC.
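Schematically (these are standard textbook expressions, not formulas taken from the thesis), the tension described above stems from the dominant one-loop MSSM Higgs mass,

```latex
m_h^{2} \simeq m_Z^{2}\cos^{2}2\beta
 + \frac{3\,m_t^{4}}{4\pi^{2} v^{2}}
   \Bigl[\ln\frac{M_S^{2}}{m_t^{2}}
   + \frac{X_t^{2}}{M_S^{2}}\Bigl(1-\frac{X_t^{2}}{12\,M_S^{2}}\Bigr)\Bigr],
\qquad v \simeq 174\ \mathrm{GeV},
```

which reaches 125 GeV only for a heavy stop mass scale $M_S$ or large stop mixing $X_t$, driving the fine-tuning. A triplet coupling $\lambda$ in the superpotential instead contributes an additional tree-level piece of order $\lambda^{2} v^{2}\sin^{2}2\beta$, relaxing the need for heavy squarks.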
  • Lignell, Hanna (Helsingin yliopisto, 2014)
    In this thesis, fundamentally and atmospherically relevant species, their heterogeneous chemistry, and their photolytic processing in multiple phases are explored both experimentally and computationally, providing important new insights and mechanistic understanding of these complicated systems. HArF is a covalently bonded neutral ground-state molecule of argon that is found to form at very low temperatures. This thesis explores the low-temperature formation mechanism and kinetics of HArF, and discusses the effect of the environment on its formation. In the next part, a computational study of an atmospherically relevant molecule, N2O4, and its isomerization and ionization on model ice and silica surfaces is presented. N2O4 is known to produce HONO, which is a major source of atmospheric OH, an important atmospheric oxidant. The isomerization mechanism is found to be connected to the dangling surface hydrogen atoms at both surfaces, and we suggest that this mechanism could be extended to other atmospherically relevant surfaces as well. Atmospheric aerosols play a critical role in controlling climate, driving chemical reactions in the atmosphere, acting as surfaces catalyzing heterogeneous reactions, and contributing to air pollution problems and indoor air quality issues. Low-volatility organic compounds that are produced in the oxidation of biogenic and anthropogenic volatile organic compounds (VOCs) are known collectively as Secondary Organic Aerosol (SOA). In this thesis, a comprehensive investigation of the aqueous photochemistry of cis-pinonic acid, a common product of the ozonolysis of α-pinene (an SOA precursor), is presented. Various experimental techniques are used to study the kinetics, photolysis rates, quantum yields, and photolysis products, and computational methods are used to explore the photolysis mechanisms. The atmospheric implications and the importance of aqueous photolysis vs. OH-mediated aging are discussed.
The viscosity effects on SOA chemistry are then explored by a novel approach where an environmentally relevant probe molecule 2,4-dinitrophenol is embedded directly inside the SOA matrix, and its photochemistry is studied at different temperatures and compared to reaction efficiency in other reaction media (octanol and water). It is observed that decreasing temperature significantly slows down the photochemical process in the SOA matrix, and this behavior is ascribed to increasing viscosity of the SOA material.
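The photolysis rate constants mentioned above are conventionally computed as the wavelength integral J = ∫ σ(λ) Φ(λ) F(λ) dλ of absorption cross section, quantum yield and actinic flux. Below is a generic trapezoidal-rule implementation of that standard expression (function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def photolysis_rate(wavelength_nm, cross_section_cm2, quantum_yield, actinic_flux):
    """First-order photolysis rate constant J (s^-1): the integral over
    wavelength of sigma(lambda) * phi(lambda) * F(lambda), with the flux
    in photons cm^-2 s^-1 nm^-1, evaluated by the trapezoidal rule."""
    wl = np.asarray(wavelength_nm, float)
    integrand = (np.asarray(cross_section_cm2, float)
                 * np.asarray(quantum_yield, float)
                 * np.asarray(actinic_flux, float))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl)))
```

The inverse of J gives the photolytic lifetime, which is the quantity compared against OH-mediated aging when judging atmospheric importance.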
  • Isoniemi, Esa (Helsingin yliopisto, 2003)
  • Elbra, Tiiu (Helsingin yliopisto, 2011)
    Physical properties provide valuable information about the nature and behavior of rocks and minerals. The changes in rock physical properties generate petrophysical contrasts between various lithologies, for example, between shocked and unshocked rocks in meteorite impact structures or between various lithologies in the crust. These contrasts may cause distinct geophysical anomalies, which are often diagnostic of their primary cause (impact, tectonism, etc.). This information is vital for understanding fundamental Earth processes, such as impact cratering and the associated crustal deformation. However, most present-day knowledge of changes in rock physical properties is limited by a lack of petrophysical data on subsurface samples, especially for meteorite impact structures, since they are often buried under post-impact lithologies or eroded. In order to explore the uppermost crust, deep drilling is required. This dissertation is based on deep drill core data from three impact structures: (i) the Bosumtwi impact structure (diameter 10.5 km, age 1.07 Ma; Ghana), (ii) the Chesapeake Bay impact structure (85 km, 35 Ma; Virginia, U.S.A.), and (iii) the Chicxulub impact structure (180 km, 65 Ma; Mexico). These drill cores have yielded all the basic lithologies associated with impact craters, such as post-impact lithologies, impact rocks including suevites and breccias, as well as fractured and unfractured target rocks. The fourth study case of this dissertation deals with data from the Paleoproterozoic Outokumpu area (Finland), as a non-impact crustal case, where a deep drilling through an economically important ophiolite complex was carried out. The focus in all four cases was to combine the results of basic petrophysical studies of the relevant rocks of these crustal structures in order to identify and characterize the various lithologies by their physical properties and, in this way, to provide new input data for geophysical modelling.
Furthermore, the rock magnetic and paleomagnetic properties of the three impact structures, combined with basic petrophysics, were used to gain insight into the impact-generated changes in rocks and their magnetic minerals, in order to better understand the influence of impact. The obtained petrophysical data outline the various lithologies and divide the rocks into four domains. Depending on the target lithology, the physical properties of the unshocked target rocks are controlled by mineral composition or fabric, particularly porosity in sedimentary rocks, while the sediments result from diverse sedimentation and diagenesis processes. The impact rocks, such as breccias and suevites, strongly reflect the impact formation mechanism and are distinguishable from the other lithologies by their density, porosity and magnetic properties. The numerous shock features resulting from melting, brecciation and fracturing of the target rocks can be seen in the changes of physical properties. These features include an increase in porosity and a consequent decrease in density in impact-derived units, either an increase or a decrease in magnetic properties (depending on the specific case), as well as large heterogeneity in physical properties. In a few cases, a slight gradual downward decrease in porosity, attributable to shock-induced fracturing, was observed. Coupled with rock magnetic studies, the impact-generated changes in the magnetic fraction (shock-induced magnetic grain-size reduction, hydrothermal- or melting-related magnetic mineral alteration, shock demagnetization, and shock- or temperature-related remagnetization) can be seen. The Outokumpu drill core shows varying velocities throughout the core, depending on microcracking and sample conditions. This is similar to observations by Kern et al. (2009), who also reported the dependence of velocity on anisotropy.
The physical properties are also used to explain the distinct crustal reflectors as observed in seismic reflection studies in the Outokumpu area. According to the seismic velocity data, the interfaces between the diopside-tremolite skarn layer and either serpentinite, mica schist or black schist are causing the strong seismic reflectivities.
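The link between the measured physical properties and the seismic reflectors can be sketched with the standard normal-incidence reflection coefficient; the numeric values in the usage line are hypothetical, not the thesis's data.

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence seismic reflection coefficient from the acoustic
    impedance Z = density * P-wave velocity on each side of an interface."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# hypothetical contrast between two lithologies (g/cm^3, km/s)
r = reflection_coefficient(2.70, 5.8, 3.00, 7.0)
```

A strong density or velocity contrast, such as between a dense skarn layer and the surrounding schists, produces a large coefficient and hence a strong reflector in seismic sections.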
  • Kohout, Tomas (Helsingin yliopisto, 2009)
    Together with cosmic spherules, interplanetary dust particles and the lunar samples returned by the Apollo and Luna missions, meteorites are the only source of extraterrestrial material on Earth. The physical properties of meteorites, especially their magnetic susceptibility, bulk and grain density, porosity and paleomagnetic information, have wide applications in planetary research and can reveal information about the origin and internal structure of asteroids. Thus, an expanded database of meteorite physical properties was compiled, with new measurements done in meteorite collections across Europe using a mobile laboratory facility. However, the scale problem may bring discrepancies into the comparison of asteroid and meteorite properties. Due to inhomogeneity, the physical properties of meteorites studied on a centimeter or millimeter scale may differ from those of asteroids determined on kilometer scales. Further differences may arise from shock effects, space and terrestrial weathering, and from differences in material properties at various temperatures. Close attention was given to the reliability of the paleomagnetic and paleointensity information in meteorites, and a methodology to test for magnetic overprints was prepared and verified.
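The porosity values in such a database are conventionally derived from the bulk and grain densities mentioned above; a minimal sketch of that standard relation (the sample values are hypothetical, not entries from the database):

```python
def porosity_percent(bulk_density, grain_density):
    """Porosity (%) from bulk and grain density: the pore fraction is the
    relative deficit of bulk density with respect to grain density."""
    return 100.0 * (1.0 - bulk_density / grain_density)

# hypothetical stony meteorite densities (g/cm^3)
p = porosity_percent(3.2, 3.5)
```

Comparing meteorite porosities obtained this way with asteroid bulk densities from spacecraft data is one route to constraining the internal macroporosity of asteroids, subject to the scale problem noted above.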