Browsing by Title


Now showing items 655-674 of 946
  • Salminen, Johanna (Helsingin yliopisto, 2009)
    The importance of supercontinents in our understanding of the geological evolution of the planet Earth has recently been emphasized. The role of paleomagnetism in reconstructing lithospheric blocks in their ancient paleopositions is vital: paleomagnetism is the only quantitative tool for providing ancient latitudes and azimuthal orientations of continents. It also yields information on the nature of the geomagnetic field in the past. In order to obtain a continuous record of the positions of continents, dated intrusive rocks are required in temporal progression. This is not always possible due to the pulse-like occurrence of dykes. In this work we demonstrate that studies of meteorite impact-related rocks may fill some gaps in the paleomagnetic record. This dissertation is based on paleomagnetic and rock magnetic data obtained from samples of the Jänisjärvi impact structure (Russian Karelia, most recent 40Ar-39Ar age of 682 Ma), the Salla diabase dyke (North Finland, U-Pb 1122 Ma), the Valaam monzodioritic sill (Russian Karelia, U-Pb 1458 Ma), and the Vredefort impact structure (South Africa, 2023 Ma). The paleomagnetic study of the Jänisjärvi samples was made in order to obtain a pole for Baltica, which lacks paleomagnetic data from 750 to ca. 600 Ma. The position of Baltica at ca. 700 Ma is needed to verify whether the supercontinent Rodinia had already fragmented. The paleomagnetic study of the Salla dyke was conducted to examine the position of Baltica at the onset of supercontinent Rodinia's formation. The virtual geomagnetic pole (VGP) from the Salla dyke hints that the Mesoproterozoic Baltica-Laurentia unity in the Hudsonland (Columbia, Nuna) supercontinent assembly may have lasted until 1.12 Ga. Moreover, the new VGP of the Salla dyke provides a new constraint on the timing of the rotation of Baltica relative to Laurentia (e.g. Gower et al., 1990). 
A paleomagnetic study of the Valaam sill was carried out in order to shed light on the question of the existence of Baltica-Laurentia unity in the supercontinent Hudsonland. Combined with results from the dyke complex of the Lake Ladoga region (Schehrbakova et al., 2008), a new robust paleomagnetic pole for Baltica is obtained. This pole places Baltica at a latitude of 10°. This low-latitude location is also supported by Mesoproterozoic 1.5-1.3 Ga red-bed sedimentation (for example the Satakunta sandstone). The Vredefort impactite samples provide a well-dated (2.02 Ga) pole for the Kaapvaal Craton. Rock magnetic data reveal unusually high Koenigsberger ratios (Q values) in all studied lithologies of the Vredefort dome. The high Q values are now seen for the first time also in samples from the Johannesburg Dome (ca. 120 km away), where there is no evidence of impact. Thus, a direct causative link between high Q values and the Vredefort impact event can be ruled out.
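Paleolatitudes like the 10° figure quoted above follow from the geocentric axial dipole (GAD) hypothesis, which relates the mean remanence inclination I to the paleolatitude λ by tan I = 2 tan λ. A minimal sketch of that conversion (the inclination value is illustrative, not taken from the thesis):

```python
import math

def paleolatitude(inclination_deg):
    """Paleolatitude (degrees) from mean remanence inclination,
    under the geocentric axial dipole relation tan(I) = 2 tan(lat)."""
    inc = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inc) / 2.0))

# An inclination of about 19.4 degrees corresponds to a paleolatitude
# of about 10 degrees, i.e. a low-latitude position.
print(round(paleolatitude(19.4), 1))
```

Inverting the same relation gives the inclination expected at a given latitude, which is how a low-latitude pole can be checked against independent geological indicators such as red-bed sedimentation.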
  • Leikoski, Tuomo (Helsingin yliopisto, 2014)
    Formation of carbon-carbon bonds constitutes the basis of synthetic organic chemistry. The growing demand for safer and environmentally friendlier processes, combined with a continuing need for more efficient and selective reactions, has posed challenges to industrial and fundamental academic research. The objective of this thesis was to develop novel ways to perform important carbon-carbon bond-forming reactions on solid support. The special focus was on palladium- and copper-catalysed reactions of unsaturated amines. Polymer-bound propargylamine and allylamine were arylated successfully by the palladium-catalysed Sonogashira and Heck reactions, respectively. Additionally, allenes were produced in the Crabbé homologation of polymer-bound propargylamine, where a copper acetylide acts as an intermediate. All of these reactions give rise to biologically interesting molecules: 1,3-arylaminopropanes after hydrogenation of the Sonogashira and Heck products, and nitrogen-containing allenes by the Crabbé reaction. By varying the aryl iodide in solution, a series of arylated propargylamines and allylamines were synthesised and isolated as their acetamides. From the polymer-bound propargylamine, various allene amides were obtained after N-acylation followed by the Crabbé reaction. It was also briefly explored whether the arylation of propargylamine on solid phase would be possible without expensive palladium via the Castro-Stephens reaction, using a polymer-bound copper acetylide and the aryl iodide in solution. However, attempts to perform the first Castro-Stephens reaction on solid phase failed. Free amines are problematic in the Sonogashira and Heck reactions due to coordination with the palladium catalyst, and in the Crabbé reaction due to their nucleophilicity toward the allene. 
These incompatibilities were solved by using the resin linkers simultaneously as protecting groups for the amines: as carbamates in the Sonogashira and Heck reactions, and as N-acyltriazenes in the Crabbé reaction. For the Heck reaction, finding the right reaction conditions turned out to be particularly difficult, the additional challenges being the narrow temperature window and the need to avoid polyarylation. Nevertheless, a regioselective γ-arylation could be performed, giving yields similar to those in the Sonogashira studies. In summary, alternative methods to perform important carbon-carbon bond-forming reactions on solid support were developed.
  • Andersson, Terhi (Helsingin yliopisto, 2007)
    Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, the recoveries and the relative errors. First, PHWE was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200°C and 250°C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300°C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. 
The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow through the vessel. These are good results, because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300°C even with the shortest heating time. Ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting points of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point, because the flow-through saturation cell could not be used there. An exponential relationship was found between temperature and the solubilities measured for pyrene and anthracene.
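An exponential solubility-temperature relationship of the kind reported above is commonly handled by fitting the logarithm of solubility linearly against inverse temperature (a van't Hoff-type plot). A sketch of that fitting step with made-up data points (the numbers are not measurements from the thesis):

```python
import math

# Hypothetical (temperature in K, solubility in mg/L) pairs.
data = [(373.0, 5.0), (423.0, 40.0), (473.0, 250.0)]

# Least-squares fit of ln(S) = a + b / T.
xs = [1.0 / t for t, _ in data]
ys = [math.log(s) for _, s in data]
n = len(data)
xm, ym = sum(xs) / n, sum(ys) / n
b = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum((x - xm) ** 2 for x in xs)
a = ym - b * xm

def solubility(temp_k):
    """Predicted solubility (mg/L) from the fitted exponential model."""
    return math.exp(a + b / temp_k)
```

A negative slope b recovers the exponential increase of solubility with temperature that the measurements showed.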
  • Ollinaho, Pirkka (Helsingin yliopisto, 2014)
    Numerical Weather Prediction (NWP) models form the basis of weather forecasting. The accuracy of model forecasts can be enhanced by providing a more accurate initial state for the model, and by improving the model representation of relevant atmospheric processes. Modelling of subgrid-scale physical processes causes additional uncertainty in the forecasts since, for example, the rates at which parts of the physical processes occur are not exactly known. The efficiency of these sub-processes in the models is controlled via so-called closure parameters. This thesis is motivated by a practical need to estimate the values of these closure parameters objectively, and to assess the uncertainties related to them. In this thesis the Ensemble Prediction and Parameter Estimation System (EPPES) is utilised to determine the optimal closure parameter values, and to learn about their uncertainties. Closure parameters related to convective processes, the formation of convective rain, and stratiform clouds are studied in two atmospheric General Circulation Models (GCMs): the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the ECMWF model HAMburg version (ECHAM5). The parameter estimation is conducted by launching ensembles of medium-range forecasts with initial-time parameter variations. The fit of each ensemble member to analyses is then evaluated with respect to a target criterion, and the likelihoods of the forecasts are discerned. The target criterion is first set to be the 500 hPa geopotential height Mean Squared Error (MSE) at forecast days three and ten. After the proof-of-concept experiments, the use of the total energy norm as the target criterion is explored. EPPES estimation with both likelihoods results in parameter values converging to more optimal values during a three-month sampling period. 
The improved forecast accuracy of the models with the new parameter values is verified through headline skill scores (Root Mean Square Error (RMSE) and Anomaly Correlation Coefficient (ACC)) of 500 hPa geopotential height and a scorecard consisting of multiple model fields. The sampling process also provides information about parameter uncertainties. Three uses for the uncertainty data are highlighted: (i) parametrization deficiencies can be identified from large parameter uncertainties, (ii) parameter correlations can indicate a need for the coupling of parameters, and (iii) adding parameter variations into an ensemble prediction system (EPS) can be used to increase the ensemble spread. The relationship between medium-range forecasts and model climatology is explored, too. Cloud cover changes induced by closure parameter modifications at forecast day three carry over to the very long range forecasts as well. This link could be used to improve model climatology by enhancing the computationally cheaper medium-range forecast skill of the model.
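The likelihood step can be illustrated schematically: each ensemble member runs with perturbed parameter values, its 500 hPa geopotential height forecast is scored against the analysis by MSE, and lower-cost members receive higher weight. The toy data and weighting constant below are invented for illustration; this is a sketch of the idea, not the EPPES implementation:

```python
import math
import random

random.seed(0)

# Fake 500 hPa geopotential height analysis (values in metres).
analysis = [5500.0 + 20.0 * math.sin(i / 3.0) for i in range(48)]

def member_forecast(param_bias):
    """Toy forecast: the analysis plus a parameter-dependent bias and noise."""
    return [z + param_bias + random.gauss(0.0, 5.0) for z in analysis]

def mse(forecast, truth):
    return sum((f - t) ** 2 for f, t in zip(forecast, truth)) / len(truth)

# Three members with different closure-parameter perturbations (as biases).
costs = {bias: mse(member_forecast(bias), analysis) for bias in (-10.0, 0.0, 10.0)}

# Lower MSE -> higher (unnormalised) likelihood weight.
weights = {bias: math.exp(-cost / 100.0) for bias, cost in costs.items()}
best = min(costs, key=costs.get)
```

Iterating this scoring over successive forecast ensembles is what lets the sampled parameter distribution drift toward better-performing values.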
  • Weijo, Ville (Helsingin yliopisto, 2008)
    The Standard Model of particle physics consists of quantum electrodynamics (QED) and the weak and strong nuclear interactions. QED is the basis for molecular properties, and thus it defines much of the world we see. The weak nuclear interaction is responsible for decays of nuclei, among other things, and in principle it should also have effects at the molecular scale. The strong nuclear interaction is hidden in interactions inside nuclei. From high-energy and atomic experiments it is known that the weak interaction does not conserve parity. Consequently, the weak interaction, and specifically the exchange of the Z^0 boson between a nucleon and an electron, induces small energy shifts of different sign for mirror-image molecules. This in turn makes one enantiomer of a molecule energetically more favorable than the other, and also shifts the spectral lines of the mirror-image pair of molecules in different directions, creating a splitting. Parity violation (PV) in molecules, however, has not been observed. The topic of this thesis is how the weak interaction affects certain molecular magnetic properties, namely certain parameters of nuclear magnetic resonance (NMR) and electron spin resonance (ESR) spectroscopies. The thesis consists of numerical estimates of NMR and ESR spectral parameters and investigations of how different aspects of quantum chemical computations affect them. PV contributions to the NMR shielding and spin-spin coupling constants are investigated from the computational point of view. All the aspects of quantum chemical electronic structure computations are found to be very important, which makes accurate computations challenging. Effects of molecular geometry are also investigated using a model system of polysilylene chains. The PV contribution to the NMR shielding constant is found to saturate after the chain reaches a certain length, but the effects of local geometry can be large. 
Rigorous vibrational averaging is also performed for a relatively small and rigid molecule. Vibrational corrections to the PV contribution are found to be only a couple of per cent. PV contributions to the ESR g-tensor are also evaluated for a series of molecules. Unfortunately, all the estimates are below the experimental detection limits, but PV in some of the heavier molecules comes close to present-day experimental resolution.
  • Karvi, Timo (Helsingin yliopisto, 2000)
  • Hakulinen, Ville (Helsingin yliopisto, 2002)
  • Arponen, Heikki (Helsingin yliopisto, 2009)
    This thesis consists of three articles on passive vector fields in turbulence. The vector fields interact with a turbulent velocity field, which is described by the Kraichnan model. The effect of the Kraichnan model on the passive vectors is studied via an equation for the pair correlation function and its solutions. The first paper is concerned with the passive magnetohydrodynamic equations. Emphasis is placed on the so-called "dynamo effect", which in the present context is understood as an unbounded growth of the pair correlation function. The exact analytical conditions for such growth are found in the cases of zero and infinite Prandtl numbers. The second paper contains an extensive study of a number of passive vector models. Emphasis is now on the properties of the (assumed) steady state, namely anomalous scaling, anisotropy, and small- and large-scale behavior with different types of forcing or stirring. The third paper is in many ways a completion of the previous one in its study of the steady-state existence problem. Conditions for the existence of the steady state are found in terms of the spatial roughness parameter of the turbulent velocity field.
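For reference, the Kraichnan model describes the advecting velocity as a Gaussian field that is white in time, with spatial increments whose covariance scales with the roughness parameter mentioned above (standard textbook form, not quoted from the thesis):

```latex
\langle v_i(t,\mathbf{x})\,v_j(t',\mathbf{x}')\rangle
  = \delta(t-t')\,D_{ij}(\mathbf{x}-\mathbf{x}'),
\qquad
D_{ij}(\mathbf{0}) - D_{ij}(\mathbf{r}) \sim |\mathbf{r}|^{\xi},
\quad 0 < \xi < 2,
```

where ξ is the spatial roughness exponent: ξ → 2 corresponds to a smooth velocity field and ξ → 0 to a very rough one.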
  • Kiljunen, Timo (Helsingin yliopisto, 2008)
    Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect the dose, such as the scan parameters, scan mode, and patient size. Paediatric patients have a higher probability of late radiation effects, since a longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations were discovered in patient dose surveys. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient which takes individual patient size into account in patient dosimetry was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared with multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation. 
For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provided tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments to be made. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
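Patient-size corrections of this kind are often realised as an exponential function of patient size. A generic sketch with hypothetical coefficients, purely to illustrate the shape of such a correction (these are not the fitted values from the thesis):

```python
import math

# Hypothetical fit parameters for an exponential size correction,
# f(d) = A * exp(-B * d), with d the patient's effective diameter in cm.
A, B = 3.7, 0.037

def size_correction(effective_diameter_cm):
    """Multiplicative factor applied to a phantom-referenced dose index
    to obtain a patient-size-specific dose estimate."""
    return A * math.exp(-B * effective_diameter_cm)

# A small (paediatric) patient absorbs relatively more of the delivered
# dose, so the correction factor is larger than for a large adult:
child, adult = size_correction(15.0), size_correction(35.0)
```

The monotonically decreasing factor captures why a single phantom-referenced dose index understates the dose to small patients.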
  • Toroi, Paula (Helsingin yliopisto, 2009)
    The methods for estimating patient exposure in x-ray imaging are based on the measurement of radiation incident on the patient. In digital imaging, the useful dose range of the detector is large, and excessive doses may remain undetected. Therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (at the 95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibration is performed with a reference KAP meter. This method provides a practical way to calibrate field KAP meters. However, the reference KAP meters require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities, but these do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter. The accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation by tissue. 
This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. New optimization of radiation qualities and dose levels is needed when other detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
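In outline, the tandem calibration described above reduces to taking the ratio of simultaneous readings from the reference and field KAP meters. A minimal sketch with invented readings (the values and units are illustrative only):

```python
# Hypothetical simultaneous KAP readings (in uGy*m^2) from a calibrated
# reference meter and the field meter in its clinical position.
reference_readings = [10.2, 10.1, 10.3]
field_readings = [9.0, 8.9, 9.1]

def calibration_coefficient(ref, field):
    """Mean ratio of reference to field readings; multiplying a field
    reading by this coefficient gives the calibrated KAP value."""
    ratios = [r / f for r, f in zip(ref, field)]
    return sum(ratios) / len(ratios)

k = calibration_coefficient(reference_readings, field_readings)
calibrated_kap = k * 9.0  # a corrected patient exposure reading
```

Because the field meter stays in its clinical position during the procedure, the coefficient automatically absorbs the geometry effects the thesis highlights.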
  • Vilo, Jaak (Helsingin yliopisto, 2002)
  • Junttila, Esa (Helsingin yliopisto, 2011)
    Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data. 
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely a product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated under a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
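As a concrete illustration of one of the patterns: a binary matrix is (fully) nested when the supports of its rows, i.e. the sets of columns holding a 1, can be ordered into a chain by inclusion. A minimal check, written as a sketch rather than as the thesis's own algorithm:

```python
def is_nested(matrix):
    """True iff the row supports of a 0/1 matrix form a chain under
    inclusion, i.e. the matrix is nested."""
    supports = sorted(
        ({j for j, v in enumerate(row) if v} for row in matrix),
        key=len, reverse=True,
    )
    return all(small <= big for big, small in zip(supports, supports[1:]))

nested = [[1, 1, 1],
          [1, 1, 0],
          [1, 0, 0]]
broken = [[1, 1, 0],
          [0, 1, 1]]
print(is_nested(nested), is_nested(broken))  # True False
```

Detecting the perfect pattern is easy, as here; the hard part discussed above is computing the minimum number of flips that would make a noisy matrix pass this test.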
  • Kähkipuro, Pekka (Helsingin yliopisto, 2000)
  • Webb, Christian (Helsingin yliopisto, 2013)
    The thesis is about random measures whose density with respect to the Lebesgue measure is the exponential of a Gaussian field with a short-range logarithmic singularity in its covariance. Such measures are a special case of Gaussian multiplicative chaos. Measures of this type arise in a variety of physical and mathematical models. In physics, they arise as the area measure of two-dimensional Liouville quantum gravity and as Gibbs measures in certain simple disordered systems. From a mathematical point of view, they are related to the extreme value statistics of random variables with logarithmic correlations, and they are interesting as such from the point of view of random geometry. The questions addressed in the thesis are how to properly define such measures and what some of their geometric properties are. Defining these measures is non-trivial since, due to the singularity in the covariance, the field can only be interpreted as a random distribution and not as a random function. It turns out that after a suitable regularization of the field and normalization of the measure, a limiting procedure yields a non-trivial limit object. This normalization is a delicate procedure, and at a certain value of the variance of the field the behavior of this normalization changes drastically: a phase transition occurs. Once the measure is constructed, some simple geometric and probabilistic properties of these measures are considered. Relevant questions include: does the measure possess atoms; if not, what is its modulus of continuity; and what is the probability distribution of the measure of a set?
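The regularise-and-normalise construction referred to is usually written as follows (standard Gaussian multiplicative chaos notation, not quoted from the thesis): with X_ε a mollified version of the field X and γ a coupling constant controlling the variance,

```latex
M_{\gamma,\varepsilon}(dx)
  = \exp\!\Big(\gamma X_\varepsilon(x)
      - \tfrac{\gamma^2}{2}\,\mathbb{E}\big[X_\varepsilon(x)^2\big]\Big)\,dx,
\qquad
M_\gamma = \lim_{\varepsilon \to 0} M_{\gamma,\varepsilon}.
```

In dimension d the limit is non-trivial for γ² < 2d; the drastic change in the behaviour of the normalization at γ² = 2d is the phase transition mentioned above.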
  • Kalliomäki, Anna (Helsingin yliopisto, 2003)