Browsing by Title


Now showing items 627-646 of 888
  • Junttila, Esa (Helsingin yliopisto, 2011)
    Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectical variation in the spoken Finnish language, division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
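As a toy illustration of the flip distances described above (this is not one of the thesis's algorithms, and the function names here are made up): under a fixed column order, a row satisfies the consecutive-ones property exactly when its 1s form one contiguous run, and the 0-to-1 flips needed to close its gaps are easy to count.

```python
def row_gap_flips(row):
    """Number of 0->1 flips needed to make the 1s in a row contiguous.

    Returns 0 when the row already satisfies consecutive-ones
    for this particular column order.
    """
    ones = [j for j, v in enumerate(row) if v == 1]
    if not ones:
        return 0
    # Span from the first to the last 1, minus the 1s already present,
    # equals the number of gap positions that must be flipped.
    return ones[-1] - ones[0] + 1 - len(ones)


def has_c1p_rows(matrix):
    """True if every row's 1s are contiguous under the current column permutation."""
    return all(row_gap_flips(row) == 0 for row in matrix)
```

For example, `row_gap_flips([1, 0, 0, 1])` is 2, and `has_c1p_rows([[0, 1, 1], [1, 1, 0]])` is True. Note that the full C1P question asks whether *some* column permutation makes all rows pass this check simultaneously; finding one (or the nearest matrix that admits one) is where the combinatorial difficulty discussed above lies.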
  • Kähkipuro, Pekka (Helsingin yliopisto, 2000)
  • Webb, Christian (Helsingin yliopisto, 2013)
    The thesis is about random measures whose density with respect to the Lebesgue measure is the exponential of a Gaussian field with a short-range logarithmic singularity in its covariance. Such measures are a special case of Gaussian multiplicative chaos. Measures of this type arise in a variety of physical and mathematical models. In physics, they arise as the area measure of two-dimensional Liouville quantum gravity and as Gibbs measures in certain simple disordered systems. From a mathematical point of view, they are related to the extreme value statistics of random variables with logarithmic correlations and are interesting in their own right from the point of view of random geometry. The questions addressed in the thesis are how to properly define such measures and what some of their geometric properties are. Defining these measures is non-trivial since, due to the singularity in the covariance, the field can only be interpreted as a random distribution and not as a random function. It turns out that after a suitable regularization of the field and normalization of the measure, a limiting procedure yields a non-trivial limit object. This normalization is a delicate procedure, and at a certain value of the variance of the field the behavior of this normalization changes drastically: a phase transition occurs. Once the measure is constructed, some simple geometric and probabilistic properties of these measures are considered. Relevant questions are, for example: does the measure possess atoms; if not, what is its modulus of continuity; and what is the probability distribution of the measure of a set?
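In the standard textbook notation of Gaussian multiplicative chaos (the symbols below are generic, not necessarily the thesis's own), the construction sketched above regularizes the log-correlated field $X$ by a smooth approximation $X_\varepsilon$, normalizes the exponential by its mean, and takes a limit:

```latex
\mu_\gamma(\mathrm{d}x)
  = \lim_{\varepsilon \to 0}
    \exp\!\Big(\gamma X_\varepsilon(x)
      - \tfrac{\gamma^2}{2}\,\mathbb{E}\big[X_\varepsilon(x)^2\big]\Big)\,\mathrm{d}x .
```

In dimension $d$ this limit is non-trivial in the subcritical regime $\gamma^2 < 2d$; the drastic change in behavior mentioned above, the phase transition, occurs at the critical value $\gamma^2 = 2d$.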
  • Kalliomäki, Anna (Helsingin yliopisto, 2003)
  • Keceli, Asli (Helsingin yliopisto, 2015)
    The Standard Model of particle physics (SM) is a gauge field theory that provides a very successful description of the electromagnetic, weak and strong interactions among the elementary particles. It is in very good agreement with precision measurements, and the list of fundamental particles predicted by the model was completed with the discovery of the last missing piece, the Higgs boson, at the LHC in 2012. However, the model is believed to be valid only up to a certain energy scale and is widely considered a low-energy approximation of a more fundamental theory, owing to theoretical and phenomenological issues that appear within it. Among many alternatives, supersymmetry is considered the most prominent candidate for new physics beyond the SM. Supersymmetry relates the two different classes of particles known as fermions and bosons. The simplest supersymmetrization of the SM is the Minimal Supersymmetric Standard Model (MSSM), in which a minimal set of new supersymmetric particles is introduced as superpartners of the Standard Model particles. It is the most studied low-scale supersymmetric model, since it has very appealing features such as containing a dark matter candidate and providing a solution to the naturalness problem of the SM. After the Higgs discovery, the parameter space of the model was investigated in great detail, and it was observed that the measured Higgs mass can be achieved only in parameter regions that generate severe fine-tuning. Such large fine-tuning can be alleviated by extending the minimal field content of the model via a singlet and/or a triplet. In this thesis, we discuss the triplet extension of the supersymmetric Standard Model, in which the MSSM field content is enlarged by introducing a triplet chiral superfield with zero hypercharge. The first part of the thesis contains an overview of the SM, and the second part is dedicated to the general features of supersymmetry.
After discussing aspects of the MSSM in the third part, we discuss the triplet-extended supersymmetric Standard Model, where we investigate the implications of the triplet for Higgs phenomenology. We show that the measured mass of the Higgs boson can be achieved in this model without requiring heavy third-generation squarks and/or large squark mixing parameters, which reduces the amount of fine-tuning required. Afterwards, we study the charged Higgs sector, where a triplet scalar field with a non-zero vacuum expectation value leads to an h_i^± Z W^∓ coupling at tree level. We discuss how this coupling alters the charged Higgs decay and production channels at the LHC.
  • Lignell, Hanna (Helsingin yliopisto, 2014)
    In this thesis, fundamentally and atmospherically relevant species, their heterogeneous chemistry, and their photolytic processing in multiple phases are explored both experimentally and computationally, providing important new insights into and mechanistic understanding of these complicated systems. HArF is a covalently bonded, neutral ground-state molecule of argon that forms at very low temperatures. This thesis explores the low-temperature formation mechanism and kinetics of HArF, and discusses the effect of the environment on its formation. In the next part, a computational study of an atmospherically relevant molecule, N2O4, and its isomerization and ionization on model ice and silica surfaces is presented. N2O4 is known to produce HONO, a major source of atmospheric OH, an important atmospheric oxidant. The isomerization mechanism is found to be connected to the dangling surface hydrogen atoms at both surfaces, and we suggest that this mechanism could extend to other atmospherically relevant surfaces as well. Atmospheric aerosols play a critical role in controlling climate, driving chemical reactions in the atmosphere, acting as surfaces catalyzing heterogeneous reactions, and contributing to air pollution and indoor air quality problems. Low-volatility organic compounds produced in the oxidation of biogenic and anthropogenic Volatile Organic Compounds (VOCs) are known collectively as Secondary Organic Aerosol (SOA). In this thesis, a comprehensive investigation of the aqueous photochemistry of cis-pinonic acid, a common product of the ozonolysis of α-pinene (an SOA precursor), is presented. Various experimental techniques are used to study the kinetics, photolysis rates, quantum yields, and photolysis products, and computational methods are used to explore the photolysis mechanisms. The atmospheric implications and the importance of aqueous photolysis versus OH-mediated aging are discussed.
The viscosity effects on SOA chemistry are then explored by a novel approach in which an environmentally relevant probe molecule, 2,4-dinitrophenol, is embedded directly inside the SOA matrix, and its photochemistry is studied at different temperatures and compared to the reaction efficiency in other reaction media (octanol and water). Decreasing temperature is observed to significantly slow down the photochemical process in the SOA matrix, a behavior ascribed to the increasing viscosity of the SOA material.
  • Isoniemi, Esa (Helsingin yliopisto, 2003)
  • Elbra, Tiiu (Helsingin yliopisto, 2011)
    Physical properties provide valuable information about the nature and behavior of rocks and minerals. Changes in rock physical properties generate petrophysical contrasts between various lithologies, for example between shocked and unshocked rocks in meteorite impact structures, or between various lithologies in the crust. These contrasts may cause distinct geophysical anomalies, which are often diagnostic of their primary cause (impact, tectonism, etc.). This information is vital for understanding fundamental Earth processes, such as impact cratering and the associated crustal deformations. However, much of our present-day knowledge of changes in rock physical properties is limited by a lack of petrophysical data from subsurface samples, especially for meteorite impact structures, since these are often buried under post-impact lithologies or eroded. In order to explore the uppermost crust, deep drilling is required. This dissertation is based on deep drill core data from three impact structures: (i) the Bosumtwi impact structure (diameter 10.5 km, age 1.07 Ma; Ghana), (ii) the Chesapeake Bay impact structure (85 km, 35 Ma; Virginia, U.S.A.), and (iii) the Chicxulub impact structure (180 km, 65 Ma; Mexico). These drill cores have yielded all the basic lithologies associated with impact craters, such as post-impact lithologies, impact rocks including suevites and breccias, and fractured and unfractured target rocks. The fourth study case of this dissertation deals with data from the Paleoproterozoic Outokumpu area (Finland), a non-impact crustal case where a deep drilling through an economically important ophiolite complex was carried out. The focus in all four cases was to combine the results of basic petrophysical studies of the relevant rocks of these crustal structures in order to identify and characterize the various lithologies by their physical properties and, in this way, to provide new input data for geophysical modelling.
Furthermore, the rock magnetic and paleomagnetic properties of the three impact structures, combined with basic petrophysics, were used to gain insight into the impact-generated changes in rocks and their magnetic minerals, in order to better understand the influence of impact. The obtained petrophysical data outline the various lithologies and divide the rocks into four domains. Depending on the target lithology, the physical properties of the unshocked target rocks are controlled by mineral composition or fabric (particularly porosity in sedimentary rocks), while the properties of sediments result from diverse sedimentation and diagenesis processes. The impact rocks, such as breccias and suevites, strongly reflect the impact formation mechanism and are distinguishable from the other lithologies by their density, porosity and magnetic properties. The numerous shock features resulting from melting, brecciation and fracturing of the target rocks can be seen in the changes of physical properties. These features include an increase in porosity and a consequent decrease in density in impact-derived units, either an increase or a decrease in magnetic properties (depending on the specific case), and large heterogeneity in physical properties. In a few cases a slight gradual downward decrease in porosity, reflecting shock-induced fracturing, was observed. Coupled with rock magnetic studies, the impact-generated changes in the magnetic fraction can be seen: shock-induced magnetic grain-size reduction, hydrothermal- or melting-related magnetic mineral alteration, shock demagnetization, and shock- or temperature-related remagnetization. The Outokumpu drill core shows varying velocities throughout its length, depending on microcracking and sample conditions. This is similar to the observations of Kern et al. (2009), who also reported the dependence of velocity on anisotropy.
The physical properties are also used to explain the distinct crustal reflectors observed in seismic reflection studies of the Outokumpu area. According to the seismic velocity data, the interfaces between the diopside-tremolite skarn layer and either serpentinite, mica schist or black schist cause the strong seismic reflectivity.
  • Kohout, Tomas (Helsingin yliopisto, 2009)
    Together with cosmic spherules, interplanetary dust particles and the lunar samples returned by the Apollo and Luna missions, meteorites are the only source of extraterrestrial material on Earth. The physical properties of meteorites, especially their magnetic susceptibility, bulk and grain density, porosity and paleomagnetic information, have wide applications in planetary research and can reveal information about the origin and internal structure of asteroids. Thus, an expanded database of meteorite physical properties was compiled, with new measurements made in meteorite collections across Europe using a mobile laboratory facility. However, the scale problem may introduce discrepancies into the comparison of asteroid and meteorite properties. Due to inhomogeneity, the physical properties of meteorites studied on a centimeter or millimeter scale may differ from those of asteroids determined on kilometer scales. Further differences may arise from shock effects, space and terrestrial weathering, and from differences in material properties at various temperatures. Close attention was given to the reliability of paleomagnetic and paleointensity information in meteorites, and a methodology to test for magnetic overprints was developed and verified.
  • Tala, Suvi (Helsingin yliopisto, 2015)
    A central part of the enculturation of new scientists in the natural sciences takes place in poorly understood apprentice-master settings: potential expert researchers learn about success in science by doing science as members of research groups. What makes learning in such settings challenging is that a central part of the expertise they are attempting to achieve is tacit: the ideas guiding scientific knowledge-building are embodied in its practices and are nowadays rarely articulated. This interdisciplinary study develops a naturalistic view of scientific knowledge construction and justification, and of what is learned in those processes, in close cooperation with practitioners and by reflection on their actual practices. Such a viewpoint guides the development of expertise education for scientists. Another goal of the study is to encourage science education at every level to reflect as far as possible those epistemological aspects of doing science that practising scientists can agree upon. The theoretical part of the dissertation focuses on those features of experimentation and modelling that the viewpoint of scientific practices suggests are essential, but which are not addressed in the traditional views of science studies and, as a consequence, in science education. The theoretical ideas are tested and deepened in the empirical part, which concerns nanoscience. The contextualized method developed here supports scientists in reflecting on their shared research practices and in articulating those reflections in a questionnaire and an interview. Contrary to traditional views, physical knowledge is understood to progress through a technoscientific design process, aiming at tightening the mutually developing conceptual and material control over the physical world.
The products of the design process are both an understanding of scientific phenomena and the means to study them; that is, a laboratory phenomenon is constructed and controlled, created in the laboratory in the same design process that produces the understanding of its functioning. These notions suggest a revision of what exactly is achieved by science and on what kind of basis, which indeed moves the epistemological views of science towards a viewpoint recognizable to its practitioners. Nowadays, technoscientific design is increasingly embodied in simulative modelling, mediating between experimental reality and its theoretical framework. Such modelling is neither a part nor a continuation of theorizing, as most of the literature considers modelling to be, nor is it merely a means to analyse experimental data; it is a partly independent and flexible method of generating our understanding of the world. Because the rapid development of modelling technology alters the evidential basis of science, a new kind of expertise is needed. The access to physical reality provided by generative modelling differs epistemologically and cognitively from traditional methodological approaches. The expertise developed in such modelling provides scientists with new kinds of possibilities. For young scientists' success, and for scientific and technological progress, this expertise is worth understanding.
  • Nousiainen, Maija (Helsingin yliopisto, 2012)
    In physics teacher education, graphical knowledge-representation tools such as concept maps are often used because they are known to support the formation of organised knowledge. It is widely assumed that certain structural characteristics of concept maps can be connected to the usefulness of their content. In order to study this relationship, the concept maps made by pre-service physics teachers are examined here. The design principles of the concept maps are based on quantitative experiments and modelling as the basic procedures of physics concept formation. The approach discussed here is informed by recent cognitively oriented ideas of knowledge organisation around basic knowledge-organisation patterns and of how these patterns form the basis of more complex concept networks. The epistemic plausibility of the justifications written on links is evaluated using a four-level classification introduced here. The new method generalises and widens existing approaches that use concept maps to represent learners' knowledge and that also use concept maps for research purposes. Therefore, this thesis presents some novel theoretical constructs for analysis and discusses empirical results using these new constructs at length, in order to show the advantages the new theoretical aspects offer. Modelling of the data shows that such a concept-mapping technique supports students' conceptual understanding. The maps' usefulness in planning teaching is also identified by modelling the flux of information that the relational structure of a map represents.
  • Noschis, Elias (Helsingin yliopisto, 2006)
    The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer (-t ≈ p^2 θ^2) up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (< 20 um), fast response (O(10 ns)) to particles, and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 um, compatible with the requirements of the experiment.
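As a quick sanity check on the numbers above (the beam momentum of 7000 GeV is assumed here as the nominal LHC single-beam value, and the helper name is made up), the small-angle four-momentum transfer -t ≈ (pθ)^2 links microradian scattering angles to the quoted few-times-10^-3 GeV^2 range:

```python
def minus_t(p_gev, theta_rad):
    """Elastic four-momentum transfer in the small-angle approximation:
    -t ≈ (p * θ)^2, with p in GeV and θ in radians, giving -t in GeV^2."""
    return (p_gev * theta_rad) ** 2

# A 7000 GeV proton deflected by 10 microradians:
# -t ≈ (7000 * 10e-6)^2 = 0.0049 GeV^2, i.e. a few times 10^-3 GeV^2,
# consistent with the acceptance quoted in the abstract.
```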
  • Tirri, Henry (Helsingin yliopisto, 1997)