Faculty of Science

 

Recent Submissions

  • Özcan-Ketola, Nergiz (Helsingin yliopisto, 2015)
    In this thesis, structural effects on magnetic response properties (magnetically induced ring currents, the ESR g-tensor and hyperfine coupling tensor, and NMR chemical shifts) are investigated computationally with DFT methods, using various exchange-correlation functionals and basis sets. Magnetically induced currents are calculated for thieno-bridged porphyrins with emphasis on the aromatic character of the systems, the degree of which is investigated for varying molecular modifications. The ESR g-tensor, as well as the hyperfine coupling tensors for Sn and O nuclei in the vicinity of a positively charged oxygen vacancy in solid tin dioxide, are reported with finite cluster methods using different cluster embedding techniques to define the structural environment. The NMR spectral trends for increasing-size nanoflakes of graphenic materials are predicted as functions of the size and boundary geometry of the flakes. Finally, a number of dye molecules are subjected to NMR chemical shift calculations where the intermolecular interaction effects present in liquid solution are studied with dynamic simulation techniques. The magnetically induced currents calculated for thieno-bridged porphyrins show that changes in the molecular structure, such as the direction of the thiophene ring or substitution by Zn^2+, do not change the aromatic character of the molecule. It is possible to confirm the experimental assignment of the ESR signal with the g-factor around 2.00 to the positively charged vacancy in tin dioxide, whereas the other experimental assignment of a signal at g=1.89 is not supported by our calculations. Distinct characteristic NMR spectral patterns are found for graphene nanoflakes, reflecting the effects of increasing size and different boundary geometries on the NMR shifts. Solvent effects on the NMR of dye molecules are found to be location-specific: nuclei from different regions of the systems display distinct responses to solvation.
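    For reference, the NMR chemical shifts discussed above are conventionally obtained from computed nuclear shielding constants; a minimal sketch of the standard (textbook, not thesis-specific) relation in LaTeX form is:

        \delta = \frac{\sigma_{\mathrm{ref}} - \sigma}{1 - \sigma_{\mathrm{ref}}} \approx \sigma_{\mathrm{ref}} - \sigma

    where \sigma is the isotropic shielding constant of the nucleus of interest and \sigma_{\mathrm{ref}} that of the same nucleus in a reference compound.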
  • Kohonen, Jukka (Helsingin yliopisto, 2015)
    Clustering is a central task in computational statistics. Its aim is to divide observed data into groups of items, based on the similarity of their features. Among various approaches to clustering, Bayesian model-based clustering has recently gained popularity. Many existing works are based on stochastic sampling methods. This work is concerned with exact, exponential-time algorithms for the Bayesian model-based clustering task. In particular, we consider the exact computation of two summary statistics: the number of clusters, and pairwise incidence of items in the same cluster. We present an implemented algorithm for computing these statistics substantially faster than would be achieved by direct enumeration of the possible partitions. The method is practically applicable to data sets of up to approximately 25 items. We apply a variant of the exact inference method to graphical models where a given variable may have up to four parent variables. The parent variables can then have up to 16 value combinations, and the task is to cluster them and find combinations that lead to similar conditional probability tables. Further contributions of this work are related to number theory. We show that a novel combination of addition chains and additive bases provides the optimal arrangement of multiplications when the task is to use repeated multiplication starting from a given number or entity, but only a certain kind of function of the successive powers is required. This arrangement speeds up the computation of the posterior distribution for the number of clusters. The same arrangement method can be applied to other multiplicative tasks, for example, in matrix multiplication. We also present new algorithmic results related to finding extremal additive bases. Before this work, the extremal additive bases were known up to length 23. We have computed them up to length 24 in the unrestricted case, and up to length 41 in the restricted case.
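    To make the two summary statistics concrete, the sketch below shows the direct-enumeration baseline that the abstract contrasts against (not the thesis's faster algorithm): it sums an assumed, user-supplied unnormalized per-cluster marginal likelihood, cluster_score, over all set partitions of the items.

        from collections import defaultdict

        def partitions(items):
            # Generate all set partitions of a list of items (Bell(n) of them).
            if not items:
                yield []
                return
            first, rest = items[0], items[1:]
            for partial in partitions(rest):
                # Put `first` into each existing block, or into a new block of its own.
                for i in range(len(partial)):
                    yield partial[:i] + [[first] + partial[i]] + partial[i + 1:]
                yield [[first]] + partial

        def exact_summaries(items, cluster_score):
            # Posterior over the number of clusters and pairwise co-clustering
            # probabilities by brute-force enumeration.  cluster_score(block) is a
            # user-supplied unnormalized marginal likelihood of one cluster; a
            # partition's weight is the product over its blocks.
            total = 0.0
            k_post = defaultdict(float)
            pair_post = defaultdict(float)
            for part in partitions(list(items)):
                w = 1.0
                for block in part:
                    w *= cluster_score(block)
                total += w
                k_post[len(part)] += w
                for block in part:
                    for a in block:
                        for b in block:
                            if a < b:
                                pair_post[(a, b)] += w
            return ({k: v / total for k, v in k_post.items()},
                    {p: v / total for p, v in pair_post.items()})

    Because the number of partitions grows as the Bell numbers, this brute force is feasible only for roughly a dozen items, which is exactly why a faster exact algorithm (practical up to about 25 items, as stated above) is needed.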
  • Seppänen, Henri (Helsingin yliopisto, 2015)
    Tethers are key elements in the electric solar wind sail (E-sail). In this thesis I claim that E-sail tether manufacturing on the km scale is possible. The E-sail is a space propulsion method for interplanetary missions. It uses long, thin, conductive tethers to create thrust from the solar wind. Based on simulations, a full-scale E-sail using one hundred 20 km long tethers could create a continuous 1 N thrust. Compared to state-of-the-art ion engines, the proposed E-sail produces 10-100 times more specific impulse over the device lifetime. The E-sail is estimated to lower the costs of interplanetary missions by reducing the payload mass needed to launch to orbit and by shortening the travel time. Manufacturing is an important technical challenge for the E-sail. A multifilament tether structure is needed to provide micrometeoroid tolerance to the tether. To address this challenge, we combined an industrial ultrasonic wire bonder and a custom-built tether factory for tether production. A customized 3-wire bonding wedge enabled 4-wire multifilament tether manufacturing. The tether comprises aluminum wires 25 and 50 µm in diameter (Ø) that are ultrasonically welded together. The main result of this thesis is that we showed the feasibility of large-scale device manufacture by producing a continuous 1.04 km long multifilament tether comprising 90 704 wire-to-wire bonds. The measured bonding yield of the manufacture was 99.9%. Wire-to-wire bond pull strength was measured in a separate test on a 97 m long tether produced subsequent to the 1 km tether production. The maximum sustainable pull force of the tether bonds should exceed the estimated 50 mN centrifugal force of the spinning full-scale E-sail. The measured average maximum sustainable pull force of 252 bonds along the 97 m test tether was (99 ± 8) mN with a minimum recorded value of 80 mN. This result shows that E-sail tether production on the km scale is possible and thus supports the main claim of this thesis. Before this PhD project, no E-sail tether existed. The development of tether production and the results achieved bring the implementation of the most important E-sail component into the practical engineering realm and thus significantly advance E-sail development. The produced 1 km tether was the most important objective of the ESAIL EU FP7 framework project.
  • R. Labafzadeh, Sara (Helsingin yliopisto, 2015)
    Worldwide research is focused on the use of renewable and biodegradable raw materials due to the limited existing quantities of fossil supplies and the environmental degradation caused by global warming. Cellulose, derived from natural resources such as wood, annual plants and microbes, represents the most abundant renewable polymeric material on earth. Due to its low cost and functional versatility, cellulose has been a key feedstock for the production of chemicals with various properties and applications over the past century. It has found a wide range of applications in food, printing, cosmetics, pharmacy, therapeutics, paper making and in the textile industry. This partly crystalline polymer has not yet reached its full application potential due to its essential insolubility in most common solvents. Many investigations focus on the development of novel media for efficient and economically feasible functionalization of cellulose. The chemical modification of cellulose overcomes this obstacle and offers considerable opportunities for preparing cellulose-based polymeric materials. The modification can adjust the properties of the macromolecule for different purposes and meet environmental requirements by using green reagents and recyclable solvent systems. The synthesis of new cellulose-based polymers, their thorough characterization and increasing the usefulness of cellulose by altering its properties have been of growing research interest for the past few years. The objective of this research was to investigate new paths for the preparation of cellulose-based materials with a variety of structural features to obtain advanced materials suitable for different applications. Most of the research focused purely on the synthesis of cellulose derivatives in new and economically feasible solvent systems, but it also has general relevance for the material properties of the obtained derivatives. In addition, the potential application of synthesized cellulose derivatives as barrier films for packaging was investigated. Highly substituted cellulose esters, carbamates and carbonates were prepared using various recyclable reaction solvents. Biomaterials with the potential for use in the packaging sector should provide good mechanical properties, in addition to good barrier properties for oxygen and water vapour. Some derivatives showed good barrier properties, making them promising for packaging applications.
  • Tonttila, Juha (Helsingin yliopisto, 2015)
    Clouds, aerosols and the interactions between them are some of the most important uncertainties in climate modelling. The scales of spatial variability related to clouds are generally too small to be resolved using a typical climate model grid resolution. This work comprises studies of the small-scale variability of the vertical wind component, which significantly contributes to the process of cloud droplet formation. In addition, more elaborate methods for describing the small-scale variability of cloud properties in climate models are developed. The key questions that are investigated include: 1) What are the statistical properties of the turbulent vertical wind variability in the boundary layer and can they be represented accurately by atmospheric models? 2) How does parameterizing the small-scale variability in cloud microphysical processes affect the simulated cloud properties in climate models? 3) How does accounting for the small-scale variability in cloud properties affect the model-based estimates of the aerosol indirect radiative effects? The most important tool used in this work was the ECHAM5-HAM2 aerosol-climate model. The model simulates not only the atmospheric circulation and thermodynamics, but also the global distribution of aerosols and the physical processes between particles that affect the aerosol particle population. This allows the model to represent the interactions between clouds and aerosols. In addition, parts of this work also make use of measurement data based on remote sensing methods as well as high-resolution output from a numerical weather prediction model. The results show that the small-scale variability of the vertical wind associated with cloud droplet formation must be parameterized even in models with relatively high grid resolution. This especially highlights the importance of such methods for lower-resolution climate models. The variability of vertical wind can be described using a probability density function (PDF), the shape of which may vary significantly depending on the atmospheric conditions. The intricacies of the PDF include many uncertainties which can only be reduced by more extensive observations. With a simplified representation of the vertical velocity PDF, a new version of the climate model is constructed in this work, which can be used to study the climate effects due to the small-scale variability in vertical wind and clouds. It is noted that earlier methods that try to account for the variability in vertical velocity and cloud formation are somewhat insufficient. More attention should be paid to treating the small-scale variability self-consistently for entire chains of processes rather than separately for individual processes. This was accomplished in this work with the newly developed method, comprising the chain of processes from cloud formation to radiative transfer. The new method has a strong impact on the number of cloud droplets and drizzle formation as compared to the default model version, where the small-scale variability of clouds is not as accurately accounted for. Moreover, the response of the model-simulated cloud properties to anthropogenic changes in aerosol emissions is found to be considerably weaker in the new model version than in the default model version. In effect, when compared with the default model version, the aerosol indirect radiative effect estimated with the new model version is closer to the best observation-based estimate. The results of this work contribute to improving our understanding of aerosol-cloud interactions and to guiding the work towards further reducing the uncertainties related to modelling clouds and climate.
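    As an illustration of the PDF-based treatment described above (a generic sketch, not the specific ECHAM5-HAM2 formulation), the grid-mean number of activated cloud droplets can be written as an integral of the activation parameterization over an assumed vertical-velocity PDF, here taken to be Gaussian:

        \overline{N_{\mathrm{d}}} = \int_{0}^{\infty} N_{\mathrm{d}}(w)\, P(w)\, \mathrm{d}w,
        \qquad
        P(w) = \frac{1}{\sqrt{2\pi}\,\sigma_w} \exp\!\left[-\frac{(w - \overline{w})^2}{2\sigma_w^2}\right]

    where N_d(w) is the droplet number activated at updraught speed w, and \overline{w} and \sigma_w are the grid-scale mean and standard deviation of the vertical velocity.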
  • Olenius, Tinja (Helsingin yliopisto, 2015)
    Formation of aerosol particles from condensable vapors is a ubiquitous phenomenon in the atmosphere. Aerosols can affect regional and global climate, as well as visibility and human health. The work of this thesis contributes to the numerous efforts made to build understanding of atmospheric particle formation mechanisms. The focus is on the first molecular-level steps, where clustering of individual gas-phase molecules initiates the process, and the applied method is dynamic cluster population modeling. Sets of sub-2 nm molecular clusters are simulated in conditions relevant to the atmosphere or laboratory, considering vapor production, external sinks for clusters and vapors, cluster collision and evaporation processes, and in some cases also ionization and recombination by generic ionizing species. Evaporation rates are calculated from the cluster formation free energies computed with quantum chemical methods. As sulfuric acid has been shown to be the key component in particle formation in most boundary layer locations, the majority of the work presented here concentrates on simulating sulfuric acid-containing clusters in the presence of potentially enhancing species, namely ammonia and amines. In laboratory experiments, these base compounds have been found to be capable of enhancing sulfuric acid-driven particle formation to produce formation rates around the magnitude observed in the atmosphere. This result is reproduced by the cluster model. In this work, the performance of the modeling tools is validated against experimental data also by comparing simulated concentrations of charged sulfuric acid-ammonia clusters to those measured with a mass spectrometer in a chamber experiment. Examination of clustering pathways in simulated sulfuric acid-ammonia and sulfuric acid-dimethylamine systems shows that the clustering mechanisms and the role of ions may be very different depending on the identity of the base. In addition to predictions related to cluster formation from different precursor vapors, the model is applied to study the effects of varying conditions on the qualitative behavior of a cluster population and quantities that have been deduced from measured cluster concentrations. It is demonstrated that the composition of the critical cluster corresponding to the maximum free energy along the growth pathway cannot be reliably determined from cluster formation rates by commonly used methods. Simulations performed using a simple model substance show that cluster growth rates determined from the fluxes between subsequent cluster sizes are likely to differ from the growth rates deduced from the time evolution of the concentrations as in experiments, with the difference depending on the properties of the substance as well as ambient conditions. Finally, the effect of hydration and base molecules on sulfuric acid diffusion measurement is assessed by mimicking an experimental setup. Applications of cluster population simulations are diverse, and the development of these types of modeling tools provides useful additions to the palette of theoretical approaches to probe clustering phenomena.
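    The cluster population dynamics summarized above are typically written as birth-death equations for the cluster concentrations; a generic sketch of the standard form (with the thesis-specific ionization and recombination terms omitted) is:

        \frac{\mathrm{d}c_i}{\mathrm{d}t} =
        \tfrac{1}{2}\sum_{j<i} \beta_{j,\,i-j}\, c_j\, c_{i-j}
        + \sum_{j} \gamma_{(i+j)\to i}\, c_{i+j}
        - c_i \sum_{j} \beta_{i,j}\, c_j
        - \tfrac{1}{2}\, c_i \sum_{j<i} \gamma_{i\to j}
        + Q_i - S_i\, c_i

    where c_i is the concentration of cluster i, \beta are collision rate coefficients, \gamma are evaporation rates (here obtained from quantum chemical formation free energies), Q_i is an external source and S_i an external sink; the factors of 1/2 avoid double counting of collision and fragmentation channels.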
  • Sibaouih, Ahlam (Helsingin yliopisto, 2015)
    Catalytic transformation of carbon dioxide into useful organic compounds has attracted much attention due to its economic and environmental benefits. In addition, other reasons are also taken into account, such as the possible utilization of CO2 as a renewable source chemical and the growing concern about the greenhouse effect. CO2 is an abundant, cheap, and safe C1 building block in organic synthesis. However, due to the inert nature of CO2, efficient catalytic processes for its chemical fixation remain a significant challenge. In this work, we have studied a possible pathway for practical utilization of CO2. The reaction of CO2 with epoxides, giving cyclic carbonates, has been investigated. New catalyst systems based on cobalt capable of catalyzing the chemical transformation of carbon dioxide are described in detail. Oxygen is a cheap, readily available and environmentally friendly natural oxidant. The catalytic activation of molecular oxygen has great potential in a variety of applications. Catalysis and reactions which are based on molecular oxygen can also be considered to be ecologically benign processes. Moreover, catalytic reactions in water are highly desirable in terms of green chemistry. In this context, our purpose was to develop environmentally friendly catalytic systems suitable for the oxidation of alcohols with molecular oxygen in water solution. In this part of the work, efficient catalysts based on copper complexes have been synthesized and studied in the presence of TEMPO for the oxidation of benzyl and aliphatic alcohols with molecular oxygen in aqueous and nonaqueous medium.
  • Hildén, Timo (Helsingin yliopisto, 2015)
    Gas Electron Multiplier (GEM) detectors are a special type of position-sensitive gas-filled detector used in several particle physics experiments. They are capable of sub-millimeter spatial resolution and an energy resolution (FWHM) of the order of 20%. GEM detectors can operate at rates up to 50 kHz/mm2, withstand radiation excellently and can be manufactured up to square meter sizes. This thesis describes the Quality Assurance (QA) methods used in the assembly of 50 GEM detectors for the TOTEM T2 telescope at the LHC at CERN. Further development of the optical QA methods used in T2 detector assembly led to the development of a unique large-area scanning system capable of sub-µm resolution. The system, its capability and the software used in the analysis of the scans are described in detail. A correlation was found between one of the main characteristics of the detector, the gas gain, and the results of the optical QA method. It was shown that a qualitative estimate of the gain can be made based on accurate optical measurement of the microscopic features of the detector components. The ability to predict the performance of individual detector components is extremely useful in large-scale production of GEM-based detectors.
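    As an illustration of the kind of gain-versus-optics correlation mentioned above (a hypothetical sketch, not the thesis's actual analysis software), per-foil optical summaries such as mean hole diameters could be correlated with measured gains as follows:

        import numpy as np

        def gain_vs_optics(hole_diameter_um, gas_gain):
            # Correlate an optically measured feature (e.g. mean GEM hole diameter
            # per foil) with the measured gas gain, and fit a linear model that
            # gives a qualitative gain estimate from the optical scan alone.
            # Both inputs are 1-D arrays of equal length (hypothetical data).
            r = np.corrcoef(hole_diameter_um, gas_gain)[0, 1]
            slope, intercept = np.polyfit(hole_diameter_um, gas_gain, 1)
            predict = lambda d: slope * d + intercept
            return r, predict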
  • Snellman, Jan (Helsingin yliopisto, 2015)
    The mathematical description of turbulence is one of the greatest unresolved problems of modern physics. Many targets of astrophysical research, such as stellar convection zones and accretion discs, are very turbulent. In particular, understanding stellar convection zones is important for the theory of stellar evolution. Therefore, it is necessary to use approximate descriptions for turbulence while modelling these objects. One approximate method for describing turbulence is to divide the quantities under study into mean and fluctuating parts, the latter of which represent the small-scale changes present in turbulence. This approach is known as the Reynolds decomposition, which makes it possible to derive equations for the mean quantities. The equations obtained depend on correlations of the fluctuating quantities, such as the correlations of the fluctuating velocity components known as the Reynolds stresses, and the turbulent heat and passive scalar fluxes. A mathematically precise way of handling these correlations is to derive equations also for them, but the resulting equations will depend on new, higher order correlations. If one derives equations for these new correlations, a new set of even higher order correlations is involved, and the equation system will not be closed. This is called the closure problem. The closure problem can be circumvented by using approximations known as closure models, which work by replacing the higher order correlations with lower order ones, thereby creating a closed system. Second order closure models, in which the third order correlations have been replaced by relaxation terms of second order, are studied in this Thesis by comparing their results with those of direct numerical simulations (DNS). The two closure models studied are the minimal tau approximation (MTA) and the isotropising variable relaxation time (IVRT) closure. The physical phenomena to which the closures were applied included homogeneous isotropically forced turbulence with rotation and shear, compressible as well as homogeneous Boussinesq convection, decaying turbulence, and passive scalar transport. In the case of homogeneous isotropic turbulence it was found that MTA is capable of reproducing the DNS results with Strouhal numbers of about unity. It was also found that the Reynolds stress components contributing to angular momentum transport in accretion discs can change sign depending on the rotation rate, which was also seen in studies of compressible convection, meaning that convection can potentially contribute to the accretion of matter. Decaying turbulence studies indicated that the relaxation time scales occurring in the relaxation closures tend to constant values at high Reynolds numbers, and this was also observed when studying passive scalar transport. However, in studies concerning Boussinesq convection no asymptotic behaviour was found as a function of the Rayleigh and Taylor numbers. The correspondence of the closure models to direct numerical simulations is found to be generally achievable, but with varying quality depending on the physical situation. Given the asymptotic behaviour of the optimum closure parameters for forced turbulence, they can be considered universal in this case. For rotating Boussinesq convection the same conclusion cannot be drawn with respect to the Rayleigh and Taylor numbers.
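    For reference, the closure idea summarized above can be written compactly (a generic sketch of the standard formulation, not the exact equations of the thesis). With the Reynolds decomposition and the Reynolds stress tensor

        U_i = \overline{U}_i + u_i, \qquad R_{ij} \equiv \overline{u_i u_j},

    the minimal tau approximation replaces the third-order correlations in the evolution equation for R_{ij} by a relaxation term,

        \frac{\partial R_{ij}}{\partial t} = (\text{second-order terms}) - \frac{R_{ij}}{\tau},

    where the relaxation time \tau is often quoted via a Strouhal number, e.g. \mathrm{St} = \tau\, u_{\mathrm{rms}}\, k_{\mathrm{f}} with k_{\mathrm{f}} the forcing wavenumber; this is the quantity found above to be of order unity in the comparison with DNS.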
  • Ilinov, Andrey (Helsingin yliopisto, 2015)
    Nanotechnology has emerged as a major field during the last few decades. The possibility of creating elements with sizes in the nanometer range provides new opportunities for medical applications, various sensors and detectors, and composite materials technologies. However, at the nanoscale the basic physical properties may change unexpectedly, including chemical, mechanical, optical and electronic properties. There is still no clear understanding of all possible consequences of miniaturization on the behavior of nanostructures. This thesis is focused on the analysis of the mechanical and structural (including sputtering under irradiation) properties of nanorods. By nanorods we mean structures like beams or rods, with a cross-sectional diameter measured in nanometers and a length several times larger than the diameter. At such sizes it becomes possible to simulate the structures atom by atom using the molecular dynamics (MD) method. In the first part of the thesis, we analyze the elastic properties of Si nanorods: how the variation in size may change the elastic moduli, and the effects of oxidation and intrinsic stresses. We also check the validity of the classical continuum mechanics approach by modeling the same nanorods with the finite element method (FEM). In the second part we investigate sputtering from Au nanorods under ion irradiation. Recent experiments have shown that there is a large enhancement of sputtering yields from Au nanorods compared with those from a flat surface. The yields can be as high as 1000 per individual impact. MD gives us an opportunity to analyze the sputtering process with femtosecond resolution, which is impossible with any of the existing experimental methods. We find that an explosive ejection of nanoclusters is the main factor causing such large sputtering yields.
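    As a concrete example of how elastic moduli of the kind discussed above are commonly extracted from MD data (a generic sketch, not the thesis's actual analysis code), the Young's modulus can be estimated from the slope of the small-strain part of a simulated uniaxial stress-strain curve:

        import numpy as np

        def youngs_modulus(strain, stress_gpa, max_strain=0.02):
            # Estimate Young's modulus (in GPa) from a simulated uniaxial
            # stress-strain curve by a linear fit in the small-strain regime,
            # where the response is (nearly) elastic.
            strain = np.asarray(strain)
            stress = np.asarray(stress_gpa)
            mask = strain <= max_strain
            slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
            return slope  # E = d(stress)/d(strain)

    The cut-off max_strain is an assumption of this sketch; in practice it would be chosen from the linearity of the simulated data.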
  • Mäkelä, Hanna (Helsingin yliopisto, 2015)
    Roughly three-quarters of Finland's area is covered by forests. Any climatological changes influencing the danger of forest fire are important to evaluate and consider. The objective of this thesis is to study the long-term past and future changes in climatically-driven forest fire danger in Finland based on the summertime mean temperature and precipitation sum. The work is composed of two parts. In the first part, long-term gridded datasets of observed monthly mean temperatures and precipitation sums for Finland are developed. In the second part, these gridded datasets are used together with calculated values of the Finnish Forest Fire Index and probabilistic climate model simulations (from the ENSEMBLES project) to estimate the number of forest fire danger days during the summer season (June-August). The long-term variation of Finland's climatological forest fire danger is studied roughly 100 years into the past and into the future. One of the main achievements of this thesis is that it explores the possibility of quantifying past and future fire-weather using a relatively limited database with regard to both weather variables and their spatial coverage. This enables a wider exploitation of scattered data series from earlier times and can also provide opportunities for projections using data with a low resolution. The climatological forest fire danger in Finland varies considerably from year to year. There have not been any significant increasing or decreasing trends in the number of fire danger days during the 20th century (1908-2011). On average, the highest probability of forest fire danger occurs in June and July, when a fire hazard exists on roughly 35-40% of all days. The intra-seasonal variation of fire danger has been large enough to enable the occurrence of conflagrations even though the fire danger for the season as a whole has been at an average level. Despite the projected increase in average summertime precipitation, the Finnish climate will provide more favourable conditions for the occurrence of forest fires in the future than today. This is due to increases in the mean temperature. The probability of an increase in the number of fire danger days is 56-75% in the near future (2010-2029) and 71-91% by the end of the current century (2080-2099), depending on the region. This would indicate an increase of 1-2 and 7-10 days, respectively. It is thus clearly important to further develop existing tools for the forecasting of fire danger, and to maintain the capabilities of the fire prevention, surveillance and suppression services. Future projections of all relevant meteorological variables (temperature, precipitation, humidity, evaporation and wind speed) at higher temporal and spatial resolutions, in addition to information on the type of the summertime precipitation and the length of the dry periods, would notably improve the assessment of the future climatological forest fire danger.
  • Fager-Jokela, Erika (Helsingin yliopisto, 2015)
    The Pauson-Khand reaction (PKR) is a very efficient method of synthesising cyclopentenones. In the reaction, an alkene, an alkyne and carbon monoxide combine to form a cyclopentenone ring, mediated or catalysed by a transition metal complex in one pot. In the cyclisation, three new carbon-carbon bonds are created. This thesis concentrates on the intermolecular variant of the cobalt(0)-mediated Pauson-Khand reaction. The development of the intermolecular cyclisation has been slow over the past decade, due to the lack of reactive alkenes and the lack of regioselectivity for substituted alkynes. Despite the publication of numerous studies, the electronic effects involved are not yet completely understood. In this study, our purpose was to gain a greater understanding of the interplay between steric and electronic factors in determining the regioselectivity of the Pauson-Khand reaction. The electronic effects guiding the alkyne regioselectivity of the Pauson-Khand reaction were studied with both conjugated aromatic alkynes and non-conjugated propargylic alkynes. It was demonstrated that, in the absence of steric effects, alkyne polarisation dictates the regiochemical selectivity of the PKR. In conjugated systems, such as diarylalkynes, Hammett values can be utilised to estimate the polarisation of the alkyne. With non-conjugated alkynes, on the other hand, the electronegativity of the substituent group determines the major regioisomer, as the charge differences are created via an inductive effect. In addition to investigating regioselectivity, additive-free methods for promoting the Pauson-Khand reaction were developed and utilised, and the Pauson-Khand reaction was applied in the synthesis of an estrone E-ring extension. When microwaves (MW) were used for promotion, heat was transferred effectively to the reaction, saving energy and time without affecting the selectivity of the reaction.
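    For reference, the Hammett substituent constants mentioned above come from the standard linear free-energy relationship (given here in its textbook form, not as a thesis-specific model):

        \log\!\left(\frac{K_X}{K_H}\right) = \sigma_X\, \rho

    where \sigma_X is the substituent constant and \rho the reaction constant; for a diarylalkyne, the difference between the \sigma values of the two aryl substituents can be taken as a measure of the polarisation of the triple bond.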
  • Ruusuvuori, Kai (Helsingin yliopisto, 2015)
    New particle formation is an important process in the atmosphere. As ions are constantly produced in the atmosphere, the behaviour and role of charged particles in atmospheric processes needs to be understood. In order to gain insight into the role of charge in atmospheric new particle formation, the electron structure of the molecules taking part in this process needs to be taken into account using quantum chemical methods. Quantum chemical density functional theory was employed in an effort to reproduce an experimentally observed sign preference. While computational results on molecular structures agreed well with results obtained by other groups, the computationally obtained sign preference was opposite to the experimentally observed one. Possible reasons for this discrepancy were found in both the computational results and the experiments. Simulations of clusters containing water, pyridine, ammonia and a proton were performed using density functional theory. The clusters were found to form a core consisting of an ammonium ion and water, with the pyridine molecule bonding to the ammonium ion. However, the solvation of the ammonium ion was observed to affect the possibility of proton transfer. Calculations of proton affinities and gas phase basicities of several compounds, which can be considered as candidates to form atmospheric ions in the boreal forest, were performed. The generally small differences between the calculated gas phase basicities and proton affinities implied only small entropy changes in the protonation reaction. Comparison with experiments resulted in the conclusion that the largest experimentally observed peaks of atmospheric ions most likely corresponded to pyridine and substituted pyridines. Furthermore, a combination of low proton affinity and high observed cation concentration was concluded to imply a high concentration of neutral parent molecules in the atmosphere. A combination of quantum chemistry and a code for modelling cluster dynamics was employed to study the use of protonated acetone monomers and dimers as the ionization reagent in a chemical ionization atmospheric pressure interface time-of-flight mass spectrometer (CI-APi-TOF). The results showed that the ionization reagents successfully charged dimethylamine monomers. However, there were discrepancies between the simulated and measured cluster distributions. Possible reasons for this discrepancy were found in both the measurements and the modelling parameters.
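    To make the reasoning about entropy explicit, recall the textbook definitions for the protonation reaction B + H+ -> BH+ (standard relations, not thesis-specific):

        \mathrm{PA}(\mathrm{B}) = -\Delta H_{\mathrm{prot}}, \qquad
        \mathrm{GB}(\mathrm{B}) = -\Delta G_{\mathrm{prot}} = \mathrm{PA}(\mathrm{B}) + T\Delta S_{\mathrm{prot}}

    so the difference GB - PA equals T\Delta S_{\mathrm{prot}}, and a small difference directly indicates a small entropy change in the protonation reaction.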
  • Tala, Suvi (Helsingin yliopisto, 2015)
    A central part of the enculturation of new scientists in the natural sciences takes place in poorly understood apprentice-master settings: potential expert researchers learn about success in science by doing science as members of research groups. What makes learning in such settings challenging is that a central part of the expertise they are attempting to achieve is tacit: the ideas guiding scientific knowledge-building are embodied in its practices and are nowadays rarely articulated. This interdisciplinary study develops a naturalistic view of scientific knowledge construction and justification and of what is learned in those processes, in close cooperation with practitioners and by reflection on their actual practices. Such a viewpoint guides the development of expertise education for scientists. Another goal of the study is to encourage science education at every level to reflect as much as possible the epistemological aspects of doing science that practising scientists can also agree upon. The theoretical part of the dissertation focuses on those features of experimentation and modelling that the viewpoints of scientific practices suggest are essential but which are not addressed in the traditional views of science studies and, as a consequence, in science education. The theoretical ideas are tested and deepened in the empirical part, which concerns nanoscience. The contextualized method developed here supports scientists in reflecting on their shared research practices and in articulating those reflections in the questionnaire and interview. Contrary to traditional views, physical knowledge is understood to progress through a technoscientific design process, aiming at tightening the mutually developing conceptual and material control over the physical world. The products of the design process are both an understanding of scientific phenomena and the means to study them; this means constructing and controlling a laboratory phenomenon, created in the laboratory in the same design process that produces the understanding of its functioning. These notions suggest a revision of what exactly is achieved by science and on what kind of basis, which indeed moves the epistemological views of science towards a viewpoint recognizable to its practitioners. Nowadays, technoscientific design is increasingly embodied in simulative modelling, mediating between the experimental reality and its theoretical framework. Such modelling is neither a part nor a continuation of theorizing, as most of the literature considers it, nor is it merely a bare means to analyse experimental data, but a partly independent and flexible method of generating our understanding of the world. Because the rapid development of modelling technology alters the evidential basis of science, a new kind of expertise is needed. The entry to physical reality provided by generative modelling differs epistemologically and cognitively from traditional methodological approaches. The expertise developed in such modelling provides scientists with new kinds of possibilities. For young scientists' success, and for scientific and technological progress, this expertise is worth understanding.
  • Rusak, Stanislav (Helsingin yliopisto, 2015)
    Grounded in the increasingly accurate astronomical observations of the past few decades, the study of cosmology has produced a comprehensive account of the history of the universe. This account is contained in the Hot Big Bang cosmological model which describes the expansion of a hot and dense state to become the universe as we observe it today. While the Big Bang model has been extremely successful in being able to account for a wide array of cosmological data, it leaves unexplained the special initial conditions that are required in order to produce the universe we find ourselves in. Such initial conditions are, however, a natural consequence of a period of quasi-exponential expansion of the universe known as inflation. Such a period of expansion can be realized if the universe is dominated by a scalar field - the inflation - which is slowly rolling down the slope of its potential. Inflation also provides a natural mechanism for the production of primordial seeds of structure in the universe through the growth of the quantum fluctuations in the inflaton field to super-horizon scales. Together inflation and the subsequent Big Bang evolution form the back bone of modern cosmology. However, the transition between the inflationary epoch and the thermal state which characterizes the initial conditions of the Big Bang evolution is not well understood. This process - dubbed reheating - involves the decay of the inflaton field into the particles of the Standard Model of particle physics, and may be highly non-trivial, with non-perturbative resonant processes playing a major role. Spectator fields - light scalar fields which are subdominant during inflation - may also play an important role during this epoch. The aim of this thesis is to showcase aspects of non-perturbative decay of scalar fields after inflation, focusing in particular on the role of spectator fields. This includes the modulation of the non-perturbative decay of the inflaton by a spectator field, the non-perturbative decay of a spectator into the Standard Model Higgs, as well as the non-perturbative decay of the Higgs field itself.