Browsing by Title


Now showing items 320-339 of 859
  • Långvik, Miklos (Helsingin yliopisto, 2011)
    In this thesis, the possibility of extending the Quantization Condition of Dirac for Magnetic Monopoles to noncommutative space-time is investigated. The three publications that this thesis is based on are all directly linked to this investigation. Noncommutative solitons have been found within certain noncommutative field theories, but it is not known whether they possess only topological charge or also magnetic charge. This is a consequence of the fact that the noncommutative topological charge need not coincide with the noncommutative magnetic charge, although the two are equivalent in the commutative context. The aim of this work is to begin to fill this gap in knowledge. The method of investigation is perturbative and leaves open the question of whether a nonperturbative source for the magnetic monopole can be constructed, although some aspects of such a generalization are indicated. The main result is that while the noncommutative Aharonov-Bohm effect can be formulated in a gauge-invariant way, the quantization condition of Dirac is not satisfied in the case of a perturbative source for the point-like magnetic monopole.
  • Salmela, Antti (Helsingin yliopisto, 2005)
  • Heikkinen, Aatos (Helsingin yliopisto, 2009)
    This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, cluster computing technology, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled simulation-to-data-analysis cycle. Typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, a full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
  • Kalliokoski, Matti (Helsingin yliopisto, 2012)
    Various experiments and measurements in the natural sciences have given us a great body of information about the structure of matter and the evolution of the Universe. Though the increase in knowledge in these fields has been remarkable, further studies are required to answer the questions that have arisen. In the fields of nuclear structure and astrophysics, central questions concern the processes that lead to the production of the chemical elements. It is assumed that the elements heavier than iron originate from collapsing stars or stellar collisions, the process depending on the nuclear forces and symmetries in the rare isotopes. The Nuclear Structure, Astrophysics and Reactions (NUSTAR) collaboration at the Facility for Antiproton and Ion Research (FAIR) will utilize the intense secondary beams from the Superconducting Fragment Separator (Super-FRS) to investigate the processes leading to the production of the elements. The secondary beams are produced by accelerating ions up to uranium and colliding them into a target, and then by steering the secondary particles produced in the collision through the Super-FRS. The Super-FRS separates the desired isotopes from the other secondary particles and steers them to the experiments. The first part of this thesis describes a detector that can be used as a beam monitoring detector in the Super-FRS. The detector is a novel concept, combining the best parts of two gas-filled detector types, the Time Projection Chamber (TPC) and the Gas Electron Multiplier (GEM). The TPC part is based on the knowledge obtained from the TPC detectors in use in the current fragment separator of the GSI facility, the predecessor of the FAIR facility. The addition of a GEM detector as an amplification stage, together with adjustment of the amplification, reduces the ion feedback that impairs the resolution and the efficiency of standard TPC detectors. The second part focuses on the GEM amplification stage.
In the harsh environment of the Super-FRS, the detectors to be installed have to be well defined and manufactured from high-quality components. For these reasons, an optical scanning system was developed to support the quality assurance chain needed in the manufacturing of GEM detectors. The system was also used in efforts to understand the processes that lead to possible breakdown of the detectors. In addition to the breakdown studies, work on recovering broken detector components was initiated.
  • Kuparinen, Anna (Helsingin yliopisto, 2007)
    The future use of genetically modified (GM) plants in food, feed and biomass production requires a careful consideration of possible risks related to the unintended spread of transgenes into new habitats. This may occur via introgression of the transgene into conventional genotypes, due to cross-pollination, and via the invasion of GM plants into new habitats. Assessment of the possible environmental impacts of GM plants requires estimation of the level of gene flow from a GM population. Furthermore, management measures for reducing gene flow from GM populations are needed in order to prevent possible unwanted effects of transgenes on ecosystems. This work develops modelling tools for estimating gene flow from GM plant populations in boreal environments and for investigating the mechanisms of the gene flow process. To describe the spatial dimensions of the gene flow, dispersal models are developed for the local- and regional-scale spread of pollen grains and seeds, with special emphasis on wind dispersal. This study provides tools for describing cross-pollination between GM and conventional populations and for estimating the levels of transgenic contamination of the conventional crops. For perennial populations, a modelling framework describing the dynamics of plants and genotypes is developed, in order to estimate the gene flow process over a sequence of years. The dispersal of airborne pollen and seeds cannot be easily controlled, and small amounts of these particles are likely to disperse over long distances. Wind dispersal processes are highly stochastic due to variation in atmospheric conditions, so that there may be considerable variation between individual dispersal patterns. This, in turn, is reflected in the large amount of variation in annual levels of cross-pollination between GM and conventional populations.
Even though land-use practices have effects on the average levels of cross-pollination between GM and conventional fields, the level of transgenic contamination of a conventional crop remains highly stochastic. The demographic effects of a transgene have impacts on the establishment of transgenic plants amongst conventional genotypes of the same species. If the transgene gives a plant a considerable fitness advantage in comparison to conventional genotypes, the spread of transgenes to conventional populations can be strongly increased. In such cases, dominance of the transgene considerably increases gene flow from GM to conventional populations, due to the enhanced fitness of heterozygous hybrids. The fitness of GM plants in conventional populations can be reduced by linking the selectively favoured primary transgene to a disfavoured mitigation transgene. Recombination between these transgenes is a major risk related to this technique, especially because it tends to take place amongst the conventional genotypes and thus promotes the establishment of invasive transgenic plants in conventional populations.
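The stochastic, long-tailed nature of wind dispersal described above can be illustrated with a small simulation. This is a minimal sketch, not the thesis's actual dispersal models: the two kernels, the mean distance, and the 100 m threshold are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n_seeds = 100_000
mean_dist = 10.0  # illustrative mean dispersal distance, metres (assumption)

# Thin-tailed kernel: exponential decay of dispersal distance.
exp_dist = rng.exponential(mean_dist, n_seeds)

# Fat-tailed kernel: Pareto-like, often used to represent the rare
# long-distance dispersal events that are hard to control.
fat_dist = mean_dist * (rng.pareto(2.5, n_seeds) + 1.0)

for name, d in (("thin-tailed", exp_dist), ("fat-tailed", fat_dist)):
    print(f"{name}: median {np.median(d):5.1f} m, "
          f"fraction beyond 100 m: {np.mean(d > 100.0):.2%}")
```

Both kernels place most seeds near the source, yet the fat-tailed kernel sends orders of magnitude more seeds beyond 100 m, which is why small amounts of long-distance contamination are nearly unavoidable.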
  • Onkamo, Päivi (Helsingin yliopisto, 2002)
  • Heinonen, Jussi (Helsingin yliopisto, 2011)
    This study provides insights into the composition and origin of ferropicrite dikes (FeOtot = 13–17 wt. %; MgO = 13–19 wt. %) and associated meimechite, picrite, picrobasalt, and basalt dikes found at Vestfjella, western Dronning Maud Land, Antarctica. The dikes crosscut Jurassic Karoo continental flood basalts (CFB) that were emplaced during the early stages of the breakup of the Gondwana supercontinent ~180 Ma ago. Selected samples (31 overall from at least eleven dikes) were analyzed for their mineral chemical, major element, trace element, and Sr, Nd, Pb, and Os isotopic compositions. The studied samples can be divided into two geochemically distinct types: (1) The depleted type (24 samples from at least nine dikes) is relatively depleted in the most incompatible elements and exhibits isotopic characteristics (e.g., initial εNd of +4.8 to +8.3 and initial 187Os/188Os of 0.1256–0.1277 at 180 Ma) similar to those of mid-ocean ridge basalts (MORB); (2) The enriched type (7 samples from at least two dikes) exhibits relatively enriched incompatible element and isotopic characteristics (e.g., initial εNd of +1.8 to +3.6 and initial 187Os/188Os of 0.1401–0.1425 at 180 Ma) similar to those of oceanic island basalts. Both magma types have escaped significant contamination by the continental crust. The depleted type is related to the main phase of Karoo magmatism and originated as highly magnesian (MgO up to 25 wt. %) partial melts at high temperatures (mantle potential temperature >1600 °C) and pressures (~5–6 GPa) from a sublithospheric, water-bearing, depleted peridotite mantle source. The enriched type sampled pyroxene-bearing heterogeneities that can be traced down to either recycled oceanic crust or melt-metasomatized portions of the sublithospheric or lithospheric mantle. The source of the depleted type represents a sublithospheric end-member source for many Karoo lavas and has subsequently been sampled by the MORBs of the Indian Ocean. 
These observations, together with the purported high temperatures, indicate that the Karoo CFBs were formed in an extensive melting episode caused mainly by internal heating of the upper mantle beneath the Gondwana supercontinent. My research supports the view that ferropicritic melts can be generated in several ways: the relative Fe-enrichment of mantle partial melts is most readily achieved by (1) relatively low degree of partial melting, (2) high pressure of partial melting, and (3) melting of enriched source components (e.g., pyroxenite and metasomatized peridotite). Ferropicritic whole-rock compositions could also result from accumulation, secondary alteration, and fractional crystallization, however, and caution is required when addressing the parental magma composition.
  • Mäkinen, Jaakko (Helsingin yliopisto, 2000)
  • Makkonen, Teemu (Helsingin yliopisto, 2012)
    Innovation is commonly considered the engine of economic growth. However, the role of education and training has been a recurrent subject raised as the actual driver of regional development. Accordingly, the role of universities has been highlighted as a significant contributor to local economies. The empirical literature remains inconsistent on the causal relationships between these phenomena. At the heart of this discussion is the on-going debate about which indicators should be used to measure innovation, as there seems to be no single measure that could be claimed clearly superior. This brings the question of the possible interconnections between innovation indicators and regional economic development to the fore on different scales: European Union, national, regional and local. First, the sensitivity of different innovation indicators and indexes is analysed. Second, the impacts of innovation indicators on regional and economic development are investigated. Third, the proposed roles of education and training as the factors behind innovation and economic growth are put under scrutiny. Fourth, the role of universities in the local economy is studied. The analyses are mainly carried out with standard statistical methods, including principal component analysis and Granger causality tests, but the picture is also deepened with a semi-structured thematic interview case study. The data for the statistical analyses are constructed from official statistical databases and from a unique innovation count database compiled by VTT Technical Research Centre of Finland. The results show that great care is needed when choosing the indicators with which to measure regional innovation, as different measures produce highly divergent rankings. In the worst cases this can lead to non-robust messages, if the shortcomings of the different indicators are not taken into account when drawing policy conclusions. 
The results also show that in a geographical context the innovative (European and Finnish) regions are among the most economically developed. The links between continuing vocational training, innovation and economic development are manifest in a similar fashion. Still, although innovation is clearly linked to regional development, other socio-economic variables, workforce characteristics, and education in particular, seem to offer higher explanatory power for the success of regions. In fact, educational attainment is shown to Granger-cause economic development and innovative capacity, whereas the relationship between innovative capacity and economic development is bidirectional. Finally, in peripheral settings, Joensuu in this case, the impact of a university on the local economy is not as straightforward as in the case of well-to-do regions and top universities: there are evident mismatches between the needs of local business life and the research, teaching and entrepreneurial characteristics of the university and its staff and graduates. Still, when successful, university-industry collaboration has produced good experiences and beneficial cooperative projects in the locality. In conclusion, since the link between innovative capacity and actual innovative outputs is not straightforward, policies relying simply on increasing regional research and development expenditure are not guaranteed to succeed. Therefore, although there is no universal 'one-size-fits-all' policy, the strengthening of the educational base of the regions is highlighted here as a possible alternative for striving towards high levels of innovation and economic growth.
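The Granger causality tests mentioned in this abstract can be illustrated with a minimal sketch. The idea: series x "Granger-causes" y if lagged values of x improve the prediction of y beyond y's own lags, assessed with an F-test on restricted vs. unrestricted regressions. The synthetic data, coefficients, and lag choice below are invented for illustration and are unrelated to the thesis's actual data.

```python
import numpy as np

def granger_f(y, x, lag=1):
    """F-statistic testing whether lagged x improves the prediction of y
    beyond y's own lags (the core of a Granger causality test)."""
    n = len(y)
    Y = y[lag:]
    ylags = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    xlags = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))
    X_r = np.hstack([ones, ylags])           # restricted model: y lags only
    X_u = np.hstack([ones, ylags, xlags])    # unrestricted model: plus x lags
    rss = lambda X: float(np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2))
    rss_r, rss_u = rss(X_r), rss(X_u)
    df_den = len(Y) - X_u.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / df_den)

# Synthetic series in which x drives y with a one-step lag.
rng = np.random.default_rng(42)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

f_xy = granger_f(y, x)   # large F: lagged x helps predict y
f_yx = granger_f(x, y)   # small F: lagged y does not help predict x
print(f"F(x->y) = {f_xy:.1f}, F(y->x) = {f_yx:.1f}")
```

The asymmetry of the two F-statistics is what lets such tests distinguish, for example, whether educational attainment drives economic development or the other way around.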
  • Picken, Päivi (Helsingin yliopisto, 2007)
    In Finland, peat harvesting sites are utilized down almost to the mineral soil. In this situation the properties of the mineral subsoil are likely to have considerable influence on suitability for the various after-use forms. The aims of this study were to identify the chemical and physical properties of mineral subsoils that may limit the after-use of cut-over peatlands, to define a minimum practice for mineral subsoil studies and to describe the role of different geological areas. The future percentages of the different after-use forms were predicted, which also made it possible to predict carbon accumulation in this future situation. Mineral subsoils of 54 different peat production areas were studied. Their general features and grain size distributions were analysed. Other properties studied were pH, electrical conductivity, organic matter, water-soluble nutrients (P, NO3-N, NH4-N, S and Fe) and exchangeable nutrients (Ca, Mg and K). In some cases other elements were also analysed. In an additional case study, carbon accumulation effectiveness before the intervention was evaluated on three sites in the Oulu area (representing sites typically considered for peat production). Areas with relatively sulphur-rich mineral subsoil and pool-forming areas with very fine and compact mineral subsoil together covered approximately one fifth of all areas. These areas were unsuitable for commercial use. They were recommended, for example, for mire regeneration. Approximately another fifth of the areas included very coarse or very fine sediments. Commercial use of these areas would demand special techniques, such as using the remaining peat layer to compensate for properties missing from the mineral subsoil. One after-use form was seldom suitable for a whole released peat production area. Three typical distribution patterns (models) of different mineral subsoils within individual peatlands were found. 57% of the studied cut-over peatlands were well suited to forestry. 
In a conservative calculation, 26% of the areas were clearly suitable for agriculture, horticulture or energy crop production. If till without large boulders was included, the percentage of areas suitable for field crop production would be 42%. 9-14% of all areas were well suited to mire regeneration or bird sanctuaries, but all areas were considered possible for mire regeneration with correct techniques. A further 11% was recommended for mire regeneration to avoid disturbing the mineral subsoil, so in total 20-25% of the areas would be used for rewetting. High sulphur concentrations and acidity were typical of the areas below the highest shoreline of the ancient Litorina Sea and of the Lake Ladoga-Bothnian Bay zone. Differences related to nutrient status were also detected. In coarse sediments the natural nutrient concentrations were clearly higher in the Lake Ladoga-Bothnian Bay zone and in the areas of Svecokarelian schists and gneisses than in the granitoid area of central Finland and in the Archaean gneiss areas. Based on this study, the recommended minimum analyses for after-use planning were pH, sulphur content and the percentage of fine material (<0.06 mm). Nutrient capacity could be analysed using the natural concentrations of calcium, magnesium and potassium. Carbon accumulation scenarios were developed based on the land-use predictions. These scenarios were calculated for areas in peat production and areas released from peat production (59,300 ha + 15,671 ha). Carbon accumulation in the scenarios varied between 0.074 and 0.152 million t C a-1. In the three peatlands considered for peat production, the long-term carbon accumulation rates varied between 13 and 24 g C m-2 a-1. The natural annual carbon accumulation had been decreasing towards the time of possible intervention.
  • Ala-Mattila, Vesa (Helsingin yliopisto, 2011)
    The main results of this thesis show that a Patterson-Sullivan measure of a non-elementary geometrically finite Kleinian group can always be characterized using geometric covering and packing constructions. This means that if the standard covering and packing constructions are modified in a suitable way, one can use either one of them to construct a geometric measure which is identical to the Patterson-Sullivan measure. The main results generalize and modify results of D. Sullivan which show that one can sometimes use the standard covering construction to construct a suitable geometric measure and sometimes the standard packing construction. Sullivan has also shown that there are situations in which neither of the standard constructions can be used to construct the geometric measure, and situations in which both can. The main modifications of the standard constructions are based on certain geometric properties of limit sets of Kleinian groups first studied by P. Tukia. These geometric properties describe how closely the limit set of a given Kleinian group resembles Euclidean planes or spheres of varying dimension on small scales. The main idea is to express these geometric properties in a quantitative form which can be incorporated into the gauge functions used in the modified covering and packing constructions. Certain estimation results for general conformal measures of Kleinian groups play a crucial role in the proofs of the main results. These estimation results are generalizations and modifications of similar results considered by, among others, B. Stratmann, D. Sullivan, P. Tukia and S. Velani. The modified constructions are in general defined without reference to Kleinian groups, so they or their variants may prove useful in contexts other than that of Kleinian groups.
  • Siljander, Mika (Helsingin yliopisto, 2010)
    This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The studies can be classified into four main themes: (i) developing advanced geospatial databases. Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques. Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a world-wide scale; (iii) studying spatiotemporal forest cover change. Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; (iv) exploring predictive modelling techniques using geospatial data. In Paper (IV) human population occurrence and abundance in the Taita Hills highlands were predicted using the generalized additive modelling (GAM) technique. Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia. Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly were successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at the habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions appear to be one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes. 
In Paper (II) the spatiotemporal characteristics of Scopulini moth descriptions, diversity and distribution were analysed at a world-wide scale, and for the first time GIS techniques were used for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, sub-Saharan Africa being the hot spot of species diversity. However, the taxonomic effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken in co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV) human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimates were found to be superior to the existing MODIS burned area product (MCD45A1). However, the spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, some noticeable variation existed between the different predictive modelling techniques for fire occurrence prediction and burned area estimation.
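A binomial GLM of the general kind used for fire occurrence prediction can be sketched in a few lines. This is an illustrative toy, not the thesis's actual pipeline: the single "dryness" predictor, the true coefficients, and the sample size are invented, and the model is fitted by Newton-Raphson (iteratively reweighted least squares).

```python
import numpy as np

# Toy stand-in for fire-occurrence data: one environmental predictor.
rng = np.random.default_rng(1)
n = 500
dryness = rng.uniform(0.0, 1.0, n)
p_true = 1.0 / (1.0 + np.exp(-(4.0 * dryness - 2.0)))  # true logistic relation
fire = rng.binomial(1, p_true)                          # 1 = burned, 0 = not

# Fit a binomial GLM (logistic regression) by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), dryness])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))  # fitted burn probabilities
    W = mu * (1.0 - mu)                   # IRLS weights
    grad = X.T @ (fire - mu)              # score vector
    hess = X.T @ (X * W[:, None])         # Fisher information
    beta += np.linalg.solve(hess, grad)

print(f"intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")  # near the true (-2, 4)
```

The fitted probabilities can then be thresholded or summed over grid cells, which is also where the abstract's caveat applies: neighbouring cells burn together, so spatial autocorrelation must be accounted for before trusting the standard errors.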
  • Martikainen, Henri (Helsingin yliopisto, 2011)
    Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems in both the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele. 
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties introduced by the non-linearity of maximal truncations.
  • Korhonen, Janne (Helsingin yliopisto, 2014)
    This thesis studies exact exponential and fixed-parameter algorithms for hard graph and hypergraph problems. Specifically, we study two techniques that can be used in the development of such algorithms: (i) combinatorial decompositions of both the input instance and the solution, and (ii) evaluation of multilinear forms over semirings. In the first part of the thesis we develop new algorithms for graph and hypergraph problems based on techniques (i) and (ii). While both techniques are independently useful, the work presented in this part is largely characterised by their joint application. That is, combining results from different pieces of the decompositions often takes the form of a multilinear form evaluation task, and on the other hand, decompositions offer the basic structure for dynamic-programming-style algorithms for the evaluation of multilinear forms. As the main positive results of the first part, we give algorithms for three different problem families. First, we give a fast evaluation algorithm for linear forms defined by a disjointness matrix of small sets. This can be applied to obtain faster algorithms for counting maximum-weight objects of small size, such as k-paths in graphs. Second, we give a general framework for exponential-time algorithms for finding maximum-weight subgraphs of bounded tree-width, based on the theory of tree decompositions. Besides basic combinatorial problems, this framework has applications in learning Bayesian network structures. Third, we give a fixed-parameter algorithm for finding unbalanced vertex cuts, that is, vertex cuts that separate a small number of vertices from the rest of the graph. In the second part of the thesis we consider aspects of the complexity theory of linear forms over semirings, in order to better understand technique (ii). Specifically, we study how the presence of different algebraic catalysts in the ground semiring affects the complexity. 
As the main result, we show that there are linear forms that are easy to compute over semirings with idempotent addition, but difficult to compute over rings, unless the strong exponential time hypothesis fails.
  • Ferrantelli, Andrea (Helsingin yliopisto, 2010)
    In this thesis we consider the phenomenology of supergravity, and in particular the particle called the "gravitino". We begin with an introductory part, where we discuss the theories of inflation, supersymmetry and supergravity. Gravitino production is then investigated in detail, by considering the research papers included here. First we study the scattering of massive W bosons in the thermal bath of particles during the period of reheating. We show that the process generates non-trivial contributions in the cross section, which eventually lead to unitarity breaking above a certain scale. This happens because, in the annihilation diagram, the longitudinal degrees of freedom in the propagator of the gauge bosons disappear from the amplitude, by virtue of the supergravity vertex. Accordingly, the longitudinal polarizations of the on-shell W become strongly interacting in the high-energy limit. By studying the process with both gauge and mass eigenstates, it is shown that the inclusion of diagrams with off-shell scalars of the MSSM does not cancel the divergences. Next, we approach cosmology more closely, and study the decay of a scalar field S into gravitinos at the end of inflation. Once its mass is comparable to the Hubble rate, the field starts coherent oscillations about the minimum of its potential and decays perturbatively. We embed S in a model of gauge mediation with metastable vacua, where the hidden sector is of the O'Raifeartaigh type. First we discuss the dynamics of the field in the expanding background, then radiative corrections to the scalar potential V(S) and to the Kähler potential are calculated. Constraints on the reheating temperature are accordingly obtained, by demanding that the gravitinos thus produced account for the observed Dark Matter density. We consistently revise earlier results in the literature, and find that the gravitino number density and T_R are extremely sensitive to the parameters of the model. 
This means that it is easy to account for gravitino Dark Matter with an arbitrarily low reheating temperature.
  • Meinander, Kristoffer (Helsingin yliopisto, 2009)
    Thin film applications have become increasingly important in our search for multifunctional and economically viable technological solutions of the future. Thin film coatings can be used for a multitude of purposes, ranging from a basic enhancement of aesthetic attributes to the addition of a complex surface functionality. Anything from electronic or optical properties to an increased catalytic or biological activity can be added or enhanced by the deposition of a thin film, with a thickness of only a few atomic layers at best, on an already existing surface. Thin films offer both a means of saving materials and the possibility of improving properties without a critical enlargement of devices. Nanocluster deposition is a promising new method for the growth of structured thin films. Nanoclusters are small aggregates of atoms or molecules, ranging in size from only a few nanometers up to several hundred nanometers in diameter. Due to their large surface-to-volume ratio, and the confinement of atoms and electrons in all three dimensions, nanoclusters exhibit a wide variety of exotic properties that differ notably from those of both single atoms and bulk materials. Nanoclusters are a completely new type of building block for thin film deposition. As preformed entities, clusters provide a new means of tailoring the properties of thin films before their growth, simply by changing the size or composition of the clusters to be deposited. Unlike contemporary methods of thin film growth, which mainly rely on the deposition of single atoms, cluster deposition also allows for a more precise assembly of thin films, as the configuration of single atoms with respect to each other is already predetermined in clusters. 
Nanocluster deposition offers a possibility for coating virtually any material with a nanostructured thin film, and thereby for enhancing already existing physical or chemical properties, or adding some exciting new feature. A clearer understanding of cluster-surface interactions, and of the growth of thin films by cluster deposition, must, however, be achieved if clusters are to be used successfully in thin film technologies. Using a combination of experimental techniques and molecular dynamics simulations, both the deposition of nanoclusters and the growth and modification of cluster-assembled thin films are studied in this thesis. Emphasis is placed on an understanding of the interaction between metal clusters and surfaces, and thereby on the behaviour of these clusters during deposition and thin film growth. The behaviour of single metal clusters as they impact on clean metal surfaces is analysed in detail, from which it is shown that there exists a limit, dependent on cluster size and deposition energy, below which epitaxial alignment occurs. If larger clusters are deposited at low energies, or if cluster-surface interactions are weaker, non-epitaxial deposition will take place, resulting in the formation of nanocrystalline structures. The effect of cluster size and deposition energy on the morphology of cluster-assembled thin films is also determined, from which it is shown that nanocrystalline cluster-assembled films will be porous. Modification of these thin films, with the purpose of enhancing their mechanical properties and durability without destroying their nanostructure, is presented. Irradiation with heavy ions is introduced as a feasible method for increasing the density, and thereby the mechanical stability, of cluster-assembled thin films without critically destroying their nanocrystalline properties. The results of this thesis demonstrate that nanocluster deposition is a suitable technique for the growth of nanostructured thin films. 
The interactions between nanoclusters and their supporting surfaces must, however, be carefully considered, if a controlled growth of cluster-assembled thin films, with precisely tailored properties, is to be achieved.
  • Oksanen, Markku (Helsingin yliopisto, 2013)
    All the fundamental interactions except gravity have been successfully described in the framework of quantum field theory. The construction of a consistent quantum theory of gravity remains a challenge, because the general theory of relativity is not renormalizable. We consider gravitational theories that aim to improve the ultraviolet behavior of general relativity. The main tool of our analysis is the Hamiltonian formulation of theories that possess local (gauge) invariances. Hořava-Lifshitz gravity achieves power-counting renormalizability by assuming that space and time scale anisotropically at high energies. At long distances the theory flows to an effective theory that is relativistically invariant. We propose a generalization of this theory. Motivated by cosmology, the modified F(R) Hořava-Lifshitz gravity is constructed. It retains the renormalizability of the original Hořava-Lifshitz gravity. The Hamiltonian analysis shows that the theory contains two extra degrees of freedom compared to general relativity: one is associated with the lack of relativistic invariance at high energies, and the other with the presence of a second-order time derivative of the metric in the Lagrangian due to the nonlinearity of the function F(R). The theory is able to describe inflation and dark energy in a unified manner without extra components. For a certain choice of parameters the theory effectively flows to relativistic F(R) gravity at long distances. A Hamiltonian analysis of the recently proposed covariant renormalizable gravity is carried out. The structure of constraints is found to be very complicated, especially for the new version of the theory with improved ultraviolet behavior. Moreover, this theory is found to contain a ghost, a degree of freedom with negative energy, which destabilizes the theory. The Hamiltonian analysis of relativistic higher-derivative gravity is revisited. 
Conformally invariant Weyl gravity is concluded to be the only theory of this type that could, even in principle, restrain the existing ghosts, since in all other potentially renormalizable cases the number of ghosts exceeds the number of local invariances. Lastly, we investigate the idea of deriving a gravitational theory by gauging the twisted Poincaré symmetry of noncommutative spacetime.
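The anisotropic scaling underlying the power-counting renormalizability of Hořava-Lifshitz gravity is conventionally written as follows; this is the standard textbook form with dynamical critical exponent z, not a formula quoted from the thesis:

```latex
% Lifshitz scaling at high energies, with dynamical critical exponent z:
t \to b^{z}\, t, \qquad x^{i} \to b\, x^{i}
% Power-counting renormalizability in 3+1 dimensions requires z = 3,
% while relativistic scaling (z = 1) is recovered in the infrared limit.
```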