Browsing by Title

  • Honkonen, Ilja (Helsingin yliopisto, 2013)
    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction and four peer-reviewed articles, and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented, starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the Distributed Cartesian Cell-Refinable Grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the Vlasov description of plasma is carried out using the Vlasiator model. The test shows that the Vlasov equation for plasma in six-dimensional phase space is solved correctly by Vlasiator, that results are obtained beyond those of the magnetohydrodynamic (MHD) description of plasma and that global magnetospheric simulations using a hybrid-Vlasov model are feasible on current hardware. For the first time, four global magnetospheric models using the MHD description of plasma (BATS-R-US, GUMICS, OpenGGCM, LFM) are run with identical solar wind input and the results are compared to observations in the ionosphere and outer magnetosphere. Based on the results of the global magnetospheric MHD model GUMICS, a hypothesis is formulated for a new mechanism of plasmoid formation in the Earth's magnetotail.
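    As an illustration of the cell-refinable Cartesian grid idea mentioned in the abstract, the following minimal Python sketch stores cells in a hash map keyed by (level, i, j, k) and splits a cell into eight children on refinement. It is only an illustration of the concept; the class and method names are hypothetical and it does not reflect the DCCRG interface or its distributed-memory features.

      # Minimal sketch of a cell-refinable Cartesian grid (illustrative only,
      # not the DCCRG API; names and data layout are hypothetical).
      class RefinableGrid:
          def __init__(self, nx, ny, nz):
              # level-0 cells keyed by (level, i, j, k); values hold cell data
              self.cells = {(0, i, j, k): None
                            for i in range(nx) for j in range(ny) for k in range(nz)}

          def refine(self, cell):
              """Replace one cell with its 8 children at the next refinement level."""
              level, i, j, k = cell
              data = self.cells.pop(cell)
              for di in (0, 1):
                  for dj in (0, 1):
                      for dk in (0, 1):
                          child = (level + 1, 2 * i + di, 2 * j + dj, 2 * k + dk)
                          self.cells[child] = data  # children inherit parent data

      grid = RefinableGrid(4, 4, 4)
      grid.refine((0, 1, 2, 3))   # refine one cell near a region of interest
      print(len(grid.cells))      # 4*4*4 - 1 + 8 = 71 cells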
  • Hellsten, Alex (Helsingin yliopisto, 2003)
  • Mäntylä, Terhi (Helsingin yliopisto, 2011)
    Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. Then it continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the knowledge formation of physical concepts and laws in a concise and simplified form while still retaining the authenticity of the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities as related to the temperature concept by using DRoS flow charts. In both studies, the attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them had an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
  • Keyriläinen, Jani (Helsingin yliopisto, 2004)
  • Koskelo, Otso (Helsingin yliopisto, 2010)
    The main method of modifying the properties of semiconductors is to introduce a small amount of impurities into the material. This is used to control the magnetic and optical properties of materials and to realize p- and n-type semiconductors out of intrinsic material in order to manufacture fundamental components such as diodes. As diffusion can be described as random mixing of material due to the thermal movement of atoms, it is essential to know the diffusion behavior of the impurities in order to manufacture working components. In the modified radiotracer technique, diffusion is studied using radioactive isotopes of elements as tracers. The technique is called modified because the atoms are introduced into the material by ion beam implantation. With ion implantation, a well-defined distribution of impurities can be deposited beneath the sample surface with good control over the amount of implanted atoms. As electromagnetic radiation and other nuclear decay products emitted by radioactive materials are easily detected, only a very small amount of impurities is needed. This makes it possible to study diffusion in pure materials without essentially modifying their initial properties by doping. In this thesis, the modified radiotracer technique is used to study the diffusion of beryllium in GaN, ZnO, SiGe and glassy carbon. GaN, ZnO and SiGe are of great interest to the semiconductor industry, and beryllium, as a small and possibly fast-diffusing dopant, has not been studied previously using this technique. Glassy carbon has been added to demonstrate the feasibility of the technique. In addition, the diffusion of the magnetic impurities Mn and Co has been studied in GaAs and ZnO, respectively, with spintronic applications in mind.
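    As a minimal illustration of the diffusion broadening that such tracer experiments measure, the sketch below assumes a Gaussian implantation profile whose variance grows as sigma^2(t) = sigma_0^2 + 2Dt under Fickian diffusion; all numbers are made up and the snippet is not tied to the specific materials studied in the thesis.

      import numpy as np

      # Sketch: broadening of an implanted Gaussian tracer profile under Fickian
      # diffusion. Illustrative only; the numbers below are made up.
      D = 1e-16            # diffusion coefficient, cm^2/s (hypothetical)
      t = 3600.0           # annealing time, s
      sigma0 = 20e-7       # initial implantation profile width, cm (20 nm)

      sigma_t = np.sqrt(sigma0**2 + 2 * D * t)   # width after annealing
      x = np.linspace(-5 * sigma_t, 5 * sigma_t, 1001)
      # normalised depth profile of the tracer after annealing
      profile = np.exp(-x**2 / (2 * sigma_t**2)) / (sigma_t * np.sqrt(2 * np.pi))

      print(f"profile width grew from {sigma0*1e7:.0f} nm to {sigma_t*1e7:.0f} nm")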
  • Slotte, Jonatan (Helsingin yliopisto, 1999)
  • Oksanen, Juha (Helsingin yliopisto, 2006)
    Digital elevation models (DEMs) have been an important topic in geography and surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as the wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine-toposcale DEMs, which are typically represented in a 5-50 m grid and used at the application scale of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analyses and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine-toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
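    A minimal sketch of the simulation-based error propagation idea described above: a spatially autocorrelated DEM error field is generated by convolving white noise with a Gaussian kernel (a simple process-convolution construction) and propagated through a slope calculation. The surface, error parameters and number of realisations are made up for illustration; this is not the thesis's geostatistical error model.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Sketch of simulation-based error propagation for a DEM derivative (slope).
      # Illustrative only; parameters and the synthetic surface are made up.
      rng = np.random.default_rng(0)
      cell = 10.0                                      # grid resolution, metres
      xx, yy = np.meshgrid(np.arange(200) * cell, np.arange(200) * cell)
      dem = 100.0 + 20.0 * np.sin(xx / 500.0) * np.cos(yy / 500.0)   # placeholder DEM
      sigma_z, corr_range = 1.0, 3.0                   # error std (m), correlation (cells)

      def slope(z):
          gy, gx = np.gradient(z, cell)
          return np.degrees(np.arctan(np.hypot(gx, gy)))

      slopes = []
      for _ in range(100):                             # Monte Carlo realisations
          noise = gaussian_filter(rng.normal(size=dem.shape), corr_range)
          noise *= sigma_z / noise.std()               # rescale to target error std
          slopes.append(slope(dem + noise))

      print("mean slope std due to DEM error (deg):", np.std(slopes, axis=0).mean())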
  • Kurkela, Aleksi (Helsingin yliopisto, 2008)
    When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and it confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory are described using a simpler three-dimensional effective theory that has only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. Numerical lattice simulations have, however, shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. This symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them. In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak-coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show, in sharp contrast to the perturbative setup, that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
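    Schematically, and with all coefficients suppressed, the weak-coupling expansion of the pressure referred to above has the form (the precise coefficients and scale dependence are beyond this sketch):

      \[
        p(T) \simeq p_{\mathrm{SB}}(T)\left[\, 1 + a_2\,g^2 + a_3\,g^3 + \bigl(a_4'\ln g + a_4\bigr)g^4 + a_5\,g^5 + \bigl(a_6'\ln g + a_6\bigr)g^6 + \mathcal{O}(g^7) \,\right],
      \]

    where p_SB is the ideal-gas (Stefan-Boltzmann) pressure and the coefficient a_6 is the piece whose determination requires the non-perturbative input computed in the first research paper.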
  • Fang, Chun (Helsingin yliopisto, 2013)
    One of the central problems in dynamical systems and differential equations is the analysis of the structures of invariant sets. The structures of the invariant sets of a dynamical system or differential equation reflect the complexity of the system or the equation. For example, if every omega-limit set of a finite-dimensional differential equation is a singleton, then each bounded solution of the equation eventually stabilizes at some equilibrium state. In general, a dynamical system or differential equation can have very complicated invariant sets, or so-called chaotic sets. It is of great importance to classify those systems whose minimal invariant sets have certain simple structures and to characterize the complexity of chaotic-type sets in general dynamical systems. In this thesis, we focus on the following two important problems: estimates for the dimensions of chaotic sets and stable sets in a system with finite positive entropy, and characterizations of minimal sets of nonautonomous tridiagonal competitive-cooperative systems.
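    For concreteness, the omega-limit set referred to above can be written, for a (semi)flow \varphi_t, as

      \[
        \omega(x) \;=\; \bigcap_{T \ge 0} \overline{\{\varphi_t(x) : t \ge T\}},
      \]

    so the statement that every omega-limit set of a bounded solution is a singleton is exactly the statement that the solution converges to an equilibrium.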
  • Manninen, Hanna (Helsingin yliopisto, 2011)
    Aerosol particles play an important role in the Earth's atmosphere and in the climate system: they scatter and absorb solar radiation, facilitate chemical processes, and serve as seeds for cloud formation. Secondary new particle formation (NPF) is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapors participating in this process are not fully understood. In order to fully explain atmospheric NPF and the subsequent growth, we need to measure directly the very initial steps of the formation processes. This thesis investigates the possibility of studying atmospheric particle formation using the recently developed Neutral cluster and Air Ion Spectrometer (NAIS). First, the NAIS was calibrated and intercompared, and found to be in good agreement with the reference instruments both in the laboratory and in the field. It was concluded that the NAIS can be reliably used to measure small atmospheric ions and particles directly at the sizes where NPF begins. Second, several NAIS systems were deployed simultaneously at 12 European measurement sites to quantify the spatial and temporal distribution of particle formation events. The sites represented a variety of geographical and atmospheric conditions. NPF events were detected with the NAIS systems at all of the sites during the year-long measurement period. Various particle formation characteristics, such as formation and growth rates, were used as indicators of the relevant processes and participating compounds in the initial formation. In the case of parallel ion and neutral cluster measurements, we also estimated the relative contributions of ion-induced and neutral nucleation to the total particle formation. At most sites, the particle growth rate increased with increasing particle size, indicating that different condensing vapors participate in the growth of different-sized particles. The results suggest that, in addition to sulfuric acid, organic vapors contribute to the initial steps of NPF and to the subsequent growth, not just to the later steps of particle growth. As a significant new result, we found that the total particle formation rate varied much more between the different sites than the formation rate of charged particles. The results imply that ion-induced nucleation has a minor contribution to particle formation in the boundary layer in most environments. These results give tools to better quantify the aerosol source provided by secondary NPF in various environments. The particle formation characteristics determined in this thesis can be used in global models to assess the climatic effects of NPF.
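    One commonly used balance equation behind formation-rate estimates of this kind (the notation varies between studies, and this generic form is not necessarily the exact one used in the thesis) is

      \[
        J_{d_p} \;=\; \frac{\mathrm{d}N_{d_p}}{\mathrm{d}t} \;+\; \mathrm{CoagS}_{d_p}\,N_{d_p} \;+\; \frac{\mathrm{GR}}{\Delta d_p}\,N_{d_p},
      \]

    where N_{d_p} is the number concentration in the size bin starting at diameter d_p, CoagS is the coagulation sink and GR is the growth rate out of the bin.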
  • Vähäkangas, Aleksi (Helsingin yliopisto, 2008)
    The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by only assuming that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set from above and from below by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved and Cartan-Hadamard manifolds are considered as an application.
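    For reference, p-harmonic functions are the (weak) solutions of the p-Laplace equation

      \[
        \operatorname{div}\!\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) \;=\; 0, \qquad 1 < p < \infty,
      \]

    which reduces to the ordinary Laplace equation when p = 2; the Dirichlet problem at infinity then asks for such a function that extends continuously to the boundary at infinity and attains prescribed values there.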
  • Hyttinen, Antti (Helsingin yliopisto, 2013)
    The causal relationships determining the behaviour of a system under study are inherently directional: by manipulating a cause we can control its effect, but an effect cannot be used to control its cause. Understanding the network of causal relationships is necessary, for example, if we want to predict the behaviour in settings where the system is subject to different manipulations. However, we are rarely able to directly observe the causal processes in action; we only see the statistical associations they induce in the collected data. This thesis considers the discovery of the fundamental causal relationships from data in several different learning settings and under various modeling assumptions. Although the research is mostly theoretical, possible application areas include biology, medicine, economics and the social sciences. Latent confounders, unobserved common causes of two or more observed parts of a system, are especially troublesome when discovering causal relations. The statistical dependence relations induced by such latent confounders often cannot be distinguished from directed causal relationships. The possible presence of feedback, which induces a cyclic causal structure, is another complicating factor. To achieve informative learning results in this challenging setting, some restricting assumptions need to be made. One option is to constrain the functional forms of the causal relationships to be smooth and simple. In particular, we explore how linearity of the causal relations can be effectively exploited. Another common assumption under study is causal faithfulness, with which we can deduce the lack of causal relations from the lack of statistical associations. Along with these assumptions, we use data from randomized experiments, in which the system under study is observed under different interventions and manipulations. In particular, we present a full theoretical foundation of learning linear cyclic models with latent variables using second-order statistics from several experimental data sets. This includes sufficient and necessary conditions on the different experimental settings needed for full model identification, a provably complete learning algorithm and a characterization of the underdetermination when the data do not allow for full model identification. We also consider several ways of exploiting the faithfulness assumption for this model class. We are able to learn from overlapping data sets, in which different (but overlapping) subsets of variables are observed. In addition, we formulate a model class called Noisy-OR models with latent confounding. We prove sufficient and worst-case necessary conditions for the identifiability of the full model and derive several learning algorithms. The thesis also suggests the optimal sets of experiments for the identification of the above models and others. For settings without latent confounders, we develop a Bayesian learning algorithm that is able to exploit non-Gaussianity in passively observed data.
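    To make the linear model class with feedback concrete, the sketch below writes the system as x = Bx + e, so that the observed covariance is (I - B)^{-1} Σ_e (I - B)^{-T}, and shows how a surgical intervention modifies B. It is a minimal, hypothetical example of the model class only, not the learning algorithms developed in the thesis.

      import numpy as np

      # Minimal sketch of a linear model with feedback: x = B x + e, so that
      # x = (I - B)^{-1} e and the observed covariance is (I - B)^{-1} Σe (I - B)^{-T}.
      # Illustrative only; this is the model class, not a learning algorithm.
      B = np.array([[0.0, 0.0, 0.0],
                    [0.8, 0.0, 0.3],     # x2 <- x1 and x2 <- x3 (feedback loop with x3)
                    [0.0, 0.5, 0.0]])    # x3 <- x2
      sigma_e = np.diag([1.0, 1.0, 1.0]) # disturbance covariance (no latent confounding here)

      I = np.eye(3)
      cov_obs = np.linalg.inv(I - B) @ sigma_e @ np.linalg.inv(I - B).T
      print("observational covariance:\n", cov_obs)

      # A surgical intervention on x2 cuts the edges into x2 (zero its row of B)
      # and replaces its disturbance with the intervention distribution.
      B_do = B.copy(); B_do[1, :] = 0.0
      cov_do = np.linalg.inv(I - B_do) @ sigma_e @ np.linalg.inv(I - B_do).T
      print("covariance under do(x2):\n", cov_do)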
  • Marttinen, Pekka (Helsingin yliopisto, 2008)
    Advancements in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and the amino acid sequences of proteins. The scale and quality of the data hold the promise of answering various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which is used to describe the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. The problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
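    The analytic integration of parameters mentioned above can be illustrated with a toy Dirichlet-multinomial partition score for aligned DNA sequences: with a symmetric Dirichlet prior, the cluster-specific category probabilities integrate out in closed form. This is a minimal sketch under assumptions of its own (independent positions, a hypothetical prior), not the models developed in the thesis.

      import numpy as np
      from scipy.special import gammaln
      from collections import Counter

      # Sketch: log marginal likelihood of a partition of aligned DNA sequences under
      # a Dirichlet-multinomial model, with cluster parameters integrated out analytically.
      ALPHABET = "ACGT"
      ALPHA = 1.0   # symmetric Dirichlet prior (hypothetical choice)

      def log_marginal(cluster_seqs):
          """Product over sequence positions of Dirichlet-multinomial marginals."""
          total = 0.0
          for pos in range(len(cluster_seqs[0])):
              counts = Counter(seq[pos] for seq in cluster_seqs)
              n = sum(counts.values())
              total += (gammaln(len(ALPHABET) * ALPHA) - gammaln(len(ALPHABET) * ALPHA + n)
                        + sum(gammaln(ALPHA + counts.get(a, 0)) - gammaln(ALPHA)
                              for a in ALPHABET))
          return total

      seqs = ["ACGT", "ACGA", "TTGT", "TTGA"]
      partition = [[0, 1], [2, 3]]                       # two clusters of related items
      score = sum(log_marginal([seqs[i] for i in c]) for c in partition)
      print("log marginal likelihood of the partition:", score)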
  • Toivonen, Hannu (Helsingin yliopisto, 1996)
  • Mehtälä, Juha (Helsingin yliopisto, 2015)
    Continuous-time Markov processes with a finite state space can be used to model countless real-world phenomena. Therefore, researchers often encounter the problem of estimating the transition rates that govern the dynamics of such processes. Ideally, the estimation of transition rates would be based on observed transition times between the states in the model, i.e., on continuous-time observation of the process. However, in many practical applications only the current status of the process can be observed at a pre-defined set of time points (discrete-time observations). The estimation of transition rates is considerably more challenging when based on discrete-time data as compared to continuous observation. The difficulty arises from missing data due to the unknown evolution of the process between the actual observation times. To be able to estimate the rates reliably, additional constraints on how they vary in time will usually be necessary. A real-world application considered in this thesis involves the asymptomatic carriage state (colonisation) with the bacterium Streptococcus pneumoniae (the pneumococcus). The pneumococcus has over 90 strains, and for understanding the dynamics of the pneumococcus among humans it is important to understand within-host competition between these strains. The research questions regarding competition in this thesis are: does colonisation by one serotype protect from acquisition of other serotypes, and is clearance affected by concurrent colonisation by other serotypes? A question regarding the implications of competition for pneumococcal dynamics after vaccination is also of interest. In addition, vaccine protection may be heterogeneous across individuals, leading to a question about how well such vaccine protection can be estimated from discrete-time data. When only discrete-time observations are available, the decision of when to measure the current status of the process is particularly important. With measurements that are far apart in time, information about the state of the process at one point does not give information about the state at the other points. When measurements are very close to each other, knowing the state at one point bears information about the state at other, temporally close points. This thesis addresses the estimation of transition rates based on repeated observations of the current status of an underlying continuous-time Markov process. Applications to actual data concern the process of pneumococcal colonisation. Optimal study designs are considered for improved future studies of a similar type, with applications including but not limited to pneumococcal colonisation studies.
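    A minimal sketch of the panel-data likelihood underlying such estimation: for a continuous-time Markov chain with generator Q, the probability of moving between observed states over a gap Δt is given by the matrix exponential expm(QΔt). The two-state colonisation model and its rates below are made up for illustration; this is not the thesis's full model.

      import numpy as np
      from scipy.linalg import expm

      # Sketch: likelihood of discrete-time ("current status") observations of a
      # continuous-time Markov chain with generator Q. Illustrative two-state model
      # (0 = not colonised, 1 = colonised) with made-up rates.
      acq, clr = 0.10, 0.30                    # acquisition and clearance rates per week
      Q = np.array([[-acq,  acq],
                    [ clr, -clr]])

      def log_lik(states, times, Q):
          """Sum of log transition probabilities between consecutive observations."""
          ll = 0.0
          for (s0, t0), (s1, t1) in zip(zip(states, times), zip(states[1:], times[1:])):
              P = expm(Q * (t1 - t0))          # transition probabilities over the gap
              ll += np.log(P[s0, s1])
          return ll

      obs_states = [0, 1, 1, 0, 0]             # observed colonisation status
      obs_times = [0.0, 4.0, 8.0, 12.0, 16.0]  # observation times in weeks
      print("log-likelihood:", log_lik(obs_states, obs_times, Q))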
  • Taipale, Risto (Helsingin yliopisto, 2011)
    Volatile organic compounds (VOCs) are emitted into the atmosphere from natural and anthropogenic sources, vegetation being the dominant source on a global scale. Some of these reactive compounds are deemed major contributors to or inhibitors of aerosol particle formation and growth, thus making VOC measurements essential for current climate change research. This thesis discusses ecosystem-scale VOC fluxes measured above a boreal Scots pine dominated forest in southern Finland. The flux measurements were performed using the micrometeorological disjunct eddy covariance (DEC) method combined with proton transfer reaction mass spectrometry (PTR-MS), which is an online technique for measuring VOC concentrations. The measurement, calibration, and calculation procedures developed in this work proved to be well suited to long-term VOC concentration and flux measurements with PTR-MS. A new averaging approach based on running averaged covariance functions improved the determination of the lag time between the wind and concentration measurements, which is a common challenge in DEC when measuring fluxes near the detection limit. The ecosystem-scale emissions of methanol, acetaldehyde, and acetone were substantial. These three oxygenated VOCs made up about half of the total emissions, with the rest comprised of monoterpenes. Contrary to the traditional assumption that monoterpene emissions from Scots pine originate mainly as evaporation from specialized storage pools, the DEC measurements indicated a significant contribution from de novo biosynthesis to the ecosystem-scale monoterpene emissions. This thesis offers practical guidelines for long-term DEC measurements with PTR-MS. In particular, the new averaging approach to lag time determination seems useful in the automation of DEC flux calculations. Seasonal variation in the monoterpene biosynthesis and the detailed structure of a revised hybrid algorithm, describing both de novo and pool emissions, should be determined in further studies to improve the biological realism in the modelling of monoterpene emissions from Scots pine forests. The increasing number of DEC measurements of oxygenated VOCs will probably enable better estimates of the role of these compounds in plant physiology and tropospheric chemistry.
    Keywords: disjunct eddy covariance, lag time determination, long-term flux measurements, proton transfer reaction mass spectrometry, Scots pine forests, volatile organic compounds
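    The lag-time problem mentioned above can be illustrated with synthetic data: compute the covariance between vertical wind and concentration as a function of lag and take the lag that maximises its absolute value. This simple maximum-covariance search is only an illustration; it does not implement the running-averaged covariance functions developed in the thesis.

      import numpy as np

      # Sketch of lag-time determination in (disjunct) eddy covariance: compute the
      # covariance between vertical wind and concentration as a function of lag and
      # pick the lag with maximum absolute covariance. Synthetic data, illustrative only.
      rng = np.random.default_rng(1)
      n, true_lag = 10_000, 12                 # samples; instrument delay in samples
      w = rng.normal(size=n)                   # vertical wind fluctuations
      c = 0.5 * np.roll(w, true_lag) + rng.normal(scale=0.5, size=n)  # delayed tracer

      def cov_at_lag(w, c, lag):
          return np.mean((w[:n - lag] - w[:n - lag].mean()) * (c[lag:] - c[lag:].mean()))

      lags = np.arange(0, 60)
      cov_func = np.array([cov_at_lag(w, c, k) for k in lags])
      best = lags[np.argmax(np.abs(cov_func))]
      print("estimated lag:", best, "samples; flux estimate:", cov_func[best])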
  • Pohjonen, Aarne (Helsingin yliopisto, 2013)
    The work presented in this thesis is related to the design of a future electron-positron collider, the Compact Linear Collider (CLIC), which is currently under development at CERN. The designed operation of the collider requires accelerating electric field strengths in the ∼ 100 MV/m range to reach the target energy range of 0.5 to 5 TeV for the collisions in a realistic and cost-efficient way. An important factor limiting the application of very high electric fields is the electrical breakdown rate, which depends drastically on the accelerating electric field strength E (approximately proportional to E^30). In order to achieve material properties capable of tolerating higher electric fields, research into the material-related physical origin of the fundamental cause of electrical breakdown onset needs to be undertaken. The onset stage of electrical breakdown on broad-area metal surfaces under an electric field is still unknown, although many theories have been proposed. In many of these theories it has been common to postulate the existence of a geometric protrusion on the surface that is capable of causing high field enhancement and pre-breakdown electric currents in the vacuum over metal surfaces under an electric field. However, such protrusions have never been seen on the metal surface prior to the breakdown. It has recently been observed experimentally that the average field that the material can tolerate without breakdown is correlated with the crystal structure of the material. This observation hints that some dislocation mechanism could be related to the onset stage of the breakdown event. In this thesis, the following mechanism that can be responsible for the breakdown onset is analyzed. Application of the electric field exerts stress on a metal surface, which can cause the nucleation and mobility of dislocations, i.e. plasticity. The localized plastic deformation can eventually lead to protrusion growth on the metal surface. Once a protrusion is formed on the surface, the electric field is enhanced at the protrusion site, further enhancing the protrusion growth. A defect such as a void can act as a stress concentrator that changes the otherwise uniform stress field and acts as an initiation site for plastic deformation caused by dislocations. In this thesis, we have examined the effect of an external stress on a near-surface void in conditions relevant for the research and design of the accelerating structures of the CLIC collider. A void present in the near-surface region of the accelerating structure causes local concentration of the stress induced by the external electric field on the conducting metal surface. The presence of such a near-surface void was experimentally observed in a metal sample prepared for an experimental spark setup. By means of the molecular dynamics simulation method we have shown that the stress can cause nucleation and/or movement of dislocations near the void. The mobility of dislocations then leads to the formation of a protrusion on the material surface. We analyzed the nucleation of the dislocations in detail and constructed a simplified analytical model that describes the relevant physical factors affecting the nucleation event. Since the shear stress on the slip plane drives the mobility and nucleation of the dislocations, we analyzed the stress distribution on the slip plane between the void and the surface by using the finite element method and by calculating the atomic-level stress with the molecular dynamics method.
The results were also compared to an analytic solution for a void located deep in the bulk under similar stress. It was found that the nearby surface had a significant effect on the stress distribution only when the void depth was less than its diameter. At greater depths the maximum stress is equal to that for a void located deep in the bulk under similar external stress. The comparison of the finite element results to the atomic-level stress revealed that the pre-existing surface stress near the void surface had a significant effect on the stress distribution. In addition to the tensile stress caused by the electric field on the charged metal surface, pulsed surface heating also induces stress in the surface region of the material under an alternating electric field. This cyclic thermal stress is known to cause fatigue and severe deformation of the metal surface. We investigated the conditions relevant for yield by calculating the atomic-level von Mises strain, which has earlier been related to dislocation nucleation. The strain concentration caused by the void was 1.9 times the bulk value. In order to see activated slip planes, we exaggerated the compressive stress to the extent that dislocation nucleation could be observed within the timespan allowed by the molecular dynamics simulation method. Dislocations were observed to nucleate at the sites of maximum von Mises strain. Taken together, the results presented in this thesis contribute to the understanding of the stress distributions and possible dislocation-related mechanisms under different stressing conditions, assuming the existence of a stress concentrator such as a near-surface void.
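    As an illustration of the von Mises measure used above, the sketch below computes the von Mises equivalent stress of a symmetric stress tensor and a simple concentration factor; the numbers are made up and the snippet is independent of the molecular dynamics and finite element analyses of the thesis.

      import numpy as np

      # Sketch: von Mises equivalent stress from a (symmetric) stress tensor, the kind
      # of scalar measure used to locate likely dislocation-nucleation sites.
      # The example tensor and far-field value are hypothetical.
      def von_mises(s):
          """Von Mises stress of a 3x3 symmetric stress tensor s."""
          dev = s - np.trace(s) / 3.0 * np.eye(3)     # deviatoric part
          return np.sqrt(1.5 * np.sum(dev * dev))

      sigma = np.array([[120.0,  30.0,   0.0],        # MPa, hypothetical local stress
                        [ 30.0,  80.0,   0.0],
                        [  0.0,   0.0,  40.0]])
      print("von Mises stress (MPa):", von_mises(sigma))

      # A simple stress-concentration estimate: local maximum over the far-field value,
      # analogous to the factor ~1.9 reported above for the strain around the void.
      far_field = 60.0
      print("concentration factor:", von_mises(sigma) / far_field)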