Browsing by Title


  • Kalinowski, Jaroslaw (2015)
    One of the goals of modern quantum chemistry is to simulate actual chemical experiments. Studying species closer to real-life systems and bulk environments requires methodological developments. There are two ways to approach large systems at a given level of accuracy: conceptual changes to quantum chemistry methods, or algorithmic developments for current methods. Many scientists believe that only conceptual changes truly increase the size of the systems one can study: with more or less advanced approximations to a method, the efficiency of calculations can be increased by orders of magnitude. Implementation and algorithms fall down the priority list, as advanced algorithmic developments are time-consuming and usually yield smaller efficiency gains than conceptual changes. This work shows that algorithmic developments cannot be neglected: even simple changes help in utilizing the power of modern computers and can likewise increase efficiency by orders of magnitude. New algorithmic developments are presented and used to solve various timely chemical problems.
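    The abstract names no specific algorithm, so the following is purely an illustrative sketch of the kind of simple algorithmic change that can pay off by orders of magnitude in quantum chemistry codes: Cauchy-Schwarz screening of two-electron integrals, which skips integral quartets that are provably negligible. All names and numbers below are assumptions for illustration, not taken from the thesis.

        # Illustrative sketch only; not the thesis's algorithm.
        import numpy as np

        def count_significant_quartets(Q, threshold=1e-10):
            """Q[i, j] = sqrt((ij|ij)) is the precomputed Schwarz factor of pair (i, j)."""
            n = Q.shape[0]
            kept = 0
            for i in range(n):
                for j in range(i + 1):
                    for k in range(n):
                        for l in range(k + 1):
                            # Cauchy-Schwarz bound: |(ij|kl)| <= Q[i, j] * Q[k, l]
                            if Q[i, j] * Q[k, l] >= threshold:
                                kept += 1  # a real code would evaluate (ij|kl) here
            return kept

        # Schwarz factors decay rapidly for well-separated pairs, so the fraction
        # of surviving quartets shrinks as the system grows (toy decay model):
        Q = np.exp(-np.abs(np.subtract.outer(np.arange(20.0), np.arange(20.0))))
        print(count_significant_quartets(Q, threshold=1e-6))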
  • García-Matos, Marta (Helsingin yliopisto, 2005)
  • Mattsson, Teppo (Helsingin yliopisto, 2009)
    The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Because a developing void expands the faster the lower its matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides a smart solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial for accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
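    For concreteness, here is a minimal sketch of the mathematics behind such void models, assuming (as is common in this literature) a spherically symmetric Lemaître-Tolman-Bondi (LTB) dust solution; the thesis's precise model may differ:

        \[
          ds^2 = -dt^2 + \frac{A'^2(r,t)}{1 + 2E(r)}\,dr^2 + A^2(r,t)\,d\Omega^2,
          \qquad
          \dot{A}^2 = \frac{2GM(r)}{A} + 2E(r),
        \]

    where A(r,t) is the local scale function, M(r) the mass inside comoving radius r and E(r) a position-dependent curvature (energy) function. A region with larger E(r), i.e. an underdense void, has a faster expansion rate \(\dot{A}/A\), which is exactly the mechanism invoked above: light threading progressively emptier regions samples a progressively faster expansion.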
  • Palonen, Vesa (Helsingin yliopisto, 2008)
    Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam, and the high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics, and the system performance and some of the 14C measurements done with the system are presented. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process, which enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that the uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements; in particular, the results improve for samples with very low 14C concentrations or samples measured only a few times.
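    As an illustration of the measurement model described above, here is a minimal generative sketch in Python; the parameter names (phi, sigma, base_counts) are hypothetical, not taken from the thesis:

        # Hypothetical sketch: AR(1) instrumental drift modulating Poisson counts.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_run(n_meas, true_ratio, phi=0.9, sigma=0.05, base_counts=500.0):
            drift = np.empty(n_meas)
            drift[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # stationary start
            for t in range(1, n_meas):
                drift[t] = phi * drift[t - 1] + rng.normal(0.0, sigma)
            # drift multiplies detection efficiency; counts are Poisson-distributed
            rate = base_counts * true_ratio * np.exp(drift)
            return rng.poisson(rate)

        counts = simulate_run(n_meas=10, true_ratio=0.8)

    In a Bayesian analysis of this kind, priors would be placed on the drift parameters and the unknown sample ratios, and the shared drift process is what allows normalization against standards measured at different times.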
  • Pöyhönen, Petteri (Helsingin yliopisto, 2012)
    This dissertation introduces two new access selection strategies, called the network-centric and the terminal-centric strategies. These strategies use a distributed access selection algorithm designed to exploit network cooperation to support both horizontal and vertical handovers. The algorithm development was motivated by the fact that network cooperation hides network boundaries and, through information dissemination, enables more intelligent decision making, ensuring better utilization of access resources and thus enhanced service for end users. Two new performance metrics, the USI (User Satisfaction Index) and the OSI (Operator Satisfaction Indicator), are proposed and used to evaluate the potential performance gains of these access selection strategies. A simulation model was developed to model how a distributed access selection algorithm could logically function in a multi-radio access technology environment including one or more operators. The technical metrics for the simulation experiments were selected to measure different aspects of access network resource usage and end-user connectivity, e.g., the numbers of different types of handovers and network utilization rates. The USI and OSI metrics are used to assess the non-technical performance of these access selection strategies. The simulation results and analysis indicate that cooperation between networks increases network utilization, coverage and service availability when the access selection is designed to take advantage of it. For service availability in a single-operator environment, the average online time is about 10% higher with the new access selection strategies than with the legacy one. For network utilization in a multi-operator environment, the network-centric strategy yields about a 20% higher utilization rate than the legacy one when the network is not overloaded. The new strategies are also better able to benefit from network cooperation when measured by users' disconnectivity. For end users, this means better perceived connectivity than with the legacy strategy; for operators, it means better utilization of their network resources. Naturally, as the analysis clearly shows, these technical benefits translate into greater satisfaction.
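    A toy sketch of what a scored access selection decision can look like; this is not the thesis's algorithm, and the weights and network names are illustrative assumptions:

        # Toy sketch, not the dissertation's algorithm.
        def select_access(candidates, w_load=0.5, w_signal=0.5):
            """candidates: list of (name, load, signal) with load, signal in [0, 1]."""
            def score(candidate):
                _name, load, signal = candidate
                # prefer strong signal and light load; the weights encode the policy
                return w_signal * signal - w_load * load
            return max(candidates, key=score)[0]

        # terminal-centric: the terminal weighs its own measurements;
        # network-centric: cooperating operators disseminate load information,
        # steering terminals so that overall utilization balances out.
        best = select_access([("wlan-1", 0.9, 0.8), ("lte-a", 0.3, 0.6)])
        print(best)  # -> "lte-a": the lighter load outweighs the weaker signal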
  • Kapanen, Mika (Helsingin yliopisto, 2009)
    Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, so it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important element of technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. Output measurements are carried out according to a locally chosen dosimetric QC program that defines the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data, whose uncertainty sets the limit on the best achievable calculation accuracy. All these dosimetric measurements require good experience, are laborious, consume resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because steep dose gradients are produced within, or close to, healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and recommendations were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, recommendations and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
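    A hedged sketch of the kind of model fitting described above (the abstract does not specify the model, so the linear-drift form and all numbers here are assumptions): fit a drift line to accelerator output QC measurements; the slope estimates output stability and the residual scatter estimates measurement reproducibility.

        # Hypothetical sketch of QC model fitting; data values are invented.
        import numpy as np

        days = np.array([0, 30, 60, 90, 120, 150], dtype=float)
        output = np.array([100.0, 100.3, 100.1, 100.6, 100.4, 100.8])  # % of nominal

        slope, intercept = np.polyfit(days, output, 1)        # linear output drift
        residuals = output - (slope * days + intercept)
        reproducibility = residuals.std(ddof=2)               # SD about the drift line

        # A longer measurement interval is justifiable when the expected drift over
        # the interval plus ~2x the reproducibility stays inside the action level.
        print(f"drift {slope:.4f} %/day, reproducibility {reproducibility:.3f} %")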
  • Muuronen, Mikko (Helsingin yliopisto, 2015)
    Understanding the electronic structure of a chemical system in detail is essential for describing its chemical reactivity. In the present work, quantum chemical methods are applied in combination with experimental studies to achieve such detailed mechanistic understanding of chemical systems. Because understanding the basic theory behind computational methods is important when applying them to chemical problems, the first part of this work provides an introduction to quantum chemical methods. The results of this work are published in four peer-reviewed publications. In each publication, the understanding of the chemical system was obtained by combining experimental and quantum chemical studies. These include the design of a new type of Au(III) catalyst and the clarification of mechanistic aspects of a Au(III) catalytic cycle. We have also focused on understanding how the electronic structure of an alkyne affects the regioselectivity of the Pauson-Khand reaction, and present a computational model that provides a qualitative and, to some extent, quantitative prediction of regiochemical outcomes.
  • Tamminen, Johanna (Helsingin yliopisto, 2004)
  • Kotiluoto, Petri (VTT, 2007)
    A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In this novel approach, the adaptive tree multigrid technique is used in conjunction with the simplified spherical harmonics approximation of the Boltzmann transport equation. Development of the new radiation transport code started within the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, its testing and development have continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been demonstrated by verifying and validating its performance for different types of neutral and charged particle transport problems, as reported in separate publications.
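    The abstract does not reproduce the equations, but the lowest-order member of the simplified spherical harmonics (SP_N) hierarchy offers a familiar reference point: for N = 1 the approximation reduces to the standard diffusion equation,

        \[
          -\nabla \cdot D(\mathbf{r}) \nabla \phi(\mathbf{r})
          + \Sigma_a(\mathbf{r})\,\phi(\mathbf{r}) = S(\mathbf{r}),
          \qquad
          D = \frac{1}{3\,\Sigma_{tr}},
        \]

    where \(\phi\) is the scalar flux, \(\Sigma_a\) the absorption cross section, \(\Sigma_{tr}\) the transport cross section and S the source. Higher-order SP_N members add coupled diffusion-like equations for higher flux moments, which is what makes the approximation attractive to solve on adaptive multigrid meshes.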
  • Leggio, Simone (Helsingin yliopisto, 2007)
    Wireless technologies are continuously evolving. Second-generation cellular networks have gained worldwide acceptance. Wireless LANs are commonly deployed in corporations and university campuses, and their diffusion in public hotspots is growing. Third-generation cellular systems have yet to establish themselves everywhere; still, there is an impressive amount of ongoing research on deploying beyond-3G systems. These new wireless technologies combine the characteristics of WLAN-based and cellular networks to provide increased bandwidth. The common direction of all these efforts is towards IP-based communication. Telephony services have been the killer application for cellular systems, and their evolution to packet-switched networks is a natural path. Effective IP telephony signaling protocols, such as the Session Initiation Protocol (SIP) and H.323, are needed to establish IP-based telephony sessions. However, IP telephony is just one example of IP-based communication; IP-based multimedia sessions are expected to become popular and to offer a wider range of communication capabilities than pure telephony. To combine the advances of future wireless technologies with the potential of IP-based multimedia communication, the next step is ubiquitous communication: people must be able to communicate even when no support from an infrastructure network is available, needed or desired. To achieve this, end devices must integrate all the capabilities necessary for IP-based distributed and decentralized communication. Such capabilities are currently missing; for example, it is not possible to utilize native IP telephony signaling protocols in a totally decentralized way. This dissertation presents a solution for deploying the SIP protocol in a decentralized fashion, without the support of infrastructure servers. The proposed solution is mainly designed to fit the needs of decentralized mobile environments, and can be applied to small-scale ad-hoc networks as well as larger networks with hundreds of nodes. A framework allowing the discovery of SIP users in ad-hoc networks and the establishment of SIP sessions among them, in a fully distributed and secure way, is described and evaluated. Security support allows ad-hoc users to authenticate the sender of a message and to verify the integrity of a received message. The distributed session management framework has been extended in order to achieve interoperability with the Internet and native Internet applications. With limited extensions to the SIP protocol, we have designed and experimentally validated a SIP gateway that allows SIP signaling between ad-hoc networks with a private addressing space and native SIP applications in the Internet. The design is completed by an application-level relay that permits instant messaging sessions to be established in heterogeneous environments. The resulting framework constitutes a flexible and effective approach for the pervasive deployment of real-time applications.
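    To give a flavour of what discovery without infrastructure servers can look like, here is a minimal sketch, not the thesis's actual framework: a node announcing its SIP URI over UDP multicast so that peers in the ad-hoc network can learn of it. The multicast group, port and URI are illustrative assumptions.

        # Hypothetical sketch of serverless peer announcement; not the thesis design.
        import socket

        MCAST_GRP, MCAST_PORT = "239.255.0.1", 5060  # illustrative group and port

        def announce(sip_uri: str) -> None:
            """Multicast our SIP URI so ad-hoc peers can discover us."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(sip_uri.encode(), (MCAST_GRP, MCAST_PORT))
            sock.close()

        announce("sip:alice@adhoc.invalid")

    Peers listening on the same group would cache the announced URIs and could then address SIP requests to each other directly, with message authentication and integrity checks layered on top as the abstract describes.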
  • Andersson, Mirja (Helsingin yliopisto, 2014)
    In dilute aqueous solution, a poly(N-isopropylacrylamide) (PNIPAM) chain undergoes a coil-to-globule transition at its lower critical solution temperature (LCST) of ca. 32 °C. PNIPAM is one of the most studied polymers, for instance for temperature-controlled drug release systems, because its LCST is so close to body temperature. However, the applicability of PNIPAM in gel actuators or active surfaces depends on the self-assembling microstructures and their physical properties. In this study, several types of PNIPAM-based polymers (linear, microgels, macrogels, core-shell particles) were prepared and characterised with several methods. The first part of the research focused on developing a temperature-controlled release system for a drug, isobutylmethylxanthine (IBMX), based on a PNIPAM system with tailored properties. The prepared macroscopic PNIPAM copolymer gels, whose properties were adjusted chemically by adding aromatic ester groups (benzoates and cinnamates) to the structures, exhibited a higher IBMX binding capacity than the unmodified PNIPAM gel in pure water above the LCST. The release of IBMX from the gels is slowed down by the aromatic moieties in the polymer network. The binding of IBMX to the polymers is concluded to be due both to specific complex formation between the aromatic moieties and IBMX, and to hydrophobic interactions inside the hydrophobically modified PNIPAM. In the second part of the research, the structures of PNIPAM microgels synthesised with different concentrations of surfactant (SDS) and crosslinking monomer (MBA) were studied, as were PNIPAM microgel shells on PS particles. With high SDS concentrations during microgel synthesis, the precipitation of PNIPAM is prevented, and consequently tightly packed PNIPAM particle cores are not formed; in other words, more homogeneously structured PNIPAM microgels result. The concentration of MBA does not affect the structure as dramatically as SDS, but the effect is clearly observed: the hydrodynamic radius above the cloud point increases with increasing MBA concentration, owing to the increasing size of the tightly crosslinked and rigid particle core. Due to the relatively more rigid structure of the microgel at higher crosslinker concentrations, the volume phase transition broadens and is pushed towards higher temperatures, and the enthalpy of the transition decreases with increased crosslinking density. Phase transitions and structural characteristics of the microgels were further studied with 1H-NMR spectroscopy, including measurements of the signal intensities as well as the spin-lattice (T1) and spin-spin (T2) relaxation times of the PNIPAM protons as a function of temperature. In analysing the relaxation times, the broad temperature range of the study is divided into two parts: cases above and below the LCST. When the suggested significant structural changes with MBA concentration, and especially with SDS concentration, are taken into account, the results can be rationalised. In a homogeneous microgel structure, the charges should also be more evenly distributed than in the corresponding heterogeneous microgel structure with a highly charged surface and an insoluble core. As the zeta potentials also suggest, the negatively charged coronal layers (with a high local LCST) in the heterogeneous microgels are likely to contribute to the proton signals well above the LCST.
    According to the relaxation times from the NMR studies, it is concluded that the PNIPAM on PS core particles has more mobile structures than the heterogeneous microgel samples (a looser and/or more heterogeneous network structure). The results also show that a PNIPAM microgel shell on a PS core inhibited the polymer-cell contact by steric repulsion, similarly to PEO grafts, whereas PVCL-coated PS adsorbed on the cells more strongly, especially above the LCST. This result for the PNIPAM shell in the cell interaction study correlates with the observed high T2 values, which indicate mobile components still present in the two-stage particle samples above the LCST, and supports the idea of high local LCSTs in the outermost coronal PNIPAM layers.
  • Doucet, Antoine (Helsingin yliopisto, 2005)
  • Laukkanen, Jarkko K. (Helsingin yliopisto, 2000)
  • Leszczynski, Kirsti (Helsingin yliopisto, 2002)
  • Sogacheva, Larisa (Helsingin yliopisto, 2008)
    Aerosol particles in the atmosphere are known to significantly influence ecosystems, change air quality and exert negative health effects. Atmospheric aerosols influence climate by cooling the atmosphere and the underlying surface through the scattering of sunlight, by warming the atmosphere through the absorption of sunlight and of thermal radiation emitted by the Earth's surface, and by acting as cloud condensation nuclei. Aerosols are emitted from both natural and anthropogenic sources. Depending on their size, they can be transported over significant distances while undergoing considerable changes in their composition and physical properties; their lifetime in the atmosphere varies from a few hours to a week. New particle formation is a result of gas-to-particle conversion. Once formed, atmospheric aerosol particles may grow by condensation or coagulation, or be removed by deposition processes. In this thesis we describe analyses of air masses, meteorological parameters and synoptic situations to reveal the conditions favourable for new particle formation in the atmosphere. We studied the concentration of ultrafine particles in different types of air masses, and the role of atmospheric fronts and cloudiness in the formation of atmospheric aerosol particles. The dominant role of Arctic and Polar air masses in causing new particle formation was clearly observed at Hyytiälä, Southern Finland, during all seasons, as well as at other measurement stations in Scandinavia. In all seasons and on a multi-year average, the Arctic and North Atlantic areas were the sources of nucleation mode particles. In contrast, concentrations of accumulation mode particles and condensation sink values in Hyytiälä were highest in continental air masses arriving from Eastern Europe and Central Russia. The most favourable situation for new particle formation in all seasons was cold air advection after cold-front passages; such a period could last a few days until the next front reached Hyytiälä. The frequency of aerosol particle formation is related to the frequency of days with a low cloud amount in Hyytiälä: cloudiness of less than 5 octas is one of the factors favouring new particle formation, whereas cloudiness above 4 octas appears to be an important factor preventing particle growth, due to the decrease in solar radiation, one of the important meteorological parameters in atmospheric particle formation and growth. Keywords: atmospheric aerosols, particle formation, air mass, atmospheric front, cloudiness
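    For reference, the condensation sink mentioned above is conventionally defined in the aerosol literature (the thesis may use a variant of this form) as

        \[
          CS = 4\pi D \sum_i \beta_{m}(r_i)\, r_i\, N_i,
        \]

    where D is the diffusion coefficient of the condensing vapour, \(r_i\) and \(N_i\) are the radius and number concentration of particles in size bin i, and \(\beta_m\) is the transitional-regime correction factor. A large CS means that pre-existing particles scavenge the condensable vapours efficiently, which suppresses new particle formation; this is why clean Arctic and North Atlantic air masses, with low accumulation mode concentrations, favour nucleation.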