Browsing by Subject "Fysiikka"


Now showing items 1-20 of 152
  • Vuoriheimo, Tomi (Helsingfors universitet, 2017)
    Accelerator mass spectrometry (AMS) is a technique developed from mass spectrometry that can measure single, very rare isotopes in samples, with a detection capability down to one atom in 10^16. It uses an accelerator system to accelerate the atoms and molecules, breaking molecular bonds to allow precise single-isotope detection. This thesis describes the optimization of the University of Helsinki's AMS system to detect the rare radioactive isotope 14C from CO2 gas samples. Using AMS to detect radiocarbon is a precise and fast way to conduct radiocarbon dating with minimal sample sizes. Solid graphite samples have been used before, but as the ion source has been adapted to accept gaseous CO2 samples as well, optimizations must be made to maximize the carbon current and ionization efficiency for efficient 14C detection. The parameters optimized include the cesium oven temperature, the CO2 flow, the helium carrier gas flow and their mutual dependencies. Both the carbon current and the ionization efficiency are examined in the optimizations, and the results are analyzed and discussed with a view to further optimizations and actual gas measurements. The results also give a better understanding of the ionization occurring in the ion source. Standard CO2 samples were measured to determine the background and precision of the AMS system in gas use by comparing the results with the literature. The current system was found to have a tolerable background of 1.5% of the standard, and the fraction modern value of an actual sample was 2.4% higher than literature values. Ideas for improving the background are discussed. A new theory of negative-ion formation in a cesium sputtering ion source by John S. Vogel is reviewed and taken into account in the discussion of the optimization. Utilizing the theory, possible future upgrades to improve the ionization efficiency are presented, such as cathode material choices to reduce competitive ionization and cesium excitation by laser.
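As background to how fraction modern values map onto dates, the conventional radiocarbon age follows from the Libby mean life of 8033 years; the sketch below is illustrative, and the numbers in the example call are not from the thesis:

```python
import math

def radiocarbon_age(fraction_modern: float) -> float:
    """Conventional radiocarbon age (years BP) from a fraction modern
    value, using the Libby mean life of 8033 years."""
    return -8033.0 * math.log(fraction_modern)

# A sample whose F14C is 2.4% above the expected value would appear
# younger by roughly 8033*ln(1.024) years (hypothetical illustration):
bias = radiocarbon_age(1.024) - radiocarbon_age(1.0)
```

This shows why percent-level biases in the measured fraction modern translate directly into dating errors of a couple of centuries.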
  • Lahtinen, Aki (Helsingfors universitet, 2015)
    In a fusion reaction, two light nuclei fuse into one heavier nucleus, releasing energy. A fusion reaction requires a very high temperature, at which matter exists as plasma. For example, the reaction between the hydrogen isotopes deuterium and tritium, planned for use in fusion reactors, requires heating the plasma to temperatures above 100 million kelvin. The most studied fusion reactor design is the tokamak, in which hot plasma is confined in a torus-shaped chamber by strong magnetic fields. Despite the confining magnetic field, particles escape the plasma and eventually strike the chamber surfaces. One way to reduce the heat and particle flux on the chamber surfaces is to inject impurity atoms or molecules into the chamber to cool the edge plasma. Nitrogen has proven an interesting candidate for this task, but its transport and accumulation inside the reactor chamber still require further study. The rare nitrogen isotope 15N offers a means to investigate these questions. Typically this is done with tracer experiments, in which a chosen tracer is injected into the reactor chamber under known conditions and, after the experiment, the distribution of the tracer on the chamber surfaces is determined. This work focused on wall tiles removed from the ASDEX Upgrade (AUG) fusion reactor after the 2010-2011 experimental campaign, at the end of which a 15N tracer experiment was performed. The 15N contents of samples drilled from the tiles were studied with time-of-flight elastic recoil detection analysis (TOF-ERDA), nuclear reaction analysis (NRA) and secondary ion mass spectrometry (SIMS). For comparison, 15N-implanted test samples were also studied.
The first part of the thesis briefly introduces the operation of a tokamak fusion reactor, plasma-wall interaction, the use of nitrogen in fusion reactors, tracer experiments and the measurement methods used. The latter part focuses on the measurements performed, their analysis, the results and the conclusions. Based on the results, there are significant differences between the measurement methods for the AUG samples, whereas for the implanted samples the differences between methods are small. The differences are most likely caused by the uneven surface structure of the AUG samples, as a result of which the nitrogen distribution in the surface layers varies. With TOF-ERDA, the smoothest available surface of each sample was analysed to ensure reliable analysis. In the NRA measurements, the proton beam was directed at a larger area in the middle of the sample. This larger area also includes rougher spots, where tracer accumulation is greater than on a smooth surface. As a consequence, NRA yields clearly higher values for the 15N areal density than TOF-ERDA. Because of problems with the quadrupole mass spectrometer, only one SIMS measurement was performed, so finding optimal settings for measuring 15N with SIMS still requires further work.
  • Halonen, Roope (Helsingfors universitet, 2016)
    The first-order phase transition of a thermodynamic system, the nucleation process, is one of the basic physical phenomena and has significant relevance in several scientific fields. Despite its importance, the theoretical understanding of nucleation is still imperfect. The emergence of a new phase, a liquid or solid cluster, in a metastable gas phase is mainly treated with classical nucleation theory (CNT), which uses the known macroscopic thermodynamic properties of the studied substance, but the theory often fails to predict the nucleation process adequately. This failure has shifted the theoretical focus to molecular-level nucleation studies, both to improve the predictions and to understand the origin of the failure. This thesis examines one of the key assumptions behind CNT, the constrained equilibrium hypothesis, approaching it from a statistical mechanics and thermodynamics point of view. The main tools in this work are computational: both Monte Carlo (MC) and molecular dynamics (MD) simulations have been used to simulate the homogeneous nucleation of Lennard-Jones argon. Two separate studies are presented: first, we compare the nucleation rates obtained by MC simulations (based on thermodynamic equilibrium) and MD simulations using nonisothermal nucleation theory; then the constrained equilibrium hypothesis is invalidated by studying the kinetics of Lennard-Jones argon clusters from 4 up to 31 molecules at 50 K. In addition to the actual study, the thesis includes a systematic overview of the theoretical treatment of homogeneous nucleation, from the thermodynamic liquid drop model to applicable molecular-level simulation techniques.
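The CNT prediction discussed above rests on the capillarity barrier; a minimal sketch of the standard formulas follows, with all parameter values left to the caller (any argon-like numbers supplied in a call would be illustrative assumptions, not values from the thesis):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_barrier(sigma: float, v_mol: float, T: float, S: float) -> float:
    """Classical nucleation theory barrier height
    Delta G* = 16 pi sigma^3 v^2 / (3 (k T ln S)^2)
    for surface tension sigma (J/m^2), molecular volume v_mol (m^3),
    temperature T (K) and saturation ratio S > 1."""
    return (16.0 * math.pi * sigma**3 * v_mol**2
            / (3.0 * (k_B * T * math.log(S))**2))

def cnt_rate(J0: float, sigma: float, v_mol: float, T: float, S: float) -> float:
    """Nucleation rate J = J0 * exp(-Delta G*/kT); J0 is a kinetic
    prefactor whose detailed form CNT also prescribes."""
    return J0 * math.exp(-cnt_barrier(sigma, v_mol, T, S) / (k_B * T))
```

The exponential dependence on the barrier is why even modest errors in the surface tension or in the constrained-equilibrium assumption produce orders-of-magnitude errors in the predicted rate.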
  • Toijala, Heikki (Helsingin yliopisto, 2018)
    Field emission from metal surfaces is an important phenomenon in modern technology, not least due to its role in the vacuum breakdowns limiting the gradient of the accelerating fields in the Compact Linear Collider being planned at CERN. Vacuum breakdowns are found to originate at locations on the surface of the accelerating structures where field emission is enhanced, making an understanding of field emission important for increasing the effectiveness of the instrument. According to the standard Fowler–Nordheim theory of field emission, the work function of the surface and the geometric field enhancement are the two parameters that determine the field emission current, and the standard interpretation of experimental results focuses on the geometric field enhancement. The role of the work function, which can be significantly decreased near surface defects, is often overlooked. The aim of this work is to study the influence of atomic-scale defects on the work function and field emission characteristics of a copper (111) surface, and to verify the validity of the Fowler–Nordheim equation for surfaces with defects. The metal surface potential barriers were determined using density functional theory, with an image-potential-type term added manually to account for the long-range exchange and correlation interactions. The determined potential barriers were used in quantum transport calculations to compute the field emission current while taking into account the density of states. A Fowler–Nordheim plot analysis was done for the computed emission currents. The results show that, for the studied atomic-scale surface defects, the decreased work function of the surface is sufficient to explain the increased field emission current, while no effective geometric field enhancement was found.
The validity of the Fowler–Nordheim equation for the studied systems was established, with only an approximately constant factor separating the computed currents from those predicted by the Fowler–Nordheim equation.
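To make the role of the work function concrete, here is a minimal sketch of the elementary Fowler–Nordheim equation (without the image-charge correction factors); the field values and the reduced defect work function in the example calls are illustrative assumptions, not results from the thesis:

```python
import math

A_FN = 1.541434e-6   # first Fowler–Nordheim constant, A eV V^-2
B_FN = 6.830890      # second Fowler–Nordheim constant, eV^-3/2 V nm^-1

def fn_current_density(F: float, phi: float) -> float:
    """Elementary Fowler–Nordheim current density (A/nm^2) for a local
    field F (V/nm) and work function phi (eV)."""
    return (A_FN * F**2 / phi) * math.exp(-B_FN * phi**1.5 / F)

# An FN plot, ln(J/F^2) vs 1/F, is a straight line with slope
# -B_FN * phi^(3/2).  Lowering phi raises the current sharply:
j_clean = fn_current_density(5.0, 4.9)   # roughly clean Cu(111) phi
j_defect = fn_current_density(5.0, 4.0)  # assumed reduced phi near a defect
```

The exponential dependence on phi^(3/2) is why a modest work-function decrease near a defect can mimic a sizeable apparent field enhancement.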
  • Chen, Xuemeng (Helsingfors universitet, 2014)
    In this work, air ions were studied by applying an ion balance concept to measurements from the Hyytiälä SMEAR II station. Diurnal and seasonal variations in ion concentration and environmental ionizing radiation were studied by analysing data collected from long-term measurements. Total gamma radiation was the main source of ion production in the atmosphere; it can be attenuated by snow cover during winter periods. α and β emissions from the radon decay process contributed about 20% of the total ion pair production and were sensitive to variations in soil conditions. In general, more positive than negative ions exist at ground level due to the earth electrode effect. Similar patterns were found in the cluster ion concentration and in the ion source rate derived from the total gamma radiation. On days with new particle formation (NPF), a relation was observed between cluster ion concentration, wind speed, temperature (T) and relative humidity (RH). A similar connection to T and RH was also identified for the ion source rate and the ion production rate. The ion source rate derived from the gamma dose rate was high on non-event days and low on NPF days, while the reverse held for the source rate derived from radon decay emissions. The ion production rate was typically higher on NPF event days than on non-event days. Two approaches were used to determine the ion production rate in the cluster size range, based on an improved balance equation for air ions. The similar values obtained with these two approaches imply a balanced condition between the ionizing sources and the observed ion concentration. This suggests that measurement of air ions by the Balanced Scanning Mobility Analyser (BSMA) is likely to be reliable, although an accurate parameterization for sub-0.8 nm ions is not yet available. Moreover, the ion production rate and the formation rate were found not to be comparable.
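The balance-equation idea can be sketched with its steady-state form q = αn² + βnZ, where production q is balanced by ion-ion recombination and attachment to aerosol particles; the default coefficients below are typical literature orders of magnitude, not numbers from the thesis:

```python
import math

def steady_state_ion_conc(q: float, Z: float,
                          alpha: float = 1.6e-6,
                          beta: float = 1.0e-6) -> float:
    """Steady-state cluster ion concentration n (cm^-3) solving the ion
    balance q = alpha*n^2 + beta*n*Z for the positive root.
    q: ion-pair production rate (cm^-3 s^-1), Z: aerosol number
    concentration (cm^-3), alpha: ion-ion recombination coefficient,
    beta: ion-aerosol attachment coefficient (both cm^3 s^-1,
    illustrative default values)."""
    bz = beta * Z
    return (-bz + math.sqrt(bz * bz + 4.0 * alpha * q)) / (2.0 * alpha)
```

With no aerosol (Z = 0) this reduces to n = sqrt(q/α); a higher aerosol load acts as an ion sink and lowers the cluster ion concentration, consistent with the NPF-day behaviour described above.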
  • Kangasluoma, Juha (Helsingfors universitet, 2012)
    The development of new particle counters has created a need to generate aerosols below two nanometres in size. Producing two-nanometre aerosols is not new as such, but in this size range, for example, particle charging and impurities play a significant role. For this reason, the measurements in this work also included a mass spectrometer, which has not previously been used in particle counter calibrations. The aim of this work was to generate and characterize a clean aerosol below two nanometres in size. The aerosol was generated with a furnace, a hot-wire generator and an electrospray. The particles produced by electrospray were inherently charged; the other particles were charged with an Am-241 charger. The mobility analyzer was a high-resolution Herrmann DMA (differential mobility analyzer), which selects particles of the desired size from the sample and has a resolution of about 20. After the DMA, the sample was led to three instruments measuring in parallel. The mass spectrometer, an APi-TOF (atmospheric pressure interface time-of-flight mass spectrometer), measures the time of flight of ions and accepts a sample directly from atmospheric pressure. Alongside the APi-TOF, an electrometer measured the total charge of the sample, and a PSM was the instrument to be calibrated. By varying the mobility selected with the DMA, the particle concentration, the detection efficiency of the PSM and the mass spectrum were determined as a function of size. The chemical composition of ammonium sulfate was determined to be (HSO4)x(NH3)ySO4- and (HSO4)x(NH3)yH3SO4+, that of sodium chloride (NaCl)x(mass 106)yCl- and (NaCl)x(mass 106)yNa+, and that of tungsten oxide H0-2WyOz(mass 88)0-2-. The chemical composition of positive tungsten oxide could not be determined. Below 1.5 nm, the positively charged ammonium sulfate, sodium chloride and tungsten oxide were contaminated with organic compounds.
From the silver sample, the clusters Agx-, where x = 7, 17, 19, as well as Agy(mass 224)+ and HAgy(mass 224)+ were identified; y is odd when the cluster contains no hydrogen and even when it does. The silver spectrum was, however, mostly contaminated with clusters of silver and organic impurities. The cut-off sizes of the PSM were determined to be 1.3, 1.6, 1.7, and 1.7 nm for negative sodium chloride, ammonium sulfate, tungsten oxide and silver, respectively. The cut-off size for all the positive, organic-contaminated samples was about 1.8 nm.
  • Pöyry, Paula (Helsingfors universitet, 2004)
    A DAP meter, which measures the dose-area product (DAP), is used in diagnostic radiology to determine the patient's radiation exposure. The DAP meter is a transmission-type, planar ionization chamber which can measure simultaneously with the patient examination. The DAP meter in an x-ray unit should be calibrated so that the measurement result is the dose-area product in the radiation beam incident on the patient. The meters can be calibrated by various methods, but often only the calibration made at manufacture has been relied upon. The purpose of this work was to develop a uniform and practical calibration procedure for DAP meters, making the measurements traceable to the international measurement system. In the new procedure, the calibration at the site of use is performed with a calibrated DAP meter (a reference meter) that is in the radiation beam simultaneously with the meter to be calibrated. The reference meters needed for on-site calibration are calibrated in the standards laboratory of the Radiation and Nuclear Safety Authority (STUK). To develop the method, DAP meters were studied with a measurement setup built in the laboratory, clarifying their operation and the factors affecting the calibration. For calibrating the reference meter, two methods were studied, in which the true dose-area product is determined either by measuring with a calibrated DAP meter or by calculating the product of the measured values of the absorbed dose to air and the cross-sectional area of the radiation beam. The DAP values measured with the two methods differ from each other by several percent. Based on earlier studies and our own measurements, it was decided that the laboratory's calibrated DAP meter is used as the measurement standard in the calibration of the reference meters. With the developed method, five reference meters were calibrated in the standards laboratory.
With one of the reference meters, the DAP meters of diagnostic x-ray units were calibrated at their own sites of use in a hospital. The measurements showed that the conventional pressure and temperature correction slightly overcorrects the meter readings. Therefore, variations in ambient conditions affect the corrected measurement results and the calibration uncertainty more than previously estimated.
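The on-site comparison described above reduces, in essence, to forming a calibration coefficient from simultaneous readings, together with the usual air-density correction for vented ionization chambers; a sketch, assuming the standard reference conditions of 20 °C and 101.325 kPa (an assumption of this example, not a value quoted from the thesis):

```python
def air_density_correction(T_celsius: float, P_kpa: float,
                           T0_celsius: float = 20.0,
                           P0_kpa: float = 101.325) -> float:
    """Standard temperature-pressure correction for a vented ionization
    chamber: k_TP = ((273.15 + T) / (273.15 + T0)) * (P0 / P)."""
    return ((273.15 + T_celsius) / (273.15 + T0_celsius)) * (P0_kpa / P_kpa)

def calibration_coefficient(dap_reference: float, reading_field: float,
                            k_tp: float = 1.0) -> float:
    """Calibration coefficient N = DAP_ref / (k_TP * M), relating the
    corrected field-meter reading M to the reference dose-area product."""
    return dap_reference / (k_tp * reading_field)
```

If the k_TP correction overcorrects (as the hospital measurements above suggest), N acquires a residual dependence on ambient conditions, which is exactly the extra uncertainty contribution noted in the thesis.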
  • Rosta, Kawa (Helsingfors universitet, 2017)
    This work examines the operation and behaviour of a dose-area product meter (DAP meter) at small radiation doses, where the DAP values are low. Paediatric examinations use low exposure settings, and consequently the doses to child patients are low; in paediatric thorax examinations, the average DAP value to the patient is 19 mGy·cm^2. The accuracy of the DAP value at low doses is important, because radiation exposure received in childhood carries a greater risk than a corresponding exposure in adulthood. Children therefore have a special status in radiation protection, and particular attention must be paid to the justification and optimization of paediatric examinations. In this study, the accuracy of DAP meters at low doses was examined using a beam-area calibration method. In the calibration, the DAP meters served as field instruments and a RaySafe Xi dosimeter as the reference instrument; in other words, the DAP meter readings were examined by comparing them with the RaySafe Xi readings. The DAP meter is mounted in front of the x-ray tube, and in the beam-area method the dosimeter is placed below the x-ray tube, facing the radiation beam, so that during an exposure both instruments are irradiated simultaneously. The results showed that the DAP meters had been calibrated using a high tube voltage and tube charge and without additional filtration, so that low doses had not been taken into account in the calibration. Examining DAP meter accuracy at low doses revealed that the equipment requirement for DAP meters, according to which the reading may deviate from the true value by at most 25%, is not met by the AGFA DX-D600 and FUJI FDR Acselerate x-ray units for DAP values between 0 and 4 mGy·cm^2. Thus, the DAP values from the DAP meters of these two x-ray units are not reliable below 4 mGy·cm^2.
  • Agaian, David (Helsingin yliopisto, 2020)
    Particle Induced X-ray Emission (PIXE) is a nondestructive Ion Beam Analysis (IBA) technique that can be used to identify the elements in a sample. In PIXE, the radiation emitted by electronic transitions is measured and the emissions are recorded as spectral peaks; each element is then identified from its characteristic peaks. The PIXE analysis was carried out using a 3 MeV proton beam generated with the TAMIA 5 MV EGP-10-II tandem accelerator of the Department of Physics, University of Helsinki, at the Helsinki Accelerator Laboratory in Kumpula. The external PIXE measurement setup in the accelerator laboratory was prepared to study nine coins from the 18th to the 20th century and from different countries (Russia, USSR, Romania, France, and Portugal). The coins were irradiated in the external PIXE setup and the emitted x-rays were detected with two different detectors: a KETEK AXAS-D Silicon Drift Detector (SDD) for all coins, and a Canberra GUL0110 Ultra-Low Energy Germanium (Ultra-LEGe) detector for the silver coins. After the PIXE spectra were obtained, the PyMCA software was used for the elemental analysis of the data. In the present study, various elements were found in the measured PIXE spectra. In the silver coins, the following 10 elements were identified: Ag, Cu, As, Pb, Fe, Sb, Ni, Zn, Sn and Bi. In the nickel-plated steel coin, the observed elements were Fe, Ni, Co, and Cu. The copper-zinc-nickel alloy coins were found to consist of Cu, Zn, Ni, Fe, and Mn, and the copper-nickel alloy coins of Cu, Ni, Fe, and Mn. This study verifies that the external PIXE technique can be used as a practical tool for identifying elements in metallurgical samples.
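The element identification in PIXE rests on characteristic x-ray energies growing roughly as the square of the atomic number; a sketch of the Moseley's-law estimate for K-alpha lines (a textbook approximation for orientation only, not the fitting model PyMCA actually uses):

```python
def moseley_k_alpha_kev(Z: int) -> float:
    """Approximate K-alpha x-ray energy from Moseley's law:
    E = 13.6 eV * (1 - 1/4) * (Z - 1)^2 = 10.2 eV * (Z - 1)^2,
    returned in keV."""
    return 10.2e-3 * (Z - 1) ** 2

# Copper (Z = 29) -> about 8.0 keV, close to the measured Cu K-alpha
# line; silver (Z = 47) lies far higher, which is why the silver coins
# benefited from a second detector with different energy response.
```

The quadratic Z dependence is what makes the peaks of neighbouring elements such as Fe, Ni and Cu cleanly separable in the spectra.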
  • Åhlgren, Elina Harriet (Helsingfors universitet, 2012)
    Graphene is an ultimately thin membrane composed of carbon atoms, whose envisioned applications range from desalinating sea water to fast electronics. When studying the properties of this material, molecular dynamics (MD) has proven to be a reliable way to simulate the effects of ion irradiation of graphene. As ion beam irradiation can be used to introduce defects into a membrane, it can also be used to add substitutional impurities and adatoms into the structure. In the first study presented in this thesis, I report results on doping graphene with boron and nitrogen. The most important message of this study is that doping graphene with an ion beam is possible and can be applied not only to bulk targets but also to a sheet of carbon only one atomic layer thick. Another important result is that different defect types have characteristic energy ranges that differ from each other; because of this, the defect types created during irradiation can be controlled by varying the ion energy. The optimum energy for creating a substitution is about 50 eV for both ions, with a probability of 55% for N and about 40% for B. Single vacancies are most probably created at an energy of about 125 eV for N (55%) and about 180 eV for B (35%). For double vacancies, the maximum probabilities are at roughly 110 eV for N (16%) and 70 eV for B (6%). The probabilities for adatoms are highest at very small energies. A one-atom-thick graphene membrane is reportedly impermeable to standard gases. Hence, graphene's selectivity for gas molecules trying to pass through the membrane is determined only by the size of the defects and vacancies in the membrane. Gas separation using graphene membranes requires knowledge of the properties of defected graphene structures. In this thesis, I present results on the accumulation of irradiation damage in graphene, obtained with MD simulations.
According to our results, graphene can withstand vacancy concentrations of up to 35% without breakage of the material. A simple model was also introduced to predict the influence of the irradiation during experiments. In addition to the specific results on the ion irradiation manipulation of graphene, this work shows that MD is a valuable tool for materials research, providing information on the atomic scale that is rarely accessible to experimental research, e.g. during irradiation. Using realistic interatomic potentials, MD provides a computational microscope that helps us understand how materials behave at the atomic level.
  • Pirttikoski, Antti (Helsingin yliopisto, 2021)
    The LHC is the highest-energy particle collider ever built, and it is used to study elementary particles by colliding protons together. One intriguing study subject at the LHC is the stability of the electroweak vacuum of our universe. The current prediction suggests that the vacuum is in a metastable state. The stability of the vacuum depends on the mass of the top quark, and it is possible that a more precise measurement of this mass could shift the prediction to the border of the metastable and stable states. To measure the mass of the top quark more precisely, we need to measure the bottom (b) quarks from its decay at high precision, as the top quark decays predominantly into a W boson and a b quark. Due to the phenomenon called hadronisation, we cannot measure quarks directly, but rather as sprays of collimated particles called jets. Jets originating from b quarks (b jets) can be identified by b-tagging. Precise measurement and calibration of the b jet energy is crucial for the top quark mass measurement. This thesis studies b jets and their energy calibration at the CMS, one of the general-purpose detectors at the LHC. In particular, the b jet energy scale (bJES) and the various phenomena affecting it are investigated. For example, a large fraction of b jets contain neutrinos, which cannot be measured directly; this increases the uncertainties in the energy measurement. There are also open questions about how precisely the formation and evolution of b jets can be modelled by Monte Carlo event generators such as Pythia8, which was used in this thesis. The aim of this thesis is to evaluate how large an effect the various phenomena that presumably weaken the precision of b jet measurements have on the bJES.
The studied phenomena are the semileptonic branching ratios of b hadrons, the branching ratios of b hadron to c hadron decays, the b hadron production fractions and the parameterization of the b quark fragmentation function. The combined effect of the four rescaling features mentioned above suggests that the bJES is known at the 0.2% level. A small shift of -0.1% in the missing transverse energy projection fraction (MPF) response scale is detected at low pT values, which vanishes as the pT increases. This is a remarkable improvement on the 0.4-0.5% JES accuracy achieved at CMS during Run 1 of the LHC. However, there are still many ways to improve the performance presented here, and the rescaling methods definitely require further study before the results can be used in bJES corrections for a precision measurement of the top quark mass.
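For reference, the MPF response mentioned above projects the missing transverse momentum onto a reference axis; a minimal sketch (the event kinematics in the usage lines are invented for illustration):

```python
def mpf_response(met_x: float, met_y: float,
                 ref_px: float, ref_py: float) -> float:
    """Missing-ET Projection Fraction response:
    R_MPF = 1 + (MET . pT_ref) / |pT_ref|^2,
    i.e. the missing transverse momentum projected onto the transverse
    momentum of a well-measured reference object."""
    pt2 = ref_px**2 + ref_py**2
    return 1.0 + (met_x * ref_px + met_y * ref_py) / pt2

# A perfectly balanced event (no MET) gives R = 1; if the recoiling jet
# is under-measured, the MET points opposite the reference object and R
# drops below 1 (hypothetical numbers):
r_balanced = mpf_response(0.0, 0.0, 30.0, 0.0)    # -> 1.0
r_under = mpf_response(-6.0, 0.0, 30.0, 0.0)      # -> 0.8
```

The attraction of the MPF method is that it measures the jet response via global momentum balance rather than the jet's own reconstructed energy.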
  • Lampsijärvi, Eetu (Helsingin yliopisto, 2020)
    The feasibility of quantitatively measuring ultrasound in air with a Schlieren arrangement has been demonstrated before, but no previous work has demonstrated calibration of the system combined with computation to yield the 3D pressure field. The present work demonstrates the feasibility of this both in theory and in practice, and characterizes the setup used to obtain the results. Elementary ray-optical and Schlieren theory is presented to support the claims. The derivation of the ray-optical equations related to quantitative Schlieren measurements is shown step by step to help understand the basics. A numerical example based on the theoretical results is then given: synthetic Schlieren images are computed for a theoretical ultrasonic field using direct numerical integration, and the ultrasonic field is then recovered from the synthetic Schlieren images using the inverse Abel transform. The accuracy of the inverse transform is evaluated in the presence of synthetic noise. The Schlieren arrangement used to produce the results, including the optics, optomechanics, and electronics, is explained, along with the stroboscopic use of the light source to freeze the ultrasound in the photographs. Postprocessing methods such as background subtraction and median and Gaussian filtering are used. The repeatability and uncertainty of the calibration are examined by performing repeated calibrations while translating or rotating the calibration targets. The ultrasound fields emitted by three transducers (100 kHz, 175 kHz, and 300 kHz) driven by 5-cycle sine bursts at 400 Vpp are measured at two different points in time. The measured 3D pressure fields for each transducer are shown along with a line profile near the acoustic axis. Pressure amplitudes near 1 kPa are seen, and nonlinearity is seen in the waveforms, as expected for such high pressures.
Noise estimates from the numerical example suggest that the pressure amplitudes have an uncertainty of 10% due to noise in the photographs. The calibration experiments suggest that an additional uncertainty of about 2% per degree of freedom (Z, X, rotation) is to be expected unless special care is taken. The worst-case uncertainty is estimated to be 18%. Limitations and advantages of the method are discussed. As Schlieren imaging is a non-contacting method, it has an advantage over microphone measurements, which may affect the field they are measuring. As every photograph measures the whole field, no scanning of the measurement device is required, as with a microphone or an LDV. Suggestions for improving the measurement setup are provided.
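The field-recovery step relies on the inverse Abel transform, f(r) = -(1/π)·∫_r^∞ F'(y) dy / sqrt(y² - r²), valid for axially symmetric fields; a direct-quadrature sketch (not the author's implementation; the midpoint rule, step counts and test function are choices of this example):

```python
import math

def inverse_abel(F, r: float, y_max: float, n: int = 4000) -> float:
    """Inverse Abel transform
    f(r) = -(1/pi) * integral_r^{y_max} F'(y) / sqrt(y^2 - r^2) dy,
    evaluated by midpoint quadrature (which avoids the integrable
    singularity at y = r) with a central-difference derivative.
    F is the measured line-of-sight projection as a callable."""
    h = (y_max - r) / n
    eps = 1e-6
    total = 0.0
    for i in range(n):
        y = r + (i + 0.5) * h
        dF = (F(y + eps) - F(y - eps)) / (2.0 * eps)
        total += dF / math.sqrt(y * y - r * r) * h
    return -total / math.pi
```

A standard check: the Abel projection of f(r) = exp(-r²) is F(y) = sqrt(π)·exp(-y²), so feeding that F back in should recover exp(-r²) to within the quadrature error.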
  • Hakala, Jani (Helsingfors universitet, 2012)
    The most important parameters describing an aerosol particle population are the size, concentration and composition of the particles. The size and water content of aerosol particles depend on the relative humidity of the ambient air. Hygroscopicity is a measure of the water absorption ability of an aerosol particle, while volatility defines how aerosol particles behave as a function of temperature. A Volatility-Hygroscopicity Tandem Differential Mobility Analyzer (VH-TDMA) is an instrument for size-selected investigation of particle number concentration, volatility, hygroscopicity and the hygroscopicity of the particle core, i.e. what is left of the particle after volatilization. Knowing these qualities of aerosol particles, one can predict their behavior in different atmospheric conditions. Volatility and hygroscopicity can also be used for indirect analysis of chemical composition. The aim of this study was to build and characterize a VH-TDMA and to report the results of its field deployment in the California Nexus (CalNex) 2010 measurement campaign. The calibration measurements validated that the VH-TDMA yields accurate volatility and hygroscopicity measurements for particles between 20 nm and 145 nm. The CalNex 2010 results showed that the instrument is capable of field measurements under varying conditions, and valuable data on the hygroscopicity, volatility and mixing state of several types of aerosols were obtained. The data were in line with observations based on data measured with other instruments.
  • Monira, Shirajum (Helsingin yliopisto, 2020)
    Particle and nuclear physics experiments require state-of-the-art detector technologies in the pursuit of high data-collection efficiency, ensuring reliable data recording from the particle collisions at the Large Hadron Collider (LHC) experiments at CERN. The high demand for data for precision analyses has led to the development of MicroPattern Gaseous Detector (MPGD) based structures: the Gas Electron Multiplier (GEM) and MICROMEGAS (MM). A systematic study is conducted on the charging-up behaviour of a two-stage amplification structure consisting of a single GEM foil above a MM detector with a 2D readout chamber. The charging-up effect arises in the detector system from the combined effects of the polarization of dielectric surfaces and the accumulation of charge on the dielectric surfaces of the MM resistive strips under a high external electric field. The internal fields created by the charging-up of the dielectric surfaces can modify the applied electric field, and the gain of the detector suffers. In this thesis, the gain instability due to the characteristic charging-up process in the GEM and MM is observed for different event rates and humidity levels in the detector fill gas (P-10 gas mixture). The MM gain decreased with time due to the charging-up of the dielectric surfaces, and an exponential gain drop of ≈30% is detected in the case of dry gas, i.e. fill gas without moisture. With a small amount of water added to the fill gas, the MM gain is observed to drop by around 22-30%. The addition of 1320±280 ppmV of water content into the MM gas volume yielded a gain about 10% higher than with dry gas. In the higher-rate measurement, achieved by using the GEM foil as a pre-amplification stage between the drift and readout electrodes, the gain saturates at 70%; in the low-rate measurement it saturates at 68%. The GEM gain is observed to increase slowly by 17% as the dielectric surfaces inside its holes charge up gradually over time.
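Exponential gain drops and saturations like those reported above are commonly summarized with a single-exponential charging-up model; a sketch (the functional form is a generic parameterization, and the gain values and time constant in any call are illustrative, not fitted to the thesis data):

```python
import math

def charging_up_gain(t: float, g0: float, g_inf: float, tau: float) -> float:
    """Single-exponential charging-up model for gas gain:
    G(t) = G_inf + (G0 - G_inf) * exp(-t / tau),
    where G0 is the initial gain, G_inf the saturated gain, and tau the
    characteristic charging-up time constant."""
    return g_inf + (g0 - g_inf) * math.exp(-t / tau)
```

Fitting G0, G_inf and tau to a measured gain-versus-time curve condenses each rate and humidity condition into a few comparable numbers, e.g. a dry-gas run saturating 30% below its initial gain corresponds to G_inf = 0.7·G0.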
  • Itälä, Aku (Juvenes Print, 2018)
    In Finland, the spent nuclear fuel final repository of Posiva Oy is based on the Swedish KBS-3V multi-barrier concept. In this concept, the spent fuel rods are placed inside cast iron inserts surrounded by a gastight copper canister. The canister is placed in a vertical borehole and surrounded by bentonite clay rings at a depth of at least 400 m in an underground bedrock facility at Olkiluoto. The bentonite acts as a buffer material which gives mechanical and chemical protection, dissipates heat and retards radionuclide diffusion in the event of canister failure. It is crucial to know whether the bentonite will retain its performance for at least 100 000 years. This thesis comprises six publications in which experiments related to the bentonite buffer are modelled, or parameters of bentonite are studied under laboratory or final repository conditions. In the first two publications, the aim was to model the chemical evolution of a final repository during the thermal phase, when the bentonite is initially only partially saturated. A case called Long-term test adverse 2, performed at the Äspö Hard Rock Laboratory, was adopted as a reference case to make the modelling more concrete and to clarify whether the phenomena occurring in the experiment must be taken into account in the safety assessment. The main chemical change, according to both the models and the experiment, was anhydrite precipitation near the heater interface; no changes affecting the performance of the bentonite were observed. In addition, a few laboratory experiments were conducted and modelled during this thesis. The effect of temperature on the cation-exchange behaviour of purified sodium montmorillonite was studied at three temperatures (25 °C, 50 °C and 75 °C) using calcium/sodium perchlorate mixtures; the observed results showed similar selectivity at all temperatures. In the fourth publication, the effect of the carbon dioxide partial pressure on the pH of bentonite was modelled using Geochemist's Workbench.
The results indicated that only the surface protonation sites buffered the pH changes in the compacted bentonite system since the water amount inside the bentonite was small compared to the amount of surface complexation sites. The buffering capacity was approximated to be 0.3pH units/10g of bentonite. In the fifth publication, a structural model for bentonite was additionally made, which takes into account different kinds of waters inside the bentonite, and the model was compared to state-of-the-art commercial software and was noted to work well. In the last publication a simplified model was made to model the pore water of the squeezing experiments from compacted bentonite in anoxic laboratory conditions. The model worked well on major ions, but some differences were also observed. The conclusion from all these studies is that bentonite is a complex material, and the microstructural behaviour is still under dispute. The most common consensus is that there are three different waters (free pore water, diffuse double-layer water and interlamellar water). It is important to understand the microstructure of bentonite so that accurate models can be created which correctly predict the phenomena occurring inside bentonite. Modelling is needed to approximate the final repository behaviour over hundreds of thousands of years, but there are still some uncertainties remaining such as chemical and mechanical parameters, parameters relates to saturation and high temperature behaviour, lack of kinetic data for some minerals as well as reactive surface area and grain radii.
  • Arstila, Timo (Helsingin yliopisto, 2020)
    Super-resolution microscopy is a collection of methods for overcoming the diffraction limit that caps the resolving power of conventional optical microscopes. Microsphere-assisted microscopy (MAM) is a super-resolution method first demonstrated in 2011. It is based on the simple principle of using a dielectric microsphere as an external lens in an optical microscope. Our objective was to achieve super-resolution capability in non-contact 3D surface profilometry by integrating MAM technology with Mirau-type white light interferometry. The essential component in our proposed technology is a probe that contains the microsphere and its surrounding medium, the immersion film. In this thesis, I introduce the properties of the microsphere and its surroundings that, according to the literature, affect the performance of MAM. I also present our choices for the probe parameters and report experimental results on the performance of our 3D surface profilometer. In image resolution we achieved results comparable to other transverse resolution studies, but in longitudinal resolution we did not succeed in reaching results consistent with AFM in non-contact mode. Despite many improvements in the fabrication process, we did not achieve repeatability in the characterization measurements for probes made with the same parameters.
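    The diffraction limit that super-resolution methods such as MAM aim to overcome is commonly stated as the Abbe criterion, d = λ / (2 NA). A minimal sketch for intuition; the function name and example values are my own illustration, not taken from the thesis:

    ```python
    def abbe_limit_nm(wavelength_nm, numerical_aperture):
        """Abbe diffraction limit d = lambda / (2 * NA): the smallest
        resolvable feature spacing of a conventional optical microscope."""
        return wavelength_nm / (2.0 * numerical_aperture)

    # e.g. green light (550 nm) with a high-NA dry objective (NA = 0.9)
    print(round(abbe_limit_nm(550.0, 0.9), 1))  # -> 305.6 (nm)
    ```

    A microsphere placed on the sample acts as a near-field magnifier, which is why feature spacings below this classical limit become resolvable.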
  • Kylliäinen, Joonas (Helsingfors universitet, 2017)
    As data traffic and speed demands increase, mobile networks require means of fulfilling these demands economically. The solution comes from the cloud. In order to move the processing to the cloud, it must be carefully dimensioned: one must know how many resources each situation requires. This means there must be a way to calculate, from the traffic, the number of virtual machines required and the hardware resources those virtual machines need, when the cloud infrastructure used is OpenStack. This thesis provides two methods for calculating the number of virtual machines from the traffic profile. The first is based on performance testing of the virtual network functions and the second on a machine learning technique called multiple linear regression analysis. Furthermore, in this work approximation algorithms are used to solve multidimensional variants of classical optimization problems such as the bin packing problem and the subset sum problem. These algorithms are used to dimension the required resources from the virtual machines to hardware and vice versa. The algorithms are bundled into a program with a graphical user interface to make them as user-friendly as possible.
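    Dimensioning virtual machines onto hardware is an instance of bin packing, which is typically attacked with greedy approximation heuristics. The thesis uses multidimensional variants; the one-dimensional first-fit-decreasing sketch below, with illustrative names and numbers of my own, shows the basic idea:

    ```python
    def first_fit_decreasing(vm_sizes, host_capacity):
        """Pack VM resource demands onto the fewest hosts (greedy FFD
        heuristic): place each VM, largest first, into the first host
        with enough remaining capacity; open a new host if none fits."""
        hosts = []  # remaining capacity of each opened host
        for size in sorted(vm_sizes, reverse=True):
            for i, free in enumerate(hosts):
                if size <= free:
                    hosts[i] -= size
                    break
            else:
                hosts.append(host_capacity - size)  # open a new host
        return len(hosts)

    # e.g. vCPU demands of seven VMs packed onto 8-vCPU hosts
    print(first_fit_decreasing([4, 3, 3, 2, 2, 1, 1], 8))  # -> 2
    ```

    FFD is a classic approximation: it never uses more than roughly 11/9 of the optimal number of bins, which makes it a practical choice for capacity planning.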
  • Pöyry, Outi Irene (Helsingfors universitet, 2015)
    In the upgraded CMS pixel detector (phase II upgrade), the pixel size will become smaller due to the higher occupancy caused by the higher luminosity of the LHC. This means that the bump bonds between the sensor and the read-out circuit (ROC) will also become smaller, which results in a smaller gap between the sensor and the ROC. This will increase the probability of electrical sparking that might destroy the ROC, the sensor or both. Jaakko Härkönen has suggested using alumina passivation on the modules to prevent sparking. In this thesis it was studied whether bonding is applicable on a surface with an alumina passivation. It was also of interest which parameters of the bonder make stronger bonds. Bonding was tested on metal pads with different alumina layer thicknesses: 0 nm, 10 nm, 15 nm, 20 nm and 25 nm. The strengths of the bonds were tested using the bond pull test. The results indicate that wire bonding on alumina does well in pull-strength tests, though the bonds are slightly weaker than on surfaces without alumina. Increasing the bonding force seems to weaken the bonds; increasing the bonding power, on the other hand, seems to make stronger bonds. The conclusion of this thesis is that alumina is a viable choice for passivation, since it does not seem to have a negative effect on the module wire bonding.
  • Mehtonen, Jonna (Helsingin yliopisto, 2019)
    In MRI, the diffusion of water molecules can be imaged using Diffusion Weighted Imaging (DWI). Increasing cellularity in tumor tissue increases the restriction of water diffusion, which shows as high contrast in the DW image. Owing to this, DWI shows great promise as a biomarker for cancer treatment. However, the most commonly used DWI technique is currently an Echo-Planar Imaging (EPI) based sequence, which suffers greatly from geometric distortion and is therefore not applicable to radiation therapy planning. Turbo Spin Echo (TSE) based DWI sequences are proposed due to their great geometric accuracy; however, the Signal-to-Noise Ratio (SNR) of DWI-TSE is poor compared to DWI-EPI. The purpose of this work is to evaluate the image quality of different DWI sequences for use in radiation therapy planning. The evaluation is done by comparing the SNR, the patient-induced susceptibility effect on geometric distortion, and the Apparent Diffusion Coefficient (ADC) correctness of DWI-TSE sequences to those of the most commonly used DWI-EPI. The selected TSE-based sequences are SPLICE and Alsop. The image quality comparison is also done between two radiation oncology products for MRI: Philips Ingenia 1.5T MR-RT and Elekta Unity 1.5T (MR-Linac). In addition, the aim is to see how the Compressed SENSE (CS-SENSE) technique affects the image quality of SPLICE. The SNR comparison of DWI-EPI, Alsop, SPLICE, and SPLICE with Compressed SENSE was done both with a phantom and with a volunteer, and the same results were observed in both studies. DWI-EPI has significantly higher SNR than TSE-based DWI; however, SPLICE achieved nearly √2 times the SNR of the Alsop sequence. CS-SENSE improved the SNR of SPLICE notably in both the volunteer and phantom studies. In the volunteer studies, the SNR of SPLICE with CS-SENSE reached 45% of the SNR of diagnostic DWI-EPI with the Ingenia MR-RT setup, but only 27% with the MR-Linac.
Therefore, the image quality of diagnostic DWI-EPI can be reached for SPLICE with CS-SENSE on the Ingenia MR-RT setup by optimizing the sequence; with the MR-Linac, however, the same image quality cannot be reached within an acceptable acquisition time. The susceptibility-induced geometric distortion of DWI-EPI and SPLICE was analyzed using the same volunteer as in the SNR measurements. The median distortion value for DWI-EPI was 1.5 mm in the Ingenia 1.5T and 1.7 mm in the MR-Linac 1.5T, whereas the median distortion of SPLICE was 0.02 mm in both systems. Therefore, the subject-induced susceptibility effect has an insignificant impact on the geometric accuracy of TSE-based DWI, and DWI-TSE can be used for radiation therapy planning in favor of its geometric accuracy. The ADC value correctness of DWI-EPI and the TSE-based sequences was measured using a standardized diffusion phantom. All ADC values of the TSE-based DWI were within 14% of the reported literature values, whereas the maximum difference in DWI-EPI was 50%. The large variation in the ADC values of DWI-EPI was caused by geometric inaccuracy in the anterior side of the phantom. The small deviations in the ADC values of the TSE-based DWI show that these ADC values are reliable for clinical use.
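The ADC values compared above come from the standard mono-exponential model of diffusion signal decay, S_b = S_0 exp(−b · ADC). A minimal two-point sketch; the example signal values are illustrative, not measurements from the thesis:

```python
import math

def adc(s0, sb, b):
    """Apparent diffusion coefficient (mm^2/s) from the signal at b = 0
    (s0) and at diffusion weighting b (sb), using the mono-exponential
    model S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b."""
    return math.log(s0 / sb) / b

# e.g. signal halves between b = 0 and b = 800 s/mm^2
print(adc(1000.0, 500.0, 800.0))  # ln(2)/800, about 8.7e-4 mm^2/s
```

In practice more than two b-values are acquired and the ADC is obtained by a log-linear fit, but the two-point formula shows why geometric distortion (which misregisters the signals being compared) corrupts the resulting ADC map.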
  • Erkkilä, Kukka-Maaria (Helsingfors universitet, 2016)
    Freshwaters are a source of carbon to the atmosphere in the form of methane (CH4) and carbon dioxide (CO2). Global estimates of the freshwater contribution to the carbon budget are often based on a water boundary layer model (BLM) with the gas transfer coefficient k calculated solely from wind speed. According to comparison studies, this model gives underestimated emissions and should not be used when more reliable results are required. A widely used flux measurement method over lakes is the floating chamber (FC) method. An FC measures the surface flux from a very small area of the lake, so it may not be representative of the whole ecosystem; the measurements are relatively cheap and easy, but also laborious and sporadic. Instead of measuring just a specific point on the lake, the eddy covariance (EC) technique provides continuous flux measurements over a much larger source area (footprint). EC systems have been widely used over land areas, but are now gaining popularity in the lake community as well. The aim of this study was to compare the EC, FC and BLM methods for CO2 and CH4 fluxes over a boreal lake. The measurements were made at the small dimictic Lake Kuivajärvi in Hyytiälä (Juupajoki, Southern Finland) during an intensive field campaign in September 2014. Manual FC measurements were done at four measurement spots in the EC footprint area 2-3 times a day to capture spatial and temporal variability. The gas transfer velocity for the BLM was calculated according to three different parametrizations. The results indicate that BLM fluxes based on water-side convection and wind-driven turbulent gas exchange compare quite well with the EC measurements, while the model based solely on wind speed is a clear underestimate. The FC measurements show about 1.7 times larger flux values than EC, and the difference is clearer for CH4 than for CO2 fluxes. The greatest CH4 fluxes were measured near the shore, while the CO2 flux did not show any spatial variability.
After the lake started its autumn mixing, the CH4 flux showed diurnal variation, with the highest values measured during daytime; there was no diurnal variation before mixing. The CO2 flux, on the other hand, showed diurnal variation only when calculated with the BLM method.
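The boundary layer model compared above computes the flux as F = k (C_w − C_eq), the gas transfer velocity times the concentration gradient across the water surface. A minimal sketch using one commonly cited wind-only parametrization (Cole and Caraco 1998); this is my illustration, since the abstract does not name the three parametrizations actually used:

```python
def k600_cole_caraco(u10):
    """Gas transfer velocity k600 in cm/h from 10-m wind speed in m/s,
    following the wind-only parametrization of Cole and Caraco (1998):
    k600 = 2.07 + 0.215 * U10^1.7."""
    return 2.07 + 0.215 * u10 ** 1.7

def blm_flux(k_cm_per_h, c_water, c_equilibrium):
    """Boundary layer model flux F = k * (C_w - C_eq).
    With k in cm/h and concentrations in mmol/m^3, returns mmol m^-2 d^-1."""
    k_m_per_day = k_cm_per_h * 0.24  # cm/h -> m/day
    return k_m_per_day * (c_water - c_equilibrium)

# e.g. a CO2-supersaturated lake surface under a 3 m/s wind
k = k600_cole_caraco(3.0)
print(blm_flux(k, 40.0, 20.0))  # upward CO2 flux in mmol m^-2 d^-1
```

The study's conclusion follows directly from this structure: if k depends only on wind speed, calm periods with strong water-side convection yield near-zero modelled fluxes even when EC measures substantial emission, which is why the wind-only BLM underestimates.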