Browsing by Title


Now showing items 889-908 of 25407
  • Bandyopadhyay, Payel (2015)
    The way users interact with Information Retrieval (IR) systems is a topic of interest in both Human Computer Interaction (HCI) and IR. With the ever-increasing amount of information on the web, users are often lost in a vast information space, and navigating it to find the required information is an abstruse task. One reason is the difficulty of designing systems that present the user with an optimal set of navigation options to support varying information needs. As a solution to this navigation problem, this thesis proposes a method referred to as interaction portfolio theory, based on Markowitz's "Modern Portfolio Theory" from finance. In each iteration it provides the user with the N optimal interaction options, taking into account the user's goal as expressed through interaction during the task, but also the risk of a potentially suboptimal choice by the user. The method learns the relevant interaction options from user behaviour interactively and optimizes relevance and diversity, allowing the user to accomplish the task in a shorter interaction sequence. The theory can be applied to any IR system to help users retrieve the required information efficiently.
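The mean-variance trade-off this abstract borrows from Markowitz can be sketched in a few lines. The sketch below is illustrative only (the scores, the similarity matrix and the greedy heuristic are assumptions, not the thesis's actual algorithm): each interaction option has an expected relevance playing the role of return, redundancy between options plays the role of covariance, and the selector balances the two.

```python
# Illustrative sketch of portfolio-style option selection: balance expected
# relevance ("return") against redundancy with already-chosen options ("risk").
# All names and numbers are invented for illustration.

def select_portfolio(relevance, similarity, n, risk_aversion=0.5):
    """Greedily pick n options maximizing relevance minus a risk penalty.

    relevance[i]      -- expected usefulness of option i (the 'return')
    similarity[i][j]  -- redundancy between options i and j (the 'covariance')
    risk_aversion     -- trade-off weight (Markowitz's lambda)
    """
    chosen = []
    candidates = list(range(len(relevance)))
    while candidates and len(chosen) < n:
        def score(i):
            # penalize options similar to ones already chosen (diversity term)
            risk = sum(similarity[i][j] for j in chosen)
            return relevance[i] - risk_aversion * risk
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Two near-duplicate relevant options (0, 1) and one distinct option (2):
rel = [0.9, 0.85, 0.6]
sim = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
print(select_portfolio(rel, sim, 2))  # diversity favors [0, 2] over [0, 1]
```

With these inputs the diversity term steers the second pick toward the distinct option 2 rather than the near-duplicate option 1.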
  • Kanckos, Annika (Helsingin yliopisto, 2011)
    After Gödel's incompleteness theorems and the collapse of Hilbert's programme, Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines Gentzen's consistency proofs for arithmetic from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. A consistency proof in standard natural deduction had been an open problem since the publication of Gentzen's proofs; the solution to this problem for an intuitionistic calculus given here is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. Each reduction decreases the first component of the vector, which can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
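As a background note on the termination argument, the bound epsilon_0 mentioned above has a standard definition; this is general ordinal arithmetic, not a detail of Gentzen's or Howard's particular assignment:

```latex
% \varepsilon_0 is the least ordinal fixed point of \alpha \mapsto \omega^{\alpha}:
\varepsilon_0 \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
\qquad \omega^{\varepsilon_0} = \varepsilon_0 .
% Because the ordinals below \varepsilon_0 are well-ordered, a procedure that
% strictly decreases such an ordinal at every step cannot run forever; this is
% the sense in which the decreasing first component proves termination.
```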
  • From, Heidi (2015)
    The purpose of this master's thesis is to determine whether Paul, in Romans 16:7, mentions a female apostle. Here is Rom. 16:7 in Greek and in translation: ἀσπάσασθε Ἀνδρόνικον καὶ Ἰουνιαν τοὺς συγγενεῖς µου καὶ συναιχµαλώτους µου, οἵτινές εἰσιν ἐπίσηµοι ἐν τοῖς ἀποστόλοις, οἳ καὶ πρὸ ἐµοῦ γέγοναν ἐν Χριστῷ. "Greet Andronicus and Junia, my kinsfolk and my fellow prisoners; they are esteemed among the apostles, and they were in Christ before me." To find out whether Paul is speaking here of a female apostle, I examine the original text and its wording in detail. What do the Greek expressions mean? What do the manuscripts say? How has the matter been understood in the history of interpretation and research, including current scholarship? How did the learned and educated Greek- and Latin-speaking church fathers of the first millennium understand it? In Chapter 1, "Introduction", I present the research questions and the problems surrounding the topic in more detail. In Chapter 2, "A survey of the history of interpretation", I briefly review how scholars and commentators have answered my main research questions: was this person a man or a woman, and was this person an apostle or not. Over its roughly 2,000-year history of interpretation the question has taken highly interesting turns, which are examined in this chapter. In Chapter 3, "Was the person a man or a woman?", I investigate the person's gender thoroughly on the basis of language, names and onomastics, grammar, early translations and the manuscripts. In Chapter 4, "What does ἐπίσηµοι ἐν τοῖς ἀποστόλοις mean?", I examine whether this expression means that the person is an apostle, as the history of interpretation has generally held, or that the person is esteemed in the eyes of the apostles without being one. I approach the question from the perspective of the view that challenges the general consensus, and assess whether it is better founded than the prevailing interpretation, according to which the expression means that the person belongs to the apostles. In Chapter 5, "What did the church fathers say?", I present what the earliest commentators on Paul's Letter to the Romans, the church fathers of the first millennium, said about the person's gender and apostleship. These church fathers were the learned and educated men of their time, and their understanding of the matter is of great importance for resolving it. The investigation revealed interesting turns in the history of interpretation. The manuscript evidence, the early translations, a broader and more detailed study of the Greek language and grammar, and the interpretive tradition of the first-millennium church fathers showed that for the first 1,200 years interpreters understood the person to be a female apostle. A female apostle became a problem only in the High Middle Ages. Among the first Christian generation, carrying out the work of the gospel, there was a notable female apostle named Junia.
  • McVeigh, Joseph (Helsingin yliopisto, 2013)
    Although the genres of blogs and marketing have been studied, the sub-genres of single-topic blogs and email marketing have not received as much attention from scholars. An account of the ways these sub-genres use language to meet their goals is needed to see whether they follow patterns similar to those of the larger genres that contain them. Since research on the blogging and marketing genres has already been done, a comparative analysis is possible. This thesis analyzes the linguistic properties of blog and marketing texts which share a topic (labor and employment law) and discourse community (lawyers), but which have different goals (exposition vs. promotion). Drawing on previous genre and corpus linguistics research, I wish to answer two related questions. First, I want to know how the two sub-genres differ in terms of their linguistic properties. Second, I want to see whether a comparison of texts from the two sub-genres is really possible, or whether it would be like comparing apples to oranges. A combination of corpus linguistic and genre analysis methods is used to compare the lexico-grammatical properties of texts from the two sub-genres. An analysis is then made of the rhetorical moves in texts which share the same micro-topic (DISCRIMINATION). Throughout the analysis, the extra-textual properties of the two sub-genres are taken into account to see how they might affect the language of the texts. The results show that texts from single-topic blogs and email marketing do not always divide easily based on either their genres or expectations from previous research.
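A standard corpus-linguistic way to compare the lexical properties of two sub-corpora is a keyness statistic such as Dunning's log-likelihood (G2). The snippet below is a generic illustration with invented counts, not data from the thesis:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Dunning's log-likelihood (G2) keyness score for one word observed
    freq_a times in corpus A (size_a tokens) and freq_b times in corpus B
    (size_b tokens). Higher scores mean a more distinctive frequency gap."""
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    g2 = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:  # the 0 * log(0) term is taken as 0
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# Invented counts: a word appears 40 times in 10,000 promotional tokens
# but only 5 times in 10,000 expository tokens.
score = log_likelihood(40, 10_000, 5, 10_000)
print(round(score, 2))
```

Scores above 3.84 (the 5% chi-square critical value with one degree of freedom) are conventionally read as a significant frequency difference between the two corpora.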
  • Laitinen, Totti (Helsingin yliopisto, 2013)
    This thesis is based on the construction of a two-step laser desorption-ionization aerosol time-of-flight mass spectrometer (laser AMS), which is capable of measuring 10 to 50 nm aerosol particles collected from urban and rural air on site and in near real time. The operation and applicability of the instrument were tested in various laboratory measurements, including parallel measurements with filter collection and chromatographic analysis, and then in field experiments in an urban environment and a boreal forest. Ambient ultrafine aerosol particles are collected on a metal surface by electrostatic precipitation and introduced to the time-of-flight mass spectrometer (TOF-MS) with a sampling valve. Before MS analysis, particles are desorbed from the sampling surface with an infrared laser and ionized with a UV laser. The ions formed are guided to the TOF-MS by ion transfer optics, separated according to their m/z ratios, and detected with a microchannel plate detector. The laser AMS was used in urban air studies to quantify the carbon cluster content of 50 nm aerosol particles. Standards for the study were produced from 50 nm graphite particles, suspended in toluene, with 72 hours of high-power sonication. The results showed the average amount of carbon clusters (winter 2012, Helsinki, Finland) in 50 nm particles to be 7.2% per sample. Several fullerenes and fullerene fragments were detected during the measurements. In the boreal forest measurements, the laser AMS was capable of detecting several different organic species in 10 to 50 nm particles, including nitrogen-containing compounds, carbon clusters, aromatics, aliphatic hydrocarbons, and oxygenated hydrocarbons. The most interesting event occurred during the boreal forest measurements in spring 2011, when the chemistry of the atmosphere clearly changed during snow melt. At that time, concentrations of laser AMS ions m/z 143 and 185 (10 nm particles) increased dramatically.
Exactly at the same time, quinoline concentrations in molecular clusters measurements (APi-TOFMS) decreased markedly. With the help of simultaneously collected 30 nm filter samples, laser AMS ions m/z 143 and 185 were later identified as originating from 1-(X-methylquinolin-X-yl)ethanone.
  • Anttila, Saku (Helsingin yliopisto, 2013)
    Spatial and temporal variation within water bodies causes uncertainties in freshwater monitoring programmes that are surprisingly seldom perceived. This poses a major challenge for the representative sampling and subsequent assessment of water bodies. The sources of variability in lakes are relatively well known. The majority of them produce consistent patterns in water quality that can be statistically described. This information can be used in calibrating the sampling intervals, locations and monitoring methods against the typical variation in a water body as well as the accuracy requirements of monitoring programmes. Similarly, understanding of ecosystem history and functioning in different states can help in contextualizing the collected data. Specifically, studies on abrupt transitions and the interactions involved produce a framework against which recent water quality information can be compared. This thesis research aimed to facilitate water quality monitoring by examining 1) feasible statistical tools to study spatial and temporal uncertainty associated with sampling efforts, 2) the characteristics of variation and 3) ecosystem interactions in different states. Research was conducted at Lake Vesijärvi, southern Finland. Studies of uncertainty utilized data-rich observations of surface water chlorophyll a from flow-through, automated and remote sensing systems. Long-term monitoring information of several trophic levels was used in the analysis of ecosystem interactions. Classical sample size estimates, bootstrap methodology, autocorrelation and spatial standard score analyses were used in spatio-temporal uncertainty analysis. A general procedure to identify abrupt ecosystem transitions was applied in order to characterize lake interactions in different states. The results interlink variability at the study site with information required in sampling design. 
Sampling effort estimates associated with the spatial and temporal variance were used to derive precision information for summary statistics. The structure of the variance, illustrated with an autocorrelation model, revealed the low spatial representativeness of discrete sampling in the study area. A generalized autocorrelation model and its parameters from the monitoring area were found applicable in sampling design. Furthermore, areas with consistently higher chlorophyll a concentrations, which affected the water quality information derived with remote sensing, were identified in the study area. Characterization of the interactions between the main trophic levels in different ecosystem states revealed the key role of zooplankton in maintaining the current state, as well as the resilience of the studied pelagic ecosystem. The results are brought into a broader context by discussing the applicability of the presented methods in the sampling design of water quality monitoring programmes. According to this thesis research, sampling design in individual monitoring regimes would benefit from the characterization of variance and subsequent uncertainty analysis of different data sources. This approach allows the calibration of sampling frequency and locations against the observed variance, as well as a quantitative comparison between the abilities of different monitoring methods. The derived precision information also supports the joint use of several monitoring methods. Furthermore, analysis of long-term records can reveal the key elements of freshwater ecosystem functioning and how it has responded to earlier pressures, to which recent monitoring data can be compared. This thesis thus highlights analysis of the variance and history of the monitored system in developing a rationalized and adaptive monitoring programme.
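One of the uncertainty tools the abstract names, the bootstrap, can be illustrated briefly. Everything below (the readings, the sample size, the resampling count) is invented for illustration; it simply shows how a percentile bootstrap attaches a precision estimate to a summary statistic computed from a limited number of water samples:

```python
import random
import statistics

def bootstrap_ci(samples, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean.

    Resamples the observations with replacement n_boot times, computes
    the mean of each resample, and returns the empirical (alpha/2,
    1 - alpha/2) percentile bounds of those means."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical chlorophyll-a readings (ug/L) from one monitoring station:
chl = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.8, 3.9, 4.9]
low, high = bootstrap_ci(chl)
print(f"mean {statistics.fmean(chl):.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

A wide interval signals that more (or better-placed) samples are needed to hit a target precision, which is the sense in which such estimates feed back into sampling design.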
  • Meichsner, Julia (2001)
    The objective of this study is to analyse the applicability of growth predictions in the case of the Eastern enlargement. For this purpose the growth model developed by Uwe Walz (1998) was chosen and compared to empirical data as well as to further studies on the process of Eastern enlargement. In the first part of the paper, Walz's model is introduced. The production patterns of a trading union consisting of two countries are described before the enlargement. Then a third, technology-deficient country is integrated in two steps: first, barriers to trade are removed, and secondly, migration is liberalized. The model shows that free trade between the two trade blocs, with specialization patterns of the Heckscher-Ohlin type, causes the growth rate to shift. This holds true in the next step, when skilled workers are assumed to immigrate to the countries with the higher level of technology. By contrast, the growth rate declines when unskilled workers are assumed to migrate to the technologically advanced countries. In the second part, the growth predictions of Walz's model are decomposed into their underlying assumptions, defined and compared to empirical data on the process of the Eastern enlargement. The comparison reveals a high degree of congruency between the theoretical assumptions and the corresponding developments in reality. This congruency ends when further studies on the Eastern enlargement are called in. In the final part of the paper, the results of the comparisons between Walz's model and the data and studies on the Eastern enlargement are evaluated in an attempt to answer the question of how applicable the theoretical growth predictions are in the case of the Eastern enlargement.
  • Cheng, Zhuo (Helsingin yliopisto, 2015)
    Risk management is essential in forest management planning, yet decision making with risk analysis is rarely done in forestry. This study presents an example of the application of conditional value-at-risk (CVaR) as a decision tool and optimizes the management planning problem from a risk perspective. Stochastic programming is used to solve the problem. The model quantifies four types of risk under assumed probability distributions: inventory errors, growth model errors, price uncertainty and policy uncertainty. The results suggest that forest owners' risk tolerance, i.e., their willingness and ability to assume risk, determines the return potential to the greatest extent. When the expected first-period income is maximized, the subsequent period always experiences a loss that is the greatest of the entire management horizon. The proportion of the carbon subsidy in the first period is also the highest. With this model it is possible to hedge some risks, or to assess the amount of insurance to purchase in order to transfer risks. The use of CVaR in forest management planning can be a useful tool for managing risk and for assisting in the decision-making process of assessing forest owners' willingness and ability to tolerate risks.
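The risk measure named above can be illustrated in its empirical form. The scenario losses below are invented; the snippet only shows how CVaR averages the tail beyond the value-at-risk quantile, which is what makes it usable as a constraint or objective in stochastic programming:

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the average of the worst
    (1 - alpha) share of the losses. Toy version of the measure; the
    thesis's actual optimization model is not reproduced here."""
    ordered = sorted(losses, reverse=True)           # worst losses first
    k = max(1, int(round(len(ordered) * (1 - alpha))))
    return sum(ordered[:k]) / k                      # mean of the tail

# 20 hypothetical income shortfalls (losses) across scenarios:
losses = [0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 8, 9, 10, 12, 15, 20, 40]
print(cvar(losses, alpha=0.95))  # mean of the single worst scenario: 40.0
print(cvar(losses, alpha=0.90))  # mean of the two worst scenarios: 30.0
```

Unlike plain value-at-risk, CVaR accounts for how bad the tail is, not just where it starts, so a more risk-tolerant owner (lower alpha) accepts plans whose tail average is higher.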
  • Tahvanainen, Janina (2013)
    The essential facilities doctrine relates to the abuse of a dominant market position. Certain facilities are so essential that a dominant undertaking cannot refuse to supply them to a competitor without infringing Article 102 TFEU. This thesis examines differences in the application of the doctrine, particularly in the energy sector and, by comparison, in sectors where intellectual property rights play a significant role; this makes it possible to consider the doctrine's application in two entirely different market environments. A central problem in applying the doctrine is its negative effect on the dominant undertaking's incentives to invest in innovation. Under the Bronner case, the refusal to supply must eliminate all competition from the competitor seeking to use the facility. It is further required that there be no objective justification for the refusal and that the facility be indispensable, so that operating on the market is impossible without it. In relation to intellectual property rights, applying the doctrine has additionally required that the refusal prevent the emergence of a new product. In the Microsoft case the approach changed: the refusal no longer had to prevent a new product from entering the market; it was sufficient for the application of the doctrine that technical development on the market was limited. The case also established that the presence of a few competitors on the market does not preclude the application of the doctrine. Soon afterwards, in the Commission's guidance on its enforcement priorities, the limitation of technical development was replaced by consumer harm. This requires that the negative effects of the refusal to supply outweigh the consequences of allowing use of the facility. The Commission's guidance does not distinguish between different intellectual property rights; accordingly, the application of the doctrine to patents should not be restricted. For patents incorporated into standards, it may not even be justified to apply the new-product requirement, because customers may need interoperable products more than a new product. Applying that requirement would also be difficult in the energy sector, where the product is usually homogeneous. It must also be taken into account that, according to the Commission's guidance, the absence of negative effects from allowing use can be presumed where the facility was built with public funds, where the operator has enjoyed exclusive rights, or where legislation obliges it to allow competitors to use the facility. Such a situation may arise, for example, in recently liberalized markets. The Microsoft case, however, involves special features that may complicate the application of the criteria used in it. The EU courts have also been reluctant to apply the Commission's guidance. The Post Danmark case, however, gives reason to believe that this line may be changing.
  • Kulesskiy, Evgeny (Helsingin yliopisto, 2015)
    Syndecans are cell surface heparan sulfate proteoglycans that are present in all tissues and cell types and have distinct temporal and spatial expression patterns. They play important roles in the embryonic development of the organism and control the relocation and alteration of extracellular matrix components. Syndecans regulate cell migration, adhesion and proliferation and are engaged in tissue injury, inflammation, the pathogenesis of infectious diseases and tumor biology. This thesis summarizes the results of studies on one of the syndecan family receptors, syndecan-3 (also known as N-syndecan). This proteoglycan is abundantly expressed in the developing brain. Syndecan-3 acts as a signaling receptor upon binding of its ligand, heparin-binding growth-associated molecule (HB-GAM; also known as pleiotrophin), which activates the cortactin c-Src signaling pathway. This leads to rapid neurite extension in neuronal cells, which makes syndecan-3 an interesting transmembrane receptor in neuronal development and regeneration. However, little is known about the signaling mechanism of syndecan-3. Here I show the formation of ligand-syndecan-3 signaling complexes at the cell surface using fluorescence resonance energy transfer (FRET) and bioluminescence resonance energy transfer (BRET). Ligand binding leads to dimerization of syndecan-3 at the cell surface, and the dimerized syndecan-3 colocalizes with actin in the filopodia of cells. Lysine 383 in the juxtamembrane (ERKE) sequence and G392 and G396 in the canonical GXXXG motif are shown to be important for the ligand-induced dimerization, whereas the cytosolic domain is not required for it. In addition to acting as a signaling receptor, syndecan-3 acts as a co-receptor in epidermal growth factor receptor (EGFR) ligand binding. FRET analysis suggests that the interaction of syndecan-3 and EGFR depends on a shared ligand such as heparin-binding EGF-like growth factor (HB-EGF).
Furthermore, it was shown that syndecan-3 may act as a receptor for other ligands, such as glial cell line-derived neurotrophic factor (GDNF). In addition, I have found a new receptor for HB-GAM, glypican-2, which may be involved in the regulation of HB-GAM signaling by competing with syndecan-3 for ligand binding.
  • Nousiainen, Aura (Helsingin yliopisto, 2015)
    The use of pesticides has allowed the efficient use of agricultural soil and provided humans with greater yields and agri-food security. Unfortunately, many pesticides also have adverse effects on the environment or human health, and may end up where they were not intended: the precious groundwater reserves. For this reason the use of atrazine, a herbicide for controlling broad-leaf weeds, was banned in the EU in 2004, yet it is still globally one of the most widely used herbicides today. Although atrazine can be completely mineralized by microbes, slow or incomplete degradation is often observed in the subsurface. The ability of microbes to degrade atrazine can be utilized in bioremediation, a technique in which contaminants are removed by microbial activity. This study was undertaken to elucidate the potential of genetic tools, such as quantitative PCR (qPCR), radiorespirometry, microautoradiography (MAR), clone libraries and genetic fingerprinting methods, in atrazine-contaminated soils, and to apply them in atrazine bioremediation. Collaboration with our Indian partner permitted comparison between atrazine-treated, cropped agricultural soils and a boreal subsoil contaminated two decades ago with residual atrazine from weed control in municipal areas. Four different bioremediation methods, natural attenuation, bioaugmentation, biostimulation, and their combination, were used to reduce the atrazine concentration in soil. Atrazine degradation gene copy numbers often reflected the atrazine degradation potential, indicating their robustness as monitoring tools in different soils. The most efficient bioremediation treatment was bioaugmentation with the atrazine-degrading bacterial strains Pseudomonas citronellolis or Arthrobacter aurescens, or with an atrazine-degrading bacterial consortium: in the agricultural soil, up to 90% of the atrazine was degraded in less than a week, whereas in the boreal subsoil, 76% of the atrazine was mineralized.
In the clone library constructed from the boreal soil, several clones related to taxa that include known atrazine degraders were found. In this soil, biostimulation with additional carbon was an efficient treatment at reduced temperature. In general, the order of efficiency of atrazine removal was bioaugmentation and biostimulation > bioaugmentation > biostimulation > natural attenuation. Previous exposure to atrazine was the most influential factor in atrazine disappearance from soil, as recent exposure always correlated with faster atrazine degradation and greatly affected the composition of the microbial community, as elucidated by LH-PCR. These results serve as an example of how soil origin, exposure history, organic content and use must be taken into account when choosing the best bioremediation method. Knowledge of the presence of genetic degradation potential can be helpful in choosing the treatment method. While bioaugmentation removed 90% of the atrazine from soil, its application at field scale may be challenging. Our results show that biostimulation alone may serve as the treatment method of choice, even in the challenging subsoil surroundings where atrazine concentrations are low.
  • Pesonen, Janne (Helsingin yliopisto, 2001)
  • Chen, Yongchen (2013)
    Soil contamination with oily products poses great health and environmental risks at the polluted sites. The difficulty of remediation mainly comes from the complexity of hydrocarbons. Different kinds of remediation technologies have been applied for hydrocarbon removal from soil, and new technologies, especially in situ bioremediation technologies, are emerging constantly. Soil assessment is a key step in the remediation process, since it provides information about the contamination level and potential risks. In the present study, hydrocarbon-contaminated soil samples were collected from two sites (one contaminated by weathered oily sludge waste, with some vegetated plots; the other contaminated with fuel oil containing short-chain hydrocarbons). The samples were analyzed for physicochemical properties, and hydrocarbon degraders were enumerated. Four degrading strains were isolated from the samples and their 16S rRNA genes were sequenced. The samples and isolates were investigated for the presence of three catabolic genes involved in petroleum degradation. The objective was to reveal the intrinsic bioremediation potential of the contaminated soils by investigating the key remediation “players”, i.e. the degrader microorganisms and catabolic genes. The coexistence of abundant degraders and diverse catabolic genes gives a soil good potential for bioremediation. In addition, the relationships between degrader counts, gene detection and soil contamination levels can reveal how the contaminants affect the indigenous microbial community. The differences between vegetated and non-vegetated plots can also suggest whether vegetation with legumes has good potential for hydrocarbon bioremediation. According to the results, both sites were moderately contaminated, with different hydrocarbon compositions. In the landfarming site, TPH depletion in the vegetated fields was higher than in the unvegetated bulk soil areas.
However, the degrading microorganism counts showed no significant differences between vegetated and non-vegetated plots, and the hydrocarbon contamination level had no correlation with the degrader counts. In subsurface soils, where aeration was quite limited, degrader counts were much lower than in surface soils. Catabolic genes were detected in the isolated strains but rarely in the contaminated soil samples; contaminants co-extracted with the soil DNA may have inhibited the PCR-based gene detection. With more primer sets, or primers targeting broader ranges of genetic diversity, better detection results can be expected.
  • Bućko, Michał (Helsingin yliopisto, 2012)
    Road traffic is at present one of the major sources of environmental pollution in urban areas. Magnetic particles, heavy metals and other compounds generated by traffic can greatly affect ambient air quality and have direct implications for human health. The general aim of this research was to identify and characterize magnetic vehicle-derived particulates using magnetic, geochemical and micro-morphological methods. A combination of these three methods was used to discriminate the sources of particular anthropogenic particles. Special emphasis was placed on the application of various collectors (roadside soil, snow, lichens and moss bags) to monitor the spatial and temporal distribution of traffic pollution on roadsides. The spatial distribution of magnetic parameters of road dust accumulated in roadside soil, snow, lichens and moss bags indicates that the concentration of magnetic particles is highest at the sampling points situated closest to the road edge. The concentration of magnetic particles decreases with increasing distance from the road, indicating vehicle traffic as the major source of emission. Significant differences in the horizontal distribution of magnetic susceptibility were observed between soil and snow: magnetic particles derived from road traffic deposit on soil within a few meters of the road, but on snow up to 60 m from the road. The values of magnetic susceptibility of road dust deposited near a busy urban motorway are significantly higher than in the case of a low-traffic road. These differences are attributed to traffic volume, which is 30 times higher on the motorway than on the local road. Moss bags placed at the edge of urban parks situated near major roads show higher values of magnetic susceptibility than moss bags from parks located near minor routes. Enhanced concentrations of heavy metals (e.g. Fe, Mn, Zn, Cu, Cr, Ni and Co) were observed in the studied samples. This may be associated with specific sources of vehicle emissions (e.g.
exhaust and non-exhaust emissions) and/or grain size of the accumulated particles (large active surface of ultrafine particles). Significant correlations were found between magnetic susceptibility and the concentration of selected heavy metals in the case of moss bags exposed to road traffic. Low-coercivity magnetite was identified as a major magnetic phase in all studied roadside collectors (soil, snow, moss bags and lichens). However, magnetic minerals such as titanomagnetite, ilmenite, pyrite and pyrrhotite were also observed in the studied samples. The identified magnetite particles are mostly pseudo-single-domain (PSD) with a predominant MD fraction (>10 μm). The ultrafine iron oxides (>10 nm) were found in road dust extracted from roadside snow. Large magnetic particles mostly originate from non-exhaust emissions, while ultrafine particles originate from exhaust emissions. The examined road dust contains two types of anthropogenic particles: (1) angular/aggregate particles composed of various elements (diameter ~1-300 µm); (2) spherules (~1-100 µm) mostly composed of iron. The first type of particles originates from non-exhaust emissions such as the abrasion of vehicle components, road surface and winter road maintenance. The spherule-shaped particles are products of combustion processes e.g. combustion of coal in nearby power plants and/or fuel in vehicle engines. This thesis demonstrates that snow is an efficient collector of anthropogenic particles, since it can accumulate and preserve the pollutants for several months (until the late stages of melting). Furthermore, it provides more information about spatial and temporal distribution of traffic-generated magnetic particles than soil. Since the interpretation of data obtained from magnetic measurements of soil is problematic (due to its complexity), this suggests the application of alternative collectors of anthropogenic magnetic particulates (e.g. snow and moss bags). 
Moss bags and lichens are well suited for magnetic biomonitoring studies, since they effectively accumulate atmospheric pollution and can thus be applied to monitor the spatio-temporal distribution of pollution effects.
  • Inkinen, Ville (2014)
    In June 2012, the Commission introduced the Monitoring and Reporting Regulation (601/2012) by virtue of Article 14(1) of the ETS Directive (2003/87/EC). In recital 2 of the MRR the Commission puts forward an interpretation of the Renewable Energy Directive that marks a major policy change. According to this interpretation, the sustainability criteria for biofuels and bioliquids in Article 17 of the Renewable Energy Directive must be fulfilled as a precondition to the rule in Annex IV of the ETS Directive according to which emissions from the use of biomass shall be considered zero. Applying the sustainability criteria of the Renewable Energy Directive results from the interpretation that biomass zero-treatment constitutes ‘financial support’ within the meaning of Article 17(1)(c) of the Renewable Energy Directive. At present, due to the limited use of biofuels and bioliquids in the Emissions Trading sector, the policy change is of minor significance. However, the Commission is preparing a proposal to introduce sustainability criteria also for solid and gaseous biomass in heating and cooling and electricity. The proposal is expected to be formally tabled in fall 2013. In many Member States, emissions from the use of solid biomass are significant compared to the current emissions in the whole Emissions Trading sector, and thus the economic consequences can be major. The treatment of emissions from solid biomass is also likely to have major implications for the Member States in fulfilling their binding national targets under the Renewable Energy Directive. Firstly, this study analyses the described interaction between the ETS Directive and the RED. The main finding in this regard is that the interpretation whereby biomass zero-treatment would constitute ‘financial support’ within the meaning of Article 17(1)(c) of the Renewable Energy Directive is highly problematic. 
Secondly, the competence of the Commission to amend Annex IV of the ETS Directive is examined. This study posits that the Commission does not have the competence to modify biomass zero-treatment to the extent of imposing fulfilment of the sustainability criteria as a precondition. Lastly, the upcoming sustainability criteria for solid and gaseous biomass are briefly discussed. The upcoming criteria will resemble those for biofuels and bioliquids, with some alterations. The wording of the upcoming provisions will be pivotal with respect to biomass zero-treatment. If the norm requiring fulfilment of the criteria as a condition of eligibility for financial support is formulated in the same manner as in the Renewable Energy Directive, the interpretation in recital 2 of the Monitoring and Reporting Regulation could mean that the upcoming extension applies in the Emissions Trading Scheme automatically.
  • Välimäki, Niko (Helsingin yliopisto, 2012)
    Recent advancements in the field of compressed data structures create interesting opportunities for interdisciplinary research and applications. Compressed data structures essentially provide a time–space tradeoff for solving a problem: while traditional data structures use extra space in addition to the input, compressed data structures replace the input and require space proportional to the compressed size of the input. The amount of available memory is often fixed; thus, the user might be willing to spend more time if doing so allows the use of larger inputs. However, despite their potential, compressed data structures have not yet reached audiences in other disciplines. We study how to take advantage of compressed data structures in the fields of bioinformatics, data analysis and information retrieval. We present several novel applications for compressed data structures and include an experimental evaluation of the time–space tradeoffs achieved. More precisely, we propose (i) a space-efficient string mining algorithm to recognise substrings that satisfy the given frequency constraints, (ii) both theoretical and practical methods for computing approximate overlaps between all string pairs, (iii) a practical path-based graph kernel for predicting the function of unknown enzymatic reactions, and (iv) a compressed XML index that supports efficient XPath queries on both the tree structure and the textual content of XML documents. Problem (i) is motivated by knowledge discovery in databases, where the goal is to extract emerging substrings that discriminate between two (or more) databases. Problem (ii) is one of the first phases in a sequence assembly pipeline and requires efficient algorithms due to the new high-throughput sequencing systems. Problem (iii) is motivated by machine learning, where kernels are used to measure the similarity of complex objects. Problem (iv) has its background in information retrieval. 
The proposed methods achieve theoretical and practical improvements over the earlier state of the art. To raise overall awareness of compressed data structures, the results have been published in interdisciplinary forums, including conferences and journals in the fields of bioinformatics, data engineering and data mining.
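    The idea of querying text through a compressed representation can be illustrated with a toy sketch of backward search over the Burrows-Wheeler transform, the mechanism underlying FM-index-style compressed full-text indexes. This is a minimal sketch only, not the thesis's actual implementation: the function names are illustrative, and a real compressed index would store succinct rank structures (e.g. wavelet trees) instead of the plain tables and O(n) scans used here.

```python
# Toy backward search over the Burrows-Wheeler transform (BWT), the
# mechanism behind FM-index-style compressed full-text indexes.
# Illustrative only: a real compressed index replaces the plain tables
# and O(n) rank scans below with succinct rank/select structures.

def bwt(text):
    """BWT via sorted rotations (O(n^2 log n) construction; demo only)."""
    text += "\0"  # unique sentinel, smaller than every other character
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(text, pattern):
    """Count occurrences of pattern in text by FM-index backward search."""
    L = bwt(text)
    # C[c] = number of characters in the text strictly smaller than c
    C, total = {}, 0
    for c in sorted(set(L)):
        C[c] = total
        total += L.count(c)

    def rank(c, i):
        # occurrences of c in L[:i]; O(n) per call here, O(1) in a real index
        return L[:i].count(c)

    lo, hi = 0, len(L)  # current interval of matching suffixes
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("abracadabra", "abra"))  # 2
print(count_occurrences("abracadabra", "cad"))   # 1
```

The point of the sketch is the tradeoff itself: the query never touches the original text, only its (compressible) transform, at the cost of extra rank computations per pattern character.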