Browsing by Title


  • Myyry, Liisa (2003)
    The aim of the study was to examine the relationships between the different components of morality. Moral behaviour is examined particularly from the perspective of professional ethics, and the university students who responded to the study are viewed as future members of expert professions. The framework of the work is James Rest's (1986) four-component model of moral behaviour. According to the model, at least four psychological processes can be distinguished in moral behaviour: moral sensitivity (interpreting a situation as a moral problem), moral judgement (judging an act as right or wrong), moral motivation (the values guiding action) and moral character (the ability to act according to one's moral principles). In this study the model is extended to include the complexity of thinking in moral decision-making situations, the use of principles of procedural justice to describe the content of moral thinking, and empathy, which is also assumed to motivate moral behaviour. Moral character, however, is not examined in this context. Of particular interest in the work is the relationship of values to the other components of morality. The dissertation consists of four separate studies, two of which are correlational studies, one a teaching intervention and one an experimental design. Four samples of university students and one written data set were used. Values were measured with the Schwartz value survey, moral judgements by applying Colby and Kohlberg's (1987) scoring of moral judgement to non-Kohlbergian dilemmas, complexity of thinking by means of integrative complexity theory (Suedfeld, Tetlock & Streufert, 1992), and empathy with Mehrabian and Epstein's (1972) empathy scale. In addition, a measure of moral sensitivity was developed for the study. The results support theoretically assumed but empirically less studied connections between the different components of morality. They also demonstrate the importance of values for moral reflection. The main finding is that self-transcendence values are positively related to the other components of morality (moral sensitivity, complexity of thinking and empathy), whereas the corresponding relationships of self-enhancement values to these factors are negative. To support students' ethical development, universities should pay attention to the kinds of values into which they socialize students during their studies. Likewise, to be effective, the teaching of professional ethics offered at universities should cover all components of moral behaviour.
  • Sipiläinen, Timo (MTT Taloustutkimus, 2008)
    The objective was to measure productivity growth and its components in Finnish agriculture, especially in dairy farming. The objective was also to compare different methods and models - both parametric (stochastic frontier analysis) and non-parametric (data envelopment analysis) - in estimating the components of productivity growth and the sensitivity of results with respect to different approaches. The parametric approach was also applied in the investigation of various aspects of heterogeneity. A common feature of the first three of the five articles is that they concentrate empirically on technical change, technical efficiency change and the scale effect, mainly on the basis of the decompositions of the Malmquist productivity index. The last two articles explore an intermediate route between the Fisher and Malmquist productivity indices and develop a detailed but meaningful decomposition for the Fisher index, including empirical applications. Distance functions play a central role in the decomposition of the Malmquist and Fisher productivity indices. Three panel data sets from the 1990s were used in the study. The common feature of all data used is that they cover the periods before and after Finnish EU accession. Another common feature is that the analysis mainly concentrates on dairy farms or their roughage production systems. Productivity growth on Finnish dairy farms was relatively slow in the 1990s: approximately one percent per year, independent of the method used. Despite considerable annual variation, productivity growth seems to have accelerated towards the end of the period. There was a slowdown in the mid-1990s at the time of EU accession. No clear immediate effects of EU accession with respect to technical efficiency could be observed. Technical change has been the main contributor to productivity growth on dairy farms.
However, average technical efficiency often showed a declining trend, meaning that the deviations from the best practice frontier are increasing over time. This suggests different paths of adjustment at the farm level. However, different methods to some extent provide different results, especially for the sub-components of productivity growth. In most analyses on dairy farms the scale effect on productivity growth was minor. A positive scale effect would be important for improving the competitiveness of Finnish agriculture through increasing farm size. This small effect may also be related to the structure of agriculture and to the allocation of investments to specific groups of farms during the research period. The result may also indicate that the utilization of scale economies faces special constraints in Finnish conditions. However, the analysis of a sample of all types of farms suggested a more considerable scale effect than the analysis on dairy farms.
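The Fisher index mentioned above is the geometric mean of the Laspeyres and Paasche indices. As an illustrative sketch (not code or data from the study; the two-output prices and quantities below are hypothetical):

```python
# Fisher quantity index = sqrt(Laspeyres * Paasche), computed for
# hypothetical prices p and quantities q in two periods (0 and 1).
import math

def fisher_quantity_index(p0, q0, p1, q1):
    # Laspeyres: period-1 quantities valued at period-0 prices.
    laspeyres = sum(p * q for p, q in zip(p0, q1)) / sum(p * q for p, q in zip(p0, q0))
    # Paasche: period-1 quantities valued at period-1 prices.
    paasche = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p1, q0))
    return math.sqrt(laspeyres * paasche)

# Hypothetical two-output example (illustrative numbers only)
p0, q0 = [1.0, 2.0], [10.0, 5.0]
p1, q1 = [1.2, 1.8], [12.0, 6.0]
print(round(fisher_quantity_index(p0, q0, p1, q1), 4))  # → 1.2
```

Decomposing such an index into technical change, efficiency change and a scale effect additionally requires the distance functions estimated in the thesis.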
  • Holopainen-Mantila, Ulla (Helsingin yliopisto, 2015)
    Barley (Hordeum vulgare L.) is a globally important grain crop. The composition and structure of the barley grain are under genotypic and environmental control during grain development, when the storage compounds (mainly starch and protein) are accumulated. Grain structure plays a significant role in the malting and feed- and food-processing quality of barley. Hordeins, the major storage proteins in barley grains, are centrally located in the endosperm, forming a matrix surrounding starch granules, but their role in the structural properties of barley grain is not completely understood. Thus, the main aim of the current study was to demonstrate the role of hordeins in barley grain structure. The dependence of the grain structure on the growth environment, in particular with respect to day-length and sulphur application relevant to northern growing conditions, was studied. The effects of the grain structure on end-use properties in milling as well as in hydration and modification during malting were characterized. The longer photoperiod typical of the latitudes of Southern Finland resulted in a C hordein fraction, entrapped by aggregated B and D hordeins, being more deeply located in the endosperm of barley cultivar Barke. Thus the impact of the growing environment on hordein deposition during grain filling was observed both at the tissue and subcellular level. However, the mechanism behind the differential accumulation of C hordein remains unclear. The deeper localization of entrapped C hordein was linked to improved hydration of grains during malting in three barley cultivars. Thus, the role of the subaleurone region in barley grain was found to be significant with respect to end-use quality. Moreover, the results suggest that the growing environment affects the end-use properties of barley and that especially the northern growing conditions have a positive impact on barley processing quality.
The influence of sulphur application on hordein composition in Northern European growing conditions was demonstrated for the first time. Asparagine and C hordein served as nitrogen storage pools when the S application rate was lower than 20 mg S / kg soil, whereas total hordein and B hordein contents increased with higher S application rates. The current study also showed that even when sulphur is sufficiently available in field conditions, the hordein composition may react to sulphur application. The observed sulphur responses were in accordance with those reported earlier for hordein composition. This indicates that the more intensive growth rhythm induced in northern growing conditions does not greatly alter the effect of sulphur on grain composition. The current study confirmed that the main grain components (starch, protein and β-glucan) influence grain processing properties including milling, hydration and endosperm modification. However, their influence on endosperm texture (hardness or steeliness), which also affects the performance of barley grains in these processes, cannot be directly derived or estimated on the basis of the grain composition. The results obtained suggest that hordeins should also be taken into account in the evaluation of the processing behaviour of barley grains.
  • Nieminen, Pekka J. (Helsingin yliopisto, 2007)
    A composition operator is a linear operator that precomposes any given function with another function, which is held fixed and called the symbol of the composition operator. This dissertation studies such operators and questions related to their theory in the case when the functions to be composed are analytic in the unit disc of the complex plane. Thus the subject of the dissertation lies at the intersection of analytic function theory and operator theory. The work contains three research articles. The first article is concerned with the value distribution of analytic functions. In the literature there are two different conditions which characterize when a composition operator is compact on the Hardy spaces of the unit disc. One condition is in terms of the classical Nevanlinna counting function, defined inside the disc, and the other condition involves a family of certain measures called the Aleksandrov (or Clark) measures and supported on the boundary of the disc. The article explains the connection between these two approaches from a function-theoretic point of view. It is shown that the Aleksandrov measures can be interpreted as kinds of boundary limits of the Nevanlinna counting function as one approaches the boundary from within the disc. The other two articles investigate the compactness properties of the difference of two composition operators, which is beneficial for understanding the structure of the set of all composition operators. The second article considers this question on the Hardy and related spaces of the disc, and employs Aleksandrov measures as its main tool. The results obtained generalize those existing for the case of a single composition operator. However, there are some peculiarities which do not occur in the theory of a single operator. The third article studies the compactness of the difference operator on the Bloch and Lipschitz spaces, improving and extending results given in the previous literature. 
Moreover, in this connection one obtains a general result which characterizes the compactness and weak compactness of the difference of two weighted composition operators on certain weighted Hardy-type spaces.
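The central object above, a composition operator, can be illustrated concretely. A minimal sketch (not code from the thesis; phi(z) = z/2 is an assumed example of a self-map of the unit disc):

```python
# A composition operator C_phi sends a function f to f ∘ phi, where phi
# is a fixed self-map of the unit disc (the "symbol" of the operator).

def composition_operator(phi):
    """Return the linear operator C_phi: f -> f o phi."""
    def C_phi(f):
        return lambda z: f(phi(z))
    return C_phi

phi = lambda z: z / 2          # example symbol: maps the disc into itself
C = composition_operator(phi)

f = lambda z: z ** 2 + 1       # an analytic function on the disc
g = C(f)                       # g(z) = f(phi(z)) = (z/2)**2 + 1

z = 0.3 + 0.4j                 # a point inside the unit disc
assert abs(g(z) - ((z / 2) ** 2 + 1)) < 1e-12

# Linearity: C_phi(a*f1 + b*f2)(z) = a*f1(phi(z)) + b*f2(phi(z))
```

The analytic questions of the thesis (compactness of C_phi and of differences of two such operators) concern how this operator acts on infinite-dimensional function spaces, which a finite sketch cannot capture.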
  • Laitila, Jussi (Helsingin yliopisto, 2006)
    A composition operator is a linear operator between spaces of analytic or harmonic functions on the unit disk, which precomposes a function with a fixed self-map of the disk. A fundamental problem is to relate properties of a composition operator to the function-theoretic properties of the self-map. During the recent decades these operators have been very actively studied in connection with various function spaces. The study of composition operators lies at the intersection of two central fields of mathematical analysis: function theory and operator theory. This thesis consists of four research articles and an overview. In the first three articles the weak compactness of composition operators is studied on certain vector-valued function spaces. A vector-valued function takes its values in some complex Banach space. In the first and third articles, sufficient conditions are given for a composition operator to be weakly compact on different versions of vector-valued BMOA spaces. In the second article, characterizations are given for the weak compactness of a composition operator on harmonic Hardy spaces and spaces of Cauchy transforms, provided the functions take values in a reflexive Banach space. Composition operators are also considered on certain weak versions of the above function spaces. In addition, the relationship of different vector-valued function spaces is analyzed. In the fourth article weighted composition operators are studied on the scalar-valued BMOA space and its subspace VMOA. A weighted composition operator is obtained by first applying a composition operator and then a pointwise multiplier. A complete characterization is given for the boundedness and compactness of a weighted composition operator on BMOA and VMOA. Moreover, the essential norm of a weighted composition operator on VMOA is estimated. These results generalize many previously known results about composition operators and pointwise multipliers on these spaces.
  • Kallio, Minna (Helsingin yliopisto, 2008)
    Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With use of the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. Basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual search and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
  • Sirén, Jouni (Helsingin yliopisto, 2012)
    This thesis studies problems related to compressed full-text indexes. A full-text index is a data structure for indexing textual (sequence) data, so that the occurrences of any query string in the data can be found efficiently. While most full-text indexes require much more space than the sequences they index, recent compressed indexes have overcome this limitation. These compressed indexes combine a compressed representation of the index with some extra information that allows decompressing any part of the data efficiently. This way, they provide similar functionality as the uncompressed indexes, while using only slightly more space than the compressed data. The efficiency of data compression is usually measured in terms of entropy. While entropy-based estimates predict the compressed size of most texts accurately, they fail with highly repetitive collections of texts. Examples of such collections include different versions of a document and the genomes of a number of individuals from the same population. While the entropy of a highly repetitive collection is usually similar to that of a text of the same kind, the collection can often be compressed much better than the entropy-based estimate. Most compressed full-text indexes are based on the Burrows-Wheeler transform (BWT). Originally intended for data compression, the BWT has deep connections with full-text indexes such as the suffix tree and the suffix array. With some additional information, these indexes can be simulated with the Burrows-Wheeler transform. The first contribution of this thesis is the first BWT-based index that can compress highly repetitive collections efficiently. Compressed indexes allow us to handle much larger data sets than the corresponding uncompressed indexes. To take full advantage of this, we need algorithms for constructing the compressed index directly, instead of first constructing an uncompressed index and then compressing it. 
The second contribution of this thesis is an algorithm for merging the BWT-based indexes of two text collections. By using this algorithm, we can derive better space-efficient construction algorithms for BWT-based indexes. The basic BWT-based indexes provide similar functionality as the suffix array. With some additional structures, the functionality can be extended to that of the suffix tree. One of the structures is an array storing the lengths of the longest common prefixes of lexicographically adjacent suffixes of the text. The third contribution of this thesis is a space-efficient algorithm for constructing this array, and a new compressed representation of the array. In the case of individual genomes, the highly repetitive collection can be considered a sample from a larger collection. This collection consists of a reference sequence and a set of possible differences from the reference, so that each sequence contains a subset of the differences. The fourth contribution of this thesis is a BWT-based index that extrapolates the larger collection from the sample and indexes it.
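The Burrows-Wheeler transform at the heart of the indexes described above can be computed naively from sorted rotations. A minimal sketch (not the thesis's space-efficient algorithms, which avoid materializing the rotations):

```python
# Build the BWT of a text: append a terminator smaller than every other
# character, sort all rotations lexicographically, and read off the last
# column. Equal characters cluster in that column, which is why the BWT
# compresses well and why repetitive collections compress even better.

def bwt(text, terminator="$"):
    """Return the BWT of text (terminator assumed absent from text)."""
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # → annb$aa
```

This naive construction takes quadratic space; the thesis's contributions concern building and merging such indexes directly in compressed form.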
  • Koskelo, Jaakko (2012)
    Ionic liquids are salts with a low melting point (below about 100 °C). They have several useful properties and numerous potential applications. More detailed knowledge of the atomic-level structure of ionic liquids is nevertheless important for understanding their properties and potential and for developing applications. This work studied 1,3-dimethylimidazolium chloride ([mmim]Cl), a prototypical ionic liquid of low molecular mass. In this thesis, inelastic X-ray scattering was utilised to obtain new information. In inelastic X-ray scattering, a photon scatters off the electron system, transferring both energy and momentum. The inelastic scattering of a photon is called Compton scattering when the energy and momentum transfer are large. Compton scattering can be used to study the atomic- and molecular-level structure of matter, because the quantity determined in Compton scattering experiments, the Compton profile, is sensitive to changes in interatomic geometry. The interpretation of the measurements is challenging, however, and computational modelling plays a large role in it. In this thesis, the difference between the isotropic Compton profiles of the liquid and crystalline phases of [mmim]Cl (the difference profile) was calculated. Under certain assumptions the Compton profile depends on the electron momentum density, so the profiles can be determined from electronic structure calculations describing the ground state of the material. The electronic structure calculations in this thesis employed Kohn-Sham density functional theory, periodic boundary conditions and Gaussian basis sets for the electronic states. In addition, factors affecting the accuracy of the calculations were assessed. The density of the momentum grid and the choice of the exchange-correlation functional and basis set were found to have a large effect on the calculated difference profile. These factors were clearly more significant than the statistical uncertainty arising from the finite number of liquid structures. To interpret the difference profile, modifications based on the liquid structure used were made to a single [mmim]Cl ion pair taken from the crystal structure, and the effect of these modifications on the Compton profile was examined. Changes both in the internal structure of the molecular ions and in the geometry between the ions were found to affect the calculated difference profile significantly. The results presented in this work aid in the interpretation and explanation of the experimental difference profile.
  • Kantosalo, Marget (2003)
    The target of this study, conducted in October/November of 1997, is a polytechnic that had then been operating as a permanent polytechnic for some months. Four of its five educational units are the target of this study, each representing a separate educational field. The aim of this case study is to explore how the ideal of a uniform polytechnic works out in practice. The study sets the experiences and views of the management and the personnel of the institution against the official aims of the polytechnic system, specifically multi-disciplinariness and deregulation, by taking into account the conditions for and the obstacles to cooperation caused by the cultural differences of the fields. An answer has been sought to the questions: How did the cultures of the fields confront each other? Have they yet evolved a common culture? Do the cultures of the fields remain separate from one another? The study has also endeavoured to make comprehensible the complexity inherent in the implementation of the broad programmes launched by central administration, particularly when many people participate in the implementation process. The research method was the thematic interview. Cultural themes have been sought by sorting the interview material employing the Grounded Theory of Glaser & Strauss. The differences between the educational units and the views within them have been grouped by themes utilizing the tripartition of cultural perspectives by Joanne Martin. The perspectives are the integration, differentiation and fragmentation perspectives. Through coding of the material, the categories, i.e. the cultural themes, emerged. The core category is "compulsive" multi-disciplinariness, i.e. the obligation brought by the polytechnic to cooperate multi-disciplinarily. As a common content and strategy have not yet been created for the implementation of this multi-disciplinariness, it still remains only "a coulisse", an artefact. The exercise of power by the central administration, which is a uniting factor of the institution, is criticized by the units. Some units find that their position has weakened, others that it has strengthened, under polytechnic status. Nevertheless, all want to continue in the polytechnic system. The theory of the study is also grounded in the works of Edgar Schein, Andrew Brown, Richard Geerz, Nohria & Eccles (eds.), Kickert et al., Alasuutari, Harmon & Mayer, Spradley, Hirsijärvi & Hurme and Yin, to mention a few. The legislation and other sources on the polytechnics are also important.
  • Ilves, Airi (Helsingin yliopisto, 2016)
    The study analyses the widening scope of competition law in the area of intellectual property law and the risk factors of the compulsory licensing remedy for intellectual property rights owners in the European Union market. The subject of the thesis is interesting because, despite the great amount of legal literature discussing the topic, it remains a controversial and developing area of European Union competition law. An intellectual property owner operating in Europe should take advantage of knowledge of the case law of the Court of Justice of the European Union on compulsory licensing to protect its commercial interests and to assess the risk that the European Commission or a Member State court will find a compulsory license to be the appropriate remedy if the parties do not reach a licensing agreement through their own negotiations. A refusal to license has been considered an abuse of a dominant position under Article 102 of the Treaty on the Functioning of the European Union (TFEU). Through their decisions, the EU authorities have developed a list of “exceptional circumstances” under which a refusal to license constitutes an abuse under Article 102. The Court of Justice of the European Union develops EU law by applying dynamic interpretation; thus the primary source for addressing the research topic is the case law of the Court of Justice of the European Union. The scope of this work is limited to the analysis of the most noteworthy cases in EU jurisprudence concerning Article 102 TFEU and refusal to license. In some situations, where IP law fails to guarantee the level of innovation in the market, the intervention of competition law may be justified, as happened e.g. in the factual situation of the Magill case. The landmark decision of the Court of Justice is IMS Health, which sets forth the legal standard applicable in the European Union today. European policy is also assessed in the light of recent European Commission decisions and General Court case law. The most recent compulsory licensing case, Microsoft, is examined to analyse the policy developments and to consider what test might be applied under European competition law in future cases. This research paper examines whether competition law in Europe has moved towards a more economic, effects-based approach and how the relationship between intellectual property and competition law may be seen as complementary rather than antagonistic. The characteristics that distinguish intellectual property rights from “normal” property rights are discussed in light of the development of the case law, and an analysis is conducted of the rationale of the new-product criterion of the exceptional circumstances test. When considering the effectiveness of the jurisprudence, it is necessary to take into account the need to balance effective competition on the market against the encouragement of further innovation. The protection of intellectual property rights has an important role in promoting technological development and thus also in providing more choice for consumers. The exceptional circumstances test created by the Court of Justice is formalistic and does not take fully into consideration situations in which an intellectual property rights owner may block innovation; however, it must be stressed that courts are generally not well equipped to conduct the effects-based cost-benefit analysis necessary to balance the incentives of the dominant undertaking and its competitors to innovate, and such an evaluation may prove a difficult task for the judiciary. The standards developed in the case law are fact-specific and ultimately a source of uncertainty for undertakings in the EU market. The study gathers together the most significant snapshots of the law and assesses where the EU jurisprudence on compulsory licensing is heading. The author concludes that the law on compulsory licensing in Europe will continue to evolve towards weaker intellectual property protection in order to advance competition, innovation and the free movement of goods; nevertheless, in spite of the widening scope of European competition law, the conditions for issuing compulsory licenses remain highly restrictive.
  • Lahesmaa-Korpinen, Anna-Maria (Helsingin yliopisto, 2012)
    Proteins are key components in biological systems as they mediate the signaling responsible for information processing in a cell and organism. In biomedical research, one goal is to elucidate the mechanisms of cellular signal transduction pathways to identify possible defects that cause disease. Advancements in technologies such as mass spectrometry and flow cytometry enable the measurement of multiple proteins from a system. Proteomics, or the large-scale study of proteins of a system, thus plays an important role in biomedical research. The analysis of all high-throughput proteomics data requires the use of advanced computational methods. Thus, the combination of bioinformatics and proteomics has become an important part in research of signal transduction pathways. The main objective in this study was to develop and apply computational methods for the preprocessing, analysis and interpretation of high-throughput proteomics data. The methods focused on data from tandem mass spectrometry and single cell flow cytometry, and integration of proteomics data with gene expression microarray data and information from various biological databases. Overall, the methods developed and applied in this study have led to new ways of management and preprocessing of proteomics data. Additionally, the available tools have successfully been used to help interpret biomedical data and to facilitate analysis of data that would have been cumbersome to do without the use of computational methods.
  • Ta, Hung (Helsingin yliopisto, 2012)
    Living systems, which are composed of biological components such as molecules, cells, organisms or entire species, are dynamic and complex. Their behaviors are difficult to study with respect to the properties of individual elements. To study their behaviors, we use quantitative techniques in the "omic" fields such as genomics, bioinformatics and proteomics to measure the behavior of groups of interacting components, and we use mathematical and computational modeling to describe and predict their dynamical behavior. The first step in the understanding of a biological system is to investigate how its individual elements interact with each other. This step consists of drawing a static wiring diagram that connects the individual parts. Experimental techniques are designed to observe interactions among the biological components in the laboratory, while computational approaches are designed to predict interactions among the individual elements based on their properties. In the first part of this thesis, we present techniques for network inference that are particularly targeted at protein-protein interaction networks. These techniques include comparative genomics, structure-based and biological context methods, and integrated frameworks. We evaluate and compare the prediction methods that have been most often used for domain-domain interactions, and we discuss the limitations of the methods and data resources. We introduce the concept of the Enhanced Phylogenetic Tree, which is a new graphical presentation of the evolutionary history of protein families; we then propose a novel method for assigning functional linkages to proteins. This method was applied to predicting both human and yeast protein functional linkages. The next step is to obtain insights into the dynamical aspects of the biological systems.
One of the overarching goals of systems biology is to understand the emergent properties of living systems, i.e., to understand how the individual components of a system come together to form distinct, collective and interactive properties and functions. The emergent properties of a system are neither to be found in nor directly deducible from the lower-level properties of that system. An example of an emergent property is synchronization, a dynamical state of complex network systems in which the individual components of the systems behave coherently, almost in unison. In the second part of the thesis, we apply computational modeling to mimic and simplify real-life complex systems. We focus on clarifying how the network topology determines the initiation and propagation of synchronization. A simple but efficient method is proposed to reconstruct network structures from functional behaviors for oscillatory systems such as the brain. We study the feasibility of network reconstruction systematically for different regimes of coupling and for different network topologies. We utilize the Kuramoto model, an interacting system of oscillators, which is simple but relevant enough to address our questions.
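The Kuramoto dynamics mentioned above can be sketched in a few lines. The following is a minimal illustration of the standard model (Euler integration, all-to-all coupling, arbitrary parameter choices), not code from the thesis:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    # Pairwise phase differences via broadcasting; row i sums over all j.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / n) * coupling)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 100
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = rng.normal(0.0, 0.1, n)        # natural frequencies

for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.05)

print(round(order_parameter(theta), 2))  # strong coupling drives r toward 1
```

With the coupling strength K well above the critical value for this frequency spread, the initially incoherent phases lock into near-unison, which is the synchronization phenomenon the thesis studies on different network topologies.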
  • Floréen, Patrik (Helsingin yliopisto, 1992)
  • Herrmann, Erik (Helsingin yliopisto, 2010)
    Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase is formed in a uniform substance. In heterogeneous nucleation, on the other hand, the new phase emerges on a pre-existing surface (nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aim is to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining flow conditions, vapor losses and the nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied in the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid.
The role of the contact angle was investigated, and the influence of incoming particle concentrations and homogeneous nucleation on counting efficiency was determined.
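For context on the theoretical approaches the abstract refers to, the textbook classical nucleation theory (CNT) form of the homogeneous nucleation rate, not taken from the thesis itself, is:

```latex
J = J_0 \exp\!\left(-\frac{\Delta G^{*}}{k_{\mathrm{B}} T}\right),
\qquad
\Delta G^{*} = \frac{16 \pi \sigma^{3} v_{\mathrm{m}}^{2}}{3 \left(k_{\mathrm{B}} T \ln S\right)^{2}}
```

where \(\sigma\) is the surface tension, \(v_{\mathrm{m}}\) the molecular volume in the liquid, \(S\) the saturation ratio and \(J_0\) a kinetic prefactor; the strong sensitivity of \(J\) to \(\sigma\) and \(S\) is why experimental comparisons of the kind described above are demanding.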
  • Cervera Taboada, Alejandra (2012)
    High-throughput technologies have had a profound impact on transcriptomics. Prior to microarrays, measuring gene expression was not possible in a massively parallel way. More recently, deep RNA sequencing has been constantly gaining ground on microarrays in transcriptomics analysis. RNA-Seq promises several advantages over microarray technologies, but it also comes with its own set of challenges. Different approaches exist to tackle each of the required processing steps of RNA-Seq data. The proposed solutions need to be carefully evaluated to find the best methods, depending on the particularities of the datasets and the specific research questions being addressed. In this thesis I propose a computational framework that allows the efficient analysis of RNA-Seq datasets. The parallelization of tasks and the organization of data files were handled by the Anduril framework, on which the workflow was implemented. Particular emphasis was placed on the quality control of the RNA-Seq files. Several measures were taken to prune the data of low-quality bases and reads that hamper the alignment step. Furthermore, various existing processing algorithms for transcript assembly and abundance estimation were tested. The best methods have been coupled together into an automated pipeline that takes the raw reads and delivers expression matrices at isoform and gene level. Additionally, a module is included for obtaining sets of differentially expressed genes under different conditions or across a time course.
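As an illustration of the kind of quality pruning mentioned above, the sketch below trims low-quality 3' tails of reads using Phred scores. It is a simplified stand-in, not the actual implementation in the thesis's Anduril workflow; the quality threshold and the Phred+33 offset are assumptions (the offset matches Sanger/Illumina 1.8+ FASTQ files):

```python
def phred_scores(quality_string, offset=33):
    """Decode an ASCII-encoded FASTQ quality string (Phred+33)."""
    return [ord(c) - offset for c in quality_string]

def trim_3prime(seq, qual, threshold=20):
    """Drop bases from the 3' end while their Phred score is below threshold.
    Returns the trimmed sequence and its matching quality string."""
    scores = phred_scores(qual)
    end = len(seq)
    while end > 0 and scores[end - 1] < threshold:
        end -= 1
    return seq[:end], qual[:end]

# 'I' encodes Phred 40 (high quality); '#' encodes Phred 2 (very low).
seq, qual = trim_3prime("ACGTACGT", "IIIIII##")
print(seq)  # -> ACGTAC
```

Real trimmers (and the pipeline's chosen tools) use more robust schemes, e.g. sliding-window averages, but the principle of removing unreliable tail bases before alignment is the same.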
  • Kankainen, Matti (Helsingin yliopisto, 2015)
    Lactobacilli are generally harmless gram-positive lactic acid bacteria and well known for their broad spectrum of beneficial effects on human health and usage in food production. However, relatively little is known at the molecular level about the relationships between lactobacilli and humans and about their food processing abilities. The aim of this thesis was to establish bioinformatics approaches for classifying proteins involved in the health effects and food production abilities of lactobacilli and to elucidate the functional potential of two biomedically important Lactobacillus species using whole-genome sequencing. To facilitate the genome-based analysis of lactobacilli, two new bioinformatics approaches were developed for the systematic analysis of protein function. The first approach, called LOCP, fulfilled the need for accurate genome-wide annotation of putative pilus operons in gram-positive bacteria, whereas the second approach, BLANNOTATOR, represented an improved homology-based solution for general function annotation of bacterial proteins. Importantly, both approaches showed superior accuracy in evaluation tests and proved to be useful in finding information ignored by other homology-search methods, illustrating their added value to the current repertoire of function classification systems. Their application also led to the discovery of several putative pilus operons and new potential effector molecules in lactobacilli, including many of the key findings of this thesis work. Lactobacillus rhamnosus GG is one of the clinically best-studied Lactobacillus strains and has a long history of safe use in the food industry. The whole-genome sequencing of the strain GG and a closely related dairy strain L. rhamnosus LC705 revealed two almost identical genomes, despite the physiological differences between the strains. 
Despite the extensive genomic similarity, a genomic region containing genes for three pilin subunits and a pilin-dedicated sortase was present only in GG. The presence of these pili on the cell surface of L. rhamnosus GG was also confirmed, and one of the GG-specific pilins was demonstrated to be central to the mucus interaction of strain GG. These discoveries established the presence of gram-positive pilus structures also in non-pathogenic bacteria and provided a long-awaited explanation for the highly efficient adhesion of the strain GG to the intestinal mucosa. The other Lactobacillus species investigated in this thesis was Lactobacillus crispatus. To gain insights into its physiology and to identify components by which this important constituent of the healthy human vagina may promote urogenital health, the genome of a representative L. crispatus strain was sequenced and compared to those of nine others. These analyses provided an accurate account of features associated with vaginal health and revealed a set of 1,224 gene families that were universally conserved across all ten strains and, most likely, also across the entire L. crispatus species. Importantly, this set of genes was shown to contain adhesion genes involved in the displacement of the bacterial vaginosis-associated Gardnerella vaginalis from vaginal cells and provided a molecular explanation for the inverse association between L. crispatus and G. vaginalis colonisation in the vagina. Taken together, the present study demonstrates the power of whole-genome sequencing and computer-assisted genome annotation in identifying genes that are involved in host interactions and have industrial value. The discovery of gram-positive pili in L. rhamnosus GG and the mechanism by which L. crispatus excludes G. vaginalis from vaginal cells are both major steps forward in understanding the interaction between lactobacilli and the host.
We envisage that these findings together with the developed bioinformatics methods will aid the improvement of probiotic products and human health in the future.
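At its simplest, identifying a universally conserved gene set of the kind described above reduces to intersecting gene-family memberships across strains. The sketch below shows only that final step on toy data; the family identifiers are hypothetical, and the hard part, clustering proteins into homologous families, happens upstream and is not shown:

```python
def core_gene_families(strain_families):
    """Return the gene families present in every strain
    (a candidate core genome for the sampled strains)."""
    strains = list(strain_families.values())
    core = set(strains[0])
    for families in strains[1:]:
        core &= set(families)  # keep only families seen so far AND here
    return core

# Toy example with made-up family identifiers.
families = {
    "strain_A": {"fam1", "fam2", "fam3", "fam4"},
    "strain_B": {"fam1", "fam2", "fam4"},
    "strain_C": {"fam1", "fam2", "fam5"},
}
print(sorted(core_gene_families(families)))  # -> ['fam1', 'fam2']
```

Families conserved across all sampled strains, as with the 1,224 families found for L. crispatus, are the natural candidates for species-wide functions such as the adhesion genes highlighted above.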
  • Laakso, Marko (Helsingin yliopisto, 2007)
    This thesis presents a highly sensitive genome-wide search method for recessive mutations. The method is suitable for distantly related samples that are divided into phenotype positives and negatives. High-throughput genotype arrays are used to identify and compare homozygous regions between the cohorts. The method is demonstrated by comparing colorectal cancer patients against unaffected references. The objective is to find homozygous regions and alleles that are more common in cancer patients. We have designed and implemented software tools to automate the data analysis from genotypes to lists of candidate genes and their properties. The programs have been designed around a pipeline architecture that allows their integration with other programs, such as biological databases and copy number analysis tools. The integration of the tools is crucial, as the genome-wide analysis of cohort differences produces many candidate regions not related to the studied phenotype. CohortComparator is a genotype comparison tool that detects homozygous regions and compares their loci and allele constitutions between two sets of samples. The data are visualised in chromosome-specific graphs illustrating the homozygous regions and alleles of each sample. The genomic regions that may harbour recessive mutations are emphasised with different colours, and a scoring scheme is given for these regions. The detection of homozygous regions, cohort comparisons and result annotations are all subject to assumptions, many of which have been parameterized in our programs. The effect of these parameters and the suitable scope of the methods have been evaluated. Samples with different resolutions can be balanced with the genotype estimates of their haplotypes, and they can be used within the same study.
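The detection of homozygous regions can be illustrated with a simple run-length scan over ordered genotype calls. This is only a sketch of the idea, not CohortComparator's implementation; the genotype encoding, the handling of no-calls, and the minimum run length are assumptions, and the subsequent cohort comparison and scoring are not shown:

```python
def homozygous_runs(genotypes, min_len=10):
    """Return (start, end) index pairs of runs of homozygous calls.
    genotypes: per-SNP calls in map order, e.g. 'AA', 'AB', 'BB', 'NC'.
    Heterozygous and no-call markers both break a run here."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g in ("AA", "BB"):
            if start is None:
                start = i  # a homozygous run begins
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))  # run long enough to report
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))  # run reaching the end
    return runs

calls = ["AA"] * 12 + ["AB"] + ["BB"] * 3
print(homozygous_runs(calls, min_len=10))  # -> [(0, 12)]
```

In a cohort comparison, runs like these would then be intersected across patients and checked against references to flag regions, and shared alleles, overrepresented in the affected group.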
  • Sharma, Vivek (Helsingin yliopisto, 2012)
    Heme-copper oxidases terminate the respiratory chain in many eukaryotes and prokaryotes as the final electron acceptors. They catalyze the reduction of molecular oxygen to water and conserve the free energy by proton pumping across the inner mitochondrial membrane or the plasma membrane of bacteria. This leads to the generation of an electrochemical gradient across the membrane, which is utilized in the synthesis of ATP. The catalytic mechanism of oxidase is a complex coupling of electrons and protons, which has been studied with the help of numerous biophysical and biochemical methods. The superfamily of oxidases is classified into three subfamilies: A-, B- and C-type. The A- and B-type oxidases have been studied in great depth, whereas relatively little is known about the molecular mechanism of the distinct C-type (or cbb3-type) oxidases. The latter enzymes, which are known to possess unusually high oxygen affinity relative to the former classes of enzymes, also share little sequence or structural similarity with the A- and B-type oxidases. In the work presented in this thesis, C-type oxidases have been studied using a variety of computational procedures, such as homology modeling, molecular dynamics simulations, density functional theory (DFT) calculations and continuum electrostatics. Homology models of the C-type oxidase correctly predict the side-chain orientation of the cross-linked tyrosine and a proton channel. The active-site region is also modelled with high accuracy, and these models are subsequently used in the DFT calculations. With the help of these calculations, it is proposed that the different orientation of the cross-linked tyrosine and a strong hydrogen bond on the proximal side of the high-spin heme are responsible for the higher apparent oxygen affinity and the more rhombic EPR signal of the C-type oxidases.
Furthermore, the pKa profiles of two amino acid residues located close to the active site suggest strong electron-proton coupling and a unique proton pumping route. Molecular dynamics simulations of the two-subunit C-type oxidase allowed, for the first time, the observation of redox-state-dependent water-chain formation in the protein interior, which can be utilized for redox-coupled proton transfer.
  • Ikäläinen, Suvi (Helsingin yliopisto, 2012)
    Theoretical examination of traditional nuclear magnetic resonance (NMR) parameters as well as novel quantities related to magneto-optic phenomena is carried out in this thesis for a collection of organic molecules. Electronic structure methods are employed, and reliable calculations involving large molecules and computationally demanding properties are made feasible through the use of completeness-optimized basis sets. In addition to introducing the foundations of NMR, a theory for the nuclear spin-induced optical rotation (NSOR) is formulated. In the NSOR, the plane of polarization of linearly polarized light is rotated by spin-polarized nuclei in an NMR sample as predicted by the Faraday effect. It has been hypothesized that this could be an advantageous alternative to traditional NMR detection. The opposite phenomenon, i.e., the laser-induced NMR splitting, is also investigated. Computational methods are discussed, including the method of completeness optimization. Nuclear shielding and spin-spin coupling are evaluated for hydrocarbon systems that simulate graphene nanoflakes, while the laser-induced NMR splitting is studied for hydrocarbons of increasing size in order to find molecules that may potentially interest the experimentalist. The NSOR is calculated for small organic systems with inequivalent nuclei to prove the existence of an optical chemical shift. The existence of the optical shift is verified in a combined experimental and computational study. Finally, relativistic effects on the size of the optical rotation are evaluated for xenon, and they are found to be significant. Completeness-optimized basis sets are used in all cases, and extensive analysis regarding the accuracy of results is made.