Recent Submissions

  • Särkkä, Iro (Helsingin yliopisto, 2019)
    The objective of this study is to analyse NATO as portrayed in Finland’s foreign and security policy, focusing on policy discourses and rhetoric. The main research question is the following: how have conceptions of NATO changed in the Finnish foreign and security policy debate during the post-Cold War period? As the theoretical framework of the study, a constructivist approach to foreign policy analysis was applied, with the aim of accounting for the significance of political language, policy discourses and rhetoric in defining Finnish security policy. Within this paradigmatic approach, my aim was to identify and explain the long-term discursive changes in Finnish foreign and security policy as well as the policy positions of the key foreign and security policy actors. Empirical and inductive qualitative content analysis (QCA) was applied as the methodological approach in analysing the three primary research sources: 1) NATO’s official summit declarations (1990–2016), 2) Finnish foreign, security and defence policy reports (1995–2017) and 3) the corresponding parliamentary speeches addressing NATO (N=915). Within the constructivist approach to foreign policy analysis, dominant discourses play an important role in defining foreign and security policy outcomes. Foreign policy discourses are used as means to legitimize foreign policy action and goal formation; however, they also direct foreign policy debates in domestic policy forums. In this study, my aim was to analyse the extent to which government-led foreign policy transmits to the corresponding foreign policy debate in the national parliament. 
I studied the speeches given by members of the Finnish Parliament in the corresponding timeframe in relation to four attributes: 1) the content of the speech and the policy discourses, 2) the rhetorical means employed by the speaker (Aristotle’s ethos, pathos and logos), 3) the formal roles that the speakers held (ministers, group leaders, members of parliament) and 4) the NATO-related rhetorician types (pro-NATO, pragmatist, skeptic and anti-NATO speakers). In addition to these four parameters, my aim was to synthesize the change in the foreign policy rhetoric of the eight major political parties represented in the Finnish Parliament as well as the movement between the rhetorician types. This doctoral thesis has provided new empirical knowledge about the content and the use of different rhetorical means in relation to NATO in the post-Cold War Finnish foreign and security policy debate. Furthermore, it has sought to outline differences in the contextual interpretation of NATO between the Finnish government and the parliament. Above all, the study has shown how complex a foreign and security policy issue NATO is and how many different interpretations it may evoke in the Finnish security policy debate.
  • Akenji, Lewis (Helsingin yliopisto, 2019)
    Transitioning to sustainable living is a complex, conflicting, and highly contested issue. As part of this push, governments and businesses have focused on promoting green consumerism: framing people primarily as consumers with “a utility function” and seeking to solve the consumerism problem by, paradoxically, building consumer capacity to purchase more energy- and material-efficient products. The now-debunked assumption is that a critical mass of informed, ecologically conscious consumers can, through the market mechanism, apply pressure on producers and thus transform the economic system into a sustainable one. In this thesis I argue that this approach, which is driven by economistic thinking, is consumer scapegoatism, and is both simplistic and flawed. In light of the magnitude and urgency of the unsustainability problem, green consumerism could even be dangerous, as it delays the deployment of effective solutions. Consumer scapegoatism occurs when ecological imbalance is examined primarily through an economic-growth lens and the critical role of addressing these systemic flaws is ascribed to the consumer without proper regard for whether he or she has the power to influence other, more salient actors in the system. This thesis argues for the need to develop an explicit political economy approach to sustainable living research, policy and practice. Political economy asks questions about power, institutions and agency. For sustainable living, these would be questions such as: who benefits or loses from current patterns of consumption? What are the drivers and structures that propagate unsustainable consumption? Where are the meaningful points of intervention that can have the desired effects? Critical to finding solutions is understanding the power dynamics around the issue. I analyse sustainable living as an issue of heterogeneous claims and conflicting interests. 
The means and practical implications of achieving sustainable living threaten the interests of powerful actors such as national governments, large transnational corporations, and institutions that together shape contemporary politics, policy, and markets. Such actors are also responsible for the systems of provisioning and choice architecture that largely predetermine how individuals and communities pursue and meet their needs. As heterogeneity and conflict of interests are essential to political economy, this approach is well situated as the organizing frame of the field of sustainable living. I discuss the main tensions embodied in the pursuit of sustainable living, and juxtapose these with characteristics of the political economy approach that make it a suitable research framing. Political economy characteristics include: understanding of social transition; interdisciplinarity in research design; use of a moral perspective; and praxis, or practice orientation. I emphasize the element of power as vital in the articulation of social transformation, and highlight the need for sustainable living research to undertake a systemic analysis of power. To apply this, I develop the In-Power framework for analysing power dynamics within a system. The In-Power framework has four components: institutions, interests, instruments, and influence. Institutions set the conditions or “rules of the game” for how actors operate in the production-consumption system; interests identify stakes, showing the heterogeneity or homogeneity of those interests in the sustainable living issue; instruments refer to the sources of power and tools available to each stakeholder to support its objectives; and influence refers to the activities stakeholders undertake and reflects agency. 
I use the framework to analyse the global value chain of consumer goods with a view to understanding the drivers of consumption, how power is wielded by stakeholders, and potential points of effective intervention that can enable sustainable living. Dismantling the architecture of unsustainability would invariably call for a questioning of corporate architecture, not only due to the environmental impact resulting from its mode of operation, but also its lock-in effect on institutions and other actors of society. By extension, understanding unsustainable consumption and approaching sustainable living has at its core the need to address the balance, or imbalance, in power dynamics between consumption patterns and corporate power. Using the In-Power framework to analyse power flows in a value chain leads to identifying the nexus of influence and the lead actor. The nexus of influence is the concentration of stakeholders who act interdependently and who have a combined decisive influence on the final product and also on the ecosystem around it. The lead actor is the main actor in the system with a critical marketing, technological, or financial edge that permits it to set the standards or specifications for other actors in the value chain, and the characteristics that determine its production and use. Thus I argue that consumer scapegoatism, assigning full responsibility to the consumer, is ineffective; a more effective approach to addressing the systemic flaws causing or caused by unsustainable consumption is to target the nexus of influence and the lead actors in order to reform the choice architecture and systems of provision upon which people depend for meeting their needs and wants. Finally, I discuss two points not addressed in this thesis but which are essential to the political economy of sustainable living: the need to define parameters for a sustainable consumption space, and the need to move research on sustainable living out of the shadows of economics.
  • Summa, Maija (Helsingin yliopisto, 2019)
    Human noroviruses (HuNoVs) are responsible every year for a large number of acute gastroenteritis cases globally in all age groups. Typically, the virus transmits via the fecal-oral route from person to person, causing severe symptoms such as nausea, vomiting, and diarrhea, which usually disappear within a few days. However, HuNoVs also cause numerous food-related illnesses in developed countries, including Finland, inducing gastroenteritis outbreaks through contaminated water and foodstuffs. According to the reports of the European Commission, both in Europe and in Finland the most common foods causing HuNoV outbreaks are shellfish, berries (especially frozen raspberries), vegetables, and mixed foods, which have most likely been contaminated by a sick food handler. Noroviruses belong to the family Caliciviridae and are classified into seven genogroups. HuNoVs belong to genogroups I (GI), II (GII), and IV (GIV); the other genogroups contain only animal noroviruses. Noroviruses are generally regarded as host-species-specific, but the possibility of zoonotic transmission and infection has been discussed for over a decade for several genotypes. The purpose of this study was to develop a simple and rapid method for the detection of HuNoVs in food. The potential zoonotic nature of HuNoVs, particularly whether animals can serve as transmitters of these viruses, was also investigated. In the past two decades, numerous methods for detecting HuNoVs in food have been developed. However, many of these are time-consuming, and their sensitivity has been highly variable. In this work, four published extraction methods for the detection of HuNoV in food (lettuce, ham, and frozen berries) were compared. The method based on alkaline elution and polyethylene glycol (PEG) precipitation was found to be the most reliable for all three food matrices tested. The recovery efficiency of the method with frozen raspberries was on average 28%. 
Two rapid methods for the detection of HuNoV in frozen raspberries were also presented. The rapid method based on direct RNA extraction yielded recovery levels (32%) comparable to those of the PEG precipitation method. The method proved sensitive, detecting HuNoV at a level as low as 100 genome copies in a 25 g sample. Moreover, the method detected HuNoV in naturally contaminated berry samples that were linked to disease outbreaks. Treatment with either a chloroform-butanol mixture or dilution of the food samples for the RT-PCR reaction was effective in reducing the effect of PCR inhibitors. The same effect was achieved with PEG as a supplement in the food samples. Thirty-nine frozen berry samples purchased from local stores in 2010, 2014, and 2017 were screened; all tested negative for HuNoVs GI and GII. HuNoV genome was detected in the feces of 31 birds, two rats, and four pet dogs. The genotypes found in six bird samples and all dog samples were the same as those commonly found in human samples at the time of sampling. HuNoVs can thus be detected in food samples even in small numbers using the rapid method presented in this study. The use of PEG as a supplement was found to reduce inhibition of the RT-PCR reaction in the two rapid methods, and therefore the commonly used chloroform-butanol treatment, in which viruses are easily lost during processing, could be omitted. The results of the animal samples strongly indicate that wild birds, pet dogs, and possibly also rats may be involved in the transmission of HuNoVs to food, water, and surfaces.
  • Salonen, Antti P (Helsingin yliopisto, 2019)
    The emergence and rapid evolution of the over-the-counter (OTC) derivatives market in the early 1980s revolutionized the whole landscape of finance. OTC derivatives are financial products that are transnational in nature: they do not follow jurisdictional lines or theoretical boundaries focused on state-made law, but transcend them. The central argument of this research is that legal scholarship requires a legal theoretical approach capable of recognizing private normativity, one that accepts that it is not only nation states, and organizations that derive their powers from states, that can produce law. The transnational method allows the observer to acknowledge the transnational elements of finance and then set them into a legal theoretical structure. This research retells the evolution of the OTC derivatives market through the application of the transnational method. Instead of building a narrative emphasizing de- and reregulation policies and politics, the research focuses on the early beginnings of the largest capital market in the world, the so-called eurobond market of the 1960s. Through legal innovation, this market developed its own transnational rules. In the 1980s, this market became integrated with the rapidly growing market for swaps, a type of OTC derivative. Seeing the demand for contractual standardization, a handful of financial institutions became organized through a trade organization today known as the International Swaps and Derivatives Association, Inc. (ISDA). The main product of ISDA, the ISDA Master Agreement architecture, had become by far the most widely used standard agreement in the OTC derivatives market already before the 1990s. After the financial crisis of 2008, this transnational contract still holds a central position in a very different regulatory environment than that of the 1980s. 
The transnational method identifies the supply of and demand for financial and legal innovation, and the facilitative role that nation states and international organizations can play in enhancing private normativity and the transnationalisation of law. The results that the transnational method yields are first and foremost descriptive. The application of the transnational method requires a functional, rather than formal, understanding of ‘law’, because this allows private normativity to be recognized and its ontology properly understood.
  • Hlushchenko, Iryna (Helsingin yliopisto, 2019)
    This thesis explores the role of several actin-binding proteins in the regulation of brain physiology, with a focus on dendritic spines. Dendritic spines are considered a plausible physical substrate for learning and memory, as their morphology allows for modulating incoming signals. Disruptions in spine density and morphology are also often associated with neuropsychiatric disorders. The two cellular processes representing neuronal learning are long-term potentiation (LTP) and long-term depression (LTD). Here, I show that the actin-severing protein gelsolin transiently relocates to dendritic spines upon LTD induction, but not upon LTP induction or spontaneous neuronal activity. It is plausible that the modest but relatively long-lasting LTD-induced elevation of Ca2+ concentration increases the affinity of gelsolin for F-actin, thus inducing the relocalization of gelsolin to dendritic spines. Proper spine regulation is crucial for learning in live animals. MIM is an I-BAR-containing membrane-curving protein shown to be involved in dendritic spine initiation and dendritic branching in Purkinje cells in the cerebellum. Behavioral analysis of MIM knock-out (KO) mice revealed defects in both learning and reverse learning, alterations in anxiety levels and reduced dominant behavior, and confirmed the previously described deficiencies in motor coordination and pre-pulse inhibition. Anatomically, we observed a decreased density of thin dendritic protrusions, enlarged brain ventricles and decreased cortical volume. Genetic studies have pointed out that genes often disturbed in neuropsychiatric disorders encode synaptic actin regulators. We selected five genes encoding different actin-regulating proteins and introduced ASD-associated de novo missense mutations into these proteins. These mutations changed protein localization: mutant α-actinin-4 localized less to dendritic spines, whereas mutant SWAP-70 and SrGAP3 localized more to dendritic spines. 
Among the wild-type proteins studied, only α-actinin-4 expression caused a significant change in dendritic spine morphology by increasing mushroom spine density and decreasing thin spine density. We hypothesized that mutations associated with ASD shift dendritic spine morphology from mushroom to thin spines. An M554V mutation in α-actinin-4 (ACTN4) resulted in the expected shift in dendritic spine morphology by increasing the density of thin spines. In addition, we observed a trend toward higher thin spine density with mutations in myosin IXb and SWAP-70. Myosin IIb and myosin IXb expression increased the proportion of inhibitory synapses in spines. The expression of mutated myosin IIb (Y265C), SrGAP3 (E469K), and SWAP-70 (L544F) induced variable changes in inhibitory synapses.
  • Metsäniitty, Mari (Helsingin yliopisto, 2019)
    Assessment of an individual’s age has important applications in forensics. Gathering reliable reference data for forensic age estimation from developing countries may be difficult. The aims were 1) to analyse the validity of the Greulich and Pyle method (1959) and other skeletal and dental methods; 2) to analyse forensic age assessments of asylum seekers in Finland, and the Finnish legislation; 3) to compare a newly created Somali dental development model (SM) for the lower left permanent teeth (PT) 31 to 37 with the Willems et al. model (WM; 2001); and 4) to study whether adding information on the development of the third molars (TM) to that of the PT increases the accuracy of age assessment of young Somalis. Dental and skeletal radiographic age-assessment methods were compared using Finnish child victim data (N=47). Information on forensic age assessment was collected from Finnish legislation texts, EU statistics, and public statistics by the immigration authorities on asylum seekers in Finland. Forensic age assessments performed in Finland in 2015 were analysed. The dental development of Somalis born and living in Finland was analysed, staging the PT according to Demirjian et al. (1973) and the TM according to Köhler et al. (1994). First, both the SM and the WM for PT were validated on 635 Somalis aged 4–18 years. Secondly, the age prediction performances of PT and TM development were tested separately and combined on 803 Somalis aged 3–23 years, using a Bayesian approach. Of the compared dental and skeletal methods, the development of PT showed the smallest deviation from chronological age. In 2015, 149 asylum seekers, originating most often from Afghanistan, Iraq, and Somalia, were assessed for age using methods authorised by Finnish legislation. Comparing the performances of the WM and the SM, small but statistically significant differences in mean error were detected: -0.07 years in males and 0.16 years in females. 
The approach combining PT and TM predicted age with the highest accuracy. In conclusion, dental methods, except those using only TM, surpass skeletal methods in accuracy. The current Finnish legislation on forensic age assessment has been successfully implemented in Finland. In age assessment, the WM performs well for Somali children. Age prediction performance improves when the information from PT and TM is combined, especially in 12- to 15-year-olds, when both PT and TM are still developing.
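The Bayesian combination of the two dental indicators described above rests on a standard step: multiply the likelihoods of the observed permanent-tooth and third-molar stages over a grid of candidate ages and normalise. The sketch below illustrates only that mechanism; the Gaussian likelihood curves and their parameters are invented for illustration and are not the thesis's Somali reference data.

```python
import math

# Candidate ages (years) and a uniform prior over them.
ages = list(range(3, 24))
prior = [1.0 / len(ages)] * len(ages)

def stage_likelihood(age, mean, sd):
    """P(observed development stage | age); a made-up Gaussian stand-in
    for reference-data likelihoods (hypothetical parameters)."""
    return math.exp(-0.5 * ((age - mean) / sd) ** 2)

# Likelihood of the observed PT stage and TM stage at each candidate age.
lik_pt = [stage_likelihood(a, 13, 2.0) for a in ages]  # permanent teeth
lik_tm = [stage_likelihood(a, 14, 3.0) for a in ages]  # third molars

# Assuming PT and TM development are conditionally independent given age:
# posterior(age) ∝ prior(age) * P(PT stage | age) * P(TM stage | age)
unnorm = [p * lp * lt for p, lp, lt in zip(prior, lik_pt, lik_tm)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Maximum a posteriori age estimate.
map_age = ages[posterior.index(max(posterior))]
```

Because the combined posterior is narrower than either single-indicator posterior, adding TM information helps most in the age range where both indicators are still changing, which matches the 12- to 15-year-old finding above.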
  • Adam, Magdy (Helsingin yliopisto, 2019)
    The impact of the peripherally selective α2-adrenoceptor antagonist vatinoxan on selected pharmacodynamic and pharmacokinetic properties of two selective α2-adrenoceptor agonists, medetomidine and dexmedetomidine, was investigated in sheep. Moreover, certain interactions between vatinoxan and atipamezole, a specific α2-adrenoceptor antagonist, were evaluated. The initial objective of this study was to identify a dose of vatinoxan that would best mitigate the undesirable cardiopulmonary changes produced by intramuscular (IM) medetomidine-ketamine in sheep. Specifically, three doses of vatinoxan (150, 300 and 600 µg/kg) or saline were combined in the same syringe with medetomidine (30 µg/kg) and ketamine (1 mg/kg) and given IM. Systemic hemodynamics, arterial blood gas tensions, clinical sedation and plasma drug concentrations were compared, both before and after reversal with IM atipamezole (150 µg/kg). The middle dose of vatinoxan (300 µg/kg), which appeared optimal among the doses tested, was then added to medetomidine (30 µg/kg) and co-administered IM, followed by atipamezole for reversal. Last, the influence of intravenous pre-treatment with vatinoxan on dexmedetomidine-induced cardiopulmonary alterations was investigated in sevoflurane-anesthetized sheep. Following concomitant IM administration, vatinoxan dose-dependently attenuated some of medetomidine’s cardiopulmonary side effects. Vatinoxan did not significantly affect the level of sedation or the plasma concentrations of the drugs when ketamine was included in the same syringe. Conversely, vatinoxan significantly increased the plasma concentrations of medetomidine, and accelerated the onset and intensified the degree of sedation compared with the agonist alone. Moreover, recoveries after atipamezole reversal were more complete in the presence of vatinoxan. No deleterious interactions were noted between vatinoxan and atipamezole. 
Pre-treatment with vatinoxan prevented all dexmedetomidine-induced pulmonary alterations in sheep anesthetized with sevoflurane. In conclusion, vatinoxan alleviated or prevented the unwanted cardiopulmonary effects of (dex)medetomidine by blocking peripheral α2-adrenoceptors. Presumably, when co-administered IM in the same syringe, vatinoxan accelerated the absorption of medetomidine and increased its concentration in blood, which resulted in faster and more intense sedation than when the agonist was used alone. Vatinoxan also decreased later exposure to dexmedetomidine, which appeared to improve atipamezole’s efficacy in reversing both the central and peripheral effects of the agonist.
  • Kantosalo, Anna (Helsingin yliopisto, 2019)
    Human-computer co-creativity examines creative collaboration between humans and artificially intelligent computational agents. Human-computer co-creativity researchers assume that instead of using computational systems to merely automate creative tasks, computational creativity methods can be leveraged to design computational collaborators capable of sharing creative responsibility with a human collaborator. This has the potential to extend both human and computational creative capability. This thesis focuses on the case of one human and one computational collaborator. More specifically, it studies how children collaborate with a computational collaborator called the Poetry Machine in the linguistically creative task of writing poems. The thesis investigates three topics related to human-computer co-creativity: the design of human-computer co-creative systems, their evaluation, and the modelling of human-computer co-creative processes. These topics are approached from two perspectives: an interaction design perspective and a computational creativity perspective. The interaction design perspective provides practical methods for the design and evaluation of interactive systems, as well as methodological frameworks for analysing design practices in the field. The computational creativity perspective, in turn, provides a theoretical view of the evaluation and modelling of human-computer co-creativity. The thesis itself consists of five papers. It starts with an analysis of the interaction design process for computational collaborators. The design process is examined through a review of case studies and a thorough description of the design process of the Poetry Machine system, described in Paper I. 
The review shows that several researchers in the field have adopted a user-centered design approach, but some good design practices, including the reporting of design decisions, iterative design and early testing with users, are not yet followed to the best standards. After illustrating the general design process, the thesis examines different approaches to the evaluation of human-computer co-creativity. Two case studies are conducted to evaluate the usability of, and user experiences with, the Poetry Machine system. The first evaluations are described in Paper II; they produced useful feedback for developing the system further. The second evaluation, described in Papers III and IV, investigates specific metrics for evaluating the co-creative writing experience in more detail. To promote the accumulation of design knowledge, special care is taken to report practical issues related to evaluating co-creative systems, including, for example, issues related to formulating suitable evaluation tasks. Finally, the thesis considers modelling human-computer co-creativity. Paper V approaches modelling from a computational creativity perspective, by extending the creativity-as-search paradigm to co-creative systems. The new model highlights specific issues for interaction designers to be aware of when designing new computational collaborators.
  • Wahlman, Lumi-Pyry (Helsingin yliopisto, 2019)
    Among all models of inflation, Higgs inflation stands out for its minimalism. In Higgs inflation, the Standard Model Higgs boson drives the expansion of spacetime. The properties of the Higgs boson are known from collider experiments, and the only new ingredient is a non-minimal coupling of the Higgs boson to gravity. There is no need to add any new particles, and the non-minimal coupling is the only free parameter of the model. While the predictions of Higgs inflation agree with observations at the classical level, loop corrections to the Higgs self-potential and the gravitational action complicate the picture. From the renormalisation group equations of the Standard Model it is known that the Higgs self-coupling decreases as the energy scale increases. Significant running at the scale of inflation can spoil the flat plateau of tree-level inflation. It is also known that loop corrections to gravity will destabilise pure Higgs inflation. There is also another fundamental source of uncertainty: the gravitational degrees of freedom. In Higgs inflation, the spacetime metric is usually taken to be the only gravitational degree of freedom, but this need not be the case. In the Palatini formulation of General Relativity, both the metric and the connection are independent degrees of freedom. In the case of Higgs inflation, these two approaches lead to physically inequivalent theories. This thesis focuses on the differences between Higgs inflation in the metric and in the Palatini formulation. First, we show that the metric perturbations must be quantised if the Higgs boson is the inflaton. Then we consider loop corrections to the Higgs self-coupling, and find that the tensor-to-scalar ratio is smaller in the Palatini formulation. We also consider dimension-four correction terms in the gravitational action and find a similar effect on the tensor-to-scalar ratio. There is no clear theoretical indication of how to choose the gravitational degrees of freedom. 
Hence it is important to be able to differentiate between the different choices by observations. We find that the metric and Palatini formulations of General Relativity have distinct cosmological signatures, which can be tested with next-generation experiments. If a non-zero tensor-to-scalar ratio is detected, we can rule out Higgs inflation in the Palatini formulation.
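For orientation, the non-minimal coupling mentioned above is conventionally written as an extra term multiplying the Ricci scalar in the action; the form below is the standard textbook one (a sketch in common conventions, not necessarily the thesis's exact notation):

```latex
S = \int \mathrm{d}^4 x \,\sqrt{-g}\,
    \left[ \frac{M_\mathrm{P}^2 + \xi h^2}{2}\, R(g,\Gamma)
    - \frac{1}{2}\, g^{\mu\nu} \partial_\mu h\, \partial_\nu h
    - \frac{\lambda}{4} \left( h^2 - v^2 \right)^2 \right]
```

Here $h$ is the Higgs field, $\xi$ the non-minimal coupling and $M_\mathrm{P}$ the Planck mass. In the metric formulation the connection $\Gamma$ is the Levi-Civita connection of $g$, while in the Palatini formulation it is varied independently; because $R$ depends on $\Gamma$, the two formulations give physically inequivalent theories, which is the source of the distinct predictions (such as the tensor-to-scalar ratio) discussed above.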
  • Äyräväinen, Leena (Helsingin yliopisto, 2019)
    Background. Patients with rheumatoid arthritis (RA) suffer from an autoimmune disease with an increased susceptibility to extra-articular inflammation. The purpose of this study was to clarify oral health in patients with early (ERA) and chronic (CRA) stages of RA. 53 ERA and 28 CRA patients were recruited to this prospective follow-up study at Helsinki University Hospital between 2005 and 2014. ERA patients were naïve to conventional disease-modifying antirheumatic drugs (cDMARDs). CRA patients had a long history of RA and currently an inadequate response to cDMARDs; they were starting biologic DMARDs, mostly combined with cDMARDs. A group of 43 control subjects, matched for age, gender and place of residence, was included. Methods. Dental and medical examinations were conducted twice (mean follow-up 16 months) in RA patients and once in controls. Dental examinations included evaluation of periodontitis, prevalence of periodontopathic bacteria in plaque samples, the salivary and serum inflammatory biomarkers MMP-8, TIMP-1 and IL-6, saliva flow, Decayed Missing Filled Teeth (DMFT), Decayed Missing Filled Surfaces (DMFS) and the Total Dental Index. Dental data also comprised bite-wing and tomographic radiographs. Medical examinations consisted of clinical rheumatological status by the disease activity score DAS28 (28-joint count), total number of swollen (66) and tender (68) joints, blood tests [rheumatoid factor (RF), antibodies against cyclic citrullinated peptide (CCPAb), C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), antinuclear antibody (ANA) and antibodies for anti-SSA/SSB (anti-Ro/La) and ribonucleoprotein (RNPAb)], radiographs of hands and feet, and general function by the Health Assessment Questionnaire (HAQ). Results. At baseline, RA patients had significantly more periodontitis: 78.8% of ERA and 85.7% of CRA patients vs. 44.1% of controls suffered from periodontitis (p=0.001). 
Periodontal findings were more common in RA patients than in controls. Antirheumatic medication seemed to have no influence on periodontal parameters. The salivary inflammatory biomarker MMP-8 was associated with periodontal parameters. MMP-8 (p=0.010) and IL-6 (p=0.010) in saliva were significantly increased in ERA patients compared with CRA patients and controls at baseline, while MMP-8 and IL-6 in serum were significantly elevated in CRA patients during the study. Salivary MMP-8 and the MMP-8/TIMP-1 ratio were associated with the Periodontal Inflammatory Burden Index (PIBI) in CRA patients at baseline (MMP-8: p<0.001; MMP-8/TIMP-1: p<0.001) and after follow-up (MMP-8: p=0.002; MMP-8/TIMP-1: p=0.003). A similar association of salivary MMP-8 and the MMP-8/TIMP-1 ratio with PIBI was also observed in controls (MMP-8: p=0.010; MMP-8/TIMP-1: p=0.010). The Total Dental Index (TDI) was significantly elevated in ERA and CRA patients vs. controls [ERA: 2 (2-3); CRA: 2 (1-3); controls: 1 (1-3), p=0.045]. RA disease activity (DAS28) was positively associated with DMFT (p=0.002) and DMFS (p=0.001) in CRA patients at baseline and after follow-up (DMFT: p=0.001; DMFS: p=0.001), while in ERA patients such an association was observed after follow-up (DMFT: p=0.016; DMFS: p=0.038). Conclusions. RA patients, even at the early stage of the disease, had more periodontitis than control subjects. This was reflected also in elevated salivary inflammatory biomarker (MMP-8, IL-6) levels. Furthermore, DMFT and DMFS correlated positively with RA disease activity in CRA patients throughout the study.
  • Kantola, Tuula (Finnish Society of Forest Science, 2019)
    Climate change is amplifying forest disturbances, especially those caused by insect pests. In addition to native species, alien insects are threatening forest health, ecosystem sustainability, and economic returns. Uncertainties related to insect pest infestations are increasing, along with the risk of high impacts. There is a high demand for accurate and cost-effective methods for forest health monitoring to prevent, control, and mitigate the various negative impacts, as well as to support decision-making. Current information needs for efficient forest management are complex and extensive, and the required quality cannot be met with traditional forest inventory methods. Forest information should be up to date and available across spatial and temporal scales. The developing fields of remote sensing and geographical information systems provide new means for various forest monitoring tasks. However, disturbance monitoring, especially of insect pests, poses an extra challenge and increased uncertainty compared with other forest monitoring tasks. With new approaches, valuable information on disturbances can be derived for the evaluation of insect-induced forest disturbance at reasonably high accuracy and with a reduced amount of fieldwork. This dissertation aims towards improved forest health monitoring. Insect-induced disturbances from the tree level to larger areas were evaluated in six sub-studies. Different remote sensing sensors and approaches, as well as ecological niche modeling, were employed in disturbance evaluation. The species studied include native and invasive insect pests. In the context of recent research, issues specific to insect disturbance monitoring are discussed. The pattern, frequency, scale, and intensity of insect infestations vary depending on the pest and landscape in question, affecting disturbance detection and impact evaluation. Sensors, platforms, and modeling methods have to be chosen accordingly. 
Environmental features, such as topography, and level of landscape fragmentation give restrictions to the method selection, as well as to the appropriate spatial resolution. Importance of varying information is also affected by the scale and resolution of investigation. Timing of data acquisition is crucial. Early detection and timely management operations are often the only way to mitigate insect outbreaks. Moreover, amount and accuracy of auxiliary information, including forest inventory data, and disturbance history, differ between countries and continents. Forest policies and practices differ between regions affecting selection of usable data sets and methods. Forest health monitoring should be included into forest monitoring systems for timely disturbance detection, accurate monitoring, and impact evaluation. Higher and lower spatial resolution remote sensing should be combined over varying spatial ranges and modeling techniques incorporated for flexible and cost-efficient monitoring over a gradient of different forest ecosystems, climatic conditions, and forest inventory and management practices. Open access remote sensing archives with high temporal resolution could facilitate continuous monitoring of wide forest areas. Developing satellite technology may respond to these needs. Plenty of valuable research on forest health monitoring exist. However, considerably more research is still needed before comprehensive monitoring systems can be adopted at the operational level. Development of remote sensing and modeling techniques, as well as improving computational power and databases facilitate continuous improvement of forest health management practices.
  • Koski, Aapo (Helsingin yliopisto, 2019)
    Large-scale software-centric information system projects in the public sector are often based on public tenders, in which a request for quotation (RFQ) process is utilized. The systems in these cases are typically procured through a long and energy-consuming process, in which the procuring organization tries its best to determine, detail, and document the need and then, based on the received proposals, tries to select the best candidate to implement a solution that fulfills the need. In the past, these RFQ-based procurement processes resulted in waterfall-type development processes, in which considerable time was again spent constructing the information system before it was finally ready and accepted for operative use. The described approach has numerous shortcomings, such as the strong dependency on up-front design and the implicit assumption that the need can be communicated effectively with tendering documents. Another major problem is the unvoiced assumption that the original need does not change significantly during the process. As we have entered the era of agility, incremental, iterative, and customer-involving approaches have found their way into RFQ-based tenders. The introduction of agility has the potential to solve some of the problems encountered in traditional RFQ processes, but at the same time, new challenges surface. Simultaneously, many organizations have reassessed their position as both the information system user and the system's maintainer and are looking into the provision of the needed systems or software as a service (SaaS). This thesis is based on experiences of mission-critical information system projects in an industrial setting, based on public tendering processes and provided as a service. It seems that the traditional RFQ-based process, even with agile ways of working, does not provide appropriate means to deliver high-quality mission-critical systems. The SaaS model is one solution, avoiding many of these shortcomings by enabling agility. However, providing a service is far different from traditional information system development and deployment and requires new user- and customer-facing skills. In addition to SaaS, other improvements, such as changes to the RFQ process, or even to the law governing public tenders, would be required to succeed in information system projects in the future.
  • Kinnunen, Heini (Helsingin yliopisto, 2019)
    This dissertation provides an analysis of the uses of the concept of the public sphere in the works of three feminist political and social theorists: Nancy Fraser, Iris Marion Young, and Seyla Benhabib. My main argument is that Young, Benhabib, and Fraser all apply the concept of the public sphere as a mediating tool for taking distance from, and building alliances with, the Left tradition in the context of feminist debates since the second wave. Benhabib's, Fraser's, and Young's discussions of the concept of the public sphere have been analyzed and confronted by many scholars. There are, however, no comprehensive, contextualized, or detailed analyses of the uses of the concept in their works, and my study is designed to provide one. I argue that the negotiation of the Left tradition is a crucial and prevailing element of Benhabib's, Fraser's, and Young's feminist argumentation and of their uses of the concept of the public sphere. In my analysis of the three theorists' texts I have found four distinct expressions of this negotiation. Firstly, I argue that during the 1980s the concept of the public sphere starts to emerge as a tool for renegotiating the Marxist tradition faced with its Critical, postmodern, and socialist feminist critics. Secondly, in the discussions on the democratic and political role of civil society before and after the Cold War, the concept of the public sphere figures as a tool to take distance from authoritarian expressions of political power but also from the uncritical valorization of the private market. Thirdly, the concept of the public sphere figures as a tool to negotiate the so-called shift from the politics of (social) equality to (cultural) difference in social and political theorizing and practice from the 1990s onward. Finally, I argue that the concept of the public sphere has a central role both in broadening the scope of “the political” and in defending its limits and distinct features for both feminist and socialist movements. Taken together, the analysis of the uses of the concept of the public sphere provides a window onto various debates within feminist political and social theorizing and brings out a common thread that runs through all these debates. Taking distance from, holding on to, and reinterpreting elements of the Left tradition all figure in the feminist debates in which the concept of the public sphere is used. At another level, the concept of the public sphere is involved in and interpreted differently across various discussions, and it is precisely these different debates that construct the concept of the public sphere and bring out previously unnoticed aspects of it.
  • Lan, Hangzhen (Helsingin yliopisto, 2019)
    Traditionally, sampling and sample preparation can occupy up to 70-80% of the total analysis time in an analytical process, which calls for state-of-the-art technologies to reduce the time and labor needed. In addition, authorities and researchers increasingly demand more sensitive and reliable analytical methods. Solid-phase microextraction (SPME) Arrow and in-tube extraction (ITEX) techniques meet these requirements by combining sampling and sample preparation procedures into one, resulting in decreased total analysis time and improved accuracy without any need for organic solvents. The type and amount of the sorbent phase immobilized on/in the SPME Arrow and ITEX devices, the volume of the system, and the affinity towards the targeted analytes are the four main parameters that affect the sensitivity and capability of an analytical method. The main goals of this thesis were to develop new materials, useful as extraction sorbents in SPME Arrow and ITEX devices, and to clarify their applicability in semi-automated and automated sampling and/or extraction systems for the analysis of volatile organic compounds (VOCs) in environmental, food, and biogenic samples. Atomic layer deposition and molecular deposition-conversion methods were employed to directly fabricate iron-, aluminum-, and zirconium-based metal-organic framework (MOF) SPME Arrow coatings. The efficiency of these hydrophobic MOF coatings in isolating hazardous organic compounds from wastewater was evaluated. SPME Arrows were also coated with acidified zeolitic imidazolate framework-8 (A-ZIF-8), ordered mesoporous silicas (OMSs), and functionalized OMSs with different mesopore sizes and multidimensional pore-channel structures by a dipping method. The extraction selectivities of these materials were systematically studied, and the dipped coatings were reproducible and reusable. The applicability of electrospun and electroblown nanofibers as ITEX packing materials was also evaluated. Polyacrylonitrile (PAN) nanofibers, with good gas permeability, thermal stability, and excellent affinity for VOCs, proved a good alternative to commercial adsorbents as ITEX packing materials. A fully automated dynamic PAN-ITEX system, coupled on-line to gas chromatography-mass spectrometry (GC-MS) for continuous analysis of VOCs in air, was developed for long-term campaigns. The applicability of an aerial drone as a carrier for SPME Arrow and ITEX devices was also tested for passive and active air sampling in the field. The effects of the accessories used in the sampling device, drone flight displacement, and sampling location on the sampling results were evaluated. The results demonstrated the great potential of the new materials as extraction sorbents for SPME Arrow and ITEX: they provided better or similar performance in terms of extraction capacity, selectivity, and kinetics when compared with commercial materials for the enrichment and isolation of analytes from various sample matrices. Furthermore, the developed SPME Arrow and on-line dynamic ITEX methods offered flexibility and versatility for the analysis of VOCs. The drone was an ideal platform for miniaturized passive and active air sampling in remote and difficult-to-access regions.
  • Marwah, Veer Singh (Helsingin yliopisto, 2019)
    Toxicology is the scientific pursuit of identifying and classifying the toxic effects of substances, as well as exploring and understanding the adverse effects of toxic exposure. Modern toxicological efforts have been driven by human industrial production of engineered substances, supported by advanced interdisciplinary scientific collaborations. These engineered substances must be carefully tested to ensure public safety. This task is now more challenging than ever with the employment of new classes of chemical compounds, such as engineered nanomaterials. Toxicological paradigms have been redefined over the decades to be more agile, versatile, and sensitive. On the other hand, the design of toxicological studies has become more complex, and the interpretation of the results more challenging. Toxicogenomics offers a wealth of data for estimating gene regulation by inspecting the alterations of many biomolecules (such as DNA, RNA, proteins, and metabolites). The response of functional genes can be used to infer the toxic effects on the biological system that result in acute or chronic adverse effects. However, the dense data from toxicogenomics studies are difficult to analyze, and the results are difficult to interpret; because of these drawbacks, toxicogenomic evidence is still not completely integrated into the regulatory framework. Nanomaterial properties such as particle size, shape, and structure add complexity and unique challenges to nanotoxicology. This thesis presents efforts towards the standardization of toxicogenomics data by showcasing the potential of omics in nanotoxicology and by providing easy-to-use tools for the analysis and interpretation of omics data. This work explores two main themes: i) omics experimentation in nanotoxicology and investigation of nanomaterial effects by analysis of omics data, and ii) the development of analysis pipelines as easy-to-use tools that bring advanced analytical methods to general users. In this work, I explored a potential solution that can ensure effective interpretability and reproducibility of omics data and the related experimentation, such that an independent researcher can interpret them thoroughly. DNA microarray technology is a well-established research tool for estimating the dynamics of biological molecules with high throughput. The analysis of data from these assays presents many challenges, as the study designs are quite complex. I explored the challenges of omics data processing and provided bioinformatics solutions to standardize this process. The responses of individual molecules to a given exposure are only partially informative, and more sophisticated models, disentangling the complex networks of dynamic molecular interactions, need to be explored. An analytical solution is presented in this thesis to tackle the challenge of producing robust interpretations of molecular dynamics in biological systems. It allows the exploration of substructures in molecular networks underlying mechanisms of molecular adaptation to exposures. I also present a multi-omics approach to defining the mechanism of action for human cell lines exposed to nanomaterials. All the methodologies developed in this project for omics data processing and network analysis are implemented as software solutions designed to be easily accessible also to users with no expertise in bioinformatics. Our strategies were also developed in an effort to standardize omics data processing and analysis and to promote the use of omics-based evidence in chemical risk assessment.
  • Mgbeahuruike, Eunice Ego (Hansaprint, 2019)
    Piper guineense is a medicinal plant with wide application in African traditional medicine, where it is often used in the treatment of bacterial and fungal infections. It is an economic plant with numerous health benefits that is also consumed regularly as a functional food. The fruits, leaves, and seeds are used as spices and flavouring agents in commercial food preparations in West Africa. The extracts are also used in the treatment of various conditions ranging from diarrhea, intestinal diseases, rheumatoid arthritis, bronchitis, cough, stomach ache, and asthma to febrile convulsions, fever, and mental disorders. There is also recent interest in the biological and pharmacological properties of its bioactive compounds, such as piperine, the main alkaloid constituent of P. guineense, which is responsible for its pungent aroma. Based on these numerous ethnobotanical, traditional, and economic uses of the plant, it became of interest to evaluate the bioactive compounds present in the extracts and to further screen the extracts against selected human pathogenic bacterial and fungal strains, so as to ascertain the efficacy of the extracts and their compounds as potential antibacterial and antifungal lead compounds. Microbial resistance to the currently available antibiotics is a global problem that has resulted in a constant search for new antimicrobial drugs with strong efficacy and low cost, and there is a need to screen the extracts and bioactive compounds of P. guineense for possible lead compounds for antibacterial and antifungal drug discovery. In this study, a method was first developed for the chemical profiling and the qualitative and quantitative analysis of P. guineense extracts, and a good mobile phase composition was developed for the high-performance liquid chromatography (HPLC) and thin-layer chromatography (TLC) analysis of the extracts. The effect of the chamber type on the separation was also evaluated using an unsaturated horizontal chamber in sandwich configuration, a horizontal chamber in non-sandwich configuration, and a twin-trough vertical chamber. Furthermore, the in vitro antibacterial activity of the extracts was evaluated using eight pathogenic Gram-positive and Gram-negative bacterial strains. An ethnobotanical survey was also conducted on the use of P. guineense extracts in the treatment of fungal infections in West African traditional medicine. The study area was Imo State, South-Eastern Nigeria, where P. guineense is widely used by traditional healers for the treatment of fungal infections, which are common among those suffering from HIV and AIDS. The aim of the survey was to document the various methods of preparation and administration of these extracts for the treatment of fungal diseases. Building on this ethnobotanical approach, the leaf and fruit extracts of the plant were further tested against five fungal strains, including Cryptococcus neoformans, which causes meningitis in immunocompromised individuals. HPLC and TLC methods were developed for the analysis of P. guineense extracts with emphasis on the shortest analysis time and minimal solvent consumption, and the best mobile phase, giving favourable resolution of bands, was found to be toluene:ethyl acetate (PS 6-4, corresponding to 60:40 % v/v). The results of the TLC analysis showed that the developing chamber conditions do not affect TLC separation efficacy in the analysis of P. guineense extracts. The extracts were active against the tested bacterial and fungal strains, with minimum inhibitory concentration (MIC) values ranging from 19 to 2500 µg/mL.
  • Hannula, Jani (Helsingin yliopisto, 2019)
    Research-based development of mathematics teacher education aiming at enhancing teacher knowledge has increased internationally during the 21st century. Such development is needed, as the research literature shows that teacher knowledge is associated with teaching practices and student achievement. Since the 1980s, the literature on teacher knowledge has been based on the distinction between subject matter knowledge and pedagogical content knowledge. This distinction separates pure mathematical knowledge from knowledge of learning and teaching mathematics. During the latest decades, social constructivism has been the major framework for developing teaching practices. One teaching strategy using social constructivism as a referent is problem-based learning, which emphasises solving authentic problems, co-operation, and open inquiry. In this doctoral dissertation, I report on a design-based research project that focused on strengthening the interplay between subject matter knowledge and pedagogical content knowledge in the context of Finnish mathematics teacher education. The aim of the study was to add to knowledge of the possibilities and challenges in enhancing teacher knowledge in the context of problem-based learning, as well as to design a novel teacher-education course. The theory of social constructivism and the model of problem-based learning were utilised to guide the instructional design of the course. Conceptualisations of teacher knowledge and beliefs, as well as of mathematical thinking, were used as domain-specific theories. To support the design process, two theoretical problem analyses and one empirical problem analysis were conducted. Additionally, three case studies related to different phases of the design process were conducted, each examining one of the three implementations of the course developed in the study. As is typical for design-based research, the results of the study can be divided into three viewpoints: 1) domain-specific theories, 2) the design process, and 3) the design artefact. The participants of the study (N=83) were pre-service teachers mainly studying for a master's degree in mathematics education. The data included interviews, questionnaires, and participants' course tasks. Different forms of qualitative content analysis were used as the main method of data analysis. From the standpoint of domain-specific theories, the study supports prior research literature showing that pre-service teachers perceive a difference between the contents and methodology of university-level mathematics and those of school mathematics. This gap is evident both in the secondary–tertiary transition and later in developing teacher knowledge. The results of this study emphasise that one central aspect of developing mathematics teacher education is taking into account the connections between informal mathematical thinking, emphasised at school, and formal mathematical thinking, emphasised at university. With respect to the research process, the major findings of this study concern the differences in the teacher knowledge produced by the participants in the different case studies. During the first implementations of the course, pre-service teachers concentrated largely on the development of pedagogical content knowledge in their course tasks. This emphasis supports prior research showing that pre-service teachers stress the role of pedagogical knowledge in the teaching profession. However, during the last implementation of the course, the teacher knowledge produced by the pre-service teachers was more aligned with the intended learning outcomes of the course. Additionally, the process gave insight into pre-service teachers' perceptions of the problem-based learning approach. In line with the prior research literature, these perceptions highlighted the possibilities of the approach in enhancing self-directed learning and co-operation; however, the participants also reported challenges related to workload and the need for instruction. With relation to the design artefact, the study stresses six central characteristics that need to be taken into account in developing teacher knowledge in line with social constructivism: 1) supporting discussion and co-operative knowledge building; 2) supporting a student-centred approach; 3) solving authentic and interesting problems; 4) reflective learning and formative assessment; 5) research-based teaching; and 6) connecting the knowledge of mathematics as a discipline with the knowledge of mathematics as a school subject. The results of the study imply that a problem-based learning approach supports several pedagogical consequences of social constructivism, such as self-directed learning and the use of diverse representations. Additionally, a problem-based learning approach can support the development of specialised content knowledge, which includes knowledge of different representations of mathematical objects and of applications of mathematics in science and technology. On the other hand, the development of horizon content knowledge, which includes, for instance, knowledge of the relationships between different mathematical concepts, seems challenging in problem-based learning. This study provides possibilities for further research and development both within and outside of the chosen research paradigm. Within the paradigm, further research is needed on the meaning of, the development of, and pre-service teachers' conceptions of horizon content knowledge. On the other hand, the research can be extended, for instance, by focusing on affective factors of mathematical thinking in contexts similar to this study. The design artefact is transferable to other contexts, such as class teacher education. Key words: teacher knowledge, mathematical thinking, social constructivism, problem-based learning, design-based research, mathematics teacher education
  • Grotenfelt, Nora Elisabeth (Helsingin yliopisto, 2019)
    The global prevalence of gestational diabetes (GDM) is around 14%, with increasing numbers over the last decades. In 2017, 19% of all pregnancies in Finland were affected by GDM. Despite treatment, GDM is associated with several short- and long-term adverse health outcomes for both the mother and the child. The short-term effects include an increased risk of large-for-gestational-age babies, birth injuries, Caesarean sections, and neonatal hypoglycemia. The long-term consequences include an increased risk of type 2 diabetes for the mother, and an increased risk of overweight, obesity, metabolic syndrome, and type 2 diabetes for the offspring of GDM pregnancies. In recent years, several studies designed to reduce GDM have been published, but so far the results are inconclusive. Moreover, there is a shortage of studies assessing the effects of GDM prevention on the long-term health of the offspring. The aim of this study was to assess women at high risk for GDM with respect to their clinical characteristics, genetic variance, and time of GDM diagnosis, and to determine the effect of a lifestyle intervention aimed at GDM prevention on long-term offspring health outcomes. This thesis includes four studies. Studies I and II are substudies, and Study II is a secondary study, of the RADIEL GDM prevention trial, conducted in 2004-2008, assessing maternal outcomes until delivery. Study III includes the assessment of neonatal data. In the original RADIEL trial, a total of 720 women with a body mass index (BMI) ≥ 30 kg/m2 and/or a history of GDM in a previous pregnancy were enrolled either before conception or in early pregnancy and allocated either to a lifestyle intervention or to a conventional-care group. The intervention focused on both diet and physical activity. Study IV is a substudy of the RADIEL five-year follow-up study (2013-2017), to which all participants of the original RADIEL trial with a viable singleton pregnancy and at least one study visit during pregnancy were invited five years after delivery, along with their children. We detected pronounced differences in GDM occurrence between participants with different clinical characteristics. In addition, the effect of the intervention on GDM occurrence differed according to genetic variance. The findings suggest that GDM is a heterogeneous disorder, consisting of subgroups that differ markedly in both phenotype and genotype. Our lifestyle intervention, delivered by trained nurses to a heterogeneous group of high-risk women enrolled either in early pregnancy or before conception, was not associated with positive effects on metabolic health in the offspring. The value of these findings lies in the increased knowledge of GDM heterogeneity, which is useful for improving the screening of GDM and the targeting and cost-effectiveness of future interventions and treatment.
  • Rantala, Antti (Helsingin yliopisto, 2019)
    Supermassive black holes (SMBHs) are ubiquitous in massive galaxies in the local Universe. In the standard cosmological model, galaxies grow in a process of hierarchical merging and through the accretion of matter from the intergalactic medium. Correspondingly, SMBHs grow by accreting gas from their surroundings and by merging with other SMBHs. Thus, present-day SMBHs are expected to have a complicated past merger history. Merging SMBHs leave imprints both on the central regions of their host galaxies and on the gravitational wave background. In this thesis, which consists of four peer-reviewed publications, we investigate SMBH binary dynamics in realistic galactic environments and study the effect of merging SMBHs on their early-type host galaxies. For this research, a novel numerical simulation code, KETJU, was developed. The first two publications present the simulation code. KETJU combines the widely used galactic-scale simulation code GADGET-3 with AR-CHAIN, an extremely accurate small-scale integrator for SMBH dynamics. The numerical methods used in KETJU and their practical implementation are thoroughly presented. In addition, we validate the performance of KETJU in comparison simulations against direct N-body codes used in the literature, and we demonstrate the code's energy conservation and parallel scaling behaviour. We study the effect of the chosen stellar mass resolution on the evolution of SMBH binaries in a series of galaxy merger simulations and find that the dependence of the SMBH binary hardening rate on the mass resolution of the simulation is weaker if more realistic multi-component galaxy initial conditions are used. Finally, we show that with a proper treatment of SMBH dynamics in galactic-scale simulations, SMBH mergers are delayed by a few hundred million years compared with the SMBH merger criteria commonly used in the literature. The last two articles study the formation of large stellar cores in massive elliptical galaxies. Using KETJU, we run a series of early-type galaxy merger simulations with SMBHs to investigate the core scouring process responsible for creating cores in massive galaxies. We systematically study the effect of the initial SMBH mass and the initial stellar density profile slope on the surface brightness, the velocity anisotropy profiles, and the core scaling relations of the merger remnant. Throughout the two studies we find that cuspier initial stellar density profiles provide a better match to the final observed properties of core elliptical galaxies. We show that elliptical galaxies built up in a series of minor mergers have larger cores than major merger remnants, as expected, but on the other hand have less anisotropic velocity distributions in their core regions. Finally, we present a simple merger model which for the first time simultaneously produces an early-type galaxy with a flat central core, a tangentially biased central stellar population, and kinematically decoupled central regions. These properties of cored early-type galaxies have previously been difficult to explain within a single formation scenario.
  • Quirin, Marie Ann Christine Tania (Helsingin yliopisto, 2019)
    My doctoral thesis examines the prerequisites of replication for three positive-strand RNA viruses: Chikungunya virus (CHIKV, an alphavirus), Semliki Forest virus (SFV, an alphavirus), and Flock House virus (FHV, a nodavirus). Chikungunya virus is a mosquito-borne RNA virus that causes high fever, rashes, and joint pain. Semliki Forest virus has been extensively studied as a model for understanding the replication strategies of alphaviruses because of its low pathogenicity. A characteristic feature of alphavirus replication is the formation of membranous invaginations, termed spherules, associated with the plasma membrane. Spherules act as genome factories: they are the sites of active viral replication and release nascent viral RNA strands into the cytoplasm through a bottleneck-like structure. We created a trans-replication system specific for CHIKV that is flexible and presents no danger to the researcher. In this system, the viral replicase proteins are expressed from one DNA plasmid while the RNA template is produced from a second plasmid, in mammalian cells. This allowed the study of viral replication without generating infectious particles. It also enabled the visualisation of spherules and the labelling of all viral replicase proteins with fluorescent or small immunological tags while preserving their function. Various mutations associated with noncytotoxic phenotypes were analysed, and the results showed no correlation between the level of RNA replication and cytotoxicity. Moreover, the trans-replication system was used to show that the cysteine residue of CHIKV nsP2 at position 478 is responsible for its protease activity and essential for replicase polyprotein processing; Trp479 of nsP2 also plays a vital role in RNA replication. The insect nodavirus FHV verges upon the properties of a ‘universal virus’, as it can replicate in a wide range of hosts, and only the replicase protein A is required for its replication. An efficient FHV trans-replication system was established in mammalian cells. The outer surface of the mitochondria displayed pouch-like invaginations with a ‘neck’ structure opening towards the cytoplasm. High-level synthesis of both genomic and subgenomic RNA was detected in vitro using mitochondrial pellets isolated from transfected cells, and the newly synthesized RNA was found to be of positive polarity. This system was used to investigate the capping enzyme domain of protein A, both in cells and in vitro. Mutating the most conserved amino acids of the capping domain abolished or reduced viral RNA synthesis. Surprisingly, transfection of a capped RNA template did not rescue the replication activity of the mutants. FHV and the alphaviruses show evolutionarily intriguing similarities in their replication complexes and RNA capping enzymes. The biological systems presented in this study offer valuable knowledge that could be exploited to understand the replication of other RNA viruses and open up new avenues for the elucidation of key virus-host interactions.
