Browsing by Title

  • Svahnbäck, Lasse (Helsingin yliopisto, 2007)
    Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production. This research project in environmental geology has arisen out of an observed need to be able to predict more accurately the loading of watercourses with detrimental organic substances and nutrients from already existing and planned peat production areas, since the authorities' capacity for insisting on such predictions covering the whole duration of peat production in connection with evaluations of environmental impact is at present highly limited. National and international decisions regarding monitoring of the condition of watercourses and their improvement and restoration require more sophisticated evaluation methods in order to be able to forecast watercourse loading and its environmental impacts at the stage of land-use planning and preparations for peat production. The present project thus set out from the premise that it would be possible, on the basis of existing mire and peat data, to construct estimates of the typical loading from production mires over the whole duration of their exploitation. Finland has some 10 million hectares of peatland, accounting for almost a third of its total area. Macroclimatic conditions have varied in the course of the Holocene growth and development of this peatland, and with them the habitats of the peat-forming plants. Temperatures and moisture conditions have played a significant role in determining the dominant species of mire plants growing there at any particular time, the resulting mire types and the accumulation and deposition of plant remains to form the peat. The above climatic, environmental and mire development factors, together with ditching, have contributed, and continue to contribute, to the existence of peat horizons that differ in their physical and chemical properties, leading to differences in material transport between peatlands in a natural state and mires that have been ditched or prepared for forestry and peat production. Watercourse loading from the ditching of mires or their use for peat production can have detrimental effects on river and lake environments and their recreational use, especially where oxygen-consuming organic solids and soluble organic substances and nutrients are concerned. It has not previously been possible, however, to estimate in advance the watercourse loading likely to arise from ditching and peat production on the basis of the characteristics of the peat in a mire, although earlier observations have indicated that watercourse loading from peat production can vary greatly, and it has been suggested that differences in peat properties may be of significance in this. Sprinkling is used here in combination with simulations of conditions in a milled peat production area to determine the influence of the physical and chemical properties of milled peats in production mires on surface runoff into the drainage ditches and the concentrations of material in the runoff water. Sprinkling and extraction experiments were carried out on 25 samples of milled Carex (C) and Sphagnum (S) peat of humification grades H 2.5–8.5, with moisture contents in the range 23.4–89% at the commencement of the first sprinkling, which was followed by a second sprinkling 24 hours later.
The water retention capacity of the peat was best, and surface runoff lowest, with Sphagnum and Carex peat samples of humification grades H 2.5–6 in the moisture content class 56–75%. On account of the hydrophobicity of dry peat, runoff increased in a fairly regular manner as the sample dried from 55% to 24–30%. Runoff from the samples with an original moisture content over 55% increased by 63% in the second round of sprinkling relative to the first, as they had practically reached saturation point on the first occasion, while those with an original moisture content below 55% retained their high runoff in the second round, owing to continued hydrophobicity. The well-humified samples (H 6.5–8.5) with a moisture content over 80% showed a low water retention capacity and high runoff in both rounds of sprinkling. Loading of the runoff water with suspended solids, total phosphorus and total nitrogen, and also the chemical oxygen demand (CODMn), varied greatly in the sprinkling experiment, depending on the peat type and degree of humification, but concentrations of the same substances in the two sprinklings were closely or moderately closely correlated, and these correlations were significant. The concentrations of suspended solids in the runoff water observed in the simulations of a peat production area and the direct surface runoff from it into the drainage ditch system in response to rain (sprinkling intensity 1.27 mm/min) varied c. 60-fold between the degrees of humification in the case of the Carex peats and c. 150-fold for the Sphagnum peats, while chemical oxygen demand varied c. 30-fold and c. 50-fold, respectively, total phosphorus c. 60-fold and c. 66-fold, total nitrogen c. 65-fold and c. 195-fold and ammonium nitrogen c. 90-fold and c. 30-fold. The increases in concentrations in the runoff water were very closely correlated with increases in the humification of the peat. The correlations of the concentrations measured in extraction experiments (48 h) with peat type and degree of humification corresponded to those observed in the sprinkler experiments. The resulting figures for the surface runoff from a peat production area into the drainage ditches simulated by means of sprinkling, and the material concentrations in the runoff water, were combined with statistics on the mean extent of daily rainfall (0–67 mm) during the frost-free period of the year (May–October) over an observation period of 30 years to yield typical annual loading figures (kg/ha) for suspended solids (SS), the chemical oxygen demand of organic matter (CODMn), total phosphorus (tot. P) and total nitrogen (tot. N) entering the ditches with respect to milled Carex (C) and Sphagnum (S) peats of humification grades H 2.5–8.5. In order to calculate the loading of drainage ditches from a milled peat production mire with the aid of these annual comparative values (in kg/ha), information is required on the properties of the intended production mire and its peat. Once data are available on the area of the mire, its peat depth, peat types and their degrees of humification, dry matter content, calorific value and corresponding energy content, it is possible to produce mutually comparable estimates for individual mires with respect to the annual loading of the drainage ditch system and the surrounding watercourse for the whole service life of the production area, the duration of this service life, determinations of energy content and the amount of loading per unit of energy generated (kg/MWh).
In the eight mires in the Köyhäjoki basin, Central Ostrobothnia, taken as an example, the loading of suspended solids (SS) in the drainage ditch networks, calculated on the basis of the typical values obtained here and existing mire and peat data and expressed per unit of energy generated, varied between the mires and horizons in the range 0.9–16.5 kg/MWh. One of the aims of this work was to develop means of making better use of existing mire and peat data and the results of corings and other field investigations. In this respect, combination of the typical loading values (kg/ha) obtained here for S, SC, CS and C peats and the various degrees of humification (H 2.5–8.5) with the above mire and peat data by means of a computer program for the acquisition and handling of such data would enable all the information currently available, and that deposited in the system in the future, to be used for defining watercourse loading estimates for mires and comparing them with the corresponding estimates of energy content. The intention behind this work has been to respond to the challenge facing the energy generation industry to find larger peat production areas that exert less loading on the environment, and to that facing the environmental authorities to improve the means available for estimating watercourse loading from peat production and its environmental impacts in advance. The results conform well to the initial hypothesis and to the goals laid down for the research, and should enable watercourse loading from existing and planned peat production to be evaluated better in the future and the resulting impacts to be taken into account when planning land use and energy generation. The advance loading information available in this way would be of value in the selection of individual peat production areas, the planning of their exploitation, the introduction of water protection measures and the planning of loading inspections, in order to achieve controlled peat production that pays due attention to environmental considerations.
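The comparative calculation described above is, at its core, simple arithmetic: per-event runoff and concentration figures from the sprinkling experiments are scaled by the long-term daily-rainfall distribution to an annual load (kg/ha), which is then related to the recoverable energy content of the deposit (kg/MWh). The Python fragment below is a minimal sketch of that arithmetic only; all variable names and numbers are hypothetical toy values, not data or code from the thesis.

```python
# Daily rainfall distribution for the frost-free period (May-October):
# rainfall class midpoint (mm) -> mean number of such days per year (toy values).
rain_days_per_year = {2: 60, 6: 30, 12: 12, 25: 4, 50: 1}

runoff_fraction = 0.35          # share of rainfall leaving as surface runoff (hypothetical)
ss_concentration_mg_l = 120.0   # suspended solids in runoff water (hypothetical)

# Annual suspended-solids load entering the ditches, in kg per hectare.
# 1 mm of runoff over 1 ha = 10 m^3 = 10,000 L of water.
annual_load_kg_ha = sum(
    days * depth_mm * runoff_fraction * 10_000 * ss_concentration_mg_l * 1e-6
    for depth_mm, days in rain_days_per_year.items()
)

# Loading per unit of energy generated over the production area's service life.
energy_content_mwh_ha = 10_000.0   # recoverable energy of the deposit (toy value)
service_life_years = 20
load_per_energy = annual_load_kg_ha * service_life_years / energy_content_mwh_ha
print(f"{annual_load_kg_ha:.1f} kg/ha/a, {load_per_energy:.2f} kg/MWh")
```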
  • Frias Beneyto, Rafael (Helsingin yliopisto, 2013)
    Intestinal permeability testing is the specific method for assessing a defective intestinal epithelial barrier. Intestinal permeability measurements are considered a helpful and non-invasive means of evaluating intestinal mucosal damage for scientific (in particular) and clinical purposes, and have been widely used in laboratory rodents and humans. Despite their many advantages, permeability tests have not gained widespread use as a testing option for the detection and management of canine intestinal disorders in veterinary clinical research. The main reasons for this may include the lack of an optimal biomarker for permeability testing, impracticalities involving current testing methodologies, and inconsistencies in the test results obtained by investigators using these tests. Chromium-51-labeled ethylenediamine tetra-acetic acid (51Cr-EDTA) is widely considered the most accurate intestinal permeability probe, but the use of radioactivity is a major drawback. Sugar biomarkers such as lactulose and rhamnose have been used more commonly in recent years, but they have been associated with marked inconsistencies in test results. Iohexol is a contrast medium commonly used in radiology for diagnostic purposes in human and veterinary patients, but this molecule has more recently been used successfully for the screening of gut mucosal damage in laboratory rats and humans. The main advantage of iohexol is that its use does not involve radioactivity, nor is it degraded in the intestinal lumen. Furthermore, it has the potential to be quantified by different analytical techniques. The main objective of this project was to improve the methodology of intestinal permeability tests in dogs in order to make the testing simpler, more practical and more accurate for veterinarians and researchers using this approach to investigate intestinal mucosal damage and disorders associated with a defective intestinal epithelial barrier. An additional objective was a preliminary assessment of iohexol as a novel intestinal permeability marker for use in dogs. The work consisted of preclinical comparisons of the most relevant intestinal permeability markers, including 51Cr-EDTA, lactulose and rhamnose, and iohexol, performed as both urine and blood tests in laboratory dog and rat models. In conclusion, studies on the percentage urinary recovery of 51Cr-EDTA, lactulose, and rhamnose, as well as D-xylose, 3-O-methyl-D-glucose, and sucrose after their simultaneous oral administration provided normative data for healthy adult male Beagle dogs. The analysis revealed a discrepancy in the percentage urinary recovery between 51Cr-EDTA and lactulose, suggesting that these two markers are not as equivalent as has so far been believed on the basis of previous studies in humans and cats. It was also concluded that the use of a single marker provides test results comparable to the use of two markers, as evidenced by a comparison of recovery values from 51Cr-EDTA and lactulose versus their corresponding ratios against rhamnose. This supports the hypothesis that, in contrast to the dual sugar test, the use of one inert larger probe may be sufficient for permeability testing, and the testing procedure may consequently be considerably simplified. The studies also demonstrated that the 51Cr-EDTA permeability blood test based on the collection of at least two serum or plasma specimens gives results comparable to the 6-h cumulative urine test.
The blood-based approach is much easier than the urine-based test, as it avoids the constraints associated with urine collection in dogs. Serum iohexol levels showed a clear relationship with those of 51Cr-EDTA when the two markers were administered simultaneously to Beagle dogs. When iohexol was used as an intestinal permeability probe in laboratory rats before and after the induction of a well-characterized experimental form of inflammatory bowel disease, it was also possible to discriminate clearly between healthy animals and rats with intestinal mucosal damage. The iohexol blood test can therefore be considered a promising tool for assessing canine intestinal permeability in veterinary clinical research. Nevertheless, further studies using iohexol as an intestinal permeability blood marker, particularly in diseased dogs, are warranted before firm conclusions can be drawn on the validity of this test.
  • Saunaluoma, Sanna (Helsingin yliopisto, 2013)
    Amazonian earthworks have a variety of forms and sizes, and are found in different geographical and ecological locations, indicating separate time periods, distinct cultural affiliations, and diverse functions. Because research on pre-Columbian earthworks has only recently begun, a number of basic questions concerning the function and chronology of these structures, and the ethno-cultural and socio-political relationships of the societies that created them, still remain unanswered. The main aim of this dissertation, consisting of four peer-reviewed articles and a synthesis paper, is to build new knowledge and expand on the existing, but still noticeably sparse, data on the region's pre-Columbian earthworking cultures. It proposes a hypothesis of the existence of relatively early sedentary interfluvial populations with rather organized and distinctive societal and ideological systems in the southwestern Amazon. This suggestion challenges the conventional view of ancient Amazonian peoples as non-sedentary, egalitarian in social organization, and unable to alter and manage the environment in which they lived. This dissertation presents and discusses the results of archaeological fieldwork undertaken at earthwork sites in two neighboring frontier regions in the southwestern Amazon: the region of Riberalta in Bolivia and the eastern state of Acre in Brazil. The Bolivian sites are interpreted as permanent settlements, while the Acrean earthworks were constructed principally for ceremonial purposes. The earthworks documented in the Riberalta region are structurally simpler than the ones studied in Acre and are found in slightly different locations, e.g., on high river bluffs or inland only a few kilometers from the main rivers. In Acre, the sites are located on high interfluvial plateaus near minor watercourses. The earthwork building practice prevailed in the Riberalta region from around 200 B.C. until the period of European contact, whereas the geometric earthwork tradition began earlier in Acre, around 1200 B.C., if not before. By the tenth century A.D., the regional confederation that created the geometric enclosures was already disintegrating. Even so, some sites apparently remained in use until the fourteenth century A.D. Chronologically and culturally, these earthworking peoples were formative-stage societies demonstrating emerging sedentism and evolving socio-organizational structures, and in Acre in particular, a society united by a highly developed ideological system materialized in the geometric enclosure architecture.
  • Augustin, Mona (Helsingin yliopisto, 2012)
    Myocardial infarction (MI) leading eventually to heart failure is a major cause of morbidity and mortality in Western countries. Current studies suggest that cell-based therapies can improve cardiac function after MI. Paracrine factors mediated by transplanted cells have been suggested to be the curative factors behind this effect. In the present work, we evaluated preconditioning methods for skeletal myoblasts (SM) and bone marrow-derived mesenchymal stem cells (MSC) prior to transplantation in MI. We used epicardial transplantation of cell sheets in a rat model of MI, which was induced by left anterior descending (LAD) coronary artery ligation. The preconditioning methods used were heat shock (HS) pre-treatment of SMs and gene modification of MSCs. Gene modification was performed by viral VEGF and Bcl-2 transductions. After preconditioning, cell sheets were transplanted onto the MI and the therapeutic effect was evaluated. Furthermore, we studied the effect of HS pre-treatment on SM differentiation under hypoxic conditions and tested the hypothesis that HS preconditioning enhances the angiogenic properties of SM sheets. The metabolic activity of VEGF- and Bcl-2-over-expressing cells was assessed under normal nutrient supply, serum starvation and staurosporine treatment. Our results demonstrate that HS preconditioning leads to increased expression of SM differentiation-associated troponin and to reduced caspase-3 activity during differentiation in a hypoxic environment. HS preconditioning protected SM sheets from hypoxia-associated apoptosis in vitro; however, it reduced vascular endothelial growth factor (VEGF) expression of the sheet, leading to a lower therapeutic effect in heart failure. MSC sheets carrying VEGF showed enhanced efficacy of cell sheet transplantation therapy in an acute infarction model. These cell sheets attenuated left ventricular dysfunction and myocardial damage, and induced therapeutic angiogenesis. The metabolic activity of gene-over-expressing cells was significantly higher compared to wild-type cells. Moreover, introducing VEGF expression in MSCs enhanced the metabolic activity of cells during serum starvation.
  • Keikkala, Elina (Helsingin yliopisto, 2014)
    Pre-eclampsia, defined as hypertension and proteinuria, causes significant maternal and perinatal complications. It affects about 2-8% of all pregnancies and there is no curative medication. If women at risk could be identified, prevention or, at least, reduction of the complications of pre-eclampsia might be possible. When serum biochemical markers and clinical characteristics have been combined, the prediction of pre-eclampsia has improved. We studied whether hyperglycosylated human chorionic gonadotropin (hCG-h) is useful for identifying women at risk of pre-eclampsia already in the first or second trimester of pregnancy. Concentrations of hCG, hCG-h, pregnancy-associated plasma protein-A (PAPP-A) and the free beta subunit of hCG (hCGβ) in the serum in the first trimester, and of placental growth factor (PlGF), soluble vascular endothelial growth factor receptor 1 (sVEGFR-1) and the angiogenic factors angiopoietin-1 (Ang-1) and -2 (Ang-2) and their common soluble endothelial cell-specific tyrosine kinase receptor Tie-2 (sTie-2) in the second trimester, were studied as predictors of pre-eclampsia. For first trimester screening, 158 women who subsequently developed pre-eclampsia, 41 with gestational hypertension, 81 with small-for-gestational-age (SGA) infants and 427 controls were selected among 12,615 pregnant women who attended first trimester screening for Down's syndrome. For second trimester screening we used serum samples from 55 women who subsequently developed pre-eclampsia, 21 with gestational hypertension, 30 who were normotensive and gave birth to SGA infants, and 83 controls. For the analysis of Ang-1, -2 and sTie-2 at 12-15 and 16-20 weeks of pregnancy, 49 women who subsequently developed pre-eclampsia, 16 with intrauterine growth restriction (IUGR) and 59 healthy controls were recruited among 3,240 pregnant women attending first trimester screening for Down's syndrome. Another 20 healthy women were recruited at term pregnancy for examination of the concentrations of these markers in the maternal and fetal circulation, urine and the amniotic fluid before and after delivery. The proportion of hCG-h out of total hCG (%hCG-h) was lower in the first but not in the second trimester in women who subsequently developed pre-eclampsia as compared to controls. In the first trimester, %hCG-h was predictive of pre-eclampsia, especially of early-onset pre-eclampsia (diagnosed before 34 weeks of pregnancy). The predictive power improved when PAPP-A, the mean arterial blood pressure and parity were combined with %hCG-h. When these four variables were used together, 69% of the women who were to develop early-onset pre-eclampsia were identified at a specificity of 90%. The concentrations of circulating Ang-2 during the second trimester were higher, and those of PlGF lower, in women who later developed pre-eclampsia. Ang-2 was only 20% sensitive at 90% specificity for predicting pre-eclampsia, but PlGF performed well, being 53% sensitive at 90% specificity for predicting early-onset pre-eclampsia. %hCG-h did not provide independent prognostic value in the second trimester. In conclusion, hCG-h is a promising first trimester marker of early-onset pre-eclampsia. In line with earlier observations, PlGF is a useful second trimester predictive marker of early-onset pre-eclampsia.
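The screening figures quoted above (e.g., 69% detection at 90% specificity) follow the usual convention of fixing the false-positive rate in the controls and reading off the detection rate in the cases. The sketch below illustrates that convention on simulated data; the distributions and numbers are invented for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(50, 10, 427)   # marker level in unaffected pregnancies (toy values)
cases = rng.normal(38, 10, 158)      # marker level in pre-eclampsia (toy values)

# For a marker that is *lower* in affected pregnancies (as %hCG-h is), put the
# cut-off at the 10th percentile of the controls: 90% of controls stay above it,
# which fixes the specificity at 90%.
threshold = np.percentile(controls, 10)
sensitivity = np.mean(cases < threshold)     # share of cases flagged
print(f"cut-off {threshold:.1f}, detection rate {sensitivity:.0%} at 90% specificity")
```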
  • Merkkiniemi, Katja (Helsingin yliopisto, 2015)
    The presence of certain cancer-related genetic and epigenetic alterations in a tumor affects patients' response to specific cancer therapies. Accurate screening of these predictive biomarkers in molecular diagnostics is important, since it enables the tailoring of an optimal treatment based on the molecular characteristics of the tumor. Depending on the type of gene alteration, a wide variety of methods can be applied in biomarker testing. Among the novel methods is next-generation sequencing (NGS) technology, which enables the simultaneous detection of multiple alterations. The aim of this thesis was to analyze predictive or potentially predictive genetic and epigenetic alterations of diffuse gliomas and non-small cell lung cancer (NSCLC), and to evaluate the feasibility of pyrosequencing and targeted NGS for the detection of these alterations in formalin-fixed paraffin-embedded (FFPE) tumor tissue specimens. In Study I, we assessed the genetic and epigenetic profile of diffuse gliomas by applying methylation-specific pyrosequencing to detect MGMT promoter hypermethylation, array comparative genomic hybridization to detect chromosomal copy number alterations, and immunohistochemistry (IHC) to detect IDH1 mutation status. MGMT hypermethylation, IDH1 mutations, and losses of chromosome arms 1p and 19q were typical changes in oligodendroglial tumors (grades II-III), whereas losses of 9p and 10q were frequently seen in glioblastomas (grade IV). Furthermore, we detected significant associations of 1) MGMT hypermethylation with IDH1 mutations and loss of 19q, 2) unmethylated MGMT with losses of 9p and 10q and gain of 7p, 3) IDH1 mutations with MGMT hypermethylation, 1p loss, and combined loss of 1p/19q, and 4) non-mutated IDH1 with losses of 10q. Pyrosequencing proved to be a feasible method for the determination of MGMT methylation status in FFPE sample material. In Studies II and III, we compared targeted NGS with fluorescence in situ hybridization, IHC, and real-time reverse-transcription PCR in the detection of ALK fusion (Study II), and with real-time PCR in the detection of EGFR, KRAS, and BRAF mutations (Study III). All analyses were successfully performed on all FFPE samples. Good concordance was observed between the results obtained by the different methods, and targeted NGS also proved advantageous in the identification of novel and rare variants with potential predictive value. In Study IV, we determined the frequency of ALK fusion in 469 Finnish NSCLC patients, and the association of ALK fusion with clinicopathological characteristics and with the presence of mutations in 22 other driver genes. We detected ALK fusion at a frequency of 2.3%, suggesting that it is a relatively rare alteration in Finnish NSCLC patients. The presence of ALK fusion was significantly linked to younger age and a never-/ex-light smoking history. Although most of the ALK-positive tumors had adenocarcinoma histology, ALK-positive large cell carcinomas were also detected. Characterization of ALK-positive cases by targeted NGS showed the coexistence of ALK fusion with mutations in MET, TP53, CTNNB1, and PIK3CA, but the value of these co-occurrences requires further examination. In conclusion, our studies indicate that certain genetic and epigenetic alterations occur together, and the simultaneous screening of multiple alterations may thus allow one to obtain a more comprehensive picture of the molecular background of the tumor, which could facilitate the prediction of tumor behavior, prognosis, and treatment response.
Our results show the feasibility of pyrosequencing and targeted NGS in FFPE tumor tissue material and also the advantages of targeted NGS over other commonly used methods in the detection of gene rearrangements and mutations, particularly the ability to simultaneously identify multiple alterations.
  • Xiong, Jie (Helsingin yliopisto, 2015)
    A general inductive probabilistic framework for clustering and classification is introduced using the principles of Bayesian predictive inference, such that all quantities are jointly modelled and the uncertainty is fully acknowledged through the posterior predictive distribution. Several learning rules are considered, and the theoretical results are extended to acknowledge complex dependencies within the datasets. Multiple probabilistic models are developed for analysing data from a wide variety of fields of application. State-of-the-art algorithms are introduced and developed for model optimization.
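The central idea, classifying by the posterior predictive distribution rather than by point estimates, can be shown in miniature. The sketch below uses a toy Beta-Bernoulli model per class and is only an illustration of the principle, not the thesis's models or algorithms.

```python
# Assign a new binary observation to the class under which its posterior
# predictive probability, given that class's training data, is highest.
def posterior_predictive_one(successes: int, trials: int, alpha=1.0, beta=1.0) -> float:
    """P(next observation = 1 | data) under a Beta(alpha, beta) prior."""
    return (successes + alpha) / (trials + alpha + beta)

# Two classes with observed binary data (toy counts): (number of 1s, observations).
class_data = {"A": (9, 10), "B": (2, 10)}

x = 1  # new observation
scores = {
    label: posterior_predictive_one(s, n) if x == 1 else 1 - posterior_predictive_one(s, n)
    for label, (s, n) in class_data.items()
}
best = max(scores, key=scores.get)
print(scores, "->", best)  # uncertainty is carried by the predictive probabilities
```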
  • Vuoristo-Myllys, Salla (Helsingin yliopisto, 2014)
    Randomized controlled trials and systematic reviews form the basis for evidence-based treatments of alcohol use disorders. However, generalizing the research findings of randomized controlled trials to clinical practice is sometimes difficult. Little is known about how well such treatments work in real-life treatment settings or to whom the results apply. The aim of this study was to investigate how one of the evidence-based treatments for alcohol dependence, cognitive behavioral therapy (CBT) combined with targeted (used as needed) naltrexone, works in a real-life treatment setting with a heterogeneous patient sample. The study specifically investigated which factors were prognostic of treatment dropout, treatment outcomes, and patient adherence to naltrexone. The study also investigated whether CBT combined with medication (naltrexone/acamprosate/disulfiram) can improve patient well-being and quality of life, in addition to reducing alcohol consumption. The participants in studies I–III were problem drinkers who attended an outpatient treatment program that combined CBT and naltrexone. The participants in study IV were treatment-seeking heavy drinkers who participated in a randomized controlled trial in which they received medication and CBT. In studies I–III, we evaluated the sociodemographic factors, alcohol-related factors and depressive symptoms of participants at treatment entry. We evaluated the change in alcohol consumption and symptoms of alcohol craving, as well as the patients' adherence to naltrexone use, during the 20 weeks of treatment. In study IV, we evaluated the change in the quality of life, depression, and smoking habits of participants during the treatment (52 weeks) and follow-up (119 weeks). In studies I–III, factors related to dropping out included younger age, lower problem severity, lower adherence to naltrexone, and starting the treatment with abstinence. The alcohol-related outcomes were poorer for those with no previous treatment history and higher pretreatment alcohol consumption. Patients who drank more alcohol before and during the treatment had lower adherence to naltrexone. Poor naltrexone adherence was also associated with unemployment and a strong craving for alcohol. Study IV showed that in addition to significantly reducing drinking, combining medication and CBT can improve the quality of life, depression, and smoking habits of those patients who commit to treatment. Participants who used disulfiram were the most successful in quitting smoking. An important finding regarding routine treatment settings was the variability in how problem drinkers benefited from CBT and naltrexone. Those with lower problem severity may benefit from shorter interventions. However, those with the most severe alcohol problems may require more intensive and longer treatment, as well as the use of medications other than naltrexone. Non-adherence to medication is a barrier to the effectiveness of naltrexone in a real-life treatment setting, and those with a high craving for alcohol may need specific interventions to enhance medication use. For those who commit to treatment, CBT combined with medication may improve general well-being and quality of life, in addition to reducing drinking. The treatment may also help patients quit smoking, especially those who use disulfiram during treatment.
  • Harkonmäki, Karoliina (Helsingin yliopisto, 2007)
    The strong tendency of elderly employees to retire early and the simultaneous aging of the population have been major topics of policy and scientific debate. A key concern has been the financing of future pension schemes and a possible labour shortage, especially in social and health services within the public sector. The aging of the population is inevitable, but efforts can be made to prevent or postpone early exit from the labour force, e.g., by identifying and intervening in the factors that contribute to the process of early retirement due to disability. The associations of intentions to retire early, poor mental health and different psychosocial factors with the process of disability retirement are still poorly understood. The purpose of this study was to investigate the associations of intentions to retire early, poor mental health, work- and family-related psychosocial factors and experiences of earlier life stages with the process of disability retirement. The data were derived from the Helsinki Health Study (HHS, N=8,960) and the Health and Social Support Study (HeSSup, N=25,901). The Helsinki Health Study is an ongoing employee cohort study among middle-aged women and men. The Health and Social Support Study is an ongoing longitudinal study of a working-age sample representative of the Finnish population. The analyses were restricted to respondents 40 years of age or older. Age- and gender-adjusted prevalence and incidence rates were calculated. Associations were studied by using logistic, multinomial and Cox regression. Strong intentions to retire early were common among employees. Poor mental health, unfavourable working conditions and work-to-family conflicts were clearly associated with increased intentions to retire early. Strong intentions to retire early predicted disability retirement. The risk of disability retirement increased in a dose-response manner with an increasing number of childhood adversities. Poor mental and somatic health, life dissatisfaction, heavy alcohol consumption, current smoking, obesity and low socioeconomic status were also predictors of disability retirement. The impact of poor mental health and adverse experiences from earlier life stages, work- and family-related psychosocial factors (e.g., the work-family interface), the subjective experience of well-being and health-related risk behaviours on the process of disability retirement should be recognised. Preventive measures against disability retirement should be launched before the subjective experience of ill health, work disability and strong intentions to retire early emerge.
  • Tikkanen, Roope (Helsingin yliopisto, 2009)
    Acts of violence lay a great burden on humankind. The negative effects of violence could be relieved by accurate prediction of violent recidivism. However, the prediction of violence has been considered an inexact science hampered by scarce knowledge of its causes. The study at hand examines risk factors for violent reconvictions and mortality among 242 Finnish male violent offenders exhibiting severe alcoholism and severe externalizing personality disorders. The violent offenders were recruited during a court-ordered 2-month inpatient mental status examination between 1990 and 1998. Controls were 1210 individuals matched by sex, age, and place of birth. After a 9-year non-incarcerated follow-up, criminal register and mortality data were obtained from national registers. Risk analyses were applied to estimate odds ratios and relative risks for recidivism and mortality. The risk variables included in the analyses were antisocial personality disorder (ASPD), borderline personality disorder (BPD), comorbid ASPD and BPD, childhood adversities, alcohol consumption, age, and monoamine oxidase A (MAOA) genotype. In addition to the risk analyses, temperament dimensions (Tridimensional Personality Questionnaire [TPQ]) were assessed. The prevalence of recidivistic acts of violence (32%) and mortality (16%) was high among the offenders. Severe personality disorders and childhood adversities increased the risk for recidivism and mortality both among offenders (OR 2.0–10.4) and in comparisons between offenders and controls (RR 4.3–53.0). Offenders with BPD and a history of childhood maltreatment emerged as a group with a particularly poor prognosis. MAOA genotype altered the effects of alcohol consumption and ageing. Alcohol consumption (+2.3%) and age (–7.3%) showed significant effects on the risk for violent reconvictions among the high-activity MAOA (MAOA-H) offenders, but not among the low-activity MAOA (MAOA-L) offenders. The offenders featured temperament dimensions of high novelty seeking, high harm avoidance, and low reward dependence, matching Cloninger's definition of an explosive personality. The fact that the risk for recidivistic acts of violence and mortality accumulated in clearly defined subgroups supports future efforts to provide evidence-based violence prevention and risk assessments among violent offenders.
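The two effect measures quoted above (OR and RR) come from simple 2x2-table arithmetic. The sketch below shows that arithmetic on invented counts; it is an illustration of the measures, not a reproduction of the study's analyses.

```python
# Odds ratio and relative risk from a 2x2 table of risk factor vs. recidivism.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    return (a / b) / (c / d)

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    return (a / (a + b)) / (c / (c + d))

# Toy table: 40 of 100 offenders with a risk factor recidivated vs. 20 of 142 without.
print(f"OR = {odds_ratio(40, 60, 20, 122):.1f}, RR = {relative_risk(40, 60, 20, 122):.1f}")
```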
  • Moisio, Anu-Liisa (Helsingin yliopisto, 2002)
  • Wathén, Katja-Anneli (Helsingin yliopisto, 2011)
    Pre-eclampsia is a pregnancy complication that affects about 5% of all pregnancies. It is known to be associated with alterations in angiogenesis-related factors, such as vascular endothelial growth factor (VEGF). An excess of antiangiogenic substances, especially the soluble receptor-1 of VEGF (sVEGFR-1), has been observed in the maternal circulation after the onset of the disease, probably reflecting their increased placental production. Smoking reduces circulating concentrations of sVEGFR-1 in non-pregnant women, and in pregnant women it reduces the risk of pre-eclampsia. Soluble VEGFR-1 acts as a natural antagonist of VEGF and placental growth factor (PlGF) in the human circulation, holding promise for potential therapeutic use. In fact, it has been used as a model to generate a fusion protein, VEGF Trap, which has been found effective in the anti-angiogenic treatment of certain tumors and ocular diseases. In the present study, we evaluated the potential use of maternal serum sVEGFR-1, angiopoietin-2 (Ang-2) and endostatin, three central anti-angiogenic markers, in the early prediction of subsequent pre-eclampsia. We also studied whether smoking affects circulating sVEGFR-1 concentrations in pregnant women or their first trimester placental secretion and expression in vitro. Last, in order to allow future discussion of potential therapy based on sVEGFR-1, we determined the biological half-life of endogenous sVEGFR-1 in the human circulation and measured the concomitant changes in free VEGF concentrations. Blood or placental samples were collected from a total of 268 pregnant women between the years 2001 and 2007 in Helsinki University Central Hospital for the purposes above. The biomarkers were measured using commercially available enzyme-linked immunosorbent assays (ELISA). For the analyses of sVEGFR-1, Ang-2 and endostatin, a total of 3,240 pregnant women in the Helsinki area were admitted to blood sample collection during two routine ultrasound screening visits at 13.7 ± 0.5 (mean ± SD) and 19.2 ± 0.6 weeks of gestation. Of them, 49 women later developing pre-eclampsia were included in the study. Their disease was further classified as mild in 29 and severe in 20 patients. Isolated early-onset intrauterine growth retardation (IUGR) was diagnosed in 16 women with otherwise normal medical histories and uncomplicated pregnancies. Fifty-nine women who remained normotensive and non-proteinuric and finally gave birth to normal-weight infants were selected to serve as the control population of the study. Maternal serum concentrations of Ang-2, endostatin and sVEGFR-1 were increased already at 16–20 weeks of pregnancy, about 13 weeks before the clinical manifestation of pre-eclampsia. In addition, these biomarkers could be used to identify women at risk with moderate precision. However, larger patient series are needed to determine whether these markers could be applied in clinical use to predict pre-eclampsia. Intrauterine growth retardation, especially if noted at early stages of pregnancy and not secondary to any other pregnancy complication, has been suggested to be a form of pre-eclampsia compromising only placental sufficiency and the fetus, but not affecting the maternal endothelium. In fact, IUGR and pre-eclampsia have been proposed to share a common vascular etiology in which factors regulating early placental angiogenesis are likely to play a central role. Thus, these factors have been suggested to be involved in the pathogenesis of IUGR.
However, circulating sVEGFR-1, Ang-2 and endostatin concentrations were unaffected by subsequent IUGR in the early second trimester. Furthermore, smoking was not associated with alterations in maternal circulating sVEGFR-1 or its placental production. The elimination of endogenous sVEGFR-1 after pregnancy was calculated from serial samples of eight pregnant women undergoing elective Caesarean section. As is typical for proteins in human compartments, the elimination of sVEGFR-1 was biphasic, with a rapid half-life of 3.4 h and a slow one of 29 h. The decline in sVEGFR-1 concentrations after mid-trimester legal termination of pregnancy was accompanied by a simultaneous increase in the serum levels of free VEGF, so that within a few days after pregnancy VEGF dominated in the maternal circulation. Our study provides novel information on the kinetics of endogenous sVEGFR-1, which serves as a potential tool in the development of new strategies against diseases associated with angiogenic imbalance and alterations in VEGF signaling.
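The reported biphasic elimination can be written as a sum of two exponentials, C(t) = A·2^(-t/t_fast) + B·2^(-t/t_slow). The sketch below uses the half-lives quoted above (3.4 h and 29 h); the split between the fast and slow phases is an assumed toy value, not a figure from the study.

```python
import math  # imported for completeness; powers of 0.5 suffice here

T_FAST, T_SLOW = 3.4, 29.0   # half-lives in hours (from the study)
A, B = 0.7, 0.3              # assumed fractions of the initial level in each phase

def svegfr1_fraction(t_hours: float) -> float:
    """Fraction of the initial sVEGFR-1 level remaining t hours after delivery."""
    return A * 0.5 ** (t_hours / T_FAST) + B * 0.5 ** (t_hours / T_SLOW)

for t in (0, 6, 24, 72):
    print(f"{t:>3} h: {svegfr1_fraction(t):.2f} of initial level")
# Within a few days the remaining fraction is small, consistent with free VEGF
# coming to dominate the maternal circulation as described above.
```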
  • Hirn, Helinä (Helsingin yliopisto, 2009)
    It is demanding for children with visual impairment to become aware of the world beyond their immediate experience. They need to learn to control spatial experiences as a whole and to understand the relationships between objects, surfaces and themselves. Tactile maps can be an excellent source of information for depicting space and the environment. By means of tactile maps, children can develop their spatial understanding more efficiently than through direct travel experiences supplemented with verbal explanations. Tactile maps can help children when they are learning to understand environmental, spatial, and directional concepts. The ability to read tactile maps is not self-evident; it is a skill that must be learned. The main research question was: can children who are visually impaired learn to read tactile maps at the preschool age if they receive structured teaching? The purpose of this study was to develop an educational program for preschool children with visual impairment, the aim of which was to teach them to read tactile maps in order to strengthen their orientation skills and to encourage them to explore the world beyond their immediate experience. The study is a multiple case study describing the development of the map program, which consists of eight learning tasks. The program was developed with one preschooler who was blind, and subsequently the program was implemented with three other children. Two of the children were blind from birth, one child had lost her vision at the age of two, and one child had low vision. The program was implemented in a normal preschool. Another objective of the pre-map program was to teach the preschooler with visual impairment to understand the concept of a map. The teaching tools were simple, map-like representations called pre-maps. Before a child with visual impairment can read a comprehensive tactile map, it is important to learn to understand map symbols and how a three-dimensional model changes into a two-dimensional tactile map. All teaching sessions were videotaped; the results are based on the analysis of the videotapes. Two of the children completed the program successfully and learned to read a tactile map. The two other children felt happy during the sessions, but it was problematic for them to engage fully in the instruction. One of the two eventually completed the program, while the other developed predominantly emerging skills. The results of the children's performances and the positive feedback from the teachers, assistants and parents showed that this pre-map program is appropriate teaching material for preschool children who are visually impaired. The program does not demand high-level expertise; parents, preschool teachers, and school assistants can also carry out the program.
  • Siltanen, Mirjami (Helsingin yliopisto, 2004)
  • Leinonen, Jaakko (Helsingin yliopisto, 2006)
    Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye deteriorated from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye increased by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. The coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between the visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
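For readers unfamiliar with the notation, the sketch below shows the two quantities this abstract relies on: the decimal-Snellen to logMAR conversion (logMAR = -log10(decimal acuity)) and a coefficient of repeatability computed, following the common Bland-Altman convention, as 1.96 times the standard deviation of the test-retest differences; the exact estimator used in the thesis may differ, and the measurement values below are invented.

```python
import math

def to_logmar(decimal_va: float) -> float:
    """Convert a decimal Snellen acuity to logMAR."""
    return -math.log10(decimal_va)

assert abs(to_logmar(0.2) - 0.70) < 0.02   # cf. the ~0.7 logMAR quoted for Snellen 0.2
assert abs(to_logmar(1.3) + 0.11) < 0.01   # acuity better than 1.0 gives a negative logMAR

def coefficient_of_repeatability(first: list[float], second: list[float]) -> float:
    """1.96 x SD of the paired test-retest differences (Bland-Altman convention)."""
    diffs = [a - b for a, b in zip(first, second)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Two repeated logMAR measurements of the same eyes (toy values):
visit1 = [0.30, 0.52, 0.10, 0.40, 0.22]
visit2 = [0.40, 0.44, 0.12, 0.30, 0.30]
print(f"CoR = ±{coefficient_of_repeatability(visit1, visit2):.2f} logMAR")
```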