Browsing by Subject "validation"


  • McMinn, Megan A.; Martikainen, Pekka; Härkänen, Tommi; Tolonen, Hanna; Pitkänen, Joonas; Leyland, Alastair H.; Gray, Linsay (2021)
    Aims: It is becoming increasingly possible to obtain additional information about health survey participants, though not usually non-participants, via record linkage. We aimed to assess the validity of an assumption underpinning a method developed to mitigate non-participation bias, using a survey in Finland where both participants and non-participants can be linked to administrative registers. Survey-derived alcohol consumption is used as the exemplar outcome. Methods: Data on participants (85.5%) and true non-participants of the Finnish Health 2000 survey (invited survey sample N=7167, aged 30-79 years) and a contemporaneous register-based population sample (N=496,079) were individually linked to alcohol-related hospitalisation and death records. Applying the methodology to create synthetic observations on non-participants, we created 'inferred samples' (participants and inferred non-participants). Relative differences (RDs) between the inferred sample and the invited survey sample were estimated overall and by education, with five per cent limits used to define acceptable RDs. Results: Average weekly consumption estimates for men were 129 g and 131 g of alcohol in the inferred and invited survey samples, respectively (RD -1.6%; 95% confidence interval (CI) -2.2 to -0.04%) and 35 g for women in both samples (RD -1.1%; 95% CI -2.4 to -0.8%). Estimates for men with secondary levels of education had the greatest RD (-2.4%; 95% CI -3.7 to -1.1%). Conclusions: The sufficiently small RDs between the inferred and invited survey samples support the validity of the assumption and the use of our methodology for adjusting for non-participation. However, the presence of some significant differences means caution is required.
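The acceptability check described in this abstract reduces to a simple relative-difference computation. A minimal sketch (the grams-per-week figures and the 5% limit come from the abstract; the reported -1.6% was presumably computed from unrounded means, so the rounded inputs here give a slightly different value):

```python
def relative_difference(inferred_mean, invited_mean):
    """Relative difference (%) of an inferred-sample estimate vs. the invited-sample estimate."""
    return 100.0 * (inferred_mean - invited_mean) / invited_mean

# Weekly alcohol consumption for men (grams), rounded values from the abstract.
rd_men = relative_difference(129, 131)
acceptable = abs(rd_men) < 5.0  # the study's five per cent acceptability limit
```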
  • Gutiérrez, José Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Rössler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven; San Martin, Daniel; Herrera, Sixto; Bedia, Joaquin; Casanueva, Ana; Manzanas, Rodrigo; Iturbide, Maialen; Vrac, Mathieu; Dubrovsky, Martin; Ribalaygua, Jamie; Pórtoles, Javier; Räty, Olle Einari; Räisänen, Jouni Antero; Hingray, Benoît; Raynaud, Damien; Casado, María; Ramos, Petra; Zerenner, Tanja; Turco, Marco; Bosshard, Thomas; Stepanek, Petr; Bartholy, Judit; Pongracz, Rita; Keller, Denise; Fischer, Andreas; Cardoso, Rita; Soares, Pedro; Czernecki, Bartosz; Pagé, Christian (2019)
    VALUE is an open European collaboration to intercompare downscaling approaches for climate change research, focusing on different validation aspects (marginal, temporal, extremes, spatial, process‐based, etc.). Here we describe the participating methods and first results from the first experiment, using “perfect” reanalysis (and reanalysis‐driven regional climate model (RCM)) predictors to assess the intrinsic performance of the methods for downscaling precipitation and temperatures over a set of 86 stations representative of the main climatic regions in Europe. This study constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date, covering the three common downscaling approaches (perfect prognosis, model output statistics—including bias correction—and weather generators) with a total of over 50 downscaling methods representative of the most common techniques. Overall, most of the downscaling methods greatly reduce raw model (reanalysis or RCM) biases, and no approach or technique seems to be superior in general, because there is a large method‐to‐method variability. The factors most influencing the results are the seasonal calibration of the methods (e.g., using a moving window) and their stochastic nature. The particular predictors used also play an important role in cases where the comparison was possible, both for the validation results and for the strength of the predictor–predictand link, indicating the local variability explained. However, the present study cannot give a conclusive assessment of the skill of the methods to simulate regional future climates, and further experiments will soon be performed in the framework of the EURO‐CORDEX initiative (into which VALUE activities have merged and follow on). Finally, research transparency and reproducibility have been major concerns, and substantive steps have been taken to ensure them.
In particular, the necessary data to run the experiments are provided at http://www.value‐ and data and validation results are available from the VALUE validation portal for further investigation: http://www.value‐
  • Haajanen, Hanna (Helsingin yliopisto, 2020)
    3-Chloro-1,2-propanediol (3-MCPD), 2-chloro-1,3-propanediol (2-MCPD) and 2,3-epoxy-1-propanol (glycidol) and their fatty acid esters are contaminants formed when fat-containing foodstuffs are processed at high temperatures. MCPD and glycidyl esters are formed mainly in the deodorization step of oil refining, and in vegetable oils such as palm oil they have been measured at high concentrations. In accordance with the restrictions imposed by the European Commission, the levels of glycidyl esters must be especially monitored, as they have been identified as potentially carcinogenic compounds. The aim of the study was to introduce and validate a gas chromatographic analysis method for glycidyl esters and MCPD esters at the Customs Laboratory. The method was validated for two matrices: first for oils and then for powdered infant formulas. In addition, the success of the validation was examined by analyzing various oil samples previously received by the Customs Laboratory. The Customs Laboratory is also involved in the activities of the European Union Reference Laboratory and intended to participate in the reference measurement that it organized. The method for the determination of 3-MCPD, 2-MCPD and glycidyl esters in oils and infant formulas followed the guidelines of the European Union Reference Laboratory for Contaminants (EURL-PC). Determination of MCPD and glycidyl ester concentrations in oils and infant formulas included the following steps: fat extraction by liquid-liquid extraction (for infant formulas), addition of standards, solid-phase extraction, conversion of glycidyl esters to 3-MBPD esters, transesterification, neutralization, salting out, derivatization and analysis with a gas chromatography-mass spectrometry system. Concentrations were determined using the internal standard method.
    The method was validated for the following parameters: specificity, selectivity, limit of detection and quantitation, reproducibility, repeatability, trueness, linearity and working range, stability and measurement uncertainty. The analytical method developed for the determination of MCPD and glycidyl esters was successfully validated for oils and powdered infant formulas. The developed method proved to be specific and selective. The limits of quantitation in the oil matrix were 6.3 µg/kg, 1.3 µg/kg and 0.8 µg/kg for 3-MCPD, 2-MCPD and glycidyl esters, respectively; for the infant formula matrix they were 5.4 µg/kg, 3.0 µg/kg and 1.6 µg/kg. Recoveries for MCPD and glycidyl esters in the oil and powdered infant formulas were 83-105%. R² values for the calibration lines were greater than 0.99, and the lines were linear over the entire measurement range of 2-1000 µg/kg. The relative standard deviations of repeatability and reproducibility were less than 20% for both matrices. The expanded measurement uncertainty for the MCPD and glycidyl esters in oil and powdered infant formula was less than 50%. For all parameters, the requirements set by the Customs Laboratory and the performance requirements of Regulation (EU) 1881/2006 were met. A method validated for two matrices can then be accredited. The Customs Laboratory may use the developed method in the future to control 3-MCPD, 2-MCPD and glycidyl ester levels in oils and powdered infant formulas. In the future, the method could also be validated for new matrices, such as liquid infant formulas.
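Quantitation limits like those reported above are commonly derived from the calibration line. A minimal sketch of the standard ICH-style estimate (3.3·s/slope for LOD, 10·s/slope for LOQ); the thesis's exact procedure is not stated, so this convention is an assumption:

```python
import statistics

def calibration(conc, response):
    """Ordinary least-squares slope and intercept of a calibration line."""
    mx, my = statistics.mean(conc), statistics.mean(response)
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, response))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def lod_loq(residual_sd, slope):
    """ICH Q2-style limits: LOD = 3.3*s/slope, LOQ = 10*s/slope."""
    return 3.3 * residual_sd / slope, 10.0 * residual_sd / slope
```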
  • Lommi, Sohvi; Viljakainen, Heli T.; Weiderpass, Elisabete; de Oliveira Figueiredo, Rejane Augusta (2020)
    Purpose To validate the Children's Eating Attitudes Test (ChEAT) in the Finnish population. Materials and methods In total, 339 children (age 10-15 years) from primary schools in Southern Finland were evaluated at two time points. They answered the ChEAT and SCOFF test questions, and had their weight, height and waist circumference measured. Retesting was performed 4-6 weeks later. Test-retest reliability was evaluated using intra-class correlation (ICC), and internal consistency was examined using Cronbach's alpha coefficient (C-alpha). ChEAT was cross-calibrated against SCOFF and background variables. Factor analysis was performed to examine the factor structure of ChEAT. Results The 26-item ChEAT showed high internal consistency (C-alpha 0.79); however, a 24-item ChEAT showed even better internal consistency (C-alpha 0.84) and test-retest reliability (ICC 0.794). ChEAT scores demonstrated agreement with SCOFF scores (p <0.01). The mean ChEAT score was higher in overweight children than in normal-weight children (p <0.001). Exploratory factor analysis yielded four factors (concerns about weight, limiting food intake, pressure to eat, and concerns about food), explaining 57.8% of the variance. Conclusions ChEAT is a valid and reliable tool for measuring eating attitudes in Finnish children. The 24-item ChEAT showed higher reliability than the 26-item ChEAT.
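The internal-consistency figures above come from the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch on toy item scores (not the study's data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item, same respondents)."""
    k = len(items)
    item_var_sum = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance([sum(scores) for scores in zip(*items)])
    return k / (k - 1) * (1 - item_var_sum / total_var)
```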
  • Pusfitasari, Eka Dian (Helsingin yliopisto, 2019)
    Urine can be used to determine human exposure to nerve agents through the analysis of specific biomarkers. Isopropyl methylphosphonic acid (IMPA) is an important marker of the sarin nerve agent, a highly toxic chemical regulated under the Chemical Weapons Convention (CWC). A methodology for the sensitive, reliable, and selective determination of IMPA in a urine matrix was developed and validated, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The sample preparation method employs normal phase–solid phase extraction (NP-SPE) using a silica-based cartridge. Before conducting IMPA analysis, the instrument performance was controlled using a quality control sample. Three different ion sources, namely electrospray ionization (ESI), Unispray, and atmospheric pressure chemical ionization (APCI), were compared in order to define the best method for trace analysis of IMPA. Parameters affecting the ionization process, such as cone voltage, capillary voltage, impactor pin voltage, corona voltage, and mobile phase flow rate, were optimized. Negative ion mode was selected as the best mode for IMPA identification with all three ion sources, and multiple reaction monitoring (MRM) was employed to improve sensitivity and selectivity. The APCI source was shown to be the least sensitive and least efficient ionization technique for IMPA identification. In contrast, ESI and Unispray gave satisfactory results with excellent limit of detection (LOD), limit of quantification (LOQ), precision, and accuracy: 0.44 ng/mL, 1.46 ng/mL, < 4% precision bias, and < 5% accuracy bias for ESI; and 0.42 ng/mL, 1.38 ng/mL, < 4% precision bias, and < 4% accuracy bias for Unispray. Nonetheless, Unispray performed better than ESI, producing a higher signal intensity/peak area with a lower matrix effect.
  • Syvähuoko, Jenna (Helsingin yliopisto, 2015)
    The literature review focused on the chemical properties of Fusarium mycotoxins and their masked forms, analytical methods for their determination, and toxicological and legislative aspects. In the experimental study, a multi-method was developed and validated for the simultaneous quantification of several Fusarium toxins and their masked forms in barley, oats and wheat using the liquid chromatography-tandem mass spectrometry (LC-MS/MS) technique. A simple “dilute-and-shoot” sample preparation procedure was applied, where the extraction was performed with a mixture of acetonitrile, water and acetic acid (79:20:1, v/v/v). Moreover, the aim was to obtain new data on the occurrence of the masked mycotoxins in barley, oats and wheat by analysing 95 cereal grain samples. The type A trichothecenes T-2 and HT-2 toxins (T-2 and HT-2) and the type B trichothecenes deoxynivalenol (DON) and nivalenol (NIV), as well as zearalenone (ZEN), together with 11 of their masked forms, were included based on their importance for food safety in northern Europe. The analytes were separated on a reversed-phase column and detected in selected reaction monitoring (SRM) mode. Better peak shapes for the early-eluting compounds and a shorter analysis time were obtained with acetonitrile than with methanol as the organic phase, so acetonitrile was chosen for the method. The method was validated according to the criteria set in the legislation. The limits of quantification varied from 0.3 to 15.9 µg/kg. The recoveries were 92-115%, thus being within the tolerable ranges established in the legislation. The inter-day precisions (4-27%) were under the maximum permissible values. Therefore, the method proved fit for purpose. In this study, occurrence data on the masked mycotoxins in Finland were obtained for the first time. The presence of ZEN-16-glucoside (ZEN-16-G) and NIV-3-glucoside (NIV-3-G) was reported for the first time worldwide in some of the cereals.
    The most frequently found toxins were DON, NIV and HT-2. All of the masked mycotoxins included in the method were detected, the most common being DON-3-glucoside (DON-3-G), HT-2-glucoside (HT-2-G) and NIV-3-G.
  • Lee, Hei Shing (Helsingin yliopisto, 2021)
    In atmospheric sciences, measurements provided by remote-sensing instruments are crucial for observing the state of the atmosphere. The associated uncertainties are important in nearly all data analyses. Random uncertainties reported by satellite instruments are typically estimated by inversion algorithms (ex ante). They can be incomplete due to simplified or incomplete modelling of the atmospheric processes used in the retrievals, and thus validating random uncertainties is important. However, such validation of uncertainties (or of their estimates from subsequent statistical analysis, i.e. ex post) is not a trivial task, because atmospheric measurements are obtained from an ever-changing atmosphere. This Thesis explores the structure function method, an important approach in spatial statistics, and applies it to total ozone column measurements provided by the nadir-viewing satellite instrument TROPOMI. This method allows us to simultaneously validate reported ex-ante random uncertainties and explore the local-scale natural variability of atmospheric parameters. Two-dimensional structure functions of the total ozone column have been evaluated for spatial separations in the latitudinal and longitudinal directions over selected months and latitude bands. Our results indicate that the estimated ex-post random uncertainties agree well with the reported ex-ante random uncertainties, which are within 1-2 DU; discrepancies between them are in general very small. The morphology of ozone natural variability has also been illustrated: ozone variability is minimal in the tropics throughout the year, whereas in middle latitudes and polar regions it attains maxima in local spring and winter. In every scenario, the ozone structure functions are anisotropic, with stronger variability in the latitudinal direction, except in specific seasons in polar regions where isotropic behaviour is observed.
Our analysis has demonstrated that the structure function method is a remarkable and promising tool for validating random uncertainties and exploring natural variability. It has a high potential for applications in other remote sensing measurements and atmospheric model data.
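The core of the structure function method can be sketched in a few lines: the empirical structure function D(h) is the mean squared difference between measurements separated by lag h, and for uncorrelated measurement noise D(h) tends to twice the noise variance at small lags, which yields the ex-post uncertainty estimate. This is a one-dimensional toy version with synthetic noise, not the thesis's two-dimensional TROPOMI analysis:

```python
import math
import random
import statistics

def structure_function(values, positions, lag, tol=0.5):
    """Empirical structure function D(h): the mean squared difference between
    all pairs of values whose spatial separation is within tol of the target lag."""
    sq = [(values[i] - values[j]) ** 2
          for i in range(len(values))
          for j in range(i + 1, len(values))
          if abs(abs(positions[i] - positions[j]) - lag) <= tol]
    return statistics.mean(sq)

# For spatially uncorrelated noise with standard deviation sigma, D(h) -> 2*sigma**2
# at small lags, so sqrt(D/2) is an ex-post estimate of the random uncertainty.
random.seed(1)
sigma = 1.5  # "true" random uncertainty of the synthetic measurements
positions = list(range(200))
noise = [random.gauss(0.0, sigma) for _ in positions]
sigma_ex_post = math.sqrt(structure_function(noise, positions, lag=1, tol=0.1) / 2)
```

In real data the small-lag limit of D(h) mixes noise with natural variability, which is why the thesis evaluates it over many lags, months and latitude bands.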
  • Bettencourt da Silva, Ricardo J.N; Saame, Jaan; Anes, Bárbara; Heering, Agnes; Leito, Ivo; Näykki, Teemu; Stoica, Daniela; Deleebeeck, Lisa; Bastkowski, Frank; Snedden, Alan; Camões, M. Filomena (Elsevier, 2021)
    Analytica Chimica Acta 1182 (2021), 338923
    The use of the unified pH concept, pHabsH2O, applicable to aqueous and non-aqueous solutions, which allows the interpretation and comparison of the acidity of different types of solutions, requires reliable and objective determination. The pHabsH2O can be determined by a single differential potentiometry measurement referenced to an aqueous reference buffer, or by a ladder of differential potentiometric measurements that allows the minimisation of inconsistencies between the various determinations. This work describes and assesses bottom-up evaluations of the uncertainty of these measurements, where uncertainty components are combined by the Monte Carlo Method (MCM) or the Taylor Series Approximation (TSM). The MCM allows a detailed simulation of the measurements, including the iterative process involved in minimising ladder deviations. The TSM, on the other hand, requires an approximate determination of the minimisation uncertainty. The uncertainty evaluation was successfully applied to measurements of aqueous buffers with pH values of 2.00, 4.00, 7.00, and 10.00, each with a standard uncertainty of 0.01. The reference and estimated values from both approaches are metrologically compatible at the 95% confidence level, even when a negligible contribution of the liquid junction potential uncertainty is assumed. The MCM estimated pH values with an expanded uncertainty, at the 95% confidence level, of between 0.26 and 0.51, depending on the pH value and ladder inconsistencies. The minimisation uncertainty ranges from negligible to responsible for up to 87% of the measurement uncertainty. Additional experimental tests should be performed to test these uncertainty models for analyses performed in other laboratories and on non-aqueous solutions.
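The MCM side of such a bottom-up evaluation follows the general GUM Supplement 1 recipe: sample each input quantity from its assigned distribution, push the samples through the measurement model, and read the mean and standard uncertainty off the output distribution. A minimal sketch; the one-line measurement model here (potential difference over a Nernstian slope) is a simplified illustration, not the paper's full ladder-minimisation model:

```python
import random
import statistics

def mc_uncertainty(model, inputs, n=100_000, seed=0):
    """GUM-Supplement-1-style Monte Carlo propagation: each input is a
    (mean, standard_uncertainty) pair sampled as an independent normal."""
    rng = random.Random(seed)
    out = [model(*(rng.gauss(m, u) for m, u in inputs)) for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)

# Hypothetical differential-potentiometry model: pH difference = measured
# potential E (V) divided by the Nernstian slope s (V per pH unit).
mean, sd = mc_uncertainty(lambda e, s: e / s, [(0.1183, 0.0005), (0.05916, 0.0002)])
```

An expanded uncertainty at the 95% level is then obtained from the 2.5% and 97.5% quantiles of the output samples (or, for a near-normal output, roughly 2·sd).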
  • Kalliokoski, Tuomo; Mäkinen, Harri; Linkosalo, Tapio; Mäkelä-Carter, Annikki (2017)
    The evaluation of process-based models (PBM) includes ascertaining their ability to produce results consistent with forest growth in the past. In this study, we parameterized and evaluated the hybrid model PipeQual with datasets containing traditional mensuration variables collected from permanent sample plots (PSP) of even-aged Norway spruce (Picea abies (L.) Karst.) stands in Finland. To initialize the model in the middle of stand development and reproduce observed changes in Norway spruce crown structure, the built-in empirical relationships of crown characteristics were made explicitly dependent on the light environment. After these modifications, the model accuracy at the whole-dataset level was high, with slope values of the linear regressions between observations and simulations ranging from 0.77 to 0.99 depending on the variable. The average bias ranged between -0.72 and 0.07 m in stand dominant height, -0.68 and 0.57 cm in stand mean diameter, -2.62 and 1.92 m² in stand basal area, and 20 and 29 m³ in stand total stem volume. Stand dynamics after thinning also followed the observed patterns reasonably closely. The accurate predictions illustrate the potential of the model for predicting forest stand growth and forest management effects under changing environmental conditions.
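The whole-dataset evaluation rests on two standard statistics: the slope of the observations-vs-simulations regression and the mean bias. A minimal sketch; the abstract does not state the regression direction or the sign convention of the bias, so obs-on-sim regression and simulated-minus-observed bias are assumptions here:

```python
import statistics

def evaluate(observed, simulated):
    """Slope of the obs ~ sim least-squares regression, and mean bias
    (simulated minus observed). A slope near 1 and a bias near 0 indicate
    good agreement."""
    mo, ms = statistics.mean(observed), statistics.mean(simulated)
    slope = (sum((s - ms) * (o - mo) for s, o in zip(simulated, observed))
             / sum((s - ms) ** 2 for s in simulated))
    bias = ms - mo
    return slope, bias
```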
  • Bilker-Koivula, Mirjam (Unigrafia Oy, 2021)
    FGI Publications 163 - Aalto University publication series DOCTORAL DISSERTATIONS 33/2021
    Positioning using Global Navigation Satellite Systems (GNSS) is widely used nowadays and is becoming increasingly accurate. This also requires better geoid models for the transformation between heights measured with GNSS and heights in the national height system. In Finland, heights are continuously changing due to the Fennoscandian postglacial rebound. Land uplift models are developed for the Fennoscandian land uplift area, not only for the vertical velocities but also for the gravity change related to postglacial rebound. In this dissertation, geoid studies were carried out in search of the geoid model most suitable for the conversion of GNSS heights in the EUREF-FIN coordinate system to heights in the Finnish height system N2000, on land as well as at sea. In order to determine the relationship between gravity change rates and vertical velocities, time series of absolute gravity measurements were analysed. Methods were tested for fitting a geoid model to GNSS-levelling data. The best method for Finland was found to be least-squares collocation in combination with cross-validation. The result was the height conversion surface FIN2005N00, the official model for Finland. Then, high-resolution global gravity field models were tested in geoid modelling for Finland. The resulting geoid models were better than the earlier geoid models for Finland; after correcting for an offset and tilt, the differences with other models disappeared. Also, a method was developed to validate geoid models at sea using GNSS measurements collected on a vessel. The method was successful, and key elements were identified for the process of reducing the GNSS observations from the height of observation down to the geoid surface. Possible offsets between different types of absolute gravimeters were investigated by examining the results of international comparisons, bilateral comparisons and trend calculations.
    The trend calculations revealed significant offsets of 31.4 ± 10.9 μGal, 32.6 ± 7.4 μGal and 6.8 ± 0.8 μGal for the IMGC, GABL and JILAg-5 instruments, respectively. Time series of absolute gravity measurements at 12 stations in Finland were analysed; at seven stations, reliable trends could be determined. Ratios between -0.206 ± 0.017 and -0.227 ± 0.024 μGal/mm, and axis intercept values between 0.248 ± 0.089 and 0.335 ± 0.136 μGal/yr, were found for the relationship between gravity change rates and vertical velocities. These values are larger than expected based on the results of others. The knowledge obtained in the geoid studies will benefit the determination of the next-generation geoid models and height conversion surfaces for Finland. Before clear conclusions can be drawn from the absolute gravity results, more studies related to glacial isostatic adjustment are needed, as well as longer high-quality time series from more stations in Finland and from the whole uplift area and its boundaries.
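The "offset and tilt" correction mentioned above is, in its simplest reading, a least-squares fit of a plane a + b·x + c·y to the differences between two geoid models. A minimal sketch under that assumption (the dissertation's actual fitting surface and coordinates may differ):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_offset_tilt(x, y, d):
    """Least-squares fit of d ≈ a + b*x + c*y (offset plus tilt) to geoid-model
    differences d at coordinates (x, y), via the normal equations."""
    cols = [[1.0] * len(x), list(x), list(y)]
    A = [[sum(p * q for p, q in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c_k * d_k for c_k, d_k in zip(ci, d)) for ci in cols]
    return solve3(A, rhs)
```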
  • Scherrer, Daniel; Mod, Heidi K.; Guisan, Antoine (2020)
    Stacked species distribution models (S-SDM) provide a tool to make spatial predictions about communities by first modelling individual species and then stacking the modelled predictions to form assemblages. The evaluation of the predictive performance is usually based on a comparison of the observed and predicted community properties (e.g. species richness, composition). However, the most available and widely used evaluation metrics require the thresholding of single species' predicted probabilities of occurrence to obtain binary outcomes (i.e. presence/absence). This binarization can introduce unnecessary bias and error. Herein, we present and demonstrate the use of several groups of new or rarely used evaluation approaches and metrics for both species richness and community composition that do not require thresholding but instead directly compare the predicted probabilities of occurrences of species to the presence/absence observations in the assemblages. Community AUC, which is based on traditional AUC, measures the ability of a model to differentiate between species presences or absences at a given site according to their predicted probabilities of occurrence. Summing the probabilities gives the expected species richness and allows the estimation of the probability that the observed species richness is not different from the expected species richness based on the species' probabilities of occurrence. The traditional Sorensen and Jaccard similarity indices (which are based on presences/absences) were adapted to maxSorensen and maxJaccard and to probSorensen and probJaccard (which use probabilities directly). A further approach (improvement over null models) compares the predictions based on S-SDMs with the expectations from the null models to estimate the improvement in both species richness and composition predictions. Additionally, all metrics can be described against the environmental conditions of sites (e.g. 
elevation) to highlight the abilities of models to detect the variation in the strength of the community assembly processes in different environments. These metrics offer an unbiased view of the performance of community predictions compared to metrics that require thresholding. As such, they allow more straightforward comparisons of model performance among studies (i.e. they are not influenced by any subjective thresholding decisions).
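Two of the threshold-free metrics described above are easy to sketch. This is one plausible reading of the probability-based versions — summing probabilities for expected richness, and replacing binary presences by probabilities in the Sorensen index; the paper's exact formulas may differ:

```python
def expected_richness(probs):
    """Expected species richness at a site: the sum of the per-species
    predicted occurrence probabilities (no thresholding)."""
    return sum(probs)

def prob_sorensen(probs, observed):
    """Probability-based Sorensen similarity between predicted occurrence
    probabilities and observed presences/absences (0/1):
    2 * sum(p_i * y_i) / (sum(p_i) + sum(y_i))."""
    shared = sum(p * y for p, y in zip(probs, observed))
    return 2 * shared / (sum(probs) + sum(observed))
```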
  • Heinänen, M.; Brinck, T.; Lefering, R.; Handolin, L.; Soderlundt, T. (2019)
    Background and Aims: Trauma registry data are used for analyzing and improving patient care, for comparisons between units, and for research and administrative purposes. The data should therefore be reliable. The aim of this study was to audit the quality of the Helsinki Trauma Registry internally. We describe how to conduct a validation of a regional or national trauma registry and how to report the results in a readily comprehensible form. Materials and Methods: The Helsinki Trauma Registry database from 2013 was re-evaluated. We assessed data quality in three different parts of the data input process: the process of including patients in the trauma registry (case completeness); the process of calculating Abbreviated Injury Scale (AIS) codes; and the entry of patient variables into the trauma registry (data completeness, accuracy, and correctness). We calculated the case completeness results using the raw agreement percentage and Cohen's kappa value. Percentages and descriptive methods were used for the remaining calculations. Results: In total, 862 patients were evaluated; 853 were rated the same in the audit process, resulting in a raw agreement percentage of 99%. Nine cases were missing from the registry, yielding a case completeness of 97.1% for the Helsinki Trauma Registry. For AIS code data, we analyzed 107 patients with severe thorax injury with 941 AIS codes. Completeness of codes was 99.0% (932/941), accuracy was 90.0% (841/932), and correctness was 97.5% (909/932). The data completeness of patient variables was 93.4% (3899/4174). Data completeness was 100% for 16 of 32 categories. Data accuracy was 94.6% (3690/3899) and data correctness was 97.2% (3789/3899). Conclusion: The case completeness, data completeness, data accuracy, and data correctness of the Helsinki Trauma Registry are excellent.
We recommend that these should be the variables included in a trauma registry validation process, and that the quality of trauma registry data should be systematically and regularly reviewed and reported.
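The two case-completeness statistics used above are standard. A minimal sketch for two binary ratings (e.g. original inclusion decision vs. audit decision); the toy vectors are illustrative, not the registry's data:

```python
def agreement_and_kappa(rater_a, rater_b):
    """Raw agreement percentage and Cohen's kappa for two binary (0/1) ratings
    of the same cases. Kappa corrects the observed agreement for the agreement
    expected by chance from each rater's marginal rates."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    pa1 = sum(rater_a) / n
    pb1 = sum(rater_b) / n
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return 100 * p_o, kappa
```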
  • Kujanpaa, Miika; Syrek, Christine; Tay, Louis; Kinnunen, Ulla; Mäkikangas, Anne; Shimazu, Akihito; Wiese, Christopher W.; Brauchli, Rebecca; Bauer, Georg F.; Kerksieck, Philipp; Toyama, Hiroyuki; de Bloom, Jessica (2022)
    Shaping off-job life is becoming increasingly important for workers to increase and maintain their optimal functioning (i.e., feeling and performing well). Proactively shaping the job domain (referred to as job crafting) has been extensively studied, but crafting in the off-job domain has received markedly less research attention. Based on the Integrative Needs Model of Crafting, needs-based off-job crafting is defined as workers' proactive and self-initiated changes in their off-job lives, which target the satisfaction of psychological needs. Off-job crafting is posited as a possible means for workers to fulfill their needs and enhance well-being and performance over time. We developed a new scale to measure off-job crafting and examined its relationships to optimal functioning in different work contexts in different regions around the world (the United States, Germany, Austria, Switzerland, Finland, Japan, and the United Kingdom). Furthermore, we examined the criterion, convergent, incremental, discriminant, and structural validity evidence of the Needs-based Off-job Crafting Scale using multiple methods (longitudinal and cross-sectional survey studies, and an "example generation" task). The results showed that off-job crafting was related to optimal functioning over time, especially in the off-job domain but also in the job domain. Moreover, the novel off-job crafting scale had good convergent and discriminant validity, internal consistency, and test-retest reliability. To conclude, our series of studies in various countries shows that off-job crafting can enhance optimal functioning in different life domains and support people in performing their duties sustainably. Therefore, shaping off-job life may be beneficial in an intensified and continually changing and challenging working life.
  • Kallio, Arttu (Helsingfors universitet, 2014)
    Cytochrome P450 (CYP) enzymes are among the most important enzymes in the metabolism of xenobiotics. Because many xenobiotics are metabolized by the same CYP enzymes, metabolic interactions can take place. These interactions can involve the inhibition or induction of the metabolism of another xenobiotic. An interaction can be harmful, e.g. when it causes an accumulation of a toxic metabolite or when it inhibits the metabolism of an active drug substance. The aim of this study was to develop a quantitative method for determining metabolic interactions between drugs and environmental chemicals in human liver microsome (HLM) incubations. HLMs contain high concentrations of CYP enzymes, enabling the use of CYP model reactions for observing interactions. The model reactions chosen for this study were O-deethylation of phenacetin (CYP1A2), 7-hydroxylation of coumarin (CYP2A6), 4'-hydroxylation of diclofenac (CYP2C9), 1'-hydroxylation of bufuralol (CYP2D6) and 6β-hydroxylation of testosterone (CYP3A4). Michaelis-Menten constants (Km) and maximal enzymatic activities (Vmax) were determined for each model reaction. The suitability of the model reactions for inhibition studies was assessed with specific inhibitors. The quantitative method was developed for an ultra-high performance liquid chromatograph (UPLC) coupled to a quadrupole time-of-flight mass spectrometer (QTOF). Samples were ionized with electrospray ionization (ESI) in positive mode. Instrument parameters were the same for all the metabolites. The analytical method validation was partly performed according to ICH (International Conference on Harmonisation) guidelines. Sufficient linearity (R² > 0.99) and specificity were achieved for the quantitative method. The achieved limits of quantitation (LOQ) were low enough (1-120 nM) for quantifying the small concentrations of the metabolites formed in the inhibition assays.
    The measurement reproducibility and the reproducibility and accuracy of the method did not fulfill the acceptance criteria for all the metabolites; the results could be improved by, e.g., exploring different instrument parameters. 4'-Hydroxydiclofenac was found likely to degrade in the matrix solution because of the acidic conditions, making the results for this metabolite unreliable. The Km value obtained for coumarin differed markedly from literature values, which may be due to an overly long incubation time; the incubation conditions for this model reaction should therefore be optimized in future studies. The Km values obtained for the model reactions of CYP1A2, CYP2D6 and CYP3A4 were similar to those found in the literature. The IC50 values were also well within the range of values reported in the literature for the inhibitors of the above-mentioned model reactions. The effects of four different polymers, F68, F127, Tetronic 1307 and polyvinyl alcohol (PVA), on the enzyme activities were also studied at a concentration of 1 mg/ml. In principle, at this concentration the polymers did not cause significant changes in the enzyme activities, although the inhibition of CYP2C9 could have been significant. However, the reliability of the CYP2C9 model reaction was found to be poor with the method used. In the future, this method should be further validated, and the incubation conditions for the model reaction of CYP2A6 should be optimized. After this, the IC50 values for the polymers could be studied to obtain more reliable information about their potential CYP-inhibition properties.
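The Km/Vmax determination mentioned in this abstract fits reaction velocities to the Michaelis-Menten equation v = Vmax·S/(Km + S). A minimal sketch using the classic Lineweaver-Burk linearisation (1/v = (Km/Vmax)·(1/S) + 1/Vmax) — a deliberately simple, noise-sensitive choice; the thesis's actual fitting procedure is not stated, and nonlinear regression would normally be preferred:

```python
import statistics

def michaelis_menten_fit(substrate, velocity):
    """Estimate (Km, Vmax) from substrate concentrations and reaction velocities
    via the Lineweaver-Burk double-reciprocal line: 1/v = (Km/Vmax)/S + 1/Vmax."""
    x = [1.0 / s for s in substrate]
    y = [1.0 / v for v in velocity]
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    return slope * vmax, vmax  # (Km, Vmax)
```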
  • Turunen, Anna (University of Helsinki, 1995)
  • Kilpiö, Tommi (University of Helsinki, 2021)
    Plant cell culture can be used for the production of valuable secondary metabolites. Inspired by previous studies on capsaicinoid production, this study aimed to establish plant cell cultures of Capsicum chinense for producing capsinoids, non-pungent capsaicinoid analogues with potential health benefits. Another aim was to determine the α-solanine content of Capsicum plants and cell cultures to ensure that no toxic amounts are formed during culture. Cell cultures of the non-pungent Capsicum chinense cultivars Trinidad Pimento and Aji Dulce strain 2 were established, and the cultures were fed with the intermediates vanillin and vanillyl alcohol to enhance production. In addition, cell cultures of the extremely pungent Trinidad Scorpion cultivar were established and fed with vanillyl alcohol to study whether this would result in the formation of capsinoids instead of capsaicinoids. A high-performance liquid chromatography (HPLC) method with UV detection was validated for determining the capsiate contents of the cell culture samples and, for comparison, of fruit samples. An HPLC-UV method was also validated for analyzing the α-solanine content of the cell culture samples and of leaves and flowers of three cultivars belonging to three different Capsicum species. Despite the validation of a sensitive and specific method for capsiate analysis, no detectable amounts of capsiate were found in any of the cell culture samples. Cell cultures of the pungent cultivar did not produce detectable amounts of capsaicinoids either. Results from the real fruit samples were in accordance with previous literature reports: Aji Dulce fruits contained higher amounts of capsiate than Trinidad Pimento, although the reliability is limited because only one indoor-grown Aji Dulce fruit was analyzed. The analytical method for determining the α-solanine content had problems with the internal standard and with specificity.
This method could nevertheless be used for making rough estimates of the possible α-solanine content. No hazardous amounts were detected in any of the cell culture samples. Only one sample, consisting of young Aji Dulce leaves, may contain α-solanine slightly above the limits set for commercial potatoes. Results for the flowers of Rocoto San Pedro Orange (C. pubescens) and Aji Omnicolor (C. baccatum) were inconclusive, and it could not be ruled out that they contain large amounts of α-solanine. Why no capsinoids, or even capsaicinoids, were detected in the cell culture samples remains unresolved, but it can be speculated that capsinoids degrade in the cell culture environment or that the selection of cultivar or cell line is critical. This study further supports the previous assumption that chili leaves are safe and do not contain notable amounts of α-solanine.
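The linearity and limit-of-quantitation checks that underpin both HPLC-UV validations above can be sketched as a calibration-curve regression. This is a generic illustration, not the thesis's data or acceptance criteria: the standard concentrations and peak areas are invented, and the LOQ is estimated in the ICH style as 10 × (residual standard deviation) / slope.

```python
import numpy as np

# Hypothetical calibration standards: concentration (nM) vs. UV peak area.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])
area = np.array([102.0, 198.0, 512.0, 1005.0, 2010.0, 4980.0])

# Ordinary least-squares line through the calibration points.
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept

# Linearity (R^2) and an ICH-style LOQ from the residual scatter.
residual_sd = np.std(area - pred, ddof=2)
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
loq = 10 * residual_sd / slope
print(f"R^2 = {r2:.4f}, LOQ = {loq:.1f} nM")
```

An R² above 0.99 and an LOQ below the lowest concentration of interest would correspond to the "sensitive and specific" criterion described in the abstract.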
  • Vuokko, Riikka; Vakkuri, Anne; Palojoki, Sari (2022)
    Background: Currently, no holistic theoretical approach is available for guiding classification development. On the basis of our recent classification development research in the area of patient safety in health information technology, this focus area would benefit from a more systematic approach. Although some valuable theoretical and methodological approaches have been presented, the classification development literature is typically limited to methodological development in a specific domain or is practically oriented. Objective: The main purposes of this study are to fill the methodological gap in classification development research by exploring possible elements of systematic development based on previous literature, and to promote sustainable and well-grounded classification outcomes by identifying a set of recommended elements. Specifically, the aim is to answer the following question: what are the main elements for systematic classification development based on research evidence and our use case? Methods: This study applied a qualitative research approach. On the basis of previous literature, preliminary elements for classification development were specified as follows: defining a concept model, documenting the development process, incorporating multidisciplinary expertise, validating results, and maintaining the classification. The elements were compiled as guiding principles for the research process and tested in the case of patient safety incidents (n=501). Results: The results illustrate classification development based on the chosen elements, with 4 examples of technology-induced errors. The examples from the use case concern usability, system downtime, clinical workflow, and medication section problems. The study results suggest that a more comprehensive, theory-based systematic approach promotes well-grounded classification work by enhancing transparency and the possibilities for assessing the development process.
Conclusions: We recommend further testing of the preliminary main elements presented in this study. The research presented herein could serve as a basis for future work; our recently developed classification and the use case presented here serve as examples. Data retrieved from, for example, other types of electronic health records and use contexts could refine and validate the suggested methodological approach.
  • Firouzbakht, Mojgan; Tirgar, Aram; Ebadi, Abbas; Nia, Hamid Sharif; Oksanen, Tuula; Kouvonen, Anne; Riahi, Mohammad Esmaeil (2018)
    Background: Workplace social capital is one of the important features of the clinical work environment: through trust and social participation it improves productivity, quality of services and safety. Evaluating workplace social capital requires a valid and reliable scale. The short-form workplace social capital questionnaire developed by Kouvonen has long been used for this purpose. Objective: To evaluate the psychometric properties of the Persian version of the questionnaire among a group of female Iranian health care workers. Methods: The Persian version of the short-form workplace social capital questionnaire was finalized after translation and back-translation, and 500 female health care workers completed it. The content validity and construct validity of the questionnaire were then assessed. The reliability of the questionnaire was assessed with Cronbach's alpha, theta, and McDonald's omega; construct reliability and the intraclass correlation coefficient (ICC) were also evaluated. Results: Based on maximum likelihood exploratory factor analysis (n=250) and confirmatory factor analysis (n=250), two factors were identified, which explained 65% of the total variance. The model had an acceptable fit: GFI=0.953, CFI=0.973, IFI=0.974, NFI=0.953, PNFI=0.522, RMSEA=0.090, CMIN/DF=2.751, RMR=0.042. Convergent and divergent validity as well as internal consistency and construct reliability of the questionnaire were confirmed. Conclusion: The Persian version of the Kouvonen workplace social capital questionnaire has acceptable validity and reliability and can thus be used in future studies to assess workplace social capital among Iranian health care workers.
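Cronbach's alpha, the first of the reliability coefficients mentioned above, can be computed directly from item-level scores as α = k/(k−1) × (1 − Σ item variances / variance of the total score). The sketch below uses invented scores for a hypothetical 8-item short-form scale; it is not the study's data, and the other coefficients (theta, McDonald's omega, ICC) require separate formulas.

```python
import numpy as np

# Invented responses: rows = respondents, columns = items of a
# hypothetical 8-item short-form scale (e.g. 1-5 Likert scores).
scores = np.array([
    [4, 4, 5, 3, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 4, 3, 3, 3, 2],
    [4, 5, 4, 4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1, 2, 1, 1],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the kind of criterion the abstract's reliability claim rests on.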