Browsing by Subject "simulation"


Now showing items 1-20 of 57
  • Lindholm, Heidi (Helsingfors universitet, 2017)
    The purpose of this study is to explore the learning experiences of sixth-grade students in the Me & MyCity learning environment. The research task is approached through the criteria of meaningful learning, which have been used as a theoretical framework in a Finnish learning environment study, among others. Previous research has shown that the criteria of meaningful learning can be found in different kinds of learning environments. The study focuses on what working life skills students learn in the Me & MyCity working life and society simulation. Very little research has been conducted on Me & MyCity, so the study is much needed. Research on learning environments also underlines the need to understand and study the usefulness of different learning environments, since few studies on the topic are available. The goal of this study is to generate new information about the Me & MyCity learning environment and about which working life skills it can help students learn. The results of this study can also be used, for example, in the development of Me & MyCity. The study was carried out as a case study. The data consists of thematic interviews with a class of students and a teacher from a school in Vantaa who visited Me & MyCity in the spring of 2016, and papers the students wrote (two per student). Altogether there were thematic interviews with 19 students, 38 papers, and one thematic interview with a teacher. The data was analyzed deductively, using the criteria of meaningful learning and a framework of working life skills compiled for this study. The results show that all criteria of meaningful learning can be found in Me & MyCity. However, based on the research data, the criterion of constructive learning was fulfilled only to a small extent, so the Me & MyCity learning environment could be developed, for example, to better support students' reflection on their own learning. There is variation in how working life skills are learnt in Me & MyCity; according to the results, some working life skills were not learnt at all. These results can be applied, among other things, to the pedagogical material of Me & MyCity and its development. The results can also be put to use in ordinary school teaching to consider how schoolwork can support students in learning working life skills and how, for example, an authentic learning environment that supports learning can be built in a school setting. The results can also be applied to building a good learning environment that supports the learning of other skills and knowledge as well.
  • Santillo, Jordan (Helsingin yliopisto, 2022)
    Research in radar technology requires readily accessible data from weather systems of varying properties. A lack of real-world data can delay or halt development, and simulation addresses this problem by providing data on demand. In this publication we present a new weather radar signal simulator. The algorithm produces raw time series data for a radar signal using a physically based methodology, with statistical techniques incorporated for computational efficiency. From a set of user-defined scatterer characteristics and radar system parameters, the simulator solves the radar range equation for individual, representative precipitation targets in a virtual weather cell. The model addresses the question of balancing utility and performance in simulating a signal that contains all the essential weather information. For our applications, we focus on target velocity measurements. The signal is created with respect to the changing positions of targets, leading to a discernible Doppler shift in frequency. We also show the operation of our simulator in generating signal using multiple pulse transmission schemes. First, we establish the theoretical basis for our algorithm. Then we demonstrate the simulator's capability for use in experimentation with advanced digital signal processing techniques and data acquisition, focusing on target motion. Finally, we discuss possible future developments of the simulator and their importance in application.
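    A minimal sketch of the kind of raw time-series generation described above, assuming a single pulse train and a handful of point scatterers; the carrier frequency, PRF and scatterer list are placeholder values, not parameters from the publication:

        # Sketch: raw slow-time I/Q samples for point scatterers with a Doppler shift.
        # The 1/r**2 amplitude term stands in for a full radar range equation.
        import numpy as np

        c = 3.0e8                 # speed of light (m/s)
        f0 = 5.6e9                # assumed C-band carrier frequency (Hz)
        prf = 1000.0              # pulse repetition frequency (Hz)
        n_pulses = 64
        wavelength = c / f0

        # Hypothetical scatterers: (initial range m, radial velocity m/s, weight)
        scatterers = [(10.0e3, 5.0, 1.0), (10.2e3, 6.5, 0.7), (9.8e3, 4.2, 0.5)]

        t = np.arange(n_pulses) / prf
        signal = np.zeros(n_pulses, dtype=complex)
        for r0, v, w in scatterers:
            r = r0 + v * t                        # range drifts between pulses
            amp = w / r**2                        # crude range dependence of echo strength
            signal += amp * np.exp(-4j * np.pi * r / wavelength)

        # The pulse-to-pulse phase progression is the Doppler shift; a periodogram
        # over slow time recovers the velocity spectrum of the targets.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))**2
        freqs = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))
        velocities = -freqs * wavelength / 2.0    # positive = receding, per the phase sign
        print(velocities[np.argmax(spectrum)])    # dominant radial velocity estimate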
  • Kellomäki, Seppo; Hänninen, Heikki; Kolström, Taneli; Kotisaari, Ahti; Pukkala, Timo (Suomen metsätieteellinen seura, 1987)
  • Bhattacharjee, Joy; Rabbil, Mehedi; Fazel, Nasim; Darabi, Hamid; Choubin, Bahram; Khan, Md. Motiur Rahman; Marttila, Hannu; Haghighi, Ali Torabi (Elsevier, 2021)
    Science of the Total Environment 797 (2021), 149034
    Lake water level fluctuation is a function of hydro-meteorological components, namely inputs to and outputs from the system. The combination of these components from in-situ and remote sensing sources has been used in this study to define multiple scenarios, which are the major explanatory pathways for assessing lake water levels. The goal is to analyze each scenario through the application of the water balance equation to simulate lake water levels. The largest lake in Iran, Lake Urmia, was selected for this study as it needs a great deal of attention in terms of water management issues. We ran a monthly water balance simulation of nineteen scenarios for Lake Urmia from 2003 to 2007 by applying different combinations of data, including observed and remotely sensed water level, flow, evaporation, and rainfall. We used readily available water level data from the Hydrosat, Hydroweb, and DAHITI platforms, evapotranspiration from MODIS, and rainfall from TRMM. The analysis suggests that using field data as the initial water level in the algorithm reproduces the fluctuation of the Lake Urmia water level best. The scenario that combines in-situ meteorological components is the closest match to the observed water level of Lake Urmia. Almost all scenarios followed the dynamics of the field water level well, but we found that nine out of nineteen scenarios did not vary significantly in terms of dynamics. The results also reveal that, even without any field data, the proposed scenario consisting entirely of remote sensing components is capable of estimating water level fluctuation in a lake. The analysis also underlines the necessity of using proper data sources to act on water regulations and managerial decisions and to understand the temporal behaviour, not only of Lake Urmia but also of other lakes in semi-arid regions.
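    As a rough illustration of the monthly water-balance bookkeeping referred to above, here is a minimal sketch; the constant lake area, the unit conventions and the example numbers are assumptions made for the sketch and are not taken from the paper:

        # Sketch of a monthly lake water balance: dS = (P - E) * A + Qin - Qout,
        # with the level change recovered from storage via an assumed constant area.
        def simulate_levels(initial_level_m, months, area_m2):
            """months: dicts with monthly rain/evap in metres and inflow/outflow in m3."""
            level, levels = initial_level_m, []
            for m in months:
                d_storage = (m["rain"] - m["evap"]) * area_m2 + m["inflow"] - m["outflow"]
                level += d_storage / area_m2          # metres of level change
                levels.append(level)
            return levels

        # Hypothetical monthly values (not Lake Urmia data):
        months = [
            {"rain": 0.03, "evap": 0.10, "inflow": 2.0e8, "outflow": 0.0},
            {"rain": 0.05, "evap": 0.08, "inflow": 1.5e8, "outflow": 0.0},
        ]
        print(simulate_levels(1271.0, months, area_m2=5.0e9))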
  • Calderón, Silvia M.; Tonttila, Juha; Buchholz, Angela; Joutsensaari, Jorma; Komppula, Mika; Leskinen, Ari; Hao, Liqing; Moisseev, Dmitri; Pullinen, Iida; Tiitta, Petri; Xu, Jian; Virtanen, Annele; Kokkola, Harri; Romakkaniemi, Sami (Copernicus Publ., 2022)
    Atmospheric chemistry and physics
    We carried out a closure study of aerosol-cloud interactions during stratocumulus formation using the large eddy simulation model UCLALES-SALSA and observations from the 2020 cloud sampling campaign at the Puijo SMEAR IV station in Kuopio, Finland. The unique observational setup, combining in situ and cloud remote sensing measurements, allowed a closer look into the aerosol size-composition dependence of droplet activation and droplet growth in a turbulent boundary layer driven by surface forcing and radiative cooling. UCLALES-SALSA uses spectral bin microphysics for aerosols and hydrometeors and incorporates a full description of their interactions into the turbulent-convective radiation-dynamical model of stratocumulus. Based on our results, the model successfully described the probability distribution of updraft velocities and consequently the size dependency of aerosol activation into cloud droplets, and further recreated the size distributions of both interstitial aerosol and cloud droplets. This is the first time such a detailed closure has been achieved, accounting not only for the activation of cloud droplets in different updrafts but also for the processes evaporating droplets and for drizzle production through coagulation-coalescence. We studied two cases of cloud formation, one diurnal (24 September 2020) and one nocturnal (31 October 2020), with high and low aerosol loadings, respectively. Aerosol number concentrations differ by more than an order of magnitude between the cases and therefore lead to cloud droplet number concentration (CDNC) values ranging from less than 100 cm-3 up to 1000 cm-3. The different aerosol loadings affected supersaturation at the cloud base, and thus the size of aerosol particles activating into cloud droplets. Due to the higher CDNC, the mean size of cloud droplets in the diurnal, high-aerosol case was smaller; droplet evaporation in downdrafts therefore affected the observed CDNC at Puijo altitude more than in the low-aerosol case. In addition, in the low-aerosol case, the presence of large aerosol particles in the accumulation mode played a significant role in the evolution of the droplet spectrum, as it promoted drizzle formation through collision and coalescence processes. During this event, the formation of ice particles was also observed due to subzero temperatures at the cloud top. Although the modeled number concentration of ice hydrometeors was too low to be directly measured, the retrieval of hydrometeor sedimentation velocities with cloud radar allowed us to assess the realism of the modeled ice particles. The studied cases are presented in detail and can be further used by cloud modellers to test and validate their models in a well-characterized modelling setup. We also provide recommendations on how an increasing amount of information on aerosol properties could improve the understanding of the processes affecting cloud droplet number and liquid water content in stratiform clouds.
  • Page, Mathew (Helsingin yliopisto, 2021)
    With rising income inequalities and increasing immigration in many European cities, residential segregation remains a key focus for city planners and policy makers. As changes in the socio-spatial configuration of cities result from the residential mobility of their residents, the basis on which this mobility occurs is an important factor in segregation dynamics. There are many macro-level conditions which can constrain residential choice and facilitate segregation, such as the structure and supply of housing, competition in real estate markets, and legal and institutional forms of housing discrimination. However, segregation has also been shown to occur from the bottom up, through the self-organisation of individual households who make decisions about where to live. Using simple theoretical models, Thomas Schelling demonstrated how individual residential choices can lead to unanticipated and unexpected segregation in a city, even when this is not explicitly desired by any household. Schelling's models are based upon theories of social homophily, or social distance dynamics, whereby individuals are thought to cluster in social and physical space on the basis of shared social traits. Understanding this process poses challenges for traditional research methods, as segregation dynamics exhibit many complex behaviours including interdependency, emergence and nonlinearity. In recent years, simulation has been turned to as one possible method of analysis. Despite this increased interest in simulation as a tool for segregation research, there have been few attempts to operationalise a geospatial model using empirical data for a real urban area. This thesis contributes to research on the simulation of social phenomena by developing a geospatial agent-based model (ABM) of residential segregation from empirical population data for the Helsinki Metropolitan Area (HMA). The urban structure, population composition, density and socio-spatial distribution of the HMA are represented within the modelling environment. Whilst the operational parameters of the model remain highly simplified in order to make processes more transparent, it permits exploration of possible system behaviour by placing it in a manipulable form. Specifically, this study uses simulation to test whether individual preferences based on social homophily are capable of producing segregation in a theoretical system which is free of discrimination and other factors that may constrain residential choice (a minimal version of this mechanism is sketched below). Three different scenarios were conducted, corresponding to different preference structures and demands for co-group neighbours. Each scenario was simulated for three potential sorting variables derived from the literature: socio-economic status (income), cultural capital (education level) and language group (mother tongue). Segregation increases in all of the simulations; however, there are considerable behavioural differences between the scenarios and grouping variables. The results broadly support the idea that individual residential choices by households are capable of producing and maintaining segregation under the right theoretical conditions. As a relatively novel approach to segregation research, the components, processes and parameters of the developed model are described in detail for transparency. Limitations of such an approach are addressed at length, and attention is given to methods of measuring and reporting on the evolution and results of the simulations. The potential and limitations of using simulation in segregation research are highlighted through this work.
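    A minimal Schelling-style sketch of the preference mechanism the thesis builds on; the grid size, vacancy share and tolerance threshold are illustrative defaults rather than the parameters of the HMA model:

        # Minimal Schelling-style grid model: agents relocate to a vacant cell when
        # the share of co-group neighbours falls below their preference threshold.
        import random

        SIZE, EMPTY_SHARE, THRESHOLD = 50, 0.1, 0.4
        grid = [[None if random.random() < EMPTY_SHARE else random.choice([1, 2])
                 for _ in range(SIZE)] for _ in range(SIZE)]

        def unhappy(x, y):
            group = grid[x][y]
            if group is None:
                return False
            same = total = 0
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == dy == 0:
                        continue
                    n = grid[(x + dx) % SIZE][(y + dy) % SIZE]
                    if n is not None:
                        total += 1
                        same += (n == group)
            return total > 0 and same / total < THRESHOLD

        for sweep in range(20):                   # a few relocation sweeps
            empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
            movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(x, y)]
            random.shuffle(movers)
            for x, y in movers:
                if not empties:
                    break
                ex, ey = empties.pop(random.randrange(len(empties)))
                grid[ex][ey], grid[x][y] = grid[x][y], None
                empties.append((x, y))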
  • Omwami, Raymond K. (Suomen metsätieteellinen seura, 1988)
    A study aimed at applying concepts of economic theory relevant to the formulation of a long-term timber production model as a basis of forest policy. A vertically integrated forest sector production model is described, together with its application in a developing economy and the derivation of a dynamic silvicultural investment criterion (in a labour surplus economy).
  • Christopher, Solomon (2020)
    The study of how transmissible an infectious pathogen is, and what its main routes of transmission are, is key to the management and control of its spread. Some infections which begin with zoonotic or common-source transmission may additionally exhibit potential for direct person-to-person transmission. Methods to discern multiple transmission routes from observed outbreak datasets are thus essential. Features such as partial observation of the outbreak can make such inferences more challenging. This thesis presents a stochastic modelling framework to infer person-to-person transmission using data observed from a completed outbreak in a population of households. The model is specified hierarchically for the processes of transmission and observation. The transmission model specifies the process of acquiring infection from either the environment or infectious household members. This model is governed by two parameters, one for each source of transmission; in continuous time they are characterised by transmission hazards, while in discrete time they are characterised by escape probabilities. The observation model specifies the process of observation of the outbreak based on symptom times and serological test results. The observation design is extended to address an ongoing outbreak with censored observation, as well as case-ascertained sampling where households are sampled based on index cases. The model and observation settings are motivated by typical data from Hepatitis A virus (HAV) outbreaks. Partial observation of the infectious process is due to unobserved infection times, the presence of asymptomatic infections, and not-fully-sensitive serological test results. Individual-level latent variables are introduced in order to account for partial observation of the process. A data-augmented Markov chain Monte Carlo (DA-MCMC) algorithm is developed to estimate the transmission parameters by simultaneously sampling the latent variables. A model comparison using the deviance information criterion (DIC) is formulated to test for the presence of direct transmission, which is the primary aim of this thesis. In calculating the DIC, the required computations utilise the DA-MCMC algorithm developed for the estimation procedures. The inference methods are tested using simulated outbreak data based on a set of scenarios defined by varying the following: presence of direct transmission, sensitivity and specificity for observation of symptoms, values of the transmission parameters, and household size distribution. Simulations are also used for understanding patterns in the distribution of household final sizes by varying the values of the transmission parameters. In the results for simulated outbreaks, DIC6 indicates the correct model in almost all simulation scenarios and is robust across the presented scenarios. The posterior estimates of the transmission parameters from DA-MCMC are also fairly consistent with the values used in the simulation. The procedures presented in this thesis are for SEIR epidemic models in which the latent period is shorter than the incubation period and asymptomatic infections are present. These procedures can be directly adapted to infections with a similar or simpler natural history. The modelling framework is flexible and can be further extended to include components for vaccination and pathogen genetic sequence data.
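    A small simulation sketch of the two-route, discrete-time escape-probability idea described above; the daily escape probabilities, infectious period and household size are invented for illustration (and the latent period is omitted), so they are not estimates from the thesis:

        # Each susceptible escapes environmental infection with probability q_env per
        # day and escapes each currently infectious housemate with probability q_hh.
        import random

        q_env, q_hh = 0.995, 0.97        # hypothetical daily escape probabilities

        def household_final_size(size, days=60, infectious_days=7):
            status = ["S"] * size        # S = susceptible, I = infectious, R = recovered
            remaining = [0] * size
            for _ in range(days):
                n_inf = sum(1 for s in status if s == "I")
                newly = [i for i in range(size)
                         if status[i] == "S" and random.random() > q_env * q_hh ** n_inf]
                for i in range(size):    # progress existing infections
                    if status[i] == "I":
                        remaining[i] -= 1
                        if remaining[i] == 0:
                            status[i] = "R"
                for i in newly:          # new infections become infectious from the next day
                    status[i], remaining[i] = "I", infectious_days
            return sum(1 for s in status if s != "S")

        print([household_final_size(4) for _ in range(5)])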
  • Bozhko, Dmitrii V.; Galumov, Georgii K.; Polovian, Aleksandr I.; Kolchanova, Sofiia M.; Myrov, Vladislav O.; Stelmakh, Viktoriia A.; Schioth, Helgi B. (2021)
    Cerebral ("brain") organoids are high-fidelity in vitro cellular models of the developing brain, which makes them one of the go-to methods for studying isolated processes of tissue organization and its electrophysiological properties, allowing researchers to collect invaluable data for in silico modeling of neurodevelopmental processes. Complex computer models of biological systems supplement in vivo and in vitro experimentation and allow researchers to look at things that no laboratory study has access to, due to either technological or ethical limitations. In this paper, we present the Biological Cellular Neural Network Modeling (BCNNM) framework, designed for building dynamic spatial models of neural tissue organization and basic stimulus dynamics. The BCNNM uses a convenient predicate description of sequences of biochemical reactions and can be used to run complex models of multi-layer neural network formation from a single initial stem cell. It involves processes such as proliferation of precursor cells and their differentiation into mature cell types, cell migration, axon and dendritic tree formation, axon pathfinding and synaptogenesis. The experiment described in this article demonstrates the creation of an in silico cerebral organoid-like structure, constituted of up to 1 million cells, which differentiate and self-organize into an interconnected system with four layers, where the spatial arrangement of layers and cells is consistent with the values of analogous parameters obtained from research on living tissues. Our in silico organoid contains axons and millions of synapses within and between the layers, and it comprises neurons with a high density of connections (more than 10). In sum, the BCNNM is an easy-to-use and powerful framework for simulations of neural tissue development that provides a convenient way to design a variety of tractable in silico experiments.
  • Virtanen, Jussi (Helsingin yliopisto, 2022)
    In this thesis we assess the ability of two different models to predict cash flows in private credit investment funds. One model is stochastic and the other deterministic, which makes them quite different. The data obtained for the analysis is divided into three subsamples: mature funds, liquidated funds, and all funds. The data consists of 62 funds, of which the subsample of mature funds contains 36 funds and the subsample of liquidated funds 17. Both models are fitted to all subsamples. The parameters of the models are estimated with different techniques: the parameters of the Stochastic model are estimated with the conditional least squares method, and the parameters of the Yale model with numerical methods. After the estimation, the parameter values are explained in detail and their effect on the cash flows is investigated. This helps to understand which properties of the cash flows the models are able to capture. In addition, we assess both models' ability to predict future cash flows. This is done using the coefficient of determination, QQ-plots, and a comparison of predicted and observed cumulated cash flows. With the coefficient of determination we examine how well the models explain the variation in the residuals between observed and predicted values. With QQ-plots we determine whether the values produced by the process follow the normal distribution. Finally, with the cumulated cash flows of contributions and distributions we determine whether the models are able to predict the cumulated committed capital and the returns of the fund in the form of distributions. The results show that the Stochastic model performs better in its prediction of contributions and distributions. However, this is not the case for all the subsamples: the Yale model does better for the cumulated contributions of the subsample of mature funds. Nevertheless, the flexibility of the Stochastic model makes it more suitable for different types of cash flows and subsamples. It is therefore suggested that the Stochastic model should be used in the prediction and modelling of private credit funds; it is harder to implement than the Yale model, but it provides more accurate predictions.
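    For orientation, a heavily simplified sketch in the spirit of the deterministic Yale (Takahashi-Alexander) model; the call schedule, growth rate, fund life and bow factor below are placeholders rather than the parameter values estimated in the thesis:

        # Simplified Takahashi-Alexander-style projection of contributions and
        # distributions for one fund, driven entirely by deterministic rates.
        def project(commitment=100.0, years=12, call_rates=(0.25, 0.33, 0.5),
                    growth=0.10, yield_rate=0.0, life=12, bow=2.5):
            paid_in, nav = 0.0, 0.0
            calls, dists = [], []
            for t in range(1, years + 1):
                rate_c = call_rates[t - 1] if t <= len(call_rates) else 1.0
                call = rate_c * (commitment - paid_in)     # contribution this period
                paid_in += call
                rate_d = 1.0 if t >= life else max(yield_rate, (t / life) ** bow)
                nav = nav * (1.0 + growth) + call          # value grows, new capital added
                dist = rate_d * nav                        # distribution this period
                nav -= dist
                calls.append(call)
                dists.append(dist)
            return calls, dists

        calls, dists = project()
        print(sum(calls), sum(dists))                      # cumulated cash flows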
  • Manninen, Terhikki; Jääskeläinen, Emmihenna; Siljamo, Niilo; Riihelä, Aku; Karlsson, Karl-Göran (Copernicus Publications, 2022)
    Atmospheric measurement techniques
    This paper describes a new method for cloud-correcting observations of black-sky surface albedo derived using the Advanced Very High Resolution Radiometer (AVHRR). Cloud cover constitutes a major challenge for surface albedo estimation using AVHRR data for all possible conditions of cloud fraction and cloud type with any land cover type and solar zenith angle. This study shows how the new cloud probability (CP) data, to be provided as part of edition A3 of the CLARA (CM SAF cLoud, Albedo and surface Radiation dataset from AVHRR data) record from the Satellite Application Facility on Climate Monitoring (CM SAF) project of EUMETSAT, can be used instead of traditional binary cloud masking to derive cloud-free monthly mean surface albedo estimates. Cloudy broadband albedo distributions were simulated first for theoretical cloud distributions and then using global cloud probability data for one month. A weighted mean approach based on the CP values was shown to produce very-high-accuracy black-sky surface albedo estimates for simulated data: the 90% quantile of the error was 1.1% (in absolute albedo percentage) and that of the relative error was 2.2%. AVHRR-based and in situ albedo distributions were in line with each other, and the monthly mean values were also consistent. Comparison with binary cloud masking indicated that the developed method improves the removal of cloud contamination.
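    The weighted-mean idea can be illustrated in a few lines; using 1 - CP as the weight of each observation is an assumption made for this sketch, not necessarily the exact CLARA-A3 weighting:

        # Sketch: monthly mean black-sky albedo for one pixel as a cloud-probability-
        # weighted mean, so likely-cloudy retrievals contribute little to the mean.
        import numpy as np

        albedo_obs = np.array([0.15, 0.42, 0.16, 0.55, 0.14])  # daily retrievals
        cloud_prob = np.array([0.05, 0.90, 0.10, 0.95, 0.02])  # CP of each retrieval

        weights = 1.0 - cloud_prob            # assumed weight: clear-sky probability
        monthly_mean = np.sum(weights * albedo_obs) / np.sum(weights)
        print(round(monthly_mean, 3))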
  • Pukkala, Timo; Kolström, Taneli (Suomen metsätieteellinen seura, 1987)
  • Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo; Vanni, Simo (2016)
    In the visual cortex, stimuli outside the classical receptive field (CRF) modulate the neural firing rate without driving the neuron by themselves. In the primary visual cortex (V1), such contextual modulation can be parametrized with an area summation function (ASF): increasing stimulus size causes first an increase and then a decrease of firing rate before reaching an asymptote. Earlier work has reported an increase of sparseness when CRF stimulation is extended to its surroundings, but there has been no clear connection between the ASF and network efficiency. Here we aimed to investigate a possible link between the ASF and network efficiency. In this study, we simulated the responses of a biomimetic spiking neural network model of the visual cortex to a set of natural images. We varied the network parameters and compared the V1 excitatory neuron spike responses to the corresponding responses predicted from earlier single-neuron data from primate visual cortex. The network efficiency was quantified with the firing rate (which is directly associated with neural energy consumption), entropy per spike, and population sparseness. All three measures together provided a clear association between network efficiency and the ASF. The association was clear when varying the horizontal connectivity within V1, which influenced both the efficiency and the distance to the ASF (DAS). Given the limitations of our biophysical model, this association is qualitative, but it nevertheless suggests that an ASF-like receptive field structure can cause an efficient population response.
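    One common textbook parametrization of such an area summation function is a ratio of integrated centre and surround Gaussians; the form and the parameter values below are generic illustrations, not necessarily the parametrization used in this study:

        # Ratio-of-Gaussians area summation function: the response first rises with
        # stimulus diameter and is then suppressed by the surround toward an asymptote.
        from math import erf

        def asf(diameter, k_c=1.0, k_s=0.6, w_c=0.5, w_s=2.0):
            l_c = erf(diameter / (2.0 * w_c)) ** 2   # integrated centre drive
            l_s = erf(diameter / (2.0 * w_s)) ** 2   # integrated surround drive
            return k_c * l_c / (1.0 + k_s * l_s)

        for d in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
            print(d, round(asf(d), 3))               # rises, peaks near d = 2, then falls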
  • Niemi, Jarkko K.; Edwards, Sandra A.; Papanastasiou, Dimitris K.; Piette, Deborah; Stygar, Anna H.; Wallenbeck, Anna; Valros, Anna (2021)
    Tail biting is an important animal welfare issue in the pig sector. Studies have identified various risk factors which can lead to biting incidents and have proposed mitigation measures. This study focused on the following seven key measures which have been identified as affecting the risk of tail biting lesions: improvements in straw provision, housing ventilation, genetics, stocking density, herd health, provision of point-source enrichment objects, and adoption of early warning systems. The aim of this study was to examine whether these selected measures to reduce the risk of tail biting lesions in pig fattening are cost-effective. The problem was analyzed by, first, summarizing the most prospective interventions, their costs and expected impacts on the prevalence of tail biting lesions; second, using a stochastic bio-economic model to simulate the financial return per pig space unit and per pig at different levels of prevalence of tail biting lesions; and third, looking at how large a reduction in tail biting lesions would be needed at different initial levels of prevalence to cover the costs of the interventions. The model considered tail biting lesions of a severity which would require an action (medication, hospitalization of the pig or other care, or taking preventive measures) by the pig producer. The results provide guidance on the expected benefits and costs of the studied interventions. According to the results, if the average prevalence of tail biting lesions is at a level of 10%, the costs of this damaging behavior can be as high as EUR 2.3 per slaughtered pig (about 1.6% of carcass value). Measures which were considered the least expensive to apply, such as provision of point-source enrichment objects, or which provided wider production benefits, such as improvements in ventilation and herd health, became profitable at a lower level of efficacy than measures which were considered the most expensive to apply (e.g., straw provision, increased space allowance, automated early warning systems). Measures which were considered most efficient in reducing the risk of tail biting lesions, such as straw provision, can be cost-effective in preventing tail biting, especially when the risk of tail biting is high. At lower risk levels, the provision of point-source objects and other less costly but relatively effective measures can play an important role. However, selection of measures appropriate to the individual farm's problem is essential. For instance, if poor health or barren pens are causing the elevated risk of tail biting lesions, then improving health management or enriching the pens may resolve the tail biting problem cost-effectively.
  • Hämäläinen, Heikki; Aroviita, Jukka; Jyväsjärvi, Jussi; Kärkkäinen, Salme (Ecological Society of America, 2018)
    Ecological Applications 28 (5): 1260-1272
    The ecological assessment of freshwaters is currently based primarily on biological communities and the reference condition approach (RCA). In the RCA, the communities in streams and lakes disturbed by humans are compared with communities in reference conditions with no or minimal anthropogenic influence. The currently favored rationale is to use selected community metrics for which the expected values (E) for each site are typically estimated from environmental variables using a predictive model based on the reference data. The proportional differences between the observed values (O) and E are then derived, and the decision rules for status assessment are based on fixed (typically 10th or 25th) percentiles of the O/E ratios among reference sites. Based on mathematical formulations, illustrations with simulated data, and real case studies representing such an assessment approach, we demonstrate that the use of a common quantile of O/E ratios will, under certain conditions, cause severe bias in decision making even if the predictive model itself is unbiased. This is because the variance of O/E under these conditions, which seem to be quite common among the published applications, varies systematically with E. We propose a correction method for the bias and compare the novel approach to the conventional one in our case studies, with data from both reference and impacted sites. The results highlight a conceptual issue in employing ratios in status assessment. In some cases, using the absolute deviations instead provides a simple solution to the identified bias and might also be more ecologically relevant and defensible.
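    The core of the identified problem can be reproduced with a short simulation: if the spread of O does not scale with E, the variance of O/E among reference sites shrinks as E grows, so a single quantile threshold flags low-E and high-E reference sites at very different rates. The numbers below are purely illustrative:

        # Illustration: a fixed quantile of O/E misbehaves when sd(O) does not scale with E.
        import numpy as np

        rng = np.random.default_rng(1)
        E = rng.uniform(5, 50, size=2000)            # model-predicted metric values
        O = rng.normal(loc=E, scale=3.0)             # unbiased observations, constant sd
        ratio = O / E
        threshold = np.quantile(ratio, 0.10)         # common 10th-percentile decision rule

        low_E, high_E = E < 15, E > 40
        print((ratio[low_E] < threshold).mean())     # share of low-E reference sites flagged
        print((ratio[high_E] < threshold).mean())    # share of high-E reference sites flagged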
  • Boutle, Ian; Angevine, Wayne; Bao, Jian-Wen; Bergot, Thierry; Bhattacharya, Ritthik; Bott, Andreas; Ducongé, Leo; Forbes, Richard; Goecke, Tobias; Grell, Evelyn; Hill, Adrian; Igel, Adele L.; Kudzotsa, Innocent; Lac, Christine; Maronga, Bjorn; Romakkaniemi, Sami; Schmidli, Juerg; Schwenkel, Johannes; Steeneveld, Gert-Jan; Vié, Benoît (Copernicus Publ., 2022)
    Atmospheric chemistry and physics
    An intercomparison between 10 single-column models (SCMs) and 5 large-eddy simulation (LES) models is presented for a radiation fog case study inspired by the Local and Non-local Fog Experiment (LANFEX) field campaign. Seven of the SCMs represent single-column equivalents of operational numerical weather prediction (NWP) models, whilst three are research-grade SCMs designed for fog simulation, and the LESs are designed to reproduce the underlying physical processes governing fog formation as faithfully as currently possible. The LES model results are of variable quality and do not provide a consistent baseline against which to compare the NWP models, particularly under high aerosol or cloud droplet number concentration (CDNC) conditions. The main SCM bias appears to be toward the overdevelopment of fog, i.e. fog which is too thick, although the inter-model variability is large. In reality there is a subtle balance between water lost to the surface and water condensed into fog, and the ability of a model to accurately simulate this process strongly determines the quality of its forecast. Some NWP SCMs do not represent fundamental components of this process (e.g. cloud droplet sedimentation) and are therefore naturally hampered in their ability to deliver accurate simulations. Finally, we show that modelled fog development is as sensitive to the shape of the cloud droplet size distribution, a rarely studied or modified part of the microphysical parameterisation, as it is to the underlying aerosol or CDNC.
  • Oksanen, Juha (Finnish Geodetic Institute, 2006)
    FGI Publications 134
  • Spjuth, Ola; Karlsson, Andreas; Clements, Mark; Humphreys, Keith; Ivansson, Emma; Dowling, Jim; Eklund, Martin; Jauhiainen, Alexandra; Czene, Kamila; Grönberg, Henrik; Sparén, Pär; Wiklund, Fredrik; Cheddad, Abbas; Pálsdóttir, Þorgerður; Rantalainen, Mattias; Abrahamsson, Linda; Laure, Erwin; Litton, Jan-Eric; Palmgren, Juni (2017)
    Objective: We provide an e-Science perspective on the workflow from risk factor discovery and classification of disease to evaluation of personalized intervention programs. As case studies, we use personalized prostate and breast cancer screenings. Materials and Methods: We describe an e-Science initiative in Sweden, e-Science for Cancer Prevention and Control (eCPC), which supports biomarker discovery and offers decision support for personalized intervention strategies. The generic eCPC contribution is a workflow with 4 nodes applied iteratively, and the concept of e-Science signifies systematic use of tools from the mathematical, statistical, data, and computer sciences. Results: The eCPC workflow is illustrated through 2 case studies. For prostate cancer, an in-house personalized screening tool, the Stockholm-3 model (S3M), is presented as an alternative to prostate-specific antigen testing alone. S3M is evaluated in a trial setting and plans for rollout in the population are discussed. For breast cancer, new biomarkers based on breast density and molecular profiles are developed and the US multicenter Women Informed to Screen Depending on Measures (WISDOM) trial is referred to for evaluation. While current eCPC data management uses a traditional data warehouse model, we discuss eCPC-developed features of a coherent data integration platform. Discussion and Conclusion: E-Science tools are a key part of an evidence-based process for personalized medicine. This paper provides a structured workflow from data and models to evaluation of new personalized intervention strategies. The importance of multidisciplinary collaboration is emphasized. Importantly, the generic concepts of the suggested eCPC workflow are transferrable to other disease domains, although each disease will require tailored solutions.