Browsing by Subject "simulation"

  • Lindholm, Heidi (Helsingfors universitet, 2017)
    The purpose of this study is to explore the learning experiences of sixth-grade students in the Me & MyCity learning environment. The research task is approached through the criteria of meaningful learning, which have been used as a theoretical framework in a Finnish learning environment study, among others. Previous research has shown that the criteria of meaningful learning can be found in different kinds of learning environments. The study focuses on which working life skills the students learn in the Me & MyCity working life and society simulation. Very little research has been conducted on Me & MyCity, so this study is much needed. Research on learning environments shows that understanding and studying the usefulness of different learning environments is necessary, since few studies are available on the topic. The goal of this study is to generate new information about the Me & MyCity learning environment, and also about which working life skills it can help students learn. The results of this study can also be used, for example, in the development of Me & MyCity. The study was carried out as a case study. The data consists of thematic interviews with a class of students and a teacher from a school in Vantaa who visited Me & MyCity in the spring of 2016, and papers the students wrote (two per student). Altogether there were thematic interviews with 19 students, 38 papers, and one thematic interview with the teacher. The data was analyzed deductively, using the criteria of meaningful learning and a framework of working life skills compiled for this study. The results show that all criteria of meaningful learning can be found in Me & MyCity. However, based on the research data, the criterion of constructive learning was fulfilled only to a small extent, so the Me & MyCity learning environment could be developed to better support students' reflection on their own learning, for example. There is variation in how working life skills are learnt in Me & MyCity. According to the results, some working life skills were not learnt at all. These results can be applied, among other things, to the pedagogical material of Me & MyCity and its development. The results can also be put to use in ordinary school teaching to consider how schoolwork can support students in learning working life skills and how, for example, an authentic learning environment that supports learning can be built in a school setting. The results can also be applied to building a good learning environment that supports the learning of other skills and information as well.
  • Kellomäki, Seppo; Hänninen, Heikki; Kolström, Taneli; Kotisaari, Ahti; Pukkala, Timo (Suomen metsätieteellinen seura, 1987)
  • Page, Mathew (Helsingin yliopisto, 2021)
    With rising income inequalities and increasing immigration in many European cities, residential segregation remains a key focus for city planners and policy makers. As changes in the socio-spatial configuration of cities result from the residential mobility of their residents, the basis on which this mobility occurs is an important factor in segregation dynamics. There are many macro conditions which can constrain residential choice and facilitate segregation, such as the structure and supply of housing, competition in real estate markets, and legal and institutional forms of housing discrimination. However, segregation has also been shown to occur from the bottom up, through the self-organisation of individual households who make decisions about where to live. Using simple theoretical models, Thomas Schelling demonstrated how individual residential choices can lead to unanticipated and unexpected segregation in a city, even when this is not explicitly desired by any households. Schelling’s models are based upon theories of social homophily, or social distance dynamics, whereby individuals are thought to cluster in social and physical space on the basis of shared social traits. Understanding this process poses challenges for traditional research methods, as segregation dynamics exhibit many complex behaviours including interdependency, emergence and nonlinearity. In recent years, simulation has been turned to as one possible method of analysis. Despite this increased interest in simulation as a tool for segregation research, there have been few attempts to operationalise a geospatial model using empirical data for a real urban area. This thesis contributes to research on the simulation of social phenomena by developing a geospatial agent-based model (ABM) of residential segregation from empirical population data for the Helsinki Metropolitan Area (HMA). The urban structure, population composition, density and socio-spatial distribution of the HMA are represented within the modelling environment. Whilst the operational parameters of the model remain highly simplified in order to make processes more transparent, it permits exploration of possible system behaviour by placing it in a manipulable form. Specifically, this study uses simulation to test whether individual preferences, based on social homophily, are capable of producing segregation in a theoretical system which is absent of discrimination and other factors that may constrain residential choice. Three different scenarios were conducted, corresponding to different preference structures and demands for co-group neighbours. Each scenario was simulated for three different potential sorting variables derived from the literature: socio-economic status (income), cultural capital (education level) and language groups (mother tongue). Segregation increases in all of the simulations; however, there are considerable behavioural differences between the different scenarios and grouping variables. The results broadly support the idea that individual residential choices by households are capable of producing and maintaining segregation under the right theoretical conditions. As a relatively novel approach to segregation research, the components, processes and parameters of the developed model are described in detail for transparency. Limitations of the approach are addressed at length, and attention is given to methods of measuring and reporting on the evolution and results of the simulations. The potential and limitations of using simulation in segregation research are highlighted through this work.
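A minimal sketch of the kind of Schelling-type dynamics the abstract describes, not the thesis's geospatial model: households on a torus relocate to empty cells when too few of their neighbours share their group. The grid size, the two groups and the `demand` threshold are assumptions for illustration; the thesis itself uses empirical HMA population data.

```python
import numpy as np

# Illustrative Schelling-type model on a torus (assumed parameters, two groups).
rng = np.random.default_rng(0)
size, demand = 50, 0.4
grid = rng.choice([0, 1, 2], size=(size, size), p=[0.10, 0.45, 0.45])  # 0 = empty cell

def same_group_share(grid, i, j):
    """Share of occupied Moore neighbours belonging to the same group as cell (i, j)."""
    rows = [(i - 1) % size, i, (i + 1) % size]
    cols = [(j - 1) % size, j, (j + 1) % size]
    block = grid[np.ix_(rows, cols)].ravel()
    neigh = np.delete(block, 4)              # drop the centre cell itself
    occupied = neigh[neigh > 0]
    return 1.0 if occupied.size == 0 else float(np.mean(occupied == grid[i, j]))

for _ in range(20):                          # relocation rounds
    movers = [(i, j) for i, j in zip(*np.nonzero(grid))
              if same_group_share(grid, i, j) < demand]
    empties = list(zip(*np.nonzero(grid == 0)))
    rng.shuffle(movers)
    rng.shuffle(empties)
    for (i, j), (ei, ej) in zip(movers, empties):
        grid[ei, ej], grid[i, j] = grid[i, j], 0   # move household to an empty cell

satisfied = np.mean([same_group_share(grid, i, j) >= demand
                     for i, j in zip(*np.nonzero(grid))])
print(f"satisfied households after 20 rounds: {satisfied:.0%}")
```

Even with a modest demand for co-group neighbours, repeated rounds of relocation tend to raise local homogeneity, which is the bottom-up mechanism the thesis tests against empirical data.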
  • Omwami, Raymond K. (Suomen metsätieteellinen seura, 1988)
    A study aimed at applying concepts of economic theory relevant to the formulation of a long-term timber production model as a basis of forest policy. A vertically integrated forest sector production model is described, together with its application in a developing economy and the derivation of a dynamic silvicultural investment criterion (in a labour surplus economy).
  • Christopher, Solomon (2020)
    The study of how transmissible an infectious pathogen is, and what its main routes of transmission are, is key to the management and control of its spread. Some infections which begin with zoonotic or common-source transmission may additionally exhibit potential for direct person-to-person transmission. Methods to discern multiple transmission routes from observed outbreak datasets are thus essential. Features such as partial observation of the outbreak can make such inferences more challenging. This thesis presents a stochastic modelling framework to infer person-to-person transmission using data observed from a completed outbreak in a population of households. The model is specified hierarchically for the processes of transmission and observation. The transmission model specifies the process of acquiring infection from either the environment or infectious household members. This model is governed by two parameters, one for each source of transmission. While in continuous time they are characterised by transmission hazards, in discrete time they are characterised by escape probabilities. The observation model specifies the process of observation of the outbreak based on symptom times and serological test results. The observation design is extended to address an ongoing outbreak with censored observation, as well as case-ascertained sampling where households are sampled based on index cases. The model and observation settings are motivated by typical data from Hepatitis A virus (HAV) outbreaks. Partial observation of the infectious process is due to unobserved infection times, the presence of asymptomatic infections and not-fully-sensitive serological test results. Individual-level latent variables are introduced in order to account for partial observation of the process. A data-augmented Markov chain Monte Carlo (DA-MCMC) algorithm is developed to estimate the transmission parameters while simultaneously sampling the latent variables. A model comparison using the deviance information criterion (DIC) is formulated to test for the presence of direct transmission, which is the primary aim of this thesis. In calculating DIC, the required computations utilise the DA-MCMC algorithm developed for the estimation procedures. The inference methods are tested using simulated outbreak data based on a set of scenarios defined by varying the following: presence of direct transmission, sensitivity and specificity for observation of symptoms, values of the transmission parameters, and household size distribution. Simulations are also used for understanding patterns in the distribution of household final sizes by varying the values of the transmission parameters. In the results using simulated outbreaks, DIC6 points to the correct model in almost all simulation scenarios and is robust across the presented scenarios. The posterior estimates of the transmission parameters using DA-MCMC are also fairly consistent with the values used in the simulation. The procedures presented in this thesis are for SEIR epidemic models in which the latent period is shorter than the incubation period and asymptomatic infections are present. These procedures can be directly adapted to infections with a similar or simpler natural history. The modelling framework is flexible and can be further extended to include components for vaccination and pathogen genetic sequence data.
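A simplified sketch of the discrete-time "escape probability" idea described above, not the thesis's code: each day a susceptible escapes environmental exposure with probability q_env and each infectious household member with probability q_hh, so the daily escape probability is their product. Fixed latent and infectious periods and all parameter values are assumptions.

```python
import numpy as np

# Illustrative discrete-time household transmission model with two escape probabilities.
rng = np.random.default_rng(1)

def simulate_household(n, days, q_env, q_hh, latent=2, infectious=5):
    """Simulate one household of size n and return the number ever infected."""
    t_inf = np.full(n, -1)                     # day of infection, -1 = still susceptible
    for day in range(days):
        active = np.sum((t_inf >= 0) &
                        (day - t_inf > latent) &
                        (day - t_inf <= latent + infectious))   # infectious members today
        p_escape = q_env * q_hh ** active       # escape environment and all infectives
        for i in range(n):
            if t_inf[i] < 0 and rng.random() > p_escape:
                t_inf[i] = day
    return int(np.sum(t_inf >= 0))

final_sizes = [simulate_household(4, days=60, q_env=0.99, q_hh=0.95) for _ in range(1000)]
print("mean household final size:", np.mean(final_sizes))
```

Repeating such simulations while varying q_env and q_hh is the kind of experiment the thesis uses to study household final-size distributions under different transmission scenarios.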
  • Bozhko, Dmitrii V.; Galumov, Georgii K.; Polovian, Aleksandr I.; Kolchanova, Sofiia M.; Myrov, Vladislav O.; Stelmakh, Viktoriia A.; Schioth, Helgi B. (2021)
    Cerebral ("brain") organoids are high-fidelity in vitro cellular models of the developing brain, which makes them one of the go-to methods to study isolated processes of tissue organization and its electrophysiological properties, allowing to collect invaluable data for in silico modeling neurodevelopmental processes. Complex computer models of biological systems supplement in vivo and in vitro experimentation and allow researchers to look at things that no laboratory study has access to, due to either technological or ethical limitations. In this paper, we present the Biological Cellular Neural Network Modeling (BCNNM) framework designed for building dynamic spatial models of neural tissue organization and basic stimulus dynamics. The BCNNM uses a convenient predicate description of sequences of biochemical reactions and can be used to run complex models of multi-layer neural network formation from a single initial stem cell. It involves processes such as proliferation of precursor cells and their differentiation into mature cell types, cell migration, axon and dendritic tree formation, axon pathfinding and synaptogenesis. The experiment described in this article demonstrates a creation of an in silico cerebral organoid-like structure, constituted of up to 1 million cells, which differentiate and self-organize into an interconnected system with four layers, where the spatial arrangement of layers and cells are consistent with the values of analogous parameters obtained from research on living tissues. Our in silico organoid contains axons and millions of synapses within and between the layers, and it comprises neurons with high density of connections (more than 10). In sum, the BCNNM is an easy-to-use and powerful framework for simulations of neural tissue development that provides a convenient way to design a variety of tractable in silico experiments.
  • Pukkala, Timo; Kolström, Taneli (Suomen metsätieteellinen seura, 1987)
  • Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo; Vanni, Simo (2016)
    In the visual cortex, stimuli outside the classical receptive field (CRF) modulate the neural firing rate without driving the neuron by themselves. In the primary visual cortex (V1), such contextual modulation can be parametrized with an area summation function (ASF): increasing stimulus size causes first an increase and then a decrease of firing rate before reaching an asymptote. Earlier work has reported an increase in sparseness when CRF stimulation is extended to its surroundings. However, there has been no clear connection between the ASF and network efficiency. Here we aimed to investigate a possible link between the ASF and network efficiency. In this study, we simulated the responses of a biomimetic spiking neural network model of the visual cortex to a set of natural images. We varied the network parameters and compared the V1 excitatory neuron spike responses to the corresponding responses predicted from earlier single-neuron data from primate visual cortex. The network efficiency was quantified with firing rate (which has a direct association with neural energy consumption), entropy per spike and population sparseness. All three measures together provided a clear association between the network efficiency and the ASF. The association was clear when varying the horizontal connectivity within V1, which influenced both the efficiency and the distance to the ASF, DAS. Given the limitations of our biophysical model, this association is qualitative, but it nevertheless suggests that an ASF-like receptive field structure can cause an efficient population response.
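For readers unfamiliar with the shape of an area summation function, the sketch below uses a ratio-of-Gaussians parametrisation, a common choice for ASFs; it is not the paper's spiking network, and all parameter values (gains k_c, k_s and widths w_c, w_s) are assumed for illustration.

```python
import numpy as np
from scipy.special import erf

# Illustrative ratio-of-Gaussians area summation function: response rises with
# stimulus size, is then suppressed by the surround, and settles at an asymptote.
def asf(diameter, k_c=60.0, w_c=0.4, k_s=1.2, w_s=1.2):
    L_c = erf(diameter / w_c) ** 2         # summed drive of the centre mechanism
    L_s = erf(diameter / w_s) ** 2         # summed drive of the suppressive surround
    return k_c * L_c / (1.0 + k_s * L_s)

sizes = np.linspace(0.05, 4.0, 80)         # stimulus diameter in degrees
rates = asf(sizes)
print(f"peak at ~{sizes[np.argmax(rates)]:.2f} deg, "
      f"asymptotic rate ~{rates[-1]:.1f} (arbitrary units)")
```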
  • Spjuth, Ola; Karlsson, Andreas; Clements, Mark; Humphreys, Keith; Ivansson, Emma; Dowling, Jim; Eklund, Martin; Jauhiainen, Alexandra; Czene, Kamila; Gronberg, Henrik; Sparen, Par; Wiklund, Fredrik; Cheddad, Abbas; Palsdottir, Thorgerdur; Rantalainen, Mattias; Abrahamsson, Linda; Laure, Erwin; Litton, Jan-Eric; Palmgren, Juni (2017)
    Objective: We provide an e-Science perspective on the workflow from risk factor discovery and classification of disease to evaluation of personalized intervention programs. As case studies, we use personalized prostate and breast cancer screenings. Materials and Methods: We describe an e-Science initiative in Sweden, e-Science for Cancer Prevention and Control (eCPC), which supports biomarker discovery and offers decision support for personalized intervention strategies. The generic eCPC contribution is a workflow with 4 nodes applied iteratively, and the concept of e-Science signifies systematic use of tools from the mathematical, statistical, data, and computer sciences. Results: The eCPC workflow is illustrated through 2 case studies. For prostate cancer, an in-house personalized screening tool, the Stockholm-3 model (S3M), is presented as an alternative to prostate-specific antigen testing alone. S3M is evaluated in a trial setting and plans for rollout in the population are discussed. For breast cancer, new biomarkers based on breast density and molecular profiles are developed and the US multicenter Women Informed to Screen Depending on Measures (WISDOM) trial is referred to for evaluation. While current eCPC data management uses a traditional data warehouse model, we discuss eCPC-developed features of a coherent data integration platform. Discussion and Conclusion: E-Science tools are a key part of an evidence-based process for personalized medicine. This paper provides a structured workflow from data and models to evaluation of new personalized intervention strategies. The importance of multidisciplinary collaboration is emphasized. Importantly, the generic concepts of the suggested eCPC workflow are transferrable to other disease domains, although each disease will require tailored solutions.
  • Pursiainen, Tero (Helsingfors universitet, 2013)
    The long-run average return on equities shows a sizable premium with respect to their relatively riskless alternative, short-run government bonds. The dominant explanation is that the excess return is compensation for rare but severe consumption disasters which result in heavy losses on equities. This thesis studies the plausibility of this explanation in a common theoretical framework. The consumption disasters hypothesis is studied in the conventional Lucas-tree model with two assets and with constant relative risk aversion preferences, captured by the power utility function. The thesis argues that this oft-used model is unable to account for the high premium, and a simulation experiment is conducted to find evidence for the argument. The consumption process is modelled by a threshold autoregressive process, which offers a simple and powerful way to describe the equity premium as the result of a peso problem. Two statistics, the arithmetic average and the standard deviation, are used to estimate the long-run average and the volatility of the returns. The simulated data is analyzed and compared to real-world financial market data. The results confirm that the potential for consumption disasters produces a lower equity premium than the case without disasters in the Lucas-tree model with power utility. The disaster potential lowers the average return on equity instead of increasing it. This result comes from the reciprocal connection between the coefficient of relative risk aversion and the elasticity of intertemporal substitution, and from the special nature of the equity asset, which is a claim on the consumption process itself. The risk-free asset remains unaffected by the disaster potential. The equity premium remains a puzzle in this framework. The advantage of the threshold autoregressive consumption process is that it shows this result with clarity. Breaking the link between risk aversion and intertemporal substitution is one possible direction to take. Changing the assumptions about expected consumption or about the equity asset might offer another way forward. Another form of utility or another model is needed if the equity premium is to be explained in financial markets that are free of frictions.
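The sketch below only illustrates the pricing formulas that such a Lucas-tree experiment builds on, using i.i.d. gross consumption growth rather than the thesis's threshold autoregressive process; the discount factor, risk aversion, growth states and disaster probability are assumed values, and the sketch makes no claim about the direction of the thesis's result.

```python
import numpy as np

# Lucas-tree pricing with power utility and i.i.d. gross consumption growth.
beta, gamma = 0.98, 4.0                       # discount factor, relative risk aversion

def lucas_returns(growth_states, probs):
    g, p = np.asarray(growth_states), np.asarray(probs)
    m = beta * g ** (-gamma)                  # stochastic discount factor per state
    r_f = 1.0 / np.sum(p * m) - 1.0           # risk-free rate
    A = beta * np.sum(p * g ** (1.0 - gamma)) # constant in the price-dividend recursion
    v = A / (1.0 - A)                         # price-dividend ratio of the consumption claim
    r_e = np.sum(p * g) * (1.0 + v) / v - 1.0 # expected return on the consumption claim
    return r_e, r_f

# normal growth of 0% or 4%, plus a 1.7% chance of a 30% consumption disaster
r_e, r_f = lucas_returns([1.00, 1.04, 0.70], [0.4915, 0.4915, 0.017])
print(f"expected equity return {r_e:.3f}, risk-free rate {r_f:.3f}, premium {r_e - r_f:.3f}")
```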
  • Rönkkö, Niko-Petteri (Helsingin yliopisto, 2020)
    In this thesis, I analyze the causes and consequences of the Asian Crisis of 1997 and simulate it with Dynare. The model includes a financial accelerator mechanism, which in part explains the dynamics and the magnitude of the crisis via balance sheet effects. I find that the major components of the crisis were highly similar to those of other crises that have occurred in other emerging economies: high levels of foreign-currency denominated debt, unsound financial regulation, and fixed exchange rates with skewed valuation. Even though the simulation does not specifically incorporate different exchange rate regimes, the previous literature draws a clear conclusion that flexible exchange rates lessen a shock's effects on the economy. Thailand, as well as the other ASEAN countries during the crisis, faced severe economic contraction as well as changes in the political landscape: due to the crisis, Thailand's GDP contracted by over 10 percent, the country lost almost a million jobs, and the stock exchange index fell 75 percent. In addition, the country underwent riots, resignations of ministers, and several political changes towards more democratic institutions, even though it later faced some backlash and the re-entry of authoritarian figures. As the crisis worsened, the IMF assembled a large rescue package that was given to the ASEAN countries on the condition of austerity policies. The simulation with recalibrated parameter values appears relatively accurate: the dynamics and the impact of the crisis are captured realistically with correct magnitudes. The financial accelerator mechanism accounts for a large part of the shock's impact on investment and companies' net worth, but does not account for much of the overall decline in output.
  • Arnold, Brian; Sohail, Mashaal; Wadsworth, Crista; Corander, Jukka; Hanage, William P.; Sunyaev, Shamil; Grad, Yonatan H. (2020)
    Identifying genetic variation in bacteria that has been shaped by ecological differences remains an important challenge. For recombining bacteria, the sign and strength of linkage provide a unique lens into ongoing selection. We show that derived alleles
  • Nyberg, Henri (2018)
    This paper introduces a regime switching vector autoregressive model with time-varying regime probabilities, where the regime switching dynamics are described by an observable binary response variable predicted simultaneously with the variables subject to regime changes. Dependence on the observed binary variable distinguishes the model from various previously proposed multivariate regime switching models, facilitating a handy simulation-based multistep forecasting method. An empirical application shows a strong bidirectional predictive linkage between US interest rates and NBER business cycle recession and expansion periods. Due to the predictability of the business cycle regimes, the proposed model yields superior out-of-sample forecasts of the US short-term interest rate and the term spread compared with linear and nonlinear vector autoregressive (VAR) models, including the Markov switching VAR model.
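A minimal sketch of the simulation-based multistep forecasting idea described above, not the paper's estimated model: a two-regime VAR(1) whose regime indicator is an observable binary variable drawn each period from a probit probability that depends on the current VAR variables. All coefficient values below are made up for illustration.

```python
import numpy as np
from math import erf, sqrt

# Illustrative two-regime VAR(1) with an observable binary regime indicator.
rng = np.random.default_rng(0)
A = {0: np.array([[0.9, 0.1], [0.0, 0.8]]),
     1: np.array([[0.5, 0.3], [0.2, 0.4]])}        # VAR(1) coefficients per regime
c = {0: np.array([0.1, 0.05]), 1: np.array([-0.2, 0.1])}
chol = np.linalg.cholesky(np.array([[0.04, 0.01], [0.01, 0.09]]))
delta = np.array([0.5, -1.0, 2.0])                 # probit coefficients: constant, y1, y2

def forecast(y0, horizon, n_sims=5000):
    """Monte Carlo point forecasts, averaging over simulated regime paths."""
    paths = np.empty((n_sims, horizon, 2))
    for m in range(n_sims):
        y = y0.copy()
        for h in range(horizon):
            p1 = 0.5 * (1 + erf((delta[0] + delta[1:] @ y) / sqrt(2)))  # P(regime 1)
            s = int(rng.random() < p1)             # draw next-period regime
            y = c[s] + A[s] @ y + chol @ rng.standard_normal(2)
            paths[m, h] = y
    return paths.mean(axis=0)                      # point forecast for each horizon

print(forecast(np.array([1.0, 0.5]), horizon=4))
```

Averaging simulated paths in this way is what makes multistep forecasting straightforward even though the regime itself must be predicted along the way.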
  • Solberg, Birger (Suomen metsätieteellinen seura, 1986)
  • Pousi, Ilkka (Helsingfors universitet, 2014)
    The Finnish Forest Center produces forest resource data for the use of landowners and the actors in the forestry sector. The data is produced mainly by means of airborne laser scanning (ALS), and it is managed in the nationwide Aarni forest resource information system. The produced data also includes stand-specific proposals for harvesting and silvicultural treatment. These are usually generated by a simulation, which also provides a suggested year for the action. The collection of forest resource data is based on the area-based approach (ABA). In this method, forest characteristics, such as tree attributes measured on field sample plots, are predicted for the whole inventory area from the corresponding laser and aerial photograph features. Forest characteristics are predicted for grid cells 16 x 16 meters in size. In Aarni, the treatment simulation is based on the averages of the tree attributes generalized from the grid cells to the stand. The method does not account for possible within-stand variation in tree density, which may cause, for example, delayed thinning proposals especially for stands with grouped trees. The main aims of this study were: 1. To create a new method in which, in addition to the tree attributes, the subsequent treatment and its timing were simulated for the grid cells; after that, special decision rules were created to derive the treatment for the stand from its grid cells. 2. To compare the treatments derived with the decision rules against the normal Aarni simulation for 291 field-surveyed stands to determine which method is better. In addition, the relationship between within-stand variation and the timing of the simulated treatments was examined by comparing the deviation of the tree attributes of the grid cells (e.g. basal area) with the corresponding attributes of the stand. The presumption was that, particularly in stands with grouped trees, the problem of delayed thinning could be reduced by using decision rules. The results suggest that the decision-rule method gives slightly better results than the Aarni simulation in the timing of treatments. The method gave the best results in young stands where the field treatment proposal was first thinning. The deviation of the basal area of the grid cells appeared to be slightly larger than average in stands with a large variation in tree density. In these particular stands, the decision rules mostly derived a better timing for thinning than the normal Aarni simulation.
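A hypothetical sketch of what a grid-cell decision rule of the kind studied here could look like, not the Forest Center's or the thesis's actual rules: a thinning is proposed for the stand when a sufficient share of its 16 m x 16 m cells individually exceed a basal-area limit, even if the stand average stays below it. The limit and the required share of cells are assumed values.

```python
import numpy as np

# Hypothetical stand-level decision rule built on grid-cell predictions.
def stand_treatment(cell_basal_area, ba_limit=28.0, required_share=0.30):
    """Propose thinning if enough individual cells exceed the thinning limit."""
    over = np.asarray(cell_basal_area) > ba_limit
    return "thinning" if over.mean() >= required_share else "no treatment"

# stand with grouped trees: the stand mean (24.5 m2/ha) stays below the limit,
# but 40% of the cells exceed it, so the cell-based rule proposes thinning
cells = [35, 33, 31, 30, 18, 17, 16, 20, 19, 26]
print(np.mean(cells), stand_treatment(cells))
```

This is the situation the abstract points to: a stand-average simulation would delay the thinning proposal, while a rule that looks at the distribution of cells reacts to the dense tree groups.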
  • Kilkki, Pekka (Suomen metsätieteellinen seura, 1968)
  • Maconi, Goran; Kassamakov, Ivan; Vainikka, T.; Arstila, Timo; Haeggstrom, Edward (SPIE - the international society for optics and photonics, 2019)
    Proceedings of SPIE
    We simulate the image generated by a microsphere residing in contact on top of an exposed Blu-ray disk surface, when observed by a conventional microscope objective. While microsphere lenses have been used to focus light beyond the diffraction limit and to produce super-resolution images, the nature of the light-sample interaction is still under debate. Simulations in related articles predict the characteristics of the photonic nanojet (PNJ) formed by the microsphere, but so far, no data has been published on the image formation in the far-field. For our simulations, we use the open source package Angora and the commercial software RSoft FullWave. Both packages implement the Finite Difference Time Domain (FDTD) approach. Angora permits us to accurately simulate microscope imaging at the diffraction limit. The RSoft FullWave is able to record the steady-state complex electrical and magnetic fields for multiple wavelengths inside the simulation domain. A microsphere is simulated residing on top of a dielectric substrate featuring sub-wavelength surface features. The scattered light is recorded at the edges of the simulation domain and is then used in the near-field to far-field transformation. The light in the far field is then refocused using an idealized objective model, to give us the simulated microscope image. Comparisons between the simulated image and experimentally acquired microscope images verify the accuracy of our model, whereas the simulation data predicts the interaction between the PNJ and the imaged sample. This allows us to isolate and quantify the near-field patterns of light that enable super-resolution imaging, which is important when developing new micro-optical focusing structures.
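Both Angora and RSoft FullWave implement the finite difference time domain method mentioned above. As background only, the sketch below shows a minimal 1D FDTD (Yee) leapfrog update in normalized free-space units with a soft sinusoidal source; it is not the papers' 3D microsphere setup, and the grid size, source position and frequency are arbitrary.

```python
import numpy as np

# Minimal 1D FDTD leapfrog update in normalized units (illustrative only).
nx, nt = 400, 800
ez = np.zeros(nx)            # electric field samples
hy = np.zeros(nx - 1)        # magnetic field, staggered half a cell
courant = 0.5                # normalized time step; stable in 1D when <= 1

for t in range(nt):
    hy += courant * np.diff(ez)                 # update H from the spatial change in E
    ez[1:-1] += courant * np.diff(hy)           # update E from the spatial change in H
    ez[50] += np.sin(2 * np.pi * 0.02 * t)      # soft source injected at cell 50

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```

The production codes add 3D geometry, material models, absorbing boundaries and the near-field to far-field transformation on top of this same leapfrog core.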
  • Westling, Tatu (2006)
    This thesis examines the implications of local network externalities on market dynamics. The subjects of study are prices, welfare, segmentation of markets and firms' withdrawal from the market. Additionally, the effect of varying levels of firm rationality on these subjects is studied. The methodology is that of agent-based computational economics. A computer model has been written to simulate the markets. In the model, firms try to maximise their profits by searching for optimal pricing strategies. Since there are many firms involved, fictitious play had to be implemented to model interaction among firms. Consumers are located on a toroidal lattice and consume goods according to probabilistic rules. This interplay between consumers and firms gives rise to market dynamics, which are then analysed. Markets are quite balanced, with no monopolisation taking place. In the long run, oligopolistic markets prevail. Prices are inversely correlated with the number of firms in the market. Welfare depends on consumers' preference for the network externality: if it is strong, markets with fewer firms are utility enhancing; if it is moderate, an increasing number of firms yields higher welfare. Exit of firms from the market is fast at the beginning but slows afterwards. If consumers' preference for the local network externality is strong, there are fewer exits and more firms and goods in the market. Profits are negatively associated with firm rationality: the less rational the firms, the faster they exit the market. It seems that a model incorporating both profit-maximising firms and consumers subject to local network externalities has not been implemented before. The main finding of this thesis is that markets involving local network externalities are rather prosaic, and no monopolisation has been found. The stabilising factor seems to be the profit-taking behaviour of firms, which in fact sustains the competitive market structure. References: E. Kutschinski, T. Uthmann and D. Polani (2003): Learning competitive pricing strategies by multi-agent reinforcement learning. Journal of Economic Dynamics and Control 27:2207-2218. R. Cowan and J. Miller (1998): Technological standards with local externalities and decentralized behaviour. Journal of Evolutionary Economics 8:285-296.
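A minimal sketch of the consumer side of such a model, not the thesis's implementation: consumers on a toroidal lattice choose among firms with a logit probability that rises with price advantage and with the share of their four lattice neighbours already using the same good (the local network externality). Firms' fictitious-play pricing is omitted, and all parameter values are made up for illustration.

```python
import numpy as np

# Illustrative lattice consumers with a local network externality (fixed prices).
rng = np.random.default_rng(42)
size, n_firms, alpha, beta = 30, 4, 2.0, 3.0   # beta = strength of the externality
prices = np.array([1.0, 1.1, 0.9, 1.05])
choice = rng.integers(n_firms, size=(size, size))

def neighbour_share(choice, firm):
    """Share of the four von Neumann neighbours (on a torus) using the same firm."""
    same = (choice == firm).astype(float)
    return (np.roll(same, 1, 0) + np.roll(same, -1, 0) +
            np.roll(same, 1, 1) + np.roll(same, -1, 1)) / 4.0

for _ in range(50):                              # consumption rounds
    util = np.stack([-alpha * p + beta * neighbour_share(choice, f)
                     for f, p in enumerate(prices)])         # shape (n_firms, size, size)
    prob = np.exp(util) / np.exp(util).sum(axis=0)           # logit choice probabilities
    u = rng.random((size, size))
    choice = (prob.cumsum(axis=0) < u).sum(axis=0)           # sample new choices per cell

print("market shares:", np.bincount(choice.ravel(), minlength=n_firms) / size**2)
```

With a strong externality (large beta), locally clustered adoption emerges even when prices differ, which is the mechanism the thesis couples with profit-seeking firms.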