Browsing by Subject "causal inference"


Now showing items 1-7 of 7
  • Kaikkonen, Laura; Parviainen, Tuuli; Rahikainen, Mika; Uusitalo, Laura; Lehikoinen, Annukka (2021)
    Human activities both depend upon and have consequences on the environment. Environmental risk assessment (ERA) is a process of estimating the probability and consequences of the adverse effects of human activities and other stressors on the environment. Bayesian networks (BNs) can synthesize different types of knowledge and explicitly account for the probabilities of different scenarios, therefore offering a useful tool for ERA. Their use in formal ERA practice has not been evaluated, however, despite their increasing popularity in environmental modeling. This paper reviews the use of BNs in ERA based on peer‐reviewed publications. Following a systematic mapping protocol, we identified studies in which BNs have been used in an environmental risk context and evaluated the scope, technical aspects, and use of the models and their results. The review shows that BNs have been applied in ERA, particularly in recent years, and that there is room to develop both the model implementation and participatory modeling practices. Based on this review and the authors’ experience, we outline general guidelines and development ideas for using BNs in ERA. Integr Environ Assess Manag 2021;17:62–78. © 2020 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals LLC on behalf of Society of Environmental Toxicology & Chemistry (SETAC)
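The core BN computation the abstract refers to — combining conditional probability tables to get the probability of an adverse outcome — can be sketched by full enumeration over a tiny discrete network. The scenario, variable names, and probabilities below are invented for illustration and are not taken from the review.

```python
# Minimal discrete Bayesian network for a hypothetical ERA scenario.
# Structure: Shipping -> OilSpill <- Storm, OilSpill -> HabitatDamage.
# All variables and probabilities are invented for illustration.

P_SHIPPING = {True: 0.3, False: 0.7}   # heavy shipping traffic present?
P_STORM = {True: 0.1, False: 0.9}      # severe storm occurs?

# P(OilSpill = True | Shipping, Storm)
P_SPILL = {
    (True, True): 0.20, (True, False): 0.05,
    (False, True): 0.02, (False, False): 0.001,
}

# P(HabitatDamage = True | OilSpill)
P_DAMAGE = {True: 0.8, False: 0.01}

def p_habitat_damage():
    """Marginal P(HabitatDamage) by full enumeration over the network."""
    total = 0.0
    for ship in (True, False):
        for storm in (True, False):
            p_spill_true = P_SPILL[(ship, storm)]
            for spill in (True, False):
                p_spill = p_spill_true if spill else 1 - p_spill_true
                total += P_SHIPPING[ship] * P_STORM[storm] * p_spill * P_DAMAGE[spill]
    return total

print(round(p_habitat_damage(), 4))  # → 0.027
```

In real ERA models, dedicated BN software would replace this enumeration with efficient inference, but the probabilistic bookkeeping is the same.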
  • Kaikkonen, Laura; Parviainen, Tuuli; Rahikainen, Mika; Uusitalo, Laura; Lehikoinen, Annukka (Wiley Periodicals LLC / Society of Environmental Toxicology & Chemistry (SETAC), 2020)
    Integrated Environmental Assessment and Management 17: 1
  • Roberts, Sean G.; Killin, Anton; Deb, Angarika; Sheard, Catherine; Greenhill, Simon J.; Sinnemäki, Kaius; Segovia Martín, José; Nölle, Jonas; Berdicevskis, Aleksandrs; Humphreys-Balkwill, Archie; Little, Hannah; Opie, Kit; Jacques, Guillaume; Bromham, Lindell; Tinits, Peeter; Ross, Robert M.; Lee, Sean; Gasser, Emily; Calladine, Jasmine; Spike, Matthew; Mann, Stephen; Shcherbakova, Olena; Singer, Ruth; Zhang, Shuya; Benítez-Burraco, Antonio; Kliesch, Christian; Thomas-Colquhoun, Ewan; Skirgård, Hedvig; Tamariz, Monica; Passmore, Sam; Pellard, Thomas; Jordan, Fiona (2020)
    Language is one of the most complex of human traits. There are many hypotheses about how it originated, what factors shaped its diversity, and what ongoing processes drive how it changes. We present the Causal Hypotheses in Evolutionary Linguistics Database (CHIELD, https://chield.excd.org/), a tool for expressing, exploring, and evaluating hypotheses. It allows researchers to integrate multiple theories into a coherent narrative, helping to design future research. We present design goals, a formal specification, and an implementation for this database. Source code is freely available for other fields to take advantage of this tool. Some initial results are presented, including identifying conflicts in theories about gossip and ritual, comparing hypotheses relating population size and morphological complexity, and an author relation network.
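One use the abstract mentions — identifying conflicts among hypotheses — can be sketched by storing causal claims as signed cause→effect links and scanning for pairs with opposite claimed effects, in the spirit of CHIELD. The records and relation labels below are invented examples, not entries from the database.

```python
# Hypothetical sketch of storing causal hypotheses as signed
# cause -> effect links and scanning them for conflicts, in the
# spirit of CHIELD. The records below are invented examples.

hypotheses = [
    ("gossip", "social bonding", "increases"),
    ("gossip", "social bonding", "decreases"),
    ("population size", "morphological complexity", "decreases"),
]

def find_conflicts(records):
    """Return (cause, effect) pairs claimed to have opposite effects."""
    seen = {}
    conflicts = set()
    for cause, effect, relation in records:
        key = (cause, effect)
        if key in seen and seen[key] != relation:
            conflicts.add(key)
        seen[key] = relation
    return conflicts

print(find_conflicts(hypotheses))  # → {('gossip', 'social bonding')}
```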
  • Nurmi, Miska Juhani (Helsingin yliopisto, 2021)
    Objectives: The purpose of this thesis is to consider what the cognitive models of online causal learning are and what they can offer the interactive AI approach. Here, an interactive AI system is one that focuses on understanding and collaborating with a human user and can therefore benefit from cognitive models. A general overview of the models is given by replicating some of the computational results of Bramley et al. (2017), which explored cognitive models for online causal learning. That paper contained four models of how people might update their causal beliefs and five models of how people might choose where to place their tests, also known as interventions. The thesis also discusses the implications of the replicated models for interactive AI, both by considering how these models could be extended into the interactive AI framework and by considering a simple AI-based system that could make use of such models.
    Replication: The replication was done by reimplementing the original models of Bramley et al. in R and reproducing the corresponding figures. Of the four causal belief-updating models, two were successfully replicated so that the results corresponded to the original paper. It is not certain why the other two could not be replicated, and the task is left open for future work. Of the five intervention-choice models, four were implemented and three were successfully replicated. One model came very close to the original results, but the thesis could not conclude whether it fully reproduces them.
    Implications: The simple AI model proposed in the thesis performed poorly, but it showed that, in theory, an interactive AI system incorporating such a model might become feasible with further development. Some recommendations were made for better extending the replicated models into the interactive AI framework. The main recommendations were that a better model of how people choose where to focus their local attention is needed, and that the models should be verified to approximate human behaviour in larger graphs as well.
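The belief-updating side of such models can be sketched as Bayesian updating over candidate causal structures given intervention outcomes. The two-variable setting, link strength, and background rate below are invented for illustration and are not the models of Bramley et al.

```python
# A toy sketch of online causal belief updating: keep a posterior over
# candidate structures and update it after each intervention. The
# two-variable setting, link strength, and background rate are invented.

STRENGTH = 0.8     # P(effect fires | its cause is active)
BACKGROUND = 0.1   # P(effect fires spontaneously)

def likelihood(structure, b_observed):
    """P(observing B | structure) after an intervention forcing A on.

    Under "B->A" the intervention on A cuts B's influence on A, and A
    has no link to B, so B can only fire at the background rate."""
    if structure == "A->B":
        p_b = STRENGTH + (1 - STRENGTH) * BACKGROUND  # noisy-OR of link + background
    else:
        p_b = BACKGROUND
    return p_b if b_observed else 1 - p_b

posterior = {"A->B": 0.5, "B->A": 0.5}
for b_observed in (True, True, False):   # B's state after three interventions on A
    unnorm = {s: p * likelihood(s, b_observed) for s, p in posterior.items()}
    z = sum(unnorm.values())
    posterior = {s: p / z for s, p in unnorm.items()}

print({s: round(p, 3) for s, p in posterior.items()})
```

After two activations and one non-activation of B, the posterior favours "A->B", since B fired far more often than its background rate alone would predict.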
  • Ylikoski, Petri Kullervo (Routledge, 2017)
    Routledge Handbooks in Philosophy
    Social mechanisms and mechanism-based explanation have attracted considerable attention in the social sciences and the philosophy of science during the past two decades. The idea of mechanistic explanation has proved to be a useful tool for criticizing existing research practices and meta-theoretical views on the nature of the social-scientific enterprise. Many definitions of social mechanisms have been articulated, and have been used to support a wide variety of methodological and theoretical claims. It is impossible to cover all of these in one chapter, so I will merely highlight some of the most prominent and philosophically interesting ideas.
  • Schleicher, Judith; Eklund, Johanna; Barnes, Megan D.; Geldmann, Jonas; Oldekop, Johan A.; Jones, Julia P.G. (2020)
    The awareness of the need for robust impact evaluations in conservation is growing, and statistical matching techniques are increasingly being used to assess the impacts of conservation interventions. Used appropriately, matching approaches are powerful tools, but they also pose potential pitfalls. We outlined important considerations and best practices for using matching in conservation science. We identified 3 steps in a matching analysis. First, develop a clear theory of change that informs the selection of treatment and controls and accounts for real-world complexities and potential spillover effects. Second, select the appropriate covariates and matching approach. Third, assess the quality of the matching by carrying out a series of checks. The second and third steps can be repeated and should be finalized before outcomes are explored. Future conservation impact evaluations could be improved by increased planning of evaluations alongside the intervention, better integration of qualitative methods, consideration of spillover effects at larger spatial scales, and more publication of preanalysis plans. Implementing these improvements will require more serious engagement of conservation scientists, practitioners, and funders to mainstream robust impact evaluations into conservation. We hope this article will improve the quality of evaluations and help direct future research to continue to improve the approaches on offer.
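Steps 2 and 3 of the analysis described above (match, then check quality) can be sketched with a single covariate, nearest-neighbour matching without replacement, and a standardized-mean-difference balance check. The simulated data and group sizes are purely illustrative.

```python
import random

# Sketch of steps 2-3: nearest-neighbour matching on one observed
# covariate, then a standardized-mean-difference balance check.
# The data are simulated and purely illustrative.

random.seed(0)
treated = [random.gauss(1.0, 1.0) for _ in range(20)]    # covariate values
controls = [random.gauss(0.0, 1.0) for _ in range(200)]

def nearest_neighbour_match(treated, pool):
    """Pair each treated unit with its closest unused control."""
    pool = list(pool)
    matched = []
    for x in treated:
        best = min(pool, key=lambda c: abs(c - x))
        matched.append(best)
        pool.remove(best)
    return matched

def smd(a, b):
    """Standardized mean difference, a common balance diagnostic."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var = lambda v, m: sum((x - m) ** 2 for x in v) / (len(v) - 1)
    pooled_sd = ((var(a, mean_a) + var(b, mean_b)) / 2) ** 0.5
    return (mean_a - mean_b) / pooled_sd

matched = nearest_neighbour_match(treated, controls)
print(abs(smd(treated, controls)))  # imbalance before matching
print(abs(smd(treated, matched)))   # should be much smaller after matching
```

A common rule of thumb treats an absolute SMD below 0.1 as acceptable balance; as the article stresses, this check should be finalized before any outcomes are examined.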
  • Kolar, Ana; Steiner, Peter M. (2021)
    Propensity score methods provide data preprocessing tools to remove selection bias and attain statistically comparable groups – the first requirement when attempting to estimate causal effects with observational data. Although guidelines exist on how to remove selection bias when the groups being compared are large, little is known about how to proceed when one of the groups, for example the treated group, is particularly small, or when the study includes many observed covariates relative to the treated group’s sample size. This article investigates whether propensity score methods can help remove selection bias in studies with small treated groups and a large number of observed covariates. We perform a series of simulation studies examining factors such as the sample size ratio of control to treated units, the number of observed covariates, and the initial imbalance in observed covariates between the groups being compared, that is, selection bias. The results demonstrate that selection bias can be removed with small treated samples, but under different conditions than in studies with large treated samples. For example, a study design with 10 observed covariates and eight treated units will require the control group to be at least 10 times larger than the treated group, whereas a study with 500 treated units will require a control group only twice as large. To confirm the usefulness of the simulation results for practice, we carry out an empirical evaluation with real data. The study provides insights for practice and directions for future research.
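The basic mechanics of propensity-score matching with a small treated group can be sketched as follows: fit P(treated | covariates) with a hand-rolled logistic regression, then match each treated unit to the control with the closest score. The data, model, and group sizes below are invented and do not reproduce the article's simulations; matching is done with replacement for brevity.

```python
import math
import random

# Hypothetical sketch: estimate propensity scores with a hand-rolled
# logistic regression, then match each treated unit to the control
# with the closest score (with replacement, for brevity).
# Data, model, and group sizes are invented.

random.seed(1)

def simulate(n, shift):
    """Units with two covariates; treated units' covariates are shifted."""
    return [[random.gauss(shift, 1.0), random.gauss(shift, 1.0)] for _ in range(n)]

treated_x = simulate(8, 0.8)     # small treated group
control_x = simulate(80, 0.0)    # control pool 10x larger

X = treated_x + control_x
y = [1] * len(treated_x) + [0] * len(control_x)

def predict(w, xi):
    """Logistic model: P(treated = 1 | covariates xi)."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, epochs=300):
    """Plain stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + one slope per covariate
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = predict(w, xi)
            w[0] += lr * (yi - p)
            for j, xj in enumerate(xi):
                w[j + 1] += lr * (yi - p) * xj
    return w

w = fit_logistic(X, y)
control_scores = [predict(w, xi) for xi in control_x]
matches = [min(range(len(control_x)),
               key=lambda i: abs(control_scores[i] - predict(w, xi)))
           for xi in treated_x]
print(matches)  # index of the matched control for each treated unit
```

In practice an established implementation would replace the hand-rolled model, matching would typically be done without replacement, and covariate balance would be checked afterwards, in line with the conditions the article investigates.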