On Model Selection, Bayesian Networks, and the Fisher Information Integral





Citation

Zou, Y. & Roos, T. 2015, 'On Model Selection, Bayesian Networks, and the Fisher Information Integral', in J. Suzuki & M. Ueno (eds), Advanced Methodologies for Bayesian Networks: Second International Workshop, AMBN 2015, Yokohama, Japan, November 16-18, 2015, Proceedings, Lecture Notes in Computer Science, vol. 9505 (Lecture Notes in Artificial Intelligence), Springer International Publishing AG, Cham, pp. 122-135. https://doi.org/10.1007/978-3-319-28379-1

Title: On Model Selection, Bayesian Networks, and the Fisher Information Integral
Author: Zou, Yuan; Roos, Teemu Teppo
Editor: Suzuki, Joe; Ueno, Maomi
Contributor: University of Helsinki, Helsinki Institute for Information Technology
University of Helsinki, Department of Computer Science
Publisher: Springer International Publishing AG
Date: 2015
Language: eng
Number of pages: 14
Belongs to series: Advanced Methodologies for Bayesian Networks: Second International Workshop, AMBN 2015, Yokohama, Japan, November 16-18, 2015, Proceedings
Belongs to series: Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence
ISBN: 978-3-319-28378-4; 978-3-319-28379-1
URI: http://hdl.handle.net/10138/158484
Abstract: We study BIC-like model selection criteria and, in particular, their refinements that include a constant term involving the Fisher information matrix. We observe that for complex Bayesian network models, the constant term is a negative number with a very large absolute value that dominates the other terms for small and moderate sample sizes. We show that including the constant term degrades model selection accuracy dramatically compared to the standard BIC criterion, where the term is omitted. On the other hand, we demonstrate that exact formulas such as Bayes factors or the normalized maximum likelihood (NML), or their approximations that are not based on Taylor expansions, perform well. The conclusion is that, in the absence of an exact formula, one should use either BIC, which is a very rough approximation, or a very close approximation, but not an approximation that is truncated after the constant term.
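For orientation, the criteria contrasted in the abstract can be sketched in standard notation, following Rissanen's well-known asymptotic expansion of the NML model code; the symbols below are generic conventions, not notation taken from the paper itself. BIC keeps only the O(log n) penalty, while the refined criterion retains the next, constant-order term containing the Fisher information integral:

```latex
% BIC: maximized log-likelihood minus the O(log n) penalty,
% with k parameters and sample size n
\mathrm{BIC} \;=\; \log P\!\left(x^n \mid \hat\theta(x^n)\right) \;-\; \frac{k}{2}\log n

% Refined criterion: the asymptotic expansion kept up to the constant term,
% where I(\theta) is the Fisher information matrix
\log P\!\left(x^n \mid \hat\theta(x^n)\right)
  \;-\; \frac{k}{2}\log\frac{n}{2\pi}
  \;-\; \log \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta \;+\; o(1)
```

The quantity \(\int_{\Theta} \sqrt{\det I(\theta)}\, d\theta\) is the Fisher information integral of the title; when its logarithm has a very large absolute value, as the abstract reports for complex Bayesian networks, the constant term dominates the criterion at small and moderate n, which is the degradation the abstract describes.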
Subject: 113 Computer and information sciences
BIC
NML
Bayesian networks
Fisher information integral
112 Statistics and probability


Files in this item


Files: ambn2015v2.pdf (334.3 KB, PDF)

