Browsing by Subject "probabilistic inference"

Now showing items 1-2 of 2
  • Olkkonen, Maria; Saarela, Toni P.; Allred, Sarah R. (2016)
    A key challenge for the visual system is to extract constant object properties from incoming sensory information. This information is ambiguous, because the same sensory signal can arise from many combinations of object properties and viewing conditions, and noisy, because of variability in sensory encoding. Competing accounts of perceptual constancy for surface lightness fall into two classes of model: one derives lightness estimates from border contrasts, and the other explicitly infers surface reflectance. To test these accounts, we combined a novel psychophysical task with probabilistic implementations of both models. Observers compared the lightness of two stimuli under a memory demand (a delay between the stimuli), a context change (different surround luminance), or both. Memory biased perceived lightness toward the mean of the whole stimulus ensemble. Context change caused the classical simultaneous lightness contrast effect, in which a target appears lighter against a dark surround and darker against a light surround. These effects were not independent: combined memory load and context change elicited a bias smaller than predicted by an independent combination of the two biases. Both models explain the memory bias as an effect of prior expectations on perception. Both models also produce a context effect, but only the reflectance model correctly describes its magnitude. Finally, the reflectance model captures the memory-context interaction better than the contrast model, both qualitatively and quantitatively. We conclude that (a) lightness perception is more consistent with reflectance inference than with contrast coding and (b) adding a memory demand to a perceptual task both renders it more ecologically valid and helps adjudicate between competing models. (A minimal sketch of the prior-bias mechanism appears after this listing.)
  • Aslay, Cigdem; Ciaperoni, Martino; Gionis, Aristides; Mathioudakis, Michael (2021)
    IEEE International Conference on Data Engineering
    Bayesian networks are general, well-studied probabilistic models that capture dependencies among a set of variables. Variable Elimination is a fundamental algorithm for probabilistic inference over Bayesian networks. In this paper, we propose a novel materialization method that can lead to significant efficiency gains when processing inference queries with the Variable Elimination algorithm. In particular, we address the problem of choosing a set of intermediate results to precompute and materialize so as to maximize the expected efficiency gain over a given query workload. For the problem we consider, we provide an optimal polynomial-time algorithm and discuss alternative methods. We validate our technique on real-world Bayesian networks. Our experimental results confirm that a modest amount of materialization can lead to significant improvements in query running time, with an average gain of 70% and gains of up to 99% for a uniform query workload. Moreover, in comparison with existing junction-tree methods that also rely on materialization, our approach achieves competitive inference efficiency using significantly lighter materialization. (A toy variable-elimination sketch with cached intermediate factors appears after this listing.)
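
The first abstract explains the memory bias as an effect of prior expectations on perception. The Python sketch below illustrates that mechanism with a conjugate Gaussian observer: a prior centered on the ensemble mean is combined with a noisy measurement, and adding memory noise over the delay pulls the estimate further toward the prior mean. The Gaussian forms and all numerical values are illustrative assumptions, not the models fitted in the paper.

    # Illustrative sketch only: a Gaussian-prior observer, not the paper's
    # fitted models. All parameter values are assumed for demonstration.

    def posterior_mean(observation, sensory_var, prior_mean, prior_var):
        # Conjugate Gaussian update: the posterior mean is a reliability-
        # weighted average of the observation and the prior mean.
        w = prior_var / (prior_var + sensory_var)
        return w * observation + (1.0 - w) * prior_mean

    ensemble_mean = 0.5   # assumed mean lightness of the stimulus ensemble
    target = 0.7          # assumed true target lightness
    encoding_var = 0.01   # assumed sensory noise at encoding
    delay_var = 0.04      # assumed extra noise accumulated over the delay
    prior_var = 0.05      # assumed width of the prior over lightness

    # Without a delay, the estimate stays close to the target.
    print(posterior_mean(target, encoding_var, ensemble_mean, prior_var))              # ~0.667
    # With a delay, noisier evidence lets the prior pull the estimate further
    # toward the ensemble mean: the memory bias described in the abstract.
    print(posterior_mean(target, encoding_var + delay_var, ensemble_mean, prior_var))  # ~0.600

Under these assumed values, the delayed estimate (0.600) sits closer to the ensemble mean than the immediate one (0.667), reproducing the qualitative bias the abstract reports.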
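The second abstract concerns reusing precomputed intermediate results during Variable Elimination. The toy Python sketch below runs variable elimination over a three-node chain network (A -> B -> C) with binary variables, using a plain dictionary to stand in for the materialized intermediate factors. The network, the probabilities, the factor representation, and the cache key are all illustrative assumptions; choosing which intermediate results to materialize optimally is exactly the problem the paper addresses, and is not solved here.

    # Illustrative sketch only: toy variable elimination with a dict caching
    # ("materializing") intermediate factors. Not the paper's implementation.
    from itertools import product

    class Factor:
        def __init__(self, variables, table):
            self.variables = tuple(variables)   # variable names, in order
            self.table = dict(table)            # {tuple of 0/1 values: probability}

    def multiply(f, g):
        # Pointwise product over the union of the two scopes.
        variables = tuple(dict.fromkeys(f.variables + g.variables))
        table = {}
        for vals in product((0, 1), repeat=len(variables)):
            assign = dict(zip(variables, vals))
            table[vals] = (f.table[tuple(assign[v] for v in f.variables)]
                           * g.table[tuple(assign[v] for v in g.variables)])
        return Factor(variables, table)

    def sum_out(f, var):
        # Marginalize var out of factor f.
        idx = f.variables.index(var)
        keep = f.variables[:idx] + f.variables[idx + 1:]
        table = {}
        for vals, p in f.table.items():
            key = vals[:idx] + vals[idx + 1:]
            table[key] = table.get(key, 0.0) + p
        return Factor(keep, table)

    def eliminate(factors, var, cache):
        # One elimination step; reuse a materialized result when available.
        touching = [f for f in factors if var in f.variables]
        rest = [f for f in factors if var not in f.variables]
        # Toy cache key (variable plus factor scopes); a real system would
        # also have to account for factor contents and observed evidence.
        key = (var, tuple(sorted(f.variables for f in touching)))
        if key not in cache:
            prod = touching[0]
            for f in touching[1:]:
                prod = multiply(prod, f)
            cache[key] = sum_out(prod, var)
        return rest + [cache[key]]

    # Toy chain network A -> B -> C with assumed probabilities.
    p_a = Factor(('A',), {(0,): 0.6, (1,): 0.4})
    p_b_given_a = Factor(('A', 'B'), {(0, 0): 0.7, (0, 1): 0.3,
                                      (1, 0): 0.2, (1, 1): 0.8})
    p_c_given_b = Factor(('B', 'C'), {(0, 0): 0.9, (0, 1): 0.1,
                                      (1, 0): 0.5, (1, 1): 0.5})

    cache = {}
    factors = [p_a, p_b_given_a, p_c_given_b]
    for var in ('A', 'B'):                   # elimination order
        factors = eliminate(factors, var, cache)
    print(factors[0].table)                  # marginal P(C), approx {(0,): 0.7, (1,): 0.3}
    # A later query whose elimination shares the ('A', ...) step would reuse the
    # cached factor instead of recomputing it: the saving materialization buys.

Each cache entry plays the role of one materialized intermediate result: a second query over the same network that eliminates A over the same factors skips that multiplication and summation entirely, which is the source of the running-time gains the abstract reports.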