Browsing by Subject "PERIPHERAL-VISION"


Now showing items 1-2 of 2
  • Pekkanen, Jami Joonas Olavi; Lappi, Otto; Rinkkala, Paavo; Tuhkanen, Niko Samuel; Frantsi, Roosa; Summala, Kari Heikki Ilmari (2018)
    We present a computational model of intermittent visual sampling and locomotor control in a simple yet representative task of a car driver following another vehicle. The model has a number of features that take it beyond the current state of the art in modelling natural tasks, and driving in particular. First, unlike most control theoretical models in vision science and engineering—where control is directly based on observable (optical) variables—actions are based on a temporally enduring internal representation. Second, unlike the more sophisticated engineering driver models based on internal representations, our model explicitly aims to be psychologically plausible, in particular in modelling perceptual processes and their limitations. Third, unlike most psychological models, it is implemented as an actual simulation model capable of full task performance (visual sampling and longitudinal control). The model is developed and validated using a dataset from a simplified car-following experiment (N = 40, in both three-dimensional virtual reality and a real instrumented vehicle). The results replicate our previously reported connection between time headway and visual attention. The model reproduces this connection and predicts that it emerges from control of action uncertainty. Implications for traffic psychological models and future developments for psychologically plausible yet computationally rigorous models of full natural task performance are discussed.
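The key idea above — acting on a temporally enduring internal representation that is refreshed only by intermittent visual samples — can be illustrated with a toy car-following loop. This is a minimal sketch, not the authors' model: all gains, intervals, and vehicle parameters are made up, and the controller is a plain proportional–derivative rule on the estimated gap.

```python
def follow(steps=200, dt=0.1, sample_interval=10):
    """Toy car-following loop with intermittent visual sampling.

    The follower refreshes its internal gap estimate only every
    `sample_interval` time steps; between samples it extrapolates
    the estimate using the closing rate inferred from the last two
    samples. Parameters are illustrative, not from the paper.
    """
    lead_pos, lead_speed = 50.0, 10.0        # lead vehicle state
    pos, speed = 0.0, 10.0                   # follower state
    target_gap = 20.0                        # desired headway (m)
    est_gap = last_sampled_gap = lead_pos - pos
    est_closing = 0.0                        # estimated d(gap)/dt
    gaps = []

    for t in range(steps):
        lead_pos += lead_speed * dt          # lead drives at constant speed

        if t % sample_interval == 0:         # a "visual sample": look at the lead car
            true_gap = lead_pos - pos
            if t > 0:
                est_closing = (true_gap - last_sampled_gap) / (sample_interval * dt)
            last_sampled_gap = true_gap
            est_gap = true_gap               # reset the internal representation
        else:                                # between samples: extrapolate internally
            est_gap += est_closing * dt

        # Proportional-derivative control on the *estimated* gap
        accel = 0.3 * (est_gap - target_gap) + 0.8 * est_closing
        speed = max(0.0, speed + accel * dt)
        pos += speed * dt
        gaps.append(lead_pos - pos)

    return gaps
```

Even with a refresh only once per second (here, every 10 steps of 0.1 s), the extrapolated estimate keeps the control loop stable, and the true gap settles toward the target headway.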
  • Kilpeläinen, Markku; Georgeson, Mark A. (2018)
    The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the locations of such large objects, as visual processing is, at both the neural and perceptual levels, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square’s edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradients at the square’s edges.
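The conclusion — that saccade targets for large squares are derived from the points of maximum luminance gradient at the edges — can be sketched numerically. This is an illustrative reconstruction only: the 1-D luminance profile, the sigmoidal edge shape, and the blur width are assumptions, not the stimuli used in the study.

```python
import numpy as np

# Illustrative 1-D luminance profile of a blurred bright square on a
# dark background; x gives visual-field position in degrees (made up).
x = np.linspace(-5.0, 5.0, 1001)             # 0.01 deg sampling
edge1, edge2, blur = -2.0, 2.0, 0.3          # square edges at +/- 2 deg

# Smooth sigmoidal edges as a stand-in for blurred luminance borders.
luminance = (1.0 / (1.0 + np.exp(-(x - edge1) / blur))
             - 1.0 / (1.0 + np.exp(-(x - edge2) / blur)))

grad = np.gradient(luminance, x)             # luminance gradient
left_edge = x[np.argmax(grad)]               # steepest rising edge
right_edge = x[np.argmin(grad)]              # steepest falling edge
target = (left_edge + right_edge) / 2.0      # predicted saccade endpoint
```

Shifting the points of maximum gradient (e.g. by changing `blur` asymmetrically or manipulating the edge profiles) shifts `target` accordingly, which is the pattern the saccade endpoints followed in the experiment.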