%A Grossberg, Stephen
%A Srinivasan, Karthik
%A Yazdanbakhsh, Arash
%D 2015
%J Frontiers in Psychology
%G English
%K stereopsis, depth perception, perceptual stability, predictive remapping, saccadic eye movements, object recognition, spatial attention, attentional shroud, gain fields, surface perception, category learning, retinotopic coordinates, spatiotopic coordinates
%R 10.3389/fpsyg.2014.01457
%8 2015-January-14
%9 Original Research
%+ Prof. Stephen Grossberg, Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA, steve@bu.edu
%! Binocular Fusion, Invariant Category Learning, and Predictive Remapping during Eye Movements
%T Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements
%U https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01457
%V 5
%0 JOURNAL ARTICLE
%@ 1664-1078
%X How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations.
The model explains data from single-neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation, and of the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.