AUTHOR=Lagorce Xavier, Ieng Sio-Hoi, Clady Xavier, Pfeiffer Michael, Benosman Ryad B. TITLE=Spatiotemporal features for asynchronous event-based data JOURNAL=Frontiers in Neuroscience VOLUME=9 YEAR=2015 URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2015.00046 DOI=10.3389/fnins.2015.00046 ISSN=1662-453X ABSTRACT=

Bio-inspired asynchronous event-based vision sensors are introducing a paradigm shift in visual information processing. These sensors rely on a stimulus-driven principle of light acquisition similar to that of biological retinas. They are event-driven and fully asynchronous, reducing redundancy and encoding the exact times of input signal changes, which yields very precise temporal resolution. Approaches to higher-level computer vision often rely on the reliable detection of features in visual frames, but comparable feature definitions for the dynamic, event-based visual representation produced by silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors: features that capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced, based on a set of predictive recurrent reservoir networks competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct, task-specific dynamic visual features, and can predict their trajectories over time.
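
Concretely, the architecture the abstract describes can be pictured as a bank of recurrent reservoir networks, each trying to predict the incoming event-driven input, with a winner-take-all rule granting the learning update to the best predictor. Below is a minimal sketch under stated assumptions: the `Event` fields, the binning into patches, the echo-state-style reservoir, the least-mean-squares readout update, and all sizes and rates are illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch of the described pipeline, assuming: (i) events in
# address-event representation (x, y, timestamp, polarity), (ii) echo-state
# reservoirs with an online least-mean-squares readout, and (iii) hard
# winner-take-all on one-step prediction error. All names and parameters
# here are illustrative, not the paper's own.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int      # pixel column
    y: int      # pixel row
    t: float    # timestamp (event sensors report microsecond-scale times)
    p: int      # polarity: +1 for a luminance increase, -1 for a decrease

def events_to_patch(events, size=8):
    """Bin a short slice of the event stream into a flattened,
    polarity-summed count image: a crude stand-in for a richer
    spatiotemporal encoding."""
    img = np.zeros((size, size))
    for e in events:
        img[e.y % size, e.x % size] += e.p
    return img.ravel()

class Reservoir:
    """Recurrent reservoir with a fixed random recurrent matrix and a
    linear readout trained online to predict the next input patch."""
    def __init__(self, n_in, n_res, rng, spectral_radius=0.9, lr=1e-3):
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale recurrent weights so their spectral radius is < 1
        # (the usual echo-state stability heuristic).
        self.w_res = w * (spectral_radius / max(abs(np.linalg.eigvals(w))))
        self.w_out = np.zeros((n_in, n_res))
        self.state = np.zeros(n_res)
        self.lr = lr

    def step(self, u):
        self.state = np.tanh(self.w_in @ u + self.w_res @ self.state)
        return self.w_out @ self.state   # prediction of the next input

    def learn(self, error):
        # Online least-mean-squares update of the readout weights only.
        self.w_out += self.lr * np.outer(error, self.state)

def wta_step(reservoirs, u_now, u_next):
    """Winner-take-all competition: every reservoir predicts the next
    input, but only the best predictor adapts."""
    predictions = [r.step(u_now) for r in reservoirs]
    errors = [u_next - p for p in predictions]
    winner = int(np.argmin([np.linalg.norm(e) for e in errors]))
    reservoirs[winner].learn(errors[winner])
    return winner

# Usage with synthetic events standing in for a sensor recording.
rng = np.random.default_rng(seed=0)
patch_side = 8
nets = [Reservoir(patch_side**2, 100, rng) for _ in range(4)]
stream = [Event(x=int(rng.integers(8)), y=int(rng.integers(8)),
                t=i * 1e-4, p=int(rng.choice([-1, 1])))
          for i in range(2000)]
patches = [events_to_patch(stream[i:i + 20], patch_side)
           for i in range(0, len(stream) - 20, 20)]
for u_now, u_next in zip(patches, patches[1:]):
    wta_step(nets, u_now, u_next)
```

Over many such steps, the hard winner-take-all forces the reservoirs to specialize: each one wins, and therefore adapts, only on the input dynamics it already predicts best, which is one plausible reading of how the networks in the architecture come to learn distinct, task-specific dynamic features.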