%A Hannagan, Thomas
%A Magnuson, James
%A Grainger, Jonathan
%D 2013
%J Frontiers in Psychology
%G English
%K spoken word recognition, time-invariance, TRACE model, symmetry networks, string kernels
%R 10.3389/fpsyg.2013.00563
%8 2013-September-02
%9 Original Research
%+ Dr Thomas Hannagan, Aix-Marseille University/CNRS, Laboratoire de Psychologie Cognitive, Marseille, France, thom.hannagan@gmail.com
%! Spoken word recognition without a TRACE
%T Spoken word recognition without a TRACE
%U https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00563
%V 4
%0 JOURNAL ARTICLE
%@ 1664-1078
%X How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE.
This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time-invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power.
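To illustrate the string-kernel idea the abstract refers to, the sketch below maps a phoneme sequence to a bag of diphones (adjacent phoneme pairs), discarding absolute temporal position, and measures word similarity as a dot product of the resulting count vectors. This is a minimal, hypothetical simplification for intuition only: the model described in the paper may also use non-adjacent (open) diphones, and the phoneme transcriptions and function names here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a diphone string kernel (NOT the paper's exact model):
# a word is represented by time-invariant counts of adjacent phoneme pairs.
from collections import Counter

def diphone_vector(phonemes):
    """Map a phoneme sequence to counts of adjacent diphones,
    e.g. [k, ae, t] -> {(k, ae): 1, (ae, t): 1}."""
    return Counter(zip(phonemes, phonemes[1:]))

def kernel(word_a, word_b):
    """String-kernel similarity: dot product of the two diphone count vectors."""
    va, vb = diphone_vector(word_a), diphone_vector(word_b)
    return sum(va[d] * vb[d] for d in va)

# "cat" /k ae t/ vs. "cap" /k ae p/ share one diphone, (k, ae):
print(kernel(["k", "ae", "t"], ["k", "ae", "p"]))  # → 1
```

Because the representation records only which diphones occur, not when, the same units serve a word wherever it begins in the speech stream, which is the source of the unit-and-connection savings the abstract describes.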