MINI REVIEW article

Front. Psychol., 02 October 2014
Sec. Auditory Cognitive Neuroscience

Did you hear that? The role of stimulus similarity and uncertainty in auditory change deafness

Kelly Dickerson* and J. R. Gaston

  • Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen Proving Ground, MD, USA

Change deafness, the auditory analog to change blindness, occurs when salient and behaviorally relevant changes to sound sources are missed. Missing significant changes in the environment can have serious consequences; however, this effect has remained little more than a lab phenomenon and a party trick. Only recently have researchers begun to explore the nature of these profound errors in change perception. Despite a wealth of examples of the change blindness phenomenon, work on change deafness remains fairly limited. The purpose of the current paper is to review the state of the literature on change deafness and to propose an explanation of change deafness that relies on factors related to stimulus information rather than attentional or memory limits. To achieve this, work across several auditory research domains, including environmental sound classification, informational masking, and change deafness, is synthesized to present a unified perspective on change perception errors in complex, dynamic sound environments. We hope to extend previous research by describing how it may be possible to predict specific patterns of change perception errors based on varying degrees of similarity in stimulus features and uncertainty about which stimuli and features are important for a given perceptual decision.

Everyday listening environments are complex, containing many simultaneously occurring sounds that can vary significantly along a wide range of dimensions. Despite substantial, naturally occurring variability, listeners are typically able to extract source-relevant information to detect, localize, and identify meaningful sound source changes in their environment. These processes can be considered together under the function of auditory scene analysis, in which individual sound features are segmented or bound into coherent sound “objects.” An important outcome of a successful auditory scene analysis is the ability to notice behaviorally meaningful changes in complex environments (Bregman, 1994; Snyder et al., 2012).

When these processes fail, the consequences can be disastrous. Listeners may not notice their own cellular ring tone, or may not hear passing vehicles. Take, for example, a busy construction site, where numerous competing sound sources can overlap with varying degrees of sensory/perceptual and semantic-level similarity. Additionally, variability in the number and spatial distribution of sound sources can lead to uncertainty about which sound sources are important. If the sound of a truck backing up goes unnoticed because of the presence of other similar sound sources (e.g., loaders, generators, other trucks), serious injury may occur. For this reason, backup alarms were added to construction vehicles to help distinguish them from other workplace sounds. Still, accidents can occur even when the alarm is very distinct (e.g., Vaillancourt et al., 2013). This is a real-world example of the laboratory phenomenon of change deafness, in which behaviorally meaningful changes within complex auditory scenes are often missed (Gregg and Samuel, 2008). This phenomenon is the auditory analog to visual change blindness (i.e., failures to notice large changes in a visual scene; e.g., Simons and Rensink, 2005).

The first study to document change deafness used a classic dichotic listening task; independent streams of speech were presented to each of a listener's ears, and the listener had to selectively attend to one stream (Cherry, 1953). Across a number of conditions, the stream in the unattended ear was altered, and listeners were asked questions about the nature of the changes. Global changes, such as a change from speech to a tone or from a male to a female voice, were always detected. However, listeners failed to notice local changes (e.g., a change in language from English to German). Contemporary examples of change deafness for speech (Vitevitch, 2003; Sinnett et al., 2006; Fenn et al., 2011), music (Agres and Krumhansl, 2008), and environmental sounds (Eramudugolla et al., 2005; Pavani and Turatto, 2008; McAnally et al., 2010; Gregg and Snyder, 2012) are emerging; however, the literature remains fairly limited. Recent change deafness studies have implemented the one-shot paradigm, similar in spirit to the flicker paradigm used in change blindness studies, which is essentially a same-different task. In this task, a scene is presented for a short duration; then, after a brief inter-stimulus interval (ISI), a second scene is presented. The second scene is either the same as the first or contains a source change, and listeners are asked whether a change occurred across the two presentations. Change deafness is robust in this paradigm, with error rates of around 30% reported across a number of studies (e.g., Gregg and Samuel, 2008; Snyder and Gregg, 2011; Vitevitch and Donoso, 2011; Backer and Alain, 2012).
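
To make the structure of the one-shot task concrete, the following minimal sketch (purely illustrative, not code from any of the cited studies) generates same/change trial pairs and scores a simulated listener whose miss rate is fixed at the roughly 30% level described above. The sound labels, scene size, and response model are assumptions chosen for illustration only.

```python
import random

# Minimal sketch of a one-shot change-detection trial (hypothetical illustration).
# Each "scene" is a set of concurrent sound-source labels; on "change" trials one
# source in scene 2 is swapped for a new source drawn from the larger sound set.

SOUND_SET = ["dog bark", "phone ring", "hammer", "engine", "door slam", "siren",
             "typing", "glass clink", "footsteps", "water pour"]

def make_trial(scene_size=4, change=True):
    scene1 = random.sample(SOUND_SET, scene_size)
    scene2 = list(scene1)
    if change:
        replaced = random.randrange(scene_size)
        new_source = random.choice([s for s in SOUND_SET if s not in scene1])
        scene2[replaced] = new_source
    return scene1, scene2, change

def simulate_listener(change_present, miss_rate=0.3, false_alarm_rate=0.1):
    # Crude stand-in for a listener: misses ~30% of real changes (change deafness)
    # and occasionally reports a change that did not occur.
    if change_present:
        return random.random() > miss_rate
    return random.random() < false_alarm_rate

trials = [make_trial(change=(t % 2 == 0)) for t in range(200)]
responses = [simulate_listener(change) for _, _, change in trials]
errors = sum(resp != change for (_, _, change), resp in zip(trials, responses))
print(f"Overall error rate: {errors / len(trials):.2f}")
```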

Factors Mediating Change Deafness

Studies of change deafness have traveled the well-worn path set forth by researchers examining change blindness. Spatial separation, delay interval, scene size, category membership, pre- and post-change cueing, and familiarity all influence the likelihood of change perception errors in both the visual (e.g., Simons and Rensink, 2005) and auditory modalities (e.g., Snyder and Gregg, 2011). While such a broad and extensive map of the factors that influence performance is helpful, it does little to inform the discussion of the potential underlying mechanisms. To uncover the mechanisms responsible for errors in auditory change perception, a unifying construct, within which factors such as cueing, familiarity, category, and spatiotemporal effects would fit, would be beneficial. In the following sections, we discuss how stimulus similarity and uncertainty relate to the patterns of errors observed in change deafness and other related auditory phenomena, and how these factors together may explain the nature of perceptual errors in complex environments.

Similarity

Similarity, the likeness or featural overlap between sources, exerts a strong influence in a number of auditory tasks, from low-level sensory-perceptual to higher-level cognitive (see Leech et al., 2009). In traditional psychophysical tasks, stimuli vary along a single dimension, so defining similarity among items is a relatively straightforward operation. For complex sounds, such as everyday environmental sounds, the definition of similarity requires an account of variability across multiple dimensions, making the definitional problem vastly more difficult. One approach to defining similarity that has been adopted in environmental sound research (e.g., Ballas, 1993; Gygi et al., 2007) has been to generate estimates of perceptual space based on subjective similarity ratings. Typical estimates are based on multidimensional scaling (MDS) analyses, in which the degree of feature overlap among sources is represented as a spatial map (Young and Hamer, 1987) that can be used to quantify similarity among stimuli. This approach is sufficiently flexible to quantify similarity across various dimensions (whether well-defined or not), depending on specifically what is asked of a listener (more on MDS estimates in a later section). Similarity characterized in this way is well-suited to studies of change deafness, as it enables examination of specific subsets of sensory/perceptual and semantic-level factors. Gregg and Samuel (2008, 2009) demonstrated that both sensory/perceptual and semantic-level similarity are linked to the magnitude of change perception errors. In that work, sensory/perceptual similarity was manipulated along dimensions of pitch and harmonicity, and semantic similarity was manipulated based on experimenter-defined category membership (within- vs. between-category changes). Change perception was most accurate when the source of the change was distinct from the background at both semantic and sensory/perceptual levels. Gregg and Snyder (2012) reported similar behavioral results and extended previous reports by demonstrating increased amplitudes in both early (i.e., N100) and late (i.e., P300) components of event-related scalp potentials (ERPs) for detected changes, suggesting reduced cortical activation during change deafness. More recently, Gregg et al. (2014) reported enhanced P300 activity for detected changes (see also Puschmann et al., 2013). Together, these results demonstrate that both sensory/perceptual and semantic-level similarity are important, and that the degree of overlap among individual features or sound objects should be directly related to performance. However, as noted above, this notion can be difficult to test, since quantifying systematic differences for stimuli that vary along many dimensions is challenging, especially for complex environmental sounds (see McDermott et al., 2009, 2013 for related discussion).
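
As a concrete illustration of the rating-based approach, the sketch below fits a two-dimensional MDS solution to a small, hypothetical matrix of pairwise similarity ratings. The sounds, ratings, and dimensionality are invented for illustration; actual studies (e.g., Gygi et al., 2007) use much larger stimulus sets and formal fit diagnostics.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise similarity ratings (0 = completely different, 1 = identical)
# for four environmental sounds; real studies use far larger sets (e.g., 50 sounds).
sounds = ["dog bark", "phone ring", "hammer", "engine"]
similarity = np.array([
    [1.0, 0.2, 0.3, 0.4],
    [0.2, 1.0, 0.1, 0.2],
    [0.3, 0.1, 1.0, 0.6],
    [0.4, 0.2, 0.6, 1.0],
])

# MDS operates on dissimilarities, so convert the ratings first.
dissimilarity = 1.0 - similarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one (x, y) point per sound

for label, (x, y) in zip(sounds, coords):
    print(f"{label:>10s}: ({x:+.2f}, {y:+.2f})")
```

Sounds that listeners rate as similar land close together in the resulting map, and the inter-point distances provide the quantitative handle on similarity discussed in the text.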

Uncertainty

In any experimental task, stimulus uncertainty (e.g., the number of competing sources, varying spatial position) can impact listener performance, but what seems to matter most is the difference between what a listener hears and what they expect to hear (Durlach et al., 2003). Like similarity, psychophysical definitions of uncertainty can be relatively straightforward when uncertainty is limited to one or a few dimensions. Even for complex environmental sources this may be true, but not necessarily when those sources are embedded in real-world contexts where there may be numerous sources of uncertainty. Uncertainty effects have been demonstrated across a number of auditory phenomena, ranging from low-level detection masking to mid- and higher-level effects such as auditory search, change deafness, or the cocktail party problem. Variability in the target or background (contextual) stimuli, occurring within or across trials, and the magnitude of these variations can produce anywhere from minimal to high uncertainty. A good example of minimal uncertainty is the auditory detection-masking paradigm (e.g., Fletcher, 1940; see Moore, 2012), in which detection of a well-defined and known pure-tone target is measured as a function of the bandwidth of a white noise masker centered at the target frequency. Here, there is essentially no uncertainty associated with the target and only minimal uncertainty about the masker, leading to precise and stable masking thresholds over time. Because this form of detection masking is due to interactions at the auditory periphery, it is sometimes referred to as energetic masking (Kidd et al., 2008).

Unlike energetic masking, informational masking (IM) is highly dependent on the degree of stimulus uncertainty and is thought to arise at more central levels, across multiple stages of processing. IM is related to a number of perceptual and cognitive constructs, such as attention, memory, and perceptual grouping (Kidd et al., 2008; Best et al., 2012). In the basic IM paradigm, a target tone is presented simultaneously with a varying number of contextual "masking" tones. The target is always separated in frequency from each of the masking tones by at least an equivalent rectangular bandwidth (ERB; an estimate of the size of the auditory filter; see Moore, 2012) to prevent energetic masking. The general result is that target detection thresholds increase as a function of the number of masking tones. This increase in target detection thresholds is monotonic up to a critical masker density, with a reduction or asymptote beyond this critical limit (e.g., Lutfi et al., 2003).
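
For readers unfamiliar with the ERB constraint, the short sketch below applies the standard Glasberg and Moore approximation, ERB(f) ≈ 24.7 × (4.37 f/1000 + 1) Hz, to draw random masker frequencies that stay at least one ERB away from a fixed target tone. The target frequency, masker range, and masker count are illustrative assumptions rather than parameters from any particular IM study.

```python
import random

def erb(frequency_hz):
    # Equivalent rectangular bandwidth (Hz) of the auditory filter centered at
    # frequency_hz, using the Glasberg and Moore (1990) approximation.
    return 24.7 * (4.37 * frequency_hz / 1000.0 + 1.0)

def draw_maskers(target_hz=1000.0, n_maskers=8, low_hz=200.0, high_hz=5000.0):
    # Draw random masker frequencies, rejecting any that fall within one ERB of
    # the target so that any masking is informational rather than energetic.
    protected = erb(target_hz)
    maskers = []
    while len(maskers) < n_maskers:
        candidate = random.uniform(low_hz, high_hz)
        if abs(candidate - target_hz) >= protected:
            maskers.append(candidate)
    return maskers

print(f"ERB at 1 kHz: {erb(1000.0):.1f} Hz")
print([round(f) for f in draw_maskers()])
```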

In typical IM paradigms, uncertainty is primarily driven by spectral or temporal variation in contextual elements; uncertainty about the target is fairly minimal. This is generally a matter of necessity, since targets typically need to be well-defined and well-known for listeners to make perceptual decisions. In the real world, as in change deafness, not only is the context highly uncertain due to the presentation of multiple sound sources, but so is the target, because the target can be any one of the sounds presented on a given trial. Moreover, the change sound can be drawn from the entire set of sounds in the input distribution, introducing substantial across-trial uncertainty. As in IM, the magnitude of errors in change deafness seems to be related to the magnitude of stimulus uncertainty. For example, Eramudugolla et al. (2005) found that reducing uncertainty, by cueing the identity of the change item and by spatially separating the sounds in a scene, could significantly reduce errors (see also Backer and Alain, 2012 for discussion of pre- vs. post-change cueing). However, even with substantial spatial separation, sound localization errors systematically increase with the number of background sources (Simpson et al., 2007).

A Framework for Understanding Similarity and Uncertainty Effects

As discussed in the preceding sections, the stimulus factors of similarity and uncertainty are important for a number of auditory phenomena, including change deafness. Traditional psychophysical approaches commonly map the perception of changes along individual stimulus dimensions and, as a result, tend to treat the contributions of similarity and uncertainty as independent. For example, measurement of the frequency difference threshold between two tones (e.g., Moore, 2012) can be treated operationally as a measurement of the similarity between those tones. If the target stimulus remains fixed, the only (minimal) uncertainty is due to the change in comparison tone frequency. By contrast, in the typical simultaneous IM paradigm, the target is always dissimilar from the contextual background tones, and changes in detection thresholds are the result of changes in the number of background tones. In more complex IM designs, the negative effects of high uncertainty can be mitigated when the similarity between target and contextual tones is further reduced, resulting in lower detection thresholds (e.g., Kidd et al., 1994; Neff, 1995; Durlach et al., 2003).

The interaction of similarity and uncertainty should be even more pronounced for complex listening tasks using environmental sounds that are often multidimensional and dynamic in nature. This seems to be the case for the change deafness phenomenon where there is clearly substantial across-trial uncertainty about both the identity of the changed source and the contextual background in which it is presented, all in addition to varying levels of within-trial similarity among sources.

Understanding how the stimulus factors of similarity and uncertainty relate to one another and affect performance can provide a predictive account of change deafness. A systematic mapping of stimulus uncertainty, while still complicated, is probably the more straightforward of the two and would require independent control of uncertainty for target and contextual sounds, both within and across trials. A systematic mapping of similarity presents a more difficult proposition, especially given potentially suprathreshold differences across many dimensions.

As introduced earlier, one method to define similarity could be borrowed from approaches used in the environmental sound perception literature, where listener perceptual space for a particular set of sounds is estimated from listener-generated similarity ratings through the application of multivariate statistical techniques such as MDS (e.g., Gygi et al., 2007; Gaston and Letowski, 2012). By themselves, listener ratings cannot be directly tied to specific sensory/perceptual or semantic-level dimensions. Rather, they reflect similarity based on all of the perceived differences between stimulus items in a set. A good example of this approach comes from Gygi et al. (2007), who collected similarity ratings for sound pairs from a set of 50 environmental sounds and analyzed the ratings using MDS. They then compared the mapping of sound sources in the MDS solution with measured distributions of spectral-temporal acoustic descriptors to relate the physical attributes of the sounds to listener perceptual space. Using this type of approach, similarity can be defined as the Euclidean or "city-block" distance between sound sources in the MDS solution, which provides a degree of systematicity in defining differences between stimuli (Young and Hamer, 1987). Additionally, identifying those properties (whether low-level sensory/perceptual or higher-level cognitive/semantic) that are correlated with MDS space can generate hypotheses about the information that may be important in perceptual decisions. There is at least some evidence of good agreement between the organization of MDS space, based on perceived similarity, and the recognition of specific sets of environmental sounds (Gaston and Letowski, 2012). Likewise, in the change deafness paradigm, similarity can be defined as the estimated perceptual distance between the change sound and the contextual sounds. Indeed, Gregg and Samuel (2009) used the results of Gygi et al. (2007) as the basis for selecting sounds differing along the sensory/perceptual dimensions of pitch and harmonicity.
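
As a sketch of how such distances could quantify similarity in a change deafness trial, the code below computes Euclidean and city-block distances between a hypothetical change sound and the contextual sounds in a two-dimensional MDS space. The coordinates are invented for illustration; in practice they would come from an MDS solution fit to listener ratings, as in Gygi et al. (2007).

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical 2-D MDS coordinates for a small scene; in practice these would be
# taken from an MDS solution fit to listener similarity ratings.
context_coords = np.array([
    [0.1, 0.8],   # e.g., phone ring
    [0.2, 0.6],   # e.g., alarm clock
    [0.9, 0.1],   # e.g., engine
])
change_coord = np.array([[0.8, 0.2]])  # e.g., truck backing up

euclidean = cdist(change_coord, context_coords, metric="euclidean")[0]
city_block = cdist(change_coord, context_coords, metric="cityblock")[0]

# One simple prediction: the smaller the minimum distance between the change
# sound and its background, the higher the expected rate of change deafness.
print("Euclidean distances:", np.round(euclidean, 2))
print("City-block distances:", np.round(city_block, 2))
print("Minimum Euclidean distance:", round(euclidean.min(), 2))
```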

A Path Forward

Understanding the perception of environmental sound sources is difficult because of the complexity inherent to this broad class of sounds. In general, as complexity increases, so do potential sources of variability, which may or may not be source-relevant (e.g., Pastore et al., 2008). This complexity, which can be related to important stimulus dimensions or simply noise, can be problematic for traditional psychophysical approaches that seek to map perception as a function of systematic changes in single, well-defined stimulus dimensions. Environmental sounds are rarely uni-dimensional and stimulus differences across- and within-sound classes are rarely systematic. Speech is similarly complex and the relatively good understanding of this sound class is largely based on psychophysical mappings of speech properties (e.g., see Raphael, 2008). However, this mature understanding has had the benefit of more than 75 years of systematic, incremental research. Compared to speech perception, the study of environmental sound perception is relatively new, and supporting research has been limited.

Psychophysical approaches by themselves may be sufficient to make this problem tractable, but would require an incredible amount of time and effort. One alternative is to consider various broad classification metrics that map onto global representations of similarity between sound sources (i.e., MDS space). This approach has the benefit of enabling collection of perceptual similarity data on a large stimulus set relatively quickly. The relationship between this more global representation and both sensory/perceptual and semantic dimensions can drive predictions about information that may be important in differentiating environmental sound classes (e.g., Ballas, 1993; Gygi et al., 2007; Gaston and Letowski, 2012). These relationships would only be based on correlations, but they could ultimately support predictions for targeted psychophysical examinations of potentially relevant stimulus information.
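
One way to realize this correlational step is sketched below: a hypothetical acoustic descriptor (here, spectral centroid values invented for illustration) is correlated with each dimension of an MDS solution to flag candidate dimensions for targeted psychophysical follow-up. The descriptor values and coordinates are assumptions, not measurements from any cited study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical 2-D MDS coordinates for six sounds (rows) and a hypothetical
# acoustic descriptor (e.g., spectral centroid in Hz) measured for each sound.
mds_coords = np.array([
    [-0.9,  0.2],
    [-0.5,  0.6],
    [-0.1, -0.4],
    [ 0.3,  0.5],
    [ 0.6, -0.6],
    [ 0.8,  0.1],
])
spectral_centroid = np.array([450.0, 800.0, 1200.0, 2100.0, 3300.0, 4000.0])

# Correlate the descriptor with each MDS dimension; a strong correlation suggests
# that dimension may reflect the descriptor and is worth targeted psychophysics.
for dim in range(mds_coords.shape[1]):
    r, p = pearsonr(spectral_centroid, mds_coords[:, dim])
    print(f"Dimension {dim + 1}: r = {r:+.2f}, p = {p:.3f}")
```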

Concluding Comments

The link between stimulus information and performance is important in understanding perception in the real world. Change deafness reflects a fundamental limitation on human perceptual experience and can help reveal the basic mechanisms underlying perceptual errors in complex listening. These types of complex listening tasks begin to approximate real-world listening conditions while allowing the use of well-controlled psychophysical techniques common in auditory perception research. In this brief review, a common pattern emerges: errors in change perception can be attributed to the effects of stimulus similarity and uncertainty. Conditions with high informational overlap can make it more difficult to effectively allocate attention to the features or feature combinations that are relevant to the listener. It is likely that the patterns of errors observed in change deafness are the result of high information load and its impact on the allocation of attentional resources (see Lutfi, 1993; Oh and Lutfi, 1998 for examples of informational-attentional interactions in audition; see also Alvarez and Cavanagh, 2004 for a visual example). These same patterns are associated with performance in a related phenomenon: the cocktail party problem. This is not surprising given the common origin of the two phenomena (i.e., Cherry, 1953). Change deafness and the cocktail party problem demonstrate two extremes of auditory scene analysis. In one case, high similarity and uncertainty lead to misperceptions, presumably because of a failure to adequately segment the complex scene. In the other, low similarity and uncertainty create a sort of pop-out effect; the dominant perception is one in which the target signal is easily segmented from the background sounds (see Lotto and Holt, 2011). In change deafness, it seems that the information reaching memory for encoding is not parsed in a way that enables perception of change, and this may be related to some form of IM influencing early stream segregation processes. The leading view, that change deafness represents a failure to compare incoming and previously stored information, may be part of the story (e.g., Gregg and Samuel, 2009), but it is more likely that the information being compared is inaccurate or otherwise inaccessible due to informational factors.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Agres, K. R., and Krumhansl, C. L. (2008). “Musical change deafness: the inability to detect change in a non-speech auditory domain,” in Proceedings of the 30th Annual Conference of the Cognitive Science Society (Washington, DC), 969–974.

Alvarez, G. A., and Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychol. Sci. 15, 106–111. doi: 10.1111/j.0963-7214.2004.01502006.x

Backer, K. C., and Alain, C. (2012). Orienting attention to sound object representations attenuates change deafness. J. Exp. Psychol. Hum. Percept. Perform. 38, 1554. doi: 10.1037/a0027858

Ballas, J. A. (1993). Common factors in the identification of an assortment of brief everyday sounds. J. Exp. Psychol. Hum. Percept. Perform. 19, 250.

Best, V., Marrone, N., Mason, C. R., and Kidd, G. (2012). The influence of non-spatial factors on measures of spatial release from masking. J. Acoust. Soc. Am. 131, 3103–3110. doi: 10.1121/1.3693656

Bregman, A. S. (1994). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979. doi: 10.1121/1.1907229

Durlach, N. I., Mason, C. R., Kidd, G. Jr., Arbogast, T. L., Colburn, H. S., and Shinn-Cunningham, B. G. (2003). Note on informational masking (L). J. Acoust. Soc. Am. 113, 2984–2987. doi: 10.1121/1.1570435

Eramudugolla, R., Irvine, D. R., McAnally, K. I., Martin, R. L., and Mattingley, J. B. (2005). Directed attention eliminates ‘change deafness’ in complex auditory scenes. Curr. Biol. 15, 1108–1113. doi: 10.1016/j.cub.2005.05.051

Fenn, K. M., Shintel, H., Atkins, A., Skipper, J. I., Bond, V. C., and Nusbaum, H. C. (2011). When less is heard than meets the ear: change deafness in a telephone conversation. Q. J. Exp. Psychol. 64, 1442–1456. doi: 10.1080/17470218.2011.570353

Fletcher, H. (1940). Auditory patterns. Rev. Mod. Phys. 12, 47. doi: 10.1103/RevModPhys.12.47

Gaston, J. R., and Letowski, T. R. (2012). Listener perception of single-shot small arms fire. Noise Control Eng. J. 60, 236–245. doi: 10.3397/1.3701001

Gregg, M., Irsik, V. C., and Snyder, J. S. (2014). Change deafness and object encoding with recognizable and unrecognizable sounds. Neuropsychologia 61, 19–30. doi: 10.1016/j.neuropsychologia.2014.06.007

Gregg, M. K., and Samuel, A. G. (2008). Change deafness and the organizational properties of sounds. J. Exp. Psychol. Hum. Percept. Perform. 34, 974. doi: 10.1037/0096-1523.34.4.974

Gregg, M. K., and Samuel, A. G. (2009). The importance of semantics in auditory representations. Atten. Percept. Psychophys. 71, 607–619. doi: 10.3758/APP.71.3.607

Gregg, M. K., and Snyder, J. S. (2012). Enhanced sensory processing accompanies successful detection of change for real-world sounds. Neuroimage 62, 113–119. doi: 10.1016/j.neuroimage.2012.04.057

Gygi, B., Kidd, G. R., and Watson, C. S. (2007). Similarity and categorization of environmental sounds. Percept. Psychophys. 69, 839–855. doi: 10.3758/BF03193921

Kidd, G., Mason, C. R., Deliwala, P. S., Woods, W. S., and Colburn, H. S. (1994). Reducing informational masking by sound segregation. J. Acoust. Soc. Am. 95, 3475–3480. doi: 10.1121/1.410023

Kidd, G., Mason, C. R., Richards, V. M., Gallun, F. J., and Durlach, N. I. (2008). “Informational masking,” in Springer Handbook of Auditory Research: Auditory Perception of Sound Sources, eds W. Yost, A. N. Popper, and R. R. Fay (New York, NY: Springer), 143–189.

Leech, R., Gygi, B., Aydelott, J., and Dick, F. (2009). Informational factors in identifying environmental sounds in natural auditory scenes. J. Acoust. Soc. Am. 126, 3147–3155. doi: 10.1121/1.3238160

Lotto, A., and Holt, L. (2011). Psychology of auditory perception. Wiley Interdiscip. Rev. Cogn. Sci. 2, 479–489. doi: 10.1002/wcs.123

Lutfi, R. A. (1993). A model of auditory pattern analysis based on component-relative-entropy. J. Acoust. Soc. Am. 94, 748–758. doi: 10.1121/1.408204

Lutfi, R. A., Kistler, D. J., Callahan, M. R., and Wightman, F. L. (2003). Psychometric functions for informational masking. J. Acoust. Soc. Am. 114, 3273–3282. doi: 10.1121/1.1629303

McAnally, K. I., Martin, R. L., Eramudugolla, R., Stuart, G., Irvine, D. R., and Mattingley, J. B. (2010). A dual-process account of auditory change detection. J. Exp. Psychol. Hum. Percept. Perform. 36, 994–1004. doi: 10.1037/a0016895

McDermott, J. H., Oxenham, A. J., and Simoncelli, E. P. (2009). “Sound texture synthesis via filter statistics,” in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2009. WASPAA'09 (New Paltz, NY: IEEE), 297–300.

McDermott, J. H., Schemitsch, M., and Simoncelli, E. P. (2013). Summary statistics in auditory perception. Nat. Neurosci. 16, 493–498. doi: 10.1038/nn.3347

Moore, B. C. (2012). An Introduction to the Psychology of Hearing, 6th Edn. Bingley: Emerald Publishing Group.

Neff, D. L. (1995). Signal properties that reduce masking by simultaneous, random-frequency maskers. J. Acoust. Soc. Am. 98, 1909–1920. doi: 10.1121/1.414458

Oh, E. L., and Lutfi, R. A. (1998). Nonmonotonicity of informational masking. J. Acoust. Soc. Am. 104, 3489–3499. doi: 10.1121/1.423932

Pastore, R. E., Flint, J. D., Gaston, J. R., and Solomon, M. J. (2008). Auditory event perception: the source—perception loop for posture in human gait. Percept. Psychophys. 70, 13–29. doi: 10.3758/PP.70.1.13

Pavani, F., and Turatto, M. (2008). Change perception in complex auditory scenes. Percept. Psychophys. 70, 619–629. doi: 10.3758/PP.70.4.619

Puschmann, S., Sandmann, P., Ahrens, J., Thorne, J., Weerda, R., Klump, G., et al. (2013). Electrophysiological correlates of auditory change detection and change deafness in complex auditory scenes. Neuroimage 75, 155–164. doi: 10.1016/j.neuroimage.2013.02.037

Raphael, L. J. (2008). “Acoustic cues to the perception of segmental phonemes,” in The Handbook of Speech Perception, eds D. Pisoni and R. Remez (Malden, MA: Blackwell Publishing), 182–206.

Simons, D. J., and Rensink, R. A. (2005). Change blindness: past, present, and future. Trends Cogn. Sci. 9, 16–20. doi: 10.1016/j.tics.2004.11.006

Simpson, B. D., Brungart, D. S., Iyer, N., Gilkey, R. H., and Hamil, J. T. (2007). Localization in Multiple Source Environments: Localizing the Missing Source (No. AFRL-RH-WP-JA-2008-0001). Air Force Research Lab Wright-Patterson AFB OH Human Effectiveness Directorate, DTIC. Available online at: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA487288

Sinnett, S., Costa, A., and Soto-Faraco, S. (2006). Manipulating inattentional blindness within and across sensory modalities. Q. J. Exp. Psychol. 59, 1425–1442. doi: 10.1080/17470210500298948

Snyder, J. S., and Gregg, M. K. (2011). Memory for sound, with an ear toward hearing in complex auditory scenes. Atten. Percept. Psychophys. 73, 1993–2007. doi: 10.3758/s13414-011-0189-4

Snyder, J. S., Gregg, M. K., Weintraub, D. M., and Alain, C. (2012). Attention, awareness, and the perception of auditory scenes. Front. Psychol. 3:15. doi: 10.3389/fpsyg.2012.00015

Vaillancourt, V., Nélisse, H., Laroche, C., Giguère, C., Boutin, J., and Laferrière, P. (2013). Comparison of sound propagation and perception of three types of backup alarms with regards to worker safety. Noise Health 15, 420. doi: 10.4103/1463-1741.121249

Vitevitch, M. S. (2003). Change deafness: the inability to detect changes between two voices. J. Exp. Psychol. Hum. Percept. Perform. 29, 333. doi: 10.1037/0096-1523.29.2.333

Vitevitch, M. S., and Donoso, A. (2011). Processing of indexical information requires time: evidence from change deafness. Q. J. Exp. Psychol. 64, 1484–1493. doi: 10.1080/17470218.2011.578749

Young, F. W., and Hamer, R. M. (1987). Multidimensional Scaling: History, Theory, and Applications. New York, NY: Erlbaum.

Keywords: change deafness, similarity effects, uncertainty, informational masking, environmental sound perception, complex sound perception

Citation: Dickerson K and Gaston JR (2014) Did you hear that? The role of stimulus similarity and uncertainty in auditory change deafness. Front. Psychol. 5:1125. doi: 10.3389/fpsyg.2014.01125

Received: 19 August 2014; Accepted: 16 September 2014;
Published online: 02 October 2014.

Edited by:

Claude Alain, Rotman Research Institute, Canada

Reviewed by:

Joel Snyder, University of Nevada Las Vegas, USA
Kristina C. Backer, Rotman Research Institute, Canada

Copyright © 2014 Dickerson and Gaston. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kelly Dickerson, Department of the Army, US Army Research Laboratory, ATTN: RDRL-HRS, Aberdeen Proving Ground, MD 21005, USA e-mail: kdicker1@binghamton.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.