CLINICAL CASE STUDY article

Front. Neurosci., 19 June 2015
Sec. Systems Biology Archive

Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy

  • 1ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
  • 2Department of Cognitive Science, Macquarie University, Sydney, Australia
  • 3Department of Psychology, Macquarie University, Sydney, Australia

An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing in GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Whereas typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than to speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong EEG response to pure tones. The consistency of the MEG and EEG data in this single case study demonstrates both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy.

Introduction

According to recent estimates, around 30% of individuals with autism spectrum disorders (ASD) remain nonverbal or minimally verbal despite intervention (Coleman, 2000; Mody and Belliveau, 2013; Tager-Flusberg and Kasari, 2013). A significant proportion of these individuals never speak, while others remain at the stage of echolalia or have a limited repertoire of fixed words and phrases that may be communicated through alternative/augmentative communication systems (Kasari et al., 2013). Yet the vast majority of research on cognition and brain function in ASD focuses on high-functioning individuals with age-appropriate or only mildly-impaired language and cognitive abilities. This reflects the practical difficulties of testing these profoundly affected individuals, as well as concerns that results may be compromised by failure to understand task instructions or comply with task demands. However, it is questionable whether insights gained from studies of linguistically able individuals with ASD may be extrapolated to those who are minimally verbal.

To conduct research with minimally verbal children with ASD, it is important to develop valid measures that do not depend upon the ability to understand task instructions or comply with task demands. In principle, neurophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are well suited to this purpose (Tager-Flusberg and Kasari, 2013). Electroencephalography reflects electrical activity from populations of synchronously firing neurons (Luck, 2005), while MEG measures the corresponding magnetic fields (Hämäläinen et al., 1993; Hari et al., 2010). Both techniques are safe, noninvasive, and silent, and can provide insights into the neural mechanisms underpinning cognitive function with millisecond precision. Importantly, EEG and MEG responses can often be recorded passively while the participant is engaged in another activity, thereby avoiding concerns about confounding influences of poor task understanding and poor attention.

MEG and EEG offer complementary strengths. MEG has superior spatial resolution because the brain's magnetic fields are not “smeared” or distorted by the brain, scalp, and skull, and MEG recordings are less prone to physiological noise than EEG (Hämäläinen et al., 1993; Hari et al., 2010). This allows for cleaner extraction of brain responses that are simpler to interpret. MEG set-up is relatively easy and requires no physical contact with the sensors, and so is well tolerated by verbal children with ASD (Roberts et al., 2008; Hari et al., 2010; Brock et al., 2013). On the other hand, EEG is much cheaper and more widely available, making it a more realistic tool for large-scale multi-site studies and clinical applications.

Despite their considerable potential, MEG and EEG studies of profoundly affected individuals with ASD are rare. To date, such studies have focussed on auditory processing of simple tone stimuli (Seri et al., 1999; Ferri et al., 2003; Tecchio et al., 2003). Using MEG, Tecchio et al. (2003) tested 8- to 32-year-old autistic individuals with “moderately to severely impaired” verbal communication (according to the Childhood Autism Rating Scale). Relative to typically developing control participants, these individuals showed a normal M100 response to the onset of tones, but a weak or absent mismatch response to rare sounds in the sequence. In contrast, Ferri et al. (2003) found no evidence of group differences in the mismatch response or the subsequent P3a response. Their participants were described as having “low functioning autism” and “mental retardation,” but unfortunately no further details were provided regarding their language proficiency or whether they were nonverbal.

The current paper adds to this extremely sparse literature on auditory processing in minimally verbal individuals with ASD. We present a case report of GM, a young autistic girl with cerebral palsy who, at the time of writing, has never spoken. When GM was 8 years and 10 months old, we had the opportunity to measure her brain responses to vowel sounds and complex tones using MEG. Two years later, we were able to re-test GM, this time using a novel “gaming” EEG headset that has been adapted for research purposes. Together, the two experiments indicate that GM has a highly unusual pattern of brain responses, characterized by atypically strong responses to nonspeech sounds but weak responses to speech. This case report demonstrates, we believe, the feasibility and potential of both EEG and MEG for the study of minimally verbal individuals with ASD as well as those with cerebral palsy.

Background

GM is a young girl with ASD and cerebral palsy. At the time of testing for Experiment 1, she was 8 years and 10 months old. By the time of Experiment 2, she was 10 years and 10 months old. Although she does vocalize, she has never spoken in words, and she currently communicates using an augmentative and alternative communication system on an iPad, with prompting from her mother. She attends a school for children with special needs. Other than her cerebral palsy, GM has no history of brain injury or epilepsy. She has no history of ear infections, and was not on medications at the time of either testing session. Her family speaks Australian English at home.

GM was diagnosed with cerebral palsy (spastic diplegia) at 18 months. She has global developmental delay and did not walk until after her third birthday. Her mother reports that, as an infant, she had good eye contact and social communication development but lost these skills at around 18 months. Her diagnosis of DSM-IV Autistic Disorder was conferred by a developmental pediatrician at 59 months (American Psychiatric Association, 1994). Under DSM-5, she would therefore automatically qualify for a diagnosis of ASD (American Psychiatric Association, 2013). GM's ASD diagnosis was further supported by her “Lifetime” score of 29 on the Social Communication Questionnaire (SCQ; Rutter et al., 2003), which is well above the threshold of 15 for suspected ASD. Module 1 of the Autism Diagnostic Observation Schedule (ADOS; Lord et al., 2002) was administered but discontinued because GM showed distress early in the assessment and increasing frustration when expected to play. During ADOS administration, she failed to engage in any of the activities, and did not partake in imitation, free play, or reciprocal interaction. While she vocalized sporadically, she did not initiate, engage in, or respond to speech directed at her.

Cognitive Abilities and Adaptive Behavior

During the testing session for Experiment 1, we attempted to administer a number of standardized tests including the Peabody Picture Vocabulary Test—4th Edition (Dunn and Dunn, 2007) and the Matrices subtest of the Wechsler Intelligence Scale for Children (Wechsler, 2003). Testing using the standard procedures was unsuccessful, largely due to GM's severe communication challenges and her lack of engagement with the tasks. However, GM's mother was able to provide a report from a Clinical Psychologist and Senior Clinical Neuropsychologist of an assessment conducted at age 8 years and 2 months using modified procedures. Relevant sections from the report are reproduced below, with the caveat, noted by the clinicians, that the results of testing may have under-represented GM's true abilities.

“The administration of assessment protocol was adapted due to the severity of [GM's] attention and expressive language difficulties. Task instructions were often repeated and the examiners pointed to relevant stimuli to help [GM] focus. Tasks were selected that allowed [GM] to point to her answer and tasks that required a single word or two word response, that [GM] could type on a computer or her iPad….”

“The nonverbal subtests on the WISC were administered to assess [GM's] level of intellectual functioning… The Block Design subtest could not be administered because of [GM's] motor difficulties… [GM's] visual processing and abstract reasoning ability were found to fall within the ‘extremely low’ range. The results indicated that [GM's] performance/nonverbal skills were consistent with mild to moderate level of intellectual disability….”

“[GM's] understanding of vocabulary was measured with the PPVT… On formal testing, her performance was consistent with a 3–4 year age level….”

“[GM's mother] completed the [Adaptive Behavior Assessment System—2nd Edition] which assesses a child's level of independence in everyday living including the areas of communication, daily and community living skills, social and leisure, functional pre-academics, and motor skills. [GM's] skills overall were in the significantly delayed or ‘extremely low’ range. There was no significant variation evident in her overall level of functioning.”

Auditory Sensory Processing

Given the study's focus on auditory processing, GM's mother also completed the Short Sensory Profile (McIntosh et al., 1999), a parent questionnaire that addresses the child's sensory processing in everyday situations. GM scored within the typical range for the Tactile, Taste/Smell, Movement, and Visual/Auditory Sensitivity items. She scored within the Probable Difference range for the Underresponsive/Seeks Sensation and Auditory Filtering items, and within the Definite Difference range in the Low Energy/Weak section, which relates to under-responsiveness to vestibular and proprioceptive sensation (Lane et al., 2011). Within the auditory items specifically, she was reported never to respond negatively to unexpected or loud noises, hold her hands over her ears, or have trouble completing tasks when the radio is on. However, she was reported to be occasionally distracted or to have trouble functioning in noisy environments. Further, she was reported not to hear people, not to respond to her name being called, and to have difficulties with attention.

Written informed consent was obtained from the mother of the patient for publication of this Case report. A copy of the written consent is available for review by the Editor-in-Chief of this journal.

Experiment 1

In Experiment 1, we used MEG to investigate GM's brain responses to speech and nonspeech sounds. Procedures for this experiment and Experiment 2 were approved by the Macquarie University Human Research Ethics Committee. Written consent was obtained from parents of all participants, who were given a modest amount of money, a small prize, and a certificate for their participation.

Participants

At the time of testing, GM was 8 years and 10 months old. Her brain responses were compared to those of 18 typically developing (TD) children (15 boys) and 13 verbal children with ASD (11 boys), aged between 6 and 14 years, who were tested as part of a separate study (Yau et al., in press). All children spoke English as a first language and had normal hearing as determined using an Otovation Amplitude T3 series audiometer.

All children with ASD had reports from psychologists or pediatricians confirming their DSM-IV (American Psychiatric Association, 1994) and/or ICD-10 (World Health Organisation, 1992) diagnosis of an ASD. In addition, they all scored above the Autism cut-off on the SCQ. All children in the ASD group (“Verbal ASD”) had phrase speech, although performance on standardized language assessments varied widely, as shown in Table 1. Typically developing children scored below the Autism cut-off on the SCQ and had no reported history of brain injury, ASD, language impairment, or developmental disorders in their family.

TABLE 1

Table 1. Characteristics of children in the verbal autism spectrum disorders (ASD) and typically developing (TD) comparison groups.

Stimuli

Stimuli were 200 ms in duration with 5-ms ramps at onset and offset to avoid clicks and distortions. The speech stimulus was a natural-sounding English vowel /a/ (McArthur et al., 2009). The nonspeech stimulus was a complex tone created using Adobe Audition to match the first three formants of the speech sound (see Table 2 for stimulus characteristics). The main difference between the two sounds was the presence of a fundamental frequency (F0) in the speech stimulus, which gave the speech sound its “speechiness.”

TABLE 2

Table 2. Speech and nonspeech stimuli acoustic parameters.
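
To make the stimulus construction concrete, the following sketch (in Python/NumPy) shows one way to synthesize a 200-ms complex tone from three formant-frequency sinusoids with 5-ms raised-cosine onset and offset ramps. The formant frequencies below are placeholders rather than the values in Table 2, and the original stimuli were created in Adobe Audition, so this is purely illustrative.

```python
import numpy as np
from scipy.io import wavfile

FS = 44100    # sample rate (Hz)
DUR = 0.200   # stimulus duration (s)
RAMP = 0.005  # onset/offset ramp duration (s)

def complex_tone(formants_hz, fs=FS, dur=DUR, ramp=RAMP):
    """Sum of sinusoids at the given formant frequencies, with
    raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur * fs)) / fs
    tone = sum(np.sin(2 * np.pi * f * t) for f in formants_hz)
    tone /= np.max(np.abs(tone))                 # normalize to +/-1
    n_ramp = int(ramp * fs)
    window = np.ones_like(tone)
    fade = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    window[:n_ramp] = fade                       # fade in
    window[-n_ramp:] = fade[::-1]                # fade out
    return tone * window

# Hypothetical formant frequencies (Hz) for an /a/-like vowel;
# the actual values used in the study are those listed in Table 2.
nonspeech = complex_tone([700.0, 1200.0, 2600.0])
wavfile.write("nonspeech_tone.wav", FS, (nonspeech * 32767).astype(np.int16))
```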

Stimuli were presented binaurally at 75 dB SPL via earphones attached to rubber air tubes (Model ER-30, Etymotic Research Inc., Elk Grove Village, IL). Children were presented with eight blocks of 100 speech stimuli interleaved with eight blocks of 100 nonspeech stimuli. The stimulus onset asynchrony (SOA) was jittered between 900 and 1100 ms. The stimuli were presented in an oddball paradigm originally designed to elicit a mismatch field. Each block of 100 sounds included 85 frequently occurring “standard” sounds and 15 rarely occurring “deviant” sounds (deviants had a 10% increase in the frequency of F1, F2, and F3 relative to the standard sound). However, like other researchers, we found that the mismatch response was not reliably elicited at the individual level (Kurtzberg et al., 1995; Uwer and von Suchodoletz, 2000; McArthur et al., 2003; Bishop, 2007; Mahajan and McArthur, 2012). Thus, following past research, our analyses focused on the obligatory brain responses to the onset of the standard stimuli (McArthur and Bishop, 2005; Whitehouse and Bishop, 2008).
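
For readers wishing to reproduce a comparable design, a block of trials with this standard/deviant ratio and jittered SOA could be generated as in the sketch below. The exact randomization constraints used in the original presentation software are not reported, so the sequencing logic here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed

def make_oddball_block(n_trials=100, p_deviant=0.15, soa_range=(0.9, 1.1)):
    """Return (trial_types, onset_times) for one block: 85% 'standard',
    15% 'deviant', with SOA jittered uniformly between 900 and 1100 ms."""
    n_dev = int(round(n_trials * p_deviant))
    types = np.array(["standard"] * (n_trials - n_dev) + ["deviant"] * n_dev)
    rng.shuffle(types)
    soas = rng.uniform(*soa_range, size=n_trials)
    onsets = np.concatenate(([0.0], np.cumsum(soas[:-1])))  # onset times (s)
    return types, onsets

types, onsets = make_oddball_block()
```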

MEG Recording

MEG data were recorded using 160 coaxial first-order gradiometers with a 50 mm baseline (Model PQ1160R-N2, KIT, Kanazawa, Japan; Kado et al., 1999; Uehara et al., 2003). MEG data were acquired with a sampling rate of 1000 Hz and a filter bandpass of 0.03–200 Hz. Prior to MEG recording, each child was fitted with an elasticized cap containing five marker coils. The positions of the coils and the shape of the participant's head were measured with a pen digitizer (Polhemus Fastrack, Colchester, VT). Head position was measured with the marker coils before and after each MEG recording, and children were visually monitored for head movements. If the authors detected movement from the child, data recording for that block was aborted and the marker coils were re-measured. Children whose head movement exceeded 5 mm were excluded from further analyses. During the recording, participants lay on a comfortable bed inside the magnetically shielded room and watched a silent subtitled DVD of their choice projected on a screen on the ceiling of the MEG room.

MEG Data Processing

MEG data were processed using BESA 6.0 software (MEGIS Software GmbH, Grafelfing, Germany). The data were filtered between 0.1 and 30 Hz, epoched from −100 ms pre-stimulus onset to 500 ms post-stimulus onset, and baseline corrected from −100 to 0 ms. Epochs with gradient artifact (including blinks and eye-movements) greater than 5336 fT/cm were identified using the artifact-rejection tool in BESA, and excluded from further analysis. All participants had at least 75% artifact-free epochs for each condition. On average, there were 542 accepted epochs for speech sounds and 538 for nonspeech sounds in the control group. For GM, there were 448 accepted epochs for speech sounds and 494 for nonspeech sounds.
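
The analyses reported here were performed in BESA 6.0. Purely as an illustration of the same preprocessing steps (0.1–30 Hz filtering, −100 to 500 ms epochs, baseline correction, and amplitude-based epoch rejection), an equivalent pipeline could be written with the open-source MNE-Python package; the file name, trigger channel, and event codes below are placeholders.

```python
import mne

# Placeholder file name for a recording from the 160-channel KIT system
raw = mne.io.read_raw_kit("gm_meg_block1.con", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)

events = mne.find_events(raw, stim_channel="STI 014")        # assumed channel
event_id = {"speech_standard": 1, "nonspeech_standard": 2}   # assumed codes

# -100 to 500 ms epochs, baseline corrected over the pre-stimulus interval;
# 5336 fT/cm corresponds to 5.336e-10 T/m in MNE's gradiometer units.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.5,
                    baseline=(-0.1, 0.0), reject=dict(grad=5.336e-10),
                    preload=True)
evoked_speech = epochs["speech_standard"].average()
evoked_nonspeech = epochs["nonspeech_standard"].average()
```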

Data were first analyzed at the sensor level by computing the Global Field Power (GFP, Lehmann and Skrandies, 1980). This involved transforming the speech and nonspeech waveforms for each of the 160 sensors to absolute values and then averaging across the 160 channels to obtain a whole head response (cf. Kasai et al., 2005). This procedure avoids bias that may arise from picking a group of channels and complements analyses conducted in source space. Magnetic GFP also strongly corresponds with fitted dipoles in terms of strength and latency, and is considered a good representation of underlying brain activity from the sources (Kasai et al., 2002, 2003).
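
A minimal NumPy version of this whole-head measure as described (rectify each channel, then average across channels at each time point) is given below. Note that some authors define GFP as the standard deviation across channels; the mean-absolute-value form shown here follows the description in the text.

```python
import numpy as np

def global_field_power(data):
    """data: array of shape (n_channels, n_times).
    Returns the whole-head response: the mean absolute amplitude
    across channels at each time point."""
    return np.mean(np.abs(data), axis=0)

# e.g., applied separately to the averaged speech and nonspeech data:
# gfp_speech = global_field_power(speech_evoked)      # shape (n_times,)
# gfp_nonspeech = global_field_power(nonspeech_evoked)
```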

Data were also analyzed in source space using BESA 6.0. For each participant, we first averaged the sensor data across the speech and nonspeech conditions. Two dipoles were initially placed in bilateral Heschl's gyrus (according to the template brain) and then fitted freely (location and orientation), subject to the constraint that their locations remained symmetrical. For most participants, dipoles were fitted and optimized to the 80–110 ms window, corresponding to each child's M50/M100 response. However, in some cases, it was necessary to extend the time window down to 70 ms or up to 160 ms to more accurately account for latency delays in younger children or those with maturing waveforms (cf. Oram Cardy et al., 2004). Separate speech and nonspeech source waveforms were then extracted from the left and right hemisphere dipoles.

Results and Discussion

Figure 1 shows a timeline of GM's magnetic flux map for speech and nonspeech responses. Note that compared to the age-matched typically developing child in Figure 1, GM's response to nonspeech was much earlier and larger than her response to speech.

FIGURE 1

Figure 1. Timeline of brain activity to speech and nonspeech stimuli for GM and an age-matched child. Timeline of magnetic flux activity showing obligatory brain activity (auditory M50/M100) from left hemisphere sensors to speech and nonspeech stimuli. The top two rows show GM's responses to nonspeech and speech stimuli, and the bottom two rows show those of an age-matched typically developing child.

Figures 2, 3 show each participant's sensor waveforms to speech and nonspeech sounds. Again, there was a discrepancy between GM's double-peaked response to nonspeech stimuli and her virtually flat response to speech. In contrast, the other participants showed similar responses to speech and nonspeech stimuli. Note, however, that participants differed widely in both the morphology and the overall magnitude of their waveforms. While this may partly reflect differences in brain activity, it may also depend on each child's position in the MEG helmet and head size.

FIGURE 2

Figure 2. MEG Sensor waveforms for GM and all verbal children with autism spectrum disorder (ASD). Gray lines indicate response to speech and black lines indicate nonspeech response. Each tick on the vertical axis represents 10 femtoTesla.

FIGURE 3

Figure 3. MEG sensor waveforms for typically developing (TD) children. Gray lines indicate response to speech; black lines indicate nonspeech response. Each tick on the vertical axis represents 10 femtoTesla.

To quantify the similarity between each participant's speech response and their own nonspeech responses, we used intra-class correlations (ICCs), which were Fisher-z transformed to improve linearity for parametric statistics (cf. Bishop and McArthur, 2004, 2005). Initially, we included the whole epoch in the ICC calculations (0–500 ms). In addition, we also considered a narrower 65–165 ms window, which incorporated the obligatory M50 and M100 responses (see Yau et al., in press).
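
For illustration, this similarity metric can be computed as in the sketch below, which calculates a two-way consistency ICC (ICC(3,1)) between a participant's speech and nonspeech waveforms over a chosen time window and then applies the Fisher r-to-z transform. The specific ICC variant used in the original analyses is an assumption here.

```python
import numpy as np

def icc_consistency(x, y):
    """Two-way consistency ICC (ICC(3,1)) between two waveforms
    x and y sampled at the same time points."""
    data = np.column_stack([x, y])      # n_times x 2
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)       # per time point
    col_means = data.mean(axis=0)       # per condition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    resid = data - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Fisher z-transform, restricted to (e.g.) the 65-165 ms window:
# idx = (times >= 0.065) & (times <= 0.165)
# z = np.arctanh(icc_consistency(speech_gfp[idx], nonspeech_gfp[idx]))
```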

We compared GM's ICCs to those of children in the TD and ASD comparison groups using SingLims, a statistical program widely used in neuropsychological case studies (Crawford et al., 2010). The SingLims approach assumes the comparison participants to be a representative sample of the population, and uses modified t-tests to estimate the “abnormality or rarity” of a case's scores and the percentile ranking of the case (i.e., the percentage of the control population exhibiting a lower score than the case). Tables 3, 4 show the SingLims test results, and point and interval estimates of effect size and abnormality for GM's scores, compared to the TD and ASD comparison groups respectively. GM's ICCs were significantly lower than both control groups for both the 65–165 ms and 0–500 ms time periods, in each case placing her in the bottom 5% of the population.
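
The modified t-test at the core of this approach (Crawford et al., 2010) compares the case's score to the control mean while treating the control sample statistics as estimates rather than population parameters. The sketch below gives the point estimate only, omitting the interval estimates that SingLims also reports; variable names are illustrative.

```python
import numpy as np
from scipy import stats

def case_control_t(case_score, control_scores):
    """Modified t-test comparing a single case to a control sample.
    Returns t, the one-tailed p-value, and the estimated percentage of
    the control population scoring below the case."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    mean, sd = controls.mean(), controls.std(ddof=1)
    t = (case_score - mean) / (sd * np.sqrt((n + 1) / n))
    p_below = stats.t.cdf(t, df=n - 1)   # estimated proportion of controls below the case
    return t, p_below, 100 * p_below

# e.g., comparing GM's Fisher-z ICC against the TD group's values:
# t, p, pct_below = case_control_t(z_gm, z_td_group)
```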

TABLE 3

Table 3. Outcome of SingLims analysis comparing GM to the typically developing (TD) control group.

TABLE 4

Table 4. Outcome of SingLims analysis comparing GM to the autism spectrum disorders (ASD) control group.

Figure 4 shows the results of the source analysis for GM. It suggests that the striking differences between GM's speech and nonspeech sensor waveforms originate from the left hemisphere. As for the sensor analysis, we calculated Fisher z-transformed ICCs to index the similarity of each child's nonspeech and speech dipole waveforms, for left and right hemisphere sources. As the dipoles used for source extraction were oriented to the M50/M100 response, we only report ICCs for the corresponding 65–165 ms window. SingLims analyses (Tables 3, 4) show that GM had significantly reduced ICCs for the left hemisphere, again placing her in the bottom 5% of the population relative to both control groups. In contrast, her right hemisphere responses were within the normal range.

FIGURE 4

Figure 4. GM's source waveforms for speech and nonspeech stimuli from Experiment 1. Magnetoencephalography (MEG) source waveforms from left and right hemisphere sources approximating auditory cortex from Experiment 1. Gray lines indicate response to speech; black lines indicate nonspeech response. Vertical axis represents amplitude in femtoTesla.

To summarize, GM's MEG recordings were highly atypical. In particular, she showed a striking dissociation between her M50/M100 responses to speech and to nonspeech sounds. This appeared to originate in her left auditory cortex and was not shown by any of the typically developing or autistic children we tested.

It is important to consider the possibility that GM's atypical recordings may be artefactual. Of particular concern is the possibility that GM may have moved more than other participants during the recording session. The KIT MEG system does not currently incorporate online motion tracking. However, during MEG testing, all participants, including GM, were monitored carefully for head-motion, with strict data acquisition and exclusionary criteria applied for motion (see MEG Recording). Moreover, they were lying in a supine position that helps support the head and reduce unwanted movements during recording (Herdman and Cheyne, 2009). Finally, and perhaps most importantly, it is highly unlikely that excessive motion could have given rise to the specific pattern of responses we have reported. We would not expect motion to affect responses to speech and nonspeech differentially or to result in exaggerated hemispheric asymmetries. Nor would we expect artifacts to result in a clearer response to nonspeech than that found in controls.

Experiment 2

Two years after the initial recording session, we had the opportunity to re-test GM as part of a second ongoing study, the aim of which was to validate a lightweight “wireless gaming” EEG system as a research tool for use with typically developing children (Badcock et al., 2015). If the findings from Experiment 1 were a genuine reflection of atypical brain responses, we expected to find similar atypicalities in GM's EEG recordings. Replicating our findings from Experiment 1 would also provide preliminary evidence for the suitability of the gaming EEG system for the assessment of minimally verbal children with ASD.

Participants

At the time of testing for Experiment 2, GM was 10 years and 10 months old. Her auditory brain responses to nonspeech sounds were compared to those of 21 TD children (11 females, 10 males) aged between 6 and 12 years, tested using the same procedures as part of a validation study for the EEG system. The mean age of TD participants was 9.23 years (SD = 1.78). Participants had normal hearing and vision, and no history of developmental disorders or epilepsy.

Stimuli

Stimuli were standard tones (n = 566, 175-ms 1000-Hz pure tones with a 10-ms rise and fall time; 85% of trials) and deviant tones (n = 100, 175-ms 1200-Hz pure tones with a 10-ms rise and fall time; 15% of trials), separated by a jittered SOA of 900–1100 ms. Tones were presented binaurally at a comfortable listening volume through speakers. Participants in the TD group heard 666 tones in a single block. Due to concerns about potential movement artifacts, GM was presented with a second block of 666 trials after a short break.

EEG Recording and Analysis

Participants were seated in a comfortable chair and watched a silent video whilst ignoring the tones. Auditory brain responses were measured using an Emotiv EPOC gaming EEG system that has previously been validated against a research-grade Neuroscan EEG system (Badcock et al., 2013). The sensors in the headset were adjusted on the head until suitable connectivity was achieved, as indicated by the TestBench software, which adds a small modulation to the feed-forward signal and measures the size of the signal returned from each channel. The testing procedure took 10–15 min.

The Emotiv EEG system uses gold-plated contact sensors fixed to flexible plastic arms of a wireless headset. The headset included 16 sites aligned with the 10–20 system: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4, M1, and M2. One mastoid sensor (M1) acted as a ground reference point to which the voltage of all other sensors was compared. The other mastoid (M2) was a feed-forward reference that reduced external electrical interference. The signals from the other 14 scalp sites (channels) were high-pass filtered with a 0.16 Hz cut-off, pre-amplified, and low-pass filtered at an 83 Hz cut-off. The analog signals were then digitized at 2048 Hz. The digitized signal was filtered using a 5th-order sine notch filter (50–60 Hz), low-pass filtered, and down-sampled to 128 Hz. The effective bandwidth was 0.16–43 Hz.

The Emotiv EEG system was modified to send markers to the EEG to indicate the onset of each stimulus (Thie et al., 2012). This was achieved using a custom-made transmitter that converted the onset and offset of each tone into a positive and negative electrical signal. These signals were transmitted into the O1 and O2 channels using an infrared triggering system. The positive and negative spikes in the O1 and O2 recordings were processed offline in MATLAB. A between-channels difference greater than 50 mV was coded as a stimulus onset or offset. Each event marker was placed at a constant time interval (the 20-ms delay of the transmitter module) prior to the point at which the positive and negative signals crossed over. Stimulus markers were then recombined with the EEG data.
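
This offline marker recovery can be sketched roughly as follows; the 50 mV threshold and 20-ms delay follow the description above, but the original MATLAB routine is not reproduced here, so the edge-detection logic and variable names are assumptions.

```python
import numpy as np

def recover_stim_markers(o1, o2, fs, threshold=50.0, transmitter_delay=0.020):
    """Recover stimulus onset samples from trigger spikes injected into the
    O1 and O2 channels. A between-channel difference exceeding `threshold`
    marks a spike; markers are shifted back by the constant transmitter
    delay (20 ms) relative to the detected spike."""
    diff = o1 - o2
    above = np.abs(diff) > threshold
    # rising edges of the supra-threshold region = candidate events
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    delay_samples = int(round(transmitter_delay * fs))
    return edges - delay_samples

# markers = recover_stim_markers(eeg[o1_idx], eeg[o2_idx], fs=128)
```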

The resultant EEG was processed offline using EEGLAB version 11.0.5.4b (Delorme and Makeig, 2004). The EEG in each channel was bandpass filtered from 0.1 to 30 Hz, and then divided into epochs that started 102 ms before the onset of each stimulus and ended 500 ms after stimulus onset. Each epoch was baseline corrected from −102 to 0 ms. Epochs with absolute values greater than 150 μV were rejected.
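
The processing itself was carried out in EEGLAB; as a language-agnostic illustration, the same epoching, baseline-correction, and rejection logic can be written directly over a channels-by-samples array at the 128 Hz sampling rate, with the thresholds taken from the text and everything else assumed.

```python
import numpy as np

FS = 128  # Hz, the down-sampled rate of the Emotiv recording

def epoch_and_reject(eeg, onsets, fs=FS, tmin=-0.102, tmax=0.5, thresh_uv=150.0):
    """eeg: (n_channels, n_samples) band-passed data in microvolts.
    onsets: stimulus onset sample indices. Returns baseline-corrected
    epochs, discarding any epoch exceeding +/-150 microvolts."""
    n_pre, n_post = int(round(-tmin * fs)), int(round(tmax * fs))
    epochs = []
    for s in onsets:
        if s - n_pre < 0 or s + n_post > eeg.shape[1]:
            continue                                      # skip truncated epochs
        ep = eeg[:, s - n_pre:s + n_post].copy()
        ep -= ep[:, :n_pre].mean(axis=1, keepdims=True)   # baseline: -102 to 0 ms
        if np.max(np.abs(ep)) <= thresh_uv:
            epochs.append(ep)
    return np.stack(epochs)                               # (n_epochs, n_channels, n_times)
```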

To maximize the amount of useful data, we collapsed across tone types (standard and deviant). For GM, this resulted in a total of 220 accepted epochs across her two blocks of recordings. For control participants, there were many more acceptable trials (mean = 617, SD = 42 for a single block), but in order to equate GM and the controls for data quality, for each TD participant, we randomly sampled 220 trials (including standards and deviants). For each participant, we then averaged the 220 epochs to create an auditory ERP.
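
A minimal sketch of this trial-matching step follows: for each control participant, 220 accepted epochs (standards and deviants combined) are sampled without replacement and averaged into an ERP. The random seed and any additional sampling constraints are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed

def matched_erp(epochs, n_keep=220):
    """epochs: (n_epochs, n_channels, n_times) array of accepted trials.
    Randomly keep n_keep epochs without replacement and average them."""
    idx = rng.choice(epochs.shape[0], size=n_keep, replace=False)
    return epochs[idx].mean(axis=0)      # (n_channels, n_times)

# erp_td = matched_erp(td_epochs)        # per control participant
# erp_gm = gm_epochs.mean(axis=0)        # GM contributes all 220 accepted epochs
```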

Results and Discussion

Figure 5 shows GM's responses recorded from the two electrodes, F3 (left frontal) and F4 (right frontal), that produced the clearest responses in the TD control participants. Consistent with her atypically large MEG response to nonspeech stimuli in Experiment 1, GM showed a strikingly strong and early response to the tone stimuli, particularly at the left frontal electrode. This response was clearly outside the range of the TD control participants. Thus, GM's unusually large brain response to nonspeech sounds appears to be a stable and replicable characteristic of her cortical processing across a range of nonspeech stimuli.

FIGURE 5

Figure 5. GM's waveforms for nonspeech stimuli from Experiment 2. Event-related potentials (ERPs) to nonspeech sounds measured from frontal electrodes F3 (left) and F4 (right). Black line shows GM's response. Gray region indicates the average response of children with typical development (TD) for ±1.64 SD (considered the “normal” range). Light gray lines show responses of individual TD children.
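
The “normal range” band in Figure 5 is simply the TD group mean ±1.64 SD at each time point (covering roughly 90% of a normal distribution). A plotting sketch, assuming ERPs stored as arrays over a common time axis, is given below.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_case_vs_controls(times_ms, case_erp, control_erps, ax=None):
    """Plot a single case against the control mean +/- 1.64 SD band.
    control_erps: (n_controls, n_times) array for one electrode."""
    ax = ax or plt.gca()
    mean = control_erps.mean(axis=0)
    sd = control_erps.std(axis=0, ddof=1)
    ax.fill_between(times_ms, mean - 1.64 * sd, mean + 1.64 * sd,
                    color="0.85", label="TD mean ± 1.64 SD")
    ax.plot(times_ms, control_erps.T, color="0.6", lw=0.5)  # individual TD children
    ax.plot(times_ms, case_erp, color="k", lw=1.5, label="GM")
    ax.set_xlabel("Time (ms)")
    ax.set_ylabel("Amplitude (µV)")
    ax.legend()
    return ax

# plot_case_vs_controls(times_ms, gm_erp_f3, td_erps_f3)
```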

General Discussion

Minimally verbal individuals represent a significant proportion of the autistic population and yet are typically excluded from research on cognition and brain function. In the current study, we used MEG and EEG to measure the brain responses of a minimally verbal child with ASD to auditory stimuli. The initial MEG study in Experiment 1 revealed a striking dissociation between her auditory sensory encoding of speech and nonspeech sounds. Specifically, GM had relatively strong and early responses to nonspeech, but unusually weak responses to speech sounds. MEG source analysis suggested that these differences arose in her left hemisphere. We were able to demonstrate statistically that this discrepancy between speech and nonspeech stimuli was highly unusual. Whether compared to typically developing children or to verbal children with ASD, GM's response similarity for speech and nonspeech fell into the bottom 5% of the population.

In Experiment 2, we replicated the finding that GM shows an unusually strong response to nonspeech stimuli. This was observed despite the fact that she was tested 2 years after Experiment 1 using a different neurophysiological technique (EEG rather than MEG), in a different environment, using different stimuli (pure tones rather than complex tones), and with a different control sample. This successful replication indicates that GM's atypical responses to nonspeech sounds are genuine and not merely a consequence of methodological artifacts.

GM's atypical responses to nonspeech sounds in both experiments might be considered a neural correlate of atypical auditory processing that is widely reported amongst individuals with ASD (Boddaert et al., 2004; Gervais et al., 2004). Autobiographical accounts of individuals with ASD often include descriptions of atypical sensory experiences, particularly in relation to sounds (Grandin and Scariano, 1986; Bettison, 1996; Reynolds and Lane, 2008; Ben-Sasson et al., 2009). These accounts are supported by parental reports, clinical observations, and enhanced performance on certain psychoacoustic tests (Bonnel et al., 2003, 2010; Tomchek and Dunn, 2007; Heaton et al., 2008; Jones et al., 2009). Surprisingly, then, GM appears to show little evidence of hyper-responsiveness to auditory stimuli in everyday life, as documented by her mother's responses on the Short Sensory Profile. Given that GM is nonverbal, we were unable to obtain a self-report of her sensory experiences. Thus, it remains an open question what the subjective experience of her atypical cortical responses might be.

Clearly, the other intriguing aspect of GM's data is her attenuated response to speech stimuli in the MEG experiment. One interpretation is that GM's brain “switches off” to speech stimuli. This would be consistent with theories of a social deficit or an impairment in social motivation and cognition in ASD (Dawson et al., 1998, 2004; Klin, 2003; Chevallier et al., 2012) and with previous ERP studies suggesting that children with ASD differ in their attentional orienting to speech and nonspeech sounds, particularly when they are not explicitly required to attend to the sounds (Ceponienë et al., 2003; Lepistö et al., 2005, 2006; Whitehouse and Bishop, 2008). However, previous studies have focused on the later mismatch negativity and P3 components of the auditory ERP, whereas the striking differences between speech and nonspeech in GM's brain responses were apparent much earlier in the waveform, during the “obligatory” M50/M100 components. This suggests that GM's differential response to speech and nonspeech sounds reflects a bottom-up difference in her brain's sensitivity to the acoustic properties of the two stimuli.

The major difference between the speech and nonspeech stimuli is the presence of the fundamental frequency (F0) in the speech stimulus. The fundamental frequency gives a sound its “speechiness” and provides pitch cues for conveying linguistic and emotional prosody as well as information about speaker identity (see McCann and Peppe, 2003; Peppé et al., 2007 for review). Perhaps most importantly, the fundamental frequency also provides a vital cue for segregating speech from background noise in natural listening environments (e.g., Bronkhorst, 2000). Thus, a neural impairment affecting the processing of the fundamental frequency might be expected to have profound implications for the development of speech perception.

It is important to note that GM also has a diagnosis of cerebral palsy, which sets her apart from other minimally-verbal autistic children. The nature of the relationship between ASD and cerebral palsy is unclear and difficult to tease apart (Zwaigenbaum, 2014). Although the incidence of ASD is considerably higher amongst individuals with cerebral palsy (approximately 6%; Christensen et al., 2014) than it is in the general population, the majority of individuals with cerebral palsy do not meet ASD criteria. Likewise, speech and language abilities are affected in the majority of individuals with cerebral palsy, but the complete absence of speech is relatively rare (Odding et al., 2006).

Concluding Remarks

The current case report represents a starting point for investigating the potential causes of severe language impairment that affect many individuals on the autism spectrum. However, GM is obviously an unusual case and, at this stage, it is unclear whether or not her atypical brain responses might generalize either to other minimally verbal children with ASD or to those with cerebral palsy. Nonetheless, the current study stands as an important proof of concept, demonstrating that it is possible in practice to measure brain responses to different auditory stimuli, using both MEG and EEG, from minimally verbal children with ASD. Future studies can take advantage of the complementary strengths of these two techniques and begin to answer vital questions pertaining to cognition and brain function within this much-neglected subgroup of the ASD population.

Author Contributions

SY and JB contributed to the conception, design, acquisition of data, analysis, interpretation, drafting and revising the manuscript. GM contributed to data interpretation, drafting and revising the manuscript. NB contributed to the conception, design, acquisition of EEG data, EEG analysis, interpretation, and drafting the manuscript. All authors read and approved the final manuscript and have given final approval of the version to be published.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This study was funded by the Macquarie University Research Excellence Scholarship (MQRES), Australian Research Council grants (DP098466, ARC 1236500) and an ARC Centre of Excellence Grant (CE110001021). We would also like to thank Dr Ivan Yuen for helping to create the deviant vowel sounds used in Experiment 1. Lastly, we are very grateful to GM and her family, and all the children and families who took part in this study.

References

American Psychiatric Association. (1994). Diagnostic and Statistical Manual of Mental Disorders (DSM). Washington, DC: American Psychiatric Association.

American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders, (DSM-5®). Washington, DC: American Psychiatric Association.

Badcock, N. A., Mousikou, P., Mahajan, Y., de Lissa, P., Thie, J., and McArthur, G. (2013). Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs. PeerJ 1:e38. doi: 10.7717/peerj.38

Badcock, N. A., Preece, K. A., de Wit, B., Glenn, K., Fieder, N., Thie, J., et al. (2015). Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children. PeerJ 3:e907. doi: 10.7717/peerj.907

Ben-Sasson, A., Hen, L., Fluss, R., Cermak, S. A., Engel-Yeger, B., and Gal, E. (2009). A meta-analysis of sensory modulation symptoms in individuals with autism spectrum disorders. J. Autism Dev. Disord. 39, 1–11. doi: 10.1007/s10803-008-0593-3

Bettison, S. (1996). The long-term effects of auditory training on children with autism. J. Autism Dev. Disord. 26, 361–374. doi: 10.1007/BF02172480

Bishop, D. (2007). Using mismatch negativity to study central auditory processing in developmental language and literacy impairments: where are we, and where should we be going? Psychol. Bull. 133:651. doi: 10.1037/0033-2909.133.4.651

Bishop, D. V. (2003). Test for Reception of Grammar: TROG-2 version 2. London: Pearson Assessment.

Bishop, D. V., and McArthur, G. (2004). Immature cortical responses to auditory stimuli in specific language impairment: evidence from ERPs to rapid tone sequences. Dev. Sci. 7, 11–18. doi: 10.1111/j.1467-7687.2004.00356.x

Bishop, D. V., and McArthur, G. M. (2005). Individual differences in auditory processing in specific language impairment: a follow-up study using event-related potentials and behavioural thresholds. Cortex 41, 327–341. doi: 10.1016/S0010-9452(08)70270-3

Boddaert, N., Chabane, N., Belin, P., Bourgeois, M., Royer, V., Barthelemy, C., et al. (2004). Perception of complex sounds in autism: abnormal auditory cortical processing in children. Am. J. Psychiatry 161, 2117–2120. doi: 10.1176/appi.ajp.161.11.2117

Bonnel, A., McAdams, S., Smith, B., Berthiaume, C., Bertone, A., Ciocca, V., et al. (2010). Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome. Neuropsychologia 48, 2465–2475. doi: 10.1016/j.neuropsychologia.2010.04.020

Bonnel, A., Mottron, L., Peretz, I., Trudel, M., Gallun, E., and Bonnel, A. M. (2003). Enhanced pitch sensitivity in individuals with autism: a signal detection analysis. J. Cogn. Neurosci. 15, 226–235. doi: 10.1162/089892903321208169

Brock, J., Bzishvili, S., Reid, M., Hautus, M., and Johnson, B. W. (2013). Atypical neuromagnetic responses to illusory auditory pitch in children with autism spectrum disorder. J. Autism Dev. Disord. 47, 2726–2731. doi: 10.1007/s10803-013-1805-z

Bronkhorst, A. W. (2000). The cocktail party phenomenon: a review of research on speech intelligibility in multiple-talker conditions. Acta Acust. United Ac. 86, 117–128.

Ceponienë, R., Lepistö, T., Shestakova, A., Vanhala, R., Alku, P., Näätänen, R., et al. (2003). Speech–sound-selective auditory impairment in children with autism: they can perceive but do not attend. Proc. Natl. Acad. Sci. U.S.A. 100, 5567–5572. doi: 10.1073/pnas.0835631100

Chevallier, C., Kohls, G., Troiani, V., Brodkin, E. S., and Schultz, R. T. (2012). The social motivation theory of autism. Trends Cogn. Sci. 16, 231–239. doi: 10.1016/j.tics.2012.02.007

Christensen, D., Van Naarden Braun, K., Doernberg, N. S., Maenner, M. J., Arneson, C. L., Durkin, M. S., et al. (2014). Prevalence of cerebral palsy, co-occurring autism spectrum disorders, and motor functioning–Autism and Developmental Disabilities Monitoring Network, USA, 2008. Dev. Med. Child Neurol. 56, 59–65. doi: 10.1111/dmcn.12268

Coleman, M. (2000). The Biology of the Autistic Syndromes. London: Cambridge University Press.

Crawford, J. R., Garthwaite, P. H., and Porter, S. (2010). Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards. Cogn. Neuropsychol. 27, 245–260. doi: 10.1080/02643294.2010.513967

Dawson, G., Meltzoff, A., Osterling, J., Rinaldi, J., and Brown, E. (1998). Children with autism fail to orient to naturally occurring social stimuli. J. Autism Dev. Disord. 28, 479–485. doi: 10.1023/A:1026043926488

Dawson, G., Toth, K., Abbott, R., Osterling, J., Munson, J., Estes, A., et al. (2004). Early social attention impairments in autism: social orienting, joint attention, and attention to distress. Dev. Psychol. 40, 271. doi: 10.1037/0012-1649.40.2.271

Delorme, A., and Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Meth. 134, 9–21. doi: 10.1016/j.jneumeth.2003.10.009

Dunn, L. M., and Dunn, D. M. (2007). Peabody Picture Vocabulary Test (PPVT-4). Minneapolis, MN: Pearson Assessments.

Ferri, R., Elia, M., Agarwal, N., Lanuzza, B., Musumeci, S. A., and Pennisi, G. (2003). The mismatch negativity and the P3a components of the auditory event-related potentials in autistic low-functioning subjects. Clin. Neurophysiol. 114, 1671–1680. doi: 10.1016/S1388-2457(03)00153-6

Gervais, H., Belin, P., Boddaert, N., Leboyer, M., Coez, A., Sfaello, I., et al. (2004). Abnormal cortical voice processing in autism. Nat. Neurosci. 7, 801–802. doi: 10.1038/nn1291

Grandin, T., and Scariano, M. (1986). Emergence: Labelled Autistic. Novato, CA: Arena.

Hämäläinen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., and Lounasmaa, O. V. (1993). Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 65:413. doi: 10.1103/revmodphys.65.413

Hari, R., Parkkonen, L., and Nangini, C. (2010). The brain in time: insights from neuromagnetic recordings. Ann. N.Y. Acad. Sci. 1191, 89–109. doi: 10.1111/j.1749-6632.2010.05438.x

Heaton, P., Hudry, K., Ludlow, A., and Hill, E. (2008). Superior discrimination of speech pitch and its relationship to verbal ability in autism spectrum disorders. Cogn. Neuropsychol. 25, 771–782. doi: 10.1080/02643290802336277

Herdman, A. T., and Cheyne, D. (2009). A Practical Guide to MEG and Beamforming. Brain Signal Analysis: Advances in Neuroelectric and Neuromagnetic Methods. Cambridge: MIT Press.

Jones, C. R., Happé, F., Baird, G., Simonoff, E., Marsden, A. J., Tregay, J., et al. (2009). Auditory discrimination and auditory sensory behaviours in autism spectrum disorders. Neuropsychologia 47, 2850–2858. doi: 10.1016/j.neuropsychologia.2009.06.015

Kado, H., Higuchi, M., Shimogawara, M., Haruta, Y., Adachi, Y., Kawai, J., et al. (1999). Magnetoencephalogram systems developed at KIT. Appl. Supercond. 9, 4057–4062. doi: 10.1109/77.783918

Kasai, K., Hashimoto, O., Kawakubo, Y., Yumoto, M., Kamio, S., Itoh, K., et al. (2005). Delayed automatic detection of change in speech sounds in adults with autism: a magnetoencephalographic study. Clin. Neurophysiol. 116, 1655–1664. doi: 10.1016/j.clinph.2005.03.007

Kasai, K., Yamada, H., Kamio, S., Nakagome, K., Iwanami, A., Fukuda, M., et al. (2002). Do high or low doses of anxiolytics and hypnotics affect mismatch negativity in schizophrenic subjects? An EEG and MEG study. Clin. Neurophysiol. 113, 141–150. doi: 10.1016/S1388-2457(01)00710-6

Kasai, K., Yamada, H., Kamio, S., Nakagome, K., Iwanami, A., Fukuda, M., et al. (2003). Neuromagnetic correlates of impaired automatic categorical perception of speech sounds in schizophrenia. Schizophr. Res. 59, 159–172. doi: 10.1016/S0920-9964(01)00382-6

Kasari, C., Brady, N., Lord, C., and Tager-Flusberg, H. (2013). Assessing the minimally verbal school-aged child with autism spectrum disorder. Autism Res. 6, 479–493. doi: 10.1002/aur.1334

Klin, A. (2003). The enactive mind, or from actions to cognition: lessons from autism. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 358, 345–360. doi: 10.1098/rstb.2002.1202

Kurtzberg, D., Vaughan, H. G. Jr., Kreuzer, J. A., and Fliegler, K. Z. (1995). Developmental studies and clinical application of mismatch negativity: problems and prospects. Ear Hear. 16, 105–117. doi: 10.1097/00003446-199502000-00008

Lane, A. E., Dennis, S. J., and Geraghty, M. E. (2011). Brief report: further evidence of sensory subtypes in autism. J. Autism Dev. Disord. 41, 826–831. doi: 10.1007/s10803-010-1103-y

Lehmann, D., and Skrandies, W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalogr. Clin. Neurophysiol. 48, 609–621. doi: 10.1016/0013-4694(80)90419-8

Lepistö, T., Kujala, T., Vanhala, R., Alku, P., Huotilainen, M., and Näätänen, R. (2005). The discrimination of and orienting to speech and non-speech sounds in children with autism. Brain Res. 1066, 147–157. doi: 10.1016/j.brainres.2005.10.052

Lepistö, T., Silokallio, S., Nieminen-von Wendt, T., Alku, P., Näätänen, R., and Kujala, T. (2006). Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome. Clin. Neurophysiol. 117, 2161–2171. doi: 10.1016/j.clinph.2006.06.709

Lord, C., Rutter, M., DiLavore, P., and Risi, S. (2002). Autism Diagnostic Observation Schedule: ADOS: Manual. Los Angeles, CA: Western Psychological Services.

Luck, S. J. (2005). An Introduction to the Event-Related Potential Technique. Cambridge, MA: MIT Press.

Mahajan, Y., and McArthur, G. (2012). Maturation of auditory event-related potentials across adolescence. Hear. Res. 294, 82–94. doi: 10.1016/j.heares.2012.10.005

McArthur, G., Atkinson, C., and Ellis, D. (2009). Atypical brain responses to sounds in children with specific language and reading impairments. Dev. Sci. 12, 768–783. doi: 10.1111/j.1467-7687.2008.00804.x

McArthur, G., Bishop, D., and Proudfoot, M. (2003). Do video sounds interfere with auditory event-related potentials? Behav. Res. Meth. Instrum. Comput. 35, 329–333. doi: 10.3758/BF03202561

McArthur, G., and Bishop, D. V. (2005). Speech and non-speech processing in people with specific language impairment: a behavioural and electrophysiological study. Brain Lang. 94, 260–273. doi: 10.1016/j.bandl.2005.01.002

McCann, J., and Peppe, S. (2003). Prosody in autism spectrum disorders: a critical review. Int. J. Lang. Comm. Dis. 38, 325–350. doi: 10.1080/1368282031000154204

McIntosh, D., Miller, L., Shyu, V., and Dunn, W. (1999). “Overview of the short sensory profile (SSP),” in The Sensory Profile: Examiner's Manual, ed W. Dunn (San Antonio, TX: Psychological Corporation), 59–73.

Mody, M., and Belliveau, J. W. (2013). Speech and language impairments in autism: insights from behavior and neuroimaging. North Am. J. Med. Sci. 5, 157. doi: 10.7156/v5i3p157

Odding, E., Roebroeck, M. E., and Stam, H. J. (2006). The epidemiology of cerebral palsy: incidence, impairments and risk factors. Disabil. Rehabil. 28, 183–191. doi: 10.1080/09638280500158422

Oram Cardy, J. E., Ferrari, P., Flagg, E. J., Roberts, W., and Roberts, T. P. L. (2004). Prominence of M50 auditory evoked response over M100 in childhood and autism. Neuroreport 15, 1867–1870. doi: 10.1097/00001756-200408260-00006

Peppé, S., McCann, J., Gibbon, F., O'Hare, A., and Rutherford, M. (2007). Receptive and expressive prosodic ability in children with high-functioning autism. J. Speech Lang. Hear R. 50, 1015–1028. doi: 10.1044/1092-4388(2007/071)

Reynolds, S., and Lane, S. J. (2008). Diagnostic validity of sensory over-responsivity: a review of the literature and case reports. J. Autism Dev. Disord. 38, 516–529. doi: 10.1007/s10803-007-0418-9

Roberts, T. P. L., Schmidt, G. L., Egeth, M., Blaskey, L., Rey, M. M., Edgar, J. C., et al. (2008). Electrophysiological signatures: magnetoencephalographic studies of the neural correlates of language impairment in autism spectrum disorders. Int. J. Psychophysiol. 68, 149–160. doi: 10.1016/j.ijpsycho.2008.01.012

Rutter, M., Bailey, A., and Lord, C. (2003). The Social Communication Questionnaire: Manual. Los Angeles, CA: Western Psychological Services.

Semel, E. M., Wiig, E. H., and Secord, W. (1987). CELF-R: Clinical Evaluation of Language Fundamentals–Revised. San Antonio, TX: Psychological Corporation.

Seri, S., Cerquiglini, A., Pisani, F., and Curatolo, P. (1999). Autism in tuberous sclerosis: evoked potential evidence for a deficit in auditory sensory processing. Clin. Neurophysiol. 110, 1825–1830. doi: 10.1016/S1388-2457(99)00137-6

Tager-Flusberg, H., and Kasari, C. (2013). Minimally verbal school-aged children with autism spectrum disorder: the neglected end of the spectrum. Autism Res. 6, 468–478. doi: 10.1002/aur.1329

Tecchio, F., Benassi, F., Zappasodi, F., Gialloreti, L. E., Palermo, M., Seri, S., et al. (2003). Auditory sensory processing in autism: a magnetoencephalographic study. Biol. Psychiatry 54, 647–654. doi: 10.1016/S0006-3223(03)00295-6

Thie, J., Klistorner, A., and Graham, S. (2012). Biomedical signal acquisition with streaming wireless communication for recording evoked potentials. Doc. Ophthalmol. 125, 149–159. doi: 10.1007/s10633-012-9345-y

Tomchek, S. D., and Dunn, W. (2007). Sensory processing in children with and without autism: a comparative study using the short sensory profile. Am. J. Occup. Ther. 61, 190–200. doi: 10.5014/ajot.61.2.190

Uehara, G., Adachi, Y., Kawai, J., Shimogawara, M., Higuchi, M., Haruta, Y., et al. (2003). Multi-channel SQUID systems for biomagnetic measurement. IEICE Trans. Electron 86, 43–54.

Uwer, R., and von Suchodoletz, W. (2000). Stability of mismatch negativities in children. Clin. Neurophysiol. 111, 45–52. doi: 10.1016/S1388-2457(99)00204-7

Wechsler, D. (2003). Wechsler Intelligence Scale for Children-WISC-IV. Psychological Corporation.

Whitehouse, A. J., and Bishop, D. V. (2008). Do children with autism ‘switch off’to speech sounds? An investigation using event-related potentials. Dev. Sci. 11, 516–524. doi: 10.1111/j.1467-7687.2008.00697.x

World Health Organisation. (1992). International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10). Geneva: WHO.

Yau, S. H., Brock, J., and McArthur, G. (in press). The relationship between spoken language and speech and nonspeech processing in children with autism: an event-related field study. Dev. Sci.

Zwaigenbaum, L. (2014). The intriguing relationship between cerebral palsy and autism. Dev. Med. Child Neurol. 56, 7–8. doi: 10.1111/dmcn.12274

Keywords: autism, autism spectrum disorder, language impairment, magnetoencephalography, event-related potentials, auditory processing, cerebral palsy, Autistic disorder

Citation: Yau SH, McArthur G, Badcock NA and Brock J (2015) Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy. Front. Neurosci. 9:208. doi: 10.3389/fnins.2015.00208

Received: 05 December 2014; Accepted: 24 May 2015;
Published: 19 June 2015.

Edited by:

Cyndi Schumann, UC Davis MIND Institute, USA

Reviewed by:

Catherine Y. Wan, Beth Israel Deaconess Medical Center, USA
Atlal Mohammad El-Assaad, American University of Beirut, Lebanon

Copyright © 2015 Yau, McArthur, Badcock and Brock. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Shu H. Yau, Aston Brain Centre, Aston University, Aston Triangle, Birmingham B4 7ET, UK, s.yau@aston.ac.uk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.