ORIGINAL RESEARCH article

Front. Psychol., 12 May 2011
Sec. Cognitive Science

Can Changes in Eye Movement Scanning Alter the Age-Related Deficit in Recognition Memory?

  • 1 Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
  • 2 Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
  • 3 Department of Psychology, University of Toronto, Toronto, ON, Canada
  • 4 Department of Psychiatry, University of Toronto, Toronto, ON, Canada

Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults’ recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces under free viewing conditions (bases), through a gaze-contingent moving window (own), or through a moving window that replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on the age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities.

Introduction

Age-related impairments in face recognition have long been reported in the literature (e.g., Ferris et al., 1980; Fulton and Bartlett, 1991; Firestone et al., 2007). Compared to younger adults, older adults are less accurate at recognizing younger faces, but show relatively preserved recognition accuracy for older faces, an effect known as the own-age recognition bias (e.g., Anastasi and Rhodes, 2005, 2006; Firestone et al., 2007). Additionally, older adults, relative to younger adults, make more eye movement transitions between face features when viewing faces, and this effect is more pronounced for younger versus older faces (Firestone et al., 2007). These findings suggest that age-related differences in recognition memory may be a result of underlying age-related differences in viewing behavior, such as might be measured by transition frequency.

Previous work has suggested that recognition memory may be intimately linked to our eye movements (e.g., Walker-Smith et al., 1977; Althoff and Cohen, 1999; Stacey et al., 2005). In particular, the scanpath theory, as described by Noton and Stark (1971), posits that recognition benefits from the extent to which the pattern of eye movements enacted during learning is recapitulated during the test phase. Indeed, when participants are restricted from making eye movements, and instead must maintain central fixation, recognition memory is impaired (Henderson et al., 2005). Given this link between recognition and eye movements, it is possible that recognition memory may be improved if older adults can be made to view faces in the same manner as younger adults. Specifically, for older adults, recognition memory may improve for younger faces, given that older adults, as noted above, show relatively preserved memory for older faces. Similarly, if younger adults can be made to view faces in the same manner as older adults, recognition memory may become impaired, particularly for younger faces.

In the present study, we employed a novel eyetracking method to explore whether altering the way in which older and younger adults view faces could affect subsequent recognition for those faces. In this method, the eye movements of a participant (i.e., base participant) are recorded as he/she freely views a face. The eye movements of this base participant are then replayed to another participant (i.e., yoked participant) as a restricted moving window through which a portion of the face can be seen, while the display outside of the moving window is black. As a result, yoked participants view the face as if they were viewing it through another person’s eye movements. Comparing recognition between participants in the yoked and base conditions could address whether manipulating eye movements can affect recognition. However, any differences found between the yoked and base conditions could be attributed to the absence of stimulus information available in the periphery when viewing is restricted through a moving window. As such, a third condition was employed in which participants (i.e., own participants) viewed faces under gaze-contingent conditions (e.g., Reingold and Loschky, 2002; Maw and Pomplun, 2004): viewing is restricted to a moving window, similar to the yoked condition; however, these participants control the moving window with their own eye movements. This would enable us to contrast recognition for participants in free viewing (base) versus restricted viewing (own) conditions to determine the influence of peripheral information on subsequent recognition. Recognition could then be compared between the yoked and own conditions to investigate whether altering eye movements can enhance recognition and/or whether having control over one’s eye movements influences subsequent recognition. Therefore, older and younger adults studied faces under one of three viewing conditions: free viewing (bases); viewing through a gaze-contingent moving window (own); and viewing through a moving window that replayed the pattern of eye movements from either a younger or older base participant (yoked). All participants were given a subsequent recognition test in which faces were viewed without restriction.

This unique paradigm will enable us to determine whether having older adults view faces in the same manner as younger adults can increase recognition for faces and, conversely, whether having younger adults view faces in the same manner as older adults can decrease recognition. In addition, by having participants view both older and younger faces, we can, first, replicate the own-age bias in face recognition for older adults (e.g., Anastasi and Rhodes, 2005; Firestone et al., 2007), and second, determine whether this own-age bias is disrupted when older participants are yoked to younger base participants, and/or whether an “own-age” bias (e.g., better memory for older faces) emerges when younger participants are yoked to older base participants.

However, altering the way in which someone views a face may disrupt their ability to subsequently recognize the face. There is evidence of individual differences in eye movement patterns, suggesting that idiosyncratic scanning strategies may be adopted during encoding (Castelhano and Henderson, 2008; Fletcher et al., 2008; Foulsham and Underwood, 2008; Gareze et al., 2008; Tatler et al., 2010). These idiosyncratic patterns may be critical for subsequent recognition for that individual, and those benefits to recognition may not be transferable to another viewer. Also, having active control of one’s eye movements may be required for accurate subsequent recognition. When a participant is yoked to another’s eye movements, the moving window is under passive control, and therefore, the participant loses some amount of active control over the sequence and location of the eye movements. Although there is evidence for a general advantage in memory when engaging in active versus passive control of spatial exploration (see Péruch and Wilson, 2004 for review), it has not yet been demonstrated whether active control over one’s eye movements is critical for subsequent recognition. The current study will compare the effect of active (own) versus passive (yoked) control of eye movements on face recognition, thereby contributing to the literature regarding whether memory advantages occur for all forms of active control.

Altogether, the current study will use a unique eyetracking paradigm to determine the extent to which manipulations to eye movements can alter face recognition memory. In particular, the current study examines whether manipulations to eye movements can improve memory for older adults, and whether the own-age recognition bias that is typically observed for older adults can be eliminated. Finally, this work will determine whether active versus passive control over the sequence and location of eye movements is critical for subsequent memory.

Materials and Methods

Participants

Twelve younger and 12 older adults participated in the bases condition in which faces were freely viewed. In the yoked condition, 24 younger and 24 older participants were yoked to the eye movements of the younger bases, and 24 younger and 24 older participants were yoked to the eye movements of the older bases; therefore, two younger and two older participants were yoked to each base participant. Finally, 24 younger and 24 older adults participated in the own condition in which faces were studied under a gaze-contingent procedure (see Table 1 for demographic information of participants in the three viewing conditions). All participants were recruited from the Rotman Research Institute subject pool, and had normal or corrected-to-normal vision. All participants provided informed written consent and received monetary compensation. Approval for this study was obtained from the Baycrest Ethics Review Board.

Table 1. Participant age ranges, mean (SE) age, years of education, and ERVT scores for younger and older participants across the different viewing conditions.

Demographics Across Conditions (Bases, Yoked, Own)

To examine any differences in demographic characteristics between younger and older participants across the different viewing conditions, analyses of variance (ANOVAs) were conducted with the between-subject factors of age group (younger, older) and viewing condition (bases, yoked, own) for the measures of participant age, years of education, and Extended Range Vocabulary Test (ERVT) scores. For brevity, only significant results will be reported here (see Table 1 for relevant mean values and standard errors). As expected, there was a significant main effect of age group for participant age [F(1,162) = 2127.35, p < 0.001]. Moreover, as previously found (e.g., Rahhal et al., 2002), older adults had significantly higher ERVT scores (M = 27.7, SE = 1.30) than younger adults [M = 16.3, SE = 1.30; F(1,162) = 38.74, p < 0.001].

Demographics within the Yoked Condition

Analyses of variance were conducted on participant demographics solely within the yoked condition using the between-subject factors of yoked age group (younger, older) and base age (younger, older). For brevity, only significant results will be reported here (see Table 2 for relevant mean values and standard errors). Older yoked participants had a higher mean age than younger yoked participants [M = 71.0, SE = 0.82 and M = 23.0, SE = 0.82, respectively; F(1,92) = 1717.35, p < 0.001]. Moreover, participants yoked to younger bases were significantly older than participants yoked to older bases [M = 48.2, SE = 0.82 and M = 45.9, SE = 0.82, respectively; F(1,92) = 4.06, p = 0.05]. A marginally significant interaction between yoked age group and base age [F(1,92) = 3.64, p = 0.06] revealed that older adults yoked to younger bases were older than their older counterparts who were yoked to older bases [t(92) = 2.77, p = 0.01], whereas the age of younger participants did not significantly differ between the base age groups [t(92) = 0.08, p = 0.94].

Table 2. Mean (SE) participant age and age ranges for younger and older participants yoked to younger and older base participants.

Older participants had a higher mean ERVT score (M = 28.1, SE = 1.35) than younger participants [M = 14.4, SE = 1.35; F(1,92) = 51.11, p < 0.001]. In addition, participants yoked to younger bases had marginally lower ERVT scores (M = 19.6, SE = 1.35) than participants yoked to older bases [M = 23.0, SE = 1.35; F(1,92) = 3.13, p = 0.08].

Apparatus

Stimuli were presented on a 19-inch Dell M991 monitor (1024 × 768 pixels, corresponding to a visual angle of approximately 32.3° × 25.4°) from a distance of 24 inches. An SR Research Ltd. EyeLink II system collected eye movement data with a temporal resolution of 2 ms. Eye tracking accuracy was maintained through calibration: if the error at any calibration point was greater than 1°, or if the mean error across all nine calibration points was greater than 0.5°, the calibration was repeated.
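As a concrete illustration, the calibration acceptance rule just described can be written as a small check. This is a hypothetical sketch, not the authors' experimental code; the function name and the representation of per-point errors (in degrees of visual angle) are assumptions.

```python
# Hypothetical sketch of the calibration acceptance rule described above
# (not the authors' code); errors are assumed to be in degrees of visual angle.

def calibration_acceptable(point_errors_deg, max_point_error=1.0, max_mean_error=0.5):
    """Accept a nine-point calibration only if no point exceeds 1 degree of error
    and the mean error across all points does not exceed 0.5 degrees."""
    worst = max(point_errors_deg)
    mean = sum(point_errors_deg) / len(point_errors_deg)
    return worst <= max_point_error and mean <= max_mean_error

# Example: one point at 1.2 degrees fails the first criterion, so recalibrate.
print(calibration_acceptable([0.3, 0.4, 0.2, 1.2, 0.3, 0.4, 0.3, 0.2, 0.3]))  # False
```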

Stimuli and Design

The stimuli were the same as those used in Firestone et al. (2007), and consisted of 48 female and 48 male non-famous faces (480 × 480 pixels, corresponding to a visual angle of approximately 16.5° × 16.5°) that were placed against a uniform black background, such that only the face and hair were visible. Half of the female and half of the male faces were younger faces (under the age of 35) and the remaining halves were older faces (over the age of 55) as judged by two independent raters.

Participants viewed 24 faces (six younger female, six younger male, six older female, six older male) in one study block. In a subsequent recognition test, participants were shown 48 faces, 24 of which were previously studied and 24 were novel. The order of the faces within the study and test blocks was randomized, and faces were viewed equally often as studied/novel across participants.

Participants in the base condition freely viewed the faces in the study block. Participants in the yoked and own conditions viewed the faces during the study block through a moving window (160 × 160 pixels, corresponding to a visual angle of approximately 5.7° × 5.7°); the display outside the moving window was black. For participants in the yoked condition, the path of the moving window was not under active control but “replayed” the eye movements of one of the base participants. For participants in the own condition, the moving window was gaze-contingent and therefore under active control (Figure 1). The moving window was centered with respect to viewer fixation. In the recognition test block, all participants viewed the test faces with no viewing constraints.

Figure 1. Display sequence in the study block. Base participants freely viewed the face (fixations are represented by the gray circles); yoked participants were restricted in viewing to a moving window which replayed the eye movements of a base participant; participants in the own condition were also restricted in viewing to a moving window but controlled the path of the window with their own eye movements.
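To make the moving-window logic of the yoked and own conditions concrete, the following is a minimal sketch of how a single display frame might be masked. It is not the authors' presentation code: the function, the image representation, and the source of the gaze coordinate (the viewer's own fixation in the own condition, or a replayed base-participant coordinate in the yoked condition) are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): everything outside a 160 x 160 pixel
# window (~5.7 degrees) centered on the current gaze coordinate is rendered black.
WINDOW = 160  # pixels

def moving_window_frame(face, gaze_xy):
    """Return a copy of `face` with only the window around `gaze_xy` visible.
    `gaze_xy` is (x, y): the viewer's own fixation (own condition) or the
    replayed fixation of a base participant (yoked condition)."""
    frame = np.zeros_like(face)                 # black everywhere
    x, y = gaze_xy
    half = WINDOW // 2
    top, bottom = max(0, y - half), min(face.shape[0], y + half)
    left, right = max(0, x - half), min(face.shape[1], x + half)
    frame[top:bottom, left:right] = face[top:bottom, left:right]
    return frame
```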

Procedure

Participants completed the ERVT prior to the start of the experiment. During the study block, 24 faces were presented singly at the center of the screen for 7 s each, in random order. To ensure that all participants were engaged in studying the faces, participants were instructed to judge the face’s age on a scale from 1 to 5 (1 = 21–30, 2 = 31–40, 3 = 41–50, 4 = 51–60, 5 = 61–70) and the quality of the photo on a scale from 1 to 5 (1 = poor, 3 = medium, 5 = good). Judgments were made following removal of the face from the screen. Participants in the yoked condition were told that their view of the face would be restricted to a moving window that would not be under their control, and were instructed to follow this moving window as closely as possible. Participants in the own condition were informed that their view of the face would be restricted to a window that would move as they moved their eyes. A short break was given between the study and test block.

During the recognition test block, all participants were shown 48 faces singly in the center of the screen up to 7 s each, in random order, with no viewing constraints. Participants were instructed to make an old/new recognition judgment via a button press as quickly and as accurately as possible. Each face remained on the screen until the participant made a button press response.

Analyses

A repeated-measures ANOVA using the between-subject factor of age group (younger, older) and the within-subject factor of face age (younger, older) was conducted on the judgments of face age and picture quality to ensure encoding of the faces during the study block. The same analysis was conducted, with the additional between-subject factor of viewing condition (bases, yoked, own), on the total number of fixations made to the face, and the total number of transitions made between facial features during the study block for participants in all viewing conditions. A transition occurred when successive fixations were not within the same region of the face; that is, when an eye movement was made from one face feature to another. Regions were defined for each of the face features (the eyes, nose, mouth) and one region was defined for the rest of the area on the face. These analyses were done to determine whether the current findings replicate those of our previous work (Firestone et al., 2007), in which older participants made significantly more fixations and transitions than younger participants during the study block, and older participants made marginally more transitions when viewing younger compared to older faces, an effect that was not found for the younger participants (Firestone et al., 2007).
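The transition measure can be illustrated with a short sketch. This is not the authors' analysis code; it assumes each fixation has already been assigned to one of the four regions named above.

```python
# Hypothetical sketch (not the authors' code): count eye movements that cross from
# one face region to a different region. Regions are "eyes", "nose", "mouth", and
# "other" (the rest of the face), assigned to each fixation upstream.

def count_transitions(fixation_regions):
    return sum(1 for prev, curr in zip(fixation_regions, fixation_regions[1:])
               if prev != curr)

# Example: eyes -> eyes -> nose -> mouth -> eyes contains three transitions.
print(count_transitions(["eyes", "eyes", "nose", "mouth", "eyes"]))  # 3
```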

Corrected accuracy was calculated using d′ scores: the normal quantile (z) transform of the hit rate (studied faces called “old”) minus the normal quantile transform of the false alarm rate (novel faces called “old”). Hit and false alarm rates are also reported in Tables 4 and 5. Repeated-measures ANOVAs were conducted on d′ scores using the between-subject factors of age group (younger, older) and viewing condition (bases, yoked, own), and the within-subject factor of face age (younger, older). Within the yoked condition, repeated-measures ANOVAs were conducted on the same measures using the between-subject factors of yoked age group (younger, older) and base age (younger, older), and the within-subject factor of face age (younger, older).
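For readers who want the computation spelled out, the following is a minimal sketch of the d′ formula above. It is not the authors' script, and the boundary correction for hit or false alarm rates of 0 or 1 is an assumption (a common convention) rather than something stated in the text.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false alarm rate); rates of 0 or 1 are nudged off the
    boundary (an assumed correction) so the normal quantile stays finite."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_old, 0.5 / n_old), 1 - 0.5 / n_old)
    fa_rate = min(max(false_alarms / n_new, 0.5 / n_new), 1 - 0.5 / n_new)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 20 hits out of 24 studied faces and 6 false alarms out of 24 novel faces
# give a d' of roughly 1.6.
print(round(d_prime(20, 4, 6, 18), 2))
```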

As previously noted, older participants who were yoked to a younger base were significantly older than older participants who were yoked to an older base. As such, a repeated-measures ANOVA was conducted on d′ scores, using the between-subject factor of base age (younger, older), the within-subject factor of face age (younger, older), and the continuous variable of age of the yoked participant. We centered the continuous variable around 50 years since the average age of yoked participants was about 50 years (actual mean = 47.02 years).

To address the possibility that yoked participants had poorer recognition compared to participants in the base condition due to an inability to follow the moving window, the proportion of total viewing time spent in the moving window was examined for participants in the yoked condition. A repeated-measures ANOVA was conducted with the between-subject factors of yoked age group (younger, older) and base age (younger, older), and the within-subject factor of face age (younger, older). Less than 0.5% of all trials were discarded due to technical error. The percent of time that yoked participants spent within the moving window was also correlated with later recognition using Pearson’s correlation coefficient, controlling for the factors of yoked age group, base age, and face age.

Finally, to explore whether other aspects of viewing behavior predicted subsequent memory across all viewing conditions, we correlated the number of fixations and transitions made to faces during the study block with subsequent recognition accuracy. This analysis was conducted separately for younger and older participants, using Pearson’s correlation coefficient while controlling for the factor of viewing condition.
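One way to compute a Pearson correlation that controls for a categorical factor is to residualize both variables on dummy-coded condition and correlate the residuals (a partial correlation). The sketch below illustrates this approach; it is not the authors' SPSS syntax, and the column names are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical sketch (not the authors' analysis script): partial Pearson correlation
# between a viewing measure and recognition accuracy, controlling for a categorical
# factor by residualizing both variables on its dummy codes.

def partial_corr(df, x, y, factor):
    dummies = pd.get_dummies(df[factor], drop_first=True).astype(float)
    X = np.column_stack([np.ones(len(df)), dummies.to_numpy()])
    residuals = {}
    for var in (x, y):
        beta, *_ = np.linalg.lstsq(X, df[var].to_numpy(dtype=float), rcond=None)
        residuals[var] = df[var].to_numpy(dtype=float) - X @ beta
    return np.corrcoef(residuals[x], residuals[y])[0, 1]

# Example call with assumed column names:
# r = partial_corr(data, "n_transitions", "d_prime", "viewing_condition")
```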

Analyses were conducted using SPSS Statistics 18. Helmert contrasts were used to decompose significant main effects of viewing condition: base participants were compared to all other participants (yoked and own), and yoked participants were then compared to own participants. Analyses of simple main effects were conducted to decompose significant interactions (Kirk, 1968).
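The Helmert contrasts described above compare the base condition against the average of the yoked and own conditions, and then the yoked condition against the own condition. A small sketch of the corresponding contrast weights follows; it is purely illustrative (the weights and the use of the condition means reported later in the Results are for arithmetic illustration, not the authors' SPSS output).

```python
import numpy as np

# Hypothetical sketch (not the authors' SPSS output): Helmert contrast weights for
# the three viewing conditions, applied to group mean d' values for illustration.
means = np.array([2.92, 1.47, 1.10])   # base, yoked, own (d' means from the Results)

helmert = np.array([
    [1.0, -0.5, -0.5],   # base vs. the average of yoked and own
    [0.0,  1.0, -1.0],   # yoked vs. own
])

print(helmert @ means)   # [1.635, 0.37]: both contrasts favor the first-listed group
```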

Results

Judgments of Face Age and Picture Quality

There was no significant main effect of age group (F < 1) on the judgments of face age. However, older adults gave higher quality ratings (M = 3.93, SE = 0.07) than younger adults [M = 3.58, SE = 0.07; age group: F(1,166) = 12.75, p < 0.001]. Experimenter-defined younger faces were judged as younger (M = 1.48, SE = 0.02; approximate age categories of 21–30 and 31–40 years) and of higher quality (M = 3.95, SE = 0.05) than the experimenter-defined older faces, which were judged as older [M = 3.88, SE = 0.03, approximate age category of 51–60 years; face age: F(1,166) = 4408.24, p < 0.001] and of lower quality [M = 3.56, SE = 0.05; face age: F(1,166) = 143.58, p < 0.001]. A significant interaction between age group and face age [F(1,166) = 8.10, p = 0.01] for judgments of face age revealed that older faces were rated as older by younger adults [M = 3.93, SE = 0.05; t(166) = 2.00, p = 0.05] than by older adults (M = 3.82, SE = 0.05), and younger faces were rated as marginally younger by younger adults (M = 1.43, SE = 0.03) than by older adults [M = 1.53, SE = 0.03; t(166) = −1.61, p = 0.10], similar to previous findings (Firestone et al., 2007). A significant interaction was also found between age group and face age for judgments of picture quality [F(1,166) = 58.89, p < 0.001]; younger faces were judged to be of similar quality by younger (M = 3.90, SE = 0.07) and older participants [M = 4.00, SE = 0.07; t(166) = −0.97, p = 0.33], but older participants judged older faces to be of higher quality (M = 3.86, SE = 0.08) than younger participants [M = 3.25, SE = 0.08; t(166) = 5.80, p < 0.001].

Viewing during Encoding

A significant main effect of viewing condition was found [F(2,162) = 16.33, p < 0.001], in which base participants made significantly more fixations (M = 22.13, SE = 0.56) than yoked (M = 20.53, SE = 0.28) or own participants [M = 18.48, SE = 0.39; t(162) = 4.33, p < 0.001], and yoked participants made significantly more fixations than own participants [t(162) = 4.27, p < 0.001]. A marginal interaction between face age and viewing condition was found for the number of fixations made to the faces during the study block [F(2,162) = 2.65, p = 0.07]; whereas participants in the base and yoked conditions made more fixations toward younger (base: M = 22.22, SE = 0.58; yoked: M = 20.62, SE = 0.29) compared to older faces (base: M = 22.04, SE = 0.57; yoked: M = 20.44, SE = 0.29), participants in the own condition made fewer fixations toward younger (M = 18.29, SE = 0.41) compared to older faces (M = 18.67, SE = 0.40). A marginal interaction was also found between face age and age group for the number of fixations [F(1,162) = 3.13, p = 0.08]; older participants made more fixations toward older (M = 20.74, SE = 0.36) compared to younger faces (M = 20.51, SE = 0.36), and younger participants made more fixations toward younger (M = 20.25, SE = 0.36) compared to older faces (M = 20.03, SE = 0.36). There was a marginal interaction between face age, viewing condition, and age group for the number of fixations [F(2,162) = 2.44, p = 0.09]. The general pattern of findings noted above (participants directing more fixations to faces within their own age group) was found, except that younger participants in the own condition made more fixations toward older compared to younger faces. No other main effects or interactions were significant [age group by viewing condition: F(2,162) = 2.19, p = 0.12; all other effects: F < 1; see Table 3 for relevant means and standard errors].

Table 3. Mean (SE) number of fixations and transitions made to younger and older faces during the study block for younger and older participants across the different viewing conditions.

Older participants (M = 6.76, SE = 0.25) made more transitions between facial features than younger participants (M = 6.02, SE = 0.25) as revealed by a significant main effect of age group [F(1,162) = 4.51, p = 0.04]. A significant main effect of viewing condition was found [F(2,162) = 57.82, p < 0.001]; base participants (M = 7.71, SE = 0.39) made significantly more transitions than either yoked (M = 7.47, SE = 0.20) or own participants [M = 3.99, SE = 0.28; t(162) = 4.62, p < 0.001], and yoked participants made significantly more transitions than own participants [t(162) = 10.23, p < 0.001]; this pattern is the same as observed for number of fixations. A significant interaction between age group and viewing condition [F(2,162) = 57.82, p < 0.001] revealed that, in the bases condition, older participants (M = 9.23, SE = 0.56) made more transitions than younger participants [M = 6.18, SE = 0.56; t(162) = 8.40, p < 0.001], whereas in the own condition, younger participants made more transitions (M = 4.38, SE = 0.39) than older participants [M = 3.59, SE = 0.39; t(162) = 2.17, p < 0.01], and no difference in the number of transitions was observed between younger (M = 7.50, SE = 0.28) and older participants in the yoked condition [M = 7.44, SE = 0.28; t(162) = 0.15, p = 0.22]. All other main effects and interactions were non-significant (Fs < 1; see Table 3 for relevant means and standard errors).

Critically here, older participants in the bases condition made more transitions between facial features than younger base participants, consistent with previous findings (Firestone et al., 2007). This difference in the manner by which younger and older adults view faces may ultimately contribute to age-related differences in recognition memory, as detailed below.

Recognition Accuracy (d′): All Viewing Conditions

Younger participants (M = 2.03, SE = 0.06) were significantly more accurate at recognizing faces than older participants (M = 1.63, SE = 0.06) as revealed through a significant main effect of age group on the d′ scores [F(1,162) = 21.50, p < 0.001]. The differences in accuracy between the younger and older participants seem to be largely driven by an increase in false alarm rates for the older adults (see Table 4 for hit and false alarm rates). Participants were significantly more accurate at recognizing older (M = 1.94, SE = 0.05) compared to younger faces [M = 1.72, SE = 0.06; face age: F(1,162) = 11.80, p = 0.001]. A significant age group by face age interaction [F(1,162) = 6.73, p = 0.01] indicated that while younger participants showed no significant difference in accurately recognizing older (M = 2.05, SE = 0.07) compared to younger faces [M = 2.00, SE = 0.08; t(162) = 0.69, p = 0.49], older participants were significantly more accurate at recognizing older (M = 1.82, SE = 0.07) compared to younger faces [M = 1.44, SE = 0.08; t(162) = 4.99, p < 0.001]. This finding replicates the own-age bias previously shown for older adults (e.g., Anastasi and Rhodes, 2005; Firestone et al., 2007).

Table 4. Mean (SE) hit and false alarm recognition rates for younger and older participants viewing younger and older faces across the different viewing conditions.

A significant main effect of viewing condition [F(2,162) = 122.99, p < 0.001] was found for the d′ scores. Base participants had the highest recognition [M = 2.92, SE = 0.10; bases versus all other participants: t(162) = 15.56, p < 0.001]. For those participants whose viewing was restricted to a moving window during the study block, participants in the yoked condition (M = 1.47, SE = 0.05) were more accurate than participants in the own condition who actively controlled the moving window [M = 1.10, SE = 0.07; t(162) = 4.35, p < 0.001].

The three-way interaction of age group, viewing condition, and face age was non-significant (F < 1), suggesting that the recognition advantage for younger adults and the own-age bias on the d′ scores observed for older adults were evident even when eye movement patterns were altered (Figure 2). All other interactions were non-significant (Fs < 1).

Figure 2. Mean d′ scores of younger and older faces for older and younger participants across the three viewing conditions (bases, yoked, own). Error bars represent standard error. Participants who freely viewed the faces during study were significantly more accurate than participants whose viewing was restricted by a moving window (i.e., yoked, own). Yoked participants were more accurate than own participants who were in control of the moving window. The own-age recognition bias for older adults (higher accuracy for older versus younger faces) was observed regardless of viewing condition.

The significantly higher recognition for participants in the base condition compared to those in the yoked and own conditions suggests that the availability of peripheral information during encoding contributes to subsequent recognition. However, there was an advantage in recognition memory when someone else’s eye movements were followed. To further investigate recognition for participants in the yoked condition, we examined whether recognition was affected by the age of the base participant. In particular, we examined whether face recognition would be improved for older adults who followed the eye movements of a younger participant, and conversely, whether face recognition would be decreased for younger adults who were yoked to the eye movements of an older participant.

Recognition Accuracy: Yoked Viewing

As previously noted in the overall analysis, younger yoked participants were significantly more accurate than older yoked participants, and older faces were recognized significantly more accurately than younger faces. However, a marginal interaction between yoked age group and face age [F(1,92) = 3.56, p = 0.06] revealed that both younger and older yoked participants recognized older faces (younger participants: M = 1.73, SE = 0.09; older participants: M = 1.55, SE = 0.09) more accurately than younger faces (younger participants: M = 1.53, SE = 0.09; older participants: M = 1.06, SE = 0.09; see Table 5 for hit and false alarm rates), and this difference was more pronounced for older adults [younger participants: t(92) = 1.92, p = 0.06; older participants: t(92) = 4.59, p < 0.001; see Figure 3]. All other main effects and interactions were non-significant; in particular, age-related deficits in recognition memory remained regardless of whether participants followed a younger or older adult’s eye movements.

Table 5. Mean (SE) hit and false alarm recognition rates for younger and older yoked participants, viewing younger and older faces after studying the faces through the eye movements of younger or older base participants.

Figure 3. Mean d′ scores for younger and older faces for older and younger yoked participants who followed the eye movements of older or younger base participants. Error bars represent standard error. Accuracy was not influenced by age of the base participant.

When the above analyses were repeated using the age of the yoked participant (centered at 50 years) as a continuous variable, the pattern of results for d′ remained the same, suggesting that manipulating the sequence and location of eye movements during the study block did not eliminate subsequent age-related deficits in face recognition.

Moving Window Analyses

During the study block, younger participants (M = 0.67, SE = 0.01) spent marginally more time in the moving window than older participants in the yoked condition [M = 0.64, SE = 0.01; age group: F(1,92) = 2.86, p = 0.09]. A significant interaction between base age and face age [F(1,92) = 8.38, p = 0.01] revealed that when a younger base’s eye movements were followed, yoked participants spent more time in the moving window when the face was older (M = 0.67, SE = 0.01) versus younger [M = 0.65, SE = 0.01; t(92) = 3.08, p = 0.01], whereas when an older base’s eye movements were followed, yoked participants spent the same amount of time in the moving window regardless of face age [older: M = 0.65, SE = 0.01; younger: M = 0.66, SE = 0.01; t(92) = −1.02, p = 0.31]. This could be attributed to our earlier findings: younger participants in the base condition made more fixations during viewing of younger faces compared to older faces; thus, participants yoked to younger base participants may have found it more difficult to stay within the moving window when viewing younger faces. Similarly, older participants in the base condition showed similar numbers of fixations during viewing of younger and older faces, and the amount of time spent in the moving window did not vary depending on face age when participants were yoked to an older base participant. All other main effects and interactions were non-significant [face age: F(1,92) = 2.12, p = 0.15; yoked age group by face age by base age: F(1,92) = 2.58, p = 0.11; all other effects: F < 1].

There was no correlation between the percent of time that yoked participants spent in the moving window and subsequent recognition (r = −0.03, n = 96, p = 0.76), even when controlling for age of the yoked participant, base age, and face age (older face: r = −0.04, n = 96, p = 0.67; younger face: r = −0.10, n = 96, p = 0.36). Altogether, participants were able to stay within the moving window during the study block (more than 60% of the time within each yoked/base age group), and such viewing was perhaps sufficient that it did not significantly impact subsequent recognition.

Viewing during Encoding and Subsequent Recognition

Altering the pattern of eye movements during the study block did not considerably improve subsequent recognition memory for younger or older adults. To probe whether there was an aspect of viewing behavior that predicted subsequent memory across all viewing conditions, we examined the relationship between the number of fixations and transitions made to faces during the study block and subsequent recognition accuracy. Specifically, given that the base participants had higher accuracy, as well as more fixations and transitions, than the other groups, and, in turn, yoked participants had higher accuracy and more fixations/transitions than the own participants, we examined whether these measures were correlated. Correlations were conducted separately for all younger adults and for all older adults, in each case collapsed across viewing condition.

For younger participants, there was a marginally significant positive correlation between the number of fixations made during encoding and subsequent recognition (Pearson’s r = 0.18, n = 84, p = 0.10), and a significant positive correlation between the number of transitions and subsequent recognition (r = 0.41, n = 84, p < 0.001). For older adults, both the number of fixations (r = 0.32, n = 84, p = 0.003) and number of transitions (r = 0.32, n = 84, p = 0.003) were significantly and positively correlated with subsequent recognition. Both age groups, but in particular older adults, may require greater sampling of the faces during encoding to support later memory for those faces.

Discussion

Age-related recognition deficits are often reported in the literature (e.g., Ferris et al., 1980; Fulton and Bartlett, 1991; Firestone et al., 2007), and we have previously suggested that this may be a result of underlying differences in eye movement scanning between younger and older adults (Firestone et al., 2007). The present study was conducted to determine whether altering eye movement scanning is a means by which memory can be improved. Specifically, we examined whether yoking the eye movements of older adults to those of younger adults during encoding could increase subsequent recognition accuracy for faces. Moreover, we examined whether the own-age recognition bias that is typically exhibited by older adults could be eliminated by altering eye movement scanning. To address these questions, a novel eyetracking paradigm was employed in which a viewer’s eye movements are “replayed” to another through a moving window. Compared to free viewing conditions, recognition decreased when faces were viewed through a moving window during the study phase. Within each viewing condition, younger adults outperformed older adults in recognition accuracy, and the own-age recognition bias for older adults remained across all viewing conditions. Thus, altering the pattern of eye movements during encoding did not eliminate the age-related impairment in face recognition nor did it eliminate the own-age recognition bias. However, increased sampling of the face during study was correlated with later recognition accuracy for all participants, particularly for older adults. Altogether, the present findings have implications for understanding the contribution of eye movements to subsequent recognition, the encoding conditions that may be critical for accurate subsequent recognition in older adults, and ideas regarding the effects of active versus passive control on memory.

The Influence of Eye Movements on Recognition

Participants who freely viewed faces during study had significantly higher recognition compared to participants who were restricted via a moving window, regardless of whether or not they had active control over the moving window. This difference in recognition between participants in the base condition compared to the other participants suggests that having simultaneous access to the entire array of information available during study is critical for encoding and/or subsequent recognition, consistent with findings from prior research that has employed gaze-contingent viewing (Maw and Pomplun, 2004; van Belle et al., 2010). The manner of processing that occurred during the encoding and test phases did not match for the yoked and own groups who freely viewed the images during the test phase rather than viewing through a gaze-contingent window; research has shown that such a change in perceptual processing modes can disrupt recognition (Reingold, 2002). Further, it is possible that the pattern of recognition rates may have been altered if a different shape of moving window had been used; recent work has suggested that the shape of the gaze-contingent window influences the distribution of fixations (Foulsham et al., 2011), which in turn, may influence subsequent memory.

Having the entire array of information available for viewing may provide an advantage for later recognition. Participants who were restricted to viewing through a moving window were forced to encode faces feature-by-feature, with no opportunity to process holistic information (e.g., process the face as a single unit). There is considerable evidence to support the dominance of holistic over featural information in face recognition (see Maurer et al., 2002 for a review). For example, participants recognize parts of a face more accurately in the context of the whole face rather than in isolation (Tanaka and Farah, 1993). Similarly, when participants were forced to view single facial features via a gaze-contingent moving window during test, recognition significantly decreased compared to those who engaged in free viewing, resulting in performance similar to that of a prosopagnosic patient (van Belle et al., 2010).

Additionally, the presence of the entire face may provide advantages to subsequent recognition because the sequence and location of eye movements may be influenced by both bottom-up (e.g., salience, luminance) and top-down factors (e.g., task demands, memory; Parker, 1978; Underwood et al., 2004, 2006; Ryan et al., 2007). It is possible that disrupting either one of these processes while studying a face has a detrimental effect on subsequent recognition accuracy, as this prevents the most salient and/or informative regions from being sampled earliest and/or most often, thereby resulting in a weaker representation of that stimulus in memory. The lack of peripheral information and the reduction in bottom-up control of eye movements may explain the significant reduction in recognition for the participants in the own condition relative to the base participants. In the present study, participants who followed another person’s eye movements through a moving window (yoked participants) had higher recognition rates than those who had active control over the moving window (own participants), which may seem surprising given that, during encoding, neither group had peripheral stimulus information available. However, yoked participants followed the eye movements of a base participant who was able to freely view the face. The eye movements of base participants, and consequently the yoked participants, were presumably influenced by both top-down and bottom-up factors, leading to encoding of the most salient and informative face features, and a subsequent advantage in recognition memory. Conversely, participants in the gaze-contingent own condition were restricted to adopting an eye movement scanning strategy that relied predominantly on top-down factors. Specifically, viewing by the own participants was likely guided by information in memory regarding where facial features should be with respect to one another, rather than being guided by bottom-up influences such as a particularly salient facial feature (e.g., large nose, bright eyes). The lack of peripheral information may have disrupted bottom-up guidance of eye movements, or may have disrupted the (perhaps conscious) planning of a sequence of eye movement patterns that is informed by foveal and peripheral details. Therefore, the current results suggest that the presence of peripheral information allows for bottom-up and top-down factors to be exerted on eye movement scanning, both of which influence subsequent memory.

Aging and Face Recognition

Within the yoked condition, recognition for the older participants did not improve when their eye movements were yoked to those of younger participants. Moreover, recognition accuracy for the younger yoked participants did not decrease when their eye movements were yoked to those of older participants. Older adults were impaired on recognition relative to younger adults, regardless of whether the eye movements of a younger or older adult were followed. It seems then, that manipulating the eye movements of older adults to view faces in the same manner as younger adults cannot override age-related deficits in face recognition. It is possible that this age-related recognition deficit may stem from an inability to bind the features of a face together into lasting representations (e.g., Chalfonte and Johnson, 1996; Naveh-Benjamin, 2000).

Age-related changes in binding may be the underlying process that outwardly manifests itself as age-related changes in scanning behavior. That is, age-related changes in scanning behavior reflect an age-related change in binding. In our previous work (Ryan and Villate, 2009), we have suggested that eye movements serve to link, or bind, distinct objects to one another. Here, eye movements may serve to bind distinct face features to one another. If older adults have structural and functional changes in the neural systems that would otherwise support binding (e.g., Driscoll et al., 2003), then a compensatory response may be to increase eye movements within and between face features to support the development of face representations. This potential strategy of increased sampling cannot be invoked when older adults are yoked to the eye movements of younger participants. Indeed, older participants yoked to older base participants showed a numerical advantage in recognition for faces compared to those who were yoked to a younger base participant. Moreover, increased sampling of the face during study was associated with improved recognition accuracy during test for all participants, and in particular for older adults, again supporting the notion that increased sampling may be critical for older adults’ subsequent recognition. As a result of increased sampling (i.e., numerically greater number of fixations, significantly greater number of transitions), the average duration of each fixation necessarily decreased for older adults relative to younger adults due to the fixed viewing time (results not reported here, for brevity). Curiously, even though increasing the number of fixations and transitions resulted in decreased time for foveal processing of the stimulus, higher numbers of fixations and transitions, rather than lower numbers, were correlated with subsequent recognition. This suggests that it may not be the processing of the details of the faces per se that supports subsequent recognition, but the linking of the face features that is supported by the eye movements themselves: namely, fixations and transitions.

While an own-age recognition bias was not observed here for younger adults (but see Wright and Stroud, 2002; Anastasi and Rhodes, 2006), an own-age recognition bias was observed for older adults, regardless of the manner in which faces were initially studied. Across all viewing conditions, older adults exhibited greater recognition accuracy for older versus younger faces. This suggests that the own-age recognition bias observed for older adults is not related to scanning behavior, and that older adults are employing an alternative strategy to encode and subsequently recognize older faces.

The advantage in memory for older faces may occur as a result of additional elaborative processing following acquisition of information by the eye movements. It is possible that for older adults, social group status may play a larger role in determining how faces are processed and maintained in memory (e.g., Wright and Stroud, 2002; Anastasi and Rhodes, 2005; Lamont et al., 2005; Firestone et al., 2007). For instance, it may be easier for older adults to integrate older faces into their semantic network of known people if their current cohort is comprised largely of older adults. By contrast, the younger adults in this study may have more frequent exposure to a wide variety of age groups (e.g., family members, college professors), thereby making it relatively easy to incorporate both younger and older faces into an existing semantic framework (Anastasi and Rhodes, 2006).

Alternatively, for the older adults, memories for older faces may be based primarily on a defining feature; storing a single defining feature in memory may be advantageous if older adults have difficulty binding multiple features together into a face representation. In that case, it may not be important that a particular pattern of eye movements be enacted in order to support a memory representation for an older face; rather, it may be more important that a sufficient number of fixations and/or amount of time is spent on a particular feature. A sufficient amount of viewing could be spent on a defining feature regardless of whether or not the viewer was in control of the sequence and location of eye movements that are enacted to study the face. Therefore, active versus passive control over the manner by which the face features are encoded would be of secondary influence to the own-age recognition bias compared to the type of information that is encoded and/or the amount of viewing that is directed to particular aspects of the face.

The Influence of Active Versus Passive Control on Memory

Previous work has demonstrated an advantage in memory when participants maintain active control over the manner by which encoding unfolds. For example, in a study by Brooks et al. (1999), participants actively negotiated their way through the rooms of a virtual bungalow with a joystick. A passive participant sat in front of a yoked monitor and watched as the active participant moved through the bungalow. Participants who had active control of their movements had better recall for the bungalow’s spatial layout than the participants who were under passive control. Similar advantages from active control have been shown for drivers versus passive car passengers (e.g., Appleyard, 1970; but see Booth et al., 2000) and in active versus passive touch studies (e.g., Gibson, 1962; but see Schwartz et al., 1975).

On the surface, our findings may appear to be inconsistent with research that demonstrates an advantage in memory for active control at encoding. Participants in the yoked condition under passive control of the moving window showed higher recognition rates compared to participants in the own condition who had active control of the moving window. In the current paradigm, having active control over one’s eye movements may not be the most influential factor affecting recognition. Instead, having access to as much information as possible may be more critical, since this may enable the adoption of a scanning strategy influenced by both top-down and bottom-up information, as well as holistic and featural information regarding the face. Although peripheral information was not available for either the yoked or own participants, the path of the moving window for yoked participants was based on all available information that was indeed present for the base participants. This may have then inflated the recognition rate for the yoked condition; that is, it is possible that some of the deficit in recognition due to passive control of the moving window may have been offset by an increase in recognition afforded by the efficient path of the moving window that was based on foveal and peripheral information available to the base participant. In contrast, the path of the moving window for the own participants was likely based largely on top-down information.

Conclusion

Although eye movement scanning may indeed be related to subsequent memory (e.g., Althoff and Cohen, 1999; Henderson et al., 2005; Ryan et al., 2007), the current findings, obtained with a novel eyetracking paradigm, suggest that recognition memory may not necessarily be improved by yoking a viewer’s eye movements to those of another viewer who has demonstrated superior memory performance. It may be that eye movement scanning is idiosyncratic; findings that reveal the similarity between eye movements enacted during encoding and those enacted during recognition suggest that comparisons of one’s own eye movements across the experimental phases may be critical for recognition (Noton and Stark, 1971; Brandt and Stark, 1997; Laeng and Teodorescu, 2002; Foulsham and Underwood, 2008). It is possible that adopting another’s eye movements does not take into account the particular coping mechanisms that have proved beneficial for memory within a given viewer, and that older adults in particular may benefit from a scanning strategy in which increased transitional behavior and sampling of a face is enacted to compensate for difficulties in memory binding (Firestone et al., 2007). Additionally, eye movement modification may still be a means by which recognition memory may be enhanced, perhaps via increased sampling. Future work remains to explore whether increased sampling to particular regions (e.g., increased viewing toward the eyes or nose) is required to enhance recognition, or whether increased sampling in general may help to reduce age-related deficits in recognition memory. Moreover, while memory may be influenced by eye movement behavior, the own-age recognition bias for older adults may be independent of the pattern (sequence and location) of scanning behavior, and may instead be related to factors such as the amount of time and/or number of fixations spent viewing a particular face feature or the ease with which a face may be integrated into an existing framework of known persons. Additionally, other cognitive factors such as speed and/or level of processing may be considered as factors that contribute to age-related differences in face recognition and the own-age recognition bias (Craik and Lockhart, 1972; Salthouse, 1995). Future work remains to explore these possibilities and to determine the means by which eye movement patterns and/or different processing modes may be modified to efficiently support the formation of lasting representations in memory.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank G. Leung and U. Blankstein for assistance in data collection, C. Villate for assistance in data analysis, J. Shen for technical assistance, and L. Riggs and J. Heisz for comments on a previous version of this manuscript. Funding from the Canadian Institutes of Health Research (CIHR), the Canada Research Chairs Program (CRC), and the Canadian Foundation for Innovation (CFI) was provided to Jennifer D. Ryan.

References

Althoff, R. R., and Cohen, N. J. (1999). Eye-movement-based memory effect: a reprocessing effect in face perception. J. Exp. Psychol. Learn. Mem. Cogn. 25, 997–1010.

Anastasi, J. S., and Rhodes, M. G. (2005). An own-age bias in face recognition for children and older adults. Psychon. Bull. Rev. 12, 1043–1047.

Anastasi, J. S., and Rhodes, M. G. (2006). Evidence for an own-age bias in face recognition. N. Am. J. Psychol. 8, 237–252.

Appleyard, D. (1970). Styles and methods of structuring a city. Environ. Behav. 2, 100–117.

Booth, K., Fisher, B., Page, S., Ware, C., and Widen, S. (2000). “Wayfinding in a virtual environment,” in Poster Abstracts for Graphics Interface 2000, Montreal.

Brandt, S. A., and Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. J. Cogn. Neurosci. 9, 27–38.

Brooks, B. M., Attree, E. A., Rose, F. D., Clifford, B. R., and Leadbetter, A. G. (1999). The specificity of memory enhancement during interaction with a virtual environment. Memory 7, 65–78.

Castelhano, M. S., and Henderson, J. M. (2008). Stable individual differences across images in human saccadic eye movements. Can. J. Exp. Psychol. 62, 1–14.

Chalfonte, B. L., and Johnson, M. K. (1996). Feature memory and binding in young and older adults. Mem. Cognit. 24, 403–416.

Craik, F. I. M., and Lockhart, R. S. (1972). Levels of processing: a framework for memory research. J. Verbal Learn. Verbal Behav. 11, 671–684.

Driscoll, I., Hamilton, D. A., Petropoulos, H., Yeo, R. A., Brooks, W. M., Baumgartner, R. N., and Sutherland, R. J. (2003). The aging hippocampus: cognitive, biochemical and structural findings. Cereb. Cortex 13, 1344–1351.

Ferris, S. H., Crook, T., Clark, E., McCarthy, M., and Rae, D. (1980). Facial recognition memory deficits in normal aging and senile dementia. J. Gerontol. 35, 707–714.

Firestone, A., Turk-Browne, N. B., and Ryan, J. D. (2007). Age-related deficits in face recognition are related to underlying changes in scanning behaviour. Neuropsychol. Dev. Cogn. B Aging Neuropsychol. Cogn. 14, 594–607.

Fletcher, K. I., Butavicius, M. A., and Lee, M. D. (2008). Attention to internal face features in unfamiliar face matching. Br. J. Psychol. 99, 379–394.

Foulsham, T., Teszka, R., and Kingstone, A. (2011). Saccade control in natural images is shaped by the information visible at fixation: evidence from asymmetric gaze-contingent windows. Atten. Percept. Psychophys. 73, 266–283.

Foulsham, T., and Underwood, G. (2008). What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition. J. Vis. 8, 1–17.

Fulton, A., and Bartlett, J. C. (1991). Young and old faces in young and old heads the factor of age in face recognition. Psychol. Aging 6, 623–630.

Gareze, L., Harris, J. M., Barenghi, C. F., and Tadmor, Y. (2008). Characterising patterns of eye movements in natural images and visual scanning. J. Mod. Opt. 55, 533–555.

Gibson, J. J. (1962). Observations on active touch. Psychol. Rev. 69, 477–491.

Henderson, J. M., Williams, C. C., and Falk, R. J. (2005). Eye movements are functional during face learning. Mem. Cognit. 33, 98–106.

Kirk, R. E. (1968). Experimental Design: Procedures for the Behavioral Sciences. Monterey, CA: Brooks-Cole Publishing Co.

Laeng, B., and Teodorescu, D. S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cogn. Sci. 26, 207–231.

Lamont, A. C., Stewart-Williams, S., and Podd, J. (2005). Face recognition and aging: effects of target age and memory load. Mem. Cognit. 33, 1017–1024.

Maurer, D., Le Grand, R., and Mondloch, C. J. (2002). The many faces of configural processing. Trends Cogn. Sci. 6, 255–260.

Maw, N. N., and Pomplun, M. (2004). “Studying human face recognition with the gaze-contingent window technique,” in Proceedings of the Twenty-Sixth Annual Meeting of the Cognitive Science Society, 2004, Chicago, IL, eds K. Forbus, D. Gentner, and T. Regier (Mahwah, NJ: Erlbaum), 927–932.

Naveh-Benjamin, M. (2000). Adult age differences in memory performance: tests of an associative deficit hypothesis. J. Exp. Psychol. Learn. Mem. Cogn. 26, 1170–1187.

Noton, D., and Stark, L. (1971). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Res. 11, 929–942.

Parker, R. E. (1978). Picture processing during recognition. J. Exp. Psychol. Hum. Percept. Perform. 4, 284–293.

Péruch, P., and Wilson, P. N. (2004). Active versus passive learning and testing in a complex outside built environment. Cogn. Process. 5, 218–227.

Rahhal, T. A., May, C. P., and Hasher, L. (2002). Truth and character: sources that older adults can remember. Psychol. Sci. 13, 101–105.

Reingold, E. M. (2002). On the perceptual specificity of memory representations. Memory 10, 365–379.

Reingold, E. M., and Loschky, L. C. (2002). Saliency of peripheral targets in gaze-contingent multi-resolutional displays. Behav. Res. Methods Instrum. Comput. 34, 491–499.

Ryan, J. D., Hannula, D. E., and Cohen, N. J. (2007). The obligatory effects of memory on eye movements. Memory 15, 508–525.

Ryan, J. D., and Villate, C. (2009). Building visual representations: the binding of relative spatial relations across time. Vis. Cogn. 17, 254–272.

Salthouse, T. A. (1995). Influence of processing speed on adult age differences in learning. Swiss J. Psychol. 54, 102–112.

Schwartz, A. S., Perey, A. J., and Azulay, A. (1975). Further analysis of active and passive touch in pattern discrimination. Bull. Psychon. Soc. 6, 7–9.

Stacey, P. C., Walker, S., and Underwood, J. D. M. (2005). Face processing and familiarity: evidence from eye-movement data. Br. J. Psychol. 96, 407–422.

Tanaka, J. W., and Farah, M. (1993). Parts and wholes in face recognition. Q. J. Exp. Psychol. 46, 225–245.

Tatler, B. W., Wade, N. J., Kwan, H., Findlay, J. M., and Velichkovsky, B. M. (2010). Yarbus, eye movements, and vision. i-Perception 1, 7–27.

Underwood, G., Foulsham, T., van Loon, E., Humpherys, L., and Bloyce, J. (2006). Eye movements during scene inspection: a test of the saliency map hypothesis. Eur. J. Cogn. Psychol. 18, 321–342.

Underwood, G., Jebbett, L., and Roberts, K. (2004). Inspecting pictures for information to verify a sentence: eye movements in general encoding and in focused search. Q. J. Exp. Psychol. 57A, 165–182.

van Belle, G., de Graef, P., Verfaillie, K., Busigny, T., and Rossion, B. (2010). Whole not hole: expert face recognition requires holistic perception. Neuropsychologia 48, 2620–2629.

Walker-Smith, G. J., Gale, A. G., and Findlay, J. M. (1977). Eye movement strategies involved in face perception. Perception 6, 313–326.

Wright, D. B., and Stroud, J. S. (2002). Age differences in lineup identification accuracy: people are better with their own age. Law Hum. Behav. 26, 641–654.

Keywords: eye movements, aging, recognition, memory, face perception

Citation: Chan JPK, Kamino D, Binns MA and Ryan JD (2011) Can changes in eye movement scanning alter the age-related deficit in recognition memory? Front. Psychology 2:92. doi: 10.3389/fpsyg.2011.00092

Received: 14 February 2011; Accepted: 27 April 2011;
Published online: 12 May 2011.

Edited by:

Dietmar Heinke, University of Birmingham, UK

Reviewed by:

Tom Foulsham, University of Essex, UK
Harriet Ann Allen, University of Birmingham, UK

Copyright: © 2011 Chan, Kamino, Binns and Ryan. This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.

*Correspondence: Jennifer D. Ryan, Rotman Research Institute, Baycrest Hospital, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada. e-mail: jryan@rotman-baycrest.on.ca

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.