Face perception




Although this picture merely consists of some vague blobs, the human brain seems to be "hardwired" to find a human face in the image.

Face perception is the process by which the brain and mind understand and interpret the face, particularly the human face.

The face is an important site for the identification of others and conveys significant social information. Probably because of the importance of its role in social interaction, the psychological processes involved in face perception are present from birth, are complex, involve large and widely distributed brain areas, and can be selectively damaged, causing a specific impairment in understanding faces known as prosopagnosia.

Development

From birth, infants possess rudimentary facial processing capacities. Infants as young as two days of age are capable of mimicking the facial expressions of an adult, displaying their capacity to note details like mouth and eye shape as well as to move their own muscles in a way that produces similar patterns in their faces.[1] However, despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions. Five-month-olds, when presented with an image of a person making a fearful expression and a person making a happy expression, pay the same amount of attention to, and exhibit similar event-related potentials for, both. When seven-month-olds are given the same treatment, they focus more on the fearful face, and their event-related potential for the fearful face shows a stronger initial negative central component than for the happy face. This result indicates an increased attentional and cognitive focus toward fear that reflects the threat-salient nature of the emotion.[2] In addition, infants' negative central components did not differ for new faces that portrayed the same emotion as a face they had been habituated to but varied in intensity, yet were stronger for faces portraying a different emotion, showing that seven-month-olds regarded happy and sad faces as distinct emotive categories.[3]

The recognition of faces is an important neurological mechanism that an individual uses every day. Jeffrey and Rhodes[4] note that faces "convey a wealth of information that we use to guide our social interactions."[5] For example, emotions play a large role in our social interactions: the perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. A face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion.[6] The ability to recognize faces is apparent even in early childhood. By age five, the neurological mechanisms responsible for face recognition are present. Research shows that the way children process faces is similar to that of adults, although adults process faces more efficiently, perhaps because of improvements in memory and cognitive functioning that occur with age.[7]

Infants are able to comprehend facial expressions as social cues representing the feelings of other people before they are a year old. At seven months, the apparent object of an observed face's emotional reaction is relevant in processing the face. Infants at this age show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the direction of fearful faces' gaze produces no difference. In addition, two ERP components in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can at least partially understand the higher level of threat posed by anger directed at them as compared to anger directed elsewhere.[8] By at least seven months of age, infants are also able to use others' facial expressions to understand their behavior. Seven-month-olds will look to facial cues to understand the motives of other people in ambiguous situations, as shown by a study in which they watched an experimenter's face longer if she took a toy from them and maintained a neutral expression than if she made a happy expression.[9] Interest in the social world is increased by interaction with the physical environment. Training three-month-old infants to reach for objects with Velcro-covered "sticky mitts" increases the amount of attention that they pay to faces, compared with passively moving objects through their hands and with non-trained control groups.[10]

In line with the notion that seven-month-olds have categorical understandings of emotion, they are also capable of associating emotional prosodies with corresponding facial expressions. When presented with a happy or angry face, shortly followed by an emotionally neutral word read in a happy or angry tone, their ERPs follow different patterns. Happy faces followed by angry vocal tones produce more change than the other incongruous pairing, while there is no such difference between happy and angry congruous pairings; the greater reaction implies that infants held greater expectations of a happy vocal tone after seeing a happy face than of an angry tone after an angry face. Considering an infant's relative immobility, and thus their decreased capacity to elicit negative reactions from their parents, this result implies that experience has a role in building comprehension of facial expressions.[11]

Several other studies indicate that early perceptual experience is crucial to the development of capacities characteristic of adult visual perception, including the ability to identify familiar others and to recognize and comprehend facial expressions.[12] The capacity to discern between faces, much like language, appears to have a broad potential early in life that is whittled down to the kinds of faces experienced in early life.[12] Infants can discern between macaque faces at six months of age but, without continued exposure, cannot at nine months of age. Being shown photographs of macaques during this three-month period gave nine-month-olds the ability to reliably distinguish between unfamiliar macaque faces.[13]

The neural substrates of face perception in infants are likely similar to those of adults, but the limits of imaging technologies that are feasible for use with infants currently prevent very specific localization of function, as well as specific information from subcortical areas[14] like the amygdala, which is active in the perception of facial expression in adults.[12] In a study on healthy adults, it was shown that faces are likely to be processed, in part, via a retinotectal (subcortical) pathway.[15]

However, there is activity near the fusiform gyrus,[14] as well as in occipital areas,[8] when infants are exposed to faces, and it varies depending on factors including facial expression and eye gaze direction.[3][8]

Adult face perception

Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception that are caused by brain injury or neurological illness. Novel optical illusions such as the Flashed Face Distortion Effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.

One of the most widely accepted theories of face perception argues that understanding faces involves several stages:[16] from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual.

This model (developed by psychologists Vicki Bruce and Andrew Young) argues that face perception might involve several independent sub-processes working in unison.

  1. A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on feature-by-feature basis. That initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory, and across views. This explains why the same person seen from a novel angle can still be recognized. This structural encoding can be seen to be specific for upright faces as demonstrated by the Thatcher effect. The structurally encoded representation is transferred to notional "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. The natural ability to produce someone's name when presented with their face has been shown in experimental research to be damaged in some cases of brain injury, suggesting that naming may be a separate process from the memory of other information about a person.

The study of prosopagnosia (an impairment in recognizing faces which is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that several stage theories might be correct.

Face perception is an ability that involves many areas of the brain; however, some areas have been shown to be particularly important. Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area and it is sometimes referred to as the fusiform face area for that reason.[17]

Neuroanatomy of facial processing

There are several parts of the brain that play a role in face perception. Rossion, Hanseeuw, and Dricot[18] used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. Most such studies use blood-oxygen-level-dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions.[19] They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting the faces from the cars, with initial face perception beginning in the fusiform face area and occipital face area. This entire region links to form a network that acts to distinguish faces.

The processing of faces in the brain is known as a "sum of parts" perception.[20] However, the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.[21] Furthermore, Arcurio, Gold, and James[22] used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example the nose and mouth, and prefers the combination of two eyes over other combinations. This research supports the idea that the occipital face area recognizes the parts of the face at the early stages of recognition. In contrast, the fusiform face area shows no preference for single features, because it is responsible for "holistic/configural" information,[23] meaning that it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al.,[20] who found that regardless of the orientation of a face, subjects were impacted by the configuration of the individual facial features as well as by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition.
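The faces-versus-cars comparison described above is typically computed as a contrast within a general linear model (GLM) fitted to each voxel's BOLD time series. The sketch below is a minimal illustration of that idea using synthetic data; all numbers, block lengths, and the single "voxel" are hypothetical, and real analyses use dedicated packages with hemodynamic response modeling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Hypothetical boxcar regressors: 1 while face / car blocks are on screen.
face_reg = np.tile([1] * 10 + [0] * 10, n_scans // 20).astype(float)
car_reg = np.roll(face_reg, 10)
X = np.column_stack([face_reg, car_reg, np.ones(n_scans)])  # design matrix

# Synthetic BOLD signal from one "face-selective" voxel.
y = 2.0 * face_reg + 0.5 * car_reg + rng.normal(0, 1, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit GLM: y = X @ beta + noise
contrast = np.array([1.0, -1.0, 0.0])         # faces minus cars
effect = contrast @ beta                      # positive => face-preferring voxel
print(f"faces - cars contrast estimate: {effect:.2f}")
```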

Facial perception has well-identified neuroanatomical correlates in the brain. During the perception of faces, major activations occur bilaterally in the extrastriate areas, particularly in the fusiform face area (FFA), the occipital face area (OFA), and the face-selective region of the superior temporal sulcus (fSTS).[24][25]

The FFA is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and that it is sensitive to the presence of facial parts as well as the configuration of these parts. The FFA is also necessary for successful face detection and identification. This is supported by fMRI activation and by studies of prosopagnosia, which involves lesions in the FFA.[24][25][26]

The OFA is located in the inferior occipital gyrus.[25] Similar to the FFA, this area is also active during successful face detection and identification, a finding that is supported by fMRI activation.[24] The OFA is involved in and necessary for the analysis of facial parts, but not the spacing or configuration of those parts. This suggests that the OFA may be involved in a facial processing step that occurs prior to FFA processing.[24]

The fSTS is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception.[27] The fSTS has demonstrated increased activation when attending to gaze direction.[24]

Bilateral activation is generally shown in all of these specialized facial areas.[28][29][30][31][32][33] However, some studies report increased activation in one side over the other. For instance, McCarthy (1997) showed that the right fusiform gyrus is more important for facial processing in complex situations.[26]

Gorno-Tempini and Price have shown that the fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.[34]

It is important to note that while certain areas respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. Emotional face processing research has demonstrated that other functions are also at work. When people look at faces displaying emotions (especially those with fearful expressions), compared with neutral faces, there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations.[35] The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions.[35] This demonstrates possible connections between the amygdala and facial processing areas.[35]

Another aspect that affects both the fusiform gyrus and the amygdala activation is the familiarity of faces. Having multiple regions that can be activated by similar face components indicates that facial processing is a complex process.[35]

Ishai and colleagues have proposed the object form topology hypothesis, which posits that there is a topological organization of neural substrates for object and facial processing.[36] However, Gauthier disagrees and suggests that the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.[37]

Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery (MCA). Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes in the right middle cerebral artery (RMCA) than the left (LMCA) have been observed.[38][39] It has been demonstrated that men were right lateralized and women left lateralized during facial processing tasks.[40]

Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in the perception of faces.[20] Zheng, Mondloch, and Segalowitz recorded event-related potentials in the brain to determine the timing of recognition of faces in the brain.[41] The results of the study showed that familiar faces are indicated and recognized by a stronger N250,[41] an event-related potential component associated with the visual memory of faces.[42] Similarly, Moulson et al.[43] found that all faces elicit the N170 response in the brain.
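Components such as the N170 and N250 are obtained by averaging many EEG epochs time-locked to stimulus onset, then quantifying amplitude in a time window. The sketch below illustrates only that averaging logic with synthetic data; every number (sampling rate, amplitudes, trial count) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                            # sampling rate in Hz (hypothetical)
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch from -100 ms to +500 ms

# Simulate 60 single-trial epochs: a negative deflection near 250 ms
# (an N250-like response) buried in trial-by-trial noise.
n250 = -4.0 * np.exp(-((t - 0.25) ** 2) / (2 * 0.02 ** 2))
epochs = n250 + rng.normal(0, 8, size=(60, t.size))

erp = epochs.mean(axis=0)           # averaging cancels the noise

# Quantify the component as mean amplitude in a 200-300 ms window.
window = (t >= 0.2) & (t <= 0.3)
print(f"N250 window mean amplitude: {erp[window].mean():.2f} µV")
```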

Hemispheric asymmetries in facial processing capability

The mechanisms underlying gender-related differences in facial processing have not been studied extensively.

Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory (FRM) task and a facial affect identification task (FAIT). Male subjects used a right-hemisphere, and female subjects a left-hemisphere, neural activation system in the processing of faces and facial affect.[44] Moreover, facial perception showed no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes.[45] Gender-related differences[46] may suggest a role for sex hormones. In females there may be variability in psychological functions[47] related to differences in hormonal levels during different phases of the menstrual cycle.[48]

Data obtained from both healthy and clinical populations support asymmetric face processing.[49][50][51] In 2001, Gorno-Tempini and others suggested that the left inferior frontal cortex and the bilateral occipitotemporal junction respond equally to all face conditions. Some neuroscientists contend that both the left inferior frontal cortex (Brodmann area 47) and the occipitotemporal junction are implicated in facial memory.[52][53][54] The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones.[55] Right asymmetry in the mid-temporal lobe for faces has also been shown using 133-Xenon measurements of cerebral blood flow (CBF).[56] Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.[57]

The implication of this observed asymmetry is that the two hemispheres implement different strategies for facial perception: the right hemisphere would be expected to employ a holistic strategy, and the left an analytic strategy.[58][59][60][61] In 2007, Philip Njemanze, using a novel functional transcranial Doppler (fTCD) technique called functional transcranial Doppler spectroscopy (fTCDS), demonstrated that men were right lateralized for object and facial perception, while women were left lateralized for facial tasks but showed a right tendency or no lateralization for object perception.[62] Using fTCDS, Njemanze demonstrated summation of responses related to facial stimulus complexity, which could be taken as evidence for topological organization of these cortical areas in men. It may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception.

This agrees with the object form topology hypothesis proposed by Ishai and colleagues in 1999. However, the relatedness of object and facial perception was process-based, and appears to be associated with a common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with a facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000 that the extrastriate cortex contains areas best suited for different computations, described as the process-map model. Therefore, the proposed models are not mutually exclusive, and this underscores the fact that facial processing does not impose any new constraints on the brain other than those used for other stimuli.

It may be suggested that each stimulus was mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. Njemanze, in 2007, concluded that, for facial perception, men used a category-specific process-mapping system for the right cognitive style, but women used the same for the left.

Verbal overshadowing and face identification

A study by Wickham and Swift[63] examined the role that articulatory suppression can play in verbal overshadowing and face identification. Verbal overshadowing is the phenomenon whereby verbally describing a face between presentation and test can impair identification of that face (Schooler & Engstler-Schooler, 1990). The study set out to determine how important verbal encoding is to face recognition, and how it interacts with verbal overshadowing, by using articulatory suppression to force individuals to rely on their visual code instead of the phonological code. Participants studied a face carefully for five seconds; in the articulatory suppression conditions, they repeated the word 'the' during those five seconds. Participants were then either given one minute to write down a description of the face they had just seen, or given a crossword puzzle to complete as a distractor. They were then shown ten faces: nine very similar foils and one target, the face they had just studied. This procedure was repeated twelve times for each participant. The researchers found that "articulatory suppression significantly reduced the identification scores of no description participants but not the description participants," meaning that articulatory suppression impairs one's ability to recognize a face. Interestingly, the study also found that when participants were using articulatory suppression, the verbal overshadowing effect did not occur. This suggests that the verbal overshadowing effect arises from a problem with the verbal code, not the visual code: because the distractor syllable did not hinder recognition more than describing the face did, the effect cannot stem from a disruption of what participants encoded visually; rather, it is interference from the verbal aspects of encoding that creates the verbal overshadowing effect.
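The trial structure of this design can be summarized schematically. The sketch below is not Wickham and Swift's software or materials; the condition labels and the 2 x 2 structure are inferred from the description above and should be treated as hypothetical.

```python
import itertools
import random

# Schematic encoding of the design described above (all labels hypothetical).
STUDY_SECONDS = 5          # face study time
RETENTION_SECONDS = 60     # description / crossword interval
LINEUP_SIZE = 10           # 1 target + 9 similar foils
N_TRIALS = 12              # repetitions per participant

conditions = list(itertools.product(
    ["suppression", "no_suppression"],      # repeat "the" while studying?
    ["describe_face", "crossword_filler"],  # overshadowing vs. control task
))

def make_trial(suppression, interval_task):
    """One trial: study phase, retention interval, then a 10-face lineup."""
    return {
        "study": {"duration_s": STUDY_SECONDS, "suppression": suppression},
        "retention": {"task": interval_task, "duration_s": RETENTION_SECONDS},
        "test": {"lineup_size": LINEUP_SIZE},
    }

# A participant runs 12 trials of their assigned condition.
condition = random.choice(conditions)
trials = [make_trial(*condition) for _ in range(N_TRIALS)]
print(condition, len(trials))
```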

From these studies, it can be seen how such findings might be used in everyday life to understand and improve memory. As discussed above, these studies show that information is best encoded when there is no competing articulation to interfere with rehearsal. This could be helpful to students who like to listen to music while studying, or to anyone trying to encode information. The findings also suggest that the ability to simultaneously interpret language might enable individuals to bypass the effects of articulatory suppression; researchers could investigate whether being multilingual helps with this, or whether some brain process makes it easier for such individuals to encode information to memory and therefore to learn multiple languages and interpret between them. Finally, the face recognition and identification findings reinforce the notion that articulatory suppression interferes with an individual's ability to encode information.


Controversies


[Image: the "Face on Mars", a Viking orbiter photograph of a rock formation widely perceived as a face]

Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (see the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes (see domain specificity).

Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the "fusiform face area" (FFA) because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars,[64] and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles.[65] This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. Yaoda Xu, then a postdoctoral fellow with Nancy Kanwisher, replicated the car and bird expertise study using an improved fMRI design that was less susceptible to attentional accounts.

The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces; however, this could be because we have much more expertise for faces than for most other objects. Furthermore, not all findings of this research have been successfully replicated; for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces, while other nearby regions deal with non-face objects.[66]

However, these failures to replicate are difficult to interpret, because studies vary on too many aspects of the method. It has been argued that some studies test experts with objects that are slightly outside of their domain of expertise. More to the point, failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. With regard to "face specific" effects in neuroimaging, there are now multiple replications with Greebles, with birds and cars,[67] and two unpublished studies with chess experts.[68][69]

Although it is sometimes found that expertise recruits the FFA (e.g., as hypothesized by a proponent of this view in the preceding paragraph), a more common and less controversial finding is that expertise leads to focal category-selectivity in the fusiform gyrus, a pattern similar, in terms of antecedent factors and neural specificity, to that seen for faces. As such, it remains an open question whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform, or whether the two domains literally share the same neural substrates. Moreover, at least one study argues that the question of whether expertise-predicated category-selective areas overlap with the FFA is nonsensical, in that multiple measurements of the FFA within an individual person often overlap no more with each other than do measurements of FFA and expertise-predicated regions.[70]

At the same time, numerous studies have failed to replicate these expertise effects altogether.[citation needed] For example, four published fMRI studies have asked whether expertise has any specific connection to the FFA in particular, by testing for expertise effects in both the FFA and a nearby but not face-selective region called the LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al., VR 2006). In all four studies, expertise effects were significantly stronger in the LOC than in the FFA; indeed, expertise effects were only borderline significant in the FFA in two of the studies, while the effects were robust and significant in the LOC in all four studies.[citation needed]

Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment.

Ethnicity

Main article: cross-race effect

Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914.[71] Humans tend to perceive people of races other than their own as all looking alike:

Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike.[71]

This phenomenon is known as the cross-race effect, own-race effect, other-race effect, own-race bias, or interracial face-recognition deficit.[72] The effect occurs as early as 170 ms in the brain, with the N170 brain response to faces.[73] In a meta-analysis, Mullen[citation needed] found evidence that the other-race effect is larger among White subjects than among African American subjects, whereas Brigham and Williamson (1979, cited in Shepherd, 1981) obtained the opposite pattern. Shepherd also reviewed studies that found a main effect of race of face, with better performance on White faces,[74] other studies in which no difference was found,[75] and yet other studies in which performance was better on African American faces.[76] Overall, Shepherd reports a reliable positive correlation between the size of the effect of target race (indexed by the difference in proportion correct on same- and other-race faces) and self-ratings of amount of interaction with members of the other race, r(30) = .57, p < .01. This correlation is at least partly an artifact of the fact that African American subjects, who performed equally well on faces of both races, almost always responded with the highest possible self-rating of amount of interaction with white people (M = 4.75), whereas their white counterparts both demonstrated an other-race effect and reported less other-race interaction (M = 2.13); the difference in ratings was reliable, t(30) = 7.86, p < .01.[77]
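The statistic reported above, r(30) = .57, is a Pearson correlation with 30 degrees of freedom (i.e., 32 subjects). A minimal sketch of how such a correlation is computed follows; the per-subject data here are entirely hypothetical stand-ins, not Shepherd's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 32  # 32 subjects gives df = n - 2 = 30

# Hypothetical per-subject values: difference in proportion correct
# (same-race minus other-race faces) and self-rated other-race
# interaction on a 1-5 scale.
other_race_effect = rng.normal(0.1, 0.05, n)
contact_rating = np.clip(3 + 20 * other_race_effect + rng.normal(0, 1, n), 1, 5)

r, p = stats.pearsonr(other_race_effect, contact_rating)
print(f"r({n - 2}) = {r:.2f}, p = {p:.3f}")
```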

Further research points to the importance of other-race experience in own- versus other-race face processing (O'Toole et al., 1991; Slone et al., 2000; Walker & Tanaka, 2003). In a series of studies, Walker and colleagues showed the relationship between amount and type of other-race contact and the ability to perceptually differentiate other-race faces (Walker & Tanaka, 2003; Walker & Hewstone, 2006a,b; 2007). Participants with greater other-race experience were consistently more accurate at discriminating between other-race faces than were participants with less other-race experience.

In addition to other-race contact, it has been suggested that the own-race effect is linked to an increased ability to extract information about the spatial relationships between different features.[78] Richard Ferraro writes that facial recognition is an example of a neuropsychological measure that can be used to assess cognitive abilities that are salient within African-American culture.[79] Daniel T. Levin writes that the deficit occurs because people emphasize visual information specifying race at the expense of individuating information when recognizing faces of other races.[80] Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect.[77] The question of whether the own-race effect can be overcome was indirectly addressed by Ekman and Friesen in 1976 and by Ducci, Arcuri, Georgis and Sineshaw in 1982: they observed that people from New Guinea and Ethiopia who had had prior contact with white people showed a significantly better emotion recognition rate.

Studies on adults have also shown sex differences in face recognition. Men tend to recognize fewer faces of women than women do, whereas there are no sex differences with regard to male faces.[81]

In individuals with autism spectrum disorder

Autism spectrum disorder (ASD) is a pervasive neurodevelopmental disorder that produces many deficits, including social, communicative,[82] and perceptual deficits.[83] Of specific interest, individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions.[84][85] These deficits are suspected to be a product of abnormalities occurring in both the early and late stages of facial processing.[86]

Speed and methods

People with ASD process face and non-face stimuli at the same speed.[86][87] In typically developing individuals, there is a preference for face processing, resulting in faster processing of faces compared to non-face stimuli.[86][87] These individuals primarily utilize holistic processing when perceiving faces.[83] In contrast, individuals with ASD employ part-based or bottom-up processing, focusing on individual features rather than the face as a whole.[88][89] When focusing on the individual parts of the face, persons with ASD direct their gaze primarily to the lower half of the face, specifically the mouth, unlike the eye-directed gaze of typically developing people.[88][89][90][91][92] This deviation from holistic face processing does not employ the use of facial prototypes, templates stored in memory that make for easy retrieval.[85][93]

Additionally, individuals with ASD display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other objects or visual inputs.[85] Some evidence lends support to the theory that these face-memory deficits are products of interference between connections of face processing regions.[85]

Associated difficulties

The atypical facial processing style of people with ASD often manifests in constrained social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills.[94] These deficiencies can be seen in infants as young as 9 months, specifically in terms of poor eye contact and difficulty engaging in joint attention.[86] Some experts have even used the term 'face avoidance' to describe the phenomenon whereby infants who are later diagnosed with ASD preferentially attend to non-face objects over faces.[82] Furthermore, some have proposed that the demonstrated impairment in the ability of children with ASD to grasp the emotional content of faces is not a reflection of an incapacity to process emotional information, but rather the result of a general inattentiveness to facial expression.[82] The constraints on these processes, which are essential to the development of communicative and social-cognitive abilities, are viewed as the cause of impaired social engagement and responsivity.[95] Furthermore, research suggests a link between decreased face processing abilities in individuals with ASD and later deficits in theory of mind; for example, while typically developing individuals are able to relate others' emotional expressions to their actions, individuals with ASD do not demonstrate this skill to the same extent.[96]

There is some contention about this causation, however, resembling a chicken-or-egg dispute. Others theorize that social impairment leads to perceptual problems rather than vice versa.[88] On this view, a biological lack of social interest inherent to ASD inhibits the development of facial recognition and perception processes due to underutilization.[88] Continued research is necessary to determine which theory is best supported.

Neurology

Many of the obstacles that individuals with ASD face in terms of facial processing may derive from abnormalities in the fusiform face area and amygdala, which, as discussed above, have been shown to be important in face perception. Typically, the fusiform face area in individuals with ASD has reduced volume compared to normally developed persons.[97] This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient and thus decreases activation levels of the fusiform face area. This hypoactivity in the fusiform face area has been found in several studies.[88]

Studies are not conclusive as to which brain areas people with ASD use instead. One study found that, when looking at faces, people with ASD exhibit activity in brain regions normally active when typically developing individuals perceive objects.[88] Another study found that during facial perception, people with ASD use different neural systems, with each one of them using their own unique neural circuitry.[97]

Compensation mechanisms

As individuals with ASD age, scores on behavioral tests assessing ability to perform face-emotion recognition increase to levels similar to those of controls.[86][98] Yet it is apparent that the recognition mechanisms of these individuals are still atypical, though often effective.[98] In terms of face identity-recognition, compensation can take many forms, including a more pattern-based strategy first seen in face inversion tasks.[91] Alternatively, evidence suggests that older individuals compensate by mimicking others' facial expressions and relying on motor feedback from their facial muscles for face emotion-recognition.[99] These strategies help overcome the obstacles individuals with ASD face in interacting within social contexts.

Artificial face perception

A great deal of effort has been put into developing software that can recognize human faces. Much of this work has been done by a branch of artificial intelligence known as computer vision, which uses findings from the psychology of face perception to inform software design. Recent work using noninvasive functional transcranial Doppler spectroscopy, as demonstrated by Njemanze (2007), to locate specific responses to facial stimuli has led to proposed systems for facial recognition. Such a system uses input responses called cortical long-term potentiation (CLTP), derived from Fourier analysis of mean blood flow velocity, to trigger a target face search from a computerized face database.[100][101] This provides a brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics.
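As a concrete illustration of the computer-vision side, classical detectors such as OpenCV's Haar cascade can locate faces in an image with a few lines of code. A minimal sketch, assuming OpenCV (the cv2 package) is installed; the input filename is hypothetical. Modern systems typically use deep neural networks instead, but the cascade remains a simple, instructive baseline.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")  # hypothetical input file
if image is None:
    raise FileNotFoundError("group_photo.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) boxes for candidate faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("group_photo_detected.jpg", image)
print(f"Detected {len(faces)} face(s)")
```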

Another interesting application is the estimation of human age from face images. As an important cue for human communication, facial images contain a great deal of useful information, including gender, expression, and age. However, compared with other cognition problems, age estimation from facial images is still very challenging. This is mainly because the aging process is influenced not only by a person's genes but also by many external factors: physical condition, lifestyle, and so on may accelerate or slow aging. Moreover, because aging is slow and unfolds over a long duration, collecting sufficient data for training is fairly demanding work.[102]
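Age estimation is commonly framed as a regression problem over features extracted from face images. The sketch below illustrates only that framing, using random stand-in features and scikit-learn; real systems extract embeddings from a trained face network and learn from large labeled datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-in data: in practice X would be face-image embeddings and
# y the ground-truth ages of the photographed individuals.
X = rng.normal(size=(1000, 128))
y = np.clip(40 + X[:, 0] * 10 + rng.normal(0, 5, 1000), 1, 90)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Mean absolute error in years is the standard metric for age estimation.
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} years")
```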

See also

References & Bibliography

Key texts

Books

  • Bruce, V. and Young, A. (2000) In the Eye of the Beholder: The Science of Face Perception. Oxford: Oxford University Press. ISBN 0198524390
  • Bruce, V. & Humphreys, G. W. (Eds.) (1994) Object and face processing. London: Erlbaum.


Papers

  • Nelson, C.A. (2001) The development and neural bases of face recognition. Infant and Child Development, 10, 3-18.
  • Bruce, V. & Young, A. (1986) Understanding face recognition. The British Journal of Psychology, 77 (3), 305-327.
  • Kanwisher, N.G., McDermott, J. & Chun, M.M. (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17 (11), 4302-11.
  • Gauthier, I., Skudlarski, P., Gore, J.C. & Anderson, A.W. (2000) Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3 (2), 191-7.
  • Gauthier, I., Tarr, M.J., Anderson, A.W., Skudlarski, P. & Gore, J.C. (1999) Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects. Nature Neuroscience, 2 (6), 568-73.
  • Grill-Spector, K., Knouf, N. & Kanwisher, N. (2004) The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7 (5), 555-62.
  • Xu, Y. (2005) Revisiting the role of the fusiform and occipital face areas in visual expertise. Cerebral Cortex, 15, 1234-1242.
  • Kargopoulos, P., Bablekou, Z., Gonida, E. & Kiosseoglou, G. (2003) Effects of face and name presentation on memory for associated verbal descriptors. The American Journal of Psychology, 116 (3), 415-430.

  • Bower, G.H. and Karlin, M.B. (1974) Depth of processing pictures of faces and recognition memory, Journal of Experimental Psychology 103: 751-7.
  • Bruce, V. and Young, A.W. (1986) Understanding face recognition, British Journal of Psychology 77: 305-27.
  • Ellis, H.D., Shepherd, J.W. and Davies, G.M. (1979) Identification of familiar and unfamiliar faces from internal and external features: some implications for theories of face recognition, Perception 8: 431-9.
  • Haig, N.D. (1984) The effect of feature displacement on face recognition, Perception 13: 505-12.
  • Servos, P., Engel, S.A., Gati, J. and Menon, R. (1999) fMRI evidence for an inverted face representation in human somatosensory cortex, NeuroReport, 10, 1393-95.

Additional material

Books

Papers

  • Ahrens, S.R. (1954) Beiträge zur Entwicklung des Physiognomie- und Mimikerkennens [Contributions on the development of physiognomy and mimicry recognition], Zeitschrift für Experimentelle und Angewandte Psychologie 2: 412-54.

  • Goren, C.C., Sarty, M. and Wu, R.W.K. (1975) Visual following and pattern discrimination of face-like stimuli by new-born infants, Paediatrics 56: 544-9.
  • Valentine, T. (1985) Identity priming in the recognition of familiar and unfamiliar faces, British Journal of Psychology 76: 373-83.

External links

  • FaceResearch – Scientific research and online studies on face perception
  • Facial Expression Resources Page Links to research groups and other resources concerning facial expression perception, recognition and synthesis.
  • Face Blind Prosopagnosia Research Centers at Harvard and University College London
  • Perceptual Expertise Network (PEN) Collaborative group of cognitive neuroscientists studying perceptual expertise, including face recognition.
This page uses Creative Commons licensed content from Wikipedia.

