Face perception Flashcards
how do we measure eye movements?
- Yarbus (1967) was one of the pioneers in eye tracking and picture perception.
- From the pattern of eye movements, you can infer which features are most salient or important to observers.
- In one study, he recorded eye movements over a three-minute viewing period and found that a large amount of time was spent looking at the eyes, the mouth and the shape of the face (see the dwell-time sketch below).
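As a rough illustration only (not Yarbus’s actual method), the sketch below shows how raw gaze samples can be summarised into dwell times for areas of interest (AOIs) such as the eyes and mouth; the AOI boxes, sampling rate and data format are assumptions made for the example.

```python
# Minimal sketch: summarising gaze samples into dwell times per area of interest (AOI).
# The AOI rectangles, the 60 Hz sampling rate and the (x, y) sample format are
# illustrative assumptions, not details from Yarbus (1967).

SAMPLE_RATE_HZ = 60  # assumed eye-tracker sampling rate

# Hypothetical AOIs in screen pixel coordinates: (x_min, y_min, x_max, y_max)
AOIS = {
    "left_eye":  (300, 200, 380, 240),
    "right_eye": (420, 200, 500, 240),
    "mouth":     (350, 360, 450, 410),
}

def dwell_times(gaze_samples):
    """Return seconds spent in each AOI for a list of (x, y) gaze samples."""
    counts = {name: 0 for name in AOIS}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return {name: n / SAMPLE_RATE_HZ for name, n in counts.items()}

# Example with a fake three-sample trace: two samples on the left eye, one on the mouth.
print(dwell_times([(310, 210), (320, 215), (400, 380)]))
```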
what are the importances of faces and eyes?
- Cerf et al. (2008) found that when a picture contained a face, the observers’ gaze was drawn to it immediately, even when they were asked to look for other objects.
- Birmingham et al. (2009) found that viewers fixated on the eye region of faces within the first 200 ms of viewing.
what is the importance of the eyes?
- Faces and eyes carry special meaning: Bateson et al. (2006) placed a picture of either flowers or a pair of eyes on a notice listing coffee/tea prices in the Psychology Department at Newcastle.
- They found that in the weeks when the eyes were shown, colleagues were more likely to pay their share. This suggests that eyes and gaze are powerful cues even when they appear only in pictures.
which do we prefer - eyes or faces?
- Fantz (1961) presented stimuli with either properly or randomly configured facial features to babies across a range of ages and found that infants spent more time looking at intact faces than at scrambled faces or a non-face stimulus.
- Note: In developmental research, looking time is a marker for preference.
- Fantz suggested that face configuration is important and that infants preferred looking at the intact faces.
what is the face detection effect?
- Purcell and Stewart (1988) ran a series of experiments in which participants were presented with a face target that could appear either to the left or the right of a fixation point. Participants had to indicate on which side the target appeared.
- The face target could be an upright, inverted or scrambled face.
- The researchers found that participants were faster at detecting the location of the target when it was an upright face than when it was an inverted or scrambled face.
what is the face superiority effect?
- Tanaka and Farah (1993) asked observers to remember the names of faces that were constructed from a kit of facial features.
- Subjects were then presented with isolated features and asked to identify whose face they belonged to.
- They found that observers were much better at recognising the features when they were presented within a whole face.
- Faces are processed holistically, i.e. as a whole.
can infants recognise mothers?
- Field et al. (1984) demonstrated that neonates who had spent only four discontinuous hours with their mothers displayed a preference for their mother’s face over a stranger’s.
- Walton, Bower & Bower (1992) found that 12- to 36-hour-old infants preferred their mother’s face.
- Bushnell (2001) reported that 12-hour-old infants looked longer at their mother’s face than at a stranger’s.
how do we recognise individuals?
- We are generally very good at recognising faces…
- Caricatures accentuate the differences between faces and therefore make them more recognisable than the actual photograph (Rhodes et al., 1987) (see the sketch below).
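One way to picture what a caricature does, as a sketch only: if a face is coded as a vector of measurements (an assumption for illustration; Rhodes et al. do not prescribe this exact code), a caricature pushes that vector further away from the average face.

```python
import numpy as np

# Illustrative sketch: caricaturing as exaggerating a face's deviation from the average.
# Representing faces as arrays of landmark coordinates is an assumption for the example.
def caricature(face, average_face, k=1.5):
    """Return a caricatured face; k > 1 exaggerates the deviation, k < 1 reduces it."""
    face, average_face = np.asarray(face, float), np.asarray(average_face, float)
    return average_face + k * (face - average_face)

# e.g. a landmark 2 units away from the average ends up 3 units away with k = 1.5:
print(caricature([[10.0, 12.0]], [[10.0, 10.0]]))  # -> [[10. 13.]]
```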
how do we recognise faces in photos?
- Jenkins et al. (2011) demonstrated that there is a lot of within-person variability in photographs.
- They found that when subjects were presented with photographs of a person they did not know, they tended to think the photographs were of several different individuals.
are we really that good at remembering faces?
- Bruce et al. (1999) presented observers with images taken from high-quality (CCTV-style) video and asked whether the target person was present in a set of faces.
- They found that observers made lots of mistakes and that superficial facial resemblances and dissimilarities were very misleading.
how useful are photo IDs?
- Bindemann (2011) presented observers with three photo ID cards and asked them to identify the person on the cards from a set of 30 faces.
- He found that subjects were quite poor at identifying the targets (mean accuracy of 57%).
- However, when they were told that the three cards belonged to the same person, accuracy rose to 85%.
what is the other-race effect?
- Bothwell et al. (1989) showed that white subjects were better at recognising white faces and black subjects were better at recognising black faces.
- More recently, Anzures et al. (2013) published a good review of the developmental origins of the other-race effect (ORE).
what are the causes of ORE?
- Limited exposure to faces from a different culture can make it more difficult to detect the subtle differences between “foreign” faces.
- This phenomenon is not only true for human faces, but also for faces of other species. For example, farmers can recognise the faces of their sheep, whereas other people generally cannot (McNeil & Warrington, 1993).
how do we recognise facial expressions?
- Ekman (1992) showed that there is some universality in how we express different emotions (e.g. happiness, sadness, etc.).
- Our ability to recognise these emotions is also universal.
what is automatic mimicry?
- This has been suggested to underlie how humans process and express emotions.
- It refers to the unconscious/automatic imitation of speech and movements, gestures, facial expressions and eye gaze.
- The tendency to automatically mimic and synchronise movements with those of another person has been suggested to result in emotional contagion (Cacioppo et al., 2000).
what is the importance of a contagious smile?
- The Duchenne smile reaches the eyes, making their corners wrinkle into crow’s feet, and is recognised as the most authentic expression of happiness.
- Some have suggested that it is the constriction of the eyes that marks the smile of true enjoyment.
what is the effect of botox on emotions?
- Lewis (2018) suggested that Botox, a treatment for the appearance of frown lines, may affect clients’ emotions and their ability to recognise emotion.
- Botulinum toxin (BTX) injections reduce muscle mobility and are commonly used to treat the appearance of frown lines and wrinkles.
- Lewis (2018) found that BTX treatment of laughter lines (crow’s feet) was associated with increased depression scores. They suggested that this was because impairing patients’ Duchenne smiles would reduce their mood (think James-Lange theory).
- BTX treatment was also associated with reduced emotion recognition ability. They reasoned that facial BTX treatments impair emotion recognition because the ability to mimic emotions is reduced.
what is ensemble coding of expressions?
- Similar to Ariely’s study on ensemble coding, Haberman & Whitney (2009) showed that although observers retained little information about the emotion of individual members, they had a remarkably precise representation of the mean emotion.
- Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less.
- Here again, observers quickly recognise the mean expression of a set of faces with remarkable precision, despite lacking a representation of the individual set members (see the toy sketch below).
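A toy illustration of the idea (a sketch, not Haberman & Whitney’s stimuli or analysis): if each face in a set is coded by its position on an emotion morph continuum, the ensemble percept tracks the arithmetic mean of those positions rather than any individual member.

```python
import random

# Toy sketch of ensemble coding: the reported percept approximates the set mean,
# not the individual members. The 0-100 morph units and the set size of 16 are
# illustrative assumptions.
set_of_faces = [random.uniform(0, 100) for _ in range(16)]  # one emotion value per face
mean_emotion = sum(set_of_faces) / len(set_of_faces)
print("individual members:", [round(f) for f in set_of_faces])
print(f"mean emotion of the set: {mean_emotion:.1f} morph units")
```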
what is the attractiveness of faces?
- Perrett et al. (1994) showed that we find average faces attractive, but we find the average of attractive faces even more attractive.
- If we take the difference between the average face (d) and the average of the attractive faces (e), and add that difference to e, we get f, an even more attractive (“super-attractive”) face (sketched below).
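Read literally, this is a step in “face space”: take the vector from the overall average (d) to the attractive average (e) and extend it beyond e. A minimal sketch, assuming faces can be represented as numeric arrays (e.g. landmark coordinates), which the card does not specify:

```python
import numpy as np

# Sketch of the Perrett et al. (1994) manipulation, assuming faces are numeric arrays.
# d = average face, e = average of attractive faces, f = e + amount * (e - d).
def super_attractive(d, e, amount=1.0):
    """Shift e further away from d along the d -> e direction."""
    d, e = np.asarray(d, float), np.asarray(e, float)
    return e + amount * (e - d)

# Example with a single hypothetical measurement (say, eye size in arbitrary units):
print(super_attractive([10.0], [11.0]))  # -> [12.]
```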
why is the average attractive?
- Averaging faces removes any anomalous facial features, therefore making the faces more bilaterally symmetrical.
- Symmetry is generally preferred because:
- Asymmetries could result from injuries in utero (Hubschman & Stack, 1992).
- Asymmetry in human infants is correlated with the number of infectious diseases experienced by the mother during pregnancy (Livshits & Kobyliansky, 1991).
- Bilaterally symmetrical individuals display an advantage in sexual competition across different animal species (Moller, 1992).
what is the halo effect?
- Attractiveness has been said to have a positive “halo effect”, where people tend to attribute socially desirable personality traits to physically attractive individuals. Indeed, several studies have documented this effect.
- Attractive individuals are attributed more positive traits, such as being more extraverted (Albright et al., 1988) and friendlier (Dion et al., 1972).
does the halo effect hold across cultures?
- Batres and Shiramizu (2023) examined the “attractiveness halo effect” across 45 countries in 11 world regions.
- Participants were asked to rate 120 faces on one of several traits.
- Results showed that attractiveness correlated positively with most of the socially desirable personality traits, including being more confident, emotionally stable, intelligent, responsible, sociable, and trustworthy.
what is the effect of social media and smartphones on perceived beauty?
- Gill (2021) wrote a report for the UK Parliament on the effects of smartphones and social media on appearance pressures. Data were collected from 175 young women and nonbinary people.
- She found that about 88% felt some pressure to look attractive, and 90% either applied beauty filters or edited their photos before posting them on social media.
what are augmented reality beauty (ARB) filters?
- These filters modify the appearance of the face by conforming it to current beauty ideals, such as flawless skin (Gill, 2021).
- Miller (2024) showed that 87% of the filters sampled shrank the user’s nose and 90% made the user’s lips larger.
- Unlike photo editing, ARB filters adapt to facial features in real time, resulting in an instantaneous and unique digital beautifying process.
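To make the nose-shrinking / lip-enlarging idea concrete, here is a minimal sketch of landmark-based scaling. It is an illustration only: real ARB filters rely on face tracking and mesh warping, and the landmark groups and scale factors below are assumptions (the 0.9 and 1.1 factors loosely echo the shrink/enlarge directions reported by Miller, 2024, not actual filter parameters).

```python
import numpy as np

# Illustrative sketch of a beauty-filter-style adjustment on 2-D face landmarks:
# shrink the nose landmarks towards their centroid, expand the lip landmarks away from theirs.
# Landmark indices and scale factors are hypothetical.
def scale_region(landmarks, indices, factor):
    """Scale a subset of landmarks about that subset's centroid."""
    landmarks = np.asarray(landmarks, float).copy()
    region = landmarks[indices]
    centroid = region.mean(axis=0)
    landmarks[indices] = centroid + factor * (region - centroid)
    return landmarks

def apply_beauty_filter(landmarks, nose_idx, lip_idx):
    landmarks = scale_region(landmarks, nose_idx, 0.9)  # shrink the nose
    landmarks = scale_region(landmarks, lip_idx, 1.1)   # enlarge the lips
    return landmarks

# Example with four hypothetical landmarks: indices 0-1 = nose, 2-3 = lips.
pts = [[0.0, 0.0], [2.0, 0.0], [0.0, 5.0], [2.0, 5.0]]
print(apply_beauty_filter(pts, nose_idx=[0, 1], lip_idx=[2, 3]))
```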
what did Gulati et al. (2024) study?
- Gulati et al. (2024) conducted a large-scale online user study in which 2,748 participants rated facial images of a diverse set of 462 distinct individuals in two conditions: original and beautified (after applying a beauty filter).
- Gulati et al. (2024) revealed that:
- (1) AI-based beauty filters increase the perceptions of attractiveness for almost all individuals, regardless of their gender, age and race, and
- (2) Individuals in filtered images received higher ratings of attractiveness and other traits, such as intelligence and trustworthiness.
what did Isakowitsch (2022) study?
- Isakowitsch (2022) published a chapter on how ARB filters can affect self-perception.
- Online interviews were conducted with eight individuals (4 identified as female, 2 as non-binary and 2 as male). Participants were asked to take selfies with and without an ARB filter.
- All participants described their experience of interacting with the filter as fake.
- Seven of the eight participants reported a negative emotional reaction at the moment they switched back to their unfiltered camera.
- Three stated that they were disappointed when they saw their face without a filter again. They used expressions like “downgrade” and feeling “underwhelmed”, “disappointed” and “less enthusiastic” about their physical appearance.
- Most participants seemed to notice certain facial features more than before applying the filter.
- “I can definitely notice how different my jaw line is between the filter and no filter. And that doesn’t make me feel great. And so looking at my face now, I just noticed my jaw, a lot more. I look so much more round without the filter”.
what are GAN and deep fakes?
- A Generative Adversarial Network (GAN) is a type of machine learning model that is widely used for creating realistic images, videos and other types of data (sketched below).
- GANs can create deepfakes (“deep learning” + “fake”): new or edited images, videos or audio, edited or generated using AI tools, which can depict real or non-existent people.
- Academics have raised concerns about the potential for deep fakes to be used to promote disinformation and hate speech, and even interfere with elections.
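As a rough sketch of the adversarial idea only (not any particular deepfake system): a GAN trains two networks against each other, a generator that maps random noise to images and a discriminator that tries to tell generated images from real ones. The tiny PyTorch example below uses placeholder sizes, optimiser settings and random data, all of which are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch (illustration only): the generator maps noise to a flattened 64x64
# greyscale "image"; the discriminator outputs the probability that its input is real.
latent_dim, img_dim = 100, 64 * 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update using a batch of real (flattened) images."""
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: push real images towards 1 and generated images towards 0.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label generated images as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example with a batch of random placeholder "real" images scaled to [-1, 1]:
print(train_step(torch.rand(8, img_dim) * 2 - 1))
```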
how do we detect deep fake faces?
- Bray et al. (2023) used an online survey to present 280 participants a sequence of 20 images randomly selected from a pool of deepfake and real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response.
- They found that overall detection accuracy was just above chance (62%).
- Participants’ confidence in their answers was high and unrelated to accuracy.
- They noted that accuracy ranged from 30% to 80% across images.
- The accuracy rate for 20% of the images was below chance.
what did Shen et al. 2021 study?
- In another study, Shen et al. (2021) conducted a series of large-scale crowdsourced behavioural experiments using different sources and kinds of face imagery.
- Results show that humans are unable to distinguish synthetic faces from real faces under several different circumstances.
- This finding has serious implications for many different applications where face images are presented to human users.
what are our neural and behavioural responses to real vs fake faces?
- Moshel et al. (2022) showed participants real and fake (realistic and unrealistic) faces and asked them to classify each as real or fake.
- They found that:
- Participants performed near chance when classifying real and realistic fakes.
- Participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa.
- BUT AI-generated faces could be reliably decoded from participants’ neural activity.
how to spot a fake?
- Hills and Lewis (2006) suggested that participants can be trained to spot fakes if they know what to look for.
- Liu et al. (2020) reported that artefacts such as “asymmetrical eyes” and “irregular teeth” in artificial faces can help with spotting fakes.
what are the research potentials for deep fakes?
- However, some researchers have recognised the potential of deepfakes as a tool for psychological research.
- Deepfakes can produce highly customisable faces that can be controlled for factors such as attractiveness, race and facial features. Furthermore, deepfakes can be static images or moving video clips.
what are deep fake visual illusions?
- Barabanschikov and Marinova (2021) created dynamic face illusions using deepfakes, such as dynamic chimeras (illusory stimuli created by combining facial features or expressions from multiple individuals).
- For example, they combined the faces of Lily-Rose Depp and Scarlett Johansson.
how do we use deep fake to manipulate race?
- Haut et al. (2021) used AI to generate photos, and deepfakes to generate videos, of actors of different races while keeping the discourse, attractiveness, etc. the same. They presented these to participants alongside a story and asked them to rate how credible the actors appeared.
- They found no differences in credibility ratings for the AI-generated images, but significant differences for the deepfake videos.
how do we use deep fakes for experiments in the social sciences?
- Eberl et al. (2022) trialled the use of deepfake videos in psychological research.
- They presented students with presentation videos: half were real and featured attractive instructors, and the other half were deepfakes featuring less attractive instructors.
- They found that students did not detect the deepfake and did not notice any differences in terms of video quality.
- In contrast to the halo effect, they found a “beauty penalty”: the more attractive (original) instructors received lower scores on all skills except likeability and good preparation.