1. Summary of the underlying theory to which your experiment relates
2. Description of the task and questionnaire, and the data you will receive
3. Hypotheses and statistical tests we would like you to run
4. Report structure (very briefly, as you'll get a separate talk)
- Evidence that people can recognize 5000 faces on average (but wide individual differences) – Jenkins et al., 2018
- Recognition is robust even if we haven't seen someone for a long time, or they've changed their appearance (hairstyle etc)
- People are also generally good at deriving other information from faces, such as emotion, but again there is evidence of individual differences – Hoffman et al., 2010
- How good we are is surprising, as we see a great many faces and their first-order configuration (nose in the middle, eyes above, mouth below) is always the same. Faces might instead be distinguished by:
- Individual features (nose, mouth, eye etc)
- Second order configuration (spacing)
- Holistic processing (integration of the multiple parts of a face into a single holistic representation)
- See Maurer et al., 2002 for further discussion.
- Many psychologists believe faces (particularly upright faces) and non-face objects are processed differently:
- Range of effects (composite effect, inversion effect etc) found in behavioural experiments with faces are not found (or not to the same extent) with non-face objects – Robbins & McKone, 2007
- Neuro-imaging studies show differences in activation (notably fusiform face area) – Kanwisher & Yovel, 2006
- It has been proposed that:
- object processing involves decomposition into parts or features (Biederman, 1987)
- faces are represented and recognised holistically (Tanaka & Farah, 2003), relying particularly on second-order configuration (Searcy & Bartlett, 1996)
- However, it has also been argued featural processing of faces has been underplayed:
- the emphasis on configural processing often relies on the assumption that inversion primarily impairs configural processing, but there is evidence that it also impairs featural processing (Murphy & Cook, 2017)
- Your experiment asks:
- Does sensitivity to configural differences in upright faces predict self-reported face recognition ability?
- Does sensitivity to featural differences in upright faces predict self-reported face recognition ability?
- Is this pattern the same for upright houses?
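Each of these questions can be tested as a simple association between a sensitivity score and the questionnaire score. A minimal sketch of what that looks like in Python (the data here are simulated and the variable names are placeholders, not the actual column names you will receive):

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated illustration only: d' in one condition and self-reported ability
rng = np.random.default_rng(0)
config_dprime = rng.normal(1.5, 0.5, 100)               # fake d' scores
self_report = config_dprime + rng.normal(0, 0.5, 100)   # fake questionnaire scores

# Pearson correlation: does sensitivity predict self-reported ability?
r, p = pearsonr(config_dprime, self_report)
print(f"r = {r:.2f}, p = {p:.3g}")
```

You would run the same kind of test separately for configural and featural sensitivity, and for faces and houses, using the real data.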
- Why is it relevant?
- Face processing is important for social interactions, and deficits could contribute to isolation etc
- As well as prosopagnosia (face blindness) as an extreme form, various groups may have some difficulties with faces – autism (Dawson et al., 2005) and older people (Ortega & Phillips, 2007)
- Could training help? If so, configural or featural training, and would it be limited to faces or extend to other objects?
- Could configural and featural processing differences be a diagnostic tool?
- Stimuli were houses and faces which differed either in features or configuration – Yovel & Kanwisher, 2004
- Stimuli were either upright or inverted, giving four conditions. This replicates Y&K, but we will only give you, and you should only analyse, the upright conditions
- You can say there were 160 trials embedded within a larger task involving inverted stimuli
- 80 trials for each condition:
- 20 pairs which differed in configuration but not features
- 20 pairs which differed in features but not configuration
- 40 pairs which were identical
- We’ve used a signal detection task and you will get data on sensitivity (d’)
- You don’t need to explain SDT in any detail in your report (hooray!)
- Key point – sensitivity (d’) is a measure of accuracy which is independent of response bias
- It takes into account both cases where you correctly saw there was a difference (“hits”) and where you correctly saw there was no difference (“correct rejections”)
- A superior measure to raw "hits" because, in theory, it doesn't matter whether you are biased towards or against reporting a difference.
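For illustration only (you won't need to compute it yourself), d′ is derived from the hit rate and the false-alarm rate, where false alarms are identical pairs wrongly called "different" (i.e. 1 minus the correct-rejection rate). A sketch using scipy; the function name is ours:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate).

    hit_rate: proportion of 'different' pairs correctly called different.
    false_alarm_rate: proportion of identical pairs incorrectly called
    different (1 - correct-rejection rate).
    """
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# e.g. 16/20 hits and 32/40 correct rejections (so 8/40 false alarms)
print(d_prime(16 / 20, 8 / 40))  # ~1.68
```

Because d′ combines both rates, a participant who says "different" on every trial gets lots of hits but also lots of false alarms, and their d′ correctly comes out near zero.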