Attention Flashcards
4 functions of attention
- Focusing: Limiting the number of items being processed
- Perceptual Enhancement: Increasing perceptual acuity for the selected stimulus
- Binding: Combining distinct features into a percept of a single object
- Sustaining Behaviour: Maintaining an action/thought in the presence of potential distraction
Early filter theory
You’re not processing the meaning of certain inputs (filter is before semantic processing)
Late filter theory
We process stuff and then decide if it’s worth attending to (filter is after semantic processing)
How we process information
Sensory inputs -> Sensory memory (doesn’t last long) -> Semantic processing breaks up inputs into meaningful units -> Conscious awareness
Dichotic listening
A technique in which different streams of auditory information are played to each ear using headphones
How is attention manipulated in dichotic listening tasks?
Attention is manipulated by asking subjects to “shadow” (repeat aloud) the content presented to one ear
How is processing of unattended information assessed?
Processing of unattended information is assessed by asking questions about the auditory stream which wasn’t shadowed:
- what did the voice say?
- what language was it in?
- was the voice male or female?
Cocktail party effect
Ability to focus on one source of auditory information among many competing sources (e.g. following a single conversation at a noisy party)
Evidence for late filters in dichotic listening tasks
- Moray found that 30% of subjects noticed if their own name was presented in the unattended stream.
- Treisman asked subjects to shadow one ear while a meaningful message switched between the ears; subjects followed the meaning into the unattended ear, showing that the unattended stream was processed for meaning
- Gray and Wedderburn found a similar bias using mixed streams of numbers and words
Attenuation model
- A theory in its own right, but also used to reconcile the early filter vs. late filter debate
- Treisman proposed an attenuation model in which unattended information isn’t completely blocked — like a leaky filter
- Attenuator operates early — before semantic processing; unattended information still makes its way over to semantic processing, but it’s a lot weaker than the attended information
What happens to unattended information in the attenuation model?
Unattended content can activate a dictionary unit (gain conscious awareness) if it is a:
- Permanent Priority (e.g. name, danger) - perpetually low threshold for activation
- Current Priority (e.g. continuation of sentence) - as the conversation goes on, thresholds decrease; depends on the context
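The attenuation mechanism above can be sketched as a toy model. Every strength and threshold value below is a made-up illustrative number, and `reaches_awareness` is a hypothetical helper of mine, not anything from Treisman's account; the point is only the mechanism: attenuated input still gets through when a dictionary unit's threshold is low enough.

```python
# Toy sketch of Treisman's attenuation model (illustrative numbers only).

ATTENDED_STRENGTH = 1.0    # attended stream passes the attenuator at full strength
UNATTENDED_STRENGTH = 0.3  # "leaky filter": weakened, but not completely blocked

# Hypothetical dictionary-unit activation thresholds:
THRESHOLDS = {
    "own name": 0.2,               # permanent priority: perpetually low threshold
    "danger word": 0.2,            # e.g. "fire!": also a permanent priority
    "sentence continuation": 0.25, # current priority: context lowers the threshold
    "ordinary word": 0.8,          # high threshold: attenuated input can't reach it
}

def reaches_awareness(word: str, attended: bool) -> bool:
    """A word gains conscious awareness when its (possibly attenuated)
    signal strength meets the word's dictionary-unit threshold."""
    strength = ATTENDED_STRENGTH if attended else UNATTENDED_STRENGTH
    return strength >= THRESHOLDS[word]
```

In this sketch an unattended ordinary word stays below threshold, but your own name in the unattended stream still breaks through, which is the classic cocktail-party observation.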
Load and capacity theories
- Lavie argued that how we process unattended information is determined by: processing capacity and perceptual load
- This means that the degree to which you process unattended information depends on an interaction between your capacity and primary task difficulty - processing load matters
Processing capacity
The amount of information that people can simultaneously process
Perceptual load
The perceptual demands of a task: a harder or more cluttered primary task imposes a higher load and consumes more capacity
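Lavie's interaction between capacity and load can be written as a one-line toy formula, under my own simplifying assumption that capacity and load sit on a common arbitrary scale:

```python
def leftover_for_distractors(capacity: float, perceptual_load: float) -> float:
    """Toy version of Lavie's load theory: whatever capacity the primary
    task's perceptual load does not consume spills over to unattended
    (distractor) stimuli; leftover capacity can never go negative."""
    return max(0.0, capacity - perceptual_load)

# Low-load task: capacity spills over, so distractors get processed.
# High-load task: nothing is left over, so distractors are effectively filtered.
```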
What makes us attend?
- Exogenous attention
- Endogenous attention
Exogenous attention
- Automatic deployment of attention to a salient aspect of environment
- e.g. a loud noise or sudden motion
- fast and obligatory
Endogenous attention
- Conscious deployment of attention to a behaviourally important target
- e.g. a lecture, reading, a conversation in a noisy environment
- slower and requires effort
How do we deploy attention?
- Overt attention
- Covert attention
Overt attention
Shifting attention by focusing the eyes on a target
Covert attention
Shifting attention without focusing the eyes on a target — attention is directed to a different region of space than the eyes
How did Helmholtz present empirical proof that we can covertly attend?
- Helmholtz (1894) provided empirical proof that we can covertly attend by holding his fixation constant but moving his attention
- After a brief display, he could report more letters from the attended location than from the unattended location
Posner’s precuing paradigm
Fixate -> covertly attend to cue -> respond to target
3 types of trials in Posner’s precuing paradigm
- Neutral- target location was not indicated
- Valid- arrow directed attention to the correct location (80%)
- Invalid- arrow directed attention to the incorrect location (20%)
Solutions to load/capacity problems
- You can train yourself to have a higher capacity
- You can figure out ways to make certain tasks easier
Posner’s precuing - What are the independent and dependent variables?
Independent variable - cue type (three conditions: valid, invalid or neutral)
Dependent variable - reaction time
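The design above can be simulated. The mean reaction times and noise level below are hypothetical values I chose so that the qualitative pattern (valid < neutral < invalid) comes out; they are not Posner's actual data.

```python
import random

# Hypothetical mean reaction times (ms) per cue type; the ordering
# valid < neutral < invalid is the qualitative result of the paradigm.
MEAN_RT = {"valid": 250, "neutral": 280, "invalid": 310}

def run_trial(rng: random.Random):
    """One trial: pick a cue type (here, 20% of trials carry a neutral cue;
    among arrow-cue trials, 80% are valid and 20% invalid), then sample a
    noisy reaction time around that condition's mean."""
    if rng.random() < 0.2:
        cue = "neutral"
    else:
        cue = "valid" if rng.random() < 0.8 else "invalid"
    rt = rng.gauss(MEAN_RT[cue], 20)  # trial-to-trial noise
    return cue, rt

def simulate(n_trials: int = 5000, seed: int = 1):
    """Run many trials and return the mean RT per cue type."""
    rng = random.Random(seed)
    totals, counts = {}, {}
    for _ in range(n_trials):
        cue, rt = run_trial(rng)
        totals[cue] = totals.get(cue, 0.0) + rt
        counts[cue] = counts.get(cue, 0) + 1
    return {cue: totals[cue] / counts[cue] for cue in totals}

means = simulate()
# Attention at the cued location speeds responses (validity benefit);
# reorienting from the wrong location slows them (invalidity cost).
print(means)
```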
Does Posner’s precuing paradigm test exogenous or endogenous attention? How could you change the task to study the other form of attention?
- Endogenous - you need to deploy your covert attention yourself
- You can convert this to exogenous by replacing the central arrow with a salient peripheral cue, e.g. a brief flash of light at one of the possible target locations
Why are there more valid than invalid trials in Posner’s precuing?
If valid and invalid trials were equally likely, the arrow would carry no information about where the target will appear (it would effectively be a neutral cue), so subjects would have no reason to shift attention to the cued location
Why do you need a neutral condition in Posner’s precuing?
- Neutral condition gives you a point of reference/baseline - lets you see whether reaction time decreases or increases
- Allows you to interpret the results (having attention in right spot makes you faster + wrong spot, slower and worse than not having your attention anywhere at all)
What do the results tell us about the time taken to deploy attention? How can you test this timing more directly?
With an invalid cue, you need to disengage attention from the wrong location and refocus it on the target, so it takes additional time. You can test this timing more directly by varying the interval between the onset of the cue and the onset of the target on the screen (the stimulus onset asynchrony, SOA)
Attention as a spotlight
- An early metaphor for attention was that it was a spotlight that moved across space and lit the attended location
- Consistent with Posner’s findings - if attention was in the right location then processing was faster than if it was in the wrong location
- But does attention always need to be allocated to space or can it also be allocated to objects? (Spatial vs. object based theories)
Scanning a scene with eye movements
Necessary, because we only have good detail vision for things we are looking at directly
Central vision
- The area you are looking at
- Objects here fall on the fovea (has better vision)
Peripheral vision
- Everything off to the side
- Area not on fovea falls on peripheral retina
Fixation
Pausing of the eyes on places of interest while observing a scene
Saccadic eye movement
Rapid jerky eye movement from one fixation point to another - we move our eyes about 3 times per second
Stimulus salience
Physical properties of the stimulus (e.g. contrast, colour, motion) - an example of a bottom-up process
Salience map
Map of a scene that indicates the stimulus salience of areas and objects in the scene
Scanning based on cognitive factors
- Variations in how people scan scenes
- Example of top-down/cognitive processing
- Scene Schema – person’s knowledge about what’s likely to be contained in a scene
- Guides attention to different areas of scene
Scanning based on task demands
- Tasks: people shift attention from one place to another as they’re doing things
- E.g. Driving
- Timing of when people look at specific places = determined by the sequence of actions involved in the task
Just in Time Strategy
- Eye movements occur just before we need the information they will provide
- When eye movements & fixations = closely linked to the action a person is about to take
- Eye movement comes before a motor action by a fraction of a second
What do scanning based on tasks and scanning based on cognitive factors have in common?
Evidence that scanning is influenced by people’s predictions about what is likely to happen
Inattentional blindness
When you don’t perceive something in your environment that is clearly present
Change blindness
When you don’t notice more subtle changes to your environment after a brief delay
Differences between inattentional and change blindness
- Change detection requires attention to the changed part of the image plus memory for what it looked like before
- You can overcome inattentional blindness by simply attending to the ignored stimulus
Why do some tasks require more attention?
- Processing load matters
- More processing steps require more attention
- Holding more content in mind requires more attention
Automaticity in tasks
- Practice can decrease the capacity that a task requires
- With enough practice a task that requires a lot of attention can become automatic
- Truly automatic tasks can be performed at the same time as another task without hurting the performance of other task — they can be performed in parallel
- Reading and driving are examples of two tasks that become nearly automatic with time
Stroop effect
- Word reading interferes with colour naming:
- Reading is virtually automatic for adults
- Becomes obligatory - hard to suppress even if it is not the right thing to do
- The reverse isn’t true! Only reading is automatic, not colour naming
Conditions of Stroop effect
- Conflict trials: colour word printed in a different ink colour (e.g. RED in blue ink)
- Congruent trials: colour word printed in its own ink colour (e.g. BLUE in blue ink)
- Neutral trials: non-colour word (e.g. CAT)
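A small trial generator makes the three conditions concrete. The list of neutral words is a hypothetical choice of mine; the subject's task would be to name the ink colour while ignoring the printed word.

```python
import random

COLOURS = ["red", "blue", "green"]
NEUTRAL_WORDS = ["CAT", "DOG", "CUP"]  # hypothetical non-colour words

def make_trial(condition: str, rng: random.Random):
    """Return (word, ink_colour) for one Stroop trial.
    The task is to name the ink colour, not to read the word."""
    ink = rng.choice(COLOURS)
    if condition == "congruent":   # word and ink match, e.g. BLUE in blue ink
        return ink.upper(), ink
    if condition == "conflict":    # word names a different colour, e.g. RED in blue ink
        word = rng.choice([c for c in COLOURS if c != ink])
        return word.upper(), ink
    if condition == "neutral":     # non-colour word, so reading cannot conflict
        return rng.choice(NEUTRAL_WORDS), ink
    raise ValueError(f"unknown condition: {condition}")
```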
Feature integration theory
Treisman proposed that features are processed in parallel without attention; attention is required to bind the features of an object together
Visual search tasks
- Have been used to test when attention is needed
- Many objects are presented in a display
- The job is to detect whether or not a target is present
Simple Visual Search
- The target can be identified by a single feature
- The amount of time that it takes to identify a target is independent of the number of distractors
- Implies that all objects are processed at once - a parallel process
Conjunctive Visual Search
- A combination of features is required to identify a target
- The amount of time that it takes to identify a target increases with the number of distractors
- Implies that objects are inspected one at a time - a serial process
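The two search patterns can be contrasted with a toy reaction-time model. The baseline and per-item inspection costs below are hypothetical numbers, chosen only to show a flat function versus an increasing one.

```python
BASE_RT = 400      # ms: hypothetical baseline (perception + response)
MS_PER_ITEM = 40   # ms: hypothetical cost of serially inspecting one item

def simple_search_rt(set_size: int) -> float:
    """Feature (simple) search is parallel: all items are processed at
    once, so RT does not depend on the number of distractors."""
    return float(BASE_RT)

def conjunctive_search_rt(set_size: int) -> float:
    """Conjunctive search is serial: on target-present trials the target
    is found, on average, after inspecting half of the items."""
    return BASE_RT + MS_PER_ITEM * set_size / 2

for n in (4, 8, 16):
    print(n, simple_search_rt(n), conjunctive_search_rt(n))
```

The flat versus linearly increasing RT-by-set-size functions are the signatures used to infer parallel versus serial processing.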
Illusory conjunctions
A situation, demonstrated in experiments by Anne Treisman, in which features from different objects are inappropriately combined, even when the stimuli differ greatly in size and shape. Supports FIT. However, although the effect is usually bottom-up, it fades when context is given (e.g. an object described as a carrot will almost always be perceived as orange)
Feature Integration Theory challenges
- The degree to which targets ‘pop out’ depends on which features are combined (size and colour are easy to put together)
- Wolfe et al. argued that visual search always has two stages:
1. First stage is parallel and identifies potential targets
2. Second stage is serial: sequentially evaluates each target
Balint’s syndrome
A condition caused by brain damage in which a person has difficulty focusing attention on individual objects.
The case of R.M.
He had Balint’s syndrome and since he couldn’t focus on individual objects, he combined features incorrectly. Supports FIT.