Attention Flashcards
Which different meanings are ascribed to attention?
- Vigilance: the ability to sustain performance on a critical task over long time periods (concentration).
- Selective attention: the selection of one or more stimuli among alternatives for in-depth cognitive processing (because cognitive resources are limited, not all sensory stimuli can undergo deep processing).
Why do we need attention?
- Full parallel processing: processing everything in the visual field in parallel is not feasible with limited cognitive resources.
- Selection for processing: selecting certain parts of the visual field for in-depth processing.
- Selection for action: selecting certain parts of the visual field for action, prioritizing some stimuli over others because of the bottleneck of the motor system (we can only do a limited number of things at once with our body; for example, the hand can only land on one specific point at a time).
What is overt orienting and how is that an indicator of attention?
Overt orienting: moving the eyes or head along with the focus of attention.
Fixation points: Under most natural conditions the point of gaze fixation coincides with the focus of attention (overt spatial attention).
Thus, fixation patterns are a good indicator of a person's current focus of attention.
What is covert attention? And how can this be explained?
Covert attention is the ability to select a region of visual space for in-depth processing that’s different from where we’re fixating (without moving the eyes): voluntary de-coupling of priority for processing (focus) from gaze (Helmholtz experiment).
Explanation: Eye movements and visual selective attention are controlled by highly similar networks in parietal (IPS) and frontal cortex (FEF).
This could reflect the voluntary and "artificial" suppression of eye movements in covert attention tasks.
What are two antagonistic demands (triggers) for attention?
Dorsal: Endogenous top-down orientation of attention (voluntary).
Localization: IPS/SPL, FEF
Ventral: Exogenous bottom-up capture of attention (involuntary)
Localization: TPJ (IPL/STG), VFC (IFG, MFG)
IPS - intraparietal sulcus; SPL - superior parietal lobule; FEF - frontal eye fields; TPJ - temporoparietal junction; IPL - inferior parietal lobule; STG - superior temporal gyrus; VFC - ventral frontal cortex; IFG - inferior frontal gyrus; MFG - middle frontal gyrus
What is Posner’s spatial cueing paradigm about?
Where is manipulation possible and where not?
Participants keep central fixation while a cue directs attention toward a peripheral location; a target then appears at the cued (valid) or an uncued (invalid) location, and reaction time is measured.
Cost-benefit analyses of attention show how resources are allocated: dedicating processing resources to one location (benefit: faster reactions and higher accuracy there) takes them away from unattended locations (cost: slower reactions there).
The distribution of attention can be manipulated, e.g. by the validity with which an endogenous cue predicts the location of a target, or by using an exogenous cue that "automatically" draws attention to a spatial location.
Endogenous cues can be ignored but exogenous cues cannot.
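The cueing logic and the typical cost/benefit pattern can be sketched in a few lines of Python (a toy illustration: the function name `posner_trial` and the millisecond values are made up for demonstration, not real data):

```python
import random

def posner_trial(cue_validity=0.8, rng=random):
    """One trial of a (hypothetical) Posner cueing task: a cue points
    left or right; with probability `cue_validity` the target appears
    at the cued location (valid trial), otherwise at the opposite one."""
    cue = rng.choice(["left", "right"])
    valid = rng.random() < cue_validity
    target = cue if valid else ("right" if cue == "left" else "left")
    return cue, target, valid

# Typical finding (illustrative numbers only): responses at attended
# locations are faster than at unattended ones.
MEAN_RT_MS = {"valid": 250, "neutral": 280, "invalid": 310}
benefit = MEAN_RT_MS["neutral"] - MEAN_RT_MS["valid"]   # attention speeds RT
cost = MEAN_RT_MS["invalid"] - MEAN_RT_MS["neutral"]    # withdrawal slows RT
```

With a highly valid cue it pays to shift attention, which is exactly what the cost-benefit pattern reflects.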
What are early and late selection?
Early selection: Influences processing of sensory inputs before the completion of perceptual analyses.
Example: Dichotic listening experiments show that channel separation is possible to some degree (if physical characteristics, like voices, are distinct enough).
According to Broadbent's filter model, a gating mechanism determines which of the limited information is passed on for higher-level analysis: messages from the input channels pass through a selective filter into a limited-capacity decision channel, where the information is either stored in long-term memory or responded to. The gating mechanism is needed at stages where processing capacity is limited.
→ Early selection happens prior to semantic analysis
→ oversimplification: we now know that the signal does not travel only one way; there is processing in both directions (feedforward and feedback).
Late selection: selection occurs only after the complete perceptual processing of the sensory inputs, at stages where the information has been recoded as a semantic or categorical representation. All incoming information is thus processed up to the level of meaning (semantics) before being selected for further processing.
Example: Cocktail party phenomenon.
→ counter-evidence to strict early selection: channel separation is possible to some degree, but the filter is not absolute; signals from unattended channels are merely attenuated, not blocked.
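The contrast between the two views can be sketched as a toy pipeline (all names here are hypothetical; `semantic_analysis` is just a stand-in for full perceptual/semantic processing):

```python
def semantic_analysis(message):
    """Stand-in for full processing up to the level of meaning."""
    return message.upper()

def early_selection(channels, attended):
    """Broadbent-style filter: gate on a physical attribute (the channel)
    BEFORE semantic analysis; only the attended channel is processed."""
    return semantic_analysis(channels[attended])

def late_selection(channels, attended):
    """Late-selection view: ALL channels are processed to the level of
    meaning; selection for report happens only afterwards."""
    processed = {ch: semantic_analysis(msg) for ch, msg in channels.items()}
    # Only under this scheme could salient unattended content (e.g. one's
    # own name at a cocktail party) break through, since it was analysed.
    return processed[attended]

channels = {"left_ear": "own name", "right_ear": "shopping list"}
```

Both functions report the same attended message; they differ in how much of the unattended channel gets processed before the filter applies, which is the crux of the early/late debate.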
Stages of selection (in cells)
Single-cell data show attentional modulation of responses (relative to the peak of the tuning curve):
- V1: small difference between attended and unattended conditions
- V4: considerably larger difference
→ evidence that selection occurs at later processing stages (stronger support for late than for early selection)
What is the computational mechanism underlying selective attention? (at the neuronal level)
Population histograms show that different cells can respond in many different ways to changes in spatial attention:
- Unspecific increase in firing rate (seen in only some cells; no general offset across the population)
- Narrowing of the tuning function, i.e. higher resolution (some cells)
- Stronger response amplitude (gain), higher SNR (most cells → predominant effect)
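The predominant gain effect, as opposed to narrowing, can be illustrated with a Gaussian tuning curve (a minimal sketch; the function and its parameter values are assumptions for illustration, not fitted to real recordings):

```python
import math

def tuning(orientation, preferred=0.0, gain=1.0, width=20.0):
    """Gaussian orientation-tuning curve (firing rate, arbitrary units).
    `gain` scales the amplitude; `width` (deg) sets the sharpness."""
    return gain * math.exp(-((orientation - preferred) ** 2) / (2 * width ** 2))

# Predominant attention effect: multiplicative gain -- same tuning shape,
# larger amplitude at every orientation, hence higher SNR.
unattended = tuning(10.0)
attended_gain = tuning(10.0, gain=1.5)
# Minority effect: narrowing of the tuning function (higher resolution),
# which REDUCES the response away from the preferred orientation.
attended_narrow = tuning(10.0, width=15.0)
```

Multiplicative gain preserves the cell's selectivity (the curve's shape) while boosting its signal, which is why it is the computationally cleanest account of the predominant effect.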
What role does the LGN play in attention?
O'Connor et al. (2002): Most projections arriving at the LGN come from cortex, not from the retina (most synapses onto LGN cells originate in cortex, not the retina).
→ attention effects even at this very early stage: attention enhances neural responses to attended stimuli, attenuates responses to ignored stimuli, and increases baseline activity in the absence of visual stimulation.
What is visual search and how is it processed?
Visual search is the task of detecting the presence or absence of a specified target object in an array of other, distracting objects; it requires selectively going through and processing the information in the display.
It is a mix of bottom-up processing (perceptual identification of objects and features) and top-down processing (holding the target in mind and endogenously orienting attention, e.g. finding the letter F in an array of other letters).
What are the two search types in visual search?
Parallel (pop-out) search: when the target differs from the distractors in a single feature, no item-by-item search is needed; the target pops out from the background information, and search time is largely independent of set size.
Serial (attentive) search: when the target is defined by a conjunction of features (combined stimuli), items must be inspected one by one, and the larger the set size, the longer the search takes. Brain processing differs between these two search types.
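The classic set-size signature of the two search types can be written as a one-line reaction-time model (illustrative only; the base RT and per-item slope are made-up parameters, not empirical values):

```python
def predicted_rt(set_size, search_type, base_ms=400, slope_ms=50):
    """Toy RT model: parallel ('pop-out') search is flat across set
    size, while serial search adds a fixed cost per inspected item."""
    if search_type == "parallel":
        return base_ms                       # target pops out regardless of set size
    return base_ms + slope_ms * set_size     # item-by-item scanning
```

Plotting RT against set size thus gives a flat line for feature (pop-out) search and a rising line for conjunction search, which is the standard behavioural diagnostic.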
What is Treisman's feature-integration theory about?
Feature integration theory (FIT) is a model of how attention selects perceptual objects and binds the different features of those objects (e.g., color and shape) into a reportable experience. Most of the evidence for it (and against it) has come from the visual search paradigm. For example, it is easier to find a blue 'T' in an array of red 'L's (the target differs in a single feature) than a blue 'T' in an array of red 'T's and blue 'L's (a conjunction of features that requires binding).
FIT is based on different higher-level visual processing areas:
From V1, information is sent to different areas, each with a dedicated map for one type of feature (a simplification), which are processed in parallel; attention binds this information together in a master map.
Only for one small location at a time can all of this feature information be accessed together (via the master map).
What is a saliency map?
A computational model of the different feature maps in the brain: there appear to be separate maps for features such as orientation, colour, shape, and intensity. The saliency map averages the local conspicuousness (feature contrast) in the visual field across all of these feature maps to determine where attention would be directed (a good predictor).
This is purely bottom-up, without any top-down endogenous attention. It explains why and where our attention is drawn when there is a feature contrast in our visual environment.
This is also why we can make sense of a watercolour painting even when the colour does not exactly follow the shapes: we can still make out what the objects are.
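The averaging idea can be sketched on tiny 2-D grids (a minimal illustration, not the Itti-Koch model: the crude 4-neighbour contrast operator and the map values are assumptions for demonstration):

```python
def local_contrast(feature_map):
    """Feature contrast at each cell: absolute difference from the
    mean of its 4-neighbours (a crude centre-surround operator)."""
    h, w = len(feature_map), len(feature_map[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = [feature_map[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = abs(feature_map[i][j] - sum(nbrs) / len(nbrs))
    return out

def saliency(*feature_maps):
    """Average the contrast maps of all features into one saliency map."""
    contrasts = [local_contrast(m) for m in feature_maps]
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(c[i][j] for c in contrasts) / len(contrasts)
             for j in range(w)] for i in range(h)]

# One colour singleton among uniform orientations: the colour contrast
# dominates the averaged map, predicting where attention is drawn.
colour = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
orientation = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
smap = saliency(colour, orientation)
```

The peak of `smap` falls on the colour singleton even though the orientation map is uniform, capturing the bottom-up, feature-contrast-driven nature of the prediction.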