213 Final Flashcards
encoding
initial processing of information so that it is represented in the nervous system (creating memory traces)
storage
retention of encoded information through consolidation
retrieval
ability of the brain to access stored information to use for some cognitive purpose - a cue (internal or external) triggers part of a memory trace, then you recall the rest
capacity
how much information can be stored in a memory system
duration
how long information remains in memory
modal model of memory (Atkinson and Shiffrin)
we have three types of memory: sensory, short-term, and long-term which each have their own capacities and durations
sensory memory according to the modal model of memory
large capacity, short duration - the sensory system holds information in place before it can be selected for further processing
temporary, automatic, no conscious effort required
short-term memory according to the modal model
smaller capacity than sensory, but longer duration (15-30 seconds) - STM can produce a behavioural output and transfer information to LTM
what is maintenance rehearsal and its function?
mental repetition of information without distraction, used to prolong the duration of information in STM
long-term memory according to the modal model
storage for information to be retrieved into STM and used for some cognitive function
persistence of vision
an image of a stimulus remains in our visual system after that stimulus has gone
iconic memory
Sperling’s letter grid experiment
partial report or whole report conditions of a grid of letters = could recall more of the grid in the partial report condition
types of sensory memory
iconic: visual (afterimages)
echoic: auditory (to help us separate streams of sound quickly)
haptic: touch (useful for gripping and grasping)
gustatory: taste
olfactory: smell
types of LTM
implicit: non-conscious, non-declarative
explicit: consciously accessible, declarative
visual capacity of STM
classically 7 +/- 2 chunks; more recent visual estimates are 3-5 chunks
auditory capacity of STM
7 chunks
what is chunking and what does it depend on?
combining information into larger groups of meaningful units, depends on LTM (matching to memory), increases with expertise (chess novices vs. experts)
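Chunking can be sketched in code: with a (hypothetical) inventory of known patterns standing in for LTM, the same letter stream compresses into far fewer meaningful units - an "expert" with more stored patterns forms bigger chunks.

```python
# A minimal sketch of chunking; KNOWN_CHUNKS is an assumed stand-in for
# patterns stored in LTM, not any standard stimulus set.
KNOWN_CHUNKS = {"FBI", "CIA", "NASA", "USA"}

def chunk(stream: str) -> list[str]:
    """Greedily match the longest known chunk; fall back to single letters."""
    units, i = [], 0
    while i < len(stream):
        match = next((c for c in sorted(KNOWN_CHUNKS, key=len, reverse=True)
                      if stream.startswith(c, i)), None)
        if match:
            units.append(match)
            i += len(match)
        else:
            units.append(stream[i])
            i += 1
    return units

print(chunk("FBICIAUSA"))  # 3 chunks instead of 9 letters
print(chunk("XFBIX"))      # ['X', 'FBI', 'X']
```

With an empty inventory (a novice), the same stream falls apart into 9 single-letter units - which is why experts can hold more in the same 3-5 chunk capacity.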
mnemonists
people with the ability to form large chunks
decay theory of forgetting
over time information leaks out
interference theory of forgetting
information learned before encoding, or between encoding and retrieval, disrupts retrieval
proactive interference (old information causes you to be unable to learn new information) and retroactive interference (new information causes you to forget older information)
examples of proactive interference and retroactive interference
pro: getting a new phone number and being unable to remember it because you keep typing in your old phone number
retro: learning a new model in psychology and being unable to remember the one it contradicted
articulatory suppression
repeating an irrelevant word to prevent rehearsal
working memory model of STM
three interconnected subunits: visuo-spatial sketchpad (visual component), phonological loop (audio component), central executive (coordinates other components and filters out distractors)
how does the working memory model explain the age decline in memory?
decline in the central executive instead of memory stores; becomes less effective at filtering out distractors
evidence against the initial working memory model
binding problem
a coherent story is better remembered = phonological loop interacts with LTM, so they are not completely separate sensory codes and instead interact with each other
episodic buffer
added component of the WM model to account for the integration of information in different stores (sketchpad, phonological loop, LTM) and is controlled by the central executive
brain regions associated with the WM model
occurs all over the brain (whichever sensory experience is involved)
dorsolateral PFC could be the central executive
episodic buffer in the parietal lobe
phonological loop in Broca’s and Wernicke’s
visuo-spatial sketchpad in the occipital lobe
attentional control in the anterior cingulate cortex
function of the hippocampus
encoding memories of complex events as patterns of activity across the cortex (depending on the nature of the memory)
over time the memory trace can become independent from the hippocampus
types of implicit memory
procedural
priming
types of explicit memory
semantic
episodic
afterimages
positive: represents the perceived image
negative: inverse of the perceived image (colours are inverted)
serial position effects and their mechanisms
primacy effect: information presented first is better remembered because of increased rehearsal = benefits from LTM processes
recency effect: final information is better remembered because it is stored in STM (increase delay to more than 30 seconds eliminates the effect)
evidence for dissociable WM memory stores
neuroimaging studies: different active brain regions for verbal and visual tasks
double dissociation in neuropsychological cases: patients have selective deficits to STM regarding visual-spatial and verbal tasks
mechanisms of the phonological loop
phonological store: passive storage for verbal information (“inner ear”)
articulatory control loop: active rehearsal of verbal information (“inner voice”), converts written material to sounds
mechanisms of the visuo-spatial sketchpad
visual cache: stores feature information (colours, form), passive
inner scribe: holding and working with information about sequence, movement, spatial location; active (processing changes)
Ebbinghaus’ experiment
tested how nonsense syllables (no access to knowledge) were retained and forgotten over time
study syllables without inflection, at a constant slow pace
developed the forgetting curve: exponential (memory loss is largest early on, then decreases)
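The forgetting curve's shape can be illustrated with a simple exponential; the stability parameter here is illustrative, not Ebbinghaus' fitted value.

```python
import math

# Toy exponential forgetting curve: retention drops fastest right after
# encoding, then levels off. The stability s is an assumed value.
def retention(t_hours: float, s: float = 20.0) -> float:
    """Fraction of material retained after t hours."""
    return math.exp(-t_hours / s)

for t in (0, 1, 24, 168):
    print(f"{t:>4} h: {retention(t):.2f}")
```

The first hour loses more than any later hour - exactly the "loss is largest early on" pattern the card describes.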
ways to slow the forgetting curve
active rehearsal: speaking and working with the syllables
spacing effect: taking breaks between encoding sessions and varying the review sessions (differences in shorter bursts)
levels of processing theory
how we encode information affects whether we’re going to forget it (the strength of the memory)
shallow processing
focus on sensory features = likely to forget that information
deep processing
integrating higher-level information (meaning, evaluating, making connections to prior knowledge) = better memory
deep and shallow encoding of faces experiment
upright/inverted faces: focus on sensory features = shallow processing (better memory for upright because of holistic processing in FFA)
actor/politician faces: links to prior knowledge = deep processing (better memory than upright/inverted)
naming mnemonic
using acronyms like ROY G. BIV to remember the colours of the rainbow
story mnemonic
creating a story out of a list of words
method of Loci
associating pieces of information with a location/visual image (mind palace)
visceral/emotional aspects are better remembered
leaves a neural imprint - different neuronal connections
types of deep encoding
self-reference effect: information attached to oneself
generation effect: generated content is better remembered than passively read
encoding-specificity hypothesis
memory retrieval is better when there is overlap with the encoding context (context can act as a retrieval cue - internal state and external environment - includes state- and context-dependent memory)
state-dependent learning
mental and physiological states match at encoding and recall = better retrieval (sober-sober and drunk-drunk conditions)
external environment effect on retrieval experiment
deep-sea divers were better at recalling learned words when external environment matched (underwater-underwater or land-land conditions)
episodic memory
encoding and recalling unique events within temporal and spatial context (re-experiencing)
semantic memory
information you know without remembering the context in which you learned it (societally shared general knowledge)
includes facts about yourself
semantic dementia
episodic memory is preserved (can copy images from memory)
cannot access concept knowledge, faces, names, words, functions of objects
common in Alzheimer’s
temporal poles are damaged, anterior temporal lobe
hippocampal damage
episodic memory dependent on the hippocampus - damage to it impairs ability to copy images after a delay, but semantic memory is preserved
anoetic consciousness
implicit memory: no conscious awareness (tying shoes, riding a bike)
no awareness and no personal engagement
noetic consciousness
semantic memory: aware that you’re consciously accessing information, but you don’t recall where/how you learned it
awareness without personal engagement
autonoetic consciousness
episodic memory: mental time travel to context to remember how you learned information
awareness and personal engagement
personal semantics
an intermediary type of semantic memory that involves information about the self and things that occur repeatedly in your life (autobiographical facts - my brother's name; repeated events - I walked my brother to school every day)
reappearance hypothesis
idea that memories are encoded a certain way and stay that way (will be recalled the same way)
people with PTSD were recalling highly emotional events the same way = appears fixed
flashbulb memories
vivid memories of significant public events (emotionally arousing), retrieving specific details about time and place
still reconstructed memories - details change but vividness and confidence in them increase
evidence for the reconstruction of flashbulb memories
OJ Simpson trial: recollections changed and people experienced major distortions
people closer to the World Trade Center on 9/11 had more vivid, detailed, confident memories
how to construct an episodic memory trace
hippocampus binds together details processed in different brain areas and re-activates them at retrieval (may bring forth different combinations of details = memory changes)
consolidation vs. reconsolidation
consolidation is the initial storage from STM to LTM
once a trace gets re-activated, it is unstable and subject to change
it must be re-consolidated back into LTM which can alter the neural network
applications of reconsolidation
because memories become unstable, the memory could be changed/erased - eliminating fear responses in PTSD and phobias
role of schemas in memory
can lead to distortions based on your expectations
can lead to false memories - falsely endorsing a recollection of a schema-consistent lure
War of Ghosts experiment
Ps read an unfamiliar Native American folk story (did not match schema-consistent Western story structure) - their recall changed to match their schema (lost details over time, omitted strange details and altered others to become more conventional)
engaged in assimilation
Roediger-McDermott paradigm
semantically-related lures are falsely reported to be part of episodic memories (influence of semantic memory on episodic)
how are false memories formed?
familiar feeling = incorrect associations
altered at retrieval by context, suggestion, misinformation
misattribution effect
retrieving familiar information from the wrong source (match context to the wrong memory)
misattribution of familiarity (thinking your prof works at your grocery store because they seem familiar)
misinformation effect
leading questions lead to false memories: ‘contacted’ vs. ‘smashed’ = details added to original memory
adaptive functions of reconstructive memory
we can reconstruct and form hypothetical situations in our mind (planning the future)
decision-making, creativity, problem-solving
overlap in neural activity during recollection and imagining the future
retrograde amnesia & Kayla Hutchison
events leading up to the brain injury are lost (typically loss of personal memories, not semantic and self-identity) - Kayla Hutchison also lost language and basic skills and semantic knowledge
anterograde amnesia
unable to encode new memories after a brain injury
Patient H.M. and Clive Wearing
HM: bilateral lobectomy of the medial temporal lobes - able to form procedural memories (non-declarative memory depends on the basal ganglia)
cognitive abilities were intact
STM was fine
Wearing encephalitis = hippocampal damage - intact piano-playing, language, proper behaviour, facts about the world
Patient K.F. and Alzheimer’s patients
KF - damage to STM systems (which are not the hippocampus)
Alzheimer’s show less connectivity between PFC and hippocampus, damaged STM/WM
transfer-appropriate processing
retrieval depends on whether the cue matches the way information was encoded + how well it was encoded (“what word rhymed with bat?” cued-recall condition vs. free-recall)
spacing effect and testing effect
information is better remembered if it is presented over multiple spaced-out periods
information is better remembered when asked to retrieve it on your own than passive exposure
brain regions associated with episodic and semantic memory
episodic: occipital and temporal (sensory details)
semantic: frontal and parietal (executive function and decision-making, abstracted representations)
procedural memory and its associated brain regions
learned abilities to perform an automatic behavioural action (more immune to forgetting)
basal ganglia refines action sequences and shapes habits
PFC organizes procedures and monitors them
prejudice as a type of memory
implicit; inclination to automatically judge something positively/negatively based on past experience
familiarity effect of prejudice and its relation to propaganda
more likely to judge something positively if you have encountered it before
propaganda: people more likely to endorse a statement as true if they have heard it before (even if told it is false)
conditioning as a type of memory
implicit; making stable, long-term connections, fear learning & phobias (associations remain despite explicit memory being forgotten)
relies on structures in the limbic system other than the hippocampus
synaptic consolidation
within the synapses: long-term potentiation (structural changes like the number of receptors or NTs released)
stable change that occurs quickly
systems consolidation
making new connections between neurons in the cortex (relies on the hippocampus; hippocampal replay)
more permanent than synaptic consolidation
hippocampal replay
sequence of brain activity is replayed after initial encoding
what is the function of the medial prefrontal cortex in episodic memory?
activates schemas and prior knowledge to integrate within an episodic memory - acts as a scaffold onto which details are added
important for memory integration and making inferences about the world
formation of habits
initially depend on explicit memory but with training and exposure will become implicit
can be motor sequences or repetitive thoughts, emotions
requires the striatum
how to extinguish a habit
inhibit the PFC, replace the habit behaviour with another behaviour (changing/removing the reward doesn’t work) - rats t-maze experiment
priming
prior exposure facilitates processing without awareness
implicit emotional responses
fear responses
amygdala is critical for this type of memory (Free Solo movie - amygdala needs a higher level of stimulation to be activated and produce a fear response)
spreading activation in a semantic network
automatic activation spreads to interconnected concepts and features, semantically related concepts also become activated
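Spreading activation can be sketched as decaying propagation through a small graph; the network and decay factor below are toy assumptions, but the behaviour matches the card: activating one concept partially activates its semantic neighbours.

```python
from collections import deque

# Toy semantic network (made-up links) for a spreading-activation sketch.
NETWORK = {
    "robin":  ["bird"],
    "bird":   ["robin", "canary", "animal", "wings"],
    "canary": ["bird"],
    "animal": ["bird", "dog"],
    "wings":  ["bird"],
    "dog":    ["animal"],
}

def spread(source: str, decay: float = 0.5, depth: int = 2) -> dict[str, float]:
    """Breadth-first spread: each hop passes on `decay` of the activation."""
    activation = {source: 1.0}
    frontier = deque([(source, 1.0, 0)])
    while frontier:
        node, act, d = frontier.popleft()
        if d == depth:
            continue
        for nb in NETWORK[node]:
            passed = act * decay
            if passed > activation.get(nb, 0.0):
                activation[nb] = passed
                frontier.append((nb, passed, d + 1))
    return activation

print(spread("robin"))  # 'bird' gets 0.5; two-hop neighbours get 0.25
```

This residual activation on neighbours is one way to model semantic priming: "canary" is already partly active after "robin", so it is verified faster.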
structure of semantic representations in the brain
modality-specific aspects (action, sound, emotion, colour) and abstracted representations in convergence zones (inferior and lateral temporal lobes and inferior parietal cortex)
Ribot’s law
retrograde amnesia is temporally graded; most recent memories are more affected than more remote memories
dissociative amnesia
retrograde amnesia for episodic memories and autobiographical knowledge - leads to shifts in lifestyle (new identity)
usually a response to psychological trauma
hypometabolism in lateral PFC = impaired executive processes, difficulty accessing stored memories (but they are there)
dementia and memory loss
Alzheimer’s begins with cell death in the medial temporal lobes (hippocampus) = episodic memory deficits, then spreads to other parts of the cortex
offset and treatment of Alzheimer’s
sleep, bilingualism, engaging the brain in a variety of activities can offset progression
music can help management of symptoms (alternate procedural memory pathway)
symptoms of semantic dementia
anomia: loss of word meaning and finding
cannot name functions of objects, calls objects ‘thingies’
cannot access details about concepts (all four-legged animals become dogs)
healthy aging effects on memory
brains shrink, mostly the frontal cortex and hippocampus
implicit and semantic memory are intact
episodic is impaired
deficits in general cognitive processing: lower processing speed, difficulty inhibiting distractors
associative-deficit hypothesis and evidence
problems encoding and retrieving associations
no trouble recognizing someone, but difficulty knowing where they come from (accessing episodic memory)
not due to attentional problems: younger adults still outperform older in a face-name association task while being distracted
evidence of adaptive cognitive aging
high-performing memory older adults recruited the bilateral PFC whereas young adults and low-performing OA recruited the right PFC (neural compensation for deficits)
evidence of individual differences in episodic memory
taxi drivers had increased grey matter of posterior hippocampi (smaller anterior) and better spatial memory (related to years of experience)
HSAMs
Highly Superior Autobiographical Memory
greater accuracy for episodic memories without using strategies (no increased abilities for other types of memory)
memory is still constructive, they just have more details to work with
how to test for HSAM
dates quiz: ask something you can verify for a particular day (weather, day of the week)
public events quiz: when did a particular event happen?
downsides of perfect memory
more prone to imagining the future and constantly replaying the past
higher prevalence of OCD
difficult to form social networks because of a disconnect from peers
overly focused on small details = problems recognizing faces, cannot focus on general concepts
imagery
mentally recreating a sensory stimulus in the absence of that stimulus
Paivio’s dual-coding theory
human knowledge is represented by a verbal system (abstract code) and a nonverbal/imagery system (an analog code)
imagery debate
does imagery use a picture-like code (Kosslyn) or a symbolic code (Pylyshyn)?
depictive representation (Kosslyn)
analog code that maintains perceptual and spatial characteristics of objects
direct view: knowledge is represented in both mental images and linguistic code
Kosslyn’s mental scanning technique
going from bottom to top in a mental image (roots to petals) - RT is longer when physical distance increases
mental rotation
time taken to match a target object increases when you have to mentally rotate it
mental scaling
using relative size of objects to see if we have to mentally zoom into pictures to answer questions about details - yes
evidence for depictive representations
mental scanning, rotation, scaling
both imagery and perception share the same mechanisms and interfere with each other (visual imagery with visual perception)
imagery can also facilitate perception
imagery is susceptible to visual illusions
descriptive representations (Pylyshyn’s propositional theory)
symbolic codes that convey abstract conceptual information (do not preserve perceptual features)
relies on propositions, imagery is an epiphenomenon (indirect representation of knowledge)
falsification studies of depictive representations
some component shapes of an image weren’t identified as belonging to the original stimulus = not an image
previous studies may have relied on experimenter expectancy and demand characteristics
mental scanning: Ps could be searching through lists of words
neuropsychology cases where perceptual abilities are damaged, but imagery is still fine
brain areas associated with imagery
modality-specific sensory processing areas (other sensory brain areas get deactivated during imagery but not perception)
frontal lobe and other complex thought mechanisms (memory, planning, attention) could be sending top-down signals to early processing areas
generative adversarial networks
computers create realistic images which a discriminator network has to distinguish from original images
picture superiority effect
using imagery leads to better recall
concreteness effect
better recall for concrete words rather than abstract (effect is eliminated when people cannot imagine the concrete words)
imagery’s role in anxiety
increased negative imagery of future events
imagery’s role in PTSD
negative intrusive imagery
imagery’s role in depression
decrease in frequency and vividness of positive imagery
imagining suicidal acts increases risk of suicide
imagery as a treatment for mental disorders
replace negative memories with neutral/positive ones
assessing individual differences in imagery ability
vividness of visual imagery questionnaire: object imagery
paper folding test: spatial imagery
congenital aphantasia
inability to form visual images
hyperphantasia
extremely vivid mental imagery (associated with better autobiographical memory)
vividness of mental images and individual differences
familiarity = more vivid
expertise = more vivid (musicians have more vivid auditory imagery of music)
visualizers vs. verbalizers
visualizers recall past events with images, verbalizers with words
both use visual imagery equally, but verbalizers use more auditory imagery
heard vs. imagined timbre experiment for imagery
Ps asked to judge whether a heard tone is different from an imagined tone on a different instrument = faster RT when both timbres matched (similar to the perceptual task, though the effect isn't as strong)
so imagery and perception share brain mechanisms
imagery feedback piano-playing experiment
Ps either got all feedback during training, only auditory, only tactile, or no feedback
recall decreases as amount of feedback decreases (but people high on auditory imagery had better recall in the tactile feedback only condition = able to compensate for the lack of auditory feedback)
chromesthesia linked to memory
sound linked to colour - memory aid (people with absolute pitch said their chromesthesia helped determine pitch)
amusia and imagery
tone-deafness - deficits in visual/spatial imagery (higher score on tone-deafness = more errors in a mental rotation task) - shows that types of imagery interact
schematic knowledge
general background gained through experience
category
set of items that are perceptually, functionally, or biologically similar
exemplar
item within a category
concept
mental representation of an object, idea, event (the reason why we group things as part of a category)
commonsense knowledge problem
humans have implicit knowledge, but it has to be explicit in computers (so they don’t have the same common sense)
classical view of categorization
category membership is determined by defining features which are sufficient and necessary
defining vs. characteristic features
defining: necessary and sufficient
characteristic: common but nonessential
works well for simple concepts but not ambiguous ones or ones that are subject to variability
against the classical view of categorization
theoretical: defining features are difficult to pinpoint
complex and changing stimuli = you have to change your defining features or exclude certain exemplars (three-legged dog)
typicality effects cannot be explained
typicality effects
we are faster to ascribe membership to typical exemplars of a category
we name them first as part of a category
infants recognize typical exemplars first
when primed with a typical exemplar, RT is faster for typical exemplars than atypical
prototype/probabilistic theory of categorization
similarity-based approach, treats concepts as context-independent
characteristic features are stored as an abstraction (average and most typical)
family resemblance
at least one feature is shared with another member, but not necessarily shared among all members
issues with prototype theory
doesn’t explain the context-dependent typicality effects (which bird is more typical depends on your environment)
doesn’t explain how to account for atypical members of a category (penguin)
exemplar theory
similarity-based approach
we store actual examples of items we’ve previously encountered (depends on past experience - explains context-dependence of typicality effects)
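The prototype/exemplar contrast can be made concrete with a toy classifier; the 2-D feature vectors below are made up, with one "penguin-like" bird deliberately far from the bird average.

```python
# Toy contrast between prototype and exemplar categorization (made-up data):
# prototype compares an item to each category's average; exemplar compares
# it to every stored instance.
BIRDS = [(0.9, 0.9), (0.8, 1.0), (1.0, 0.8), (0.1, 0.2)]  # last one: penguin-like
FISH  = [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def prototype_classify(item, cats):
    # Each category is reduced to one abstracted average (the prototype).
    protos = {name: tuple(sum(v) / len(v) for v in zip(*ex))
              for name, ex in cats.items()}
    return min(protos, key=lambda n: dist(item, protos[n]))

def exemplar_classify(item, cats):
    # Every stored instance counts; the nearest single exemplar wins.
    return min(cats, key=lambda n: min(dist(item, e) for e in cats[n]))

cats = {"bird": BIRDS, "fish": FISH}
atypical = (0.15, 0.18)  # resembles the stored penguin exemplar
print(prototype_classify(atypical, cats))  # 'fish' - averaged prototype misses it
print(exemplar_classify(atypical, cats))   # 'bird' - nearest exemplar is the penguin
```

The divergence on the atypical item is the point: exemplar storage handles penguins because the penguin itself is stored, while the averaged prototype pulls the judgment the wrong way.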
what is not explained by prototype and exemplar theories?
we give typicality ratings to items that have clearly defined rules (3 is ‘more odd’ than 447)
both are based on comparing similarities - how do we decide which features to compare?
knowledge-based theories of categorization
based on psychological essentialism (categories have a fundamental unique essence)
when we learn about a category, we make associations to knowledge to explain the combinations of features
basic level categories
informative and distinctive from other categories (dog)
support cognitive economy (balancing between general and specific)
children learn this level first, semantic dementia patients have more ready access to basic knowledge (then they turn to superordinate)
subordinate categories
very informative but not distinctive (from other members within that category - German Shepherd)
superordinate categories
not informative but very distinctive (animal vs. fruit)
hierarchical model of semantic networks
properties are stored only once at the highest level and aren’t contained within each node = cognitive economy
doesn’t account for typicality effects
property inheritance
in the hierarchical model, subordinate categories inherit the properties of superordinate categories
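Property inheritance is easy to sketch: store each property once at the highest node it applies to, then walk up the parent chain at retrieval. The animal hierarchy below is a standard textbook-style toy example.

```python
# Toy hierarchical semantic network (Collins & Quillian style, made-up nodes):
# each property is stored only once, at the highest applicable level.
NODES = {
    "animal": {"parent": None,     "props": {"breathes"}},
    "bird":   {"parent": "animal", "props": {"has wings"}},
    "canary": {"parent": "bird",   "props": {"is yellow"}},
}

def properties(name):
    """Collect a node's own properties plus everything inherited from above."""
    props = set()
    while name is not None:
        props |= NODES[name]["props"]
        name = NODES[name]["parent"]
    return props

print(sorted(properties("canary")))  # inherits from 'bird' and 'animal'
```

Storing "breathes" once at "animal" instead of at every species is the cognitive economy the model claims; the cost is the longer upward traversal for high-level properties.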
spreading activation model of semantic networks
nodes are connected via semantic relatedness, not hierarchy
explains typicality effects because typical exemplars are more semantically similar
method of repeated reproduction
abstract drawings copied from memory begin to resemble familiar objects (using schematic memory)
symbol grounding problem
symbols defined only in terms of other symbols have no meaning on their own; they need some way to connect to the real world (like sensory input)
artificial neural networks
knowledge is stored in a distribution of weights, not in nodes (so the network can withstand the loss of some nodes - graceful degradation)
graceful degradation
brain damage to one area doesn’t result in loss of entire brain function because knowledge is stored as a pattern of activity across many units - you can have category-specific deficits in semantic knowledge like living things vs. non-living things
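Graceful degradation can be demonstrated with a toy distributed representation: because the "concept" is read out across many units, knocking out a few units only mildly perturbs the output rather than erasing it. The unit count and lesion size are arbitrary assumptions.

```python
import random

# Toy sketch of graceful degradation: knowledge stored as a pattern across
# many units, so losing 10% of them causes a partial, not total, loss.
random.seed(0)
weights = [random.uniform(0.9, 1.1) for _ in range(100)]  # distributed storage

def output(ws):
    return sum(ws) / len(ws)  # readout depends on the whole pattern

intact = output(weights)
lesioned = output([0.0 if i < 10 else w for i, w in enumerate(weights)])
print(f"intact {intact:.2f} vs lesioned {lesioned:.2f}")  # small drop, not zero
```

Contrast this with a local (one-node-per-concept) scheme, where deleting the single storage node would wipe the concept entirely.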
weak view of embodied/grounded cognition
the body indirectly influences cognition (judgments, memory) - matching body position at encoding and retrieval = better autobiographical memory
strong view of embodied/grounded cognition
body causes cognition: cognition is grounded in sensorimotor experiences - knowledge is stored as a distributed pattern of activity in sensorimotor neurons
pros of embodied cognition view
flexible, goal-driven, and context-dependent (most relevant knowledge is most easily retrieved)
semantic dementia and brain areas
loss of knowledge about objects due to neurodegeneration in the anterior temporal lobe (but this area isn’t activated in semantic tasks and damage to it doesn’t always present with semantic dementia)
hub-and-spoke model and evidence
the anterior temporal lobe is where abstracted knowledge is stored and modality-specific details are held in spokes distributed across the cortex
evidence: TMS of the inferior parietal lobe (grasping non-living objects) as a spoke = inability to name those objects
how do we learn concepts?
through generalization from specific episodic memories
fuzzy boundaries of categorization
graded structure: an item can be more or less part of a category, membership can be a matter of degree (it depends what aspect of an object you focus on - a sled can be a toy or a vehicle)
do we use prototype or exemplar theory?
both; sometimes we need to access concepts abstractly, sometimes in terms of specificity of exemplars
conceptual expansion
thinking beyond definite boundaries of concepts - creativity
ADHD: problems inhibiting unrelated information could be beneficial for creativity
perceptual symbols system
perception and concept knowledge are linked as perceptual symbols - we access different features based on our goals (concepts aren’t stored abstractly, but across our senses)
evidence for the perceptual symbols system
property verification tasks (people are faster to verify a property like loud - blender if the previous trial recruited the same modality, e.g. rustling - leaves, both auditory)
brain representation: same regions are active when reading action words and performing those actions
sensory functional theories
concepts are represented by defining feature of that concept (living things by visual features vs. nonliving things by their function)
psycholinguistics
the study of how we learn, understand, and produce language
why does language distinguish humans from other animals?
animals have fixed sets of communication (humans can combine the same words to produce novel complex thoughts)
while other animals have vocal or motor capabilities to produce language, they lack the cognitive abilities to generate novel language
functions of language
sharing complex thoughts, emotions, plan the future, organize into groups, transfer information across generations
behaviorist view of language (nurturist view)
language is learned through conditioning and reinforcement (correct/incorrect feedback) and modeling, language is stimulus-dependent (external stimulus required)
universal grammar
contains basic scaffolding of syntax without details (which need to be learned), this is innate in every human
gene FOXP2
responsible for universal grammar
mutations result in developmental verbal apraxia which affects the ability to pronounce syllables and words
poverty of the stimulus
the rules of grammar are ambiguous with just examples, so language cannot be learned only using examples/conditioning
evidence for poverty of the stimulus
adults adopt a grammatically insufficient version of a language when they move elsewhere (pidgin), but their children combine the pidgin and their country’s language to form new grammar (creole) = not learned because it is a new language
deaf isolates (not exposed to traditional sign language) develop their own kind of sign language
evidence for a naturist view of language
universal grammar, poverty of the stimulus
language acquisition develops in the same way for all infants (cooing 0-3 months, babbling 4-8 months, single words 8-12 months, two word phrases 1-2 years, telegraphic speech 2-3 years, complex speech 3-4 years)
child-directed/infant-directed speech
speech directed to a child
motherese/parentese (sing-song, exaggerated vowels, repetition) helps children identify beginnings and ends of sentences and draws attention to important concepts - accelerated language learning
phonemes
smallest unit of speech (sounds) that don’t have meaning, but can change the overall meaning
morphemes
smallest unit of meaningful speech
resolving phonological ambiguity (identifying phonemes)
context (people were unable to identify a single word when taken out of context)
phonemic restoration effect: brain fills in the missing phoneme based on expectations
McGurk effect: brain uses mouth movements (visual signal) which move characteristically based on sounds
how to segment speech into individual morphemes?
statistically: we encode the frequency with which sounds occur together
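This statistical cue can be computed directly as transitional probabilities over a syllable stream (a Saffran-style toy example; the "words" below are made up): within-word syllable pairs have high P(next | current), and a dip marks a likely word boundary.

```python
from collections import Counter

# Toy Saffran-style stream built from three made-up words; within-word
# syllable transitions recur more reliably than across-word transitions.
stream = "bidaku" "padoti" "bidaku" "golabu" "padoti" "bidaku"
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pairs = Counter(zip(syllables, syllables[1:]))
firsts = Counter(syllables[:-1])

def tp(a, b):
    """Transitional probability P(b | a): how often a is followed by b."""
    return pairs[(a, b)] / firsts[a]

print(f"within word  bi->da: {tp('bi', 'da'):.2f}")  # high
print(f"across words ku->pa: {tp('ku', 'pa'):.2f}")  # lower = likely boundary
```

The low-probability transition after "ku" is where a learner would posit a word boundary, even with no pauses in the stream.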
lexical processing
matching speech units to meaning
homophones
words which sound the same but have different meanings
homographs
words which are spelled the same but have different meanings
lexical decision task (LDT) and results
a string of letters is presented and Ps must decide if it is a word
RT is faster when words that are related appear together
RT is faster when the words are common
resolving lexical ambiguity (lexical processing)
using context (within a certain room, environment) to decipher ambiguity of individual words that could have different meanings - we first identify meaning as the most frequent usage
does context prevent other meanings of individual words from being activated in lexical processing?
brain briefly considers all meanings before settling on the context-dependent one (RT on a LDT was faster for both context-appropriate and context-inappropriate meanings if words were presented within 200ms of being primed with biased or unbiased sentences - cross-modal priming task)
parsing
breaking up a sentence into its constituent parts, can lead to ambiguity because we hear sentences incrementally
garden-path sentence
sentence that leads to incorrect parsing, so individuals take the wrong path, which leads to a dead end (we have to reinterpret the clause when we get to the end)
clause, subject, predicate
clause expresses a full idea of a subject doing something which is indicated by the predicate
syntax-first approach to parsing
we parse based on syntax without considering the meaning of words beyond the type (noun or verb?)
late closure
we attach words to the sentence we’re currently processing rather than assuming a new phrase
“the man who whistles tunes pianos” - we think ‘whistles’ is the predicate so assume tunes is a noun, but we are forced to re-parse because ‘pianos’ is a noun = ‘tunes’ must become the predicate
evidence against syntax-first approach
we do take semantics into consideration: “the defendant examined by the lawyer” vs. “the evidence examined by the lawyer” - a defendant can examine something so re-parsing is needed vs. evidence cannot examine something so no re-parsing
what do we use to help us parse sentences?
visually-available stimuli (‘put the apple on the towel in the box’)
prosody (many potential layers of meaning depending on intonation, not grammar - ‘I never said she took the money’), punctuation helps indicate prosody to disambiguate sentences and prevent incorrect parsing
discourse processing
ability to understand language that is at least several sentences long
involves integration of STM and LTM
depends on anaphoric and causal inference and pre-existing knowledge (inferring meanings)
anaphoric inference
guess about which word in a first sentence (antecedent) is being referred to in a second sentence (like a pronoun)
causal inference
assumption that something mentioned at one stage leads to something later on
depends on general knowledge of how the world works
backward/deductive/necessary inference
referring to previous information to infer something that is necessary to understanding
takes processing time, so sentences which require it have longer RT - indicative of online inference
elaborative inference
adding information that isn’t essential for understanding of a text - done in a way that is consistent with expectations
online vs. offline inference
online: during reading or listening
offline: during consolidation or retrieval
instrumental inference
tool used for a task is inferred even if it’s not required to understand the meaning
a type of elaborative inference
does elaborative inference occur online or offline?
can occur online if a sentence is rich enough in information so as not to require backward inference (“leisurely pace” - implies nothing vs. “hole-in-one” - implies golf)
neurolinguistics
relationship between linguistic behaviour and the brain
arcuate fasciculus
bundle of fibers that connects Broca’s and Wernicke’s areas (it’s much weaker or absent in other species, so may be important for humans’ linguistic capabilities)
right hemisphere’s role in language
higher-level discourse processing, elaborative processing
language relativity and evidence
the language we speak affects other areas of cognition
people are better able to distinguish between colours when their language has more than one word for that colour
nativism
linguistic universalists: differences among languages are superficial and don’t affect other areas of cognition
natural language processing (NLP)
subfield of AI concerned with making machines that can produce and understand language; emotional tone, making summaries, engaging in conversations with humans
Turing test
a human converses with another human and a machine and must decide which is the machine
sequence-to-sequence learning
taking in a string of text and producing a string of text in response - machines don’t learn the rules of syntax (ChatGPT)
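A toy illustration of the idea that a machine can produce plausible text from co-occurrence statistics alone, with no explicit syntax rules (this is a hypothetical bigram model, far simpler than what systems like ChatGPT actually use):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which token tends to follow which - pure statistics, no grammar."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Respond with the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

corpus = ["the dog chased the cat", "the cat saw the dog", "the dog barked"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "dog" ("dog" follows "the" most often)
```

The model never represents nouns, verbs, or clauses; it only encodes how often one token follows another, which is the core of sequence-to-sequence learning scaled down to two tokens.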
definition of language
shared symbolic system for purposeful communication
how is language affected by the environment?
morphological complexity decreases in languages spoken by more people
lexical tones are shaped by climate (tonal languages, which use pitch to convey meaning, are less common in cold countries, possibly because cold, dry air reduces vocal control)
how are language and gender style related?
countries with gendered language experience more gender inequality
women tend to use “we” and more adjectives, and more rising intonation at the ends of statements (upspeak)
aphasia
impaired language function from a brain injury (also from dementias)
Broca’s aphasia
non-fluent/expressive aphasia - cannot produce speech, but can understand it
speech is halting (nouns and verbs), writing also affected, amount of tissue damaged is correlated to amount of impairment
patient Tan
could only speak one syllable but tried to communicate using gestures, tone, inflection - discovery of Broca’s aphasia
Wernicke’s aphasia
fluent - speech is produced but has no meaning (word salad)
uses nonwords and made-up words (paraphasias are common)
verbal paraphasia
substituting a word with another semantically-related one
phonemic paraphasia
feature of Wernicke’s aphasia, swapping or adding speech sounds (sad cralad to mean crab salad)
neologisms
using made-up words (mansplain)
can be culturally-shared, but in Wernicke’s they’re not, so aren’t communicating meaning
paraphasia
misusing words, common in Wernicke’s
conduction aphasia
damage to the arcuate fasciculus = disconnection between understanding and producing speech, so patients cannot repeat speech
load-dependent: deficit increases the more complex the sentence is
lateralization of language
left for language (not fully understood), right for broader aspects of language (prosody and pitch to convey mood, meaning, discourse segmentation, gestures)
classic model of language
dorsal pathway from A1 - speech production and movements
ventral pathway from A1 - speech comprehension
may underspecify how language is represented in the brain (damage to Broca’s doesn’t always result in an aphasia and damage to other parts of the brain can result in the same deficits)
principles of the innateness hypothesis
grammar and syntax are separate from meaning (since we can create grammatical sentences that don’t make sense)
language acquisition device supports principles of how to learn a language
our innate language skills are rules of grammar (universal grammar) that need to be adjusted for the specific language
language acquisition device (LAD)
abstracted entity that supports language (hardwired into our brain, controls principles of learning a language)
support for the innateness hypothesis
- convergence (children are exposed to different learning situations, but converge on the same grammar)
- uniformity: children go through the same learning stages in the same order
- poverty of stimulus argument (the linguistic environment is too deficient for children to learn through reinforcement - they hear only a small sample of infinitely many possible sentences and don’t have opportunities to learn from mistakes)
issues with the poverty of stimulus argument
what information is innate?
how to disprove the argument?
how to determine what linguistic information is available to a child?
adults reformulate speech based on grammar - children extract regularities to form rules
types of ambiguity in language
phonological (within a sound)
lexical (within a word)
syntactic/parsing (within a sentence)
constraint-based models of parsing
we use constraints to resolve ambiguity: semantic and thematic context, expectation, frequency (opposition to syntax-first)
surface dyslexia
impaired at reading irregular words because reading happens letter-by-letter (they have to sound out, cannot compare to mental dictionary)
phonological dyslexia
impaired at reading nonwords or made-up words because they have to compare words to a lexicon (cannot sound out letter-by-letter)
dual route model of reading
mental dictionary (whole-word reading) and grapheme-phoneme conversion (sounding out)
language of thought hypothesis
nativist view; medium of thought is an innate mentalese, not language (which is why children can think when they don’t have language) - cannot be tested
Sapir-Whorf hypothesis
linguistic determinism (strongest view): person’s thoughts are determined by language (people from different languages have a fundamentally different view of the world)
weaker view that thoughts are influenced by language (colour studies in which language shaped colour memory)
bilinguals
all individuals who use more than one language - they are differentiated by their proficiency, dominance, age of acquisition, where they live, their goals of language use
current estimate of bilingualism in the world
50-70%
traditional psycholinguistics
most research on cognition and language studied monolinguals (English)
idea that only L1 had an impact on cognition
accentedness and grammatical data
the older you are when you learn a second language, the more accented your speech is perceived to be
similar data in grammatical proficiency
traditional view of bilingualism
late bilinguals have a full native L1 and a weaker, non-nativelike L2
bilinguals are monolinguals in L1
L1 can impact L2, but not the other way around
new research goals of bilingualism
investigating the biological basis of bilingualism
language learning occurs at all ages and is dynamic (greater plasticity)
bilingualism is a lens to study new aspects of cognition (impact of experience on the brain)
three discoveries about bilingualism
both languages are active and competing (parallel activation/coactivation)
L1 and L2 can influence each other
individual variability in language experience (context, distribution of languages in every day lives)
cognate
word that has the same form and meaning in two languages (piano), triple-cognates in three languages
recognized more quickly by bilinguals than monolinguals
homograph in bilingualism
word that has the same form but a different meaning in multiple languages (coin)
recognized more slowly by bilinguals than monolinguals
triple-cognate English-Spanish-Japanese picture naming task
lexical information was activated in target and non-target languages
triple-cognates = you can retrieve the label of the image more quickly = faster at naming the picture (cognate facilitation effect)
parallel activation within context experiment (Libben & Titone)
cognates and homographs in low constraint (target word is not predictable) and high constraint (context narrows the possibilities - should eliminate facilitation and interference effects)
early-stage/low constraint = facilitation for cognates, interference for homographs
early-stage/high constraint = facilitation and interference
late-stage/low constraint = facilitation and interference
late-stage/high constraint = no facilitation or interference (no parallel activation)
fixations
time spent on one word
longer fixations = more complex, difficult word
saccades
movements between fixations
regression (eye movement studies)
returning to what you’ve read already
initial stages of comprehension
first fixation duration (the first time you look at the word)
later stages of comprehension
total fixation duration
parallel activation in languages that are drastically different from each other (Morford et al.)
using semantic relatedness task and phonologically related/unrelated words in ASL - Ps faster to judge relatedness when words were phonologically related (converge) than not (conflict) = languages are both active and competing
event-related potentials and cognate facilitation effect
voltage fluctuations that are time-locked to an event; a reduced N400 indicates facilitation
early Spanish-learners ERPs
L2 learners had a reduced N400 for cognates vs. monolinguals - their newly acquired Spanish was influencing their L1 knowledge (even if behaviorally, there was no facilitation)
classroom-learners vs. immersed-learners experiment
verbal fluency task in L1 and L2: immersed learners produce fewer English words and more Spanish words than classroom learners = L1 is being suppressed by learning a new language
grammatical impact of L2 on L1
people with high L2 exposure switch parsing strategies to match L2 (high-attached instead of low-attached)
types of individual differences in language use
predominant language in environment (immersed?)
habits of language use (keep languages separate or code-switching)
contextual linguistic diversity (are they surrounded by bilinguals, do they get to use both languages?)
linguistic diversity effect on bilinguals’ brains
these people are monitoring their environment for opportunities to use their language, using context clues
higher connectivity in brain areas used for monitoring (anterior cingulate cortex and putamen)
how does codeswitching affect language processing?
experiment looking at people with compartmentalized languages and people who used languages interchangeably/opportunistically
non-codeswitchers had a processing cost for sentences that switch mid-sentence, but codeswitchers had none
rare codeswitches vs. typical codeswitches
both had a processing cost for noncodeswitchers
only rare codeswitches had a processing cost for codeswitchers because it’s not typical language use
method of studying cognition: neuroscience
studying the brain to link it to the mind - what parts of the brain carry out functions we see behaviorally?
method of studying cognition: cognitive psychology
studying behaviour to understand the mind
method of studying cognition: computational modelling
using computers to simulate brain activity - if we can build a computer that can perform this function, we can understand how the brain does it, uses flow charts
what is cognition?
processes that underlie complex behaviours
basic research
research to understand a phenomenon in its own right (discovery, no end-goal), can inspire applied research and investigation of new phenomena
applied research
research with a goal, to solve a real-world problem (treatments, improving conditions, etc.)
what is zoom fatigue?
exclusive focus on verbal cues because of a lack of other cues is more cognitively demanding (and the audio and visual cues are slightly disconnected), easy to get distracted in a home environment
hypothesis-based research
research is guided by a prediction about what will occur under specific circumstances (linking variables)
phenomenon-based research
an effect is accidentally discovered, then follow-up research is conducted
emotional enhancement effect
emotional stimuli are more easily attended to and remembered (at the expense of other stimuli), especially negative ones
amygdala activity predicts memory for emotional stimuli, but not non-emotional
artificial intelligence
giving a computer a learning function to get it to perform a task, does well with predictable problems (like chess), but doesn’t have flexible intelligence (dealing with evolving, novel situations)
Plato’s epistemology
rationalism - complex thought results from combining information from the external world with pre-existing, innate knowledge (deductive reasoning is innate)
Aristotle’s epistemology
empiricism - knowledge comes from observation, we don’t have an innate mind, we just link observations together to form complex thought
structuralism
basic elements of thought combine to form complex thought
relies on introspection and self-report
Wilhelm Wundt’s contributions to psychology
practiced structuralism using introspection and psychophysics (mental chronometry - thought meter) to establish the simplest units of the mind which followed certain laws (like the periodic table)
psychophysics
linking sensory experience with physical changes (thresholds of detection and difference) - amount of time necessary to process a sensory experience is a unit of thought
criticisms of structuralism
experimental methods are too subjective, cannot be replicated
only studying simple sensory experiences, not complex thought
functionalism
studying the function of how/why we think the way we do, which is integral to how mental processing works (functions are adaptive to context)
William James’ approach to psychology
functionalism/pragmatism - practical approaches to problems, emphasized the use of various methodologies (not just introspection) because the function of the mind is always changing
behaviorism
shift away from studying the mind toward studying behaviour (which is applicable to the scientific method), looking at behavioural responses to stimuli, animal research
contributions from behaviorism
Pavlov and Watson - classical conditioning
Thorndike and Skinner - operant/instrumental learning
criticisms of behaviorism
- lack of focus on internal mental states/processes
- overestimated the scope of their explanations
- Tolman’s latent learning (learning without conditioning)
- language - we apply rules to form novel phrases
- individual differences when performing tasks (people have different ways of arriving at the same goal)
cognitive revolution
acceptance of internal mental processes - mind is like a computer, it processes information (performs computations on information from the external world to arrive at a solution/behaviour)
flow charts
boxes represent computational stages, arrows represent how information flows through the system
Waugh & Norman’s model of memory
stimulus enters primary memory, rehearsal = secondary memory (performing a task after learning something = you can’t rehearse = info is forgotten)
rehearsal can be many things - like deep mental processing
what is the relationship between reaction time and information processing?
it takes longer to process uncertain information to try to figure it out (amount of information to be processed is inversely related to how much we expect that information)
Hick’s lamp experiment measured reaction time while manipulating certainty
Hick’s law
the more information is contained in a signal, the longer it takes for us to produce a response
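Hick's law can be sketched numerically; the form RT = a + b * log2(n + 1) for n equally likely alternatives is standard, but the constants a and b below are purely illustrative, not from the course:

```python
import math

def predicted_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted reaction time (s) for a choice among n equally likely
    alternatives; a and b are illustrative placeholder constants."""
    return a + b * math.log2(n_alternatives + 1)

# RT grows with the information in the signal, but only logarithmically:
assert predicted_rt(1) < predicted_rt(3) < predicted_rt(7)

# Doubling the alternatives adds a constant increment (one extra bit),
# so going 1 -> 3 costs the same extra time as going 3 -> 7:
step1 = predicted_rt(3) - predicted_rt(1)
step2 = predicted_rt(7) - predicted_rt(3)
assert abs(step1 - step2) < 1e-9
```

This is why adding a few more choices barely slows responses when many options already exist: the cost is per bit of information, not per alternative.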
decision fatigue
we have a limited amount of cognitive resources, and making decisions requires these resources
Webster & Thompson air traffic controller experiment
air traffic controllers listened to simultaneous messages - one was a call signal (familiar), the other unrelated words (unfamiliar) = more memory for familiar messages (less information, so easier to process)
ecological validity
the extent to which findings can be generalized to real-world settings (labs are highly controlled settings)
physicalism/materialism
the only reality is physical
monism
mind and body are the same substance
idealism
the only reality is mental
neutral monism
there is one substance (neither physical nor mental) that is reality
dualism
mind and body are separate
interactionism
a form of dualism: mind (immaterial soul) and brain (physical) affect each other
Pineal gland as the seat of the soul (it actually produces melatonin)
epiphenomenalism
mental thoughts (mind) are caused by physical events (brain), but not the other way around
phrenology
idea that when a particular brain region is used (which corresponds to a particular function), it grows (and when it’s not, it shrinks), producing bumps on the skull that reveal a person’s faculties
functional specialization
certain brain areas or networks support certain brain functions (like the FFA selectively responding to human faces)
could be more a matter of degree instead of brain response/no brain response
behavioural measures to study the brain-behaviour link
studying voluntary behaviours like pressing a button in response to something
psychophysiological measures to study the brain-behaviour link
measuring activity in the PNS in response to perceptions/imagination (eye movements, skin conductance - skin conducts electricity when it sweats = physiological/emotional arousal)
behavioural neuroscience to study the brain-behaviour link
animal studies (behaviour, lesioning the brain, physiological brain measures) = causality link, but isn’t necessarily generalizable to humans (and you can’t measure certain things like language and autobiographical memory)
neuropsychological cases to study the brain-behaviour link
comparing the function of brain-impaired participants and normal brains (Region X damage = impairment in task Y = task Y depends on region X)
research on split-brain
left hemisphere = speech and language
right hemisphere = visual-spatial processing
unable to name a word in left visual field, but could draw it with left hand (can name a word in right visual field)
electroencephalography
measuring electrical activity in large brain regions to see which brain regions are active at what time
structural magnetic resonance imaging
anatomy of the brain - gray matter, structural abnormalities
functional magnetic resonance imaging
measures blood flow (oxygenated blood flows to active areas of the brain) to create a spatial image of brain activity
good spatial resolution, bad temporal resolution
transcranial magnetic stimulation
induces temporary change (stimulate/lesion) in brain activity (improvements in memory post-TMS of hippocampus), tests causality but the way it works isn’t clear (effects not localized)
multi-voxel pattern analysis and functional connectivity
studying the brain as interconnected networks (MVPA gets a computer to recognize patterns of activity associated with different cognitive activities)
lateral occipitotemporal cortex
active when perceiving body parts or inanimate objects
parahippocampal place area
responds when imagining a scene/spatial layout
supplementary motor area
active when performing or imagining movement
exteroceptive sensations
sensory organs absorb energy from the physical environment and convert it into electrical signals sent to the brain
interoceptive sensations
sensations from inside our body
proprioception
where our limbs are in space
nociception
pain due to body damage
equilibrioception
sense of balance
synesthesia
neurological condition in which one sense automatically triggers the experience of another sense (grapheme-colour synesthesia = seeing colour with certain letters or numbers)
beneficial for creativity
McGurk effect
change in auditory perception based on visual input (BAA is perceived as FAA if the mouth is articulating an F) - shows an integration of sensory information
early visual processing pathway
light is projected onto the retina - photoreceptors convert light waves into electrical signals - signals are sent to bipolar cells and retinal ganglion cells (RGCs) - RGC axons combine into the optic nerve, which carries information to the brain
rods vs. cones
rods best for low light (concentrated in the peripheral retina = less detail), cones sensitive to colour (most concentrated in the fovea = high visual acuity)
blind spot
where the optic nerve leaves the eye, but we don’t notice it because of perceptual filling-in (with the surrounding)
late visual processing pathway
information crosses contralaterally in the optic chiasm, relays in the thalamus, then to area V1 (edges, angles, colours, light), and visual association areas (ventral and dorsal)
ventral and dorsal pathways of visual processing
ventral/what/perception: object recognition (shape, size) in the temporal lobe
dorsal/where/action: object localization (location, space, movement) in the parietal lobe
bottom-up processing
influence of external world information on perception (sensory organs)
top-down processing
influence of knowledge (expectation, context, goals) on perception
Ponzo illusion
using expectations about depth to perceive the length of lines = mistaken perception
examples of context affecting perception
Ames room: we assume a room is rectangular, not a trapezoid
letters in context effect: ability to read words in context even if letters are mixed
colour in context effect: colour on a dark background appears lighter than if on a light background
Munker-White illusion: columns over black rows
damage to the primary visual cortex
blindsight - no conscious awareness of visual perception in the damaged visual field (but able to respond to questions about objects presented there = implicit perception exists)
damage to the dorsal stream of visual processing
akinetopsia - inability to perceive motion (sees motion as a series of static photos)
optic ataxia - inability to interact with objects (but able to name them), can be specific for certain movements
damage to the ventral stream of visual processing
visual agnosia - difficulties recognizing everyday objects
often damage to the lateral occipital cortex
Greebles experiment
against functional specialization of the FFA for faces: rather an expert discrimination area (it’s just used for faces because we’re experts at face recognition)
agnosia subtypes
apperceptive agnosia: problems perceiving objects - cannot combine features into a whole (faces might look contorted, inability to distinguish facial expressions)
associative agnosia: difficulty assigning meaning/labeling to objects - cannot link visual input to knowledge and memory (cannot recognize famous faces)
apperceptive agnosia
problems perceiving objects - cannot combine features into a whole (faces might look contorted, inability to distinguish facial expressions)
associative agnosia
difficulty assigning meaning/labeling to objects - cannot link visual input to knowledge and memory (cannot recognize famous faces)
constructivist theory of perception
we construct mental models of how things work which are activated during perception (making guesses because the external world is ambiguous)
focus on gestalt principles
Gestalt principle of experience and figure-ground assignment
experience and knowledge drives figure-ground segmentation (figure is more likely based on what we know)
Gestalt visual grouping principles
proximity
closed forms - shapes are closed
good contour - lines are continuous
similarity
direct models of perception
perception involves using information directly from our environment (continuous perception-action feedback loop), no assumptions are necessary
the ambient optical array (AOA) has enough information to direct perception based on cues (like texture gradients - texture elements of far surfaces appear smaller and more densely packed)
pattern recognition theories
identifying a pattern in visual input and matching it to existing patterns (concepts) in memory - a percept probes long-term memory traces to see which matches best
template matching theory
every object has a template in LTM (doesn’t explain identification with shifting viewpoints, classification of novel stimuli)
prototype theory
we store ideal versions of objects (most typical) and compare basic features of visual input to see what matches most (flexibility)
feature detection
visual input is broken down into features, which are processed separately and re-assembled
recognition-by-components (RBC): all objects can be reduced to basic geometric shapes (geons)
recognition in context
scene consistency effect - we perceive by considering what’s around us (scene-consistent objects are named more accurately)
identification vs. classification
id.: ability to recognize an object across variations
class.: ability to recognize something as part of a category despite never having encountered it before
motion parallax
objects further away change position on your retina more slowly
binocular disparity
disparity changes based on how far away objects are
figure-ground assignment
more convex = figure
bilateral symmetry = figure
smaller region = figure
past experience - meaningful shapes
what is frequency and what perceptual property does it result in?
number of wave cycles per second (Hz), results in the perceived pitch of a tone
what is amplitude and what perceptual property does it result in?
the height of a sound wave’s peaks and valleys, results in loudness
pathway of sound
pinna captures auditory stimuli - ear canal - eardrum (vibrates in response to the sound wave) - ossicles (malleus, incus, stapes which increase the pressure of the vibrations to amplify the signal) - cochlea (basilar membrane - hair cells which transduce mechanical signal to electrical) - primary auditory cortex - dorsal and ventral streams
describe the tonotopic map and structure of the basilar membrane
membrane goes from narrow and stiff at the base (where high frequencies are encoded) to wide and flexible at the apex (where low frequencies are encoded)
connections of the primary auditory cortex
auditory nerve has afferent and efferent connections with the cortex, signal is continuously tonotopically transmitted to the auditory cortex, projects to Broca’s and Wernicke’s areas and the motor cortex, dorsal stream (sound localization) and ventral stream (sound properties)
what is a phon and what is it a function of?
perceptual unit of measurement - how loud did you perceive a sound - a function of both frequency and amplitude (low frequency sounds have to be loud to be perceived, but high frequencies don’t have to be as loud)
which will sound louder: a 50 Hz tone with a sound level of 70 dB, or 1,000 Hz tone with a sound level of 70 dB?
the 1,000 Hz tone - at the same amplitude (dB), it is perceived as louder because human hearing is most sensitive to mid-range frequencies (the range of human speech)
Which sounds louder: a 50 Hz tone at 40 phon, or 1,000 Hz tone at 40 phon?
both will be perceived as equal - a phon is a unit of perception, so the same number of phons means they sound the same
interaural time difference
sound arrives at one ear before the other = your auditory system can localize the sound by computing the difference
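A back-of-envelope sketch of the time difference the auditory system can exploit; the ear separation (~0.18 m) and speed of sound (343 m/s) are assumed illustrative values, and the extra path to the far ear is approximated as d * sin(angle):

```python
import math

def itd_seconds(angle_deg, ear_distance_m=0.18, speed_of_sound=343.0):
    """Approximate interaural time difference for a source at angle_deg
    from straight ahead (0 = front, 90 = directly to one side)."""
    return ear_distance_m * math.sin(math.radians(angle_deg)) / speed_of_sound

print(f"{itd_seconds(90) * 1e6:.0f} us")  # directly to one side: ~525 microseconds
print(f"{itd_seconds(0) * 1e6:.0f} us")   # straight ahead: no difference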
interaural level difference
sound is slightly louder in one ear = your auditory system localizes the sound by computing the difference
relation of anatomy to function for sound localization
the way the sounds hit the pinnae varies based on the vertical plane (up and down)
what is different about the auditory system vs. the visual system
sound waves get mixed (as opposed to objects being occluded) so the auditory system has to parse them apart
what is auditory scene analysis?
transforming sound waves into meaningful auditory units (mental representations) by using grouping and separating principles
what is temporal grouping and what is it based on?
sequential integration: creating distinct auditory streams (one sound is melody, the other is rhythm) based on sounds’ relationship in time (physical cues - proximity in time)
what is a complex sound wave?
summing various simple sound waves - there’s a relationship between the fundamental frequency and the harmonics
what is the relationship between the fundamental frequency and the harmonics?
fundamental: lowest frequency component of the sound wave
harmonics: multiples of the fundamental
pitch/harmonic grouping
figuring out whether many frequencies come from the same source by applying the frequency relationships between harmonics - if this relationship breaks, you hear different sounds
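A toy sketch of the harmonic-grouping check described above (the function and tolerance are hypothetical): test whether a set of frequencies are near-integer multiples of one fundamental; if so, they would be grouped as one sound source.

```python
def share_fundamental(freqs, fundamental, tolerance=0.02):
    """True if every frequency is within `tolerance` of an integer
    multiple of the fundamental (illustrative tolerance value)."""
    return all(
        abs(f / fundamental - round(f / fundamental)) < tolerance
        for f in freqs
    )

# 220, 440, 660 Hz are the 2nd, 4th, and 6th harmonics of 110 Hz
print(share_fundamental([220, 440, 660], 110))  # True -> heard as one tone
print(share_fundamental([220, 450, 660], 110))  # False -> mistuned component pops out
```

When a component breaks the integer-multiple relationship (450 Hz above), the harmonic relationship fails and the auditory system segregates it as a separate sound.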
spatial neglect
damage to the right parietal lobe (which helps direct attention) results in spatial neglect of the left visual field (less common in damage to the left hemisphere)
spans all sensory modalities, not just vision (also affects memory and imagination)
which brain areas are involved in attentional processing
prefrontal cortex and parietal lobes
which brain areas are involved in top-down attention?
intraparietal sulcus and frontal eye fields (FEF also involved in the interaction between top-down and bottom-up)
which brain areas are involved in bottom-up attention?
temporo-parietal junction and ventral frontal cortex
what are the three aspects of attention?
arousal: when you’re physically alert and present
bottom-up: guided by stimuli, attentional reflex
top-down: controlled attention - goals and expectations
what are the types of top-down attention?
sustained attention: focus on a particular task for a long period of time
divided attention: shifting focus (multi-tasking)
selective attention: focus on one input and ignoring distractors
why do we have selective attention?
there’s too much input from the environment (and we have limited cognitive resources) so we have to voluntarily focus on what we think is important (dynamic - depends on your goal)
spatial-based attention
focus on a certain location in space (waiting for someone to walk through a particular door)
feature-based selective attention
focus on particular stimuli (looking for someone in a crowded room)
change blindness and related conclusion
failure to detect changes in an attended scene (we still cannot process everything even when paying attention)
flicker technique paradigm
to test change blindness, two similar visual images are presented with a brief mask in between (the mask hides the motion transient that would otherwise capture attention) = people tend not to notice small differences (only begin to notice large differences), more susceptible with age
Broadbent’s early selection filter model of selective attention
information gets filtered out at the level of perception (before semantic analysis): we select which information gets processed based on physical properties - attended information is assigned meaning, unattended information is forgotten
dichotic listening task
present a different message to each ear simultaneously - people are better at recalling the messages ear-by-ear (because we don’t have to shift our filter from one ear to the other)
evidence for early selection models
shadowing task
given two messages simultaneously and asked to attend to and repeat one - people cannot remember unattended content, but can report its sensory features (gender of voice)
problems with early selection models
unattended information can sometimes break through the filter (some semantic information gets processed)
evidence: ‘apple’ paired with a shock = it gains meaning (increased skin conductance even when unattended to in a shadowing task)
Treisman’s attenuator model
early filter turns down the unattended information (like lowering the volume), so that important information (like your name) can get through
late selection filter models
meaning is assigned to both attended and unattended information, then we choose what to attend to
evidence: Stroop task (reading colour words is automatic and accesses meaning, interfering with the controlled task of naming the ink colour - meaning was processed before selection)
automatic vs. controlled tasks
automatic: engage bottom-up processes without intention, very familiar tasks (like reading)
controlled: engage top-down processes, require effort and focus
load theory
based on the idea that we have a limited pool of processing resources, the filter placement depends on how much of those resources your current task consumes (high-load task = resources fully used up, filter acts early = less likely to be distracted; low-load task = spare resources spill over to distractors, filter acts late = more likely to be distracted)
central resource capacity view of load theory
we have one resource pool from which all attention resources are allocated (whether information is visual, auditory, etc. doesn’t matter)
evidence: while driving, a low AUDITORY load makes you more likely to detect a VISUAL stimulus than a high auditory load (load in one modality drains the shared pool)
multiple resource capacity view of load theory
each perceptual stream has its own attentional pool (attentional capacity reached sooner if relevant and irrelevant information are from the same modality)
ex: it is more difficult to read directions and drive at the same time than to listen to directions and drive, because reading and driving both pull on your visual attentional pool
inattentional blindness and related conclusions
failure to attend to new or unexpected events in our attended environment when they are not part of our focused task (shows that our attention guides perception because we aren’t perceiving everything we could be)
stimuli that we are inattentionally blind to can still affect our behaviour unconsciously (“armpit” priming on word completion tasks)
Posner’s attentional spotlight theory
attention is about focusing on space (location-based) - attention shifts across space like a spotlight to ready a response
Posner cuing task
spatial cue directs attention to a part of visual space (either valid or invalid cues) - reaction time to a target is measured (faster for congruent cue and target)
when the duration between cue and target (stimulus onset asynchrony) is long, valid trials have longer reaction times (inhibition of return)
inhibition of return
attention is inhibited from returning to an attended space after it has been searched (adaptive - helps us effectively search our environment)
attention as a feature-integrator
pre-attention phase: features are processed separately and automatically (bottom-up, lines and orientations)
focused attention phase: features are integrated, requiring top-down voluntary attention
conjunction errors occur when features aren’t properly bound together (because of insufficient attention)
visual search tasks
feature search: target is different based on one feature (uses pre-attention phase, search time is independent from set size)
conjunction search: target differs from distractors only by a combination of features (uses top-down processing, search time depends on set size)
pop-out effect
time to find an object that is different on one feature is independent of the number of distractors (if the feature is processed automatically in the visual cortex - colour)
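The feature-vs-conjunction search pattern above can be sketched as a toy prediction; the baseline and per-item timing constants are invented for illustration and are not from the course material.

```python
# Assumed timing constants for illustration only
BASE_MS = 400      # baseline response time
PER_ITEM_MS = 30   # per-item cost for serial conjunction search

def predicted_rt(set_size: int, search_type: str) -> int:
    """Toy prediction of search time (ms) for a display of set_size items."""
    if search_type == "feature":
        # Pop-out: the target is found in parallel, independent of set size
        return BASE_MS
    if search_type == "conjunction":
        # Serial search: each added distractor adds a fixed time cost
        return BASE_MS + PER_ITEM_MS * set_size
    raise ValueError(f"unknown search type: {search_type}")

for n in (4, 16):
    print(n, predicted_rt(n, "feature"), predicted_rt(n, "conjunction"))
```

The flat line for feature search versus the rising line for conjunction search is the signature result of Treisman-style visual search experiments.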
embodied theories of attention
eye movements reveal visual attention goals - your fixation points depend on your goals (what you are asked to focus on)
overt vs. covert attention
overt: attending to something with your eye movements (eye tracking)
covert: attending to something without moving your eyes
how does culture influence attention?
Western - more eye fixations on the central object of a busy scene
East Asian - more eye fixations on the background (sees a scene holistically)
vigilance decrement in sustained attention
decline in performance the longer attention must be sustained - as sustained attention breaks down, mind-wandering occurs
overload theory of vigilance decrement
attentional demands increase over time, so your attentional processes become overloaded
underload theory of vigilance decrement
tasks cause boredom over time, so your attention divides between your focus and mind-wandering
what is task-switching?
switching between mental sets (organizations of our attention based on goals)
switch cost: decline in performance when we have to switch between mental sets
action slips
unintended actions that occur when mental resources shift from a primary (external) task toward internal thoughts (like mind-wandering) = inner thoughts bleed over into task performance
endogenous attention
choosing to pay attention based on goals (activates intraparietal sulcus)
exogenous attention
a property of the environment (salient cue, something unexpected could indicate danger) captures your attention (activates bottom-up brain regions)
what types of stimuli automatically capture attention?
important for survival
stimuli processed by functionally specialized brain regions (faces and bodies - will pull attention away from a target task)
personally relevant stimuli (name)
addictive stimuli (cigarettes for nicotine addicts)
fearful stimuli
cocktail party effect
we can selectively hear someone talking to us in a crowded room (interfering voices) - but it is possible for some extraneous stimuli to pass through the filter (like our name)
middle temporal area (MT/V5)
processing visual motion
Balint syndrome
damage to both parietal lobes, resulting in attentional deficits (usually oculomotor apraxia and simultanagnosia)
oculomotor apraxia
inability to voluntarily guide eye movements toward a target, attentional deficit, symptom of Balint syndrome
simultanagnosia
inability to identify or use more than one feature or object at once (focus on individual features instead of big picture), attentional deficit (component of Balint syndrome)