Imagery and Knowledge Flashcards
imagery
mentally recreating a sensory stimulus in its absence
Paivio’s dual-coding theory
human knowledge is represented by a verbal system (abstract code) and a nonverbal/imagery system (an analog code)
imagery debate
does imagery use a picture-like code (Kosslyn) or a symbolic code (Pylyshyn)?
depictive representation (Kosslyn)
analog code that maintains perceptual and spatial characteristics of objects
direct view: knowledge is represented in both mental images and linguistic code
Kosslyn’s mental scanning technique
scanning a mental image from one point to another (e.g., roots to petals of an imagined flower) - RT increases as the physical distance to be scanned increases
mental rotation
time taken to match a target object increases when you have to mentally rotate it
mental scaling
Ps imagine objects at different relative sizes to test whether we have to mentally zoom into pictures to answer questions about details - yes (answering about small details takes longer)
evidence for depictive representations
mental scanning, rotation, scaling
both imagery and perception share the same mechanisms and interfere with each other (visual imagery with visual perception)
imagery can also facilitate perception
imagery is susceptible to visual illusions
descriptive representations (Pylyshyn’s propositional theory)
symbolic codes that convey abstract conceptual information (do not preserve perceptual features)
relies on propositions, imagery is an epiphenomenon (indirect representation of knowledge)
falsification studies of depictive representations
Ps failed to recognize component shapes that belonged to the original stimulus, suggesting the memory was not stored as a picture
previous studies may have relied on experimenter expectancy and demand characteristics
mental scanning: Ps could be searching through lists of words
neuropsychology cases where perceptual abilities are damaged, but imagery is still fine
brain areas associated with imagery
modality-specific sensory processing areas (other sensory brain areas get deactivated during imagery but not perception)
frontal lobe and other complex thought mechanisms (memory, planning, attention) could be sending top-down signals to early processing areas
generative adversarial networks
computers create realistic images which a discriminator network has to distinguish from original images
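The adversarial setup can be sketched as a toy numeric game (everything here - the 1-D "images", the one-parameter generator, the learning rates - is invented for illustration, not from the cards): a generator tries to produce values a logistic discriminator cannot tell apart from real samples, while the discriminator learns to separate them.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Real images" stand-in: numbers clustered around 4.0.
real = [4.0 + random.gauss(0, 0.5) for _ in range(200)]

g_mu = 0.0          # generator: a single parameter, its output mean
w, b = 0.0, 0.0     # discriminator: logistic classifier D(x) = sigmoid(w*x + b)
d_lr, g_lr = 0.05, 0.01

for _ in range(2000):
    x_real = random.choice(real)
    x_fake = g_mu + random.gauss(0, 0.5)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to label real samples 1 and generated samples 0.
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += d_lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += d_lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) - move g_mu toward
    # wherever the discriminator currently thinks "real" lives.
    d_fake = sigmoid(w * g_mu + b)
    g_mu += g_lr * (1 - d_fake) * w

print(round(g_mu, 2))  # should drift from 0.0 toward the real mean of 4.0
```

The point of the sketch is the two-player structure: neither network ever sees the answer directly; the generator improves only through the discriminator's judgments.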
picture superiority effect
using imagery leads to better recall
concreteness effect
better recall for concrete words rather than abstract (effect is eliminated when people cannot imagine the concrete words)
imagery’s role in anxiety
increased negative imagery of future events
imagery’s role in PTSD
negative intrusive imagery
imagery’s role in depression
decrease in frequency and vividness of positive imagery
imagining suicidal acts increases risk of suicide
imagery as a treatment for mental disorders
replace negative memories with neutral/positive ones
assessing individual differences in imagery ability
vividness of visual imagery questionnaire: object imagery
paper folding test: spatial imagery
congenital aphantasia
inability to form visual images
hyperphantasia
extremely vivid mental imagery (associated with better autobiographical memory)
vividness of mental images and individual differences
familiarity = more vivid
expertise = more vivid (musicians have more vivid auditory imagery of music)
visualizers vs. verbalizers
visualizers recall past events with images, verbalizers with words
both use visual imagery equally, but verbalizers use more auditory imagery
heard vs. imagined timbre experiment for imagery
Ps judged whether a heard tone came from a different instrument than an imagined tone - RT was faster when the two timbres matched (parallel to the purely perceptual task, though the effect is weaker)
so imagery and perception share brain mechanisms
imagery feedback piano-playing experiment
Ps either got all feedback during training, only auditory, only tactile, or no feedback
recall decreases as the amount of feedback decreases (but Ps high in auditory imagery had better recall in the tactile-only condition = able to compensate for the missing auditory feedback)
chromesthesia linked to memory
sound linked to colour - memory aid (people with absolute pitch said their chromesthesia helped determine pitch)
amusia and imagery
tone-deafness - associated with deficits in visual/spatial imagery (higher tone-deafness scores = more errors on a mental rotation task) - shows that types of imagery interact
schematic knowledge
general background gained through experience
category
set of items that are perceptually, functionally, or biologically similar
exemplar
item within a category
concept
mental representation of an object, idea, event (the reason why we group things as part of a category)
commonsense knowledge problem
humans have implicit knowledge, but it has to be explicit in computers (so they don’t have the same common sense)
classical view of categorization
category membership is determined by defining features which are sufficient and necessary
defining vs. characteristic features
defining: necessary and sufficient
characteristic: common but nonessential
works well for simple concepts but not ambiguous ones or ones that are subject to variability
against the classical view of categorization
theoretical: defining features are difficult to pinpoint
complex and changing stimuli = you have to change your defining features or exclude certain exemplars (three-legged dog)
typicality effects cannot be explained
typicality effects
we are faster to ascribe membership to typical exemplars of a category
we name them first as part of a category
infants recognize typical exemplars first
when primed with a typical exemplar, RT is faster for typical exemplars than atypical
prototype/probabilistic theory of categorization
similarity-based approach, treats concepts as context-independent
characteristic features are stored as an abstraction (average and most typical)
family resemblance
at least one feature is shared with another member, but not necessarily shared among all members
issues with prototype theory
doesn’t explain the context-dependent typicality effects (which bird is more typical depends on your environment)
doesn’t explain how to account for atypical members of a category (penguin)
exemplar theory
similarity-based approach
we store actual examples of items we’ve previously encountered (depends on past experience - explains context-dependence of typicality effects)
what is not explained by prototype and exemplar theories?
we give typicality ratings to items that have clearly defined rules (3 is ‘more odd’ than 447)
both are based on comparing similarities - how do we decide which features to compare?
knowledge-based theories of categorization
based on psychological essentialism (categories have a fundamental unique essence)
when we learn about a category, we make associations to knowledge to explain the combinations of features
basic level categories
informative and distinctive from other categories (dog)
support cognitive economy (balancing between general and specific)
children learn this level first; semantic dementia patients retain readier access to basic-level knowledge (as the disease progresses, they fall back on superordinate labels)
subordinate categories
very informative but not distinctive (from other members within that category - German Shepherd)
superordinate categories
not informative but very distinctive (animal vs. fruit)
hierarchical model of semantic networks
properties are stored only once, at the highest level to which they apply, rather than within every node = cognitive economy
doesn’t account for typicality effects
property inheritance
in the hierarchical model, subordinate categories inherit the properties of superordinate categories
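The inheritance lookup can be sketched as a toy network (the nodes and properties below are the classic canary example, filled in here for illustration):

```python
# Toy hierarchical semantic network: each node stores only the properties
# true at its own level; everything else is inherited from its parent chain.
network = {
    "animal": {"parent": None,     "props": {"breathes", "eats"}},
    "bird":   {"parent": "animal", "props": {"has wings", "can fly"}},
    "canary": {"parent": "bird",   "props": {"is yellow", "can sing"}},
}

def properties(node):
    """Collect a node's own properties plus everything inherited upward."""
    props = set()
    while node is not None:
        props |= network[node]["props"]
        node = network[node]["parent"]
    return props

print(sorted(properties("canary")))
```

Note the cognitive economy: "breathes" is stored once at `animal`, yet the lookup still attributes it to `canary` via inheritance.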
spreading activation model of semantic networks
nodes are connected via semantic relatedness, not hierarchy
explains typicality effects because typical exemplars are more semantically similar
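The contrast with the hierarchical model can be sketched with a toy graph (all link strengths below are invented for illustration): links carry graded semantic relatedness rather than strict subset relations, so typical exemplars receive more activation from a probe.

```python
# Toy spreading-activation network: edges carry semantic relatedness in
# [0, 1]; there is no hierarchy, only graded links.
links = {
    "bird":    {"robin": 0.9, "penguin": 0.2, "wings": 0.8},
    "robin":   {"bird": 0.9, "red breast": 0.7},
    "penguin": {"bird": 0.2, "antarctica": 0.8},
}

def activation(source, target, depth=3):
    """Strongest product of link strengths over paths of <= depth hops:
    a crude stand-in for activation spreading out from a probed node."""
    best = 0.0

    def spread(node, strength, hops, seen):
        nonlocal best
        if node == target:
            best = max(best, strength)
            return
        if hops == 0:
            return
        for nxt, w in links.get(node, {}).items():
            if nxt not in seen:
                spread(nxt, strength * w, hops - 1, seen | {nxt})

    spread(source, 1.0, depth, {source})
    return best

# Typical exemplars sit closer in the network, so they light up faster.
print(activation("bird", "robin"), ">", activation("bird", "penguin"))
```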
method of repeated reproduction
abstract drawings copied from memory begin to resemble familiar objects (using schematic memory)
symbol grounding problem
symbols defined only in terms of other symbols never touch the world; they need some way to connect to real referents (like sensory input) to have meaning
artificial neural networks
knowledge is stored in the distribution of connection weights, not in individual nodes, so the network can withstand the loss of some nodes (graceful degradation)
graceful degradation
brain damage to one area doesn’t result in loss of entire brain function - you can have category-specific deficits in semantic knowledge like living things vs. non-living things
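Distributed storage can be sketched with a one-pattern Hebbian associative memory (the patterns and lesion size below are invented for illustration): because the key-value association is spread across a whole weight matrix, zeroing a random fraction of the weights barely disturbs recall.

```python
import random

random.seed(1)

# Store one key -> value association as a distributed weight matrix
# (Hebbian outer product), not in any single node.
key = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1]
value = [1, 1, -1, -1, 1, -1, 1, 1, -1, -1]
n = len(key)
W = [[key[i] * value[j] for j in range(n)] for i in range(n)]

def recall(weights):
    """Retrieve the stored value by probing with the key, then thresholding."""
    out = [sum(weights[i][j] * key[i] for i in range(n)) for j in range(n)]
    return [1 if x >= 0 else -1 for x in out]

assert recall(W) == value  # the intact network retrieves the value perfectly

# "Lesion" roughly 20% of the weights: recall survives largely intact
# rather than failing catastrophically.
damaged = [row[:] for row in W]
for _ in range(n * n // 5):
    damaged[random.randrange(n)][random.randrange(n)] = 0
overlap = sum(a == b for a, b in zip(recall(damaged), value))
print(overlap, "of", n, "bits still correct after the lesion")
```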
weak view of embodied/grounded cognition
the body indirectly influences cognition (judgments, memory) - matching body position at encoding and retrieval = better autobiographical memory
strong view of embodied/grounded cognition
body causes cognition: cognition is grounded in sensorimotor experiences - knowledge is stored as a distributed pattern of activity in sensorimotor neurons
pros of embodied cognition view
flexible, goal-driven, and context-dependent (most relevant knowledge is most easily retrieved)
semantic dementia and brain areas
loss of knowledge about objects due to neurodegeneration in the anterior temporal lobe (but this area isn’t activated in semantic tasks and damage to it doesn’t always present with semantic dementia)
hub-and-spoke model and evidence
the anterior temporal lobe is where abstracted knowledge is stored and modality-specific details are held in spokes distributed across the cortex
evidence: TMS over the inferior parietal lobe (a 'spoke' for grasping non-living objects) disrupts naming of those objects
how do we learn concepts?
through generalization from specific episodic memories
fuzzy boundaries of categorization
an item can be more or less part of a category, membership can be a matter of degree (it depends what aspect of an object you focus on - a sled can be a toy or a vehicle)
do we use prototype or exemplar theory?
both; sometimes we need to access concepts abstractly, sometimes in terms of specificity of exemplars
conceptual expansion
thinking beyond definite boundaries of concepts - creativity
ADHD: problems inhibiting unrelated information could be beneficial for creativity
perceptual symbols system
perception and concept knowledge are linked as perceptual symbols - we access different features based on our goals (concepts aren’t stored abstractly, but across our senses)
evidence for the perceptual symbols system
property verification tasks: people are faster to verify a property (loud - blender) if the previous trial recruited the same modality (rustling - leaves, both auditory)
brain representation: same regions are active when reading action words and performing those actions
sensory functional theories
concepts are represented by the features most diagnostic of that concept (living things by their visual features vs. nonliving things by their function)