Knowledge and how it's represented in the brain Flashcards
What are concepts?
the mental representations of categories
(hierarchical) networks of concepts in semantic memory
Why do we have concepts?
enable us to generalise from past experience of instances to predict the properties, behaviour, etc. of new instances
Why categorise?
categorisation enables prediction:
- provided your categories are “natural kinds” (i.e. provide some basis for generalisations)
- provided you generalise on the right basis: i.e. don’t mix up accidental and essential properties
What evidence is there suggestive of economy principle?
reaction time (RT) for verifying statements such as “Object X has property Y” (Collins and Quillian, 1969)
if properties are stored only at the highest level at which they apply (cognitive economy), then properties stored higher up the hierarchy should take longer to retrieve.
retrieval time is determined by the number of links through which activation must spread to determine whether the category and the property are appropriately linked (sketched below).
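A minimal sketch of the link-counting idea (the toy hierarchy and property placements are invented for illustration, not the original materials):

```python
# Toy Collins-and-Quillian-style network: each property is stored at the
# highest node to which it applies (cognitive economy).
ISA = {"canary": "bird", "bird": "animal"}        # child -> parent links
PROPS = {
    "canary": {"is yellow"},
    "bird": {"has wings"},
    "animal": {"breathes"},
}

def verification_links(concept, prop):
    """Number of IS-A links traversed before the queried property is found."""
    links, node = 0, concept
    while node is not None:
        if prop in PROPS.get(node, set()):
            return links                  # RT predicted to grow with this count
        node = ISA.get(node)              # spread activation one level up
        links += 1
    return None                           # property not reachable from this concept

# Predicted ordering: "is yellow" (0 links) < "has wings" (1) < "breathes" (2)
print(verification_links("canary", "is yellow"),
      verification_links("canary", "has wings"),
      verification_links("canary", "breathes"))
```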
What are verification times also influenced by?
when concepts are acquired (RT for “a dog is an animal” < “a dog is a mammal”)
familiarity: more familiar concepts verified quicker
typicality (see later)
A more complex process
tree-structures only appropriate for some kinds of knowledge — e.g. object-concepts. Different structures in semantic memory for, e.g.,
adjectival properties: oppositional pairs or dimensions
“schemas” and “scripts”
What evidence is there for a basic level of abstraction in our conceptual hierarchies?
children learn basic-level terms first
people spontaneously name at basic level
languages are more likely to have single words for basic-level categories (e.g. beech, hornbeam) than for subordinates (e.g. copper beech)
people can describe properties more easily for basic level (chair vs furniture) — captures a set of objects with some degree of structural and functional similarity
The classic account: Defining features (Aristotle)
we classify by reference to a mental definition listing features (attributes, properties) of a category’s members such that:
- the features are necessary (essential), not merely frequently associated with being a member
  - having four sides: necessary for square
  - owning a wedding ring: common but not necessary for wife
- a set of such features is sufficient
  - a closed figure, four sides, sides of equal length, and equal angles: jointly sufficient for square
  - female, married, not divorced, husband alive: jointly sufficient for wife
Defining features and conceptual hierarchies
fits with hierarchical structure of conceptual relationships
apples share some defining features with all members of the superordinate category fruit,
and differ from category co-ordinates (pears, bananas, etc.) in terms of other defining features
What are some problems with the classical view? - logical (a priori) arguments
for many concepts, hard to come up with common-sense defining features (especially superordinate categories like furniture, fruit)
some categories seem to have no common core of defining features: e.g. game (Wittgenstein, 1953): they have a “family resemblance”
What are the effects of typicality on performance?
RT for category decision depends on rated typicality of the instance: e.g. RT for “chicken is a bird” > “sparrow is a bird” (Rips, Shoben and Smith, 1973; Rosch, 1973)
asked to generate members of a category, people are more likely to name typical members first (Mervis, Catlin and Rosch, 1976)
children learn typical members earlier (Rosch, 1973)
so, not all members of a category are equally “good” members – and this is not just a matter of frequency/familiarity: chicken is much more familiar than goldfinch, but takes longer to classify as a bird.
so all instances are members, but not all members are equally good members
What is the modified feature-list theory?
categorisation by probabilistic/weighted combination of characteristic features
category members share family resemblance, not definitive list of features.
features associated with a concept by varying probabilities, or weights
category membership determined by which category’s features produce the highest weighted sum of matching features.
different combinations of features can generate a better match to category A versus B, C etc. i.e. no features are necessary, just need enough characteristic features
hence:
typical members have more high-weight features: we can think of the feature list as defining a category prototype
category boundaries are fuzzy
context influences the weight or diagnosticity of particular features, so that borderline members may be classified inconsistently (e.g. sheep/goats)
similar members have more features in common
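A minimal sketch of the weighted-feature rule (the features and weights are invented for illustration): each category scores an instance by summing the weights of its characteristic features that the instance possesses, and the instance is assigned to the highest-scoring category.

```python
# Hypothetical characteristic-feature weights for two categories.
CATEGORY_WEIGHTS = {
    "bird":   {"has feathers": 0.9, "flies": 0.7, "lays eggs": 0.5, "sings": 0.3},
    "mammal": {"has fur": 0.9, "bears live young": 0.8, "four legs": 0.4},
}

def classify(instance_features):
    """Assign the category whose matching features give the highest weighted sum."""
    scores = {
        category: sum(w for feat, w in weights.items() if feat in instance_features)
        for category, weights in CATEGORY_WEIGHTS.items()
    }
    return max(scores, key=scores.get), scores

# A platypus-like instance shares features with both categories;
# no single feature is necessary - the weighted sum decides.
print(classify({"has fur", "lays eggs"}))
# -> ('mammal', {'bird': 0.5, 'mammal': 0.9})
```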
What are the effects of similarity? (Rips et al, 1973)
Participants (Ps) rate the similarity of many pairs, e.g.
canary-robin
goose-sparrow
bird-parrot
“multi-dimensional scaling” ==> finds the “space” that best captures similarity ratings
here a 2D space captured most of the variance.
semantic space a continuum, with focal “good” examples and outliers, no sharp boundaries
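A minimal sketch of the scaling step (the rated pairs and values are invented, and scikit-learn is assumed to be available): similarity ratings are converted to dissimilarities and embedded in a 2D space.

```python
import numpy as np
from sklearn.manifold import MDS

items = ["canary", "robin", "goose", "sparrow", "parrot"]
# Invented symmetric similarity ratings on a 0-10 scale (10 = identical).
sim = np.array([
    [10, 8, 4, 7, 6],
    [ 8, 10, 4, 8, 6],
    [ 4, 4, 10, 4, 3],
    [ 7, 8, 4, 10, 5],
    [ 6, 6, 3, 5, 10],
], dtype=float)

dissim = sim.max() - sim                              # similarity -> dissimilarity
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)                    # 2D "semantic space" coordinates

for name, (x, y) in zip(items, coords):
    print(f"{name:8s} {x:6.2f} {y:6.2f}")
```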
Is a prototype an average member?
intuitively appealing when the relevant attributes vary along continuous dimensions
average member = central tendency of a distribution.
E.g. faces:
for length of nose, distance between eyes, etc., can compute mean values, distributions
hence can morph between faces, produce more extreme version of a face (caricature)
average faces more attractive
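A minimal sketch of the central-tendency idea on continuous dimensions (the measurements are invented): the prototype is the per-dimension mean, and a caricature exaggerates a face's deviation from that mean.

```python
import numpy as np

# Invented face measurements: rows = faces, columns = (nose length, eye spacing) in mm.
faces = np.array([
    [48.0, 62.0],
    [55.0, 60.0],
    [51.0, 65.0],
    [46.0, 63.0],
])

prototype = faces.mean(axis=0)          # central tendency on each dimension

def caricature(face, strength=1.5):
    """Exaggerate a face's deviation from the prototype (strength > 1)."""
    return prototype + strength * (face - prototype)

print("prototype:", prototype)
print("caricature of face 0:", caricature(faces[0]))
```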
Is a prototype a mixture of discrete features and central tendency of continuous dimensions?
but: many features are discrete, not continuous:
Reef egrets (the “herons” of Heron Island, on the Great Barrier Reef) are white OR dark grey: the prototypical reef egret is surely not light grey!
perhaps the appropriate account is a hybrid of a feature list and dimensional representation: feature list for discrete properties, average and distribution for continuous dimensions
What are prototype theories?
abstractionist
fundamental property of such theories:
from encounters with many members, a mental description of the category’s characteristic (typical) properties is abstracted.
categorisation works by computing similarity to these abstract representations of our various categories: we assign the category whose prototype has the highest similarity to the instance considered.
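A minimal sketch of the prototype decision rule (the prototype vectors and similarity measure are invented; compare with the exemplar rule sketched further below): similarity is computed to a single stored prototype per category.

```python
import numpy as np

# One invented prototype vector per category (features on a 0-1 scale).
prototypes = {
    "bird":   np.array([0.9, 0.8, 0.1]),
    "mammal": np.array([0.1, 0.2, 0.9]),
}

def classify(instance):
    """Assign the category whose prototype is most similar (here: nearest) to the instance."""
    scores = {c: -np.linalg.norm(instance - p) for c, p in prototypes.items()}
    return max(scores, key=scores.get)

print(classify(np.array([0.8, 0.3, 0.2])))   # -> 'bird'
```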
What are prototype theories challenged by?
Exemplar/instance theories
What are exemplar/instance theories?
no abstract representation of the category is formed!
we retain in memory records of many exemplars (or instances) we have encountered, presumably labelled (as, e.g., “dog”, “bagel”)
we classify “on the fly” by computing the similarity between each new stimulus and stored instances of various objects retrieved from memory.
we assign the stimulus to the category whose instances in memory have the greatest summed similarity to the stimulus.
classification RT reduced as the summed similarity (over instances) increases.
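A minimal sketch of the summed-similarity rule (the stored exemplars and the exponentially decaying similarity function are assumptions, in the spirit of exemplar models such as Nosofsky's generalized context model):

```python
import numpy as np

# Stored, labelled exemplars in a 2D feature space (invented values).
exemplars = {
    "dog":   np.array([[2.0, 3.0], [2.5, 2.8], [1.8, 3.4]]),
    "bagel": np.array([[7.0, 1.0], [6.6, 1.2], [7.3, 0.8]]),
}

def summed_similarity(stimulus, category, sensitivity=1.0):
    """Sum of exponentially decaying similarities to every stored exemplar."""
    distances = np.linalg.norm(exemplars[category] - stimulus, axis=1)
    return np.exp(-sensitivity * distances).sum()

def classify(stimulus):
    """Assign the category whose stored exemplars have the greatest summed similarity."""
    scores = {c: summed_similarity(stimulus, c) for c in exemplars}
    return max(scores, key=scores.get), scores

print(classify(np.array([2.2, 3.1])))   # closer to the stored "dog" exemplars
```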
How can we test prototype v exemplar theories?
experiments on learning novel, experimenter-designed categories
Experiments on learning novel, experimenter-designed categories
evidence discussed so far involves getting people to verify statements about, or judge the similarity of, or generate, or label, members of pre-existing natural categories.
but, from an experimenter’s perspective, natural concepts are a mess! Similarity and prior experience are hard to assess or control
to control similarity, experience etc. precisely, teach people new carefully-constructed artificial categories.
What is artificial category learning? (Posner and Keele, 1968)
created instances of 2 dot-pattern categories by applying random but controlled nudges (big or small) to dots in Prototype patterns A and B
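A minimal sketch of how such instances can be generated (the dot count, jitter sizes, and uniform-jitter rule are assumptions; Posner and Keele used statistical distortion rules of this general kind):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prototype(n_dots=9, size=50):
    """A random dot pattern serving as a category prototype."""
    return rng.uniform(0, size, size=(n_dots, 2))

def distort(prototype, level="small"):
    """Create an instance by nudging each dot away from its prototype position."""
    jitter = {"small": 2.0, "big": 6.0}[level]
    return prototype + rng.uniform(-jitter, jitter, size=prototype.shape)

prototype_A = make_prototype()
training_instances_A = [distort(prototype_A, "big") for _ in range(4)]   # a1-a4
```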
What is the ‘prototype effect’ in category learning? (Posner and Keele, 1970)
after training on several A/B instances, test on
old instances (a1 - a4)
new instances (a5 - a8, just as similar to prototype)
the prototype (A) [never seen during training]
prototype A categorised faster and more accurately than
other new instances (a5 – a8)
old instances (a1 - a4) (at least in experiments with many different training instances)
this prototype advantage increased after a week’s delay (i.e. individual instances show more forgetting).
initially interpreted as evidence for abstraction of prototype
but it is also consistent with exemplar theory: the prototype’s summed similarity to the old instances is (on average) greater than any single new instance’s summed similarity to them
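A minimal sketch of that argument (one-dimensional instance values invented; similarity again assumed to decay exponentially with distance): because the old instances scatter around the prototype, the prototype's summed similarity to them tends to exceed that of a typical new instance drawn from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
prototype = 0.0
old_instances = prototype + rng.normal(0, 1, size=6)      # training items scatter around A
new_instances = prototype + rng.normal(0, 1, size=1000)   # comparable unseen items

def summed_similarity(x, stored):
    return np.exp(-np.abs(stored - x)).sum()

proto_score = summed_similarity(prototype, old_instances)
mean_new_score = np.mean([summed_similarity(x, old_instances) for x in new_instances])
print(proto_score, mean_new_score)   # the prototype tends to score higher on average
```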
What are ‘exemplar effects’ in category learning exps?
old instances (a1-a4) classified more efficiently than new instances (a5-a8) with equivalent similarity to the prototype A
classification of new instances influenced by similarity to old instances.
classification more efficient for new instance t2 than for t1 (although both are equidistant from the prototype).
classification influenced by the distribution of instances experienced, not just their average (e.g. if values on two dimensions are correlated)
interpreted (by exemplar theorists) as evidence for exemplar theory
Do exemplar effects imply no abstraction?
instances experienced influence classification (over and above any prototype that could have been abstracted from them)
does this imply no abstract representation of the category?
no! It shows only that memory for instances influences classification. The theory that we abstract a prototype does not deny that we also have memory for individual experiences, which can influence categorisation.
“connectionist” (“parallel distributed processing” [PDP] or “neural network”) models of learning can – in the same neural network – both abstract a prototype and preserve memory traces of experienced instances in such a way that they influence categorisation
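A minimal sketch of the abstraction half of that point (a single logistic unit trained with the delta rule on invented distorted instances; not the specific PDP models referred to): the network never sees the prototypes, yet after training it responds to the prototype of category A at least as confidently as to the trained A instances, even though its weights were shaped only by individual instances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented prototypes in a 10-dimensional binary (+1/-1) feature space.
proto_A = rng.choice([-1.0, 1.0], size=10)
proto_B = -proto_A

def distort(proto, flips=2):
    """An instance is the prototype with a few feature values flipped."""
    x = proto.copy()
    idx = rng.choice(len(x), size=flips, replace=False)
    x[idx] *= -1
    return x

train_x = np.array([distort(proto_A) for _ in range(10)] +
                   [distort(proto_B) for _ in range(10)])
train_y = np.array([1.0] * 10 + [0.0] * 10)        # 1 = category A, 0 = category B

# Single logistic output unit trained with the delta rule.
w, b, lr = np.zeros(10), 0.0, 0.1
for _ in range(200):
    for x, y in zip(train_x, train_y):
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        w += lr * (y - p) * x
        b += lr * (y - p)

def p_A(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

print("never-seen prototype A:   ", p_A(proto_A))
print("mean over trained A items:", np.mean([p_A(x) for x in train_x[:10]]))
```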