Unstructured Categorisation Flashcards

1
Q

similarity and theories of mental representation

A

spatial models
feature models
structured models (e.g., analogy)

2
Q

spatial models of similarity

A

claim: we represent concepts as points in a mental space; similarity is an inverse function of distance

multi-dimensional scaling (MDS) of animal names: take pairwise similarity ratings of all items, then construct a space that respects all the similarity relations

reaction time to confirm that a goose or an eagle is a bird correlates with each item's distance from bird in the derived space

3
Q

latent semantic analysis (LSA)

A

giant matrix: columns are entries in an encyclopedia, rows are every word that appears in the encyclopedia

every cell in the matrix gets a 0 or 1 depending on whether that word appears in that entry

every word is thus represented by a ~10,000-place vector of 0s and 1s

the similarity of two words is, conceptually, the correlation of their two 0/1 vectors

does well on ESL synonym tests; underlies much computational language processing
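The vector-correlation idea can be sketched in a few lines of Python; the toy "encyclopedia" entries and all word lists below are invented purely for illustration:

```python
from math import sqrt

def correlation(u, v):
    """Pearson correlation of two equal-length 0/1 vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Toy "encyclopedia": each entry is the set of words it contains (invented data).
entries = [
    {"goose", "bird", "lake"},
    {"eagle", "bird", "mountain"},
    {"goose", "eagle", "bird"},
    {"car", "engine", "road"},
]

def word_vector(word):
    """One 0/1 slot per encyclopedia entry."""
    return [1 if word in entry else 0 for entry in entries]

# Words that co-occur across entries come out as more similar.
print(correlation(word_vector("goose"), word_vector("eagle")))
print(correlation(word_vector("goose"), word_vector("car")))
```

With a real corpus the vectors would have thousands of slots, but the similarity computation is the same idea.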

4
Q

violation of spatial models (Tversky)

A
  1. in space, the distance from A to B equals the distance from B to A, but similarity is asymmetric: e.g., “Canada is like the USA” is preferred over “the USA is like Canada”
  2. metric spaces obey the triangle inequality: the distance between A & C cannot be greater than the sum of the distances between A & B and between B & C
    - similarity violates this axiom: the USSR and Jamaica are more dissimilar than would be expected from comparing the USSR to Cuba and Cuba to Jamaica
  3. similarity and difference judgments should be exact inverses, but East and West Germany are rated both more similar to and more different from each other than Sri Lanka and Nepal are
5
Q

feature models of similarity

A

Similarity of A and B is the weighted sum of the features common to A and B, minus the features A has that B does not, minus the features B has that A does not

features, as in, simply listing the features of the things being compared: e.g., the USSR and Cuba

each feature set can be weighted as more or less important according to context (weights θ, α, β)

e.g., similarity judgments highlight common features, while difference judgments highlight distinctive features
– East and West Germany would have both more common features and more distinctive features listed than Sri Lanka & Nepal
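Tversky's contrast model can be written down directly; the feature sets below are invented placeholders, and the weights θ, α, β appear as the parameters theta, alpha, beta:

```python
def contrast_similarity(a, b, theta=1.0, alpha=1.0, beta=1.0):
    """Tversky's contrast model on feature sets:
    sim = theta*|common| - alpha*|a only| - beta*|b only|."""
    return (theta * len(a & b)
            - alpha * len(a - b)
            - beta * len(b - a))

# Hypothetical feature sets, for illustration only.
ussr = {"large", "communist", "cold climate", "military power", "nuclear"}
cuba = {"small", "communist", "island", "tropical"}
jamaica = {"small", "island", "tropical", "reggae"}

# The shared "communist" feature makes USSR-Cuba less dissimilar than USSR-Jamaica.
print(contrast_similarity(ussr, cuba), contrast_similarity(ussr, jamaica))
```

Because alpha and beta weight each side's distinctive features separately, unequal weights reproduce the asymmetry point: with alpha != beta, sim(cuba, ussr) differs from sim(ussr, cuba).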

6
Q

categories vs. concepts

A

category: a set of things in the world that we represent as alike in some way, or treat as equivalent for some purpose
concept: the mental representation of a category

7
Q

how are categories represented in spatial and feature-based approaches

A

both approaches assume categories are represented by unstructured collections of features describing the properties of individual objects
– spatial representations likewise draw no meaningful structural distinctions

why “unstructured”: just a big bag of features, e.g., dog: four-legged, furry, barks
– but no account of why four-legged, furry, and barks go together

Many variants on this theme:
•  Classical rule-based view
•  Prototype models
•  Exemplar models
•  Cluster models
•  Category boundaries
8
Q

classical rule-based view

A

categories are represented by a set of defining (necessary and sufficient) features distinguishing members from non-members
e.g., bachelor: unmarried man

people learn categories by holding candidate rules in mind and testing each rule's ability to predict membership

implies membership is binary: an item is either in the category or not
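On this view a category is literally a predicate, so membership is trivially binary; a minimal sketch (the feature names `married` and `sex` are invented for illustration):

```python
def is_bachelor(person):
    """Classical view: necessary and sufficient conditions -> binary membership."""
    return (not person["married"]) and person["sex"] == "male"

# Either in the category or not -- no graded membership.
print(is_bachelor({"married": False, "sex": "male"}))
print(is_bachelor({"married": True, "sex": "male"}))
```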

9
Q

criticism of classical view

A
  1. There are often no necessary and sufficient conditions
    – e.g., for bachelor: the pope, a widower, a man in a monogamous long-term but not legally binding relationship
  2. Wittgenstein’s example of “game”: no defining rules; categories have a family-resemblance structure in which examples share some features with other examples, but no single feature is common to every example
10
Q

prototype theory (Rosch & Mervis)

A

Rosch: prototypes are the collection of the average (mean or mode) features across examples (central members of the category)
– Graded category membership: how similar is any given example to the prototype? e.g., robin vs. penguin for bird
– Classification is not just testing rules, but seeing how similar a new exemplar is to the prototype
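The prototype idea can be sketched as "average the exemplars, then classify by nearest prototype"; the feature dimensions and exemplar values below are invented for illustration:

```python
def prototype(exemplars):
    """Per-dimension mean of a category's exemplar feature vectors."""
    n = len(exemplars)
    return [sum(e[i] for e in exemplars) / n for i in range(len(exemplars[0]))]

def distance(x, y):
    """Euclidean distance in the feature space."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def classify_by_prototype(item, prototypes):
    """Assign the category whose prototype is nearest."""
    return min(prototypes, key=lambda c: distance(item, prototypes[c]))

# Toy dimensions: [flies, has feathers, lives in water] (hypothetical).
birds = [[1, 1, 0], [1, 1, 0], [0, 1, 0]]   # robin, eagle, penguin-like
fish = [[0, 0, 1], [0, 0, 1]]
protos = {"bird": prototype(birds), "fish": prototype(fish)}

print(classify_by_prototype([1, 1, 0], protos))
```

Graded membership falls out of the distances: the robin-like item sits nearer the bird prototype than the penguin-like item does.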

11
Q

experiment with natural categories for prototype theory

A

Subjects were given examples of categories and rated their typicality
– “how typical is the exemplar of the category?” or “how representative…”
– e.g., robin, eagle for bird; gun, sword, axe for weapon etc.

Other subjects listed properties of exemplars of categories and contrast categories
– e.g., fruits (kiwi, orange) vs. vegetables

The more features any given exemplar had in common with other exemplars, and the fewer with exemplars of contrast categories, the higher the typicality rating
• Typicality is a function of overall “cue validity”

12
Q

experiment with artificial categories for prototype theory

A

Subjects learned to classify letter strings as members of two categories
– 6 letter strings per category – e.g., HDFTG, GYHJL

Exemplars of categories had variable number of letters in common with other members and with members of the other category

More features in common with other members and fewer with non-members predicted fewer trials to learn, faster classification, and higher typicality ratings after learning
– That is, higher cue validity, better learning, etc.

people can learn categories with a family-resemblance structure: overlapping features, but no single defining feature

13
Q

Posner & Keele: abstraction of the prototype

A

Subjects categorise dot patterns that are distortions of a prototype

during learning, subjects never see the actual prototype

after learning, they are just as fast/accurate, or faster and more accurate, at classifying the prototype as many of the seen exemplars

people abstract the underlying commonality even if they never saw it, and classify based on how close an item is to that abstraction

14
Q

exemplar theory

A

Agrees with prototype theory on the main advances beyond the classical view
– graded membership; classification is about similarity, not rules

challenges whether abstractions are ever made: categories are represented as collections of encoded exemplars
– or partially encoded exemplars, depending on attention
i.e., every time you classify something, it is based not on similarity to a prototype but on similarity to all previously classified examples
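The "summed similarity to all stored exemplars" idea resembles Nosofsky's generalized context model; a toy sketch, with an assumed exponential similarity-decay function and invented exemplar points:

```python
from math import exp

def similarity(x, y, c=1.0):
    """Similarity decays exponentially with distance (GCM-style assumption)."""
    return exp(-c * sum(abs(a - b) for a, b in zip(x, y)))

def classify_by_exemplars(item, categories):
    """Choice probabilities from summed similarity to every stored exemplar
    (Luce choice rule) -- no prototype is ever computed."""
    sums = {cat: sum(similarity(item, e) for e in exemplars)
            for cat, exemplars in categories.items()}
    total = sum(sums.values())
    return {cat: s / total for cat, s in sums.items()}

# Invented exemplars for two categories.
stored = {"A": [[0, 0], [0, 1]], "B": [[3, 3], [3, 4]]}
print(classify_by_exemplars([0, 0.5], stored))
```

Note the model stores only raw exemplars; any prototype-like behaviour emerges from the summed similarities.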

15
Q

Nosofsky & Shin: re-explaining the unseen-prototype advantage

A

Posner & Keele's (1968) unseen-prototype advantage is re-explained as summed similarity to all exemplars
– this also explains why experienced distortions are classified more easily than non-experienced distortions that are equally similar to the prototype (Shin & Nosofsky, 1992)
e.g., items 1 and 2 are the same distance from the prototype, but classification is better for the one closer to previously classified examples

assuming equal similarity to the prototype: experienced exemplars are classified more easily than novel ones, and novel items near experienced exemplars more easily than those far away

novel atypical example: does having seen an emu help you with an ostrich?

16
Q

Knowlton & Squire (1993): categorization vs. recognition

A

claim: category learning relies on a fundamentally different neural system than explicit memory

controls vs. amnesic patients, on classification and recognition tasks

amnesic patients can tell whether an item is similar to the learned category, but do much worse at answering “is this the same one you've seen before?”
– bad at remembering exemplars, but good at extracting the prototype

Nosofsky's counter: what if all you have is exemplars in both categorisation and recognition, but they are used differently?
– how similar does an item need to be to past experiences?

amnesia may simply reduce sensitivity to similarity

17
Q

summary to exemplar theory

A

People generalize to things that are superficially quite similar to what they have previously experienced

Doctors’ diagnoses of skin disorders are facilitated when cases are similar to previously presented cases, even when the similarity is based on attributes that are known to be irrelevant to the diagnosis (Brooks, Norman, & Allen, 1991)

Why would a system be designed in such a way? Why not just store what is useful?

But how would one know what will be useful later? Storing as much as possible allows a greater variety of information to be used if it turns out to be important.

18
Q

cluster models

A

Can make abstractions and can store exemplars
– based on task and feedback

For example:
as exemplars are encoded, the system predicts their category membership

forms prototype-like summary representations of highly similar exemplars that all lead to the same accurate classification
– e.g., keeps on just seeing bears while learning about mammals
– e.g., a series of small metal spoons…
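The recruitment idea can be sketched as incremental clustering (loosely SUSTAIN-like; the merge threshold and data stream are invented): highly similar same-label exemplars collapse into one prototype-like cluster, while a distant exemplar recruits a new cluster.

```python
def learn_clusters(stream, threshold=1.0):
    """Merge each exemplar into the nearest same-label cluster if it is
    within `threshold` (city-block distance); otherwise recruit a new cluster."""
    clusters = []  # each: {"mean": [...], "label": str, "n": int}
    for features, label in stream:
        best = None
        for cl in clusters:
            d = sum(abs(a - b) for a, b in zip(features, cl["mean"]))
            if cl["label"] == label and d <= threshold and (best is None or d < best[0]):
                best = (d, cl)
        if best is None:
            clusters.append({"mean": list(features), "label": label, "n": 1})
        else:
            cl = best[1]
            cl["n"] += 1
            # Running mean: the cluster is a prototype-like summary.
            cl["mean"] = [m + (f - m) / cl["n"] for m, f in zip(cl["mean"], features)]
    return clusters

# Hypothetical stream: three similar "bears" and one distant mammal.
stream = [([1.0, 1.0], "mammal"), ([1.1, 0.9], "mammal"),
          ([0.9, 1.1], "mammal"), ([5.0, 5.0], "mammal")]
clusters = learn_clusters(stream)
print(len(clusters))
```

The three bear-like exemplars collapse into a single summary cluster; the outlier recruits its own.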

19
Q

Category boundaries

A

As opposed to being concerned with the summary representation of the middle of the category, some theories focus on the importance of the borders between categories
– e.g., Ashby & Maddox (2005)
– Not mutually exclusive, e.g., Love et al., (2004).

Many categories are represented in opposition/contrast to each other, and so the border is highlighted
– e.g., conservative vs. liberal; fruit vs. vegetable

Idealized members/“caricatures” are seen as critical: they are like prototypes, but with features exaggerated away from the category boundary
– Predicted by error-based learning mechanisms
– The same error-based learning that can lead to new cluster recruitment (at least in Love et al., 2004)

20
Q

Davis & Love (2010) study

A

learned 4 categories of energy/political leaders, all 4 differing on 2 dimensions

during learning, any given trial required choosing between only two of the categories

exemplars were represented as values along the 2 dimensions; each two-category choice contrasted a single dimension

subjects were then asked to indicate the average value on each dimension for each category

finding:
on the dimension of contrast, the reported average value was idealized away from the category boundary

on the dimension not contrasted, the reported average value was accurate

point: category learning distorts our understanding/memory of categories

21
Q

summary of feature-based models of categorisation

A

Classical view: categories are represented by necessary and sufficient conditions; people learn categories via the testing of hypotheses of category-defining rules

Prototypes: family resemblance structures, graded membership, form abstract representations of category average (either mean or mode); classification via similarity to prototype

Exemplar: no abstraction; classification via summed similarity from all exemplars, or similarity to individual exemplars

Clusters: concepts composed of multiple clusters picking out lower-order generalizations or even single exemplars

Boundaries: focus on the dividing lines between categories
– Leads to ideals/caricatures, also predicted by the error-driven learning of cluster models

22
Q

Markman & Ross: Category Use and Category Learning

A

After classification or inference training: classify exemplars given either a single feature or all features

Prediction: inference helps learn prototypical value for all features, and so will lead to superior single feature classification

classification training teaches only the part of the puzzle that is useful (the specific features that predict the category), whereas inference training requires learning the internal relationships among the parts and all the features