Similarity & Analogy Flashcards

1
Q

Similarity: Spatial Model

A

We represent concepts as points in a mental space; distance between points is a function of their similarity (more similar = closer together).
- Reaction-time (RT) data are consistent with these distances

2
Q

Spatial Model: Latent-semantic analysis (Landauer & Dumais, 1997)

A

Build a word-by-context matrix from an encyclopedia: columns = encyclopedia entries, rows = every word that appears in the encyclopedia.
Two words are more similar the more often they appear together in the same entries, relative to how often they appear in entries without each other.
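
A minimal sketch of the general idea in Python/NumPy (a toy corpus and pipeline of my own, not Landauer & Dumais's actual weighting scheme or ~300-dimension solution):

```python
# Toy LSA-style pipeline: word-by-entry counts -> truncated SVD -> cosine similarity.
# The "entries" below are made-up stand-ins for encyclopedia entries.
import numpy as np

entries = [
    "the sun is a star and the planets orbit the sun",
    "the nucleus is orbited by an electron",
    "a hotel and a motel both rent rooms",
]

vocab = sorted({w for e in entries for w in e.split()})
index = {w: i for i, w in enumerate(vocab)}

# Rows = words, columns = encyclopedia entries.
counts = np.zeros((len(vocab), len(entries)))
for j, entry in enumerate(entries):
    for w in entry.split():
        counts[index[w], j] += 1

# Truncated SVD gives each word a low-dimensional "latent" vector.
U, S, _ = np.linalg.svd(counts, full_matrices=False)
k = 2                                   # keep 2 latent dimensions for this toy matrix
word_vectors = U[:, :k] * S[:k]

def similarity(w1, w2):
    """Cosine similarity between two words' latent vectors."""
    v1, v2 = word_vectors[index[w1]], word_vectors[index[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

print(similarity("sun", "planets"))     # words that share an entry
print(similarity("sun", "motel"))       # words that never co-occur
```

With such a tiny corpus the numbers themselves mean little; the point is only the shape of the pipeline (co-occurrence matrix, dimensionality reduction, vector similarity).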

3
Q

Similarity: Feature Model

A

The spatial model can't be right: similarity judgments violate the metric axioms.
- Similarity is asymmetric: e.g. "Canada is like the USA" is not judged the same as "the USA is like Canada", yet a distance must be the same in both directions.
- Metric spaces must obey the triangle inequality: distance AC cannot be longer than AB + BC. Similarity judgments can violate this: e.g. Russia and Jamaica are judged more dissimilar than would be expected from comparing the USSR to Cuba and Cuba to Jamaica (checks sketched after this list).
- In a metric space, similarity and difference would be exact inverses of each other; judged similarity and difference are not.
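
A tiny illustration of the two checks, with invented numbers purely to show the logic (not actual rating data):

```python
# Two metric axioms that judged similarity can violate (numbers are invented for illustration).

def symmetric(d, a, b):
    """A metric distance requires d(a, b) == d(b, a)."""
    return d[(a, b)] == d[(b, a)]

def triangle_ok(d, a, b, c):
    """A metric distance requires d(a, c) <= d(a, b) + d(b, c)."""
    return d[(a, c)] <= d[(a, b)] + d[(b, c)]

# Hypothetical "dissimilarity" judgments (higher = more different).
judged = {
    ("Canada", "USA"): 2.0, ("USA", "Canada"): 4.0,    # judged asymmetrically
    ("USSR", "Cuba"): 3.0, ("Cuba", "Jamaica"): 3.0,
    ("USSR", "Jamaica"): 9.0,                          # 9 > 3 + 3
}

print(symmetric(judged, "Canada", "USA"))              # False
print(triangle_ok(judged, "USSR", "Cuba", "Jamaica"))  # False
```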

4
Q

Similarity: Structural Model

A

Similarity between objects is perceived in terms of their roles and relations to other objects, not just their individual features.

  • Features are represented in a structured, coherent manner
  • Objects are bound to one another by relations
5
Q

Similarity: Structural Model: Alignable differences

A

E.g. the differences between a hotel and a motel (alignable) vs. a hotel and a coconut (non-alignable).
When objects have highly alignable differences, similarity ratings are higher than for objects with low alignability.
When the relevant dimensions aren't shared, it is difficult to represent the differences at all.

6
Q

How do similarity judgements highlight relations?

A

E.g. with pictures of a squirrel: people who are first asked to rate the similarity between two pictures more readily align the pictures according to the depicted relational roles.

7
Q

Structured relational representations: Terminology for:

  • big(sun)
  • bigger(sun, planets)
  • CAUSE [bigger(sun, planets), revolves around (planet, sun)]
A

Predicate = an expression that states something about its argument(s) (cf. the grammatical predicate "went home" in "John went home"); written here as predicate(arguments).

Attribute = a predicate with one argument, e.g. big(sun).
Relation = a predicate with multiple arguments/roles, e.g. bigger(sun, planets).
Higher-order relation = a predicate that takes lower-order relations as its arguments, e.g. CAUSE [bigger(sun, planets), revolves around(planets, sun)].
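
To make the notation concrete, here is a minimal illustrative encoding in Python (my own sketch, not anything from the lecture): predicates as nested tuples whose first element is the predicate name.

```python
# Structured relational representations as nested tuples: ("predicate", arg1, arg2, ...).
# Arguments may themselves be predicate tuples, which is what makes a relation higher-order.

big_sun  = ("big", "sun")                   # attribute: one-place predicate
bigger   = ("bigger", "sun", "planets")     # relation: multi-place predicate
revolves = ("revolves-around", "planets", "sun")
cause    = ("CAUSE", bigger, revolves)      # higher-order relation

def is_higher_order(expr):
    """A predicate is higher-order if any of its arguments is itself a predicate."""
    return any(isinstance(arg, tuple) for arg in expr[1:])

print(is_higher_order(big_sun))   # False
print(is_higher_order(cause))     # True
```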

8
Q

What is an analogy?

A

Analogy = two conceptual domains share relational similarity, not feature- or object-based similarity.
e.g. CAUSE [bigger (sun, planets), revolves around (planets, sun)] is equivalent to
CAUSE [bigger (nucleus, electron), revolves around (electron, nucleus)]

9
Q

Structure-mapping theory of analogy (Gentner, 1983; 2010)

A

Comparisons involve an alignment of relational structures
- especially of their hierarchical (higher-order) structure
E.g. the alignment between nucleus-electron and sun-planets in "CAUSE [bigger (sun, planets), revolves around (planets, sun)] is equivalent to CAUSE [bigger (nucleus, electron), revolves around (electron, nucleus)]"

10
Q

Structure-mapping theory of analogy: Constraints: Structural Consistency

A
  • One-to-one mapping of arguments
    E.g. "sun" maps onto "nucleus" only, not onto both "nucleus" AND "bigger", in CAUSE [bigger (sun, planets), revolves around (planets, sun)] is equivalent to CAUSE [bigger (nucleus, electron), revolves around (electron, nucleus)]
  • Parallel connectivity between the arguments of each matched predicate
    E.g. in the analogy above, "sun" -> "nucleus" and "planets" -> "electron", not "sun" -> "electron"
    (so the argument roles are directly parallel across both structures; see the sketch after this list)
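
A toy check of the two constraints, reusing the nested-tuple encoding from card 7 (my own illustration, not Gentner's actual SME program):

```python
# Toy structural-consistency check on a proposed object mapping (illustrative only).

solar = ("CAUSE", ("bigger", "sun", "planets"), ("revolves-around", "planets", "sun"))
atom  = ("CAUSE", ("bigger", "nucleus", "electron"), ("revolves-around", "electron", "nucleus"))

def one_to_one(mapping):
    """Each base object maps to exactly one target object, and no two share a target."""
    return len(set(mapping.values())) == len(mapping)

def parallel_connectivity(base, target, mapping):
    """Matched predicates must place corresponding arguments in corresponding roles."""
    if isinstance(base, str) or isinstance(target, str):
        return isinstance(base, str) and isinstance(target, str) and mapping.get(base, base) == target
    return (base[0] == target[0] and len(base) == len(target)
            and all(parallel_connectivity(b, t, mapping) for b, t in zip(base[1:], target[1:])))

good = {"sun": "nucleus", "planets": "electron"}
bad  = {"sun": "electron", "planets": "nucleus"}   # crossed correspondences

print(one_to_one(good), parallel_connectivity(solar, atom, good))  # True True
print(parallel_connectivity(solar, atom, bad))                     # False
```
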
11
Q

Structure-mapping theory of analogy: Constraints: Systematicity

A

Deeply nested (systematic) relational structures make better analogies.
E.g. "the US invasion of Iraq is like WW2":
- some overlap in surface features: both dictators were bad guys who killed their own citizens and targeted specific ethnic subgroups
- but no deep systematic commonality: the leaders used different …

12
Q

Structure-mapping theory of analogy: What can deeply nested relational structures in analogies help us learn?

A

Make inferences
E.g. the solar system/atom analogy: if the planets follow elliptical orbits around the sun, we can (tentatively) infer that electrons follow elliptical orbits around the nucleus (sketched below)

Highlight alignable differences between the two concepts (e.g. gravity in the solar system corresponds to electromagnetic force in the atom)

Abstract out commonalities, e.g. a common theory that underlies both systems (a central force system underlies both the solar system and the atom)
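
A toy sketch of that inference-projection step, using the same tuple encoding as earlier cards (my own illustration of substituting mapped objects, not the theory's official algorithm):

```python
# Project a candidate inference: copy a base fact into the target, substituting mapped objects.

mapping = {"sun": "nucleus", "planets": "electron"}

def project(expr, mapping):
    """Recursively replace base-domain objects with their target counterparts."""
    if isinstance(expr, str):
        return mapping.get(expr, expr)
    return (expr[0],) + tuple(project(arg, mapping) for arg in expr[1:])

base_fact = ("elliptical-orbit", "planets", "sun")
print(project(base_fact, mapping))   # ('elliptical-orbit', 'electron', 'nucleus')
```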

13
Q

Structure-mapping vs Feature comparison

Similarities and differences

A

Similarities between structure-mapping and feature comparison:

  • One-to-one mapping
  • Parallel connectivity
  • Systematicity

Differences:
Structure-mapping:
- Computationally expensive, even for simple scene comparisons (people are unable to do it under working-memory load)
- Spatial and feature-based processing are less costly

14
Q

Representational pluralism: Inference vs Memory

A

Not all information need be represented in a structured format; information can be represented differently depending on the task and the cognitive processes engaged.
- This is partly because structured representations are computationally expensive.

Hence, depending on the task and cognitive process, different representational formats best explain the observed data patterns.

15
Q

Representational pluralism: Inference vs Memory

Study by Gentner, Rattermann, & Forbus (1993):

A

Participants read stories.
Story A: a bird-hunter and an eagle have a conflict, but resolve it
Relational match: one nation wants to invade another, but they resolve the conflict
Surface match: a story about a hawk raising baby birds

Group 1: given two stories and asked to evaluate inferences from Story A to Story B, based on their shared content
Group 2: given one story and asked what it reminded them of

Inference judgments favoured the relational match; memory (reminding) favoured the surface match.

16
Q

Representational pluralism: Inference vs Memory: Model of analogical memory and inference

A
  • During memory search, cases are represented as unstructured sets of features (there is enough space in LTM to store them in this non-compact way); content overlap between these feature vectors is what cues remindings
  • Reminding -> the small set of remembered cases is then represented as full relational structures, to see whether any share enough relational commonality to drive inference (see the sketch after this list)
  • So some information can be represented in different formats, depending on the task
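
A toy sketch of that two-stage flow: a cheap feature-overlap pass over everything in memory, then a relational check on only the best candidates. The stories, feature sets, and the crude relational check are my own illustrative inventions, not the published model:

```python
# Toy two-stage retrieval: (1) unstructured feature overlap over all stored cases,
# (2) relational comparison on only the top candidates (a stand-in for full structure mapping).
from collections import Counter

memory = {
    "eagle story": {"features": Counter(["bird", "hunter", "conflict", "resolution"]),
                    "relations": {("conflict", "hunter", "eagle"), ("resolve", "hunter", "eagle")}},
    "hawk story":  {"features": Counter(["bird", "hawk", "nest", "babies"]),
                    "relations": {("raise", "hawk", "babies")}},
}

probe = {"features": Counter(["nation", "conflict", "resolution", "invasion"]),
         "relations": {("conflict", "nationA", "nationB"), ("resolve", "nationA", "nationB")}}

def feature_overlap(a, b):
    """Stage 1: cheap unstructured match (count of shared features)."""
    return sum((a & b).values())

def relational_overlap(a, b):
    """Stage 2: crude relational check (shared predicate names)."""
    return len({r[0] for r in a} & {r[0] for r in b})

# Stage 1: rank every stored case by surface-feature overlap with the probe.
ranked = sorted(memory, key=lambda k: feature_overlap(probe["features"], memory[k]["features"]),
                reverse=True)

# Stage 2: build structured representations only for the top few and compare relations.
for case in ranked[:2]:
    print(case, relational_overlap(probe["relations"], memory[case]["relations"]))
```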