Exam 2 Flashcards

1
Q

What is the entry level of visual recognition?

A

Sort of analogous to the basic level of categories – the level your visual system recognizes things at
—> the level at which objects are most easily distinguished in terms of the relations between their categorical parts

2
Q

What does it mean (very broadly) to recognize an object?

A

Match a representation of the stimulus (the visual image) to a representation stored in LTM. When you succeed in finding a match, you’ve recognized the object.

3
Q

What are some characteristics of the starting point of object recognition – area V1?

A

~Neurons in V1 respond to bars and edges (simple features) in the visual image
-each neuron responds to a specific combination of location in visual field, orientation, spatial frequency, etc…
~Very sensitive to very particular details – change any one of these properties, and you change which neurons respond to the stimulus
-thus, sensitive to viewpoint
-this means that the representations we match for recognition are NOT stored at the level of V1
~Retinotopic mapping is a thing

4
Q

What are the general properties of human object recognition (aka, properties of the representation of shape) and how do they relate to V1? What does this imply about the representations we’re matching to LTM?

A
Invariant with...
   -translation across retina (unlike V1)
   -changes in scale (unlike V1)
   -left-right reflection (unlike V1)
Sensitive to...
   -rotation in picture plane (like V1)
and, to a lesser extent...
   -rotation in depth (unlike V1)
Representations we're matching don't have properties of V1 representations --> we use V1 to compute something else
5
Q

What do neurons in inferotemporal cortex do?

A

Provide info about object identity: what we’re looking at
-mostly (but not exclusively) shape
^because shape is really diagnostic for object identity

6
Q

What are some of the response properties of neurons in IT in macaque munkeys?

A

~Some (majority?) respond to object shape, independent of viewpoint
~Others respond to particular shapes in particular views
^if something has special relevance, behaviorally, it would be nice to recognize it more quickly –> neurons in IT can learn this!

7
Q

What is some evidence that neurons in IT can learn?

A

~Some neurons respond to particular shapes in particular views
-the more you train a monkey on a particular view of a given shape, the more likely you are to find a neuron dedicated to that view

8
Q

What is visual priming?

A

Priming: processing something on one occasion makes you faster and more accurate to process that thing and related things on subsequent occasions
-form of learning

9
Q

How can priming be used as an index of visual representation?

A

The more two things have in common, the more they prime one another because activating a representation on one occasion makes it easier to activate again on a subsequent occasion
-therefore, the magnitude of priming is a measure of how much the mental representation of one thing has in common with the mental representation of the other

10
Q

What is the purpose of the different exemplar control condition in priming experiments?

A

Prime with Exemplar 1, and probe with Exemplar 2
~Different exemplars have:
-same name
-same (or similar) concepts
-different (albeit similar) shapes
SO, priming from Exemplar 1 to Exemplar 2 provides an estimate of non-visual priming

11
Q

How can we calculate visual priming?

A

Total priming = visual priming + non-visual priming
SO,
Visual priming = total priming - non-visual priming

12
Q

What is the purpose of the identical image condition in priming experiments?

A

Prime from an image to itself

This is a measure of total priming

13
Q

In summary, how do we estimate the magnitude of visual priming?

A
  1. Observe total priming by observing how much a stimulus primes itself in the identical image condition
  2. Estimate non-visual priming by observing how much one object primes a different object with the same name in the different exemplar condition
  3. Estimate visual priming by subtracting non-visual priming from total priming
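The subtraction logic above can be sketched directly (all reaction times here are hypothetical, for illustration only):

```python
# Estimating visual priming from reaction times (ms). Numbers are made up.
unprimed_rt = 800            # baseline RT, no prime
identical_rt = 650           # RT after priming with the identical image
different_exemplar_rt = 740  # RT after priming with a different exemplar

total_priming = unprimed_rt - identical_rt               # 150 ms (identical image condition)
nonvisual_priming = unprimed_rt - different_exemplar_rt  # 60 ms (different exemplar condition)
visual_priming = total_priming - nonvisual_priming       # 90 ms
print(visual_priming)  # 90
```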
14
Q

Compare priming for identical images, translated images, and different exemplars. Is the visual representation of shape dependent on the location of the image in the visual field? What about size? What about left/right reflection?
What does this mean about the representation we’re priming?

A

~Shape: NOPE! We see complete visual priming: priming for a different location is equal to priming for identical image
~Size: also nope, priming for different sizes is equal to priming for identical image
~Reflection: also also nope, priming for reflection is equal to priming for identical shape
SO, the representation we’re priming is far away from V1 (because invariant to things V1 is sensitive to)

15
Q

What’s the overall conclusion of the more complicated case of whether or not priming is invariant to rotation in depth?

A

Whether depth rotation matters depends on how it’s measured – the nature of the stimuli and the nature of the task
-if the stimuli have nice volumetric parts and the same parts are visible in both views, priming is largely invariant to rotation in depth
Basically, use or ignore orientation information to the extent that it’s advantageous
–> orientation is separate from shape –> we match it separately –> so, it can help overcome noise

16
Q

What’s Biederman’s Recognition-by-Components theory of object recognition?

A

~Use non-accidental properties of image edges (e.g., in V1) to make inferences about the volumetric (3D) shapes of an object's parts (geons)
~Use the geons and their spatial relations to represent object shape
~Recognize objects based on their geons and the relations among them

17
Q

What the fuck’s a geon?

A

Geons are: categories of generalized cylinders
–> within categories, treated the same
~Geons are imprecise
-vagueness makes them robust to variation in viewpoint and makes it possible to recognize objects as members of a category
(vague permits generalization as a natural consequence)

18
Q

Where do you “get” geons from?

A

You can recover 3D properties of a geon’s shape from 2D non-accidental properties of image edges
-provides a way to go from 2D information in V1 to representation of 3D object shape

19
Q

Why is it important that geons are represented categorically?

A

This means they are naturally robust to variations in viewpoint, and good for category recognition
Even spatial relations between them are imprecise and represented in a categorical way

20
Q

What’s the sequence of events according to RBC?

A
  1. find non-accidental image properties
  2. use them to characterize the geons in the images (and the relations among them)
  3. match the geons and relations to an object model stored in LTM
  4. use the object model to access the concept and name
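Steps 3–4 can be sketched as toy set-matching (the object names, geon labels, and relation tuples here are invented for illustration; this is not Biederman's actual implementation):

```python
# Toy RBC matching: an "object model" in LTM is a set of geons plus a set of
# categorical relations among them. Note "mug" and "bucket" share geons and
# differ only in a relation (handle on the side vs. on top).
MEMORY = {
    "mug":    ({"cylinder", "curved-cylinder"}, {("side-attached", "curved-cylinder", "cylinder")}),
    "bucket": ({"cylinder", "curved-cylinder"}, {("top-attached", "curved-cylinder", "cylinder")}),
}

def recognize(geons, relations):
    """Step 3: match extracted geons + relations against stored object models."""
    for name, (g, r) in MEMORY.items():
        if g == set(geons) and r == set(relations):
            return name  # step 4: the matched model gives access to the name/concept
    return None

print(recognize({"cylinder", "curved-cylinder"},
                {("side-attached", "curved-cylinder", "cylinder")}))  # mug
```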
21
Q

Give an overview of how the JIM model works.

A

Layer 1: detects image edges by location in image – retinotopically mapped
2: discovers vertices, axes, and blobs (non-accidental properties!) – still retinotopically mapped
3: discovers geon attributes
-categorical attributes go straight to layer 6 (object memory) for object recognition
-metric attributes go to layers 4&5 to compute relations
4 & 5: decompose object into components (geons) and calculate spatial relations between them (with info from 3)
6: put relations in pairs (with info from 3)
7: put paired relations into objects, and contains neurons that learn to respond to particular objects

22
Q

Why is it important that the JIM model represents all geon attributes as independent in layer 3?

A

In this layer, information has been torn apart so that the confounding of location is no longer an issue – for example, one neuron will respond to a curved axis regardless of where it is.
This means that all the attributes are independent, and is important for invariance. It also introduces the binding problem…

23
Q

How does the JIM model represent the binding of geons to their relations?

A

By synchrony of firing: the neurons representing a geon fire in synchrony with the neurons representing its relational roles, and out of synchrony with other geon–relation groups.

24
Q

How does the JIM model conquer the binding problem?

A

~You can’t know which neurons go with which in advance, so binding must be dynamic
~Solution: use non-accidental properties to synchronize bound neurons
-synchrony established in layers 1 and 2 that represent binding relations are preserved in later layers
-if synchrony = binding, asynchrony = NOT binding!

25
Q

Is there a capacity limit to the JIM system?

A

Yes! There are only a limited number of groups that can be out of synchrony with one another, so there’s necessarily a capacity limit.
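A toy illustration of that capacity limit, assuming a hypothetical cap of four distinguishable firing phases:

```python
# Binding-by-synchrony sketch: each bound group (a geon plus its attributes and
# relations) must claim its own firing phase. With only N_PHASES distinguishable
# phases, only that many groups can be kept apart at once.
N_PHASES = 4  # hypothetical limit, for illustration

def assign_phases(groups):
    """Give every group a distinct phase; fail past the capacity limit."""
    if len(groups) > N_PHASES:
        raise ValueError("capacity exceeded: groups would collide in phase")
    return {group: phase for phase, group in enumerate(groups)}

print(assign_phases(["geon1", "geon2", "geon3"]))
```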

26
Q

What are some predictions from the JIM model, and which are true?

A
  1. Structural description requires attention
    -true: you don’t generate representation of an image and its parts and relations without attention
  2. Perception of relations requires attention
    -true
  3. Object recognition requires a structural description
    -false
    therefore, predictions 4&5 are false…
  4. Object recognition requires visual attention
  5. Object recognition no faster than dynamic binding
27
Q

What did patient DF suffer from?

A

Bilateral damage to lateral occipital cortex (the VENTRAL processing stream)
~Phenomenologically blind, but her dorsal pathway is intact (governs attention and talks to motor cortex)
–> she walks around fine without bumping into things, but can't identify objects
~Demonstrates that the two functions of the visual pathway are separate!!

28
Q

Describe that experiment with the bar of light and how DF performed when asked to match its orientation.

A

~When told simply to match the orientation of the bar of light, she fails because she can’t make explicit visual judgements
~When told that there’s a mailslot on the wall, and instructed to post a letter, she succeeds because this is a MOTOR task!

29
Q

Give some evidence that neurons in V1 fire in synchrony when bound (Gray and Singer, 1988).

A

Two neurons with collinear receptive fields…

  1. take two bars, move one across receptive field from top to bottom and other from bottom to top
    • both neurons fire, but their spikes are not synchronized
  2. two bars, move both across receptive field from top to bottom
    • temporal correlation increased
  3. one long bar that goes across both fields
    • fire strongly in synchrony –> signals likelihood of being same object
30
Q

What is the difference between basic level recognition and individual recognition?

A

Basic level: recognizing an object as a member of a basic-level category
-categorical representation (parts and relations) allows generalization across viewpoint and individuals
Individual: recognizing an object as an individual instance
-metric precision allows precise identification of instances

31
Q

Describe the Cooper and Wojan (2001) study that attempted to discern how faces are processed.
Hypothesis: basic level recognition is based on a categorical structural description, while recognizing a face as an individual is based on a metrically-precise technique.

A

Task:
-Basic level: is this a face or not?
-Individual: whose face is this?
Manipulations:
-Move one eye: change categorical relation between two eyes, but only one feature (coordinate) –> less holistic damage
-Move both eyes: preserve categorical relations but disrupt more coordinates –> more holistic damage
Results:
-One eye: hard to recognize using structural descriptions
-Two eyes: hard to recognize using a template
Basically confirmed hypothesis

32
Q

What is induction? What is the problem of induction?

A

Induction: the process of making inferences based on specific observations or examples
The Problem: making the right inferences
-super ill-posed, but super important to survival

33
Q

What are some characteristics of deductive arguments?

A

Deduction: start with premises, conclude something from the premises –> inference from assumed premises
-conclusion of a valid deductive argument is guaranteed to be true if premises are true
~In a sense, a deductive conclusion produces no new knowledge - the conclusion was already present in the premises
~Abstract, no necessary connection with the real world

34
Q

What are some characteristics of inductive arguments?

A

Induction: more like a rational guess
-conclusion of a valid inductive argument is only likely to be true if the premises are true
~Induction is the only means by which genuinely new knowledge comes to be, but we cannot have complete confidence in our new knowledge

35
Q

What is the problem of information?

A

~What kinds of observations count as evidence for an inductive argument? ANYTHING!
-e.g., black raven and a not black non-raven are both evidence for “all ravens are black”
~So, without some basis for choosing what will count as evidence for what, every observation is evidence for literally an infinity of hypotheses.
-The constraints on what to choose do not come from the logic, or the world, so the mind must impose them.

36
Q

Explain the results of the Garcia, Hawkins, & Rusiniak (1972) study that showed that rats are not lil association-learning machines.

A

~Rats only learned to associate nausea with the flavored water and getting shocked with the clicky noise
-so, rats are selective in the associations they can learn

37
Q

How is word-learning underconstrained?

A

The world does not provide enough information to determine the referents of novel words

38
Q

What constraints does the mind place on word learning?

A

~Whole object constraint: assume that the novel word corresponds to a whole object, not to an object part
~Taxonomic constraint: assume that words (nouns) refer to categories of like objects rather than objects in particular relations to other objects, a direction of motion, etc.
~Mutual exclusivity: one object, one name – assume the new word refers to the object whose name you don’t know
~Lexical contrast: one concept one name
~Joint attention: e.g., what’s mom looking at?

39
Q

What is categorization? What is the real advantage of categorization?

A

Categorization: the process of appreciating that an object or event is a member of a more general category
-this IS an inductive process
Advantage: inductive inferences – once you recognize an object as an instance of a category, you can infer that everything true of the category is true of the instance
-you can make predictions, explain actions/properties, etc.

40
Q

What’s the classical view of categorization, and some issues with it?

A

~A concept is a definition of a category – specification of necessary and sufficient conditions for category membership
BUT,
-some concepts have no definitions

41
Q

What does the family resemblance model of categorization entail?

A

~Categories have a probabilistic family resemblance structure

  • probabilistic: any given feature may appear in some (even most) but not necessarily all category members
  • family resemblance: category members resemble one another the way family members resemble one another
42
Q

What is a prototype?

A

Prototype: most typical, best, most central category member

  • may or may not actually exist out in the world
  • you can learn the prototype from exemplars even if you never see it
43
Q

What are prototype effects?

A

Prototypes are cognitively privileged:
~More prototypical exemplars are categorized quicker
~More prototypical exemplars are listed first
~More prototypical exemplars share more features with other exemplars (why they’re prototypical)
~More prototypical exemplars are learned earlier in childhood
~Exposure to exemplars causes learning of the prototype

44
Q

Describe prototype theory and some strengths and problems with it.

A

Prototype IS the concept (the mental representation)
~Through exposure to exemplars, you compute their mean, and store the prototype as the mental representation of the category
~Categorize new exemplars by comparing them to the prototypes in memory
Strength: provides a natural explanation of prototype effects
BUT…
-fails to specify variance: how much deviation from prototype is allowed
-fails to specify relations among exemplar’s features
-predicts (incorrectly) that only linearly separable categories are learnable
-assumes that categorization is based on feature-based similarity
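A minimal prototype-theory sketch (the categories and feature vectors, e.g. [has-wings, flies], are made up for illustration):

```python
# Prototype theory: the concept IS the mean of the exemplars seen so far;
# new items are categorized by distance to the nearest stored prototype.
def mean(vectors):
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

exemplars = {
    "bird": [[1.0, 0.9], [0.8, 1.0], [0.9, 0.8]],  # hypothetical [has-wings, flies]
    "fish": [[0.0, 0.1], [0.1, 0.0], [0.2, 0.1]],
}
prototypes = {cat: mean(vs) for cat, vs in exemplars.items()}  # learned from exposure

def categorize(x):
    return min(prototypes, key=lambda c: dist(x, prototypes[c]))

print(categorize([0.85, 0.95]))  # bird
```

Note that only the means survive: variance, feature correlations, and individual exemplars are thrown away, which is exactly the theory's weakness.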

45
Q

Describe exemplar theory and some strengths and problems with it.

A

Store all category exemplars: exemplars ARE category representation
~Classify new exemplars by matching to most similar exemplar in memory
Strengths:
-can account for prototype effects
-captures variance, and mean, and min, and max, and…
^because store all data
-captures correlation among features
-can learn non-linearly separable categories
BUT…
-assumes that categorization is based on feature-based similarity
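A minimal exemplar-theory sketch. Nearest-neighbor matching handles an XOR-like (non-linearly separable) category structure, which a single-prototype model cannot, since both category means coincide at the center (the stimuli and labels are made up):

```python
# Exemplar theory: store every exemplar; classify by the nearest one in memory.
memory = [((0, 0), "A"), ((1, 1), "A"), ((0, 1), "B"), ((1, 0), "B")]  # XOR-like

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(x):
    return min(memory, key=lambda ex: dist(x, ex[0]))[1]  # label of nearest exemplar

print(classify((0.9, 0.9)))  # A
print(classify((0.1, 0.9)))  # B
```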

46
Q

What are some problems with feature-based models?

A

1) feature lists do not capture our knowledge of relations among properties or parts of an exemplar
2) ad hoc categories have few features in common
3) “Similarity” (e.g., of new instance to prototype or known exemplars) is inadequate to explain much of categorization
- human similarity judgments do not obey metric axioms –> concepts are not just points in metric space where a feature list is the vector denoting the point

47
Q

What’s the relationship between similarity and the distance between concepts in n dimensional space?

A

Similarity is the inverse of the distance between the two points representing concepts.
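One common formalization of "inverse" here (an assumption for illustration, not necessarily the lecture's exact formula) is sim = 1 / (1 + d):

```python
# Similarity falls off as distance in the feature space grows.
def similarity(d):
    return 1 / (1 + d)

print(similarity(0))                  # 1.0 (a point is maximally similar to itself)
print(similarity(3) < similarity(1))  # True (farther apart -> less similar)
```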

48
Q

Describe the metric axiom of minimality, and why human categorization does not follow it.

A

Minimality: the minimal distance (and thus, maximal similarity) between two points (concepts) is the distance between a point and itself, which is a) 0 and b) equal for all points
-every concept is as similar to itself as every other concept is similar to itself
This is questionable…
-hypothesis: people judge complex visual patterns (like paisley) to be more similar to themselves than they judge simple patterns (like a circle) as similar to themselves

49
Q

Describe the metric axiom of symmetry, and why human categorization does not follow it.

A

Symmetry: any point, u, is exactly as far from any other point, v, as v is from u
-every concept, u, should be exactly as similar to another concept, v, as v is to u
They aren’t.
-people rate less familiar things as more similar to familiar things than the familiar thing to the unfamiliar thing
-e.g., because people more familiar with China, there are “more ways to know about North Korea being similar to China.” in contrast, because we are less familiar with NK, “we don’t know as many ways that China can be similar to NK because we have less knowledge about it in sum”

50
Q

Describe the metric axiom of triangle inequality, and why human categorization does not follow it.

A

Triangle inequality: distance from a to c can be no greater than the distance from a to b plus the distance from b to c
-d(a,c) ≤ d(a,b) + d(b,c)
-therefore, if concepts are points in metric space, the dissimilarity of a to c can be no greater than the sum of the dissimilarity of a to b plus the dissimilarity of b to c
lol nope
-Russia and Jamaica are utterly dissimilar. Russia and Cuba are quite similar. And, Cuba and Jamaica are quite similar.
-d(Russia, Jamaica) > d(Russia, Cuba) + d(Cuba, Jamaica)
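With hypothetical similarity ratings on a 0–10 scale, converting similarity to distance makes the violation explicit:

```python
# Made-up similarity ratings matching the pattern above; distance = 10 - similarity.
sim = {("Russia", "Cuba"): 7, ("Cuba", "Jamaica"): 7, ("Russia", "Jamaica"): 1}
d = {pair: 10 - s for pair, s in sim.items()}

lhs = d[("Russia", "Jamaica")]                        # 9
rhs = d[("Russia", "Cuba")] + d[("Cuba", "Jamaica")]  # 3 + 3 = 6
print(lhs > rhs)  # True -> these judgments cannot all live in one metric space
```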

51
Q

What is theory/schema theory?

A

~Concepts are not just a vector, but an object with variables that can take values!
-aka, concepts are schemas or theories describing categories of things
-data structure that explicitly specifies relations between features and other categories
~Therefore, concepts provide an explanatory framework for understanding the properties (i.e., explaining) category members

52
Q

Describe psychological essentialism, and why it provides evidence for schema theory.

A

Psychological essentialism: people assume concepts (esp. natural kinds) have a essence that makes them the way they are
-visible features are merely a reflection of this essence
~So, features do not define a category, but point to it!!

53
Q

What is the benefit of rational inductive reasoning?

A

If you have enough information, it’s easy to calculate the likelihood of an outcome. Rational inductive reasoning does not guarantee correct inferences, but maximizes the probability of making the correct inferences.

54
Q

Why don’t people use probability theory in everyday judgement and decision-making?

A

~We don’t have a general porpoise “probability module”
^it’s too expensive to gather all the necessary data!
~Instead, we fake it: use heuristics that will give us decent statistical estimates (and thus, inductive inferences) most of the time, but without having to gather all the necessary data

55
Q

What’s expected utility? What is the goal associated with it?

A

Utility: subjective value of an outcome
-assumedly monotonic, but not linear (more logarithmic)
–>expected utility is like expected value, but you substitute utility for value!
-this is a property of an ACTION.
GOAL: maximize gains, minimize losses
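A sketch with a logarithmic utility function (the specific curve and payoffs are assumptions for illustration); note how the non-linear curve makes a sure $100 beat a fair 50/50 gamble for $200:

```python
import math

def utility(amount):
    # monotonic but not linear; roughly logarithmic, as the card says
    return math.log(1 + amount)

def expected_utility(outcomes):
    """outcomes: list of (probability, amount) pairs for one ACTION."""
    return sum(p * utility(x) for p, x in outcomes)

sure_thing = [(1.0, 100)]
gamble     = [(0.5, 200), (0.5, 0)]
print(expected_utility(sure_thing) > expected_utility(gamble))  # True
```

The expected *values* of the two actions are equal ($100); the log-shaped utility is what produces the preference for the sure thing.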

56
Q

What is the representativeness heuristic?

A

Judge something to be likely to the extent that it is representative (i.e., similar to) members of its category.
-e.g., Catface looks a lot like a tuxedo cat. So, we assume the likelihood of Catface being a tuxedo cat is pretty high.

57
Q

What is the availability heuristic?

A

This is retrieval-based! Assume likelihood to the extent that it’s easy to come up with examples
-e.g., easy to come up with examples of male CEOs, and harder to come up with examples of female CEOs –> there must be more males than females in that position

58
Q

What’s the anchoring and adjustment heuristic?

A

An heuristic for estimating magnitudes! Start with an initial estimate and adjust it as new information comes in
-the choice of anchor biases the estimates

59
Q

Give an example of when representativeness goes awry.

A

e.g., conjunction fallacies: judging likelihood of a conjunction of two events (i.e., both feminist and bank teller) as higher than the likelihood of either event in isolation
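Probability theory itself rules the fallacy out, whatever the (hypothetical) numbers:

```python
# P(A and B) can never exceed P(A) or P(B). Numbers are made up.
p_bank_teller = 0.05
p_feminist_given_teller = 0.4
p_both = p_bank_teller * p_feminist_given_teller  # 0.02

print(p_both <= p_bank_teller)  # True: the conjunction is never the likelier option
```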

60
Q

Give an example of when availability goes awry.

A

Availability is only reliable to the extent that memory retrieval is unbiased, and often it’s not
-e.g., recent events can bias it – estimate of likelihood of car or plane crash goes up immediately after the coverage of such a crash

61
Q

Give an example of when anchoring and adjustment goes awry.

A

Our estimates are biased by initial anchors, even if they are generated by an unreliable or biased source

  • e.g., present 8! two ways:
    8×7×6×…×1 yields higher estimates than 1×2×3×…×8
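Of course, both orderings name the same number; the bias lives in the human estimates, not in the arithmetic:

```python
from functools import reduce
from operator import mul

descending = reduce(mul, [8, 7, 6, 5, 4, 3, 2, 1])  # starts from a big anchor
ascending  = reduce(mul, [1, 2, 3, 4, 5, 6, 7, 8])  # starts from a small anchor
print(descending, ascending)  # 40320 40320
```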
62
Q

What is base rate neglect?

A

Base rate: probability of some event given no other information
Neglect: heuristics are so compelling that we often don’t use probability theory even when we have the necessary information
-e.g., ignore the information given by the base rate and go with our heuristics
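A worked Bayes' rule example (all numbers hypothetical) shows what heeding the base rate buys you:

```python
# Rare condition, decent test. Representativeness says "positive -> probably
# sick"; the base rate says otherwise.
base_rate = 0.01        # P(sick)
sensitivity = 0.9       # P(positive | sick)
false_positive = 0.09   # P(positive | healthy)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_sick_given_positive = sensitivity * base_rate / p_positive
print(round(p_sick_given_positive, 3))  # 0.092: still probably NOT sick
```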

63
Q

What is analogy?

A

Making inductive inferences about one thing or situation based on its relational similarity to other things or situations
-two things similar to one another in an abstract way

64
Q

What does analogy require specifically, with respect to relations?

A

Requires you to represent relations independent of their arguments – explicitly, as entities in their own right
At the same time, making the correct inference requires you to be able to bind arguments to correct relational roles

65
Q

Describe match to sample tasks, how animals perform on them, and the implications of the results.

A

~In a match to sample task…animal learns to respond to a relevant dimension in a sample stimulus.
-then, they are shown two alternatives…the animal should choose the alternative that is “the same as” (relation!!) the sample in the relevant dimension
~Many animals (even honeybees!) can perform this task
but…
~An animal can solve this task simply by memorizing the relevant feature (without understanding relation “same as”) and choosing the alternative matching that feature

66
Q

Describe relational match to sample tasks, how animals perform on them, and the implications of the results.

A

~In a relational match to sample task, the sample consists of two stimuli that are the same on a relevant dimension
-then, they are shown two alternatives…correct one with same relation of “same as”
^this time, choosing correct match requires the abstraction of the relation “same as” from its arguments
~Of all animals, only humans and symbol-trained chimps can learn to perform this task

67
Q

Why is analogy useful?

A

Depends on relational matching.

  • –> as a result, permits inferences based on the relational roles that things play, rather than just the features of the things themselves
  • –> permits generalization to ANY new system of objects that fit those same roles
    • if you’re an expert, you can generalize from just one example!
  • analogical inferences are inductive inferences that are super sophisticated – not guaranteed to be right, but powerful source of knowledge
68
Q

What are the four components of analogical thinking?

A
  1. Memory retrieval: novel target problem retrieves potentially applicable source from memory
    • this is where we fail most often – it’s hard to retrieve a useful prior example
  2. Mapping: discover structural correspondences between elements of source and target
    • heavily dependent on IQ and WM
      • must hold relevant relations in mind and map between them
  3. Analogical inference: source drives inferences about target
    • copy with substitution and generation
  4. Schema induction: induce a more general schema that retains what the source and target have in common and throws away details on which they differ
69
Q

What’s the difference between languages and communication systems?

A

Communication systems:
~Consist of a vocabulary of basic messages
-lotsa aminals hab dis
Languages are a type of communication system with something extra:
~Grammar: rules for combining basic messages into infinitely complex messages (sentences)
~Rules are recursive

70
Q

What is meant by the statement “the rules of grammar are recursive”? Why is this an important property?

A

~The rules refer to not only words, but also to complex grammatical structures
-i.e., rules can be applied infinitely!
–> permits embedding (phrases within sentences) and constructions like “I think”, “he said”, “I know”, etc.
–> we can create an infinite number of sentences!
~Without recursion, number of potential sentences is large (but not infinite) and some things are inexpressible
If no recursion, then no language!
^distinguishes human languages from animal communication systems
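A toy recursive rule, S → "I think that" S | base, makes the point: each extra application yields a new, longer grammatical sentence, with no upper bound on depth (the example sentence is invented):

```python
# One recursive grammar rule is enough to generate infinitely many sentences.
def sentence(depth, base="it is raining"):
    if depth == 0:
        return base
    return "I think that " + sentence(depth - 1, base)

print(sentence(2))  # I think that I think that it is raining
```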

71
Q

Give some components of Chomsky’s critique of Skinner’s attempt to explain language learning by behaviorist principles of association and reward schedules.

A

~Logically impossible to account for grammar learning in terms of associationist principles
~Rather, language must be innate
-innate universal grammar
-all we learn are the details of how one's own language solves the problems of grammar

72
Q

What are some arguments for the innateness of language?

A
  1. The problem of number
    -performance: what you have done (e.g., finite number of sentences you have uttered)
    -competence: what you can do (e.g., infinite number of sentences you can utter)
    ~Capable of uttering an infinity of sentences –> can’t possibly learn them all, so you must be applying rules
  2. The role of rules
    -you would have to learn rules from examples of their use, but rules are unlearnable from examples alone
    –> so, you must be born expecting some sorts of rules (UG)
73
Q

Why are grammatical rules unlearnable by examples?

A

~Rules you learn are not adequately constrained by the examples you hear (induction problem!)
~For any finite set of examples, you can’t know if they summarize the rule (there could be exceptions, the rule could change in the future, etc.)
~More formally: you cannot learn a finite state automaton (e.g., a grammar) by observing examples of its input-output pairings (e.g., spoken language)

74
Q

Why does associating which words tend to follow others not suffice to learn grammar?

A

Grammatical structure has a life all its own – it’s an entity that happens to get filled in with a set of words