Cognition Flashcards

1
Q

features of interactive activation models

A
  • Hierarchical representation units
    • Sensory features
    • Segments (Letters / Phonemes)
    • Words
  • Interactive activation
  • Evidence that these models can do optimal Bayesian inference.
  • Hand-wired models. No learning.
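The bottom-up/top-down flow can be sketched as a toy update rule. This is a minimal sketch only: the two-level letter/word network, the weights, and the clamping to [0, 1] are illustrative assumptions, not details from the card.

```python
# Toy interactive-activation step: letter units excite consistent word
# units (bottom-up), and word units feed activation back to their
# letters (top-down). All names and weights are illustrative.

def clamp(a):
    return max(0.0, min(1.0, a))  # keep activations in [0, 1]

def ia_step(letters, words, w_lw, w_wl, rate=0.2):
    # Bottom-up pass: each word accumulates evidence from the letters.
    new_words = [clamp(a + rate * sum(w * l for w, l in zip(row, letters)))
                 for a, row in zip(words, w_lw)]
    # Top-down pass: active words reinforce the letters consistent with them.
    new_letters = [clamp(a + rate * sum(w_wl[j][i] * new_words[j]
                                        for j in range(len(new_words))))
                   for i, a in enumerate(letters)]
    return new_letters, new_words
```

Iterating `ia_step` lets mutually consistent letters and words excite each other until the network settles on an interpretation.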
2
Q

inference problem in generative models

A

Determine state of hidden variables given input.

Given a sensory input, what causal variables generated the image?
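For discrete hidden causes, the inference problem reduces to Bayes' rule. A hedged sketch follows; the causes, observations, and probabilities are invented for illustration.

```python
# P(cause | observation) is proportional to P(observation | cause) * P(cause).
# All priors and likelihoods below are made-up illustrative numbers.

def infer(prior, likelihood, observation):
    unnorm = {c: p * likelihood[c][observation] for c, p in prior.items()}
    z = sum(unnorm.values())  # normalizing constant
    return {c: v / z for c, v in unnorm.items()}

prior = {"cat": 0.5, "dog": 0.5}
likelihood = {"cat": {"meow": 0.9, "bark": 0.1},
              "dog": {"meow": 0.05, "bark": 0.95}}
posterior = infer(prior, likelihood, "meow")  # "cat" dominates
```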

3
Q

learning problem for generative models

A

How to adjust the weights so that the hidden variables generate the observed sensory data.

Learning a generative model is learning a causal model of sensory input.
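A minimal sketch of the weight-adjustment idea, assuming a toy linear generative model (data ≈ W · hidden) and plain error-driven updates; this is a stand-in for the learning problem, not any specific published algorithm.

```python
# Toy generative learning: nudge each weight so that the hidden
# variables reconstruct the observed data more closely.

def learn_step(W, hidden, data, lr=0.1):
    # Generate (reconstruct) the data from the hidden variables.
    recon = [sum(W[i][j] * hidden[j] for j in range(len(hidden)))
             for i in range(len(data))]
    # Move each weight to shrink the reconstruction error.
    for i in range(len(data)):
        err = data[i] - recon[i]
        for j in range(len(hidden)):
            W[i][j] += lr * err * hidden[j]
    return W
```

Repeated calls shrink the gap between what the hidden variables generate and what was actually observed.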

4
Q

transfer learning (from generative models)

A
  • A generative model of sensory data should be able to transfer to other tasks.
    • Learns robust features that can be used elsewhere.
  • Recognition models don’t support this kind of transfer because they were trained for labelling and discrimination.
5
Q

claim by Zorzi, Testolin, and Stoianov (2013)

A
  • Deep generative models add learning dimension to interactive activation models, so we get
    • Hierarchical representations
    • Top down and bottom up information
    • Structured probabilistic cognition
    • Learning
  • Bridge gap between process-level PDP theory and problem-level structured Bayesian theory
6
Q

basic structure of a deep generative network

A
  • Input layer
  • Hidden layers of progressively more abstract feature detectors that compress the observed data
  • Large hidden layer on top
    • Unfolds/unravels compressed hidden representations into abstract classes and categories
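That stack can be sketched as a toy feedforward pass. The layer sizes and random weights below are illustrative stand-ins for a learned network, chosen only to echo the card's shape: input, smaller compressing layers, then a large top layer.

```python
import math
import random

random.seed(0)

# Assumed toy layer sizes: input -> compressing layers -> large top layer.
sizes = [16, 8, 4, 32]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(v, weights):
    """Propagate an input up through each layer of feature detectors."""
    acts = [v]
    for W in weights:
        v = [sigmoid(sum(w * x for w, x in zip(row, v))) for row in W]
        acts.append(v)
    return acts

# Random weights standing in for learned ones: one matrix per layer pair.
weights = [[[random.gauss(0, 0.5) for _ in range(m)] for _ in range(n)]
           for m, n in zip(sizes, sizes[1:])]

acts = forward([random.random() for _ in range(sizes[0])], weights)
```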
7
Q

Hinton’s cognitive connections on RBMs (2007)

A
  • How might the wake-sleep algorithm be implemented cortically?
  • RBMs don’t have lateral connections. How might models be augmented to capture lateral inhibition?
  • Deep hidden units are still kind of a black box, but at least with generative models, we can study what kinds of data are generated by certain hidden features.
  • Top-down/bottom-up are plausible because:
    • Some cortical regions reciprocally connected
    • Hallucinations, dreaming, top-down disambiguation
8
Q

advantages of connectionism over symbolic AI

A
  • Context sensitivity
  • Content sensitivity
  • Quasiregularity
  • Gradual learning
  • Learnability
  • Graceful degradation
  • Biological inspiration
9
Q

Context Sensitivity (PDP vs Old AI)

A
  • Outcomes constrained by multiple sources of information
  • Modularity doesn’t allow for contextual processing effects (word superiority effect)
10
Q

Content sensitivity (PDP vs Old AI)

A
  • Semantic content can support processing.
    • The birdwatcher saw the bird with binoculars.
    • The bird saw the birdwatcher with binoculars.
  • But we still rely on structure too and can parse grammatical yet meaningless sentences.
11
Q

Quasiregularity (PDP vs Old AI)

A
  • Rule systems always have exceptions.
  • But exceptions have some of the main regularities.
    • E.g., past-tense exceptions still tend to end in d/t.
  • Hard to draw a line between regular and irregular patterns.
  • Want to be able to take advantage of varying degrees of regularity.
12
Q

Gradual learning (PDP vs Old AI)

A
  • Rule-learning predicts discontinuous development. A-ha moments.
  • Cognitive development is not so sudden or abrupt.
  • Periods of relative stability give way to “unstable, probabilistic, and graded patterns of change”.
    • Use of rule might be influenced by frequency or regularity.
13
Q

Learnability (PDP vs Old AI)

A
  • How can we have innate knowledge about rules for reading when we didn’t evolve as readers?
  • We see more tendencies than universals.
14
Q

Graceful degradation (PDP vs Old AI)

A
  • Deficits in a skill from TBI are graded and probabilistic.
  • Performance may be sensitive to frequency, familiarity, or regularity.
15
Q

Biological inspiration (PDP vs Old AI)

A
  • The implementation-doesn’t-matter argument holds for computers, but brains are different.
  • I would say we need to allow for leaky abstractions
    • Implementation details (neurons, etc.) leak into higher levels of abstraction
16
Q

7 central tenets of connectionism

A

These are recurring themes we see in PDP models. Not all are required for a PDP model.

  1. “Cognitive processes arise from the real-time propagation of activation via weighted connections.”
  2. “Active representations are patterns of activation distributed over ensembles of units.”
  3. “Processing is interactive.”
    1. The network moves into an attractor state, a global interpretation of the input.
    2. Settling into an interpretation is related to behavior over time.
  4. “Knowledge is encoded in the connection weights.”
  5. “Learning and long-term memory depend on changes to connection weights.”
  6. “Learning, representation, and processing are graded, continuous, and intrinsically variable.”
  7. “Processing, learning, and representation depend on the statistical structure of the environment.”