Terms and Concepts Flashcards

1
Q

Tri-level hypothesis

A

When studying an intelligent system, we should investigate three levels:

  • computational
  • procedural/algorithmic
  • implementational
2
Q

CRUM

A

Computational-Representational Understanding of Mind

  • Explains human cognition in terms of representational structures and computational procedures that operate on those structures
  • Evaluated on:
    1. Representational Power
    2. Computational Power
    3. Psychological Plausibility
    4. Neurological Plausibility
    5. Practical Applicability
3
Q

Expert Systems

A
  • A system incorporating the knowledge of human experts
  • Rule-based, with preprogrammed facts and procedures -> no machine learning

4
Q

BVSR theory (Campbell)

A
  • Blind Variation and Selective Retention
    1. Blind (unsighted) generation of ideas
    2. Selective retention of the ideas that turn out to be useful
5
Q

Primary vs Secondary Thinking

A

Primary:

  • analogical, free associative
  • makes discovery of new combinations more likely
  • flat associative curve, defocused attention

Secondary:

  • abstract, logical, goal-oriented
  • steep associative curve, focused attention

-> Creative people can alternate more flexibly between the two

6
Q

Associative Hierarchy (Steep vs Flat)

A

Steep associative hierarchy:

  • Focused Attention
  • Lateral inhibition
  • Following one train of thought

Flat associative hierarchy:

  • Defocused attention
  • more lateral activation
  • more widespread activation of nodes/ideas
7
Q

Klondike problems of creativity

A

1) Rarity - Payoff is rare in the conceptual space
2) Isolation - Areas of payoff are often isolated from one another in the problem space
3) Oasis - One may linger in a single area of payoff because such areas are hard to leave
4) Plateau - The direction in which the greater payoff lies may not be clear

8
Q

Fault Tolerance

A

A neural network may still produce the desired output even when the input is imperfect or noisy

9
Q

Delta Rule

A

Supervised learning rule that computes the weight change needed after a learning trial to move the nodes' activity toward the desired (target) activity.
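
As an illustrative sketch (the names, learning rate, and values below are my own, not from the card), the delta rule for a single linear output node:

```python
# Minimal sketch of the delta rule for one linear output node.
import numpy as np

def delta_rule_update(weights, x, target, eta=0.1):
    """One learning trial: adjust weights toward the desired activity."""
    y = weights @ x                   # actual output activity
    error = target - y                # desired minus actual activity
    return weights + eta * error * x  # delta rule: dw = eta * error * input

# Repeated trials drive the output toward the target activity.
w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])
for _ in range(100):
    w = delta_rule_update(w, x, target=1.0)
```

After enough trials, `w @ x` is close to the target of 1.0.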

10
Q

Auto Associator

A

Special type of pattern associator that aims to reproduce its input as its output and has recurrent connections.

  • Retains which nodes were active before and can therefore predict sequences
  • Often used in models of episodic memory like the hippocampus
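
A toy sketch of the idea (a Hopfield-style auto-associator with Hebbian storage; the pattern and settling loop are illustrative assumptions, not from the card). It also shows the fault tolerance mentioned earlier: a corrupted cue still settles back to the stored pattern.

```python
# Illustrative auto-associator: recurrent weights store a pattern via
# Hebbian learning; recall reproduces it even from a corrupted input.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian storage: w_ij = x_i * x_j, with no self-connections
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt two elements of the cue (an imperfect input)
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1

# Recurrent settling: feed the output back as input until stable
state = cue.astype(float)
for _ in range(5):
    state = np.sign(W @ state)

# state now matches the stored pattern despite the noisy cue
```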
11
Q

4-Stage Model of Information Processing

A
  1. Sensory Processing - Acquisition and registration of multiple information sources
  2. Perception/Working Memory - Conscious percept and manipulation of info in WM
  3. Decision Making
  4. Response Selection
12
Q

4 Stages of Automation

A
  1. Information Automation
  2. Analysis Automation
  3. Decision Automation
  4. Action Automation

Each stage can be on a high or a low level.

13
Q

Competitive Learning

A

The output nodes in a network compete to respond to a given input. The node with the largest activity wins and receives the weight change.

  • Form of unsupervised learning
  • Based on Hebbian Learning
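
A minimal winner-take-all sketch of this (the function name, learning rate, and data are illustrative assumptions): only the most active output node updates its weights, pulling them toward the current input.

```python
# Sketch of competitive (winner-take-all) learning, unsupervised.
import numpy as np

def competitive_step(W, x, eta=0.2):
    """W: (n_outputs, n_inputs) weights; x: one input pattern."""
    activities = W @ x                    # each output node's response
    winner = int(np.argmax(activities))   # largest activity wins
    W[winner] += eta * (x - W[winner])    # only the winner's weights change
    return winner

rng = np.random.default_rng(0)
W = rng.random((2, 4))
x = np.array([1.0, 0.0, 0.0, 1.0])
for _ in range(50):
    winner = competitive_step(W, x)
```

With repeated presentations, the winning node's weight vector converges to the input pattern, so that node becomes a detector for it.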
14
Q

When is a learning algorithm called “supervised”?

A

When it has to be fed labeled input, i.e. examples paired with constraints on the desired output

15
Q

Drive-reinforcement theory was a solution to a problem of differential Hebbian learning. What was this problem?

A

Differential Hebbian learning did not take time into account

16
Q

What kind of learning do pattern associators use?

A

Hebbian
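
As an illustrative sketch of Hebbian learning in a pattern associator (the patterns and names below are my own assumptions): the weight between two nodes grows with the product of their activities, so presenting the input alone later retrieves the associated output.

```python
# Sketch of Hebbian learning in a pattern associator:
# dw_ij = eta * output_i * input_j ("fire together, wire together").
import numpy as np

def hebbian_update(W, x_in, x_out, eta=1.0):
    return W + eta * np.outer(x_out, x_in)

x_in = np.array([1.0, 0.0, 1.0])   # input pattern
x_out = np.array([0.0, 1.0])       # paired output pattern
W = np.zeros((2, 3))
W = hebbian_update(W, x_in, x_out)

# The input alone now activates the associated output pattern
retrieved = W @ x_in               # -> array([0., 2.])
```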

17
Q

How can attractors and phase transitions be related to connectionist models?

A

As a neural network settles, its changes decrease, so the stable state it reaches acts like an attractor; a sudden jump between stable states resembles a phase transition.