Neural Networks and Cognitive Control (1) Flashcards

1
Q

What are the goals of science?

A
  • description: what are we observing?
  • prediction: what will we observe next?
  • explanation: why is that what we observe?
2
Q

What is an example of a quantitative model?

A
  • motion of planets
  • apparent retrograde motion of planets (sometimes planets loop back)
  • early explanation: Helios and other Gods driving chariots
  • Ptolemaic geocentric model: precise predictions of when planets do loops
  • Copernican heliocentric model: doesn’t place Earth at the center
  • Kepler’s laws of planetary motion: elliptical orbits, extremely accurate predictions
3
Q

Why is it important to have quantitative models?

A
  • data require a model to be understood and explained
  • verbal theorizing is no substitute
  • there are always several alternative models that must be compared
  • model comparison needs quantitative evaluation and intellectual judgement
  • intuitive verbal theories can turn out to be incoherent
  • instantiation in a quantitative model ensures the assumptions of a theory are identified and tested
4
Q

What is the problem with a “perfect map”?

A
  • a perfect map must contain every detail, but if it contains every detail it will be as complex as the original phenomenon you are trying to describe
  • fully detailed models are no better than the phenomenon itself
5
Q

What is the fundamental tradeoff in models? What is the goal?

A
  • simplicity and elegance versus complexity and accuracy
  • goal: maximize explanatory power while minimizing complexity
6
Q

What does predicting the weather require?

A
  • accurate model of how weather works
  • accurate measurement of the current state of the atmosphere
7
Q

Why is it difficult to predict the weather?

A
  • it is difficult to get an accurate measurement of the atmosphere
  • sensitive dependence on initial conditions (Butterfly effect)
8
Q

What does predicting a weather require?

A
  • accurate model of how weather works
9
Q

Why is predicting a weather easier than predicting the weather?

A
  • we have big, fast computers and may one day have good enough models
  • a simulated weather matches actual weather in general features, but not in day-to-day details
10
Q

What types of cognitive models are there?

A
  • mathematical models
  • symbolic models
  • dynamical systems models
  • hybrid models
11
Q

What is a mathematical model (example)?

A
  • Fitts’s Law: time to point to a target (see the sketch below)
  • D: distance to target
  • W: width of target
  • a: initiation time for limb
  • b: relative speed of limb
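Note: the card gives the parameters but not the law itself. A minimal sketch, assuming the standard form T = a + b·log2(2D/W), which is not spelled out on the card:

```python
import math

def fitts_time(D, W, a, b):
    """Predicted time to point to a target of width W at distance D.
    a: initiation time for the limb, b: relative speed of the limb."""
    return a + b * math.log2(2 * D / W)

# Farther or narrower targets take longer to reach.
print(fitts_time(D=200, W=20, a=0.1, b=0.1))  # ~0.53 s
print(fitts_time(D=400, W=10, a=0.1, b=0.1))  # ~0.73 s
```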
12
Q

What model type rarely works in psychology?

A
  • mathematical models
13
Q

What is a symbolic model (example)?

A
  • EPIC Architecture
  • if/then statements
  • if simple task, wait for tone
  • if tone detected, then send to motor system
  • cognition explained as a system with goals
14
Q

What is a dynamical systems model?

A
  • think of mind as a point moving through an abstract mental state space
  • at any given moment each of the brain’s neurons is firing a little, a lot or not at all
  • brain/mind is in some particular state
15
Q

What is a hybrid model (example)?

A
  • ACT-R with LEABRA
  • visual input to a system was modeled using a neural network
  • all levels of analysis are related to each other
16
Q

How do the physical and functional structures of the neuron compare?

A
  • dendrites: input
  • cell body
  • axon hillock: integrative
  • axon: conductive
  • synapse: output
17
Q

What levels of detail are possible to simulate through computational neuroscience (examples)?

A
  • structure of a compartmental model: modeling a section of dendrite to demonstrate synaptic transmission
  • membrane potential distribution of a Purkinje cell: model every single dendritic branch
18
Q

How does a biological neuron work?

A
  • receives presynaptic inputs (excitatory and inhibitory) onto the postsynaptic cell
  • trigger zone at hillock
  • action potential travels down axon
19
Q

What are the components of an artificial neuron?

A
  • “integrate and fire” neuron
  • gets inputs
  • strength of inputs depends on the strength of connections at the synapse
  • inputs are summed
  • send out to other neurons
20
Q

How is output activation of an artificial neuron calculated?

A
  • input activation: y
  • weights: w
  • net input: sum(y*w)
  • output activation is equal to net input
  • input low = activation low, input high = activation high (worked example below)
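A minimal numeric sketch of this recipe (the specific numbers are made up for illustration):

```python
# Input activations (y) and connection weights (w) onto one artificial neuron.
y = [0.2, 0.9, 0.0]          # activations of the sending units
w = [0.5, 0.8, -0.3]         # synaptic weights onto this unit

# Net input: sum of each input activation times its weight.
net_input = sum(yi * wi for yi, wi in zip(y, w))

# For this simple unit the output activation just equals the net input:
# low input -> low activation, high input -> high activation.
activation = net_input
print(activation)  # 0.82
```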
21
Q

What are the components of an artificial neural network?

A
  • input layer: made of individual units
  • hidden layer: units that receive various input from input layer
  • output layer: units that receive input from the hidden layer
22
Q

What is the “current state” of artificial neural networks?

A
  • each unit has an activation which changes rapidly from moment to moment based on current input
  • the activation corresponds to the rate at which the neuron is firing
23
Q

What is the “learned information” of an artificial neural network?

A
  • each connection has a weight which changes slowly based on learning
  • strength of connections between neurons changes over time based on experiences
24
Q

What are different forms of topology of neural networks?

A
  • feedforward: flow of info in one direction
  • simple recurrent (Elman): hidden layer has loop that feeds back to itself
  • self-organizing map (Kohonen): 3-dimensional wiring diagrams with connections between neurons on same ‘layer’
  • fully recurrent: network where everything is connected to everything else
25
Q

What different types of representation of neural networks are there?

A
  • localist representation: each neuron represents something different and specific (ex. a neuron for grandma)
  • distributed representation: a random pattern of neurons represents something
  • population coding: distributed in a nicely organized way (like place cells)
26
Q

What are the different types of learning?

A
  • unsupervised learning
  • supervised learning
  • reinforcement learning
27
Q

What is unsupervised learning?

A
  • adjust weights based on correlations in input
  • "neurons that fire together wire together"
  • just noticing what things are similar and what things aren't
28
Q

What is supervised learning?

A
  • adjust weights based on the difference between actual output and correct output
  • someone corrects you
29
Q

What is reinforcement learning?

A
  • adjust weights based on the difference between actual reward and expected reward
  • reward when right, and nothing when wrong
30
Q

What are the components of a perceptron?

A
  • inputs: y
  • weights: w
  • net input: sum(y*w)
  • activation: if net input > 1, y = 1; if net input < 1, y = 0
  • this is called a step function (see the sketch below)
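A minimal sketch of such a perceptron, assuming the usual 1/0 outputs for the step function:

```python
def perceptron(y, w, threshold=1.0):
    """Step-function unit: fire (1) only if the net input exceeds the threshold."""
    net = sum(yi * wi for yi, wi in zip(y, w))   # net input: sum(y*w)
    return 1 if net > threshold else 0           # step function

print(perceptron([1, 1], [0.7, 0.6]))  # net = 1.3 -> 1
print(perceptron([1, 0], [0.7, 0.6]))  # net = 0.7 -> 0
```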
31
Q

What is a feedforward network?

A
  • outputs of one layer become inputs of the next
  • all connections go in one direction (from left to right)
  • these networks are called supervised because of how they are trained up (weights produced from experience)
  • all layers in between the input and output layers are hidden layers (see the sketch below)
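A minimal sketch of a forward pass through such a network (layer sizes and weights are arbitrary illustrations, not the course's example):

```python
import numpy as np

def layer(x, W):
    """One layer: weighted sums followed by a squashing (sigmoid) nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(W @ x)))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # input layer (3 units) -> hidden layer (4 units)
W_output = rng.normal(size=(2, 4))   # hidden layer (4 units) -> output layer (2 units)

x = np.array([0.0, 0.5, 1.0])        # input layer activations
h = layer(x, W_hidden)               # hidden layer activations (inputs to the next layer)
o = layer(h, W_output)               # output layer activations
print(o)
```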
32
Q

How are weights learned?

A
  • by backpropagation: an algorithm for training neural networks
  • compare actual outputs to desired outputs
  • work your way back through the network, tweaking weights as you go
  • next time the network receives that input, the output will be closer to the desired output
  • uses a formula for error as a function of activation (see the sketch below)
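A minimal sketch of the idea for one training example in a tiny two-layer network, using the standard gradient-descent update on squared error (architecture and numbers are illustrative, not the course's demo):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))        # input -> hidden weights
W2 = rng.normal(size=(2, 4))        # hidden -> output weights
lr = 0.5                            # learning rate

x = np.array([0.0, 0.5, 1.0])       # one input pattern
target = np.array([1.0, 0.0])       # the desired (correct) output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: outputs of one layer are inputs to the next.
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)

    # Compare actual output to desired output, then work backwards through
    # the network, tweaking weights so the next output is closer to the target.
    delta_out = (out - target) * out * (1 - out)     # error at the output layer
    delta_hid = (W2.T @ delta_out) * h * (1 - h)     # error passed back to the hidden layer

    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)

print(out)   # close to [1, 0] after training
```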
33
Q

What are single-layer unsupervised recurrent (Hopfield) networks?

A
  • all neurons are in a single 2-dimensional layer
  • every neuron is wired up to every other neuron
  • (figure: a layer of circles, each connected back and forth to all the others)
  • the activation function increases sharply
34
Q

How does associative memory work in unsupervised recurrent networks?

A
  • store patterns: present the network with an input, i.e. a pattern of some nodes being active and some nodes not being active
  • content-addressable: a similar or partial pattern (the cue) is used to come up with the original pattern
  • access a memory by presenting a similar pattern (see the sketch below)
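A minimal sketch of storage and content-addressable recall in a Hopfield-style network, using Hebbian storage over ±1 units (the patterns are made up):

```python
import numpy as np

# Store two patterns with a Hebbian rule: neurons that fire together wire together.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])
N = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)                             # no self-connections

# Content-addressable recall: start from a partial/noisy cue and let the
# network settle by repeatedly updating the units.
cue = np.array([-1, -1,  1, -1,  1, -1,  1, -1])   # pattern 0 with its first unit flipped
state = cue.copy()
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, patterns[0]))          # True: the original pattern is recovered
```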
35
Q

What are some examples of content-addressable memory?

A
  • a noisy image produces a partial pattern that recovers the original image
  • part of an image or memory produces a partial pattern that recovers the whole image/memory
36
Q

What is Hebb's rule? What learning does it apply to?

A
  • neurons that fire together wire together
  • neurons that fire out of sync fail to link
  • unsupervised learning
37
Q

What is the formula for weight adjustment?

A
  • Δw_ij = (1/N) · s_i · s_j
  • w_ij: weight of the link from unit i to unit j
  • N: number of units in the network
  • s_i: activation of unit i
  • adjust the weight based on how correlated the activity is (see the sketch below)
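The same rule as a small sketch (the activation values are made up; the outer product computes every s_i · s_j at once):

```python
import numpy as np

s = np.array([1, -1, 1, 1])            # current activations s_i of the units
N = len(s)

# Hebb's rule: delta_w[i, j] = (1/N) * s_i * s_j
# -> weights grow between units that are active together (fire together, wire together).
delta_w = np.outer(s, s) / N
print(delta_w)
```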
38
Q

What did the Hopfield demo illustrate?

A
  • the network remembers in a way that is similar to human memory
  • it also blurs memories together in its recall
39
Q

How does a Kohonen map work?

A
  1. output: choose the cell whose weight vector is nearest to the input vector
  2. updating: update each cell's weight vector to be more like the input vector
  • the amount of updating decreases with (1) topological distance from the chosen cell and (2) time
40
Q

What does a Kohonen map look like?

A
  • n inputs
  • k cells
  • n*k weights
  • adjacency relationships define the topology
  • a network that self-organizes
  • present the system with a set of activations and a single output node becomes active / has the greatest activation (the winning node)
  • strengthen the weights of the winning node and surrounding nodes (see the sketch below)
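A minimal sketch of one learning step, following the two steps above: pick the winning cell, then nudge the winner and its neighbours toward the input (grid size, learning rate, and neighbourhood radius are illustrative; in practice the amount of updating also shrinks over time):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 10, 3                             # k cells arranged in a line, n inputs each
weights = rng.random((k, n))             # n*k weights in total

def som_step(x, weights, lr=0.5, radius=2.0):
    # 1. Output: choose the cell whose weight vector is nearest to the input vector.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # 2. Updating: move each cell's weights toward the input; the amount of
    #    updating falls off with topological distance from the winning cell.
    for i in range(len(weights)):
        neighbourhood = np.exp(-((i - winner) ** 2) / (2 * radius ** 2))
        weights[i] += lr * neighbourhood * (x - weights[i])
    return winner

x = np.array([0.9, 0.1, 0.4])            # one input pattern
print(som_step(x, weights))              # index of the winning cell
```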
41
Q

What is a simple recurrent network?

A
  • start with a standard feedforward network
  • the hidden layer sends output to a context layer and gets input from the context layer
  • the context layer allows the network to have memory for stuff it has done in the past (see the sketch below)
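A minimal sketch of one time step in an Elman-style network, using the sizes from the letters-in-words example on the next card (the weights are random placeholders; a trained network would have learned them):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 5, 20, 5                 # 5 inputs, 20 hidden/context units, 5 outputs
W_in  = rng.normal(size=(n_hid, n_in))        # input -> hidden
W_ctx = rng.normal(size=(n_hid, n_hid))       # context -> hidden
W_out = rng.normal(size=(n_out, n_hid))       # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
context = np.zeros(n_hid)                     # the network's memory of what it did before

for x in np.eye(n_in):                        # feed a short sequence of inputs
    hidden = sigmoid(W_in @ x + W_ctx @ context)
    output = sigmoid(W_out @ hidden)
    context = hidden.copy()                   # hidden state is copied into the context layer
print(output)
```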
42
Q

What is an example of a simple recurrent network?

A
  • letters-in-words: input of 5 numbers, 20 context and 20 hidden units, 5 output numbers
  • from the input, the network must predict the next letter
  • at the end of a word (ex. m-a-n-_) it is very certain of the next letter (y)
  • at the start of a word, the network is less sure
  • words-in-sentences: present sequences of words to the network over and over (it implicitly learns categories of nouns and verbs)
43
Q

What is the reinforcement learning problem?

A
  • learn what action to take to maximize total reward, given the current state of the environment
44
Q

What does a reinforcement learning diagram look like?

A
  • the agent perceives the new state of the world
  • the agent produces an action on the environment
  • the environment gives a reward to the agent
  • the agent learns based on the reward and the state of the world (see the sketch below)
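A minimal sketch of this loop with a simple value-learning rule (adjust toward the difference between actual and expected reward, as in card 29); the two-action environment and numbers are made up, and the state of the world is omitted for brevity:

```python
import random

# A toy environment: two actions, one of which tends to pay off more.
def environment(action):
    return 1.0 if (action == 1 and random.random() < 0.8) else 0.0

values = [0.0, 0.0]        # the agent's expected reward for each action
lr = 0.1                   # learning rate

for trial in range(500):
    # Agent produces an action (mostly the one it currently values most).
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    # Environment gives a reward back to the agent.
    reward = environment(action)
    # Agent learns from the difference between actual and expected reward.
    values[action] += lr * (reward - values[action])

print(values)              # action 1 ends up valued higher, so the agent prefers it
```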
45
Q

What is a value-map model?

A
  • shows reinforcement learning
  • take actions that lead to reward and avoid actions that don't lead to reward
  • the agent learned that a certain action gives it more energy
46
Q

What does deep reinforcement learning mean? What can we achieve with this?

A
  • deep: neural networks with lots of layers
  • human-level control through deep reinforcement learning
47
Q

What can be learned with reinforcement learning?

A
  • the network starts with no knowledge of the game
  • input: what is seen on the screen; output: the action to take with the joystick
  • score increases with training epoch (number of trials) on 80 different games
  • the network learned to play better than a novice human on many games