Neural Networks and Cognitive Control (1) Flashcards
What are the goals of science?
- description: what are we observing?
- prediction: what will we observe next?
- explanation: why is that what we observe?
What is an example of a quantitative model?
- motion of planets
- apparent retrograde motion of planets (sometimes planets loop back)
- early explanation: Helios and other Gods driving chariots
- Ptolemaic geocentric model: precise predictions of when planets do loops
- Copernican heliocentric model: doesn’t place Earth at the center
- Kepler’s laws of planetary motion: elliptical orbits, extremely accurate predictions
Why is it important to have quantitative models?
- data require a model to be understood and explained
- verbal theorizing does not substitute
- always several alternative models that must be compared
- model comparison needs quantitative evaluation and intellectual judgement
- intuitive verbal theories can turn out to be incoherent
- instantiation in a quantitative model ensures the assumptions of a theory are identified and tested
What is the problem with a “perfect map”?
- a perfect map must contain every detail, but if it contains every detail it will be as complex as the original phenomenon it describes
- fully detailed models are no better than the phenomenon itself
What is the fundamental tradeoff in models? What is the goal?
- simplicity and elegance versus complexity and accuracy
- goal: maximize explanatory power while minimizing complexity
What does predicting the weather require?
- accurate model of how weather works
- accurate measurement of current state of atmosphere
Why is it difficult to predict the weather?
- it is difficult to get an accurate measurement of the atmosphere
- sensitive dependence on initial conditions (Butterfly effect)
What does predicting a weather (i.e., simulating a plausible weather pattern) require?
- accurate model of how weather works (no measurement of the current atmosphere needed)
Why is predicting a weather easier than predicting the weather?
- it needs only the model, not precise initial conditions; we have big, fast computers and might one day have good models
- the simulated weather matches actual weather in general features, but not in day-to-day details
What types of cognitive models are there?
- mathematical models
- symbolic models
- dynamical systems models
- hybrid models
What is a mathematical model (example)?
- Fitts’s law: time to point to a target (see the sketch below)
- D: distance to target
- W: width of target
- a: initiation time for limb
- b: relative speed of limb
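These variables fit the standard Fitts’s law form MT = a + b·log2(2D/W). A minimal sketch in Python; the particular values of a and b are made up for illustration:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law: predicted time (seconds) to point to a target.

    distance -- D, distance from the start point to the target
    width    -- W, width of the target
    a        -- intercept (movement initiation time for the limb)
    b        -- slope (related to the speed of the limb)
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# Farther or narrower targets take longer to reach.
print(fitts_movement_time(distance=200, width=40))  # easy target, ~0.6 s
print(fitts_movement_time(distance=200, width=5))   # hard target, ~1.05 s
```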
What model type rarely works in psychology?
- mathematical models
What is a symbolic model (example)?
- EPIC Architecture
- if/then statements
- if simple task, wait for tone
- if tone detected, then send to motor system
- cognition explained as a system with goals
What is a dynamical systems model?
- think of mind as a point moving through an abstract mental state space
- at any given moment each of the brain’s neurons is firing a little, a lot or not at all
- brain/mind is in some particular state
What is a hybrid model (example)?
- ACT-R with LEABRA
- visual input to a system was modeled using a neural network
- all levels of analysis are related to each other
How do the physical and functional structures of the neuron compare?
- dendrites: input
- cell body
- axon hillock: integrative
- axon: conductive
- synapse: output
What levels of detail are possible to simulate through computational neuroscience (examples)?
- structure of compartmental model: modeling section of dendrite to demonstrate synaptic transmission
- membrane potential distribution of a Purkinje cell: model every single dendritic branch
How does a biological neuron work?
- gets presynaptic inputs (excitatory and inhibitory) to postsynaptic cell
- trigger zone at hillock
- action potential travels down axon
What are the components of an artificial neuron?
- “integrate and fire” neuron
- gets inputs
- strength of inputs depends on the strength of connections at the synapse
- inputs are summed
- send out to other neurons
How is output activation of an artificial neuron calculated?
- input activation: y
- weights: w
- net input: sum(y*w)
- output activation is equal to net input
- input low = activation low, input high = activation high
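A minimal sketch of this weighted-sum unit, using the card’s names y (input activations) and w (weights); the linear rule (output activation = net input) is exactly what is stated above:

```python
def unit_output(y, w):
    """Output activation of a simple artificial unit.

    y -- input activations from upstream units
    w -- connection weights (one per input)
    Net input is the weighted sum; for this linear unit the output
    activation equals the net input.
    """
    net_input = sum(yi * wi for yi, wi in zip(y, w))
    return net_input

print(unit_output([0.2, 0.1, 0.0], [0.5, 1.0, 0.3]))  # low inputs -> low activation
print(unit_output([1.0, 1.0, 1.0], [0.5, 1.0, 0.3]))  # high inputs -> high activation
```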
What are the components of an artificial neural network?
- input layer: made of individual units
- hidden layer: units that receive various input from input layer
- output layer: receives input from the hidden layer
What is the “current state” of artificial neural networks?
- each unit has an activation which changes rapidly from moment to moment based on current input
- rate at which the neuron is firing
What is the “learned information” of an artificial neural network?
- each connection has a weight which changes slowly based on learning
- strength of connections between neurons changes over time based on experiences
What are different forms of topology of neural networks?
- feedforward: flow of info in one direction
- simple recurrent (Elman): hidden layer has loop that feeds back to itself
- self-organizing map (Kohonen): 3-dimensional wiring diagrams with connections between neurons on same ‘layer’
- fully recurrent: network where everything is connected to everything else
What different types of representation of neural networks are there?
- localist representation: each neuron represents something different and specific (ex. neuron for grandma)
- distributed representation: something is represented by a pattern of activity spread across many neurons (the pattern can look arbitrary; no single neuron codes it alone)
- population coding: distributed in nice organized way (like place cells)
What are the different types of learning?
- unsupervised learning
- supervised learning
- reinforcement learning
What is unsupervised learning?
- adjust weights based on correlations in input
- “neurons that fire together wire together”
- just noticing what things are similar and what things aren’t
What is supervised learning?
- adjust weights based on difference between actual output and correct output
- someone corrects you
What is reinforcement learning?
- adjust weights based on difference between actual reward and expected reward
- reward when right, and nothing when wrong
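A minimal sketch of the “actual reward minus expected reward” idea; the learning rate and the repeated fixed reward are illustrative assumptions:

```python
def rl_update(expected_reward, actual_reward, lr=0.1):
    """Nudge the reward estimate by the prediction error.

    prediction error = actual reward - expected reward
    A positive error (better than expected) raises the estimate,
    a negative error lowers it; lr is an assumed learning rate.
    """
    prediction_error = actual_reward - expected_reward
    return expected_reward + lr * prediction_error

expected = 0.0
for _ in range(20):
    expected = rl_update(expected, actual_reward=1.0)  # the action keeps paying off
print(round(expected, 3))  # the estimate creeps up toward 1.0
```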
What are the components of a perceptron?
- inputs: y
- weights: w
- net input: x = sum(y*w)
- activation: if x > 1, y = 1; if x < 1, y = 0
- this is called a step function (see the sketch below)
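A minimal sketch of this perceptron; the threshold of 1 comes from the card, and the 1/0 outputs of the step function are the usual convention (an assumption here):

```python
def perceptron(y, w, threshold=1.0):
    """Perceptron unit with a step activation function.

    y -- input activations
    w -- weights
    Fires (returns 1) if the net input exceeds the threshold,
    otherwise stays silent (returns 0).
    """
    x = sum(yi * wi for yi, wi in zip(y, w))  # net input
    return 1 if x > threshold else 0

print(perceptron([1, 1], [0.7, 0.6]))  # net input 1.3 > 1 -> fires (1)
print(perceptron([1, 0], [0.7, 0.6]))  # net input 0.7 < 1 -> silent (0)
```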
What is a feedforward network?
- outputs of one layer become inputs of the next
- all connections are from left to right
- these networks are called supervised because of how they are trained (the weights are produced from experience with examples of the correct output)
- all layers inbetween input and output layer are hidden layers
How are weights learned?
- by backpropagation: algorithm for training neural networks
- compare actual outputs to desired outputs
- work way back through network tweaking weights as you go
- next time network receives input, the output will be closer to the desired output
- the error is written as a formula, a function of the activations (the difference between actual and desired outputs), which the weight tweaks reduce (see the sketch below)
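A minimal sketch of this compare-and-tweak loop: a tiny two-layer network trained by backpropagation on XOR. The sigmoid activation, squared-error measure, learning rate, hidden-layer size, and task are illustrative assumptions, not the lecture’s example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny network: 2 inputs -> 4 hidden units -> 1 output, trained on XOR.
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)  # hidden -> output
lr = 0.5                                                   # learning rate (assumed)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)            # desired outputs

for epoch in range(10000):
    # Forward pass: outputs of one layer become inputs of the next.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare actual outputs to desired outputs, then work backwards
    # through the network, tweaking weights (gradient of squared error).
    err_out = (out - T) * out * (1 - out)      # error signal at the output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)   # error propagated back to the hidden layer

    W2 -= lr * h.T @ err_out;  b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid;  b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))  # outputs should end up close to the targets [0, 1, 1, 0]
```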
What are single layer unsupervised recurrent (Hopfield) networks?
- all neurons are in a 2-dimensional layer
- every neuron is wired up to every other neuron
- (slide: diagram of circles with bidirectional connections among all units)
- the activation function increases sharply (a steep, step-like threshold)
How does associative memory work in unsupervised recurrent networks?
- store patterns: present network with an input which is a pattern of some nodes being active and some nodes not being active
- content-addressable: a similar or partial pattern (the cue) is used to recover the originally stored pattern (e.g., a picture)
- access memory by presenting similar pattern
What are some examples of content-addressable memory?
- a noisy image produces partial pattern to come up with original image
- part of an image or memory produces partial pattern to come up with whole image/memory
What is Hebb’s rule? What learning does it apply to?
- neurons that fire together wire together
- neurons that fire out of sync fail to link
- unsupervised learning
What is the formula for weight adjustment?
- delta wij = (1/N) si sj
- wij: weight of link from unit i to unit j
- N: number of units in network
- si: activation of unit i
- sj: activation of unit j
- adjust weight based on how correlated the activity is
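A minimal sketch that applies this rule to store patterns in a small Hopfield-style network and then recalls one from a noisy cue; the ±1 coding, zeroed self-connections, and repeated sign updates are assumptions of the sketch:

```python
import numpy as np

def store(patterns):
    """Hebbian storage: delta w_ij = (1/N) * s_i * s_j, summed over patterns.

    patterns -- list of vectors with entries +1 (active) / -1 (inactive).
    """
    N = len(patterns[0])
    W = np.zeros((N, N))
    for s in patterns:
        s = np.asarray(s, dtype=float)
        W += np.outer(s, s) / N   # units that fire together wire together
    np.fill_diagonal(W, 0)        # no self-connections
    return W

def recall(W, cue, steps=10):
    """Content-addressable recall: start from a partial/noisy cue and let
    each unit repeatedly take the sign of its net input."""
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1             # break ties arbitrarily
    return s

stored = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
W = store(stored)
noisy = [1, 1, -1, -1, -1, -1]    # first pattern with one unit flipped
print(recall(W, noisy))           # settles back onto the stored pattern
```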
What did the Hopfield demo illustrate?
- network remembers in a way that is similar to human memory
- it also blurs memories together in its recall
How does a Kohonen map work?
- output: choose the cell whose weight vector is nearest to the input vector (the winner)
- updating: update each cell’s weight vector to be more like the input vector
- the amount of updating decreases with (1) topological distance from the chosen cell and (2) time
What does a Kohonen map look like?
- n inputs
- k cells
- n*k weights
- adjacency relationships define topology
- network that self organizes
- present system with a set of activations and a single output node becomes active/greatest activation (white circle)
- strengthen weights of winning node and surrounding nodes
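A minimal sketch of one Kohonen update step on a 5×5 grid of cells with 3 inputs (so n*k weights); the Gaussian neighbourhood and the exponential decay of learning rate and radius over time are common choices assumed here, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, grid = 3, (5, 5)                      # n inputs, k = 5*5 cells
W = rng.random((grid[0], grid[1], n_inputs))    # n*k weights

def som_step(W, x, t, lr0=0.5, sigma0=2.0):
    """One Kohonen update: find the winning cell, then pull every cell's
    weight vector toward the input, less so for cells far from the winner
    on the grid and later in training (lr and sigma decay with time t)."""
    # 1. Winner: the cell whose weight vector is nearest to the input vector.
    dists = np.linalg.norm(W - x, axis=2)
    winner = np.unravel_index(np.argmin(dists), dists.shape)

    # 2. Neighbourhood: updating shrinks with topological distance and time.
    lr = lr0 * np.exp(-t / 100.0)
    sigma = sigma0 * np.exp(-t / 100.0)
    rows, cols = np.indices(dists.shape)
    grid_dist2 = (rows - winner[0]) ** 2 + (cols - winner[1]) ** 2
    neighbourhood = np.exp(-grid_dist2 / (2 * sigma ** 2))

    return W + lr * neighbourhood[:, :, None] * (x - W)

for t in range(200):
    W = som_step(W, rng.random(n_inputs), t)    # train on random 3-D inputs
```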
What is a simple recurrent network?
- start with a standard feedforward network
- hidden layer sends output to a context layer and gets input from context layer
- context layer allows for network to have memory for stuff it has done in the past
What is an example of a simple recurrent network?
- letters in words
- input of 5 numbers
- 20 context and 20 hidden
- 5 output numbers
- from input, must predict next letter
- at end of the word (ex. m-a-n-_) very certain of the next letter (y)
- at start of word, network is less sure
- words-in-sentences: present sequences of words to network over and over (implicitly learns categories of nouns and verbs)
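A minimal sketch of the wiring in such a network: one forward step in which the hidden layer reads the current input plus the context layer (a copy of the previous hidden state). The layer sizes come from the card; the weights are random and untrained, so the outputs only illustrate the flow of information, not real letter predictions:

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_hid, n_out = 5, 20, 5                   # sizes from the letter-prediction example
W_in  = rng.normal(scale=0.1, size=(n_in, n_hid))
W_ctx = rng.normal(scale=0.1, size=(n_hid, n_hid))   # context -> hidden
W_out = rng.normal(scale=0.1, size=(n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, context):
    """One time step: the hidden layer combines the current input with the
    context layer (last step's hidden activations); the hidden activations
    are sent to the output and also copied into the new context."""
    hidden = sigmoid(x @ W_in + context @ W_ctx)
    output = sigmoid(hidden @ W_out)
    return output, hidden                        # new context = hidden

context = np.zeros(n_hid)
for letter_code in np.eye(n_in):                 # feed a short sequence of one-hot "letters"
    output, context = step(letter_code, context)
print(np.round(output, 2))                       # (untrained) guess about the next letter
```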
What is the reinforcement learning problem?
- learn what action to take to maximize total reward given the current state of the environment
What does a reinforcement learning diagram look like?
- agent perceives new state of world
- agent produces an action on the environment
- environment gives reward to agent
- agent learns based on reward and state of world
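A minimal sketch of this perceive-act-reward loop as tabular Q-learning in a made-up two-state environment; the environment, learning rate, discount, and exploration rate are all illustrative assumptions, not the model from the slides:

```python
import random

random.seed(0)

# Toy environment: 2 states, 2 actions; action 1 in state 0 leads to state 1,
# where action 0 pays a reward. (Entirely made up for illustration.)
def environment(state, action):
    if state == 0:
        return (1, 0.0) if action == 1 else (0, 0.0)
    return (0, 1.0) if action == 0 else (1, 0.0)

Q = [[0.0, 0.0], [0.0, 0.0]]       # value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = 0
for _ in range(5000):
    # Agent acts on the environment (mostly greedy, sometimes exploring).
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = 0 if Q[state][0] >= Q[state][1] else 1
    # Environment returns the new state of the world and a reward.
    new_state, reward = environment(state, action)
    # Agent learns from the reward and the new state (Q-learning update).
    Q[state][action] += alpha * (reward + gamma * max(Q[new_state]) - Q[state][action])
    state = new_state

print([[round(q, 2) for q in row] for row in Q])
# The agent learns to take action 1 in state 0 and action 0 in state 1.
```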
What is a value-map model?
- shows reinforcement learning
- take actions that lead to reward and avoid actions that don’t lead to reward
- the agent learns that certain actions give it more energy
What does deep reinforcement learning mean? What can we achieve with this?
- deep: neural networks with lots of layers
- human-level control through deep reinforcement learning
What can be learned with reinforcement learning?
- starts with no knowledge of game
- increasing score with training epoch (number of trials) on 80 different games
- the network learned to play better than a novice human on many games
- input: what is seen on the screen
- output: action to take with the joystick