Neural Networks part 1 Flashcards
what are the goals of science and of quantitative models?
Description, prediction, explanation
what is retrograde?
It is the apparent backward movement of a planet off its normal path, looping around a point in the sky
the Ptolemaic geocentric model allowed for what?
Predictions
but lacks satisfying explanation
Copernican heliocentric model
First to say that the Earth actually goes around the sun; interestingly, the theory was worse at predicting than the geocentric model
Kepler’s laws of planetary motion
allowed for accurate prediction and explanation
why do we have quantitative models?
data never speaks for itself but requires a model to understand
verbal theorizing alone cannot substitute for quantitative analysis
always alternative models
model comparison rests on both quantitative evaluation and intellectual and scholarly judgment
what is the fundamental tradeoff in models?
simplicity and elegance vs. complexity and accuracy
Goal: maximize explanatory power while minimizing complexity
what makes a good model?
That there is a trade-off between accuracy and simplicity
what is the goal for an explanation?
explain as much as possible as simply as possible
types of cognitive models
Mathematical models (Fitts’s law)
Symbolic models
-describes what goes on in the mind as a symbolic representation
dynamical systems models
-think of the mind as a point moving through an abstract mental state space
Hybrid models
computational neuroscience
How does a single neuron work?
modelling a single neuron in lots of detail
Artificial neural network:
Current state
Each unit has an activation which changes rapidly from moment to moment based on current input
-eg neurons rapidly firing action potentials
Learned information
Each connection has a weight which changes slowly based on learning
-eg synaptic strength changing slowly due to experience
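The two timescales above can be sketched for a single unit: the activation is recomputed from the current input on every step, while the weights are the slowly learned part. A minimal sketch, assuming a sigmoid activation function (one common choice, not the only one):

```python
import math

def unit_activation(inputs, weights, bias=0.0):
    # fast part: activation from the current input (changes moment to moment)
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # squash the net input with a sigmoid (an illustrative choice)
    return 1.0 / (1.0 + math.exp(-net))

# With zero weights the net input is 0, so the sigmoid gives 0.5.
print(unit_activation([1.0, 0.5], [0.0, 0.0]))  # 0.5
```

Learning would then consist of slowly changing `weights`, not the activation rule itself.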
Topology
Feed-forward, simple recurrent (Elman), self-organizing map (Kohonen), fully recurrent
simple recurrent
Adding a loop, not just feeding forward
Self organizing map
A lattice where each unit is connected to its neighbours
Fully recurrent
Everything is connected to everything and information is flowing freely
learning:
Supervised learning
Adjust weights based on difference between actual output and correct output
when you are wrong you get given the correct answer
unsupervised learning
Adjust weights based on correlations in input “neurons that fire together wire together”
what things go together
reinforcement learning
Adjust weights based on difference between actual reward and expected reward
how do you adjust the weights in supervised feedforward networks
Back propagation
starts at the output layer and works backward through the network
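The output-layer-first idea can be sketched on a tiny 2-3-1 feed-forward network; XOR is used here purely as an illustrative task, and the layer sizes and learning rate are assumptions. The error signal is computed at the output and then propagated back to the hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy supervised task (XOR) for a 2-3-1 feed-forward network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    losses.append(float(((y - t) ** 2).mean()))
    # backward pass: start at the output layer and work back
    d_y = (y - t) * y * (1 - y)         # output-layer error signal
    d_h = (d_y @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    W2 -= lr * h.T @ d_y; b2 -= lr * d_y.sum(axis=0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the error shrinks as training proceeds
```

Each weight is adjusted in proportion to how much it contributed to the difference between actual and correct output, which is exactly the supervised-learning rule above.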
unsupervised recurrent (hopfield) networks
each neuron is connected to every other neuron; the network is totally connected
what is the use of unsupervised recurrent (hopfield) networks
gives associative memory
stores patterns
if you later give it a similar cue/pattern it will recover the original
works a lot like our long-term memory
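The store-then-recover behaviour can be sketched with a tiny Hopfield network: patterns of +1/-1 units are stored via a Hebbian rule (sum of outer products, self-connections zeroed), and a noisy cue settles back onto the stored pattern. The patterns and cue below are invented for illustration:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian storage: sum of outer products of the +1/-1 patterns,
    # with the diagonal zeroed so no neuron connects to itself.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, steps=10):
    # Repeatedly update every unit to the sign of its input until it settles.
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = train_hopfield(stored)

noisy = np.array([1, -1, 1, -1, 1, 1])  # stored[0] with one flipped bit
print(recall(W, noisy))  # [ 1 -1  1 -1  1 -1] -- the original pattern
```

Giving a partial or corrupted cue and getting the full stored pattern back is the associative-memory behaviour described above.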
Unsupervised learning
expose the network to an image and it rewires itself so that the image is represented in the network, using Hebb’s rule
What is Hebb’s rule?
Neurons that fire together, wire together, neurons that fire out of sync, fail to link.
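Both halves of the rule fit in one weight update: the weight grows when pre- and postsynaptic activity have the same sign and shrinks when they have opposite signs. A minimal sketch (the learning rate is an illustrative assumption):

```python
def hebb_update(w, pre, post, lr=0.1):
    # Hebb's rule: change the weight in proportion to pre * post.
    # Same sign (fire together) -> w grows; opposite sign -> w shrinks.
    return w + lr * pre * post

w = 0.0
w = hebb_update(w, 1, 1)    # fire together  -> weight increases to 0.1
w = hebb_update(w, 1, -1)   # out of sync    -> weight decreases back to 0.0
print(w)  # 0.0
```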
Self organizing Maps (kohonen)
networks that receive multi-dimensional input
you give it different inputs and it organizes the space of possible inputs into a map
Output for kohonen map
Choose the cell whose weight vector is nearest to the input
Updating for Kohonen map
Updating each cell’s weight vector to be more like the input vector
the amount of updating decreases with 1) topological distance from the chosen cell and 2) time
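The choose-then-update cycle can be sketched on a 1-D lattice of cells. The 1/(1+t) decay schedules and the Gaussian neighbourhood are illustrative assumptions; real SOMs use various schedules:

```python
import numpy as np

def som_step(weights, x, t, lr0=0.5, sigma0=1.5):
    # One Kohonen update. weights: (n_cells, dim); x: input vector; t: time.
    lr = lr0 / (1 + t)        # learning rate decreases with time
    sigma = sigma0 / (1 + t)  # neighbourhood width decreases with time
    # 1) choose the cell whose weight vector is nearest to the input
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # 2) move every cell toward x, less so with lattice distance from the winner
    for i in range(len(weights)):
        d = abs(i - bmu)  # topological distance on the 1-D lattice
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))
        weights[i] += lr * h * (x - weights[i])
    return bmu

rng = np.random.default_rng(1)
weights = rng.random((5, 2))
x = np.array([0.9, 0.1])
w0 = weights.copy()
bmu = som_step(weights, x, t=0)
```

After the step the winning cell (and, more weakly, its neighbours) sits closer to the input, which is how the map comes to mirror the space of inputs.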
Supervised recurrent (Elman) networks
looks like a feed-forward network but adds a context layer
the hidden layer’s activation is copied back to the context layer, which feeds into the hidden layer on the next time step
What does it give the network?
a form of memory
a normal feed-forward network responds only to the current input, but the context layer lets the network keep a record of what it has seen in the recent past, allowing it to remember sequences through time
what is the simple recurrent network good for?
Remembering sequences throughout time
what is the simple recurrent network good at predicting
What letter will come next
what letter is likely to follow other letters
as you get further into a word, the simple recurrent network has a better chance of what?
Guessing the next letter
similar to how kids learn language
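The copy-back mechanism can be sketched as a single forward step: the hidden layer receives the current input plus a copy of its own previous activation. The layer sizes and class name below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ElmanStep:
    # Forward pass of a simple recurrent (Elman) network: the hidden layer
    # sees the current input AND the context (a copy of its last activation).
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.5, (n_hidden, n_out))
        self.context = np.zeros(n_hidden)  # no history yet

    def step(self, x):
        h = sigmoid(x @ self.W_in + self.context @ self.W_ctx)
        self.context = h.copy()            # copy hidden -> context
        return sigmoid(h @ self.W_out)

net = ElmanStep(3, 4, 3)
# The same input gives different outputs, because the context has changed:
a = net.step(np.array([1.0, 0.0, 0.0]))
b = net.step(np.array([1.0, 0.0, 0.0]))
print(np.allclose(a, b))  # False
```

That sensitivity to what came before is what lets the network predict the next letter better the further it gets into a word.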
Reinforcement learning
you learn over time which actions lead to reward and do them more often (maximizing reward)
what is the RL problem
Learn what action to take to maximize total reward given the current state of the environment
no one is teaching you but you are learning from feedback
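The adjust-by-reward-difference idea can be sketched with tabular Q-learning. The little chain environment (states 0..4, action 1 moves right, reaching state 4 pays reward 1) and all the parameters are invented for illustration:

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               lr=0.5, gamma=0.9, eps=0.1, seed=0):
    # Q[s][a] is the expected total reward for taking action a in state s.
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # explore occasionally, otherwise act greedily
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda a: Q[s][a]))
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # adjust toward (actual reward + discounted expected future reward):
            # the difference in parentheses is the reward-prediction error
            Q[s][a] += lr * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After learning, moving right should look better than moving left in state 0.
print(Q[0][1] > Q[0][0])  # True
```

No one ever tells the learner the correct action; the values are shaped entirely by the reward feedback, which is the point of the card above.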
what does “deep” mean in deep RL?
That there are many layers - this is helpful because with each layer the network can learn things at a slightly more abstract level