Neural Networks part 1 Flashcards

1
Q

What are the goals of science and of quantitative models?

A

Description, prediction, explanation

2
Q

What is retrograde motion?

A

The apparent backwards movement of a planet off its normal path, looping around a point.

3
Q

The Ptolemaic geocentric model allowed for what?

A

Predictions

but it lacks a satisfying explanation

4
Q

Copernican heliocentric model

A

First to say that we actually go around the sun; interestingly, the theory was worse at predicting.

5
Q

Kepler’s laws of planetary motion

A

allowed for accurate prediction and explanation

6
Q

why do we have quantitative models?

A

Data never speaks for itself; it requires a model to be understood

verbal theorizing alone cannot substitute for quantitative analysis

there are always alternative models

model comparison rests on both quantitative evaluation and intellectual and scholarly judgment

7
Q

what is the fundamental tradeoff in models?

A

simplicity and elegance vs. complexity and accuracy

Goal: maximize explanatory power while minimizing complexity

8
Q

what makes a good model?

A

One that balances the trade-off between accuracy and simplicity well

9
Q

What is the goal for an explanation?

A

explain as much as possible as simply as possible

10
Q

types of cognitive models

A

Mathematical models (e.g. Fitts’s law)

Symbolic models
-describe what goes on in the mind as symbolic representations

Dynamical systems models
-think of the mind as a point moving through an abstract mental state space

Hybrid models
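A mathematical model like Fitts’s law can be sketched in a few lines: predicted movement time grows with the "index of difficulty" log2(2D/W). The constants a and b below are made-up illustrative values, not fitted parameters.

```python
import math

# Fitts's law sketch: movement time = a + b * log2(2 * distance / width).
# The constants a and b here are arbitrary assumptions for illustration.
def movement_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(2 * distance / width)

near_big = movement_time(distance=8, width=4)    # close, wide target (easy)
far_small = movement_time(distance=32, width=1)  # far, narrow target (hard)
print(near_big < far_small)   # harder targets take longer
```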

11
Q

computational neuroscience

A

How does a single neuron work?

Modelling a single neuron in lots of detail

12
Q

Artificial neural network:

Current state

A

Each unit has an activation which changes rapidly from moment to moment based on current input
-e.g. neurons rapidly firing action potentials

13
Q

Learned information

A

Each connection has a weight which changes slowly based on learning
-e.g. synaptic strength changing slowly due to experience

14
Q

Topology

A

Feedforward, simple recurrent (Elman), self-organizing map (Kohonen), fully recurrent

15
Q

simple recurrent

A

Adding a loop rather than only feeding forward

16
Q

Self organizing map

A

A lattice where each unit is connected to its neighbours

17
Q

Fully recurrent

A

Everything is connected to everything and information is flowing freely

18
Q

learning:

Supervised learning

A

Adjust weights based on difference between actual output and correct output

when you are wrong you get given the correct answer
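The "adjust weights by the difference between actual and correct output" idea can be sketched with a perceptron-style delta rule. The task (learning AND), learning rate, and epoch count below are illustrative assumptions:

```python
import numpy as np

# Supervised learning sketch: a single thresholded unit learns AND.
# Task, learning rate, and epochs are assumptions for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection weights
b = 0.0                  # bias
lr = 0.1                 # learning rate

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(50):
    for x, correct in data:
        x = np.asarray(x, dtype=float)
        actual = 1.0 if w @ x + b > 0 else 0.0
        error = correct - actual   # difference between correct and actual output
        w += lr * error * x        # adjust weights toward the correct answer
        b += lr * error

preds = [1.0 if w @ np.asarray(x, dtype=float) + b > 0 else 0.0 for x, _ in data]
print(preds)   # the unit now computes AND
```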

19
Q

unsupervised learning

A

Adjust weights based on correlations in input “neurons that fire together wire together”

what things go together

20
Q

reinforcement learning

A

Adjust weights based on difference between actual reward and expected reward

21
Q

how do you adjust the weights in supervised feedforward networks

A

Back propagation

starts at the output layer and works back towards the input
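A minimal numeric sketch of that idea, with a toy 2-2-1 network whose weights and input are made-up values: compute the error signal at the output layer first, then push it back through the weights with the chain rule.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-2-1 network; all weights and inputs are illustrative assumptions.
x = np.array([0.5, -0.3])
w1 = np.array([[0.1, 0.4], [-0.2, 0.3]])   # input -> hidden weights
w2 = np.array([0.7, -0.5])                 # hidden -> output weights
target = 1.0

# Forward pass
h = sigmoid(w1 @ x)
y = sigmoid(w2 @ h)
loss = 0.5 * (y - target) ** 2

# Backward pass: start at the output layer, work back towards the input.
delta_out = (y - target) * y * (1 - y)     # output-layer error signal
grad_w2 = delta_out * h                    # gradient for hidden -> output weights
delta_hid = delta_out * w2 * h * (1 - h)   # error propagated back to hidden layer
grad_w1 = np.outer(delta_hid, x)           # gradient for input -> hidden weights
```

The gradients can be checked against a finite-difference estimate of the loss, which is a common sanity test for hand-written backprop.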

22
Q

unsupervised recurrent (hopfield) networks

A

Each neuron is connected to every other neuron; the network is totally connected.

23
Q

what is the use of unsupervised recurrent (hopfield) networks

A

Gives associative memory

stores patterns

in future, if you give it a similar cue/pattern it will recover the original

works a lot like our long-term memory
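That cue-completion behaviour can be sketched in a few lines. The stored pattern, network size, and number of updates below are assumptions for illustration:

```python
import numpy as np

# Hopfield sketch: store one +/-1 pattern with Hebbian outer-product
# weights, then recover it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)            # no self-connections

cue = pattern.copy()
cue[[0, 3]] *= -1                 # flip two bits: a similar-but-wrong cue

state = cue.copy()
for _ in range(5):                # repeated synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))   # -> True: original pattern recovered
```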

24
Q

Unsupervised learning

A

Expose the network to an image and it rewires itself so that the image is represented in the network; it uses Hebb’s rule

25
Q

What is Hebb’s rule?

A

Neurons that fire together, wire together; neurons that fire out of sync, fail to link.
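The rule amounts to increasing a weight in proportion to the product of pre- and post-synaptic activity. The learning rate and the little activity trace below are assumptions:

```python
# Hebb's rule sketch: the weight between two units grows when they are
# active together. Learning rate and activity pairs are assumptions.
lr = 0.5
w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w += lr * pre * post   # only the co-active trials strengthen the connection
print(w)   # -> 1.0: two co-active trials, 0.5 each
```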

26
Q

Self organizing Maps (kohonen)

A

Networks that receive multi-dimensional input

you give it different inputs and it organizes the space of possible inputs into a map

27
Q

Output for kohonen map

A

Choose the cell whose weight vector is nearest to the input

28
Q

Updating for Kohonen map

A

Updating each cell’s weight vector to be more like the input vector

the amount of updating decreases with 1) topological distance from the chosen cell and 2) time
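One update step can be sketched as below. The lattice size, learning-rate and neighbourhood decay schedules are assumptions, not Kohonen's exact parameters:

```python
import numpy as np

# SOM update sketch: pick the cell whose weight vector is nearest the input,
# then nudge every cell toward the input by an amount that shrinks with
# lattice distance from the winner and with time. Parameters are assumptions.
rng = np.random.default_rng(1)
grid = 5                                    # 5x5 lattice
W = rng.random((grid, grid, 2))             # one 2-D weight vector per cell

def som_step(W, x, t, lr0=0.5, sigma0=2.0):
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)   # winning cell
    lr = lr0 * np.exp(-t / 20)                         # updating decays with time
    sigma = sigma0 * np.exp(-t / 20)                   # neighbourhood shrinks too
    for i in range(grid):
        for j in range(grid):
            topo = (i - bi) ** 2 + (j - bj) ** 2       # squared lattice distance
            h = np.exp(-topo / (2 * sigma ** 2))       # falloff with distance
            W[i, j] += lr * h * (x - W[i, j])          # move cell toward the input
    return W

for t in range(40):                         # expose the map to random inputs
    W = som_step(W, rng.random(2), t)
```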

29
Q

Supervised recurrent (Elman) networks

A

Looks like a feedforward network but adds a context layer

the hidden layer’s output is fed back into the context layer
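The wiring can be sketched structurally; the layer sizes and random weights below are arbitrary assumptions, and there is no training here, just the forward step with the context copy:

```python
import numpy as np

# Elman network forward step sketch: feedforward plus a context layer that
# holds a copy of the previous hidden state. Sizes/weights are assumptions.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 4
W_ih = rng.normal(size=(n_hid, n_in))    # input -> hidden
W_ch = rng.normal(size=(n_hid, n_hid))   # context -> hidden (the loop)
W_ho = rng.normal(size=(n_out, n_hid))   # hidden -> output

context = np.zeros(n_hid)
outputs = []
for x in np.eye(n_in):                   # a short one-hot input sequence
    hidden = np.tanh(W_ih @ x + W_ch @ context)
    outputs.append(W_ho @ hidden)
    context = hidden.copy()              # hidden state saved for the next step
```

Because `context` carries the previous hidden state forward, the output at each step depends on the whole sequence so far, not just the current input.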

30
Q

What does the context layer give the network?

A

Memory

normally a feedforward network responds only to its current input, but the context layer lets it keep a record of what it has seen in the recent past
-allows it to remember sequences through time

31
Q

what is the simple recurrent network good for?

A

Remembering sequences throughout time

32
Q

what is the simple recurrent network good at predicting

A

What letter will come next

what letter is likely to follow other letters

33
Q

As you get further into a word, the simple recurrent network has a better chance of what?

A

Guessing the next letter

similar to how kids learn language

34
Q

Reinforcement learning

A

Over time you learn what actions lead to reward and do those more often (maximizing reward)

35
Q

what is the RL problem

A

Learn what action to take to maximize total reward given the current state of the environment

no one is teaching you but you are learning from feedback
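The "adjust by actual reward minus expected reward" idea can be sketched with a two-armed bandit. The payoff probabilities, learning rate, and exploration rate are assumptions for illustration:

```python
import numpy as np

# Reward-prediction-error sketch on a two-armed bandit.
# Payoff probabilities, learning rate, and exploration rate are assumptions.
rng = np.random.default_rng(0)
true_reward = [0.2, 0.8]   # hidden probability of reward per action
Q = np.zeros(2)            # expected reward per action
lr, eps = 0.1, 0.1

for _ in range(2000):
    # mostly take the highest-valued action, occasionally explore
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q))
    r = float(rng.random() < true_reward[a])   # actual reward (0 or 1)
    Q[a] += lr * (r - Q[a])  # adjust by (actual reward - expected reward)

print(np.argmax(Q))
```

No one tells the agent the right answer; the feedback is only the reward itself, yet the expected values drift toward the true payoffs and the better action wins out.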

36
Q

What does "deep" mean in deep RL?

A

That there are many layers - this is helpful because with each layer the network can learn things at a slightly more abstract level