Week/Topic 7: Connectionism Flashcards

1
Q

Connectionist networks

A

are a form of computation that is loosely inspired by the way the brain works

2
Q

Connectionist networks are (roughly) the
same as:

A

Neural Networks
Parallel Distributed Processing (PDP)

3
Q

Connectionism

A

is the view that such networks are useful for understanding how information is processed in the mind

4
Q

Perceptron Convergence Rule

A

A learning algorithm for perceptrons (single-unit networks). It changes a perceptron’s threshold and weights as a function of the difference between the unit’s actual and intended output.
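A minimal sketch of this update in Python. The learning rate, the number of epochs, and the AND-gate training set are illustrative assumptions, not part of the card:

```python
# Sketch of the perceptron convergence rule for a single threshold unit.
# The learning rate `eta`, epoch count, and AND-gate data are illustrative
# assumptions, not from the card.

def step(x):
    """Binary threshold activation: fire iff net input >= 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, eta=1, epochs=50):
    """samples: list of (inputs, target) pairs for a single unit."""
    n = len(samples[0][0])
    weights = [0] * n
    threshold = 0
    for _ in range(epochs):
        for inputs, target in samples:
            net = sum(w * x for w, x in zip(weights, inputs))
            actual = step(net - threshold)
            error = target - actual  # intended minus actual output
            # Weights move in proportion to the error; threshold moves opposite.
            for i, x in enumerate(inputs):
                weights[i] += eta * error * x
            threshold -= eta * error
    return weights, threshold

# Example: learn logical AND, which is linearly separable.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(and_data)
```

Because AND is linearly separable, the rule is guaranteed to converge on weights and a threshold that classify all four inputs correctly.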

5
Q

Hebbian Rule

A

a model of how neurons learn and form connections in the brain: the connection between two units is strengthened when they are active at the same time (“cells that fire together wire together”)
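The basic Hebbian step can be sketched as a weight change proportional to the product of pre- and postsynaptic activity; the learning rate and activity values below are illustrative assumptions:

```python
# Sketch of the basic Hebbian update: the weight between two units grows
# in proportion to the product of their activities. The learning rate and
# the example activity values are illustrative assumptions.

def hebbian_update(w, pre, post, eta=0.1):
    """Return the new weight after one Hebbian step: w + eta * pre * post."""
    return w + eta * pre * post

w = 0.0
w = hebbian_update(w, pre=1, post=1)  # co-active units: weight grows
w = hebbian_update(w, pre=1, post=0)  # one unit silent: weight unchanged
```

Note there is no error signal anywhere in the rule: learning depends only on local co-activity.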

6
Q

Difference between Hebbian Rule and Perceptron-Convergence Rule

A

The Hebbian rule is unsupervised: it strengthens the weight between two units whenever they are active together, with no error signal. The perceptron convergence rule is supervised and error-driven: starting from initial (often random) weights, it adjusts them in proportion to the difference between the unit’s actual and intended output.

7
Q

Who came up with the Perceptron-Convergence rule, and when?

A

While the perceptron algorithm itself was invented by Frank Rosenblatt, the convergence theorem and its proof are due to NYU mathematician Albert B. J. Novikoff, in 1962

8
Q

Name the 4 different activation functions

A

Linear
Threshold Linear
Sigmoid
Binary Threshold
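The four functions above can be sketched as follows; each maps a unit's total (weighted) input to its output, and the threshold of 0 is an illustrative assumption:

```python
import math

# Sketches of the four activation functions named on the card. Each maps
# a unit's net input to its output; the threshold of 0 is an assumption.

def linear(x):
    return x  # output equals net input

def threshold_linear(x):
    return max(0.0, x)  # zero below the threshold, linear above it

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # smooth squashing into (0, 1)

def binary_threshold(x, threshold=0.0):
    return 1 if x >= threshold else 0  # all-or-none output
```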

9
Q

Why can’t a single-layer network compute XOR?

A

It can only assign one weight to each input, so its output is a threshold on a weighted sum. Such a unit can only separate linearly separable patterns, and XOR is not linearly separable.
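Adding a hidden layer removes the limitation. Below is a hand-wired two-layer network for XOR (as OR-and-not-AND); the particular weights and thresholds are an illustrative choice, not from the card:

```python
# A single-layer unit computes step(w1*x1 + w2*x2 - threshold), which can
# only draw one line through input space -- and no line separates XOR's
# outputs. With one hidden layer the problem dissolves. The hand-picked
# weights below (XOR = OR AND NOT-AND) are an illustrative choice.

def step(x):
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 1)       # hidden unit 1 computes OR
    h_and = step(x1 + x2 - 2)      # hidden unit 2 computes AND
    return step(h_or - h_and - 1)  # output: OR but not AND
```

Each hidden unit is itself a simple linearly separable classifier; the output unit combines their verdicts.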

10
Q

Hidden layer

A

a layer of units that sits between the input and output layers of an artificial neural network; its units neither receive external input nor produce the network’s output

11
Q

Why is the backpropagation algorithm not very biologically plausible?

A

There is no evidence that error signals are propagated backwards through neurons in the brain, and nature rarely provides feedback as detailed as the algorithm requires

12
Q

What aspect do multilayer networks have?

A

hidden units that are neither input units nor output units
