Lecture 1 - Luke Flashcards

1
Q

What is the BRAIN Initiative?

A

The BRAIN Initiative seeks to deepen understanding of the inner workings of the human mind and to improve how we treat, prevent, and cure disorders of the brain

2
Q

Three types of output functions for neural networks

A

Linear
Step
Sigmoid
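
A minimal sketch of these three output functions in Python (NumPy assumed; the 0 threshold for the step function and the logistic form of the sigmoid are illustrative conventions, not taken from the lecture):

```python
import numpy as np

def linear(h):
    """Linear output: pass the summed input straight through."""
    return h

def step(h, threshold=0.0):
    """Step (threshold) output: fire 1 if the summed input exceeds the threshold, else 0."""
    return np.where(h > threshold, 1.0, 0.0)

def sigmoid(h, beta=1.0):
    """Sigmoid (logistic) output: smooth, differentiable squashing of the summed input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-beta * h))
```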

3
Q

What is the output of the bias unit always equal to?

A

-1

4
Q

What is Hebb’s rule?

A

$\Delta w_{ij} = x_i y_j$
with weight update:
$w_{ij}(t) = w_{ij}(t-1) + \eta \, \Delta w_{ij}$
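
A minimal sketch of the rule in Python (NumPy assumed; the learning rate and the example activity vectors are illustrative):

```python
import numpy as np

def hebb_update(w, x, y, eta=0.1):
    """One Hebbian step: delta_w[i, j] = x[i] * y[j], then w(t) = w(t-1) + eta * delta_w."""
    delta_w = np.outer(x, y)
    return w + eta * delta_w

# Example: repeated correlated activity keeps strengthening the same weights
# (the self-amplification problem mentioned in the next card).
w = np.zeros((3, 2))
x = np.array([1.0, 0.0, 1.0])   # presynaptic activity
y = np.array([1.0, 1.0])        # postsynaptic activity
for _ in range(5):
    w = hebb_update(w, x, y)
print(w)  # the non-zero entries grow with every presentation
```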

5
Q

What does Hebb’s rule suffer from?

A

Self-amplification: the update only ever strengthens correlated connections and nothing bounds the weights, so they grow without limit.

6
Q

What is the use of the synaptic weights in an NN?

A

They are the adjustable parameters; learning changes them to reduce the error between the output y and its desired (target) output.

7
Q

What is the delta rule?

A

$\delta = t - y$

where t is the teaching input and y is the output
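
A minimal perceptron-style sketch of the rule in Python (NumPy assumed; the AND data set, learning rate, and epoch count are illustrative, and the bias unit is fixed at -1 as in the earlier card):

```python
import numpy as np

def train_perceptron(inputs, targets, eta=0.25, epochs=20):
    """Delta-rule training: for each pattern, delta = t - y and w += eta * delta * x."""
    # Append the bias unit, whose output is always -1 (see the earlier card).
    inputs = np.hstack([inputs, -np.ones((inputs.shape[0], 1))])
    w = np.random.uniform(-0.1, 0.1, inputs.shape[1])
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = 1.0 if np.dot(w, x) > 0 else 0.0   # step output
            w += eta * (t - y) * x                  # delta rule
    return w

# Logical AND is linearly separable, so this converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = train_perceptron(X, t)
```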

8
Q

Could you create a perceptron that can separate XOR inputs?

A

No, as they are not linearly separable

9
Q

Brief description of an ANN

A

An ANN communicates with the environment through input and output units.
Units are linked by uni-directional connections, each characterised by a weight and sign that transform the signal passing along it.

10
Q

What does a multi-layer nn do to solve problems that are not linearly separable?

A

It re-maps the input space into a new space in which the patterns become linearly separable by the output units.
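
A hand-wired sketch of this re-mapping for XOR in Python (the weights and thresholds are chosen by hand for illustration, not taken from the lecture): the hidden layer maps the inputs to (OR, AND), and in that new space XOR becomes linearly separable.

```python
import numpy as np

def step(h):
    return (h > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hidden layer re-maps (x1, x2) to (x1 OR x2, x1 AND x2).
h = np.column_stack([
    step(X[:, 0] + X[:, 1] - 0.5),   # OR unit
    step(X[:, 0] + X[:, 1] - 1.5),   # AND unit
])

# In the hidden space, XOR = OR AND NOT AND, which a single linear threshold can compute.
y = step(h[:, 0] - h[:, 1] - 0.5)
print(y)  # [0. 1. 1. 0.]
```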

11
Q

What kind of neural network should you use on time series data?

A

Recurrent neural network

12
Q

Why should a multilayer nn not use a linear output function?

A

Because the composition of linear transformations is itself linear, a multilayer network with only linear output functions collapses to an equivalent single-layer network; the extra layers add no representational power.
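
A quick numerical check of this in Python (NumPy assumed; the layer sizes are arbitrary): two stacked linear layers compose into a single linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-layer weights
W2 = rng.normal(size=(2, 4))   # second-layer weights
x = rng.normal(size=3)

two_layer = W2 @ (W1 @ x)      # linear hidden layer, linear output
one_layer = (W2 @ W1) @ x      # the equivalent single-layer network
assert np.allclose(two_layer, one_layer)
```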

13
Q

What is backpropagation of error?

A

The error at the output units is propagated backwards to the hidden units through the connection weights; once we have an error for each hidden unit, the lower layer of connection weights can be changed in the same manner as the upper layer.
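
A minimal sketch of one backpropagation step for a two-layer sigmoid network in Python (NumPy assumed; the squared-error loss, absence of bias units, and learning rate are simplifying assumptions, not a transcript of the lecture's derivation):

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def backprop_step(x, t, W1, W2, eta=0.5):
    """One forward pass and one backward pass for a 2-layer sigmoid network (squared error)."""
    # Forward pass
    hidden = sigmoid(W1 @ x)
    y = sigmoid(W2 @ hidden)

    # Error at the output units
    delta_out = (y - t) * y * (1 - y)

    # Propagate the error backwards through the connection weights to the hidden units
    delta_hidden = (W2.T @ delta_out) * hidden * (1 - hidden)

    # Update both layers of weights in the same manner
    W2 -= eta * np.outer(delta_out, hidden)
    W1 -= eta * np.outer(delta_hidden, x)
    return W1, W2

# Example call with random weights for a 2-3-1 network (illustrative shapes).
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
W1, W2 = backprop_step(np.array([0.0, 1.0]), np.array([1.0]), W1, W2)
```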

14
Q

How can too many weights in the nn affect performance?

A

Overfitting

15
Q

How to overcome overfitting?

A

Use a validation set: divide the available data into a training set (used for weight updates) and a validation set (used only to monitor the error), and stop training once the validation error stops improving.
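
A minimal sketch of the idea in Python (train_epoch and validation_error are hypothetical helpers standing in for whatever training loop is used):

```python
def train_with_early_stopping(model, train_set, valid_set, max_epochs=1000, patience=10):
    """Update weights on the training set; monitor error on the validation set and
    stop once it has failed to improve for `patience` epochs (to avoid overfitting)."""
    best_error = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_epoch(model, train_set)               # hypothetical: one pass of weight updates
        error = validation_error(model, valid_set)  # hypothetical: error on held-out data
        if error < best_error:
            best_error = error
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break   # validation error no longer improving; stop training
    return model
```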

16
Q

What is a major caveat of nns?

A

They are easily fooled, with high prediction scores often given for unrecognizable images

17
Q

Some applications of NNs?

A

Character recognition
Image compression
Classification