L16 - Neural Networks 2: Learning in ANNs Flashcards

1
Q

Define supervised learning and unsupervised learning in context of a neural network. What’s the difference?

A

Supervised learning -> The output of the neural network is compared against known correct data. The accuracy of the model can then be determined and the model fine-tuned.

Unsupervised Learning -> Used to establish patterns in data. No comparator output data. Output of the neural network is analysed to identify trends.

  • The difference is that supervised learning is used mainly for prediction and classification, whereas unsupervised learning is used for identifying patterns in data.
2
Q

What are the components of a Perceptron?

A
  1. A set of weighted connections that feed the input data and associated weights into the perceptron.
  2. The activation function that operates on the input data.
  3. The output axon, which outputs the function result.
3
Q

What do the weights of each input indicate?

A

The importance of that input

4
Q

What type of learning is a perceptron used for?

A

Supervised

5
Q

Is a Perceptron an algorithm?

A

Yes. It’s a supervised learning algorithm mainly used for classification.

6
Q

What does the threshold function do?

A

Gives a binary output

7
Q

What does the threshold function not do?

A

Give any information regarding error.

8
Q

What are the 4 steps of learning in the Perceptron?

A
  1. Initialise input weights and the threshold function.
  2. Present the input data as well as the expected output data.
  3. Calculate the output of the neural network…
    1. Multiply each X by the corresponding weight W.
    2. Sum all of the XW products
    3. Feed sum into activation function
    4. Get output
  4. Adjust weights based on amount of error between output and expected…
    * Lots of error → Make big changes.
    * Small error → Small incremental changes.
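The four steps above can be sketched as a short Python loop. This is a minimal sketch: the OR dataset, zero initial weights, and 0.5 threshold are illustrative assumptions, not values from the lecture.

```python
def step(z, threshold=0.5):
    # Threshold (step) activation: binary output only, no error information.
    return 1 if z >= threshold else 0

weights = [0.0, 0.0]                        # Step 1: initialise weights

# Step 2: present input data and expected outputs (here, the OR function).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for epoch in range(5):
    for xs, expected in data:
        # Step 3: multiply each X by its weight W, sum, and activate.
        y = step(sum(x * w for x, w in zip(xs, weights)))
        # Step 4: adjust weights based on the error between output and expected.
        error = expected - y
        weights = [w + error * x for x, w in zip(xs, weights)]

print(weights)  # → [1.0, 1.0], which classifies OR correctly
```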
9
Q

What are the formats of the weight update function?

A
  1. If the output Y is correct -> Wi(t+1) = Wi(t)
  2. If Y == 0, but should == 1 -> Wi(t+1) = Wi(t) + Xi(t)
  3. If Y == 1, but should == 0 -> Wi(t+1) = Wi(t) - Xi(t)
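The three cases can be written directly as a small helper, applied per weight. A sketch only; the example weight and input values below are made up.

```python
def update_weight(w, x, y, target):
    if y == target:              # Case 1: output correct, weight unchanged.
        return w
    if y == 0 and target == 1:   # Case 2: output too low, add the input.
        return w + x
    return w - x                 # Case 3: output too high, subtract the input.

print(update_weight(0.5, 1, 0, 0))  # correct  → 0.5
print(update_weight(0.5, 1, 0, 1))  # too low  → 1.5
print(update_weight(0.5, 1, 1, 0))  # too high → -0.5
```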
10
Q

If the threshold function is OR, do we always floor the sum of the XW products?

A

Yes… See lecture notes for examples

11
Q

See lecture notes for examples of perceptron learning with XOR and OR activation functions…

A

12
Q

What parameter is used to control the learning rate of the weight update function?

A

η (eta), often written as n in plain text.

E.g. Wi(t+1) = Wi(t) + ηXi(t)

13
Q

Define the Widrow-Hoff learning rule…

A

Weight updates, scaled by the learning rate η, are proportional to the error made.

14
Q

What are some limitations of the single-layer perceptron? What is the solution to this?

A
  • Can only solve linearly separable problems, i.e. it cannot solve XOR classification.
  • Solution: the Multi-layer Perceptron (MLP).
15
Q

How does the multi-layer perceptron solve the issues of the single-layer perceptron?

A
  1. By adding hidden perceptron layers between input and output layer
  2. Activation function used provides information about error
  3. Error can be minimised as data is being passed through the algorithm
16
Q

What is the sigmoid function?

A

An activation function that can be used in an MLP. Unlike the threshold function, it is smooth and differentiable, so it can provide information about error.
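A minimal sketch of the sigmoid and its derivative (the derivative is what back-propagation uses, as card 19 notes):

```python
import math

def sigmoid(z):
    # Smooth, differentiable activation: output always in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # Derivative used by back-propagation: s(z) * (1 - s(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0))             # → 0.5
print(sigmoid_derivative(0))  # → 0.25
```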

17
Q

What are the 2 learning algorithms that can be utilised in MLP?

A

Back propagation

Feed forward

18
Q

How does the feed forward learning algorithm work?

A
  1. Initialise weights and thresholds to small random values.
  2. Feed in the input data.
  3. Multiply Xi by Wi for all X’s.
  4. Feed the sum of the XiWi products into the sigmoid activation function.
  5. Pass the output on to the next layer.
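These steps can be sketched layer by layer as a single forward pass. The 2-2-1 network shape and all weight values below are made-up assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(xs, layers):
    # For each layer: weighted sum of inputs -> sigmoid -> pass on as the
    # inputs to the next layer.
    for layer in layers:  # one list of per-neuron weight vectors per layer
        xs = [sigmoid(sum(x * w for x, w in zip(xs, neuron)))
              for neuron in layer]
    return xs

# Illustrative 2-2-1 network with made-up weights.
hidden = [[0.5, -0.5], [0.3, 0.8]]   # 2 hidden neurons, 2 inputs each
output = [[1.0, -1.0]]               # 1 output neuron, 2 hidden inputs

print(forward([1.0, 0.0], [hidden, output]))
```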
19
Q

How does the back propagation learning algorithm work?

A

An optimisation algorithm in which weights are adapted starting from the output layer and working backwards using the sigmoid derivative.
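A single-step sketch of the output-layer update, assuming a sigmoid unit; the inputs, weights, target, and learning rate are made-up values, not from the notes.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [1.0, 0.0]    # inputs to the output neuron (e.g. hidden activations)
ws = [0.5, -0.5]   # current weights
target = 1.0
lr = 0.1           # learning rate η

z = sum(x * w for x, w in zip(xs, ws))
y = sigmoid(z)

# Error gradient at the output: (target - y) scaled by the sigmoid
# derivative y * (1 - y).
delta = (target - y) * y * (1.0 - y)

# Adapt the weights, working backwards from the output layer.
ws = [w + lr * delta * x for x, w in zip(xs, ws)]
print(ws)
```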

20
Q

What are the 2 types of weight updating?

A

Batch -> Weights are updated after a complete iteration over the entire dataset, calculating the errors of all data points.

Online -> Weights are updated after each data point has been processed, based on the error of that data point.
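The two timings can be contrasted on a toy single-weight example; the data, update rule, and learning rate here are illustrative assumptions.

```python
data = [(1.0, 1.0), (1.0, 0.0)]  # (input, target) pairs
lr = 0.5

# Online: update the weight after each data point, using that point's error.
w = 0.0
for x, t in data:
    error = t - w * x
    w += lr * error * x
online_w = w

# Batch: accumulate the errors over the whole dataset, then update once.
w = 0.0
total = sum((t - w * x) * x for x, t in data)
w += lr * total
batch_w = w

print(online_w, batch_w)  # → 0.25 0.5 (the two schemes can diverge)
```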