Artificial Neural Networks Flashcards

1
Q

What does a perceptron model?

A

A brain neuron

2
Q

How are perceptrons like neurons?

A
  • They transmit information to other perceptrons
  • They multiply the inputs given by some predetermined weight
  • They apply some function to the set of inputs at each node
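The behaviour in these bullets can be sketched as follows (the weights and step threshold are illustrative examples, not values from the cards):

```python
# A single perceptron: multiply each input by a predetermined weight,
# sum the results, and apply a function (here a step activation).

def perceptron(inputs, weights, threshold=0.5):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example with two inputs and illustrative weights
output = perceptron([1.0, 0.0], [0.6, 0.4])
```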
3
Q

Why are ANNs fault and noise tolerant?

A

Because they run in parallel: computation is spread across many units, so the failure of a single unit or a noisy input degrades performance only slightly

4
Q

What are recurrent neural networks good at?

A

Tasks involving sequential or time-series data, such as music prediction or speech recognition

5
Q

What do ANNs learn?

A

They learn to recognise patterns in the data.

6
Q

Why is the complexity of a network important?

A

Because it must be sufficiently complex to learn all the patterns in the data, but not so complex that training takes too long

7
Q

What is necessary to use an activation function?

A

The result of the function applied by the perceptron must be normalised

8
Q

Why do we add a bias to the activation function?

A

It essentially sets a threshold for the activation function: the unit only activates once the weighted input exceeds it.
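A minimal sketch of this idea for a simple step unit (the specific weights and bias values are illustrative):

```python
# The bias acts as a threshold: a step unit with bias b fires when
# w.x + b >= 0, which is the same as requiring w.x >= -b.

def step_with_bias(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s >= 0 else 0

# With bias -0.5, the unit fires only once the weighted sum reaches 0.5
```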

9
Q

What is backpropagation?

A

It is the most common learning rule for ANNs

10
Q

How does backpropagation work?

A

It uses the output error to calculate how much the weight of each input should change, and propagates this backwards through the network, layer by layer
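A sketch of one such weight update, assuming a single sigmoid unit trained on squared error (the learning rate and error form are illustrative assumptions, not taken from the cards):

```python
import math

# One gradient-descent weight update for a single sigmoid unit with
# squared error -- the core step that backpropagation repeats for
# every layer in the network.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def update_weights(inputs, weights, target, learning_rate=0.5):
    out = sigmoid(sum(x * w for x, w in zip(inputs, weights)))
    # Error signal: how far the output is from the target, scaled by
    # the sigmoid's slope at the current output.
    delta = (target - out) * out * (1.0 - out)
    # Each weight changes in proportion to the input it multiplies.
    return [w + learning_rate * delta * x for x, w in zip(inputs, weights)]
```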

11
Q

What do non-linearities allow a network to do?

A

Identify complex regions within the search space.

The more layers a network has, the more ‘lines’ (decision boundaries) it can draw on a graph

12
Q

What needs to happen to the output of each perceptron?

A

It needs to be normalised to between 0 and 1
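One common choice for this squashing (an assumption here, one of several possible functions) is the sigmoid:

```python
import math

# The sigmoid squashes any real-valued sum into the range (0, 1),
# normalising each perceptron's output before it is passed on.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))
```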

13
Q

Why do outputs of perceptrons need to be normalised?

A

To prevent values from growing uncontrollably as they pass through the network

14
Q

What are some termination conditions for backpropagation?

A
  • After a fixed number of iterations
  • Once training error falls below some threshold
  • Once validation error falls below some threshold
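These conditions can be sketched as a training loop; `train_step` and `validation_error` below are hypothetical stand-ins, not functions from the cards:

```python
# Stops training on any of the three listed conditions: a fixed
# iteration budget, training error below a threshold, or validation
# error below a threshold.

def train(train_step, validation_error, max_iterations=1000,
          train_threshold=0.01, val_threshold=0.01):
    for iteration in range(max_iterations):    # fixed number of iterations
        train_error = train_step()
        if train_error < train_threshold:      # training error below threshold
            return iteration
        if validation_error() < val_threshold: # validation error below threshold
            return iteration
    return max_iterations
```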
15
Q

Why does backpropagation work?

A
  • In its early steps, the network represents an approximately linear function
  • The weights therefore move closer to the global minimum
  • In later stages, steps move towards a local minimum
16
Q

What are some advantages of ANNs?

A
  • No programming (training instead)
  • Very fast in operation (parallel)
  • Can generalise over examples
  • Can tolerate noise
  • Degrade gracefully
17
Q

When should the use of ANNs be considered?

A
  • When the input is high-dimensional or real-valued
  • When the output is discrete or real-valued
  • When the data is noisy
18
Q

What are some disadvantages of ANNs?

A
  • Large training sets necessary
  • Many choices must be made before training, and these affect the quality of the result
  • Unpredictable on data it was not trained on
  • Can only guarantee local optimum (not the best solution, but a solution)
  • Results are unexplainable