Theory - Lesson 8 (Neural Networks) Flashcards

1
Q

What are the main steps for a global approximation technique?

A

1) Generation of training data
2) Selection of the global approximation model
3) Fitting of the model to the generated training data

Some of the most efficient global approximation models are Taylor series and neural networks.
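A minimal Python sketch of the three steps, assuming a cheap 1-D stand-in for the true function and a polynomial as the approximation model (both illustrative choices, not from the lesson):

```python
import numpy as np

# 1) Generation of training data: sample the true (expensive) function
f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for an expensive simulation
x_train = np.linspace(-1.0, 1.0, 20)
y_train = f(x_train)

# 2) Selection of the global approximation model: here a degree-5 polynomial
degree = 5

# 3) Fitting of the model to the generated training data (least squares)
coeffs = np.polyfit(x_train, y_train, degree)
surrogate = np.poly1d(coeffs)

# The surrogate can now replace f(x) at new points
print(surrogate(0.3), f(0.3))
```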

2
Q

What is supervised learning?

A

It’s the machine learning task of inferring a function that maps an input to an output based on input-output pairs.

When the output is continuous, we call it a regression problem, while if it is discrete, we call it a classification problem.
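A minimal illustration of the two cases (the data and the trivial classifier are illustrative):

```python
import numpy as np

# Input-output pairs (x_i, y_i)
x = np.array([0.0, 1.0, 2.0, 3.0])

# Regression: the target is continuous -> fit a line by least squares
y_reg = np.array([0.1, 0.9, 2.1, 2.9])
slope, intercept = np.polyfit(x, y_reg, 1)
print("regression prediction at x=1.5:", slope * 1.5 + intercept)

# Classification: the target is discrete (class labels) -> predict a label
y_cls = np.array([0, 0, 1, 1])
def predict_class(x_new):
    # trivial nearest-neighbour classifier, just to show the discrete output
    return y_cls[np.argmin(np.abs(x - x_new))]
print("classification prediction at x=1.5:", predict_class(1.5))
```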

3
Q

Design of NNs

A

1) Choice of the number and type of units
2) Determination of layers
3) Coding of training examples, in terms of inputs and outputs
4) Initialization and training of the weights on the interconnections
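A minimal Python sketch of these four steps for a tiny network (the layer sizes and the XOR task are illustrative choices, not from the lesson):

```python
import numpy as np

# 1) Choice of number/type of units and 2) determination of layers:
#    2 inputs -> 3 hidden sigmoid units -> 1 output unit
n_in, n_hidden, n_out = 2, 3, 1

# 3) Coding of training examples in terms of inputs and outputs
#    (here: XOR, coded as 0/1 inputs and a 0/1 target)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# 4) Initialization of the weights on the interconnections
#    (small random values; training would then adjust them)
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
```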

4
Q

Feedforward NNs

A

- Each unit is connected only to units of the next layer
- The processing proceeds from the input to the output
- There is no feedback (static model: it only captures the relationship between constraints, design variables (DVs), and objective functions (OFs))
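A minimal sketch of such a static forward pass, assuming one hidden sigmoid layer (sizes and weights are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Processing proceeds strictly from input to output; no feedback loops
    h = sigmoid(W1 @ x + b1)   # hidden layer
    y = W2 @ h + b2            # output layer (linear output unit)
    return y

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
print(forward(np.array([0.5, -0.2]), W1, b1, W2, b2))
```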

5
Q

Deep Learning vs Machine Learning

A

Deep learning is practically the same as machine learning, but with more hidden layers. It is advantageous for image processing: from the intensities of the colours, the classifier is able to identify the object in a photo.

6
Q

Activation function in a NN

A

Decides whether a neuron should be activated or not. It must be a non-linear function, otherwise every neuron would be performing a linear transformation of the inputs using the weights and biases.

Usually the sigmoid function is used: g(z) = 1/(1 + exp(-z)), which is continuously differentiable.
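A direct implementation of this sigmoid; its derivative g'(z) = g(z)(1 - g(z)) is the standard identity that makes it convenient for gradient-based training:

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + exp(-z)), maps any real input to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Continuously differentiable: g'(z) = g(z) * (1 - g(z))
    g = sigmoid(z)
    return g * (1.0 - g)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))        # [0.119..., 0.5, 0.880...]
print(sigmoid_prime(z))
```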

7
Q

Back propagation NNs

A

This algorithm updates the weights w_i of the network by means of successive iterations that minimize the cost function of the error, E.

The minimization of E uses the gradient of the cost function, i.e. the vector of first derivatives of E with respect to all weights w.

In other words, the weights of the connections are repeatedly adjusted (using fitting rules) to minimize the difference between the actual output vector and the desired output vector.
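A minimal sketch of this update rule for a single sigmoid unit, assuming the squared-error cost E = (1/2)(y - t)^2 and an illustrative learning rate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example and initial weights (illustrative values)
x = np.array([0.5, -1.0])   # inputs
t = 1.0                     # desired (target) output
w = np.array([0.1, 0.2])    # weights w_i
eta = 0.5                   # learning rate

for _ in range(100):        # successive iterations
    y = sigmoid(w @ x)                # actual output
    E = 0.5 * (y - t) ** 2            # cost function of the error
    grad = (y - t) * y * (1 - y) * x  # dE/dw: chain rule through the sigmoid
    w -= eta * grad                   # adjust weights against the gradient

print(E, w)
```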

8
Q

Overtraining

A

Overtraining happens when the model tries too hard to capture the noise in the training data and therefore does not generalize well. We think we are improving the model, but in reality we are making it worse, because it generalizes worse to new data.
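A small numerical illustration, assuming noisy samples of a smooth function (all choices below are illustrative): the higher-degree fit drives the training error toward zero by capturing the noise, at the cost of unseen points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
true_f = lambda t: np.sin(2 * np.pi * t)
y = true_f(x) + rng.normal(scale=0.2, size=x.size)   # noisy training data

x_new = np.linspace(0, 1, 100)                       # unseen points
for degree in (3, 9):
    model = np.poly1d(np.polyfit(x, y, degree))
    train_err = np.mean((model(x) - y) ** 2)
    gen_err = np.mean((model(x_new) - true_f(x_new)) ** 2)
    print(degree, train_err, gen_err)
# The degree-9 polynomial interpolates the 10 noisy points (near-zero
# training error) but deviates from the true function between them.
```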

9
Q

Training error

A

It’s a function of the difference between predicted outcome and actual outcome for each training data point.
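For example, with a squared-error measure (a common but not unique choice):

```python
import numpy as np

def training_error(y_pred, y_true):
    # Mean squared difference between predicted and actual outcomes
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(training_error(y_pred, y_true))  # 0.02
```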

10
Q

Generalization error

A

It’s a measure of how well a model is able to predict previously unseen data.
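In practice it is estimated on data held out from training; a minimal sketch (the 80/20 split and the linear model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# Hold out 20% of the data that the model never sees during fitting
idx = rng.permutation(x.size)
train, test = idx[:40], idx[40:]

model = np.poly1d(np.polyfit(x[train], y[train], 1))
gen_err = np.mean((model(x[test]) - y[test]) ** 2)  # estimate on unseen data
print(gen_err)
```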

11
Q

Regularization

A

Modifying the performance function, which is normally chosen as the sum of squares of the network errors on the training set. It significantly reduces the variance of the model via a tuning parameter.
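A minimal sketch of the modified performance function, assuming an L2 (sum-of-squared-weights) penalty with tuning parameter lam; the penalty form is a standard choice, not specified in the lesson:

```python
import numpy as np

def performance(y_pred, y_true, w, lam):
    # Original choice: sum of squared network errors on the training set
    sse = np.sum((y_pred - y_true) ** 2)
    # Modification: add a penalty on the weights, scaled by the
    # tuning parameter lam, which trades some bias for lower variance
    return sse + lam * np.sum(w ** 2)

w = np.array([0.5, -1.2, 0.3])
print(performance(np.array([1.0, 2.1]), np.array([1.0, 2.0]), w, lam=0.1))
```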

12
Q

Cross validation

A

It splits the training data into two sets: the first (80%) is used to train the network, while the second is used for validation. When the validation error increases for a given number of iterations, the training is stopped and the weights and biases at the minimum of the validation error are returned.

Keeping the validation error low means that the model generalizes well to unseen data.
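A minimal sketch of this early-stopping rule; the patience threshold and the toy validation-error curve are illustrative:

```python
import numpy as np

def train_with_early_stopping(steps, patience=5):
    best_err, best_step, worse = np.inf, 0, 0
    for step in range(steps):
        # ... one training iteration on the 80% training split ...
        val_err = validation_error(step)      # error on the 20% validation split
        if val_err < best_err:
            best_err, best_step, worse = val_err, step, 0  # new minimum: keep weights
        else:
            worse += 1
            if worse >= patience:             # error kept increasing: stop
                break
    return best_step, best_err               # weights/biases at the minimum

# Toy validation-error curve: decreases, then rises again (overtraining)
def validation_error(step):
    return (step - 30) ** 2 / 900 + 0.1

print(train_with_early_stopping(100))
```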
