Deep nets overview Flashcards

1
Q

What is the sigmoid function?

A

The logistic function σ(x) = 1 / (1 + e^(−x)), which squashes any real number into the range (0, 1)

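A quick numeric check of that definition (a minimal sketch in numpy; the helper name is our own):

```python
import numpy as np

def sigmoid(x):
    # Logistic function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                    # 0.5, the midpoint
print(sigmoid(np.array([-5.0, 5.0])))  # ~[0.0067, 0.9933], approaching 0 and 1
```
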
2
Q

What are layers in a deep neural net?

A

The building blocks of the net: each layer transforms its inputs and passes its outputs on to the next layer

3
Q

What is the first layer called?

A

The input layer

4
Q

What is the main rationale behind a deep net?

A

We can use the outputs of one layer as inputs for another layer (see the sketch below)

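A minimal sketch of that hand-off, with made-up layer sizes: the output of the first linear layer is fed straight into the second.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                           # 3 input features

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # first layer: 3 -> 4
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)    # second layer: 4 -> 2

h = W1 @ x + b1    # output of the first layer...
y = W2 @ h + b2    # ...used directly as input to the second
```
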
5
Q

What is the last layer called?

A

The output layer

Its outputs are what we compare to the targets

6
Q

What are the layers between the input and output layers called?

A

The hidden layers

7
Q

What are the building blocks of hidden layers called?

A

Hidden units or hidden nodes

8
Q

What is the width of a layer?

A

The number of hidden units in a hidden layer

9
Q

What are examples of hyperparameters?

A

Width, depth, learning rate

10
Q

What are examples of parameters?

A

Weights (w)
Biases (b)

11
Q

What are the differences between parameters and hyperparameters?

A

Hyperparameters are pre-set by us

Parameters are found by optimizing the model

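A sketch of the distinction (the names and sizes here are made up): hyperparameters are fixed by us before training, while parameters are initialized and then found by optimization.

```python
import numpy as np

# Hyperparameters: pre-set by us before training starts
width, depth, learning_rate = 64, 3, 0.01

# Parameters: weights and biases, initialized randomly and then
# found by optimizing the model; the chosen width sets their shapes
rng = np.random.default_rng(0)
W = rng.normal(size=(width, 10))   # assuming 10 input features
b = np.zeros(width)
```
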
12
Q

Why is non-linearity needed?

A

So we can model more complicated relationships

It also makes stacking layers meaningful: when every layer is purely linear, a stack of layers collapses into a single linear transformation (see the sketch below), so stacking adds nothing

In order to have deep nets that find complex relationships through arbitrary functions, we need non-linearities

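The collapse can be checked directly; a minimal sketch with made-up weights (biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

# Two stacked linear layers...
two_layers = W2 @ (W1 @ x)
# ...equal one linear layer with the combined weight matrix:
one_layer = (W2 @ W1) @ x
print(np.allclose(two_layers, one_layer))             # True

# With a non-linearity in between, the collapse no longer holds:
print(np.allclose(W2 @ np.tanh(W1 @ x), one_layer))   # False
```
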
13
Q

In machine learning, what are non-linearities called?

A

Activation functions or transfer functions

14
Q

What do activation functions do?

A

They transform inputs into outputs of a different kind

15
Q

What are the 4 common activation functions?

A
  1. Sigmoid (logistic function)
  2. Tanh (hyperbolic tangent)
  3. ReLU (rectified linear unit)
  4. Softmax

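A sketch of all four in numpy (the helper names are our own); the softmax check at the end previews the properties described in the next two cards:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # range (0, 1)

def tanh(x):
    return np.tanh(x)                  # range (-1, 1)

def relu(x):
    return np.maximum(0.0, x)          # range [0, inf)

def softmax(z):
    e = np.exp(z - np.max(z))          # shift for numerical stability
    return e / e.sum()                 # a valid probability distribution

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p)          # ~[0.66, 0.24, 0.10], each in (0, 1)
print(p.sum())    # 1.0
```
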
16
Q

What are the properties of softmax?

A

Range: (0, 1)
The outputs always sum up to 1

17
Q

What does the softmax transformation do?

A

It transforms a vector of arbitrarily large or small numbers into a valid probability distribution

18
Q

Where is the softmax activation function used?

A

As the activation of the output layer in a classification problem

19
Q

What is Backpropagation?

A

The process of pushing the error backwards through the net: computing the gradient of the loss with respect to each weight and bias, so the parameters can be updated

20
Q

How do we optimize the objective function?

A

The training process consists of updating the parameters through gradient descent, optimizing the objective function

i.e., minimizing the loss

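A minimal gradient-descent sketch on a one-parameter model (made-up data; the gradient is worked out by hand here, which is what backpropagation automates for whole networks):

```python
# Fit y = w * x to a single data point (x = 2, target = 8)
x, target = 2.0, 8.0
w, learning_rate = 0.0, 0.05

for step in range(100):
    prediction = w * x
    loss = (prediction - target) ** 2            # the objective to minimize
    gradient = 2 * (prediction - target) * x     # d(loss)/dw
    w -= learning_rate * gradient                # gradient-descent update

print(w)   # ~4.0, since 4.0 * 2 = 8
```
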
21
Q

What is Forward Propogation?

A

The process of pushing inputs through the net
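
A sketch of one forward pass (made-up sizes and weights), pushing the input through each layer in turn:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),   # hidden layer, width 4
          (rng.normal(size=(2, 4)), np.zeros(2))]   # output layer

a = rng.normal(size=3)         # the input
for W, b in layers:
    a = sigmoid(W @ a + b)     # each layer's output feeds the next
print(a)                       # the net's output
```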