Pytorch_sentdex Flashcards

1
Q

What are F and nn? Is there a relation between them?

A

Yes. F (torch.nn.functional) is the functional API, while nn is the object-oriented (OOP) API. nn modules are classes that hold state (their parameters), whereas F exposes the same operations as stateless functions. PyTorch provides both styles by design.
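A minimal sketch of the two styles side by side; for a stateless operation like ReLU they produce identical results:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])

# Functional style: a plain stateless function call
out_f = F.relu(x)

# OOP style: a module object you instantiate (handy inside nn.Sequential)
relu_layer = nn.ReLU()
out_nn = relu_layer(x)

same = torch.equal(out_f, out_nn)  # both compute max(x, 0)
```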

2
Q

What are the 1st and 2nd steps in building a neural network?

A

The 1st step is to define the structure of the NN (its layers, in `__init__`); the 2nd step is to define how data is passed through the network (the `forward` method).

3
Q

what do you understand by flatten

A

Flattening means all the data in the 28*28 grid is converted into one continuous row (a 784-element vector), so it can be fed to a fully connected layer.

4
Q

Basic code to form and check the layers

A

Roam -> Pytorch, #nn_layer_initialization
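A sketch of the kind of net the #nn_layer_initialization note refers to, assuming an MNIST-style setup (the 64-unit hidden sizes here are illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Step 1: define the structure (28*28 = 784 flattened pixels in, 10 classes out)
        self.fc1 = nn.Linear(28 * 28, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        # Step 2: define how data flows through the network
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.log_softmax(self.fc3(x), dim=1)

net = Net()
print(net)  # printing the module lists every layer with its in/out sizes
```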

5
Q

Why won't this work? #nn_data_pass

A

Because the activation function is not defined.

6
Q

Name some standard activation functions. Does the last layer need an activation function? If no, why? If yes, why?

A

ReLU, LeakyReLU, Sigmoid, Tanh. Yes: for a classifier the last layer typically applies softmax/log_softmax, so the outputs form a probability distribution over the classes.

7
Q

For a probability distribution output, what is the best loss function? What is the dimension that is needed

A

log_softmax works well with multiclass classifiers (paired with nll_loss). Use dim=1: for a batched output of shape (batch, classes), dim=1 is the class dimension. Note that dim=0 would normalize across the batch instead, which is not what you want.
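A small sketch of why dim=1 is the right axis for batched logits:

```python
import torch
import torch.nn.functional as F

# Batched logits: shape (batch=2, classes=3); dim=1 is the class axis
logits = torch.tensor([[1.0, 2.0, 3.0],
                       [3.0, 1.0, 1.0]])

log_probs = F.log_softmax(logits, dim=1)

# Exponentiating and summing along dim=1 gives 1.0 per sample:
# each row is a valid probability distribution over classes
row_sums = log_probs.exp().sum(dim=1)
```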

8
Q

Remember how to define the optimizer, loss

A

#nn_init_forward_optim_loss
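A hedged sketch of what the #nn_init_forward_optim_loss note likely covers: a single linear layer stands in for a real net, and the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# A placeholder single-layer net, just so there are parameters to optimize
net = nn.Linear(784, 10)

# Optimizer: pass it the parameters it should update, plus a learning rate
optimizer = optim.Adam(net.parameters(), lr=1e-3)

# Loss: nll_loss pairs with log_softmax outputs and integer class targets
X = torch.rand(4, 784)
y = torch.tensor([0, 3, 1, 9])
output = F.log_softmax(net(X), dim=1)
loss = F.nll_loss(output, y)
```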

9
Q

How can you send random training data into a NN, and what should the dimensions of the input data look like?

A

X.view(-1,28*28).
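A quick sketch of what the -1 does: PyTorch infers that dimension from the total number of elements, so the same call works for a single image or a batch:

```python
import torch

# A fake 28x28 "image"; -1 lets PyTorch infer the batch dimension
X = torch.rand(28, 28)
flat = X.view(-1, 28 * 28)            # shape becomes (1, 784)

# A batch of 5 images flattens the same way
batch = torch.rand(5, 28, 28)
flat_batch = batch.view(-1, 28 * 28)  # shape becomes (5, 784)
```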

10
Q

How do you build a network where the first few layers are general (shared) layers and the later layers are task-specific?

A

nn_logics_forward_cool.

11
Q

If your output targets are class indices (plain numbers), can you use MSE?

A

No; use nll_loss instead. MSE can only be used when the target is a one-hot encoded vector, e.g. [0,0,0,1,0,0,0,0,0,0].
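A small sketch of the difference: nll_loss consumes plain class indices directly, while MSE needs the target expanded to one-hot form first:

```python
import torch
import torch.nn.functional as F

# Log-probabilities for a batch of 2 samples over 10 classes
log_probs = F.log_softmax(torch.rand(2, 10), dim=1)

# nll_loss takes plain class indices -- no one-hot encoding needed
targets = torch.tensor([3, 7])
loss = F.nll_loss(log_probs, targets)

# MSE would instead require one-hot targets of shape (2, 10)
one_hot = F.one_hot(targets, num_classes=10).float()
mse = F.mse_loss(log_probs.exp(), one_hot)
```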

12
Q

Why do we need to call zero_grad() in PyTorch?

A

In PyTorch, we need to set the gradients to zero before starting backpropagation because PyTorch accumulates the gradients on subsequent backward passes. This is convenient while training RNNs. So, the default action is to accumulate (i.e. sum) the gradients on every loss.backward() call.
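The accumulation behaviour can be seen directly on a single scalar parameter; this is exactly what optimizer.zero_grad() resets each batch:

```python
import torch

w = torch.tensor(2.0, requires_grad=True)

# First backward pass: d(w*w)/dw = 2w = 4
(w * w).backward()
first = w.grad.item()        # 4.0

# Without zeroing, a second backward pass ADDS to the stored gradient
(w * w).backward()
accumulated = w.grad.item()  # 8.0, not 4.0

# Zeroing resets it, which is what zero_grad() does before each batch
w.grad.zero_()
reset = w.grad.item()        # 0.0
```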

13
Q

TORCH.ARGMAX

A

Returns the indices of the maximum value of all elements in the input tensor.
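A quick sketch: with no dim argument the index is into the flattened tensor; with dim=1 you get one index per row, which is how predictions are read off a batch of class scores:

```python
import torch

t = torch.tensor([[0.1, 0.9, 0.0],
                  [0.8, 0.1, 0.1]])

flat_idx = torch.argmax(t)        # index into the flattened tensor
per_row = torch.argmax(t, dim=1)  # predicted class per sample
```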

14
Q

Code to create a one-hot vector for multiclass classification

A

np.eye
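The np.eye trick in one line: row i of the identity matrix is the one-hot vector for class i, so indexing it with a label array one-hot encodes the whole batch:

```python
import numpy as np

labels = np.array([0, 3, 9])

# Row i of the 10x10 identity matrix is the one-hot vector for class i
one_hot = np.eye(10)[labels]  # shape (3, 10), one 1.0 per row
```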

15
Q

vol = torch.unsqueeze(vol, 0) # What's the meaning of 0?

A

0 is the position (index) at which the new size-1 dimension is inserted.
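A small sketch showing how the position argument moves the inserted dimension:

```python
import torch

vol = torch.rand(28, 28)

# Insert a new size-1 dimension at position 0 (e.g. a channel axis)
with_channel = torch.unsqueeze(vol, 0)  # shape (1, 28, 28)

# Position 2 would append it at the end instead
at_end = torch.unsqueeze(vol, 2)        # shape (28, 28, 1)
```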

16
Q

How to convert an array into tensor float in one line

A

vol = torch.from_numpy(vol_arr).float()

17
Q

How to plot an image data which is in an array form

A

import matplotlib.pyplot as plt

plt.imshow()

18
Q

We should be aware of three different kinds of numerical values

A
continuous values (strictly ordered, and a difference between values has a strict meaning); these further divide into ratio scale and interval scale
ordinal values (ordered, but differences have no strict meaning, e.g. small/medium/large)
categorical values (no inherent order)
19
Q

Can you state the order of operations when training a neural network?

A
Epoch
    Forward Pass
    Loss
    Backward pass
    Update Params
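The steps above can be sketched as a minimal training loop; the tiny linear net, SGD settings, and random batch here are placeholders:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Placeholder setup, just to show the order of operations
net = nn.Linear(784, 10)
optimizer = optim.SGD(net.parameters(), lr=0.01)
X, y = torch.rand(8, 784), torch.randint(0, 10, (8,))

losses = []
for epoch in range(3):                        # Epoch
    optimizer.zero_grad()                     # (clear accumulated grads)
    output = F.log_softmax(net(X), dim=1)     # Forward pass
    loss = F.nll_loss(output, y)              # Loss
    loss.backward()                           # Backward pass
    optimizer.step()                          # Update params
    losses.append(loss.item())
```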
20
Q

*params

model(t_un, *params) what is this equivalent to

A

It unpacks params, passing its elements as individual positional arguments, equivalent to:

model(t_un, params[0], params[1])
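A minimal sketch of the unpacking, using a hypothetical linear model in place of the one from the deck:

```python
# Hypothetical model: w * t_u + b
def model(t_u, w, b):
    return w * t_u + b

params = [2.0, 1.0]  # stands in for a tensor/list of parameters

# These two calls are equivalent: *params spreads the list into w, b
out1 = model(3.0, *params)
out2 = model(3.0, params[0], params[1])
```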

21
Q

what comes under functional API

A

Anything that has no parameters of its own (nothing that requires a gradient) and therefore need not be defined in the `__init__` function can go through the functional API, e.g. activation functions and pooling.

22
Q

through locality and translation invariance

A

Locality: a convolution kernel only looks at a small neighbourhood of nearby pixels, which carry the most related information.

Translation invariance: the same kernel weights slide over the whole image, so a feature produces the same response wherever it appears; the kernel always sees its neighbours in the same way.
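Translation invariance can be demonstrated directly: the same kernel applied to a shifted input yields an identically shifted response (the specific positions and sizes here are illustrative):

```python
import torch
import torch.nn.functional as F

# One 3x3 kernel slid over the whole image: the SAME weights see every
# neighbourhood (locality), so a pattern is detected wherever it appears
kernel = torch.rand(1, 1, 3, 3)

img1 = torch.zeros(1, 1, 20, 20)
img1[0, 0, 5, 5] = 1.0    # a "feature" at (5, 5)
img2 = torch.zeros(1, 1, 20, 20)
img2[0, 0, 10, 10] = 1.0  # the same feature shifted to (10, 10)

out1 = F.conv2d(img1, kernel)
out2 = F.conv2d(img2, kernel)

# The response pattern is identical, just translated by the same shift
same_response = torch.allclose(out1[0, 0, 3:6, 3:6], out2[0, 0, 8:11, 8:11])
```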