Tutorial 2 Flashcards

1
Q

What are the two optimizer steps and the loss step you need to perform in the training loop?

A

optimizer.zero_grad() - To zero out accumulated gradients.
loss.backward() - To backpropagate and accumulate gradients for all parameters.
optimizer.step() - To update all parameters in the model according to the optimizer.
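
A minimal sketch of one training step, assuming a hypothetical tiny model, optimizer, and loss function (the names model, optimizer, and loss_fn are illustrative):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                                   # hypothetical tiny model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))       # dummy batch

optimizer.zero_grad()         # zero out gradients accumulated from previous steps
loss = loss_fn(model(x), y)   # forward pass and loss computation
loss.backward()               # backpropagate and accumulate gradients
optimizer.step()              # update all parameters via the optimizer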

2
Q

What does model(x).detach() do?

A

It returns a new tensor that views the same data but is detached from the computation graph, so gradients will not flow through it.
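
A minimal sketch, assuming a hypothetical one-layer model, showing that the detached output no longer tracks gradients:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)      # hypothetical model
x = torch.randn(1, 4)
out = model(x).detach()      # same data, but cut from the computation graph
print(out.requires_grad)     # False: backward() cannot flow through out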

3
Q

What is torch.nn.CrossEntropyLoss()?

A

It is a loss function in PyTorch that calculates the cross-entropy loss. It applies softmax internally, so the inputs should be raw, unnormalized logits.
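
A short sketch showing that the inputs are raw logits (no softmax applied by the caller) and the targets are class indices:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(4, 3)            # raw, unnormalized scores for 3 classes
targets = torch.tensor([0, 2, 1, 0])  # class indices, not one-hot vectors
loss = loss_fn(logits, targets)       # softmax is applied internally
print(loss.item())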

4
Q

What does torch.max(tensor, dim) give you?

A

It returns the maximum values along the given dimension. When called with a dim argument, it returns a namedtuple (values, indices), where indices holds the position of each maximum along that dimension.
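
A short example illustrating the (values, indices) return value:

import torch

t = torch.tensor([[1.0, 5.0, 3.0],
                  [4.0, 2.0, 6.0]])
values, indices = torch.max(t, dim=1)  # maximum along each row
print(values)   # tensor([5., 6.])
print(indices)  # tensor([1, 2])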

5
Q

What does model.eval() do?

A

This puts all layers into eval mode, meaning layers such as batch norm and dropout operate in evaluation mode rather than training mode (dropout is disabled and batch norm uses its running statistics).
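
A minimal sketch, using a hypothetical model with a dropout layer, showing the mode switch:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))  # hypothetical model
x = torch.randn(1, 4)

model.eval()          # dropout is disabled, batch norm would use running stats
out = model(x)
model.train()         # switch back to training mode before training again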

6
Q

What does with torch.no_grad(): do?

A

It deactivates the autograd engine inside the block, resulting in lower memory use and faster computation, but no backpropagation through operations run inside the block.
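
A minimal inference sketch, assuming a hypothetical model:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)       # hypothetical model
x = torch.randn(8, 4)

with torch.no_grad():         # autograd is off inside this block
    out = model(x)            # no graph is built, saving memory and time
print(out.requires_grad)      # False: backward() is not possible here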

7
Q

What does tensor.size(d) do?

A

It gives you the size along the d-th dimension.
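
A short example:

import torch

t = torch.zeros(2, 3, 4)
print(t.size(0))  # 2: size along dimension 0
print(t.size(2))  # 4: size along dimension 2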

8
Q

How do you reshape a tensor?

A

tensor.reshape(new_shape)
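
A short example, including the -1 shorthand for an inferred dimension:

import torch

t = torch.arange(12)    # shape (12,)
m = t.reshape(3, 4)     # shape (3, 4)
v = t.reshape(2, -1)    # -1 infers the remaining size: shape (2, 6)
print(m.shape, v.shape)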

9
Q

How do you sum up all the elements of a tensor in a given dimension?

A

tensor.sum(d)
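
A short example:

import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
print(t.sum(0))  # sum over dim 0 (down the columns): tensor([5, 7, 9])
print(t.sum(1))  # sum over dim 1 (across the rows): tensor([ 6, 15])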
