Tutorial 1 Flashcards
torch.tensor()
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type
How many CPU and GPU torch.Tensor types exist?
8 CPU and 8 GPU
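A quick way to see which concrete CPU type a tensor has (a minimal sketch using the standard dtype/type APIs):

```python
import torch

# The default floating-point tensor on CPU is torch.FloatTensor (dtype torch.float32).
t = torch.tensor([1.0, 2.0])
print(t.dtype)   # torch.float32
print(t.type())  # 'torch.FloatTensor'

# A list of Python ints produces a LongTensor (dtype torch.int64).
i = torch.tensor([1, 2])
print(i.type())  # 'torch.LongTensor'
```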
Can a torch.tensor() be created from a Python list?
Yes, e.g. torch.tensor([[1, 2, 3], [4, 5, 6]]). Nested Python lists work directly; no NumPy wrapper is needed.
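A minimal sketch of building a tensor straight from a nested list:

```python
import torch

# torch.tensor() accepts nested Python lists directly; no NumPy needed.
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(t.shape)  # torch.Size([2, 3])
```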
Does torch.tensor() copy data?
Yes, torch.tensor() always copies its data. To convert a NumPy array without copying, use torch.as_tensor() (or torch.from_numpy()), which shares memory with the array when possible.
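A small sketch showing the copy vs. shared-memory difference:

```python
import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])

# torch.tensor() copies: mutating the array does not affect the tensor.
copied = torch.tensor(a)

# torch.as_tensor() shares memory with the NumPy array when possible.
shared = torch.as_tensor(a)

a[0] = 99.0
print(copied[0].item())  # 1.0  (unchanged copy)
print(shared[0].item())  # 99.0 (shares the array's memory)
```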
How do you use slice notation with torch.tensor()?
Like you would a numpy array.
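A minimal sketch of NumPy-style indexing and slicing on a tensor:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(t[0])     # first row: tensor([1, 2, 3])
print(t[:, 1])  # second column: tensor([2, 5])
print(t[1, 2])  # single element: tensor(6)
```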
What is the requires_grad argument in torch.tensor()?
It tells PyTorch to record operations involving that tensor so gradients can be computed by automatic differentiation.
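A minimal sketch of the recording in action: operations on a requires_grad tensor produce results with a grad_fn pointing back into the recorded graph.

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x * x).sum()

# Because requires_grad=True, PyTorch recorded the multiply and the sum,
# so y carries a grad_fn and itself requires grad.
print(y.grad_fn)        # <SumBackward0 object ...>
print(y.requires_grad)  # True
```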
How do you see the value of a torch.tensor()?
Print it, or call .item() on a single-element tensor to get a plain Python number (.tolist() for multi-element tensors).
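A quick sketch of both accessors:

```python
import torch

t = torch.tensor([[1, 2], [3, 4]])
print(t.sum().item())  # 10  (.item() requires a single-element tensor)
print(t.tolist())      # [[1, 2], [3, 4]]  (for multi-element tensors)
```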
Where do gradients accumulate when calling .backward()?
In the leaf tensors: tensors created directly by the user with requires_grad=True, not ones produced by an operation.
What does a .backward() call do?
Computes dloss/dx for every tensor x with requires_grad=True, and accumulates the result into x.grad for every such leaf x.
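The last two cards can be sketched together: .backward() fills the .grad of the leaf, and a second call accumulates rather than overwrites.

```python
import torch

# x is a leaf tensor (created by the user with requires_grad=True).
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x * x).sum()   # loss = sum(x_i^2), so dloss/dx_i = 2 * x_i

loss.backward()
print(x.grad)          # tensor([2., 4., 6.])

# Gradients accumulate: a second backward pass adds into x.grad.
loss2 = (x * x).sum()
loss2.backward()
print(x.grad)          # tensor([4., 8., 12.])
```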
torch.randn(size)?
Returns a PyTorch tensor of the given shape with each element sampled from the standard normal distribution N(0, 1).
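A one-line sketch:

```python
import torch

t = torch.randn(2, 3)  # 2x3 tensor, each element drawn from N(0, 1)
print(t.shape)         # torch.Size([2, 3])
```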
torch.nn.Linear(in_features, out_features, bias=True)?
Applies a linear transformation to the incoming data: y = xW^T + b. W and b are randomly initialized
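A minimal sketch of the layer and its randomly initialized W and b:

```python
import torch
import torch.nn as nn

layer = nn.Linear(in_features=3, out_features=2)  # W is 2x3, b has 2 elements

x = torch.randn(5, 3)      # batch of 5 inputs
y = layer(x)               # y = x @ W.T + b
print(y.shape)             # torch.Size([5, 2])
print(layer.weight.shape)  # torch.Size([2, 3])
```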
What does a loss constructor like nn.MSELoss() create?
A criterion: a callable object that you then supply an input and a target to, and which returns the loss value.
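A minimal sketch, using nn.MSELoss as the example criterion:

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()  # the criterion, here mean squared error

input = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])
loss = criterion(input, target)   # (0^2 + 0^2 + 2^2) / 3
print(loss.item())                # 1.333...
```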
what does optimizer.step() do?
Updates the parameters based on the current gradients stored in the .grad attribute of each parameter.
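A minimal sketch of one SGD step, showing how .step() consumes the .grad filled in by .backward():

```python
import torch

# One gradient-descent step on w, minimizing (w - 5)^2.
w = torch.tensor([0.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

optimizer.zero_grad()          # clear any old gradients
loss = ((w - 5.0) ** 2).sum()
loss.backward()                # w.grad is now 2 * (w - 5) = -10
optimizer.step()               # w <- w - lr * w.grad = 0 - 0.1 * (-10) = 1.0
print(w.item())                # 1.0
```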
What is a Pytorch parameter?
A parameter is still a Tensor, much like a normal variable, but when assigned as an attribute of a model it is registered with that model. This means we can easily feed model parameters to the optimizer via model.parameters(). Parameters automatically have requires_grad set to True.
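A minimal sketch (the Scale module name is just an illustration): wrapping a tensor in nn.Parameter inside a module registers it automatically.

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning an nn.Parameter as an attribute registers it with the module.
        self.k = nn.Parameter(torch.tensor(2.0))

    def forward(self, x):
        return self.k * x

model = Scale()
print(model.k.requires_grad)     # True, set automatically
print(list(model.parameters()))  # contains k, ready to hand to an optimizer
```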
What do add() and sub() do?
add() adds its argument to the tensor and sub() subtracts it. A scalar argument is applied to each element; a tensor argument is applied elementwise with broadcasting.
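A minimal sketch, also showing that the out-of-place forms leave the original tensor unchanged:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])
print(t.add(10).tolist())  # [11.0, 12.0, 13.0]  (10 added to each element)
print(t.sub(1).tolist())   # [0.0, 1.0, 2.0]     (1 subtracted from each element)
print(t.tolist())          # [1.0, 2.0, 3.0]     (t itself is unchanged)
```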