PyTorch Flashcards
create a tensor in a range
torch.arange(start=0, end, step=1)
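A minimal sketch (values chosen for illustration):

```python
import torch

# torch.arange works like Python's range: start inclusive, end exclusive
t = torch.arange(start=0, end=10, step=2)
# tensor([0, 2, 4, 6, 8])
```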
create a float_32 tensor. Declare dtype, device, requires_grad
float_32_tensor = torch.tensor([3.0, 6.0, 9.0], dtype=torch.float32, device="cuda", requires_grad=False)
change the dtype of a tensor
float_16_tensor = float_32_tensor.type(torch.float16)
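A quick sketch of the conversion (the starting values are assumed for illustration):

```python
import torch

float_32_tensor = torch.tensor([3.0, 6.0, 9.0])  # default dtype is float32
float_16_tensor = float_32_tensor.type(torch.float16)
# float_32_tensor.to(torch.float16) is an equivalent, more modern spelling
```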
create a random tensor
torch.rand(row, column)
built-in PyTorch functions to multiply, divide, add and subtract tensors
torch.mul(tensor, 10)
torch.div(tensor,4)
torch.add(tensor, 4)
torch.sub(tensor, 3)
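A small sketch of the four functions in action (example values assumed):

```python
import torch

tensor = torch.tensor([1, 2, 3])
# Each function returns a new tensor; the original is unchanged
print(torch.mul(tensor, 10))  # tensor([10, 20, 30])
print(torch.div(tensor, 4))   # float result, even for integer inputs
print(torch.add(tensor, 4))   # tensor([5, 6, 7])
print(torch.sub(tensor, 3))
```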
matrix multiplication of tensors
torch.matmul(matrix1, matrix2)
** Note that size is critical.
(x, y) @ (y, z) is possible; the inner dimensions must match.
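A minimal sketch of the shape rule (shapes chosen for illustration):

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(3, 4)
c = torch.matmul(a, b)  # inner dims match (3 == 3)
print(c.shape)          # torch.Size([2, 4])

# element-wise multiplication, by contrast, needs matching shapes:
d = a * a               # same as torch.mul(a, a)
```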
transpose of a tensor
some_tensor.T
Find the min, max, sum and mean of a tensor
Find the min
torch.min(x), x.min()
Find the max
torch.max(x), x.max()
Find the mean
torch.mean(x.type(torch.float))
Find the sum
torch.sum(x)
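A quick sketch of all four aggregations (values assumed):

```python
import torch

x = torch.arange(0, 100, 10)
print(x.min(), x.max(), x.sum())     # tensor(0) tensor(90) tensor(450)
# torch.mean requires a float dtype, so cast first
print(x.type(torch.float32).mean())  # tensor(45.)
```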
find the index of the maximum & minimum valued element in a tensor
torch.argmax(some_tensor)
torch.argmin(some_tensor)
or
some_tensor.argmax()
some_tensor.argmin()
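A small sketch (values assumed for illustration):

```python
import torch

x = torch.tensor([10, 50, 30])
print(x.argmax())  # tensor(1) -- index of 50, the largest value
print(x.argmin())  # tensor(0) -- index of 10, the smallest value
```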
reshape a tensor with the classical way
x.reshape(3,3)
reshape a tensor as a view (no copy; shares the same underlying memory as the original)
some_tensor.view(3,3)
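A sketch showing that a view shares memory with its source tensor (values assumed):

```python
import torch

x = torch.arange(9)
v = x.view(3, 3)     # shares x's underlying storage
v[0, 0] = 100
print(x[0])          # tensor(100) -- the change is visible through x
r = x.reshape(3, 3)  # reshape returns a view when possible, a copy otherwise
```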
stack tensors vertically
torch.stack([tensor1, tensor2], dim=0)
torch.vstack([tensor1, tensor2])
stack tensors horizontally
torch.stack([tensor1, tensor2], dim=1)
torch.hstack([tensor1, tensor2])
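A sketch comparing the stacking functions on 1-D tensors (values assumed). Note that for 1-D inputs, hstack concatenates rather than adding a new dimension:

```python
import torch

t1 = torch.tensor([1, 2, 3])
t2 = torch.tensor([4, 5, 6])
print(torch.stack([t1, t2], dim=0))  # shape (2, 3) -- inputs become rows
print(torch.stack([t1, t2], dim=1))  # shape (3, 2) -- inputs become columns
print(torch.vstack([t1, t2]))        # shape (2, 3)
print(torch.hstack([t1, t2]))        # shape (6,)  -- 1-D tensors are concatenated
```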
remove the singleton dimensions from a tensor
some_tensor.squeeze()
Previous tensor: tensor([[5, 2, 3, 4, 5, 6, 7, 8, 9]])
Previous tensor's shape: torch.Size([1, 9])
Squeezed tensor: tensor([5, 2, 3, 4, 5, 6, 7, 8, 9])
Squeezed tensor's shape: torch.Size([9])
rearrange the dimensions of the target tensor to desired one
torch.permute(some_tensor, dims) or some_tensor.permute(dims)
x_original = torch.rand(size=(224, 224, 3)) # [height, width, colour channels]
Permute the original tensor to rearrange the axis (or dim) order
x_permuted = torch.permute(x_original, (2, 0, 1)) # shifts axis 0 -> 1, 1 -> 2, 2 -> 0; shape becomes (3, 224, 224)