PyTorch Flashcards

1
Q

create a tensor in a range

A

torch.arange(start=0, end, step=1)
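
For example (a minimal sketch, assuming torch is imported):

torch.arange(2, 12, 3) # tensor([2, 5, 8, 11])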

2
Q

create a float32 tensor, declaring dtype, device, and requires_grad

A

float_32_tensor = torch.tensor([3.0, 6.0, 9.0], dtype=torch.float32, device="cuda", requires_grad=False)
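
A safer variant when a GPU is not guaranteed (a minimal sketch; falls back to the CPU):

device = "cuda" if torch.cuda.is_available() else "cpu"
float_32_tensor = torch.tensor([3.0, 6.0, 9.0], dtype=torch.float32, device=device, requires_grad=False)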

3
Q

change the dtype of a tensor

A

float_16_tensor = float_32_tensor.type(torch.float16)
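
An equivalent sketch using .to(), which also accepts a dtype (float_32_tensor is assumed from the previous card):

float_16_tensor = float_32_tensor.to(torch.float16)
float_16_tensor.dtype # torch.float16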

4
Q

create a random tensor

A

torch.rand(row, column)
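
For example (assuming torch is imported):

torch.rand(3, 4) # 3x4 tensor of uniform values in [0, 1)
torch.rand(size=(3, 4)) # equivalent, with an explicit size tuple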

5
Q

built-in PyTorch functions to multiply, divide, add and subtract tensors

A

torch.mul(tensor, 10)

torch.div(tensor, 4)

torch.add(tensor, 4)

torch.sub(tensor, 3)
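
The Python operators are equivalent; a minimal sketch assuming tensor = torch.tensor([1, 2, 3]):

tensor * 10 # tensor([10, 20, 30])
tensor / 4 # tensor([0.2500, 0.5000, 0.7500])
tensor + 4 # tensor([5, 6, 7])
tensor - 3 # tensor([-2, -1, 0])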

6
Q

matrix multiplication of tensors

A

torch.matmul(matrix1, matrix2)

** Note that shape is critical:
(x, y) @ (y, x) is possible because the inner dimensions match.
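
A minimal shape check (assuming torch is imported):

a = torch.rand(2, 3)
b = torch.rand(3, 2)
torch.matmul(a, b).shape # torch.Size([2, 2]) -- inner dimensions (3 and 3) match
# a @ b is shorthand for torch.matmul(a, b)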

7
Q

transpose of a tensor

A

some_tensor.T

8
Q

Find the min, max, sum and mean of a tensor

A

Find the min
torch.min(x), x.min()

Find the max
torch.max(x), x.max()

Find the mean
torch.mean(x.type(torch.float))

Find the sum
torch.sum(x)
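
A worked sketch (assuming torch is imported):

x = torch.arange(0, 100, 10)
x.min() # tensor(0)
x.max() # tensor(90)
x.type(torch.float32).mean() # tensor(45.) -- mean requires a float dtype
x.sum() # tensor(450)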

9
Q

find the index of the maximum & minimum valued element in a tensor

A

torch.argmax(some_tensor)
torch.argmin(some_tensor)

or

some_tensor.argmax()
some_tensor.argmin()
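
For example:

x = torch.tensor([1, 9, 3, 0])
x.argmax() # tensor(1) -- index of the 9
x.argmin() # tensor(3) -- index of the 0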

10
Q

reshape a tensor the standard way

A

x.reshape(3,3)

11
Q

reshape a tensor with a view, which shares the same underlying memory (no copy)

A

some_tensor.view(3,3)
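
A minimal sketch showing the shared memory, where changing the view changes the original:

x = torch.arange(1, 10)
z = x.view(3, 3)
z[0, 0] = 99
x[0] # tensor(99) -- x changed too, because z shares x's memory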

12
Q

stack tensors vertically

A

torch.stack([tensor1, tensor2], dim=0)
torch.vstack([tensor1, tensor2])
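
For 1-D tensors the two forms give the same result (a minimal sketch):

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
torch.stack([a, b], dim=0) # tensor([[1, 2, 3], [4, 5, 6]]), shape (2, 3)
torch.vstack([a, b]) # same result for 1-D inputs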

13
Q

stack tensors horizontally

A

torch.stack([tensor1, tensor2], dim=1)
torch.hstack([tensor1, tensor2])
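
Note the two are not identical for 1-D tensors (a minimal sketch):

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
torch.stack([a, b], dim=1) # tensor([[1, 4], [2, 5], [3, 6]]), shape (3, 2)
torch.hstack([a, b]) # tensor([1, 2, 3, 4, 5, 6]), shape (6,)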

14
Q

remove the singleton dimensions from a tensor

A

some_tensor.squeeze()

Previous tensor : tensor([[5, 2, 3, 4, 5, 6, 7, 8, 9]])
Previous tensor’s shape torch.Size([1, 9])
Squeezed tensor : tensor([5, 2, 3, 4, 5, 6, 7, 8, 9])
Squeezed tensor’s shape torch.Size([9])

15
Q

rearrange the dimensions of a tensor into a desired order

A

torch.permute(some_tensor, dims)

x_original = torch.rand(size=(224, 224, 3)) # [height, width, colour channels]

# Permute the original tensor to rearrange the axis (or dim) order
x_permuted = torch.permute(x_original, (2, 0, 1)) # new order: [colour channels, height, width]
x_permuted.shape # torch.Size([3, 224, 224])

16
Q

set up device-agnostic code

A

device = "cuda" if torch.cuda.is_available() else "cpu"
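
Usage sketch, creating a tensor and moving it to the chosen device:

some_tensor = torch.rand(3).to(device) # on the GPU if one is available, else the CPU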

17
Q

move a tensor to the CPU

A

some_tensor.cpu()

18
Q

turn a PyTorch tensor into a NumPy array

A

some_tensor.numpy()
* If the tensor is on a GPU, this throws an error; move it to the CPU first.
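
The GPU-safe pattern moves the tensor to the CPU first (a minimal sketch):

numpy_array = some_tensor.cpu().numpy()
# A tensor with requires_grad=True also needs .detach() first:
# numpy_array = some_tensor.detach().cpu().numpy()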

19
Q

make predictions without tracking gradients (faster inference)

A

with torch.inference_mode():
    y_pred = model_0(X_test)

20
Q

make predictions with gradient tracking enabled (the default)

A

y_pred = model_0(X_test)

21
Q

how can one measure how wrong a model's predictions are

A

by using a loss function.

22
Q

list all parameters of a model

A

1 - model_0.parameters()
2 - model_0.state_dict()
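
A minimal sketch with a hypothetical one-layer stand-in model:

from torch import nn

model_0 = nn.Linear(in_features=2, out_features=1) # illustrative model
list(model_0.parameters()) # weight and bias tensors
model_0.state_dict() # OrderedDict of named parameters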

23
Q

loss function

A

a function that keeps track of how wrong a model's predictions are.
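
For example, L1 (mean absolute error) loss between predictions and targets; the names are illustrative:

from torch import nn

loss_fn = nn.L1Loss()
y_pred = torch.tensor([2.0, 4.0])
y_true = torch.tensor([1.0, 5.0])
loss_fn(y_pred, y_true) # tensor(1.) -- mean of |2-1| and |4-5|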

24
Q

optimizer

A

takes the model's loss into account and adjusts the model's parameters to reduce that loss
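
A minimal sketch of one training step, assuming model_0, loss_fn, X_train and y_train already exist:

optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.01)

optimizer.zero_grad() # reset gradients from the previous step
loss = loss_fn(model_0(X_train), y_train)
loss.backward() # compute gradients of the loss
optimizer.step() # adjust parameters to reduce the loss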