PyTorch Flashcards

1
Q

What are Tensors in PyTorch?

Tensors are data __ similar to a__ and m__.

A

Tensors are a specialized data structure similar to arrays and matrices.
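A minimal sketch of creating tensors (the variable names are illustrative):

```python
import torch

# Tensors can be created directly from Python lists...
data = [[1, 2], [3, 4]]
x = torch.tensor(data)     # 2x2 integer tensor

# ...or with factory functions, much like NumPy arrays
ones = torch.ones(2, 2)

print(x.shape)  # torch.Size([2, 2])
```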

2
Q

In PyTorch, what are Tensors normally used for?

Give 2 uses.

  1. Encode the in… and ou…
  2. Encode the model’s pa…
A
  1. Encode the inputs and outputs of a model.
  2. Encode the model’s parameters.
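Both uses can be seen with a tiny model (a sketch; the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

layer = nn.Linear(3, 2)   # a tiny model
x = torch.randn(4, 3)     # inputs encoded as a tensor
y = layer(x)              # outputs are tensors too

# The model's parameters (weight and bias) are also tensors
params = list(layer.parameters())
```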
3
Q

What does a Dataset do?

It __ the __ and their __.

A

It stores the samples and their corresponding labels.
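A custom Dataset subclasses `torch.utils.data.Dataset` and implements `__len__` and `__getitem__`; a minimal sketch (`ToyDataset` is an illustrative name):

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Stores samples and their corresponding labels."""
    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        # Indexing returns one (sample, label) pair
        return self.samples[idx], self.labels[idx]

ds = ToyDataset(torch.randn(10, 3), torch.randint(0, 2, (10,)))
sample, label = ds[0]
```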

4
Q

What does DataLoader do?

It wraps an iterable around the __ to enable __ to the samples.

A

It wraps an iterable around the Dataset to enable easy access to the samples.
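For example (a sketch using the built-in `TensorDataset`; the batch size is illustrative):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

ds = TensorDataset(torch.randn(8, 3), torch.arange(8))
loader = DataLoader(ds, batch_size=4, shuffle=True)

# Iterating over the DataLoader yields batches of (samples, labels)
for xb, yb in loader:
    print(xb.shape, yb.shape)  # torch.Size([4, 3]) torch.Size([4])
```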

5
Q

What does ‘transforms’ do?

To perform some __ of the __ and make it suitable for __ the model.

A

To perform some manipulation of the data and make it suitable for training the model.
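In TorchVision, transforms (e.g. `ToTensor`) are callables applied to each sample when the dataset is indexed. A minimal sketch of the idea using a plain function as the transform (`normalize` and `TransformedDataset` are illustrative names, not library API):

```python
import torch
from torch.utils.data import Dataset

def normalize(x):
    """A toy transform: scale features to zero mean, unit variance."""
    return (x - x.mean()) / x.std()

class TransformedDataset(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        x = self.data[idx]
        if self.transform:
            x = self.transform(x)  # manipulate the data on access
        return x

ds = TransformedDataset(torch.randn(5, 4), transform=normalize)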

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

What is torch.autograd?

It enables __ differentiation, making it possible to __ how changes in __ affect the __, essential for __ __ __.

A

It enables automatic differentiation, making it possible to compute how changes in inputs affect the outputs, essential for optimizing neural networks.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

What is the typical training procedure for a NN?

  • Define the __ that has some learnable __ (or
    weights)
  • __ over a dataset of __
  • Process __ through the __
  • Compute the __
  • Propagate __ back into the network’s parameters
  • Update the __ of the network, typically using a simple update
    rule: weight = weight - learning_rate * gradient
A
  • Define the neural network that has some learnable parameters (or
    weights)
  • Iterate over a dataset of inputs
  • Process input through the network
  • Compute the loss (how far is the output from being correct)
  • Propagate gradients back into the network’s parameters
  • Update the weights of the network, typically using a simple update
    rule: weight = weight - learning_rate * gradient
How well did you know this?
1
Not at all
2
3
4
5
Perfectly