Ch6 Other Computer Vision Problems End-of-Chapter Questions Flashcards

1
Q

What is the difference between a Dataset and DataLoader?

A

DataLoader divides a Dataset into mini-batches.
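A minimal sketch of the relationship, using PyTorch's `TensorDataset` and `DataLoader` (an illustrative example, not from the chapter):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# A Dataset is an indexable collection of (x, y) pairs.
xs = torch.arange(10).float().unsqueeze(1)
ys = torch.arange(10).float()
dset = TensorDataset(xs, ys)

# A DataLoader wraps the Dataset and serves it in mini-batches.
dl = DataLoader(dset, batch_size=4, shuffle=False)
batches = list(dl)
print(len(batches))          # 3 batches: sizes 4, 4, 2
print(batches[0][0].shape)   # torch.Size([4, 1])
```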

2
Q

What does a Datasets object normally contain?

A

It contains a training Dataset and a validation Dataset.

3
Q

What does a DataLoaders object normally contain?

A

It contains a training DataLoader and a validation DataLoader.
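A hypothetical stand-in for the structure (fastai's real `DataLoaders` does more, but it exposes the same pair as `dls.train` and `dls.valid`):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# A minimal stand-in: a DataLoaders object simply pairs a
# training DataLoader with a validation DataLoader.
class SimpleDataLoaders:
    def __init__(self, train, valid):
        self.train, self.valid = train, valid

xs, ys = torch.randn(100, 1), torch.randn(100)
train_dl = DataLoader(TensorDataset(xs[:80], ys[:80]), batch_size=16, shuffle=True)
valid_dl = DataLoader(TensorDataset(xs[80:], ys[80:]), batch_size=16)

dls = SimpleDataLoaders(train_dl, valid_dl)
```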

4
Q

What are the methods to customize how the independent and dependent variables are created with the data block API?

A

get_x and get_y
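A sketch of the two callables, which would be passed to `DataBlock(..., get_x=get_x, get_y=get_y)`. The filename and label strings are made up; a plain dict stands in for the DataFrame row the chapter uses:

```python
# Hypothetical row from a multi-label CSV (dict standing in for a DataFrame row).
row = {'fname': '008985.jpg', 'labels': 'person dog'}

# get_x builds the independent variable (here, the image path) from a row;
# get_y builds the dependent variable (here, the list of labels).
def get_x(r): return 'train/' + r['fname']
def get_y(r): return r['labels'].split(' ')

print(get_x(row))  # train/008985.jpg
print(get_y(row))  # ['person', 'dog']
```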

5
Q

Why is softmax not an appropriate output activation function when using a one-hot-encoded target for multi-label classification?

A

The issue is not the one-hot encoding itself but the multi-label setting: softmax only works when exactly one answer is correct, because it forces all predictions to sum to 1 and pushes one activation to be much larger than the others. When an image can have several (or zero) correct labels, each activation should be scaled independently with sigmoid instead.
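A quick illustrative comparison of the two activations on the same logits:

```python
import torch

acts = torch.tensor([0.5, 2.0, -1.0])

# Softmax: activations compete; they must sum to 1,
# so boosting one label suppresses the others.
sm = torch.softmax(acts, dim=0)
print(sm.sum())  # tensor(1.)

# Sigmoid: each activation is squashed independently into (0, 1),
# so several labels can be "on" at once.
sg = torch.sigmoid(acts)
```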

6
Q

What is the difference between nn.BCELoss and nn.BCEWithLogitsLoss?

A

nn.BCELoss computes binary cross entropy on a one-hot-encoded target, but it does not apply the sigmoid to scale activations between 0 and 1 first, so its inputs must already be probabilities.

nn.BCEWithLogitsLoss does the sigmoid and binary cross entropy in a single function. This is the one you normally want to use.
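An illustrative check that the fused version matches sigmoid followed by nn.BCELoss (the fused form is also more numerically stable for extreme logits):

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0, 0.5]])
targets = torch.tensor([[1.0, 0.0, 1.0]])

# nn.BCELoss expects probabilities, so sigmoid must be applied first.
loss_a = nn.BCELoss()(torch.sigmoid(logits), targets)

# nn.BCEWithLogitsLoss applies the sigmoid internally.
loss_b = nn.BCEWithLogitsLoss()(logits, targets)

print(torch.isclose(loss_a, loss_b, atol=1e-5))  # tensor(True)
```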

7
Q

Why can’t we use regular accuracy in a multi-label problem?

A

The regular accuracy function assumes that the class predicted was the one with the highest activation, but in a multi-label problem we could have more than one right answer.
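Instead of taking a single argmax, multi-label accuracy thresholds each sigmoid output independently. fastai's `accuracy_multi` works along these lines; a minimal sketch with made-up logits:

```python
import torch

def accuracy_multi(preds, targets, thresh=0.5):
    # Threshold each sigmoid output independently, then compare
    # the resulting boolean predictions against the targets.
    return ((torch.sigmoid(preds) > thresh) == targets.bool()).float().mean()

preds = torch.tensor([[3.0, -2.0, 1.0]])   # logits for 3 labels
targets = torch.tensor([[1.0, 0.0, 1.0]])  # two labels present
print(accuracy_multi(preds, targets))      # tensor(1.)
```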

8
Q

When is it okay to tune a hyperparameter on the validation set?

A

When the relationship between the hyperparameter and the metric you’re using to measure performance is a smooth curve. This gives us confidence that we’re not picking an inappropriate outlier to define the hyperparameter value.

9
Q

How is y_range implemented in fastai?

A

y_range is implemented in fastai with the sigmoid_range function:

def sigmoid_range(x, lo, hi):
    return torch.sigmoid(x) * (hi - lo) + lo
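A quick sanity check that the outputs always land inside (lo, hi); the 0 to 5.5 range here is a made-up example of a bounded target:

```python
import torch

def sigmoid_range(x, lo, hi):
    # Sigmoid squashes x into (0, 1); scale and shift into (lo, hi).
    return torch.sigmoid(x) * (hi - lo) + lo

x = torch.tensor([-100.0, 0.0, 100.0])
out = sigmoid_range(x, 0, 5.5)
print(out)  # ≈ tensor([0.0000, 2.7500, 5.5000])
```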

10
Q

What do you need to do to make sure the fastai library applies the same data augmentation to your input images and your target point coordinates?

A

Use PointBlock (in the blocks parameter of the DataBlock) to let fastai know that the labels represent coordinates; fastai will then adjust them to match the data augmentations applied to the images.
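A non-runnable sketch of what such a DataBlock might look like; the label function `get_ctr` and the use of `get_image_files` are assumptions modeled on the chapter's head-pose example:

```python
from fastai.vision.all import *

biwi = DataBlock(
    blocks=(ImageBlock, PointBlock),  # PointBlock marks the labels as coordinates
    get_items=get_image_files,
    get_y=get_ctr,                    # hypothetical function returning a point
    batch_tfms=aug_transforms(),      # the same augmentations get applied to the points
)
```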

11
Q

Why is nll_loss not an appropriate loss function when using a one-hot-encoded target (for multilabel classification)?

A

nll_loss returns the loss for just one activation per item, i.e. it assumes there is exactly one correct label, so it will not work when multiple labels are possible.
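An illustrative example of why: `F.nll_loss` takes an integer class index per item and picks out that single activation, so a multi-hot float target like `[1., 0., 1.]` cannot even be expressed:

```python
import torch
import torch.nn.functional as F

log_probs = torch.log_softmax(torch.tensor([[2.0, 0.5, -1.0]]), dim=1)

# The target is a single class index per row; nll_loss simply
# negates the log-probability at that one index.
loss = F.nll_loss(log_probs, torch.tensor([0]))
print(torch.isclose(loss, -log_probs[0, 0]))  # tensor(True)
```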

12
Q

How do we encode the dependent variable in a multi-label classification problem?

A

We use one-hot encoding: a vector of zeros, with a 1 in every position whose corresponding label is present.
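A small made-up example with a three-label vocabulary (strictly speaking this is "multi-hot", since several positions can be 1 at once):

```python
import torch

vocab = ['cat', 'dog', 'person']
labels = ['dog', 'person']   # the labels present in one image

# 1.0 wherever the vocab entry appears in this item's labels, else 0.0.
target = torch.tensor([1.0 if v in labels else 0.0 for v in vocab])
print(target)  # tensor([0., 1., 1.])
```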
