Transfer Learning and Pre-trained CNNs Flashcards

1
Q

What is a pretrained model?

A

A pretrained model is a saved neural network that has been previously trained on a large dataset and can be reused for new tasks.

2
Q

Why are pretrained models useful?

A

They improve accuracy and save training time, especially when working with small datasets.

3
Q

What is ImageNet?

A

ImageNet is a large-scale dataset with 1.4 million labeled images and 1,000 different classes.

4
Q

Name three examples of pretrained models.

A

VGG-16, GoogLeNet (Inception), and MobileNet.

5
Q

What is VGG-16?

A

A deep convolutional neural network with 16 weight layers, developed by the Visual Geometry Group (VGG) at the University of Oxford and trained on ImageNet.

6
Q

What is GoogLeNet (Inception)?

A

A deep CNN introduced by Google in 2014, known for using inception modules to improve efficiency.

7
Q

What is MobileNet?

A

A CNN designed for mobile and edge devices, optimized for efficiency.

8
Q

What is Faster R-CNN used for?

A

Faster R-CNN is used for object detection.

9
Q

What is Mask R-CNN?

A

An extension of Faster R-CNN that provides pixel-wise segmentation masks along with bounding boxes.

10
Q

What does YOLO stand for, and what is it used for?

A

YOLO (You Only Look Once) is an object detection algorithm that divides an image into a grid and predicts bounding boxes and class probabilities.

11
Q

What are two main ways to use a pretrained model?

A

Feature extraction and fine-tuning.

12
Q

What is feature extraction in transfer learning?

A

Using a pretrained model’s convolutional base to extract features from new data and feeding them into a new classifier.

13
Q

What is fine-tuning in transfer learning?

A

Unfreezing some layers of a pretrained model and training them on new data to adapt the model to a new task.

14
Q

Why might you reuse only the convolutional base of a pretrained model?

A

Because it captures general image features that can be useful for different tasks, while the classifier is often task-specific.

15
Q

What is shallow mode in transfer learning?

A

Running the convolutional base over a dataset, recording the outputs, and training a separate classifier on the extracted features.

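A minimal Keras sketch of shallow mode, assuming the VGG16 base from the later cards; the placeholder data, layer sizes, and optimizer are illustrative assumptions, not part of the original card:

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

train_images = np.random.rand(16, 224, 224, 3).astype('float32')  # placeholder images
train_labels = np.random.randint(0, 2, size=(16,))                # placeholder labels

# Run the convolutional base once over the dataset and record its outputs.
features = conv_base.predict(train_images)               # shape: (16, 7, 7, 512)
features = features.reshape(len(features), 7 * 7 * 512)

# Train a separate classifier on the extracted features.
classifier = models.Sequential([
    layers.Input(shape=(7 * 7 * 512,)),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
classifier.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit(features, train_labels, epochs=2, batch_size=8)
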
16
Q

What is deep mode in transfer learning?

A

Extending a pretrained model by adding new Dense layers on top and training the entire network end-to-end.

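A minimal Keras sketch of deep mode, again assuming the VGG16 base; the head architecture, optimizer, and placeholder data are illustrative assumptions:

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
conv_base.trainable = False  # commonly frozen at first; unfreeze (parts of) it later for fine-tuning

# Stack new Dense layers on top of the base so every image flows through the whole network.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

train_images = np.random.rand(8, 224, 224, 3).astype('float32')  # placeholder images
train_labels = np.random.randint(0, 2, size=(8,))                # placeholder labels
model.fit(train_images, train_labels, epochs=1, batch_size=4)
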
17
Q

What is the key difference between shallow and deep mode?

A

Shallow mode runs the frozen convolutional base once to precompute features and trains a separate classifier on them; deep mode stacks the new layers on top of the base in a single model, so every training image passes through the base during training.

18
Q

What function in Keras loads the VGG16 convolutional base?

A

from tensorflow.keras.applications import VGG16
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

19
Q

How do you freeze the convolutional base in Keras?

A

conv_base.trainable = False

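Usage note (Keras behaviour, not part of the original card): changes to trainable only take effect when the model is compiled, so freeze the base before calling model.compile(), and re-compile if you later unfreeze layers for fine-tuning.
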
20
Q

How do you fine-tune specific layers in Keras?

A

Iterate through layers and set layer.trainable = True for the ones you want to train.

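A sketch of the usual unfreezing loop, assuming the VGG16 conv_base from the previous cards; the layer name ('block5_conv1', the start of VGG16's top convolutional block) and the low learning rate mentioned below are illustrative choices:

from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Unfreeze only the top block: everything before 'block5_conv1' stays frozen.
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable

# Then re-compile the full model with a small learning rate (e.g. 1e-5) before continuing training.
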
21
Q

Name three additional pretrained models available in Keras.

A

Xception, ResNet50, and MobileNet.

22
Q

What is the self-study reading list for transfer learning?

A

Chapter 6 of ‘Convolutional Neural Networks’ and the papers by Simonyan & Zisserman (2014, the VGG paper) and Szegedy et al. (2015, the GoogLeNet/Inception paper).