Transfer Learning Flashcards
What is transfer learning?
Transfer learning is the practice of reusing pretrained networks, so that the ‘wisdom’ an existing model has already learned can be applied to a new task.
What are pretrained networks?
Pretrained networks are saved networks that were previously trained on a large dataset, on a task similar to the one posed by our own, typically much smaller, dataset.
What is the primary benefit of using a pretrained model?
Pretrained models typically pair a solid convolutional feature-extraction base with task-specific layers. If we remove the task-specific layers and replace them with our own classifier, we inherit a feature-extraction base that we never have to train from scratch, as in the sketch below.
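A minimal sketch of this in Keras, assuming TensorFlow is installed and using VGG16 as one commonly used pretrained base (the 180×180 input size is a hypothetical choice):

```python
from tensorflow import keras

# include_top=False drops the original ImageNet classifier head, keeping
# only the reusable convolutional feature-extraction base.
conv_base = keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(180, 180, 3),  # hypothetical input size for our task
)
conv_base.summary()
```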
What is fine-tuning?
Fine-tuning is fitting the pretrained model to our task-specific data so that it adapts further to our particular domain. There are two modes for this: shallow and deep.
What is the ‘shallow’ mode of fine-tuning?
Running the convolutional base over our dataset once, storing its output, and feeding that output to a standalone, densely connected classifier, as in the sketch below. This is fast and cheap, but because each image passes through the base only once, it cannot benefit from data augmentation.
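A sketch of this mode, assuming the `conv_base` from the previous sketch and a hypothetical `train_dataset` that yields batches of (images, labels):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Run the pretrained base once over the data and cache its output.
def extract_features(dataset):
    all_features, all_labels = [], []
    for images, labels in dataset:
        all_features.append(conv_base.predict(images))
        all_labels.append(labels)
    return np.concatenate(all_features), np.concatenate(all_labels)

train_features, train_labels = extract_features(train_dataset)  # hypothetical dataset

# Standalone densely connected classifier trained on the cached features.
classifier = keras.Sequential([
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # e.g. a binary task
])
classifier.compile(optimizer="rmsprop",
                   loss="binary_crossentropy",
                   metrics=["accuracy"])
classifier.fit(train_features, train_labels, epochs=10)
```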
What is the ‘deep’ mode of fine-tuning?
Extending the convolutional base by adding dense layers on top of it and training the whole thing end to end as one model, as in the sketch below. This is slower, but every input passes through the base during training, so data augmentation can be used.
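A sketch of the deep mode, again assuming the `conv_base` and hypothetical `train_dataset` from the earlier sketches:

```python
from tensorflow import keras
from tensorflow.keras import layers

conv_base.trainable = False  # usually frozen at first (see the next card)

inputs = keras.Input(shape=(180, 180, 3))
x = conv_base(inputs)                        # inputs flow through the base
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="rmsprop",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_dataset, epochs=10)  # hypothetical dataset of (images, labels)
```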
Why do we freeze the first few layers when fine-tuning?
The first few layers of a pretrained model capture general features such as edges and corners, which transfer well to almost any visual task.
We freeze these layers to preserve that general knowledge, while allowing the later, more task-specific layers to update to our new data, as in the sketch below.
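A sketch of selective freezing with the same `conv_base`; the choice of how many layers to leave trainable is a hypothetical one:

```python
# Unfreeze the base, then re-freeze all but its last few layers.
conv_base.trainable = True
for layer in conv_base.layers[:-4]:  # keeping 4 layers trainable is a hypothetical cutoff
    layer.trainable = False

# Only the top layers of the base (plus our own classifier) now update
# during training; the general edge/corner detectors stay fixed.
```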