Learning concepts Flashcards
Why does learning on small data still work?
- Homogeneous data (populations/data sources are identical)
- Similar conditions, e.g. the KITTI dataset or images from a single CT machine
Why use pretrained networks?
Early layers learn general features. We then train the later layers to be task-specific.
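A minimal sketch of this in PyTorch (the ResNet-18 backbone, the torchvision weights API, and the 10-class head are illustrative assumptions, not from the card): freeze the pretrained early layers and train only a new task-specific head.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet; its early layers already
# encode general features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # freeze everything pretrained

# Replace the final layer with a task-specific head (10 classes is arbitrary).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are trained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```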
Mention some benefits of using pretrained networks
- With fewer parameters to train, you are less likely to overfit.
- Features are often invariant to many different effects.
- Training takes far less time.
- Note: since networks trained on ImageNet have many layers, it is still possible to overfit.
What is an auxiliary task in the reinforcement context?
Some predefined extra task that should help the learner achieve its end goal.
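As a sketch, the auxiliary task usually enters training as an extra weighted loss term (the names and the depth-prediction example here are illustrative assumptions):

```python
import torch

def total_loss(main_loss: torch.Tensor,
               aux_loss: torch.Tensor,
               aux_weight: float = 0.1) -> torch.Tensor:
    # The auxiliary task (e.g. predicting depth while learning to act)
    # adds extra gradient signal that should help the main objective.
    return main_loss + aux_weight * aux_loss
```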
What are some problems with vid2depth?
- Assumes static environment
- Too many moving objects cause noise during learning and inference
What are the two main issues with reusing trained layers?
Transferability is negatively affected by two distinct issues:
- the specialization of higher layer neurons to their original task at the expense of performance on the target task
- optimization difficulties related to splitting networks between co-adapted neurons
What is active learning (AL)?
A learner that can query some information source (e.g. a human user) for labels.
What is the overall idea of vid2depth?
Estimate a depth map for each of two frames, backproject them into 3D point clouds, and estimate the geometric transform (ego-motion) between them. The transform maps both clouds into a common 3D frame where they can be compared directly; this loss comparing two 3D point clouds is the paper's novelty.
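As a rough illustration only (a chamfer-style nearest-neighbour distance standing in for the paper's actual ICP-based loss), comparing two clouds in a common frame could look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    """cloud_a: (N, 3), cloud_b: (M, 3) arrays of 3D points in a common frame."""
    d_ab, _ = cKDTree(cloud_b).query(cloud_a)  # each a-point to its nearest b-point
    d_ba, _ = cKDTree(cloud_a).query(cloud_b)  # each b-point to its nearest a-point
    return float(d_ab.mean() + d_ba.mean())

# Map cloud_a into cloud_b's frame with the estimated ego-motion (R, t),
# then compare the clouds:
#   loss = chamfer_distance(cloud_a @ R.T + t, cloud_b)
```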
What is the main idea behind transfer learning?
Train a neural network on a big, general dataset. If you have very little task-specific data, retrain only the last layer. With a medium amount of data, retrain the last layers and fine-tune the rest.
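A minimal sketch of the medium-data regime in PyTorch (model choice, class count, and learning rates are assumptions): fine-tune the pretrained body at a small learning rate while training the new last layer at a normal one.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)   # new task-specific last layer

# Per-group learning rates: gently fine-tune the pretrained body,
# train the fresh head at a normal rate.
body = [p for name, p in model.named_parameters() if not name.startswith("fc.")]
optimizer = torch.optim.Adam([
    {"params": body, "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```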
In what context is multitask learning a good idea?
When you have different domains that carry out similar tasks. Learning them jointly while sharing a mid-level representation is useful, since what one task learns may be generally useful for the others.
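A minimal sketch of this idea (all layer sizes and task names are arbitrary assumptions): two task heads share a mid-level trunk, so gradients from both tasks shape the shared representation.

```python
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(       # shared mid-level representation
            nn.Linear(128, 64),
            nn.ReLU(),
        )
        self.head_a = nn.Linear(64, 10)    # e.g. classify real images
        self.head_b = nn.Linear(64, 10)    # e.g. classify sketches

    def forward(self, x):
        h = self.shared(x)                 # both tasks see the same features
        return self.head_a(h), self.head_b(h)
```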
Give an example of multitask learning
Shared cross-modal scene networks: a shared mid-level representation is used to classify real images, clip art, sketches, spatial text, and descriptions.
Explain the typical active learning process
Classify unlabeled images → auto-label clearly classified samples and manually label images with an uncertain classification → add these newly labeled images to the labeled dataset and retrain the model. Repeat.
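A minimal sketch of one round of this loop (scikit-learn's LogisticRegression as a stand-in classifier; the confidence thresholds and the oracle callable are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(X_lab, y_lab, X_unlab, oracle,
                          confident=0.95, uncertain=0.60):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    probs = model.predict_proba(X_unlab)
    conf = probs.max(axis=1)
    pred = model.classes_[probs.argmax(axis=1)]

    auto = conf >= confident                  # clearly classified: auto-label
    ask = conf <= uncertain                   # uncertain: ask the human oracle
    labels = pred.copy()
    labels[ask] = [oracle(x) for x in X_unlab[ask]]

    keep = auto | ask                         # grow the labeled dataset
    X_lab = np.vstack([X_lab, X_unlab[keep]])
    y_lab = np.concatenate([y_lab, labels[keep]])
    return model, X_lab, y_lab, X_unlab[~keep]   # repeat with what is left
```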