Learning concepts Flashcards

1
Q

Why does learning on small data still work?

A
  • Homogeneous data (populations/data sources are identical)
  • Similar conditions, e.g. the KITTI dataset or images from a single CT machine

2
Q

Why use pretrained networks?

A

Early layers learn general features, so we can keep them and train only the later layers to be task-specific.

3
Q

Mention some benefits of using pretrained networks

A
  • With fewer parameters to train, you are less likely to overfit.
  • The pretrained features are often invariant to many different effects.
  • Training takes far less time.
  • Note: since networks trained on ImageNet have many layers, it is still possible to overfit.
4
Q

What is an auxiliary task in the reinforcement context?

A

Some predefined extra task that should help the learner achieve its end goal.

5
Q

What are some problems with vid2depth?

A
  • Assumes a static environment
  • Too many moving objects cause noise during learning and inference
6
Q

What are the two main issues with reusing trained layers?

A

Transferability is negatively affected by two distinct issues:

  • the specialization of higher layer neurons to their original task at the expense of performance on the target task
  • optimization difficulties related to splitting networks between co-adapted neurons
7
Q

What is active learning (AL)?

A

A learner that can query some information source (e.g. a user) for labels.

8
Q

What is the overall idea of vid2depth?

A

Estimate two 3D depth images and the (estimated) geometric transform between them, map them into a common 3D frame, and compare the resulting point clouds. Their novelty is the loss that directly compares two 3D point clouds.
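The comparison step can be sketched with a Chamfer-style point-cloud distance, a simplified stand-in for the ICP-based 3D loss in the paper; all clouds, transforms and numbers below are made up:

```python
import math

# Toy sketch (not the actual vid2depth code): map one estimated point
# cloud into the frame of the other with a rigid transform, then score
# the alignment with a symmetric nearest-neighbour (Chamfer) loss.

def apply_transform(points, R, t):
    # p' = R @ p + t for each 3D point p
    return [
        tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
        for p in points
    ]

def chamfer(a, b):
    # Symmetric mean squared nearest-neighbour distance between clouds
    def one_way(src, dst):
        return sum(min(math.dist(p, q) ** 2 for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

# Toy clouds: cloud_b is cloud_a translated by t = (1, 0, 0)
cloud_a = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 3.0)]
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cloud_b = apply_transform(cloud_a, R_identity, (1.0, 0.0, 0.0))

# A correct ego-motion estimate maps cloud_a onto cloud_b, so the
# point-cloud loss is (near) zero; a wrong estimate leaves it large.
aligned = apply_transform(cloud_a, R_identity, (1.0, 0.0, 0.0))
```

A gradient through a loss like this is what lets depth and ego-motion be trained jointly without labels.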

9
Q

What is the main idea behind transfer learning?

A

Train a neural network on a large, general dataset. With very little task-specific data, retrain only the last layer; with a medium amount of data, retrain the last layers and fine-tune the rest.
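As a toy illustration of "retrain only the last layer", here is a stdlib-only sketch where a frozen, "pretrained" first layer acts as a feature extractor and gradient descent updates only the final layer (all weights and data are made up):

```python
# Hypothetical toy setup: a 2-layer "network" where layer 1 is
# pretrained and frozen, and only the final layer is retrained
# on a tiny task-specific dataset.

W1 = [[0.5, -0.3], [0.8, 0.1]]   # "pretrained" first layer (frozen)
w2 = [0.0, 0.0]                  # task-specific head (trainable)

def features(x):
    # Frozen feature extractor: ReLU(W1 @ x); never updated below
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]

def predict(x):
    f = features(x)
    return sum(w * fi for w, fi in zip(w2, f))

# Tiny task-specific dataset of (input, target) pairs
data = [([1.0, 2.0], 1.5), ([2.0, 0.5], 2.0), ([0.5, 1.0], 0.8)]

lr = 0.05
for _ in range(200):
    for x, y in data:
        f = features(x)
        err = predict(x) - y
        # Gradient step on the head only; W1 stays frozen
        for j in range(len(w2)):
            w2[j] -= lr * err * f[j]
```

With only two trainable parameters, this mirrors the low-overfitting-risk regime the card describes.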

10
Q

In what context is multitask learning a good idea?

A

When you have different domains that carry out similar tasks. Learning them jointly while sharing a mid-level representation is useful, since what one task learns may be generally useful for the others.
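A minimal sketch of this "hard parameter sharing" pattern, with one shared feature extractor feeding two task-specific heads (all weights here are made-up toy numbers):

```python
# Hypothetical sketch: a shared mid-level representation computed once,
# then consumed by two task-specific heads.

shared_W = [[0.2, 0.4], [0.6, -0.1]]   # shared feature extractor
head_a = [1.0, 0.5]                     # head for task A (toy weights)
head_b = [-0.3, 0.9]                    # head for task B (toy weights)

def shared_features(x):
    # ReLU(shared_W @ x): trained by gradients from BOTH tasks
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in shared_W]

def forward(x):
    f = shared_features(x)              # computed once, reused by both heads
    out_a = sum(w * fi for w, fi in zip(head_a, f))
    out_b = sum(w * fi for w, fi in zip(head_b, f))
    return out_a, out_b
```

During training, gradients from both heads flow into `shared_W`, which is how one task's data can improve the other's representation.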

11
Q

Give an example on multitask learning

A

Shared cross-modal scene networks: using a shared mid-level representation to classify real images, clip art, sketches, spatial text and descriptions.

12
Q

Explain typical process in active learning

A

Classify unlabeled images → auto-label confidently classified samples and manually label images with an uncertain classification → add these labeled images to the labeled dataset and retrain the model. Repeat.
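One round of that loop can be sketched as follows, using a toy confidence score as a stand-in for the real classifier (the scores and threshold are made up):

```python
import math

# Hypothetical sketch of one active-learning round with uncertainty
# sampling: auto-label confident predictions, queue uncertain ones
# for manual labelling.

def model_confidence(x):
    # Toy stand-in for a trained classifier: sigmoid of a raw score
    return 1.0 / (1.0 + math.exp(-x))

unlabeled = [-4.0, -0.2, 0.1, 3.5]   # toy samples (their raw scores)
threshold = 0.9                       # confidence needed to auto-label

auto_labeled, needs_human = [], []
for x in unlabeled:
    p = model_confidence(x)
    confidence = max(p, 1.0 - p)      # confidence in the predicted class
    if confidence >= threshold:
        auto_labeled.append((x, int(p >= 0.5)))   # auto-label
    else:
        needs_human.append(x)                     # query the oracle

# After manual labelling, both sets join the labeled dataset and the
# model is retrained; the loop then repeats.
```

The key design choice is the threshold: too low and noisy auto-labels pollute training, too high and the human must label almost everything.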
