Quiz Flashcards

1
Q

Semantic Segmentation

A

assigns a class label to each pixel, grouping all objects of the same class (all cars as one entity)

2
Q

Instance segmentation

A

identifies each object instance separately, even if instances belong to the same class (e.g., distinguishes between different cars)

3
Q

Fully Convolutional Network (dense prediction)

A
  • classifies each pixel in the input image into one of the class labels
  • encoder (convolution network)
  • decoder (deconvolution network); a minimal sketch follows below
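
A minimal PyTorch sketch of the encoder-decoder idea (layer sizes are illustrative assumptions, not from the deck):

    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Encoder downsamples with strided convs; decoder upsamples
        with transposed convs back to a per-pixel class map."""
        def __init__(self, num_classes=21):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, num_classes, 2, stride=2),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    logits = TinyFCN()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 21, 64, 64]): one score per class per pixel
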
4
Q

Upsampling methods in the decoder

A
  • transposed convolution
  • backward-stride convolution
  • max unpooling (see the sketch below)
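
A short PyTorch demo of two of these; "backward-stride convolution" is another name for the transposed convolution (shapes are illustrative):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 8, 16, 16)

    # Transposed (backward-stride) convolution: learnable 2x upsampling.
    up = nn.ConvTranspose2d(8, 8, kernel_size=2, stride=2)
    print(up(x).shape)  # torch.Size([1, 8, 32, 32])

    # Max unpooling: pooling records argmax indices, and unpooling places
    # the pooled values back at those positions on the larger grid.
    pool = nn.MaxPool2d(2, return_indices=True)
    unpool = nn.MaxUnpool2d(2)
    pooled, idx = pool(x)             # (1, 8, 8, 8)
    print(unpool(pooled, idx).shape)  # torch.Size([1, 8, 16, 16])
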
5
Q

U-Net

A

Fuses upsampled feature maps in the decoder with corresponding feature maps from the encoder to preserve spatial details.
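
A sketch of that fusion step in PyTorch (channel sizes are assumptions for illustration):

    import torch
    import torch.nn as nn

    dec = torch.randn(1, 64, 16, 16)  # coarse decoder features
    enc = torch.randn(1, 32, 32, 32)  # same-resolution encoder features (skip connection)

    # Upsample the decoder map, then concatenate the encoder map so the
    # spatial detail lost to downsampling is carried forward.
    up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
    fused = torch.cat([up(dec), enc], dim=1)      # (1, 64, 32, 32)
    out = nn.Conv2d(64, 32, 3, padding=1)(fused)  # (1, 32, 32, 32)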

6
Q

Dilated Convolution

A

uses a large but sparse filter to increase the receptive field without extra parameters
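
For example, in PyTorch a 3x3 kernel with dilation=2 has its taps spread over a 5x5 window:

    import torch
    import torch.nn as nn

    # Same 9 weights as an ordinary 3x3 conv, but spread over a 5x5 window,
    # so the receptive field grows without extra parameters or downsampling.
    dilated = nn.Conv2d(16, 16, kernel_size=3, dilation=2, padding=2)
    x = torch.randn(1, 16, 32, 32)
    print(dilated(x).shape)  # torch.Size([1, 16, 32, 32]); padding=2 preserves size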

7
Q

Mask R-CNN for instance segmentation

A

Faster R-CNN + a fully convolutional network for semantic segmentation
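
A quick way to try this is torchvision's pretrained Mask R-CNN (the exact weights argument varies by torchvision version):

    import torch
    import torchvision

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    with torch.no_grad():
        preds = model([torch.rand(3, 480, 640)])  # list of images scaled to [0, 1]

    # Per-instance outputs: one box, label, score, and mask per detection.
    print(preds[0].keys())  # dict_keys(['boxes', 'labels', 'scores', 'masks'])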

8
Q

Hidden states (in an RNN)

A
  • each hidden state receives input from the previous layer and the previous hidden state
  • weights are shared across hidden states (timesteps) within the same hidden layer
  • weights differ across different hidden layers; see the sketch below
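
In PyTorch terms (sizes are illustrative), one weight set serves every timestep of a layer:

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)
    x = torch.randn(4, 7, 10)    # (batch, timesteps, features)
    out, h_n = rnn(x)            # the same W_xh and W_hh are reused at all 7 steps
    print(out.shape, h_n.shape)  # torch.Size([4, 7, 20]) torch.Size([1, 4, 20])
    # num_layers=2 would add a second, different weight set for the second layer.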
9
Q

Basic RNN’s problems

A

Exploding gradients

Vanishing gradients
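
Exploding gradients are commonly tamed by clipping the gradient norm before each optimizer step (a sketch follows, with toy sizes); vanishing gradients motivate gated cells like the LSTM on the next card.

    import torch
    import torch.nn as nn

    model = nn.RNN(input_size=10, hidden_size=20)
    loss = model(torch.randn(30, 4, 10))[0].sum()  # toy loss over a long sequence
    loss.backward()

    # Rescale the global gradient norm to at most 1.0 (a typical max_norm choice).
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)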

10
Q

LSTM (Long Short-Term Memory)

A
  • sigmoid functions measure the importance of the input (values in [0, 1])
  • forget and input gates decide the cell state
  • output gate produces the output of the cell; gate arithmetic sketched below
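
The gate arithmetic for one step, as a plain-PyTorch sketch (weight shapes assumed, gates packed in i, f, g, o order):

    import torch

    def lstm_cell(x, h, c, W, U, b):
        i, f, g, o = (x @ W + h @ U + b).chunk(4, dim=-1)
        i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()  # sigmoids score importance
        c_new = f * c + i * g.tanh()  # forget and input gates decide the cell state
        h_new = o * c_new.tanh()      # output gate produces the cell's output
        return h_new, c_new

    d, hdim = 10, 20
    x, h, c = torch.randn(1, d), torch.zeros(1, hdim), torch.zeros(1, hdim)
    W, U, b = torch.randn(d, 4 * hdim), torch.randn(hdim, 4 * hdim), torch.zeros(4 * hdim)
    h, c = lstm_cell(x, h, c, W, U, b)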
11
Q

RNN with Attention

A
  • Context vectors are used as the input to each timestep of the decoder
  • attention provides a different context vector per timestep by calculating alignment scores between decoder states and features from the encoder (see the sketch below)
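
A dot-product sketch of one decoder step (dimensions are assumptions; real models often use a learned alignment score instead):

    import torch
    import torch.nn.functional as F

    enc = torch.randn(12, 20)    # 12 encoder timesteps, 20-dim features
    dec_state = torch.randn(20)  # current decoder hidden state

    scores = enc @ dec_state            # alignment score per encoder timestep
    weights = F.softmax(scores, dim=0)  # attention weights
    context = weights @ enc             # context vector for this decoder step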
12
Q

Self-attention

A

enables each element in a sequence to interact with every other element and learn dependencies between them, irrespective of their distance
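
A minimal scaled dot-product self-attention sketch (single head, assumed dimensions):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 6, 16)  # (batch, sequence length, embedding dim)
    Wq, Wk, Wv = (torch.randn(16, 16) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv

    # Every position attends to every other position, whatever the distance.
    attn = F.softmax(Q @ K.transpose(-2, -1) / 16 ** 0.5, dim=-1)
    out = attn @ V  # (1, 6, 16)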

13
Q

Vision transformer

A
  • split the image into patches, represent each patch as a lower-dimensional embedding, and add a positional embedding (sketched below)
  • apply self-attention over the patch embeddings
  • replace convolutional layers
  • trained on a large dataset
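
The patch-embedding step as a PyTorch sketch (ViT-like sizes assumed: 224x224 image, 16x16 patches, 128-dim tokens):

    import torch
    import torch.nn as nn

    # A strided conv turns each 16x16 patch into one token embedding.
    patch_embed = nn.Conv2d(3, 128, kernel_size=16, stride=16)
    pos_embed = nn.Parameter(torch.zeros(1, 196, 128))  # one per patch (14 * 14)

    img = torch.randn(1, 3, 224, 224)
    tokens = patch_embed(img).flatten(2).transpose(1, 2) + pos_embed  # (1, 196, 128)

    # Self-attention over patch tokens in place of convolutional layers.
    encoder = nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True)
    out = encoder(tokens)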
14
Q

Semi-supervised learning

A
  • only part of the training dataset is labeled
  • obtain pseudo-labels by applying the model trained on the labeled data to the unlabeled data
  • merge all data and their labels and retrain the model (see the sketch below)
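
A pseudo-labeling sketch (the confidence threshold is a common heuristic, not from the deck; model and loader are placeholders):

    import torch

    def pseudo_label(model, unlabeled_loader, threshold=0.95):
        """Run the model trained on labeled data over unlabeled data and
        keep only confident predictions as pseudo-labels."""
        model.eval()
        kept = []
        with torch.no_grad():
            for x in unlabeled_loader:
                probs = model(x).softmax(dim=-1)
                conf, label = probs.max(dim=-1)
                kept.append((x[conf > threshold], label[conf > threshold]))
        return kept  # merge with the labeled set, then retrain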
15
Q

Weakly-supervised learning

A
  • labels in the training data are coarse or inaccurate
  • class activation mapping (CAM) calculates regional features by projecting the weights of the output layer back onto the convolutional feature maps (sketched below)
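
The CAM projection itself is one weighted sum per class (shapes assumed: 512 feature maps, 1000 classes):

    import torch

    feats = torch.randn(1, 512, 7, 7)   # final convolutional feature maps
    fc_weight = torch.randn(1000, 512)  # output-layer weights
    cls = 281                           # some class of interest

    # Weight each feature map by the class's output weight and sum.
    cam = torch.einsum("c,nchw->nhw", fc_weight[cls], feats)  # (1, 7, 7)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize for display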
16
Q

Self-supervised learning

A

a pretext task is formulated by correlating unlabeled data with its inherent semantics, so the supervision signal comes from the data itself
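
One classic pretext task is rotation prediction: the label is the rotation the code itself applied, so supervision is free. A sketch:

    import torch

    def rotation_pretext(batch):
        """Make 4 rotated copies of each image; the rotation index is the label."""
        xs, ys = [], []
        for k in range(4):  # 0, 90, 180, 270 degrees
            xs.append(torch.rot90(batch, k, dims=(2, 3)))
            ys.append(torch.full((batch.size(0),), k))
        return torch.cat(xs), torch.cat(ys)  # train a classifier on these pairs

    x, y = rotation_pretext(torch.randn(8, 3, 32, 32))
    print(x.shape, y.shape)  # torch.Size([32, 3, 32, 32]) torch.Size([32])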