lesson_11_flashcards

1
Q

What is structured representation in deep learning?

A

Explicitly representing the relationships between elements such as words, pixels, or graph nodes, so that compositional structure can be modeled across domains like language and vision.

2
Q

What is a scene graph?

A

A graph-based representation where nodes are objects or object parts, and edges represent relationships like spatial arrangements or actions.

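A scene graph can be sketched as a small data structure. This is a minimal illustration, not any particular library's API; the node and relation names are made up.

```python
# Minimal scene-graph sketch: nodes are objects, edges are labeled
# (subject, relation, object) triples. All names are illustrative.
scene_graph = {
    "nodes": ["person", "horse", "field"],
    "edges": [
        ("person", "riding", "horse"),       # action relationship
        ("horse", "standing_on", "field"),   # spatial relationship
    ],
}

def relations_of(graph, node):
    """Return (relation, object) pairs where `node` is the subject."""
    return [(rel, obj) for subj, rel, obj in graph["edges"] if subj == node]

print(relations_of(scene_graph, "person"))  # [('riding', 'horse')]
```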
3
Q

What are recurrent neural networks (RNNs)?

A

Neural networks designed for sequential data, maintaining a state vector that represents past inputs while processing sequences of arbitrary length.

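The state-vector idea can be sketched as a vanilla RNN cell: one shared update rule applied at every time step, so sequences of any length reuse the same weights. Dimensions and initialization below are illustrative.

```python
import numpy as np

# A vanilla RNN cell: the hidden state h summarizes all past inputs.
rng = np.random.default_rng(0)
d_in, d_hid = 3, 4
W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))  # hidden-to-hidden weights
b = np.zeros(d_hid)

def rnn_step(h, x):
    # h_t = tanh(W_hh h_{t-1} + W_xh x_t + b)
    return np.tanh(W_hh @ h + W_xh @ x + b)

# Process a sequence of arbitrary length with the single shared cell.
h = np.zeros(d_hid)
for x in rng.normal(size=(5, d_in)):  # 5 time steps
    h = rnn_step(h, x)
print(h.shape)  # (4,)
```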
4
Q

What is the vanishing gradient problem in RNNs?

A

Gradients become too small during backpropagation through time, making it difficult to learn long-term dependencies.

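The shrinkage can be seen numerically: backpropagation through time multiplies one Jacobian factor per step, and when those factors have norm below 1 the gradient decays geometrically. The weight matrix below is a toy choice for illustration.

```python
import numpy as np

W = 0.9 * np.eye(4)          # recurrent weights with largest singular value 0.9
grad = np.ones(4)            # gradient arriving at the final time step

norms = []
for t in range(50):          # propagate back through 50 time steps
    grad = W.T @ grad        # one Jacobian-like factor per step
    norms.append(np.linalg.norm(grad))

# The norm decays roughly like 0.9**t, so early time steps receive
# almost no learning signal -- the vanishing gradient.
print(norms[0], norms[-1])
```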
5
Q

What is attention in deep learning?

A

A mechanism to focus on relevant parts of input data dynamically, weighting elements using similarity scores for better feature representation.

6
Q

What is the softmax function’s role in attention?

A

It converts similarity scores into probabilities, enabling weighted summations for attention mechanisms.

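The two ideas above combine in scaled dot-product attention: similarity scores pass through a softmax, and the resulting probabilities weight a sum over values. Shapes below are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)      # similarity scores
    weights = softmax(scores)          # each row is a probability distribution
    return weights @ V, weights        # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))            # 2 queries
K = rng.normal(size=(3, 4))            # 3 keys
V = rng.normal(size=(3, 4))            # 3 values
out, w = attention(Q, K, V)
print(out.shape)                       # (2, 4); rows of w sum to 1
```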
7
Q

What are transformer architectures?

A

Models that use attention-based mechanisms, including multi-head attention, to process sequences or unordered sets efficiently.

8
Q

What is a non-local neural network?

A

A network that dynamically learns connectivity patterns between data points using attention mechanisms, generalizing beyond local receptive fields.

9
Q

How are graph neural networks (GNNs) structured?

A

Nodes represent entities with feature vectors, and edges represent relationships, enabling propagation of information across the graph.

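The information-propagation step can be sketched as one round of message passing: each node aggregates its neighbors' feature vectors and mixes them with its own. The mean aggregation and 50/50 mixing weights below are simplifying assumptions, not a specific GNN variant.

```python
import numpy as np

features = np.array([[1.0, 0.0],   # node 0
                     [0.0, 1.0],   # node 1
                     [1.0, 1.0]])  # node 2
edges = [(0, 1), (1, 2), (0, 2)]   # undirected edges

def propagate(X, edges):
    """One message-passing round: mean over neighbors, then mix with self."""
    n = X.shape[0]
    agg = np.zeros_like(X)
    deg = np.zeros(n)
    for u, v in edges:               # aggregate over both edge directions
        agg[u] += X[v]; deg[u] += 1
        agg[v] += X[u]; deg[v] += 1
    agg /= deg[:, None]              # mean of neighbor features
    return 0.5 * X + 0.5 * agg       # combine self and neighborhood

print(propagate(features, edges))
```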
10
Q

What is the role of embeddings in GNNs?

A

They represent nodes or elements as vectors, incorporating local and neighborhood features through attention mechanisms.

11
Q

What is a sequence-to-sequence (seq2seq) task?

A

A task where a sequence of inputs is mapped to a sequence of outputs, such as machine translation or speech recognition.

12
Q

What is the benefit of multi-head attention in transformers?

A

It allows the model to focus on different aspects of the data simultaneously, improving representation learning.

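Multi-head attention can be sketched in miniature: split the feature dimension into independent heads, attend in each, and concatenate the results. The learned per-head projections of a real transformer are omitted here for brevity; shapes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, n_heads):
    """Each head attends over its own slice of the feature dimension."""
    d = Q.shape[-1] // n_heads
    outs = []
    for h in range(n_heads):
        q, k, v = (m[:, h*d:(h+1)*d] for m in (Q, K, V))
        w = softmax(q @ k.T / np.sqrt(d))   # per-head attention weights
        outs.append(w @ v)
    return np.concatenate(outs, axis=-1)    # concatenate head outputs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 tokens, 8 features
print(multi_head_attention(X, X, X, n_heads=2).shape)  # (5, 8)
```

Because each head computes its own attention weights, different heads can attend to different aspects of the data at the same time.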
13
Q

What is an example of a many-to-many task in sequential modeling?

A

Speech recognition, where an input sequence of sound waves is mapped to an output sequence of words.

14
Q

What is the application of scene graphs in computer vision?

A

Scene graphs can describe spatial relationships in images, aiding tasks like object detection, relationship modeling, and image captioning.

15
Q

How does attention enhance graph representations?

A

By weighting neighbors dynamically, attention refines node features, enabling context-aware embeddings.
