Quiz #5 Flashcards

1
Q

What is ‘attention’?

A

A weighting or probability distribution over the inputs that depends on the computational state and the inputs.

2
Q

What does attention allow us to do?

A

It allows information to propagate between “distant” computational nodes while making minimal structural assumptions.

3
Q

What is one of the most popular tools used for building attention mechanisms?

A

Softmax

4
Q

Softmax is differentiable? (True/False)

A

True

5
Q

What are the inputs to softmax attention?

A

A set of vectors {u1, u2, … u_n} and a query vector ‘q’. We want to (softly) select the vector most similar to q via p = softmax(U·q), where the rows of U are the vectors u_i.
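A minimal NumPy sketch of this selection step (the function name and toy shapes are illustrative, not from the source):

    import numpy as np

    def softmax(x):
        # subtract the max for numerical stability
        z = np.exp(x - x.max())
        return z / z.sum()

    def softmax_attention(U, q):
        """U: (n, d) stack of the vectors u_1..u_n; q: (d,) query."""
        scores = U @ q        # similarity of q to each u_i
        p = softmax(scores)   # attention distribution over the u_i
        summary = p @ U       # soft "selection": a p-weighted average of the u_i
        return p, summary

    # toy usage
    U = np.random.randn(5, 8)
    q = np.random.randn(8)
    p, summary = softmax_attention(U, q)

Because the selection is a soft weighted average rather than a hard argmax, the whole operation stays differentiable (see card 4).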

6
Q

What is the difference between softmax applied to the final layer of an MLP and softmax attention?

A
  • When Softmax is applied at the final layer of an MLP:
    • q is the last hidden state, {u1, u2, … u_n} are the embeddings of the class labels
    • Samples from the distribution correspond to labelings (outputs)
  • In Softmax attention:
    • q is an internal hidden state, {u1, u2, … u_n} are the embeddings of an “input” (e.g. the previous layer)
    • Samples from the distribution correspond to a summary of {u1, u2, … u_n}
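A rough sketch of the contrast, with purely illustrative shapes and variable names:

    import numpy as np

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    q = np.random.randn(64)          # a 64-dim hidden state

    # (a) softmax at the final layer of an MLP: rows of W are class-label
    #     embeddings; a sample from the distribution IS the output label.
    W = np.random.randn(10, 64)      # 10 class labels
    class_probs = softmax(W @ q)

    # (b) softmax attention: rows of U are embeddings of the "input"
    #     (e.g. the previous layer); the distribution is used to build a
    #     summary of the inputs that is passed on to later computation.
    U = np.random.randn(7, 64)       # 7 input positions
    attn = softmax(U @ q)
    summary = attn @ U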
7
Q

What was the biological inspiration for attention mechanisms?

A

Saccades (basically rapid, discontinuous eye movements between salient objects in the visual field).

8
Q

At a conceptual level, what are visual attention mechanisms trying to do?

A

Given the current state/history of glimpses, where and what scale should we look at next?

9
Q

The representational power of a softmax attention layer (or more generally, any attention layer) decreases as the input size grows? (True/False)

A

False. It increases. This is because the size of the hidden state grows with the size of the input.

10
Q

How is the similarity between the query vector ‘q’ and the set of hidden state vectors {u1, u2, … u_n} typically computed?

A

Cosine similarity (i.e. the inner product of q with each of the u vectors).

11
Q

What is the ‘controller state’ in a Softmax Attention layer?

A

Initially, it’s just the input query vector q0. We use q0 to compute the next controller state (i.e. hidden state) q1, then use q1 to compute q2, and so on.
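A hedged sketch of this iteration, assuming each step reads from the inputs via attention and feeds the read back into the controller (the tanh update rule and weight names are illustrative, not from the source):

    import numpy as np

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    def attention_read(U, q):
        return softmax(U @ q) @ U        # weighted summary of the inputs

    d = 32
    U = np.random.randn(6, d)            # input vectors u_1..u_6
    W_r = np.random.randn(d, d)          # illustrative weights for the read
    W_q = np.random.randn(d, d)          # illustrative weights for the state

    q = np.random.randn(d)               # q0: the initial controller state
    for _ in range(3):                   # q0 -> q1 -> q2 -> q3
        r = attention_read(U, q)
        q = np.tanh(W_r @ r + W_q @ q)   # next controller state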

12
Q

What are transformer models?

A

Models that are made up of multiple attention layers.

13
Q

What are three important architectural distinctions that result in the superior performance of transformer models compared to previous attention based architectures?

A
  1. Multi-query hidden state propagation (“self-attention”)
  2. Multi-head attention
  3. Residual connections, LayerNorm
14
Q

What is self-attention?

A

Self-attention uses a controller state (i.e. a query/hidden state) for every single input. So the size of the controller state grows with the size of the input, giving it even more representational power than traditional attention networks (recall that the representational power of an attention network grows with its input size).
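A minimal sketch of self-attention in its simplest form, where each input vector serves directly as its own query (real transformers add learned Q/K/V projections; shapes here are illustrative):

    import numpy as np

    def softmax_rows(S):
        Z = np.exp(S - S.max(axis=-1, keepdims=True))
        return Z / Z.sum(axis=-1, keepdims=True)

    def self_attention(X):
        """X: (n, d) inputs; one query (controller state) per input position."""
        scores = X @ X.T          # (n, n): every position attends to every position
        A = softmax_rows(scores)  # one attention distribution per position
        return A @ X              # (n, d): one summary per position

    X = np.random.randn(7, 16)
    H = self_attention(X)         # n output rows: the state grows with the input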

15
Q

What is multi-head attention?

A

Multi-head attention combines multiple attention ‘heads’ that are trained in the same way on the same data, but with different weight matrices, so they yield different values.

Each of the ‘L’ attention heads yields values for each token; these values are then multiplied by trained parameters and added.

(To me this seems kind of similar to the idea in convnets of using multiple “filters” in each convolutional layer so we can learn different feature representations.)
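A rough sketch of combining L heads, assuming each head has its own learned projection matrices and the per-head outputs are concatenated and mixed by a trained output matrix (all names and shapes are illustrative):

    import numpy as np

    def softmax_rows(S):
        Z = np.exp(S - S.max(axis=-1, keepdims=True))
        return Z / Z.sum(axis=-1, keepdims=True)

    def multi_head_attention(X, Wq, Wk, Wv, Wo):
        """X: (n, d); Wq/Wk/Wv: per-head (d, d_head) matrices; Wo: (L*d_head, d)."""
        heads = []
        for Wq_h, Wk_h, Wv_h in zip(Wq, Wk, Wv):
            Q, K, V = X @ Wq_h, X @ Wk_h, X @ Wv_h
            A = softmax_rows(Q @ K.T / np.sqrt(K.shape[-1]))  # different weights -> different values
            heads.append(A @ V)                               # each head: (n, d_head)
        return np.concatenate(heads, axis=-1) @ Wo            # combine heads with trained parameters

    n, d, L, d_head = 7, 16, 4, 4
    X = np.random.randn(n, d)
    Wq = [np.random.randn(d, d_head) for _ in range(L)]
    Wk = [np.random.randn(d, d_head) for _ in range(L)]
    Wv = [np.random.randn(d, d_head) for _ in range(L)]
    Wo = np.random.randn(L * d_head, d)
    out = multi_head_attention(X, Wq, Wk, Wv, Wo)             # (n, d)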

16
Q

What are some of the major reasons why machine translation is difficult?

A
  1. Language is ambiguous
  2. Language depends on context
  3. Languages are very different (e.g. structure, what is implicit vs. explicit, etc.)
17
Q

Translation is often modeled as a conditional language model? (True/False)

A

True. Typically Prob(tokens | source)

18
Q

The probability of each output token is estimated together based on the source material in a machine translation model? (True/False)

A

False. They are estimated separately from left-to-right.

19
Q

In a machine translation model, we calculate the probability of each output token estimated separately (left-to-right) based on what two things?

A
  1. Entire input sequence (encoder outputs)
  2. All previously predicted tokens (decoder “state”)
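In symbols, this is the standard left-to-right factorization (notation mine, not quoted from the source):

    p(t \mid s) = \prod_{i=1}^{|t|} p(t_i \mid t_{<i}, \; s)

where s is the input sequence (encoder outputs) and t_{<i} are the previously predicted tokens (the decoder “state”).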
20
Q

In the context of machine translation models, the argmax[p(t | s)] is intractable? (True/False)

A

True.

21
Q

In the context of machine translation models, why is argmax[p(t | s)] intractable to compute exactly, and what technique do we use to remedy this?

A
  • The problem: Exponential search space of possible sequences
  • Remedy is to use beam search (typical beam size of 4 to 6)
22
Q

What does the beam search algorithm allow us to do?

A

To search an exponential space in linear time.

23
Q

How does the beam search algorithm work for machine translation?

A

We explore a limited number of hypotheses, ‘k’, at a time. At each step, we extend each of the ‘k’ hypotheses by one token; the top ‘k’ overall then become the hypotheses for the next step.
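A hedged sketch of that loop, where log_prob_next(tokens) stands in for the model's next-token log-probabilities (a hypothetical helper, not a real API):

    import numpy as np

    def beam_search(log_prob_next, k=4, max_len=20, eos=0):
        """Keep the k best partial hypotheses; extend each by one token per step."""
        beams = [([], 0.0)]                          # (token list, total log-prob)
        for _ in range(max_len):
            candidates = []
            for tokens, score in beams:
                if tokens and tokens[-1] == eos:     # finished hypotheses pass through
                    candidates.append((tokens, score))
                    continue
                logp = log_prob_next(tokens)         # (vocab_size,) log-probabilities
                for tok in np.argsort(logp)[-k:]:    # only the top-k extensions can survive
                    candidates.append((tokens + [int(tok)], score + float(logp[tok])))
            # the top k overall become the hypotheses for the next step
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
        return beams[0][0]

    # toy usage: a fixed random table stands in for the decoder's next-token log-probs
    rng = np.random.default_rng(0)
    table = np.log(rng.dirichlet(np.ones(11), size=21))   # 21 positions x vocab of 11
    print(beam_search(lambda toks: table[len(toks)], k=4))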

24
Q

Each beam element has a different _____? For the transformer/decoder this is _________ input for _______ steps?

A
  1. State
  2. Self-Attention
  3. Previous steps
25
Q

Total computation scales linearly with beam width? (True/False)

A

True

26
Q

Computation over the beam is difficult to parallelize on GPUs when using the beam search algorithm? (True/False)

A

False. It is highly parallelizable over the beam.

27
Q

What are the three reasons inference can be computationally expensive for machine translation models?

A
  1. Step-by-step computation (auto-regressive inference)
  2. Output projection: Θ(len_vocab * output_len * beam_size)
  3. Deeper models
28
Q

What are three strategies we can use to overcome some of the inherent inefficiencies associated with machine translation inference?

A
  1. Use smaller vocabularies
  2. More efficient computation
  3. Reduce depth / increase parallelism

29
Q

To improve computational efficiency for machine translation models, what is one good reason why it’s reasonable to use smaller vocabularies?

A

Because while a vocabulary may be huge, for any given input sequence, the likely outputs will generally be constrained to a fairly small set.

IBM alignment models use statistical techniques to model the probability of one word being translated into another. Alternatively, lexical probabilities can be used to predict the most likely output tokens for a given input. Using these approaches can yield up to a 60% speedup.

30
Q

What is one way we can overcome the challenge of how to model rare or unseen words?

A

Model the most frequent words as their own tokens; less frequent words get broken up into their constituent parts. One popular algorithm for doing this is byte-pair encoding.

31
Q

How does byte-pair encoding work?

A

BPE comes from the idea of compression, where the most frequent adjacent pair of symbols is iteratively replaced with a new symbol.

Example:

Consider the string “abcdeababce”

Step 1: Replace most frequent pair “ab” with “X” (and add replacement rule)

“XcdeXXce”

X = “ab”

Step 2: Replace the next most frequent pair (here including the replacement byte)

“YdeXYe”

X = “ab”

Y = “Xc”
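A small sketch of the merge loop on that example string (the replacement symbols and function name are arbitrary):

    from collections import Counter

    def bpe_compress(s, n_merges=2):
        """Iteratively replace the most frequent adjacent pair with a new symbol."""
        seq, rules = list(s), {}
        new_symbols = iter("XYZWVU")                 # arbitrary replacement symbols
        for _ in range(n_merges):
            pairs = Counter(zip(seq, seq[1:]))       # count adjacent pairs
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]      # most frequent adjacent pair
            sym = next(new_symbols)
            rules[sym] = a + b                       # record the replacement rule
            merged, i = [], 0
            while i < len(seq):                      # left-to-right, non-overlapping merge
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                    merged.append(sym)
                    i += 2
                else:
                    merged.append(seq[i])
                    i += 1
            seq = merged
        return "".join(seq), rules

    print(bpe_compress("abcdeababce"))   # ('YdeXYe', {'X': 'ab', 'Y': 'Xc'})

In the NMT setting, merge rules learned this way on the training text define the subword vocabulary (see card 30).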

32
Q

When using parallel computation to speedup machine translation inference, what is the major bottleneck?

A

Autoregressive inference: time is dominated by the decoder.

33
Q

For machine translation models, average efficiency can be increased by translating multiple inputs at once? (True/False)

A

True (however, may not be practical for real-time systems)

34
Q

What typically causes the biggest slowdown in machine translation models?

A

It comes from the requirement that the decoder predict each token in the output sequence one at a time (i.e. autoregressive inference).

35
Q

Neural Machine Translation (NMT) systems have an _______ learning curve with respect to the amount of training data, resulting in _______ quality in _________ settings, but better performance in ___________ settings.

A
  1. Steeper
  2. Worse
  3. Low-resource
  4. High-resource
36
Q

What is one of the primary challenges of Neural Machine Translation (NMT) models?

A

Data Scarcity. There is lots of data available for, say, English → French, but what about Pashto → Swahili? This is a big challenge.

37
Q

Besides data scarcity, what are some of the other challenges associated with machine translation?

A
  1. Language similarity - many (most) languages are very different from English
  2. Domain - the training data that is available might not be closely related enough to your task of interest (for example, Facebook trying to develop newsfeed models using Ubuntu training manuals)
  3. Evaluation - no access to a test set (this is one of the main blockers to research in low-resource settings)
38
Q

What are some of the most powerful techniques for translation in low-resource settings?

A
  1. Multi-lingual training (i.e. exploiting the relatedness between languages)
  2. Backtranslation - using two languages (which typically need to be fairly high-resource) to train an intermediate model that can then be used to bootstrap a better model.
  3. Language Agnostic Sentence Representations - using a model to map a low-resource language into a high-resource language (done using a bi-directional recurrent encoder-decoder style network).