6. Recurrent Language Model Flashcards

1
Q

What are some advantages and disadvantages of n-gram based modeling?

A

Advantages:
+ Highly scalable; simple assumptions
+ Computationally tractable (count-based)

Disadvantages:

  • Sparsity: long word sequences never seen in training get probability 0, especially for large n (illustrated in the sketch below)
  • Problems capturing long-range dependencies beyond the n-gram window
  • Symbolic units: no sharing between similar words, hence a generalisation deficit
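
As a toy illustration of count-based estimation and the sparsity problem, here is a minimal bigram model (the corpus is invented purely for illustration):

```python
from collections import Counter

# Toy corpus, invented purely for illustration.
corpus = "the cat sat on the mat".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_bigram(w_prev, w):
    # MLE estimate: count(w_prev, w) / count(w_prev).
    return bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0

print(p_bigram("the", "cat"))  # seen bigram   -> 0.5
print(p_bigram("the", "dog"))  # unseen bigram -> 0.0 (the sparsity problem)
```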
2
Q

What is Neural Sequence Modelling? Describe its structure

A

It is a neural-network-based language model. It consists of:

  • 1-hot vectors: w_t
  • word vectors: v_t = W^T w_t, where W ∈ R^(|V| x d)
  • hidden layer: h = sigmoid(U^T [v_{t-3}; v_{t-2}; v_{t-1}])
  • output: y = V^T h
  • softmax normalisation of y, giving p(w_t | w_{t-3}, w_{t-2}, w_{t-1}) (a sketch of the forward pass follows)
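
A minimal NumPy sketch of this forward pass; all sizes are toy values assumed for illustration, and the parameter names follow the card:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, hid = 10, 4, 8                # toy sizes, assumed for illustration

W = rng.normal(size=(vocab, d))         # W ∈ R^(|V| x d), word embeddings
U = rng.normal(size=(3 * d, hid))       # hidden-layer weights
V = rng.normal(size=(hid, vocab))       # output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w3, w2, w1):                # indices of the previous three words
    # The one-hot product W^T w_t reduces to a row lookup.
    v = np.concatenate([W[w3], W[w2], W[w1]])   # [v_{t-3}; v_{t-2}; v_{t-1}]
    h = sigmoid(v @ U)                          # hidden layer
    y = h @ V                                   # one score per vocabulary word
    e = np.exp(y - y.max())                     # stable softmax normalisation
    return e / e.sum()

print(forward(1, 2, 3).sum())  # probabilities over the vocabulary sum to 1
```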
3
Q

What are some advantages and disadvantages of Neural Sequence Modelling?

A

Advantages:
+ Better generalisation on unseen n-grams
+ Smaller memory footprint (in practice it is usually interpolated with an n-gram language model; see the sketch below)

Disadvantages:

  • n-gram history is finite, so it can’t capture relationships between words too far apart.
  • Poorer performance on seen n-grams due to no explicit frequency information.
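
The interpolation mentioned above is a convex combination of the two models' probabilities; a one-line sketch (lam is a tuned hyperparameter, the value here is arbitrary):

```python
def p_interpolated(p_neural, p_ngram, lam=0.5):
    # Convex combination of the neural LM and the count-based n-gram LM.
    return lam * p_neural + (1.0 - lam) * p_ngram
```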
4
Q

Describe full gradient computation using Backpropagation Through Time (BPTT) for RNNs and discuss its advantages and disadvantages.

A

Use dynamic programming to compute the gradient over the entire sequence: run the whole forward pass, storing every hidden state, then sweep backwards through time, accumulating the weight gradients step by step (see the sketch below).

Advantages:
- Exact gradients: every time step contributes, so long-range dependencies are accounted for

Disadvantages:
- Memory hog (the entire sequence of hidden states must be stored)
- Slow: no weight update until the full forward and backward passes are complete
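A minimal NumPy sketch of full BPTT for a vanilla tanh RNN; the sequence, sizes, and per-step upstream gradients (dhs) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 5                        # toy sizes, assumed for illustration
Wx = rng.normal(size=(d_in, d_h)) * 0.1
Wh = rng.normal(size=(d_h, d_h)) * 0.1

def full_bptt(xs, dhs):
    # Forward pass over the ENTIRE sequence, storing every hidden state.
    hs = [np.zeros(d_h)]
    for x in xs:
        hs.append(np.tanh(x @ Wx + hs[-1] @ Wh))
    # Backward pass: dynamic programming over the stored states.
    dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
    dh_next = np.zeros(d_h)
    for t in reversed(range(len(xs))):
        dh = dhs[t] + dh_next                # upstream + recurrent gradient
        dpre = dh * (1.0 - hs[t + 1] ** 2)   # backprop through tanh
        dWx += np.outer(xs[t], dpre)
        dWh += np.outer(hs[t], dpre)
        dh_next = Wh @ dpre                  # gradient flowing to h_{t-1}
    return dWx, dWh

xs = rng.normal(size=(20, d_in))    # all 20 steps must be kept in memory
dhs = rng.normal(size=(20, d_h))    # stand-in per-step loss gradients
dWx, dWh = full_bptt(xs, dhs)
```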

5
Q

Describe Truncated Backpropagation Through Time for RNNs and discuss its advantages and disadvantages.

A

For each output step, backpropagate the recurrent transition errors only a fixed number of steps back.

Instead of backpropagating through the full sequence, we take a few steps forward, compute the gradient over just that window, update the weights, and repeat until we reach the end of the sequence (see the sketch below).

Advantages:
- Only a few hidden states need to be kept in memory, and weights are updated more frequently

Disadvantages:
- Less accurate for long-range dependencies (gradients cannot cross the truncation boundary)
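
A minimal PyTorch sketch of truncated BPTT: process the sequence in windows of k steps, update after each window, and detach the hidden state so gradients cannot flow further back. The model, data, and loss are stand-ins:

```python
import torch

rnn = torch.nn.RNN(input_size=3, hidden_size=5, batch_first=True)
opt = torch.optim.SGD(rnn.parameters(), lr=0.1)

seq = torch.randn(1, 100, 3)   # one long sequence of 100 steps (toy data)
k = 10                         # truncation window: backprop at most k steps
h = torch.zeros(1, 1, 5)

for start in range(0, seq.size(1), k):
    chunk = seq[:, start:start + k]
    out, h = rnn(chunk, h)
    loss = out.pow(2).mean()   # stand-in loss, just for the sketch
    opt.zero_grad()
    loss.backward()            # gradient flows at most k steps back
    opt.step()
    h = h.detach()             # cut the graph so memory stays bounded
```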

6
Q

Describe the Vanishing Gradient and Exploding Gradient Problems.

A

Vanishing gradients:
When gradients become very small (near 0), the backpropagated error shrinks exponentially with the number of time steps, so the weight updates become negligible and the network never learns long-range structure.

Exploding gradients:
When gradient values are very large, they cause huge weight updates; the optimiser overshoots and diverges rather than settling into a minimum.
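
Both problems come from multiplying the gradient by the recurrent Jacobian once per time step. A toy illustration, using a scalar weight as a stand-in for that Jacobian:

```python
# Repeated multiplication by the same factor shrinks or blows up exponentially.
for w in (0.5, 1.5):
    grad = 1.0
    for _ in range(50):   # 50 time steps
        grad *= w
    print(f"w={w}: gradient after 50 steps ~ {grad:.3e}")
# w=0.5 -> ~8.9e-16 (vanishing); w=1.5 -> ~6.4e+08 (exploding)
```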

7
Q

What was introduced in Gated RNNs?

A

Gated RNNs allow the network to learn skip connections through time. A gated unit (e.g. a GRU) has an update gate and a reset gate: the update gate decides how much of the previous hidden state to carry over versus the freshly computed candidate state, and the reset gate controls how much of the previous state feeds into that candidate (see the sketch below).
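
A minimal NumPy sketch of one GRU step; sizes are toy values, biases are omitted for brevity, and gate conventions vary slightly between references:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy size; input and hidden dims assumed equal for brevity
Wz, Wr, Wn = (rng.normal(size=(2 * d, d)) * 0.1 for _ in range(3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h):
    xh = np.concatenate([x, h])
    z = sigmoid(xh @ Wz)                          # update gate
    r = sigmoid(xh @ Wr)                          # reset gate
    n = np.tanh(np.concatenate([x, r * h]) @ Wn)  # candidate state
    return (1 - z) * n + z * h                    # learned skip connection

h = np.zeros(d)
for x in rng.normal(size=(6, d)):   # six toy time steps
    h = gru_cell(x, h)
```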

8
Q

What is a variant of RNNs?

A

LSTMs (Long Short-Term Memory networks).

9
Q

What is a bi-directional RNN? What are its benefits?

A

An RNN with two passes: a forward pass reading the data left to right and a backward pass reading it right to left; their hidden states are then combined (usually by concatenation or addition). This lets the network use context from both sides of each word (see the sketch below).

Benefits:
• Access to the entire sequence beforehand
• Access to context on both sides of each position
• Better gradient propagation
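
A minimal PyTorch sketch: setting bidirectional=True runs both directions and concatenates their per-step hidden states (sizes and input are illustrative):

```python
import torch

birnn = torch.nn.RNN(input_size=3, hidden_size=5,
                     batch_first=True, bidirectional=True)
x = torch.randn(1, 7, 3)   # batch of 1, sequence of 7 steps (toy data)
out, _ = birnn(x)
print(out.shape)           # torch.Size([1, 7, 10]): forward + backward states
```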
