Uda Flashcards

1
Q

Let’s … a bit.

A

sidetrack

2
Q

He does not let himself get … by trends.

A

sidetracked

3
Q

LDA assumes that every chunk of text we feed into it will contain words that are somehow related. Therefore, choosing the right corpus of data is …

A

crucial

4
Q

Remove very rare and very … words.

A

common
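
For context: one common way to drop very rare and very common words before building a topic-model corpus is gensim's Dictionary.filter_extremes. A minimal sketch, assuming gensim is installed; the toy documents and thresholds are illustrative, not values from the course.

    # A minimal sketch of vocabulary pruning with gensim; documents and thresholds are made up.
    from gensim.corpora import Dictionary

    texts = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
        ["the", "dog", "chased", "the", "cat"],
        ["a", "bird", "watched", "the", "cat"],
    ]
    dictionary = Dictionary(texts)

    # Drop tokens that appear in fewer than 2 documents (very rare)
    # or in more than 80% of documents (very common, e.g. "the").
    dictionary.filter_extremes(no_below=2, no_above=0.8)
    corpus = [dictionary.doc2bow(text) for text in texts]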

5
Q

Therefore, common words like “the” and “for”, which appear in many documents, will be … down. Words that appear frequently in a single document will be scaled up.

A

scaled
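
This is the TF-IDF weighting idea. A minimal sketch, assuming scikit-learn; the toy documents are made up, and the printed idf values are what get multiplied into the term frequencies.

    # A minimal sketch of TF-IDF weighting with scikit-learn; the documents are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the scores for the first document",
        "the scores for the second document",
        "attention is all you need for the model",
    ]
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)

    # "the" and "for" appear in every document, so their idf (and hence their tf-idf weight)
    # is the lowest; words confined to a single document get the highest idf.
    for word, idf in zip(vectorizer.get_feature_names_out(), vectorizer.idf_):
        print(word, round(idf, 3))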

6
Q

A … but effective device from the 1840s, rather like a thermometer. It works, but a human must estimate its readings.

A

primitive

7
Q

… dependencies: dependencies over time

A

Temporal

8
Q

You will notice that in these videos I use subscripts as well as … as a numeric notation for the weight matrix.

A

superscripts

9
Q

Even when we know which annotation will get the most focus, it’s interesting to see how … the end scores become after softmax.
array:
[ 927., 397., 148., 929.]
softmax:
[ 0.11920292, 7.9471515e-232, 5.7661442e-340, 0.88079708]

A

drastic
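
A small sketch of how those numbers arise, assuming NumPy; the score vector is the one quoted in the card.

    # Numerically stable softmax applied to the scores from the card.
    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))  # subtract the max so the exponentials do not overflow
        return e / e.sum()

    scores = np.array([927., 397., 148., 929.])
    print(softmax(scores))
    # The two largest scores, 929 and 927, keep roughly 0.88 and 0.12 of the weight;
    # the remaining entries are vanishingly small, because softmax exponentiates the gaps.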

10
Q

You could skip the embedding step and feed the one-hot encoded vectors directly into the recurrent layer(s). This may reduce the complexity of the model and make it easier to train, but the quality of translation may suffer, as one-hot encoded vectors cannot … similarities and differences between words.

A

exploit
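
A minimal sketch of the two options, using PyTorch purely for illustration (the course material may use a different framework); the vocabulary size, dimensions, and toy batch are made up.

    # Option A: learn a dense embedding first, then feed it to the recurrent layer.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_units = 10000, 128, 256

    embedding = nn.Embedding(vocab_size, embed_dim)
    rnn_a = nn.GRU(embed_dim, hidden_units, batch_first=True)

    word_ids = torch.randint(0, vocab_size, (1, 7))  # one toy sentence of 7 word ids
    out_a, _ = rnn_a(embedding(word_ids))

    # Option B: skip the embedding and feed one-hot vectors straight into the recurrent layer.
    # Simpler, but one-hot vectors are all orthogonal, so the model gets no notion of
    # similarity or difference between words.
    one_hot = torch.nn.functional.one_hot(word_ids, vocab_size).float()
    rnn_b = nn.GRU(vocab_size, hidden_units, batch_first=True)
    out_b, _ = rnn_b(one_hot)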

11
Q

If you would like to … your knowledge even more, go over the following tutorial.

A

deepen

12
Q

Words that you would expect to see more often in positive reviews – like “amazing” – have a ratio greater than 1. The more … a word is toward positive, the farther from 1 its positive-to-negative ratio will be.

A

skewed
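
For context, a positive-to-negative ratio of this kind can be computed as below. A minimal sketch; the toy reviews and the +1 in the denominator (to avoid dividing by zero) are illustrative assumptions.

    # A minimal sketch of positive-to-negative word ratios; the labelled reviews are made up.
    from collections import Counter

    reviews = [
        ("this movie was amazing and terrific", "POSITIVE"),
        ("amazing acting and an amazing plot", "POSITIVE"),
        ("terrible movie and terrible acting", "NEGATIVE"),
    ]

    positive_counts, negative_counts = Counter(), Counter()
    for text, label in reviews:
        counts = positive_counts if label == "POSITIVE" else negative_counts
        counts.update(text.split())

    # Ratio > 1 means the word leans positive; the more skewed toward positive a word is,
    # the farther above 1 its ratio gets. The +1 keeps the division finite for words
    # that never appear in negative reviews.
    ratios = {
        word: positive_counts[word] / float(negative_counts[word] + 1)
        for word in positive_counts
    }
    print(ratios["amazing"], ratios["movie"])  # 3.0 and 0.5 on this toy data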

13
Q

There are still a couple of problems to … out before we use the bigram probability dictionary to calculate the probabilities of new sentences.

A

sort
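
For context, a bigram probability dictionary of the kind referred to here can be built as follows. A minimal sketch; the toy corpus is made up.

    # A minimal sketch of a bigram probability dictionary; the toy corpus is made up.
    from collections import Counter

    corpus = "i like cheese . i like bread . i eat cheese .".split()

    unigram_counts = Counter(corpus)
    bigram_counts = Counter(zip(corpus, corpus[1:]))

    # P(w2 | w1) = count(w1 w2) / count(w1)
    bigram_probabilities = {
        (w1, w2): count / unigram_counts[w1]
        for (w1, w2), count in bigram_counts.items()
    }
    print(bigram_probabilities[("i", "like")])  # 2/3: "i" occurs 3 times, followed by "like" twice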

14
Q

This diet is … in vitamin B.

A

deficient

15
Q

Some combinations may not exist in our probability dictionary but are still possible. We don’t want to multiply in a probability of 0 just because our original corpus was …

A

deficient
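
One common fix for this (not necessarily the exact one the course uses) is add-one (Laplace) smoothing, so unseen bigrams get a small nonzero probability instead of 0. A minimal sketch with a made-up toy corpus.

    # A minimal sketch of add-one (Laplace) smoothing for bigram probabilities; corpus is made up.
    from collections import Counter

    corpus = "i like cheese . i like bread . i eat cheese .".split()
    unigram_counts = Counter(corpus)
    bigram_counts = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigram_counts)

    def smoothed_bigram_probability(w1, w2):
        # Every count is incremented by 1, so pairs missing from the corpus still get
        # a small nonzero probability instead of 0.
        return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocab_size)

    print(smoothed_bigram_probability("i", "like"))    # seen bigram: (2 + 1) / (3 + 6)
    print(smoothed_bigram_probability("i", "cheese"))  # unseen bigram: (0 + 1) / (3 + 6)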

16
Q

Carefully read each question and provide … answers.

A

thorough

17
Q

You had to calculate all derivatives for all your functions, apply the chain rule, and then implement the result of the calculations, praying that everything … … right.

A

was done