Week 8: Clustering and Text Mining Flashcards

1
Q

  • Stemming (running -> run)
  • Lemmatisation (were -> be)
  • lower casing
  • Stop word removal
  • Punctuation removal
  • Number removal
  • Spell correction
  • Tokenisation
A

What are some typical steps in text pre-processing? (8)

2
Q

the task of finding the document d from the D documents in some collection that best matches a query q

A

What is information retrieval?

3
Q

Short, dense vectors that can be used to represent words

A

What are embeddings?

4
Q

  • it is a token learner
  • starts with vocabulary of all characters
  • chooses the two symbols that are most frequently adjacent, adds a new merged symbol to the vocabulary and replaces every adjacent pair in the corpus with the new merged symbol
  • continues to count and merge, creating new longer and longer character strings, until k merges have been done creating k novel tokens
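For example (an illustrative sketch, not from the notes): starting from the corpus low, low, lowest with a vocabulary of single characters, a most frequent adjacent pair such as l o is merged into lo, then lo w into low, and so on until k merges have produced k new tokens.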
A

How does the byte-pair encoding algorithm work for tokenisation?

5
Q

library(text2vec)
it = itoken(wordvector) #create index tokens
full_vocab = create_vocabulary(it) #create full vocabulary
A

How do you create a vocabulary of words in R?

6
Q

train a classifier such that, given a tuple (w,c) of a target word w paired with a candidate/context word c, it returns the probability that c is a real context word, P(+|w,c)

A

What is the classifier to train for skip-grams?

7
Q

DocumentTermMatrix(corpus)

A

Code to create a document term matrix in R?

8
Q

tidy()

A

Code to turn a document term matrix into a dataframe in R?

9
Q

Similar rows mean that the words are similar because they occur in similar documents

A

When will two row vectors be similar in a term-document matrix?

10
Q

Sentiment = total positive words - total negative words

A

What is the overall sentiment in sentiment analysis?

11
Q

Each row is a document, each column is a word

A

What is a document-term matrix (DTM)?

12
Q

Hidden groups within the data that are not recorded

A

What is gaussian mixture modelling trying to find?

13
Q

LCA = latent class analysis
LPA = latent profile analysis
They are both types of model-based clustering
A

What do LCA and LPA stand for? What are they types of?

14
Q

Corpus(textsource)

A

Code to create a corpus in R?

15
Q

It separates out clitics (doesn’t becomes does n’t), keeps hyphenated words together, and separates out all punctuation

A

What does Penn Treebank tokenisation do?

16
Q

  • tokenising (segmenting) words
  • normalising word formats
  • segmenting sentences
A

What are 3 types of text normalisation?

17
Q

create_tcm(it, vectorizer, skip_grams_window = 5)

A

How do you create a token co-occurrence matrix in R?

18
Q

Use a capture group () to store the matched expression in memory, then refer back to it with \1
e.g. the (.*)er they were, the \1er they will be

A

How do you get part of a string and reference back to that part in an RE?

19
Q

adjust initial embeddings to maximise the similarity (dot product) of the (w, cpos) pairs drawn from the positive examples and minimise the similarity (dot product) of the (w, cneg) pairs from the negative examples

A

How does the skip-gram algorithm adjust during training?

20
Q

words that occur in similar contexts tend to have similar meanings

A

What is the distributional hypothesis?

21
Q

clustCombi(data=x)

A

What is the code for merging components of clusters in R?

22
Q

two matrices W and C each containing an embedding for every one of the |V| words in the vocabulary V

A

What are all the parameters learned in skip-gram?

23
Q

\n is newline
\t is tab

A

What are \n and \t in RE?

24
Q

AFINN, NRC, bing

A

What are some popular lexicons for sentiment analysis? (3)

25
Q

complete morphological parsing of the word
i.e. takes cats and parses it into the two morphemes cat and s

A

What is the most sophisticated way of lemmatisation?

26
Q

all prior class proportions 1/K, EII model, all posteriors are either 0 or 1

A

What type of Gaussian model mixture is k-means?

27
Q

tf = term frequency 
idf = inverse document frequency
A

What are the terms of tf-idf

28
Q

p(x) = pi1 * Normal(x; mean1, var1) + (1 - pi1) * Normal(x; mean2, var2)
where pi1 is the mixing proportion for cluster 1 (e.g. the probability that a person is a man)
i.e. the proportion of values expected in each cluster
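A minimal R sketch of this density, with assumed names pi1, mean1, var1, mean2, var2 for the guessed parameters (illustrative only):
p_x = pi1 * dnorm(x, mean1, sqrt(var1)) + (1 - pi1) * dnorm(x, mean2, sqrt(var2))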
A

What is the gaussian mixture model formula?

29
Q

the task of putting words/tokens in a standard format

A

What is word normalisation?

30
Q

represent a word as a point in multidimensional semantic space that is derived from the distributions of word neighbours

A

What are vector semantics for words?

31
Q

an algebraic notation for characterising a set of strings

A

What is a regular expression?

32
Q

P(+|w,c) = sigma(c . w) = 1/(1 + exp(-c . w))
where . is the dot product and sigma is the sigmoid function
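A tiny R sketch with made-up 3-dimensional embeddings (illustrative values, not from the notes):
w = c(0.2, -0.1, 0.4) #target word embedding
c_vec = c(0.1, 0.3, 0.5) #context word embedding
p_pos = 1/(1 + exp(-sum(w * c_vec))) #P(+|w,c)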

A

What is the probability that c is a context word P(+|w,c)?

33
Q

the frequency that a word appears is inversely proportional to its rank

A

What is Zipf’s law?

34
Q

segmenting a text into sentences

A

What is sentence segmentation?

35
Q

same as data$variable

A

what does data %>% pull(variable) do in R?

36
Q

The data within each cluster is normally distributed

A

What is the assumption of gaussian mixture modelling?

37
Q

exactly n occurrences of the previous char or expression
from n to m occurrences of the previous char or expression

A

What do {n} and {n,m} mean in RE?

38
Q

AIC: Akaike information criterion - same as BIC but the penalty is 2m
AIC3: same as AIC but the penalty is 3m
ICL: integrated completed likelihood - same as BIC but the reconstruction loss includes the assigned clusters

A

What are 3 alternatives to the BIC?

39
Q

LCA = binomial mixture model 
LPA = gaussian mixture model
A

What are other names for LCA and LPA

40
Q

cosine(v,w) = (v.w)/(|v||w|) (where . is dot product)
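A minimal base-R sketch with made-up vectors (illustrative only):
v = c(1, 2, 3)
w = c(2, 1, 0)
cos_vw = sum(v * w)/(sqrt(sum(v^2)) * sqrt(sum(w^2)))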

A

What is the formula for the cosine similarity measure?

41
Q

A clitic is a part of a word that can’t stand on its own, can only occur attached to another word. E.g. we’re

A

What is a clitic?

42
Q

Corpus = a collection of documents (our whole dataset)
Lexicon = the set of all unique words in a corpus

A

What is a corpus and a lexicon?

43
Q

v.w = v1w1 + v2w2 + … + vNwN

A

How do you get the dot product of 2 vectors?

44
Q

library(mclust)
Mclust(data, G = 2, modelNames = "E")

A

Code for implementing mclust in R?

45
Q

library(tidytext)
unnest_tokens(data, outputcolumn, inputcolumn) #takes one term per row and automatically removes punctuation

A

How can you tokenise text in R?

46
Q

Columns and rows both represent words

  • each cell records the number of times the row (target) word and the column (context) word co-occur in some context in some training corpus
  • the context could be a document or smaller such as a window around the word
A

What is a term-term matrix or word co-occurrence matrix?

47
Q

bind_tf_idf(tidytext dataset, tokens, documents, counts) #takes a tidytext dataset as input with one row per token per document

A

How do you get the tf-idf in R?

48
Q

sub("expression", "replacement", data) #replaces only the first match in each element
gsub("expression", "replacement", data) #replaces all matches

A

What do sub and gsub do?

49
Q

LCA lets the variables follow any distribution, as long as they are unrelated to each other (independent) within classes.

A

How does latent class analysis work?

50
Q

  • Increasing precision: minimising false positives (strings that were incorrectly matched)
  • Increasing recall: minimising false negatives (strings that were incorrectly missed)
A

What does reducing the error rate of an RE involve?

51
Q

\b matches word boundary
\B matches non-word boundary

A

What are \b and \B in RE?

52
Q

There are often deviations at high ranks, as a corpus often contains fewer rare words than predicted by a single power law

A

Where does Zipf’s law often deviate?

53
Q

Tokenisation is the task of segmenting running text into words

A

What is tokenisation?

54
Q

* zero or more occurrences of the immediately previous character or regular expression
+ one or more occurrences of the immediately previous character or regular expression

A

What are the Kleene * and Kleene + in RE?

55
Q

grep("expression", data) #returns the indices of all elements of data that match the expression
grep("expression", data, value = TRUE) #returns the actual matching elements of data; gives the whole string, not just the matching part
length(grep("expression", data)) #returns the number of matches of the expression in data

A

How do you use grep to match and count matches from data? (3 ways)

56
Q

  • each row represents a word in the vocabulary
  • each column represents a document
  • each cell represents the number of times a particular word occurs in a particular document
A

What is a term-document matrix?

57
Q

"-" e.g. [2-5]

A

How do you specify a range in RE?

58
Q

  1. Get some text
  2. Organise text into corpus
  3. Pre-process
  4. Create representation
  5. Perform analysis as usual
A

What is the basic workflow for text analysis? (5 steps)

59
Q

  • A token learner takes a raw training corpus and induces a vocabulary, a set of tokens
  • A token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary
A

What is a token learner and a token segmenter?

60
Q

  1. Treat the target word and a neighbouring context word as positive examples
  2. Randomly sample other words in the lexicon to get negative samples
  3. Use logistic regression to train a classifier to distinguish those two cases
  4. Use the learned weights as the embeddings
A

What is the intuition of skip-gram/ word2vec?

61
Q

matches any one of the characters within the brackets, e.g. colo[ou]r means colour or color

A

What does [] mean in RE?

62
Q

r = regexpr("expression", data) #gives the index and length of the first match in each element
r = gregexpr("expression", data) #gives the index and length of all matches
regmatches(data, r) #returns the actual matches obtained from regexpr or gregexpr

A

How do you use regexpr and gregexpr and regmatches to match data?

63
Q

  • Analogies: look at analogies in the vector space, e.g. king - man + woman = queen
  • Bias: semantics derived automatically from language corpora contain human-like biases
A

What are two properties of word embeddings?

64
Q

  1. Parentheses ()
  2. Counters * + ? {}
  3. Sequences and anchors ^ $
  4. Disjunction |
A

What is the precedence hierarchy in RE?

65
Q

pi(k): the mixing proportions, number of classes - 1 = K-1
mu(k): K*p (p is number of features)
var: K*p (or just p when variances are equal over classes)
covariances: K*p*(p-1)/2 (or p*(p-1)/2 when covariances are equal over classes) (or 0 when variables are uncorrelated, spherical clusters)

m = (K-1) + Kp + Kp + Kp(p-1)/2
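For example (made-up numbers): with K = 3 clusters, p = 2 features and a fully unconstrained (VVV) model, m = 2 + 6 + 6 + 3 = 17.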

A

What are the number of parameters in a multivariate Gaussian mixture model:

66
Q

library(stringr)
str_view("string", "expression") #shows only the first match
str_detect("string", "expression") #returns TRUE/FALSE depending on whether the string matches the expression
str_extract("string", "expression") #extracts the first match
str_extract_all("string", "expression") #extracts all matches into a vector
str_match_all("string", "expression") #similar to str_extract_all, except the output is a matrix with a column for the full match and for each capture group

A

What are the stringr options for matching strings?

67
Q

When the documents contain similar words

A

When will two column vectors be similar in a term-document matrix?

68
Q

pipe |

A

How can you say or in RE?

69
Q

Raw frequency is not the best measure of association between words because words like “the” and “good” occur frequently and aren’t informative. tf-idf gives weight to words that appear in fewer documents

A

What is the purpose of tf-idf?

70
Q

task of classifying the polarity of a given text (ie is it a good/bad review)

A

What is sentiment analysis?

71
Q

\ (in an R string it must be typed as \\, because the backslash itself has to be escaped)

A

How do you do an escape in R?

72
Q

all context words are independent

A

What assumption does skip-gram make?

73
Q

Self-supervised learning. Don’t need labels

A

What is an advantage of word2vec/skip grams?

74
Q

matches the end of a line

A

What is $ in RE?

75
Q

a kind of normalisation that is mapping everything to lowercase

A

What is case folding?

76
Q

  • Inflectional stemming: remove plurals, normalise verb tenses, remove other affixes
  • Stemming to root: reduce word to most basic element
A

What is inflectional stemming and stemming to root?

77
Q

anti_join(data, stop_words) #removes stop words on an unnest_tokens object

A

How can you remove stop words in R?

78
Q

How well the model fits the data

A

What does the likelihood tell us?

79
Q

start with the usual Gaussian mixture solution, merge similar components to create non-Gaussian clusters

A

How can you identify clusters that are not ellipses using the GMM?

80
Q

mclust fits all the models with up to the specified number of clusters, computes the BIC of each model and chooses the model with the best BIC

A

How does mclust select a model in R?

81
Q

\d is any digit
\D is any non-digit

A

What are \d and \D in RE?

82
Q

a naive version of morphological analysis that consists of chopping off word-final affixes

A

What is stemming?

83
Q

the task of determining that two words have the same root, despite their surface differences

A

What is lemmatisation?

84
Q

vectorizer = vocab_vectorizer(small_vocab)

A

How do you map words to indices in R? ie create a vectorizer

85
Q

term frequency: the frequency of the word t in document d / the total number of terms in the document
tf(t,d)= count(t,d)/total number of terms in the document

inverse document frequency: number of documents / number of documents the term occurs in
idf = log10(N/df(t))

tf-idf = tf*idf
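Worked example (made-up numbers): a word occurring 3 times in a 100-word document and appearing in 10 of 1,000 documents has tf = 3/100 = 0.03, idf = log10(1000/10) = 2, so tf-idf = 0.06.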

A

How do you calculate tf, idf and tf-idf?

86
Q

GloVe, fastText

A

What are other types of embeddings in R, after word2vec?

87
Q

Sparse vectors, because most entries are 0

A

What type of vector representations come from a term-term matrix?

88
Q

p(data|parameters) = p(y|theta)

A

What is the likelihood, defined by the statistical model and the assumption?

89
Q

  • Not great with longer texts
  • negation
  • context-dependency
  • You need partly labelled data
A

What are the problems with sentiment analysis? (4)

90
Q

the preceding character or nothing
e.g. colou?r

A

What is ? in RE?

91
Q

a trade-off between fit (reconstruction loss) and complexity (file size)
the lower the better

A

What is the aim of the BIC?

92
Q

E for equal, V for variable, I for identity matrix

  • Volume (size of clusters in data space)
  • Shape (circle or ellipse)
  • Orientation (the angle of the ellipse)

E.g. VVE model has variable volume, variable shape, equal orientation

A

What are the identifiers for each parameterisation of a GMM and what do they measure?

93
Q

  • Bag-of-words: word count or word proportion for each word
  • Time-series: label each token and put in order
  • Tf-idf
  • Embeddings
A

What are some forms of text representation? (4)

94
Q

The weighted sum of two normal curves

A

What is the overall probability curve made up of?

95
Q

Morphology is how words are built up from stems (the central morpheme of the word) and affixes

A

What is morphology?

96
Q

sim2(matrix1, matrix2, method = "cosine", norm = "l2")

A

How do you calculate the cosine similarity in R?

97
Q

The cosine value ranges from 1 for vectors pointing in the same direction, through 0 for orthogonal vectors, to -1 for opposite vectors

A

What does the value of the cosine metric represent?

98
Q

pi(man, x) = height of the male curve at x / height of the total (mixture) curve at x

A

How is the posterior probability calculated based on the current estimates of mean and sd?

99
Q

A wildcard expression that matches any single character

A

What is the period . in RE?

100
Q

  1. Guess the parameters
  2. Work out the posterior probability of each point being M/F, assuming normality (E step)
  3. Update the parameters using those posteriors (M step)
    * repeat steps 2 and 3 until the parameters stop changing
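A minimal illustrative R sketch of one EM pass for a two-component 1-D Gaussian mixture (made-up data and starting values, not the course code):
x = c(160, 165, 170, 180, 185, 190) #e.g. heights
mu = c(165, 185); sdev = c(5, 5); pi1 = 0.5 #step 1: guessed parameters
d1 = pi1 * dnorm(x, mu[1], sdev[1])
d2 = (1 - pi1) * dnorm(x, mu[2], sdev[2])
post1 = d1/(d1 + d2) #E step: posterior probability of component 1
pi1 = mean(post1) #M step: update the parameters, weighting by the posteriors
mu[1] = sum(post1 * x)/sum(post1)
mu[2] = sum((1 - post1) * x)/sum(1 - post1)
#repeat the E and M steps until the parameters stop changing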
A

What are the steps of the expectation maximisation (EM) algorithm?

101
Q

Caret ^ matches the start of a line
or negates the contents of [] if it is [^…]

A

What are the 2 things that ^ represents in RE

102
Q

Mean becomes a vector of 2 means
Standard deviation becomes a 2x2 variance covariance matrix determining the shape of the cluster

A

In multivariate model based clustering with 2 observed features, what do the mean and standard deviation become?

103
Q

fit_mclust$parameters #gives means, variances, proportions, modelName
fit_mclust$BIC #gives the BIC for each model and number of clusters and shows which are best. These are negative in the mclust package, take the absolute value
fit_mclust$loglik #gives the log likelihood used to calculate the BIC manually
fit_mclust$classification #gives the cluster classification vector
fit_mclust$uncertainty #gives the uncertainty of each point

plot(fit_mclust, "density") #gives a density plot

A

When you have fitted an Mclust model to fit_mclust in R, what information can you obtain from the result?

104
Q

BIC = -2*log(L) + m*log(n)

L = likelihood = p(data|theta)
-2*log(L) = deviance = reconstruction loss = fit
m = number of parameters
n = number of observations/examples
m*log(n) = file size = complexity
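An illustrative R one-liner, assuming loglik, m and n have already been extracted from a fitted model (hypothetical names):
bic = -2 * loglik + m * log(n)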
A

What is the Bayesian information criterion (BIC) formula?