IR-TFIDF 6 Flashcards

1
Q

What is Information Retrieval?

A

Information retrieval is finding material (usually documents) of
an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).

E.g. A search engine (Google, Bing, Baidu, DuckDuckGo, …)

2
Q

What is Ranked Information Retrieval?

A

Return an ordering of the documents in the collection with respect to the query, most relevant first.

3
Q

How would you rank documents in Ranked Information Retrieval?

A
• Assign a score – say in [0, 1] – to each document
• This score measures how well the document and the query “match”

4
Q

How does the Jaccard Coefficient work?

A

View the query and each document as a set of types (unique terms, not tokens).
Compute Jaccard(Q, D) = |Q ∩ D| / |Q ∪ D|: the number of types the query and the document share, divided by the number of distinct types in either.
Remember: types, not tokens.
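
The definition above can be sketched in a few lines of Python (whitespace tokenisation and the function name are illustrative assumptions):

```python
def jaccard(query, doc):
    """Jaccard coefficient over sets of types (unique words)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    if not q | d:
        return 0.0
    return len(q & d) / len(q | d)

# One shared type ('march') out of six distinct types -> 1/6:
print(jaccard("ides of march", "caesar died in march"))
```

Note that repeated terms do not change the score: only set membership counts.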

5
Q

Name two pros and cons of the Jaccard Coefficient.

A

Pros of Jaccard Coefficient
• Easy to implement, easy to understand
• The baseline method you should consider (if your proposed method performs worse than Jaccard coefficient, throw it away)

Cons of Jaccard Coefficient
• It doesn’t consider term frequency (how many times a term occurs in a document)

• It ignores word sequences (‘Alex is taller than Bob’ and ‘Bob is taller than Alex’ are regarded as the same). This can be mitigated by considering n-grams

6
Q

What is bag of words?

A
  • View a document as a bag of words
  • Count the appearance of each word, or term frequency (unlike Jaccard coefficient, which ignores the term frequencies)
  • Known as BoW or BOW representation
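
The BoW representation can be sketched with Python's `collections.Counter` (whitespace tokenisation is an assumption):

```python
from collections import Counter

def bow(text):
    """Bag of words: map each term to its frequency in the text."""
    return Counter(text.lower().split())

# 'to' and 'be' each occur twice; word order is discarded:
print(bow("to be or not to be"))
```
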
7
Q

Why should you not use Euclidean distance to measure the similarity between vectors in bag of words?
And what should be used instead?

A

• Intuition: larger distance → lower relevance
• However, Euclidean distance is a bad idea, because it takes the lengths of the vectors into account: a long document can be far from a short query even when their word distributions are similar
• Use Cosine Similarity instead: it measures the angle between the vectors, so vector length does not matter
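
A small sketch of cosine similarity on dense count vectors (pure Python, no libraries; names are illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length term-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Scaling a vector (e.g. doubling a document) leaves the score unchanged,
# unlike Euclidean distance:
print(cosine_similarity([1, 2, 0], [2, 4, 0]))  # ≈ 1.0
```
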

8
Q

What is Term Frequency?

A

The number of times term t appears in document d, written tf(t, d).

9
Q

What is Inverse Document Frequency?

A

idf_t = log(N / df_t), where N is the number of documents in the collection and df_t is the number of documents containing term t. A smoothing constant γ can be added to df_t if necessary (e.g. when df_t could be 0): idf_t = log(N / (df_t + γ)).

10
Q

What are the problems with IDF?

What solutions are commonly used?

A

• If a term appears in no document in the corpus (this can happen when you use a fixed, given vocabulary), idf is ill-defined: df_t = 0, so N / df_t divides by zero
• When a term appears in very few documents (i.e. df_t is very small) and there are many documents (i.e. N is very large), the IDF is very large

Solution: add a smoothing constant γ to the document frequency in the denominator, idf_t = log(N / (df_t + γ)). γ can be any positive real number; it is usually set to 1.0.
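
A sketch of smoothed IDF under these conventions (natural log and whitespace tokenisation are assumptions):

```python
import math

def idf(term, docs, gamma=1.0):
    """Smoothed inverse document frequency: log(N / (df_t + gamma)).

    gamma keeps the value defined even when df_t = 0.
    """
    n = len(docs)
    df = sum(term in doc.lower().split() for doc in docs)
    return math.log(n / (df + gamma))

docs = ["the cat", "the dog", "a fish"]
print(idf("the", docs))      # log(3 / (2 + 1)) = 0.0
print(idf("unicorn", docs))  # log(3 / (0 + 1)) > 0, still defined
```
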

11
Q

How do you calculate tf-idf?

A

tf-idf(t, d) = tf(t, d) × idf(t)
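
Putting the two pieces together, a sketch using raw counts and an unsmoothed idf (all names are illustrative):

```python
import math

def tf_idf(term, doc, docs):
    """tf-idf(t, d) = tf(t, d) * idf(t), with raw counts and unsmoothed idf."""
    tf = doc.lower().split().count(term)
    df = sum(term in d.lower().split() for d in docs)
    return tf * math.log(len(docs) / df) if df else 0.0

docs = ["the cat sat", "the dog ran", "a cat and a cat"]
# 'cat': tf = 2 in the third doc, df = 2 of 3 docs -> 2 * log(3/2)
print(tf_idf("cat", docs[2], docs))
```
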

12
Q

Pros and cons of tf-idf?

A

Pros:
• Easy to understand and easy to implement
• Yields surprisingly good performance in many text classification and text clustering tasks, usually better than BoW and Jaccard
Cons:
• Superficial matching:
• Texts with similar words can convey very different meanings (S1 and S2)
• Texts with different words can convey very similar meanings (S1 and S3)
• Sparsity: if the vocabulary is large (e.g. ~30K common words in English), each text vector has very few non-zero entries. As a result, even similar sentences can have low cosine similarity

13
Q

What is the minimum edit distance?

A

The minimum edit distance between two strings is defined as the
minimum number of editing operations (operations like insertion,
deletion, substitution) needed to transform one string into another.
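
The definition maps directly onto the standard dynamic-programming algorithm; a sketch using the cost scheme this deck assumes (insertion/deletion cost 1, substitution cost 2):

```python
def min_edit_distance(s, t, sub_cost=2):
    """Minimum edit distance via dynamic programming.

    Insertions and deletions cost 1; substitutions cost `sub_cost`.
    """
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j            # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,      # deletion
                d[i][j - 1] + 1,      # insertion
                d[i - 1][j - 1] + (0 if s[i - 1] == t[j - 1] else sub_cost),
            )
    return d[m][n]

print(min_edit_distance("intention", "execution"))  # 8
```

With `sub_cost=1` this is plain Levenshtein distance.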

14
Q

How much does substitution cost in Levenshtein?

A

2 (a substitution counts as one deletion plus one insertion, each of cost 1; note that some definitions of Levenshtein distance charge 1 for a substitution)

15
Q

What is Standard precision?

And What is IR Evaluation: Precision@K

A

Standard precision is the proportion of positive identifications that were actually correct.

• Precision@K
• Of the top-K returned results, how many are indeed relevant to the query (i.e. how many are labelled ‘R’ by the human annotators)
• Decide K by your application:
• E.g., you want to make sure the user finds at least one relevant result on the first page
• For all M queries, compute the average:
Performance = mean(Precision@K for results on query_i), i = 1, …, M
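
A sketch of Precision@K and its mean over queries (the document IDs are hypothetical):

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-K ranked documents that are relevant."""
    return sum(doc in relevant for doc in ranked[:k]) / k

def mean_precision_at_k(runs, k):
    """Average Precision@K over (ranking, relevant-set) pairs, one per query."""
    return sum(precision_at_k(r, rel, k) for r, rel in runs) / len(runs)

# A query returns d1..d5; annotators marked d1 and d4 as relevant -> 1/3 at K=3:
print(precision_at_k(["d1", "d2", "d3", "d4", "d5"], {"d1", "d4"}, 3))
```
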

16
Q

A document with 10 occurrences of a term is
more relevant than another document with just 1
occurrence, but not 10x more relevant.
So how would you measure the difference?

A

Log Term Frequency: w(t, d) = 1 + log10(tf(t, d)) if tf(t, d) > 0, and 0 otherwise.
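
The log weighting can be sketched as follows (base-10 log is a common textbook convention, assumed here):

```python
import math

def log_tf(tf):
    """Log term-frequency weight: 1 + log10(tf) if tf > 0, else 0."""
    return 1 + math.log10(tf) if tf > 0 else 0.0

# Multiplying the raw count by 10 adds only 1 to the weight:
print(log_tf(1), log_tf(10), log_tf(1000))  # 1.0 2.0 4.0
```
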

17
Q

Given two sequences, what is an alignment?

A

An alignment is a correspondence between substrings of the two sequences.