C11: user interaction Flashcards

1
Q

how can the search engine learn from user interactions?

A
  • query modification behaviour (query suggestions)
  • interactions with documents (clicks)
2
Q

query suggestions

A

goal: find related queries in the query log, based on the signals below (a co-occurrence sketch follows the list):
- common substring
- co-occurrence in session
- term clustering
- clicks
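
A minimal Python sketch of the session co-occurrence signal; the session data is invented for illustration:

import itertools
from collections import Counter

# toy query log, one list of queries per user session -- invented data
sessions = [
    ["ir course", "information retrieval course", "ir exam"],
    ["ir course", "ir exam", "click models"],
    ["click models", "position bias"],
]

# count how often two distinct queries occur in the same session
cooccurrence = Counter()
for session in sessions:
    for q1, q2 in itertools.combinations(sorted(set(session)), 2):
        cooccurrence[(q1, q2)] += 1

# queries that co-occur often are candidate suggestions for each other
for (q1, q2), count in cooccurrence.most_common(3):
    print(f"{q1!r} <-> {q2!r}: {count} sessions")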

3
Q

how can we use log data for evaluation?

A

use clicking and browsing behaviour in addition to queries (a toy computation follows the list):
- click-through rate: number of clicks a document attracts
- dwell time: time spent on a document
- scrolling behaviour: how users interact with the page
- stopping information: does the user abandon the search engine after a click?
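
A minimal Python sketch of how the first two signals could be computed; the log format and all numbers are invented for illustration:

from collections import defaultdict

# toy click log: (query, doc_id, clicked, dwell_seconds) -- invented format
log = [
    ("ir course", "d1", True, 45.0),
    ("ir course", "d2", False, 0.0),
    ("ir course", "d1", True, 120.0),
    ("ir course", "d3", True, 5.0),
]

clicks, impressions, dwell = defaultdict(int), defaultdict(int), defaultdict(list)
for _query, doc, clicked, dwell_s in log:
    impressions[doc] += 1
    if clicked:
        clicks[doc] += 1
        dwell[doc].append(dwell_s)

for doc in sorted(impressions):
    ctr = clicks[doc] / impressions[doc]
    avg_dwell = sum(dwell[doc]) / len(dwell[doc]) if dwell[doc] else 0.0
    print(f"{doc}: click-through rate {ctr:.2f}, average dwell time {avg_dwell:.1f}s")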

4
Q

what are the limitations of query logs?

A
  • information need is unknown (can be partly deduced from previous queries)
  • relevance assessments unknown (deduce from clicks + dwell time)
5
Q

learning from interaction data

A

implicit feedback is needed when we don’t have explicit relevance assessments

assumption: when the user clicks on a result, it is relevant to them

6
Q

3 limitations of implicit feedback

A

noisy: a non-relevant document might be clicked or a relevant document might not be clicked

biased: clicks can happen for reasons other than relevance
- position bias: higher ranked documents get more attention
- selection bias: only interactions on retrieved documents
- presentation bias: results that are presented differently will be treated differently

what is the interpretation of a non-click? => either the document didn’t seem relevant or the user did not see the document

7
Q

probabilistic model of user clicks

A

P(clicked(d) | relevance(d), position(d)) = P(clicked(d) | relevance(d), observed(d)) * P(observed(d) | position(d))

i.e., a click requires that the document is observed (which depends on its position) and that it is judged relevant once observed (the examination hypothesis)
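
A made-up worked example of the factorization: if P(observed(d) | position 3) = 0.5 and P(clicked(d) | relevant, observed) = 0.8, then P(clicked(d) | relevant, position 3) = 0.8 * 0.5 = 0.4; the same document at a position where P(observed) = 1 would be clicked with probability 0.8, even though its relevance is identical.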

8
Q

how to measure the effect of position bias?

A

Idea: changing the position of a document doesn’t change its relevance, so all changes in click behaviour come from the position bias

intervention in the ranking (a toy estimate follows the steps):
1. swap two documents in the ranking
2. present the modified ranking to some users (A/B test)
3. record the clicks on the document in both original and modified rankings
4. measure the probability of a document being observed based on the clicks
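
A toy Python estimate of what step 4 yields, with invented click counts; relevance is constant across the two conditions, so the CTR ratio estimates the ratio of observation probabilities:

# invented counts from a hypothetical swap experiment (rank 1 <-> rank 3)
c_at_1, n_at_1 = 120, 1000  # clicks / impressions for document d shown at rank 1
c_at_3, n_at_3 = 48, 1000   # clicks / impressions for the same d swapped to rank 3

ctr_1 = c_at_1 / n_at_1
ctr_3 = c_at_3 / n_at_3

# relevance of d is unchanged by the swap, so the drop in CTR is attributed
# entirely to the drop in observation probability (position bias)
relative_propensity = ctr_3 / ctr_1
print(f"P(observed | rank 3) / P(observed | rank 1) ~ {relative_propensity:.2f}")  # ~0.40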

9
Q

how to correct for position bias?

A

Inverse Propensity Scoring (IPS) estimators can remove bias

Main idea: weight clicks inversely to their observation probability => clicks near the top get low weight, clicks near the bottom get high weight

formula on slide 20, lecture 11
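
The exact slide formula is not reproduced here. As a stand-in, a common form of the IPS estimator from the counterfactual learning-to-rank literature (e.g., Joachims et al., 2017) is:

\[
\widehat{\Delta}_{\text{IPS}}(f \mid q) = \sum_{d \,:\, \text{clicked}(d)} \frac{\lambda\big(\text{rank}(d \mid f, q)\big)}{P\big(\text{observed}(d)\big)}
\]

where \(\lambda\) is a rank-based weight (e.g., the rank itself) and \(P(\text{observed}(d))\) is the estimated propensity of the click; dividing by a small propensity is exactly what gives clicks low in the ranking their large weight.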

10
Q

simulation of interaction

A

session simulation:
- simulate queries
- simulate clicks
- simulate user satisfaction

requires a model of the range of user behaviour
- users do not always behave deterministically
- might make non-optimal choices
- models need to contain noise

11
Q

click models

A

How do users examine the result list and where do they click?

cascade assumption: user examines result list from top to bottom

example: Dependent Click Model (DCM)

12
Q

Dependent Click Model (DCM)

A
  1. users traverse result lists from top to bottom
  2. users examine each document as it is encountered
  3. user decides whether to click on the document or skip it
  4. after each clicked document the user decides whether or not to continue examining the document list
  5. relevant documents are more likely to be clicked than non-relevant documents (a simulation sketch follows the list)
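
A minimal Python simulation of these five assumptions. The click probabilities and the continuation probability are invented; note that in the full DCM the continuation probability depends on the rank of the click, while this sketch uses a single constant for brevity:

import random

def simulate_dcm_clicks(click_probs, p_continue=0.7, seed=None):
    """Simulate clicks on one ranked list under a (simplified) Dependent Click Model.

    click_probs: probability of a click given examination, per rank, top to bottom.
    p_continue: probability the user keeps examining the list after a click.
    """
    rng = random.Random(seed)
    clicked_ranks = []
    for rank, p_click in enumerate(click_probs):
        if rng.random() < p_click:          # examined document attracts a click
            clicked_ranks.append(rank)
            if rng.random() >= p_continue:  # user is satisfied and stops
                break
        # after a skip the user always moves on (cascade assumption)
    return clicked_ranks

# invented click probabilities: strong result on top, weaker ones below
print(simulate_dcm_clicks([0.8, 0.3, 0.5, 0.1, 0.2], seed=42))
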
13
Q

advantages of simulation of interaction

A
  • Investigate how the system behaves under particular patterns of user behaviour
  • Potentially a large amount of user data
  • Relatively low cost to create and use
  • Enable the exact same circumstances to be replicated, repeated, re-used
  • Encapsulates our understanding of the process
14
Q

disadvantages of simulation of interaction

A
  • Models can become complex if we want to mirror realistic user behaviour
  • Simulations enable us to explore many possibilities, but which ones should we explore, why, and how do we make sense of the resulting data?
  • Does it represent actual user behavior/performance?
  • What claims can we make? In what context?
15
Q

query expansion

A

expand the query with more similar terms: easy to experiment with in a live search engine because no changes to the index are required

16
Q

document expansion

A
  • documents are longer than queries, so there is more context for a model to choose appropriate expansion terms
  • can be applied at index time, and in parallel to multiple documents
17
Q

Doc2Query

A

document expansion: train a sequence-to-sequence model that, given a text from the corpus, produces queries for which that document might be relevant (a code sketch follows the steps)
- train on relevant document-query pairs
- use model to predict relevant queries for docs
- append predicted queries to documents
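
A minimal sketch of these three steps using the Hugging Face transformers library; the checkpoint name below is assumed to point at the publicly released doc2query T5 model for MS MARCO, and the sampling settings are illustrative:

from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/doc2query-t5-base-msmarco"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

doc = "The Manhattan Project was a research and development undertaking during World War II."
inputs = tokenizer(doc, return_tensors="pt", truncation=True)

# sample a few queries for which this document might be relevant
outputs = model.generate(
    **inputs, max_length=64, do_sample=True, top_k=10, num_return_sequences=3
)
predicted_queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# append the predicted queries to the document text before indexing
expanded_doc = doc + " " + " ".join(predicted_queries)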

18
Q

conversational search: different methods

A

retrieval-based: select best response from a collection of responses

generation-based: generate response in natural language

hybrid: retrieve information, then generate response

19
Q

pros of retrieval-based methods

A
  • source is transparent
  • efficient
  • evaluation straightforward
20
Q

cons of retrieval-based methods

A
  • answer space is limited
  • potentially not fluent
  • less interactive
21
Q

pros of generation-based methods

A
  • fluent and human-like
  • tailored to user and input
  • more interactive
22
Q

cons of generation-based methods

A
  • not necessarily factual, potentially toxic
  • GPU-heavy
  • evaluation is challenging
23
Q

how to evaluate conversational search methods?

A

retrieval-based methods (an MRR sketch follows the lists):
- Precision@n
- Mean Reciprocal Rank (MRR)
- Normalized Discounted Cumulative Gain (NDCG)

generation-based methods (measure word overlap):
- BLEU
- ROUGE
- METEOR
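
A minimal Python sketch of one of the retrieval-based metrics, MRR, over invented binary relevance judgments (one ranked list per query, top to bottom):

def mean_reciprocal_rank(rankings):
    """rankings: per query, a list of 0/1 relevance labels in ranked order."""
    total = 0.0
    for labels in rankings:
        for rank, relevant in enumerate(labels, start=1):
            if relevant:
                total += 1.0 / rank  # only the first relevant result counts
                break
        # a query with no relevant result contributes 0
    return total / len(rankings)

# invented judgments for three queries: first relevant at rank 2, at rank 1, never
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))  # (1/2 + 1 + 0) / 3 = 0.5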

24
Q

challenges in conversational search

A
  • coreference issues (referring back to earlier concepts)
  • dependence on previous user and system turns
  • explicit feedback
  • topic-switching user behaviour
25
Q

ConvPR

A

Conversational Passage Ranking: user asks a question, model retrieves a relevant passage from a collection

methods: encoder and retrieval models, fine-tuned on conversational data

26
Q

challenges of conversational search

A
  • logical self-consistency: semantic coherence and internal logic
  • safety, transparency, controllability: difficult to control the output of a generative model (could lead to hate speech)
  • efficiency: time- and memory-consuming training and inference