8 Flashcards
Question 1
In churn prediction in the telco industry, network features are usually
a) not predictive.
b) highly predictive.
b) highly predictive.
Question 2
A process is said to be a finite-valued Markov chain if P(X_{t+1}=j | X_0=k_0, X_1=k_1, …, X_{t-1}=k_{t-1}, X_t=i) =
a) P(X_t=i) for all t, i, j where 1 ≤ i, j ≤ M.
b) P(X_{t+1}=j) for all t, i, j where 1 ≤ i, j ≤ M.
c) P(X_{t+1}=j | X_t=i) for all t, i, j where 1 ≤ i, j ≤ M.
c) P(X_{t+1}=j | X_t=i) for all t, i, j where 1 ≤ i, j ≤ M.
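To see the property in code, here is a minimal Python sketch (the matrix and all numbers are illustrative, not taken from the text) that simulates a two-state chain; note that the sampler only ever looks at the current state, never at the earlier history k_0, …, k_{t-1}:

```python
import numpy as np

# Illustrative 2-state transition matrix: row i gives P(X_{t+1} = j | X_t = i).
T = np.array([[0.9, 0.1],    # from state 0: stay with 0.9, move with 0.1
              [0.2, 0.8]])   # from state 1: move with 0.2, stay with 0.8

rng = np.random.default_rng(42)

def simulate(T, x0, steps):
    """Sample a path; the next state is drawn from the row of the
    current state only -- the earlier history is never consulted."""
    path = [x0]
    for _ in range(steps):
        path.append(rng.choice(len(T), p=T[path[-1]]))
    return path

print(simulate(T, x0=0, steps=10))
```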
Question 3
Consider the following statement: “Essentially, a Markov process is a memoryless random process.”
This statement is
a) correct.
b) not correct.
a) correct.
Question 4
In a transition matrix, the sum of the probabilities
a) across the columns equals 1.
b) across the rows equals 1.
b) across the rows equals 1.
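A quick numpy check (with an illustrative matrix) makes the convention explicit: row i holds the distribution of the next state given current state i, so rows must sum to 1, while columns need not:

```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

print(T.sum(axis=1))  # [1.  1. ] -> every row is a probability distribution
print(T.sum(axis=0))  # [1.1 0.9] -> column sums carry no such constraint
```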
Question 5
Because of the Markov assumption, simulations or projections over multiple periods can easily be made by
a) adding the transition matrix to itself.
b) dividing the transition matrix by itself.
c) multiplying the transition matrix by itself.
d) subtracting the transition matrix from itself.
c) multiplying the transition matrix by itself.
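For example, projecting n periods ahead amounts to raising T to the n-th power; a small sketch with illustrative numbers:

```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Multiplying T by itself n times gives the n-period transition matrix.
T5 = np.linalg.matrix_power(T, 5)

# Project an initial state distribution (illustrative) 5 periods ahead.
p0 = np.array([0.6, 0.4])
print(p0 @ T5)
```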
Question 6
A Markov reward process is essentially a Markov chain with values that represent rewards assigned to
a) a state.
b) a transition.
c) a state or transition.
c) a state or transition.
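As an illustrative sketch (rewards and matrix are invented for the example), the expected discounted cumulative reward per state of a Markov reward process with state rewards r can be obtained by solving the linear Bellman equation v = r + γPv:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix (illustrative)
r = np.array([5.0, -1.0])    # reward assigned to each state (illustrative)
gamma = 0.9                  # discount factor

# v = r + gamma * P v  =>  v = (I - gamma * P)^(-1) r
v = np.linalg.solve(np.eye(2) - gamma * P, r)
print(v)                     # expected discounted cumulative reward per state
```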
Question 7
In a Markov Decision Process, the probability of moving to a new state depends on
a) the current state.
b) the action taken.
c) the current state and action taken.
c) the current state and action taken.
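One convenient way to encode this dependence (an illustrative convention, not prescribed by the text) is a 3-D array P[a, s, s'] holding P(next state = s' | current state = s, action = a):

```python
import numpy as np

# P[a, s, s'] = probability of moving to s' given current state s and action a.
# Two actions, two states; all numbers are purely illustrative.
P = np.array([
    [[0.9, 0.1],   # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],   # action 1
     [0.6, 0.4]],
])

# Every (action, state) pair must define a distribution over next states.
assert np.allclose(P.sum(axis=2), 1.0)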
Question 8
The well-known bandit problem in reinforcement learning is an example of a
a) Markov Decision Process.
b) Markov Reward Process.
a) Markov Decision Process.
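As a hedged illustration: a bandit is an MDP whose state never changes, so a policy only has to trade off exploring and exploiting actions. The sketch below uses epsilon-greedy action selection with invented arm payoffs:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])  # illustrative arm payoffs
q = np.zeros(3)                          # estimated value per arm
n = np.zeros(3)                          # pull counts
eps = 0.1

for t in range(1000):
    # Single-state MDP: only the action matters, there is no state to track.
    a = rng.integers(3) if rng.random() < eps else int(np.argmax(q))
    reward = rng.normal(true_means[a])
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]       # incremental mean update

print(q.round(2))  # rough estimates of true_means
```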
Question 9
In a Markov Decision Process, the discount factor motivates the decision maker to
a) favor taking actions early, rather than postpone them indefinitely.
b) favor taking actions late, rather than early.
a) favor taking actions early, rather than postpone them indefinitely.
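A tiny numeric illustration (γ = 0.9 chosen arbitrarily): the same reward of 100 shrinks in present value the longer it is postponed:

```python
gamma = 0.9  # illustrative discount factor
for t in [0, 1, 5, 10]:
    # A reward of 100 received t periods from now is worth 100 * gamma**t today.
    print(t, round(100 * gamma**t, 2))
# 0 100.0 | 1 90.0 | 5 59.05 | 10 34.87 -> postponing the action erodes its value.
```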
Question 10
A Markov Decision Process can be solved by
a) brute force evaluation.
b) value iteration.
c) policy iteration.
d) dynamic programming.
e) all of the above.
e) all of the above.
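As an illustration of one of these methods, here is a minimal value-iteration sketch for the tiny invented MDP used above; it repeatedly applies the Bellman optimality backup until the values converge:

```python
import numpy as np

# Tiny MDP: P[a, s, s'] and a reward per (action, state) pair (illustrative).
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.6, 0.4]],
])
R = np.array([[5.0, -1.0],    # reward for action 0 in each state
              [2.0,  3.0]])   # reward for action 1 in each state
gamma, v = 0.9, np.zeros(2)

# Value iteration: apply the Bellman optimality backup until convergence.
for _ in range(500):
    q = R + gamma * (P @ v)   # q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] v[s']
    v_new = q.max(axis=0)     # best achievable value per state
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

print(v, q.argmax(axis=0))    # optimal state values and the greedy policy
```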
Question 11
A mover/stayer model is a special type of Markov chain model that differentiates between two types of customers: movers, who switch states, and stayers, who remain in their initial state. The latter can represent stable, loyal customers, since they never leave their initial state. Movers make transitions according to a Markov chain with transition matrix T. Let s represent the vector of stayers and m the vector of movers. The state distribution after one period then becomes:
a) s×T + m.
b) (s + m)×T.
c) s + m×T.
d) s + m.
c) s + m×T.
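A minimal numpy sketch of one period of a mover/stayer update, with an invented three-state example; only the movers are pushed through T:

```python
import numpy as np

# Illustrative split of a customer base over 3 states into stayers and movers.
s = np.array([0.30, 0.10, 0.05])   # stayers: never leave their state
m = np.array([0.25, 0.20, 0.10])   # movers: follow the Markov chain T
T = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

# After one period only the movers transition: s + m T.
print(s + m @ T)
```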
Question 12
Customers can migrate between states for various reasons, such as
a) marketing actions.
b) competitor actions.
c) macro-economic effects.
d) changing customer needs.
e) all of the above.
e) all of the above.
Question 13
Consider the following statement: “The most extreme example of a stable migration matrix is the identity matrix I.”
This statement is
a) correct.
b) not correct.
a) correct.
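A one-line check (with an illustrative distribution): with T = I, any state distribution is reproduced exactly, i.e. nobody ever migrates:

```python
import numpy as np

p0 = np.array([0.5, 0.3, 0.2])   # any state distribution (illustrative)
I = np.eye(3)                    # identity as the migration matrix
print(p0 @ I)                    # unchanged: the most stable matrix possible
```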
Question 14
Modeling why customers migrate between states can be very useful,
a) from an explanatory perspective.
b) from a predictive perspective.
c) both from an explanatory and a predictive perspective.
c) both from an explanatory and a predictive perspective.
Question 15
A common technique to model customer migrations is
a) linear regression.
b) decision trees.
c) neural networks.
d) cumulative logistic regression.
d) cumulative logistic regression.
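A hedged sketch of fitting such a model, assuming statsmodels (≥ 0.13, where OrderedModel lives in statsmodels.miscmodels.ordinal_model) is available; all feature names and data below are fabricated for illustration:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)

# Synthetic example: predict an ordered customer state (0 < 1 < 2)
# from two invented features; data are fabricated for illustration only.
X = pd.DataFrame({"recency": rng.normal(size=500),
                  "spend": rng.normal(size=500)})
latent = 1.2 * X["spend"] - 0.8 * X["recency"] + rng.logistic(size=500)
y = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf], labels=[0, 1, 2])

# Cumulative (proportional-odds) logistic regression; note that OrderedModel
# estimates thresholds itself, so no constant is added to the features.
model = OrderedModel(y.astype(int), X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```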