Statistics Flashcards

1
Q

Explain the balance between exploration and exploitation in clinical trials

A

You want to explore new types of medicine, but you have to trade this off against exploitation, that is, trying to make the patient well using methods you already know work.

2
Q

Explain the epsilon-greedy approach in a multi-armed bandit algorithm

A

Select the action with the highest expected return (exploitation), but with some probability epsilon select a random action instead (exploration)
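
The selection rule can be sketched in a few lines of Python (the names `q_values` and `epsilon_greedy` are illustrative, not from any particular library):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a random arm (explore),
    otherwise pick the arm with the highest estimated return (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```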

3
Q

What is softmax exploration (in the multi armed bandit algorithm)?

A

The probability of selecting an action grows with its expected return (typically proportional to exp(Q(a)/τ)), so with two similar, highly rated actions, both will be selected given some time
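
One common form is Boltzmann exploration, where selection probabilities are proportional to exp(Q(a)/τ); a minimal sketch (function names are illustrative):

```python
import math
import random

def softmax_probs(q_values, temperature=1.0):
    """Selection probabilities proportional to exp(Q(a) / temperature)."""
    prefs = [math.exp(q / temperature) for q in q_values]
    total = sum(prefs)
    return [p / total for p in prefs]

def softmax_action(q_values, temperature=1.0):
    """Sample one arm according to the softmax probabilities."""
    return random.choices(range(len(q_values)),
                          weights=softmax_probs(q_values, temperature))[0]
```

Lower temperatures concentrate probability on the best arm; higher temperatures approach uniform random exploration.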

4
Q

Explain the following term in the Upper Confidence Bound algorithm (from the multi-armed bandit), with respect to exploration/exploitation:

max_a{Q(a) + sqrt(2*log t / N(a))}

A

You want to maximize the expected value of an action Q(a) (exploitation) while also maximizing the sqrt term (exploration), which keeps growing for as long as the action is not selected
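
A sketch of the rule, assuming `q[a]` holds empirical mean rewards and `n[a]` the pull counts (both names are illustrative):

```python
import math

def ucb_action(q, n, t):
    """Pick argmax_a of Q(a) + sqrt(2 * log(t) / N(a))."""
    def score(a):
        if n[a] == 0:
            return float('inf')  # force one initial pull per arm
        return q[a] + math.sqrt(2 * math.log(t) / n[a])
    return max(range(len(q)), key=score)
```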

5
Q

What is the difference between probability mass function and probability density function?

A

PMF is for discrete variables, PDF is for continuous variables.

6
Q

What is the difference between a binomial and a Bernoulli distribution?

A

The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution)
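
The reduction is easy to check numerically (a small sketch with hypothetical helper names):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k successes in n independent trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bernoulli_pmf(k, p):
    """P(outcome k) for a single trial: p if k == 1, else 1 - p."""
    return p if k == 1 else 1 - p
```

With n = 1 the binomial PMF coincides with the Bernoulli PMF for both outcomes.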

7
Q

What is sequential decision making?

A

In artificial intelligence, sequential decision making refers to algorithms that take the dynamics of the world into account, deferring parts of the problem until they must be solved.

8
Q

What is the difference between a multi armed bandit and the contextual bandit?

A

The contextual bandit also uses information about the state of the environment. For example, my Quora feed may use some kind of contextual bandit, where exploration and exploitation are balanced to provide high-quality content (the M.A.B. part) while also using the contextual information (about me, the user).

9
Q

What three criteria are met in a Poisson Process?

A
  1. Events are independent of each other. The occurrence of one event does not affect the probability another event will occur.
  2. The average rate (events per time period) is constant.
  3. Two events cannot occur at the same time.
10
Q

What is the name of a Python library that offers a high-level interface for MCMC algorithms?

A

PyMC3

11
Q

Can Gaussian Processes be used for tuning neural network hyperparams?

A

Yes. Acquiring the optimal hyperparams is expensive, and using a Gaussian Process can be much more effective than a brute force grid search.

12
Q

What is the computational cost of a Gaussian Process with d dimensions/features and n training samples?

A

O(n³) + O(dn²). The first term is for the matrix inversion; the second is for prediction.

13
Q

If you run a relay stage close to the average pace of the runners in the stage, and you observe the distribution of the other runners, what distribution will you observe? (Hint: Inspection paradox)

A

A bimodal distribution. The true distribution of paces may peak around your own pace, but because you run at that pace, you rarely overtake (or are overtaken by) those runners, so you mostly observe runners who are much faster or much slower. And since you run in a relay, your initial ranking is random, that is, the best runners are not necessarily in front of you.

14
Q

What is the difference between residuals and error?

A

The errors are the deviations of the observations from the (true) population mean, while the residuals are the deviations of the observations from the sample mean. The sum of the residuals is (by definition) 0; this is generally not the case for errors.

15
Q

With increasing variance in the distribution, should you focus more (or less) on exploration vs exploitation (in Thompson sampling)?

A

More on exploration.

16
Q

What is the Poisson distribution?

A

A discrete distribution expressing the probability of a given number of events occurring in a fixed interval of time. For example, the number of emails received per day may (if the conditions are satisfied) follow a Poisson distribution
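
Its PMF is P(K = k) = λᵏ e^(−λ) / k!, which can be sketched directly (the function name is illustrative):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events in one interval at rate lam."""
    return lam**k * exp(-lam) / factorial(k)
```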

17
Q

What does d_1, …, d_n ~ Poisson(theta) say?

A

That each sample d_i follows a Poisson distribution, with expected value theta

18
Q

Conjugacy is required to do exact Bayesian inference. What does this imply?

A

That the posterior distribution p(theta|x) is in the same family (such as gamma) as the prior distribution p(theta). This happens when the prior is a “conjugate prior” for the likelihood function p(x|theta). For example, the Beta distribution is a conjugate prior to the binomial and Bernoulli distributions.

19
Q

What is random forest?

A

An ensemble algorithm where you train multiple decision trees, each on a random subset (bootstrap sample) of your original data. During prediction, let all your “subset decision trees” make a decision, and select the option that most trees predicted.

20
Q

How does the decision tree algorithm work?

A

It’s a divide & conquer approach with respect to the features in the data. Assume that we have three categorical features A, B, C, and output 0 or 1. First, split the data by all possible values of feature A. If the data in one subset has only output 1 (or only 0), it is a “pure subset”, and you can stop. If not, continue splitting on feature B and so on, until you are left with only pure subsets.
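
The splitting and purity check can be sketched as follows (helper names are illustrative):

```python
def is_pure(labels):
    """A subset is pure if all its labels agree."""
    return len(set(labels)) <= 1

def split_by_feature(rows, labels, feature):
    """Partition the data by the value of one categorical feature."""
    subsets = {}
    for row, label in zip(rows, labels):
        rs, ls = subsets.setdefault(row[feature], ([], []))
        rs.append(row)
        ls.append(label)
    return subsets
```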

21
Q

What is pruning (in a decision tree setting)

A

Construct an overfitted tree, and then delete leaves based on selected criteria.

22
Q

Explain the difference between bagging and boosting

A

Bagging: you create an ensemble of models, each trained on a bootstrap sample of the data (drawn with replacement, so each dataset is different), and output the average (or some other aggregate) of their predictions. This reduces variance and counteracts overfitting.

Boosting: you train sequentially, selecting the points with high error in the current model to emphasize in the next round. Hence, you train selectively on the difficult parts of the dataset. This can reduce bias and variance, but you may also overfit.

23
Q

What is meant by additive modelling? (in gradient boosting terminology)

A

When modelling a complex function, you can add several simple functions (e.g. 30, 2x, sin x) to model something very complex.

24
Q

What are weak learners (in gradient boosting terminology)

A

Simple models that perform only slightly better than chance; in gradient boosting, many weak learners are added together to learn complicated functions.

25
Q

Which quantity minimizes the L1 loss? (Σ | y_i - ??? |)

A

The median (and not the mean, which minimizes the L2 loss, i.e. the mean squared error).
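
This is easy to verify numerically (a small sketch):

```python
import statistics

def l1_loss(data, c):
    """Sum of absolute deviations from a constant prediction c."""
    return sum(abs(y - c) for y in data)

data = [1.0, 2.0, 2.0, 3.0, 10.0]
# The median beats the mean on L1, and no constant does better than the median.
```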

26
Q

What is the MAE?

A

The mean absolute error, defined as MAE = (1/n) Σ |e_i|

27
Q

Explain the histogram-based methods used on features in lightgbm

A

To reduce training time, feature values are aggregated into bins, so the cost of evaluating candidate splits drops from O(n_data) to O(n_bins) per feature. However, using fewer (wider) bins reduces the accuracy, and finding the optimal number of bins is hard.

28
Q

Name four methods lightgbm utilizes to speed up the rate determining splitting step in its algorithm

A
  1. Histogram (binning features together)
  2. Ignoring sparse inputs (i.e. ignoring zeros before calculating optimal splitting)
  3. Subsampling (GOSS: assuming low-gradient data points are already well trained and do not need as much focus)
  4. Exclusive feature bundling (merging mutually exclusive features, i.e. features that rarely take nonzero values at the same time, into a single feature)
29
Q

What do the L1 and L2 losses optimize?

A

The mean absolute error, and the mean square error, respectively. (Or similarly, the least absolute deviations, and the least square errors, respectively).

30
Q

How do gradient boosting machines (GBM) perform gradient descent when you use MSE as the loss function?

A

The derivative of (half) the MSE with respect to each prediction is (ŷ_i - y_i), so the negative gradient is the residual (y_i - ŷ_i), i.e. the distance between the exact and predicted values; fitting these residuals is exactly what each new model in GBM does.
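
A toy sketch of this loop, using the weakest possible learner (a constant predicting the mean of the current residuals); everything here is illustrative, not a real GBM implementation:

```python
def fit_stump(x, y):
    """Weakest possible learner: predict the mean of y everywhere."""
    mean = sum(y) / len(y)
    return lambda xi: mean

def gradient_boost(x, y, n_rounds=200, lr=0.1):
    """Each round fits a weak learner to the current residuals
    (the negative MSE gradient) and adds it to the ensemble."""
    models = []
    residuals = list(y)
    for _ in range(n_rounds):
        model = fit_stump(x, residuals)
        models.append(model)
        residuals = [r - lr * model(xi) for xi, r in zip(x, residuals)]
    return lambda xi: sum(lr * m(xi) for m in models)
```

With this constant learner the ensemble converges to the mean of y; a real GBM uses small decision trees instead, so it can fit structure in x.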

31
Q

Explain the difference between using gradient descent on a neural network, and gradient boosting machines (GBM)

A

When using gradient descent on a neural network, we use gradient descent to update the weights based on the training data. In GBM, we use gradient descent to add models based on the predicted results from our current model: it is helpful to think that in GBM, we are sweeping prediction space.

32
Q

What is the confusion matrix?

A

A table for evaluating a classifier, with true negatives/positives on the diagonal, and false negatives/positives on the off-diagonal elements.

33
Q

When does it make sense to log transform data?

A

If you are interested in relative differences. For example, a stock price increase from 1 to 1.1 or 100 to 110 both have a relative increase of 10%. These relative differences are captured by the log transform: lg 1.1 - lg 1.0 = lg (1.1/1.0) = lg(110/100) = lg 110 - lg 100. Without the log transform, the increase from 1 to 1.1 is poorly captured by the model.
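
Checking the identity numerically (plain Python, base-10 logs):

```python
from math import log10

# Equal relative changes give equal differences on the log scale.
small_move = log10(1.1) - log10(1.0)   # price going 1 -> 1.1
large_move = log10(110) - log10(100)   # price going 100 -> 110
```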

34
Q

Why may ML algorithms perform worse if one applies too many features?

A

Then there’s a risk that the algorithm is simply fitting noise, i.e. that it overfits.

35
Q

Explain the filter method within feature selection, and when to use it.

A

The filter method is to
a) drop features that have low correlation with the output variable, and subsequently,
b) drop remaining features that are correlated with each other (that is, keep only one of them if two are correlated).
This method is extremely fast and may be good if you have many features. However, it is not very accurate.

36
Q

Explain the backward elimination method, which is a subset of the wrapper methods within feature selection.

A

a) Evaluate model with all features.
b) Remove the worst-performing feature and re-evaluate the model; hopefully this improves accuracy.
c) Continue until accuracy stops improving.
This method is relatively accurate, but may come with a large computational cost.
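
A greedy sketch of steps a) to c); `score_fn` is a stand-in for "train and cross-validate the model on these features" (higher is better):

```python
def backward_elimination(features, score_fn):
    """Repeatedly drop a feature whose removal improves the score;
    stop when no single removal helps."""
    current = list(features)
    best = score_fn(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in list(current):
            candidate = [g for g in current if g != f]
            s = score_fn(candidate)
            if s > best:
                best, current, improved = s, candidate, True
                break
    return current, best
```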

37
Q

What is the n_estimators parameter in lightgbm, and what can happen if it’s too large?

A

It is the number of boosting rounds, essentially the number of rounds one trains on the errors of the decision trees. If it becomes too large, the model will overfit (imagine n_estimators being 10⁶ without a proper stopping criterion: then the model would obviously overfit)

38
Q

How can you detect overfitting with k fold cross validation?

A

If the error differs across the folds (i.e. large variance across the k folds), it may be an indication of overfitting.

39
Q

Given two causal possibilities,

  1. A -> B: P(A, B) = P(A)*P(B|A)
  2. B -> A: P(A, B) = P(B)*P(A|B)

and assume that A->B is the causal effect.

When applying a trained model on the transfer data, assuming the marginal probabilities are slightly different in the transfer data, why do you need fewer samples to get low regret on model 1?

A

If A->B is the causal relation, P(B|A) will stay constant even if you change P(A).

If e.g. A is a discrete variable with 10 possibilities, you will now have a model with 10 parameters (roughly).

In the second case, both P(B) and P(A|B) will change if you change P(A), hence you get a model with 10x10 parameters. In such a case, you need more transfer data to get a good model with low regret.

40
Q

If you have infinite computational power, and know all relevant parameters in your system, how can you always find the causal relations?

A

By trying all possible causal models on the transfer data, and observing which gives the fastest learning rate.