Week 6: Regression and Classification Flashcards

1
Q

How do you make a prediction vector using knn in R?

A

knn(train, test, cl, k) from the class package
train is the training set containing only the predictor variables, test is the test set containing only the predictor variables, cl is the vector of training-set outcomes, and k is the desired number of neighbours
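A minimal sketch, assuming the class package and the built-in iris data (the split and k are arbitrary):

library(class)                          # provides knn()
idx <- sample(nrow(iris), 100)          # arbitrary 100-row training split
pred <- knn(train = iris[idx, 1:4],     # training predictors only
            test = iris[-idx, 1:4],     # test predictors only
            cl = iris$Species[idx],     # training outcomes
            k = 3)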

2
Q

What are interaction terms?

A

Y = B0 + B1X1 + B2X2 + B3X1X2 + e
A term formed from the product of predictors (here X1X2), allowing the effect of one predictor on Y to depend on the value of another
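In R, an interaction can be fit with the * formula operator; a hedged sketch with hypothetical names dat, Y, X1 and X2:

fit <- lm(Y ~ X1 * X2, data = dat)   # X1 * X2 expands to X1 + X2 + X1:X2
summary(fit)                         # the X1:X2 row estimates B3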

3
Q

What is overfitting?

A

following the errors or noise too closely

4
Q

What is variance?

A

the amount by which f* would change if we estimated it using a different training data set. In general, more flexible methods have higher variance

5
Q

What is a generative model for classification?

A

Model the distribution of the predictors X separately in each of the response classes, then use Bayes' theorem to flip these around into estimates for Pr(Y=k|X=x)

6
Q

Why does k-fold CV give more accurate estimates of MSE than LOOCV?

A

LOOCV has higher variance than k-fold CV because its n fitted models are trained on nearly identical data; their outputs are therefore highly correlated, and the mean of highly correlated quantities has higher variance than the mean of less correlated ones

7
Q

What is the residual sum of squares (RSS)?

A

the sum of all the squared residuals:
RSS = e1^2 + e2^2 + … + en^2
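In R, assuming a fitted model object m:

sum(residuals(m)^2)   # RSS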

8
Q

What is confounding?

A

When a regression on a single predictor gives a very different result from a regression that also includes other relevant predictors

9
Q

How do you find f with a parametric method?

A

First make an assumption about the functional form or shape of f, then use a procedure that uses the training data to fit or train the model

10
Q

How do you predict the outcome of the model in R?

A

predict(model)

11
Q

How do you interpret B0 and B1 when there is a dummy variable that is 1 when someone is a student and 0 when they are not?

A

B0 can be interpreted as the average Y among non-students, B0 + B1 as the average Y among students, and B1 as the average difference in Y between students and non-students

12
Q

What is the aim of linear discriminant analysis?

A

Find estimates of fk(x) in order to estimate pk(x), thereby approximating the Bayes classifier

13
Q

What is sensitivity/recall?

A

the percentage of Trues that are identified correctly = TP/(TP+FN)

14
Q

What is the difference between prediction and inference?

A

Prediction: predict Y using Y* = f*(X),
Inference: understanding the association between Y and X

15
Q

What is the advantage and disadvantage of parametric methods?

A

Advantage: reduce the problem of estimating f down to one of estimating a set of parameters
Disadvantage: will usually not match the true unknown form of f

16
Q

What do you need to check the significance of multiple coefficients together, e.g. B1 = B2 = … = Bp = 0?

A

The F-statistic; a large F-statistic is evidence to reject the null hypothesis

17
Q

What is k-fold cross validation?

A

Randomly divide the set of observations into k groups (or folds) of approximately equal size. The first fold is treated as a validation set and the method is fit on the remaining k-1 folds. Repeat k times, each time holding out a different fold, to get k estimates of the MSE, then average them
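A minimal manual sketch, assuming a data frame dat with response Y (all names hypothetical):

k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))   # random fold labels
mse <- numeric(k)
for (i in 1:k) {
  fit <- lm(Y ~ ., data = dat[folds != i, ])        # fit on the other k-1 folds
  pred <- predict(fit, newdata = dat[folds == i, ])
  mse[i] <- mean((dat$Y[folds == i] - pred)^2)      # MSE on the held-out fold
}
mean(mse)                                           # k-fold CV estimate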

18
Q

How do you quantify the test error on classification problems?

A

use the number of mis-classified observations rather than the MSE

19
Q

How can you detect non-linearity of data with a linear model?

A

Plot the residuals ei versus the predictor xi

20
Q

What is the error rate?

A

1 - Accuracy

21
Q

What happens to MSE as model flexibility increases?

A

training MSE will decrease, but test MSE may not

22
Q

How do you interpret the coefficients of multinomial logistic regression, with stroke, overdose and epilepsy as the 3 classes?

A

If epilepsy is set as the baseline, then B(stroke)0 is interpreted as the log odds of stroke versus epilepsy given that x1 = … = xp = 0. A one-unit increase in Xj is associated with a B(stroke)j increase in the log odds of stroke over epilepsy

23
Q

What is the logistic function?

A

p(X) = e^(B0 + B1X) / (1 + e^(B0 + B1X))

24
Q

How do you produce a table of observed vs predicted results when predictions are discrete classes?

A

table(true=, predicted=)
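For example, with hypothetical vectors y_test (observed classes) and pred (predicted classes):

table(true = y_test, predicted = pred)   # rows = observed, columns = predicted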

25
Q

When selecting a level of smoothness for non-parametric methods, what is the trade-off?

A

Low levels of smoothness can lead to overfitting, while high levels of smoothness give a fit that is too rigid to capture real structure

26
Q

What is reducible error?

A

f* will not be a perfect estimate of f, and this inaccuracy introduces some error. The error is reducible because we can potentially improve the accuracy of f*

27
Q

What is the standard multiple linear regression formula?

A

Y = B0 + B1X1 + B2X2 + … + BpXp + e

28
Q

What is the R squared statistic?

A

Measures the proportion of variability in Y that can be explained using X
R^2 = (TSS-RSS)/TSS
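Computed by hand in R, assuming a fitted model m and response vector y:

tss <- sum((y - mean(y))^2)   # total sum of squares
rss <- sum(residuals(m)^2)    # residual sum of squares
(tss - rss) / tss             # R^2; also available as summary(m)$r.squared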

29
Q

What is the positive predictive value (PPV) / precision?

A

TP/(TP+FP)

30
Q

What is irreducible error?

A

e: cannot be predicted using X, therefore the error introduced by e cannot be reduced

31
Q

How do you assess the accuracy of the coefficient estimates?

A

Compute the standard error of B0 and B1

32
Q

What does fk(x) represent in linear discriminant analysis?

A

Pr(X|Y=k)

33
Q

How can you assess collinearity between 2 variables and between multiple variables?

A

2 variables: correlation matrix
multiple variables: variance inflation factor (VIF); a value exceeding 5 or 10 indicates a problematic amount of collinearity
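For example, assuming the car package and a fitted model m:

car::vif(m)   # one VIF per predictor; values above 5-10 suggest collinearity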

34
Q

How do you alter the sensitivity and specificity of a classifier?

A

Change the threshold at which an observation is assigned to a class - default is 0.5

35
Q

How does the k-nearest-neighbours classifier work?

A

Given a positive integer K and a test observation x0, the KNN classifier first identifies the K points in the training data that are closest to x0, represented by N0. It then estimates the conditional probability for class j as the fraction of points in N0 whose response values equal j. KNN classifies the test observation x0 to the class with the largest probability.

36
Q

What is the problem with regression trees? What can they be used for?

A

Tend to overfit
Use them as a basic building block for ensembles

37
Q

What is the ROC curve?

A

A plot of the true positive rate against the false positive rate as the classification threshold varies. The area under the curve (AUC) gives the overall performance of a classifier

38
Q

How do you test for significance of the coefficients?

A

Hypothesis test: compute the t-statistic using the standard errors. The p-value is the probability of observing a value equal to or larger than |t|. A small p-value indicates it is unlikely we would observe such a substantial association between the predictor and the response by chance.
If the p-value is small, reject the hypothesis that the coefficient is 0

39
Q

What type of model is polynomial regression?

A

It is still a linear model

40
Q

What will happen if you increase the cutoff value?

A

fewer true positives and fewer false positives (i.e. fewer positive predictions overall)

41
Q

What does the Bayes classifier do?

A
Assigns an observation X = x to the class for which pk(x) is largest (i.e. to the most likely class given its predictor values).
It produces the lowest possible test error rate, called the Bayes error rate
42
Q

How do you assess the accuracy of the linear regression model?

A

residual standard error:
RSE = sqrt[RSS / (n-2)]
It is the average amount that the response will deviate from the true regression line.
It is an absolute measure of the lack of fit of the model
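In R, assuming a fitted linear model m:

summary(m)$sigma                             # residual standard error
sqrt(sum(residuals(m)^2) / df.residual(m))   # the same value by hand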

43
Q

What happens to the linear model if the error terms are correlated?

A

the estimated standard errors will be too low, giving an unwarranted sense of confidence in the model

44
Q

How do you produce a table of observed vs predicted results when the classifier outputs probabilities?

A

pred_lr <- factor(pred_prob > 0.5, labels = c("No", "Yes"))   # threshold the predicted probabilities at 0.5
table(true=, predicted = pred_lr)

45
Q

What is the curse of dimensionality?

A

as p increases (more dimensions), a given observation has no nearby neighbours

46
Q

What is a high leverage point?

A

An unusual value for xi
for multiple regression: point that is unusual in terms of the full set of predictors

47
Q

What is the most commonly used measure of the quality of fit? What is the formula?

A

mean squared error
MSE = (1/n) SUM(yi - yi*)^2

48
Q

What are the 2 assumptions for linear discriminant analysis when p=1?

A
  • fk(x) is normal
  • there is a common variance across all K classes
49
Q

What do TSS and RSS each measure? And what does TSS - RSS measure?

A

TSS measures the total variance in the response Y before the regression is performed
RSS measures the variability that is left unexplained after performing the regression
TSS - RSS measures the amount of variability in the response that is explained by performing the regression

50
Q

How do you create a linear model in R and get its coefficients, R^2, etc.?

A

model <- lm(Y ~ X, data = dat)   # dat is a hypothetical data frame
summary(model)                   # coefficients, R^2, RSE and p-values

51
Q

How can you write the expected test MSE?

A

E(MSE) = Var(f*(x0)) + [Bias(f*(x0))]^2 + Var(e), i.e. the variance of the estimate plus its squared bias plus the irreducible error

52
Q

How do you interpret B1 in a logistic regression model?

A

increasing X by one unit changes the log odds by B1

53
Q

What is R-squared always equal to in multiple linear regression?

A

Cor(Y, Y*)^2, the square of the correlation between the response and the fitted values

54
Q

What does Bayes classifier correspond to in a two-response value setting?

A

predicting class one if Pr (Y = 1 | X = x0) > 0.5

55
Q

How do you find the confidence intervals of the coefficient estimates?

A

B0* ± 2·SE(B0*)
where 2 is actually the 97.5% quantile of a t-distribution with n-2 degrees of freedom
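In R, assuming a fitted model m:

confint(m)   # 95% confidence intervals for all coefficients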

56
Q

How do you fit a logistic regression model using R?

A

lr_mod <- glm(Y ~ X, data = dat, family = binomial)   # dat and the formula are hypothetical

57
Q

How do you find the coefficients of a simple linear regression model?

A

Use the least squares approach: choose B0* and B1* to minimise the RSS (residual sum of squares)

58
Q

What happens to MSE when model is overfitted?

A

small training MSE but large test MSE

59
Q

How many dummy variables will there be when there is a predictor with more than 2 levels?

A

One less than the number of levels, because there is a baseline level with no dummy variable

60
Q

What is the default when using predict with logistic? How do you change it?

A

It automatically outputs the log odds (the link scale). To change it:
predict(lr_mod, type = "response")

61
Q

What is the trade-off for a low or high K value in KNN classifier?

A

A very low K value means the decision boundary is overly flexible and finds patterns in the data that don’t correspond to the Bayes decision boundary. This classifier has low bias but very high variance
A very high K value means the classifier becomes less flexible and produces a decision boundary that is close to linear. This is a low-variance but high-bias classifier

62
Q

What is reinforcement learning?

A

Develop an agent that improves its performance based on interactions with the environment

63
Q

What is autoML and its code in R?

A

AutoML performs all the steps of comparing different models using cross-validation, choosing parameters automatically
fit_aml <- h2o.automl(y = "outcome", training_frame = train)   # one option: h2o's AutoML; "outcome" and train are hypothetical names

64
Q

How do you put a categorical variable in a linear regression model?

A

Use dummy variables: 0 for one level and 1 for the other, or -1 and 1
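In R this happens automatically when the predictor is a factor; a sketch with hypothetical names dat and student:

dat$student <- factor(dat$student)   # levels e.g. "No" (baseline) and "Yes"
fit <- lm(Y ~ student, data = dat)   # the studentYes coefficient is B1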

65
Q

What happens to the MSE as you increase flexibility?

A

bias initially decreases faster than variance increases, so the MSE declines. But at some point increasing flexibility has more impact on the variance, so the MSE increases.

66
Q

How does k nearest neighbours regression work?

A

Given a value for K and a prediction point x0, KNN regression first identifies the K training observations that are closest to x0 represented by N0. Then it estimates f(x0) using the average of all the training responses in N0

67
Q

What does the accuracy of Y* as a prediction for Y depend on?

A

irreducible and reducible error

68
Q

What is a prediction interval?

A
  • Prediction intervals are used to answer the question how much will Y vary from Y*
  • Prediction intervals are always wider than confidence intervals because they incorporate both the reducible error and irreducible error
69
Q

What is specificity?

A

the percentage of Falses that are identified correctly = TN/(TN+FP)

70
Q

What is accuracy?

A

(TP+TN)/(TP+FP+FN+TN)

71
Q

What are the assumptions of the linear model? (3)

A
  • Additivity assumption: that the association between a predictor X and the response Y does not depend on the values of the other predictors
  • the error terms e1, e2, … are uncorrelated
  • the error terms have a constant variance, Var(ei) = sigma squared
72
Q

How do you fit a linear discriminant analysis model in R?

A

lda_mod <- lda(Y ~ X, data = dat)   # lda() comes from the MASS package; names are hypothetical

73
Q

What is boosting?

A

An ensemble of weak learners. Grow trees that are deliberately too simple, repeatedly giving more weight to observations with large residuals, then combine the trees

74
Q

What is bias?

A

the error that is introduced by approximating a real-life problem which may be very complicated, by a simpler model. In general, more flexible methods result in less bias

75
Q

What is the common task framework /benchmarking?

A

There is a dataset, a set of people trying to find a prediction rule, and a referee.
The referee runs each submitted prediction rule against a sequestered testing dataset that is hidden from the participants
The referee objectively and automatically reports the score achieved by the submitted rule
Over time this results in a declining error rate

76
Q

What are the odds from a logistic function?

A

p(X) / (1 - p(X)) = e^(B0 + B1X)

77
Q

What are classification trees?

A

Recursive partitioning: find the split that makes the observations within each resulting group as similar as possible on the outcome, then repeat within each group. Stop when a stopping parameter is reached

78
Q

What is a code for splitting data into test and train in R?

A

idx <- sample(nrow(dat), 0.8 * nrow(dat))   # e.g. an 80/20 split; dat is a hypothetical data frame
train <- dat[idx, ]
test <- dat[-idx, ]

79
Q

When will parametric methods outperform non-parametric methods?

A

When there is a small number of observations per predictor

80
Q

What are the log odds from a logistic function?

A

log(p(X)/(1-p(X))) = B0 + B1X

81
Q

What are the three approaches for deciding which variables to include in a model? How do they work?

A
  • Forward selection: begin with null model. Then fit p simple linear regressions and add to the null model the variable that results in the lowest RSS. Continue adding variables until some stopping rule is satisfied
  • Backward selection: start with all variables, remove the variable with the largest p-value, continue removing variables until a stopping rule is reached
  • Mixed selection: combination of forward and backward. Start with no variables in the model. Add the variable that provides the best fit. Continue to add variables one by one. If at any point the p-value for one of the variables in the model rises above a certain threshold, then remove that variable from the model. Continue until all the variables in the model have a sufficiently low p-value and all variables outside the model would have a large p-value if added to the model
82
Q

What is the F1 score and what is the point of it?

A
F1 score = 2 × (precision × recall) / (precision + recall), the harmonic mean of precision and recall.
It is not affected by uneven class distributions
83
Q

What is the hierarchical principle?

A

if we include an interaction in a model we should also include the main effects, even if the p-values associated with their coefficients are not significant

84
Q

Code to generate sequence of 1000 equally spaced values from 0 to 40

A

seq(0, 40, length.out = 1000)

85
Q

What is Bayes theorem in general terms?

A

P(A|B) = P(B|A) × P(A) / P(B)

86
Q

What is the simple linear regression model?

A

Y = B0 + B1X + e

87
Q

What is calibration?

A

A model is perfectly calibrated if for any probability value p, a prediction of a class with confidence p is correct 100*p percent of the time

88
Q

What is an advantage and a disadvantage of the LOOCV approach?

A

Advantage: It has less bias because the training set is bigger
Disadvantage: time consuming to implement

89
Q

In the formula Y = f(X) + e, what are the properties of e?

A

e is a random error term which is independent of X and has mean 0

90
Q

What is supervised learning?

A

For each observation xi, there is an associated response measurement yi

91
Q

What is a logistic regression model?

A

models the probability that Y belongs to a particular category

92
Q

What are 2 drawbacks of the validation set approach?

A
  • validation estimate of the test-error rate can be highly variable depending on which observations are included in the training set and which are included in the validation set
  • not as many observations in the training set
93
Q

How would you turn a vector of "Yes" and "No" values into a vector of 1s and 0s?

A

ifelse(student == "Yes", 1, 0)

94
Q

What are the assumptions for linear discriminant analysis when p>1?

A

Assume that X = (X1, …, Xp) is drawn from a multivariate normal distribution with a class-specific mean vector and a common covariance matrix

95
Q

How can you detect non-constant variance of error terms? What is a solution?

A

Presence of a funnel shape in the residual plot. Transform Y to log(Y) or sqrt(Y)

96
Q

What is the negative predictive value (NPV)?

A

TN/(TN+FN)

97
Q

What are the 2 advantages of k-fold cross validation?

A
  • only has to be fit k times compared to n times in LOOCV
  • variability in the test error estimate is lower than when using the validation set approach
98
Q

How do you predict the outcome of the model in R based on new data?

A

predict(model, newdata = data)

99
Q

What is unsupervised learning?

A

For each observation we observe a vector of measurements xi but no response yi. Seek to understand relationship between observations

100
Q

What is a calibration plot?

A

predicted probability versus observed proportion, should be a straight line with slope 1

101
Q

What is multinomial logistic regression?

A

Classifying a response variable with more than 2 classes

102
Q

How can you detect correlation in the error terms?

A

Plot the residuals as a function of time. Adjacent residuals may have similar values if they are correlated

103
Q

What are non-parametric methods?

A

Do not make explicit assumptions about the functional form of f

104
Q

How can you detect outliers?

A

Plot the standardised residuals. Those with absolute values greater than 3 may be outliers
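In R, assuming a fitted linear model m:

which(abs(rstandard(m)) > 3)   # indices of potential outliers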

105
Q

What happens when there is collinearity between the predictor variables?

A

Collinearity reduces the accuracy of the estimates of the regression coefficients, and causes the standard error to grow

106
Q

What is softmax coding for multinomial logistic regression?

A

Instead of selecting a baseline class, treat all K classes symmetrically and estimate coefficients for all K classes

107
Q

What is the validation set approach?

A

randomly divide the available set of observations into two parts, a training set and a validation set. The model is fit on the training set and the fitted model is used to predict the responses for the observations in the validation set

108
Q

What is the leave one out cross validation (LOOCV) approach?

A

A single observation (x1, y1) is used for the validation set and the remaining observations make up the training set.
Find the MSE and repeat this approach n times and get the average of the n MSE estimates
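A sketch using the boot package, assuming a gaussian glm fit on a hypothetical data frame dat:

library(boot)
fit <- glm(Y ~ X, data = dat)
cv.glm(dat, fit)$delta[1]   # LOOCV estimate of the test MSE (default K = n)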

109
Q

What is the advantage and disadvantage of non-parametric methods?

A

Advantage: have the potential to accurately fit a wider range of possible shapes for f
Disadvantage: a very large number of observations is required to obtain an accurate estimate for f

110
Q

What are random forests?

A

bagged trees with feature sampling. Make trees that are too complex and average over bootstrapped samples to cancel out the overfitting parts

111
Q

How do you predict the outcome of the model in R based on new data and get the upper and lower confidence intervals?

A

predict(model, newdata = data, interval = "confidence")

112
Q

What is used for hypothesis testing in logistic regression?

A

Z-statistic

113
Q

What is the residual ei for each data point?

A

ei = yi - yi*, the difference between the observed and the predicted response