Week 6: Regression and Classification Flashcards

1
Q

How do you make a prediction vector using knn in R?

A

knn(train, test, cl, k)
train is the training set containing only the predictor variables, test is the test set containing only the predictor variables, cl is the vector of training-set outcomes, and k is the number of nearest neighbours to use
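A minimal sketch using the class package (train_X, test_X, and train_y are hypothetical objects):

```
library(class)

# train_X, test_X: predictor-only data frames; train_y: training outcomes
pred <- knn(train = train_X, test = test_X, cl = train_y, k = 5)
```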

2
Q

What are interaction terms?

A

Y = B0 + B1X1 + B2X2 + B3X1X2 + e
A product of predictors added to the model so that the effect of one predictor on Y can depend on the value of another (here, the X1X2 term)
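In R an interaction is added with the * formula operator (Y, X1, X2, and df are placeholder names):

```
# Y ~ X1 * X2 expands to X1 + X2 + X1:X2, i.e. both main effects
# plus the interaction term B3*X1*X2
fit <- lm(Y ~ X1 * X2, data = df)
summary(fit)
```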

3
Q

What is overfitting?

A

Following the errors, or noise, in the training data too closely, so that the model captures patterns that do not generalise to new data

4
Q

What is variance?

A

the amount by which f* would change if we estimated it using a different training data set. In general, more flexible methods have higher variance

5
Q

What is a generative model for classification?

A

Model the distribution of the predictors X separately in each of the response classes, then use Bayes' theorem to flip these around into estimates of Pr(Y=k|X=x)

6
Q

Why does k-fold CV give more accurate estimates of MSE than LOOCV?

A

LOOCV has higher variance than k-fold CV: its n fitted models are trained on nearly identical data, so their outputs are highly correlated, and the mean of highly correlated quantities has higher variance than the mean of less correlated ones

7
Q

What is the residual sum of squares (RSS)?

A

The sum of all the squared residuals:
RSS = e1^2 + e2^2 + … + en^2

8
Q

What is confounding?

A

When a regression on a single predictor gives a very different result from a regression that also includes other relevant predictors, because the omitted predictors are correlated with both the included predictor and the response

9
Q

How do you find f with a parametric method?

A

First make an assumption about the functional form or shape of f, then use a procedure that uses the training data to fit or train the model

10
Q

How do you predict the outcome of the model in R?

A

predict(model)

11
Q

How do you interpret B0 and B1 when there is a dummy variable: 1 when someone is a student, 0 when they are not?

A

B0 can be interpreted as the average Y among non-students, B0 + B1 as the average Y among students, and B1 as the average difference in Y between students and non-students

12
Q

What is the aim of linear discriminant analysis?

A

Find estimates of fk(x) and use them to estimate pk(x) = Pr(Y=k|X=x), thereby approximating the Bayes classifier

13
Q

What is sensitivity/recall?

A

the percentage of Trues that are identified correctly = TP/(TP+FN)

14
Q

What is the difference between prediction and inference?

A

Prediction: predict Y using Y* = f*(X),
Inference: understanding the association between Y and X

15
Q

What is the advantage and disadvantage of parametric methods?

A

Advantage: reduce the problem of estimating f down to one of estimating a set of parameters
Disadvantage: will usually not match the true unknown form of f

16
Q

What do you need to check the significance of multiple coefficients together e.g. B0 = B1 = B2 = 0?

A

The F-statistic; a large F-statistic is evidence against the null hypothesis that all the coefficients are zero

17
Q

What is k-fold cross validation?

A

Randomly divide the set of observations into k groups (or folds) of approximately equal size. The first fold is treated as a validation set and the method is fit on the remaining k-1 folds. Repeat k times to get k estimates of the MSE, then average them
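A minimal hand-rolled sketch, assuming a data frame df with response y and predictor x:

```
k <- 10
folds <- sample(rep(1:k, length.out = nrow(df)))   # random fold labels

cv_mse <- sapply(1:k, function(i) {
  fit <- lm(y ~ x, data = df[folds != i, ])        # fit on the other k-1 folds
  held_out <- df[folds == i, ]
  mean((held_out$y - predict(fit, newdata = held_out))^2)  # MSE on fold i
})
mean(cv_mse)   # k-fold CV estimate of the test MSE
```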

18
Q

How do you quantify the test error on classification problems?

A

use the number of mis-classified observations rather than the MSE

19
Q

How can you detect non-linearity of data with a linear model?

A

Plot the residuals ei versus the predictor xi
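For example, assuming a fitted lm() object called model:

```
# Systematic curvature in this plot suggests a non-linear relationship
plot(predict(model), residuals(model),
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)
```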

20
Q

What is the error rate?

A

1 - Accuracy

21
Q

What happens to MSE as model flexibility increases?

A

training MSE will decrease, but test MSE may not

22
Q

How do you interpret the coefficients of multinomial logistic regression, with stroke, overdose, and epilepsy as the three classes?

A

If epilepsy is set as the baseline, then B(stroke)0 is interpreted as the log odds of stroke versus epilepsy when X1 = … = Xp = 0. A one-unit increase in Xj is associated with a B(stroke)j increase in the log odds of stroke over epilepsy

23
Q

What is the logistic function?

A

p(X) = e^(B0+B1X) / (1 + e^(B0+B1X))

24
Q

How do you produce a table of observed vs predicted results when the predictions are discrete classes?

A

table(true=, predicted=)

25
When selecting a level of smoothness for non-parametric methods, what is the trade-off?
Low levels of smoothness let the fit follow the noise in the training data too closely (overfitting), while high levels of smoothness can miss real structure in f (high bias)
26
What is reducible error?
f* will not be a perfect estimate of f, and this inaccuracy introduces some error. The error is reducible because we can potentially improve the accuracy of f*
27
What is the standard multiple linear regression formula?
Y = B0 + B1X1 + B2X2 + … + BpXp + e
28
What is the R squared statistic?
Measures the proportion of variability in Y that can be explained using X:
R^2 = (TSS - RSS)/TSS
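The identity can be checked by hand on a fitted lm() object (model is hypothetical):

```
y <- model.response(model.frame(model))  # observed response
rss <- sum(residuals(model)^2)           # residual sum of squares
tss <- sum((y - mean(y))^2)              # total sum of squares
1 - rss / tss                            # equals summary(model)$r.squared
```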
29
What is the positive predictive value (PPV) / precision?
TP/(TP+FP)
30
What is irreducible error?
e: cannot be predicted using X, therefore the error introduced by e cannot be reduced
31
How do you assess the accuracy of the coefficient estimates?
Compute the standard error of B0 and B1
32
What does fk(x) represent in linear discriminant analysis?
Pr(X|Y=k)
33
How can you assess collinearity between 2 variables and between multiple variables?
2 variables: a correlation matrix
Multiple variables: the variance inflation factor (VIF); a value exceeding 5 or 10 is problematic
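One common option for the VIF is the car package (model is a hypothetical lm() fit with several predictors):

```
library(car)
vif(model)   # values above 5-10 suggest problematic collinearity
```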
34
How do you alter the sensitivity and specificity of a classifier?
Change the threshold at which an observation is assigned to a class - default is 0.5
35
How does the k-nearest-neighbours classifier work?
Given a positive integer K and a test observation x0, the KNN classifier first identifies the K points in the training data that are closest to x0, represented by N0. It then estimates the conditional probability for class j as the fraction of points in N0 whose response values equal j. KNN classifies the test observation x0 to the class with the largest probability.
36
What is the problem with regression trees? What can they be used for?
They tend to overfit. Use them as a basic building block for ensembles
37
What is the ROC curve?
A plot of the true positive rate against the false positive rate as the classification threshold varies. The area under the curve (AUC) gives the overall performance of a classifier
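A sketch using the pROC package (one option among several; truth and pred_prob are hypothetical vectors of observed classes and predicted probabilities):

```
library(pROC)
roc_obj <- roc(truth, pred_prob)  # response first, predictor second
auc(roc_obj)                      # area under the ROC curve
plot(roc_obj)
```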
38
How do you test for significance of the coefficients?
A hypothesis test: compute the t-statistic using the standard errors. The p-value is the probability of observing a value equal to or larger than |t| by chance. A small p-value indicates it is unlikely we would observe such a substantial association between the predictor and the response due to chance, so we reject the hypothesis that the coefficient is 0
39
What type of model is polynomial regression?
It is still a linear model
40
What will happen if you increase the cutoff value?
Fewer true positives and fewer false positives (i.e. fewer positive predictions overall)
41
What does Bayes classifier do?
Assigns an observation X = x to the class for which pk(x) is largest (i.e. assigns each observation to the most likely class given its predictor values). It produces the lowest possible test error rate, called the Bayes error rate
42
How do you assess the accuracy of the linear regression model?
The residual standard error: RSE = sqrt(RSS / (n - 2)). It is the average amount that the response will deviate from the true regression line, an absolute measure of the lack of fit of the model
43
What happens to the linear model if the error terms are correlated?
The estimated standard errors will be too low, giving an unwarranted sense of confidence in the model
44
How do you produce a table of observed vs predicted results when the classifier outputs probabilities?
pred_prob <- predict(lr_mod, type = "response")
pred_lr <- factor(pred_prob > 0.5, labels = c("No", "yes"))
table(true = , predicted = pred_lr)
45
What is the curse of dimensionality?
as p increases (more dimensions), a given observation has no nearby neighbours
46
What is a high leverage point?
An observation with an unusual value of xi. For multiple regression: a point that is unusual in terms of the full set of predictors
47
What is the most commonly used measure for measuring the quality of fit? what is the formula?
The mean squared error: MSE = (1/n) SUM(yi - yi*)^2
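In R, assuming vectors of observed responses y and predictions y_hat:

```
mse <- mean((y - y_hat)^2)
```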
48
What are the 2 assumptions for linear discriminant analysis when p=1?
- fk(x) is normal
- there is a common variance across all K classes
49
What do TSS and RSS each measure? and TSS-RSS
TSS measures the total variance in the response Y before the regression is performed. RSS measures the variability that is left unexplained after performing the regression. TSS - RSS measures the amount of variability in the response that is explained by performing the regression
50
How do you create a linear model in R? and get its coefficients and r^2 etc
model <- lm(Y ~ X1 + X2, data = df) fits the model (formula and data frame are placeholders); summary(model) then reports the coefficients, their p-values, the RSE, and R^2, and coef(model) extracts the coefficients alone
51
How can you write the expected test MSE
E(test MSE) = Var(f*(x0)) + [Bias(f*(x0))]^2 + Var(e)
52
How do you interpret B1 in a logistic regression model?
increasing X by one unit changes the log odds by B1
53
What is R-squared always equal to in multiple linear regression?
Cor(Y, Y*)^2, the square of the correlation between the response and the fitted values
54
What does Bayes classifier correspond to in a two-response value setting?
predicting class one if Pr(Y = 1 | X = x0) > 0.5
55
How do you find the confidence intervals of the coefficient estimates?
B0* +- 2.SE(B0*), where 2 is approximately the 97.5% quantile of a t-distribution with n - 2 degrees of freedom
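R computes these t-based intervals directly (model is a hypothetical lm() fit):

```
confint(model, level = 0.95)   # interval for each coefficient estimate
```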
56
How do you fit a logistic regression model using R?
lr_mod <- glm(Y ~ X1 + X2, data = df, family = "binomial") (formula and data frame are placeholders)
57
How do you find the coefficients of a simple linear regression model?
Use the least squares approach to minimise the RSS (residual sum of squares)
58
What happens to MSE when model is overfitted?
small training MSE but large test MSE
59
How many dummy variables will there be when there is a predictor with more than 2 levels?
One less than the number of levels, because there is a baseline level with no dummy variable
60
What is the default when using predict with logistic? How do you change it?
Automatically outputs the log odds. To change it: predict(lr_mod, type = "response")
61
What is the trade-off for a low or high K value in KNN classifier?
A very low K value means the decision boundary is overly flexible and finds patterns in the data that don't correspond to the Bayes decision boundary; this classifier has low bias but very high variance. A very high K value means the classifier becomes less flexible and produces a decision boundary that is close to linear; this is a low-variance but high-bias classifier
62
What is reinforcement learning?
Develop an agent that improves its performance based on interactions with the environment
63
What is autoML and its code in R?
AutoML performs all the steps of comparing different models using cross-validation and choosing their parameters automatically. One R option (assumed here, since the original code is truncated) is the h2o package: fit_aml <- h2o.automl(y = "outcome", training_frame = train_frame)
64
How do you put a categorical variable in a linear regression model?
Use dummy variables: 0 for one level and 1 for the other (or -1 and 1)
65
What happens to the MSE as you increase flexibility?
bias initially decreases faster than variance increases, so the MSE declines. But at some point increasing flexibility has more impact on the variance, so the MSE increases.
66
How does k nearest neighbours regression work?
Given a value for K and a prediction point x0, KNN regression first identifies the K training observations that are closest to x0 represented by N0. Then it estimates f(x0) using the average of all the training responses in N0
67
What does the accuracy of Y* as a prediction for Y depend on?
irreducible and reducible error
68
What is a prediction interval?
- Prediction intervals are used to answer the question: how much will Y vary from Y*?
- Prediction intervals are always wider than confidence intervals because they incorporate both the reducible error and the irreducible error
69
What is specificity?
the percentage of Falses that are identified correctly = TN/(TN+FP)
70
What is accuracy?
(TP+TN)/(TP+TN+FP+FN)
71
What are the assumptions of the linear model? (3)
- Additivity assumption: the association between a predictor X and the response Y does not depend on the values of the other predictors
- the error terms e1, e2, … are uncorrelated
- the error terms have a constant variance, Var(ei) = sigma squared
72
How do you fit a linear discriminant analysis model in R?
library(MASS), then lda_mod <- lda(Y ~ X1 + X2, data = df) (formula and data frame are placeholders)
73
What is boosting?
An ensemble of weak learners: grow trees that are deliberately too simple, repeatedly fit additional trees to the observations with big residuals, then combine them
74
What is bias?
the error that is introduced by approximating a real-life problem which may be very complicated, by a simpler model. In general, more flexible methods result in less bias
75
What is the common task framework /benchmarking?
There is a dataset, a set of people trying to find a prediction rule, and a referee. The referee runs each submitted prediction rule against a testing dataset that is sequestered behind a Chinese wall, and objectively and automatically reports the score achieved. The framework results in a steadily declining error rate
76
What are the odds from a logistic function?
p(X)/(1 - p(X)) = e^(B0+B1X)
77
What are classification trees?
Recursive partitioning: find the split that makes the observations within each side of the split as similar as possible on the outcome, then repeat within each resulting group, stopping when the stopping parameter is reached
78
What is a code for splitting data into test and train in R?
A minimal version, assuming a data frame df and an 80/20 split:
train_idx <- sample(nrow(df), size = floor(0.8 * nrow(df)))
train <- df[train_idx, ]
test <- df[-train_idx, ]
79
When will parametric methods outperform non-parametric methods?
When there is a small number of observations per predictor
80
What are the log odds from a logistic function?
log(p(X)/(1-p(X))) = B0 + B1X
81
What are the three approaches for deciding which variables to include in a model? How do they work?
- Forward selection: begin with the null model. Then fit p simple linear regressions and add to the null model the variable that results in the lowest RSS. Continue adding variables until some stopping rule is satisfied
- Backward selection: start with all variables, remove the variable with the largest p-value, and continue removing variables until a stopping rule is reached
- Mixed selection: a combination of forward and backward. Start with no variables in the model and add the variable that provides the best fit. Continue to add variables one by one; if at any point the p-value for one of the variables in the model rises above a certain threshold, remove that variable. Continue until all the variables in the model have a sufficiently low p-value and all variables outside the model would have a large p-value if added
82
What is the F1 score and what is the point of it?
F1 score = 2 x (precision x recall) / (precision + recall), the harmonic mean of precision and recall. Unlike accuracy, it is not dominated by the majority class, so it is useful when class distributions are uneven
83
What is the hierarchical principle?
if we include an interaction in a model we should also include the main effects, even if the p-values associated with their coefficients are not significant
84
Code to generate sequence of 1000 equally spaced values from 0 to 40
seq(0, 40, length.out = 1000)
85
What is Bayes theorem in general terms?
P(A|B) = P(B|A) x P(A) / P(B)
86
What is the simple linear regression model?
Y = B0 + B1X + e
87
What is calibration?
A model is perfectly calibrated if, for any probability value p, a prediction of a class with confidence p is correct 100*p percent of the time
88
What is an advantage and a disadvantage of the LOOCV approach?
Advantage: it has less bias because each training set is bigger (n - 1 observations)
Disadvantage: time-consuming to implement, since the model must be fit n times
89
In the formula Y = f(X) + e, what are the properties of e?
e is a random error term which is independent of X and has mean 0
90
What is supervised learning?
For each observation xi, there is an associated response measurement yi
91
What is a logistic regression model?
models the probability that Y belongs to a particular category
92
What are 2 drawbacks of the validation set approach?
- the validation estimate of the test error rate can be highly variable, depending on which observations are included in the training set and which are included in the validation set
- not as many observations are available in the training set
93
How would you turn a vector of "yes" and "nos" into a vector of 1s and 0s
ifelse(student == "Yes", 1, 0)
94
What are the assumptions for linear discriminant analysis when p > 1?
Assume that X= (X1, …Xp) is drawn from multivariate normal distribution with a class-specific mean vector and common covariance matrix
95
How can you detect non-constant variance of error terms? What is a solution?
Presence of a funnel shape in the residual plot. Transform Y to log(Y) or sqrt(Y)
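A minimal sketch of the transformation fix (Y, X, and df are placeholders):

```
# Shrinking large responses with log() often removes the funnel shape
fit_log <- lm(log(Y) ~ X, data = df)
plot(predict(fit_log), residuals(fit_log))   # re-check the residual plot
```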
96
What is the negative predictive value (NPV)?
TN/(TN+FN)
97
What are the 2 advantages of k-fold cross validation?
- the model only has to be fit k times, compared to n times in LOOCV
- the variability in the test error estimate is lower than when using the validation set approach
98
How do you predict the outcome of the model in R based on new data?
predict(model, newdata = data)
99
What is unsupervised learning?
For each observation we observe a vector of measurements xi but no response yi. Seek to understand relationship between observations
100
What is a calibration plot?
predicted probability versus observed proportion, should be a straight line with slope 1
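A rough way to draw one, assuming predicted probabilities pred_prob and 0/1 outcomes y:

```
bins <- cut(pred_prob, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
mean_pred <- tapply(pred_prob, bins, mean)   # average prediction per bin
obs_prop  <- tapply(y, bins, mean)           # observed proportion per bin
plot(mean_pred, obs_prop, xlim = c(0, 1), ylim = c(0, 1))
abline(0, 1, lty = 2)   # a perfectly calibrated model falls on this line
```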
101
What is multinomial logistic regression?
Classifying a response variable with more than 2 classes
102
How can you detect correlation in the error terms?
Plot the residuals as a function of time. Adjacent residuals may have similar values if they are correlated
103
What are non-parametric methods?
They do not make explicit assumptions about the functional form of f
104
How can you detect outliers?
Plot the standardised residuals. Those with absolute values greater than 3 may be outliers
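For example, with a fitted lm() object called model:

```
std_res <- rstandard(model)   # standardised residuals
which(abs(std_res) > 3)       # candidate outliers
```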
105
What happens when there is collinearity between the predictor variables?
Collinearity reduces the accuracy of the estimates of the regression coefficients, and causes the standard error to grow
106
What is softmax coding for multinomial logistic regression?
Instead of selecting a baseline class, treat all K classes symmetrically and estimate coefficients for all K classes
107
What is the validation set approach?
randomly divide the available set of observations into two parts, a training set and a validation set. The model is fit on the training set and the fitted model is used to predict the responses for the observations in the validation set
108
What is the leave one out cross validation (LOOCV) approach?
A single observation (x1, y1) is used for the validation set and the remaining observations make up the training set. Find the MSE and repeat this approach n times and get the average of the n MSE estimates
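One ready-made option is boot::cv.glm, whose K argument defaults to n (df, y, and x are placeholders):

```
library(boot)
fit <- glm(y ~ x, data = df)   # glm() without a family fits a linear model
cv.glm(df, fit)$delta[1]       # LOOCV estimate of the test MSE
```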
109
What is the advantage and disadvantage of non-parametric methods
Advantage: they have the potential to accurately fit a wider range of possible shapes for f
Disadvantage: a very large number of observations is required to obtain an accurate estimate of f
110
What are random forests?
bagged trees with feature sampling. Make trees that are too complex and average over bootstrapped samples to cancel out the overfitting parts
111
How do you predict the outcome of the model in R based on new data and get the upper and lower confidence intervals?
predict(model, newdata = data, interval = "confidence")
112
What is used for hypothesis testing in logistic regression?
Z-statistic
113
What is the residual ei for each data point?
ei = yi - yi*: the difference between the observed and the predicted response