3. Linear Regression Flashcards
Sample covariance and correlation, least squares regression, alternative regression models
What is regression used for?
- many data sets have observations of several variables for each individual
- the aim of regression is to ‘predict’ the value of one variable, y, using observations from another variable, x
What is linear regression used for?
-linear regression is used for numerical data and uses a relation in the form:
y ≈ α + βx
-in a plot of y as a function of x, this relation describes a straight line
Paired Samples
- to fit a linear model we need observations of x and y
- it is important that these are paired samples, i.e. that for each i∈{1,…,n} the observations xi and yi belong to the same individual
Examples of Paired Samples
- weight and height of a person
- engine power and fuel consumption of a car
Linear Regression
Constructing a Model
-assume we have observed data (xi,yi) for iϵ{1,…,n}
-to construct a model for these data, we use random variables Y1,…,Yn such that:
Yi = α + βxi + εi
-for all i∈{1,…,n}, where ε1,…,εn are i.i.d. random variables with E(εi)=0 and Var(εi)=σ²
-here we assume that the x-values are fixed and known
-thus the only random quantities in the model are Yi and εi
-the values α, β and σ² are parameters of the model; to fit the model to data we need to estimate these parameters
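-as an illustration, data from this model can be simulated in R (the parameter values and x-values below are arbitrary examples, not from the notes):
x <- c(1, 2, 3, 4, 5)                  # fixed, known x-values
alpha <- 2; beta <- 0.5; sigma <- 0.3  # example parameter values
eps <- rnorm(length(x), mean = 0, sd = sigma)  # i.i.d. residuals with E(eps)=0, Var(eps)=sigma^2
y <- alpha + beta * x + eps            # observed responses y_i = alpha + beta*x_i + eps_i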
Linear Regression
Residuals/Errors
-starting with the model:
Yi = α + βxi + εi
-the random variables εi are called residuals or errors
-in a scatter plot they correspond to the vertical distance between the samples and the regression line
-often we assume that εi~N(0,σ²) for all i
Linear Regression
Expectation of Yi
-we have the linear regression model:
Yi = α + βxi + εi
-then the expectation is given by:
E(Yi) = E(α + βxi + εi)
-the expectation of a constant is just the constant itself, and remember that xi represents a known value here:
E(Yi) = α + βxi + E(εi)
-recall that εi are modeled as random variables with E(εi)=0:
E(Yi) = α + βxi
-thus the expectation of Yi depends on xi and, at least for β≠0, the random variables Yi are not identically distributed
What are sample covariance and correlation used for?
-to study the dependence between paired numeric variables
Sample Covariance
Definition
-the sample covariance of x1,…,xn∈ℝ and y1,…,yn∈ℝ is given by:
σxy = 1/(n-1) Σ(xi-x̄)(yi-ȳ)
-where the sum is taken from i=1 to i=n, and x̄ and ȳ are the sample means
Sample Correlation
Definition
-the sample correlation of x1,…,xn∈ℝ and y1,…,yn∈ℝ is given by:
ρxy = σxy / √(σx² σy²)
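-a small sanity check of the two definitions in R (x and y are arbitrary example vectors; var() is R's sample variance):
x <- c(1, 3, 4, 7); y <- c(2, 3, 6, 8)
n <- length(x)
sxy <- sum((x - mean(x)) * (y - mean(y))) / (n - 1)   # sample covariance
rxy <- sxy / sqrt(var(x) * var(y))                    # sample correlation
c(sxy, cov(x, y))   # both give the same covariance
c(rxy, cor(x, y))   # both give the same correlation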
What is the sample covariance of a sample with itself?
- we can show that the sample covariance of a sample with itself equals the sample variance
- i.e. σxx = σx², the sample analogue of Cov(X,X) = Var(X)
What values can correlation take?
-the correlation of two samples is always in the interval [-1,1]
Interpreting Correlation
ρxy≈1
- strong positive correlation, ρxy≈1 indicates that the points (xi,yi) lie close to a straight line with increasing slope
- in this case y is almost completely determined by x
Interpreting Correlation
ρxy≈-1
- strong negative correlation, ρxy≈-1 indicates that the points (xi,yi) lie close to a straight line with decreasing slope
- in this case y is almost completely determined by x
Interpreting Correlation
ρxy≈0
- this means that there is no linear relationship between x and y which helps to predict y from x
- this could be because x and y are independent or because the relationship between x and y is non-linear
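-for example, a perfectly non-linear (quadratic) relationship can still have correlation close to zero; a quick check in R:
x <- seq(-1, 1, by = 0.1)
y <- x^2            # y is completely determined by x, but not linearly
cor(x, y)           # essentially 0 (up to rounding), since the dependence is not linear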
How can the sample covariance be used to estimate the covariance of random variables?
-if (X1,Y1),…,(Xn,Yn) are i.i.d. pairs of random variables, then we can show:
lim σxy(X1,…,Xn,Y1,…,Yn) = Cov(X1,Y1)
-where the limit is taken as n tends to infinity
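-this can be illustrated by simulation; a minimal sketch in R, using a bivariate sample where the true covariance is known by construction:
set.seed(1)
n <- 100000
X <- rnorm(n)
Y <- X + rnorm(n)   # by construction Cov(X, Y) = Var(X) = 1
cov(X, Y)           # close to the true value 1 for large n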
Correlation and Covariance in R
-the functions to compute sample covariances and correlations in R are cov() and cor()
Correlation and Covariance in R
Handling Missing Data
- both functions, cov() and cor(), have an optional argument use=… which controls how missing data are handled
-if use="everything" or use is not specified, the functions return NA if any input data are missing
-if use="all.obs", the functions abort with an error if any input data are missing
-if use="complete.obs", any pairs (xi,yi) where either xi or yi is missing are ignored and the covariance/correlation is computed from the remaining samples
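-a short illustration of the use=… argument, with one artificially missing value:
x <- c(1, 2, NA, 4); y <- c(2, 4, 5, 8)
cor(x, y)                        # NA, since use defaults to "everything"
try(cor(x, y, use = "all.obs"))  # aborts with an error because of the missing value
cor(x, y, use = "complete.obs")  # the pair (NA, 5) is dropped, correlation computed from the rest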
What is least squares regression?
- least squares is a method for estimating the parameter values α, β and σ² from data
- alternative estimation methods exist; they differ mainly in how strongly they are influenced by outliers in the data
Least Squares Regression
Minimising the Residual Sum of Squares - Formula
-we estimate the parameters α, β and σ² using the values which minimise the residual sum of squares:
r(α,β) = Σ (yi - (α + βxi))²
-for given α and β, the value r(α,β) measures how close the given data points (xi,yi) are to the regression line α+βx
-by minimising r(α, β) we find the regression line which is closest to the data
Least Squares Regression
Minimising the Residual Sum of Squares - Lemma
-assume σx²>0
-then the function r(α,β) defined above takes its minimum at the point (α,β) given by:
β = σxy/σx²
α = ȳ - βx̄
-where x̄, ȳ are the sample means, σxy is the sample covariance and σx² is the sample variance of x1,…,xn
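-the lemma can be checked numerically in R; the formulas should agree with the coefficients returned by lm() (x and y below are arbitrary example data):
x <- c(1, 2, 3, 4, 5); y <- c(2.1, 2.9, 4.2, 4.8, 6.1)
beta.hat  <- cov(x, y) / var(x)            # beta = sigma_xy / sigma_x^2
alpha.hat <- mean(y) - beta.hat * mean(x)  # alpha = ybar - beta * xbar
c(alpha.hat, beta.hat)
coef(lm(y ~ x))                            # same values from R's built-in least squares fit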
Least Squares Regression
Minimising the Residual Sum of Squares - Lemma Proof
-obtain a simplified expression for r(α,β) using the substitutions:
x̃i = xi - x̄
ỹi = yi - ȳ
-differentiate this with respect to β and set the derivative equal to 0 to impose the condition for a stationary point
-the second derivative is greater than 0, showing that the resulting expression for β gives the minimum value of r(α,β)
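-in more detail, a sketch of the computation outlined above:
r(α,β) = Σ(yi - α - βxi)² = Σ(ỹi - βx̃i + (ȳ - α - βx̄))²
-since Σx̃i = Σỹi = 0, the cross terms vanish and:
r(α,β) = Σ(ỹi - βx̃i)² + n(ȳ - α - βx̄)²
-the second term is minimised (it equals 0) by choosing α = ȳ - βx̄
-setting the derivative of the first term with respect to β to 0 gives -2Σx̃i(ỹi - βx̃i) = 0, i.e. β = Σx̃iỹi / Σx̃i² = σxy/σx²
-the second derivative 2Σx̃i² = 2(n-1)σx² > 0, so this is indeed the minimum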
Least Squares Regression
Fitted Regression Line
-now that we have used the method of least squares to determine the values α^ and β^, the values which minimise r(α,β)
-we can consider the fitted regression line:
y = α^ + β^x
-this is an approximation to the unknown true mean α+βx from the model
Least Squares Regression
Fitted Values
-now that we have used the method of least squares to determine the values α^ and β^, the values which minimise r(α,β)
-we can consider the fitted values:
yi^ = α^ + β^xi
-these are the y-values of the fitted regression line at the points xi
-if we consider εi as being the ‘noise’ or ‘errors’, then we can consider the values yi^ to be the versions of yi with the noise removed
Least Squares Regression
Estimated Residuals
-now that we have used the method of least squares to determine the values α^ and β^, the values which minimise r(α,β)
-we can consider the estimated residuals:
εi^ = yi - yi^
= yi - α^ - β^xi
-these are the vertical distances between the data and the fitted regression line
Least Squares Regression
Estimating σ²
-in order to fit a linear model we also need to estimate the residual variance σ²
-this can be done using the estimator:
σ^² = 1/(n-2) Σ(εi^-ε̄^)²
= 1/(n-2) Σ(yi-α^-β^xi)²
-to understand the form of this estimator, remember that σ² is just the variance of the εi
-thus, if the εi were observable, the standard estimator for a variance would give:
σ² ≈ 1/(n-1) Σ(εi-ε̄)²
-since the true residuals are not observable, we replace them by the estimated residuals εi^, and the divisor n-1 is replaced by n-2 to account for the two estimated parameters α^ and β^
-here a bar (e.g. ε̄) denotes a sample mean and ^ denotes values estimated from the least squares fit
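-in R this estimate can be computed from the residuals of a fitted model and compared with the residual standard error reported by summary(); x and y below are arbitrary example data:
x <- c(1, 2, 3, 4, 5); y <- c(2.1, 2.9, 4.2, 4.8, 6.1)
m <- lm(y ~ x)
n <- length(x)
sigma2.hat <- sum(residuals(m)^2) / (n - 2)   # 1/(n-2) * sum of squared estimated residuals
c(sigma2.hat, summary(m)$sigma^2)             # summary(m)$sigma is the residual standard error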
Least Squares Regression
Unbiased Estimators α^, β^, σ^² Lemma
-let x1,…,xn∈ℝ be given and let ε1,…,εn be i.i.d. random variables with E(εi)=0 and Var(εi)=σ²
-let α,β∈ℝ and define Yi=α+βxi+εi for all i∈{1,…,n}
-furthermore, let α^, β^ and σ^² be the least squares estimators defined above
-then we have:
E(α^(x,Y)) = α
E(β^(x,Y)) = β
E(σ^²(x,Y)) = σ²
-where x=(x1,…,xn) and Y=(Y1,…,Yn)
Least Squares Regression
Unbiased Estimators α^, β^, σ^² Proof
Least Squares Regression
Variance of α^ & β^ Lemma
-here α^ and β^ are the least squares estimators from above
Var(α^(x,Y)) = σ² (1/n Σxi²) / (σx²(n-1))
-i.e. the numerator contains the mean of the xi²
Var(β^(x,Y)) = σ² / (σx²(n-1))
-where σx² is the sample variance of x1,…,xn
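-both this lemma and the unbiasedness lemma above can be illustrated by simulating many data sets from the model; a minimal sketch in R with arbitrary example parameters:
set.seed(1)
x <- c(1, 2, 3, 4, 5); alpha <- 2; beta <- 0.5; sigma <- 0.3
n <- length(x)
est <- replicate(10000, {
  y <- alpha + beta * x + rnorm(n, sd = sigma)
  coef(lm(y ~ x))          # returns (alpha.hat, beta.hat) for this simulated data set
})
rowMeans(est)                            # approximately (alpha, beta): unbiasedness
var(est[1, ])                            # approximately sigma^2 * mean(x^2) / ((n-1) * sigma_x^2)
sigma^2 * mean(x^2) / ((n - 1) * var(x))
var(est[2, ])                            # approximately sigma^2 / ((n-1) * sigma_x^2)
sigma^2 / ((n - 1) * var(x))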
Least Squares Regression
Variance of α^ & β^ Proof
Least Squares Regression
Variance of α^ & β^ Interpretation
-once we have found estimates α^ and β^ for the parameters α and β we can predict the y value for any given x using:
y^ = α^ + xβ^
-since the estimates α^ and β^ are affected by the noise in the observations, the estimated regression line will differ from the ‘true’ regression line:
y = α + xβ
-but we expect the error in y^ to decrease as n grows, since the variances of α^ and β^ decrease with n, i.e. our estimates become more stable
Least Squares Regression
Unbiased Estimators y^ Lemma
-let x*∈ℝ and:
y^* = α^ + x*β^
-then y^* is an unbiased estimator for the y-value of the unknown true regression line at the point x*, i.e.:
E(y^*) = α + x*β
-and:
Var(y^*) = σ²/n (1 + n(x*-x̄)²/((n-1)σx²))
-where x̄ is the mean of the xi
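-the variance formula can be compared with the standard error reported by predict(…, se.fit=TRUE) in R; a sketch with arbitrary example data and an illustrative prediction point x.star:
x <- c(1, 2, 3, 4, 5); y <- c(2.1, 2.9, 4.2, 4.8, 6.1)
m <- lm(y ~ x)
x.star <- 2.5
n <- length(x); s2 <- summary(m)$sigma^2
sqrt(s2 * (1/n + (x.star - mean(x))^2 / ((n - 1) * var(x))))        # formula with sigma^2 replaced by its estimate
predict(m, newdata = data.frame(x = x.star), se.fit = TRUE)$se.fit  # same value from R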
Least Squares Regression
Unbiased Estimators y^ Proof
How do you work with the fitted model in R?
- residuals(m) returns the estimated residuals εi^ for each data point
- fitted(m) returns the fitted values yi^ = α^ + β^xi
- printing m (just typing m) shows the key values, the estimated coefficients α^ and β^
- summary(m) can be called for more statistical information about the model
- the coefficients can be obtained as a vector using coef(m) and can then be assigned to variables, e.g. alpha and beta
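-putting these together (x and y are arbitrary example data):
x <- c(1, 2, 3, 4, 5); y <- c(2.1, 2.9, 4.2, 4.8, 6.1)
m <- lm(y ~ x)       # fit the linear model
m                    # prints the estimated coefficients alpha^ and beta^
summary(m)           # more detailed statistical information
fitted(m)            # fitted values yi^
residuals(m)         # estimated residuals eps_i^
alpha <- coef(m)[1]; beta <- coef(m)[2]   # extract the coefficients as numbers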
How to make predictions using a fitted model in R?
-one of the main aims of fitting a linear model is to make predictions for new, previously unobserved x-values, i.e. to compute:
ynew = α^ + β^xnew
-the command for this is predict(m, newdata=…)
-where m is the model previously fitted using lm() and newdata specifies the new x-values to predict responses for
-the argument newdata should be a data.frame with a column which has the name of the original variable and contains the new values, e.g.:
predict(m,newdata=data.frame(x=1))
Alternative Regression Models
- so far we have considered a regression line in the form y=α+βx to predict y from x
- instead we could have used x=γ+δy to predict x from y
- regression for y as a function of x minimises the (average squared) length of the vertical lines from the points to the line
- regression of x as a function of y minimises the (average squared) length of the horizontal lines from the points to the line
- thus the two models are different
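-this difference is easy to see in R: the line fitted by lm(x ~ y) is in general not the inverse of the line fitted by lm(y ~ x) (x and y are arbitrary example data):
x <- c(1, 2, 3, 4, 5); y <- c(2, 1, 4, 3, 6)
coef(lm(y ~ x))         # regression of y on x: y = alpha + beta*x
coef(lm(x ~ y))         # regression of x on y: x = gamma + delta*y
1 / coef(lm(x ~ y))[2]  # in general not equal to beta from the first fit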
Residuals in Alternative Regression Models
-in the model Yi=α+βxi+εi, the residuals εi can be seen as an error or uncertainty in the observations yi, whereas the values xi are assumed to be known exactly
-how would we construct a model where there is uncertainty about the values of both x and y?
-a simple model would be:
Xi = xi + ηi
Yi = α+βxi+εi
-for i=1,…,n, where ηi~N(0,ση²) and εi~N(0,σε²) independently
-models of this form are called ‘errors in variables models’
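-a minimal sketch of simulating data from such a model in R (the parameter values are arbitrary examples); note that lm() applied to the observed values does not account for the errors in x:
set.seed(1)
n <- 200
x.true <- runif(n, 0, 10)                     # unobserved true x-values
X <- x.true + rnorm(n, sd = 1)                # observed x-values with error eta_i
Y <- 2 + 0.5 * x.true + rnorm(n, sd = 0.5)    # responses built from the true x-values
coef(lm(Y ~ X))   # ordinary least squares on the observed data; ignores the measurement error in X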
How do you fit a linear regression model to data in R?
-use the lm() command, e.g.:
m <- lm(y ~ x)
-where y and x are vectors containing the paired observations; the fitted model is stored in m for later use