Biostat | Prefinal - Regression Analysis Flashcards
A graph that shows the relationship between two variables.
Scatter Plot
Also called a Regression Line is a straight line that best represents the data on a scatter plot.
LINE OF BEST FIT
REGRESSION EQUATION:
Y = bX + a
The single variable being explained by the regression model - criterion
DEPENDENT VARIABLE (Y)
The explanatory variables used to predict the dependent variable - predictors
INDEPENDENT VARIABLE (X)
The values computed by the regression tool, reflecting each explanatory variable's relationship to the dependent variable
COEFFICIENTS (b)
The portion of the dependent variable that isn’t explained by the model.
RESIDUALS
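A minimal sketch (Python with NumPy, on made-up numbers) tying the cards above together: it fits the line of best fit Y = bX + a, then extracts the coefficients, predicted values, and residuals.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable (predictor)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # dependent variable (criterion)

b, a = np.polyfit(x, y, deg=1)   # slope b and intercept a of the line of best fit
y_hat = b * x + a                # values predicted by the regression equation
residuals = y - y_hat            # the portion of Y the model leaves unexplained

print(f"Y = {b:.2f}X + {a:.2f}")
print("residuals:", np.round(residuals, 2))
```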
METHODS OF REGRESSION ANALYSIS:
> Straight-line relationship
> Form: y=mx+b
Linear Regression
METHODS OF REGRESSION ANALYSIS:
> Implies curved relationship
> Logarithmic relationships
Non-Linear
METHODS OF REGRESSION ANALYSIS:
> Data gathered from the same time period
Cross-Sectional
METHODS OF REGRESSION ANALYSIS:
> Involves data observed over equally spaced points in time.
Time series
> Only one independent variable, x
> Relationship between x and y is described by a linear function.
> Changes in y are assumed to be caused by changes in x.
SIMPLE LINEAR REGRESSION MODEL
Regression variability that is explained by the relationship between X and Y
SSR
Unexplained variability, due to factors other than the regression
SSE
CORRELATION COEFFICIENT:
> the strength of the relationship between X and Y variables
r
Total variability about the mean
SST
COEFFICIENT OF DETERMINATION:
> Proportion of explained variation
r-square (r^2)
SD of error around the regression line
Standard Error
Significance of the Regression Model
TEST FOR LINEARITY
F = Variation explained by the Model / Unexplained variation (MSR / MSE)
Errors may be positive or negative.
VARIABILITY
- Measures the total variability in Y
Sum of Squares Total (SST)
– Less than SST because the regression line reduces the variability
Sum of Squared Error (SSE)
- Indicates how much of the total variability is explained by the regression model.
Sum of Squares due to Regression (SSR)
The proportion of the variability in Y that is explained by the regression equation.
COEFFICIENT OF DETERMINATION
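A short sketch (same kind of toy data, NumPy assumed) that computes SST, SSE, and SSR, checks the identity SST = SSR + SSE, and derives r^2 as SSR/SST.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
b, a = np.polyfit(x, y, deg=1)
y_hat = b * x + a

sst = np.sum((y - y.mean()) ** 2)      # total variability about the mean
sse = np.sum((y - y_hat) ** 2)         # unexplained variability
ssr = np.sum((y_hat - y.mean()) ** 2)  # variability explained by the regression

print(f"SST = {sst:.3f}, SSE = {sse:.3f}, SSR = {ssr:.3f}")
print("SST = SSR + SSE?", np.isclose(sst, ssr + sse))
print(f"r^2 = SSR/SST = {ssr / sst:.3f}")
```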
TEST FOR LINEARITY:
If the significance level for the F test is low...
reject the null hypothesis and conclude there is a linear relationship.
An F test is used to statistically test the null hypothesis that there is no linear relationship between the X and Y variables.
TEST FOR LINEARITY
The mean squared error (MSE) is the estimate of the error variance of the regression equation
S^2 = MSE = SSE / (n - k - 1)
STANDARD ERROR
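The MSE formula above can be computed directly; a sketch on the same assumed toy data, with k = 1 independent variable.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
b, a = np.polyfit(x, y, deg=1)
sse = np.sum((y - (b * x + a)) ** 2)

n, k = len(y), 1                 # n observations, k independent variables
mse = sse / (n - k - 1)          # S^2 = MSE = SSE / (n - k - 1)
s = np.sqrt(mse)                 # standard error of the estimate
print(f"MSE = {mse:.4f}, standard error = {s:.4f}")
```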
ASSUMPTIONS OF THE REGRESSION MODEL
Errors are independent
Errors are normally distributed
Errors have a mean of zero
Errors have a constant variance
Special variables that are created for qualitative data
The number of dummy variables must equal one less than the number of categories of the qualitative variable.
BINARY/DUMMY VARIABLES
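A quick illustration of that rule, assuming pandas and a hypothetical blood_type variable: drop_first=True keeps one fewer dummy than the number of categories.

```python
import pandas as pd

df = pd.DataFrame({"blood_type": ["A", "B", "O", "AB", "O"]})  # hypothetical qualitative variable
# 4 categories -> 3 dummy variables (the dropped category is the baseline)
dummies = pd.get_dummies(df["blood_type"], prefix="bt", drop_first=True)
print(dummies)
```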
Takes into account the number of independent variables in the model.
ADJUSTED R-SQUARE
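The usual adjustment is adjusted r^2 = 1 - (1 - r^2)(n - 1)/(n - k - 1); a small sketch with assumed numbers showing the penalty for extra predictors.

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted r^2 penalizes r^2 for the number of predictors k."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(0.90, n=30, k=3))   # ~0.888: slightly below the raw r^2
print(adjusted_r2(0.90, n=30, k=10))  # ~0.847: more predictors, bigger penalty
```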
Occurs when two or more predictor variables are highly correlated to each other.
MULTICOLLINEARITY
Exists when an independent variable is correlated with another independent variable.
MULTICOLLINEARITY
Creates problems in the coefficients because duplication of information may occur.
MULTICOLLINEARITY
A metric to detect multicollinearity that measures the correlation and strength of correlation between the predictor variables in a regression model.
Variance inflation factor (VIF)
Variance inflation factor (VIF)
No correlation between a given predictor variable and the other predictors
1
Variance inflation factor (VIF)
Moderate correlation between a given predictor variable and the other predictors
1-5
Variance inflation factor (VIF)
Potentially severe correlation between a given predictor variable and the other predictors
> 5
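One way to compute VIF by hand is VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors. A sketch on simulated data where x1 and x2 are deliberately near-collinear:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2) for each column j of the predictor matrix X."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        target = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + other predictors
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ coef
        r2 = 1 - resid @ resid / np.sum((target - target.mean()) ** 2)
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.1, size=50)   # nearly collinear with x1
x3 = rng.normal(size=50)
print(np.round(vif(np.column_stack([x1, x2, x3])), 1))  # x1 and x2 should exceed 5
```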
Form of regression that allows the prediction of discrete variables by a mix of continuous and discrete predictors.
LOGISTIC REGRESSION
Types of Logistic Regression:
- Used when the dependent variable is dichotomous
Binary Logistic Regression
Types of Logistics Regression:
- Used when the dependent variable has more than two categories
Multinomial Logistic Regression
WHEN TO USE LOGISTIC REGRESSION?
> When the dependent variable has only two levels (yes/no, male/female, taken/not taken)
> When the dependent variable is nonparametric and we don't have homoscedasticity (the variances of the dependent and independent variables are not equal)
> If multivariate normality is suspected
> If we don't have linearity
Assumptions in Logistic Regression
No assumptions about the distributions of the predictor variables
Predictors do not have to be normally distributed
Predictors do not have to be linearly related to the dependent variable
Predictors do not have to have equal variance within each group
There should be a minimum of 20 cases per predictor, with a minimum of 60 total cases.
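A minimal binary logistic regression sketch, assuming scikit-learn and simulated data; the 100 cases for 2 predictors comfortably satisfy the 20-cases-per-predictor minimum above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))   # two continuous predictors
# dichotomous outcome driven by the predictors plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("P(y = 1) for a new case:", model.predict_proba([[0.2, -1.0]])[0, 1])
```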
Captures how one variable differs from its mean as the other variable differs from its mean; between two random variables, it is a statistical measure of the degree to which the two move together.
covariance
A measure of the strength of the relationship between or among variables.
correlation coefficient
A positive covariance indicates that...
the variables tend to move together; a negative covariance indicates that the variables tend to move in opposite directions.
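Both quantities computed with NumPy on made-up numbers; covariance is scale-dependent, while the correlation coefficient is bounded between -1 and 1.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 5.0, 9.0])

cov_xy = np.cov(x, y)[0, 1]    # positive: the variables tend to move together
r = np.corrcoef(x, y)[0, 1]    # strength of the relationship, scale-free
print(f"covariance = {cov_xy:.2f}, correlation = {r:.3f}")
```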
an extreme value of a variable
OUTLIER
The appearance of a relationship when in fact there is no relation.
Spurious correlation
is the analysis of the relation between one variable and some other variable(s), assuming a linear relation. Also referred to as least squares regression and ordinary least squares (OLS).
REGRESSION ANALYSIS
The purpose is to explain the variation in a variable (that is, how a variable differs from its mean value) using the variation in one or more other variables.
REGRESSION ANALYSIS
is the variable whose variation is used to explain that of the dependent variable. Also referred to as the explanatory variable, the exogenous variable, or the predicting variable.
INDEPENDENT VARIABLE
is the variable whose variation is being explained by the other variable(s). Also referred to as the explained variable, the endogenous variable, or the predicted variable.
DEPENDENT VARIABLE
Exists between the dependent and independent variables.
Linear Relationship
What is the expected value of the disturbance term?
zero
Disturbance terms with constant variance are said to be...
homoskedastic
is the percentage of variation in the dependent variable (variation of Yi’s or the sum of squares total, SST) explained by the independent variable(s).
coefficient of determination
It is the range of regression coefficient values for a given estimate of the coefficient and a given level of probability.
confidence interval
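A sketch of a 95% confidence interval for the slope b, assuming SciPy and toy data: the interval is b ± t* × SE(b), with the t critical value on n - 2 degrees of freedom.

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 12.1])
n = len(x)

b, a = np.polyfit(x, y, deg=1)
resid = y - (b * x + a)
s = np.sqrt(resid @ resid / (n - 2))             # standard error of the estimate
se_b = s / np.sqrt(np.sum((x - x.mean()) ** 2))  # standard error of the slope
t_crit = stats.t.ppf(0.975, df=n - 2)            # two-sided 95% critical value

print(f"b = {b:.3f}, 95% CI = ({b - t_crit * se_b:.3f}, {b + t_crit * se_b:.3f})")
```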
is the square root of the ratio of the variance of the regression to the variation in the independent variable
standard error
using regression involves making predictions about the dependent variable based on average relationships observed in the estimated regression.
forecasting
A regression analysis with more than one independent variable.
Multiple regression
has the same interpretation as it did under the simple linear case – the intercept is the value of the dependent variable when all independent variables are equal to zero.
intercept
are values of the dependent variable based on the estimated regression coefficients and a prediction about the values of the independent variables.
Predicted values
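A multiple-regression sketch with two simulated independent variables; the predicted value for a new case is the dot product of [1, x1, x2] with the estimated coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))   # two independent variables
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=40)

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # [intercept, b1, b2]

x_new = np.array([1.0, 0.5, -1.0])             # 1 for the intercept, then the predictor values
print("predicted value:", x_new @ coef)
```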
is a measure of how well a set of independent variables, as a group, explain the variation in the dependent variable.
F-statistic
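The F-statistic is MSR/MSE = (SSR / k) / (SSE / (n - k - 1)); a sketch on the same kind of simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 40, 2
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ssr = np.sum((y_hat - y.mean()) ** 2)
sse = np.sum((y - y_hat) ** 2)
F = (ssr / k) / (sse / (n - k - 1))   # MSR / MSE
print(f"F = {F:.1f}")                 # a large F: the predictors jointly explain Y
```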
is the percentage of variation in the dependent variable explained by the independent variables.
coefficient of determination
are qualitative variables that take on a value of zero or one.
Dummy variables
The situation in which the variance of the residuals is not constant across all observations.
Heteroskedasticity
is the situation in which the residual terms are correlated with one another. This occurs frequently in time-series analysis.
Autocorrelation
The residuals are independently distributed...
the residual or disturbance for one observation is not correlated with that of another observation. [A violation of this is referred to as autocorrelation.]
If last year's earnings were high, this year's earnings may have a greater probability of being high than being low. This is an example of?
positive autocorrelation
When a good year is always followed by a bad year, this is...
a negative autocorrelation
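Autocorrelation in residuals is often screened with the Durbin-Watson statistic (near 2: none; toward 0: positive; toward 4: negative). A hand-rolled sketch on simulated residuals:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences of the residuals / SSE."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(4)
white = rng.normal(size=200)    # independent residuals
ar = np.zeros(200)
for t in range(1, 200):         # positively autocorrelated residuals (AR(1))
    ar[t] = 0.8 * ar[t - 1] + white[t]

print(f"independent residuals: DW = {durbin_watson(white):.2f}")   # close to 2
print(f"autocorrelated residuals: DW = {durbin_watson(ar):.2f}")   # well below 2
```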
is the problem of high correlation between or among two or more independent variables.
Multicollinearity
are the number of independent pieces of information that are used to estimate the regression parameters.
DEGREES OF FREEDOM