Biostat | Prefinal - Regression Analysis Flashcards

1
Q

A graph that shows the relationship between two variables.

A

Scatter Plot

2
Q

Also called a regression line; the straight line that best represents the data on a scatter plot.

A

LINE OF BEST FIT

3
Q

REGRESSION EQUATION:

A

Y = bX + a

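A minimal Python sketch of this equation, estimating b (slope) and a (intercept) by least squares; the data points are hypothetical, not from the deck:

```python
# Fit the regression equation Y = bX + a by ordinary least squares.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable (X)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # dependent variable (Y)

b, a = np.polyfit(x, y, deg=1)  # returns slope (b), then intercept (a)
print(f"Y = {b:.3f}X + {a:.3f}")
```
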
4
Q

The single variable being explained by the regression model; also called the criterion.

A

DEPENDENT VARIABLE (Y)

5
Q

The explanatory variables used to predict the dependent variable; also called predictors.

A

INDEPENDENT VARIABLE (X)

6
Q

The values computed by the regression tool, reflecting the relationship between each explanatory variable and the dependent variable.

A

COEFFICIENTS (b)

7
Q

The portion of the dependent variable that isn’t explained by the model.

A

RESIDUALS

8
Q

METHODS OF REGRESSION ANALYSIS:
Linear Regression

A

> Straight-line relationship
> Form: y = mx + b

9
Q

METHODS OF REGRESSION ANALYSIS:
> Straight-line relationship
> Form: y = mx + b

A

Linear Regression

10
Q

METHODS OF REGRESSION ANALYSIS:
> Implies a curved relationship
> Logarithmic relationships

A

Non-Linear

11
Q

METHODS OF REGRESSION ANALYSIS:
> Data gathered from the same time period

A

Cross-Sectional

12
Q

METHODS OF REGRESSION ANALYSIS:
> Involves data observed over equally spaced points in time.

A

Time series

13
Q

> Only one independent variable, x
> Relationship between x and y is described by a linear function.
> Changes in y are assumed to be caused by changes in x.

A

SIMPLE LINEAR REGRESSION MODEL

14
Q

Regression variability that is explained by the relationship between X and Y.

A

SSR

15
Q

Unexplained variability, due to factors other than the regression.

A

SSE

16
Q

CORRELATION COEFFICIENT:
> The strength of the relationship between the X and Y variables

A

r

17
Q

Total variability about the mean

A

SST

18
Q

COEFFICIENT OF DETERMINATION:
> Proportion of explained variation

A

r-square (r²)

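A short Python sketch tying the last few cards together: on hypothetical data it computes SST, SSR, SSE, r, and r², and shows that SST = SSR + SSE and r² = SSR/SST:

```python
# Decompose the variability of Y around a fitted regression line.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b, a = np.polyfit(x, y, deg=1)
y_hat = b * x + a                      # predicted values

sst = np.sum((y - y.mean()) ** 2)      # total variability about the mean
ssr = np.sum((y_hat - y.mean()) ** 2)  # variability explained by the regression
sse = np.sum((y - y_hat) ** 2)         # unexplained variability (residuals)

r = np.corrcoef(x, y)[0, 1]            # correlation coefficient
r_squared = ssr / sst                  # coefficient of determination

print(f"SST={sst:.3f}  SSR={ssr:.3f}  SSE={sse:.3f}")  # SST = SSR + SSE
print(f"r={r:.3f}  r²={r_squared:.3f}")
```
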
19
Q

Standard deviation of the error around the regression line.

A

Standard Error

20
Q

Significance of the Regression Model

A

TEST FOR LINEARITY

21
Q

Variation of Model

A

SST = SSR + SSE (total variability = explained + unexplained)

22
Q

Errors may be positive or negative.

A

VARIABILITY

23
Q

Measures the total variability in Y.

A

Sum of Squares Total (SST)

24
Q

Less than SST because the regression line reduces the variability.

A

Sum of Squared Errors (SSE)

25

Q

Indicates how much of the total variability is explained by the regression model.

A

Sum of Squares due to Regression (SSR)

26

Q

The proportion of the variability in Y that is explained by the regression equation.

A

COEFFICIENT OF DETERMINATION

27

Q

TEST FOR LINEARITY: If the significance level for the F test is low...

A

Reject the null hypothesis and conclude that there is a linear relationship.

28

Q

An F test is used to statistically test the null hypothesis that there is no linear relationship between the X and Y variables.

A

TEST FOR LINEARITY

29

Q

The mean squared error (MSE) is the estimate of the error variance of the regression equation: s² = MSE = SSE / (n - k - 1)

A

STANDARD ERROR

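A small Python sketch of this formula, reusing the same hypothetical data as above, with k = 1 independent variable:

```python
# s² = MSE = SSE / (n - k - 1); the standard error of the estimate is √MSE.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b, a = np.polyfit(x, y, deg=1)
sse = np.sum((y - (b * x + a)) ** 2)

n, k = len(y), 1          # sample size, number of independent variables
mse = sse / (n - k - 1)   # estimate of the error variance
s = np.sqrt(mse)          # standard error of the estimate
print(f"MSE={mse:.4f}  s={s:.4f}")
```
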
30

Q

ASSUMPTIONS OF THE REGRESSION MODEL

A

> Errors are independent
> Errors are normally distributed
> Errors have a mean of zero
> Errors have a constant variance

31

Q

Special variables created for qualitative data. The number of dummy variables must equal one less than the number of categories of the qualitative variable.

A

BINARY/DUMMY VARIABLES

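A minimal Python sketch of the dummy-variable rule using pandas; the blood_type column is a hypothetical example:

```python
# One qualitative variable with 4 categories -> 3 dummy variables.
import pandas as pd

df = pd.DataFrame({"blood_type": ["A", "B", "O", "AB", "O"]})
dummies = pd.get_dummies(df["blood_type"], drop_first=True)  # keeps categories - 1
print(dummies)  # 4 categories -> 3 dummy columns
```
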
32

Q

Takes into account the number of independent variables in the model.

A

ADJUSTED R-SQUARE

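A one-formula Python sketch of the adjusted R² computation; the r2, n, and k values are hypothetical:

```python
# Adjusted R² penalizes adding predictors that explain little.
r2, n, k = 0.85, 60, 3  # R², sample size, number of independent variables
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"adjusted R² = {adj_r2:.4f}")  # 0.8420
```
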
33

Q

Occurs when two or more predictor variables are highly correlated with each other.

A

MULTICOLLINEARITY

34

Q

Exists when an independent variable is correlated with another independent variable.

A

MULTICOLLINEARITY

35

Q

Creates problems in the coefficients because duplication of information may occur.

A

MULTICOLLINEARITY

36

Q

A metric for detecting multicollinearity that measures the correlation and strength of correlation between the predictor variables in a regression model.

A

Variance inflation factor (VIF)

37

Q

Variance inflation factor (VIF) value indicating no correlation between a given predictor variable and the other predictors

A

1

38

Q

Variance inflation factor (VIF) value indicating moderate correlation between a given predictor variable and the other predictors

A

1-5

39

Q

Variance inflation factor (VIF) value indicating potentially severe correlation between a given predictor variable and the other predictors

A

> 5

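A short Python sketch computing VIF with statsmodels on simulated data, illustrating the thresholds on the last three cards (≈1 none, 1-5 moderate, >5 potentially severe):

```python
# VIF per predictor; x2 is built to be highly correlated with x1.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=100)  # near-duplicate of x1
x3 = rng.normal(size=100)                        # independent predictor

X = sm.add_constant(np.column_stack([x1, x2, x3]))
for i, name in enumerate(["x1", "x2", "x3"], start=1):  # index 0 is the constant
    print(name, variance_inflation_factor(X, i))        # x1, x2 well above 5
```
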
40

Q

Form of regression that allows the prediction of discrete variables by a mix of continuous and discrete predictors.

A

LOGISTIC REGRESSION

41

Q

TYPES OF LOGISTIC REGRESSION:
> Used when the dependent variable is dichotomous

A

Binary Logistic Regression

42

Q

TYPES OF LOGISTIC REGRESSION:
> Used when the dependent variable has more than two categories

A

Multinomial Logistic Regression

43

Q

WHEN TO USE LOGISTIC REGRESSION?

A

> When the dependent variable has only two levels (yes/no, male/female, taken/not taken)
> If multivariate normality is suspected
> If we don't have linearity

44

Q

Assumptions in Logistic Regression

A

> No assumptions about the distributions of the predictor variables
> Predictors do not have to be normally distributed
> Predictors do not have to be linearly related
> Predictors do not have to have equal variance within each group
> There should be a minimum of 20 cases per predictor, with a minimum of 60 total cases.

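A minimal Python sketch of binary logistic regression with scikit-learn; the hours/passed data are simulated purely for illustration:

```python
# Predict a dichotomous outcome (pass/fail) from a continuous predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
hours = rng.uniform(0, 10, size=200).reshape(-1, 1)  # continuous predictor
# Dichotomous outcome, more likely to be 1 (pass) as hours increase.
p_pass = 1 / (1 + np.exp(-(hours.ravel() - 5)))
passed = (rng.uniform(size=200) < p_pass).astype(int)

model = LogisticRegression().fit(hours, passed)
print(model.predict_proba([[7.0]]))  # [P(fail), P(pass)] at 7 hours
```
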
45

Q

Captures how one variable differs from its mean as the other variable differs from its mean; between two random variables, it is a statistical measure of the degree to which the two variables move together.

A

Covariance

46

Q

A measure of the strength of the relationship between or among variables.

A

Correlation coefficient

47

Q

A positive covariance indicates that...

A

the variables tend to move together; a negative covariance indicates that the variables tend to move in opposite directions.

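A tiny Python sketch of covariance and the correlation coefficient on hypothetical data:

```python
# Covariance (direction of co-movement) vs. correlation (strength, -1 to 1).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

cov_xy = np.cov(x, y)[0, 1]   # positive -> the variables move together
r = np.corrcoef(x, y)[0, 1]   # strength of the relationship
print(f"cov={cov_xy:.3f}  r={r:.3f}")
```
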
48

Q

An extreme value of a variable.

A

OUTLIER

49

Q

The appearance of a relationship when in fact there is no relation.

A

Spurious correlation

50

Q

The analysis of the relation between one variable and some other variable(s), assuming a linear relation. Also referred to as least squares regression and ordinary least squares (OLS).

A

REGRESSION ANALYSIS

51

Q

Its purpose is to explain the variation in a variable (that is, how a variable differs from its mean value) using the variation in one or more other variables.

A

REGRESSION ANALYSIS

52

Q

The variable whose variation is used to explain that of the dependent variable. Also referred to as the explanatory variable, the exogenous variable, or the predicting variable.

A

INDEPENDENT VARIABLE

53

Q

The variable whose variation is being explained by the other variable(s). Also referred to as the explained variable, the endogenous variable, or the predicted variable.

A

DEPENDENT VARIABLE

54

Q

Exists between the dependent and independent variables.

A

Linear Relationship

55

Q

What is the expected value of the disturbance term?

A

Zero

56

Q

The disturbance terms all have the same variance; they are...

A

Homoskedastic

57

Q

The percentage of variation in the dependent variable (variation of the Yi's, or the sum of squares total, SST) explained by the independent variable(s).

A

Coefficient of determination

58

Q

The range of regression coefficient values for a given estimate of the coefficient and a given level of probability.

A

Confidence interval

59

Q

The square root of the ratio of the variance of the regression to the variation in the independent variable.

A

Standard error

60

Q

Using regression, this involves making predictions about the dependent variable based on average relationships observed in the estimated regression.

A

Forecasting

61

Q

A regression analysis with more than one independent variable.

A

Multiple regression

62

Q

Has the same interpretation as in the simple linear case: the value of the dependent variable when all independent variables equal zero.

A

Intercept

63

Q

Values of the dependent variable based on the estimated regression coefficients and a prediction about the values of the independent variables.

A

Predicted values

64

Q

A measure of how well a set of independent variables, as a group, explains the variation in the dependent variable.

A

F-statistic

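A short Python sketch of multiple regression with statsmodels on simulated data; the fitted model exposes the F-statistic, R², and adjusted R² discussed on the surrounding cards:

```python
# Multiple regression: two independent variables plus an intercept.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))                             # two predictors
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=100)

results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.params)                     # intercept and coefficients
print(results.fvalue, results.f_pvalue)   # F test for the predictors as a group
print(results.rsquared, results.rsquared_adj)
```
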
65

Q

The percentage of variation in the dependent variable explained by the independent variables.

A

Coefficient of determination

66

Q

Qualitative variables that take on a value of zero or one.

A

Dummy variables

67

Q

The situation in which the variance of the residuals is not constant across all observations.

A

Heteroskedasticity

68

Q

The situation in which the residual terms are correlated with one another. This occurs frequently in time-series analysis.

A

Autocorrelation

69

Q

The residuals are independently distributed...

A

The residual or disturbance for one observation is not correlated with that of another observation. [A violation of this is referred to as autocorrelation.]

70

Q

If last year's earnings were high, this year's earnings may have a greater probability of being high than being low. This is an example of?

A

Positive autocorrelation

71

Q

When a good year is always followed by a bad year, this is...

A

Negative autocorrelation

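A minimal Python sketch of detecting autocorrelation with the Durbin-Watson statistic from statsmodels (≈2 none, <2 positive, >2 negative autocorrelation), on simulated residuals where each error carries over from the last:

```python
# Durbin-Watson test on AR(1)-style residuals (positive autocorrelation).
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
resid = np.zeros(100)
for t in range(1, 100):
    resid[t] = 0.7 * resid[t - 1] + rng.normal()  # error depends on prior error

print(durbin_watson(resid))  # well below 2 -> positive autocorrelation
```
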
72

Q

The problem of high correlation between or among two or more independent variables.

A

Multicollinearity

73

Q

Form of regression that allows the prediction of discrete variables by a mix of continuous and discrete predictors.

A

LOGISTIC REGRESSION

74

Q

TYPES OF LOGISTIC REGRESSION:
> Used when the dependent variable is dichotomous

A

Binary Logistic Regression

75

Q

TYPES OF LOGISTIC REGRESSION:
> Used when the dependent or outcome variable has more than two categories

A

Multinomial Logistic Regression

76

Q

WHEN TO USE LOGISTIC REGRESSION?

A

> When the dependent variable is nonparametric and we don't have homoscedasticity (the variances of the dependent and independent variables are not equal)
> When the dependent variable has only two levels (yes/no, male/female, taken/not taken)
> If multivariate normality is suspected
> If we don't have linearity

77

Q

The number of independent pieces of information that are used to estimate the regression parameters.

A

DEGREES OF FREEDOM

78

Q

The square root of the ratio of the variance of the regression to the variation in the independent variable.

A

Standard Error