Econometrics Flashcards
1.3: Time Series Data Set
A time series data set consists of observations on a variable or several variables over
time.
1.3: Cross-Sectional Data Set
A sample of individuals, households, firms, cities,
states, countries, or a variety of other units, taken at a given point in time.
1.3: Pooled Cross Section Data Set
A data configuration where
independent cross sections, usually collected at different
points in time, are combined to produce a single
data set.
1.3: Panel Data Set
A data set constructed from repeated cross
sections over time. With a balanced panel, the same
units appear in each time period. With an unbalanced
panel, some units do not appear in each time period,
often due to attrition.
2.1: Simple Linear Regression Model
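A sketch in the usual textbook notation, where u is the unobserved error term:
y = \beta_0 + \beta_1 x + u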
2.1: The Zero Conditional Mean Assumption
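Stated for the simple model:
\mathrm{E}(u \mid x) = 0
The average of the unobservables is the same across all values of x.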
2.1: Population Regression Function (PRF)
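Under the zero conditional mean assumption:
\mathrm{E}(y \mid x) = \beta_0 + \beta_1 x
a linear function of x giving the average value of y at each value of x.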
2.2: Equation of the Slope Parameter
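The OLS estimate in standard summation notation:
\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}
the sample covariance of x and y divided by the sample variance of x.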
2.2: Equation for the Intercept Parameter
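Given the slope estimate:
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}
which forces the OLS line through the point of sample means (\bar{x}, \bar{y}).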
2.2: OLS Regression Line / Sample Regression Function
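The sample counterpart of the PRF:
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x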
2.3: Total Sum of Squares, Explained Sum of Squares, and Residual Sum of Squares
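The standard decomposition:
SST = \sum_{i=1}^{n}(y_i - \bar{y})^2, \quad SSE = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2, \quad SSR = \sum_{i=1}^{n}\hat{u}_i^2
with SST = SSE + SSR.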
2.3: Coefficient of Determination
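Defined from the sums of squares above:
R^2 = SSE/SST = 1 - SSR/SST
the fraction of the sample variation in y explained by x.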
2.4: Δy in level-level, log-level, level-log, and log-log models
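The usual interpretation rules:
level-level: \Delta y = \beta_1 \Delta x
log-level: \%\Delta y \approx (100\beta_1)\Delta x
level-log: \Delta y \approx (\beta_1/100)\%\Delta x
log-log: \%\Delta y = \beta_1 \%\Delta x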
2.5: The Four Assumptions for Unbiasedness of OLS
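In the usual labeling: SLR.1 (linear in parameters), SLR.2 (random sampling), SLR.3 (sample variation in the explanatory variable), and SLR.4 (zero conditional mean, E(u|x) = 0). Together these imply E(\hat{\beta}_j) = \beta_j for j = 0, 1.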
2.5: Proof of Unbiasedness of the OLS Slope Parameter
2.5: Proof of Unbiasedness of the OLS Intercept Parameter
2.5: Sample Variances of the OLS Estimators
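Conditional on the sample values of x, adding homoskedasticity (SLR.5):
\mathrm{Var}(\hat{\beta}_1) = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \quad \mathrm{Var}(\hat{\beta}_0) = \frac{\sigma^2 \, n^{-1}\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}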
2.5: Definition of Homoskedasticity
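Stated for the simple model:
\mathrm{Var}(u \mid x) = \sigma^2
a constant that does not depend on x.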
2.5: Definition of Heteroskedasticity
2.5: Unbiased Estimator of the Error Variance and Standard Error of Regression
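The unbiased estimator of \sigma^2 and the standard error of the regression (SER):
\hat{\sigma}^2 = \frac{SSR}{n-2} = \frac{1}{n-2}\sum_{i=1}^{n}\hat{u}_i^2, \quad \hat{\sigma} = \sqrt{\hat{\sigma}^2}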
2.5: Standard Error of the Estimated Slope Parameter
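Replacing the unknown \sigma with \hat{\sigma}:
\mathrm{se}(\hat{\beta}_1) = \frac{\hat{\sigma}}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}}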
2.6: Regression Through the Origin
3.1: General Multiple Linear Regression Model
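With k explanatory variables:
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + u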
3.1: The Zero Conditional Mean Assumption for Multiple Regression
3.2: Sample Regression Function for Multiple Regression
3.2: Equation for the Slope Parameter for Multiple Regression
3.2: Comparison of Simple and Multiple Regression Estimators
3.3: The Four Assumptions for Unbiasedness of Multiple OLS
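In the usual labeling: MLR.1 (linear in parameters), MLR.2 (random sampling), MLR.3 (no perfect collinearity), and MLR.4 (zero conditional mean, E(u|x_1, \ldots, x_k) = 0).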
3.4: Sample Variance of Simple OLS Estimator
3.4: Sampling Variances of Multiple OLS Slope Parameters
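Under MLR.1-MLR.5, conditional on the sample regressor values:
\mathrm{Var}(\hat{\beta}_j) = \frac{\sigma^2}{SST_j(1 - R_j^2)}
where SST_j = \sum_{i=1}^{n}(x_{ij} - \bar{x}_j)^2 and R_j^2 is the R-squared from regressing x_j on all other regressors.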
3.4: Standard Deviation and Standard Error of Parameters
4.1: Assumption MLR.6: Normality
4.1: Normal Sampling Distribution Theorem
4.2: t Distribution for the Standardized Estimators
4.2: The t Statistic for a Parameter
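For H_0: \beta_j = 0:
t_{\hat{\beta}_j} = \frac{\hat{\beta}_j}{\mathrm{se}(\hat{\beta}_j)}
More generally, for H_0: \beta_j = a_j, t = (\hat{\beta}_j - a_j)/\mathrm{se}(\hat{\beta}_j), with n - k - 1 degrees of freedom.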
4.2: Testing Against One-Sided Alternatives
4.2: Testing Against Two-Tailed Alternatives
4.3: Confidence Interval for Population Parameters
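A 95% confidence interval takes the form:
\hat{\beta}_j \pm c \cdot \mathrm{se}(\hat{\beta}_j)
where c is the 97.5th percentile of the t_{n-k-1} distribution.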
4.4: Test Statistic for Testing Parameters
4.4: Comparing Two Parameters, Method 2
4.5: F Statistic for the Multiple Linear Restrictions Test
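With q restrictions, SSR_r from the restricted model and SSR_{ur} from the unrestricted model:
F = \frac{(SSR_r - SSR_{ur})/q}{SSR_{ur}/(n-k-1)}
distributed as F_{q, n-k-1} under H_0.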
4.5: Steps for Testing Multiple Linear Restrictions
4.5: R-Squared Form of the F Statistic
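Equivalent to the SSR form when both models share the same dependent variable:
F = \frac{(R^2_{ur} - R^2_r)/q}{(1 - R^2_{ur})/(n-k-1)}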
4.5: Testing the Overall Significance of Regression
5.1: Assumption MLR.4′: Zero Mean and Zero Correlation
5.1: Consistency of OLS Theorem
5.1: Deriving the Inconsistency in OLS
5.2: Asymptotic Normality of OLS
5.2: Lagrange Multiplier Statistic
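A sketch of the standard procedure for testing q exclusion restrictions: estimate the restricted model and save the residuals \tilde{u}; regress \tilde{u} on all independent variables of the unrestricted model, obtaining R^2_{\tilde{u}}; then
LM = n \cdot R^2_{\tilde{u}}
which is asymptotically \chi^2_q under H_0.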
6.1: Beta Coefficients
6.2: Making Logarithmic Approximations Accurate
6.2: Models with Interaction Terms
6.3: Adjusted R-Squared
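\bar{R}^2 = 1 - \frac{SSR/(n-k-1)}{SST/(n-1)} = 1 - (1 - R^2)\frac{n-1}{n-k-1}
This imposes a penalty for adding regressors: when one variable is added, \bar{R}^2 rises only if that variable's t statistic exceeds one in absolute value.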
6.3: Using Adjusted R-Squared to Choose between Functional Forms
6.4: Confidence Interval for Prediction
6.4: Confidence Interval for a Particular Value
6.4: Predicting y with a Logarithmic Dependent Variable
7.2: Dummy Variables on the Intercept
7.3: Uncentered Coefficient of Determination
7.3: Ordinal Information Using Dummy Variables
7.4: Dummy Variables on the Slope
7.4: Testing for Differences in Regression Functions across Groups
7.5: Linear Probability Model (LPM)
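The response probability is linear in the parameters:
P(y = 1 \mid x) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k
so \beta_j measures the change in the probability of success for a one-unit change in x_j, holding the other factors fixed.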
8.2: Heteroskedasticity-Robust Variance for Simple Regression
8.2: Heteroskedasticity-Robust Variance for Multiple Regression
8.2: Computing Heteroskedasticity-Robust LM Tests
8.3: The Breusch-Pagan Test for Heteroskedasticity
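A sketch of the standard procedure: estimate the model by OLS and save the squared residuals \hat{u}^2; regress \hat{u}^2 on x_1, \ldots, x_k; test their joint significance with the F statistic from that regression, or with
LM = n \cdot R^2_{\hat{u}^2}
which is asymptotically \chi^2_k under the null of homoskedasticity.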
8.3: A Special Case of the White Test for Heteroskedasticity
9.1: Regression Specification Error Test (RESET)
9.1: The Davidson-MacKinnon Test