General Flashcards

1
Q

The variance of a random variable X is:

A) Equal to the standard deviation of X

B) Strictly smaller than zero

C) Equal to zero if E[X] = 0

D) Strictly larger than zero

A

D) Strictly larger than zero

Variance measures the spread of values around the mean. It is non-negative: strictly positive unless X is constant, in which case it is zero. Like the standard deviation (its square root), the variance can never be negative.
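A minimal Python sketch (standard library only) of the point: variance is zero for a constant sample and strictly positive for anything that varies, and it can never be negative.

```python
import statistics

# Population variance of a constant sample is zero; any variation makes
# it strictly positive. It can never be negative.
constant_sample = [5.0] * 10
varying_sample = [1.0, 2.0, 3.0, 4.0]

var_const = statistics.pvariance(constant_sample)
var_vary = statistics.pvariance(varying_sample)

print(var_const, var_vary)  # 0.0 1.25
```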

2
Q

The best predictor of Y that minimizes the mean squared error (MSE) after observing that X takes value x is:

A) 𝐸[π‘Œ]

B) Var(π‘Œ)

C) 𝐸[π‘Œβˆ£π‘‹=π‘₯]

D) π‘₯

A

C) E[Y∣X=x]

Explanation:
The conditional expectation E[Y∣X=x] minimizes the MSE when predicting Y. It represents the best estimate for Y given the observed value of X.
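A small simulation sketch, assuming an illustrative model Y = 2X + noise (the model is an invention of this example, not from the card): predicting with the conditional mean E[Y∣X=x] = 2x leaves only the noise variance, while predicting with the unconditional mean E[Y] = 0 leaves the full Var(Y) = 4 + 1 = 5.

```python
import random

random.seed(0)

# Illustrative model (an assumption of this sketch): Y = 2*X + noise,
# so the conditional mean is E[Y | X = x] = 2x and E[Y] = 0.
n = 100_000
pairs = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = 2 * x + random.gauss(0.0, 1.0)
    pairs.append((x, y))

# The conditional-mean predictor leaves only the noise variance (~1);
# the unconditional mean E[Y] = 0 leaves Var(Y) = 4 + 1 = 5.
mse_cond = sum((y - 2 * x) ** 2 for x, y in pairs) / n
mse_uncond = sum(y ** 2 for x, y in pairs) / n

print(mse_cond, mse_uncond)
```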

3
Q

The standard error of an estimator (such as the OLS estimator) is:

A) Equal to the square root of the variance of the sampling distribution of the estimator

B) Equal to the squared mean of the dependent variable Y

C) Equal to the variance of the dependent variable Y

D) Equal to the variance of the sampling distribution of the estimator

A

A) Equal to the square root of the variance of the sampling distribution of the estimator

Explanation:
The standard error is the square root of the variance of the estimator’s sampling distribution, providing a measure of variability in the estimates.
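A simulation sketch of the idea, using the sample mean as the estimator (the population sigma = 2 and sample size n = 50 are assumptions of the example): the standard deviation of the estimates across repeated samples approximates the theoretical standard error sigma / sqrt(n).

```python
import math
import random
import statistics

random.seed(1)

# Draw many samples of size n from a population with sigma = 2 and record
# the sample mean each time: the standard deviation of those estimates
# approximates the standard error sigma / sqrt(n).
n, reps, sigma = 50, 5_000, 2.0
estimates = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
             for _ in range(reps)]

empirical_se = statistics.pstdev(estimates)
theoretical_se = sigma / math.sqrt(n)

print(empirical_se, theoretical_se)
```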

4
Q

An estimator for the population parameter β is said to be unbiased if:

A) The estimated value converges to β as the sample size increases

B) The estimate obtained from a single sample is equal to β

C) The expected value of the estimator is equal to β

D) The estimator has the smallest possible sampling variance

A

C) The expected value of the estimator is equal to β

Explanation:
An unbiased estimator has an expected value equal to the true parameter value, ensuring that on average, it accurately estimates the parameter.
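The "on average over repeated samples" idea can be sketched by comparing the two sample-variance formulas: the divisor-(n-1) version is unbiased, while the divisor-n version is biased downward by a factor (n-1)/n.

```python
import random
import statistics

random.seed(2)

# Unbiasedness is a statement about the average over repeated samples:
# the sample variance with divisor (n - 1) averages to the true variance
# (here 1.0), while the divisor-n version is biased downward by (n - 1)/n.
n, reps = 5, 50_000
unbiased_draws, biased_draws = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    unbiased_draws.append(statistics.variance(sample))   # divisor n - 1
    biased_draws.append(statistics.pvariance(sample))    # divisor n

print(statistics.fmean(unbiased_draws))  # close to 1.0
print(statistics.fmean(biased_draws))    # close to (n - 1)/n = 0.8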

5
Q

A sample of time-series data:

A) Always contains only stationary variables
B) Can be obtained by drawing a random sample from the population
C) Is not useful for economic analysis
D) Is a single realization from a stochastic process

A

D) Is a single realization from a stochastic process

Explanation:
Time-series data represents one specific path or realization from an underlying stochastic process, observed sequentially over time.

6
Q

An estimator is said to be consistent if:

A) It converges to the true population parameter when the sample size increases to infinity

B) It is unbiased

C) It converges to the sample mean of the data when the sample size increases to infinity

D) It is efficient

A

A) It converges to the true population parameter when the sample size increases to infinity

Explanation:
A consistent estimator approaches the true value of the parameter as the sample size becomes infinitely large, ensuring accuracy over larger samples.
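A quick sketch of consistency using the sample mean as the estimator (the true mean 3.0 and the sample sizes are assumptions of the example): larger samples put the estimate closer to the truth.

```python
import random
import statistics

random.seed(4)

# Consistency illustrated with the sample mean: as n grows, the estimate
# concentrates around the true population mean (here 3.0).
true_mean = 3.0
abs_errors = []
for n in (10, 1_000, 100_000):
    sample_mean = statistics.fmean(random.gauss(true_mean, 1.0) for _ in range(n))
    abs_errors.append(abs(sample_mean - true_mean))

print(abs_errors)  # typically shrinks toward zero as n grows
```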

7
Q

The OLS estimator of a stationary AR(1) model:

A) Is biased but consistent because the error terms are serially correlated

B) Is biased but consistent because the explanatory variables are strictly exogenous

C) Is biased but consistent because the error terms are non-stationary

D) Is biased but consistent because the explanatory variables are weakly exogenous

A

D) Is biased but consistent because the explanatory variables are weakly exogenous

Explanation:
OLS estimates in an AR(1) model are biased in small samples but become consistent as sample size increases if the explanatory variables are weakly exogenous.
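A simulation sketch of this small-sample bias, using an illustrative AR(1) with phi = 0.9 and no intercept (the parameter values and function name are inventions of this example): the OLS estimate of phi is biased downward at small T, and the bias shrinks as T grows.

```python
import random
import statistics

random.seed(3)

def ols_phi_hat(T, phi=0.9):
    # Simulate a stationary AR(1) without intercept, y_t = phi*y_{t-1} + e_t,
    # and estimate phi by OLS: phi_hat = sum(y_t*y_{t-1}) / sum(y_{t-1}**2).
    y, ys = 0.0, []
    for _ in range(T + 50):            # 50 burn-in draws, discarded below
        y = phi * y + random.gauss(0.0, 1.0)
        ys.append(y)
    ys = ys[50:]
    num = sum(ys[t] * ys[t - 1] for t in range(1, T))
    den = sum(ys[t - 1] ** 2 for t in range(1, T))
    return num / den

reps = 1_000
mean_small = statistics.fmean(ols_phi_hat(25) for _ in range(reps))
mean_large = statistics.fmean(ols_phi_hat(500) for _ in range(reps))

print(mean_small, mean_large)  # the downward bias shrinks as T grows
```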

8
Q

The conditional expectation E[Y∣X=x]:

A) Is equal to E[Y∣X=x]=E[Y] if Y and X are correlated

B) Can always be modeled as a linear regression

C) Gives the expected value of Y given that X equals x

D) Is equal to E[Y∣X=x]=E[Y]/x

A

C) Gives the expected value of Y given that X equals x

Explanation:
The conditional expectation represents the expected value of Y when the value of X is known, minimizing prediction error.

9
Q

If a Durbin-Watson test rejects its null hypothesis, the data provides evidence that the error terms in the regression are:

A) Homoscedastic

B) Not serially correlated

C) Serially correlated

D) Heteroscedastic

A

C) Serially correlated

Explanation:
The Durbin-Watson test is used to detect serial correlation. Rejecting the null hypothesis indicates that there is serial correlation in the residuals.
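A minimal sketch of the statistic itself (the residual series below are made-up illustrations): d = sum of squared first differences of the residuals over their sum of squares, roughly d ≈ 2(1 - rho_hat), so values near 2 suggest no serial correlation, near 0 positive correlation, and near 4 negative correlation.

```python
# Durbin-Watson statistic from regression residuals e_1, ..., e_T:
# d = sum_{t=2}^{T} (e_t - e_{t-1})**2 / sum_{t=1}^{T} e_t**2.
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

alternating = [1, -1, 1, -1, 1, -1, 1, -1]  # sign flips: negative correlation
persistent = [1, 1, 1, -1, -1, -1, 1, 1]    # long runs: positive correlation

print(durbin_watson(alternating))  # 3.5, close to 4
print(durbin_watson(persistent))   # 1.0, well below 2
```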

10
Q

Serial correlation in the error terms of a linear regression model:

A) Requires a robust estimator of the standard errors, such as the Newey-West estimator

B) Implies that the explanatory variables are only weakly exogenous

C) Implies that OLS is BLUE

D) Implies that the error terms are a function of the explanatory variables

A

A) Requires a robust estimator of the standard errors, such as the Newey-West estimator

Explanation:
Serial correlation affects the validity of standard errors, necessitating the use of robust estimators like the Newey-West to obtain correct inference.

11
Q

If the explanatory variables in a linear regression are weakly exogenous:

A) The error terms are homoscedastic
B) The error term in period t may be correlated with explanatory variables in future periods
C) OLS is unbiased in finite samples
D) OLS is BLUE

A

B) The error term in period t may be correlated with explanatory variables in future periods

Explanation:
Weak exogeneity allows for correlation between current errors and future explanatory variables, unlike strict exogeneity.

12
Q

If the error terms in a linear regression model with strictly exogenous regressors are heteroscedastic:

A) OLS is biased but consistent

B) OLS is unbiased but not efficient

C) Statistical inference can proceed in the same way as with homoscedastic errors

D) The error terms from two consecutive periods are correlated

A

B) OLS is unbiased but not efficient

Explanation:
Heteroscedasticity impacts the efficiency of OLS: the estimator remains unbiased but is less precise than it would be under homoscedasticity.

13
Q

If we assume that the error terms in a linear regression with strictly exogenous regressors are normally distributed:

A) OLS is the best unbiased estimator

B) OLS is not efficient

C) The sampling distribution of the OLS estimator can only be derived asymptotically

D) The sampling distribution of the OLS estimator is the t-distribution

A

A) OLS is the best unbiased estimator

Explanation:
With normally distributed errors and strictly exogenous regressors, OLS coincides with the maximum likelihood estimator and is the best among all unbiased estimators, not only among linear ones (a stronger property than BLUE).

14
Q

The finite sampling distribution of the OLS estimator is:

A) The distribution of the OLS estimates from repeated samples
B) The distribution of the dependent variable
C) The distribution of the residuals
D) Always the normal distribution

A

A) The distribution of the OLS estimates from repeated samples

Explanation:
The finite sampling distribution shows the variability of OLS estimates from repeated sampling, not the distribution of residuals or the dependent variable.

15
Q

A weakly stationary time-series:

A) Has an unconditional variance that changes over time
B) Has unconditional moments that do not change over time
C) Exhibits a trend over time
D) Has a conditional expectation that does not change over time

A

B) Has unconditional moments that do not change over time

Explanation:
A weakly stationary time series has a constant mean and variance over time, and its autocovariance depends only on the lag.
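For the AR(1) case, the constant unconditional variance has a closed form, sketched below (the function name is an invention of this example): with |phi| < 1 and error variance sigma2, every period has Var(y_t) = sigma2 / (1 - phi**2).

```python
# For a stationary AR(1), y_t = phi*y_{t-1} + e_t with |phi| < 1 and
# Var(e_t) = sigma2, the unconditional variance is the same in every
# period: Var(y_t) = sigma2 / (1 - phi**2).
def ar1_unconditional_variance(phi, sigma2=1.0):
    assert abs(phi) < 1, "weak stationarity requires |phi| < 1"
    return sigma2 / (1 - phi ** 2)

print(ar1_unconditional_variance(0.5))  # 1 / 0.75 ≈ 1.333
print(ar1_unconditional_variance(0.9))  # more persistence, larger variance
```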

16
Q

A time-series of white noise error terms:

A) Is a non-stationary time-series
B) None of the other answers
C) Has an expected value of zero, constant variance, and zero autocovariance
D) Has autocorrelations that decline over time

A

C) Has an expected value of zero, constant variance, and zero autocovariance

Explanation:
White noise has a mean of zero, constant variance, and no autocorrelation, making it a stationary process.

17
Q

The variance of the OLS coefficient estimator is, everything else equal, larger when:

A) The variance of the error term is larger
B) The sample size is larger
C) The variance of the explanatory variables is larger
D) The expected value of the dependent variable is larger

A

A) The variance of the error term is larger

Explanation:
A larger error variance increases the variance of the OLS estimator, making it less precise.
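The simple-regression case makes this concrete: Var(beta_hat) = sigma2 / sum((x_i - xbar)**2). A small sketch (the function name is an invention of this example) showing that the variance rises with the error variance and falls with more observations:

```python
# In a simple regression with homoscedastic errors, the slope estimator's
# variance is Var(beta_hat) = sigma2 / sum((x_i - xbar)**2): it rises with
# the error variance and falls with more spread in x or more observations.
def ols_slope_variance(xs, sigma2):
    xbar = sum(xs) / len(xs)
    return sigma2 / sum((x - xbar) ** 2 for x in xs)

xs = [1, 2, 3, 4, 5]                     # sum of squared deviations = 10
print(ols_slope_variance(xs, 1.0))       # 0.1
print(ols_slope_variance(xs, 4.0))       # 0.4: larger error variance
print(ols_slope_variance(xs * 2, 1.0))   # 0.05: doubling the sample halves it
```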

18
Q

If the explanatory variables in a linear regression are strictly exogenous:

A) The explanatory variables are uncorrelated with each other
B) The error term is mean independent from all explanatory variables in all time periods
C) The error terms are not serially correlated
D) OLS delivers a biased estimate of the slope coefficients

A

B) The error term is mean independent from all explanatory variables in all time periods

Explanation:
Strict exogeneity means that the error term is mean independent of the explanatory variables in all time periods (past, present, and future), which ensures unbiased OLS estimates.

19
Q

What is heteroscedasticity?

A

Heteroscedasticity occurs when the error term's variance is not constant across observations; it changes based on the values of the independent variables.

20
Q

How can heteroscedasticity be addressed?

A

To handle heteroscedasticity, you can use robust standard errors that adjust for the changing variance, or apply generalized least squares (GLS) to correct for it.
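A hand-rolled sketch of the robust-standard-error idea, using White's HC0 estimator for the slope of a simple regression (the function name is an invention of this example, and a real analysis would use a library implementation): instead of assuming one common error variance, each observation contributes its own squared residual.

```python
import math

# White (HC0) robust standard error for the slope in a simple regression
# y = a + b*x + e. A sketch only; in practice one would use a library's
# heteroscedasticity-robust covariance estimator.
def hc0_slope_se(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    resid = [y - a - b * x for x, y in zip(xs, ys)]
    # HC0 keeps each observation's own squared residual instead of
    # assuming one common error variance.
    var_b = sum(((x - xbar) / sxx) ** 2 * e ** 2 for x, e in zip(xs, resid))
    return math.sqrt(var_b)

# A perfect linear fit has zero residuals and hence a zero robust SE.
print(hc0_slope_se([1, 2, 3, 4], [2, 4, 6, 8]))   # 0.0
print(hc0_slope_se([1, 2, 3, 4, 5, 6], [1.1, 2.3, 2.8, 4.5, 4.9, 6.2]))
```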

21
Q

How is the finite sampling distribution of the OLS estimator derived?

A

The finite sampling distribution is derived by assuming the error terms follow a normal distribution with a mean of zero and a constant variance; conditional on the regressors, the OLS estimator is then itself exactly normally distributed in finite samples.

22
Q

What is meant by the assumption of mean independence?

A

Mean independence means that the independent variables do not influence the expected value of the error term. The error term has an average of zero regardless of the values of the independent variables.