Week 1 Flashcards

1
Q

Population

A

A large group, or repeated process, that we are studying.

2
Q

Sample space

A

Collection of all possible samples; there is uncertainty about which sample is drawn. The probability function P assigns a probability to each possible sample.

3
Q

Random variable

A

Describes a feature of the random sample. Formally, a rule that assigns a number to each possible sample.

4
Q

Variance

A

Measures the spread of the distribution around its mean, and hence how accurate a prediction based on that mean is.

5
Q

Covariance

A

Measures how two random variables move together: in the same direction (positive covariance), in opposite directions (negative covariance), or not at all (zero covariance).

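A minimal sketch of the last two cards, assuming NumPy; the data are simulated and purely illustrative. A sample variance measures spread, and a sample covariance is positive when two variables move in the same direction:

```python
# Sketch: sample variance and covariance with NumPy (simulated data).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)   # y moves with x -> positive covariance

var_x = x.var(ddof=1)                 # spread of x around its mean
cov_xy = np.cov(x, y)[0, 1]           # how x and y move together

print(var_x > 0)    # variance is always non-negative
print(cov_xy > 0)   # x and y move in the same direction
```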
6
Q

Expectation

A

The expected value of a random variable is its probability-weighted average over all possible outcomes.

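The probability-weighted average in the card above can be sketched for a hypothetical fair die:

```python
# Sketch: the expectation of a discrete random variable is its
# probability-weighted average (hypothetical fair die roll).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

expectation = sum(v * p for v, p in zip(values, probs))
print(expectation)  # 3.5
```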
7
Q

Random sampling

A

A recipe for picking out points from the population at random and for recording their characteristics.

8
Q

Realized sample

A

Recorded characteristics of the sample, a dataset with numbers.

9
Q

Estimand, estimator, estimate

A

The estimand is the population feature that we try to infer using an estimator (rule) that gives us an estimate of the estimand.

10
Q

Method of moments

A

Strategy for constructing an estimator by replacing population quantities by sample analogues.

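A small sketch of the method of moments, with an assumed Exponential population: the population mean is 1/rate, so replacing the population mean by the sample mean gives the estimator rate_hat = 1 / sample mean.

```python
# Sketch: method of moments for an Exponential(rate) population.
# Population moment: E[X] = 1/rate. Sample analogue: the sample mean.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 2.0
sample = rng.exponential(scale=1 / true_rate, size=100_000)

rate_hat = 1.0 / sample.mean()   # replace population mean by sample mean
print(rate_hat)                  # close to the true rate of 2.0
```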
11
Q

Linear regression model (OLS-1):

A

Functional-form assumption: the regression curve m takes the form of a linear model with k regressors and an unobserved component U (the excluded variables). The model is linear in the coefficients, the betas, but can still contain nonlinear transformations of the x's.

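A sketch of "linear in the betas, nonlinear in the x's": the model below includes x squared as a regressor, yet OLS still recovers the coefficients because the model is linear in them. Data and coefficients are hypothetical.

```python
# Sketch of OLS-1: Y = b0 + b1*x + b2*x**2 + U is linear in the betas
# even though x enters nonlinearly (simulated data).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=5000)
u = rng.normal(scale=0.1, size=5000)
y = 1.0 + 0.5 * x - 0.3 * x**2 + u

X = np.column_stack([np.ones_like(x), x, x**2])    # design matrix
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
print(beta_hat)                                    # close to [1.0, 0.5, -0.3]
```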
12
Q

Regression curve

A

Rule that describes how the regressors contribute to the outcome. It is pinned down by k+1 parameters (the coefficients).

13
Q

Marginal effect

A

The causal effects that we want to compute. If we change one regressor by one (infinitesimal) unit while keeping everything else constant (ceteris paribus), we get the marginal effect.

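In a linear model the marginal effect of a regressor is simply its coefficient; a one-unit change in x1, holding x2 fixed, moves the prediction by b1. The coefficients below are hypothetical.

```python
# Sketch: marginal effect in a linear model is the coefficient b1
# (hypothetical coefficients, ceteris paribus comparison).
b0, b1, b2 = 2.0, 0.5, -1.0

def predict(x1, x2):
    return b0 + b1 * x1 + b2 * x2

# Raise x1 by one unit while holding x2 constant:
effect_x1 = predict(3.0, 4.0) - predict(2.0, 4.0)
print(effect_x1)  # 0.5, i.e. the coefficient b1
```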
14
Q

Conditional expectation

A

A recipe that tells us how to construct, for each state of the world, a "best" prediction of a random variable using the information contained in other random variables.

15
Q

Exogeneity assumption (OLS-2)

A

The regressors are not informative about the level of U. The regression curve m then gives the conditional expectation of Y given the regressors, which implies Cov(Xj, U) = 0.

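Exogeneity can be illustrated by simulation: when the unobserved component is drawn independently of the regressor, their sample covariance is close to zero in a large sample. The setup below is hypothetical.

```python
# Sketch: under exogeneity the regressor carries no information about U,
# so Cov(X, U) is approximately zero in a large simulated sample.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200_000)
u = rng.normal(size=200_000)   # drawn independently of x -> exogenous

cov_xu = np.cov(x, u)[0, 1]
print(abs(cov_xu) < 0.02)      # close to zero
```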
16
Q

Observational data

A

Data collected by observing economic agents in the real world. The observed regressors can themselves be the product of economic behaviour.

17
Q

Randomized controlled trial

A

An at least partially artificial environment in which the econometrician assigns regressor values to the economic agents they are studying. Random assignment into treatment and control groups rules out the role that U can have on the outcome; for example, mathematical ability (U) that affects study time (x1) no longer distorts the estimated effect.

18
Q

Observational equivalence

A

Two models with different sets of coefficients have the same regression function, i.e. they transform regressors into outcomes in the same way, so we cannot tell which set of coefficients is the true one.

19
Q

Full rank assumption (OLS-3):

A

We cannot find a model that is observationally equivalent to the true model but is described by a different set of coefficients.

20
Q

Variance decomposition

A

After a regression we can, at the sample level, decompose the total variation in outcomes into variation due to the regressors (the explained part) and variation in the estimated unobserved component (the unexplained part).

21
Q

R^2

A

Measures how well our model predicts the outcome: the share of the variance of Y that is explained by the model. A higher value means the model predicts Y better.
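
The variance decomposition and R^2 from the last two cards can be sketched on simulated data: total variation splits exactly into the explained and unexplained parts, and R^2 is the explained share.

```python
# Sketch: variance decomposition TSS = ESS + RSS and R^2 after OLS
# (simulated data; with these parameters the true R^2 is 0.8).
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=10_000)
y = 1.0 + 2.0 * x + rng.normal(size=10_000)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

tss = ((y - y.mean()) ** 2).sum()       # total variation
ess = ((y_hat - y.mean()) ** 2).sum()   # explained part
rss = ((y - y_hat) ** 2).sum()          # unexplained part

print(abs(tss - (ess + rss)) < 1e-6 * tss)  # decomposition holds
print(ess / tss)                            # R^2, here about 0.8
```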

22
Q

If the regressors have no predictive power for Y…

A

Then you should predict Y with its unconditional mean E(Y) instead of using the x's in a conditional expectation; we say Y is mean independent of the regressors.
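
A sketch of this last card, with y simulated independently of x: regressing on an uninformative x yields essentially no improvement over simply predicting with the sample mean of Y.

```python
# Sketch: when the regressor is uninformative about Y, predicting with
# the (sample) mean of Y does as well as a fitted regression.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=50_000)
y = rng.normal(loc=3.0, size=50_000)   # y does not depend on x

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

mse_reg = ((y - X @ beta) ** 2).mean()   # prediction error of regression
mse_mean = ((y - y.mean()) ** 2).mean()  # prediction error of E(Y)

print(abs(mse_reg - mse_mean) < 0.01)    # no gain over E(Y)
print(abs(beta[1]) < 0.05)               # slope is essentially zero
```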