Week 1 and Week 2 Flashcards

1
Q

What is probability theory good for?

A

Learn about a large group (the large group = population):
• The population is too large to look at everyone
• Look at small subgroups (samples)
• Describing the sample = descriptive statistics
• Inferring properties of the population from a sample = inferential statistics

2
Q

How to choose a smart sample?

A

Choose a random sample

3
Q

How do we denote sample and realized sample?

A

Sample space: Ω
Realized sample: ω_0

4
Q

Denote Expectation of X

A

E[X]

5
Q

How do we calculate variance? And why?

A

Variance measures how accurately X(ω_0) is predicted by E[X]. Var(X) = E[(X - E[X])^2]
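A quick numerical sketch of this definition (the data below is made up for illustration):

```python
import numpy as np

# Hypothetical realizations of X, purely for illustration
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = x.mean()                 # estimate of E[X]
var = np.mean((x - mean) ** 2)  # Var(X) = E[(X - E[X])^2]

print(mean)  # 5.0
print(var)   # 4.0
```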

6
Q

If we calculate Var(x) and it’s LARGE, is E[X] a good prediction of x(ω_0)?

A

No, E[X] is not a good prediction of X(ω_0). E[X] is a good prediction only if the variance is small.

7
Q

How do we calculate standard deviation?

A

sd(x)=sqrt(var(x))

8
Q

Optimality of E[X]

A

E[X] = arg min_a E[(X - a)^2]
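A sketch checking this optimality property numerically on made-up data: the constant a that minimizes the mean squared error coincides with the mean.

```python
import numpy as np

x = np.array([1.0, 2.0, 6.0, 7.0])  # illustrative realizations of X

def mse(a):
    # Mean squared error of predicting X by the constant a
    return np.mean((x - a) ** 2)

grid = np.linspace(0.0, 10.0, 1001)
best = grid[np.argmin([mse(a) for a in grid])]

print(best, x.mean())  # the minimizer coincides with the mean: 4.0
```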

9
Q

What does it mean if:
Cov(Y_1,Y_2)<0
Cov(Y_1,Y_2)>0
Cov(Y_1,Y_2)=0

A

Cov(Y_1, Y_2) < 0: negative relationship, negatively correlated
Cov(Y_1, Y_2) > 0: positive relationship, positively correlated
Cov(Y_1, Y_2) = 0: uncorrelated
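A simulated sketch of the three cases (seeded random data; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.normal(size=10_000)

y_pos = y1 + rng.normal(size=10_000)   # moves with y1
y_neg = -y1 + rng.normal(size=10_000)  # moves against y1
y_ind = rng.normal(size=10_000)        # unrelated to y1

cov_pos = np.cov(y1, y_pos)[0, 1]
cov_neg = np.cov(y1, y_neg)[0, 1]
cov_ind = np.cov(y1, y_ind)[0, 1]

print(cov_pos)  # > 0: positively correlated
print(cov_neg)  # < 0: negatively correlated
print(cov_ind)  # close to 0: uncorrelated
```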

10
Q

Does a random sample give a representative sample?

A

Yes. Drawing the sample at random makes it representative of the population.

11
Q

What does k and K denote?

A

k indexes an individual regressor X_k
K = the total number of regressors (X_1, …, X_K)

12
Q

“we cannot predict the value of U by observing regressors” is denoted how?

A

E[U|X1,…,Xk]=0

13
Q

Assumption OLS-2 (exogeneity)

A

The linear regression model satisfies: E[U|X1,…,Xk]=0

The regressors are exogenous.

14
Q

If E[U|X1,…,Xk]=0 holds, what does that say about the covariance?

A

Cov(U, X_j) = 0 for every j
- each regressor is uncorrelated with the unobserved component
- if we find a j with Cov(U, X_j) ≠ 0, exogeneity fails

15
Q

Assumption OLS-3

A

(Full rank, informal statement): The best linear prediction of Y is unique.
This assumption is often called the “no perfect collinearity assumption”

16
Q

B^_0 is equal to what?

A

B^_0 = E^[Y] - B^_1 * E^[X], i.e. the estimated intercept is the sample mean of Y minus the estimated slope times the sample mean of X.
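A sketch computing B^_1 and B^_0 by hand on a made-up sample (simple regression with one X):

```python
import numpy as np

# Illustrative sample
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Slope: B^_1 = Cov^(X, Y) / Var^(X)
b1 = np.mean((x - x.mean()) * (y - y.mean())) / np.mean((x - x.mean()) ** 2)
# Intercept: B^_0 = E^[Y] - B^_1 * E^[X]
b0 = y.mean() - b1 * x.mean()

print(b1)  # ≈ 1.96
print(b0)  # ≈ 0.14
```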

17
Q

How much does the score increase if the model is: score = β0 + β1 log(study time) + β2courses + U

A

We estimate βˆ1(ω0) = 16.74237. Since study time enters in logs, the estimated effect of increasing study time by 1% is approximately 16.74237/100 ≈ 0.167 points on the exam.

18
Q

How much does the score increase if the model is: log(score) = β0 + β1study time + β2courses + U.

A
  • This implies that changing the variable X1 by 1 unit will change output by approximately 100β1 %.
  • We estimate βˆ1(ω0) = .014173. This means that we estimate that studying one extra hour will increase your exam score by approximately 100 × .014173% = 1.4173%.
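Since 100β1% is an approximation, it can be checked against the exact percent change, 100·(e^β1 − 1); a small sketch using the estimate from this card:

```python
import numpy as np

b1 = 0.014173  # estimated coefficient from the log-level model above

approx = 100 * b1               # the approximation: 100 * b1 percent
exact = 100 * (np.exp(b1) - 1)  # the exact percent change in the outcome

print(approx)  # ≈ 1.4173
print(exact)   # ≈ 1.4274, very close because b1 is small
```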
19
Q

How much does the score increase if the model is: log(score) = β0 + β1 log(study time) + β2courses + U.

A
  • This implies that changing the variable X1 by 1% will change output by approximately β1%. In this case, the coefficient β1 is called the (approximate) elasticity of X1.
  • We estimate an approximate elasticity of βˆ1(ω0) = .263274. This means that increasing your study time by 1% will boost your exam score by approximately .263274%.
20
Q

How much does the score increase if the model is: score = β0 + γ0stat + β1study time + U

A
  • We estimate γˆ0(ω0) = 51.33396. The estimated effect can be interpreted as follows. Suppose we could re-write the past of someone who didn’t take a statistics course and make them attend at least one such course. It is estimated that this would boost their grade by approximately 51.33 points.
21
Q

How much does the score increase if the model is: score = β0 + β1study time + β2math + γ1(study time × math) + U.

A
  • The marginal effect of study time is given by β1 + γ1math. If γ1 > 0 this means that the higher your mathematical ability the more efficiently you study.
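A sketch with assumed (made-up) coefficient values, showing how the marginal effect of study time grows with math ability when γ1 > 0:

```python
# Assumed estimates, purely for illustration
b1 = 2.0   # baseline effect of one extra unit of study time
g1 = 0.5   # interaction coefficient on (study time x math)

def marginal_effect(math):
    # Marginal effect of study time: b1 + g1 * math
    return b1 + g1 * math

print(marginal_effect(0))  # 2.0: low math ability
print(marginal_effect(4))  # 4.0: higher math ability, studying is more effective
```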
22
Q

OLS assumption 1

A

Functional form

23
Q

Ols assumption 4

A

Random Sample

24
Q

If OLS 1-4 hold, what does that mean?

A

That E[B^_1]=B_1 –> B^_1 is unbiased for B_1
- B^_1 is consistent for B_1
- B^_1 ≈ B_1 in large samples

25
Q

OLS 5

A

The conditional variance of the unobserved component U is constant. If OLS 5 is satisfied: the error term is HOMOSKEDASTIC.

26
Q

Can standard deviation replace standard error and vice versa?

A

Yes. In the formulas for test statistics, the unknown standard deviation is replaced by its estimate, the standard error (and vice versa).

27
Q

What do large and small values of rj indicate?

A

Small values of rj indicate that Xj cannot be approximated well by the other regressors. Large values of rj indicate that Xj can be approximated quite well by the other regressors.

28
Q

What if
- σ decreases
- var(Xj) increases
- rj decreases

A

If σ decreases then the variance of βˆj will decrease.

If var(Xj) increases then the variance of βˆj will decrease.

If rj decreases then the variance of βˆj will decrease.
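These three comparative statics can be sketched with the usual OLS variance formula, assuming it has the form Var(B^_j) = σ² / (n · Var(X_j) · (1 − rj²)), where rj measures how well X_j is approximated by the other regressors (the course notes may state the formula slightly differently):

```python
def var_beta_j(sigma2, n, var_xj, rj):
    # Assumed form of the OLS variance: sigma^2 / (n * var(Xj) * (1 - rj^2))
    return sigma2 / (n * var_xj * (1 - rj ** 2))

base = var_beta_j(sigma2=4.0, n=100, var_xj=1.0, rj=0.5)

# Each change decreases the variance of B^_j:
print(var_beta_j(2.0, 100, 1.0, 0.5) < base)  # True: sigma decreased
print(var_beta_j(4.0, 100, 2.0, 0.5) < base)  # True: var(Xj) increased
print(var_beta_j(4.0, 100, 1.0, 0.2) < base)  # True: rj decreased
```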

29
Q

Do we try to disprove the null hypothesis?

A

Yes.

30
Q

If H_0 is true and we reject the H_0, what is that called?

A

Type 1 error

31
Q

If H_0 is false and we don’t reject H_0, what is that called?

A

Type 2 error.

32
Q

What do we say if H_0 is rejected?

A

We say that X is (statistically) significant.

33
Q

When do we use F tests?

A

For testing multiple hypotheses jointly.

34
Q

When do we use t-test?

A

For testing a single hypothesis.

35
Q

F test:
Which one do we reject?

If p-value(ω0) ≥ α ⇒ Fˆ(ω0) ≤ cα

If p-value(ω0) < α ⇒ Fˆ(ω0) > cα

A

We reject if p-value(ω0) < α ⇒ Fˆ(ω0) > cα

36
Q

Can we say anything if we can’t reject the null hypothesis?

A

No. We learn nothing.

37
Q

Formula for R^2 when Var^ (Y) is unknown

A

R^2 = Var^ (Y^) / (Var^ (Y^) + Var^ (U^)) = 1 / (1 + Var^ (U^) / Var^ (Y^))
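A sketch verifying that both forms of the formula agree, on made-up fitted values and residuals:

```python
import numpy as np

# Illustrative fitted values and residuals (residuals average to zero)
y_hat = np.array([1.0, 2.0, 3.0, 4.0])
u_hat = np.array([0.5, -0.5, 0.5, -0.5])

var_yhat = np.mean((y_hat - y_hat.mean()) ** 2)
var_uhat = np.mean((u_hat - u_hat.mean()) ** 2)

r2 = var_yhat / (var_yhat + var_uhat)
r2_alt = 1 / (1 + var_uhat / var_yhat)

print(r2)      # ≈ 0.8333
print(r2_alt)  # same value
```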

38
Q

Formula for Var (X_1)

A

Var^ (X_1) = 1/n * Σ_{i=1}^n (X_1i - E^(X_1))^2

39
Q

Compute marginal effect of a square variable and another variable. Example m(x1) = β0 + β1x_1 + β2x_1^2

A

We differentiate with respect to x_1: the marginal effect is m'(x_1) = β1 + 2β2·x_1.
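A sketch with assumed coefficients, checking the derivative against a numerical approximation:

```python
# Assumed coefficients, for illustration only
b0, b1, b2 = 1.0, 3.0, -0.5

def m(x1):
    # m(x1) = b0 + b1*x1 + b2*x1^2
    return b0 + b1 * x1 + b2 * x1 ** 2

def marginal(x1):
    # m'(x1) = b1 + 2 * b2 * x1
    return b1 + 2 * b2 * x1

# Central-difference check at x1 = 2
h = 1e-6
numeric = (m(2 + h) - m(2 - h)) / (2 * h)

print(marginal(2))  # 1.0
print(numeric)      # ≈ 1.0
```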