Lecture 3 Flashcards

1
Q

What is the expected value of Yi?

A

E[Yi] = β0 + β1xi. Since Yi = β0 + β1xi + εi and E[εi] = 0, the expectation reduces to the non-random part β0 + β1xi.
2
Q

What is the linearity of expectation rule?

A

The linearity of expectation states that the expectation (mean) of a sum of random variables equals the sum of their expectations, E[X + Y] = E[X] + E[Y], regardless of whether the variables are independent or dependent.
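A quick numeric check (a hypothetical sketch, not from the lecture) that linearity holds even for perfectly dependent variables: here Y is completely determined by X, yet E[X + Y] = E[X] + E[Y] still holds.

```python
# Hypothetical example: X is a fair die roll, Y = 7 - X is fully
# dependent on X, yet linearity of expectation still applies.
import random

random.seed(0)
xs = [random.randint(1, 6) for _ in range(100_000)]
ys = [7 - x for x in xs]  # perfectly dependent on X


def mean(vals):
    return sum(vals) / len(vals)


lhs = mean([x + y for x, y in zip(xs, ys)])  # E[X + Y]
rhs = mean(xs) + mean(ys)                    # E[X] + E[Y]
print(lhs, rhs)  # both are 7, since X + Y = 7 on every draw
```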

3
Q

How do you get the expected value of a constant?

E[c] = ?

A

E[c] = c

A constant is the same value every time, so its “average” is just itself.

4
Q

How do you get the expected value of the error term in linear regression?

A

The expected value of the error term in a regression model is 0, because of a key assumption of the regression model: E[εi] = 0.

The errors should average to zero to ensure the model is unbiased.
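As a minimal sketch (hypothetical data and coefficients, not from the lecture): when you fit OLS with an intercept, the residuals average exactly zero by construction, mirroring the E[ε] = 0 assumption.

```python
# Hypothetical simulated data; the true coefficients are assumptions.
import random

random.seed(0)
n = 100
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [2.0 + 0.5 * x + random.gauss(0, 1) for x in xs]

# Closed-form OLS estimates for intercept and slope
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
print(sum(residuals) / n)  # zero, up to floating-point rounding
```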

5
Q

Say our error term has an expected value of 1, rather than 0. What does this mean about our model?

A

This means that, on average, the error term is slightly positive: our regression model is systematically underpredicting the true values of Y (by 1, on average).

8
Q

What are the expected value and variance of the error term?

A

The expected value is 0 and the variance is σ², constant for every observation (the homoskedasticity assumption).

9
Q

Why is the variance of Yi the variance of εi?

A

- Yi inherits its variance from εi, because that's the only random component of Yi = β0 + β1xi + εi.
- Constants like β0 and β1xi do not contribute to variance.
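A minimal sketch, assuming example values for β0, β1, and xi: adding the constant β0 + β1xi shifts Y but leaves its variance equal to that of ε.

```python
# Hypothetical example values; only eps is random, so Var(Y) = Var(eps).
import random
import statistics

random.seed(1)
b0, b1, x = 2.0, 0.5, 3.0  # assumed constants for a fixed observation
eps = [random.gauss(0, 1) for _ in range(50_000)]
y = [b0 + b1 * x + e for e in eps]

print(statistics.variance(eps))  # close to 1 (the true error variance)
print(statistics.variance(y))    # the same value: a constant shift adds no variance
```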

10
Q

Talk me through this notation

A
11
Q

What do you need to compute the MLE?

A

You need to know the distribution of the errors (e.g., normally distributed).

12
Q

What do you use MLE for?

A

A method used to estimate the values of the unknown parameters in your model.

16
Q

How do you simplify a product of exponentials into a single exponential?

17
Q

Simplify these products of exponentials into a single exponential

A

A product of exponentials becomes a single exponential of a sum, since exp(a) · exp(b) = exp(a + b). Anything that applies to every term in the resulting sum can then be factored out; hence the −1/(2σ²) is factored out of the sum of squared residuals.
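As a sketch, for the normal-error likelihood this simplification (using exp(a) · exp(b) = exp(a + b), then factoring the common −1/(2σ²) out of the sum) looks like:

```latex
\prod_{i=1}^{n} \exp\!\left(-\frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2\sigma^2}\right)
  = \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2\right)
```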

18
Q
A

All of the above

22
Q

What does the MLE estimate?

A

Estimates which minimise the sum of the squared residuals

23
Q

Maximising the log likelihood minimises the squared residuals

24
Q

Both Least Squares Estimation (LSE) and Maximum Likelihood Estimation (MLE) help us find the best parameters β0 and β1, but they use different approaches. What are they?

A

In Ordinary Least Squares (OLS) regression, we estimate β0 and β1 by minimizing the sum of squared residuals (SSR)

In Maximum Likelihood Estimation (MLE), instead of minimizing squared errors, we maximize the likelihood function. Here the goal is to find the estimates of β0, β1, and σ² that maximize this function: e.g., we are finding the value of β1 that makes the observed data most probable under our statistical model.
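A sketch on assumed simulated data (not from the lecture): for fixed σ, the closed-form OLS estimates also maximise the normal log-likelihood, since minimising the negative log-likelihood over (β0, β1) is the same as minimising the sum of squared residuals.

```python
# Hypothetical simulated data; true coefficients are assumptions.
import math
import random

random.seed(2)
n = 200
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [1.5 + 0.8 * x + random.gauss(0, 1) for x in xs]

# Closed-form OLS estimates
xbar, ybar = sum(xs) / n, sum(ys) / n
b1_ols = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
b0_ols = ybar - b1_ols * xbar


def neg_log_lik(b0, b1, sigma):
    """Negative log-likelihood under y_i ~ N(b0 + b1*x_i, sigma^2)."""
    return sum(0.5 * math.log(2 * math.pi * sigma ** 2)
               + (y - b0 - b1 * x) ** 2 / (2 * sigma ** 2)
               for x, y in zip(xs, ys))


# The OLS estimates beat every nearby (b0, b1) on the likelihood.
base = neg_log_lik(b0_ols, b1_ols, 1.0)
nearby = min(neg_log_lik(b0_ols + d0, b1_ols + d1, 1.0)
             for d0 in (-0.05, 0.0, 0.05) for d1 in (-0.05, 0.0, 0.05)
             if (d0, d1) != (0.0, 0.0))
print(base < nearby)  # True: the OLS estimates maximise the likelihood
```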

25
Q

Do the estimates for β0 and β1 come out the same using OLS and MLE?

A

Yes. The same estimates obtained using OLS are obtained using MLE, under the assumption of normally distributed errors.
26
Q

To obtain the MLEs, what do we need to maximise?

A

The log likelihood