Lecture 3 Flashcards
What is the expected value of Yi?
E[Yi] = β0 + β1xi. The error term is the only random part of Yi, and its expected value is 0.
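A worked version of that answer, using the model Yi = β0 + β1xi + εi and the rules on the next few cards:

```latex
\[
E[Y_i] = E[\beta_0 + \beta_1 x_i + \epsilon_i]
       = \beta_0 + \beta_1 x_i + E[\epsilon_i]
       = \beta_0 + \beta_1 x_i
\]
```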
What is the linearity of expectation rule?
The linearity of expectation states that the expectation (mean) of a sum of random variables equals the sum of their expectations, regardless of whether the variables are independent or dependent.
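In symbols (constants also factor out):

```latex
\[
E[aX + bY] = a\,E[X] + b\,E[Y]
\qquad \text{whether or not } X \text{ and } Y \text{ are independent.}
\]
```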
How do you get the expected value of a constant?
E[c] = ?
E[c] = c
A constant is the same value every time, so its “average” is just itself.
How do you get the expected value of the error term in linear regression?
The expected value of the error term is 0 because of a key assumption of the linear regression model: E[εi] = 0.
The errors should average out to zero so that the model is unbiased.
Say our error term has an expected value of 1, rather than 0. What does this mean about our model?
This means that, on average, the error term is always slightly positive, meaning our regression model is underpredicting the true values of Y.
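A worked version of that claim, reusing the expectation rules above:

```latex
\[
E[Y_i] = \beta_0 + \beta_1 x_i + E[\epsilon_i] = \beta_0 + \beta_1 x_i + 1
\]
```

so the model's systematic part, β0 + β1xi, sits one unit below the average of Yi.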
What are the expected value and variance of the error term?
The expected value is 0 and the variance is σ², a constant that is the same for every observation.
Why is the variance of Yi the variance of εi?
-Yi inherits its variance from εi because that's the only random component.
-Constants like β0 and β1xi do not contribute to variance.
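In symbols, combining the two bullet points with the constant-variance assumption:

```latex
\[
\mathrm{Var}(Y_i) = \mathrm{Var}(\beta_0 + \beta_1 x_i + \epsilon_i)
                  = \mathrm{Var}(\epsilon_i) = \sigma^2
\]
```

since adding a constant shifts a distribution without changing its spread.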
Talk me through this notation
What do you need to compute the MLE?
You need to know (or assume) the distribution of the data (e.g., normally distributed errors).
What do you use MLE for?
MLE is a method used to estimate the values of unknown parameters in your model.
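A minimal sketch of MLE in Python, assuming the simple linear model with normal errors from this lecture; the toy data, the log(sigma) reparameterisation, and the use of scipy.optimize.minimize are illustrative choices, not from the slides:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (illustrative): Yi = 2 + 3*xi + normal noise with sigma = 1.5
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 + 3 * x + rng.normal(0, 1.5, size=x.size)

def neg_log_likelihood(params):
    """Negative log-likelihood of y under Yi ~ N(b0 + b1*xi, sigma^2),
    dropping the constant (n/2)*log(2*pi) term."""
    b0, b1, log_sigma = params      # optimise log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    resid = y - (b0 + b1 * x)
    return x.size * np.log(sigma) + np.sum(resid**2) / (2 * sigma**2)

# Maximising the likelihood = minimising its negative log
result = minimize(neg_log_likelihood, x0=np.zeros(3))
b0_hat, b1_hat = result.x[0], result.x[1]
sigma_hat = np.exp(result.x[2])
print(b0_hat, b1_hat, sigma_hat)    # should land near 2, 3, 1.5
```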
How do you simplify a product of exponentials into a single exponential?
A product of exponentials is the exponential of the sum of the exponents: multiplying e^a by e^b gives e^(a+b). Anything that applies to every term in the resulting summation can then be factored out; hence the -1/(2σ²) is factored out of the sum.
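Applied to the normal likelihood, the rule turns the product over observations into a single exponential of a sum, from which the common factor -1/(2σ²) is pulled out:

```latex
\[
\prod_{i=1}^{n} \exp\!\left(-\frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2\sigma^2}\right)
= \exp\!\left(-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2\right)
\]
```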
What do the MLE estimates do?
Under the normal-errors assumption, the MLE estimates of β0 and β1 are the values that minimise the sum of the squared residuals.
Maximising the log likelihood minimises the squared residuals. True or false?
True: with normal errors, the log likelihood depends on β0 and β1 only through -SSR/(2σ²), so making the likelihood bigger means making the SSR smaller.
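A worked version of why this is true:

```latex
\[
\log L(\beta_0, \beta_1, \sigma^2)
= -\frac{n}{2}\log(2\pi\sigma^2)
  - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2
\]
```

For fixed σ², the first term is constant, so maximising log L over β0 and β1 is exactly minimising the sum of squared residuals.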
Both Least Squares Estimation (LSE) and Maximum Likelihood Estimation (MLE) help us find the best parameters β0 and β1, but they use different approaches. What are they?
In Ordinary Least Squares (OLS) regression, we estimate β0 and β1 by minimizing the sum of squared residuals (SSR).
In Maximum Likelihood Estimation (MLE), instead of minimizing squared errors, we maximize the likelihood function. Here the goal is to find the estimates of β0, β1 and σ² that maximize this function, i.e., we are finding the values that make the observed data most probable under our statistical model.
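The two objective functions side by side, under the normal model:

```latex
\[
\text{LSE: } \min_{\beta_0,\beta_1} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2
\qquad
\text{MLE: } \max_{\beta_0,\beta_1,\sigma^2} \prod_{i=1}^{n}
  \frac{1}{\sqrt{2\pi\sigma^2}}
  \exp\!\left(-\frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2\sigma^2}\right)
\]
```

The solutions for β̂0 and β̂1 coincide; MLE additionally returns a variance estimate, σ̂² = SSR/n.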