6. MRA Further issues Flashcards
What are quadratic functions used for?
They are used to capture decreasing or increasing marginal effects
When the coefficient on x is positive and the coefficient on x^2 is negative, what shape is the curve?
The quadratic has an inverted-U (parabolic) shape: the effect of x on y is positive at first, diminishes, and becomes negative beyond the turning point
How can you find the turning point of the parabola for x?
x* = |β̂1/(2β̂2)|: the turning point (the maximum of the function when β̂1 > 0 and β̂2 < 0) occurs where x equals the coefficient on x divided by twice the absolute value of the coefficient on x^2
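A minimal sketch of this calculation in Python, using simulated data and illustrative variable names (wage and exper are hypothetical here; statsmodels is assumed to be available):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data standing in for a wage-experience sample (illustrative only).
rng = np.random.default_rng(0)
exper = rng.uniform(0, 40, 500)
wage = 5 + 0.30 * exper - 0.006 * exper**2 + rng.normal(0, 2, 500)

# Regress wage on exper and exper^2 to capture the diminishing return to experience.
X = sm.add_constant(np.column_stack([exper, exper**2]))
res = sm.OLS(wage, X).fit()

b1, b2 = res.params[1], res.params[2]
turning_point = abs(b1 / (2 * b2))  # x* = |b1 / (2*b2)|
print(f"estimated turning point: {turning_point:.1f} years of experience")
```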
Say, for example, that the estimated turning point for experience is 24.4 years, so the return to experience becomes zero at about 24.4 years. What should we make of this?
First, it may be that few people in the sample have more than 24 years of experience, and so the part of the curve to the right of 24 can be ignored. A more likely possibility is that the estimated effect of exper on wage is biased because we have controlled for no other factors, or because the functional relationship between wage and exper in equation (6.12) is not entirely correct
What is the issue with using quadratic functions?
The cost of using a quadratic to capture diminishing effects is that the quadratic must eventually turn around. If this point is beyond all but a small percentage of the people in the sample, then this is not of much concern
What do we do if our quadratic implies an unrealistic relationship to begin with?
For example, in the rooms example on page 190, the estimated quadratic implies that log(price) initially falls as the number of rooms increases, up to about 4.4 rooms. It turns out that only five of the 506 communities in the sample have houses averaging 4.4 rooms or less, about 1% of the sample. This is so small that the quadratic to the left of 4.4 can, for practical purposes, be ignored.
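A quick sanity check along these lines, sketched in Python (the rooms data and the turning-point value are hypothetical inputs):

```python
import numpy as np

def share_below_turning_point(x, turning_point):
    """Fraction of the sample lying at or below the quadratic's turning point."""
    x = np.asarray(x)
    return np.mean(x <= turning_point)

# e.g. share_below_turning_point(rooms, 4.4) would give 5/506, about 1%,
# in the textbook's sample of 506 communities.
```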
What do we do if the coefficient on our squared variable is quite small?
In many applications with quadratics, the coefficient on the squared variable has one or more zeros after the decimal point: after all, this coefficient measures how the slope is changing as x (rooms) changes. A seemingly small coefficient can have practically important consequences, as we just saw. As a general rule, one must compute the partial effect and see how it varies with x to determine if the quadratic term is practically important. In doing so, it is useful to compare the changing slope implied by the quadratic model with the constant slope obtained from the model with only a linear term
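A sketch of that comparison (b1, b2, and b_lin are illustrative estimates, not values from the textbook):

```python
import numpy as np

def quadratic_slope(x, b1, b2):
    """Partial effect dy/dx = b1 + 2*b2*x implied by the quadratic model."""
    return b1 + 2 * b2 * np.asarray(x)

b1, b2, b_lin = 0.30, -0.006, 0.15  # illustrative coefficients only
for x in np.linspace(0, 40, 5):
    print(f"x = {x:4.1f}: quadratic slope = {quadratic_slope(x, b1, b2):+.3f} "
          f"vs linear slope = {b_lin:+.3f}")
```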
What happens generally if the coefficients on the level and squared terms have the same sign (either both positive or both negative) and the explanatory variable is necessarily nonnegative (as in the case of rooms or exper)?
In either case, there is no turning point for values x > 0, because the partial effect β1 + 2β2x never changes sign over that range. For example, if β1 and β2 are both positive, the smallest expected value of y is at x = 0, and increases in x always have a positive and increasing effect on y. Similarly, if β1 and β2 are both negative, the largest expected value of y is at x = 0, and increases in x have a negative effect on y, with the magnitude of the effect increasing as x gets larger.
What is an interaction term?
Sometimes, it is natural for the partial effect, elasticity, or semi-elasticity of the dependent variable with respect to an explanatory variable to depend on the magnitude of yet another explanatory variable. An interaction term captures this: it is the product of two explanatory variables, included as an additional regressor, so that the partial effect of one variable depends on the level of the other.
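For instance, in the housing model discussed in the next card (following the setup around equation (6.17)), the interaction term is the product sqrft·bdrms, so the partial effect of bdrms depends on sqrft:

```latex
\begin{align*}
price &= \beta_0 + \beta_1\, sqrft + \beta_2\, bdrms + \beta_3\,(sqrft \cdot bdrms) + u \\
\frac{\partial\, price}{\partial\, bdrms} &= \beta_2 + \beta_3\, sqrft
\end{align*}
```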
What must we be careful with when using interaction terms in our model?
The parameters on the original variables can be tricky to interpret when we include an interaction term. For example, in the previous housing price equation, equation (6.17) shows that β2 is the effect of bdrms on price for a home with zero square feet! This effect is clearly not of much interest. Instead, we must be careful to plug interesting values of sqrft, such as the mean or median value in the sample, into the estimated version of the equation.
How can we resolve this issue with our model when including interaction terms?
Often, it is useful to reparameterise a model so that the coefficients on the original variables have an interesting meaning
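A minimal sketch of this reparameterisation in Python, with simulated data and illustrative names (sqrft, bdrms, price): centring the variables before forming the interaction makes the coefficient on bdrms the partial effect at the mean sqrft, with a standard error reported directly.

```python
import numpy as np
import statsmodels.api as sm

# Simulated housing-style data (illustrative only).
rng = np.random.default_rng(1)
n = 300
sqrft = rng.uniform(800, 3500, n)
bdrms = rng.integers(1, 6, n).astype(float)
price = 20 + 0.12 * sqrft + 10 * bdrms + 0.005 * sqrft * bdrms + rng.normal(0, 30, n)

# Reparameterised interaction: centre each variable at its sample mean before
# taking the product. The coefficient on bdrms is then the partial effect of
# bdrms evaluated at the mean of sqrft (and vice versa).
inter_centred = (sqrft - sqrft.mean()) * (bdrms - bdrms.mean())
X = sm.add_constant(np.column_stack([sqrft, bdrms, inter_centred]))
res = sm.OLS(price, X).fit()

print("partial effect of bdrms at mean sqrft:", res.params[2])
print("standard error of that effect:", res.bse[2])
```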
How should we test interaction variables?
Take the derivative of the estimated equation with respect to the variable you want to test to see which coefficients enter its partial effect. Then run an F test of the joint significance of those coefficients (typically the level term and the interaction term) and use its p-value to judge significance
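A sketch of such a joint F test in Python, re-creating the simulated housing setup from the previous card so the snippet runs on its own (all names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Simulated housing-style data (same illustrative setup as above).
rng = np.random.default_rng(1)
n = 300
sqrft = rng.uniform(800, 3500, n)
bdrms = rng.integers(1, 6, n).astype(float)
price = 20 + 0.12 * sqrft + 10 * bdrms + 0.005 * sqrft * bdrms + rng.normal(0, 30, n)

# Columns of X: const, sqrft, bdrms, sqrft*bdrms.
X = sm.add_constant(np.column_stack([sqrft, bdrms, sqrft * bdrms]))
res = sm.OLS(price, X).fit()

# The partial effect of bdrms is b2 + b3*sqrft, so "bdrms has no effect" means
# b2 = 0 and b3 = 0 jointly. Test this with an F test via a restriction matrix.
R = np.array([[0.0, 0.0, 1.0, 0.0],   # selects the coefficient on bdrms
              [0.0, 0.0, 0.0, 1.0]])  # selects the coefficient on sqrft*bdrms
print(res.f_test(R))
```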
What is the average partial effect?
After computing the partial effect and plugging in the estimated parameters, we average the partial effects for each unit across the sample. The average partial effect tells us the size of the partial effects on average
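A sketch of the average partial effect for the interaction model above (the coefficient names and data are the illustrative ones from the earlier sketches):

```python
import numpy as np

def average_partial_effect(b2, b3, sqrft):
    """APE of bdrms in price = b0 + b1*sqrft + b2*bdrms + b3*sqrft*bdrms + u.

    The partial effect for unit i is b2 + b3*sqrft_i; the APE averages
    these effects over all units in the sample.
    """
    return float(np.mean(b2 + b3 * np.asarray(sqrft)))

# e.g. with estimates from the uncentred interaction regression above:
# ape_bdrms = average_partial_effect(res.params[2], res.params[3], sqrft)
```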
What does reparameterisation do?
It makes the coefficient measure the partial effect of x2 on y at the mean value of x1
What are the advantages of reparameterisation?
- Easy interpretation of parameters
- Standard errors for partial effects at the mean values are readily available