Inference Flashcards
Fitted values?
Denoted by y^_i; the points on the fitted regression line corresponding to the values x_i
Residuals?
e_i = y_i - y^_i; the differences between the observed and fitted values
Residual sum of squares?
SS_E = Σ e_i^2 = Σ (y_i - y^_i)^2
SS_T?
Total sum of squares; SS_T = SS_R + SS_E
SS_R =?
Σ (y^_i - ȳ)^2; the regression sum of squares
SS_E?
Σ (y_i - y^_i)^2; the residual sum of squares
SS_T for constant model?
Given Y_i = β_0 + ε_i, the fitted values are y^_i = ȳ, so SS_R = 0 and SS_T = SS_E = Σ (y_i - ȳ)^2
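A minimal numpy sketch of these quantities; the x and y values below are invented for illustration, not taken from the cards:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy predictor values (assumed)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # toy responses (assumed)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x                        # fitted values y^_i
e = y - y_hat                              # residuals e_i

SS_T = np.sum((y - y.mean()) ** 2)         # total sum of squares
SS_R = np.sum((y_hat - y.mean()) ** 2)     # regression sum of squares
SS_E = np.sum(e ** 2)                      # residual sum of squares
assert np.isclose(SS_T, SS_R + SS_E)       # SS_T = SS_R + SS_E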
Degrees of freedom for SS_T? Why?
n-1; one degree of freedom is taken up by ȳ
DoF for SS_E? Why?
n-2; because two parameters (β_0 and β_1) are estimated
MS_R =?
SS_R / ν_R
ν_R =?
1
ν_E =?
n-2
MS_E=?
SS_E / ν_E
ν_T =?
n-1
Variance Ratio=?
MS_R / MS_E
F test, F =?
F = MS_R / MS_E; under H_0, F ~ F_(1,n-2)
H_0 for F test?
H_0 : β_1 = 0
F test; we reject H_0 if?
F_cal > F_(α;1,n-2)
F_cal?
The value of the variance ratio F calculated for the given data set
F test; F_(α;1,n-2) is?
The critical value such that P(F_(1,n-2) > F_(α;1,n-2)) = α
What does rejecting H_0 in F test mean?
There is significant evidence that β_1 ≠ 0, i.e. that the full model explains the data better than the constant model
E(SS_E) =?
(n-2)σ^2
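A sketch of the F test for H_0 : β_1 = 0 on the same invented toy data; scipy.stats.f supplies the critical value F_(α;1,n-2):

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(y)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
y_hat = y.mean() + b1 * (x - x.mean())
MS_R = np.sum((y_hat - y.mean()) ** 2) / 1        # SS_R / ν_R, ν_R = 1
MS_E = np.sum((y - y_hat) ** 2) / (n - 2)         # SS_E / ν_E, ν_E = n - 2

F_cal = MS_R / MS_E                               # variance ratio
F_crit = stats.f.ppf(0.95, 1, n - 2)              # F_(α;1,n-2) with α = 0.05
reject_H0 = F_cal > F_crit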
Is MS_E biased? Estimator of?
No; it is an unbiased estimator of σ^2; often denoted S^2
Null model also known as?
Constant model
In null model, S^2 is? Is this true in the full model?
The sample variance of the y_i; no, not in the full model
S^2 in full model?
S^2 = MS_E = SS_E / (n-2)
Standardise SLRM β^_1?
Z = (β^_1 - β_1) / sqrt(σ^2 / S_xx) ~ N(0,1), where S_xx = Σ (x_i - x̄)^2
Form student t from Z and U?
T = Z / sqrt(U/ν) ~ t_ν, for Z ~ N(0,1) independent of U ~ χ^2_ν
When converting normalised SLRM β^_1 to student t, U =?
U = SS_E / σ^2 ~ χ^2_(n-2), independent of β^_1
Form student t from SLRM β^_1?
T = (β^_1 - β_1) / sqrt(S^2 / S_xx) ~ t_(n-2) (the unknown σ^2 cancels)
To find a CI for an unknown parameter θ?
Find values of boundaries A and B which satisfy P(A ≤ θ ≤ B) = 1 - α
Find CI for SLRM β^_1? (In terms of probability)
P(β^_1 - t_(α/2;n-2) sqrt(S^2 / S_xx) ≤ β_1 ≤ β^_1 + t_(α/2;n-2) sqrt(S^2 / S_xx)) = 1 - α
Explicitly, CI for SLRM β_1?
β^_1 ± t_(α/2;n-2) sqrt(S^2 / S_xx)
Form T_cal under null from SLRM for β^_1? (Where null is β_1 = 0)
T_cal = β^_1 / sqrt(S^2 / S_xx)
For SLRM T-test (H_0 : β_1 = 0), reject H_0 if?
|T_cal| > t_(α/2;n-2)
Standard error of β^_1 (sqrt of variance of β^_1)?
se(β^_1) = sqrt(σ^2 / S_xx)
Estimator of standard error of β^_1?
se^(β^_1) = sqrt(S^2 / S_xx)
Rewrite (1-α)100% CI for β_1 in terms of standard error (T test for constant model)?
β^_1 ± t_(α/2;n-2) se^(β^_1)
Rewrite to include standard error: test statistic for H_0 : β_1 = 0?
T_cal = β^_1 / se^(β^_1)
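A sketch of the slope t test and CI on the same invented data; t_(α/2;n-2) comes from scipy.stats.t:

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(y)

S_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / S_xx
y_hat = y.mean() + b1 * (x - x.mean())
S2 = np.sum((y - y_hat) ** 2) / (n - 2)           # S^2 = MS_E

se_b1 = np.sqrt(S2 / S_xx)                        # se^(β^_1)
T_cal = b1 / se_b1                                # test statistic for H_0: β_1 = 0
t_crit = stats.t.ppf(1 - 0.05 / 2, n - 2)         # t_(α/2;n-2), α = 0.05
ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)   # (1-α)100% CI for β_1
reject_H0 = abs(T_cal) > t_crit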
In SLRM, μ_i =?
μ_i = E(Y_i) = β_0 + β_1 x_i
In SLRM, LSE of μ_i is?
μ^_i = β^_0 + β^_1 x_i = y^_i
In full SLRM, distribution of LSE of μ_0 is?
μ^_0 ~ N(μ_0, σ^2 (1/n + (x_0 - x̄)^2 / S_xx))
In full SLRM, CI for μ_0 is?
μ^_0 ± t_(α/2;n-2) se^(μ^_0)
In full SLRM, test H_0 : μ_0 = μ* using?
T_cal = (μ^_0 - μ*) / se^(μ^_0); reject H_0 if |T_cal| > t_(α/2;n-2)
se^(μ^_0)?
sqrt(S^2 (1/n + (x_0 - x̄)^2 / S_xx))
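A sketch of the CI for the mean response μ_0 at a chosen x_0 (x_0 = 2.5 and the data are arbitrary, invented values):

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n, x0 = len(y), 2.5

S_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / S_xx
b0 = y.mean() - b1 * x.mean()
S2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)

mu0_hat = b0 + b1 * x0                                        # LSE of μ_0
se_mu0 = np.sqrt(S2 * (1 / n + (x0 - x.mean()) ** 2 / S_xx))  # se^(μ^_0)
t_crit = stats.t.ppf(0.975, n - 2)                            # t_(α/2;n-2)
ci = (mu0_hat - t_crit * se_mu0, mu0_hat + t_crit * se_mu0)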
Hat matrix?
H = X (X^T X)^(-1) X^T
Special properties of Hat matrix?
Symmetric and idempotent:
- H = H^T
- HH = H
If matrix A is idempotent then
(I-A) is idempotent
Residual vector?
e = Y - Y^ = (I - H)Y
E(e) =?
0
Var(e) = ?
σ^2 (I - H)
Proof of Var(e)?
Var(e) = (I - H) Var(Y) (I - H)^T = σ^2 (I - H)(I - H)^T = σ^2 (I - H), since (I - H) is symmetric and idempotent
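A numpy check of the hat matrix identities; the design matrix is built from the same invented x values, with an intercept column:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])   # n × p design matrix, p = 2

H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
assert np.allclose(H, H.T)                  # symmetric: H = H^T
assert np.allclose(H @ H, H)                # idempotent: HH = H

e = (np.eye(len(y)) - H) @ y                # residual vector e = (I - H)Y
assert np.allclose(X.T @ e, 0)              # residuals orthogonal to columns of X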
Proof of E(e)?
E(e) = (I - H) E(Y) = (I - H) Xβ = Xβ - HXβ = Xβ - Xβ = 0, since HX = X
Vector sum of residuals?
X^T e = 0; in particular Σ e_i = 0 when the model includes an intercept
Total sum of squares? (describe)
The sum of the regression sum of squares and the residual sum of squares: SS_T = SS_R + SS_E
Proof of SS_T in vectors?
SS_T = Σ (y_i - ȳ)^2 = Y^T Y - n Ȳ^2 = Y^T Y - (1/n) Y^T J Y = Y^T (I - (1/n)J) Y, where J = 11^T
SS_R in vectors?
SS_R = Y^T (H - (1/n)J) Y
SS_E in vectors?
SS_E = e^T e = Y^T (I - H) Y
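A numeric check of the vector forms, with J = 11^T (same invented data):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(y)
X = np.column_stack([np.ones(n), x])

H = X @ np.linalg.inv(X.T @ X) @ X.T
J = np.ones((n, n))                       # J = 11^T

SS_T = y @ (np.eye(n) - J / n) @ y        # Y^T (I - (1/n)J) Y
SS_R = y @ (H - J / n) @ y                # Y^T (H - (1/n)J) Y
SS_E = y @ (np.eye(n) - H) @ y            # Y^T (I - H) Y
assert np.isclose(SS_T, SS_R + SS_E)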
H_0 for F-test for overall significance of regression?
H_0 : β_1 = β_2 = … = β_(p-1) = 0
F-test for Overall significance of regression; DF of overall regression?
p-1 (p = number of parameters)
F-test for Overall significance of regression; df of residual?
n-p
F-test for Overall significance of regression; df of total?
n-1
F-test for Overall significance of regression; sum of squares of regression?
SS_R = Y^T (H - (1/n)J) Y
F-test for Overall significance of regression; sum of squares of residual?
SS_E = Y^T (I - H) Y
F-test for Overall significance of regression; sum of squares for total?
SS_T = Y^T (I - (1/n)J) Y
In F-test for overall significance of regression; E(SS_E) =?
(n-p)σ^2
The 2 test stats from F-test for overall significance of regression?
F_cal = MS_R / MS_E = [SS_R / (p-1)] / [SS_E / (n-p)], compared against the critical value F_(α;p-1,n-p)
For F-test of overall significance of regression; reject H_0 if?
F_cal > F_(α;p-1,n-p) (testing at significance level α)
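A sketch of the overall-significance F test for a multiple regression; the design matrix and coefficients are simulated, invented purely for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)            # toy simulated data (assumed)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
J = np.ones((n, n))
SS_R = y @ (H - J / n) @ y
SS_E = y @ (np.eye(n) - H) @ y

F_cal = (SS_R / (p - 1)) / (SS_E / (n - p))   # MS_R / MS_E
F_crit = stats.f.ppf(0.95, p - 1, n - p)      # F_(α;p-1,n-p), α = 0.05
reject_H0 = F_cal > F_crit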
β^ vector ~?
β^ ~ N_p(β, σ^2 (X^T X)^(-1))
β^_j ~?
β^_j ~ N(β_j, σ^2 c_j)
100(1-α)% CI for β_j is?
β^_j ± t_(α/2;n-p) sqrt(S^2 c_j)
Test stat for H_0 : β_j = 0?
T = β^_j / sqrt(S^2 c_j) ~ t_(n-p) under H_0, where c_j is the jth diagonal element of (X^T X)^(-1) (counting from 0 to p-1)
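A sketch of the per-parameter t tests, reading each c_j off the diagonal of (X^T X)^(-1) (same simulated toy data as above):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # LSE β^
S2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)

c = np.diag(XtX_inv)                      # c_j for j = 0, …, p-1
T = beta_hat / np.sqrt(S2 * c)            # test statistics for H_0: β_j = 0
t_crit = stats.t.ppf(0.975, n - p)        # t_(α/2;n-p), α = 0.05
reject = np.abs(T) > t_crit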
T test for parameter β_j doesn't tell us anything about comparisons between which models?
The models E(Y_i) = β_0 and E(Y_i) = β_0 + β_j x_(j,i); it doesn't tell us whether we can accept or reject the constant model in favour of the linear one
Point estimate?
A single value, calculated from the sample, used to estimate an unknown parameter (e.g. β^_1 for β_1)
Prove normality of
Prove
Prove
Orthogonal matrix?
C such that C^T C = C C^T = I
For symmetric idempotent A of rank r, there exists…?
An orthogonal C such that C^T A C = diag(I_r, 0)
Properties of trace for any matrices A and B (of appropriate dimensions) and scalar k?
trace(A + B) = trace(A) + trace(B); trace(kA) = k trace(A); trace(AB) = trace(BA)
For idempotent A, trace(A) =
Rank(A)
Proof of relationship between trace and rank of idempotent A?
Take orthogonal C with C^T A C = diag(I_r, 0); then trace(A) = trace(A C C^T) = trace(C^T A C) = r = rank(A)
Rank(I - H) =? And prove
rank(I - H) = trace(I - H) = n - trace(X (X^T X)^(-1) X^T) = n - trace((X^T X)^(-1) X^T X) = n - trace(I_p) = n - p
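A quick numeric check that trace and rank agree for the idempotent matrices H and I - H (same simulated design matrix):

import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])

H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H                                # I - H

assert np.isclose(np.trace(H), p)                # trace(H) = rank(H) = p
assert np.isclose(np.trace(M), n - p)            # trace(I - H) = n - p
assert np.linalg.matrix_rank(M) == n - p         # rank(I - H) = n - p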
E(Z^T A Z) =?
For E(Z) = μ and Var(Z) = Σ: E(Z^T A Z) = trace(AΣ) + μ^T A μ
Proof of E(Z^T A Z)?
Z^T A Z is a scalar, so E(Z^T A Z) = E(trace(Z^T A Z)) = E(trace(A Z Z^T)) = trace(A E(Z Z^T)) = trace(A(Σ + μμ^T)) = trace(AΣ) + μ^T A μ
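A Monte Carlo sanity check of the lemma E(Z^T A Z) = trace(AΣ) + μ^T A μ; the matrices A, μ, Σ are made up for illustration:

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])

Z = rng.multivariate_normal(mu, Sigma, size=200_000)
empirical = np.mean(np.einsum('ni,ij,nj->n', Z, A, Z))   # mean of Z^T A Z
theoretical = np.trace(A @ Sigma) + mu @ A @ mu          # = 9 + 3 = 12
assert np.isclose(empirical, theoretical, rtol=0.02)     # agree to ~2%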
Proof of E(SS_E) = (n-p)σ^2?
E(SS_E) = E(Y^T (I - H) Y) = trace((I - H) σ^2 I) + (Xβ)^T (I - H) Xβ = σ^2 (n-p) + 0; hence MS_E = SS_E / (n-p) is unbiased for σ^2
Lemmas needed to prove
Prove
SSR in terms of
SSR in terms of
Prove
What does the hat matrix do?
Maps observed values to predicted values:
Y^ = HY
DoF of SS_R? Why?
1 for the SLRM (p-1 in general); one degree of freedom for each slope parameter estimated beyond the overall mean