Quantitative Methods V Flashcards
Null hypothesis
The hypothesis you seek to reject; rejecting it supports the alternative hypothesis
Test statistic
Same form for z- and t-tests:
(sample mean - hypothesized mean) / standard error
Remember: standard error = stdev / sqrt(n)
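A minimal sketch of the test statistic above, using a made-up sample of returns and a hypothesized mean (all values are illustrative, not from the cards):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of monthly returns (%); illustrative values only
sample = [2.1, 1.8, 2.5, 1.9, 2.3, 2.0, 1.7, 2.4]
mu0 = 2.0  # hypothesized population mean

n = len(sample)
se = stdev(sample) / sqrt(n)         # standard error = s / sqrt(n)
t_stat = (mean(sample) - mu0) / se   # same form for z and t
print(round(t_stat, 3))
```

With a small sample and unknown population variance this statistic would be compared against a t critical value with n - 1 degrees of freedom.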
Two tailed vs. one-tailed
Two-tailed: Ho: parameter = value (reject for extreme results in either direction)
One-tailed: Ho: parameter <= value (or >= value); reject only in one direction
Type I error
Rejection of null when it’s actually true
Type II error
Failure to reject when it is actually false
Power of test
1 - probability of type II error
Confidence interval (Z-test)
Sample mean - (standard error * critical z-value) < mean < sample mean + (standard error * critical z-value)
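The confidence-interval card above can be sketched numerically; the sample mean, population stdev, n, and the 1.96 critical value (95%, two-tailed) are all assumed for illustration:

```python
from math import sqrt

# Hypothetical inputs for a 95% z-confidence interval
sample_mean = 5.0
pop_stdev = 1.2   # known population stdev (z-test setting)
n = 36
z_crit = 1.96     # two-tailed 95% critical z-value

se = pop_stdev / sqrt(n)
lower = sample_mean - z_crit * se
upper = sample_mean + z_crit * se
print(round(lower, 3), round(upper, 3))
```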
P-value
Smallest significance level at which the null can be rejected; the probability of observing a test statistic at least as extreme as the one computed, assuming the null is true.
T-Test
Use if population variance is unknown and either:
- Sample is large
- Sample is small, but distribution is normal
Z-Test
Use if population is normal, with known variance or when sample is large and population variance is unknown
Sampling distribution
The distribution of a sample statistic computed from all possible samples of the same size drawn from the same population
Desired properties of estimators (3)
Efficiency, consistency and unbiasedness
Data mining
Searching for trading patterns until one “works”
Sample selection bias
Some data is systematically excluded (e.g. from lack of availability)
Look ahead bias
Study tests a relationship using sample data that wasn't actually available on the test date (e.g. year-end financials released weeks later)
Time period bias
Sample period is either too short (results may be period-specific) or too long (may span structural changes)
Chi-squared (X^2)
Used for hypothesis tests concerning the variance of a normally distributed population, based on a sample variance
Asymmetrical (positively skewed); approaches normal as d.f. increases
Chi is bounded by 0 so test stats can’t be negative
Chi-squared test statistic
[(n-1) * sample variance] / hypothesized pop. variance
FYI, critical value is for one tail, so you have to adjust for two
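A quick sketch of the chi-squared test statistic card, with a hypothetical sample and hypothesized population variance (values are invented for illustration):

```python
from statistics import variance

# Hypothetical sample; test H0: population variance = 0.50
sample = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
sigma0_sq = 0.50

n = len(sample)
# chi-squared stat = (n - 1) * sample variance / hypothesized variance
chi2_stat = (n - 1) * variance(sample) / sigma0_sq
print(round(chi2_stat, 4))
```

Compare against chi-squared critical values with n - 1 = 7 degrees of freedom, splitting alpha across both tails for a two-tailed test.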
F-test
Tests equality of variances of two populations via samples of said populations
Populations are normal and samples are INDEPENDENT.
Bounded by 0 (like chi square)
F-test: two tailed vs. one-tailed
Two tailed: variance of pop. 1 = variance pop. 2
One-tailed: Ho: variance of pop. 1 <= variance of pop. 2 (testing whether pop. 1's variance is greater)
F-statistic
Variance sample 1 / variance sample 2
Note: always put larger variance in numerator, use d.f. of larger sample and look at right tail.
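The F-statistic convention above (larger variance in the numerator, so F >= 1) can be sketched with two hypothetical return samples:

```python
from statistics import variance

# Hypothetical return samples from two funds (illustrative values)
sample_a = [1.1, 2.3, 0.7, 1.9, 1.4, 2.6, 0.9, 1.8]
sample_b = [1.5, 1.7, 1.6, 1.4, 1.8, 1.5, 1.6]

var_a, var_b = variance(sample_a), variance(sample_b)
# Larger sample variance goes in the numerator, so F >= 1
f_stat = max(var_a, var_b) / min(var_a, var_b)
print(round(f_stat, 3))
```

Because the larger variance is always on top, only the right tail of the F-distribution is checked.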
Difference in means test
T-statistic
Two INDEPENDENT samples, normally distributed populations
Formula won’t be on test.
Paired comparisons test
T-test statistic
Used when samples are dependent.
The paired differences must be normally distributed.
Paired comparisons test (formula)
t-stat = (sample mean difference - hypothesized mean difference) / standard error of mean difference
Sample mean difference = (1/n) * sum of the paired differences d_i (each d_i = observation 1 - observation 2 for a pair)
Standard error = sample stdev of the differences / sqrt(n)
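A minimal sketch of the paired comparisons t-statistic, testing a hypothesized mean difference of zero; the before/after samples are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical paired observations (e.g. returns before/after a strategy change)
before = [2.0, 1.8, 2.4, 2.1, 1.9, 2.2]
after = [2.3, 1.9, 2.6, 2.2, 2.1, 2.5]

diffs = [a - b for a, b in zip(after, before)]  # paired differences d_i
d_bar = mean(diffs)                             # sample mean difference
se = stdev(diffs) / sqrt(len(diffs))            # std error of mean difference
t_stat = (d_bar - 0) / se                       # H0: mean difference = 0
print(round(t_stat, 3))
```

Note the test is run on the single series of differences, which is why dependence between the two samples is not a problem here.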
Elliot wave theory - impulse wave
Direction of the prevailing trend, has five smaller waves
Elliot wave theory - corrective wave
Against the prevailing trend, has three smaller waves.
Type II Error probability
Probability of failing to reject the null when it is actually false. Find the rejection cutoff for X-bar under Ho, then use the actual (true) mean M to compute the probability that X-bar lands in the non-rejection region, via the statistic (cutoff - M) / standard error.
Treynor Ratio
(portfolio return - risk free return) / Beta
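A one-line numeric sketch of the Treynor ratio; the return, risk-free rate, and beta are hypothetical:

```python
# Treynor ratio: excess return per unit of systematic (beta) risk
portfolio_return = 0.12  # hypothetical annual return
risk_free = 0.03         # hypothetical risk-free rate
beta = 1.2               # hypothetical portfolio beta

treynor = (portfolio_return - risk_free) / beta
print(round(treynor, 4))
```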
Consistent estimator
Gets closer to the population parameter as n increases
Unbiased estimator
Expected value = true population value
Efficient estimator
Has the lowest sampling-distribution variance of any unbiased estimator of the parameter