Stats rejects Flashcards
What is the independent-samples t test equivalent for ordinal data?
Mann-Whitney U or Kolmogorov-Smirnov
What is the paired-samples t test equivalent for ordinal data?
Wilcoxon signed-rank test
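(Illustrative note, not part of the original cards: a minimal Python sketch of these ordinal-data equivalents, assuming SciPy is available; the ratings below are invented toy data.)
from scipy import stats
group_a = [3, 4, 2, 5, 4, 3, 5]   # ordinal ratings, independent sample 1
group_b = [2, 2, 3, 1, 3, 2, 4]   # ordinal ratings, independent sample 2
print(stats.mannwhitneyu(group_a, group_b))   # Mann-Whitney U (independent samples)
print(stats.ks_2samp(group_a, group_b))       # Kolmogorov-Smirnov (independent samples)
pre  = [3, 4, 2, 5, 4, 3, 5]      # paired ratings, time 1
post = [4, 5, 3, 4, 5, 4, 3]      # paired ratings, time 2
print(stats.wilcoxon(pre, post))              # Wilcoxon signed-rank (paired samples)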
When is trend analysis used?
For non-linear outcome effects (typically when the IV is quantitative). It is an extension of ANOVA.
When is point-biserial correlation used?
One interval/ratio variable and one dichotomous variable
When is Phi correlation used?
2 dichotomous variables
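(Illustrative note: a minimal Python sketch of both coefficients, assuming NumPy/SciPy; the data are invented. Phi is simply a Pearson correlation computed on two 0/1 variables.)
import numpy as np
from scipy import stats
passed = np.array([0, 1, 1, 0, 1, 1, 0, 1])          # dichotomous variable
score  = np.array([52, 75, 80, 49, 68, 90, 55, 72])  # interval/ratio variable
print(stats.pointbiserialr(passed, score))            # point-biserial correlation
smoker  = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # dichotomous variable 1
disease = np.array([1, 0, 1, 0, 0, 0, 1, 1])          # dichotomous variable 2
print(np.corrcoef(smoker, disease)[0, 1])             # phi coefficient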
When is Eta correlation used?
Curvilinear relationship between X and Y
When is a part or semipartial correlation used?
When you've removed a third variable's effect from only one of the two variables.
What's the difference between stepwise and hierarchical regression?
In stepwise regression the order in which variables are added or removed is data-driven (determined by the software); in hierarchical regression the researcher determines the order based on theory.
Discriminant function analysis
Predict group membership (nominal DV) based on continuous variables.
Loglinear analysis
Predict group membership based on multiple nominal predictors
What's the difference between Principal Components Analysis and factor analysis?
In PCA there is no hypothesis about the communalities ahead of time; it simply produces a small number of uncorrelated components empirically, whereas factor analysis models only the shared (common) variance among the variables.
What is cluster analysis?
Gathering data on several DVs and looking for naturally occurring subgroups without prior hypotheses.
Define reliability according to the true score model / classical test theory
Total variability is composed of true score variability and error variability. Reliability is the proportion of total variability that is true score variability.
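(In standard classical test theory notation, not from the card itself: \sigma^2_X = \sigma^2_T + \sigma^2_E, and reliability r_{XX} = \sigma^2_T / \sigma^2_X = \sigma^2_T / (\sigma^2_T + \sigma^2_E).)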
What is the coefficient of equivalence and what are the sources of error?
Parallel forms reliability (equivalent but not identical versions given to the same group). Time and content sampling are sources of error.
What is Internal consistency reliability and what are the sources of error?
Consistency of scores within a test (assessed with split-half reliability or Cronbach's alpha). Error results from content sampling.
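(Related standard formula: a split-half correlation is usually corrected for the shortened test length with the Spearman-Brown formula, r_{full} = 2 r_{half} / (1 + r_{half}).)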
Why can’t you use split-half reliability on speeded tests?
Items are meant to be easy, so nearly every item a person completes is answered correctly; the two halves therefore correlate almost perfectly, giving an artificially high split-half reliability.
Speeded tests vs Power tests
Speeded tests – easy items; the score reflects how many items the person can complete in the time limit. Power tests – items vary in difficulty; the score reflects how difficult the items the person can answer correctly, with little or no time pressure.
When are Kuder-Richardson and Cronbach’s alpha used?
To measure internal consistency reliability.
Kuder-Richardson (KR-20) is for dichotomously scored items; Cronbach's alpha is the more general form for items with more than two possible score values (e.g., interval/Likert-type items).
What are the sources of error for internal consistency tests? (Cronbach’s alpha, Kuder-Richardson)
Content Sampling, Test heterogeneity.
What do Kappa and Yule’s Y measure?
Inter-rater reliability (agreement between raters on categorical ratings)
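(Standard formula for Cohen's kappa: \kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement between raters and p_e is the agreement expected by chance.)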
What are the two types of criterion-related validity
Concurrent validity (predictor and criterion measured at roughly the same time, e.g., within weeks) and predictive validity (a delay between measuring the predictor and the criterion).
When a test has no ability to predict the criterion variable (no criterion-related validity), what is the maximum standard error of the estimate?
Equals the standard deviation of the criterion
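(This follows from the standard formula SEE = SD_Y \sqrt{1 - r_{XY}^2}: when the validity coefficient r_{XY} = 0, SEE = SD_Y; when r_{XY} = 1, SEE = 0.)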
Selection ratio
The ratio of open positions to applicants. A low ratio means there are many more applicants than positions.
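(Worked example, numbers invented: 5 openings and 100 applicants give a selection ratio of 5/100 = .05, i.e., a highly selective situation.)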
Item Characteristic Curve
A plot of the relationship between item performance (probability of answering the item correctly) and total test score or ability level.
Item response theory
A mathematical approach to determine how much a specific item correlates with the target of the test (latent trait).
Correction for attenuation
A formula estimating how much higher the validity coefficient would be if both the predictor and the criterion were perfectly reliable.
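(Standard formula: r_{corrected} = r_{XY} / \sqrt{r_{XX} \, r_{YY}}, where r_{XX} and r_{YY} are the reliabilities of the predictor and the criterion.)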
Convergent validity
Correlation of test scores with other measures of the same (or a closely related) construct; should be moderate to high.
Divergent/discriminant validity
Correlation of scores on a test with scores on a completely different trait. Should be low.