Stats - standards of reporting, parametric and non-parametric statistics Flashcards
The following guidelines are for which topics?
CONSORT
TREND
PRISMA
CONSORT (Consolidated Standards of Reporting Trials) = for Randomised controlled trials
TREND (Transparent Reporting of Evaluations with Non-randomized Designs) = for Non-randomised controlled trials
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) = for Systematic Reviews and Meta-Analyses (replaced QUOROM - QUality Of Reporting Of Meta-analyses)
The following guidelines are for which topics?
MOOSE
STROBE
SQUIRE
Note: MOOSE and STROBE both concern observational studies in epidemiology but cover different aspects: MOOSE applies to meta-analyses of such studies, while STROBE applies to the reporting of individual observational studies.
MOOSE (Meta-analysis Of Observational Studies in Epidemiology) = for Meta-analyses of observational studies in epidemiology
STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) = for Observational studies in epidemiology
SQUIRE (Standards for QUality Improvement Reporting Excellence) = for Quality improvement studies
The following guidelines are for which topics?
STARD
MIAME
COREQ
STARD (Standards for the Reporting of Diagnostic accuracy studies) = for Diagnostic studies
MIAME (Minimum Information about a Microarray Experiment) = for Microarray studies
COREQ (Consolidated criteria for reporting qualitative research) = for Qualitative studies
What is the Kappa statistic (Cohen’s Kappa coefficient)?
What values can it take?
The Kappa statistic (Cohen’s Kappa coefficient) is a widely used measure of the agreement between two independent observers or raters that accounts for agreement occurring by chance, which makes it more robust than simple percent agreement. It is particularly relevant in fields such as psychiatry, where interrater reliability is crucial for consistent and accurate diagnostic practice. (A worked computation follows the interpretation scale below.)
Kappa values range from -1 to 1:
1 indicates perfect agreement.
0 indicates agreement equivalent to chance.
-1 indicates perfect disagreement.
A commonly used interpretation scale (Landis & Koch):
< 0 Poor agreement (less agreement than expected by chance).
0.01 – 0.20 Slight agreement.
0.21 – 0.40 Fair agreement.
0.41 – 0.60 Moderate agreement.
0.61 – 0.80 Substantial agreement.
0.81 – 1.00 Almost perfect (near-complete) agreement.
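Kappa is calculated as kappa = (Po - Pe) / (1 - Pe), where Po is the observed proportion of agreement and Pe is the agreement expected by chance, derived from each rater's marginal label frequencies. A minimal Python sketch (the two raters and their diagnoses are invented purely for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # kappa = (Po - Pe) / (1 - Pe): observed agreement corrected
    # for the agreement expected to occur by chance.
    n = len(rater_a)
    # Po: proportion of items on which both raters give the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Pe: chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two psychiatrists classify the same 10 patients
# as "dep" (depression) or "anx" (anxiety).
rater_1 = ["dep", "dep", "anx", "dep", "anx", "anx", "dep", "dep", "anx", "dep"]
rater_2 = ["dep", "anx", "anx", "dep", "anx", "dep", "dep", "dep", "anx", "dep"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.58 -> moderate agreement
```

Here the raters agree on 8 of 10 patients (Po = 0.8), but because chance agreement is high (Pe = 0.52), Kappa lands at 0.58 rather than 0.8.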
When can you use Kappa?
Kappa is used when assessing categorical data (e.g., diagnostic classifications) in any situation where two independent observers evaluate the same phenomenon. Cohen’s Kappa itself compares exactly two raters; extensions such as Fleiss’ kappa handle more than two.
What are 3 limitations to Kappa?
1) Kappa is sensitive to prevalence. When the prevalence of a condition is very high or very low, Kappa may underestimate or overestimate agreement, even when raw agreement is high (illustrated in the sketch after this list).
2) Unequal distributions (skew) of categories (e.g., one diagnosis much more common than others) can distort Kappa values.
3) It requires independent observations; Kappa cannot account for systematic bias between observers.
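The prevalence limitation can be shown numerically. In the hypothetical sketch below (assuming scikit-learn is available; all ratings are invented), both rater pairs agree on 18 of 20 cases (90%), yet the pair rating a highly skewed sample gets a much lower, even negative, Kappa because chance agreement Pe is so high:

```python
from sklearn.metrics import cohen_kappa_score

# Balanced prevalence: each rater calls 10 of 20 cases "pos".
bal_a = ["pos"] * 10 + ["neg"] * 10
bal_b = ["pos"] * 9 + ["neg", "pos"] + ["neg"] * 9   # disagrees on 2 cases
print(cohen_kappa_score(bal_a, bal_b))   # 0.80 -> substantial agreement

# Skewed prevalence: "pos" dominates (19 of 20 cases for each rater).
skew_a = ["pos"] * 19 + ["neg"]
skew_b = ["pos"] * 18 + ["neg", "pos"]               # also disagrees on 2 cases
print(cohen_kappa_score(skew_a, skew_b))  # about -0.05, despite 90% raw agreement
```

Both pairs disagree on exactly two of twenty cases; only the chance-agreement term Pe differs, which is what drags the skewed pair's Kappa below zero.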
What is a parameter?
A parameter is a numerical quantity that describes a characteristic of a population. Within a population, a parameter is a fixed value that does not vary.
How do we calculate parameters?
What are “statistics”?
It is often not possible to know the value of a parameter, as this would involve collecting data from every individual in the population. The next best option is to take a sample and estimate the parameters from it; these sample-based estimates are called statistics.
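As a toy illustration (simulated data; the blood-pressure figures are invented), the population mean below is a parameter, while the mean of a random sample is a statistic that estimates it:

```python
import random

random.seed(42)

# Toy "population": systolic blood pressure of 10,000 hypothetical people.
population = [random.gauss(mu=120, sigma=15) for _ in range(10_000)]

# Parameter: a fixed property of the whole population.
population_mean = sum(population) / len(population)

# Statistic: an estimate of the parameter computed from a random sample.
sample = random.sample(population, k=100)
sample_mean = sum(sample) / len(sample)

print(f"parameter (population mean): {population_mean:.2f}")
print(f"statistic (sample mean):     {sample_mean:.2f}")
```

Different samples yield different statistics, but the parameter itself stays fixed.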