Stats - standards of reporting, parametric and non-parametric statistics Flashcards

1
Q

The following guidelines are for which topics?

CONSORT
TREND
PRISMA

A

CONSORT (Consolidated Standards of Reporting Trials) = for randomised controlled trials

TREND (Transparent Reporting of Evaluations with Non-randomized Designs) = for non-randomised controlled trials and other non-randomised evaluations of interventions

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) = for systematic reviews and meta-analyses (replaced QUOROM - Quality Of Reporting Of Meta-analyses)

2
Q

The following guidelines are for which topics?

MOOSE
STROBE
SQUIRE

Note: both MOOSE and STROBE relate to the same type of studies but cover different aspects

A

MOOSE (Meta-analysis Of Observational Studies in Epidemiology) = for meta-analyses of observational studies in epidemiology

STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) = for the observational studies themselves (cohort, case-control and cross-sectional studies)

SQUIRE (Standards for QUality Improvement Reporting Excellence) = for quality improvement studies

3
Q

The following guidelines are for which topics?

STARD
MIAME
COREQ

A

STARD (Standards for the Reporting of Diagnostic Accuracy Studies) = for diagnostic accuracy studies

MIAME (Minimum Information about a Microarray Experiment) = for microarray studies

COREQ (Consolidated criteria for reporting qualitative research) = for qualitative studies

4
Q

What is the Kappa statistic (Cohen’s Kappa coefficient)?

What values can it take?

A

The Kappa statistic (Cohen’s Kappa coefficient) is a widely used measure to assess the magnitude of agreement between two independent observers or raters, accounting for agreement occurring by chance. It provides a more robust measure than simple percent agreement by incorporating the probability of chance agreement. It is particularly relevant in fields such as psychiatry, where interrater reliability is crucial for consistent and accurate diagnostic practices.

Kappa values range from -1 to 1:

1 indicates perfect agreement.
0 indicates agreement equivalent to chance.
-1 indicates perfect disagreement.

A commonly used interpretation (Landis and Koch):

< 0 Poor agreement (less agreement than expected by chance)
0.01 – 0.20 Slight agreement
0.21 – 0.40 Fair agreement
0.41 – 0.60 Moderate agreement
0.61 – 0.80 Substantial agreement
0.81 – 1.00 Almost perfect (near-complete) agreement
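
The formula is not spelled out on the original card; as a sketch, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from each rater's marginal totals. A minimal Python illustration with made-up counts:

    # Cohen's kappa for two raters from a square agreement table.
    def cohens_kappa(table):
        n = sum(sum(row) for row in table)                     # total ratings
        p_o = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
        row_totals = [sum(row) for row in table]               # rater A marginals
        col_totals = [sum(col) for col in zip(*table)]         # rater B marginals
        # chance agreement: product of marginal proportions, summed over categories
        p_e = sum((r / n) * (c / n) for r, c in zip(row_totals, col_totals))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical: two psychiatrists rate 100 patients as case / non-case
    table = [[40, 10],  # rater A "case":     B says case, B says non-case
             [5, 45]]   # rater A "non-case": B says case, B says non-case
    print(cohens_kappa(table))  # observed agreement 0.85, kappa = 0.70 (substantial)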

5
Q

When can you use Kappa?

A

Kappa is used for categorical data (e.g., diagnostic classifications) where independent observers rate the same phenomenon. Cohen's kappa itself compares exactly two raters; for more than two raters, extensions such as Fleiss' kappa are used.
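
As a usage note beyond the original card, scikit-learn's cohen_kappa_score computes kappa directly from two raters' per-item labels (the diagnostic labels below are invented for illustration):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical diagnoses assigned independently by two raters
    rater_a = ["dep", "anx", "dep", "dep", "anx", "dep", "anx", "dep"]
    rater_b = ["dep", "anx", "anx", "dep", "anx", "dep", "dep", "dep"]
    print(cohen_kappa_score(rater_a, rater_b))  # ~0.47: moderate agreement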

6
Q

What are 3 limitations to Kappa?

A

1) Kappa is sensitive to prevalence: when the prevalence of a condition is very high or very low, kappa may underestimate or overestimate agreement (see the sketch after this list).
2) Unequal distributions (skew) of categories (e.g., one diagnosis much more common than others) can distort kappa values.
3) It requires independent observations; kappa cannot account for systematic bias between observers.
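
To make limitation 1 concrete, here is a small hypothetical demonstration: two 2x2 tables with identical observed agreement (90%) yield very different kappas once one category dominates, because chance agreement rises with skewed marginals:

    # Prevalence effect on Cohen's kappa (made-up 2x2 tables)
    def kappa(table):
        n = sum(map(sum, table))
        p_o = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
        # chance agreement from row and column marginal totals
        p_e = sum(sum(row) * sum(col) for row, col in zip(table, zip(*table))) / n**2
        return (p_o - p_e) / (1 - p_e)

    balanced = [[45, 5], [5, 45]]  # 50/50 prevalence -> kappa = 0.80
    skewed = [[85, 5], [5, 5]]     # 90/10 prevalence -> kappa ~ 0.44
    print(kappa(balanced), kappa(skewed))  # both tables show 90% raw agreement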

7
Q

What is a parameter?

A

A parameter is a numerical quantity that describes a population characteristic. Within a population, a parameter is a fixed value that does not vary.

8
Q

How do we calculate parameters?
What are “statistics”?

A

It is often not possible to know the value of a parameter, as this would require collecting data from every individual in the population. The next best option is to take a sample and estimate the parameters from it; these estimates are called statistics.
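
A small illustrative simulation (numbers invented) of the distinction: the population mean is a fixed parameter, while the sample mean is a statistic that estimates it and varies from sample to sample:

    import random

    random.seed(1)

    # A notional population of 100,000 values; its mean is a fixed parameter.
    population = [random.gauss(100, 15) for _ in range(100_000)]
    parameter = sum(population) / len(population)

    # Each sample yields a statistic (the sample mean) estimating the parameter.
    for _ in range(3):
        sample = random.sample(population, 50)
        statistic = sum(sample) / len(sample)
        print(f"parameter = {parameter:.2f}, sample statistic = {statistic:.2f}")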
