TL: Critical Appraisal Flashcards

1
Q

Critical Appraisal

A

The process of assessing and interpreting evidence systematically, considering its validity, results, and relevance.

2
Q

Hypothesis

A

An educated guess about the nature of the patient’s illness, usually formed by selecting those diseases having the same history or physical examination characteristics as the patient.

3
Q

Peer-Reviewed Literature

A

Literature that has been evaluated by independent experts (peers) in the same field before publication, to judge its scientific quality and validity.

4
Q

Textbook

A

A compiled summary of existing knowledge on a topic, written by recognized authors or editors.

*similar to a systematic review in scope, but typically not critically appraised

5
Q

Systematic Review

A

A formal review of a focused clinical question based on a comprehensive search strategy and structured critical appraisal of all relevant studies.

6
Q

Expert Opinion

A

Opinions voiced or communicated by an individual that have not been critically appraised by a structured scientific method. The level of clinical expertise of the individual still carries merit, so the strength of the opinion varies with that expertise.

7
Q

Hierarchy of Evidence

A

A system that grades evidence or published information related to healthcare according to the strength of the scientific method that produced it. The model used by ArizonaMed EBDM places Level 1 evidence at the top:
  • Level 1: systematic reviews or very large RCTs
  • Level 2: prospective observational studies, e.g., prospective cohort study
  • Level 3: retrospective observational studies, e.g., case-control study
  • Level 4: case series
  • Level 5: textbooks or expert opinion that have not been critically appraised

8
Q

Internal Validity

A

How well the study design measures differences between groups (e.g., intervention and control), if they exist, that are due only to the hypothesized effect.

*the quality of the scientific method used, typically the study design chosen; is it appropriate?

9
Q

Construct Validity

A

A construct is a theoretically derived notion of the domain(s) we wish to measure. An understanding of the construct leads to expectations about how an instrument should behave if it is valid. Construct validity therefore involves comparisons between measures and examination of the logical relationships that should exist between a measure and the characteristics of patients and patient groups. Essentially, does the study measure what it proposes to measure? In social science and psychometrics, construct validity refers to whether a scale measures the unobservable social construct (such as “fluid intelligence”) that it purports to measure. It is established when studies show that the measure correlates with the characteristics it is intended to capture.

*does the study measure what it proposes to measure; ex: are the outcomes chosen true representations of the disease outcomes you care about in your patient(s)?

10
Q

External Validity

A

The same as “Applicability”: the degree to which the results of a study are likely to hold true in your practice setting (also called generalizability, particularizability, or relevance). Are the patients in the study similar to the patients to whom you might apply the evidence?

11
Q

Statistical Significance

A

A measure of how confidently an observed difference between two or more groups can be attributed to the study interventions rather than to chance alone. When a result is statistically significant, the probability of the observed results, given the null hypothesis, falls below a specified level of probability (most often P < 0.05). That threshold describes the probability of incorrectly rejecting the null hypothesis and concluding that there is a difference when in fact none exists (i.e., the probability of a Type I error). This probability is often set at 0.01, 0.05, or 0.10; for medical studies it is most commonly 0.05.
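
A minimal worked illustration of the decision rule, using the conventional 0.05 threshold cited above (the example p-values are hypothetical):

\alpha = P(\text{Type I error}) = P(\text{reject } H_0 \mid H_0 \text{ true}) = 0.05
p = 0.03 < \alpha \;\Rightarrow\; \text{reject } H_0 \text{ (statistically significant)}
p = 0.08 > \alpha \;\Rightarrow\; \text{fail to reject } H_0 \text{ (not statistically significant)}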

12
Q

Clinical Significance

A

Results that make enough difference to you and your patient to justify changing your way of doing things. For example, a drug found in a mega-trial of 50,000 adults with acute asthma to increase FEV1 by only 0.5% (P < 0.0001) would fail this test of significance. The findings must have practical importance as well as statistical importance. (See Statistical Significance.)
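
To put the card’s example in concrete terms (assuming the 0.5% is relative to a typical adult FEV1 of roughly 3 L, an assumption not stated in the card):

0.5\% \times 3\,\text{L} = 0.005 \times 3000\,\text{mL} = 15\,\text{mL}

An improvement of about 15 mL is clinically trivial, even though P < 0.0001 makes it highly statistically significant.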

13
Q

What are examples of descriptive study designs?

A
  • case report
  • case series
  • cross-sectional study
14
Q

What are examples of observational - retrospective study designs?

A
  • case-control
  • retrospective cohort
  • outcome and effectiveness registries
15
Q

What are examples of observational - prospective study designs?

A
  • cohort
16
Q

Analysis

A

The statistical methods, results, and conclusions of the study.

*are they appropriately conducted and stated?

17
Q

Application

A

The relative impact or importance of the study to your practice of medicine.

*ex: how would I apply this knowledge to my patient(s), and how important is it?

18
Q

The quality of evidence is what you are trying to measure via the process of what?

A

critical appraisal