Week 5 Flashcards

1
Q

What do systematic reviews deliver?

A

Deliver a clear and comprehensive overview of available evidence on a given topic.

2
Q

Why should we conduct reviews?

A

The goal of conducting a review is to draw conclusions from the cumulative weight of the evidence about the elements of interest in patient management.

3
Q

What are the types of reviews?

A
4
Q

Explain systematic reviews

A
5
Q

What is the process of conducting reviews?

A
6
Q

Formulate a PICO question: The review aimed to compare knee kinematics and kinetics in individuals with ACLR with the contralateral limb and healthy age-matched controls during three tasks (walking, stair ascent/descent, and running).

A

P: individuals with ACLR; I (exposure): ACL reconstruction; C: the contralateral limb and healthy age-matched controls; O: knee kinematics and kinetics during walking, stair ascent/descent, and running.
7
Q

Formulate a PICO question: In adults with chronic back pain, how effective is physical therapy compared to over-the-counter pain medication in improving pain levels over a 12-week period?

A

P: adults with chronic back pain; I: physical therapy; C: over-the-counter pain medication; O: pain levels over a 12-week period.
8
Q

How do you use AND, OR, and NOT?

A
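AND narrows a search (records must match both terms), OR broadens it (records may match either term), and NOT excludes records containing a term. A minimal sketch using Python sets of invented article IDs to mimic how a database combines search terms:

```python
# Hypothetical sets of article IDs returned by three search terms.
back_pain = {1, 2, 3, 4, 5}        # results for "back pain"
physical_therapy = {3, 4, 5, 6}    # results for "physical therapy"
surgery = {5, 7}                   # results for "surgery"

# AND narrows the search: only records matching both terms remain.
both = back_pain & physical_therapy
# OR broadens the search: records matching either term are kept.
either = back_pain | physical_therapy
# NOT excludes: records containing the unwanted term are removed.
excluded = both - surgery

print(both, either, excluded)
```

Note that NOT should be used cautiously in real searches, since it can silently discard relevant studies.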
9
Q

How do you screen and select studies?

A
10
Q

How do you assess study credibility?

A
11
Q

How do you assess if the investigators limit the review to high-quality studies?

A

Consider the type of research question and the research design of the included studies. For example:
- Studies about patient management may use non-experimental research designs
- Intervention studies should include RCTs

12
Q

How do you assess if the investigators implement a comprehensive search and study selection process?

A

- A description of the process by which all relevant electronic and print databases and collections were searched
- A list of the databases searched
- Whether studies in English and non-English languages were included or excluded
- Gray literature: unpublished studies, theses, dissertations, abstracts, proceedings

13
Q

How do you assess if the investigators assess the quality of individual studies with the standardized processes and/or tools?

A

Bias may arise from several sources, such as:
- Lack of blinding of the investigators assessing the outcome measure
- Failure to randomly allocate subjects to groups
- Loss of subjects from one group during the trial

Check that appropriate methods and tools were used to assess the quality of the included studies.

14
Q

What are some ways to assess the risk of bias in included studies?

A
15
Q

How is a risk-of-bias assessment supported?

A
16
Q

How do you assess if the investigator provides details about the research validity or quality of studies included in the review?

A

- If a threshold quality score was required for study inclusion, it should be stated along with the rationale.
- A sensitivity analysis should be performed in the meta-analysis.

17
Q

How do you assess if the investigators address the publication bias?

A

The PRISMA guidelines recommend reporting whether gray literature was included and whether studies with non-significant results were included.

18
Q

If it is a meta-analysis, how do you assess whether the investigators use individual data in the analysis?

A

- The analysis may use summary scores or individual patient data from the included studies
- Authors may be contacted for missing data; this should be reported in the methods
- Subgroup analyses may be performed

19
Q

How are included studies examined to extract and appraise data?

A
20
Q

How do you synthesize data and perform a meta-analysis?

A
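The card's answer is not recorded here. One widely used approach is inverse-variance (fixed-effect) pooling: each study's effect size is weighted by the inverse of its variance, so more precise studies contribute more. A minimal sketch with invented effect sizes and standard errors:

```python
import math

# Hypothetical effect sizes and standard errors from three studies.
effects = [0.30, 0.50, 0.40]
ses = [0.10, 0.20, 0.15]

# Weight each study by the inverse of its variance (1 / SE^2).
weights = [1 / se**2 for se in ses]

# Pooled effect: weighted average of the study effects.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Standard error of the pooled effect, and its 95% confidence interval.
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(round(pooled, 3), round(pooled_se, 3), ci)
```

A random-effects model (which adds between-study variance to the weights) is often preferred when studies are heterogeneous.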
21
Q

What is risk ratio? How is it interpreted?

A
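A risk ratio (RR) is the risk of the event in the treatment/exposed group divided by the risk in the control group: RR < 1 suggests reduced risk with treatment, RR = 1 means no difference, and RR > 1 suggests increased risk. A minimal sketch with an invented 2x2 table:

```python
# Hypothetical trial: event counts and group sizes.
events_tx, n_tx = 10, 100      # 10 of 100 treated subjects had the event
events_ctl, n_ctl = 20, 100    # 20 of 100 control subjects had the event

risk_tx = events_tx / n_tx     # risk in the treatment group (0.10)
risk_ctl = events_ctl / n_ctl  # risk in the control group (0.20)

rr = risk_tx / risk_ctl        # risk ratio
print(rr)
```

Here RR = 0.5, i.e. the treatment group had half the risk of the event compared with controls.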
22
Q

Give an example of risk ratio assessment

A
23
Q

Give an example of a meta-analysis result

24
Q

What is the standard for systematic reviews?

25
Give examples of variables
26
What are the types of variables?
27
What are the levels of variables?
28
Define independent variable
29
Can secondary independent variables have levels?
30
What does “secondary outcome(s)” mean in a study?
31
What are the levels of measurement?
32
Define the levels of measurement
33
Provide examples for levels/scales of measurement
34
Define these
1. Norm-Referenced & Criterion-Referenced Standards: Norm-referenced measurements compare an individual’s performance to a group, assessing relative standing, while criterion-referenced measurements evaluate performance against a fixed standard, focusing on mastery rather than comparison.
2. Reliability: Reliability refers to the consistency and stability of a measurement tool over time. A reliable test produces the same results under consistent conditions, ensuring reproducibility in assessments.
3. Validity: Validity determines whether a test measures what it is intended to measure. A test must be valid for its results to be meaningful and applicable in clinical or research settings.
4. MDC (Minimal Detectable Change): MDC represents the smallest measurable change in a score that exceeds potential measurement error, ensuring that observed differences reflect real change rather than variability.
5. MCID (Minimal Clinically Important Difference): MCID is the smallest change in a score that a patient perceives as beneficial, indicating meaningful clinical improvement rather than just statistical significance.
6. Ceiling & Floor Effect: The ceiling effect occurs when a test is too easy, leading many to score at the maximum, limiting the ability to measure improvement. The floor effect happens when a test is too difficult, causing many to score at the minimum, restricting the assessment of decline.
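The MDC definition above is commonly operationalized as MDC95 = 1.96 × √2 × SEM, where SEM = SD × √(1 − ICC). A sketch with invented SD and ICC values:

```python
import math

# Hypothetical test-retest data for an outcome measure.
sd = 8.0     # standard deviation of the scores (invented)
icc = 0.90   # test-retest reliability coefficient (invented)

# Standard error of measurement: SEM = SD * sqrt(1 - ICC)
sem = sd * math.sqrt(1 - icc)

# Minimal detectable change at the 95% confidence level.
mdc95 = 1.96 * math.sqrt(2) * sem

print(round(sem, 2), round(mdc95, 2))
```

With these values, a patient's score must change by more than about 7 points before the change can be attributed to real improvement rather than measurement error.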
35
Define normative data
36
Define norm referenced standard
37
Define criterion referenced standards
38
Give an example of criterion reference
39
Define validity and reliability
40
Define test/retest reliability
41
Define interrater and intrarater reliability
42
Define internal consistency
43
What is the Tampa Scale of Kinesiophobia?
44
Define face validity
45
What is the role of criterion validity?
46
Define predictive validity
47
Define construct validity
48
Define convergent validity
49
Define minimal detectable change (MDC)
50
Define minimal clinically important difference (MCID)
51
Define responsiveness
52
Define ceiling effect
53
Define floor effect
54
C
55
A) How many variables? B) How many continuous variables?
A) 11 B) 6 (because they have a mean and standard deviation)