Methodological Research Flashcards

1
Q

What three things affect the reliability of a measurement in an experiment?

A

The reliability of a measurement is influenced by:
The sources of variability studied.
The participants selected.
The range of scores exhibited by the sample.

2
Q

What four sources of variability can affect reliability?

A

Instrument
Intra-rater (intra-tester)
Inter-rater (inter-tester)
Intra-subject

3
Q

Examples of instrument, intra-rater, inter-rater, and intra-subject issues

A
Instrument
Loose axis (slips during measurement)
Tight axis (too difficult to move precisely)
Inter-instrument differences
Intra-rater (intra-tester)
Variations in participant positioning
Inconsistent identification of landmarks
Variable end-range pressure
Inconsistent stabilization
Reading errors
Inter-rater (inter-tester)
Differences between testers in positioning, landmark identification, end-range pressure, or stabilization
Intra-subject
Fluctuations within the participant (e.g., pain, time of day, recent activity)
4
Q

Levels of Standardization:

Nonstandardized, highly standardized, and partially standardized approaches. Explain each term.

A

Nonstandardized approach: No control of any sources listed. Low reliability.
Highly standardized approach: Control of all possible sources. High reliability.
Partially standardized approach: Standardizes a few sources of variability.

5
Q

Range of Scores

What does a restricted range of scores lead to?

What about having an extremely heterogeneous group of people?

Reliability may vary at different points of measurement. For example, girth measurement of swelling might be accurate at the knee, but measuring swelling at the ankle may have low reliability because some testers use the figure-eight method while others measure around the malleoli.

A

A restricted range of scores leads to low reliability coefficients. The use of normal participants can restrict the range of scores within a study.
Using an extremely heterogeneous group would overestimate the reliability of the measure for clinical use.
Reliability may vary at different places in the range of scores because of difficulties unique to taking measurements at particular points in the range.
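Below is a minimal Python sketch (the simulated data and cutoff are my own assumptions, purely illustrative) of how restricting the range of scores deflates a test-retest reliability coefficient:

import numpy as np

rng = np.random.default_rng(0)

# Simulate test-retest scores: a true score per participant plus
# independent measurement error on each occasion.
true_scores = rng.normal(50, 10, 1000)
test = true_scores + rng.normal(0, 3, 1000)
retest = true_scores + rng.normal(0, 3, 1000)

# Reliability proxy: correlation between the two occasions.
full_range_r = np.corrcoef(test, retest)[0, 1]

# Restrict the range, e.g., keep only near-average ("normal") participants.
mask = (test > 45) & (test < 55)
restricted_r = np.corrcoef(test[mask], retest[mask])[0, 1]

print(f"Full-range coefficient:       {full_range_r:.2f}")  # high
print(f"Restricted-range coefficient: {restricted_r:.2f}")  # noticeably lower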

6
Q

Levels of Reliability

If reliability is:

below .50 = what level of reliability?

between .50 and .75 = what level of reliability?

above .75 = what level of reliability?

A

Many reliability coefficients are based on measures of correlation.
Reliability coefficients are interpreted on the basis of their proximity to a value of 1.00.
Below .50 = poor reliability
Between .50 and .75 = moderate reliability
Above .75 = good reliability
For most clinical measurements, reliability should exceed .90 to ensure valid interpretations of findings.
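As a quick self-check, these cutoffs can be written as a small Python function (a minimal sketch; the function name is hypothetical, the thresholds are the ones above):

def interpret_reliability(r: float) -> str:
    # Cutoffs from the card above; r is a reliability coefficient (0 to 1).
    if r < 0.50:
        return "poor reliability"
    if r <= 0.75:
        return "moderate reliability"
    return "good reliability"

print(interpret_reliability(0.91))  # good reliability
print(interpret_reliability(0.60))  # moderate reliability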

7
Q

A measure has a high internal consistency reliability when:
Each of the items correlates with other items in the measure.
Participants score at the high end of the scale every time they complete the measure.
Multiple observers obtain the same score every time they use the measure.
Multiple observers make the same ratings using the measure.

A

Answer: Each of the items correlates with the other items in the measure.

Internal consistency occurs when all items are intercorrelated.
Cronbach’s coefficient alpha is often used for statistical documentation of internal consistency.
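A minimal Python sketch of Cronbach's alpha using its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the toy scores are made up for illustration:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: rows = participants, columns = scale items.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy data: 4 participants x 3 intercorrelated items -> high alpha.
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [1, 2, 1]])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # close to 1.00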

8
Q
Measurement reliability refers to the:
Dependency of the scores
Consistency of the scores
Comprehensiveness of the scores
Accuracy of the scores
A

If the measure is consistent over multiple occasions, it has…………………….

Answer: consistency of the scores

9
Q
A study to establish reliability of Mini Mental-State Examination reported that Cronbach’s α measure of reliability was 0.91. This finding refers to the 
A. Equivalency Reliability 
B. Stability Reliability
C. Internal Consistency 
D. Interrater Reliability
A

Answer: C. Internal Consistency. Cronbach's coefficient alpha is a statistical measure of internal consistency.

10
Q

What is construct validity

A

Reflects the ability of an instrument to measure an abstract concept, or construct.
Strength, function, pain, etc.
FOR STRENGTH: Manual muscle testing, the number of times a particular weight can be lifted, handheld dynamometers, and a multitude of isokinetic tests.
FOR FUNCTION: Self-dressing, transfers, housekeeping, recreational skills, …

11
Q

What is content validity??

A

Reflects the extent to which the items of an instrument adequately sample the universe of content that defines the construct being measured.

Determination of content validity is a subjective process. The validation is made by the panel of experts.
Example: A test of gross motor skills should not include items that assess fine motor skills, nor should it be influenced by the patient’s anxiety level or ability to read.
What range of activities are representative of “function” ?
Should a functional status questionnaire include questions related to physical, cognitive, social, and emotional status?

12
Q

Criterion-related validation

A

Determined by comparing the measure with an accepted (gold) standard of measurement. It is the most practical approach to validity and the most objective.
It is based on the ability of one test to predict results obtained on another test.

13
Q

Criterion-related validity is based on what three concepts?

A
Selecting the criterion: 
Instrument accuracy
Concurrent criterion
Predictive criterion
When two tests are administered to one group of subjects and the correlation is high (correlation coefficient close to 1.00), the target test is considered a valid predictor of the criterion score.
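A minimal sketch of the concurrent approach in Python (hypothetical scores; Pearson's r stands in for the validity coefficient): give the target test and the gold-standard criterion to the same group and check whether the correlation approaches 1.00.

import numpy as np

# Hypothetical scores from one group measured with both instruments.
target_test = np.array([12, 15, 9, 20, 14, 17, 11, 18])
gold_standard = np.array([13, 16, 10, 21, 13, 18, 12, 19])

r = np.corrcoef(target_test, gold_standard)[0, 1]
print(f"Validity coefficient r = {r:.2f}")  # close to 1.00 -> valid predictor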
14
Q

The validity of a measure refers to:
Comprehensiveness with which it measures the construct
Consistency of the measurement
Accuracy with which it measures the construct
Particular type of construct specification

A

Accuracy with which it measures the construct

15
Q
A researcher developed a measure of shyness and is now asking whether this measure does in fact measure a person's true state of shyness. This is a question of:
Reliability
Construct validity
Criterion validity
Concurrent validity
A

Criterion validity

16
Q

My measure allows me to successfully predict future behavioral outcomes. My measure has:
Criterion validity
Construct validity

A

Criterion validity

17
Q

Basic Appraisal Steps

A

Clarify your reason for reading
Specify the information you need
Identify relevant literature
Critically appraise what you read

18
Q

Clarify your reason for reading

A

Keeping up to date.
Answering specific clinical questions.
Pursuing a research interest.

19
Q

Stages in the critical appraisal

A

A quick skim read to familiarize yourself with the study.
Skilled reading to understand the different parts of the study in relation to the whole.
Each part of the study is broken down and appraised in terms of its scientific merit and use to health care professionals.
All parts are reconstituted into a whole. The reader decides how each stage of the research process is addressed and then considers the overall impact of the study.

20
Q

What does an abstract contain, and how many words is it usually?

A

Summarizes research purpose, methods, and results.
Usually 150-300 words.
Does not summarize the literature, or the limitations and implications of the research.

21
Q

What elements make up an introduction?

A

Was there a search of a wide range of literature pertinent to the topic?
Was there a search strategy with named databases and key search terms?
Was the literature critically appraised?
Is the literature review up to date?
Like a FUNNEL: moves from a broad statement to the essence of the study.
Should end with a clear statement of the purpose of the study.

22
Q

Methods section: the “sample selection” and research design portions.

A

Sample selection: How were participants recruited and selected to take part in the study? Was informed consent obtained?
Research design and data collection: Were the design and data collection appropriate for the research question? Were the participants protected? What was the researcher's role? Was the statistical analysis appropriate?

Ethics approval
Tests, assessments, instruments, and outcome measures
Interventions
Data analysis

23
Q

What to look for in the results section.

A

Were the results and analysis linked back to the original research question?
Was there any evidence of lost data?
Was there evidence of a statistician's input for complex analyses?

24
Q

What questions should the discussion of the results answer?

A

Were the conclusion and recommendations based on the results of the study?
Was it clear that there was no intention to mislead or give false conclusions?
Did the researchers acknowledge any limitations?
The usefulness of research lies in how far the results can be generalized.
What is new here?
What does it mean for health care?
Is it relevant to my patients?

25
Q

What does the conclusion answer?

A

Concisely restates the important findings of the research.

Presents a conclusion for each purpose outlined in the introduction

26
Q

What is included in an appendix, and where does it appear?

A

If included, follows the references.

Typically includes survey instruments or detailed treatment protocols

27
Q

Guidelines for discussing published research

A

Discuss the study in the past tense.
Clearly distinguish between your opinions and those of the authors.
Qualify generalizations so they are not erroneously attributed.
Justify each position with evidence from the study.

28
Q

Standard appraisal questions

A

Are the aims clearly stated?
Was the sample size justified?
Are the measurements likely to be valid and reliable?
Are the statistical methods described?
Did untoward events occur during the study?
Were the basic data adequately described?
Do the numbers add up?
Was the statistical significance assessed?
What do the main findings mean?
How are null findings interpreted?
Are important effects overlooked?
How do the results compare with previous reports?
What implications does the study have for your practice?