Research Flashcards
Independent variable
intervention or condition
- Causes or influences the dependent variable
- Controlled and manipulated by the researcher
Dependent variable
outcome or response
Clinical trial
research design that tests how well methods of screening, prevention, diagnosis, or treatment (tx) of disease (dx) work in people
Completely randomized design
random assignment to groups, each group receiving a unique intervention
outcomes between groups compared at end of trial
- parallel design
Crossover design
subject acts as their own control
subject receives both treatments
Factorial design
subjects experience different combinations of two or more interventions
- subjects get different combos of levels of the independent variables
Pretest-posttest control group design
- basic format of randomized controlled trial
- testing of randomized groups before and after treatment
Posttest-only control group design
randomized groups are only tested after the intervention
repeated measures design
each subject acts as their own control
subjects are tested under all conditions
also known as within-subjects design
Sequential clinical trial
Data are analyzed as they become available so the trial can be stopped as soon as evidence is sufficient to show a difference between treatments
Single subject design
Conclusions can be drawn about the effects of a treatment based on a single patient.
- Uses repeated measurements over time across at least two phases.
- Baseline phase (A): prior to treatment
- Intervention phase (B): during treatment
- phases are replicated to create the design
(A-B), (A-B-A), (B-A-B)
Quasi experimental design
Research design without control group, random assignment, or both
- One group pretest post test design.
- One-way repeated measures design over time.
- Time series design.
One group pretest-posttest design; quasi-experimental design
Measurements on one group of subjects before and after treatment.
Time = independent variable with two levels (pretest and posttest)
One-way repeated measures design over time; quasi-experimental design
Measurements on one group made at prescribed time intervals.
Time series design; quasi-experimental design
Multiple measurements made before and after treatment; patterns or trends observed during the pre-treatment and post-treatment periods
Internal validity
Intervention causes the outcome.
- Control of extraneous variables and sources of bias that may reduce validity of results
- blinding
- control groups establishment
- Matching/ pairing
- Intent to treat analysis
what is blinding and what different types exist in order to create internal validity
Method to keep individuals from knowing which subjects have or have not received the intervention. Reduces bias and the placebo effect.
Single blind: subjects unaware of group assignment until end of study
Double blind: subjects and some researchers unaware of the hypothesis or group assignment until end of study
Triple blind: subjects, members of the research team, and data analysts unaware of the research hypothesis and group assignment until end of study
Control group; what are the two most common forms of control?
- Statistically identical to treatment group, except for variable of interest.
- Help isolate the effects of the independent variable
- Active control: Effective (not sham) treatment is compared to experimental treatment. When effective treatment is available it is unethical to use placebo controlled for comparison to experimental treatment.
- Placebo control: Inactive substance/treatment that looks the same and is administered the same way as the active drug or treatment
What is Matching/ pairing used for ? What is it?
- Method to increase internal validity
- Before random group assignment, subjects with similar characteristics (e.g., weight, age, race) are paired so that groups are balanced on important variables that may affect the outcome
What is intention to treat analysis, why is it used
All subjects are analyzed in the groups to which they were originally randomized, regardless of dropout or protocol deviations.
This preserves the original balance of subject groups created by randomization.
External validity
- Degree that research results are generalizable to population/circumstances beyond the study.
- Threats to external validity include the specific types of subjects tested, the place (setting), and the time (history) in which the study was performed
Threats to internal validity
history, maturation, attrition, testing, instrumentation, regression toward the mean
What is an example of a Hawthorne effect, what is the Hawthorne effect
Tendency for individuals to change behavior in response to being watched or observed in a study
-Child acts differently, well behaved in front of observer
alternate hypothesis H1
Statement that the population parameter has a value different from that claimed in the null hypothesis.
The alternate hypothesis is accepted when the null hypothesis is rejected
Null hypothesis H0
Value of a population parameter (mean, proportion, correlation coefficient) is equal to the claimed value
P-Value
Probability that the statistical result happened by chance (assuming the null hypothesis is true).
- reject the null when P < alpha
- (alpha = level of significance)
- fail to reject the null when P is greater than or equal to alpha
- "Reject the null when P is small"
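A minimal sketch of this decision rule, using a two-tailed one-sample z-test computed with only the standard library (the function names and the example numbers are illustrative, not from the source):

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-tailed p-value for a one-sample z-test."""
    se = sigma / math.sqrt(n)                       # standard error of the mean
    z = (sample_mean - mu0) / se                    # test statistic
    # Standard normal CDF via the error function
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)                            # two-tailed probability

def decide(p, alpha=0.05):
    # Reject the null when p < alpha; otherwise fail to reject
    return "reject H0" if p < alpha else "fail to reject H0"

# Example: population mean 100, SD 10, sample of 25
p1 = z_test_p_value(103, 100, 10, 25)   # z = 1.5, p ~ 0.134 -> fail to reject
p2 = z_test_p_value(105, 100, 10, 25)   # z = 2.5, p ~ 0.012 -> reject at 0.05
```

Note that "fail to reject" is the correct phrasing: a large p-value does not prove the null hypothesis, it only means the evidence was insufficient to reject it.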
Type 1 error
Alpha error
- Wrongly rejecting the null hypothesis
- Concluding there is a difference/relationship when there is not
- At a significance level of 0.01, there is a 1% chance of a type 1 error occurring.
- False positive finding
Type 2 error
Wrongly failing to reject the null hypothesis.
Concluding there is no difference when there is.
False negative
What type of significance is concluded when there is only a small probability that the difference between groups or relationship between variables happened by chance?
statistical significance
What is statistical power
Probability that a statistically significant result is found when a true effect exists
- probability that the study leads to rejection of a false null hypothesis
Effect size
Magnitude of difference between two treatments. Magnitude of relationship between two variables.
The larger the ES the more likely it is statistically significant.
Effect size index
Calculated as the mean of the treatment group minus the mean of the control group, divided by the standard deviation of one of the groups
What is a trivial, small, moderate and large effect using the effect size index
<0.1 = trivial effect
0.1 - 0.3 = small effect
0.3 - 0.5 = moderate effect
>0.5 = large effect
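The effect size index and the thresholds above can be sketched in a few lines (function names and example numbers are illustrative):

```python
def effect_size_index(treat_mean, control_mean, sd):
    """Difference in group means divided by one group's standard deviation."""
    return (treat_mean - control_mean) / sd

def classify(es):
    # Thresholds from the card above
    es = abs(es)
    if es < 0.1:
        return "trivial"
    if es < 0.3:
        return "small"
    if es < 0.5:
        return "moderate"
    return "large"

# Example: treatment mean 54, control mean 50, SD 10 -> index 0.4 (moderate)
es = effect_size_index(54, 50, 10)
```

The absolute value is taken so that the magnitude is classified regardless of which group scored higher.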
MCID: minimal clinically important difference
Indicates the smallest difference in a patient's condition that may warrant a change in patient management.
For a research study: indicates whether the effect on patient status is meaningful or trivial
MDD (minimal detectable difference)
Change in a patient's condition beyond the threshold of measurement error. Smallest difference that would be statistically significant
What is the difference between parameter and statistics
Parameter: numerical characteristic of a population (population mean/standard deviation)
Statistic: numerical characteristic of a sample, used to estimate population parameters (sample mean and standard deviation)
What’s the difference between systematic review and meta analysis
- Systematic review: comprehensive review using explicit methods to systematically search identify and appraise all literature
- Meta-analysis: systematic review using statistical techniques to estimate effect size. Minimizes the problem of small sample sizes in individual studies, since it pools trials and increases the overall sample size
Nominal scale
Each person can be assigned to only one category; values are mutually exclusive and exhaustive.
Blood type, type of breath sound, type of arthritis
Ordinal scale
Ranking scale.
Intervals between ranks may be unequal or unknown
Manual muscle test grade, level of assistance, pain, and joint laxity grades
Interval scale
Intervals between values are equal, but there is no true zero point.
Temperature in Celsius, functional status tests
Ratio scale
Intervals between values are equal and there is a true zero point.
-Range of motion in degrees, distance walked, time to complete activity, nerve conduction velocity
What is the difference between alternate forms of reliability and internal consistency
Alternate forms reliability: different versions of a test yield equivalent results (versions 1, 2, and 3 measure the same thing)
Internal consistency: Extent that items/ elements reflect one basic phenomenon or dimension.
-EX:Assessment scale only includes items related to patient’s physical function
Intrarater reliability
Consistency of repeated measurements over time by the same examiner
Interrater reliability
Consistency and agreement of measurements taken by different examiners
Test retest reliability
Consistency/equivalence of repeated measurements on the same person on separate occasions.
- Can be affected by the interval between tests due to effects of fatigue/learning or changes in the characteristic being measured
Face validity
Degree to which a measurement appears to test what it is supposed to measure.
- Assesses how measurements derived from the test relate to the specific problem
Content validity
Measurement reflects the meaningful element of a construct and items in the test reflect the question at hand and not extraneous elements.
- McGill pain questionnaire (better because of its more thorough pain assessment) versus visual analog pain scale
Construct validity
- Degree that theoretical construct is measured by a test or instrument.
MMT scores are a valid indication of innervation status if there is a relationship between MMT and electromyographic testing
Concurrent validity
- Form of criterion related validity
comparing measure to the gold standard
Predictive validity
Form of criterion-related validity; a measurement is considered valid because it predicts a future event or behavior
- GPA or GRE scores predict success in academia
Prescriptive validity
- Form of criterion related validity
- Measurement suggests the form of treatment the person should receive
- A person with asystole on ECG has an arrhythmia for which CPR is the indicated treatment