Exam 2 Flashcards
For a question to be answerable empirically, it must be _________
Specific
This section describes the process of research design, expanding on the question, so that a reader can know exactly what you did
Method
Is the process of answering all those Wh-questions in a way that ensures the question is answered and the answer will be valid
Research Design
Ensures that the research answered the question and answered it in a way that is believable
Internal Validity
Ensures that the results of the study can, to some degree, be generalized outside the study
External Validity
What are the types of quantitative research design?
Group and single subject
Subjects are assigned to a group or groups; the question is answered by comparing the overall performance of the group to a control condition or a different group
Group Design
Type of research that allows for making causal inferences about the effects of an intervention compared to a baseline, usually over a period of time; focuses on individual performance
Single Subject Design
Also known as small-N design or time-series design
Single subject design
What are the general criteria for single subject design?
Conditions must be controlled, and more than one measurement of the DV is made
Baseline phase with repeated measurements and an intervention phase continuing the same measures
Basic Design (A-B design)
Technique and method of comparison among phases are similar; one or more additional phases are added
Baseline-treatment Withdrawal/Replication (ABA or ABAB)
Take away the treatment to see if performance goes back to baseline
ABA
Remove the treatment, then repeat both the baseline and treatment phases; does the effect replicate? (method of replication)
ABAB
Tests more than one IV, such as when gauging the effect of two or more treatments on the DV
ABACA Design
Applied across subjects; can also be applied across behaviors to study the effect of one intervention on several DV-related behaviors
Multiple Baseline Design
What is the advantage of multiple baseline design?
No withdrawal or reversal of treatment is necessary
The effect of the IV is shown by successive changes in the DV to match a stepwise performance criterion that is specified as a part of the intervention
Changing-Criterion Design
At each segment, the target behavior must both satisfy the present criterion and achieve some stability before the next criterion level is applied
Changing-Criterion Design
Being able to change the design as the study proceeds
Design Flexibility
Participants not assigned randomly but assigned with the question in mind (criterion-based selection)
Purposeful Sampling
Participants are not put into “experiment” situations; they are questioned about, and allowed to remain in, their everyday environment; can be participatory or non-participatory
Naturalistic Inquiry
The researcher interacts with the participants at least to some degree
Participatory Design
Researcher does not interact with the participant(s); lets the situation develop naturally; records for later analysis
Non-participatory Design
Anything that happens in between measurements that is NOT the independent variable in question
History
Changes due to development that cannot be controlled by the researcher
Maturation
A pre-test can, by its nature, have an effect on later results
Reactive Pre-test
How is the normal bell curve shaped?
Symmetrical and not excessively peaked with ends that taper off
What can you find in the methods section?
Subjects/participants, materials, and procedures
Qualitative data collected first; the quantitative portion of the study supplements the qualitative
Sequential exploratory
Quantitative data collected first — qualitative data collected to explain results, especially if unexpected
Sequential explanatory
Quantitative and qualitative portions happen at the same time, and have equal standing (equally as important)
Concurrent triangulation
Two portions may happen at the same time, but one is given priority
Concurrent nested
What threatens internal validity?
Anything that can affect the quality of the measurements made
Doing better on a condition because it was earlier or later in the sequence
Sequencing
Selecting participants based on extreme scores; some may have been lucky or unlucky, so their second test performance may be closer to the average even with no treatment (regression toward the mean)
Statistical regression
Assigning participants to groups that differ on anything other than the variable you want
Differential subject selection
When participants drop out of one group more than another, especially if the dropout has something to do with their performance
Attrition/mortality
When any internal threat interacts with another — consider levels of all variables
Interaction
Rather than ruling out other explanations using control groups, it is ensured by gathering more data until it is clear that other explanations are not reasonable
Credibility
The researcher should be careful to avoid injecting their own point of view or hypothesis into what participants say
Researcher Bias
A way to avoid bias by summarizing results and checking in with participants to verify them
Member checking
When data is collected in such a way as to “fit” a given hypothesis, rather than in a way that allows another interpretation
Researcher reactivity
How well does the test reflect what you want to measure
Content validity
How well the measure correlates with another (validating) criterion
Criterion validity
Consistency of results between two tests that assess the same skill
Concurrent validity
Theoretical validity; does this test measure the concept it is intended to measure?
Construct validity
Precision and accuracy of measurement — how stable the measurement is and how much error it has
Reliability
Amount that the obtained score varies from the true score
Measurement error
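A standard way to express this (the classical test theory identity, not stated on the card) is obtained score = true score + error, or X = T + E; a reliable measure is one in which the error component E is small relative to the true score T.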
How much do different readers agree with each other on the same stimuli?
Inter-rater agreement
How much does an individual rater agree with himself or herself across the same stimuli?
Intra-rater agreement
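As a rough illustration of rater agreement (a minimal sketch, not part of the cards; the raters, labels, and ratings below are hypothetical), point-by-point percent agreement can be computed by comparing two raters' judgments of the same stimuli:

```python
# Point-by-point percent agreement between two raters scoring the same stimuli.
# Ratings are hypothetical; the same idea applies to intra-rater agreement by
# comparing one rater's first and second passes over the stimuli.
rater_a = ["correct", "error", "correct", "correct", "error"]
rater_b = ["correct", "error", "correct", "error", "error"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"Point-by-point agreement: {percent_agreement:.1f}%")  # 80.0%
```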
What are the three ways to demonstrate reliability?
Test-retest, alternate forms, split half
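As an illustration of the test-retest approach (a minimal sketch, not from the cards; the scores are hypothetical), reliability can be estimated as the correlation between scores from two administrations of the same test:

```python
import statistics

# Hypothetical scores from two administrations of the same test.
test1 = [12, 15, 9, 20, 18, 14]
test2 = [13, 14, 10, 19, 17, 15]

# Pearson correlation between the two administrations (test-retest reliability).
mean1, mean2 = statistics.mean(test1), statistics.mean(test2)
cov = sum((x - mean1) * (y - mean2) for x, y in zip(test1, test2))
sd1 = sum((x - mean1) ** 2 for x in test1) ** 0.5
sd2 = sum((y - mean2) ** 2 for y in test2) ** 0.5
r = cov / (sd1 * sd2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```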
Ensures that groups are similar overall on the relevant variables
Group matching
Participants are matched in pairs on nuisance variables and then randomly assigned to groups
Pair matching
Refers to undertaking a “mini” version of the study, in order to make sure that it works — also ensures that data seem reasonable given previous research, questions can be refined, etc.
Pilot testing
Serves to avoid negative surprises when the full-blown experiment is already underway
Pilot testing
Looking at differences between groups of subjects — one group usually a control
Between subjects design
Looking at differences in performance across tasks performed by all subjects within a group of subjects — participants are their own controls
Within-subjects design
Compares performance across tasks (within subjects) performed by different groups (between subjects)
Mixed design
Treatment is _______ when it can be shown to do what it was designed to do — it has a benefit.
Effective
When a treatment shows a benefit to the client, whether or not it was the intended benefit
Therapeutic effect
Therapeutic effect must be reliable and large enough to be considered important
Clinical significance
Must be collected to show a therapeutic effect
Clinical outcome data
Work or cost of one treatment versus another
Efficiency
What are the factors that enhance the generalizability of results?
Use of random sampling, larger sample size, replication
Weakness: there is no pretest to which to compare the observation
One-shot case study
Better, but no control group
One group pre-test/post test
No pre-test again; the groups may have been different to begin with
Static group comparison
Similar to the one group pre-test/post test design, except with the addition of a comparison group that is not randomized
Non-equivalent control group
Like single subject designs, involves many repeated observations
Time series design
Ensures groups are equivalent before starting; treatment and control are tested at equivalent time intervals
Randomized controlled trials (RCT)
The main difference from a quasi-experiment is the random assignment of participants to groups
Randomized pretest, posttest control group designs
Four randomized groups: the above two, plus a third group with treatment but no pretest and a fourth group with only a posttest; allows for examination of interactive effects
Solomon design
Determines whether a therapeutic effect exists
Phase 1 treatment outcome research
Determines the appropriateness of the intervention; determines who the treatment is effective for
Phase 2 treatment outcome research
Involves more rigorous experimental designs; examines efficacy, not just therapeutic effect
Phase 3 treatment outcome research
Takes treatment from lab to clinic
Phase 4 treatment outcome research or translational research
Focuses on efficiency as well as effectiveness
Phase 5 treatment outcome research
What is the strongest level of experimental evidence?
Systematic reviews and meta-analyses
What is a strong level of experimental evidence?
Well designed randomized controlled clinical studies