Evaluative research exam 1 Flashcards
What are the features of random assignment
People assigned to conditions based on chance.
Each person has a nonzero probability of being assigned to any given condition.
Random assignment is distinct from random sampling
What does random assignment do for your study
Evenly distributes participant characteristics across conditions.
Rules out selection threat.
Makes it unlikely that other threats to internal validity are confounded with condition.
Basic random assignment design
Subjects are randomly assigned to a treatment condition (there could be more than one) or to a control condition (or another treatment condition).
Post-test measure of outcome
Potential problems with random assignment
Attrition: people dropping out
Feasibility in terms of ethics, time, or availability of people who meet the criteria
It is difficult to randomize correctly
Ensuring that participants receive the assigned treatment and no other
Why is attrition a problem in a random assignment design?
The people who drop out could be different in important ways (for particular conditions or the study as a whole) and their leaving could threaten validity.
A pretest helps assess the extent of the problem
Pretest-posttest design
Same as the basic random assignment design with an added pretest measure, which can be used as a covariate (ANCOVA) or analyzed with a repeated-measures ANOVA.
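A minimal Python sketch of the covariate (ANCOVA-style) analysis; the column names (`pretest`, `posttest`, `condition`), the simulated data, and the assumed effect size are illustrative, not from the original card:

```python
# Illustrative sketch only: simulated pretest/posttest data and an
# ANCOVA-style model with the pretest as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
condition = rng.permutation(np.repeat(["treatment", "control"], n // 2))
pretest = rng.normal(50, 10, n)
# Assumed 5-point treatment effect, for demonstration only.
posttest = pretest + (condition == "treatment") * 5 + rng.normal(0, 5, n)
df = pd.DataFrame({"condition": condition, "pretest": pretest, "posttest": posttest})

# Posttest regressed on condition, adjusting for the pretest covariate.
model = smf.ols("posttest ~ C(condition) + pretest", data=df).fit()
print(model.summary())
```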
Factorial design
Combine two or more independent variables (factors) that each have at least two levels.
Major advantage: Test interactions along with main effects
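A hedged sketch of a 2x2 factorial analysis in Python, showing how the interaction is tested alongside the main effects; the factor names and simulated effects are made up for illustration:

```python
# Illustrative 2x2 factorial: two crossed factors, interaction tested
# alongside the main effects (all data simulated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "factor_a": np.repeat(["a1", "a2"], 40),
    "factor_b": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
# Built-in interaction effect, for demonstration only.
df["y"] = (
    rng.normal(0, 1, len(df))
    + (df.factor_a == "a2") * 1.0
    + ((df.factor_a == "a2") & (df.factor_b == "b2")) * 2.0
)

# 'factor_a * factor_b' expands to both main effects plus their interaction.
model = smf.ols("y ~ factor_a * factor_b", data=df).fit()
print(anova_lm(model, typ=2))
```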
Longitudinal design
Multiple pretest and posttest measurements of the outcome variable.
Shows changes over time
Problems with longitudinal design
Attrition, and it can be unethical to withhold treatment for long periods of time
Crossover design
Two groups (randomly assigned), two treatment levels; each group goes through both treatment conditions at different times: R O Xa O Xb O and R O Xb O Xa O
List the different random assignment designs
Basic, pretest-posttest, factorial, longitudinal, crossover
When would you use random assignment in field research
when demand is greater than supply
when a treatment can’t be delivered to everyone at once
when temporal isolation is possible
when people are spatially separated or don’t communicate much
when change is needed but it’s unclear which solution will work best
When there is ambiguous need
when some people have no preference among alternatives
when you can create your own organization
when you have control over experimental units
when lotteries are expected
When should you not use a random assignment design
Short on time
research question is not about causation
impossible or unethical to manipulate the IV
More conceptual or empirical work must be done to determine whether an experiment is a good use of resources
What are some techniques of randomization
Simple random assignment
Restricted random assignment to force equal cell sizes
Restricted random assignment to force unequal cell sizes
Haphazard assignment
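A small Python sketch contrasting simple and restricted (forced equal cell sizes) random assignment; the participant IDs and group labels are hypothetical:

```python
# Hypothetical participant IDs; two of the techniques shown side by side.
import numpy as np

rng = np.random.default_rng(42)
participants = [f"P{i:02d}" for i in range(20)]

# Simple random assignment: an independent coin flip per person,
# so cell sizes can end up unequal.
simple = {p: rng.choice(["treatment", "control"]) for p in participants}

# Restricted random assignment forcing equal cell sizes:
# build a list with exactly half of each label, then shuffle it.
labels = ["treatment"] * 10 + ["control"] * 10
rng.shuffle(labels)
restricted = dict(zip(participants, labels))

print(sum(v == "treatment" for v in simple.values()))      # may not be 10
print(sum(v == "treatment" for v in restricted.values()))  # always 10
```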
Regression discontinuity design
Subjects are assigned to condition on the basis of a cutoff score on an assignment variable
The assignment variable must be continuous and measured before the treatment
post-test measure after treatment
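A minimal sketch, assuming a single cutoff of 50 on a simulated pre-treatment assignment variable, of how regression discontinuity assignment and a basic jump-at-the-cutoff analysis might look in Python (all numbers are illustrative):

```python
# All values simulated; the cutoff of 50 is an arbitrary illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
score = rng.normal(50, 10, 200)          # continuous assignment variable, measured pre-treatment
cutoff = 50
treated = (score >= cutoff).astype(int)  # assignment is deterministic at the cutoff

# Simulated post-test with an assumed 4-point treatment effect.
posttest = 10 + 0.5 * score + 4 * treated + rng.normal(0, 3, 200)
df = pd.DataFrame({"score": score, "treated": treated, "posttest": posttest})

# The basic analysis models the assignment variable and estimates the jump at the cutoff.
model = smf.ols("posttest ~ treated + I(score - 50)", data=df).fit()
print(model.params["treated"])  # estimated discontinuity (treatment effect) at the cutoff
```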
Interrupted time series
Builds on the pretest-posttest design by increasing the number of both pretests and posttests
The interruption is the treatment
A time series because pretest and posttest measurements are taken at regular intervals
100 is a generally acceptable number of data points
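One common way to analyze such a series is segmented regression (not named on the card itself); a hedged Python sketch with a simulated 100-point series and a known interruption point:

```python
# Simulated 100-point series with a known interruption at point 50.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
time = np.arange(100)
after = (time >= 50).astype(int)                 # 0 before the interruption, 1 after
time_since = np.where(after == 1, time - 50, 0)

# Assumed level change of 5 at the interruption, for demonstration only.
y = 20 + 0.1 * time + 5 * after + rng.normal(0, 2, 100)
df = pd.DataFrame({"y": y, "time": time, "after": after, "time_since": time_since})

# 'after' captures the level change; 'time_since' captures any change in slope.
model = smf.ols("y ~ time + after + time_since", data=df).fit()
print(model.params[["after", "time_since"]])
```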
Interrupted time series designs are vulnerable to what threats to validity?
History (the biggest): to alleviate it, make measurement intervals smaller.
Instrumentation
Attrition
ITS additions to increase validity
Nonequivalent control group
Nonequivalent dependent variable
Introducing and then removing the treatment
Multiple replications
Switching replications
ITS potential difficulties
Gradual interventions
Delayed causation
Short time series
Limitations of archival data
Quasi-experiments
Attempt to test a causal hypothesis
Called “quasi” because they lack random assignment
How do you determine causation in quasi-experiments?
Cause precedes effect
cause co-varies with effect
Alternative explanations are implausible
Validity
The approximate truth of an inference
A matter of degree
A property of inferences, not designs or methods
Four types of validity
Statistical conclusion
Internal
Construct
External
Statistical Conclusion Validity
Validity of inferences about the covariance between treatment and outcome
I.e., how large and how reliable is the covariation between the presumed cause and effect?
Methods of assessing statistical conclusion validity
NHST (p values), effect sizes, and confidence intervals
P values
Tell us the probability of obtaining results at least as extreme as those observed by chance from a population in which the null hypothesis is true
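A quick illustration in Python with an independent-samples t test on made-up data; the group means and sizes are arbitrary:

```python
# Made-up data; the p value is the probability of a difference at least
# this large if the null hypothesis (no true difference) were correct.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
treatment = rng.normal(55, 10, 30)
control = rng.normal(50, 10, 30)

result = stats.ttest_ind(treatment, control)
print(result.statistic, result.pvalue)
```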
Threats to statistical conclusion validity
Low statistical power
Violated assumptions of statistical tests
Fishing and error rates
Unreliability of measures
Restriction of range
Unreliable treatment implementation
Extraneous variance in the setting
Heterogeneity of respondents
Inaccurate effect size estimation
Power
The probability that a statistical test will reject the null hypothesis when it is actually false and should be rejected
Power is usually set at .80
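A hedged example of a power calculation with statsmodels, solving for the per-group sample size that yields .80 power for an assumed medium effect (d = 0.5) at alpha = .05:

```python
# Solve for the per-group sample size giving .80 power for an assumed
# medium effect (Cohen's d = 0.5) at alpha = .05.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group
```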
Fishing and error rate
Increasing the number of statistical tests increases the probability of a Type I error
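A small arithmetic illustration of how the family-wise error rate grows with the number of tests at alpha = .05 (the formula 1 - (1 - alpha)^k assumes independent tests):

```python
# Family-wise Type I error rate for k independent tests at alpha = .05:
# P(at least one false positive) = 1 - (1 - alpha)**k.
alpha = 0.05
for k in (1, 5, 10, 20):
    print(k, round(1 - (1 - alpha) ** k, 2))
# 1 test: .05; 10 tests: .40; 20 tests: .64 -- fishing inflates the error rate.
```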
Restriction of range
Independent variable: levels too few or too similar
Dependent variable: categorical instead of continuous; floor or ceiling effects
Internal validity
Validity of the inference that the treatment as implemented had a causal effect on the outcome as measured
Is the covariation causal, or would the same covariation have been obtained without the treatment?
Local molar causal validity
Local: causal conclusions are limited to the particular treatments, outcomes, settings, and persons studied.
Molar: treatments are a complex package with many components (INUS conditions)
Causal: did the treatment as implemented cause changes in the outcome as measured?
Threats to internal validity
Ambiguous temporal precedence
Selection
History
Maturation
Regression to the mean
Attrition
Testing
Instrumentation
Selection threat
Participant characteristics are confounded with treatment condition
History
Occurs when an event outside of the study could have produced the outcome
Maturation
Occurs when natural changes in participants could have produced the outcome
Regression to the mean
Extreme scores at one point (or measure) tend to be followed by scores closer to the mean
When people are selected for a study because of a high score, a lower score after treatment could reflect regression to the mean instead of a treatment effect
Testing
Taking a test once can lead to participant changes that are responsible for an observed effect
Instrumentation
Changes over time in measuring instruments might be responsible for an observed effect
Construct Validity
Validity of inferences about higher order abstract constructs based on specific manipulations and operationalizations
Which general constructs are involved in the persons, settings, treatments, and observations used in the experiment?
Ways to increase construct validity
Pilot testing
Manipulation checks for the IV
Explication of constructs
Using existing, workable manipulations and measures
Inadequate explication of constructs
Failure to adequately explicate a construct may lead to incorrect inferences about the relationship between operation and construct
Construct confounding
Operations usually involve more than one construct and failure to describe all the constructs may result in incomplete construct inferences
mono-operation bias
Any one operationalization of a construct both under-represents the construct of interest and measures irrelevant constructs, complicating inference
Mono-method bias
When all operationalizations use the same method (e.g., self-report), that method is part of the construct actually studied
Confounding constructs with levels of constructs
Inferences about the construct that best represent study operations may fail to describe the limited levels of the construct that were actually studied
Treatment sensitive factorial structure
The factor structure of a measure may change as a result of treatment, a change that may be hidden if the same scoring is always used
Reactive self report changes
Self-reports can be affected by participants’ motivation to be in a treatment condition, motivation that can change after assignment is made
Reactivity to the experimental situation
Participant responses reflect not just treatments and measures but also participants’ perceptions of the experimental situation, and those perceptions are part of the treatment construct actually tested
Experimenter expectancies
The experimenter can influence participant responses by conveying expectations about desirable responses, and those expectations are part of the treatment construct as actually tested
Novelty and disruption effects
Participants may respond unusually well to a novel innovation or unusually poorly to one that disrupts their routine, a response that must then be included as part of the treatment construct description
Compensatory equalization
When treatment provides desirable goods or services, administrators, staff, or constituents may provide compensatory goods or services to those not receiving treatment, and this action must then be included as part of the treatment construct
Compensatory rivalry
Participants not receiving treatment may be motivated to do as well as those receiving treatment, and this must be included in the treatment construct description
Resentful demoralization
Participants not receiving the treatment may become so resentful or demoralized that they respond more negatively than they otherwise would
Treatment diffusion
Participants may receive services from a condition to which they were not assigned
External validity
Validity of inferences about generalizing the cause-effect relationship to other persons, settings, treatments, and measurements
How generalizable is the locally embedded causal relationship over variation in persons, treatments, observations, and settings?
Four types of replication
Exact replication
conceptual replication
constructive replication
participant replication
Conceptual replication
Investigates the relationship between the same conceptual variables studied in previous research using different operational definitions
Constructive replication
Tests the same hypothesis as the original experiment with added condition(s) to assess specific variables that might change the previously observed relationship
Participant replication
Conducts the original experiment using a new type of participant