Chapter 4: Research Methods Flashcards
Internal Validity
the extent to which the interpretation drawn from the results of a study can be justified and alternative interpretations can be reasonably ruled out
External Validity
the extent to which interpretations drawn from the results of a study can be generalized beyond the narrow boundaries of a specific study
Statistical Conclusion Validity
the extent to which the results of a study are accurate and valid based on the type of statistical procedures used in research
Factor Analysis
a statistical procedure used to determine the conceptual dimensions or factors that underlie a set of variables, test items, or tests
Moderator
a variable that influences the strength of the relation between a predictor variable and a criterion variable
Mediator
a variable that explains the mechanism by which a predictor variable influences a criterion variable
Structural Equation Modeling
a comprehensive statistical procedure that involves testing all components of a theoretical model
Randomized Control Trials
an experiment in which research participants are randomly assigned to one of two or more treatment conditions
Clinical Significance
in addition to the results of a study attaining statistical significance, the results are of a magnitude that produces changes in some aspects of participants’ daily functioning
Systematic Review
the use of a systematic and explicit set of methods to identify, select, and critically appraise research studies
Meta-analysis
a set of statistical procedures for quantitatively summarizing the results of a research domain
Effect Size
a standardized metric, typically expressed in standard deviation units or correlations, that allows the results of research studies to be combined and analyzed
What is qualitative research?
better suited to generating hypotheses, describing intricate processes, and exploring the subjective experiences of small groups of subjects
specifically seeks to avoid establishing parameters which tend to limit the range of participants’ responses
data collection looks like clinical interviewing
qualitative data often take the form of lengthy narratives which are carefully analyzed for the emergence of recurrent themes
What are the disadvantages of qualitative research?
inherent difficulty in comparing studies that purport to examine similar phenomena
small n studies have severely limited generalizability
What are the advantages of qualitative research?
better at illuminating process
What is quantitative research?
based on specific research designs intended to eliminate confounds
designs are available, or can be modified, to accommodate variously-sized research samples, multiple conditions, and the passage of time
the use of validated measures generates data which can be statistically analyzed to identify trends, identify significant between-group differences, and describe performance
this allows for systematic inquiry into specific research questions
deliberately seeks to limit responses
What are confounding factors?
anything that introduces competing explanations for observed phenomena
controlling for confounds improves the interpretability of results by ruling out alternative explanations
What are the advantages of quantitative research?
well-suited to examining the effectiveness of interventions and describing the population for whom those interventions have proven useful
results are reported in a way that contributes to an ongoing research enterprise
What is the jigsaw puzzle analogy of quantitative research?
think of it as a community of researchers cooperating to assemble a jigsaw puzzle
the idea is to both utilize, and contribute to the existing knowledge base(s)
doesn’t mean you have to base your designs on others’ work, but similar methods are often used
How should the results of studies relate to previous research?
it is incumbent upon the researchers, in reporting their results, to explain the fit of their findings with the existing body of literature
results which appear contradictory to previous findings must be explained with reference to differences in study procedures, participants, analytic methods, etc. or revision of theory
these matters are typically covered in the Discussion section of a research report
Why is educating patients about research important?
it is reasonable for consumers of professional services to expect information concerning: the likely outcomes, the expected benefits, potential risks
based on the results of properly conducted studies
psychologists should not expect patients to participate in treatment on the basis of their professional reputation (“eminence-based practice”)
educating patients about research findings may improve compliance
What is deductive hypothesis generation?
designing research provides opportunities to test hypotheses emerging from various theories
if ______ then _______
to the extent that those hypotheses are disconfirmed, there is an opportunity to modify the theory
this, in turn, will result in new hypotheses which can also be subject to testing
evidence and theory inform one another reciprocally, and a good theory must be able to accommodate existing data
What is the inductive process?
there is an inherently qualitative component to clinical interviewing, and to making observations in the course of providing psychological services
the hypotheses that emerge from those contacts are colored by our unique personal experiences, theoretical orientation, and perceptions of the client/patient
What is operationalization?
once the research question has been conceptualized, variables must be chosen to translate the (relatively abstract) concept into data
it is often difficult to identify measures that adequately encapsulate complex ideas
may be necessary to choose multiple measures in order to capture the relevant aspects of the concept under study
Why is generalizability important?
it is very important for researchers to appreciate cultural assumptions and obstacles that could compromise the usefulness of the research data
this is ultimately a question of generalizability, which speaks to the range of individuals to whom the research outcome could potentially be applied
Why is an ethics evaluation important in research?
an ethics evaluation is essential even though it may actually limit the range of designs available
for example, it may be unethical to place individuals in a control group if there is pre-existing evidence that an experimental condition might offer relief from symptoms
What is the goal of research?
almost invariably, research attempts to explicate the relationship between two or more variables
there are essentially three classes of relationship: correlation, moderation, and mediation
What is correlation?
the degree to which two or more variables change together
in a positive correlation, an increase in one is associated with an increase in the other
in a negative correlation, changes in one variable are met with changes in an opposite direction in the other variable
note that no causal relationship can be surmised from correlations alone
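The sign conventions above can be sketched numerically. Below is a minimal hand-rolled Pearson correlation; the variable values are made up for illustration:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Positive correlation: an increase in one goes with an increase in the other.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # ≈ 1.0
# Negative correlation: changes move in opposite directions.
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # ≈ -1.0
```

Note that a correlation near ±1 still says nothing about causation, per the card above.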
What is moderation?
is the treatment equally efficacious for all participants?
e.g., does this intervention for the treatment of bulimia work equally well for boys and girls and for patients of different ages (moderator analysis)
What is mediation?
what is the mechanism of change?
e.g., how does the intervention work? is it changing body image by resisting media images or by teaching relaxation? (mediator analysis)
What is the Canadian Code of Ethics?
not just about treatment but guides psychologists in all aspects of their practice
What safeguards ensure research is scientifically and ethically sound?
institutional approval
informed consent for research
informed consent for recording
protecting research participants
only dispensing with informed consent under conditions highly unlikely to result in harm
avoiding coercion or offering excessive inducements to participate in research
avoidance of deception
debriefing research participants following their involvement in the study
humane care and use of animals in research
integrity in reporting research results
avoidance of plagiarism
utilizing authorship credits which accurately convey intellectual contribution to a study
ensuring independence of data from previous publication
sharing research data for verification
respecting the confidentiality of any material submitted for review
In what way do research designs exist on a continuum?
at the lowest level, are purely descriptive and observational studies
at another level there may be multiple conditions, repeated measures, and several variables
no one design is universally superior, rather some are more appropriate for answering specific questions than others
What is internal validity?
refers to the degree to which a design is capable of supporting unambiguous conclusions
to provide an adequate test of the research hypothesis, by isolating effects, and minimizing other sources of variance
What is random assignment?
a hallmark of a true experiment
seldom available in clinical experiments
most psychological studies are quasi-experimental in nature
this is often misunderstood to mean that the studies are inherently “less sophisticated”
reflects the fact that human behavior is inherently complex, and subject to a far greater number of uncontrollable influences than physical reagents and compounds
What is external validity?
generalization
the degree to which findings would apply to individuals outside the experimental sample
What is statistical conclusion validity?
the degree to which the chosen statistical procedures support the conclusions and claims made by the study
In what way is history a threat to internal validity?
external events occurring during the course of the study, not controlled for in the design
things that happened to subjects that the researchers did not know about
In what way is maturation a threat to internal validity?
changes occurring within members of the participant group that are not controlled for in the design
these changes occur while participants are in the study
In what way is testing a threat to internal validity?
repeated exposure to testing procedures may change the way participants respond to them, independently of the main effects
In what way is instrumentation a threat to internal validity?
procedural drift that takes place over the course of a longitudinal study
e.g., researchers developing shortcuts in administration, or exposing participants to non-identical stimuli
In what way is statistical regression a threat to internal validity?
the tendency of high-scoring individuals and low-scoring individuals to score closer to the mean upon subsequent measurement
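A small simulation can make regression to the mean concrete. This sketch assumes observed scores are true scores plus independent random measurement error; the population parameters are arbitrary. The top scorers on a first test score closer to the mean on retest, with no intervention at all:

```python
import random

random.seed(0)

# True ability plus independent measurement error at each test occasion.
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the top 5% of scorers on the first test...
cutoff = sorted(test1)[int(0.95 * len(test1))]
high = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 >= cutoff]

mean1 = sum(t1 for t1, _ in high) / len(high)
mean2 = sum(t2 for _, t2 in high) / len(high)
# ...their retest mean falls back toward the population mean of 100.
print(round(mean1, 1), round(mean2, 1))
```

The drop from mean1 to mean2 is produced entirely by measurement error, which is why uncontrolled designs can mistake it for a treatment effect.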
In what way is selection bias a threat to internal validity?
inadvertently constructing groups composed of non-equivalent participants
the risk is that between-group differences will be incorrectly attributed to the experimental effect
the design fails to account for pre-existing differences
In what way is attrition a threat to internal validity?
systematic differences in participant dropout, on the basis of a research-relevant variable
e.g., intelligence, mood, severity of mental disorder
may bias a study toward a certain conclusion
this is the complement of selection bias
In what way is sample characteristics a threat to external validity?
using a particularly narrow participant group limits the applicability of research findings to different individuals
e.g., SES, intellectual functioning, academic achievement, ethnicity, etc.
a huge issue in psych testing
In what way is stimulus characteristics and settings a threat to external validity?
sometimes results obtained in a confined, clinical setting are not paralleled in community or other real-world environments
bears on effectiveness, not efficacy
may not be a good recreation of the real world
In what way is reactivity of research arrangements a threat to external validity?
the influence that participation in a research study has on participants, which may lessen the applicability of its findings to individuals not involved in the research
In what way is reactivity of assessment a threat to external validity?
knowing that one is being observed can influence responses
In what way is timing of measurement a threat to external validity?
about generalizability over time
observed effects may be unique to the intervals at which measurements were made
Why should a research design be balanced between internal and external validity?
highly rigorous designs go to great lengths to maximize internal validity but utilize participants, measures, and environments so narrowly defined that external validity may be marginal
conversely, broadening the research sample, including multiple measurement sites and instruments, will frequently introduce confounding variables that could threaten internal validity, even if it improves external validity
What is the role of replication in research?
part of the on-going research enterprise is to begin with studies with adequate internal validity, and then gradually implement follow-up studies making minor variations to procedure, participants, and other variables
effects that show up across a broad range of such manipulations are said to be robust
What are case studies?
often reported in the back of journals by clinicians who encounter atypical individuals, or experience unexpected results with a given intervention
presented primarily to generate research hypotheses but are not capable of providing adequate controls to establish acceptable internal or external validity
What are single case designs?
these are typically enacted with only one individual
can take the form of careful recordings of one or more variables of interest, for a significant period before and after the introduction of a planned intervention
this is known as an AB design
it can be strengthened by removing that intervention at some point, and noting whether or not the behavior tends to revert to its former level
there is also an ABAB variant in which the intervention is re-instated after it has been withdrawn for a time
What are the complications of single case designs?
ethical prohibition: in particular where the intervention is directed at reducing potentially harmful behaviors such as cutting or head banging
if other reinforcers now maintain the changed behavior, removing the intervention may not be followed by a return to its former level
can’t rule out that some correlate of the intervention (i.e., not the intervention itself) accounts for the change
What are correlational designs?
by definition, there is no manipulation of an independent variable
therefore it is not possible to attribute causal influence
use discrete groups composed of individuals who vary on one or more dimensions, such as anxiety, intellectual functioning, or gender
these designs yield data that can be analyzed using a variety of statistical techniques, not just correlational analysis
What is a correlation analysis?
simply a statistical tool that describes the strength of association between two or more variables
that value can be calculated for virtually anything
What is the defining feature of correlational designs?
groups are non-equivalent from the start, but are otherwise not manipulated
they are simply compared on the basis of one or more variables
What are true experiments?
manipulation + random assignment
strongest in terms of internal validity
when applied to treatment outcomes studies, these are referred to as Randomized Control Trials or RCTs
groups receive different treatments
requires one or more control groups
What is a meta-analysis?
a set of statistical procedures developed to quantify findings from diverse studies
most forms of meta-analysis depend on some method of standardizing effect sizes so that research results are comparable
allows the findings from many investigations and potentially varied groups of participants to be combined to increase generalizability and to describe an existing area of literature in less ambiguous terms
range of results is always described with respect to participants and methods
Cohen’s d = (mean1 - mean2) / s, where s is the pooled standard deviation
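The formula above can be sketched in code. This assumes s is the pooled standard deviation, weighting each group's sample variance by its degrees of freedom; the group scores are made up for illustration:

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance: each group's sample variance weighted by its df.
    pooled_var = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) \
        / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

treatment = [14, 15, 16, 17, 18]
control = [10, 11, 12, 13, 14]
# A mean difference of 4 expressed in standard deviation units (~2.53).
print(cohens_d(treatment, control))
```

Because d is expressed in standard deviation units, values from studies using different raw scales can be compared and combined.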
How can experimental designs improve their limited external validity?
careful selection of participants and instruments can reduce the need for certain controls without sacrificing internal validity, while at the same time improving generalizability
e.g., ensuring that groups are reasonably diverse in their composition, yet comparable to one another
e.g., choosing assessment measures that minimize cultural bias
What is random sampling?
means every member of a given population has the same chance of being selected as every other member
at times, this is impractical; for example, if the study is to be conducted in an area that has a disproportionate representation for certain ethnic groups
if the population is randomly sampled, the research findings may apply well to that geographic area, but not to other regions or populations
What is probability sampling?
also known as stratified sampling
is a “weighting” of sampling from a population in a way that ensures the research sample will closely resemble the demographic structure of the intended target population
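The “weighting” idea can be sketched as proportional allocation: draw from each stratum in proportion to its share of the population. The population labels and sizes below are hypothetical:

```python
import random

random.seed(1)

# Hypothetical population: 70% group A, 30% group B (labels are illustrative).
population = [("A", i) for i in range(700)] + [("B", i) for i in range(300)]

def stratified_sample(pop, n):
    """Sample within each stratum in proportion to its population share."""
    strata = {}
    for label, member in pop:
        strata.setdefault(label, []).append((label, member))
    sample = []
    for label, members in strata.items():
        k = round(n * len(members) / len(pop))
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, 100)
counts = {label: sum(1 for l, _ in sample if l == label) for label in ("A", "B")}
print(counts)  # {'A': 70, 'B': 30} — the sample mirrors the population structure
```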
What is non-probability sampling?
involves actively recruiting participants through the use of advertising, bulletins, or by drawing from an existing mental health population
not great for generalizability
may not be a concern as long as the sample closely matches a given population of interest, for example individuals in a community mental health setting
How many participants should be included in a study?
through the statistical technique of power analysis, the required sample size can be estimated on the basis of a hypothesized effect size
usually derived from theory or from existing research
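As a sketch of how a power analysis turns a hypothesized effect size into a sample size, the common normal-approximation formula for a two-sided, two-sample comparison is shown below; it slightly underestimates the exact t-based answer, and the alpha/power defaults are just conventional values:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison
    (normal approximation): n = 2 * (z_{1-a/2} + z_{power})^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the chosen alpha
    z_power = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Smaller hypothesized effects demand much larger samples.
for d in (0.2, 0.5, 0.8):
    print(d, n_per_group(d))
```

The inverse relationship to d squared is why detecting a small effect (d = 0.2) takes hundreds of participants per group while a large effect (d = 0.8) takes a few dozen.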
What are instruments in research?
these include self-report measures, informant reports, rater evaluations, performance measures, observation of behavior, archival data, psychological tests, and psychophysiological measures
once data are collected, they should be analyzed according to the use of planned techniques
should be chosen to adequately reflect the nature of the variables under study, to control for inflation of error arising from multiple measurements and comparisons, and to ensure the most reliable and valid data possible
What is clinical vs. statistical significance?
sometimes research produces academically interesting results although the clinical applicability or relevance of those findings is questionable
highlights the difference between clinical and statistical significance
while the latter is easily determined through the use of statistical procedures and software, clinical significance can be more difficult to evaluate
What is reliability?
always about consistency or stability
three main forms: internal consistency, test-retest reliability, inter-rater reliability
What is internal consistency?
the degree to which a measure taps a unitary construct
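The card does not name a specific index, but Cronbach's alpha is the most widely used internal-consistency statistic, so a minimal sketch is shown here; the item scores are made up for illustration:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one inner list of scores per test item (same respondents).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three items whose scores rise and fall together across respondents
# suggest the items tap a unitary construct -> alpha near 1.
items = [[2, 4, 6, 8], [1, 3, 5, 7], [2, 3, 7, 8]]
print(round(cronbach_alpha(items), 2))  # 0.99
```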
What is test-retest reliability?
the degree to which scores will be stable over time
What is inter-rater reliability?
correspondence between two or more raters
What is content validity?
how well a measure captures the construct under investigation
What is face validity?
the degree to which a measure appears to tap the construct of interest
What is criterion validity?
the degree to which an instrument relates to an external criterion reflecting some central feature of the construct under investigation
What is concurrent validity?
the degree to which a measure correlates with other measures purporting to capture the same construct
What is predictive validity?
the ability of the measure to forecast certain outcomes (or data) measured subsequently
What is convergent validity?
the degree to which a measure correlates with measures of constructs related to the one of central interest to the study
What is discriminant validity?
the ability of a measure to differentiate between the construct under investigation, and others with which it might be confused
What is incremental validity?
the value that a given measure adds to existing measures of a central construct