Chapter 4: Research Methods Flashcards

1
Q

Internal Validity

A

the extent to which the interpretation drawn from the results of a study can be justified and alternative interpretations can be reasonably ruled out

2
Q

External Validity

A

the extent to which interpretations drawn from the results of a study can be generalized beyond the narrow boundaries of a specific study

3
Q

Statistical Conclusion Validity

A

the extent to which the results of a study are accurate and valid based on the type of statistical procedures used in research

4
Q

Factor Analysis

A

a statistical procedure used to determine the conceptual dimensions or factors that underlie a set of variables, test items, or tests
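
As a rough illustration, the sketch below simulates item scores driven by two underlying dimensions and recovers them with scikit-learn's FactorAnalysis; the data, loadings, and variable names are hypothetical:

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 200 respondents answering 6 items driven by 2 latent dimensions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
true_loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],   # items 1-3: factor 1
                          [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])  # items 4-6: factor 2
items = latent @ true_loadings.T + rng.normal(scale=0.3, size=(200, 6))

# Fit a two-factor model and inspect the estimated loadings.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))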

5
Q

Moderator

A

a variable that influences the strength of the relation between a predictor variable and a criterion variable

6
Q

Mediator

A

a variable that explains the mechanism by which a predictor variable influences a criterion variable

7
Q

Structural Equation Modeling

A

a comprehensive statistical procedure that involves testing all components of a theoretical model

8
Q

Randomized Controlled Trials

A

an experiment in which research participants are randomly assigned to one of two or more treatment conditions

9
Q

Clinical Significance

A

in addition to the results of a study attaining statistical significance, the results are of a magnitude that produces changes in some aspects of participants’ daily functioning

10
Q

Systematic Review

A

the use of a systematic and explicit set of methods to identify, select, and critically appraise research studies

11
Q

Meta-analysis

A

a set of statistical procedures for quantitatively summarizing the results of a research domain

12
Q

Effect Size

A

a standardized metric, typically expressed in standard deviation units or correlations, that allows the results of research studies to be combined and analyzed

13
Q

What is qualitative research?

A

better suited to generating hypotheses, describing intricate processes, and exploring the subjective experiences of small groups of subjects

specifically seeks to avoid establishing parameters which tend to limit the range of participants’ responses

data collection looks like clinical interviewing

qualitative data often take the form of lengthy narratives which are carefully analyzed for the emergence of recurrent themes

14
Q

What are the disadvantages of qualitative research?

A

inherent difficulty in comparing studies that purport to examine similar phenomena

small n studies have severely limited generalizability

15
Q

What are the advantages of qualitative research?

A

better at illuminating process

16
Q

What is quantitative research?

A

based on specific research designs intended to eliminate confounds

designs are available, or can be modified, to accommodate variously-sized research samples, multiple conditions, and the passage of time

the use of validated measures generates data which can be statistically analyzed to identify trends, detect significant between-group differences, and describe performance

this allows for systematic inquiry into specific research questions

deliberately seeks to limit responses

17
Q

What are confounding factors?

A

anything that introduces competing explanations for observed phenomena

controlling for or eliminating confounds improves the interpretability of results by ruling out alternative explanations

18
Q

What are the advantages of quantitative research?

A

well-suited to examining the effectiveness of interventions and describing the population for whom those interventions have proven useful

results are reported in a way that contributes to an ongoing research enterprise

19
Q

What is the jigsaw puzzle analogy of quantitative research?

A

think of it as a community of researchers cooperating to assemble a jigsaw puzzle

the idea is to both utilize, and contribute to the existing knowledge base(s)

doesn’t mean you have to base your designs on others’ work, but similar methods are often used

20
Q

How should the results of studies relate to previous research?

A

it is incumbent upon the researchers, in reporting their results, to explain the fit of their findings with the existing body of literature

results which appear contradictory to previous findings must be explained with reference to differences in study procedures, participants, analytic methods, etc. or revision of theory

these matters are typically covered in the Discussion section of a research report

21
Q

Why is educating patients about research important?

A

it is reasonable for consumers of professional services to expect information concerning: the likely outcomes, the expected benefits, potential risks

based on the results of properly conducted studies

psychologists should not expect patients to participate in treatment on the basis of their professional reputation (“eminence-based practice”)

educating patients about research findings may improve compliance

22
Q

What is deductive hypothesis generation?

A

designing research provides opportunities to test hypotheses emerging from various theories

if ______ then _______

to the extent that those hypotheses are disconfirmed, there is an opportunity to modify the theory

this, in turn, will result in new hypotheses which can also be subject to testing

evidence and theory inform one another reciprocally, and a good theory must be able to accommodate existing data

23
Q

What is the inductive process?

A

there is an inherently qualitative component to clinical interviewing, and to making observations in the course of providing psychological services


the hypotheses that emerge from those contacts are colored by our unique personal experiences, theoretical orientation, and perceptions of the client/patient

24
Q

What is operationalization?

A

once the research question has been conceptualized, variables must be chosen to translate the (relatively abstract) concept into data

it is often difficult to identify measures that adequately encapsulate complex ideas

may be necessary to choose multiple measures in order to capture the relevant aspects of the concept under study

25
Q

Why is generalizability important?

A

it is very important for researchers to appreciate cultural assumptions and obstacles that could compromise the usefulness of the research data

this is ultimately a question of generalizability, which speaks to the range of individuals to which the research outcome could potentially be applied

26
Q

Why is an ethics evaluation important in research?

A

an ethics evaluation is essential even though it may actually limit the range of designs available

for example, it may be unethical to place individuals in a control group if there is pre-existing evidence that an experimental condition might offer relief from symptoms

27
Q

What is the goal of research?

A

almost invariably, research attempts to explicate the relationship between two or more variables

there are essentially three classes of relationship: correlation, moderation, and mediation

28
Q

What is correlation?

A

the degree to which two or more variables change together

in a positive correlation, an increase in one is associated with an increase in the other

in a negative correlation, changes in one variable are met with changes in an opposite direction in the other variable

note that no causal relationship can be surmised from correlations alone
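
A minimal sketch, using NumPy on two hypothetical variables, of how a Pearson correlation quantifies the degree to which they change together (remembering that it says nothing about causation):

import numpy as np

# Hypothetical paired observations for two variables.
hours_of_sleep = np.array([5, 6, 6, 7, 7, 8, 8, 9])
mood_rating    = np.array([3, 4, 5, 5, 6, 7, 6, 8])

# Pearson r: positive here, because the two variables tend to increase together.
r = np.corrcoef(hours_of_sleep, mood_rating)[0, 1]
print(f"Pearson r = {r:.2f}")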

29
Q

What is moderation?

A

is the treatment equally efficacious for all participants?

e.g., does this intervention for the treatment of bulimia work equally well for boys and girls, and for patients of different ages? (moderator analysis)
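
One common way to test moderation is an interaction term in a regression model; the sketch below uses simulated data and statsmodels' formula API, with hypothetical variable names:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a treatment whose benefit grows with age (age moderates the effect).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"treatment": rng.integers(0, 2, n),
                   "age": rng.normal(15, 2, n)})
df["symptom_change"] = (2 * df["treatment"]
                        + 0.5 * df["treatment"] * (df["age"] - 15)
                        + rng.normal(0, 1, n))

# 'treatment * age' expands to both main effects plus their interaction;
# the 'treatment:age' coefficient is the moderation effect.
model = smf.ols("symptom_change ~ treatment * age", data=df).fit()
print(model.params)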

30
Q

What is mediation?

A

what is the mechanism of change?

e.g., how does the intervention work? Is it changing body image by building resistance to media images or by teaching relaxation? (mediator analysis)
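
In its simplest regression form, a mediator analysis estimates the indirect path from predictor to mediator to outcome; the sketch below uses simulated data and hypothetical variable names (formal inference would usually add bootstrapped confidence intervals):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate: intervention improves body image, and body image reduces symptoms.
rng = np.random.default_rng(2)
n = 300
intervention = rng.integers(0, 2, n)
body_image = 1.5 * intervention + rng.normal(0, 1, n)   # mediator
symptoms = -2.0 * body_image + rng.normal(0, 1, n)      # outcome
df = pd.DataFrame({"intervention": intervention,
                   "body_image": body_image,
                   "symptoms": symptoms})

# Path a: predictor -> mediator; path b: mediator -> outcome (controlling for predictor).
a = smf.ols("body_image ~ intervention", data=df).fit().params["intervention"]
b = smf.ols("symptoms ~ body_image + intervention", data=df).fit().params["body_image"]
print(f"indirect (mediated) effect a*b = {a * b:.2f}")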

31
Q

What is the Canadian Code of Ethics?

A

not just about treatment but guides psychologists in all aspects of their practice

32
Q

What safeguards ensure research is scientifically and ethically sound?

A

institutional approval

informed consent for research

informed consent for recording

protecting research participants

only dispensing with informed consent under conditions highly unlikely to result in harm

avoiding coercion or offering excessive inducements to participate in research

avoidance of deception

debriefing research participants following their involvement in the study

humane care and use of animals in research

integrity in reporting research results

avoidance of plagiarism

utilizing authorship credits which accurately convey intellectual contribution to a study

ensuring independence of data from previous publication

sharing research data for verification

respecting the confidentiality of any material submitted for review

33
Q

In what way do research designs exist on a continuum?

A

at the lowest level are purely descriptive and observational studies

at another level there may be multiple conditions, repeated measures, and several variables

no one design is universally superior, rather some are more appropriate for answering specific questions than others

34
Q

What is internal validity?

A

refers to the degree to which a design is capable of supporting unambiguous conclusions

i.e., to provide an adequate test of the research hypothesis by isolating effects and minimizing other sources of variance

35
Q

What is random assignment?

A

a hallmark of a true experiment

seldom available in clinical experiments

most psychological studies are quasi-experimental in nature

this is often misunderstood to mean that the studies are inherently “less sophisticated”

reflects the fact that human behavior is inherently complex, and subject to a far greater number of uncontrollable influences than physical reagents and compounds

36
Q

What is external validity?

A

generalization

the degree to which findings would apply to individuals outside the experimental sample

37
Q

What is statistical conclusion validity?

A

the degree to which the chosen statistical procedures support the conclusions and claims made by the study

38
Q

In what way is history a threat to internal validity?

A

events and factors, other than the intervention, that are not controlled for in the study

things that happen to participants during the study that were not known about or planned for

39
Q

In what way is maturation a threat to internal validity?

A

changes occurring within members of the participant group that are not controlled for in the design

occurred while they were in the study

40
Q

In what way is testing a threat to internal validity?

A

repeated exposure to testing procedures may change the way participants respond to them, independently of the main effects

41
Q

In what way is instrumentation a threat to internal validity?

A

procedural drift that takes place over the course of a longitudinal study

for example, researchers may develop shortcuts or heuristics over time, exposing participants to non-identical stimuli

42
Q

In what way is statistical regression a threat to internal validity?

A

the tendency of high-scoring individuals and low-scoring individuals to score closer to the mean upon subsequent measurement
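
The simulation below illustrates the idea with hypothetical test scores: the people who scored highest at time 1 score, on average, closer to the mean at time 2, simply because measurement error does not repeat itself:

import numpy as np

# Each observed score = stable ability + random measurement error.
rng = np.random.default_rng(3)
true_ability = rng.normal(100, 10, 5000)
time1 = true_ability + rng.normal(0, 5, 5000)
time2 = true_ability + rng.normal(0, 5, 5000)

# Select the top 10% at time 1 and compare their means across occasions.
top = time1 > np.percentile(time1, 90)
print(f"time 1 mean of top scorers: {time1[top].mean():.1f}")
print(f"time 2 mean of same people: {time2[top].mean():.1f}  (closer to 100)")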

43
Q

In what way is selection bias a threat to internal validity?

A

inadvertently constructing groups composed of non-equivalent participants

the risk is that between-group differences will be incorrectly attributed to the experimental effect

fails to account for differences

44
Q

In what way is attrition a threat to internal validity?

A

systematic differences in which participants drop out, on the basis of a research-relevant variable

e.g., intelligence, mood, severity of mental disorder

may bias a study toward a certain conclusion

this is the complement of selection bias

45
Q

In what way are sample characteristics a threat to external validity?

A

using a particularly narrow participant group limits the applicability of research findings to different individuals

e.g., SES, intellectual functioning, academic achievement, ethnicity, etc.

a huge issue in psych testing

46
Q

In what way are stimulus characteristics and settings a threat to external validity?

A

sometimes results obtained in a confined, clinical setting are not paralleled in community or other real-world environments

bears on effectiveness, not efficacy

may not be a good recreation of the real world

47
Q

In what way is reactivity of research arrangements a threat to external validity?

A

the influence that participation in a research study has on participants, which may lessen the applicability of its findings to individuals not involved in the research

48
Q

In what way is reactivity of assessment a threat to external validity?

A

knowing that one is being observed can influence responses

49
Q

In what way is timing of measurement a threat to external validity?

A

about generalizability over time

observed effects may be unique to the intervals at which measurements were made

50
Q

Why should a research design be balanced between internal and external validity?

A

highly rigorous designs go to great lengths to maximize internal validity but utilize participants, measures, and environments so narrowly defined that external validity may be marginal

conversely, broadening the research sample and including multiple measurement sites and instruments will frequently introduce confounding variables that could threaten internal validity, even if it improves external validity

51
Q

What is the role of replication in research?

A

part of the on-going research enterprise is to begin with studies with adequate internal validity, and then gradually implement follow-up studies making minor variations to procedure, participants, and other variables

effects that show up across a broad range of such manipulations are said to be robust

52
Q

What are case studies?

A

often reported in the back of journals by clinicians who encounter atypical individuals, or experience unexpected results with a given intervention

presented primarily to generate research hypotheses but are not capable of providing adequate controls to establish acceptable internal or external validity

53
Q

What are single case design?

A

these are typically enacted with only one individual

can take the form of careful recordings of one or more variables of interest, for a significant period before and after the introduction of a planned intervention

this is known as an AB design

it can be strengthened by removing that intervention at some point, and noting whether or not the behavior tends to revert to its former level

there is also an ABAB variant, in which the intervention is re-instated after it has been withdrawn for a time

54
Q

What are the complications of single case designs?

A

ethical prohibitions: withdrawing the intervention may be unacceptable, in particular where it is directed at reducing potentially harmful behaviors such as cutting or head banging

to the extent that there are other rewards for desisting from the behavior, removing the intervention may not be accompanied by a return to its former level

it cannot be ruled out that some correlate of the intervention (i.e., not the intervention itself) accounts for the change

55
Q

What are correlational designs?

A

by definition, there is no manipulation of an independent variable

therefore it is not possible to attribute causal influence

use discrete groups composed of individuals who vary on one or more dimensions, such as anxiety, intellectual functioning, or gender

these designs yield data that can be analyzed using a variety of statistical techniques, not just correlational analysis

56
Q

What is a correlation analysis?

A

simply a statistical tool that describes the strength of association between two or more variables

that value can be calculated for virtually anything

57
Q

What is the defining feature of correlational designs?

A

groups are non-equivalent from the start, but are otherwise not manipulated

they are simply compared on the basis of one or more variables

58
Q

What are true experiments?

A

manipulation + random assignment

strongest in terms of internal validity

when applied to treatment outcome studies, these are referred to as Randomized Controlled Trials (RCTs)

groups receive different treatments

requires one or more control groups

59
Q

What is a meta-analysis?

A

a set of statistical procedures used to quantify findings from diverse studies

most forms of meta-analysis depend on some method of standardizing effect sizes so that research results are comparable

allows the findings from many investigations and potentially varied groups of participants to be combined to increase generalizability and to describe an existing area of literature in less ambiguous terms

range of results is always described with respect to participants and methods

Cohen’s d = (mean1 - mean2) / s, where s is the pooled standard deviation
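
A minimal sketch of the pooled-standard-deviation form of Cohen's d, plus a simple sample-size-weighted combination of effect sizes from hypothetical studies (real meta-analyses typically use inverse-variance weights and fixed- or random-effects models):

import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group1, ddof=1)
                         + (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

rng = np.random.default_rng(4)
print(f"single-study d = {cohens_d(rng.normal(10, 2, 40), rng.normal(9, 2, 40)):.2f}")

# Combine hypothetical per-study effect sizes, weighting by sample size.
study_d = np.array([0.45, 0.30, 0.60, 0.20])
study_n = np.array([40, 120, 60, 200])
print(f"weighted mean d = {np.average(study_d, weights=study_n):.2f}")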

60
Q

How can experimental designs improve their limited external validity?

A

careful selection of participants and instruments can reduce the need for certain controls without sacrificing internal validity, while at the same time improving generalizability

e.g., ensuring that groups are reasonably diverse in their composition, yet comparable to one another

e.g., choosing assessment measures that minimize cultural bias

61
Q

What is random sampling?

A

means every member of a given population has the same chance of being selected as every other member

at times, this is impractical; for example, if the study is to be conducted in an area that has a disproportionate representation of certain ethnic groups

if that population is randomly sampled, the research findings may apply well to that geographic area, but not to other populations

62
Q

What is probability sampling?

A

also known as stratified sampling

is a “weighting” of sampling from a population in a way that ensures the research sample will closely resemble the demographic structure of the intended target population
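
A toy sketch of proportional stratified sampling: draw from each stratum of a hypothetical population in proportion to its share, so the sample mirrors the population's demographic structure:

import random

random.seed(5)
# Hypothetical population split into two demographic strata (70% / 30%).
population = {"urban": list(range(0, 700)),
              "rural": list(range(700, 1000))}
total = sum(len(members) for members in population.values())
sample_size = 100

sample = []
for stratum, members in population.items():
    k = round(sample_size * len(members) / total)   # proportional allocation
    sample.extend(random.sample(members, k))

print(f"drew {len(sample)} participants (~70 urban, ~30 rural)")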

63
Q

What is non-probability sampling?

A

involves actively recruiting participants through the use of advertising, bulletins, or by drawing from an existing mental health population

not great for generalizability

may not be a concern as long as the sample closely matches a given population of interest, for example individuals in a community mental health setting

64
Q

How many participants should be included in a study?

A

through the statistical technique of power analysis, the required sample size can be estimated on the basis of a hypothesized effect size

usually derived from theory or from existing research
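
A brief sketch using statsmodels' power routines: given a hypothesized effect size, alpha level, and desired power (illustrative values below), solve for the required per-group sample size:

from statsmodels.stats.power import TTestIndPower

# How many participants per group to detect a hypothesized medium effect (d = 0.5)
# with alpha = .05 and 80% power, using an independent-samples t-test?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group ≈ {n_per_group:.0f}")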

65
Q

What are instruments in research?

A

these include self-report measures, informant reports, rater evaluations, performance measures, observation of behavior, archival data, psychological tests, and psychophysiological measures

once data are collected, they should be analyzed using the planned techniques

measures should be chosen to adequately reflect the nature of the variables under study, to account for the effects of multiple measurements and comparisons (to control for error inflation), and to ensure the most reliable and valid data possible

66
Q

What is clinical vs. statistical significance?

A

sometimes research produces academically interesting results although the clinical applicability or relevance of those findings is questionable

highlights the difference between clinical and statistical significance

while the latter is easily determined through the use of statistical procedures and software, clinical significance can be more difficult to evaluate

67
Q

What is reliability?

A

always about consistency or stability

three main forms: internal consistency, test-retest reliability, inter-rater reliability

68
Q

What is internal consistency?

A

the degree to which a measure taps a unitary construct
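
One common index of internal consistency is Cronbach's alpha; the sketch below computes it from a small hypothetical respondents-by-items score matrix:

import numpy as np

# Rows = respondents, columns = items intended to tap the same construct.
scores = np.array([[4, 5, 4, 5],
                   [2, 3, 2, 2],
                   [5, 5, 4, 5],
                   [3, 3, 3, 2],
                   [1, 2, 1, 2]])

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1).sum()
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")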

69
Q

What is test-retest reliability?

A

the degree to which scores will be stable over time

70
Q

What is inter-rater reliability?

A

correspondence between two or more raters
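
For categorical ratings, agreement between raters is often summarized with Cohen's kappa, which corrects for chance agreement; a minimal sketch with two hypothetical raters:

from sklearn.metrics import cohen_kappa_score

# Two raters assigning diagnostic codes to the same eight cases (hypothetical).
rater_a = ["anx", "dep", "anx", "none", "dep", "anx", "none", "dep"]
rater_b = ["anx", "dep", "dep", "none", "dep", "anx", "none", "anx"]

print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")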

71
Q

What is content validity?

A

how well a measure captures the construct under investigation

72
Q

What is face validity?

A

the degree to which a measure appears to tap the construct of interest

73
Q

What is criterion validity?

A

the degree to which an instrument measures some central feature of the construct under investigation

74
Q

What is concurrent validity?

A

the degree to which a measure correlates with other measures purporting to capture the same construct

75
Q

What is predictive validity?

A

the ability of the measure to forecast certain outcomes (or data) measured subsequently

76
Q

What is convergent validity?

A

the degree to which a measure is associated with measures of constructs related to the one of central interest to the study

77
Q

What is discriminant validity?

A

the ability of a measure to differentiate between the construct under investigation, and others with which it might be confused

78
Q

What is incremental validity?

A

the value that a given measure adds to existing measures of a central construct