Research Methods Flashcards

1
Q

What’s a good guideline for reviewing manuscripts?

A

The Journal of Counseling Psychology Reviewer Guidelines (2013)

2
Q

A good hypothesis is a ___________ research question that provides direction for an experimental inquiry

A

testable (Heppner, Wampold, & Kivlighan, 2008)

3
Q

Hypotheses should also be phrased as ______ statements so that they can be tested.

A

falsifiable (Popper, 1959)

4
Q

research designs often offer a trade-off between ______ and __________ validity

A

internal and external (Heppner, Wampold, & Kivlighan, 2008)

5
Q

What is an error that often occurs in reporting reliability and validity?

A

Authors often fail to report the sample from which the reliability and validity estimates were derived (see Wilkinson & the Task Force on Statistical Inference, APA Board of Scientific Affairs, 1999).

6
Q

What are some reliability statistics?

A

Cronbach’s alpha, KR-20, Intraclass Correlation Coefficients, Kappa, or test-retest reliability (Heppner, Wampold, & Kivlighan, 2008)
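
A minimal sketch (not from the cited source) of one of these, Cronbach’s alpha, in Python; the score matrix below is hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 respondents answering 4 Likert-type items
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(scores), 3))
```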

7
Q

What are some different types of validity that are important?

A

1) face validity (the degree to which a measure appears “at face value” to measure the construct),
2) content validity (the degree to which the measure’s content is relevant to the construct),
3) construct validity (the degree to which the measure accurately reflects the construct),
4) predictive validity (the measure’s ability to predict something it should theoretically be able to predict),
5) concurrent validity (the measure’s ability to distinguish between groups it should theoretically be able to distinguish between),
6) convergent validity (the degree to which the measure is associated with measures it should theoretically be similar to),
7) discriminant validity (the degree to which the measure is unrelated to measures it should theoretically be dissimilar to).

8
Q

What are 4 different types of studies that vary in their different levels of internal and external validity?

A

(Gelso, 1979)
1) Experimental field (moderate internal, moderate external)
2) Descriptive field (low internal, high external)
3) Experimental laboratory (high internal, low external)
4) Descriptive laboratory (low internal, low external)

9
Q

What is MAXMINCON?

A

(Kerlinger, 1986)
1) MAXimize systematic variance of the experimental (important) variables
2) MINimize error variance
3) CONtrol variance due to confounding (extraneous) variables

10
Q

Pros of MAXMINCON

A
  • greater likelihood of significant effects
  • better able to establish causality (if the design affords this)

11
Q

Cons of MAXMINCON

A

may not apply to applied settings where “perfect” control is not possible

12
Q

What are methods of statistical control?

A

multiple regression, ANCOVA, partial correlations, residualizing
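
As an illustration of residualizing and partial correlation (a sketch with simulated, hypothetical data; not drawn from the cited literature):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
confound = rng.normal(size=n)
x = 0.6 * confound + rng.normal(size=n)            # predictor contaminated by the confound
y = 0.5 * x + 0.7 * confound + rng.normal(size=n)  # outcome driven by both

def residualize(v, c):
    """Return the part of v that is linearly unrelated to c (OLS residuals)."""
    design = np.column_stack([np.ones_like(c), c])
    beta, *_ = np.linalg.lstsq(design, v, rcond=None)
    return v - design @ beta

# Partial correlation of x and y, controlling for the confound
x_res, y_res = residualize(x, confound), residualize(y, confound)
print(round(np.corrcoef(x, y)[0, 1], 2))          # zero-order correlation
print(round(np.corrcoef(x_res, y_res)[0, 1], 2))  # partial correlation
```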

13
Q

What are disadvantages of statistical control (compared to experimental control)?

A
  • Assumes a linear relationship between the confounding variables and the outcome variable
  • Statistically controls only for the confound as “measured”; aspects of the confound not captured by the measurement (which always includes error) remain unaccounted for
  • Cannot definitively rule out confounds, so attribution of causality is attenuated
  • May not address problems arising from the inability to randomly assign participants
  • Collinearity can be a problem if the controlled-for confound correlates with the predictor variables
14
Q

What are citations for describing the differences between quantitative and qualitative research?

A

Johnson & Christensen (2008); Lichtman (2006)

15
Q

Qualitative research assumes there are multiple realities (in fact, as many as there are participants). What field does this come from, and what ontology does this represent?

A

anthropology

an interpretivist-constructivist paradigm with a relativist ontology (Guba & Lincoln, 1994)

16
Q

What aspect of people does qualitative research examine?

A

experiential life of people (Polkinghorne, 2005)

17
Q

What is the definition of quantitative psychology research?

A

Psychological research that performs mathematical modeling and statistical estimation or inference; a means of testing objective theories by examining the relationships among variables (Creswell, 2009).

18
Q

What sort of epistemology does quantitative psychology research utilize?

A

Assumes a modified objectivist epistemology, viewing objectivity as an ideal (Guba & Lincoln, 1994)

19
Q

What is it called when you combine both qualitative and quantitative research?

A

Mixed methods (Hanson et al., 2005)

20
Q

What are advantages of a brief form measure?

A
  • Participants more likely to complete (Robins et al., 2002)
  • Eliminate item redundancy (Robins et al., 2001)
  • Long precedent for single-item measures for things such as subjective well-being, cultural/ethnic identity, relationship intimacy, intelligence, self-esteem, etc. (Gosling et al., 2003)
  • Briefer
  • Reduce test fatigue
21
Q

What are disadvantages of brief form measures?

A
  • At best may only be a “reasonable proxy” (Gosling et al., 2003)
  • May be less accurate and less reliable, and their validity may be suspect
22
Q

According to Cohen (1994), significance testing does not tell us “what is the probability that H0 is true.” Instead it tells us what?

A

“Assuming that H0 is true, what is the probability of these extreme data?” (not what we really want to know)
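
Stated compactly (notation added here, not a quotation from Cohen):

```latex
p = P(\text{data this extreme or more} \mid H_0) \neq P(H_0 \mid \text{data})
```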

23
Q

Cohen (1994) refers to a null hypothesis in which the effect size is exactly 0 as the ________ hypothesis.

A

“nil hypothesis.”

24
Q

What is the bias against publishing negative (null) results called?

A

the file drawer problem (Rosenthal, 1979)

25
Q

Who said, “amount as well as direction is vital” (in advocating confidence intervals)?

A

Tukey (1969)
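
For context, the textbook confidence interval for a mean (a standard formula, not taken from Tukey’s paper) conveys both amount and direction:

```latex
\bar{x} \pm t_{1-\alpha/2,\; n-1}\,\frac{s}{\sqrt{n}}
```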

26
Q

Because the statistical size of an effect is heavily dependent on the ____________ of the independent variables and the choice of a dependent variable in a particular study, a researcher can design an experiment so that even a ______ effect is impressive.

A

operationalization; small (Prentice & Miller, 1992; Rosenthal, 1990)

27
Q

Questions involving moderators address _________ or __________ a variable most strongly predicts or causes an outcome variable.

A

“when” or “for whom” (Frazier et al., 2004)
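
Moderation is typically tested by adding a predictor-by-moderator product term to a regression model; a generic sketch (consistent with, but not quoted from, Frazier et al., 2004):

```latex
\hat{Y} = b_0 + b_1 X + b_2 W + b_3 (X \times W)
```

A significant b3 coefficient indicates that the strength of the X-Y relation depends on the level of the moderator W.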

28
Q

mediators establish ________ or ___________one variable predicts or causes an outcome variable

A

“how” or “why” (Frazier et al., 2004)
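
Mediation is commonly written as two regression equations (a generic sketch using the conventional path labels a, b, and c'):

```latex
M = a_0 + aX + e_1, \qquad Y = b_0 + c'X + bM + e_2
```

The indirect (mediated) effect is the product a × b, and c' is the direct effect of X on Y after accounting for the mediator M.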

29
Q

moderators often are introduced when there are unexpectedly __________ or _________ relations between a predictor and an outcome across studies

A

weak or inconsistent (Baron & Kenny, 1986)

30
Q

How do you judge if a qualitative study is adequate/credible/matching reality?

A

Lincoln and Guba (1985):

1) Triangulation
2) Member checks (taking data interpretations back to participants)
3) Peer examination (colleagues comment on findings)
4) Involving participants in all phases of research
5) Clarifying researcher bias at the outset
6) Prolonged engagement in the field (long term and repeated observations)

31
Q

How do you judge if a qualitative study is transferable or applicable?

A

1) use of rich, thick description (so readers can make judgments about what is applicable/transferable)
2) examination of the typical or modal category (rather than the unique or obscure)
3) use of cross-site analysis (to have a greater number of cases to draw from)

32
Q

SEM and path analysis citation

A

Hoyle, 2012

33
Q

Judd et al. (1995) make three recommendations about data analysis; what are they?

A
  1. Use a model comparison approach
  2. Compute estimates of effect size
  3. Compute effect-size estimates for interactions
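
A minimal sketch of a model-comparison approach with an effect-size estimate for an interaction, using Python and statsmodels on simulated, hypothetical data (an illustration, not Judd et al.’s analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"x": rng.normal(size=n), "w": rng.normal(size=n)})
df["y"] = 0.4 * df["x"] + 0.3 * df["w"] + 0.25 * df["x"] * df["w"] + rng.normal(size=n)

reduced = smf.ols("y ~ x + w", data=df).fit()  # model without the interaction
full = smf.ols("y ~ x * w", data=df).fit()     # model with the x:w interaction term

print(anova_lm(reduced, full))                     # model comparison (F test for the added term)
print(round(full.rsquared - reduced.rsquared, 3))  # delta R^2: effect-size estimate for the interaction
```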