Chapter 4: Research Methods in Clinical Psychology Flashcards

1
Q

What two things are clinical psychologists supposed to do in EBP (evidence-based practice)?

A

supposed to be sensitive and empathic, while at the same time well informed about current research relevant to the services that they provide

2
Q

what is eminence-based practice?

A

when recommendations are accepted because the person delivering them is seen as an expert

3
Q

What are 6 faulty factors in thinking?

A
  1. Faulty reasoning –> a form of argument that is inaccurate or misleading in some way
  2. False Dilemma –> reducing the range of viable options to just two (usually extreme ones)
  3. Golden Means Fallacy –> logical error involving the assumption that the most valid conclusion is a compromise of two competing positions
  4. The Straw Person Argument –> mischaracterizing a position in order to make it look absurd or unpalatable
  5. Affirming the Consequent –> claiming that x causes y, and upon observing y, concluding that x MUST have caused it
  6. Appeal to Ignorance –> arguing that because there is no evidence that something is false, it must be true
4
Q

What are deductive processes to research?

A

the deductive process starts from a theory and tests it against the results and observations of an experiment

5
Q

What are inductive processes to research?

A

inductive processes include looking at observations and results and then coming up with a theory to explain them

6
Q

What are 5 possible sources of research ideas for clinical psychologists?

A
  1. Everyday experience and observation
  2. professional experience and observation
  3. Addressing applied problems and needs
  4. previous research
  5. Theories
7
Q

What role do theories serve in research?

A

theories serve to organize and give meaning to the results of research endeavours and to generate new ideas to be tested in the future

8
Q

What are the 5 steps that researchers take to ensure that their hypothesis is properly formulated and tested?

A
  1. Researcher conducts a systematic search of the published research on the phenomenon of interest
  2. Researcher begins to formalize ideas so that they can be tested in a scientific manner (also starts to operationalize variables)
  3. Researcher must carefully consider the extent to which the research idea may be based on cultural assumptions that limit the applicability or relevance of the planned research
  4. Researcher considers the ethical issues involved in testing the research idea
  5. Researcher draws together the results of the previous steps and sketches out the study procedure
9
Q

What are two methods used to look at the given relations among variables of interest?

A
  1. correlation (degree that two variables are associated with one another)
  2. causation (one variable directly or indirectly influences the level of the second variable)
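The degree of association in item 1 is typically quantified with Pearson's r. A minimal sketch in Python (the data are hypothetical, invented for illustration only):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance of x and y divided by
    the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Hypothetical scores: hours of therapy vs. symptom severity
hours = [1, 2, 3, 4, 5]
symptoms = [9, 7, 6, 4, 2]
r = pearson_r(hours, symptoms)  # strong negative association (~ -0.99)
```

Note that even a very strong r like this one is only evidence of association; establishing causation requires experimental manipulation and random assignment, as the later cards explain.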
10
Q

What is a concept researchers look at to see the factors that influence the relations among variables?

A

moderating variables are variables that influence the direction or size of the relation between two other variables

11
Q

What is a concept researchers look at to see how one factor influences a second factor?

A

Mediating variables are variables that account for (explain) the relation between two other variables

the influence of the first variable on the second is due, in whole or in part, to the third variable

12
Q

What are two ways researchers can look at possibilities to alter an outcome of interest?

A
  1. prevention –> attempt to decrease likelihood that an undesirable outcome occurs
  2. Intervention –> an attempt to decrease or eliminate an undesirable outcome that has already occurred
13
Q

What does the Canadian Code of Ethics for Psychologists state about ethical decisions?

A

states that ethical principles such as respect for the dignity of persons, responsible caring, integrity in relationships and responsibility to society should be practiced within research context and within the professional role of the psychologist

other countries state that ethical principles only need to be practiced in research

14
Q

What are 15 research relevant ethical principles found in the APA Ethical principles of Psychologists and Code of Conduct?

A
  1. Institutional Approval –> in Canada a Research Ethics Board (REB) needs to approve the research, and the research must conform to Ethical Conduct for Research Involving Humans
  2. Informed Consent for Research –> participants must be informed of the purpose of the research and their rights to decline and withdraw, etc.
  3. Informed Consent for Recording
  4. Client/patient, student, and subordinate research participants –> protection of participants from adverse consequences of withdrawing or declining
  5. Dispensing with informed consent –> permissible only if the research is expected to cause no harm or distress
  6. Offering inducements for research participation –> inducements must be minor and appropriate, or they risk becoming coercive
  7. Deception in research –> can be used only if it is justified by the study's value and is the only feasible way to conduct the study
  8. Debriefing
  9. Humane care and use of animals in research
  10. Reporting research results
  11. Plagiarism
  12. Publication credit –> minor contributions to the research or writing do not merit authorship
  13. Duplicate publication of data –> data that have already been published should not be published again as though they were new
  14. Sharing research data for verification –> data must be made accessible so results can be replicated
  15. Reviewers –> people who review material before it is published should respect its confidentiality
15
Q

What are 11 Ethical evaluation requests of a research project (application completed and submitted by researchers w/ human participants)?

A
  1. Type of research
  2. researchers
  3. research project
  4. research participants
  5. participant recruitment –> how and where participants will be recruited
  6. Screening of participants –> what steps are taken to exclude individuals from research participation
  7. Research participation –> what are participants asked to do in the research
  8. informed consent
  9. potential risks and benefits
  10. anonymity
  11. confidentiality –> who will collect the data, have access to it and how will it be stored and how long will it be maintained?
16
Q

Who is Donald Campbell and what did he conceptualize about research designs?

A

He identified a large number of design problems that are classified as representing threats to internal validity, external validity and statistical conclusion validity of a study

17
Q

What is internal validity?

A

IV refers to the extent to which interpretations drawn from results of a study can be justified and alternative interpretations can be ruled out (how accurately does your data reflect what you’re trying to measure)

18
Q

What are the 6 threats to Internal Validity?

A

threats to internal validity are things that make the data obtained less accurate and less reflective of what the study is supposed to be measuring

  1. History –> something happening outside the study that influences changes observed in the study
  2. Maturation –> changes within participants over the course of the study (e.g. growth, fatigue) that occur regardless of the study
  3. Instrumentation –> applies to longitudinal studies; differences in how things are measured over time and by different people result in inaccurate measurement
  4. Statistical regression –> when extreme scores (high and low) are measured repeatedly, they tend to shift toward the mean (extreme scores tend to be less extreme upon retesting)
  5. Selection bias –> systematic differences in recruiting participants or assigning participants to experimental conditions
  6. Attrition –> loss of participants throughout the study
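Threat 4 (statistical regression) is easy to demonstrate by simulation. In this hypothetical sketch, each observed score is a stable true score plus random measurement error, so people selected for extreme first-test scores tend to score closer to the mean on retest:

```python
import random

random.seed(42)

def observed(true_score):
    # observed score = stable true score + random measurement error
    return true_score + random.gauss(0, 10)

true_scores = [random.gauss(100, 15) for _ in range(1000)]
test1 = [observed(t) for t in true_scores]
test2 = [observed(t) for t in true_scores]

# Select the top decile of first-test scorers ...
cutoff = sorted(test1)[900]
extreme = [i for i in range(1000) if test1[i] >= cutoff]

# ... and compare their group means on the two occasions
mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)
# mean2 falls back toward 100: regression to the mean,
# with no treatment or maturation involved at all
```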
19
Q

What is external validity?

A

External validity is the extent to which the results of the study can be generalized to a larger population beyond the narrow boundaries of the study

20
Q

What are 5 threats to external validity?

A

external validity threats involve anything that makes the study less likely to generalize to a broader population

  1. Sample characteristics –> the degree to which characteristics of the research participants (i.e. their sociodemographic or psychological characteristics) map onto other samples and populations of interest can limit external validity
  2. Stimulus characteristics and settings –> features of the study, like where it takes place, can influence the generalization of the results
  3. Reactivity of research arrangements –> participants might respond differently than they would in other contexts because they know they are in a study
  4. Reactivity of assessment –> participants become aware of their own moods, behaviours, and attitudes and adjust them because they know they are being monitored, which may influence their responses in the study
  5. Timing of measurements –> when measurements are taken in a study can affect external validity because important time points might be missed
21
Q

What is a case study? and what does it involve?

A

Case studies are non-experimental observations of events that occur, or have already occurred, for natural, economic, or personal reasons rather than through experimental arrangement.

a case study involves a detailed presentation of an individual patient, couple, or family, illustrating some new or rare observation or treatment innovation

case studies are a good starting point for seeing connections between events, behaviours, and symptoms that have not been addressed in existing research

they don't allow for rigorous testing of hypotheses because the sample size is so small

22
Q

What are some weaknesses to case studies?

A

it has the most threats to internal validity

i.e. if you're working with a kid who has temper problems and the tantrums stop all of a sudden, it could be due to a number of reasons besides the treatment, like maturation or history, and it's hard to rule out these alternative explanations in case studies

23
Q

What is the difference between single case designs and case studies?

A

single case designs focus purely on single individuals and investigate the effectiveness of an intervention

24
Q

How do single case studies address the threats to maturation and regression to the mean?

A

By extending the period of time that the person is assessed and the frequency with which the assessments occur

25
Q

how do single case studies address threats to instrumentation validity?

A

by using the same measures at each assessment point rather than relying on separate measures

26
Q

what two methods do single case studies use to address threats to history and external treatment events?

A

overall they do this by assessing the behaviour of the participant before and after treatment to see if the treatment had an effect on the participant

A-B Single Case Design

starts with A (a baseline period), followed by a period where the intervention is applied, then B (the level of symptoms after the intervention is introduced); using a graph or statistics you can determine whether the person's behaviour changed with the intervention or stayed the same as baseline

A-B-A Single Case Design
- record a baseline, provide the intervention and record symptoms following it, then withdraw the intervention and record the baseline again

this design really shows whether or not the intervention is working; it is used mainly with children and in behaviour modification
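The logic of the A-B-A design can be sketched numerically: if the symptom level drops during the intervention phase and returns toward baseline when the intervention is withdrawn, the intervention is the plausible cause of the change. The daily tantrum counts below are hypothetical:

```python
from statistics import mean

# Hypothetical daily tantrum counts in each phase
baseline_a1 = [8, 7, 9, 8, 8]   # A: baseline
treatment_b = [5, 4, 3, 3, 2]   # B: intervention applied
baseline_a2 = [6, 7, 7, 8, 7]   # A: intervention withdrawn

phases = {"A1": mean(baseline_a1),
          "B": mean(treatment_b),
          "A2": mean(baseline_a2)}

# Behaviour improves in B and reverts in A2 -> evidence that the
# intervention, not history or maturation, drove the change
improved = phases["B"] < phases["A1"] and phases["A2"] > phases["B"]
```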

27
Q

Why are A-B and A-B-A single case designs a problem?

A

because withdrawing a treatment that appears to be working raises ethical issues, especially for individuals who have severe problems

28
Q

What two experimental conditions need to be present in order to draw causation between two variables?

A

you need experimental manipulation and random assignment

29
Q

What are correlational design studies?

A

they are the most commonly used research designs in clinical psychology

they focus on association among variables

they do not determine causality between them because they lack experimental manipulation and random assignment

Correlational studies can also be descriptive; descriptive designs are used in the bulk of epidemiological research on the incidence, prevalence, and distribution of disorders in a population

30
Q

What is factor analysis, and how are correlational data used in it?

A

factor analysis is a statistical procedure used to determine the conceptual dimensions or factors that underlie a set of variables, test items, or tests

it is used in developing measures, to determine which items contribute meaningfully to a test

it groups the variables tested into common factors based on how the variables correlate with one another and the degree of that correlation

for example, if a set of variables correlate highly with each other, they can be grouped into one larger factor; measuring that factor would give much the same results as measuring each of the smaller variables separately
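The grouping intuition can be sketched in a few lines of Python. This is not actual factor analysis (which requires specialized estimation routines), just the correlation pattern it formalizes, using invented item names and scores:

```python
import math

def corr(xs, ys):
    """Pearson correlation between two item-score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical item scores for six respondents
items = {
    "worry":   [1, 2, 3, 4, 5, 6],
    "tension": [2, 4, 6, 8, 10, 12],  # moves in lockstep with "worry"
    "sadness": [5, 1, 6, 2, 4, 3],
    "fatigue": [10, 2, 12, 4, 8, 6],  # moves in lockstep with "sadness"
}

# Pairwise correlations: "worry"/"tension" and "sadness"/"fatigue"
# correlate ~1.0 while cross-pairs are near zero, suggesting the
# four items reflect two underlying factors
names = list(items)
pairs = {(a, b): corr(items[a], items[b])
         for i, a in enumerate(names) for b in names[i + 1:]}
```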

31
Q

What are the two types of factor analysis?

A
  1. exploratory factor analysis –> used when the researcher has no prior hypothesis about the structure of the data; the pattern of correlations among variables or test items provides the evidence for the underlying factor structure
  2. confirmatory factor analysis –> used to test a specific hypothesis about the factor structure; the researcher specifies beforehand what the structure should be and how each variable contributes to it
32
Q

what is a moderator variable?

A

one that influences the strength of the relation between a predictor variable and a criterion variable …

affects how strongly (or in what direction) x relates to y
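A common way to probe moderation is to estimate the x–y slope separately at each level of the moderator; if the slopes differ, the moderator changes the strength of the relation. A hypothetical sketch (all variable names and numbers invented):

```python
def slope(xs, ys):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

# Hypothetical: stress (x) predicts symptoms (y) strongly for people
# with low social support, weakly for people with high support
low_support = ([1, 2, 3, 4], [2, 4, 6, 8])       # slope 2.0
high_support = ([1, 2, 3, 4], [3, 3.5, 4, 4.5])  # slope 0.5

b_low = slope(*low_support)
b_high = slope(*high_support)
# b_low > b_high: social support moderates the stress-symptom relation
```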

33
Q

What is a mediator variable?

A

mediator variables explain the mechanism by which a predictor variable influences a criterion variable

explains why x = y

34
Q

What is Structural Equation Modelling (SEM)?

A

it is a comprehensive approach to testing an entire theoretical model

involves combination of confirmatory factor analysis and mediator analysis

the researcher specifies how each variable in the model is measured (the confirmatory factor analysis part) and how the variables in the model relate to one another, including any mediating pathways (the mediator analysis part)

35
Q

What is a downside to conducting SEM?

A

it requires a large sample, typically over 200 participants, which makes it difficult to use for research on infrequent conditions, where samples are necessarily smaller

36
Q

What are Quasi-Experimental Designs?

A

they involve some form of manipulation by the researcher, such as variations in the information given to participants prior to undertaking some task

they do not include random assignment; in situations where quasi-experiments are done it is usually unethical to randomly assign groups, so pre-existing groups are used and certain factors are manipulated to allow comparison between them

quasi-experiments are cost-effective and straightforward compared to other experimental designs

37
Q

What are some downsides to quasi-experimental designs?

A

the groups may differ before the intervention even begins, which confounds the results

baseline data should be collected before the experiment so that results can be compared against it

38
Q

What are experimental designs? what do they include?

A

experimental designs involve both random assignment to condition and experimental manipulation

they provide the best protection against threats to internal validity

39
Q

What are randomized control trials?

A

they involve random assignment of participants into one of two or more treatment conditions

often there is a no-treatment condition to compare the results to

40
Q

What is a meta-analysis? How were they written before the 1980s and how are they written now?

A

meta-analysis is the standard for making general statements about findings in a research field, statistically combining results across studies to determine the current state of knowledge in many areas of research

before the 1980s it was qualitative, involving a narrative account of various studies

now it is quantitative because it uses statistics (e.g. effect sizes combined across studies)

41
Q

What are some things to watch out for when selecting a sample?

A

being aware of biases, and recognizing that the findings may not apply across age, gender, ethnicity, educational level, and socioeconomic status

42
Q

What are two types of sampling methods?

A

probability sampling –> focuses on strategies to ensure that the research sample is representative of the population; the researcher KNOWS the probability of selecting each participant from the population of interest, and can therefore obtain accurate and precise estimates of the strength, level, or frequency of some construct in the population; it is used in epidemiological studies of the prevalence of mental health disorders

non-probability sampling –> what most psychologists use: ads are placed in newspapers, on websites, or in mental health treatment settings to find participants; these samples are biased toward wherever the participants were recruited, so it is harder to generalize to the public at large, though for many research questions that is acceptable
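The probability-sampling idea can be sketched with a simple random sample: because every member of the population has a known, equal chance of selection, the sample proportion is an unbiased estimate of the population prevalence. The population and prevalence figures below are invented:

```python
import random

random.seed(1)

# Hypothetical population of 10,000 people; 12% have the disorder
population = [1] * 1200 + [0] * 8800

# Simple random sample: every person has an equal, known
# probability (500 / 10000) of being selected
sample = random.sample(population, 500)
prevalence_estimate = sum(sample) / len(sample)
# the estimate lands near the true prevalence of 0.12
```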

43
Q

What was Jacob Cohen's contribution to statistical work in psychology research?

A

he developed tools using statistical work which are available to assist researchers in determining the optimal number of participants to recruit for a study based on the phenomenon under investigation, the research design and the type of planned data analysis
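Cohen's tools answer questions like: given an expected effect size d, a significance level, and a desired power, how many participants per group are needed? Below is a minimal sketch of the standard normal-approximation formula for a two-group comparison — an illustration of the idea, not Cohen's own tables or software:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison,
    via the normal approximation: n = 2 * ((z_a + z_b) / d) ** 2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return ceil(2 * ((z_a + z_b) / d) ** 2)

# A "medium" effect (d = 0.5) needs far more participants per group
# than a "large" one (d = 0.8)
n_medium = n_per_group(0.5)  # 63 per group
n_large = n_per_group(0.8)   # 25 per group
```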

44
Q

What are 8 measurement options (range of general measurement modalities that may be appropriate for study)?

A
  1. Self-report measures –> i.e. questionnaires
  2. Informant-report measures –> information gathered from other individuals who are close to the participant
  3. Rater evaluations –> someone knowledgeable about the participant's involvement in the study, like the therapist or interviewer, gives an evaluation
  4. Performance measures –> measures of tasks done in the study
  5. Projective measures –> storytelling used to look at underlying needs or emotions
  6. Observation of behaviour –> summarizing elements of the participant's behaviour
  7. Psychophysiological measures –> i.e. measuring neurological activity, autonomic arousal, cardiovascular responses, etc.
  8. Archival data –> information from outside the research used to aid it, like police records, health records, etc.
45
Q

What are 3 psychometric properties of reliability measures?

A
  1. internal consistency –> degree to which elements of the measure are homogeneous
  2. test-retest reliability –> stability over time of scores on a measure
  3. inter-rater reliability –> consistency of scores on a measure across different raters and observers
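Internal consistency (item 1) is most often indexed by Cronbach's alpha. A self-contained sketch, using hypothetical item responses:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical: 3 questionnaire items answered by 5 people
items = [
    [2, 4, 3, 5, 1],
    [3, 5, 3, 4, 2],
    [2, 5, 4, 5, 1],
]
alpha = cronbach_alpha(items)  # high: the items are homogeneous
```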
46
Q

what are 8 psychometric properties of validity measures?

A
  1. content validity –> extent to which the measure fully and accurately represents all elements of the domain of the construct being assessed
  2. face validity –> the extent to which the measure overtly appears to be measuring the construct itself
  3. criterion validity –> the association of a measure with some criterion of central relevance to the construct, such as differentiating between groups of research participants
  4. concurrent validity –> association of a measure with other relevant data measured at the same point in time
  5. predictive validity –> association of a measure with other relevant data measured at some future point in time
  6. convergent validity –> the association between a measure and other measures of the same or conceptually similar constructs
  7. discriminant validity –> the association between measures that conceptually should not be related
  8. incremental validity –> the extent to which a measure adds to the prediction of a criterion beyond what can be predicted with other measurement data
47
Q

What is statistical conclusion validity?

A

refers to aspects of the data analysis that influence the validity of the conclusions drawn about the results of the research study

48
Q

What are 5 common threats to the statistical conclusion validity of a study?

A
  1. Low statistical power –> statistical power refers to the ability to detect group differences when such differences truly exist; low power is usually caused by samples that are too small, which leads to inconclusive results
  2. Multiple comparisons and their effects on error rates –> most studies involve testing multiple research hypotheses with multiple measures; the researcher needs to consider how many analyses to conduct, and what error rate to use for them, in order to strike a reasonable balance between Type I errors (concluding there is an effect when none exists) and Type II errors (concluding there is no effect when a true effect exists)
  3. Procedural variability –> interviewers, observational raters, and therapists (the people conducting the study) may differ in how they interpret or apply the instructions and procedures; this increases variability in the study and decreases the ability to detect the phenomenon
  4. Participant heterogeneity –> variability in participant characteristics may produce differing results within the sample; this increases variability in the study and makes it more difficult to detect a true effect
  5. Measurement unreliability –> the less reliable a measure, the more measurement error influences the data obtained from participants; this increases within-study variability and negatively affects the ability to detect an effect
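Threat 2 (multiple comparisons) is commonly handled with a correction such as Bonferroni, which divides the error rate across the planned tests. A minimal sketch with hypothetical p-values:

```python
def bonferroni(p_values, alpha=0.05):
    """Reject only tests whose p-value is below alpha / m,
    keeping the family-wise Type I error rate at alpha."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]

# Five hypothetical tests: only p < 0.05/5 = 0.01 survives
p_vals = [0.003, 0.02, 0.04, 0.2, 0.6]
decisions = bonferroni(p_vals)  # [True, False, False, False, False]
```

Note the balance the card describes: Bonferroni tightens control of Type I errors, but the stricter threshold reduces power and so raises the risk of Type II errors.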
49
Q

what is clinical significance?

A

clinical significance has been defined as the degree to which an intervention has had a meaningful impact on the functioning of treated participants, even if the results of the study are not statistically significant

50
Q

What are two top methods for evaluating clinical significance?

A
  1. Comparing post-treatment functioning to norms –> evaluating, for each patient, whether the participant could be said to be in the normal range of functioning; this uses cut-off scores on scales, norms, and pre-determined criteria to operationalize the normal range of functioning
  2. Using the reliable change index developed by Neil Jacobson, which determines whether a participant's pretreatment-to-posttreatment change on a scale is statistically greater than would be expected due to measurement error
    if the score moves to within 2 standard deviations of the mean score for a non-disordered sample, a clinically significant change is said to have occurred
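Jacobson's reliable change index can be computed directly from the scale's standard deviation and reliability. A sketch with hypothetical scale numbers:

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """RCI = (post - pre) / S_diff, where S_diff = sqrt(2 * SEM^2)
    and SEM = sd * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * sem ** 2)
    return (post - pre) / s_diff

# Hypothetical depression scale: sd = 10, test-retest r = 0.90
rci = reliable_change_index(pre=30, post=18, sd=10, reliability=0.90)
reliable = abs(rci) > 1.96  # change exceeds what measurement error explains
```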