Research Methods Flashcards

1
Q

Postpositivist

A

Things are objective and observable. If we observe them long enough, we will figure them out.

E.g., Schrödinger's Cat - objectively, the cat is either dead or alive; it cannot be simultaneously dead and alive

There is an objective reality that can be known, and we experience it subjectively

Most common in the science of psychology

2
Q

constructivist

A

What we know is socially constructed and changes over time depending on who is examining it - all we know is what someone is explaining in the moment

No objective reality - knowledge is constructed, not discovered

3
Q

transformative

A

We use research to directly help people - research and action/practice are one and the same - when we research, we are actively encouraging/creating change

E.g., combining social justice and the practice of research

4
Q

Pragmatic

A

Focuses on what works as opposed to what is true

5
Q

Null Hypothesis

A

In the postpositivist paradigm, we are always testing against a comparison
-This is a model in which there is no effect (H0) - the statement that there is an effect is called the alternative hypothesis (H1)
-E.g., we want to show that DBT works, so the null hypothesis is that DBT doesn't work (no effect of DBT) - to test the hypothesis we place ourselves in a "fake" world where DBT is assumed to have no effect (the null) and see whether the data are surprising enough to reject that world
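
A minimal Python sketch of this logic, using made-up outcome scores (the group means, sizes, and the DBT example itself are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical symptom scores (lower = better outcome).
dbt_group = rng.normal(loc=20, scale=5, size=30)      # received DBT
control_group = rng.normal(loc=25, scale=5, size=30)  # received no treatment

# H0: DBT has no effect (no group difference); H1: DBT changes the outcome.
t_stat, p_value = stats.ttest_ind(dbt_group, control_group)

# A small p-value means the data would be surprising in the "no effect" world,
# so we reject the null hypothesis.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```
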

6
Q

post hoc hypotheses & a priori

A

Post hoc - generating hypotheses based on data already observed, in the absence of testing them on new data
-Questions that we try to answer with our data after the study has finished and that were not the intent of that particular study

A priori - before the fact - state the hypotheses ahead of time so you cannot change them after the results are obtained
-Hypotheses based on assumed principles and deductions from the conclusions of previous research, generated prior to a new study taking place.

7
Q

Sampling & sampling frame

A

Sampling - the design procedure by which we select individuals for research

Sampling frame - the procedure for obtaining a sample - because we cannot get everyone in a population, we set a sampling frame

So you have a population you want to research, you set a sampling frame to obtain individuals from that population, and your sample is the group of individuals you actually obtain for the study

8
Q

random probability sampling

A

Probability is the likelihood that a particular event will occur - in this case, the likelihood that an individual will be chosen for a study
-Researchers must set up some process or procedure that ensures, with confidence, that the different units in their sample population have equal probabilities of being chosen

9
Q

Simple Random Sampling

A

In a given population, every individual has equal probability of being selected to participate
-Advantages: Most statistically efficient and statistically unbiased
-Disadvantages: Does not guarantee all subgroups are represented in a sample
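
A quick sketch of simple random sampling, assuming a hypothetical numbered population of 1,000 people:

```python
import random

random.seed(42)

# Hypothetical population: 1,000 numbered individuals.
population = list(range(1, 1001))

# Every individual has an equal probability of being selected.
sample = random.sample(population, k=50)
print(sorted(sample)[:10])
```
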

10
Q

stratified random sampling

A

Within pre-specified strata of a given population, every individual has an equal probability of being selected to participate - subjects are initially grouped into classifications such as gender, level of education, or SES, then researchers randomly select the final list of subjects from the defined categories

Best used when the goal is to study a particular subgroup within a greater population

Advantages: greater precision in approximating population

Disadvantages: Have to get strata correct - assume homogeneity in strata
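
A sketch of stratified random sampling under the same assumptions (the strata names and sizes are made up):

```python
import random

random.seed(42)

# Hypothetical population of 1,000, grouped into pre-specified strata.
strata = {
    "high_school": list(range(0, 600)),
    "college":     list(range(600, 900)),
    "graduate":    list(range(900, 1000)),
}

# Randomly sample within each stratum, proportional to its share of the population.
total_n, pop_size = 100, 1000
sample = []
for name, members in strata.items():
    n_stratum = round(total_n * len(members) / pop_size)
    sample.extend(random.sample(members, k=n_stratum))

print(len(sample))  # every stratum is guaranteed to be represented
```
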

11
Q

Systematic Random Sampling

A

In a given population, every kth individual is selected to participate
-Advantages: less complicated method of selection
-Disadvantages: order may matter
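
A sketch of systematic random sampling (hypothetical ordered list; the interval k is computed from the desired sample size):

```python
import random

random.seed(42)

population = list(range(1, 1001))  # hypothetical ordered list of 1,000 people
n = 50
k = len(population) // n           # sampling interval

# Pick a random starting point, then take every kth individual.
# Note: if the list has a hidden ordering (the "order may matter" disadvantage),
# this can introduce bias.
start = random.randrange(k)
sample = population[start::k][:n]
print(sample[:5])
```
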

12
Q

non-random probability sampling

A

Individuals are selected for a “reason.”

There is a purpose behind them being chosen and it is not random - each member of the population would not have the same odds of being selected

13
Q

convenience sampling

A

Accidental sample - average person on the street, not likely to be random - based solely on convenience and availability
-Often volunteers
-Advantages: easy, cheap to obtain
-Disadvantages: sample unlikely to be representative of the population

14
Q

quota sampling

A

With pre-specified strata, a certain number of individuals are obtained - data is collected in a homogenous group, but it isn’t random, it is based on convenience.
-Advantages: more representative of a population
-Disadvantages: have to get strata correct - homogeneity - bias

15
Q

purposive sampling

A

Recruit individuals with certain predefined characteristics – selective

Thought to be representative of the population of interest - the main goal is to focus on particular characteristics of a population that are of interest, which will best enable you to answer your research question (the exact opposite of random sampling)
-You specify who you want in your sample (e.g., Latino women living in Northeast Philly)
-Modal: Most common example - most typical members of population are selected
-Exceptional: Rare - look for an expert within the population
-Snowball: sample is linked together - you tell people to encourage their friends to participate

Advantages: face valid, easy to collect

Disadvantages: oversampling certain types of individuals

16
Q

data saturation

A

Occurs when no new themes emerge in data - all research points to similar findings

17
Q

universal effect
homogenous
heterogenous
unstable estimate
outliers

A

If we assume that everybody functions similarly then it doesn’t matter who you get - If there is no reason to believe it’s going to differ by subgroups then you don’t need a huge sample size
-Homogenous - Don’t need as large of a sample if the population is SIMILAR
-Heterogeneous - Need a larger sample if the population is DIFFERENT
-Unstable estimate - can't detect an effect if the sample size is too small
-Outliers - can ruin results

18
Q

power analysis and the cons of a large and small sample size

A

Statistical method used to determine sample size - you have to estimate ahead of time how large you expect the effect to be (the effect size) in order to run the power analysis
-Can ensure your sample size isn't too large or too small.
-Large sample size - can waste time and resources for minimal gain
-Small sample size - can lack the precision to provide reliable answers to the investigation
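
One common way to run such a power analysis in Python is with statsmodels; the effect size below (a medium Cohen's d of 0.5) is the kind of up-front guess the card describes:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Guess the effect size ahead of time, fix alpha and the desired power,
# and solve for the required sample size per group.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group under these assumptions
```
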

19
Q

type 1 error

A

Null hypothesis is REJECTED but it is true
-False positive - you say there is something there but in actuality nothing is there.

Ex: You go to bed and there is no fire - at 3 AM the smoke detector goes off but there is still no fire

Probability of a Type 1 error is determined by the alpha level (e.g., alpha = .05), which is set by the experimenter and controls sensitivity

20
Q

type 2 error

A

Null hypothesis is ACCEPTED but the alternative is true

False negative - you say there is nothing there but in actuality there truly is something there

Ex: you go to bed assuming everything is safe and there is no fire, but there is a fire, the smoke detector just did not go off

Probability of a Type 2 error is beta, which is not directly controlled by the experimenter

21
Q

power

A

Probability of avoiding a Type 2 error - that the test correctly supports the alternative hypothesis, i.e., accurately rejects the null hypothesis when the alternative hypothesis is true
Complement of beta (1 - beta); ranges from 0 to 1

-the greater the effect size, the more power you have

Higher alpha (probability of type 1 error) allows you to increase true positives which increases power - BUT it also increases type I error (increases sensitivity)

The greater the sample size, the greater the power will be
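
A small simulation sketch of these relationships (all numbers hypothetical): with no true effect, the rejection rate is about alpha (the Type 1 error rate); with a true effect, the rejection rate is the power, and it rises with sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(effect_size, n, alpha=0.05, n_sims=2000):
    """Fraction of simulated studies that reject H0 at the given alpha."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n)
        b = rng.normal(effect_size, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print(rejection_rate(effect_size=0.0, n=30))  # ~.05: Type 1 error rate = alpha
print(rejection_rate(effect_size=0.5, n=30))  # power with a medium effect
print(rejection_rate(effect_size=0.5, n=64))  # larger n -> higher power (~.80)
```
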

22
Q

reliability

A

How reproducible a measure is - the likelihood of getting the same result

23
Q

test retest reliability

A

Reproducibility over time or how similar a measurement remains after a period of time

Measured by correlation between two administrations at different times

24
Q

split half reliability

A

If the test is homogeneous and measures the same thing, the correlation should be high - take the first half of the test and correlate it with the second half

25
alternative forms within split half reliability
Splitting may diminish the size of your test, so come up with alternate forms with similar questions
26
internal consistency
Items measuring the same construct should produce similar estimates - if you score high on one item, you should score high on another similar item
-Measured by Cronbach's alpha: .50 = low, .70 = medium (what you want), .90 = high
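
A minimal sketch of computing Cronbach's alpha from item scores (the 1-5 ratings below are made up):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items on the same scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 1-5 ratings from 5 respondents on 4 items measuring one construct.
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 3, 4],
          [1, 2, 1, 2]]
print(round(cronbach_alpha(scores), 2))  # high alpha: the items hang together
```
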
27
interrater reliability, consistency, rater "drift"
How reproducible measurements are for a particular group of "judges" or people observing the same thing
-Absolute agreement: judges give the same score
-Consistency: same relative scoring over time even if the judges don't necessarily agree
-Rater "drift": over time, judges go off on their own - might be aligned in the beginning, but very different by the end
28
factor structure exploratory factor analysis confirmatory factor analysis
Correlation (or non-correlation) of items occurs in a predictable fashion - factors or subscales underlie the data
-Ex: IQ tests measure verbal and mathematical skills - different subgroups, but related
-Exploratory factor analysis: examine item correlations for patterns
-Confirmatory factor analysis: specify the pattern ahead of time; the analysis tells you whether that pattern fits the data
29
validity
the extent to which a test measures or predicts what it is supposed to
30
face validity
Whether a test looks like it tests what it is supposed to test - how much a measure appears, on its face, to capture the construct
31
content validity
The degree to which the content of a test is representative of the domain it's supposed to cover - measurement of construct contains everything suspected to be in that construct
32
convergent validity
Extent to which scores from the test correlate with other measures of the same construct - scores on my test correspond to scores on other well-established tests
-Ex: do scores on two measurements of depression correlate?
33
divergent validity
Different measures of different constructs should not be correlated or related to each other - e.g., tests of anxiety do not correlate with tests of psychosis
-Criterion validity: the extent to which a measure is related to an outcome - how much the measurement meets some standard or criterion (e.g., criteria for a diagnosis)
34
predictive validity
Refers to how well a test predicts a particular behavior, criterion, or trait.
35
postdictive validity
The accuracy with which a test score predicts a previously obtained criterion - whether a test is a legitimate measure of something that happened in the past
36
relationship between validity and reliability
Cannot have validity without reliability - BUT you can have reliability without validity
37
experimental design
Manipulate one variable (independent variable) and observe another variable (dependent variable)
38
extraneous variables
"Lurking variables" -Sneaking around and impacting the rate of impact on the dependent variable
39
within subjects
Observations of the same person over time (longitudinal)
-Each participant is their own control - most factors held constant within each individual
40
between subjects
Comparing groups (cross-sectional)
41
uncontrolled experiment
Manipulate something and observe it without putting any type of control in - group of participants given treatment and monitored
42
controlled experiment
The experiment includes a condition that doesn't receive the treatment, or there are differing levels of the IV
-Manipulate one variable for one group and hold it constant for another group (the control group)
43
comparative experiment
Two treatments that we expect to work are compared to see how effective they are
-Have two groups - in the first group, manipulate one variable and hold the other constant; in the second group, manipulate the variable that was held constant in the first group and hold the other
44
quasiexperimental
You have a manipulation but it is not given at random - there are two groups, one treatment/one control, but individuals are not randomly assigned to the groups because they are already "in" them
-Ex: people with PTSD vs. people without PTSD; males vs. females
45
interaction effect, internal validity
Interaction effect: how the IV and DV are related depends on the level of another variable
-Ex: the relation of time (IV) and outcome (DV) depends on the level of treatment you get
Internal validity: the extent to which changes observed in the DV are truly due to the manipulation of the IV
46
threats to internal validity
selection bias, confound, history, attrition, practice effects, maturation
47
selection bias
Selection Bias: some characteristics of sample responsible for effect on DV - you select people who will do well
48
confound
Any type of third variable or extraneous variable responsible for effects on DV
49
maturation
The passage of time (not the IV) is responsible for the effect on the DV
50
history
Some event outside experiment responsible for effect on DV
51
attrition
Some characteristic of those who complete study responsible for effects on DV - certain characteristics cause people to drop out
52
practice effects
Participants answer a measure differently due to prior exposure - the exposure is responsible for effects on the DV
53
regression to the mean
Participants with extreme scores pretreatment exhibit more moderate scores later (what goes up must come down)
54
expectancy bias (experimenter)
Researcher wants a certain treatment to succeed (allegiance) or fail (nocebo) leading to subtle bias in procedures
55
expectancy bias (participants)
Participants guess hypothesis - change DV answers to please researcher (compliance), frustrate researcher (opposition), knowledge of condition responsible for change (demoralization/rivalry)
56
external validity and the relationship with internal validity
Extent to which inferences from the experiment can be generalized to contexts outside the experiment
-When you increase external validity, you decrease internal validity
57
correlational design (direction, strength, significance)
Measures two variables and compares them - a measure of association between two variables
-Direction: Which way do the variables change together, or not? (positive, negative, zero)
-Strength: What is the effect size of the association - how powerful is it? (absolute value: .1 = small, .3 = medium, .5 = large)
-Significance: Would we be surprised if we assumed that no relationship exists? (p value)
-Symbolized by r or beta (standardized beta used for regression)
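
A short Python sketch of reading direction, strength, and significance off a correlation (the sleep/mood variables and numbers are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical data: hours of sleep and mood ratings for 50 people.
sleep = rng.normal(7, 1, 50)
mood = 0.5 * sleep + rng.normal(0, 1, 50)

r, p = stats.pearsonr(sleep, mood)
# Direction: the sign of r; strength: |r| (.1 small, .3 medium, .5 large);
# significance: the p value.
print(f"r = {r:.2f}, p = {p:.4f}")
```
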
58
regression
Technique to observe how much change in one variable (predictor, or x) relates to change in another variable (criterion, or y)
-Correlation is a special case of regression (one predictor, one criterion)
-y = alpha + beta(x) + e
-y = criterion
-alpha = intercept (value of y when x = 0)
-beta = slope (coefficient of change in y with a 1-unit increase in x)
-x = predictor
-e = residual (difference between predicted y and actual y)
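
A minimal sketch of fitting that equation (the intercept and slope below are chosen arbitrarily just to generate fake data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

x = rng.normal(0, 1, 100)                  # predictor
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)  # criterion: y = alpha + beta*x + e

result = stats.linregress(x, y)
# slope = unstandardized beta (change in y per 1-unit change in x);
# intercept = alpha (predicted y when x = 0).
print(f"intercept = {result.intercept:.2f}, slope = {result.slope:.2f}")
```
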
59
unstandardized beta weights
Tell you how much change in y occurs with one unit change in predictor (x)
60
standardized beta weights
Tell you how many standard deviations y changes with a one standard deviation change in the predictor (x)
61
moderator
Depends on - tested by statistical interaction - does the relation of two variables DEPEND ON the level of a third? Moderation: tells us for whom things work - what works for whom?
62
mediator
Because of - tested by multiple regression (fancy correlation) - does the relation of the two variables work through (is BECAUSE OF) a third variable? Mediation: tells us how things work - how does a treatment work?
63
descriptive design
Observation of phenomenon - organization of data to depict phenomenon in meaningful patterns or themes -An approach that can be qualitative or quantitative - most qualitative research is descriptive - all good quantitative research includes a descriptive component
64
ways to collect data
-Survey
-Interview
-Observation
-Portfolio (psych assessment)
-Case history
-Open-ended questions: questions that allow respondents to answer however they want - must have some coding scheme
-Closed-ended questions: questions a person must answer by choosing from a limited, predetermined set of responses
65
different kinds of questions that can be asked or ways to do research
-Multiple choice: A, B, C, D
-Forced choice: you must choose true/false, yes/no
-Likert scale: responses are in some way related to one another - interval distinction between them - no wrong answer
-Visual analog scale: little slider bar that you move right or left to a certain degree
-Covert observation: hidden - the individual doesn't know they're being observed
-Overt observation: someone monitoring a kid's behavior in a classroom - the kid knows they're there
-Participant-observer: a researcher who watches from the perspective of being part of the social setting
66
t tests
Compares two treatment conditions
67
one sample t test
Compare the average (or mean) of one group against the set average (or mean); measure whether a single group differs from a known value
68
dependent samples (paired samples) t test
Measures one group at two different times - within subjects - compare separate means for a group at two different times or two different conditions
69
independent two sample t-test
Measures two unrelated groups - between subjects - measures the mean of two different samples
70
related samples t test (matched subjects design)
Each individual in one treatment is matched one-to-one with a corresponding individual in the second treatment
71
repeated measures design
A single group of individuals is obtained and each individual is measured in both of the treatment conditions being compared. Thus, the data consist of two scores for each individual.
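
A compact sketch of the t-test variants from the cards above, using scipy with invented scores (group sizes and means are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

pre = rng.normal(50, 10, 25)        # hypothetical pre-treatment scores
post = pre - rng.normal(5, 3, 25)   # the same people after treatment
group_a = rng.normal(50, 10, 30)    # two unrelated groups
group_b = rng.normal(45, 10, 30)

print(stats.ttest_1samp(pre, popmean=50))  # one sample vs. a known value
print(stats.ttest_rel(pre, post))          # dependent / paired (within subjects)
print(stats.ttest_ind(group_a, group_b))   # independent two sample (between subjects)
```
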
72
ANOVA (F test)
Comparing 3 or more treatment conditions; more than 1 IV (factor); more than 2 levels of an IV
-Establishes that differences exist; it does not indicate exactly which treatments are different.
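
A one-way ANOVA sketch with three hypothetical treatment conditions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical outcome scores under three treatment conditions.
cbt = rng.normal(20, 5, 30)
dbt = rng.normal(18, 5, 30)
waitlist = rng.normal(25, 5, 30)

f_stat, p = stats.f_oneway(cbt, dbt, waitlist)
# A significant F says the groups differ somewhere; it does not say which
# pairs differ (that is what post tests like Tukey's HSD are for).
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```
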
73
post tests for ANOVA
-The Scheffé test and Tukey's HSD are examples of post tests - they indicate exactly where the difference is
-MANOVA: ANOVA with the addition of multiple dependent variables
-ANCOVA: ANOVA with a covariate - a variable that varies systematically with the IV
74
chi square
Tests the shape of the distribution of nominal/categorical data - not a parametric test; compares observed results with expected results
75
goodness of fit
Is there roughly the same number of subjects in each category? Or does the distribution fit a predetermined distribution (e.g., 40% male and 60% female)?
76
test for independence
similar to correlation in that it looks at the relationship between 2 variables but uses NOMINAL data
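
A sketch of both chi-square uses with made-up counts (the 40/60 split echoes the goodness-of-fit card):

```python
from scipy import stats

# Goodness of fit: do observed counts match an expected 40%/60% split?
observed = [35, 65]
expected = [40, 60]
print(stats.chisquare(f_obs=observed, f_exp=expected))

# Test for independence: are two nominal variables related?
table = [[30, 20],   # e.g., group A: yes / no (hypothetical counts)
         [15, 35]]   # e.g., group B: yes / no
chi2, p, dof, expected_counts = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```
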
77
Mann Whitney U
Analogous to the independent measures t-test - compares 2 independent groups using ranked (ordinal) data
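
A minimal Mann-Whitney U sketch with invented ordinal ratings from two independent groups:

```python
from scipy import stats

# Hypothetical 1-7 ratings from two independent groups.
group_a = [3, 4, 2, 5, 4, 3, 6, 2]
group_b = [5, 6, 7, 5, 6, 4, 7, 6]

u_stat, p = stats.mannwhitneyu(group_a, group_b)
print(f"U = {u_stat:.1f}, p = {p:.4f}")
```
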
78
ad hoc hypotheses
When a hypothesis/assumption is added to a theory in order to save it from being falsified - explains away why a hypothesis doesn't work. E.g., the first CBT trial showed that CBT didn't work, but people still believed in it because situations were identified where CBT would be deemed less effective