Research Methods Flashcards
Postpositivist
Things are objective and observable. If we observe long enough, we will figure it out.
E.g., Schrödinger's Cat - objectively, the cat is either dead or alive; it can't be simultaneously dead and alive
There is an objective reality that can be known, and we experience it subjectively
Most common in the science of psychology
Constructivist
What we know is socially constructed and changes over time depending on who is examining it - all we have is what someone can explain in the moment
No objective reality - knowledge is constructed, not discovered
Transformative
We use research to directly help people - research and action/practice are one and the same - when we research, we are actively encouraging/creating change
E.g., combining social justice and the practice of research
Pragmatic
Focuses on what works as opposed to what is true
Null Hypothesis
In a postpositivist paradigm, we are always testing against some comparison
-The null hypothesis (H0) is a model in which there is no effect; the statement that there is an effect is called the alternative hypothesis (H1)
-E.g., testing whether DBT works: the null hypothesis is that DBT doesn't work (no effect of DBT). To test it, we place ourselves in a "fake" world where DBT is assumed to have no effect (the null) and ask how unlikely our data would be there - if the data are unlikely enough under the null, we reject it and conclude that DBT does work
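A minimal sketch of that logic in Python, using simulated (made-up) scores rather than real DBT data - the group means and sizes are hypothetical:
```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=40)  # the H0 world: no effect
dbt = rng.normal(loc=0.6, scale=1.0, size=40)      # hypothetical treatment group

stat, p = ttest_ind(dbt, control)
# A small p-value means the data would be very unlikely if H0 ("DBT has
# no effect") were true, so we reject H0 in favor of H1.
print(f"t = {stat:.2f}, p = {p:.4f}")
```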
Post hoc hypotheses & a priori
Generating hypotheses based on data already observed, in the absence of testing them on new data
-Questions that we try to answer with our data after the study has finished, even though answering them was not the intent of that particular study
A priori - Before the fact - state the hypotheses ahead of time so you cannot change them after the results are obtained
-Hypotheses based on assumed principles and deductions from the conclusions of previous research, and are generated prior to a new study taking place.
Sampling & sampling frame
The procedure by which we select individuals for research
Sampling frame - the procedure for obtaining a sample - because we cannot include everyone in a population, we set a sampling frame
So you have a population you want to research, you set a sampling frame to obtain individuals from that population, and your sample is the group of individuals you actually obtain for the study
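A tiny sketch of that population → sampling frame → sample chain; the clinic-records scenario and reachable-by-phone rule are hypothetical:
```python
import random

# Hypothetical population: everyone in a clinic's records.
population = [{"id": i, "has_phone": i % 3 != 0} for i in range(1000)]
# Sampling frame: the subset our procedure can actually reach.
frame = [p for p in population if p["has_phone"]]
# Sample: the individuals actually obtained for the study.
sample = random.sample(frame, k=50)
```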
Random probability sampling
Probability is the likelihood that a particular event will occur - in this case, the likelihood that an individual will be chosen for a study
-Researchers must set up some process or procedure that ensures, with confidence, that the different units in their sample population have equal probabilities of being chosen
Simple Random Sampling
In a given population, every individual has equal probability of being selected to participate
-Advantages: Most statistically efficient and statistically unbiased
-Disadvantages: Does not guarantee all subgroups are represented in a sample
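As a sketch, simple random sampling over a hypothetical list of participant IDs is exactly what Python's random.sample does:
```python
import random

population = list(range(1, 1001))         # hypothetical IDs for 1,000 people
sample = random.sample(population, k=50)  # each person equally likely to be drawn
```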
Stratified random sampling
In a given population, within pre-specified strata, every individual has equal probability of being selected to participate - subjects are first grouped into classifications such as gender, level of education, or SES; researchers then randomly select the final list of subjects from the defined categories
Best used when the goal is to study a particular subgroup within a greater population
Advantages: greater precision in approximating population
Disadvantages: Have to get the strata correct - assumes homogeneity within each stratum
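A minimal sketch of stratified random sampling; the education-level strata and the 10% sampling fraction are hypothetical:
```python
import random

# Hypothetical strata: education level -> participant IDs.
strata = {
    "high_school": list(range(0, 600)),
    "college": list(range(600, 900)),
    "graduate": list(range(900, 1000)),
}
fraction = 0.10  # sample 10% within each stratum
sample = [
    person
    for members in strata.values()
    for person in random.sample(members, k=int(len(members) * fraction))
]
```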
Systematic Random Sampling
In a given population, every kth individual is selected to participate
-Advantages: less complicated method of selection
-Disadvantages: order may matter
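A sketch of systematic sampling with a hypothetical interval k = 20 and a random starting point:
```python
import random

population = list(range(1, 1001))  # hypothetical ordered list of IDs
k = 20                             # interval: population size / desired sample size
start = random.randrange(k)        # random start avoids a fixed pattern
sample = population[start::k]      # every k-th individual thereafter
```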
Non-probability (non-random) sampling
Individuals are selected for a "reason."
There is a purpose behind their being chosen, and it is not random - each member of the population does not have the same odds of being selected
Convenience sampling
Accidental sample - the average person on the street, not likely to be random - based solely on convenience and availability
-Often volunteers
-Advantages: easy, cheap to obtain
-Disadvantages: sample unlikely to be representative of the population - volunteers may have a vested interest in the topic
Quota sampling
With pre-specified strata, a certain number of individuals are obtained - data are collected within homogeneous groups, but selection isn't random; it is based on convenience
-Advantages: more representative of a population
-Disadvantages: have to get the strata correct - assumed homogeneity - risk of bias
Purposive sampling
Recruit individuals with certain predefined characteristics – selective
Thought to be representative of the population of interest - the main goal is to focus on particular characteristics of a population that are of interest, which will best enable you to answer your research question (the exact opposite of random sampling)
-You specify who you want in your sample (E.g., Latina women living in Northeast Philly)
-Modal: Most common example - most typical members of population are selected
-Exceptional: Rare - look for an expert within the population
-Snowball: sample is linked together - you tell people to encourage their friends to participate
Advantages: face valid, easy to collect
Disadvantages: oversampling certain types of individuals
Data saturation
Occurs when no new themes emerge in data - all research points to similar findings
Universal effect
Homogeneous
Heterogeneous
Unstable estimate
Outliers
If we assume that everybody functions similarly (a universal effect), then it doesn't matter whom you get - if there is no reason to believe results will differ by subgroup, you don't need a huge sample size
-Homogeneous - don't need as large a sample if the population is SIMILAR
-Heterogeneous - need a larger sample if the population is DIFFERENT
-Unstable estimate - can't detect an effect if the sample size is too small
-Outliers - can ruin results
Power analysis and the cons of large and small sample sizes
Statistical method used to determine sample size - you have to estimate ahead of time how large you expect the effect to be; the power analysis then tells you the sample size needed to detect an effect of that size
-Can ensure your sample size isn’t too large or too small.
-Large Sample Size - Can waste time and resources and might be minimal gain
-Small Sample Size - Can lack precision to provide reliable answers to the investigation
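A sketch using statsmodels' power calculator for a two-group t-test; the "medium" effect size of 0.5 is exactly the kind of a priori guess the card describes, not a known value:
```python
from statsmodels.stats.power import TTestIndPower

# Guess the effect size ahead of time (here, Cohen's d = 0.5), then solve
# for the per-group sample size that yields 80% power at alpha = .05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group under these assumptions
```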
Type 1 error
Null hypothesis is REJECTED but it is true
-False positive - you say there is something there but in actuality nothing is there.
Ex: You go to bed and there is no fire - at 3 AM the smoke detector goes off but there is still no fire
Its probability is set by the alpha level (commonly alpha = .05); this sensitivity is chosen by the experimenter
Type 2 error
Null hypothesis is ACCEPTED but the alternative is true
False negative - you say there is nothing there, but in actuality there truly is something there
Ex: you go to bed assuming everything is safe and there is no fire, but there is a fire, the smoke detector just did not go off
Its probability is called beta and is not directly controlled by the experimenter
Power
Probability of avoiding Type 2 error, that the test correctly supports the alternative hypothesis, accurately rejects the null hypothesis when the alternative hypothesis is true
Complement of beta (1 - beta); ranges from 0 to 1
-the greater the effect size, the more power you have
A higher alpha (probability of a Type 1 error) increases sensitivity, which yields more true positives and therefore more power - BUT it also increases Type 1 errors
The greater the sample size, the greater the power will be
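These relationships can be checked by simulation; this hypothetical sketch estimates the Type 1 error rate (which should sit near alpha) and the power of a two-group t-test:
```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha, n, effect, reps = 0.05, 30, 0.5, 2000

def reject_rate(true_effect):
    # Fraction of simulated studies in which H0 is rejected.
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_effect, 1.0, n)
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / reps

print(reject_rate(0.0))     # Type 1 error rate: should be near alpha
print(reject_rate(effect))  # power (1 - beta) when a real effect exists
```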
Reliability
How reproducible a measure is - the likelihood of getting the same result
Test-retest reliability
Reproducibility over time or how similar a measurement remains after a period of time
Measured by correlation between two administrations at different times
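A minimal sketch with hypothetical scores for the same six people at two time points:
```python
from scipy.stats import pearsonr

time1 = [10, 14, 9, 21, 17, 12]   # hypothetical first administration
time2 = [11, 13, 10, 20, 18, 11]  # same people, later administration
r, _ = pearsonr(time1, time2)     # high r suggests good test-retest reliability
print(round(r, 2))
```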
Split-half reliability
If the test is homogeneous and measures the same thing, the correlation should be high - take the first half of the test and correlate it with the second half
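A sketch of a first-half/second-half split on hypothetical item scores, including the Spearman-Brown correction commonly applied because each half is shorter than the full test:
```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: rows = respondents, columns = items on one construct.
rng = np.random.default_rng(1)
ability = rng.normal(size=(50, 1))
scores = ability + rng.normal(scale=0.5, size=(50, 8))

half = scores.shape[1] // 2
first = scores[:, :half].sum(axis=1)
second = scores[:, half:].sum(axis=1)
r_half, _ = pearsonr(first, second)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
```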
Alternative forms within split-half reliability
Splitting may diminish the size of your test, so come up with alternate forms with similar questions
Internal consistency
Items measuring the same construct should produce similar estimates - if score high on one item, should score high on another similar item
Measured by Cronbach's Alpha - .50 = low, .70 = medium (what you want), .90 = high
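Cronbach's alpha follows directly from the item variances; a minimal sketch (any scores matrix passed in is hypothetical):
```python
import numpy as np

def cronbach_alpha(scores):
    # scores: rows = respondents, columns = items on the same construct.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```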
Interrater reliability
Consistency
Rater "drift"
How reproducible measurements are for a particular group of “judges” or people observing the same thing
absolute agreement: give the same score
Consistency: same scoring over time even if don’t necessarily agree
Rater "drift": over time, raters go off on their own - they might be aligned in the beginning, but very different by the end
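Cohen's kappa is one common index of agreement between two judges beyond what chance alone would produce; the ratings below are hypothetical:
```python
from sklearn.metrics import cohen_kappa_score

judge_a = [1, 0, 2, 1, 1, 0, 2, 2]  # hypothetical ratings of 8 cases
judge_b = [1, 0, 2, 1, 0, 0, 2, 1]  # a second judge, same cases
kappa = cohen_kappa_score(judge_a, judge_b)  # 1 = perfect agreement
```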
Factor structure
Exploratory factor analysis
Confirmatory factor analysis
Correlation (or non-correlation) of items occurs in a predictable fashion.
Factors or subscales underlie the data
Ex: IQ tests measure verbal and mathematical skills - different subgroups but related
Exploratory factor analysis: examine item correlations for patterns
Confirmatory factor analysis: specify the pattern ahead of time; the software tells you whether that pattern fits the data
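A sketch of an exploratory factor analysis on simulated data built to contain two factors (a stand-in for the verbal/math example; all numbers are hypothetical):
```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
verbal = rng.normal(size=(200, 1))
math = rng.normal(size=(200, 1))
X = np.hstack([
    verbal + rng.normal(scale=0.5, size=(200, 3)),  # 3 "verbal" items
    math + rng.normal(scale=0.5, size=(200, 3)),    # 3 "math" items
])

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_.round(2))  # loadings should separate the two item groups
```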
Validity
the extent to which a test measures or predicts what it is supposed to
Face validity
Whether a test looks like it measures what it is supposed to measure - how much the measure appears, on its face, to capture the construct
Content validity
The degree to which the content of a test is representative of the domain it's supposed to cover - the measurement of the construct contains everything suspected to be part of that construct