Evaluative Research Final Review Flashcards
Independent Variable
The independent variable (IV) is hypothesized to cause or lead to change/variation in another variable. It’s the PREDICTOR (possible cause).
Dependent Variable
The dependent variable (DV) is hypothesized to vary depending upon the influence of another variable. It is the OUTCOME (possible effect).
Traits of the IV
If you aren’t sure which is the IV, it may be:
- That which occurs 1st in time
- Something inherent within us, like demographics (age, ethnicity, education…)
- Experimental condition (treatment condition)
- That which is presumed to be causal, according to theory or prior research
Direction of Association / Influence
A hypothesis may make a prediction, based on prior research and theory, about the expected direction of the relationship (the direction of association) between the variables.
May be positive:
- as one variable (IV) increases, another increases (DV)
- as one variable (IV) decreases, so too does the other (DV)
- the key is that the variables co-vary, i.e., they change in the same direction.
May also be a negative (or inverse) relationship:
- as one variable (IV) increases, the other (DV) decreases
- i.e., the variables move in opposite directions
May also be a curvilinear relationship, graphed as an upward or downward U: as the value of one variable (the IV) rises, the other's value (the DV) first drops and then rises, or first rises and then drops.
- You may also predict no relationship between variables - the variables do not co-vary – they are unrelated.
- This can be useful in trying to correct misinformation or challenge assumptions.
- When you have only 1 IV and 1 DV, it can be helpful to graph the expected relationship in order to formulate the predicted direction of association.
- Don’t forget: exploratory and descriptive studies may have no a priori hypotheses, and yet you can learn much from such studies.
Importance of a Literature Review
A literature review helps you develop & distinguish background knowledge, helps you build the rationale for further research, and it also helps you make your own best research plans.
Goals of a Literature Review
- Identify concepts (variables) relevant to your research question & their definitions (conceptualization)
- Assess the scope (prevalence and incidence) of the experience, behavior, or problem identified in the research question.
- Identify demographic (e.g. income, educational attainment, primary language, etc.) & other correlates (esp. related problems) of the important concepts (& their definitions).
- Identify consequences or outcomes of the experience or problem identified in the research question.
- Identify recommended interventions & their effects to treat problems or maximize assets, where appropriate.
- Uncover possible measurement tools for important concepts
- Uncover possible methodologies to help answer the research question
- Uncover & assess possible theories/paradigms to guide formulation of a hypothesis about the relationship between important concepts (e.g. risk and protective factors for a particular outcome).
How research informs practice and practice informs research
Research evidence informs social work practice:
o Choose what & how to assess clients using knowledge of populations at risk, correlates & causes.
o Use knowledge of causes & correlates of problems as targets of intervention.
o Choose treatments/interventions w/effectiveness shown in the literature.
Social work practice informs research:
o SWers identify new risks and problems & holes in knowledge.
o SWers identify new variables (causes and effects of problems)
o SWers identify new treatment models to assess
o SWers conduct research to confirm their “practice wisdom,” what they intuit to be true based on personal & vicarious experience with similar/same client issues.
o Evaluation: SWers assess & document their treatment impacts (helpful or not) & the degree to which client needs are met.
Good closed-ended survey questions
o Avoid difficult vocabulary, terminology
o Minimize ambiguity and complexity
o Avoid multiple barrels and multiple negatives
o Reduce recall burden and make estimation as easy as possible
o Limit bias
o Have mutually exclusive & exhaustive response sets
o Use skip patterns: filter and contingency Qs
o Include valid scales/indices where appropriate
o Have clear formatting
Open/Qualitative Research Questions
- Q’s for which the respondent (R) is asked to come up with her own answers
- Ensures that a possible answer is not missed, because the R can answer however she chooses.
- Of course, a R may provide irrelevant answers.
- Answers must be categorized before summarized, which requires researcher interpretation, which introduces bias
- Measurement validity is strengthened: open interviewing allows deeper Qs to confirm that interviewer understands R’s meaning
Closed/Quantitative Research Questions
- Q’s in which the R is asked to select an answer (A) from a list provided by the researcher
- Answers to Qs can be targeted to a concept under study (operationalized)
- Uniformity of response, ease of processing
- BUT limits depth and possibly quality of answers.
- Also, constrained by researcher’s offering of A’s – may not include a R’s desired answer.
- R may misinterpret or not understand a Q, limiting measurement validity.
- However, reliability may be strengthened because all R’s administered same Q/A choices.
Simple Random Sampling
Each element in the population has an EQUAL CHANCE of being chosen for the sample.
EPSEM: equal probability of selection method
like colored marbles being pulled from a bag
HOW TO:
- First, assign a number to each member of the sampling frame (without skipping any #s).
- Then, use a table of random numbers (Appendix B, text), a lottery, or a random number generator (e.g. www.random.org; www.randomizer.org) to pick numbers that correspond to the elements in your sampling frame and that will, in combination, become your sample.
- If doing a phone survey, may use random digit dialing
Works better than a phone book because it can hit on unlisted numbers
BUT it misses out on those with no phones, and over-represents those who are willing to pick up the phone, have time & agree to talk.
- Can calculate the probability (ratio) of being selected w/SRS:
N / size of sampling frame
Where N is sample size, the actual count of elements selected
SRS is typically used for small projects, with modest sample sizes, since it can be cumbersome when seeking a large N
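The how-to above can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical numbered sampling frame of 500 elements; the frame, sample size, and variable names are assumptions, not from the source.

```python
import random

# Hypothetical sampling frame: 500 elements numbered 1..500 (no skipped #s)
sampling_frame = list(range(1, 501))

n = 50  # desired sample size
# random.sample gives every element an equal chance of selection (EPSEM)
sample = random.sample(sampling_frame, n)

# Probability (ratio) of being selected w/SRS = N / size of sampling frame
prob = n / len(sampling_frame)  # 50 / 500 = 0.1
```

Note the sample contains no duplicates: `random.sample` draws without replacement, like pulling marbles from a bag without putting them back.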
Systematic Random Sampling
With systematic random sampling, we create a list of every member of the population. From the list, we randomly select the first sample element from the first k elements on the population list. Thereafter, we select every kth element on the list.
This method is different from simple random sampling since every possible sample of n elements is not equally likely.
HOW TO: Arrange your elements into a list, in no meaningful order, & then take every kth element listed:
- Determine k, called the sampling interval, by dividing the sampling frame size by the desired sample size (N).
- Use a sampling frame (list) in which elements appear in no meaningful order (e.g. alphabetically, or in order of enrollment).
- Use a table of random #s or another method to pick a random start spot for selecting first element
- From that one spot, select every kth element for the sample, until desired N is achieved.
- Note: if k is a non-whole number, you will need to alternate your interval, rounding up then down to nearest whole number.
(e.g.: 75 elements in the frame, need to select 30: k = 75/30 = 2.5, so start at a random spot, select that element, then take the element 2 further down the list, then the one 3 further down from there, then 2, then 3, alternating until you have 30 selected.)
Can be an efficient method even when there is no actual sampling frame/list.
Beware: if your elements are in some regular pattern, not a random order, then you will suffer from periodicity, a selection bias, i.e., you may have an atypical sample. (e.g., the list is in boy-girl-boy-girl order)
It’s an EPSEM design, too.
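The steps above, including the non-whole-k case, can be sketched as follows. Stepping by the fractional interval k and truncating to an index automatically produces the alternating 2-then-3 pattern from the example; the frame of 75 elements and the function name are illustrative assumptions.

```python
import random

def systematic_sample(frame, n):
    """Select every kth element after a random start within the first interval.
    A non-whole k is handled by stepping in fractional increments, which
    alternates the rounded interval (e.g., 2, 3, 2, 3, ... for k = 2.5)."""
    k = len(frame) / n            # sampling interval
    start = random.uniform(0, k)  # random start spot in the first k elements
    return [frame[int(start + i * k)] for i in range(n)]

frame = list(range(1, 76))             # hypothetical frame of 75 elements
sample = systematic_sample(frame, 30)  # k = 75/30 = 2.5
```

Because the frame here is an ordered list of consecutive numbers, you can verify the alternating interval directly: the gap between successive selections is always 2 or 3.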
Probability Samples
With probability sampling methods, each population element has a known (non-zero) chance of being chosen for the sample.
Non-Probability Samples
With non-probability sampling methods, we do not know the probability that each population element will be chosen, and/or we cannot be sure that each population element has a non-zero chance of being chosen.
Stratified Random Sampling
With stratified sampling, the population is divided into groups, based on some characteristic. Then, within each group, a probability sample (often a simple random sample) is selected. In stratified sampling, the groups are called strata.
As an example, suppose we conduct a national survey. We might divide the population into groups or strata, based on geography - north, east, south, and west. Then, within each stratum, we might randomly select survey respondents.
HOW TO:
- Divide the sampling frame into smaller subgroups called “strata” by one or more salient characteristics, like racial or age group, prior to drawing the sample.
- Choice of stratification characteristics depends on variables available (what is already known) and what is relevant to concept.
- Requires that you can categorize (into mutually exclusive & exhaustive categories) each element in the sampling frame (i.e., every element fits into just one category (stratum)).
- Also requires that you know the size of each stratum (the % w/each value in the population) to determine representativeness and probability of being selected.
- Once strata are established, simple or systematic random samples are then drawn from within each stratum.
- Can generate a proportionate or disproportionate sample:
- Proportionate Stratified Sample: % in each category is the same in sample as in sampling frame - little pie looks like big pie
- Disproportionate Stratified Sample: % in category is different from the sampling frame - little pie is sliced differently than big pie.
- Has more representativeness than simple or systematic random sampling, and less sampling error
- Representativeness: the degree to which your sample looks like the population from which it was drawn on some key criteria
- Sampling Error: the difference between characteristics of the sample and the characteristics of the population
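A proportionate stratified sample (little pie looks like big pie) can be sketched like this. The function name, the grouping callback, and the hypothetical urban/rural frame are assumptions for illustration; simple rounding is used to size each stratum's sample, which in general may need adjustment so the pieces sum to N.

```python
import random

def proportionate_stratified_sample(frame, stratum_of, n):
    """Divide the frame into strata, then draw an SRS within each stratum,
    sized in proportion to that stratum's share of the frame."""
    strata = {}
    for element in frame:
        strata.setdefault(stratum_of(element), []).append(element)
    sample = []
    for members in strata.values():
        # % in each category is the same in sample as in sampling frame
        n_stratum = round(n * len(members) / len(frame))
        sample.extend(random.sample(members, n_stratum))
    return sample

# Hypothetical frame: 60 urban and 40 rural clients
frame = [("urban", i) for i in range(60)] + [("rural", i) for i in range(40)]
sample = proportionate_stratified_sample(frame, lambda e: e[0], 20)
# yields 12 urban + 8 rural, matching the 60/40 split in the frame
```

For a disproportionate stratified sample, you would instead set `n_stratum` independently of the frame percentages (e.g., to oversample a small group).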
Cluster Sampling
Cluster = naturally occurring group of elements found in different social structures
Cluster sampling done when it’s not practical to compile an exhaustive list (sampling frame) of elements in the target population.
a.k.a. multi-stage sampling: the researcher samples from a larger set (cluster) of elements, then samples from within the subset using a smaller unit, and so on, until the unit of measurement is reached.
HOW TO:
- You randomly sample groups of larger sampling units called clusters (i.e. counties, census tracts, etc.)
- From these clusters, still smaller clusters (e.g. zip codes, neighborhoods) are (usually randomly) selected until finally you reach your unit of measurement (e.g. individual, household, etc.)
- From that, you draw your sample of elements, using any method (e.g. simple random sampling or even a non-probability sampling method).
- Requires listing and sampling, repeatedly.
- The more clusters sampled, the smaller N required w/in each cluster to achieve representativeness.
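A minimal two-stage version of the repeated list-and-sample process might look like this. The tract/household structure and all names are hypothetical; a real study might add further stages (e.g., zip codes, then blocks) before reaching the unit of measurement.

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, n_per_cluster):
    """Stage 1: randomly select clusters (e.g., census tracts).
    Stage 2: draw a simple random sample of elements within each
    selected cluster (e.g., households)."""
    chosen = random.sample(list(clusters), n_clusters)
    sample = []
    for name in chosen:
        sample.extend(random.sample(clusters[name], n_per_cluster))
    return sample

# Hypothetical: 5 census tracts, each a list of 20 households
clusters = {f"tract_{t}": [f"hh_{t}_{h}" for h in range(20)] for t in range(5)}
sample = two_stage_cluster_sample(clusters, n_clusters=2, n_per_cluster=5)
```

Notice that no exhaustive frame of all 100 households is ever needed up front: only the tract list, and then a household list for the two tracts actually chosen.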
Why not always do probability (random) sampling?
- It’s not always feasible.
- May not be able to identify a sampling frame (no list of possible participants), esp. one with desired characteristics (i.e., the target pop.)
Particularly in treatment studies, you may be constrained by:
- Willingness (volunteerism) of participants
- Appropriateness of participants (i.e. those who meet eligibility criteria)
- May need an intensive investigation into a small population
- May wish to speak with “key informants” in qualitative designs
Not always necessary, as in:
- Pilot studies: preliminary, small studies completed to test procedures prior to a larger study or implementation
- Exploratory studies: literature doesn’t exist yet to indicate a probability sample (i.e., you don’t yet know enough to determine what variable(s) you want representativeness on.)
BUT: non-probability sampling ALWAYS leads to selection bias, and to limitations in generalizability, i.e., validity threats.
Generalizability (a.k.a. external validity): extent to which findings from a subset (sample) hold true or are consistent with those from some larger or whole set
Availability/Convenience Sampling
“NON-probability” because not everyone has a chance of being included in study & we cannot estimate what that chance is.
Advantage: fast, cheap, easy
HOW TO:
- Researcher uses whatever participants are available
- Can stand at a given spot and solicit volunteers passing by
- Can advertise (on a board, on a website) and see who responds
- Serious external validity threats because there are lots of elements in the population who never had a chance to be included.
- Convenience sampling is the most frequently used sampling method, despite these risks!
- Always ask: “how is the group I enrolled in the study different from the population to whom I want to generalize?”
- It’s impossible to anticipate all possible biases with no definable population
- Can help to collect descriptive info on your participants so at least you can report on who actually DID participate, even if you can’t say how typical (representative) they are.
Purposive/Intentional Sampling
HOW TO:
- Researcher intentionally selects elements on basis of his/her own judgment, participant self-referral, or gatekeeper referral
- Those selected usually meet some selection criteria or are perceived by researchers or gatekeepers to have something to say on the topic, to have a desired characteristic, or to be a useful informant.
- A gatekeeper is a person with access to the population who can point researcher towards subjects.
Participants selected should be:
- Knowledgeable about the situation or experience being studied
- Willing to talk
- Representative of the range of points of view
- May include “key informants”: those who are “in the know” about the population or the issue under study and can talk about it well.
- Typically, researchers keep selecting participants/elements until
- They have a sample that provides an overall sense of the answer to the research Q.
- They are no longer hearing anything new; the findings are saturated.
- Commonly used in qualitative & experimental designs.
- In experiments, researchers often intentionally seek folks who have a given condition or have had a given experience.
Quota Sampling
Quota = proportional part or share
HOW TO:
-Elements are selected by availability but with consideration of pre-specified characteristics (usually demographic: gender, age, SES, race…)
That is, you take by convenience a certain # of folks from certain categories (e.g., you get nurses and doctors, middle-school and high-school students, Spanish speakers and English speakers…)
-The number in each group is determined proportionately, so that the total sample will have same distribution of characteristics (parameters) assumed to exist in the population
-Akin to a proportionate stratified random sample, EXCEPT that it is NOT random.
That is, the little pie looks like the big pie on some variable, but otherwise, you have no idea how typical they are because you didn’t select them using probability.
Strives for representativeness, but still, relies on availability of those who have the desired characteristics, & therefore limits generalizability
No way of knowing if the sample is representative in any way other than on the chosen characteristic.
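The take-by-availability-until-quotas-fill logic can be sketched as below. The walk-in stream, the 60/40 quota split, and the function names are illustrative assumptions; the key point the code makes explicit is that selection order is whoever shows up, with no randomness anywhere.

```python
def quota_sample(stream, quotas, group_of):
    """Take elements as they become available (convenience), but stop
    accepting members of a group once its quota is filled. NOT random:
    whoever happens to arrive first gets in."""
    counts = {g: 0 for g in quotas}
    sample = []
    for element in stream:
        g = group_of(element)
        if counts.get(g, 0) < quotas.get(g, 0):
            sample.append(element)
            counts[g] += 1
        if counts == quotas:  # all quotas filled
            break
    return sample

# Hypothetical stream of walk-ins, tagged by a characteristic (here gender);
# quotas mirror the distribution assumed to exist in the population
stream = [("F", i) if i % 3 else ("M", i) for i in range(100)]
sample = quota_sample(stream, {"F": 6, "M": 4}, lambda e: e[0])
```

The resulting sample matches the population on the quota characteristic only; on everything else it is as unknown as any convenience sample.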
Snowball Sampling
HOW TO:
- Find and collect data from a few members of a target population
- Ask those individuals to suggest additional people for interviewing & to provide info to help you locate other members that they know.
- Repeat step 2 as needed (until saturation)
Used w/hard-to-reach or hard-to-identify populations where:
- Group members are inter-connected
- You have no available sampling frame
- You may be looking for folks who may not want to be found
- Requires establishment of trust & rapport, as in qualitative studies where researchers get to know participants better
- Often used in conjunction with quota and purposive sampling.
Uses:
- Ecologically-based/systems studies that chart social networks & relationships among group members
- Can look at meso-systems (linkages between 2+ micro systems)
E.g., research into the spread of behaviors or infections.
- Exploratory studies, to gain info in a newly emerging field or pop. group.
- Always suffers limited generalizability, because of informant (selection) bias: the first person you talk to may ultimately shape your sample.
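The find-a-few-then-ask-for-referrals loop is essentially a breadth-first walk of the referral network, which can be sketched like this. The referral network and names are hypothetical; note how everything reachable is shaped by the initial seed, which is exactly the informant (selection) bias described above.

```python
from collections import deque

def snowball_sample(seeds, referrals, max_n):
    """Start with a few known members of the target population (seeds);
    interview each, ask for referrals, and repeat until max_n is reached
    or no new names turn up (saturation)."""
    sampled, queue, seen = [], deque(seeds), set(seeds)
    while queue and len(sampled) < max_n:
        person = queue.popleft()
        sampled.append(person)  # interview this person
        for contact in referrals.get(person, []):
            if contact not in seen:  # only follow up on new names
                seen.add(contact)
                queue.append(contact)
    return sampled

# Hypothetical referral network in a hard-to-reach population
referrals = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": ["F"]}
sample = snowball_sample(["A"], referrals, max_n=5)
```

If the first informant were someone outside A's network, the entire sample would differ: the seed ultimately shapes who can ever be reached.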
Validity threats related to sampling:
External Validity
External Validity = Generalizability = extent to which you can safely draw general conclusions about a larger or different population based on findings about a subset or sample; depends upon sampling methodology.
Validity threats related to sampling:
Selection Bias
Selection Bias = a type of validity threat occurring when those selected by researchers for a study sample are not typical or representative of the larger population from which they were chosen.
+ Always a threat with NP sampling.
Validity threats related to sampling:
Response Bias
Response Bias/Non-response bias = a validity threat occurring when there is some difference between who participates in a study (e.g. volunteers or completes the survey) and who doesn’t.
Validity threats related to sampling:
Statistical Conclusion Validity & Low Power
Statistical Conclusion Validity = degree of confidence with which you can infer that a statistical finding is accurate, that it will hold true in the population, based on the results from a sample.
One kind of SCV (there are others), related to sampling:
Low Power = the apparent lack of support for a hypothesis, or the limited strength of findings, is due to a small sample size (N), which limits the ability to detect a statistically significant relationship if one exists (statistical power), rather than to the lack of a meaningful relationship between variables.
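A small worked example of low power, using the standard error of a difference in means. The effect size (0.5 SD) and group sizes are assumed purely for illustration: the same real relationship falls short of the conventional z = 1.96 significance threshold with a small N but clears it easily with a large N.

```python
import math

# Assume a true mean difference of 0.5 (in SD units) between two groups
effect, sd = 0.5, 1.0

def z_statistic(n_per_group):
    """z = true effect / standard error of the difference between two
    group means, each based on n_per_group observations."""
    se = sd * math.sqrt(2 / n_per_group)
    return effect / se

small_z = z_statistic(10)   # ~1.12: real effect NOT detected (z < 1.96)
large_z = z_statistic(100)  # ~3.54: same effect easily detected
```

The relationship between the variables is identical in both cases; only N changed. Concluding "no relationship" from the small-N result would be a statistical conclusion validity error.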
Open ended (qualitative) data collection methods
Qualitative Research:
- Process is inductive (concepts to be explored and theory arise from the data itself, from what a researcher learns from those observed)
- Seeks only to make explicit any subjectivity or values of researcher, not to limit it. Researcher’s reaction to the experience is a source of data.
- Researcher is often interactive & participatory (insider POV)
- Data comes from detailed (thick) descriptions, careful observation, intensive interviews, focus groups
- Typically, open Qs employed
- Naturalistic (real world), uncontrolled observation
- Potentially valid measurement (real, rich, deep data)
- Limited generalizability
- Analysis through coding, content analysis, grounded theory analysis
Intensive Interview
• Qualitative interview is based on a set of topics to be discussed in depth, rather than standardized Q’s with a list of A’s to choose from.
– Researcher must first ID purpose of interviews, the broad concepts to be explored… the researcher asks “what do I want to learn?”
– That then guides the development of an interview protocol (or guide).
• The researcher asks Q’s from this protocol to guide discussion on the target concepts.
• Interview protocol/guide: a set of broad, open-ended Q’s with a few more directive, follow-up Qs (probes) to help interviewer cover the concepts she wishes to explore
– Qs are OPEN-ENDED and researchers begin with who, what, where, how, when and use probes & encouraging prompts to get more information.
– Interview is flexible… researcher need not adhere to protocol exactly:
• Interviewer may ask new Q’s in response to the informant’s comments in order to delve more deeply or in a new direction.
• Informant is encouraged to elaborate, clarify, & illustrate
• Interviewee given room to raise issues not anticipated by researcher.
– Researcher may revise the protocol as new information is uncovered, a process known as “reflexivity”.
– Researcher continues interviews with the same or new informants until they are sharing info she has heard before, and a pattern has emerged (saturation)
– Interviews are recorded & transcribed for later analysis
• Analysis of transcripts involves looking for themes, core messages within each informant’s story, but also common threads between different stories. Themes are given a label & explanation, a process called coding.