Evaluative Research Final Review Flashcards

1
Q

Independent Variable

A

The independent variable (IV) is hypothesized to cause or lead to change/variation in another variable. It’s the PREDICTOR (possible cause).

2
Q

Dependent Variable

A

The dependent variable (DV) is hypothesized to vary depending upon the influence of another variable. It is the OUTCOME (possible effect).

3
Q

Traits of the IV

A

If you aren’t sure which is the IV, it may be:

  • That which occurs 1st in time
  • Something inherent within us, like demographics (age, ethnicity, education…)
  • Experimental condition (treatment condition)
  • That which is presumed to be causal, according to theory or prior research
4
Q

Direction of Association / Influence

A

A hypothesis may make a prediction, based on prior research and theory, about the expected direction of the relationship (the direction of association) between the variables.

May be positive:

  • as one variable (IV) increases, another increases (DV)
  • as one variable (IV) decreases, so too does the other (DV)
  • the key is that the variables co-vary, i.e., they change in the same direction.

May also be a negative (or inverse) relationship:

  • as one variable (IV) increases, the other (DV) decreases,
    i.e., the variables move in opposite directions

May also be a curvilinear relationship, graphed as an upward or downward U… as the value of one variable (the IV) rises, the other’s value (the DV) first drops and then rises, or first rises and then drops.

You may also predict no relationship between variables - the variables do not co-vary; they are unrelated.

  • This can be useful in trying to correct misinformation or challenge assumptions.
  • When you have only 1 IV and 1 DV, it can be helpful to graph the expected relationship in order to formulate the predicted direction of association.
  • Don’t forget: exploratory and descriptive studies may have no a priori hypotheses, and yet you can learn much from such studies.
5
Q

Importance of a Literature Review

A

A literature review helps you develop background knowledge, build the rationale for further research, and make the best plans for your own study.

6
Q

Goals of a Literature Review

A
  • Identify concepts (variables) relevant to your research question & their definitions (conceptualization)
  • Assess the scope (prevalence and incidence) of the experience, behavior, or problem identified in the research question.
  • Identify demographic (e.g. income, educational attainment, primary language, etc.) & other correlates (esp. related problems) of the important concepts (& their definitions).
  • Identify consequences or outcomes of the experience or problem identified in the research question.
  • Identify recommended interventions & their effects to treat problems or maximize assets, where appropriate.
  • Uncover possible measurement tools for important concepts
  • Uncover possible methodologies to help answer the research question
  • Uncover & assess possible theories/paradigms to guide formulation of a hypothesis about the relationship between important concepts (e.g. risk and protective factors for a particular outcome).
7
Q

How research informs practice and practice informs research

A

Research evidence informs social work practice:
o Choose what & how to assess clients using knowledge of populations at risk, correlates & causes.
o Use knowledge of causes & correlates of problems as targets of intervention.
o Choose treatments/interventions w/effectiveness shown in the literature.

Social work practice informs research:
o SWers identify new risks and problems & holes in knowledge.
o SWers identify new variables (causes and effects of problems)
o SWers identify new treatment models to assess
o You conduct research to confirm your “practice wisdom,” what you intuit to be true based on personal & vicarious experience with similar/same client issues.
o Evaluation: SWers assess & document their treatment impacts (helpful or not) & the degree to which client needs are met.

8
Q

Good closed-ended survey questions

A

o Avoid difficult vocabulary, terminology
o Minimize ambiguity and complexity
o Avoid multiple barrels and multiple negatives
o Reduce recall burden and make estimation as easy as possible
o Limit bias
o Have mutually exclusive & exhaustive response sets
o Use skip patterns: filter and contingency Qs
o Include valid scales/indices where appropriate
o Have clear formatting

9
Q

Open/Qualitative Research Questions

A
  • Q’s for which the R (respondent) is asked to come up with his/her own answers
  • Ensures that a possible answer is not missed, because the R can answer however she chooses.
  • Of course, a R may provide irrelevant answers.
  • Answers must be categorized before being summarized, which requires researcher interpretation and can introduce bias
  • Measurement validity is strengthened: open interviewing allows deeper Qs to confirm that interviewer understands R’s meaning
10
Q

Closed/Quantitative Research Questions

A
  • Q’s in which R is asked to select an A from a list provided by researcher
  • Answers to Qs can be targeted to a concept under study (operationalized)
  • Uniformity of response, ease of processing
    BUT limits depth and possibly quality of answers.
  • Also, constrained by researcher’s offering of A’s – may not include a R’s desired answer.
  • R may misinterpret or not understand a Q, limiting measurement validity.
  • However, reliability may be strengthened because all R’s administered same Q/A choices.
11
Q

Simple Random Sampling

A

Each element in the population has an EQUAL CHANCE of being chosen for the sample.

EPSEM: equal probability of selection method
like colored marbles being pulled from a bag

HOW TO:
- First, assign a number to each member of the sampling frame (without skipping any #s).
- Then, use a table of random numbers (Appendix B, text), a lottery, or a random number generator (e.g. www.random.org; www.randomizer.org) to pick numbers which correspond to the elements in your sampling frame, and which will, in combination, become your sample.
- If doing a phone survey, may use random digit dialing
Works better than a phone book because it can hit on unlisted numbers
BUT it misses out on those with no phones, and over-represents those who are willing to pick up the phone, have time & agree to talk.
- Can calculate the probability (ratio) of being selected w/SRS:
  probability of selection = N / size of sampling frame
  where N is the sample size, the actual count of elements selected
SRS is typically used for small projects, with modest sample sizes, since it can be cumbersome when seeking a large N
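To make the procedure concrete, here is a minimal Python sketch (the 500-element frame and N of 50 are hypothetical; random.sample performs the equal-chance, EPSEM draw):

```python
import random

# Hypothetical sampling frame: every member numbered, no #s skipped.
frame = list(range(1, 501))       # 500 elements, numbered 1-500
N = 50                            # desired sample size

sample = random.sample(frame, N)  # each element has an equal chance of selection

# Probability (ratio) of being selected = N / size of sampling frame
print(N / len(frame))             # 50/500 = 0.1, i.e., a 10% chance
```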

12
Q

Systematic Random Sampling

A

With systematic random sampling, we create a list of every member of the population. From the list, we randomly select the first sample element from the first k elements on the population list. Thereafter, we select every kth element on the list.

This method is different from simple random sampling since every possible sample of n elements is not equally likely.

HOW TO: Arrange your elements into a list, in no meaningful order, & then take every kth element listed:

  • Determine k, called the sampling interval, by dividing the sampling frame size by the desired sample size (N).
  • Use a sampling frame (list) in which elements appear in no meaningful order (e.g. alphabetically, or in order of enrollment).
  • Use a table of random #s or another method to pick a random start spot for selecting first element
  • From that one spot, select every kth* element for the sample, until desired N is achieved.
  • Note: if k is a non-whole number, you will need to alternate your interval, rounding down then up to the nearest whole number.
    (e.g.: 75 elements in the frame, need to select 30: k = 75/30 = 2.5, so start at a random spot, select that element, then take the element 2 down, then the element 3 down from there, alternating intervals of 2 and 3 until you have 30 selected.)

Can be an efficient method even if there is no actual sampling frame/list (e.g., selecting every kth person who passes by).

Beware: if your elements are in some regular pattern, not a random order, then you will suffer from periodicity, a selection bias, i.e., you may have an atypical sample. (e.g., the list is in boy-girl-boy-girl order)
It’s an EPSEM design, too.
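A minimal Python sketch of the procedure, assuming a hypothetical 75-element frame and N = 30; stepping through the list by the fractional k = 2.5 automatically alternates intervals of 2 and 3:

```python
import random

frame = list(range(1, 76))    # hypothetical frame of 75 elements, in no meaningful order
N = 30                        # desired sample size
k = len(frame) / N            # sampling interval: 75/30 = 2.5 (non-whole)

sample = []
pos = random.uniform(0, k)    # random start spot within the first interval
while len(sample) < N:
    sample.append(frame[int(pos)])
    pos += k                  # stepping by 2.5 alternates intervals of 2 and 3
```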

13
Q

Probability Samples

A

With probability sampling methods, each population element has a known (non-zero) chance of being chosen for the sample.

14
Q

Non-Probability Samples

A

With non-probability sampling methods, we do not know the probability that each population element will be chosen, and/or we cannot be sure that each population element has a non-zero chance of being chosen.

15
Q

Stratified Random Sampling

A

With stratified sampling, the population is divided into groups, based on some characteristic. Then, within each group, a probability sample (often a simple random sample) is selected. In stratified sampling, the groups are called strata.

As an example, suppose we conduct a national survey. We might divide the population into groups or strata, based on geography - north, east, south, and west. Then, within each stratum, we might randomly select survey respondents.

HOW TO:

  • Divide the sampling frame into smaller subgroups called “strata” by one or more salient characteristics, like racial or age group, prior to drawing the sample.
  • Choice of stratification characteristics depends on variables available (what is already known) and what is relevant to concept.
  • Requires that you can categorize (into mutually exclusive & exhaustive categories) each element in the sampling frame (i.e., every element fits into just one category (stratum)).
  • Also requires that you know the size of each stratum (the % w/each value in the population) to determine representativeness and probability of being selected.
  • Once strata are established, simple or systematic random samples are then drawn from within each stratum.
  • Can generate a proportionate or disproportionate sample:
  • Proportionate Stratified Sample: % in each category is the same in sample as in sampling frame - little pie looks like big pie
  • Disproportionate Stratified Sample: % in category is different from the sampling frame - little pie is sliced differently than big pie.
  • Has more representativeness than simple or systematic random sampling, and less sampling error
  • Representativeness: the degree to which your sample looks like the population from which it was drawn on some key criteria
  • Sampling Error: the difference between characteristics of the sample and the characteristics of the population
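A minimal Python sketch of a proportionate stratified draw (the four geographic strata and the 10% sampling fraction are hypothetical):

```python
import random
from collections import defaultdict

# Hypothetical sampling frame: (id, stratum) pairs; every element fits one stratum.
frame = [(i, random.choice(["north", "east", "south", "west"])) for i in range(1000)]
fraction = 0.10                       # sample 10% of each stratum

strata = defaultdict(list)
for element, stratum in frame:
    strata[stratum].append(element)   # requires knowing each element's category

# Proportionate stratified sample: each stratum contributes the same %,
# so the "little pie" has the same slices as the "big pie."
sample = []
for members in strata.values():
    sample.extend(random.sample(members, round(len(members) * fraction)))
```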
16
Q

Cluster Sampling

A

Cluster = naturally occurring group of elements found in different social structures

Cluster sampling done when it’s not practical to compile an exhaustive list (sampling frame) of elements in the target population.
a.k.a. multi-stage sampling: the researcher samples from a larger set (cluster) of elements, then samples from within that subset using a smaller unit, and so on, until the unit of measurement is reached.

HOW TO:

  • You randomly sample groups of larger sampling units called clusters (e.g. counties, census tracts, etc.)
  • From these clusters, still smaller clusters (e.g. zip codes, neighborhoods) are (usually randomly) selected until finally you reach your unit of measurement (e.g. individual, household, etc.)
  • From that, you draw your sample of elements, using any method (e.g., simple random sampling or even a non-probability sampling method).
  • Requires listing and sampling, repeatedly.
  • The more clusters sampled, the smaller N required w/in each cluster to achieve representativeness.
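A minimal Python sketch of multi-stage selection (the county → neighborhood → household structure and the counts at each stage are hypothetical):

```python
import random

# Hypothetical multi-stage structure: counties -> neighborhoods -> households.
counties = {f"county_{c}": {f"hood_{c}_{n}": [f"hh_{c}_{n}_{h}" for h in range(50)]
                            for n in range(20)}
            for c in range(10)}

# Stage 1: randomly sample the largest clusters (counties).
picked_counties = random.sample(list(counties), 3)

sample = []
for county in picked_counties:
    # Stage 2: within each picked county, randomly sample smaller clusters.
    for hood in random.sample(list(counties[county]), 5):
        # Stage 3: reach the unit of measurement (households) by SRS.
        sample.extend(random.sample(counties[county][hood], 10))
```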
17
Q

Why not always do probability (random) sampling?

A
  • It’s not always feasible.
  • May not be able to identify a sampling frame (no list of possible participants), esp. one with desired characteristics (i.e., the target pop.)

Particularly in treatment studies, you may be constrained by:

  • Willingness (volunteerism) of participants
  • Appropriateness of participants (i.e. those who meet eligibility criteria)
  • May need an intensive investigation into a small population
  • May wish to speak with “key informants” in qualitative designs

Not always necessary, as in:

  • Pilot studies: preliminary, small studies completed to test procedures prior to a larger study or implementation
  • Exploratory studies: literature doesn’t exist yet to indicate a probability sample (i.e., you don’t yet know enough to determine what variable(s) you want representativeness on.)

BUT: non-probability sampling ALWAYS leads to selection bias, and to limitations in generalizability, i.e., validity threats.

Generalizability (a.k.a. external validity): extent to which findings from a subset (sample) hold true or are consistent with those from some larger or whole set

18
Q

Availability/Convenience Sampling

A

“NON-probability” because not everyone has a chance of being included in the study & we cannot estimate what that chance is.

Advantage: fast, cheap, easy

HOW TO:

  • Researcher uses whatever participants are available
  • Can stand at a given spot and solicit volunteers passing by
  • Can advertise (on a board, on a website) and see who responds
  • Serious external validity threats because there are lots of elements in the population who never had a chance to be included.
  • Convenience sampling is the most frequently used sampling method, despite these risks!
  • Always ask: “How is the group I enrolled in the study different from the population to whom I want to generalize?”
  • It’s impossible to anticipate all possible biases with no definable population
  • Can help to collect descriptive info on your participants so at least you can report on who actually DID participate, even if you can’t say how typical (representative) they are.
19
Q

Purposive/Intentional Sampling

A

HOW TO:

  • Researcher intentionally selects elements on basis of his/her own judgment, participant self-referral, or gatekeeper referral
  • Those selected usually meet some selection criteria or are perceived by researchers or gatekeepers to have something to say on the topic, to have a desired characteristic, or to be a useful informant.
  • A gatekeeper is a person with access to the population who can point researcher towards subjects.

Participants selected should be:

  • Knowledgeable about the situation or experience being studied
  • Willing to talk
  • Representative of the range of points of view
  • May include “key informants”: those who are “in the know” about the population or the issue under study and can talk about it well.
  • Typically, researchers keep selecting participants/elements until
  • They have a sample that provides an overall sense of the answer to the research Q.
  • They are no longer hearing anything new; the findings are saturated.
  • Commonly used in qualitative & experimental designs.
  • In experiments, researchers often intentionally seek folks who have a given condition or have had a given experience.
20
Q

Quota Sampling

A

Quota = proportional part or share

HOW TO:
-Elements are selected by availability but with consideration of pre-specified characteristics (usually demographic: gender, age, SES, race…)
That is, you take by convenience a certain # of folks from certain categories (e.g., you get nurses and doctors, mid-school and high-school students, Spanish speakers and English speakers…)
-The number in each group is determined proportionately, so that the total sample will have same distribution of characteristics (parameters) assumed to exist in the population
-Liken it to a proportionate stratified random sample, EXCEPT that it is NOT random.

That is, the little pie looks like the big pie on some variable, but otherwise, you have no idea how typical they are because you didn’t select them using probability.

Strives for representativeness, but still, relies on availability of those who have the desired characteristics, & therefore limits generalizability

No way of knowing if the sample is representative in any way other than on the chosen characteristic.
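A minimal Python sketch of quota filling (the gender quotas and the consider() helper are hypothetical; in practice elements arrive by availability):

```python
# Enroll by availability, but only until each pre-specified category's share is filled.
quotas = {"woman": 55, "man": 45}    # counts matching the assumed population %s
sample = []

def consider(volunteer):
    """Enroll an available volunteer only if their category's quota isn't full."""
    enrolled = sum(1 for v in sample if v["gender"] == volunteer["gender"])
    if enrolled < quotas.get(volunteer["gender"], 0):
        sample.append(volunteer)

consider({"id": 1, "gender": "woman"})   # taken: quota of 55 not yet filled
```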

21
Q

Snowball Sampling

A

HOW TO:

  • Find and collect data from a few members of a target population
  • Ask those individuals to suggest additional people for interviewing & to provide info to help you locate other members that they know.
  • Repeat step 2 as needed (until saturation)

Used w/hard-to-reach or hard-to-identify populations where:

  • Group members are inter-connected
  • You have no available sampling frame
  • You may be looking for folks who may not want to be found
  • Requires establishment of trust & rapport, as in qualitative studies where researchers get to know participants better
  • Often used in conjunction with quota and purposive sampling.

Uses:
- Ecologically-based/systems studies that chart social networks & relationships among group members
- Can look at meso-systems (linkages between 2+ micro systems)
E.g., research into the spread of behaviors or infections.
- Exploratory studies, to gain info in a newly emerging field or pop. group.
- Always suffers limited generalizability, because of informant (selection) bias: the first person you talk to may ultimately shape your sample.
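A minimal Python sketch of the referral chain (get_referrals is a hypothetical stand-in for asking each informant to name other members they know):

```python
def snowball(seeds, get_referrals, max_n=50):
    """Start with a few known members ('seeds'), then follow referrals."""
    sample, to_visit = [], list(seeds)
    while to_visit and len(sample) < max_n:
        person = to_visit.pop(0)
        if person not in sample:
            sample.append(person)                   # interview & collect data
            to_visit.extend(get_referrals(person))  # ask for more names; repeat until saturation
    return sample   # note: shaped by the first seeds chosen (informant/selection bias)
```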

22
Q

Validity threats related to sampling:

External Validity

A

External Validity = Generalizability = extent to which you can safely draw general conclusions about a larger or different population based on findings about a subset or sample; depends upon sampling methodology.

23
Q

Validity threats related to sampling:

Selection Bias

A

Selection Bias = a type of validity threat occurring when those selected by researchers for a study sample are not typical or representative of the larger population from which they were chosen.

+ Always a threat with NP sampling.

24
Q

Validity threats related to sampling:

Response Bias

A

Response Bias/Non-response bias = a validity threat occurring when there is some difference between who participates in a study (e.g. volunteers or completes the survey) and who doesn’t.

25
Q

Validity threats related to sampling:

Statistical Conclusion Validity & Low Power

A

Statistical Conclusion Validity = degree of confidence with which you can infer that a statistical finding is accurate, that it will hold true in the population, based on the results from a sample.

One kind of SCV (there are others), related to sampling:

Low Power = risk that an apparent lack of support for a hypothesis, or the limited strength of findings, is due to a small sample size (N), which limits the ability to detect a statistically significant relationship if one exists (statistical power), rather than to the lack of a meaningful relationship between variables.
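A simulation sketch of the low-power problem, assuming a modest true effect (all numbers are hypothetical; scipy's t-test supplies the p-value):

```python
import random
from scipy.stats import ttest_ind

def estimated_power(n_per_group, true_effect=0.3, trials=2000, alpha=0.05):
    """Simulate studies with a real (modest) group difference and count
    how often a sample of size n detects it at p < alpha."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treated = [random.gauss(true_effect, 1) for _ in range(n_per_group)]
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / trials

# A small N usually misses a real effect (low power); a larger N usually finds it.
print(estimated_power(20))    # roughly 0.15 - low power
print(estimated_power(200))   # roughly 0.85 - adequate power
```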

26
Q

Open ended (qualitative) data collection methods

A

Qualitative Research:

  • Process is inductive (concepts to be explored and theory arise from the data itself, from what a researcher learns from those observed)
  • Seeks only to make explicit any subjectivity or values of researcher, not to limit it. Researcher’s reaction to the experience is a source of data.
  • Researcher is often interactive & participatory (insider POV)
  • Data comes from detailed (thick) descriptions, careful observation, intensive interviews, focus groups
  • Typically, open Qs employed
  • Naturalistic (real world), uncontrolled observation
  • Potentially valid measurement (real, rich, deep data)
  • Limited generalizability
  • Analysis through coding, content analysis, grounded theory analysis
27
Q

Intensive Interview

A

• Qualitative interview is based on a set of topics to be discussed in depth, rather than standardized Q’s with a list of A’s to choose from.
– Researcher must first ID purpose of interviews, the broad concepts to be explored… the researcher asks “what do I want to learn?”
– That then guides the development of an interview protocol (or guide).
• The researcher asks Q’s from this protocol to guide discussion on the target concepts.

• Interview protocol/guide: a set of broad, open-ended Q’s with a few more directive, follow-up Qs (probes) to help interviewer cover the concepts she wishes to explore

– Qs are OPEN-ENDED and researchers begin with who, what, where, how, when and use probes & encouraging prompts to get more information.
– Interview is flexible… researcher need not adhere to protocol exactly:
• Interviewer may ask new Q’s in response to the informant’s comments in order to delve more deeply or in a new direction.
• Informant is encouraged to elaborate, clarify, & illustrate
• Interviewee given room to raise issues not anticipated by researcher.

– Researcher may revise the protocol as new information is uncovered, a process known as “reflexivity”.
– Researcher continues interviews with the same or new informants until they are sharing info she has heard before, and a pattern has emerged (saturation)
– Interviews are recorded & transcribed for later analysis
• Analysis of transcripts involves looking for themes, core messages within each informant’s story, but also common threads between different stories. Themes are given a label & explanation, a process called coding.

28
Q

Focus Groups

A

• Small group of participants (typically 7-12) are interviewed together, on a given topic, prompting a discussion, which is recorded, transcribed, and analyzed for common themes (as with individual interview)
– This is a GROUP interview FOCUSED on a topic.

• Researcher asks open-ended Qs (from an interview protocol) & guides discussion to cover target concepts. As with individual interviews, this process is flexible.
• Dialogue and non-verbal behavior of group members are recorded (audio & sometimes video) for later analysis.
• Advantages:
– Social interaction may be directly observed.
– Ideas/concepts can be developed/exchanged as group members reflect on what the others have said
– Can speed up the process of getting consensus on important themes.

• Disadvantage:
– Group dynamics can be tricky and require conscientious group facilitation. One loud voice can intimidate and drown out other voices.

• Number of groups limited by feasibility but otherwise continues until saturation* achieved.
– * Saturation: the point in data collection at which no new information is being uncovered & participant selection can be discontinued.

• Can be a useful, cheaper, quick way to assess range of opinions about topic and to hear from a “community” rather than individuals.
– Commonly used in program evaluation and needs assessment.

29
Q

Participant & Non-Participant Observation

A

Participant observation (if you’re a participant observer working with people, you have extra responsibility to treat them ethically; if you’re working with gangs, you don’t participate in crime just because the community does) vs. non-participant observation (watch, listen, and record without taking part).

-Think about ethics.
-Concern for “reactivity” effects (an internal validity threat): people act differently because they’re being watched; they perform for the researcher. Minimize reactivity by being around someone or someplace for a long time and encouraging comfort. This is called prolonged engagement.
-Thick description: a detailed description of what you see.
-Observation occurs when natural social processes are studied as they happen, in the real world, not in artificial settings or situations chosen in advance by the researcher.
· Researchers conscientiously and attentively watch & listen to people interacting and record what they see and hear
· They record details on the setting where the interaction takes place.
· Sometimes, researchers also “participate” in the actions of the community under study, (i.e., they are participant observers) and therefore also record perceptions of their own experience
· Observer may observe overtly (researcher status made known) or covertly (purpose and identity hidden, which can pose ethical problems)
· A risk: observer may change the experience or context of the participants, which may lead to reactivity effects; i.e., the participants change their behavior b/c they are being watched, which limits validity
· Accessing the population:
· Researchers may use gatekeepers, people who know the community well, to help them access target group or gain entrée into a setting.
· Access & sharing are increased because, through prolonged engagement, participants and researchers develop trust, rapport, & relationships
· these relationships impact researchers’ subjectivity & can raise ethical dilemmas.
· Instead of transcripts, data takes the form of thick description, a detailed, written account of what has been observed.

30
Q

Existing Narratives and Artifacts

A

Existing narratives and artifacts:
• We need not gather original data through observation or interviews. We can look to existing documents.
• When studying a culture (called ethnography), social setting or phenomenon, texts & artifacts produced & used by its members may be a source of data.
• Basically, any source of information that has been recorded in an open-ended way (rather than as numbers or filled bubbles) can serve as a picture of social phenomena and experiences.
• There are many different types of documents that researchers may be interested in collecting:
• e.g., diaries, newspaper articles, poems, blogs, emails, memos, photos, books, educational materials, paintings, videos, laws, written protocols & procedures, meeting minutes, case notes & client charts…
• Typically, existing narratives (just like original transcripts & written observation descriptions) are “coded,” examined for meaning and common themes, which are then described and summarized, (a process called content analysis).
• In addition to the meaning of the content, the researcher may also assess how & for whom the artifact was created, what was included & not included in the document, & how the document was used.
• NOTE: Existing records/artifacts are often analyzed in tandem with other data collected (e.g., interview transcripts).

31
Q

Content Analysis/Coding

A
  • Qualitative analysis involves closely examining and summarizing what has been observed and heard, the “content” of our thick descriptions, transcripts, & other narratives.
  • We organize what we learned from our participants into categories, combining together similar stories or bits of data (e.g., interview passages) into a “theme” or “concept”
  • A theme is comparable to “variable” in quantitative analysis.
  • This theme is then assigned a “code” - a label or name that fits the content.
  • This process is known as content analysis, coding or theme analysis: the categorizing and classifying of qualitative data.

Here’s how:

  • Data (transcripts, written observations, records…) are broken into smaller parts, examined closely, compared for similarities & differences.
  • Repeated or significant messages or themes (e.g. behaviors, symptoms, feelings, experiences, concerns) are assigned a label, and a concise conceptual definition is written specifying the meaning of the code/label
  • Liken the code to the conceptual definition of a concept established a priori in deductive research, but here, the code is made after data collection has begun & can be modified w/new data from new P’s.
  • As more themes are identified, we create an organized system or catalog to help track codes & link them to the hard data to allow easy retrieval.
  • The product of a qualitative descriptive study is a report on the shared characteristics & experiences of the P’s.
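A toy Python sketch of the cataloging step described above (the codes and passages are invented examples): it tracks how often each theme recurs and links each code back to the hard data for easy retrieval.

```python
from collections import Counter

# Hypothetical coded passages: (code/label, raw transcript excerpt).
coded = [
    ("isolation", "I never see anyone on weekends..."),
    ("stigma", "People avoid me once they find out..."),
    ("isolation", "Calls went unanswered after the diagnosis..."),
]

catalog = {}
for code, passage in coded:
    catalog.setdefault(code, []).append(passage)   # link code -> hard data

print(Counter(code for code, _ in coded))   # how often each theme recurs
print(catalog["isolation"])                 # retrieve the passages behind a theme
```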
32
Q

Grounded Theory Analysis

A

Sometimes, we want to do more than just describe the themes we’ve heard… instead, we want to explain the patterns we observe:

  • The characteristics of those who share an experience
  • The precursors (risk and protective factors) that lead to a shared experience
  • The experience’s impacts or effects
  • Grounded theory analysis aims to identify patterns in the experiences of the folks we qualitatively observe and listen to, and to inductively generate theories about cause & effect.

Here’s how:

  • Multiple thick descriptions of observations, interview transcripts, & narratives are examined for patterns & commonalities; (i.e., you look at the data carefully)
  • A model called a “working hypothesis” or “emerging theory” is developed to understand the social phenomenon.
  • Thus, along with coding themes, you make, develop & adapt a working hypothesis re associations between concepts as research progresses.
  • The model is then molded & adapted as new data is collected.
  • Researchers refine the focus of data collection to accommodate the emerging theory. Thus, the process is flexible… researchers might change interview Qs or the focus of their observations, or seek new or different participants:
  • P’s may be selected because they have something in common with those already selected and may be able to provide support for the emerging theory (theoretical sampling)
  • P’s may be selected because they are different (negative case sampling), such that researchers can best determine to whom the emerging theory applies
  • When research is saturated, the hypothesis achieves consistency & becomes a theory.
33
Q

Audit Trail

A

Written documentation of where your themes and working hypotheses come from, how they evolved. It also serves as a record of the data collecting itself.

Code notes, operational notes, theoretical notes (bottom line: you write down everything you do, lest people think it’s subjective and you made it up.)

34
Q

Summation of Qualitative Theory Analysis

A

In other words:

Exploratory, Qualitative Grounded Theory Analysis:

  • Grounded theory analysis makes & records observations, & identifies themes, which are then analyzed for causal patterns in the relationships between identified concepts or between characteristics of P’s and their experiences.
  • Observed patterns are used to compose a working hypothesis, which is then tested via constant comparison (i.e., by comparing new data to data already collected) and to the emerging theory.
  • Once a concept or theory arises in the analysis of one case, evidence of it is looked for in other cases.
  • The concept or theory may then be modified to accommodate new information.
  • Organizing/Categorizing into themes, (documented in audit trail)
  • Corroborating/Legitimating (theoretical sampling &
    negative case analysis…sends you back to top)
35
Q

Reporting Themes

A

Themes revealed through content analysis are typically presented in reports in this format:

  • Code/label: the name chosen is often a word/phrase used by participants
  • Description/definition of the code, in words
  • Quotes/stories from participant(s) to ground the concepts & illustrate their meaning
  • Commentary on the quote(s), showing how it relates to the code/theme.
  • Additionally, concept maps are often used to illustrate emerged (grounded) theories.
36
Q

Concept Maps

A

Pictures of how variables relate

Concept maps:

  • Highlight key concepts
  • Graphically represent the associations between repeated topics/themes
  • Explain or link concepts, when you have observed patterns, correlations & causal links (moving beyond description to explanation.)
  • Look like flow charts or Venn diagrams and illustrate possible causal directions between key concepts.

In concept maps:

  • Concepts, represented as boxes or circles, are connected with lines or arrows in a branched, hierarchical structure.
  • Arrows may be used to indicate the direction of influence.
  • Sometimes they appear less like a flow chart with arrows than a Venn diagram demonstrating overlapping influence.

(Sample concept map in class 11 power point)

37
Q

Unobtrusive Methods

A

Unobtrusive methods (quantitative or qualitative) study social phenomena by examining existing sources of information, rather than collecting new data. Examples of data that are already out there:

Qualitative data sources: case notes, intake forms, insurance claim forms, medical charts, diaries or journals, oral histories, blogs, books, letters, web/newspaper articles, poetry, meeting minutes, videotapes

Quantitative data sources: government-collected records and survey data; other large omnibus surveys (which collect data on a wide variety of subjects); numerical research findings

38
Q

Content Analysis

A

Using open ended records

Content Analysis (usually qualitative): the exploration and analysis of recorded human communications (e.g. oral histories, blogs or other forum postings, books, videos, laws, case notes/charts…)

Typically, existing narratives or other artifacts are “coded,” examined for meaning & common themes, which are then described & summarized.

When seeking to understand cause and effect patterns, grounded theory analysis is used.

39
Q

Demography

A

(quantitative)

Statistical study of the size, composition, & spatial distribution of human populations & how they vary
Census: the #s & characteristics of people (their race, language, nativity, family structure…) & their locations

Government records: births, deaths, marriages, recorded diseases, accidents, arrests, TANF beneficiaries, kids in foster care, unemployment claims, high school graduation, etc.

Typically, we’ll compare different geographic regions on these indicators, or we’ll track changes over time, longitudinally

40
Q

Secondary Data Analysis

A

(quantitative)

Analysis of existing quantitative data in a new way; Answering a new research Q using data you did not collect yourself or which you collected in the past.

Typically, use of large, publicly available survey data archives, which often have bigger N & better sampling than you could conduct yourself.

Examples: CPS, GSS, YRBSS, BRFSS, NHANES, PRAMS, NSDUH…

NOTE: you are not just going on the websites to uncover %s and rates - you are actually running your own statistical analyses to detect correlations between variables.

41
Q

Meta-Analysis

A

(quantitative)
Statistically analyzes patterns in the findings of multiple studies on the same or similar social phenomena.

The “data” are the summary statistics reported in each study (e.g. the means, p-values, correlation coefficients…)
That is, you do stats on the stats of published studies.

Note: reading meta-analyses can often speed up literature reviews:
An expert on the topic has already summarized many studies in one article & has drawn conclusions about the statistical patterns represented across those studies.

The end reference list should include relevant studies published up to the date of the meta-analysis and can therefore help you find relevant articles.
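A minimal sketch of the core computation under a fixed-effect model (the three studies’ summary statistics are invented): each reported effect is weighted by its precision, 1/SE².

```python
# "Stats on the stats": pool each study's reported effect size,
# weighting by precision (inverse variance).
studies = [
    {"effect": 0.42, "se": 0.15},
    {"effect": 0.10, "se": 0.08},
    {"effect": 0.31, "se": 0.20},
]

weights = [1 / s["se"] ** 2 for s in studies]   # precision weights
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")
```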

42
Q

Causal Criteria

A

Three criteria for assuming a causal effect:

  1. Time Order: IV must occur prior to DV
  2. Correlation: IV & DV must be statistically associated/correlated
  3. No Extraneous Variables: Correlation cannot be due to some other (extraneous) variable.
43
Q

Causal Criteria Strengthened By:

A

Strengthened by:
  • Identification of context
  • Identification of causal mechanism (intervening/mediating variable)
  • Replication with multiple methods, measures, control variables, samples

Causal mechanism (a.k.a. intervening variables) = the process that creates the connection between variables: sometimes measurable but often only identified by social science theory.
e.g., it is the lack of neighborhood opportunities that explains why poor kids have higher rates of delinquency.
Context = for whom, when, in what conditions the effect occurs (to whom or what the findings are generalizable)

Certain research designs may help establish causality:
-Longitudinal designs: help establish time order.
-Non-experimental (correlational, cross-sectional) designs: can use statistical controls (moderating variables) to limit or reveal extraneous variables.
-Experimental designs:
Establish time order (FIRST give the intervention, THEN measure outcomes)
-Reduce rival hypotheses (3rd variables) by controlling for natural variability in IV’s (standardizing the intervention received).

44
Q

• Internal validity threats:
o Threats with one group or two+ group w/non-random assignment:
History (External Events)

A

risk that the findings are due NOT to the IV but to something external to the Tx that occurs during the course of the study… Folks change because of something happening in society or their community.

45
Q

• Internal validity threats:
o Threats with one group or two+ group w/non-random assignment:
• Testing (Practice)

A

risk that the findings (about the change or difference in the DV) are due NOT to the IV but to P’s familiarity w/a test… Folks improve on a test because they have practiced it before.

46
Q

• Internal validity threats:
o Threats with one group or two+ group w/non-random assignment:
Maturation:

A

risk that a finding that the DV changes or differs between groups is due NOT to the IV but to P’s growing older, wiser, stronger, more experienced, or further from a crisis (or, conversely, declining naturally over time)… Folks change because time is passing.

47
Q

Internal validity threats:
o Threats with one group or two+ group w/non-random assignment:
• (Statistical) Regression

A

very low and very high scores tend to move closer to average on post-tests, so findings of DV scores dropping or rising could be due to this tendency & not to the impact of the Tx (IV)… Folks look like they are changing because their original scores were so unusual, and now they’re averaging out.

48
Q

o Threats with two or more groups:

• Diffusion/Contamination

A

risk that an observed lack of difference between 2+ Tx groups is due to info, resources, & behaviors being shared between members of different groups – esp. if they are in close proximity.

49
Q

o Threats with two or more groups:

• Compensatory Equalization

A

risk that an observed lack of difference on the DV between groups is due to the actions of an outsider (non-P, often part of the research team) who tries to make up for perceived losses for those who receive no Tx or a perceived less desirable Tx.
“Blinding” the researchers, not letting them know which treatment a P is getting, can prevent this.

50
Q

o Threats with two or more groups:

• Compensatory Rivalry

A

risk that the lack of an observed difference between Tx groups is due to the P’s getting the less desirable Tx seeking outside help to compensate, saying “I’ll show you!”
“Blinding” the participants can prevent this.

51
Q

o Threats with two or more groups:

• Resentful Demoralization

A

risk that an observed difference on the DV between groups is due to the P’s in the less-desirable Tx group performing worse because they feel bad about being in the “worse” group; they put in no effort, asking “why bother?”
“Blinding” the participants, not letting them know which Tx they are getting, or what is the hypothesis about which Tx may be better, can prevent this

52
Q

o Threats with two or more groups:

• Differential Attrition

A

risk that the observed differences between 2+ groups’ average DV scores are due to differences in each group’s drop-out rates, esp. if one group’s condition is more unpleasant and more of its P’s drop out.
Those getting the treatment with more side effects that DON’T drop out may be atypical in some way (hardier!), changing the observed outcomes.

53
Q

Construct validity threats

Attention/praise/Hawthorne

A

Occurs when attention given to the Ps as part of the experiment impacts performance on DV measure… Subject improves because (s)he feels valued.
Having an “attention-only” comparison group helps rule this threat out.

54
Q

Construct validity threats

Placebo/Subject Expectancy

A

P’s hope for improvement has a positive effect, independent of the Tx… P improves because (s)he feels hopeful.
Blinding P’s to the hypothesis or treatment group helps prevent this.

55
Q

Construct validity threats

Experimenter Expectancy/Rosenthal

A

researcher/rater subtly conveys (by gesture, voice, tone, etc.) that one treatment is better, impacting the P’s response to it… The researcher instills hope.
Blinding researchers to Tx conditions & the hypothesis helps prevent this.

56
Q

Experiments

A
Experiments are a type of explanatory study that involves taking action (giving an intervention) and later observing the consequences of that action.
In all experiments (true or quasi):
The action is the IV, the supposed causal factor. What varies is:
  • the treatment condition, when there is more than one group (e.g. intervention vs. control)
  • time, relative to treatment, when there is more than one measurement occasion (e.g., pre- vs. post-test)
The consequence (change over time in a target variable, the outcome) is always the DV.
57
Q

True Experiments: always 2+ groups, randomly assigned

A

IV (treatment (Tx) condition) is always manipulated by researcher
By manipulation, we mean that the researcher plans & controls the exposure of P’s to the treatment/procedure
A post-test measures the outcome of the Tx.
i.e., the researcher varies the IV before measuring the DV.
There may also be a pre-test, but it is not required. If so, the researcher looks for CHANGE in the DV.
2 or more groups (Tx conditions) are compared, including
An “experimental group” that receives the Tx under study
A “comparison group” that receives either:
NO treatment (then we call it a Control Group), a placebo, a wait-list control, or treatment as usual (defined in the next cards)

58
Q

Wait-List Control

A

A comparison group that will get the treatment after the experimental period is over, but first serves as a comparison to the experimental group, measuring the effects of getting no treatment.

59
Q

Treatment as Usual

A

The normal/usual standard of care for that condition.
Participants are randomly assigned to the Tx conditions.
P’s are randomly assigned to one of the groups (like flipping a coin), w/all P’s having an EQUAL chance of being assigned to each group.
Probability predicts this should assure roughly equivalent groups, that there is nothing notably different about the folks assigned to one group vs. the other.

60
Q

Selection/Treatment Interaction (Volunteer Bias):

A

extent to which you can generalize to individuals not in the study… asks what is different about the participants if they volunteer for the “treatment” (study) or if they are coerced?

61
Q

Context/Treatment Interaction (generalizability)

A

extent to which you can generalize to non-experimental conditions, to outside the artificiality of the laboratory.
We want to generalize to the real world from our experimental setting. Appropriateness of this may be limited.

62
Q

Quasi-Experimental Designs

A

Always have manipulation by researcher (i.e., a Tx)
Always use post-test (outcome measure after manipulation)
They often (but not always) have pretests, like in true experiments
P’s typically meet some selection criteria (e.g. similar in SES, diagnosis, or health) – i.e. they are part of target group for the Tx
BUT, Quasi-Experimental designs
Do not always have a comparison group
Where they DO have a comparison group, they do not have randomization
Thus, quasi designs are vulnerable to many internal validity threats:
Findings may be due to something that happened outside the Tx but you can’t measure it w/o a group to compare to (who would also be influenced by that outside thing).
Findings may be due to inherent differences between groups, if you lack equivalent (randomized and/or matched) groups
That is, you can’t rule out extraneous variables that you haven’t controlled for by making equivalent groups through randomization

63
Q

post-test

A

outcome measure after manipulation

a measurement of the DV after the experimental group has received the treatment/intervention
Multiple post-tests (called repeated measures) let you assess the persistence or longevity of the intervention’s effects.

64
Q

pretests

A

Many (not all) true and quasi experiments ALSO have pretests, which are identical measures to the post-test, but are measured prior to the treatment/intervention
Pretests allow you to confirm that your groups are similar at the outset of a study, and therefore comparable.
Randomization should have led to equivalent groups, but this confirms it.
That is, pretests let you see if the groups differ more than natural chance variation.
Also, pretests help you quantify the effects of your IV/intervention, i.e. you can measure the size of the change on DV
i.e., you can measure, by how much symptoms were reduced
Also, pretests allow you to compare each P to him/herself
i.e., pretests control for each P’s starting point/baseline.
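A minimal Python sketch of this logic (the symptom scores are hypothetical; lower = better): pretests let you quantify change from each P’s own baseline rather than just comparing post-test levels.

```python
from statistics import mean

# Hypothetical pre/post symptom scores for each participant.
treated = [{"pre": 22, "post": 12}, {"pre": 30, "post": 18}, {"pre": 25, "post": 16}]
control = [{"pre": 24, "post": 22}, {"pre": 28, "post": 27}, {"pre": 23, "post": 24}]

tx_change = mean(p["post"] - p["pre"] for p in treated)   # about -10.3: symptoms dropped
ct_change = mean(p["post"] - p["pre"] for p in control)   # about -0.7: little change
print(f"effect of Tx beyond control: {tx_change - ct_change:.1f} points")
```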

65
Q

Constructive (additive) designs

A

have one group getting one treatment, another group getting that treatment plus something additional, a third group getting the first two treatments plus something else, etc.

66
Q
Random sampling (or random selection) and randomization (or random assignment) are not the same thing.
A

Random assignment: individuals who are to participate in a study are randomly divided into an experimental group and a comparison group.

Random sampling (a tool for maximizing generalizability): individuals are randomly selected from a population to participate in a study.
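A minimal Python sketch of the distinction (the population and group sizes are hypothetical):

```python
import random

population = list(range(10_000))   # hypothetical population of IDs

# Random SAMPLING (selection): who gets INTO the study -> generalizability.
participants = random.sample(population, 100)

# Random ASSIGNMENT (randomization): once in, who gets WHICH condition
# -> roughly equivalent groups, protecting internal validity.
random.shuffle(participants)
experimental, comparison = participants[:50], participants[50:]
```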

67
Q

Blind researcher or participants (or both: double-blind)

A

A blind or blinded experiment is an experiment in which information about the test that might lead to bias in the results is concealed from the tester, the subject, or both until after the test. Bias may be intentional or unconscious. If both tester and subject are blinded, the trial is a double-blind experiment.

Blind testing is used wherever items are to be compared without influences from testers’ preferences or expectations, for example in clinical trials to evaluate the effectiveness of medicinal drugs and procedures without placebo effect, nocebo effect, observer bias, or conscious deception; and comparative testing of commercial products to objectively assess user preferences without being influenced by branding and other properties not being tested.

68
Q

Information only control

A

Not defined in the course materials. By analogy with the “attention-only” group: likely a comparison group that receives only information (e.g., educational materials about the condition) rather than the active Tx, helping rule out the possibility that improvement comes merely from receiving information rather than from the Tx itself.