Research Flashcards
Post positivism paradigm -
Definition?
Associated w which type of research?
Truth can only be approximated because of error in measurement.
More prevalent in quantitative research.
Positivism paradigm -
Definition?
Associated w which type of research?
Objective truth exists and can only be understood if directly measurable.
Tied to quantitative research.
Constructivism paradigm -
AKA?
Definition?
Associated w which type of research?
Interpretivism.
There are multiple realities or perspectives for any given reality.
Qualitative research.
Critical/ideological paradigm -
Definition?
Associated w which type of research?
- Researchers take a proactive role and confront social structures affecting oppressed groups.
- Qualitative research.
Ethics in research include? (4)
- Informed consent w right to decline.
- Risks as well as benefits.
- Human studies review board.
- Debriefing, especially if deception was used.
Nazi medical war crimes -
Ethics violated?
Deceived, exploited, and tortured prisoners in the name of research.
Milgram study -
Studied what?
Ethics violated?
- Milgram obedience study.
- Study subjects, as "teachers," shocked learners for incorrect answers; 65% of participants administered the maximum shock.
- Participants were deceived, emotionally harmed, and not debriefed.
Tuskegee study -
Studied what?
Ethics violated?
- Tuskegee syphilis study.
- Deceived participants, never telling them their correct Dx or that effective Txt (penicillin) was available once it came out.
Jewish Chronic Disease Hospital study -
Studied what?
Ethics violated?
Subjects & controls were injected w live cancer cells, but not informed.
Willowbrook study -
Studied what?
Ethics violated?
- Kids at a school for the mentally disabled were deliberately infected w hepatitis.
- Parents who wanted to enroll their kids signed consent, but were never told they could refuse or about the risks.
Legal standards for research? (2)
- Use of human studies review board.
- HIPAA protects private health information.
Human studies review board?
An institutional review board (IRB) must be used by all federally funded institutions that do research w humans, and must review any human research conducted by such institutions, even research that isn't itself federally funded.
45 CFR 46?
Code of federal regulations, title 45, part 46, contains policies to guide researchers using human subjects, including use of an institutional review board.
Independent variable?
Construct that is manipulated or controlled in some way.
Dependent variable?
The outcome variable that is checked for influence by the independent variable.
Extraneous variables?
Other variables, besides the independent variable, that could affect the dependent variable.
Confounding variable?
An extraneous variable that the experimenter has not controlled for and that affects the dependent variable.
Descriptive vs relational vs causal research Qs?
- Descriptive -examine what exists - counts, averages, descriptive stats
- Relational - relationship between variables, correlations
- Causal - cause-effect relationships
Research hypothesis?
A testable concise statement involving the expected relationship between 2 or more variables.
Directional vs nondirectional hypotheses?
Directional - indicates direction of relationship, eg positive correlation.
Nondirectional - doesn’t indicate direction of relationship
Null hypothesis?
There is no relationship between IV & DV.
Alternative hypothesis?
- used to identify extraneous variables, developed to be eliminated.
- the experimental hypothesis; there is a relationship between IV and DV.
Significance level -
AKA?
Definition?
AKA the alpha value.
The threshold the p value must meet or fall below to reject the null hypothesis (not the same thing as the p value itself).
P value?
- The probability of obtaining results at least as extreme as those observed if the null hypothesis is true, ie the risk of a false positive.
- A p value at or below the chosen alpha (.05 or .01) indicates significant results.
- Default is a two-tailed test: when alpha = .05, there is a .025 cutoff region at each tail.
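A minimal sketch of the p-value-vs-alpha decision in Python, assuming scipy is installed; the sample scores are invented for illustration:

```python
# Invented sample: a one-sample t-test; scipy returns a two-tailed p by default.
from scipy import stats

sample = [2.1, 1.8, 2.5, 1.9, 2.3, 2.0, 1.7, 2.4]
t_stat, p_value = stats.ttest_1samp(sample, popmean=0)

alpha = 0.05
if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject the null")
```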
Type I error?
- Rejecting the null when it’s true, or
- A false positive result for the experimental hypothesis.
Type II error?
- Failing to reject (ie accepting) the null hypothesis when it's false, or
- Rejecting the experimental hypothesis when it's true (a false negative).
Alpha?
The probability of a type I error.
Beta?
The probability of a type II error.
Power?
- The likelihood of detecting a significant relationship between variables when there is one.
- Power = 1 - beta, ie the probability of avoiding a type II error.
How can power be increased? (6)
- increase alpha
- increase sample size
- increase effect size
- minimize error
- use a one-tailed test
- use a parametric statistic (more powerful than a nonparametric one)
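A sketch of how these levers move power, using statsmodels' power calculator for an independent-samples t-test; the effect sizes and ns are illustrative only:

```python
# Illustrative numbers: how alpha, n, & effect size move power for an
# independent-samples t-test, via statsmodels' power calculator.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

print(analysis.power(effect_size=0.5, nobs1=50, alpha=0.05))   # baseline
print(analysis.power(effect_size=0.5, nobs1=50, alpha=0.10))   # higher alpha
print(analysis.power(effect_size=0.5, nobs1=100, alpha=0.05))  # bigger n
print(analysis.power(effect_size=0.8, nobs1=50, alpha=0.05))   # bigger effect

# Solve for the n per group needed to reach power = .80
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80))
```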
Probability vs nonprobability sampling?
- Probability sampling - all persons in a known population have a chance of being selected, therefore more likely to reflect the whole population.
- Non-probability sampling - accessing samples of convenience.
Probability sampling methods? (5)
- simple random
- systematic
- stratified random
- cluster
- multistage
Non-probability sampling methods (3)?
- Convenience
- Purposeful
- Quota
Simple random sampling -
Type?
Definition?
- Probability sampling method.
- Every member of the population has an equal chance of being selected.
Systematic sampling -
Type?
Definition?
- Probability sampling method.
- Every nth element is chosen.
Stratified random sampling -
Type?
Definition?
- Probability sampling method.
- A pop is divided into subgroups (strata), eg by gender or race, and samples are drawn randomly from each subgroup.
Cluster sampling -
Type?
Definition?
Caveat?
- Probability sampling method.
- Identify/list existing groups
- Take a random sample of groups
- No sampling of subjects within groups
- Less representative than other methods.
Multi-stage sampling -
Type?
Definition & examples?
- Probability sampling method.
- Commonly builds on cluster sampling.
- Might include a 2-stage random sample (eg randomly select 60 schools, then 10 classes from each school).
- Might include a 3-stage random sample (eg randomly select 200 school districts, then 20 schools from each district, then 10 classes from each school), and so forth.
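A toy sketch of three of the probability methods above in Python w numpy; the population and strata are invented for illustration:

```python
# Invented population of 1,000 member IDs: simple random, systematic,
# & stratified random sampling w numpy.
import numpy as np

rng = np.random.default_rng(42)
population = np.arange(1000)

# Simple random: every member has an equal chance
simple = rng.choice(population, size=50, replace=False)

# Systematic: every nth element after a random start
n = len(population) // 50
start = rng.integers(n)
systematic = population[start::n]

# Stratified random: random draws from each subgroup (stratum)
strata = {"A": population[:600], "B": population[600:]}
stratified = np.concatenate(
    [rng.choice(members, size=25, replace=False) for members in strata.values()]
)
```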
Convenience sampling - Type? Definition? (2) Caveat? Example?
- Non probability sampling.
- An easily accessible population.
- Most common method.
- Most likely doesn’t fully represent the population of interest.
- Eg survey clients willing to participate.
Purposeful sampling?
- Non probability sampling.
- Select a sample from a population which will be most informative about a topic of interest.
- Participants are selected because they represent needed characteristics
Quota sampling?
- Non probability sampling.
- Similar to cluster & stratified, but no randomization.
- Draw the needed # of participants with the needed characteristics (eg race, gender) from the convenience sample.
Randomization -
Purpose?
2 Types?
- Helps to maximize the credibility & generalizability of a study’s findings.
- random assignment
- random selection.
Random selection -
Definition?
Related to what type of validity?
- Every member of the population has an equal chance of being selected.
- Closely related to external validity.
Random assignment
Definition?
Related to what type of validity?
- Randomly assigning participants to different groups. Helps ensure groups are equivalent & that any remaining differences are due to chance.
- Closely related to internal validity.
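Random assignment itself is a one-line shuffle; a sketch w numpy, using hypothetical participant IDs:

```python
# Hypothetical participant IDs randomly assigned to two equal groups.
import numpy as np

rng = np.random.default_rng(7)
participants = [f"P{i}" for i in range(20)]

shuffled = rng.permutation(participants)
treatment, control = shuffled[:10], shuffled[10:]
print(sorted(treatment), sorted(control))
```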
Experimental vs control groups?
- An experimental or treatment group is receiving active treatment.
- A control group is a group w similar characteristics that does not receive the experimental treatment.
Types of control groups? (3)
- Wait list
- Placebo
- Treatment as usual (TAU)
Blind study?
Participants aren’t aware whether they are getting the experimental treatment.
Double Blind study?
Reduces risk of?
- Neither experimenter nor Participants know who is getting the experimental treatment.
- Reduces placebo effect & researcher bias.
Placebo effect?
% showing placebo effect?
- Positive effects without treatment.
- 20-30% of participants may show placebo effect.
Internal validity -
Definition?
What strengthens it? (1)
- Means change in the DV is due to the IV.
- Control of extraneous variables strengthens it.
Some threats to internal validity? (10)
- History
- Selection
- Statistical regression
- Testing - learning the test
- Instrumentation
- Attrition
- Maturation
- Diffusion of treatment - groups talk to each other about their txt
- Experimenter effects
- Subject effects
Threat to internal validity- History?
Extraneous events occurring during the expt, inside or outside the study.
Threat to internal validity- Selection?
Group differences exist before the intervention due to lack of random assignment, eg a co-occurring variable that affects the DV.
Threat to internal validity- Statistical regression? (3)
- Statistical phenomenon of regression toward the mean.
- Scores of participants who were selected because of their extreme score (eg very depressed) are affected.
- May look like improvement or worsening.
Threat to internal validity- Testing? (4)
- An Issue when pretests are involved.
- Practice effects
- memory effects
- developing familiarity w the test
Threat to internal validity- instrumentation?
Examples?
- Changes in the instrument affect results.
- Eg paper, computer, evaluator.
Threat to internal validity- attrition? (2)
- Individuals systematically drop out.
- Esp problem for longitudinal studies.
Threat to internal validity- maturation? (3)
- Changes in a participant over time affect the DV.
- Tend to be normal developmental changes.
- Includes fatigue.
Threat to internal validity- diffusion of treatment?
Problem when groups have contact & effects of an intervention are felt in another group.
Threat to internal validity- experimenter effects?
2 types?
- Bias of the investigator affects participants.
- Halo effect.
- Hawthorne effect.
Halo effect?
The investigator's subjective perception of one characteristic is generalized to perceptions of other traits.
Hawthorne effect -
Definition?
AKA?
- The presence of an investigator influences responses independent of any intervention.
- AKA reactivity.
Threat to internal validity- subject effects?
Participants pick up cues (ie demand characteristics).
External validity -
Definition?
Investigators must?
- The ability to generalize results to a larger group.
- Investigators must describe participants, variables, procedures & settings so readers can ascertain generalizability.
Threats to external validity? (5)
- Novelty
- Experimenter
- Measurement of the DV
- Time of measurement by treatment
- History by treatment effects
External validity threat - novelty effect?
A txt produces positive results just because it is novel to participants.
External validity threat - experimenter effect -
Definition?
2 types?
- Same as for internal threat: Bias of the investigator affects participants.
- Halo effect.
- Hawthorne effect.
External validity threat - history by txt effect?
An experiment is conducted in a time period full of contextual factors that can’t be duplicated.
External validity threat - measurement of the DV?
Similar to instrumentation threat, the effectiveness of a txt may depend on the type of measurement used.
External validity threat - time of measurement by txt effect?
Timing of post test may influence post test results.
Four main types of research?
- Quantitative
- Qualitative
- Mixed method
- Single subject
Quantitative research - definition? (4)
- Attempts to capture the relationship between 2 things that can be measured numerically.
- Tests a hypothesis.
- Descriptive or causal relationship.
- Results are given in numbers and in terms of statistical significance.
Qualitative research -
- definition? (2)
- data? (2)
- sampling? (1)
- a couple of types?
- Answers Qs about how a phenomenon occurs.
- Greater subjectivity.
- Data in words, rather than numbers.
- Data include interviews, field notes, photos, video, artifacts.
- Sampling usually not randomized.
- Includes case studies, policy evaluation.
Mixed method research -
Definition?
2 pros?
1 con?
- Mix of the qualities of quantitative & qualitative research.
- Can strengthen what one method alone can provide.
- Results may be more generalizable.
- Can be time consuming.
Two types of Mixed method research?
- Concurrent design
- Sequential design
Mixed method research - Concurrent design -
Definition?
AKA?
- Qualitative & quantitative data are collected at the same time.
- AKA triangulation.
Mixed method research - Sequential design -
Definition?
Two types?
- Either qualitative or quantitative data are collected first.
- Exploratory - qualitative first.
- Explanatory - quantitative first.
Single subject research design? (SSRD) (3)
- Usually quantitative.
- Measure how receiving or not receiving txt affects a single subject or a group who can be treated as a single subject.
- Often behavioral.
Specialized research designs that can be both quantitative or qualitative? (6)
- Descriptive
- Longitudinal
- Cross-sectional
- Survey
- Action research
- Pilot study
Descriptive research? (3)
Example?
- Describes a phenomenon
- No intervention
- No causal info
- Eg buying habits
Longitudinal research?
Definition? (2)
Limitations? (3)
- Repeated assessments over time
- Track pattern or development
- Limitations: evaluation costs, cohort effects, attrition
Cross-sectional research?
Limitations? (2)
- Examines different groups (w similar characteristics) that differ on the variable of interest (eg age) at a particular point in time
- Limitation: comparisons can only be inferred since the same individuals are not being studied, so the developmental changes observed may not be real changes
- Limitation: different age groups may have cohort differences, eg historical experiences
Survey research?
Definition?
Includes? (6)
Caveats? (2)
- Select a sample, administer questions
- Includes written, oral, questionnaires, surveys, interviews, or written statements from participants
- Surveys are only as good as their design
- Capabilities of subjects must be considered
Action research -
Definition? (2)
Example?
- To improve your own practice or organization.
- To test new approaches, theories, ideas, teachings
- Eg needs assessment
Pilot study -
Definition?
Advantages? (3)
Smaller version of a study used to assess feasibility of larger study
Advantages: increase likelihood of success, identify problems, opportunity to revise
3 categories of quantitative research design?
- Nonexperimental
- Experimental
- SSRDs- single subject research designs (may contain qualitative components)
Nonexperimental research designs?
Category?
Definition? (2)
- Quantitative
- Exploratory & descriptive, no interventions
- Observe & outline the properties of a variable
Experimental research designs?
Category?
All have in common? (3)
- Quantitative
- Involve an intervention in which conditions & variables are manipulated
- Goal - assess cause & effect relationships
- Random assignment is necessary for most experimental designs
Single subject research designs? (SSRDs)
Category?
Definition?
- Primarily quantitative, may contain qualitative components
- Measure behavioral &/or attitudinal changes across time for an individual or a few individuals
4 types of Nonexperimental research designs?
- Descriptive
- Comparative
- Correlational
- Ex post facto design
Descriptive research design?
Category & Type of research design?
Definition?
2 kinds?
- Quantitative Non experimental research design
- Thoroughly describing a variable at one time or over time
- Simple descriptive designs & Longitudinal designs
Simple descriptive designs -
Definition?
What type of research design? (2)
A special type of simple descriptive design?
- 1 shot surveys of a variable
- Quantitative Nonexperimental, descriptive design
- A special type of simple descriptive design: cross sectional
Cross sectional designs - Category & type of research design? A special type of? Definition? Example
- Quantitative non experimental, descriptive research design
- A special type of simple descriptive design
- Involve different groups of participants studied at the same time
- eg the degree of financial support given by alumni who graduated 1yr, 5 yrs, 10yrs ago
Longitudinal designs?
What category & type of research design?
3 kinds?
- Quantitative nonexperimental, descriptive design
- 3 kinds: trend, cohort, panel.
Trend study -
Category & Type of research design?
Definition?
- Quantitative nonexperimental, descriptive, longitudinal design
- Involves assessing the general population over time w new individuals sampled each time data are collected
Cohort study -
Category & Type of research design?
Definition?
Example?
- Quantitative, nonexperimental, descriptive, longitudinal design
- Assessing the same population over time.
- A cohort is a group that experiences some type of event (typically birth) in a selected time period. The cohort may be compared over time to another cohort or differing group that did not share the event or exposure; these studies compare the lives of the differing groups to draw conclusions.
- Example: graduates of the same age, from different colleges, w the same degree are studied every 5 years on how they have progressed.
Panel study?
Category & Type of research design?
Definition?
- Quantitative, nonexperimental, descriptive, longitudinal design
- Panel studies measure the same sample of respondents at different points in time.
- Similar to cohort studies; often organized around such things as age bands or a common experience (eg first births).
- May include a much smaller group & still maintain national representation.
Comparative research design -
Category & Type of research design?
Definition?
Example?
- Quantitative nonexperimental design
- Allows researcher to say there is a difference between groups, but cannot say it’s causative
- Eg racial differences in use of MH services
Correlational research design -
Category & Type of research design?
Definition?
Possible values?
Quantitative nonexperimental design
Describes strength & direction of a relationship between 2 variables
r = .5 is a moderate positive relationship
r = 0 is no relationship
r = ±1 is a perfect correlation
Coefficient of determination?
Square the correlation coefficient
r=.5 squared is .25 or 25% shared variance between the 2 variables
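A sketch of r & r² in Python w numpy; the paired scores are invented:

```python
# Invented paired scores: Pearson r & the coefficient of determination r^2.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([2, 1, 4, 3, 7, 5, 8, 6])

r = np.corrcoef(x, y)[0, 1]   # correlation coefficient
r_squared = r ** 2            # proportion of shared variance

print(f"r = {r:.2f}, r^2 = {r_squared:.2f} ({r_squared:.0%} shared variance)")
```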
Ex post facto research design? Category & Type of research design? AKA? Definition? Example?
Quantitative nonexperimental design
AKA causal-comparative design
Data already collected; IV cannot be manipulated; no randomization; Looks at how an IV potentially affected a DV
Eg using archival data
Experimental research designs - 3 general categories of design?
Within subject
Between groups
Split plot
Within subject design?
Experimental research design
Assess changes within participants in a group as they experience an intervention
Could be before & after intervention (repeated measures)
Could be serial interventions
Between groups design?
Experimental research design
Looking at effects of intervention between 2+ groups
One group is a control
Split plot design?
Experimental research design
Assess an intervention on a whole group & assess sub-interventions on subgroups
Eg a mentoring club’s effect on careers, with focus on resumes, interviewing, or shadowing
3 degrees of experimental research design?
Pre-experimental
True-experimental
Quasi-experimental
See table 8.3
Pre-experimental research designs?
3 types?
Often don’t use control groups; no random assignment
One-group posttest-only design
One-group pretest-posttest design
Nonequivalent groups posttest-only design
One-group posttest-only design?
Experimental research designs, Pre-experimental
A group receives an intervention and change is measured
One-group pretest-posttest design?
Experimental research designs, Pre-experimental
A group is evaluated before & after an intervention
Nonequivalent groups posttest-only design?
Experimental research designs, Pre-experimental
No attempt made to use equivalent groups
An experimental group & a control group are evaluated after intervention
Experimental research designs - True-experiment?
AKA randomized experimental designs
At least 2 groups for comparison & random assignment
What differentiates true & quasi experiments?
Random assignment, usually
Five types of True-experiment research designs?
Randomized pretest post test control group design
Randomized pretest post test comparison group design
Randomized post test only control group design
Randomized post test only comparison group design
Solomon four-group design
Randomized pretest post test control group design?
True-experiment research design
2 groups, 1 is a control
Randomized pretest post test comparison group design?
True-experiment research design
2+ groups
Each receives a distinct intervention
Randomized post test only control group design?
True-experiment research design
treatment & control group
Randomized post test only comparison group design?
True-experiment research design
At least 2 groups for comparison & no control group
Solomon four-group design?
- True-experiment research design.
- Rigorously assesses the effects of the pretest & the intervention.
- 4 groups:
- Pretest, intervention, post test
- Pretest, post test, no intervention
- Intervention, post test
- Post test
Quasi experimental designs?
No random assignment.
Existing, non equivalent groups - nested data (classrooms, counseling groups) or naturally occurring groups
2 types of Quasi experimental designs?
Nonequivalent groups pretest posttest control or comparison group designs
Time series design
Nonequivalent groups pretest posttest control or comparison group designs?
Quasi experimental design
Keep intact groups
Administer pretest, intervention to 1 group or to at least 2 comparison groups, then give post test
Time series design?
Quasi experimental design
Repeatedly measure before & after an intervention with 1 group only or using a control group. Eg:
O O O X O O O
O O O O O O
Measures are at equal intervals w same tests
SSRDs- single subject research designs - Category of research? Definition? How represented? Example?
- Quantitative research, may contain qualitative components
- Repeated measures over time for an individual or group
- A = baseline data, B = treatment data, C = a second, different treatment
- Eg Assess effectiveness of programs
3 designs of SSRDs- single subject research designs?
- Within series designs - effectiveness of 1 intervention or program (A-B or A-B-C)
- Between series designs - effectiveness of 2+ interventions for a single variable
- Multiple baseline designs - assess data for a target behavior across multiple individuals, multiple environments/places, or multiple behaviors
SSRDs - Within series designs?
4 types
- Effectiveness of 1 intervention or program
- A-B designs
- A-B-C designs - measures interaction among treatment components
- Changing criterion designs - criterion for success becomes more restrictive to see how much incentive is needed for maximum performance
- Parametric designs - treatments are compared across phases
Frequency distribution?
- The number of observations per possible response (or equal sized intervals of responses) for a variable
- Rows for each response, columns for freq counts, %, cumulative %
Frequency polygon?
A line graph of the frequency distribution
X = possible values of the variable
Y = frequency count for each value
Data are ordinal, interval or ratio
Histogram?
Connected-bar graph showing frequencies of values for a variable
Data are ordinal, interval or ratio
Bar graphs?
Data are nominal
Separated bars for distinct responses
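A sketch of building a frequency distribution & histogram in Python, assuming matplotlib; the responses are invented:

```python
# Invented ordinal responses: frequency distribution table, then a histogram.
from collections import Counter
import matplotlib.pyplot as plt

scores = [3, 5, 4, 3, 2, 5, 4, 4, 3, 5, 1, 4]

freq = Counter(scores)                    # response -> frequency count
for response, count in sorted(freq.items()):
    print(response, count, f"{count / len(scores):.0%}")

plt.hist(scores, bins=range(1, 7))        # histogram: connected bars
plt.xlabel("Response value")
plt.ylabel("Frequency")
plt.show()
```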
Measures of central tendency?
Mean
Median
Mode
Mean?
Average
An outlier can inflate/deflate the mean
Median -
How compute?
When use?
The middlemost score, 50% of the scores are above, 50% are below. If the number of scores is even, take the average of the 2 middlemost scores
Use when outliers are present or data is skewed
Mode?
Types?
Most frequently occurring score, not influenced by extreme scores
If there are 2 most frequent scores, it’s bimodal
More than 2, multimodal
Variability?
3 types?
How dispersed scores are from a measure of central tendency
Range
Standard deviation
Variance
Range?
Largest value - smallest value + 1 place value
Can be affected by outliers
Standard deviation -
Definition?
Percentages under the curve?
Person scores at +2 SD is at what percentile?
- How dispersed scores are around the mean. Most frequently used indicator of variability
- Mean to +1 SD = 34% of the normal curve; +1 to +2 SD = 14%; +2 to +3 SD = 2%
- ±1 SD = 68%; ±2 SD = 95%; ±3 SD = 99%
- A person scoring at +2 SD is at roughly the 98th percentile (50 + 34 + 14; 97.7th exactly)
Variance?
Standard deviation squared
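The central tendency & variability cards above in code, using Python's statistics module; the scores are a toy list w one outlier:

```python
# Toy scores (98 is an outlier): central tendency & variability measures.
import statistics as st

scores = [70, 72, 75, 75, 78, 80, 85, 98]

print(st.mean(scores))                 # mean: pulled upward by the outlier
print(st.median(scores))               # median: resistant to the outlier
print(st.mode(scores))                 # mode: most frequent score (75)
print(max(scores) - min(scores) + 1)   # range, w the deck's +1 convention
print(st.stdev(scores))                # sample standard deviation
print(st.variance(scores))             # variance = SD squared
```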
Skewness?
- an asymmetrical distribution
For positively (vs negatively) skewed - where are the outliers? Where are most scores? Where is the tail? What is the order of mode, median, mean? Skewness index?
- Positive skew: outliers at the high end; most scores at the low end
- Tail points to the right
- Order (low to high): mode, median, mean
- Skewness index: +0.01 to +1.00
- Negative skew is the mirror image
Skewness index looks like?
+ for positive skew, - for negative skew
Not skewed = -1.00 to +1.00
Kurtosis?
- how peaked or flat a curve is
3 types of kurtosis?
- Mesokurtic - normal curve, kurtosis = -1 to +1
- Leptokurtic - tall & thin, kurtosis > 1
- Platykurtic - low & wide, kurtosis < -1
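Skewness & kurtosis indices can be checked w scipy; the data here are randomly generated, and scipy reports excess kurtosis (normal curve = 0), matching the -1 to +1 ranges above:

```python
# Generated data: skewness & excess kurtosis (normal curve = 0) w scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
symmetric = rng.normal(size=1000)          # roughly mesokurtic, skew ~ 0
right_skewed = rng.exponential(size=1000)  # tail to the right, skew > 0

print(stats.skew(symmetric), stats.skew(right_skewed))
print(stats.kurtosis(symmetric))           # excess kurtosis, ~0 if mesokurtic
```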
Inferential statistics -
Definition?
2 types?
- Infers conclusions about a population from sample data. Based on the probability of particular differences occurring
- Parametric & Nonparametric
Statistical assumptions for parametric statistics? (3)
Normal distribution, approximately
Randomly selected samples
Interval or ratio scales for each variable
When use Nonparametric statistics?
When statistical assumptions aren't met.
Can be used even if they are met, as nonparametric stats are robust.
Correlation coefficient indicates what 3 things?
Presence of a relationship
Direction (+ or -)
Strength - the higher the absolute value
NOT causation
Types of correlation?
Pearson
Spearman rho - for rank-ordered variables
Point biserial - comparing 1 continuous & 1 dichotomous variable
Biserial - comparing 1 continuous & 1 artificially dichotomized variable
Perfect correlation?
+1.00 or -1.00
Spurious correlation?
Another variable is really responsible for the relationship
Attenuated correlation?
Measures are unreliable and show a low relationship
Restriction of range problem w correlation?
sample isn’t representative
Coefficient of determination?
The percent of variance shared in correlated variables is the square of the correlation
Regression -
Definition?
3 types?
- Regression studies are prediction studies and are extensions of correlational studies
- Bivariate, Multiple, Logistic regression
Bivariate regression?
How well scores from IV (predictor variable) predict scores on DV (criterion variable)
Multiple regression?
- More than 1 IV/predictor variable is used to predict DV/criterion variable
- Each predictor is weighted to determine its contribution
- Generally, the more predictor variables, the stronger the prediction
Logistic regression?
- How well scores from 1 or more IVs (predictor variable) predict scores on DV (criterion variable)
- DV is dichotomous
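A bivariate regression sketch w scipy; the predictor & criterion scores are invented:

```python
# Invented scores: bivariate regression predicting a criterion (DV)
# from a single predictor (IV) w scipy.
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]          # predictor (IV)
exam_score = [52, 55, 61, 60, 68, 70, 75, 79]     # criterion (DV)

result = stats.linregress(hours_studied, exam_score)
print(f"score = {result.slope:.1f} * hours + {result.intercept:.1f}")
print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")

predicted = result.slope * 10 + result.intercept  # prediction for 10 hours
```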
Parametric statistics -
Used when?
List of 6?
- Used when statistical assumptions are met.
- T-test
- ANOVA
- Factorial ANOVA
- ANCOVA
- MANOVA
- MANCOVA
Nonparametric statistics -
Used when? (3)
List of 6?
- Used when:
- can make only a few assumptions about the distribution of scores
- data is nominal or ordinal
- interval or ratio data is skewed
Chi-square
Mann-Whitney U
Kolmogorov-Smirnov Z
Kruskal-Wallis
Wilcoxon’s signed ranks
Friedman’s ranks
T-test - Para/Nonparametric? Definition? 2 kinds? Score looks like?
- Parametric
- Compares 2 means for 1 DV
- Independent t-test - 2 separate groups compared on 1 DV
- Dependent t-test - repeated measures w same group or paired-subject groups
- Score is a t ratio
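Both kinds of t-test in scipy; all scores invented:

```python
# Invented scores: independent vs dependent (paired) t-tests in scipy.
from scipy import stats

group_a = [23, 25, 28, 30, 22, 26]   # 2 separate groups, 1 DV
group_b = [31, 29, 33, 35, 30, 32]
t, p = stats.ttest_ind(group_a, group_b)    # independent t-test

pre = [10, 12, 9, 14, 11]            # same group measured twice
post = [13, 15, 10, 17, 14]
t2, p2 = stats.ttest_rel(pre, post)         # dependent (paired) t-test
print(t, p, t2, p2)
```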
ANOVA - Para/Nonparametric? Definition? Example? Score looks like? Post hoc analysis?
- Parametric
- At least 1 IV with 3 or more groups/levels
- Eg income with 3 ranges defining 3 levels
- Score is an F ratio
- post hoc analysis allows examination of every possible pairing of groups after finding main effects
Factorial ANOVA?
Para/Nonparametric?
Definition?
- Parametric
- More than 1 IV; not trying to control statistically for a covariate
- Yields both main effects & interaction effects, using post hoc analysis
Analysis of covariance, ANCOVA -
Para/Nonparametric?
Definition?
Example?
- Parametric
- An IV that is a covariate must be statistically controlled for in order to look at the relationship of other IVs and the DV.
- Eg removing the effects of gender while looking at the relationship between income & work satisfaction
Multiple Analysis of variance, MANOVA -
Para/Nonparametric?
Definition?
- Parametric
- Similar to ANOVA, but w multiple dependent variables
Multiple Analysis of covariance, MANCOVA -
Para/Nonparametric?
Definition?
- Parametric
- Similar to ANCOVA, but w multiple dependent variables
Chi-square -
Para/Nonparametric?
Definition?
Example?
- Nonparametric
- Used w 2+ categorical or nominal variables, where each variable contains 2+ categories. All scores must be independent - the same person cannot be in multiple categories of the same variable.
- Observed frequencies are compared to expected frequencies.
- Eg decision to quit counseling (Y/N) by type of counseling (CBT, Rogerian)
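A sketch of the card's own example as a chi-square in scipy; the frequencies are invented:

```python
# Invented 2x2 frequencies: quit counseling (Y/N) by type (CBT, Rogerian).
from scipy import stats

observed = [[12, 30],   # quit:   CBT, Rogerian
            [48, 40]]   # stayed: CBT, Rogerian

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print(expected)         # expected frequencies under independence
```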
Mann-Whitney U -
Para/Nonparametric?
Definition?
Example?
- Nonparametric
- analogous to T-test; uses ordinal data; compares ranks from 2 groups
- eg students in grades 9-12 (IV) with educational aspirations (HS, BA, MA) as DV.
Kolmogorov-Smirnov Z -
Para/Nonparametric?
Definition?
- Nonparametric
- analogous to t-test and U test, but for N less than 25.
Kruskal-Wallis -
Para/Nonparametric?
Definition?
- Nonparametric
- analogous to ANOVA; IV has 3+ groups/levels
Wilcoxon’s signed ranks -
Para/Nonparametric?
Definition?
Example?
- Nonparametric
- analogous to dependent t-test
- eg to assess changes in perceived level of competence before & after a training
Friedman’s rank test -
Para/Nonparametric?
Definition?
- Nonparametric
- analogous to the dependent t-test & Wilcoxon's signed ranks
- may be used w 2+ comparison groups (repeated measures)
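The nonparametric tests above map onto scipy functions; a sketch w invented data:

```python
# Invented data: the scipy calls for the nonparametric tests above.
from scipy import stats

g1, g2, g3 = [1, 3, 5, 7], [2, 4, 6, 9], [5, 6, 8, 9]
pre, post = [10, 12, 9, 14], [12, 15, 11, 13]

print(stats.mannwhitneyu(g1, g2))           # 2 independent groups (t-test analog)
print(stats.kruskal(g1, g2, g3))            # 3+ groups (ANOVA analog)
print(stats.wilcoxon(pre, post))            # paired scores (dependent t-test analog)
print(stats.friedmanchisquare(g1, g2, g3))  # 3+ repeated measures
```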
Factor analysis?
Factors?
Reduces a larger number of variables to a smaller number of groups or factors.
Factors explain covariation among variables; each factor explains a percentage of variance.
Example: checking construct validity in a test which has several items that are supposed to be about the same construct.
2 forms of factor analysis?
Exploratory factor analysis, EFA
Confirmatory factor analysis, CFA
Exploratory factor analysis, EFA
The 2 steps?
The 3 types within the 1st step?
The 2 types within the 2nd step?
- Extraction of factors - clumping factors of interest
- principal axis factoring
- principal components analysis
- maximum likelihood method
- Factor Rotation & interpretation of those factors - cleaning them up
- orthogonal - factors are uncorrelated
- oblique - factors are correlated
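An EFA-style sketch using scikit-learn's FactorAnalysis, assuming a recent sklearn that supports the varimax rotation option; the item scores are random placeholders, not real scale data:

```python
# Random placeholder item scores: extract 2 factors, then an orthogonal
# (varimax) rotation; a real EFA would use actual scale items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
items = rng.normal(size=(200, 6))   # 200 respondents x 6 items

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
print(fa.components_)               # loadings: how items clump onto factors
```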
Confirmatory factor analysis, CFA?
Confirming the EFA results. Most common method is the maximum likelihood method
Confirmatory factor analysis - fit index?
After attaining a factor solution, one tests how the overall model fits the data.
Meta-analysis?
Used to combine & synthesize results of numerous similar studies for particular outcome or DVs.
Steps in Meta-analysis? (5)
- Establish criteria based on operational definitions
- Locate studies based on criteria
- Consider possible IVs
- Calculate an effect size on any outcome variable in the study. The DV in a meta-analysis is the effect size of the outcome.
- Effect sizes are grouped according to IV of interest & compared & combined across studies
Effect size?
- A measure of the strength of the relationship between 2 variables in a population.
- An effect size expresses the increase or decrease in achievement of an experimental group (of students) in standard deviation units. If an effect size for a study is 1, the average score of students in the experimental group is 1 standard deviation higher than the average score of the control group.
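Cohen's d, the SD-unit effect size described above, computed by hand w numpy; the group scores are invented:

```python
# Invented group scores: Cohen's d = mean difference in pooled-SD units.
import numpy as np

experimental = np.array([85, 88, 90, 84, 87, 91])
control = np.array([80, 82, 79, 83, 81, 78])

n1, n2 = len(experimental), len(control)
pooled_sd = np.sqrt(((n1 - 1) * experimental.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))

d = (experimental.mean() - control.mean()) / pooled_sd
print(f"d = {d:.2f}")   # d = 1 would mean the groups differ by a full SD
```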
Qualitative research design - definition?
Involves the study of processes, participants’ meaning of phenomena, or both, usually in a natural setting
Qualitative research design - possible characteristics? (15)
- Process or sequence of a phenomenon
- Evolving theory
- Thick description
- Effect of researchers’ participation
- Maximize validity
- Participants’ narratives
- Purposive sampling
- In depth and detail
- Contextual
- Discovery-oriented
- Creation of meaning
- Fieldwork
- Inductive
- Document analysis
- Reflexivity
Qualitative research - 7 major research traditions?
- Case study
- Phenomenology
- Grounded theory
- Consensual qualitative research, CQR
- Ethnography
- Biography
- Participatory action research
Case study - Type of research? Participants roles? What is a case? Example?
- More often qualitative, can be quantitative.
- Participants are active in data collection.
- A case is a distinct system of an event, process, setting, or individual or small group of individuals.
- Eg: implementation of No Child Left Behind laws, or exploration of a counseling process.
Phenomenology -
Type of research?
Definition?
Try to assess?
- Qualitative
- Used to discover the meaning or essence of participants’ lived experiences.
- Assess participants' intentionality (ie internal experience of being conscious of something).
Grounded theory?
Type of research?
Definition? (3)
- Qualitative
- generate theory grounded in data from participants’ perspectives
- Inductive
- Theories often explain a process or action
Consensual qualitative research, CQR?
Type of research?
Definition? (5)
- Qualitative
- Combines phenomenology & grounded theory
- selects participants who are very knowledgeable about a topic, remaining close to data, without major interpretation, with hope of generalizing to larger population
- researchers often reflect on their own experiences when developing interview Qs
- consensus is key
- shared power
Ethnography -
Type of research?
Definition? (2)
Example?
- Qualitative
- researcher describes & interprets a culture - process & experience of culture, socialization process
- participant observation is used
- eg studying a local community re its methods for addressing MH concerns
Biography -
Type of research?
Definition? (3)
Methods?
- Qualitative
- identify personal meanings individuals give to their social experience
- gathers stories, explore meanings, examine fit w broader social/historical context
- methods: life history, oral history
Participatory action research -
Type of research?
Definition? (3)
Example?
- Qualitative
- Focuses on change of the participants & researcher as a result of qualitative inquiry
- goals are emancipation & transformation
- researchers critically reflect on power of research as a change agent
- Eg working w a community agency & its clients to improve the agency
Purposive/purposeful sampling?
- Obtain info rich cases for maximum depth & detail
- seek sample sizes that reach saturation (no new data)
Purposive/purposeful sampling - 15 types?
- Convenience
- Maximum variation
- Homogeneous
- Stratified purposeful
- Purposeful random
- Comprehensive
- Typical case
- Intense case
- Critical case
- Extreme/deviant case
- Snowball/chain/network
- Criterion
- Opportunistic/emergent
- Theoretical
- Confirming/disconfirming
Convenience?
Type of sampling?
- Based on availability
- Least desirable/trustworthy
- Purposive/purposeful sampling
Maximum variation?
Type of sampling?
- Purposive/purposeful sampling
- Selecting participants to capture the widest range of characteristics & experiences
- eg teachers of diverse backgrounds from diverse types of HSs, w training in various grade levels & forms of math
Homogeneous?
Type of sampling?
- Purposive/purposeful sampling
- selecting participants w theoretically similar experiences
Stratified purposeful?
Type of sampling?
AKA?
- Eg 6 samples of teachers for each type of math course offered in HS
- Purposive/purposeful sampling
- AKA samples within samples
Purposeful random?
Type of sampling?
- Identifying a sample & randomly selecting participants from it.
- Purposive/purposeful sampling
Comprehensive?
Example?
Useful when?
Type of sampling?
- All teachers at a particular school.
- Useful when a case has few participants.
- Purposive/purposeful sampling
Typical case?
Type of sampling?
- Selecting the average participant w typical experience
- Purposive/purposeful sampling
Intense case?
Example?
Type of sampling?
- Identifying those w intense but not extreme experience
- Teachers of advanced math courses
- Purposive/purposeful sampling
Critical case?
Type of sampling?
- Sampling those w intense or irregular experience
- Purposive/purposeful sampling
Extreme or deviant?
Type of sampling?
- Looking for the boundaries of difference. May look at poles of experience, or just 1 pole
- Purposive/purposeful sampling
Snowball/chain/network ?
Used when?
Type of sampling?
- Participants are found by obtaining recommendations of earlier participants
- Used when sample is difficult to obtain
- Purposive/purposeful sampling
Criterion?
Type of sampling?
- Selecting cases that meet criteria
- Purposive/purposeful sampling
Opportunistic/emergent?
Type of sampling?
- Changing one’s research design to include a particular individual
- Purposive/purposeful sampling
Theoretical?
Type of sampling?
- As theory evolves, sampling those who best contribute info
- Purposive/purposeful sampling
Confirming/disconfirming case?
Type of sampling?
- Including cases that add depth or provide exceptions
- Purposive/purposeful sampling
Qualitative data collection methods -
Best practice?
The 3 methods?
- Use multiple methods
- Interviews
- Observations
- Unobtrusive methods
Qualitative data collection - interviews -
3 types of structure?
2 types of subjects?
- Unstructured - no preset Qs
- Semistructured - flexibility to add/delete Qs
- Structured - standardized
- Individuals - sensitive topics
- Focus groups of 6-12 - get social interaction
Qualitative data collection - observational methods -
Gather? (1)
Reflect on? (1)
Use? (4)
- Gather description of setting/context
- Reflect on content & process
- Use:
- fieldwork
- memoing
- rubrics
- participant observer is most common, but zero to full interaction is possible
Qualitative data collection - Unobtrusive methods -
Interactions?
Includes?
- usually don’t interact w participants
- photos, videos, docs, archival data, artifacts
Qualitative data management- contact summary sheet?
A single page snapshot of a specific contact
Qualitative data management - document summary form?
Similar to contact summary sheet, but for unobtrusive data sources eg letters, photos
Qualitative data management - data display?
Presents organized data in a table or figure
Qualitative data analysis - inductive analysis?
Involves searching for keywords & themes without preconceived notions about theories. The data allow notions of a phenomenon to emerge.
Qualitative data analysis - steps? (6)
- write memos throughout the research
- write initial summary
- organize & segment the text
- code the data
- search for patterns to address research Qs
- decide on main themes, describe, discuss
Qualitative research- trustworthiness -
Definition?
4 components?
- The validity or truthfulness of findings
- credibility - accurate?
- transferability - to other contexts
- Dependability - consistency over time, across researchers
- confirmability - biases & assumptions controlled?
Strategies that maximize trustworthiness of qualitative data? (10)
-Prolonged engagement
-Persistent observation - depth
-Triangulation - multiple sources of data
-Peer debriefing - check w peers outside the study
-Member checking - consult participants to verify findings
-Negative case analysis - inconsistencies that might refute findings
-Referential adequacy - check against data collected at various times during the study
-Thick Description - describe collection & analysis in detail
-Auditing - unbiased outsider to review study
-Reflexive journal - memos done during study to reduce counselor bias
Program evaluation -
Definition?
Purpose?
Due to?
- Assessment of a program, at any stage
- To improve quality
- Often due to recent emphasis on evidence-based treatment, accountability, and external funding
Questions that program evaluation addresses? (8)
- Is a program needed
- For whom and how long
- Was the program implemented as planned
- Resources properly used
- What are the program outcomes
- Which programs have the best outcomes
- Are benefits maintained over time
- Do benefits outweigh costs
How does program evaluation differ from research?
- Program evaluation usually leads to narrower applicability/generalizability
- Program evaluation is usually more diffuse, done w individuals in different roles
4 types of program evaluation?
Needs assessment
Process evaluation - ensure activities match plans
Outcome evaluation - how participants are performing
Efficiency analysis - do gains outweigh costs
Program evaluation - accountability?
Providing feedback to stakeholders
Program evaluation - stakeholders?
Any individuals involved or affected by the program
Eg family, administrators, clients, community leaders, funding agencies, schools
Program evaluation - formative evaluations?
Evaluation throughout implementation to ensure program is done as planned
With changes as needed from stakeholder feedback
Program evaluation - summative evaluation?
Assessment of the whole program and the degree to which it meets goals & objectives
General steps in program evaluation? (9)
- Identify the program to be evaluated
- Plan the evaluation - decide research design
- Conduct needs assessment & make recommendations
- Define what “success” is - short & long term
- Select data sources - use multiple
- Monitor & evaluate program progress
- Determine the degree to which the program is successful
- Analyze the program’s efficiency
- Continue, revise, or stop the program
Program evaluation - Needs assessment? (7)
- Look at similar programs, lit review
- Understand needs of a client population
- Develop/revise program goals & objectives
- Establish advisory committee of stakeholders
- Work out details - target pop, stakeholders
- Develop program objectives
- Write executive summary - including data sources, data analyses, findings and recommendations
ABCD model of developing program objectives?
- Audience
- Behavior
- Conditions
- Degree
- Eg: the client (audience) will decrease substance use (behavior) by 2 beers/week (degree) as reported by family (conditions)
Process evaluation -
AKA?
Consists of (3)
Used by?
- AKA program monitoring
- Evaluate progress at various points to ensure the program:
- was implemented as planned
- met the outcomes expected
- used the best methods available
- Often used by government social programs
Outcome evaluation -
Definition?
3 aspects that can be evaluated?
Measure the effectiveness of the program at the end
Usually by determining 1 of 3 aspects:
-more effective than no intervention
-more effective than another program
-the degree to which it was more effective than another program
Efficiency analysis -
AKA?
Example?
Caveat?
Cost benefit analysis
Eg, Does a more expensive treatment lead to a better outcome?
Costs aren’t necessarily monetary, and cost benefit may not be the best evaluation method
Program evaluation models/strategies? (11)
- Treatment package strategy or social science research model
- Comparative outcome strategy
- Dismantling strategy
- Constructive strategy
- Parametric strategy
- Common factors control group strategy
- Moderation design strategy
- Objectives-based evaluation model
- Expert opinion model
- Success case method
- Improvement focused approach
Program evaluation - Treatment package strategy or social science research model?
Example?
Control and treatment group are compared
Eg, a prevention program is done in one community and the number of bullying incidents is compared across 2 communities
Program evaluation - Comparative outcome strategy?
Example?
2+ programs or interventions are compared
Eg outcomes for a bullying prevention program are compared to those for a program in another community
Program evaluation - Dismantling strategy?
Example?
Components of a program are evaluated to determine which parts are effective
Eg kids, parents, teachers, counselors are interviewed about several aspects of a bullying program, eg PSAs, workshops, support groups
Program evaluation - Constructive strategy?
Example?
A new component is added to an already effective program and evaluated for added value
Eg adding a required group for perpetrators of bullying to see if number of victims will decrease even more
Program evaluation - Parametric strategy?
Example?
A program is evaluated at different stages to determine the best time to evaluate it
Eg evaluate weekly, monthly, every 6 months to see which is best for future program evaluations
Program evaluation - Common factors control group strategy?
Example?
Determine whether a specific component or common factors of a program results in its effectiveness
Eg kids are surveyed about services to see which were most beneficial to them
Program evaluation - moderation design strategy?
Example?
Participants and other stakeholders are assessed to see who might benefit most from a program
Eg interview various kids and determine the degree to which they perceived the program to be effective for them
Program evaluation - objectives based evaluation model?
Example?
Determine if goals and objectives were met
Eg compare the number of times a kid is victimized to the reduction objective
Program evaluation - Expert opinion model?
Example?
An outside neutral expert examines the program process and outcome
Eg experts in bullying review a program and determine if it should get future funding
Program evaluation - success case method?
Example?
Info is sought from individuals who benefited most from the program
Eg observe counselors who seem to be intervening in bullying reports most effectively
Program evaluation - improvement focused approach?
Example?
Ineffective program components are reviewed to figure out what went wrong
Eg interview a perpetrator after bullying incident to understand why the program is not preventing the behavior