Chapter 8: Research and Program Evaluation
RESEARCH AND PROGRAM EVALUATION
Studies clearly indicate that only a small percentage of counselors
actually conduct research or use research findings in their practice.
Many counselors feel that research is virtually cold, impersonal, and
irrelevant to their day-to-day practice and thus say that helping, rather
than research, is their top priority. A gap between research and practice is evident. A high percentage of beginning master’s level students
actually resent having to take research and statistics courses. It is true
that a lot of studies are not helpful to counselors. What’s more, it has
been discovered that research articles are perused primarily by other
researchers and not practitioners. Research that is considered helpful
is often dubbed experience-near research or applied research. When
counselors do integrate research into practice it is called Empirically Validated Treatment (EVT) or Empirically Supported Treatment
(EST).
Correlation is not the same as causality. Correlation is simply
an association. The correlation between people who have an umbrella
open and rain is very high, but opening your umbrella does not cause
it to rain.
Correlations range from -1 through 0 to +1. Zero
means no correlation, while +1 and -1 are perfect correlations.
A correlation of .5 is not stronger than a correlation of -.5;
strength depends on the absolute value, not the sign. In fact,
a correlation of -.8 is stronger than a correlation of .5.
In a positive correlation, when X goes up, Y goes up. For
example, when you study more, your GPA goes up.
In a negative correlation, when X goes up, Y goes down. For
example, the more you brush your teeth, the less you will be
plagued by cavities.
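To make this concrete, here is a brief illustrative sketch (not from the text) using Python and numpy with invented data. It computes Pearson r for a positive and a negative relationship and shows that strength is judged by the absolute value, not the sign.

```python
import numpy as np

# Hypothetical data, for illustration only.
hours_studied = np.array([1, 2, 3, 4, 5, 6])
gpa = np.array([2.1, 2.4, 2.8, 3.0, 3.4, 3.7])            # rises as studying rises
brushing_per_week = np.array([2, 4, 6, 8, 10, 12])
cavities = np.array([9, 8, 6, 5, 3, 2])                    # falls as brushing rises

# np.corrcoef returns a 2 x 2 correlation matrix; element [0, 1] is r.
r_pos = np.corrcoef(hours_studied, gpa)[0, 1]              # close to +1
r_neg = np.corrcoef(brushing_per_week, cavities)[0, 1]     # close to -1

print(f"studying vs. GPA:      r = {r_pos:+.2f}")
print(f"brushing vs. cavities: r = {r_neg:+.2f}")

# Strength depends on the absolute value, not the sign:
print(abs(-0.8) > abs(0.5))   # True: -.8 is the stronger correlation
```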
Research is quantitative when one quantifies or measures things.
Quantitative research yields numbers. When research does not use
numerical data, we call it qualitative research. All research has
flaws, sometimes referred to as bubbles.
True Experiment: Two or more groups are used.
The people are picked via random sampling and placed in groups
using random assignment. Systematic sampling, where every
nth person is chosen, can also be used; however, researchers still
prefer random sampling and random assignment.
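The following is a minimal illustrative sketch, using Python's standard random module and a made-up roster, of the difference between random sampling (who gets into the study), random assignment (who goes into which group), and systematic sampling (every nth person).

```python
import random

# Hypothetical population roster (names are invented).
population = [f"person_{i}" for i in range(1, 101)]

# Random sampling: every member of the population has an equal
# chance of being selected for the study.
sample = random.sample(population, k=20)

# Random assignment: the selected subjects are shuffled and split
# into an experimental group and a control group.
random.shuffle(sample)
experimental_group = sample[:10]   # will receive the IV
control_group = sample[10:]        # will not receive the IV

# Systematic sampling, by contrast, takes every nth person.
systematic_sample = population[::5]   # every 5th person on the roster
```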
When the groups are not picked at random or the researcher cannot
control the IV, it is a quasi-experiment rather than a true experiment.
Quasi-experimental research does not ensure causality.
The experimental groups get the independent variable (IV)
also known as the experimental variable.
The control group does not receive the IV.
The outcome data in the study is called the DV or dependent
variable. If we want to see if eating carrots raises one’s IQ then
eating carrots is the IV while the IQ scores at the end of the
study would be the DV.
Each experiment has a null hypothesis: there is no significant
difference in IQ between people who eat carrots and those who do
not. The experimental or alternative hypothesis is: there is a
significant difference in IQ between people who eat carrots and
those who do not.
When a researcher rejects a null hypothesis that is true, it is a
Type I (alpha) error. When a researcher accepts the null hypothesis
when it should have been rejected, we say that a Type II (beta)
error has occurred.
The significance level for the social sciences is usually set at .05 or
less (.01 or .001). The significance level gives you the probability
of a Type I error.
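As a rough sketch of how the carrot/IQ example might be analyzed, the Python fragment below uses scipy.stats with invented IQ scores; the point is only how the resulting p value is compared with the .05 significance level to decide whether the null hypothesis is rejected.

```python
from scipy import stats

# Hypothetical end-of-study IQ scores (the DV); the data are invented.
carrot_group_iq = [104, 99, 110, 102, 108, 101, 107, 105]   # received the IV
control_group_iq = [100, 98, 103, 97, 105, 99, 102, 101]    # no IV

# Independent-samples t test of the null hypothesis of no difference.
t_stat, p_value = stats.ttest_ind(carrot_group_iq, control_group_iq)

alpha = 0.05  # significance level: the probability of a Type I error
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```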
N = 1 is known as a single subject design or case study and thus
does not rely on IV, DV, control group, etc. Case studies are
becoming more popular.
Demand characteristics are evident when subjects in a
study have cues regarding what the researcher desires or does
not desire that influence their behavior. This can confound an
experiment, rendering the research inaccurate.
If subjects know they are being observed, we refer to the
process as an obtrusive or reactive measure. The observers'
presence can influence the subjects' behavior, rather than merely the
experimental variable or treatment modality. When subjects are
not aware that they are being measured, we say that it is an
unobtrusive measure.
Internal validity is high when an experiment has few flaws and
thus the findings are accurate. In other words, the IV caused the
changes in the DV, not some other factor (known as confounding
extraneous variables or artifacts). When internal validity is low,
the researcher did not measure what he or she thought was measured.