AOR 4: Eval and Research Flashcards
dissemination
the process of communicating procedures, findings, or lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion
true experimental designs
include manipulation of at least one independent variable and the research participants are randomly assigned to either the experimental or control group arms of the trial
decision-making components
based on four components designed to provide the user with the context, input, processes, and products with which to make decisions
outcome evaluation
focused on the ultimate goal, product, or policy and is often measured in terms of health status, morbidity, and mortality
convenience sampling
selection of individuals or groups who are available
descriptive statistics
show what the data reveal and provide simple summaries about the sample and its measures
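A minimal sketch of descriptive statistics using Python's standard library; the sample of scores is hypothetical.
```python
import statistics

# Hypothetical sample of posttest scores from a program
scores = [72, 85, 90, 68, 77, 85, 91, 74, 88, 80]

# Simple summaries that show what the data reveal
print("n      =", len(scores))
print("mean   =", statistics.mean(scores))
print("median =", statistics.median(scores))
print("mode   =", statistics.mode(scores))
print("stdev  =", round(statistics.stdev(scores), 2))
```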
reliability
the consistency, dependability, and stability of the measurement process
descriptive analysis
designed to describe phenomenon specific to a population using descriptive statistics such as raw numbers, percentages, and ratios (exploratory)
quasi-experimental designs
include manipulation of at least one independent variable and they may contain a comparison group; however, due to ethical or practical reasons, random assignment of participants does not occur
test-retest reliability
evidence of stability over time
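A minimal sketch of a test-retest check, assuming hypothetical scores from two administrations of the same instrument; the correlation between time points is a common stability estimate (statistics.correlation requires Python 3.10+).
```python
import statistics

# Hypothetical scores from the same respondents at two time points
time1 = [10, 14, 12, 18, 16, 11, 15, 13]
time2 = [11, 13, 12, 17, 17, 10, 15, 14]

# A high correlation between administrations is evidence of stability
r = statistics.correlation(time1, time2)
print(f"test-retest r = {r:.2f}")
```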
nonexperimental designs
cross-sectional in nature and do not include manipulation of any kind
process questions
help the evaluator understand phenomena, such as internal and external forces that affect program activities
purposive sampling
researcher makes judgments about who to include in the sample based on study needs
stratified multistage cluster sampling
in several steps, a variable of interest is used to split the sample, and then groups are randomly selected from this sample
impact evaluations
immediate and observable effects of a program leading to the desired outcomes
propriety
behave legally, ethically, and with due regard for the welfare of those involved and those affected
research
organized process in which a researcher uses the scientific method to generate new knowledge
list of factors that affect program decisions
-political environment
-cultural barriers
-funding limitations
-shifting and variable leadership priorities
steps in evaluation practice
- engage stakeholders
- describe the program
- focus the evaluation design
- gather credible evidence
- justify conclusions
- ensure use and share lessons learned
unit of analysis
what or who is being studied or evaluated
utilization-focused
accomplished for and with specific intended users for specific intended uses
criterion validity
refers to a measure’s correlation to another measure of a variable
steps involved in qualitative data analysis
- data reduction (selecting, transforming, focusing, and condensing data)
- data display (creating an organized way of arranging data through a diagram or chart)
- conclusion drawing and verification (data is revisited multiple times to verify, test, or confirm patterns and themes)
stratified random sampling
the sample is split into groups based on a variable of interest, and an equal number of potential participants from each group are selected randomly
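A minimal sketch of stratified random sampling; the sampling frame, strata, and group sizes are hypothetical.
```python
import random

# Hypothetical frame split into groups by a variable of interest (site)
frame = {
    "site_a": [f"a{i}" for i in range(50)],
    "site_b": [f"b{i}" for i in range(80)],
    "site_c": [f"c{i}" for i in range(30)],
}

# Randomly select an equal number of potential participants per stratum
sample = []
for stratum, members in frame.items():
    sample.extend(random.sample(members, 10))
print(sample)
```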
attainment
focused on program objectives and program goals
feasibility
be realistic, prudent, diplomatic, frugal
variables
operational forms of a construct
five elements for critically ensuring the use of an evaluation
- design
- preparation
- feedback
- follow-up
- dissemination
rater reliability
addresses differences among scorers of items and controls for variation due to error introduced by rater perceptions
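A minimal sketch of a simple rater-reliability check using percent agreement; the two raters' item scores are hypothetical (more formal indices such as Cohen's kappa also control for chance agreement).
```python
# Hypothetical scores given to the same ten items by two raters
rater1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Percent agreement: the share of items both raters scored identically
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
print(f"percent agreement = {agreement:.0%}")
```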
systematic random sampling
an inclusive list of the priority population is used, and starting with a random number, every nth potential participant is selected
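A minimal sketch of systematic random sampling; the population list and sample size are hypothetical.
```python
import random

# Hypothetical inclusive list of the priority population
population = [f"person_{i}" for i in range(1, 201)]

n = 20                       # desired sample size
k = len(population) // n     # sampling interval: every kth person
start = random.randrange(k)  # random starting point

sample = population[start::k]
print(len(sample), sample[:3])
```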
quota sampling
selecting individuals who have a certain characteristic up to a certain number
network sampling
when respondents identify other potential participants who might have desired characteristics for the study
goal-free
not based on goals; evaluator searches for all outcomes including unintended positive and negative side effects
summative evaluation
associated with measures or judgments that enable the investigator to draw conclusions from impact and outcome evaluations
utility
serve the information needs of intended users
multivariate outliers
unusual combinations of scores on different variables
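A minimal sketch of flagging multivariate outliers with Mahalanobis distance (one common approach, assuming NumPy is available); the scores are hypothetical.
```python
import numpy as np

# Hypothetical scores on two variables; the last case is an unusual combination
X = np.array([
    [2.0, 3.1], [2.2, 2.9], [1.9, 3.0], [2.1, 3.2], [2.3, 3.0],
    [2.0, 2.8], [1.8, 3.1], [2.2, 3.3], [2.1, 2.9], [4.5, 0.5],
])

# Squared Mahalanobis distance of each case from the multivariate mean
diff = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

print(np.round(np.sqrt(d2), 2))  # large distances suggest multivariate outliers
```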
data analysis plan
used to detail how data will be scored and coded, missing data will be managed, and outliers will be handled
nominal scores
cannot be ordered hierarchically but are mutually exclusive (male and female)
logic model
take a variety of forms but generally depict aspects of a program such as inputs, outputs, and outcomes
formative evaluation
process that evaluators or researchers use to check the ongoing progress of an evaluation from the planning phase through implementation
steps in data analysis and synthesis
- enter data into the database and check for errors
- tabulate the data
- analyze and stratify the data
- make comparisons (see the code sketch below)
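A minimal sketch of the tabulate / stratify / compare steps, assuming pandas and a small hypothetical dataset that has already been entered and checked for errors.
```python
import pandas as pd

# Hypothetical dataset after entry and error checking
df = pd.DataFrame({
    "group": ["intervention"] * 5 + ["control"] * 5,
    "sex":   ["f", "m", "f", "f", "m", "m", "f", "m", "f", "m"],
    "score": [82, 75, 90, 88, 79, 70, 74, 68, 77, 72],
})

# Tabulate the data
print(df["group"].value_counts())

# Stratify the data and make comparisons between groups
print(df.groupby("group")["score"].mean())
print(df.groupby(["group", "sex"])["score"].mean())
```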
systems analysis
based on efficiency; cost-benefit or cost-effectiveness analysis is used to quantify the effects of a program
ratio scores
represent data with common measurements between each score and a true zero (number of staff)
limitations
phenomena the evaluator or researcher cannot control that place restrictions on methods and, ultimately, conclusions
data screening
may include assessing the accuracy of data entry, how outliers and missing values will be handled, and if statistical assumptions are met
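A minimal sketch of basic data screening in plain Python; the responses, the 1-5 valid range, and the use of None for missing values are hypothetical.
```python
# Hypothetical raw responses; None marks a missing value
responses = [4, 5, 3, None, 2, 5, 44, 1, 3, None, 4]
valid = range(1, 6)  # items were scored 1-5

missing = [i for i, v in enumerate(responses) if v is None]
bad_entry = [i for i, v in enumerate(responses)
             if v is not None and v not in valid]

print("missing at indices:", missing)        # decide how to handle these
print("out-of-range at indices:", bad_entry) # likely data-entry errors
```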
analytic analysis
explanatory in nature; both descriptive statistics and inferential statistics may be used to explain the phenomenon
inferential statistics
used when researchers or evaluators wish to draw conclusions about a population from a sample
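A minimal sketch of an inferential test, assuming SciPy is available; the two groups' scores are hypothetical.
```python
from scipy import stats

# Hypothetical posttest scores for two independent samples
intervention = [82, 75, 90, 88, 79, 85, 91, 78]
control      = [70, 74, 68, 77, 72, 69, 75, 71]

# Independent-samples t-test: infer whether the population means differ
t, p = stats.ttest_ind(intervention, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```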
accuracy
reveal and convey technically accurate information
naturalistic
focused on qualitative data; responsive information from program participants is used; most concerned with narrative explaining “why” behavior did or did not change
multistage cluster sampling
in several steps, groups are selected using cluster sampling
delimitations
decisions made by an evaluator or researcher that ought to be mentioned because they are used to help the evaluator identify the parameters and boundaries set for a study
-often involve narrowing a study by geographic location, time, and population traits
evaluation
a series of steps that evaluators use to assess a process or program to provide evidence and feedback about the program
cluster sampling
when naturally occurring groups are selected instead of individuals
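A minimal sketch of cluster sampling; the classrooms and their rosters are hypothetical.
```python
import random

# Hypothetical naturally occurring groups (classrooms), not individuals
classrooms = {
    "room_101": ["s1", "s2", "s3"],
    "room_102": ["s4", "s5", "s6"],
    "room_103": ["s7", "s8", "s9"],
    "room_104": ["s10", "s11", "s12"],
}

# Randomly select whole clusters; every member of a chosen cluster is sampled
chosen = random.sample(list(classrooms), 2)
sample = [s for room in chosen for s in classrooms[room]]
print(chosen, sample)
```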
ordinal scores
do not have a standard unit of measurement between them but are hierarchical (grades)
data management plan
a set of procedures for determining how the data will be transferred from the instruments used in the research to the data analysis software
process evaluation
any combination of measures that occurs as the program is implemented to assure or improve the quality of performance or delivery
simple random sampling
an inclusive list of the priority population is used to randomly select a certain number of potential participants from the list
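A minimal sketch of simple random sampling; the population list and sample size are hypothetical.
```python
import random

# Hypothetical inclusive list of the priority population
population = [f"person_{i}" for i in range(1, 101)]

# Randomly select a certain number of potential participants from the list
sample = random.sample(population, 15)
print(sample)
```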
interval scores
have common units of measurement between scores, but no true zero (temperature)
content/face validity
a concept that involves the instrument’s items of measurement for the relevant areas of interest
vulnerability
a weakness in a system design resulting from inadequate risk management and testing
Kirkpatrick’s four-level training evaluation method
reaction, learning, behavior, results
construct validity
ensures that concepts of an instrument relate to the concepts of a particular theory