Evaluation And Research Flashcards
Formative evaluations
Vs
Summative evaluations
Examine the process of service delivery
Ongoing process that allows implementation of feedback
Changes can be made to achieve program goals
E.g., a needs assessment
Vs
Examine the outcome of services
Occur at the end of services and provide a description of effectiveness
Determines whether objectives were met
Helps future decisions about service direction
E.g., impact evaluations and cost-benefit analyses
Objective data
Vs
Subjective data
Objective data is based on facts
Subjective data is based on how a client perceived an event, i.e., feelings and emotions
3 Types of Research
- Experimental
Also called randomized experiments
Most rigorous
- Quasi-experimental
When randomization is not feasible or practical
Uses an intervention group and a comparison group, but assignments are not random
- Pre-experimental
Contains intervention groups only; no control or comparison group
Weakest
Randomized Control Group
Randomly assigned to either intervention or control group
Difficult to do in SW practice because it can be unethical to withhold tx from people who need it just to create a control group
More helpful with treatment modality
Qualitative and quantitative methods can both be used, but quantitative is more common because clients don't know whether they are in the control group or the intervention group
Quasi-experimental Design
Measures intervention with target population
Does not include random assignment; the researcher selects participants
There is no control group; everyone is given the independent variable or treatment
ANSWERS QUESTIONS SUCH AS:
Does a tx or intervention have an impact?
What is the relationship between program practices and outcome?
Qualitative methods fit well because clients know they are getting treatment, so they can provide more info about their experience
Single Subject Design
Subject serves as their own control group
Ideal for studying bx changes in client as a result of some tx
Causal effect between intervention and outcome
Cost-effective, simple, and flexible
Easier to plan in comparison to experimental
EXAMPLES
Pre- and post-test
Single case study AB (compares bx before tx, the BASELINE (A), with bx after tx started, the INTERVENTION (B))
Reversal or multiple baseline ABA or ABAB
TENDS TO HAVE POOR EXTERNAL VALIDITY THUS FINDINGS CANNOT BE GENERALIZED ON A BROADER SCALE
SW SHOULD HAVE CONTROL OVER ENVIRONMENT WITH THIS DESIGN, E.G., AN INPATIENT HOSPITAL
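The AB comparison above can be sketched in a few lines. This is a minimal illustration, not a real analysis; all of the behavior counts below are hypothetical numbers invented for the example.

```python
# Single-case AB design sketch: the client serves as their own control,
# so we compare the target bx during baseline (A) against the same
# client's bx after tx starts (B). Data are hypothetical weekly counts.
from statistics import mean

baseline_a = [9, 8, 10, 9, 11]      # phase A: bx before tx (baseline)
intervention_b = [7, 6, 5, 4, 4]    # phase B: bx after tx started

def phase_summary(phase, label):
    """Report the average level of the target behavior in one phase."""
    avg = mean(phase)
    print(f"{label}: mean = {avg:.1f} over {len(phase)} observations")
    return avg

a_mean = phase_summary(baseline_a, "Baseline (A)")
b_mean = phase_summary(intervention_b, "Intervention (B)")

# Compare B against A: a clear drop suggests the tx changed the bx.
change = b_mean - a_mean
print(f"Change in mean level after tx: {change:+.1f}")
```

A reversal (ABA/ABAB) design would repeat this comparison after withdrawing and reintroducing the tx.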
Internal Validity
Confidence placed in cause and effect relationship
Or intervention and target bx
External Validity
Research can be generalized to other populations, settings, and tx modalities
Concurrent Validity
How results compare to previously established findings
E.g., correlating a new test with an existing test to demonstrate whether the new test correlates well with the established one
Predictive Validity
Involves how well a measure predicts future results or outcomes
Reliability
The consistency of measure
Higher reliability means statistically similar results under consistent experimental conditions
Overall ability to get consistent results
E.g.,
If 2 SWs administer the same assessment, will they get the same results from the client?
Retrospective Design
Participants are asked to look back and remember how they were during an earlier time point
Cross sectional design
Collects data at a single point in time from participants of different ages
What is happening within specific population
I.e. age, income, gender, education, etc
AB
ABAB
A is baseline
B is treatment
The return to baseline (the second A) shows what happens when tx is withdrawn
Longitudinal Design
Same people are measured at different ages
Repeated data from same subject over period of time
Findings are powerful due to the length of time covered
Less common
Cross sequential Design
Combo of cross sectional and longitudinal designs
Groups followed over time at different ages
4 types of reliability
- Interrater or interobserver: different raters or observers give consistent estimates of the researched issue
- Test-Retest: consistency of a measure from one time to another
- Parallel Forms: assesses the consistency of 2 tests constructed in the same way with the same content
- Internal Consistency: assesses the consistency of results within a test
Face Validity
If a test appears to measure what it claims to
When the purpose is clear, it's easier to have higher face validity
Content Validity
Examines if all relevant content or domains are covered
Criterion-related validity
Examines whether a construct performs as anticipated in relation to other theoretical constructs
Includes
Predictive
Concurrent
Convergent
Discriminant
Convergent Validity
Assesses whether constructs are similar to other constructs that they should be similar to
Construct
Idea
Concept
Topic of study
Discriminant Validity
Degree to which constructs differ from other constructs from which they should be dissimilar
Qualitative
Vs
Quantitative
Qualitative: collecting data through unstructured interviews, observation, and focus groups
Quantitative: collects data through questionnaires, surveys, and phone or structured interviews
Secondary data?
Information already collected for another purpose
Issues with completeness and reliability
Descriptive statistics
Describes the basic features of the data
Describes what data shows
Inferential statistics
Answer research questions or test models or hypotheses
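The contrast can be sketched with the standard library: descriptive statistics summarize the data at hand, while inferential statistics use the data to test a hypothesis. The exam scores and the hypothesized population mean below are invented for illustration.

```python
# Descriptive vs inferential statistics sketch, with hypothetical
# exam scores from eight clients.
from statistics import mean, median, stdev
from math import sqrt

scores = [72, 85, 90, 66, 78, 88, 95, 70]

# Descriptive: what does the data show?
m = mean(scores)
print("mean   =", m)
print("median =", median(scores))
print("stdev  =", round(stdev(scores), 2))

# Inferential (one-sample t statistic): do these scores differ from a
# hypothesized population mean of 75?
mu = 75
t = (m - mu) / (stdev(scores) / sqrt(len(scores)))
print("t statistic =", round(t, 2))
```

The t statistic would then be compared to a critical value to decide whether to reject the hypothesis, which is the inferential step.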
Independent
Vs
Dependent
What is being tested as effective or ineffective.
Vs
Result of the independent variable being introduced, e.g., results on an exam