Chapter 10 & Readings Flashcards
What are some critical issues related to data collection?
A. Language of participants
B. Literacy level of participants
C. Use of a dominant or colonizing language
D. All of the above
D
Mertens and Wilson recommend that when you use mixed methods data collection, you be aware of the implications of your ontological and epistemological beliefs.
A. True
B. False
True
Reliability can be influenced by how the instrument is administered.
A. True
B. False
True
Reliability and validity are commonly used terms to describe the quality of quantitative data collection. What does validity mean in this situation?
A. Does the instrument measure cultural competence?
B. Does the instrument (as used with the participants) really measure what it is supposed to measure?
C. Does the instrument measure what it is supposed to measure consistently?
D. Does the instrument reliably measure what it is supposed to measure over time?
B
An evaluator created a test. In order to test reliability, he had the participants take the test and he analyzed the results to examine the consistency of their responses. What is this an example of?
A. Repeated measures reliability
B. Intraparticipant reliability
C. Internal-consistency reliability
D. Multi-dimensional reliability
C
In multiple regression, when we say that we control for the effects of some variable(s) we are:
A. statistically adjusting or subtracting the effects of a variable to see what a relationship would have been without it
B. actually removing a variable from a model so that it does not interact with the effects of other variables
C. changing the mediating capabilities of an endogenous variable
D. changing the mediating capabilities of an exogenous variable
A
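The idea of "statistically subtracting" a variable's effect can be shown with a small sketch. This is an illustration with made-up numbers, not part of the flashcards: with synthetic data where a confounder `z` inflates the apparent effect of `x`, fitting a multiple regression that includes `z` recovers the adjusted effect of `x`.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                 # confounding variable
x = 0.8 * z + rng.normal(size=n)       # predictor correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # true effect of x is 2.0

# Simple regression of y on x alone: the estimate absorbs z's effect.
b_simple = np.polyfit(x, y, 1)[0]

# Multiple regression y ~ x + z: the x coefficient is the effect of x
# with z statistically "subtracted" (controlled for).
X = np.column_stack([np.ones(n), x, z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b_controlled = coef[1]
```

Here `b_simple` is biased upward by the confounder, while `b_controlled` is close to the true value of 2.0, which is what answer A describes.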
What are some important things to notice when you are conducting observation based on Michael Patton (2002b) as discussed in your textbook?
A. Observing what does not happen, program setting, and native language used
B. Social setting, program activities and behaviors, and nonverbal communications
C. Informal interactions and unplanned activities
D. All of the above
D
What is the validity/credibility evidence strategy used when evaluators share data with participants to obtain feedback on perceived accuracy and quality?
A. Multiple data sources
B. Member checks
C. Persistent observations
D. Progressive subjectivity
B
What is INTRArater reliability?
A. It is used to determine whether a single rater or observer is consistent over time.
B. It compares the data of two raters or observers to see whether they are rating the same behavior consistently.
C. It is used to compare two kinds of data collection to see whether they are describing the same event.
D. It is used to compare data when different raters administer similar instruments.
A
What are some forms of evidence used to support validity/credibility in quantitative data collection?
A. Construct validity and criterion-related validity
B. Peer debriefing
C. Member checks
D. Persistent observations
A
What does the depiction of an evaluand include?
Specification of outputs
outcomes
knowledge
impacts
What are the levels at which outcomes and impacts need to be measured?
individual client level
program or system level
broader community level
organizational level
___________ is a critical issue that permeates decisions about data collection.
Language
Language differences can be addressed by translating the instrument and then back-translating it into the original language.
_________ means does the instrument really measure what it is supposed to measure?
Validity
______________ is the consistency in measurement. Does the instrument measure what it is supposed to measure consistently?
Reliability
Evaluators in the Values Branch developed parallel criteria for the quality of qualitative evaluations: ________ instead of reliability and __________ instead of validity.
Dependability; credibility
Reliability coefficients can be interpreted in two ways: _______ and ____________.
Coefficient of stability and alternate-form coefficient
__________ is when the evaluator administers the same instrument twice, separated by a short period of time. Results are compared using a statistic such as a correlation coefficient. Also called test-retest reliability.
Coefficient of stability
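As an illustration (hypothetical scores, not from the textbook): the coefficient of stability is just the correlation between the two administrations' scores.

```python
import numpy as np

# Hypothetical scores for eight participants on the same instrument,
# administered twice a short time apart.
first  = np.array([12, 15, 9, 20, 14, 18, 11, 16])
second = np.array([13, 14, 10, 19, 15, 17, 12, 17])

# Test-retest reliability (coefficient of stability) is the
# correlation between the two sets of scores.
r = np.corrcoef(first, second)[0, 1]
```

A value of `r` near 1 indicates stable measurement over the retest interval.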
___________ is when the evaluator administers two equivalent versions of the same instrument (parallel forms) to the same group of people. Results are compared using a statistic such as a correlation coefficient.
Alternate-form coefficient
____________ is when participants take one instrument; their scores are subjected to an analysis to reveal the consistency of their responses within the instrument.
Internal-consistency reliability (also called reliability/precision)
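One common internal-consistency statistic is Cronbach's alpha. A minimal sketch with made-up responses (the function and data are illustrative, not from the textbook):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (participants x items) matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses of five participants to a three-item scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [1, 2, 1],
])
alpha = cronbach_alpha(scores)
```

Alpha near 1 means participants answered the items consistently; values below roughly 0.7 are often taken as weak internal consistency.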
_______________ is when two observers' data are compared to see whether they are consistently recording the same behaviors when they view the same events.
Interrater reliability
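A small sketch of how interrater agreement can be quantified, using made-up codes for illustration: simple percent agreement, plus Cohen's kappa, which corrects agreement for chance.

```python
import numpy as np

# Hypothetical codes assigned by two observers to the same ten events.
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Simplest interrater check: proportion of events coded identically.
agreement = (rater_a == rater_b).mean()  # 8 of 10 events match -> 0.8

# Cohen's kappa adjusts agreement for the level expected by chance.
p_a, p_b = rater_a.mean(), rater_b.mean()
p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
kappa = (agreement - p_chance) / (1 - p_chance)
```

Percent agreement is easy to interpret, but kappa is usually preferred because two raters who guess randomly will still agree some of the time.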
______________is used to determine whether a single observer is consistently recording data over a period of time.
intrarater reliability
___________ can be influenced by how the instrument is administered.
Reliability
Psychologists recommend the use of different types of evidence to support ________ claims.
validity
_________ is considered the unitary concept of validity: the degree to which all accumulated evidence supports the intended interpretation of scores for the proposed purpose.
Construct Validity
_________ refers to items on the test that represent content covered in the program.
Content-related evidence
__________ indicates that the measure actually reflects current or future behaviors or dispositions.
Criterion-related evidence
Evaluators need to be aware of the consequences of using data, especially with regard to the potential to worsen inequities.
Consequential evidence
Evaluators need to stay on site for sufficient time
Prolonged and substantial engagement
strategy to enhance credibility
Observations need to be conducted at a variety of times of the day, week, and year.
Persistent observations
strategy to enhance credibility
An evaluator should find a peer with whom to discuss the study at different stages
Peer debriefing
strategy to enhance credibility
Evaluators need to be aware of their assumptions, hypotheses, and understandings, and of how these change over the period of the study.
Progressive subjectivity
strategy to enhance credibility
Evaluators can share their data with participants to obtain feedback on the perceived accuracy and quality of their work.
Member Checks
strategy to enhance credibility
Qualitative evaluators recommend the use of _______.
Multiple Data Sources
strategy to enhance credibility
Triangulation
The use of multiple data sources and different data collection strategies to strengthen the credibility of the findings of an evaluation
___________ evaluators want to assure their stakeholders that they have measured what they say they are measuring.
Methods Branch
____________ evaluators want to provide evidence of the believability of their findings.
Values Branch
___________ evaluators begin data collection by acknowledging power differences between themselves and study participants as well as the need to establish a trusting relationship with community members.
Social Justice Branch
What are examples of Quantitative Data collection methods?
Tests, performance and portfolio assessments, surveys, goal attainment scaling, and analysis of secondary data sources
What are the types of tests?
standardized, locally developed, objective, nonobjective, norm-referenced, criterion-referenced
*Pilot testing is important
Strong survey research utilizes what kind of samples?
random
survey data collected from the research participants during a single, relatively brief time period
cross-sectional study
Longitudinal, panel, and trend studies; describe them.
longitudinal: data are collected at more than one point over time
panel study: often used as a synonym for longitudinal; the same people are surveyed at each point in time
trend: samples are taken over time with the same questions asked; differs from a panel study because different people are selected each time
A technique used to measure the meaning participants attach to various attitudinal objects or concepts:
semantic differential technique
A term whose meaning is debated by philosophers, but in everyday language implies that manipulation of one event produces another event
Causation
The difference between what would have happened and what did happen when a treatment is administered
Effect