Module 05 Flashcards

Chapters 15, 16, 17, and 18

1
Q

What are the five types of measuring instruments discussed for evaluation measurement needs?

A
  • Journals and diaries
  • Logs
  • Inventories
  • Checklists
  • Summative instruments

These instruments are practical for various evaluation contexts.

2
Q

What is the primary use of journals and diaries in evaluations?

A

Data collection in interpretive studies, which gather data in the form of words

They are not typically used in positivistic studies focused on numerical data.

3
Q

What is a key consideration regarding the reliability of journals?

A

A journal is reliable if the same experience evokes the same written response

However, fatigue or changes in perspective can affect this reliability.

4
Q

True or False: Logs are more detailed than journals.

A

False

Logs are structured and generally less detailed than journals.

5
Q

What does an inventory typically consist of in evaluations?

A

A list completed by evaluation participants

For example, an inventory designed to measure depression may ask participants to list things that make them feel depressed.

6
Q

What is the purpose of a checklist in evaluation?

A

A list prepared by the evaluator to measure specific criteria

For example, a checklist for measuring depression may include feelings experienced over a specific timeframe.

7
Q

What is the main function of summative instruments?

A

To obtain data from one question or multiple questions about program objectives and combine responses into a single score

They provide a composite score indicating the individual’s position on the objective being measured.

8
Q

What are the response categories commonly used in summated scales?

A
  • Strongly agree
  • Agree
  • Neutral
  • Disagree
  • Strongly disagree

This format allows for varied degrees of agreement or disagreement.

9
Q

What is the difference between unidimensional and multidimensional summative measuring instruments?

A

Unidimensional measures one variable; multidimensional measures multiple related subvariables

Multidimensional instruments combine several unidimensional ones.

10
Q

What are standardized measuring instruments known for?

A

Being extensively tested and providing information on their testing results

They typically include details about purpose, description, norms, scoring, reliability, and validity.

11
Q

Fill in the blank: A standardized measuring instrument measuring client satisfaction is called the _______.

A

Client Satisfaction Inventory (CSI)

This instrument assesses how clients feel about the services they have received.

12
Q

What does the scoring process of the Client Satisfaction Inventory involve?

A
  • Adding the value of valid responses (SUM)
  • Determining the number of valid responses (N)
  • Subtracting N from SUM
  • Multiplying the result by 100
  • Dividing by (N x 6)

This method calculates a score reflecting client satisfaction.
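The five steps above amount to one formula, Score = (SUM − N) × 100 / (N × 6). A minimal Python sketch of that arithmetic (the function name and the use of `None` for skipped items are illustrative assumptions, not part of the published CSI):

```python
def summated_score(responses):
    """Score a summated scale following the card's five steps.

    `responses` is a list of item responses; None marks an invalid
    (skipped) item.  Items are assumed to use a 1-7 response scale,
    which makes N x 6 the largest possible value of SUM - N.
    """
    valid = [r for r in responses if r is not None]
    total = sum(valid)   # SUM: the value of all valid responses
    n = len(valid)       # N: the number of valid responses
    return (total - n) * 100 / (n * 6)

print(summated_score([1, 1, 1, 1]))     # all-lowest responses -> 0.0
print(summated_score([7, 7, None, 7]))  # all-highest responses -> 100.0
```

Dividing by N × 6 rather than a fixed constant is what lets the score stay on a 0–100 scale even when some items are skipped.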

13
Q

What has been the impact of the social service agency on the individual?

A

Positive change and feeling of being understood

The individual feels they can talk openly and that the help received is better than expected.

14
Q

How does the individual perceive the social workers?

A

Some are helpful, while others seem only concerned with payment

There is a mix of feelings regarding the social workers’ effectiveness and intentions.

15
Q

What scale is used to measure social service satisfaction?

A

A scale from one to five

The scale includes options from ‘Strongly agree’ to ‘Strongly disagree’.

16
Q

What does the social worker’s attitude affect according to the individual?

A

Feelings of embarrassment and trust

Some social workers ask embarrassing questions, affecting the individual’s comfort level.

17
Q

What is the purpose of the Self-Esteem Index (SEI)?

A

To measure problems with self-esteem

The SEI is a 25-item scale focusing on self-concept evaluative components.

18
Q

What are the scoring criteria for the Self-Esteem Index?

A

Scores above 30 indicate significant problems with self-esteem; scores below 30 indicate no significant problems

The SEI is designed to assess self-esteem issues.

19
Q

What is the reliability of the Self-Esteem Index?

A

Mean alpha of 0.93 indicating excellent internal consistency

The SEI has excellent stability with a high test-retest correlation.

20
Q

What demographic groups were included in the SEI study?

A

Single and married individuals, clinical and nonclinical populations, various ethnicities

The study included Caucasians, Japanese, Chinese Americans, and others.

21
Q

What are the key factors to evaluate when assessing standardized measuring instruments?

A
  • Sample representativeness
  • Validity of the instrument
  • Reliability of the instrument
  • Practicality of application

These factors help determine the accuracy and applicability of the instrument.

22
Q

What are some advantages of standardized measuring instruments?

A
  • Readily available and easy to access
  • Established reliability and validity
  • Norms available for comparison
  • Often free of charge

Standardized instruments are beneficial for systematic evaluations.

23
Q

What are some disadvantages of standardized measuring instruments?

A
  • Language may be difficult
  • Tone may not fit program philosophy
  • Target population may not understand the instrument
  • Scoring procedures may be complex

Disadvantages can affect the effectiveness of measurement in certain contexts.

24
Q

True or False: The Self-Esteem Index can be used with children under the age of 12.

A

False

The SEI is not recommended for use with children under 12.

25
Fill in the blank: The SEI is designed to measure the _______ of a problem the client has with self-esteem.
[degree, severity, or magnitude]

The SEI evaluates how significant self-esteem issues are for clients.
26
What is the consequence of using a measuring instrument with a population it wasn't tested on?
It may yield inaccurate results

It's crucial to match the instrument with the characteristics of the population being assessed.
27
What is an important consideration regarding the validity of an instrument?
The content domain must be clearly defined

Validity ensures the instrument measures what it is intended to measure.
28
What does the term 'instrument bias' refer to?
When standardized tests do not accurately reflect the abilities of minority populations

Instrument bias can lead to underestimation of abilities.
29
What is the primary concern regarding standardized intelligence tests for ethnic minority children?
Scores may underestimate their actual abilities due to lack of representation in standardization samples.

The concern is that these tests presume familiarity with European American culture.
30
What does validity refer to in the context of measuring instruments?
Validity addresses the extent to which a measuring instrument achieves what it claims to measure.

Validity is particularly questioned when ethnic minorities are not included in the development of the instruments.
31
What is a potential misuse of measurement related to cultural values?
Assuming that all groups value variables equally can lead to misrepresentation and the assertion of superiority of one group's values over another's.

For example, spirituality may be more important to Native Americans than material possessions.
32
How can language create measurement issues in research with ethnic minorities?
Some ethnic minorities may lack proficiency in English, leading to potential misinterpretation of results from instruments designed only in English.

Translations of instruments may not be equivalent, affecting the validity of the findings.
33
What is a one-group posttest-only design?
A design that measures the success of participants after they have undergone an intervention, without comparison to another group.

It is often used to evaluate program objectives.
34
What are the five basic types of one-group evaluation designs?
  • One-group posttest-only design
  • Cross-sectional survey design
  • Longitudinal designs (trend, cohort, panel)
  • One-group pretest-posttest design
  • Interrupted time-series design
35
What is a cross-sectional survey design?
A design that surveys a cross-section of a population only once to gather data.

It is commonly used in needs assessment studies.
36
What is the difference between trend studies and cohort studies in longitudinal designs?
  • Trend studies take different samples at different times.
  • Cohort studies follow a specific group over time.
37
True or False: Two-group designs compare an experimental group against a control group.
True.
38
What issue arises from using culturally insensitive instruments with ethnic minorities?
It leads to misrepresentation and poor understanding of ethnic minorities.

The validity of studies using such instruments is often questioned.
39
Fill in the blank: The lack of __________ of measuring instruments with ethnic minority populations has been well documented.
sensitivity
40
What is a trend study?
A trend study takes different samples of people who share a similar characteristic at different points in time.
41
How does a trend study differ from other types of studies?
A trend study samples different groups of people at different points in time from the same population.
42
What is the purpose of Antonia's trend study?
To determine whether parents of second-grade children are becoming more receptive to child abuse prevention education.
43
What is a cohort study?
A cohort study takes place when evaluation participants who have a certain condition and/or receive a particular treatment are sampled over time.
44
In a cohort study, what is followed over time?
A particular cohort of people who have shared a similar experience.
45
What is a panel study?
In a panel study, the same individuals are followed over a period of time.
46
What is the primary advantage of panel studies?
Panel studies can reveal both net change and gross change for the same individuals.
47
What is the one-group pretest-posttest design?
A design that includes a pretest of the program objective, which can be used as a basis of comparison with the posttest results.
48
What does the pretest-posttest design help determine?
It helps determine how the intervention affects a particular group.
49
What is the interrupted time-series design?
A design that conducts a series of pretests and posttests on a group over time, before and after an independent variable is introduced.
50
What is the main characteristic of trend studies?
Data are collected from the population at more than one point in time.
51
True or False: In trend studies, there is experimental manipulation of variables.
False.
52
What does a cohort analysis attempt to identify?
Cohort effects and whether changes in the dependent variable are due to aging or other factors.
53
Fill in the blank: A cohort is any group of individuals who are linked in some way or who have experienced the same _______.
[significant life event].
54
What type of studies are often used with public opinion polls?
Trend studies.
55
What can cohort studies tell us about populations?
What circumstances in early life are associated with the population's characteristics in later life.
56
What is the significance of measuring the same individuals over time in a panel study?
It allows for the determination of changes in attitudes and behaviors that might go unnoticed in other research approaches.
57
What is one potential outcome indicator for evaluating a child abuse prevention program?
A reduction in parents’ risk for abusive and neglecting parenting behaviors.
58
What is a common use for cohort studies?
To follow a group linked by a shared condition or treatment over time.
59
What is the main purpose of panel studies?
To measure the same sample of respondents at different points in time

Panel studies can reveal both net change and gross change in the dependent variable for the same people.
60
What are the two types of panel studies?
  • Continuous panel
  • Interval panel
61
What is internal validity?
The approximate certainty about inferences regarding cause-effect or causal relationships.
62
What does a one-group pretest-posttest design attempt to establish?
A relationship between the intervention and the program objective.
63
What is the effect of history on internal validity?
Any outside event that may affect the program objective and is not taken into account in the evaluation's design.
64
Fill in the blank: The higher the internal validity, the greater the extent to which _______ can be controlled.
rival hypotheses
65
What is maturation in the context of internal validity?
Changes, both physical and psychological, that take place in evaluation participants over time.
66
True or False: Maturation can be controlled by using a control or comparison group.
True
67
What is the testing effect?
The effect that taking a pretest might have on posttest scores.
68
What is a potential threat to internal validity related to instrumentation?
Errors that occur when measuring instruments change over time.
69
List three threats to internal validity.
  • History
  • Maturation
  • Testing
70
What is the purpose of establishing a baseline in a time-series design?
To ensure that normal fluctuations are not confused with the results of the policy or intervention.
71
Fill in the blank: Panel data are particularly useful in predicting _______.
long-term or cumulative effects
72
What is a one-group pretest-posttest design?
A design where measurements of a program objective are taken before and after an intervention.
73
What is the role of a control group in a study?
To help control for extraneous variables like history.
74
True or False: Internal validity is irrelevant in studies that do not attempt to establish causal relationships.
True
75
What does the term 'reactive effects' refer to in research?
Changes in participants' behavior due to their awareness of being studied.
76
What are alternative explanations in the context of internal validity?
Other factors that could account for observed changes in the dependent variable.
77
What are the three steps of the one-group pretest-posttest design?
  1. Measuring some program objective
  2. Initiating a program to change that variable
  3. Measuring the program objective again at the conclusion of the program
78
What is the testing effect?
The effect that taking a pretest might have on posttest scores, potentially influencing participants' responses due to pretest exposure
79
True or False: Testing effects are a threat to internal validity.
True
80
What can cause a participant to score worse on a posttest?
Anxiety induced by the pretest or boredom from responding to the same questions again
81
What is instrumentation error?
Weaknesses of a measuring instrument itself, such as invalidity, unreliability, improper administration, or mechanical breakdowns
82
Fill in the blank: Instrumentation error refers to the weaknesses of a measuring instrument itself, such as _______.
invalidity, unreliability, improper administration, mechanical breakdowns
83
What does statistical regression refer to?
The tendency of extremely low and extremely high scores to regress, or move toward the average score over time
84
True or False: Statistical regression can be mistaken for the effects of an intervention.
True
85
What is differential selection of evaluation participants?
The potential lack of equivalency among preformed groups of evaluation participants
86
Why is mortality a threat to internal validity?
Participants dropping out may be different from those who stay, affecting the study's findings
87
Fill in the blank: Mortality refers to the loss of _______ through normal attrition over time in evaluation designs.
evaluation participants
88
What are reactive effects?
Changes in behaviors or feelings of research participants caused by their reaction to the novelty of the situation or knowledge of being part of a study
89
What historical study is associated with reactive effects?
Studies at the Hawthorne plant of the Western Electric Company
90
What do interaction effects refer to?
The effects produced by the combination of two or more threats to internal validity
91
What is the relationship between experimental and control groups concerning internal validity threats?
They can experience diffusion of treatments, compensatory equalization, compensatory rivalry, and demoralization
92
What is diffusion of treatments?
When members of the experimental and control groups talk about the study, potentially invalidating the findings
93
What occurs during compensatory equalization of treatment?
When researchers attempt to compensate control group participants who are not receiving the intervention
94
What is compensatory rivalry?
When the control group becomes motivated to compete with the experimental group
95
Fill in the blank: Demoralization refers to feelings of deprivation among the control group that may cause them to _______.
give up and drop out of the study
96
What is demoralization in the context of control groups?
Feelings of deprivation among the control group that may cause them to give up and drop out of the study

This effect can also be referred to as mortality.
97
What is the primary focus of two-group designs in research?
To minimize threats to internal validity and provide data that approaches proving cause-effect relationships.
98
What is the comparison group pretest-posttest design?
A design that includes a comparison group that receives both the pretest and the posttest but does not receive the intervention.
99
True or False: In the comparison group pretest-posttest design, random assignment to groups is used.
False
100
What statistical technique can be used if pretest differences are not statistically significant but still affect the posttest?
Analysis of covariance.
101
What does the comparison group posttest-only design improve upon?
It improves on the one-group posttest-only design by introducing a comparison group that does not receive the intervention.
102
In classical experimental design, what method is used to create experimental and control groups?
Random assignment method.
103
What is the main advantage of the classical experimental design?
It controls for many threats to internal validity due to random assignment.
104
Describe the randomized posttest-only control group design.
Participants are randomly assigned to either an experimental group that receives the intervention or a control group that only takes the posttest.
105
What is external validity?
The degree to which the results of a specific study are generalizable to another population, setting, or time.
106
List the four major threats to external validity relevant to program evaluations.
  • Selection-treatment interaction
  • Specificity of variables
  • Multiple-treatment interference
  • Researcher bias
107
What does selection-treatment interaction refer to?
It occurs when an evaluation design cannot provide for random selection of participants from a population.
108
Fill in the blank: Specificity of variables relates to the generalizability of results based on the __________ of the study sample.
[specific group of people]
109
What is multiple-treatment interference?
It occurs when an evaluation participant is given two or more interventions in succession, affecting the results of each.
110
How can researcher bias affect study outcomes?
Researchers may unconsciously manipulate a study to see results that align with their expectations.
111
What is one method to control for researcher bias?
Conducting a double-blind experiment.
112
What is the significance of pretests in evaluation designs?
Pretests help ensure equivalence between control and experimental groups.
113
True or False: The randomized posttest-only control group design requires a pretest.
False
114
What should be allowed between two interventions to minimize multiple-treatment interference?
Sufficient time.
115
What is the final takeaway regarding evaluation designs?
Each design has its advantages and disadvantages, and familiarity with them helps in selecting the most appropriate one for a specific evaluative effort.
116
What are the three basic procedures needed for program evaluation?
  1. Selecting a data source
  2. Selecting a sample from your data source
  3. Collecting data from your sample
117
What can serve as data sources in evaluations?
Data sources can be:
  • People
  • Existing data
118
Who are some examples of people that can be data sources?
Examples include:
  • Federal and state personnel (politicians, government officials)
  • Program workers (therapists, caseworkers)
  • Clients (individuals, families, groups, communities)
119
What is a key characteristic of the best data sources?
The best data sources provide firsthand or direct knowledge about the experience being evaluated.
120
What is the difference between firsthand and secondhand data?
Firsthand data comes directly from individuals with experience, while secondhand data is provided by individuals with indirect knowledge.
121
What are existing data sources?
Existing data sources are previously recorded documents or artifacts relevant to current evaluation questions.
122
Name three general areas where existing data can be found.
  1. Public data (census data, government documents)
  2. Client data (client records, service plans)
  3. Program data (evaluation reports, program contracts)
123
What is a sampling frame?
A sampling frame is a comprehensive list of every unit (people, documents, artifacts) from which a sample is drawn.
124
What are the two main types of sampling methods?
  1. Probability sampling
  2. Nonprobability sampling
125
What is the benefit of probability sampling?
Probability sampling produces samples considered representative of the larger sampling frame.
126
What is simple random sampling?
Simple random sampling involves selecting each unit using a chance procedure (e.g., rolling dice).
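In code, a chance procedure is just an unbiased random draw. A minimal sketch using Python's standard library (the client-ID frame is hypothetical):

```python
import random

# Hypothetical sampling frame: 500 client records.
frame = [f"client-{i}" for i in range(500)]

random.seed(1)                     # fixed seed so the example is repeatable
sample = random.sample(frame, 50)  # every unit has an equal chance of selection

print(len(sample))                 # 50 units, drawn without replacement
```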
127
How does systematic random sampling work?
Systematic random sampling involves determining the total number of units, the desired sample size, calculating an interval, and randomly selecting a starting point.
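Those steps can be sketched directly (an illustration, assuming the desired sample size divides the frame evenly; otherwise the floor-division interval is only approximate):

```python
import random

def systematic_sample(frame, sample_size):
    """Every-kth selection from a random start, per the card's steps."""
    interval = len(frame) // sample_size  # total units / desired sample size
    start = random.randrange(interval)    # random starting point in [0, interval)
    return frame[start::interval][:sample_size]

random.seed(2)
frame = list(range(1000))
picked = systematic_sample(frame, 100)
print(len(picked))  # 100 units, evenly spaced through the frame
```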
128
What is stratified random sampling?
Stratified random sampling involves identifying relevant strata, determining their proportions in the population, and using random sampling within each stratum.
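A proportional version of those steps, sketched in Python (the strata names and sizes are hypothetical; rounding the per-stratum quotas can make the total differ slightly from the target in general):

```python
import random

def stratified_sample(strata, total_n):
    """Random sampling within each stratum, in proportion to its size."""
    population = sum(len(units) for units in strata.values())
    sample = []
    for units in strata.values():
        quota = round(total_n * len(units) / population)  # proportional share
        sample.extend(random.sample(units, quota))        # random within stratum
    return sample

random.seed(3)
strata = {"urban": list(range(600)), "rural": list(range(600, 1000))}
s = stratified_sample(strata, 50)
print(len(s))  # 30 urban + 20 rural = 50
```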
129
What is cluster sampling?
Cluster sampling involves selecting clusters randomly and then sampling units within those clusters.
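The two stages can be sketched as follows (the agency/client frame is hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster):
    """Stage 1: pick whole clusters at random.  Stage 2: sample within each."""
    chosen = random.sample(clusters, n_clusters)
    sample = []
    for cluster in chosen:
        sample.extend(random.sample(cluster, n_per_cluster))
    return sample

random.seed(4)
# Hypothetical frame: 20 agencies ("clusters"), each serving 30 clients.
agencies = [[(a, c) for c in range(30)] for a in range(20)]
s = cluster_sample(agencies, 5, 10)
print(len(s))  # 5 clusters x 10 clients each = 50
```

Cluster sampling is useful when no single list of all population members exists, only a list of clusters.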
130
What is convenience or availability sampling?
Convenience or availability sampling includes the nearest or most accessible units.
131
Fill in the blank: The best data sources are those that provide _______ knowledge regarding the experience that is the subject of your evaluation.
firsthand
132
True or False: Nonprobability sampling ensures that each unit in a sampling frame has an equal chance of being selected.
False
133
What is purposive sampling?
Purposive sampling includes units known or presumed to provide good data.
134
What is nonprobability sampling?
Nonprobability sampling methods do not give each unit in a sampling frame an equal chance of being picked for an evaluation study.
135
Name the four types of nonprobability sampling.
  • Convenience or Availability Sampling
  • Purposive Sampling
  • Quota Sampling
  • Snowball Sampling
136
What is Convenience or Availability Sampling?
Include the nearest or most available units.
137
What is Purposive Sampling?
Include units known or presumed to be good data sources based on some theoretical criteria.
138
What is Quota Sampling?
Identify variables relevant to the evaluation, combine them into discrete categories, determine the percentage of each category, calculate quotas, and select data sources until each quota is filled.
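The quota-filling step can be sketched as a simple first-come, first-counted loop (the respondent stream and categories are hypothetical):

```python
def quota_sample(candidates, quotas):
    """Take candidates as they come until each category's quota is filled."""
    counts = {category: 0 for category in quotas}
    sample = []
    for unit, category in candidates:        # units arrive in this order
        if counts[category] < quotas[category]:
            sample.append(unit)
            counts[category] += 1
        if counts == quotas:                 # every quota filled -> stop
            break
    return sample

# Hypothetical stream of respondents tagged by gender category.
stream = [(i, "women" if i % 3 else "men") for i in range(100)]
s = quota_sample(stream, {"women": 6, "men": 4})
print(len(s))  # 10
```

Unlike stratified sampling, nothing here is random: whoever shows up first fills the quota, which is why quota samples cannot be generalized to the population.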
139
What is Snowball Sampling?
Locate a small number of data sources, then ask them to identify others in the population, continuing until the desired sample size is obtained.
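The referral chain can be sketched as a breadth-first walk over who names whom (the referral network is hypothetical):

```python
def snowball_sample(seeds, referrals, target_size):
    """Grow the sample by asking each source to identify others."""
    sample = list(seeds)
    frontier = list(seeds)                        # sources not yet asked
    while frontier and len(sample) < target_size:
        source = frontier.pop(0)
        for named in referrals.get(source, []):   # people this source names
            if named not in sample and len(sample) < target_size:
                sample.append(named)
                frontier.append(named)
    return sample

# Hypothetical referral network for a hard-to-reach population.
refs = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"], "D": ["G"]}
print(snowball_sample(["A"], refs, 5))  # ['A', 'B', 'C', 'D', 'E']
```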
140
Why are nonprobability sampling methods used?
They are used in situations where it's desirable to limit or pick data sources based on some unique characteristic.
141
What is the aim of nonprobability sampling strategies?
To produce quality, firsthand data from sources that share something in common.
142
When is it necessary to use sampling strategies in an evaluation plan?
When the sampling frame is large, previous efforts to include all units have failed, only unique data sources are desired, program resources are limited, or multiple data sources are needed.
143
What are the advantages of Simple Random Sampling?
  • High representativeness if all subjects participate
  • Ideal sampling method
144
What are the disadvantages of Simple Random Sampling?
  • Not possible without a complete list of population members
  • Potentially uneconomical to achieve
145
What is Stratified Random Sampling?
Random sample from identifiable groups (strata), ensuring specific group representation.
146
What is Cluster Sampling?
Random samples of successive clusters of subjects, allowing random selection when no single list of population members exists.
147
What is Purposive Sampling?
Hand-pick subjects based on specific characteristics.
148
What is Quota Sampling?
Select individuals as they come to fill a quota by characteristics proportional to populations.
149
What is Accidental Sampling?
Includes volunteers or subjects who happen to be available.
150
What is Participant Observation?
The evaluator has no formal role but observes as a participant.
151
What are the main data collection methods mentioned?
  • One-to-One Interview (General)
  • One-to-One Interview (Unstructured)
  • One-to-One Interview (Semi-structured)
  • One-to-One Interview (Structured)
  • Focus Group
  • Phone Interview
  • Participant Observation
152
What are the advantages of One-to-One Interviews?
  • Provide in-depth information
  • Allow for clarification and probing
153
What are the disadvantages of Focus Groups?
  • Requires great skill from the interviewer
  • Data can be challenging to analyze
154
Fill in the blank: Nonprobability sampling methods are used when it's desirable to limit or pick data sources based on _______.
[unique characteristic]
155
True or False: Nonprobability sampling allows for generalization of characteristics to the larger population.
False
156
What should be clear when selecting data collection methods?
The evaluation question should guide the selection of data sources and data-collection methods.
157
What is a critical factor to ensure when collecting data?
Develop protocols that yield credible data.
158
What is the role of the evaluator in participant observation?
The evaluator has no formal role as a participant; they act as a silent observer.
159
What is a major advantage of the evaluator's role in participant observation?
The evaluator experiences the setting as participants do, with few ethical issues at stake.
160
What is a disadvantage of participant observation?
It is difficult to maintain distinct roles; the observer's presence can change the nature of interactions.
161
What are the two main types of data discussed?
* Existing data * New data
162
What are examples of existing data?
  • Documents and reports
  • Datasets
163
What is the purpose of using existing data?
To profile recent and past characteristics or patterns that describe communities, clients, workers, or program services.
164
What are the challenges of gathering existing data?
Old documents may not be easily accessible and there may be no existing data recorded.
165
What is the significance of reviewing existing documents?
It provides insights from previously analyzed and summarized data.
166
What types of data are included in existing statistical data?
Numbers and figures calculated from original raw data.
167
What are datasets?
Datasets store existing raw or original data and organize them to connect all data elements to their source.
168
What is the disadvantage of census data?
Census data can become outdated quickly and provide only a general picture of a population.
169
What types of data do client datasets include?
  • Demographics
  • Treatment progress
  • Intake forms
  • Assessments
170
What are the two main problems associated with client and program datasets?
  • Data may be incomplete or inconsistently recorded
  • Data apply to a specific point in time
171
What methods can be used to obtain new data?
  • Individual interviews
  • Surveys
  • Group interviews
  • Observations
172
What is a key characteristic of individual interviews?
They can produce new, original data about social needs, program processes, or outcomes.
173
Fill in the blank: The evaluator's presence can change the nature of the _______.
interactions being observed.
174
True or False: Reviewing existing documents can save time and help avoid reinventing the wheel.
True
175
What are examples of existing documents used in evaluations?
  • Published research studies
  • Government documents
  • Client reports
176
What is the main goal of face-to-face interviews in needs assessment?
To ask questions that permit open-ended responses.
177
What type of interviews are used when prior knowledge of the topic exists?
Structured interviews.
178
When is it appropriate to use informal unstructured interviews?
When very little is known about the problem area.
179
What is a key characteristic of informal interviews?
They allow for a free-flowing discussion.
180
What do surveys aim to gather?
Opinions from numerous people to describe them as a group.
181
How do survey questions differ from structured and unstructured interview questions?
Survey questions are narrower and yield shorter responses.
182
What should be considered when creating survey questions?
They should yield valid and reliable responses.
183
What is one disadvantage of mail surveys?
Low response rate.
184
List three common methods of administering surveys.
  • Mail
  • Telephone interviews
  • In-person interviews
185
What is one strategy to increase the number of survey respondents?
Include a cover letter stating the purpose of the evaluation.
186
What are the three strategies for conducting group interviews?
  • Open forums
  • Focus groups
  • Nominal groups
187
What is the purpose of open forums?
To address general evaluation questions and gather stakeholder responses.
188
What is a focus group designed to do?
Gather perceptions about a predetermined topic of interest.
189
What role does a facilitator play in a focus group?
Guides discussion and ensures balanced participation.
190
What is the nominal group technique?
A structured method for collecting data from individuals, with limited interaction among them.
191
What is the main difference between structured observations and participant observations?
Structured observations are objective, while in participant observations the observer takes part in the activity being observed.
192
What is an essential aspect of structured observations?
They occur under controlled conditions and aim to collect precise data.
193
What is the role of the observer in structured observations?
To remain impartial and focus on specific interactions.
194
What is a key advantage of using the nominal group technique?
Efficient data collection from numerous sources.
195
Fill in the blank: The main goal of surveys is to gather ______ from numerous people.
[opinions]
196
True or False: Surveys rely heavily on interviewer skills to separate responses.
False.
197
What should be done after administering a survey?
Enter and tabulate survey information.
198
What is a common disadvantage of open forums?
They tend to draw a select group of people with strong opinions.
199
What is the purpose of follow-up letters in survey administration?
To prompt nonrespondents to complete the survey.
200
What should be included in the initial orientation for focus group participants?
An introduction to the topic of focus.
201
In a nominal group, what is the process for ranking responses?
Individually rank the top five responses.
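The individual rankings are then combined into a group result. One common scoring rule (an illustrative assumption, not specified in these cards) awards 5 points to a first-ranked response down to 1 point for fifth, then sums points across participants. A minimal sketch:

```python
from collections import Counter

def aggregate_rankings(rankings):
    """Combine individual top-five rankings into group scores.

    Each ranking is an ordered list of up to five response labels;
    the first-ranked item earns 5 points, the fifth earns 1.
    """
    scores = Counter()
    for ranking in rankings:
        for position, response in enumerate(ranking):
            scores[response] += 5 - position  # 5, 4, 3, 2, 1 points
    return scores.most_common()  # responses sorted by total points

# Hypothetical responses from two participants
group = [
    ["transport", "childcare", "housing"],
    ["housing", "transport", "jobs"],
]
print(aggregate_rankings(group))
# → [('transport', 9), ('housing', 8), ('childcare', 4), ('jobs', 3)]
```

Other point schemes (e.g., rank 1 = 1 point, lowest total wins) work the same way; only the scoring line changes.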
202
What should observers in structured observations record?
Only observable facts related to the predefined dimensions.
203
What is a significant factor in ensuring the validity of structured observations?
Using trained observers and precise protocols.
204
What should be the characteristics of reports for outside organizations?
Reports should be clear and understandable.
205
How can participant observation differ from structured observation?
1. The observer is not impartial. 2. The rules for observation are far more flexible.
206
What is a challenge for participant-observers?
To balance their dual roles so that data are based on fact and not personal impressions.
207
What are the basic tasks in implementing regular trained observer measurements?
1. Identify specific data needed. 2. Develop the trained observer rating guide. 3. Test the guide with raters. 4. Decide when ratings will be made and reported. 5. Select and train observers. 6. Assign staff to oversee the process.
208
True or False: Participant-observers interact with the people they are watching.
True.
209
What is the benefit of participant observation?
Members of the group can pick up subtle or cultural nuances that may be obscure to an impartial viewer.
210
What is a data-collection plan?
A structured approach to gathering and evaluating data on program objectives.
211
What should the data-collection plan specify?
1. Program objectives to be measured. 2. Measurement methods for each objective. 3. Data sources. 4. Data-collection methods. 5. Time frame for data collection. 6. Location for data collection. 7. Who will collect the data.
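The seven elements above can be thought of as one row of a planning table per objective. A minimal sketch (the field names and sample values are illustrative assumptions, using the Rosenberg Self-Esteem Scale example that appears in these cards):

```python
from dataclasses import dataclass

@dataclass
class DataCollectionPlan:
    """One row of a data-collection plan: a single program
    objective and the logistics for measuring it."""
    objective: str          # 1. program objective to be measured
    measurement: str        # 2. measurement method for the objective
    data_source: str        # 3. who or what supplies the data
    collection_method: str  # 4. how the data are collected
    time_frame: str         # 5. when the data are collected
    location: str           # 6. where the data are collected
    collector: str          # 7. who collects the data

# Hypothetical example row
plan = DataCollectionPlan(
    objective="Increase clients' self-esteem",
    measurement="Rosenberg Self-Esteem Scale",
    data_source="Program clients",
    collection_method="Self-administered questionnaire",
    time_frame="Intake and program exit",
    location="Agency office",
    collector="Case worker",
)
print(plan.measurement)
```

A full plan would be a list of such rows, one per program objective being measured.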
212
Fill in the blank: The variable for self-esteem can be measured using the _______.
Rosenberg Self-Esteem Scale.
213
What is essential for data collectors to ensure high-quality data?
Training and supervision.
214
What should training for data collectors aim to achieve?
1. Consistent application of standards and procedures. 2. Understanding of how data will be used. 3. Clarity of roles and responsibilities. 4. Preparedness for handling events that may arise.
215
What types of training methods can be used for data collectors?
1. Written instructions. 2. Verbal instructions. 3. Meetings. 4. Data-sharing agreements. 5. Train-the-trainers approach. 6. Formal training.
216
What is the purpose of a Memorandum of Understanding in data collection?
To establish formal agreements on data access, usage, and conditions.
217
What should be included in formal training for data collectors?
Background material about the data being collected, including type, source, and purpose.
218
What factors influence the selection of training methods for data collectors?
Audience, training needs, resources, and personal style.
219
What is the purpose of providing background material in data-collection training?
To clarify the type of data being collected, from whom, and for what purpose ## Footnote Background material helps data collectors feel more confident, motivates them to obtain high-quality data, and aids in troubleshooting.
220
What should data-collection instructions cover?
Every aspect of data collection, including: * Identifying or locating appropriate respondents or records * Processing the collected data * Specific roles and responsibilities
221
True or False: Data collectors need to know their own roles and responsibilities.
True ## Footnote Understanding their roles helps ensure proper data collection and management.
222
Fill in the blank: An evaluation overview statement can be developed and used to provide _______.
[information about the evaluation purpose and data usage]
223
What is a common training topic related to data collection logistics?
Identifying appropriate respondents/records ## Footnote Understanding the importance of adhering to data-collection protocols preserves data quality.
224
What is the importance of recruiting participants effectively?
To achieve high response rates and protect respondents' rights to refuse participation.
225
What should data collectors be trained to do when gaining access to records?
Know what to say to gain admittance and request records.
226
What does accurate recording of data ensure?
Meaningful comparison and interpretation of the data.
227
What should data collectors do if they encounter an emotionally upset respondent?
Terminate or reschedule the interview.
228
What is a critical consideration regarding data confidentiality and security?
Understanding who is allowed access and what to do in case of a breach.
229
What is the role of supervision and monitoring in data collection?
To ensure data are collected appropriately and resolve issues as they arise.
230
What should training on data collection include regarding feedback?
Methods for routinely gathering feedback from data collectors.
231
True or False: Data collectors should be aware of special considerations when working with different target audiences.
True ## Footnote This includes cultural or religious customs and disabilities that need to be accommodated.
232
What is one strategy for ensuring data collectors' safety during fieldwork?
Pairing them to work together in a 'buddy' system.
233
What should be emphasized in team training for data collectors?
How to work together and how their roles complement one another.
234
Fill in the blank: Conducting practice sessions allows data collectors to _______.
[practice all aspects of the data-collection protocol]
235
What is the significance of conducting a pilot test of data-collection methods?
To identify potential issues and refine the data-collection protocol.
236
What is a key tip for successful data-collection training?
Always conduct some type of data-collection training.
237
Why is it important to avoid assuming data-collection procedures are intuitive?
To prevent misunderstandings that can result in unusable data.
238
What should be included in a handbook for data collectors?
Protocols, measuring instruments, instructions, contact numbers, and supplementary materials.
239
What is the primary purpose of data-collection training?
To ensure all data-collection activities in your evaluation are conducted properly and effectively ## Footnote Data-collection training, whether formal or informal, is crucial to avoid misunderstandings and ensure data usefulness.
240
True or False: Experienced data collectors do not need training.
False ## Footnote Even experienced data collectors benefit from training to understand specific procedures for each evaluation.
241
What is a key factor when selecting trainers for data-collection teams?
Use high-quality trainers ## Footnote Recruiting the best supervisors and trainers can improve the performance of data collectors.
242
Why is it important to ensure respondent comfort?
Respondents need to feel comfortable with data collectors ## Footnote This may involve selecting data collectors who share similar backgrounds with respondents.
243
Fill in the blank: Training needs should be considered _______.
broadly ## Footnote Consider all procedures needed to access and use data, even from secondary sources.
244
What should data collectors be encouraged to report?
Problems and observations ## Footnote Data collectors' observations are invaluable as they are closest to the evaluation implementation.
245
What is the importance of documentation in data collection?
Ensures approaches are well documented and others can take over if necessary ## Footnote Documentation also serves as a historical record of evaluation methods.
246
What does ongoing monitoring of the data-collection process help identify?
Whether data collection is proceeding as planned ## Footnote It allows for intervention or additional training if needed.
247
List three didactic approaches for training.
* Overview of the evaluation * Understanding evaluation standards * Review of data collection instruments ## Footnote Didactic approaches ensure important content is conveyed in a structured way.
248
What is an example of a hands-on training approach?
Role-playing ## Footnote Role-playing simulates actual data-collection situations, allowing practice and feedback.
249
What checklist items should be included in data-collector training?
* Background information on the program * Clear instructions on data collection * Expectations regarding professional evaluation standards * Contact information for questions * Discussion on data custody and safeguarding ## Footnote This checklist ensures comprehensive training for data collectors.
250
What is the summary conclusion regarding data collector training?
Data collectors must be highly trained and supervised to collect high-quality data ## Footnote Proper training ensures data meets standards of utility, accuracy, and propriety.