Test #3 (Chapter 12) Flashcards

1
Q

What is evaluation research? What else is it sometimes called?

A

It is research undertaken to determine the impact of some social intervention, such as a program aimed at solving a social problem
- sometimes called program evaluation
- refers to a research purpose rather than a specific research method

2
Q

evaluation research is a form of ______ research

A

applied

3
Q

When is evaluation research appropriate to use?

A

Evaluation research is appropriate whenever some social intervention occurs or is planned
- social intervention is an action taken in a given social context with the goal of producing an intended outcome
- appropriate topics are endless

4
Q

What are the 3 variations in the intent of evaluation research?

A
  1. Needs assessment studies
    - to determine the existence and extent of problems
  2. Cost-benefit studies
    - to determine whether the results of a program justify the expense (financial and other) of the program
  3. Monitoring studies
    - to provide a steady flow of information about something of interest
5
Q

What are the 2 purposes of evaluation research?

A
  1. Formative
    - aimed at improving the process of an intervention
  2. Summative
    - aimed at evaluating the effectiveness of an intervention at achieving its goals
    - often referred to as program evaluation
6
Q

What does “measurement issues” in conducting evaluation research mean?

A

Evaluation research is a matter of finding out whether something is there or not there, whether something happened or didn’t happen
- this requires the ability to operationalize, observe, and recognize the presence or absence of what is under study

7
Q

Explain “specifying outcomes” as a measurement issue

A

The key variable to measure is the RESPONSE variable
- the outcome measured to determine a program’s effectiveness
* if a social program is intended to accomplish something, we must be able to measure that something
- researchers have to agree on definitions in advance

8
Q

Explain “measuring experimental contexts” as a measurement issue

A

Measuring the dependent variables directly involved in the experimental program is only the beginning
- it’s often appropriate and important to measure aspects of the context of an experiment
ex. evaluating a job training program –> should measure the subjects' employment rate as well as the employment rates of society at large during the evaluation (is there a slump in the job market?)

9
Q

Explain “specifying interventions” as a measurement issue

A

In addition to making measurements relevant to the outcome of a program, researchers must measure the program intervention - the experimental stimulus

ex. job training program - some people will participate and others will not
- should also measure the extent or quality of participation in the program –> if the program is effective, you should find that those who participated fully have higher employment rates than those who participated less

10
Q

Explain “specifying the population” as a measurement issue

A

When evaluating an intervention, it’s important to define the population of possible subjects for whom the program is appropriate
ex. evaluating a new form of psychotherapy –> need a population with mental health problems –> how will "mental health problems" be defined and measured?

  • variables considered in the definitions should have fairly precise measurements
  • demographic variables (ex. gender, age, race) should also be measured
11
Q

Explain “new vs. existing measures” as a measurement issue

A

New:
- new measures are required if the study addresses something that has not been measured before
- creating measurements for a study can offer greater relevance and validity than using existing ones
- newly created measures will require pretesting or will be used with considerable uncertainty

Existing:
- measures that have been used frequently by other researchers carry a body of possible comparisons that might be important to our evaluation
- measures with a long history of use usually have known degrees of validity and reliability

12
Q

What’s an example of how to operationalize success/failure

A

Using a cost-benefit analysis:
- if the benefits outweigh the cost, continue the program
- if the reverse occurs, then discontinue the program

Sometimes it can be handled through competition among programs (which one costs less?)

*situations in evaluation research are seldom amenable to straightforward economic accounting
*ultimately, criteria of success and failure are often a matter of agreement
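
A minimal sketch of the cost-benefit decision rule described above; the program names and dollar figures are invented for illustration:

```python
# Hypothetical cost-benefit comparison; all figures are invented.
programs = {
    "Program A": {"cost": 250_000, "benefit": 310_000},
    "Program B": {"cost": 180_000, "benefit": 195_000},
}

for name, p in programs.items():
    decision = "continue" if p["benefit"] > p["cost"] else "discontinue"
    print(f"{name}: benefit/cost = {p['benefit'] / p['cost']:.2f} -> {decision}")

# Competition among programs: of those with net benefit, which costs less?
viable = [n for n, p in programs.items() if p["benefit"] > p["cost"]]
if viable:
    cheapest = min(viable, key=lambda n: programs[n]["cost"])
    print(f"Cheapest viable program: {cheapest}")
```

As the card notes, real evaluations are rarely this clean; the sketch only encodes the stated decision rule.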

13
Q

Evaluation research is an application of ________ _______ _______

A

social research methods

*it is not itself a method

14
Q

What are the three main types of research designs for evaluations?

A
  1. Experimental
  2. Quasi-experimental
  3. Qualitative evaluation
15
Q

What is the Experimental design type of evaluation research (and an example)

A

Uses some variation of the classic experimental design with subjects randomly assigned to either the experimental or control group

ex. evaluating a new psychotherapy treatment for sexual impotence:
- assign 100 patients randomly to experimental and control groups
- the experimental group receives the new therapy; the control group does not receive it (ethical considerations?)
- make and record observations
- did the therapy have any intended or unintended consequences?
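
A minimal sketch of the random assignment step in this example (the patient IDs are hypothetical placeholders):

```python
import random

# Hypothetical: 100 patient IDs, shuffled and split evenly at random.
patients = list(range(1, 101))
random.shuffle(patients)

experimental = patients[:50]  # will receive the new therapy
control = patients[50:]       # will not receive the therapy

print(f"Experimental: {len(experimental)} patients, Control: {len(control)} patients")
```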

16
Q

What are quasi-experimental designs?

A

Non-rigorous inquiries somewhat resembling controlled experiments but lacking key elements such as random assignment of subjects to experimental and control groups, pre- and posttesting, and/or any control group

17
Q

Explain the 3 kinds of quasi-experimental designs

A
  1. Time-Series Designs:
    - a research design that involves measurements made over some period, such as studying traffic accident rates before and after lowering the speed limit
    - allows researchers to infer the impact of a stimulus without a control group (ex. a controversial mid-week discussion increasing class participation afterward); see the sketch after this list
  2. Nonequivalent control group:
    - the control group is similar to the experimental group but is not created by the random assignment of subjects
    –> this sort of control group can differ greatly from the experimental group with regard to the dependent variable or variables related to it
  3. Multiple Time-Series Designs:
    - the use of more than one set of data collected over time (ex. accident rates over time in several provinces or cities) in order to make comparisons
    –> an improved version of the nonequivalent control group design
    - the key aspect is comparability
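
A minimal sketch of the simple time-series logic, using invented monthly accident counts around a hypothetical speed-limit change:

```python
# Invented monthly accident counts; the speed limit is lowered after month 6.
before = [42, 45, 39, 44, 41, 43]
after = [36, 34, 37, 33, 35, 32]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(f"Mean before: {mean_before:.1f}, mean after: {mean_after:.1f}")

# A sustained drop not explained by a pre-existing trend suggests an impact,
# even without a control group; a multiple time-series design would add the
# same counts from comparison provinces or cities.
```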
18
Q

What are qualitative evaluations?

A

evaluations that rely on qualitative methods such as in-depth interviews or focus groups

  • sometimes even structured quantitative evaluations can yield unexpected qualitative results
  • the most effective evaluation research combines qualitative and quantitative components
19
Q

List the 3 challenges of evaluation research

A
  1. logistical problems
  2. ethical concerns
  3. use of research results
20
Q

What’s the main idea of logistical problems in evaluation research?

A

Getting subjects to do what they’re supposed to do, getting research instruments distributed and returned, and other seemingly unchallenging tasks
this involves:
1. coordinating service
2. administrative control

21
Q

Explain coordinating service (example) and administrative control within logistical problems

A

Coordinating service:
ex. evaluating a pilot program of a new coordinated mental health program
- agency service providers designed a project with consultation from the evaluators –> the design included the elaborate and well-planned evaluation components
- not all agencies were committed to the project, and it was not properly implemented
- the program was neither implemented as intended nor successful
*shows that a program can have a good plan and a well-laid-out process but still not be implemented correctly (because the experiment takes place in REAL, uncontrollable life)

Administrative control:
- ability to complete a project is impacted by “real life”:
- those selected for the study leave or become unavailable
- study conditions change
- unexpected events occur
- some choose to discontinue the study
- others become ill
*these and many other administrative issues need to be considered

22
Q

List some of the ethical considerations in evaluation research

A
  • responsibility to subjects
  • informed consent
  • voluntary participation
  • deception
  • research designs that would yield the most scientifically decisive evidence are not always ethical
  • evaluators may study controversial topics, where it's hard to keep personal opinions from getting involved
  • those paying for the research may want a particular result (bias), and the findings can end up determining whether people are subjected to medical or social remedies
23
Q

Why are the implications of evaluation research sometimes not put into practice?

A
  1. implications may not be presented in a way understandable to non-researchers
  2. evaluation results may contradict deeply held beliefs
  3. vested interests
24
Q

What are social indicators?

A

Measurements that reflect the quality or nature of social life, such as crime rates, infant mortality rates, number of physicians per 100,000 population, and so forth
- social indicators are often monitored to determine the nature of social change in a society

25
Q

Explain the research conducted on the use of capital punishment in the United States

A

(ex. of social indicators research)

  • if capital punishment actually deters people from committing murder, then we should expect to find lower murder rates in those states that have the death penalty than in those that do not
  • results: states with capital punishment had dramatically higher murder rates than those without
  • is a counterexplanation possible? –> maybe the causation ran the other way: the existence of the death penalty as an option was a consequence of high murder rates
    **this was disproven
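
A minimal sketch of the cross-state comparison this study implies; the rates per 100,000 are invented for illustration, not the study's actual data:

```python
from statistics import mean

# Invented murder rates per 100,000; NOT the study's actual figures.
with_death_penalty = [9.1, 8.4, 10.2, 7.8]
without_death_penalty = [4.9, 5.3, 4.1, 5.8]

print(f"Mean, states with capital punishment:    {mean(with_death_penalty):.1f}")
print(f"Mean, states without capital punishment: {mean(without_death_penalty):.1f}")
# Deterrence would predict the first mean to be lower; the pattern the card
# describes runs the other way.
```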
26
Q

How is social indicators research proceeding?

A
  1. Researchers are developing more refined indicators
  2. Research is being devoted to discovering the relationships among variables within whole societies
27
Q

What are examples of how our social research class could be assessed in formative and summative evaluations?

A

Formative:
- focus on teaching and learning activities and how they could be improved
- ex. maybe less individual work and more group work

Summative:
- focus on whether appropriate levels of student learning occurred
- ex. look at the distribution of final grades in the class

28
Q

What are common problems in evaluation research?

A

Often the people whose programs are being evaluated aren’t thrilled at the prospect - an independent evaluation could threaten the program and their jobs

The program might have a vague purpose, and it's hard to measure whether that purpose was achieved (how to measure the "unmeasurable"?)

29
Q

Potentially, what is one of the most taxing aspects of evaluation research?

A

Determining whether the program succeeded or failed, and especially how to measure what success and failure look like (how much better is enough?)