Research Design Flashcards

1
Q

What is Procedural Reliability?

A

Procedural reliability (also called treatment fidelity) is the degree to which the interventionist followed the prescribed procedures.

The formula for calculating procedural reliability is: the number of observed behaviors divided by the number of planned behaviors, multiplied by 100. This yields a percentage reflecting how closely the prescribed procedures were followed.
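
A minimal sketch of this calculation in Python (the function and variable names are illustrative, not from the source):

```python
def procedural_reliability(observed_behaviors: int, planned_behaviors: int) -> float:
    """Percentage of planned interventionist behaviors actually observed."""
    return observed_behaviors / planned_behaviors * 100

# Example: 18 of 20 planned procedural steps were implemented -> 90.0% fidelity
print(procedural_reliability(18, 20))
```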

2
Q

What is Reliability of Effect?

A

It concerns your confidence that the outcome of an intervention is “real”; that is, that if the experiment is repeated, the outcome will be the same.

3
Q

What is Reliability of Measurement?

A

There are two facets of reliability of measurement that must be considered.

First, you must ask the question: “Are the data accurate?”

“Do the definition and the dimension of the behavior being measured (rate, latency, duration, etc.) correspond with how others define the behavior or event?”

A second aspect of reliability of measurement pertains to the consistency with which data are collected, i.e., interobserver agreement.

4
Q

What is Validity?

A

In research, if we profess that a behavior count is valid or accurate, we must convince our audience that the “observed value” corresponds to the “true value.”

Obtaining a valid or true measure requires that we observe our target behaviors at appropriate times of the day, within appropriate activities when the behaviors are likely to occur, and for an appropriate length of time.

5
Q

What is Internal Validity?

A

“Is the independent variable and only the independent variable responsible for the observed changes in behavior?”

If the observed effect of an intervention can be repeated while controlling for potentially confounding variables (i.e., threats to internal validity), then the intervention is said to have internal validity.

6
Q

What is External Validity?

A

Refers to the effectiveness and generality of the independent variable. For example, “Given that an intervention produced a measurable effect with this study participant, will it have a similar effect with other individuals, in other settings, when implemented by other investigators, and when implemented with minor variations in the basic procedure?”

Is this possible to achieve with an SSR (single subject research) design?

7
Q

What is Social Validity?

A

Baer et al. (1968) set the stage for this concern when they specified that the domain of ABA was “behaviors that are socially important, rather than convenient for study” (p. 92).

Social validity should not take the place of direct measures of behavior, but should be used to supplement primary data by providing insights into how clients, students, and others view aspects of your study.

E.g. social significance…did it matter?

The term social validity refers to the degree to which an intervention is valued by the client, interventionist, and community (consumer satisfaction).

8
Q

What is Ecological Validity?

A

Simply put, translating research to practice.

Ecological validity, within the context of single subject research methodology, refers to the extent to which a study has “relevance” and the intervention can be reliably implemented in the “real world” (Kratochwill, 1978).

Consider setting, skills of the interventionist, resources available…

9
Q

What is Content Validity?

A

In the area of achievement testing, the proper variation of the question is, “Does the test measure what was taught?”

Poor scores (low student achievement) may occur when a radically new curriculum is introduced but student performance is evaluated on a traditional measure. The same result can occur when a traditional curriculum is substantially altered but the program is evaluated with traditional measures.

In single subject research design, content validity refers to the degree to which baseline or probe conditions and measures truly measure what is the focus of the treatment or instructional program.

10
Q

What is Criterion Validity?

A

Sometimes referred to as “predictive validity” (Babbie, 1995) and “concurrent validity” (Barlow & Hersen, 1984), criterion validity addresses the degree to which two alternative assessments measure the same behavior or content of knowledge.

In single subject research the question is, “Do alternative baseline or probe test forms, or different observation periods (e.g., morning and afternoon) administered across days, yield similar behavioral measures?” A test of criterion validity is the substitutability of assessments or observations and the degree to which they yield consistent measures regardless of test form or observational period.

11
Q

What are the Threats to Internal Validity? History.

A

History refers to events that occur during an experiment, after the introduction of the independent variable, that may influence the outcome. Sources can be the actions of others (e.g., parents intervene) or of the student participant themselves (e.g., independent research).

Solution: Single subject research designs address history threats by withdrawing and reintroducing the independent variable (A-B-A-B design and its variations) or by staggering the introduction of the independent variable across behaviors, conditions, or participants (multiple baseline and multiple probe designs and their variations).

12
Q

What are the Threats to Internal Validity? Maturation.

A

Maturation refers to changes in behavior due to the passage of time.

Solution: As with history confounding, potential maturation threats to internal validity are addressed through the withdrawal or the staggered introduction of the independent variable. Some refer to “session fatigue” as a maturation threat to validity; session fatigue refers to a participant’s performance decreasing over the course of a session.

13
Q

What are the Threats to Internal Validity? Testing.

A

Testing is a threat in any study that requires participants to respond to the same test repeatedly during a baseline or probe condition.

Repeated testing may have a facilitative effect (improvement in performance over successive baseline or probe testing or observation sessions) or an inhibitive effect (deterioration in performance over successive baseline or probe testing or observation sessions).

Solution: It is important to design your baseline and probe conditions so that they yield participants’ best efforts so that you neither overestimate nor underestimate the impact of the independent variable on the behavior.

Facilitative effects of testing can be avoided by randomizing stimulus presentation order across sessions; not reinforcing correct responses, particularly on receptive tasks; not correcting incorrect responses; and not prompting (intentionally or unintentionally) correct responses. Procedural-reliability checks will help detect procedural errors that could influence participant performance.

Inhibitive effects of testing can be avoided by conducting sessions of an appropriate length (i.e., avoiding session fatigue); interspersing known stimuli with unknown stimuli and reinforcing correct responses to the known stimuli; and reinforcing correct responses on expressive, comprehension, and response-chain tasks.

14
Q

What are the Threats to Internal Validity? Instrumentation.

A

Instrumentation threats refer to concerns with the measurement system, i.e., behavioral definitions, recording procedures, frequency of reliability observations, formula used to calculate interobserver agreement (IOA), independence of observers, observer bias, observer drift, etc.

In single subject research the percentage agreement between two independent observers is the most common strategy for determining whether there is a threat to internal validity due to instrumentation.

Suffice it to say, you must attend to the details of your measurement system to avoid instrumentation threats to internal validity.
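
As an illustration of a point-by-point percentage agreement (IOA) calculation, here is a minimal Python sketch; the observer records and names are hypothetical:

```python
def point_by_point_ioa(observer_1: list[str], observer_2: list[str]) -> float:
    """Point-by-point interobserver agreement, as a percentage.

    Each list holds one record per interval or trial (e.g., "+" or "-").
    IOA = agreements / total intervals * 100.
    """
    if len(observer_1) != len(observer_2):
        raise ValueError("Both observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_1, observer_2))
    return agreements / len(observer_1) * 100

# Example: two observers agree on 9 of 10 intervals -> 90.0% IOA
obs_a = ["+", "+", "-", "+", "-", "-", "+", "+", "+", "-"]
obs_b = ["+", "+", "-", "+", "-", "+", "+", "+", "+", "-"]
print(point_by_point_ioa(obs_a, obs_b))
```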

15
Q

What are the Threats to Internal Validity? Procedural Infidelity.

A

If the procedures of an experimental condition (baseline, probe, intervention, maintenance, generalization) are not consistently implemented across behavior episodes, time, interventionists, etc., as described in the methods section of the research proposal or report, this constitutes a major threat to the internal validity of the findings.

It is recommended that a percentage agreement be calculated and reported for each interventionist (parent, teacher, clinician) behavior in order to measure the degree to which each component of the prescribed condition procedures has been followed.

16
Q

What are the Threats to Internal Validity? Attrition.

A

Participant attrition refers to the “loss of participants” during the course of a study, which can limit the generality of the findings.

A minimum of three participants is typically recommended for inclusion in any one single subject research design investigation. Play it safe and start with 4 or 5.

17
Q

What are the Threats to Internal Validity? Multiple Treatment Interference.

A

Multiple-treatment (or intervention) interference can occur when a study participant’s behavior is influenced by “treatments” or interventions, other than the independent variable alone, during the course of a study.

An interactive effect may be identified due to sequential confounding (the order in which experimental conditions are introduced to participants may influence their behavior) or a carryover effect (a procedure used in one experimental condition influences behavior in an adjacent condition).

To avoid sequential confounding, the order in which experimental conditions are introduced to participants is counterbalanced (e.g., Participant 1: A-B-A-C-A-B-A-C; Participant 2: A-C-A-B-A-C-A-B). Carryover effects are less easily controlled; it is imperative, however, that they be identified if they exist.
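
A minimal Python sketch of assigning counterbalanced condition orders across participants (the participant labels and condition names simply mirror the example above):

```python
# Counterbalance the order of interventions B and C across participants so that
# any sequence effect is not confounded with a single fixed order.
orders = [
    ["A", "B", "A", "C", "A", "B", "A", "C"],  # Participant 1
    ["A", "C", "A", "B", "A", "C", "A", "B"],  # Participant 2
]

for i, order in enumerate(orders, start=1):
    print(f"Participant {i}: {'-'.join(order)}")
```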

18
Q

What are the Threats to Internal Validity? Data Instability.

A

Instability refers to the amount of variability there is in the data (dependent variable) over time.

As a consumer of research, you should determine whether there is a high percentage of overlap between the data points of two adjacent conditions (e.g., whether 30% or more of the data points of Condition B fall within the range of the data-point values of Condition A) and, if there is, you should be skeptical of any statements a researcher might make regarding the effectiveness of the independent variable.

In your own research, when data variability is observed, it is best to (a) maintain the condition until the data stabilize, or (b) attempt to isolate the source of the variability.
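
A rough Python sketch of checking the percentage of overlap between two adjacent conditions (the data values are made up; the 30% figure simply mirrors the rule of thumb above):

```python
def percent_overlap(condition_a: list[float], condition_b: list[float]) -> float:
    """Percentage of Condition B data points falling within the range of Condition A."""
    low, high = min(condition_a), max(condition_a)
    overlapping = sum(low <= value <= high for value in condition_b)
    return overlapping / len(condition_b) * 100

# Hypothetical data: baseline (A) and intervention (B) values
baseline = [2, 4, 3, 5, 4]
intervention = [6, 5, 8, 9, 7]

print(f"{percent_overlap(baseline, intervention):.0f}% of Condition B points overlap Condition A")
# With 30% or more overlap, be skeptical of claims about the independent variable.
```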

19
Q

What are the Threats to Internal Validity? Cyclical Variability.

A

Cyclical variability is a specific type of data instability that refers to a repeated and predictable pattern in the data series over time. When experimental conditions are of equal length (e.g., 5 days in each condition of an A1-B1-A2-B2 withdrawal design), it is possible that your observations coincide with some unidentified natural source that may account for the variability.

20
Q

What are the Threats to Internal Validity? Adaptation.

A

Adaptation refers to a period of time at the start of an investigation in which participants’ recorded behavior may differ from their natural behavior due to the novel conditions under which data are collected.

It is recommended that study participants be exposed to unfamiliar adults, settings, formats, data-collection procedures (e.g., video recording), etc. prior to the start of a study, through what is sometimes referred to as history training, to increase the likelihood that the data collected on the first day of a baseline condition are representative of participants’ “true” behavior.

A “reactive effect” to being observed has been reported and discussed in the applied-research literature for quite some time (Kazdin, 1979), leading to recommendations to be as unobtrusive as possible during data collection (Cooper et al., 2007; Kazdin, 2001).

21
Q

What are the Threats to Internal Validity? The Hawthorne Effect.

A

The Hawthorne Effect, which refers to participants’ observed behavior not being representative of their natural behavior as a result of their knowledge that they are participants in an experiment (Kratochwill, 1978; Portney & Watkins, 2000), is a specific type of adaptation threat to validity related to participants knowing they are part of an ongoing investigation.

E.g., the participant may try to please the researchers.