Chapter 5 Flashcards
Facts
It is not possible to calculate reliability; researchers can only estimate it

Validity Definition
Whether the scale measures what it was intended to measure
Reliability to Validity Relationship
- Low reliability to low validity
- High reliability to low validity
- High reliability to high validity
Face Validity
Examines how the test appears - the logical sense of the survey
Criterion-Related Validity
Measures one topic in two different ways
Construct Validity
Measures a concept that is not directly observable
Content Validity
How well a test measures the specific content it is intended to measure
What are the four types of Validity?
- Face Validity
- Criterion-Related Validity
- Construct Validity
- Content Validity
What are the nine threats to Internal Validity?
1. History
2. Maturation
3. Testing
4. Instrumentation
5. Regression
6. Ceiling and floor effects
7. Attrition
8. Selection
9. Hawthorne effect
Internal Validity: History
When an event happens during research that influences the behavior of participating individuals
Internal Validity: Maturation
The natural change that occurs over time with individuals
Internal Validity: Testing
Differences noted from pre-test to post-test that can be attributed to students becoming familiar with the test
Internal Validity: Instrumentation
Measures changes in respondent performance which cannot be credited to the treatment or intervention
Internal Validity: Regression
Some respondents performing well on the pre-test and poorly on the post-test, or vice versa, merely by chance
Reliability definition
Relates to consistency, or the ability to repeat results
Internal Validity: Ceiling and floor effects
- Ceiling effect: when all participating individuals perform extremely well on both the pre-test and the post-test
- Floor effect: when individual performance starts out low and remains low
Internal Validity: Attrition
Individuals lost from the study
Internal Validity: Selection
When participating individuals are different at the onset of the study
Internal Validity: Hawthorne effect
Individuals improve their performance when they know they are being watched; named after workers at the Western Electric Company plant in Hawthorne, Illinois
Generalizability
Generalizability is linked to independent variables
Independent variables
Variables that researchers manipulate and control
Dependent variables
Variables that are measured, not manipulated, by the researcher
Threats to External Validity
- Refers to the generalizability of research results
- Repeating research in different populations is the best way to assess generalizability
Seven Factors that influence generalizability
1. Population
2. Environment
3. Temporal / Sequential
4. Participants
5. Testing and treatment interaction
6. Reactive arrangements
7. Multiple treatment conditions
External Validity: Population
When population selection is so specific that the treatment is matched to a particular sample and does not apply to a wider population
External Validity: Environment
The change from a controlled environment to a less controlled environment, or vice versa
External Validity: Temporal / Sequential
The timing of the study, or the sequence in which treatments are administered, could affect the results
External Validity: Participants
- Animal-to-human links
- Human-to-human links
- Gender bias
- Racial bias
- Cultural and ethnocentric bias
External Validity: Testing and treatment interaction
If participants learn from the pretest, then they may be less likely to learn as much from treatment
External Validity: Reactive arrangements
If individuals change their behavior when observed (threat to internal validity), results are not generalizable to real world conditions (threat to external validity)
External Validity: Multiple treatment conditions
Multiple treatments may create an artificial setting that does not exist in the real world, so results are not generalizable
Relationship between Internal and External Validity
- Internal validity is more critical than external validity
- Without internal validity, research is not testing what it purports to measure
- As the study's inclusion criteria become more selective, the results become less generalizable
Random errors
- Occur by chance and are inconsistent across respondents
- Increase or decrease results in an unpredictable manner
- Researchers have no control over their occurrence
- Reduced through statistical methods by averaging scores over a larger sample size
- Influence reliability
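The averaging idea above can be sketched in a few lines of Python. This is a hypothetical simulation, not a method from the chapter: the true score, the error range, and the sample sizes are all made-up numbers chosen to show that zero-mean random error shrinks as scores are averaged over a larger sample.

```python
# Hypothetical simulation: random (zero-mean) error averages out as the
# sample grows; a systematic error would not.
import random

random.seed(0)       # fixed seed so the run is reproducible
TRUE_SCORE = 50.0    # made-up "true" value being measured

def mean_observed(n):
    # each observation = true score + a random error drawn from [-5, 5]
    return sum(TRUE_SCORE + random.uniform(-5, 5) for _ in range(n)) / n

error_small = abs(mean_observed(10) - TRUE_SCORE)
error_large = abs(mean_observed(10_000) - TRUE_SCORE)
print(f"absolute error with n = 10:    {error_small:.3f}")
print(f"absolute error with n = 10000: {error_large:.3f}")
```

With ten observations the average can still miss the true score by a point or more; with ten thousand, the random errors nearly cancel.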
Systematic errors
- Consistent in the same direction (all results have the same error)
- Introduce inaccuracy and bias into the measurement
- Problematic to detect and eliminate
- Cannot be reduced through statistical methods
- Influence validity
- Occur in three areas: environment, observation, drift
Randomized Controlled Trial (RCT)
- The gold standard of research design
- Participants are randomly assigned to either a treatment group or a non-treatment group
- Participants in each group have similar characteristics
- Allows researchers to draw conclusions with confidence
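The random-assignment step of an RCT can be sketched as below. This is a minimal illustration, not the chapter's procedure; the participant IDs and group sizes are hypothetical.

```python
# Minimal sketch of random assignment: shuffle the participant list,
# then split it into treatment and control groups of equal size.
import random

random.seed(42)  # fixed seed so the split is reproducible
participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]
print("treatment group:", sorted(treatment))
print("control group:  ", sorted(control))
```

Because assignment depends only on chance, any pre-existing participant characteristics are expected to be spread evenly across the two groups, which is what lets researchers attribute group differences to the treatment.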
If Sample Size is too small?
When the sample is too small, results are inconclusive and significant differences among groups are statistically harder to detect
If Sample Size is too large?
When the sample is too large, cost, feasibility, and time become problematic
Precision and Accuracy of the study increase as…
The sample size increases
Selection bias
Specific individuals or groups are purposely omitted from the research
Measurement bias
Occurs because of systematic errors in measurement
Intervention bias
Intervention groups are treated differently than control groups when the researchers know which group is which
Pilot Testing
Involves conducting a preliminary test of data collection tools and procedures to identify and eliminate problems
Conduct a pilot study by categories:
- Sample of Respondents
- Data Collection for Pilot Tests
- Data Analysis
- Outcome
Cronbach’s alpha
Cronbach’s alpha is a measure of internal consistency, that is, how closely related a set of items are as a group
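Cronbach's alpha has a standard closed-form: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch, with made-up respondent data (the formula is standard; the scores are purely illustrative):

```python
# Minimal sketch of Cronbach's alpha using population variances.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of respondent scores per survey item."""
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Five hypothetical respondents answering three Likert-scale items
items = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [3, 5, 4, 4, 1],  # item 3
]
print(round(cronbach_alpha(items), 3))  # -> 0.902
```

An alpha near 1 means the items vary together (respondents who score high on one item score high on the others), i.e., the scale is internally consistent.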
