Midterm Flashcards
Belmont Report
1) Beneficence: weighing the potential benefits of the research against the potential harm to participants
2) Respect for persons (autonomy): respecting participants and their right to make their own decisions
3) Justice: fairness in who bears the risks of research and who receives its benefits
APA Code of Ethics
1) Beneficence and nonmaleficence: weighing potential benefits against potential harm; maximize benefit, avoid harm
2) Fidelity and responsibility: maintaining trust and following through on professional obligations
3) Integrity: accuracy and honesty; no fabrication, fraud, or plagiarism
4) Justice: fairness in who bears the risks and who receives the benefits
5) Respect for people's rights and dignity: respecting individual differences, respecting consent, being aware of one's own biases
Six steps of a research project
1) Ask a question stemming from a theory
2) Develop a specific and testable hypothesis
3) Select a method and design the study
4) Collect the data
5) Analyze data and draw conclusions
6) Report findings
How do we minimize harm?
1) Informed consent
2) Debriefing
3) IRB (Institutional Review Board) review and approval
What defines experimental design?
Must have manipulation of the independent variable by the researcher and random assignment of participants to conditions
What is a quasi-experimental or subject variable?
A characteristic of the participant that the researcher cannot manipulate or randomly assign; participants can only be grouped by it (height, shoe size, age, eye color, etc.)
Internal validity
The extent to which causal conclusions can be substantiated
External validity
The extent to which results can be generalized
Construct validity
The degree to which variable operations accurately reflect the construct they’re designed to measure (free from systematic error)
Criteria for causality
1) Relationship between variables
2) Causal variable precedes affected variable
3) No possibility of a third variable affecting both (confounding)
What makes a true experiment?
A true experiment manipulates the independent variable and randomly assigns participants, which is what gives it internal validity
Reliability
The extent to which a measure is consistent (free from random error)
Ways to measure reliability
1) Test-retest reliability
2) Internal consistency
3) Inter-rater reliability
Test-retest reliability
If you measure the same individuals at two different points in time, the two sets of scores should be highly correlated
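A minimal Python sketch of how a test-retest correlation might be computed (the data and variable names are illustrative, not from the course):

```python
# Minimal sketch: test-retest reliability as the Pearson correlation
# between time-1 and time-2 scores from the same participants.
# Data and variable names are illustrative.
import numpy as np

time1 = np.array([12, 15, 9, 20, 14, 17])
time2 = np.array([13, 14, 10, 19, 15, 18])

r = np.corrcoef(time1, time2)[0, 1]  # Pearson correlation coefficient
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```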
Internal consistency
Whether the individual items in a scale correlate well with each other; Cronbach's alpha summarizes how strongly the items hang together, with values closer to 1 indicating higher internal consistency
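A minimal sketch of the standard Cronbach's alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), assuming responses are stored as a participants-by-items array (illustrative data):

```python
# Minimal sketch of Cronbach's alpha:
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
# `items` is a participants-by-items array of illustrative scale responses.
import numpy as np

items = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

k = items.shape[1]                               # number of items in the scale
item_variances = items.var(axis=0, ddof=1)       # variance of each item across participants
total_variance = items.sum(axis=1).var(ddof=1)   # variance of participants' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```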
Inter-rater reliability
The agreement of observations made by two or more judges
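A minimal sketch of two common agreement indices for two judges coding the same observations: percent agreement and Cohen's kappa (a chance-corrected version). The ratings and category labels are illustrative:

```python
# Minimal sketch of inter-rater agreement for two judges coding the same
# observations (illustrative ratings). Percent agreement is the simplest index;
# Cohen's kappa corrects it for agreement expected by chance.
from collections import Counter

rater_a = ["hit", "miss", "hit", "hit", "miss", "hit", "hit", "miss"]
rater_b = ["hit", "miss", "hit", "miss", "miss", "hit", "hit", "hit"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability that both raters independently pick the same category
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```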
Ways to measure construct validity
1) Face validity
2) Content validity
3) Convergent validity
4) Discriminant validity
5) Predictive validity
6) Concurrent validity
Face validity
How obvious it is to the participant what the test is measuring
Content validity
Whether experts believe the measure relates to the concept being assessed
Convergent validity
The measure overlaps with a different measure intended to tap the same theoretical construct (a participant who fills out both surveys should get correlated scores)
Discriminant validity
The measure does not overlap with other measures that are intended to tap different or opposite theoretical constructs
Predictive validity
The measure’s ability to predict a future behavior or outcome
Concurrent validity
The extent to which the measure corresponds with another current behavior or outcome
Nominal scale
Numbers stand for categories but mean nothing themselves (male = 1, female = 2)
Ordinal scale
Numbers indicate rank order, indicating preference but not by how much (psych = 1, bio = 2, math = 3)
Interval scale
The distances between numbers on a scale are all equal in size, but zero is an arbitrary reference point (Likert scale)
Ratio scale
The only scale that measures a true amount of something. Zero means a non-existent amount of that variable, there cannot be negative numbers, and 4 is twice as much as 2
Closed-ended question
Has a limited number of response alternatives, meaning higher specificity but less variety
Open-ended question
Allows respondents to generate their own answers, meaning more variety but less control and harder to analyze
Interviewer bias
The researcher may subtly suggest a desired response, interpret the response in the desired way, or probe open-ended questions to get the desired response
Respondent bias
Participants may answer in line with social desirability or fall into a response set (answering all questions the same way)
Ways to assess the construct validity of the independent variable
1) Pre-test
2) Manipulation check
Pre-test
Conducted before the actual study with a different set of participants and is meant to determine if the IV manipulation works as predicted
Manipulation check
Conducted during the study and assesses whether the manipulation of the IV had its intended effects
Ways to measure the dependent variable
1) Self-report
2) Behavioral
3) Physiological
Self-report
Asking participants about the behavior of interest; is easy and cheap, but is subject to bias
Behavioral measure
Direct observations of participant behavior; is effective and direct, but can be expensive, time-consuming, and subject to reactivity
Physiological measure
Directly recording responses of the body; is objective and measures strength of the reaction, but does not always capture valence and is subject to reactivity
Ways to control for participant expectations
1) Cover story: provides rationale
2) Filler items: reduces face validity
3) Placebo group: a level of the IV with no active treatment, revealing the role of expectations
Experimenter bias
When an experimenter might subtly suggest how they hope the participant will respond
Ways to reduce experimenter bias
1) Double-blind study: neither the experimenter nor the participant knows which IV condition the participant is in
2) Blind to hypothesis: experimenter does not know the hypothesis of the study
3) Automated scripts and computers
4) Running participants in groups
Post-test only design
Participants are randomly assigned to one level of the IV and then measured
Pre-test Post-test design
Participants are given a pre-test, then randomly assigned to one level of the IV, and measured again after the manipulation
What is the purpose of a pre-test?
The pre-test gives a baseline measure of the DV before any IV manipulation in order to…
- ensure that groups are similar to start
- identify certain characteristics of participants
- measure the amount of change
- understand mortality (attrition), i.e., which participants drop out before the post-test
Between participants/independent groups
Each participant is randomly assigned to one level of the IV
Within participants/repeated measures
Each participant is assigned to all of the levels of the IV