Test 1 Flashcards
- Involves manipulation of an independent variable (IV) while controlling for confounding variables.
- Uses random assignment to ensure groups are comparable.
- Allows researchers to make causal conclusions about the effect of the IV on the dependent variable (DV).
- Example: Testing the effect of a new drug on depression by randomly assigning participants to a drug group or placebo group.
- Goal: Establish causation.
5 Points
Experimental Research:
- Involves manipulation of an IV but lacks random assignment to conditions.
- Cannot establish strong causal relationships due to potential confounding variables.
- Used when random assignment is impractical or unethical.
4 Points
Give example
Quasi-Experimental Research:
- Example: Studying the effect of a school program on students’ test scores when students are assigned based on existing class groupings.
- No manipulation of variables.
- Focuses on observation, description, or correlation.
- Cannot determine cause-and-effect relationships.
- Can identify associations.
Give example
Non-Experimental Research:
- Example: Studying the relationship between social media use and anxiety through surveys.
Key Goals of Experimental Psychology:
3 Points
- Description
- Prediction
- Explanation
Observe and document behaviors and patterns.
Description
Identify relationships between variables to predict outcomes.
Prediction
Determine cause-and-effect relationships between variables using controlled experimentation.
Explanation
Methods of Knowing:
4 Points
- Intuition: Relying on gut feelings or instinct (can be biased).
- Authority: Accepting knowledge from experts (must be critically evaluated).
- Rationalism: Using logical reasoning to draw conclusions (depends on valid premises).
- Empiricism: Gaining knowledge through direct observation and experience.
A structured way of integrating methods of knowing, minimizing biases and errors through careful methodology.
Scientific reasoning
A systematic approach to acquiring knowledge, reducing bias, and ensuring replicability.
Scientific Method (SM)
Key Features of the Scientific Method:
- Empiricism: Data is collected through structured observation.
- Determinism: Behaviors have identifiable causes.
- Parsimony: The simplest explanation is preferred.
- Testability: Hypotheses must be falsifiable and testable.
APA-Style Guidelines:
- Title Page: Includes title, author(s), and institutional affiliation.
- Abstract: A summary of the research (150–250 words).
- Introduction: Background, hypothesis, and research purpose.
- Method: Details participants, materials, and procedures for replication.
- Results: Data presentation and statistical analysis.
- Discussion: Interpretation of results, implications, and limitations.
- References: Cited sources in APA format.
A testable prediction about the relationship between variables.
Hypothesis:
Types of Hypotheses:
- Null Hypothesis (H₀): No effect or relationship between variables (default assumption).
- Alternative Hypothesis (H₁): There is an effect or relationship.
Steps in Hypothesis Testing:
- Hypothesize: Form a research question & hypothesis.
- Operationalize: Define variables in measurable terms.
- Measure: Collect data.
- Evaluate: Analyze the data.
- Replicate/Revise/Report: Confirm findings or refine hypothesis.
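The steps above can be sketched as a minimal two-group comparison in Python. All data here are hypothetical (simulated scores, not from any real study), and the simple mean comparison stands in for the "Evaluate" step, which a real analysis would do with an inferential test such as a t-test:

```python
import random
import statistics

random.seed(42)  # reproducible simulated data

# Hypothesize: H0 = no difference in scores between groups;
#              H1 = the treatment group scores differ from control.

# Operationalize: "performance" = score on a 0-100 test (hypothetical).
control = [random.gauss(70, 10) for _ in range(30)]
treatment = [random.gauss(75, 10) for _ in range(30)]

# Measure: the two lists above stand in for collected data.

# Evaluate: compare group means (a full analysis would use a t-test).
diff = statistics.mean(treatment) - statistics.mean(control)
print(f"Mean difference (treatment - control): {diff:.2f}")
```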
The factor manipulated or categorized in the study.
Example: Amount of coffee consumed.
Independent Variable (IV):
The outcome being measured.
Example: Cognitive performance on a test.
Dependent Variable (DV):
Types of Research Designs:
- Between-Subjects Design
- Within-Subjects Design
- Mixed Design
Explain Between-Subjects Design:
Different groups experience different conditions.
Example: One group studies with music, another studies in silence.
Explain Within-Subjects Design:
The same participants experience all conditions.
Example: Each participant studies with and without music, then their performance is compared.
Explain Mixed Design:
Combines elements of both designs.
Example: Two groups (between-subjects) test two study methods (within-subjects).
Operationalization:
Defining variables in specific, measurable terms.
Example: If studying stress levels, an operational definition could be heart rate variability or scores on a stress questionnaire.
Ensures consistency and replicability in research.
Measurement:
- Measurement is how researchers quantify variables in a study.
- Involves assigning values to variables using different scales of measurement.
Scales of Measurement:
- Nominal Scale
- Ordinal Scale
- Interval Scale
- Ratio Scale
Explain the nominal scale:
Categories with no inherent order.
Example: Eye color (blue, brown, green).
Explain Ordinal Scale:
Ordered categories but without equal intervals.
Example: Class rankings (1st, 2nd, 3rd) without knowing grade differences.
Explain Interval Scale:
Ordered with equal intervals, but no true zero.
Example: Temperature in Celsius (0°C does not mean ‘no temperature’).
Explain Ratio Scale:
Ordered, equal intervals, and has a true zero.
Example: Reaction time (0 seconds means no reaction).
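The four scales differ in which operations are meaningful, which the card examples above hint at. A small sketch (hypothetical values) makes the contrast concrete:

```python
from collections import Counter

# Hypothetical examples of each scale:
nominal = ["blue", "brown", "green"]   # categories only: count, compare
ordinal = [1, 2, 3]                    # rank order; interval sizes unknown
interval = [20.0, 25.0, 30.0]          # Celsius: differences meaningful, no true zero
ratio = [0.25, 0.50, 1.00]             # reaction times: true zero, ratios meaningful

# Valid for nominal data: the mode (most frequent category).
mode_color = Counter(nominal).most_common(1)[0][0]

# Valid only on a ratio scale: "4x as slow" (1.00 s vs 0.25 s).
ratio_fold = ratio[2] / ratio[0]
print(mode_color, ratio_fold)
```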
Common Data Collection Methods:
1. Surveys & Questionnaires
2. Observations
3. Experimental Tasks
Explain Surveys & Questionnaires:
- Self-reported responses.
Pros: Easy, cost-effective.
Cons: Subject to bias (social desirability, self-report errors).
Explain Observations:
Researchers systematically record behaviors.
Example: Watching children’s play behavior in a natural setting.
Pros: Direct data, real-world insights.
Cons: Observer bias, difficult to control variables.
Explain Experimental Tasks:
Controlled tasks in a lab setting.
Example: Memory recall tasks after different sleep conditions.
Pros: Allows causal inference, highly controlled.
Cons: Can lack real-world application.
Explain Sampling:
The process of selecting participants for a study.
Types of Sampling:
- Random Sampling
- Convenience Sampling
- Stratified Sampling
- Snowball Sampling
Explain Random Sampling:
Every individual in the population has an equal chance of being selected (reduces bias).
Explain Convenience Sampling:
Selecting participants who are readily available (common but may not represent entire population).
Explain Stratified Sampling:
Dividing the population into subgroups and randomly selecting participants from each (ensures diversity).
Explain Snowball Sampling:
Participants recruit other participants (useful for hard-to-reach populations).
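Three of these sampling methods can be sketched in a few lines of Python (a toy population of labeled IDs; strata here are hypothetical halves of the population):

```python
import random

random.seed(0)  # reproducible
population = [f"P{i}" for i in range(100)]

# Random sampling: every individual has an equal chance of selection.
simple = random.sample(population, 10)

# Stratified sampling: divide into subgroups (hypothetical halves here),
# then randomly sample from each stratum to ensure representation.
stratum_a, stratum_b = population[:50], population[50:]
stratified = random.sample(stratum_a, 5) + random.sample(stratum_b, 5)

# Convenience sampling: take whoever is readily available (the first 10)
# -- easy, but may not represent the whole population.
convenience = population[:10]

print(len(simple), len(stratified), len(convenience))
```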
Define Ethics:
The study of right and wrong behavior; provides guidelines for fairness, respect, and integrity in research.
Define Morality:
Personal or societal beliefs about right and wrong; varies across cultures and individuals.
Why Ethics Matter in Research:
- Protects participants from harm (physical, psychological, emotional).
- Ensures research integrity and credibility.
- Prevents exploitation and deception (e.g., Tuskegee Syphilis Study).
Explain Informed Consent:
- Participants must be fully aware of the study’s purpose, procedures, risks, and benefits before agreeing.
- Consent must be voluntary (no coercion).
- Participants have the right to withdraw at any time.
- If participants lack full decision-making ability (e.g., children, cognitively impaired individuals), a legal guardian must provide consent.
Explain confidentiality:
- Researchers must keep participant data secure and anonymous.
- Data should be stored safely (e.g., encrypted files, locked cabinets).
- Participants must be informed if confidentiality cannot be guaranteed (e.g., mandatory reporting of harm to self or others).
Explain Participant Rights:
- Right to be treated with respect.
- Right to be informed of research findings (when applicable).
- Right to refuse or withdraw without penalties.
Ethical Guidelines and Oversight Committees:
- Institutional Review Board (IRB)
- American Psychological Association (APA) Ethics Code
- Institutional Animal Care and Use Committee (IACUC)
Explain Institutional Review Board (IRB):
- Reviews research proposals involving human subjects.
- Ensures ethical treatment and weighs benefits vs. risks.
- Requires researchers to follow guidelines for informed consent and participant protection.
Explain the American Psychological Association (APA) Ethics Code:
- First established in 1953, regularly updated (most recently in 2017).
Key ethical principles:
- Beneficence & Nonmaleficence – Maximize benefits, minimize harm.
- Fidelity & Responsibility – Researchers should be trustworthy and accountable.
- Integrity – Honesty in conducting and reporting research.
- Justice – Equal treatment and fair distribution of research benefits.
- Respect for People’s Rights & Dignity – Protect autonomy, privacy, and confidentiality.
Explain Institutional Animal Care and Use Committee (IACUC):
- Oversees research involving animals.
- Ensures humane treatment and compliance with the Three Rs:
A. Replacement – Use alternatives to animal research when possible.
B. Reduction – Use the fewest number of animals necessary.
C. Refinement – Minimize distress and improve animal welfare.
Historical Unethical Studies (Examples from Slides):
- Tuskegee Syphilis Study (1932-1972): Deception and denial of treatment to African American men with syphilis.
- Milgram Obedience Experiment (1961-1963): Psychological distress due to coercion into administering “shocks.”
- Stanford Prison Experiment (1971): Extreme psychological harm from role-playing as guards and prisoners.
Types of Validity:
1. Internal Validity
2. External Validity
3. Construct Validity
4. Content Validity
5. Face Validity
Explain Internal Validity:
- The extent to which an experiment establishes a cause-and-effect relationship between the independent and dependent variables.
- High internal validity means changes in the DV are due to the IV and not confounding variables.
Threats to Internal validity:
- Confounding variables (uncontrolled factors affecting results).
- Selection bias (non-random assignment of participants).
- Maturation (natural changes in participants over time).
- Instrumentation (changes in measurement tools).
- Experimenter bias (researcher’s influence on results).
Explain External Validity:
The generalizability of the study’s findings to real-world settings, populations, and situations.
Threats to External Validity:
- Population validity (limited generalizability to different groups).
- Ecological validity (results may not apply outside the lab).
- Temporal validity (findings may not hold over time).
- Interaction effects (study conditions affecting generalizability).
Explain Construct Validity:
Ensures the study measures what it claims to measure.
Example: A test for depression should actually assess depressive symptoms, not just general sadness.
Explain Content Validity:
Ensures the measure covers all aspects of the construct.
Example: A science test should include physics, biology, and chemistry, not just physics.
Explain Face Validity:
The surface-level appropriateness of a measure.
Example: A math test should look like it assesses mathematical skills.
Define Reliability:
Reliability refers to the consistency of a measurement method.
Explain Test-Retest Reliability:
Consistency of results over time.
Example: A personality test should give similar results if taken twice.
Explain Internal Consistency:
The degree to which all parts of a test measure the same concept.
Example: If a happiness survey includes five related questions, all should yield similar scores.
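Internal consistency is often quantified with Cronbach's alpha (a standard statistic, though not named in the cards above, so treat this as an illustrative addition). A sketch with hypothetical survey data, where highly similar item columns should push alpha toward 1:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha. items = one list of scores per question,
    all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Five related happiness questions answered by four respondents
# (hypothetical data); near-identical columns -> alpha near 1.
survey = [[4, 2, 5, 3]] * 3 + [[4, 3, 5, 3], [5, 2, 4, 3]]
print(round(cronbach_alpha(survey), 2))
```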
Explain Inter-Rater Reliability:
Consistency in judgments between different observers.
Example: Two teachers grading the same essay should give similar scores.
Techniques to Control for Confounding Variables:
- Random Assignment
- Standardization
- Counterbalancing
- Matching
Explain Random Assignment:
Ensures each participant has an equal chance of being in any condition, reducing selection bias.
Explain Standardization:
Using the same procedures for all participants to minimize variability.
Explain Counterbalancing:
In within-subjects designs, varying the order of conditions to control for order effects.
Explain Matching:
Pairing participants with similar characteristics across conditions.
Explain Random Assignment:
- A method used in experimental research where participants are randomly placed into different groups (e.g., experimental vs. control).
- Helps eliminate bias and increases the likelihood that differences between groups are due to the IV, not extraneous variables.
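Random assignment and counterbalancing are both mechanical procedures, so a short Python sketch can show them side by side (participant IDs and the music/silence conditions are hypothetical, echoing the study-design examples above):

```python
import random
from itertools import permutations

random.seed(1)  # reproducible
participants = [f"P{i}" for i in range(8)]

# Random assignment: shuffle, then split -- each participant has an
# equal chance of landing in either condition.
shuffled = random.sample(participants, len(participants))
experimental, control = shuffled[:4], shuffled[4:]

# Counterbalancing (within-subjects): rotate through every ordering of
# the conditions so order effects cancel out across participants.
orders = list(permutations(["music", "silence"]))
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
print(experimental, control)
```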
Experimental controls, such as random assignment and blinding, help ensure…
fairness and prevent bias, aligning with ethical principles like justice (ensuring equal treatment of participants).
Validity ensures that:
Research produces meaningful and truthful results, which is a moral obligation to avoid misleading conclusions that could harm individuals or groups.
The APA’s acknowledgment of past racism in psychology highlights the importance of
ethical research design to prevent the misuse of scientific findings.
Informed Consent, Confidentiality, and Participant Rights:
- Informed consent requires participants to understand a study’s purpose, risks, and benefits.
- Ensuring internal validity (that the study truly measures cause and effect) is crucial for ethical research—if a study is poorly controlled, misleading results can violate participants’ rights to accurate information.
- Ethical guidelines stress that research must not deceive or exploit participants. Historical cases of unethical research (e.g., intelligence testing used to justify racial discrimination) show why validity and transparency are essential.
Ethical Guidelines and Oversight (IRBs, APA, IACUC):
- Institutional Review Boards (IRBs) evaluate research designs to ensure they minimize risks and control confounding variables that could invalidate findings.
- The APA’s ethics code emphasizes integrity and beneficence, which align with ensuring research has strong validity so that results contribute positively to society.
- The Institutional Animal Care and Use Committee (IACUC) applies similar principles to animal research, ensuring that experiments are designed to yield valid results while minimizing harm.