Test 1 Flashcards

1
Q
  1. Involves manipulation of an independent variable (IV) while controlling for confounding variables.
  2. Uses random assignment to ensure groups are comparable.
  3. Allows researchers to make causal conclusions about the effect of the IV on the dependent variable (DV).
  4. Example: Testing the effect of a new drug on depression by randomly assigning participants to a drug group or placebo group.
  5. Goal: Establish causation.

5 Points

A

Experimental Research:

2
Q
  1. Involves manipulation of an IV but lacks random assignment to conditions.
  2. Cannot establish strong causal relationships due to potential confounding variables.
  3. Used when random assignment is impractical or unethical.

4 Points

Give an example.

A

Quasi-Experimental Research:

  1. Example: Studying the effect of a school program on students’ test scores when students are assigned based on existing class groupings.
3
Q
  1. No manipulation of variables.
  2. Focuses on observation, description, or correlation.
  3. Cannot determine cause-and-effect relationships.
  4. Can identify associations.

Give an example.

A

Non-Experimental Research:

  1. Example: Studying the relationship between social media use and anxiety through surveys.
4
Q

Key Goals of Experimental Psychology:

3 Points

A
  1. Description
  2. Prediction
  3. Explanation
5
Q

Observe and document behaviors and patterns.

A

Description

6
Q

Identify relationships between variables to predict outcomes.

A

Prediction

7
Q

Determine cause-and-effect relationships between variables using controlled experimentation.

A

Explanation

8
Q

Methods of Knowing:

4 Points

A
  1. Intuition: Relying on gut feelings or instinct (can be biased).
  2. Authority: Accepting knowledge from experts (must be critically evaluated).
  3. Rationalism: Using logical reasoning to draw conclusions (depends on valid premises).
  4. Empiricism: Gaining knowledge through direct observation and experience.
9
Q

A structured way of integrating methods of knowing, minimizing biases and errors through careful methodology.

A

Scientific reasoning

10
Q

A systematic approach to acquiring knowledge, reducing bias, and ensuring replicability.

A

Scientific Method (SM)

11
Q

Key Features of scientific method:

A
  1. Empiricism: Data is collected through structured observation.
  2. Determinism: Behaviors have identifiable causes.
  3. Parsimony: The simplest explanation is preferred.
  4. Testability: Hypotheses must be falsifiable and testable.
12
Q

APA-Style Guidelines:

A
  1. Title Page: Includes title, author(s), and institutional affiliation.
  2. Abstract: A summary of the research (150–250 words).
  3. Introduction: Background, hypothesis, and research purpose.
  4. Method: Details participants, materials, and procedures for replication.
  5. Results: Data presentation and statistical analysis.
  6. Discussion: Interpretation of results, implications, and limitations.
  7. References: Cited sources in APA format.
13
Q

A testable prediction about the relationship between variables.

A

Hypothesis:

14
Q

Types of Hypotheses:

A
  1. Null Hypothesis (H₀): No effect or relationship between variables (default assumption).
  2. Alternative Hypothesis (H₁): There is an effect or relationship.
15
Q

Steps in Hypothesis Testing:

A
  1. Hypothesize: Form a research question & hypothesis.
  2. Operationalize: Define variables in measurable terms.
  3. Measure: Collect data.
  4. Evaluate: Analyze the data.
  5. Replicate/Revise/Report: Confirm findings or refine hypothesis.
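
The five steps above can be sketched as a minimal Python example (hypothetical scores and group labels; a simple permutation test stands in for the Evaluate step — a real analysis would use a formal statistical test):

```python
import random
import statistics

random.seed(1)

# 1. Hypothesize: "Caffeine improves test scores" (H1) vs. no effect (H0).
# 2. Operationalize: IV = caffeine vs. placebo group; DV = test score (0-100).
caffeine = [74, 81, 78, 85, 79, 83]   # 3. Measure: collected data (hypothetical)
placebo = [70, 72, 75, 69, 74, 71]

# 4. Evaluate: observed mean difference, checked with a permutation test.
observed = statistics.mean(caffeine) - statistics.mean(placebo)
pooled = caffeine + placebo
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

# 5. Replicate/Revise/Report: a small p-value favors H1 over H0.
print(f"observed difference = {observed:.2f}, p ~ {p_value:.3f}")
```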
16
Q

The factor manipulated or categorized in the study.
Example: Amount of coffee consumed.

A

Independent Variable (IV):

17
Q

The outcome being measured.

A

Dependent Variable (DV):

Example: Cognitive performance on a test.

18
Q

Types of Research Designs:

A
  1. Between-Subjects Design
  2. Within-Subjects Design
  3. Mixed Design
19
Q

Explain Between-Subjects Design:

A

Different groups experience different conditions.

Example: One group studies with music, another studies in silence.

20
Q

Explain Within-Subjects Design:

A

The same participants experience all conditions.

Example: Each participant studies with and without music, then their performance is compared.

21
Q

Explain Mixed Design:

A

Combines elements of both designs.

Example: Two groups (between-subjects) test two study methods (within-subjects).

22
Q

Operationalization:

A

Defining variables in specific, measurable terms.

Example: If studying stress levels, an operational definition could be heart rate variability or scores on a stress questionnaire.

Ensures consistency and replicability in research.

23
Q

Measurement:

A
  1. Measurement is how researchers quantify variables in a study.
  2. Involves assigning values to variables using different scales of measurement.
24
Q

Scales of Measurement:

A
  1. Nominal Scale
  2. Ordinal Scale
  3. Interval Scale
  4. Ratio Scale
25
Q

Explain the nominal scale:

A

Categories with no inherent order.

Example: Eye color (blue, brown, green).

26
Q

Explain Ordinal Scale:

A

Ordered categories but without equal intervals.

Example: Class rankings (1st, 2nd, 3rd) without knowing grade differences.

27
Q

Explain Interval Scale:

A

Ordered with equal intervals, but no true zero.

Example: Temperature in Celsius (0°C does not mean ‘no temperature’).

28
Q

Explain Ratio Scale:

A

Ordered, equal intervals, and has a true zero.

Example: Reaction time (0 seconds means no reaction).

29
Q

Common Data Collection Methods:

A

  1. Surveys & Questionnaires
  2. Observations
  3. Experimental Tasks

30
Q

Explain Surveys & Questionnaires:

A
  1. Self-reported responses.

Pros: Easy, cost-effective.
Cons: Subject to bias (social desirability, self-report errors).

31
Q

Explain Observations:

A

Researchers systematically record behaviors.

Example: Watching children’s play behavior in a natural setting.

Pros: Direct data, real-world insights.
Cons: Observer bias, difficult to control variables.

32
Q

Explain Experimental Tasks:

A

Controlled tasks in a lab setting.

Example: Memory recall tasks after different sleep conditions.

Pros: Allows for causation, highly controlled.
Cons: Can lack real-world application.

33
Q

Explain Sampling:

A

The process of selecting participants for a study.

34
Q

Types of Sampling:

A
  1. Random Sampling
  2. Convenience Sampling
  3. Stratified Sampling
  4. Snowball Sampling
35
Q

Explain Random Sampling:

A

Every individual in the population has an equal chance of being selected (reduces bias).

36
Q

Explain Convenience Sampling:

A

Selecting participants who are readily available (common but may not represent entire population).

37
Q

Explain Stratified Sampling:

A

Dividing the population into subgroups and randomly selecting participants from each (ensures diversity).

38
Q

Explain Snowball Sampling:

A

Participants recruit other participants (useful for hard-to-reach populations).
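
The first three sampling types can be contrasted in a short Python sketch (hypothetical population of 100 students tagged with a year subgroup; all names are illustrative):

```python
import random

random.seed(0)

# Hypothetical population: 100 students, alternating year subgroups.
population = [{"id": i, "year": year}
              for i, year in enumerate(["freshman", "senior"] * 50)]

# Random sampling: every individual has an equal chance of selection.
random_sample = random.sample(population, 10)

# Convenience sampling: whoever is readily available (here, the first 10).
convenience_sample = population[:10]

# Stratified sampling: divide into subgroups, then sample randomly from each.
strata = {"freshman": [], "senior": []}
for person in population:
    strata[person["year"]].append(person)
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, 5)]

print(len(random_sample), len(convenience_sample), len(stratified_sample))
```

Note how the stratified sample is guaranteed to contain five students from each subgroup, while the convenience sample is simply the first ten entries and may not represent the whole population.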

39
Q

Define Ethics:

A

The study of right and wrong behavior; provides guidelines for fairness, respect, and integrity in research.

40
Q

Define Morality:

A

Personal or societal beliefs about right and wrong; varies across cultures and individuals.

41
Q

Why Ethics Matter in Research:

A
  1. Protects participants from harm (physical, psychological, emotional).
  2. Ensures research integrity and credibility.
  3. Prevents exploitation and deception (e.g., Tuskegee Syphilis Study).
42
Q

Explain Informed Consent:

A
  1. Participants must be fully aware of the study’s purpose, procedures, risks, and benefits before agreeing.
  2. Consent must be voluntary (no coercion).
  3. Participants have the right to withdraw at any time.
  4. If participants lack full decision-making ability (e.g., children, cognitively impaired individuals), a legal guardian must provide consent.
43
Q

Explain confidentiality:

A
  1. Researchers must keep participant data secure and anonymous.
  2. Data should be stored safely (e.g., encrypted files, locked cabinets).
  3. Participants must be informed if confidentiality cannot be guaranteed (e.g., mandatory reporting of harm to self or others).
44
Q

Explain Participant Rights:

A
  1. Right to be treated with respect.
  2. Right to be informed of research findings (when applicable).
  3. Right to refuse or withdraw without penalties.
45
Q

Ethical Guidelines and Oversight Committees:

A
  1. Institutional Review Board (IRB)
  2. American Psychological Association (APA) Ethics Code
  3. Institutional Animal Care and Use Committee (IACUC)
46
Q

Explain Institutional Review Board (IRB):

A
  1. Reviews research proposals involving human subjects.
  2. Ensures ethical treatment and weighs benefits vs. risks.
  3. Requires researchers to follow guidelines for informed consent and participant protection.
47
Q

Explain the American Psychological Association (APA) Ethics Code:

A
  1. First established in 1953, regularly updated (most recently in 2017).

Key ethical principles:

  1. Beneficence & Nonmaleficence – Maximize benefits, minimize harm.
  2. Fidelity & Responsibility – Researchers should be trustworthy and accountable.
  3. Integrity – Honesty in conducting and reporting research.
  4. Justice – Equal treatment and fair distribution of research benefits.
  5. Respect for People’s Rights & Dignity – Protect autonomy, privacy, and confidentiality.
48
Q

Explain Institutional Animal Care and Use Committee (IACUC):

A
  1. Oversees research involving animals.
  2. Ensures humane treatment and compliance with the Three Rs:

A. Replacement – Use alternatives to animal research when possible.

B. Reduction – Use the fewest number of animals necessary.

C. Refinement – Minimize distress and improve animal welfare.

49
Q

Historical Unethical Studies (Examples from Slides):

A
  1. Tuskegee Syphilis Study (1932–1972): Deception and denial of treatment to African American men with syphilis.
  2. Milgram Obedience Experiment (1961–1963): Psychological distress due to coercion into administering “shocks.”
  3. Stanford Prison Experiment (1971): Extreme psychological harm from role-playing as guards and prisoners.
50
Q

Types of Validity:

A

  1. Internal Validity
  2. External Validity
  3. Construct Validity
  4. Content Validity
  5. Face Validity

51
Q

Explain Internal Validity:

A
  1. The extent to which an experiment establishes a cause-and-effect relationship between the independent and dependent variables.
  2. High internal validity means changes in the DV are due to the IV and not confounding variables.
52
Q

Threats to Internal validity:

A
  1. Confounding variables (uncontrolled factors affecting results).
  2. Selection bias (non-random assignment of participants).
  3. Maturation (natural changes in participants over time).
  4. Instrumentation (changes in measurement tools).
  5. Experimenter bias (researcher’s influence on results).
53
Q

Explain External Validity:

A

The generalizability of the study’s findings to real-world settings, populations, and situations.

54
Q

Threats to External Validity:

A
  1. Population validity (limited generalizability to different groups).
  2. Ecological validity (results may not apply outside the lab).
  3. Temporal validity (findings may not hold over time).
  4. Interaction effects (study conditions affecting generalizability).
55
Q

Explain Construct Validity:

A

Ensures the study measures what it claims to measure.

Example: A test for depression should actually assess depressive symptoms, not just general sadness.

56
Q

Explain Content Validity:

A

Ensures the measure covers all aspects of the construct.

Example: A science test should include physics, biology, and chemistry, not just physics.

57
Q

Explain Face Validity:

A

The surface-level appropriateness of a measure.

Example: A math test should look like it assesses mathematical skills.

58
Q

Define Reliability:

A

Reliability refers to the consistency of a measurement method.

59
Q

Explain Test-Retest Reliability:

A

Consistency of results over time.
Example: A personality test should give similar results if taken twice.

60
Q

Explain Internal Consistency:

A

The degree to which all parts of a test measure the same concept.

Example: If a happiness survey includes five related questions, all should yield similar scores.

61
Q

Explain Inter-Rater Reliability:

A

Consistency in judgments between different observers.

Example: Two teachers grading the same essay should give similar scores.
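
A simple index of inter-rater reliability is percent agreement, sketched below with hypothetical ratings (a fuller analysis would use Cohen's kappa to correct for chance agreement):

```python
# Hypothetical ratings: two observers scoring the same ten essays (1-5 scale).
rater_a = [4, 3, 5, 2, 4, 4, 3, 5, 2, 4]
rater_b = [4, 3, 4, 2, 4, 5, 3, 5, 2, 4]

# Percent agreement: proportion of items on which the raters match exactly.
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"{agreement:.0%} agreement")
```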

62
Q

Techniques to Control for Confounding Variables:

A
  1. Random Assignment
  2. Standardization
  3. Counterbalancing
  4. Matching
63
Q

Explain Random Assignment:

A

Ensures each participant has an equal chance of being in any condition, reducing selection bias.

64
Q

Explain Standardization:

A

Using the same procedures for all participants to minimize variability.

65
Q

Explain Counterbalancing:

A

In within-subjects designs, varying the order of conditions to control for order effects.
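
Counterbalancing can be sketched in a few lines of Python — here full counterbalancing of two hypothetical conditions, rotating orders across participants so each order is used equally often:

```python
from itertools import permutations

# Hypothetical within-subjects conditions: studying with music vs. in silence.
conditions = ["music", "silence"]

# Full counterbalancing: every possible order of the conditions.
orders = list(permutations(conditions))

# Assign participants to orders in rotation so order effects cancel out
# across the sample.
participants = ["P1", "P2", "P3", "P4"]
assignments = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
for p, order in assignments.items():
    print(p, "->", order)
```

With more conditions, full counterbalancing grows factorially, which is why partial schemes such as Latin squares are often used instead.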

66
Q

Explain Matching:

A

Pairing participants with similar characteristics across conditions.

67
Q

Explain Random Assignment:

A
  1. A method used in experimental research where participants are randomly placed into different groups (e.g., experimental vs. control).
  2. Helps eliminate bias and increases the likelihood that differences between groups are due to the IV, not extraneous variables.
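
The idea can be illustrated with a minimal Python sketch (hypothetical participant pool; shuffle-then-split is one simple way to implement random assignment):

```python
import random

random.seed(42)

# Hypothetical participant pool.
participants = [f"P{i}" for i in range(1, 21)]

# Random assignment: shuffle, then split, so each participant has an equal
# chance of landing in either the experimental or the control condition.
random.shuffle(participants)
experimental = participants[:10]
control = participants[10:]

print(len(experimental), len(control))
```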
68
Q

Experimental controls, such as random assignment and blinding, help ensure…

A

fairness and prevent bias, aligning with ethical principles like justice (ensuring equal treatment of participants).

69
Q

Validity ensures that :

A

Research produces meaningful and truthful results, which is a moral obligation to avoid misleading conclusions that could harm individuals or groups.

70
Q

The APA’s acknowledgment of past racism in psychology highlights the importance of

A

ethical research design to prevent the misuse of scientific findings.

71
Q

Informed Consent, Confidentiality, and Participant Rights:

A
  1. Informed consent requires participants to understand a study’s purpose, risks, and benefits.
  2. Ensuring internal validity (that the study truly measures cause and effect) is crucial for ethical research—if a study is poorly controlled, misleading results can violate participants’ rights to accurate information.
  3. Ethical guidelines stress that research must not deceive or exploit participants. Historical cases of unethical research (e.g., intelligence testing used to justify racial discrimination) show why validity and transparency are essential.
72
Q

Ethical Guidelines and Oversight (IRBs, APA, IACUC):

A
  1. Institutional Review Boards (IRBs) evaluate research designs to ensure they minimize risks and control confounding variables that could invalidate findings.
  2. The APA’s ethics code emphasizes integrity and beneficence, which align with ensuring research has strong validity so that results contribute positively to society.
  3. The Institutional Animal Care and Use Committee (IACUC) applies similar principles to animal research, ensuring that experiments are designed to yield valid results while minimizing harm.