manipulated variable Flashcards

1
Q

The variable that is manipulated in order to observe its effect on the dependent variable; remember that the independent variable is the one that impacts the dependent variable.

A

Independent variable

2
Q

The outcome or response variable: what is measured to see whether the independent variable had an effect

A

dependent variable

3
Q

a factor in an experiment that the researcher purposefully keeps the same (holds constant) across all conditions

A

control variable

4
Q

the participants in an experiment whose level on the independent variable involves the medication, therapy, or intervention being tested

A

treatment groups

5
Q

a group in an experiment whose level on the independent variable differs from that of the treatment group in some intended and meaningful way

A

comparison group

6
Q

a type of comparison group whose level on the independent variable is intended to represent "no treatment" or a neutral, baseline condition

A

control group

7
Q

a control group of participants who believe they are receiving treatment but who are only receiving a placebo

A

placebo group

8
Q

a second variable that varies systematically along with the independent variable, providing an alternative explanation for the outcome

A

confound

9
Q

two separate groups of participants experience the two different conditions of the experiment; requires more participants than a within-groups design

A

independent groups design

10
Q

the "really bad experiment": it lacks a comparison group, which makes it vulnerable to threats to internal validity

A

one-group pretest/posttest design

11
Q

What are the first three threats to consider for internal validity?

A

design confounds, selection effects, and order effects.

12
Q

a threat to internal validity in which a second variable happens to vary systematically along with the independent variable and therefore provides an alternative explanation for the results

A

design confounds

13
Q

a threat to internal validity that occurs in an independent-groups design when the kinds of participants at one level of the independent variable are systematically different from those at the other level

A

selection effects

14
Q

in a within-groups design, a threat to internal validity in which exposure to one condition changes participants' responses to a later condition

A

order effect

15
Q

requires more participants: two separate groups of participants experience the two different conditions of the experiment

A

Independent groups design

16
Q

participants are randomly assigned to levels of the independent variable and are tested on the dependent variable only once

A

post-test only design

17
Q

participants are tested on the dependent variable once before and once after exposure to the treatment.

A

pretest-posttest design

18
Q

participants are exposed to all levels of the independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable.

A

concurrent measures design

19
Q

participants are measured on a dependent variable more than once after exposure to each level of the independent variable

A

repeated measures design

20
Q

a test to determine whether the manipulation of the independent variable actually has the intended effect

A

manipulation check

21
Q

a threat to internal validity that occurs when an observed change in an experimental group could have emerged more or less spontaneously over time. Solution: include a comparison group

A

maturation threat

22
Q

external events that occur during the study that could affect the dependent variable. Solution: use a control group or match participants to control for external events

A

history threat

23
Q

extreme scores that regress toward the mean over time. Solution: use a control group or measure the dependent variable multiple times.

A

regression threat/regression to the mean
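
A minimal simulation sketch (not part of the original card; all numbers are hypothetical) of why regression to the mean threatens one-group designs: people selected for extreme pretest scores tend to score closer to the average on a retest even when no treatment is given, which is exactly what a no-treatment comparison group reveals.

    import random
    import statistics

    # Hypothetical model: each observed score = stable true ability + random noise.
    random.seed(1)
    true_ability = [random.gauss(100, 10) for _ in range(10_000)]
    pretest  = [t + random.gauss(0, 10) for t in true_ability]
    posttest = [t + random.gauss(0, 10) for t in true_ability]  # no treatment given

    # Select the participants with the most extreme (lowest 5%) pretest scores.
    cutoff = sorted(pretest)[500]
    extreme = [i for i, p in enumerate(pretest) if p <= cutoff]

    pre_mean  = statistics.mean(pretest[i] for i in extreme)
    post_mean = statistics.mean(posttest[i] for i in extreme)
    print(f"extreme group, pretest mean:  {pre_mean:.1f}")
    print(f"extreme group, posttest mean: {post_mean:.1f}  (closer to 100 with no intervention)")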

24
Q

participants dropping out of the study. Solution: use incentives or follow-up procedures to reduce attrition

A

attrition threat

25
Q

the pretest may affect the posttest scores. Solution: use a control group or alternate forms of the test

A

testing threat

26
Q

changes in the way the dependent variable is measured over time. Solution: use consistent measurement tools and procedures

A

instrumentation threat

27
Q

the tendency of observers to see what they expect to see. Solution: use a masked (blind) or double-blind design

A

observer bias

28
Q

participants form an interpretation of the experiment’s purpose and unconsciously change their behavior to fit that interpretation. Solution: use a double-blind design

A

demand characteristics

29
Q

experimental results caused by expectations alone; any effect on behavior caused by the administration of an inert substance or condition, which the recipient assumes is an active agent.
Solution: use a double-blind, placebo-controlled study

A

placebo effect

30
Q

What are the reasons for null effects?

A
  1. sample size: studies with small samples may lack the power to detect real differences between groups or conditions (see the simulation sketch after this list)
  2. measurement error: the measures used in the study may be unreliable or not sensitive enough to detect small differences between groups or conditions
  3. confounding variables: other variables that were not controlled for in the study may be influencing the results, making it difficult to detect a true effect
  4. manipulation check failure: the manipulation used in the study may have failed to produce the intended effect
  5. floor or ceiling effects: the dependent variable may already be at its highest or lowest possible value, making it difficult to detect any further changes
  6. lack of construct validity: the intervention or manipulation used in the study may not have been a good representation of the construct being measured
  7. publication bias: studies with null effects may be less likely to be published, leading to an over-representation of studies with positive results
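
Item 1 (sample size) can be made concrete with a quick, illustrative simulation: when a real but modest group difference exists, small samples often produce a null effect, while larger samples usually detect it. This sketch uses an approximate two-sample t-test with made-up numbers; it is not part of the original card.

    import random
    import statistics

    def detects_effect(n, true_diff=0.5, crit=2.0):
        """Simulate one two-group study; return True if the group difference
        exceeds roughly two standard errors (an approximate two-sample t-test)."""
        control   = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(true_diff, 1) for _ in range(n)]
        se = (statistics.variance(control) / n + statistics.variance(treatment) / n) ** 0.5
        return abs(statistics.mean(treatment) - statistics.mean(control)) / se > crit

    random.seed(2)
    for n in (10, 50, 200):
        hits = sum(detects_effect(n) for _ in range(1000))
        print(f"n = {n:3d} per group -> real effect detected in {hits / 10:.0f}% of simulated studies")
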
31
Q

the study finds that the independent variable did not make a difference in the dependent variable; there is no effect of the independent variable (no significant covariance)

A

null effect

32
Q

an independent variable manipulated in the study

A

a factor

33
Q

a specific combination of levels of the different factors

A

a cell

34
Q

different conditions or values that the factor can take

A

levels
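
A tiny sketch (hypothetical factors and levels, not from the cards) tying the last three terms together: a 2 x 2 factorial design has two factors, each with two levels, producing four cells.

    from itertools import product

    # Hypothetical 2 x 2 factorial design: two factors, each with two levels.
    factors = {
        "caffeine": ["none", "200 mg"],      # factor 1 and its levels
        "sleep":    ["4 hours", "8 hours"],  # factor 2 and its levels
    }

    # Each cell is one specific combination of levels of the different factors.
    cells = list(product(*factors.values()))
    print(f"{len(cells)} cells:", cells)
    # -> 4 cells: ('none', '4 hours'), ('none', '8 hours'), ('200 mg', '4 hours'), ('200 mg', '8 hours')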

35
Q

an experimental design in which each participant is presented with all levels of the independent variable, requires fewer people

A

within groups design

36
Q

The overall effect of a single independent variable on the dependent variable, averaging over the levels of the other independent variable(s) in an experimental design

A

main effect

37
Q

the differing individual characteristics of participants in an experiment that may impact the results

A

participant variable

38
Q

the average of all participants on one level of the independent variable, ignoring the other independent variable

A

marginal mean
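
A worked sketch with invented cell means (assuming equal numbers of participants per cell): the marginal mean for one level of a factor is the average of its cell means, ignoring the other factor.

    import statistics

    # Hypothetical cell means (DV = test score) for a 2 x 2 design:
    # factor 1 = caffeine (none / 200 mg), factor 2 = sleep (4 h / 8 h).
    cell_means = {
        ("none",   "4 h"): 60, ("none",   "8 h"): 80,
        ("200 mg", "4 h"): 70, ("200 mg", "8 h"): 90,
    }

    # Marginal means for the caffeine factor, ignoring the sleep factor.
    for caffeine in ("none", "200 mg"):
        mm = statistics.mean(v for (c, _), v in cell_means.items() if c == caffeine)
        print(f"marginal mean, caffeine = {caffeine}: {mm}")
    # none -> 70, 200 mg -> 80: a 10-point main effect of caffeine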

39
Q

the simultaneous effect of two or more independent variables on at least one dependent variable in which their joint effect is significantly greater (or significantly less) than the sum of the parts.

A

interaction effect
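
Continuing the invented 2 x 2 numbers from the marginal-mean sketch above: an interaction is a "difference in differences", so the joint effect can be checked by comparing the simple effect of one factor at each level of the other. This is an illustrative sketch, not the original card's content.

    def interaction(cells):
        """Difference in differences for a 2 x 2 table of cell means:
        (effect of sleep with caffeine) minus (effect of sleep without caffeine)."""
        return (cells["200 mg", "8 h"] - cells["200 mg", "4 h"]) - \
               (cells["none",   "8 h"] - cells["none",   "4 h"])

    no_interaction   = {("none", "4 h"): 60, ("none", "8 h"): 80,
                        ("200 mg", "4 h"): 70, ("200 mg", "8 h"): 90}
    with_interaction = {**no_interaction, ("200 mg", "8 h"): 70}  # caffeine cancels the sleep benefit

    print(interaction(no_interaction))    # 0   -> parallel lines, no interaction
    print(interaction(with_interaction))  # -20 -> the effect of sleep depends on caffeine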

40
Q

a factorial design with at least one between-subjects factor and one within-subjects factor

A

mixed factorial design

41
Q

Research study conducted by a branch of the U.S. government, lasting for roughly 50 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis were deliberately left untreated, without their knowledge, to learn about the lifetime course of the disease.

A

Tuskegee syphilis study

42
Q

The results showed that those in the “negative” group tended to then post negative stories themselves, and those in the “positive” group tended to post positive stories

A

Kramer et al.’s Facebook study

43
Q

Principle of respect for persons: that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection
Principle of beneficence: Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being

A

Belmont report

44
Q

advocates fair treatment for all and a fair distribution of the risks and benefits of the research

A

principle of justice

45
Q

a committee at each institution where research is conducted to review every experiment for ethics and methodology

A

Institutional Review Board/IRB

46
Q

an ethical principle that research participants be told enough to enable them to choose whether they wish to participate

A

Informed consent

47
Q

misleading participants about the true purpose of a study or the events that will actually transpire

A

deception

48
Q

the post-experimental explanation of a study, including its purpose and any deceptions, to its participants

A

debriefing

49
Q

repeating the same experiment under the same conditions

A
Direct replication studies
50
Q

testing the same theory using different experimental procedures

A
Conceptual replication studies
51
Q

A replication study in which researchers replicate their original study but add variables or conditions that test additional questions. See also conceptual replication, direct replication.

A

Replication-plus-extension studies

52
Q

A way of mathematically averaging the effect sizes of all the studies that have tested the same variables to see what conclusion that whole body of evidence supports.

A

meta-analysis
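
A minimal sketch (made-up effect sizes) of the "mathematically averaging the effect sizes" idea: a simple fixed-effect meta-analysis weights each study's effect size by the inverse of its variance, so larger, more precise studies count more toward the pooled estimate.

    # Hypothetical studies: (effect size d, variance of that estimate).
    studies = [(0.40, 0.04), (0.15, 0.02), (0.55, 0.10), (0.30, 0.01)]

    weights = [1 / var for _, var in studies]                      # inverse-variance weights
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    print(f"pooled effect size: {pooled:.2f}")                     # ~0.29 for these numbers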

53
Q

The intent of researchers to generalize the findings from the samples and procedures in their study to other populations or contexts; in this mode, external validity is essential.

A

Generalization mode

54
Q
A researcher’s intent for a study: testing association claims or causal claims in order to investigate support for a theory; in this mode, external validity is less important than internal validity and may not be important at all.
A

Theory-testing mode

55
Q

A subdiscipline of psychology concerned with how cultural settings shape a person’s thoughts, feelings, and behavior, and how these in turn shape cultural settings.

A

cultural psychology

56
Q
Reevaluate two common assumptions: that important studies use diverse, random samples, and that important studies take place in real-world settings.
A

Assumption of diverse, random samples: The assumption that important studies use diverse, random samples may not always hold true. While random sampling is considered the gold standard for selecting participants in research studies, there may be practical or ethical limitations that prevent researchers from using truly random samples. For example, certain populations may be difficult to access or may be excluded from studies due to safety concerns. Additionally, studies that use convenience samples, such as college students, may be more prone to bias and may not be representative of the general population. Therefore, it is important for researchers to carefully consider the potential limitations of their sampling methods and to report the characteristics of their participants in a transparent and detailed manner.
Assumption of real-world settings: The assumption that important studies take place in real-world settings may also not always be accurate. While studies conducted in naturalistic settings may have high ecological validity, they may also be more difficult to control and replicate. On the other hand, studies conducted in laboratory settings may have more control over extraneous variables, but may lack ecological validity. It is important for researchers to carefully consider the trade-offs between ecological validity and control when designing their studies and to clearly report the setting in which the study took place.

57
Q

Why is the inclusion of a comparison group important for an experiment? Relate the concept of a comparison group to the discussion of why research is better than experience (Ch. 2) and to the following threats to internal validity: maturation, history, regression to the mean, and testing

A

In the context of the discussion of why research is better than experience (Chapter 2), the inclusion of a comparison group allows researchers to make causal inferences about the relationship between the independent variable(s) and the dependent variable(s). By comparing the outcomes of the experimental and control groups, researchers can determine whether any changes in the dependent variable(s) are due to the independent variable(s) or to other factors.
The inclusion of a comparison group also relates to several threats to internal validity. For example:
Maturation: In a longitudinal study, participants may naturally change or mature over time, regardless of the intervention. By comparing the outcomes of the experimental and control groups, researchers can determine whether any changes in the dependent variable(s) are due to maturation or to the independent variable(s).
History: Events outside of the study, such as societal or environmental changes, may also affect the outcome of the study. By comparing the outcomes of the experimental and control groups, researchers can determine whether any changes in the dependent variable(s) are due to history or to the independent variable(s).
Regression to the mean: Extreme scores on a measure may naturally regress towards the mean over time, regardless of the intervention. By comparing the outcomes of the experimental and control groups, researchers can determine whether any changes in the dependent variable(s) are due to regression to the mean or to the independent variable(s).
Testing: Repeated testing may also affect the outcome of a study, as participants may become more familiar with the test and may perform better over time. By comparing the outcomes of the experimental and control groups, researchers can determine whether any changes in the dependent variable(s) are due to testing or to the independent variable(s)

58
Q

Why are research-based conclusions better than beliefs based on intuition? What are some common ways in which intuition can be biased? (Ch. 2)

A

Research-based conclusions are better than beliefs based on intuition because they are grounded in empirical evidence that has been systematically collected, analyzed, and interpreted using scientific methods. Intuition, on the other hand, is often based on personal beliefs, assumptions, and biases, which can be unreliable and inaccurate.
Some common ways in which intuition can be biased include:
Confirmation bias: The tendency to seek out information that confirms our pre-existing beliefs and to ignore information that contradicts them.
Availability bias: The tendency to rely on readily available information, such as vivid or memorable examples, rather than considering all available evidence.
Hindsight bias: The tendency to believe that we would have predicted an outcome after it has occurred, leading us to overestimate the accuracy of our intuitions.
Anchoring bias: The tendency to be overly influenced by initial information or estimates, even if they are irrelevant or inaccurate.
Overconfidence bias: The tendency to be overly confident in our own abilities and judgments, leading us to overestimate our accuracy and underestimate the likelihood of error.
Representativeness bias: The tendency to rely on stereotypes or prototypical examples when making judgments or decisions, rather than considering all available evidence.

Research-based conclusions, on the other hand, are based on systematic methods that are designed to minimize bias and increase the validity and reliability of the findings. By using rigorous research methods, researchers can reduce the impact of individual biases and increase the generalizability and replicability of the results. Therefore, research-based conclusions are generally more accurate, reliable, and generalizable than beliefs based on intuition.

59
Q

Give an example of each of the three main types of claims (frequency, association, and causal). How would you interrogate each of the big four validities (construct, statistical, external, internal) for each of these claims? Which validities are most important for each type of claim? (Ch. 3 and Ch. 6-11)

A

Frequency Claim: The majority of college students in the United States experience symptoms of stress. Interrogating the big four validities:
Construct validity: How was stress defined and measured? Was it measured reliably and validly?
Statistical validity: How large was the sample size and how representative was it of the population of interest? Was the sampling method appropriate? Was the statistical analysis used appropriate?
External validity: Can the results be generalized to other populations or contexts?
Internal validity: Could there be alternative explanations for the observed frequency of stress symptoms among college students? Were there any confounding variables that could have influenced the results?
For a frequency claim, construct validity and external validity are the most important validities to interrogate.

Association Claim: There is a positive association between exercise and mental health. Interrogating the big four validities:
Construct validity: How were exercise and mental health defined and measured? Were the measures reliable and valid?
Statistical validity: How strong is the association? Was the statistical analysis appropriate? Were potential confounding variables controlled for in the analysis?
External validity: Can the results be generalized to other populations or contexts?
Internal validity: Is it possible that the observed association between exercise and mental health is due to other factors? Could there be reverse causality (e.g., does exercise improve mental health or does good mental health lead to increased exercise)?
For an association claim, construct validity and statistical validity are the most important validities to interrogate.

Causal Claim: A mindfulness-based intervention causes a reduction in symptoms of anxiety. Interrogating the big four validities:
Construct validity: How were mindfulness and anxiety defined and measured? Were the measures reliable and valid?
Statistical validity: Was there a significant difference between the intervention and control groups? Was the statistical analysis used appropriate? Was the sample size large enough to detect a significant effect?
External validity: Can the results be generalized to other populations or contexts?
Internal validity: Is it possible that the observed reduction in symptoms of anxiety was due to other factors? Were potential confounding variables controlled for in the study design and analysis?
For a causal claim, all four validities are important to interrogate, with internal validity being the most important. It is essential to determine whether the observed effect is actually due to the intervention or whether it could be explained by alternative explanations such as confounding variables or selection bias.

60
Q

Describe procedures that are in place to protect human participants in research.

A

Informed Consent: Participants must be informed about the purpose and nature of the research, including any potential risks and benefits. They must also give their voluntary and informed consent to participate in the study.
Institutional Review Boards (IRBs): Most universities and research institutions have IRBs that review research proposals involving human participants. The IRB ensures that the study is ethical and that the risks are minimized and acceptable compared to the potential benefits.
Confidentiality and Anonymity: Participants’ personal information and data collected during the study must be kept confidential and anonymous.
Debriefing: Participants must be given a debriefing after the study is completed to inform them of the study’s purpose and to address any concerns they may have.
Right to Withdraw: Participants have the right to withdraw from the study at any time without penalty.
Minimization of Harm: Researchers must minimize the risks to participants and ensure their safety throughout the study.
Special Populations: Special procedures must be followed when working with vulnerable populations such as children, pregnant women, and individuals with disabilities.

61
Q

Compare and contrast reliability and validity. Explain how measurement reliability and measurement validity are both important for establishing a measure’s construct validity.

A

Reliability and validity are two important concepts in research that are used to determine the quality of a measure or assessment tool.
Reliability refers to the consistency of a measure or assessment tool. In other words, if a measure is reliable, it should produce consistent results when administered multiple times to the same group of individuals. There are different types of reliability, including test-retest reliability, inter-rater reliability, and internal consistency reliability.
Validity, on the other hand, refers to the accuracy of a measure or assessment tool. If a measure is valid, it should accurately measure what it is intended to measure. There are different types of validity, including content validity, criterion validity, and construct validity.
Measurement reliability and measurement validity are both important for establishing a measure’s construct validity. Construct validity refers to the extent to which a measure assesses the construct it is intended to measure. If a measure is not reliable, it is not possible to obtain consistent results, which can make it difficult to establish construct validity. Similarly, if a measure is not valid, it cannot accurately measure the construct it is intended to measure, which also makes it difficult to establish construct validity.
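
A small sketch (invented scores) of one reliability check named above, test-retest reliability: correlate the same people's scores from two administrations of the measure. A high correlation indicates consistency, though a consistent measure can still lack validity.

    import statistics

    time1 = [12, 18, 25, 31, 40, 22, 15, 28]  # hypothetical scores, first administration
    time2 = [14, 17, 27, 30, 38, 24, 13, 29]  # same people, second administration

    # Pearson correlation as a test-retest reliability estimate (statistics.correlation needs Python 3.10+).
    r = statistics.correlation(time1, time2)
    print(f"test-retest reliability r = {r:.2f}")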

62
Q

Describe how carefully worded questions can improve the construct validity of a survey.

A

Carefully worded questions are important for improving the construct validity of a survey because they help ensure that the survey questions are measuring what they are intended to measure. Construct validity refers to the degree to which a survey question measures the intended construct or concept, such as attitudes, beliefs, behaviors, or preferences.
Carefully worded questions can help improve the construct validity of a survey in several ways:
Clarity: Carefully worded questions are clear and unambiguous, ensuring that respondents understand what they are being asked. Clear questions help reduce confusion and increase the accuracy of the responses.
Precision: Carefully worded questions are precise and specific, avoiding vague or overly broad language. Specific questions help ensure that respondents are providing accurate and relevant information.
Unbiased: Carefully worded questions are unbiased and neutral, avoiding leading or loaded language that could influence respondents’ answers. Unbiased questions help ensure that responses are not influenced by the wording of the questions.
Consistency: Carefully worded questions use consistent terminology and language throughout the survey, reducing the risk of confusion or inconsistency in responses.
Relevance: Carefully worded questions are relevant to the construct being measured, ensuring that the survey is measuring the intended construct and not something else.

63
Q

How are observer bias, demand characteristics, and placebo effects similar? How are they different? What are some ways that researchers can avoid them? (Ch. 6 and 11)

A

Observer bias, demand characteristics, and placebo effects are all forms of bias that can impact the validity of research findings. While they are similar in that they can all lead to inaccurate or misleading results, they differ in their underlying mechanisms.
Observer bias refers to the tendency of researchers or observers to see what they expect or want to see, rather than what is actually present. This can lead to inaccurate or biased observations and interpretations of data.
Demand characteristics refer to cues or subtle hints that may influence participants to respond in a particular way, often unconsciously. For example, if participants are led to believe that certain responses are more socially desirable, they may be more likely to provide those responses.
Placebo effects refer to the psychological and physiological effects that can result from a participant’s expectation or belief that they are receiving a particular treatment or intervention, even if that treatment has no active ingredient or effect.
To avoid these biases, researchers can take several steps. One way to avoid observer bias is to use standardized procedures and measurements that are applied consistently across all participants and conditions. Researchers can also use blind or double-blind procedures, where neither the participant nor the researcher knows which condition or treatment the participant is receiving, to reduce bias.
To avoid demand characteristics, researchers can use deception or disguise to conceal the true purpose of the study, or use a placebo or control group to minimize the impact of expectations on participants’ responses.
To avoid placebo effects, researchers can use a placebo or control group to provide a baseline against which to compare the effects of a treatment or intervention. Researchers can also use objective measures, such as physiological or behavioral outcomes, rather than relying solely on self-reported measures or subjective evaluations.
In summary, while observer bias, demand characteristics, and placebo effects are similar in their potential to bias research findings, they differ in their underlying mechanisms. Researchers can take several steps to avoid these biases, such as using standardized procedures, blind or double-blind procedures, deception, and objective measures.

64
Q

What is random sampling, and how is it different from random assignment? What type(s) of validity do these techniques address (Ch. 7)?

A

Random sampling is a statistical technique used to select a representative sample from a larger population, where each individual in the population has an equal chance of being selected for the sample. The purpose of random sampling is to reduce bias and increase the generalizability of research findings to the entire population.
Random assignment, on the other hand, is a technique used in experimental research to assign participants to different treatment groups or conditions randomly. Random assignment helps ensure that each participant has an equal chance of being assigned to any given group, which helps control for potential confounding variables and increases the internal validity of the study.
Random sampling primarily addresses external validity, which refers to the extent to which research findings can be generalized to the population from which the sample was drawn. Random assignment, on the other hand, primarily addresses internal validity, which refers to the extent to which a study can establish causal relationships between variables by controlling for extraneous variables that could affect the outcome.
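
A short sketch (hypothetical participant names) making the contrast concrete: drawing the sample at random from the population supports external validity, while randomly assigning the sampled participants to conditions supports internal validity.

    import random

    random.seed(3)
    population = [f"person_{i}" for i in range(1000)]  # the population of interest

    # Random sampling: every member of the population has an equal chance of being selected.
    sample = random.sample(population, 20)

    # Random assignment: every sampled participant has an equal chance of
    # ending up in either condition.
    random.shuffle(sample)
    treatment, control = sample[:10], sample[10:]
    print(len(treatment), "in treatment,", len(control), "in control")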

65
Q

What are the three criteria for a causal claim (Ch. 3)? Why are simple bivariate correlations insufficient for establishing causation (Ch. 8-9)? How does an experiment establish all three causal criteria? (Ch. 10)

A

The three criteria for a causal claim are:
Covariation: There must be a relationship between the presumed cause and effect.
Temporal precedence: The presumed cause must come before the presumed effect in time.
Elimination of alternative explanations: There should be no plausible alternative explanations for the relationship observed between the presumed cause and effect.
Simple bivariate correlations are insufficient for establishing causation because they do not establish temporal precedence or eliminate alternative explanations. A correlation only shows that two variables covary, but it cannot determine which variable caused the other or whether a third variable is responsible for the observed relationship. Therefore, additional research using experimental designs or other methods is needed to establish causality.

An experiment can establish all three causal criteria: manipulating the independent variable and then measuring the dependent variable ensures temporal precedence; comparing outcomes across conditions shows whether the variables covary; and random assignment, together with control of other variables that could affect the outcome, helps eliminate alternative explanations. This is why experiments provide strong evidence for causality. However, not all research questions can be addressed using experimental designs, and other research methods may be necessary to establish causality in some cases.