Experimental Exam #1 Flashcards

(91 cards)

1
Q

Types of data in studies

A

1) experiments
2) quasi-experiments
3) correlational studies
4) observational studies
5) surveys

2
Q

Basic research

A

Fundamental questions of behavior (usually human behavior)
Ex: what is the capacity of memory?

3
Q

Applied research

A

Finding solutions to problems; related to specific situations
Ex: a new treatment for depression
Often in a clinical setting

4
Q

Characteristics of scientists

A

1) Scientists are empiricists: they base their ideas and theories on testing and experience
2) Scientists test theories
3) Scientists tackle both basic and applied problems
4) Scientists make science public
5) Scientists talk to the world in popular media

5
Q

What is good evidence?

A

1) Should be peer reviewed → should catch flaws in results and study design
2) Want to isolate cause and effect → manipulate IV then measure DV
3) Rule out potential alternative explanations (confounding variables)
4) Can show you what would have happened (can make predictions about the future and evaluate their validity)

6
Q

How to ensure control

A

Groups are identical in every possible way except for the condition you are manipulating
- Same demographics: age, education status, gender, socioeconomic status

7
Q

Theory definition

A

systematic body of ideas about a particular topic/phenomenon

8
Q

Qualities of a theory

A

Describes the relationship among variables
Organizes / summarizes knowledge or findings
Describes, explains or predicts behavior
Supported by data
Falsifiable: a principle or theory can only be considered scientific if it is even possible to establish it as false
Parsimonious (Occam's razor): all else being equal, we want the simplest explanation

9
Q

How do we know what we know?

A

1) based on experience
2) using intuition
3) trusting authority figures

10
Q

Problems with basing science on experience

A

1) Not probabilistic → only one data point
2) No comparison group: you need a control/placebo group to have something to compare your results to
3) Has confounds
4) Not systematic: need to hold everything constant and change one thing

11
Q

Confound definition

A

Confounds: a plausible alternative explanation for the finding → when a second variable varies systematically along with the IV and provides an alternative explanation for the results

12
Q

Problems with using intuition

A

1) Often uses colloquial/trendy phrases (ex: “toxins”)
2) Sometimes intuitions are inconsistent
3) Sometimes intuitions only describe the past
4) Intuitions can lead us astray
5) Biases (three kinds: availability, present/present, confirmation)

13
Q

Availability bias

A
  • things that come to mind easily can bias our thinking
    Ex: recent or vivid memories (often from the news)
14
Q

Present/present bias

A
  • Examples that are easier to call to mind are more “available” and guide our thinking
  • Very similar to availability bias, but more specifically deals with the fact that we often fail to look at absences
  • Can come from family and friends, stories, the culture at large
15
Q

Confirmation bias

A
  • When people are asked to test a hypothesis, they gather evidence that supports their previous thoughts
  • Can be conscious or unconscious
  • Collaboration can help overcome this
16
Q

Problems with trusting authority figures

A

Ex: Jenny McCarthy went on Oprah and said vaccines caused autism in her son → gave a boost to the anti-vax community
There was no evidence, and her son didn’t even have autism
Ex: Dr. Oz and Dr. Phil → often used their status to sell products (unethical)
Dr. Oz: did have credentials but was not using/applying them in an ethical way

17
Q

Empirical articles

A

A study published for the first time

18
Q

Review articles

A

summarize and integrate all the published studies that have been done in one research area

19
Q

Components of a scientific paper

A

1) Title
2) Abstracts
3) Introduction
4) Methods
5) Results
6) Discussion
7) References

20
Q

Title

A

should tell you the main idea of the article

21
Q

Abstracts

A

Brief summary of the article’s content (summary paragraph)

22
Q

Introduction

A

Introduces the problem and explains why it is important
Mostly describes what other researchers have found and explains why their research is relevant to yours → the last paragraph usually introduces the method you used, your variables, and your hypothesis

23
Q

Methods

A

Explains how you conducted your study
Usually so detailed that someone else could conduct a direct replication of the study
Includes: participants, materials, procedures

24
Q

Results

A

Presents the study’s numerical results, including any statistical tests, tables, or figures

25
Q

Discussion

A

Summarizes the results of the study and describes how they relate to your hypothesis or research question
Evaluates the study, advocating for its strengths and defending its weaknesses
Suggests what the next step might be for the theory-data cycle

26
Q

Measured variable

A

Something that is observed or recorded
Ex: height, weight

27
Q

Manipulated variable

A

A variable the experimenter controls
Ex: depression: you cannot force someone to be depressed, but you can manipulate treatment by including a control group

28
Q

Conceptual variables

A

Also called constructs: abstract, general, theoretical
Ex: depression, pain

29
Q

Operational definitions

A

A specific way to measure something abstract
Not directly measuring the construct (it is theoretical), but helps us tap into it

30
Q

Operationalizing

A

Taking something invisible and turning it into something we can measure

31
Q

Types of measures

A

1) Self-report
2) Observational/behavioral: spatial reasoning tasks, working memory tasks
3) Physiological: fMRI BOLD signal
Physiological measures are not inherently better than behavioral measures

32
Q

Self-report pros and cons

A

Pros:
- You know yourself best
Cons:
- Can be influenced by emotional/physical states (biased) and life events
- Might play up symptoms for the experimenter (experimenter effects)
- Might not be accurate in certain groups (dementia, kids)

33
Q

Pros and cons of observational/behavioral measures

A

Pros:
- Participants can’t play up symptoms
Cons:
- Requires lots of resources: technology, software, instructions

34
Q

Scales of measurement

A

1) Nominal
2) Ordinal
3) Interval
4) Ratio

35
Q

Nominal variables

A

"Named variables"
- Categories; not continuous; cannot be added or subtracted

36
Q

Ordinal variables

A

"Rankings"
- Inherent order to the values
- Spacing between values is not necessarily the same

37
Q

Interval variables

A

"Equally spaced numbers"
- Ex: temperature in Fahrenheit → 0 doesn’t mean anything
- Equally spaced
- Continuous

38
Q

Ratio variables

A

"Has a meaningful zero"
- Zero is the absence of something
- Continuous

39
Q

Types of claims

A

1) Frequency claims
2) Association claims
3) Causal claims

40
Q

Frequency claims

A

- One variable that is measured
- Usually in percentages (surveys)

41
Q

Association claims

A

- Two variables are linked, both measured
- “Correlation”

42
Q

Causal claims

A

- One variable causes change in the other
- One must be manipulated

43
Q

Things you need to make a causal claim

A

1) Covariance: as A changes, B also needs to change (can be in the same or a different direction)
2) Temporal precedence: have to verify the order in which the variables come; if A causes B, A has to happen earlier in time than B
NOT bidirectional → correlations can be bidirectional, but causal claims cannot be
3) Internal validity: the study’s method ensures that there are no plausible alternative explanations for the change in B; A is the only thing that is changed

44
Q

4 types of validity

A

1) Construct validity
2) External validity
3) Internal validity
4) Statistical validity

45
Q

Construct validity

A

"Quality of measures and manipulations"
- How did you measure your DV?
- How well did you operationalize the construct?
- How reliable are your measurements?
**How well did you measure what you claim to measure**

46
Q

Reliability

A

- Refers to consistency: if you are given a personality measure, the result should be the same next week
- Reliability is necessary but not sufficient for validity

47
Q

External validity

A

"Does it generalize to other situations/populations"
- Generalize: how do we extrapolate our findings to other settings, entities, groups, operationalizations of the same construct (ex: depression inventories), etc.
- May not be the goal of the study: if studying a rare disease, you may not care if it generalizes; it can still be a valid claim

48
Q

Internal validity

A

"No alternative causal explanation for the outcome"
- Was the study free of confounds?
- Was there random assignment?
- Were there controls and/or counterbalancing?
- Was there no difference between the conditions other than the IV?

49
Q

Statistical validity

A

"Appropriate and reasonable statistical conclusions"
How well do the numbers support the claim?
- P-value
- Effect size
- Well-powered
Do you believe the numbers, or do you think the stats are lying to you?

50
Q

Control group

A

Closest to a null condition
- The neutral or no-treatment level of the IV

51
Q

Comparison group

A

Not a null condition, just something different

52
Q

Experimental group

A

The group exposed to a manipulation

53
Q

Placebo conditions

A

When the control group is exposed to an inert treatment (no active ingredient)

54
Q

Treatment conditions

A

The non-neutral level or levels of the IV

55
Q

Prioritizing external and internal validity

A

Doing experiments in a lab allows you to control for confounds; BUT a lab is not the real world, so how do you know the results will generalize?
Often: run the study in a controlled environment first, then in a less controlled environment, and see if the results hold

56
Q

Systematic variability

A

When confounds trend together → threatens internal validity

57
Q

Unsystematic variability

A

Random/haphazard trends that affect both groups → NOT a confound
Ex: a random number of people from both groups drop out of the study

58
Q

Individual differences study

A

What makes us different → where is our variance?
- Hard to make causal claims when there are individual differences
- Don’t use an experiment → use a correlational design; harness the variability

59
Q

Experiments (in relation to individual differences)

A

When we break people into groups we are treating them as one unit
1) Should be hard to detect differences **within** the groups
2) Should be easy to detect differences **across** the groups (due to the manipulation)

60
Q

Selection effects

A

When the kinds of participants in one group are systematically different from those in another group

61
Q

How can we combat selection effects?

A

Random assignment: a way of assigning participants to levels of the IV such that each participant has an equal chance of being in each group
All participants should be equivalent on all important dimensions (age, education, race, income, sex, etc.) → called matching groups 1 and 2
**Can use a t-test to ensure this

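The random-assignment idea above can be sketched in a few lines of Python; the participant IDs and the two-group split are made up for illustration:

```python
import random

def randomly_assign(participants, n_groups=2, seed=None):
    """Shuffle the participant list, then deal people round-robin into
    n_groups, so everyone has an equal chance of landing in each group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Usage: 8 hypothetical participants split into two equal groups
control, treatment = randomly_assign(range(1, 9), seed=42)
```

After assignment, the groups would still be checked for equivalence on important dimensions (the t-test mentioned above), since random assignment balances groups only in expectation.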
62
Q

Independent groups (between groups)

A

Each group has different participants

63
Q

Posttest design (+ pros and cons)

A

One measure at the end of the study
Pros:
- No attrition (participant dropout) → only gathering data once
- One-time snapshot
Cons:
- No baseline: cannot tell if something improved or got worse

64
Q

Pretest/posttest design

A

Test before and after the manipulation
Pros:
- You get a baseline
- Allows you to study change over time
Cons:
- Practice effects: participants get better over time solely because of practice, not the manipulation
- Fatigue effects: participants get worse over time due to fatigue, not the manipulation
- Attrition (dropouts): two sessions required
- Costs more: have to pay for two sessions

65
Q

Within groups (repeated measures)

A

All participants are exposed to all levels of the independent variable

66
Q

Repeated-measures design

A

Participants respond to a dependent variable at least twice → after exposure to each level of the independent variable
Pros:
- No assignment issues or selection effects (people aren’t even assigned to groups)
- Less recruitment
- The group is its own control (fewer participants needed for a study)
- Increases statistical power
Cons:
- Change in behavior → participants might figure out what the experiment is looking for
- Includes practice effects and fatigue effects
- Carryover effects: the first trial affects later trials

67
Q

Concurrent-measures design

A

Participants are exposed to all levels of the IV at the same time

68
Q

Order effects

A

"Exposure to one condition changes participant responses to a later condition"
1) Item effects
2) Carryover effects

69
Q

Item effects

A

The previous item may give you information about, or influence your response to, the next one

70
Q

Carryover effects

A

Contamination carrying over from one condition to the next
Ex: you drink caffeine and take a test, then drink decaf and take a test → the caffeine is still in your system for the second test
1) Practice effects: participants get better at a task over time
2) Fatigue effects: participants get worse at a task over time

71
Q

Counterbalancing

A

Presenting the levels of the IV to participants in different sequences
Can help fix some order effects

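Full counterbalancing, as described above, can be sketched in Python; the three condition labels are hypothetical:

```python
from itertools import cycle, permutations

conditions = ["caffeine", "decaf", "placebo"]  # hypothetical IV levels

# Full counterbalancing: generate every possible presentation order
orders = list(permutations(conditions))  # 3! = 6 sequences

# Cycle participants through the orders so each sequence is used equally often
participants = [f"P{i}" for i in range(1, 7)]
schedule = {p: order for p, order in zip(participants, cycle(orders))}
```

With more conditions the number of orders grows factorially, which is why partial schemes such as Latin squares are often used instead of full counterbalancing.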
72
Q

12 threats to internal validity

A

1) Design confounds
2) Selection effects
3) Order effects
4) Maturation
5) History
6) Regression to the mean
7) Attrition/mortality
8) Testing effects
9) Instrumentation
10) Observer bias
11) Demand characteristics
12) Placebo effects

73
Q

History

A

Refers to any event that occurs between the beginning of treatment and the measurement of the outcome that might have produced the observed effects
*Nothing to be done → cannot fix this effect; you have to note it in the discussion of your research paper or start over

74
Q

Maturation

A

A change in behavior that emerges spontaneously over time
Changes in the organism that occur regardless of treatment might masquerade as treatment effects
*Usually seen in development

75
Q

Attrition

A

- Refers to who is dropping out of your study (or dying → that is mortality)
- Makes designs complicated: the N at pretest doesn’t necessarily match the N at posttest
- As long as attrition happens at random it is OK → if all the dropouts come from one group and not the other, this is a confound

76
Q

Regression to the mean

A

When extreme scores become less extreme over time
Example: you run a study and find a HUGE effect size → if you run it again, the extreme scores you saw previously will likely regress toward the mean
*Relevant for replication studies

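Regression to the mean falls out of any measurement that mixes a stable true score with random noise; a toy simulation (all numbers made up, seeded so it is reproducible):

```python
import random

rng = random.Random(0)

# Each observed score = stable true score + fresh random noise
true_scores = [rng.gauss(100, 10) for _ in range(1000)]

def measure():
    return [t + rng.gauss(0, 10) for t in true_scores]

first = measure()

# Pick the 50 most extreme scorers on the first measurement...
top = sorted(range(1000), key=lambda i: first[i], reverse=True)[:50]

# ...then re-measure everyone: the noise re-rolls, so the top group's
# mean falls back toward the population mean of 100
second = measure()
mean_first = sum(first[i] for i in top) / 50
mean_second = sum(second[i] for i in top) / 50
```

The top scorers were selected partly for high true scores and partly for lucky noise; only the true-score part survives the second measurement, which is exactly the card's point about huge effect sizes shrinking on replication.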
77
Q

Testing effects

A

- A change in participants as a result of experiencing the DV more than once
- Example: practice and fatigue effects

78
Q

Instrumentation

A

- Example: Goodhart’s law → when a measure becomes the target, it ceases to be a good measure
- A factory asked workers to make a certain number of nails → workers made really tiny nails to hit a big number → asked to go by weight instead to avoid this → workers made three really heavy nails
- Example: scales that are designed for one population

79
Q

Observer bias

A

Performance/behavior might change because the participant is being watched by the researcher

80
Q

Demand characteristics

A

Participants guess what the study is supposed to be about and then change their behavior in the expected direction
- Participants may also be bad at self-report measures → you may not realize you are angry because you are hungry, so you just report anger even though that’s not the root of the problem

81
Q

Placebo effects

A

- When people are not getting the treatment but improve anyway
- Can be a limitation of the study

82
Q

Why might there be a null effect?

A

1) Weak manipulations
2) No variability/variance
3) Individual differences
4) Measurement error
5) Insufficient statistical power

83
Q

Ceiling effects

A

Scores are all on the high end

84
Q

Floor effects

A

Scores are all on the low end

85
Q

Statistical power

A

The ability to detect an effect if one is there

86
Q

Ways to increase power

A

Increase sample size (N)
- When you are underpowered, you can only detect an effect if it is massive → that’s the only thing you can detect

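The link between N and power can be seen in a toy simulation; everything here is an assumption for illustration (a true effect of 0.5 SD, a simple z-test with known variance, 500 simulated studies per sample size):

```python
import random
import statistics

def power(n, effect=0.5, sims=500, seed=1):
    """Fraction of simulated two-group studies (true group difference =
    `effect` SDs) in which a z-test on the means reaches |z| > 1.96."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(effect, 1) for _ in range(n)]
        se = (2 / n) ** 0.5  # standard error of the mean difference (sigma = 1)
        z = (statistics.mean(b) - statistics.mean(a)) / se
        hits += abs(z) > 1.96
    return hits / sims

small, large = power(10), power(100)  # power grows sharply with N
```

With 10 participants per group this medium-sized effect is usually missed; with 100 per group it is detected the vast majority of the time, which is the card's point about underpowered studies only catching massive effects.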
87
Q

P-value definition

A

The probability of getting our data, or something more extreme, IF the null hypothesis is true

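A worked example of that definition, using an assumed toy experiment: flip a coin 10 times and observe 8 heads; under the null hypothesis of a fair coin, the two-sided p-value is the probability of a result at least that extreme (2 or fewer, or 8 or more, heads):

```python
from math import comb

def two_sided_p(heads, flips=10):
    """P(result at least as extreme as `heads`) under a fair-coin null."""
    k = max(heads, flips - heads)  # distance of the result from the middle
    one_tail = sum(comb(flips, i) for i in range(k, flips + 1)) / 2 ** flips
    return 2 * one_tail            # the fair-coin distribution is symmetric

p = two_sided_p(8)  # (45 + 10 + 1) * 2 / 1024 = 0.109375
```

Since p is above the conventional 0.05 cutoff, 8 heads in 10 flips would not be enough to reject the fair-coin null.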
88
Q

Linear regression

A

Y = b0 + b1X + e
b0 = intercept
b1 = coefficient (slope)
X = IV
Y = DV
e = error

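The coefficients in the equation above can be estimated by ordinary least squares; a minimal sketch with made-up, noise-free data so the fit recovers the true line exactly:

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for Y = b0 + b1*X:
    b1 = cov(X, Y) / var(X), b0 = mean(Y) - b1 * mean(X)."""
    mx, my = mean(xs), mean(ys)
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Toy data generated from Y = 2 + 3X with e = 0
xs = [0, 1, 2, 3, 4]
ys = [2 + 3 * x for x in xs]
b0, b1 = fit_line(xs, ys)  # recovers b0 = 2, b1 = 3
```

With real data, e is nonzero and the fitted b0 and b1 are estimates rather than exact recoveries.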
89
Q

Interactions

A

Test whether the effect of one IV depends on the level of the other IV

90
Q

ANOVA

A

Used when there are multiple IVs
If there are two IVs, the design is written ___ x ___, where each blank is the number of levels of that IV
Ex: first IV has 2 levels, second IV has 3 levels → a 2 x 3 ANOVA

91
Q

Main effects

A

The effect of each IV alone → ignoring/averaging across the other IV
The number of main effects equals the number of IVs
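Averaging across the other IV can be made concrete with a hypothetical 2 x 3 table of cell means (all numbers invented): each main effect comes from collapsing over the other factor's levels.

```python
# Hypothetical cell means for a 2 x 3 design:
# rows = 2 levels of IV1, columns = 3 levels of IV2
cells = [
    [10.0, 12.0, 14.0],  # IV1 level 1
    [20.0, 22.0, 24.0],  # IV1 level 2
]

# Main effect of IV1: average each row across IV2's levels
iv1_means = [sum(row) / len(row) for row in cells]

# Main effect of IV2: average each column across IV1's levels
iv2_means = [sum(col) / len(col) for col in zip(*cells)]

# Two IVs -> two main effects; here the row difference (22 - 12 = 10) is the
# same at every level of IV2, so this made-up table shows no interaction
```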