Quantitative Flashcards

1
Q

Goal of research

A

design studies carefully to make alternative interpretations implausible

Methods are about designing a study so that, if a particular finding is obtained, we can reach a conclusion

2
Q

Illusory Correlation

A

cognitive bias that occurs when we focus on two events that stand out and occur together, leading us to overestimate how strongly they are related

3
Q

How do we know things

A

feelings, intuition, authority (experts), and reasoning (logic; the conclusion is only as good as the assumptions, which have to be true)

4
Q

How do we know things part II

A

Empiricism: the idea that knowledge is based on observations
Science: empiricism plus reasoning

5
Q

Process of science

A

Hypothesis → new hypothesis → theory building → body of knowledge

6
Q

Goodstein's Evolved Theory of Science

A

1) Data play a central role
2) Scientists are not alone: observations are reported to other scientists and the public
3) Science is adversarial: ideas can be falsified or supported
4) Research is peer reviewed

7
Q

Tenets of Science

A

Empiricism
Replicability
Falsifiability: claims must be testable
Parsimony: prefer the simplest adequate account

8
Q

Hypothesis gains support

A

A hypothesis can gain support, but it can never be proved

9
Q

Extend literature

A

take idea further
remove confounds and improve generalizability

10
Q

Behavioral science goals

A

describe behavior, predict behavior, explain behavior, determine the causes of behavior

11
Q

Causation

A

Temporal precedence, covariation, and elimination of plausible alternative explanations

13
Q

Efficacy vs effectiveness

A

Efficacy: does the intervention produce the expected result under ideal circumstances?
Effectiveness: the degree of benefit under real-world clinical conditions

14
Q

Construct Validity

A

Adequacy of the operational definition

15
Q

Internal Validity

A

Ability to draw conclusions about causal relationships

Integrity of experiment

Ability to draw a causal link between IV and DV

16
Q

Mediating Variables:

A

psychological processes that mediate
the effects of the situational variable
on a particular response

17
Q

Construct vs. Variable

A

The construct is the abstract idea; the variable is what is actually used to measure or test it

18
Q

Operational Definitions

A

Set of defined and outlined procedures used to measure and manipulate variables

A variable must have operational definition to be studied empirically

Allows others to replicate!

19
Q

Construct validity

A

Adequacy of the operational definition of variables

Does the operational definition reflect the true theoretical meaning of the variable?

20
Q

Nonexperimental method

A

Variables are observed as they occur naturally
If they vary together, there is a relationship (correlation)

Reduction of internal validity

21
Q

Experimental Control

A

Extraneous variables are kept constant
Every feature of the environment is held constant except the manipulated variable

22
Q

Strong internal validity requires:

A

Temporal precedence
Covariation between the two variables
Elimination of plausible alternative explanations

23
Q

Issues When Choosing A Method

A

Often the higher the internal validity, the lower the external validity (generalization)
It is harder to generalize from a strictly controlled experimental environment

24
Reliability and validity of measurement
Not to be confused with the internal or external validity of a study. However, the reliability and validity of measurement affect a study's internal validity. Measured score = "true" score (the real score on the variable) + measurement error
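A minimal Python sketch of this measurement model (the scores, error spread, and variable names are invented for illustration):

```python
import random

random.seed(0)

# Hypothetical "true" scores on some variable for five participants.
true_scores = [10, 12, 15, 9, 14]

# Each measured score is the true score plus random measurement error.
measured = [t + random.gauss(0, 2) for t in true_scores]

for t, m in zip(true_scores, measured):
    print(f"true = {t:>2}, measured = {m:5.2f}, error = {m - t:+.2f}")
```

The smaller the error component, the more reliable the measure.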
25
Reliability of Measures
Consistency or stability of a measure of behavior. We expect measures to give us the same result each time; scores should not fluctuate much.
26
Test-retest reliability: same individuals measured at two points in time
Practice effects: participants are literally more practiced the second time. Maturation: subjects change simply because time has passed.
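A minimal sketch of computing test-retest reliability as the correlation between two administrations; the scores are fabricated and the standard-library correlation function requires Python 3.10+:

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Fabricated scores for the same six people at two points in time.
time1 = [20, 25, 30, 22, 28, 35]
time2 = [21, 24, 31, 20, 29, 34]

# Test-retest reliability: correlation between the two administrations.
r = correlation(time1, time2)
print(f"test-retest r = {r:.2f}")
```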
27
Alternate forms reliability
An individual takes two different forms of the same test, also at two different times.
28
Internal consistency reliability
Measures whether several items that purport to measure the same general construct produce similar scores. Assessed using responses at only one point in time. In general, the greater the number of questions, the higher the reliability. Three common measures of internal consistency: item-total correlation, split-half reliability, and Cronbach's alpha (α).
29
Item-total
Correlation between an individual item and the total score without that item. For example, on a 20-item test there would be 20 item-total correlations; for item 1, it would be the correlation between item 1 and the sum of the other 19 items. Helpful in identifying items to remove or in creating a short form.
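A minimal sketch of corrected item-total correlations, i.e., correlating each item with the sum of the remaining items; the small response matrix is fabricated and Python 3.10+ is assumed:

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Rows = respondents, columns = items (fabricated 1-5 ratings).
responses = [
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]

n_items = len(responses[0])
for i in range(n_items):
    item_scores = [row[i] for row in responses]
    # Total score *without* the item being examined (the "corrected" total).
    rest_scores = [sum(row) - row[i] for row in responses]
    r = correlation(item_scores, rest_scores)
    print(f"item {i + 1}: corrected item-total r = {r:.2f}")
```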
30
Split-half reliability
Correlation of the total score on one half of the test with the total score on the other half (items are divided randomly). Spearman-Brown split-half reliability coefficient. We want > .80 for adequate reliability; however, for exploratory research a cutoff as low as .60 is not uncommon.
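A minimal sketch of split-half reliability with the Spearman-Brown correction, r_SB = 2r / (1 + r); for simplicity the items are split into odd vs. even positions rather than truly at random, and the data are fabricated:

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Rows = respondents, columns = six items (fabricated ratings).
responses = [
    [4, 5, 4, 3, 4, 5],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 2, 2, 1],
]

# Split the items into two halves (here: even vs. odd positions).
half1 = [sum(row[0::2]) for row in responses]
half2 = [sum(row[1::2]) for row in responses]

r_half = correlation(half1, half2)
# Spearman-Brown correction estimates the reliability of the full-length test.
r_sb = (2 * r_half) / (1 + r_half)
print(f"half-to-half r = {r_half:.2f}, Spearman-Brown = {r_sb:.2f}")
```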
31
Cronbach’s alpha
How closely related a set of items are as a group; how well all the items "hold together." Simply put: the average of all possible split-half reliability coefficients. Expressed as a number between 0 and 1. Generally want > .80 (in practice, > .70 is considered acceptable). By far the most common measure you will see reported.
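A minimal sketch of Cronbach's alpha from its usual formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the response matrix is fabricated:

```python
from statistics import variance  # sample variance

# Rows = respondents, columns = k items (fabricated ratings).
responses = [
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]

k = len(responses[0])
item_vars = [variance([row[i] for row in responses]) for i in range(k)]
total_var = variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```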
32
Interrater reliability
Agreement between the observations of two different raters on a measure. Measured by Cohen's kappa; by convention, > .70 is considered acceptable.
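A minimal sketch of Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance; the two raters' codes are fabricated:

```python
from collections import Counter

# Fabricated categorical codes assigned by two raters to the same 10 cases.
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]

n = len(rater1)
# Observed agreement: proportion of cases the raters coded identically.
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement from each rater's marginal proportions.
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed = {p_o:.2f}, chance = {p_e:.2f}, kappa = {kappa:.2f}")
```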
33
Construct Validity
To what extent does the operational definition of a variable actually reflect the true theoretical meaning of the variable? Does the measure reflect the operational definition? Ex: depression (DSM-5 criteria; BDI symptoms).
34
Face validity
Content of the measure appears to reflect the construct being measured. Very subjective, and easy for a participant to "fake."
35
Content Validity
Extent to which a measure represents all facets of a given construct. Subject matter experts may be part of the process.
36
Criterion validity
How well one measure predicts an outcome on another measure (the criterion)
37
Criterion validity: predictive and concurrent
Predictive validity: scores on the measure predict behavior on a criterion measured at a future time. Ex: GRE -> grad school success. Concurrent validity: relationship between the measure and a criterion behavior measured at the same time.
38
Criterion- Convergent Validity
Scores on the measure are related to other measures of the same construct. I.e., if we test two measures that are supposed to measure the same construct and show that they are related, the scores "converge." Ex: BDI & CES-D.
39
Criterion- Discriminant Validity
Scores on the measure are not related to other measures that are theoretically different. I.e., if we test two measures that are not supposed to be related and show that they are in fact unrelated, the scores "discriminate" between constructs. Ex: narcissism and self-esteem.
40
Reactivity of Measures
Measurement is reactive if awareness of being measured changes an individual's behavior (e.g., self-monitoring, wearing a Fitbit). Hawthorne effect / observer effect: productivity changed with working conditions simply because workers were being observed.
41
Sensitivity
Ability of a test to correctly identify those with the condition (true positive rate). The proportion of people with the condition who will have a positive result.
42
Specificity
Ability of a test to correctly identify those without the condition (true negative rate). The proportion of people without the condition who will have a negative result.
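A minimal sketch computing both from fabricated counts against a gold standard (sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)):

```python
# Fabricated counts from comparing a screening test with a gold standard.
true_positive = 80   # has the condition, test positive
false_negative = 20  # has the condition, test negative
true_negative = 90   # no condition, test negative
false_positive = 10  # no condition, test positive

sensitivity = true_positive / (true_positive + false_negative)  # true positive rate
specificity = true_negative / (true_negative + false_positive)  # true negative rate

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```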
43
Nominal
Categories with no numeric scales
44
Ordinal
Rank ordering; the numeric values carry limited information (differences between ranks are not necessarily equal)
45
Interval
Numeric properties are literal; equal intervals between values are assumed
46
Ratio
Has a true zero point: zero indicates the absence of the variable measured
47
Types of Variables: Discrete/categorical
Consist of indivisible categories, i.e., you don't do math on them, only frequencies or percentages. Answer "how many" questions. Usually naturally occurring groups or categories (though not always).
48
Continuous/dimensional
Infinitely divisible into whatever units, e.g., time or weight. Scores provide information about the magnitude of differences between participants in the amount of some characteristic.
49
Converting variables
A continuous variable can be converted into a categorical one. Ex: level of anxiety symptoms (0-100) recoded into categories such as low/moderate/high.
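A minimal sketch of recoding a continuous score into categories; the cutoffs and labels are arbitrary examples, not established clinical thresholds:

```python
def anxiety_category(score: float) -> str:
    """Recode a 0-100 anxiety symptom score into an arbitrary category."""
    if score < 34:
        return "low"
    elif score < 67:
        return "moderate"
    return "high"

scores = [12, 45, 88, 60, 30]
print([anxiety_category(s) for s in scores])  # ['low', 'moderate', 'high', 'moderate', 'low']
```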
50
Controlling extraneous/confounding variables
Key to the experimental method: we want the least ambiguous interpretation of the results. Manipulate the IV and hold all other variables constant, either by experimental control or by randomization.
51
Confounding variable
Varies along with the independent variable, so we cannot determine which variable is responsible for the effect. Ex: exercise vs. video's effect on mood, run in a windowed vs. windowless room.
52
Internal validity
Reminder: internal validity is the ability to draw conclusions about causal relationships from the data. Results can be attributed to the effect of the independent variable (IV) and not to confounding variables. The more we control confounding variables, the more we strengthen internal validity.
53
Posttest-only design
Obtain two equivalent groups of participants. Introduce the independent variable. Measure the dependent variable. Attribute any group difference to the effect of the IV on the DV.
54
Selection Bias
When people selected into conditions differ in an important way. This is why we prefer to recruit all our participants first and then randomize them! Basically, anything other than randomization may introduce bias in some way. Even when we do randomize, we usually want to make sure the groups are equivalent on important variables.
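A minimal sketch of the recruit-first-then-randomize idea: simple random assignment of an already-recruited (fabricated) participant list into two groups:

```python
import random

random.seed(11)

# Fabricated list of participants recruited *before* any assignment happens.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split in half: simple random assignment to two conditions.
random.shuffle(participants)
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("treatment:", treatment)
print("control:  ", control)
```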
55
Pretest-posttest design
A pretest is given to each group. Helps ensure that the groups are equivalent at the beginning of the experiment.
56
Pros to adding a pre-test
Can look at change in the DV from pre to post. Can confirm that the randomization worked, i.e., that we started with "equivalent groups," by comparing the pretest measures between the groups; especially helpful with a small sample size. Sometimes used to select participants for the experiment, such as screening for people who score above a cutoff on depressive symptoms, who are then randomized to groups.
57
Pre-test and attrition
We may have started with equivalent groups, but then people start dropping out. Dropout may be random, but what if it is due to something non-random, i.e., systematic? For example, patients with "worse" symptoms dropped out. A pretest allows us to see whether those who dropped out differed in some way from those who remained in the study.
58
Cons to adding a pre-test
Time consuming. Might sensitize participants to what is being studied, affecting the way they react to the manipulation/intervention. Practice effects when the same pretest is used as the posttest (to measure the DV). Disguise the pretest if possible: use a different form or embed it in other questions.
59
Solomon 4-group design
A design that tests for effects of a pretest. Half of the participants receive the posttest only; the other half receive both the pretest and the posttest.
60
Between-subjects design/Independent groups design:
Two or more groups/conditions. Each participant takes part in only one group. Comparisons are made between different groups/conditions of participants.
61
Within-subjects design /Repeated measures design:
Participants experience all groups/conditions. Comparisons are made within the same group of participants.
62
Within types
Time: pre-post, i.e., how did the group do from pre to post? T1, T2, T3, T4 (pre, post, 6-month follow-up, 1-year follow-up). Condition: all participants go through all conditions. Ex: a taste-test challenge with conditions 1) Pepsi, 2) Coke, 3) RC Cola, where each participant tastes all three.
63
Repeated Measures Design: Pros
Fewer participants. Each participant serves as their own "control." Extremely sensitive to statistical differences (between-groups designs tend to have more random error). Using the same participant automatically controls for a large number of potential confounding factors, such as demographic/historical differences.
64
Repeated Measures Design: Cons
Order effects: the order of presenting the conditions/treatments affects the dependent variable. Practice (learning) effects: performance improves because of the practice gained from previous tasks, i.e., repeated measurement of the DV.
65
Counterbalancing
All possible orders of the intervention/condition are included. Can help with order and practice effects (depending on the measurement).
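A minimal sketch of full counterbalancing: with k conditions there are k! possible orders, and participants are cycled through them; the condition names reuse the taste-test example above:

```python
from itertools import permutations

conditions = ["Pepsi", "Coke", "RC Cola"]

# Full counterbalancing: all 3! = 6 possible orders of the conditions.
orders = list(permutations(conditions))

# Assign participants to orders in rotation.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
for i, p in enumerate(participants):
    print(p, "->", " then ".join(orders[i % len(orders)]))
```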
66
Repeated Measures Design: Cons (cont)
Fatigue effect: performance deteriorates because the participant becomes tired, bored, or distracted from previous tasks. Carryover effect: effects of the previous treatment carry over to influence the response to the next treatment.
67
Time Between Repeated Measures
A longer time interval between measures, sometimes called a "washout," helps with fatigue and potential carryover effects. However, longer intervals can lead to more attrition.
68
Matched Pair Design
A method of assigning participants to conditions based on a participant characteristic; the goal is to achieve the same equivalency of groups. Participants are grouped into pairs based on some variable they "match" on (e.g., age, gender, SES). Then, within each pair, participants are randomly assigned to different treatment groups.
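A minimal sketch of matched-pair assignment: sort participants on the matching variable, pair adjacent participants, then randomize within each pair (IDs and ages are fabricated):

```python
import random

random.seed(1)

# Fabricated participants with a matching variable (age).
participants = [("A", 21), ("B", 35), ("C", 22), ("D", 34), ("E", 50), ("F", 51)]

# Sort on the matching variable and pair adjacent participants.
ranked = sorted(participants, key=lambda p: p[1])
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

# Within each matched pair, randomly assign one member to each condition.
for first, second in pairs:
    treatment, control = random.sample([first, second], k=2)
    print(f"pair matched on age: treatment={treatment}, control={control}")
```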
69
Population
All possible individuals of interest
70
Sample
The group of people you actually study, drawn from the population
71
Reasoning for sampling
How you select your sample affects external validity. In general, the larger the sample, the closer it comes to estimating the population: sample size matters. Must consider the costs/benefits of increasing the sample size.
71
Inferential statistics
Inferences and predictions about a population, based on a sample of data taken from the population in question. Because a sample is typically only part of the whole population, sample data provide only limited information about the population. This is why sampling is so important!
72
Probability sampling
Utilizes some form of random selection. Ensures that each member of the population has a known, nonzero chance of being chosen (an equal chance in simple random sampling).
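A minimal sketch of simple random sampling, the most basic probability sampling method: every member of a (fabricated) population list has an equal chance of being selected:

```python
import random

random.seed(42)

# Fabricated sampling frame listing every member of the population of interest.
population = [f"student_{i}" for i in range(1, 501)]

# Simple random sample of 20, drawn without replacement.
sample = random.sample(population, k=20)
print(sample[:5], "...")
```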
73
Nonprobability sampling
Just about all other methods; also known as a "convenience sample." A sample that is not drawn randomly, usually consisting of participants who are readily available to the researcher.
74
Stratified Sampling
Used to ensure that the proportional representation of groups in the sample is the same as in the population. Example: a population of psychology grad students may be 88% female, 10% male, 2% non-binary. How would you sample?
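A minimal sketch of proportional stratified sampling using the card's 88% / 10% / 2% example; the population lists and sample size are fabricated:

```python
import random

random.seed(7)

# Fabricated sampling frames for each stratum, mirroring 88% / 10% / 2%.
strata = {
    "female":     [f"f_{i}" for i in range(880)],
    "male":       [f"m_{i}" for i in range(100)],
    "non-binary": [f"nb_{i}" for i in range(20)],
}

total_n = sum(len(members) for members in strata.values())
sample_size = 50

# Sample from each stratum in proportion to its share of the population.
for name, members in strata.items():
    n_stratum = round(sample_size * len(members) / total_n)
    picked = random.sample(members, k=n_stratum)
    print(f"{name}: {n_stratum} sampled, e.g., {picked[:3]}")
```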
75
Sampling bias
Systematic differences between the characteristics of a sample and the population. Leads to underrepresentation of many types of people and limits the potential generalizability of results.
76
Sampling Error
The discrepancy between a sample statistic and its population parameter
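A minimal simulation of sampling error, the gap between a sample mean and the population mean; the population is fabricated, and larger samples tend to land closer to the parameter (echoing the sampling card above):

```python
import random
from statistics import mean

random.seed(3)

# Fabricated population of 10,000 scores.
population = [random.gauss(100, 15) for _ in range(10_000)]
population_mean = mean(population)

# Draw samples of increasing size and inspect each sample mean's error.
for n in (10, 100, 1000):
    sample_mean = mean(random.sample(population, k=n))
    print(f"n={n:>4}: sample mean = {sample_mean:6.2f}, "
          f"sampling error = {sample_mean - population_mean:+.2f}")
```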
77
Straightforward manipulations
Ex: assignment to groups/conditions. Treatment vs. control (therapy or pill); two different signs in the bathroom; different instructions to participants.
78
Staged manipulations
Used when trying to create some psychological state (like anger or anxiety). Frequently use a confederate or accomplice, and frequently involve deception. Ex: the Milgram obedience study; a sneezing confederate.
79
Strong IV manipulations
Maximizes the difference between groups and increases the chances that the IV will have a statistically significant effect. Common in the early stages of research, when we want to show that a relationship exists. External validity? A strong manipulation might create situations that never occur naturally.
80
Measuring the Dependent Variable- Self-report measures:
Simply ask them! Used to measure aspects of human thought and behavior: attitudes, judgments, emotional states, intended behaviors, etc. Ex: Do you exercise? How often? For how long?
81
Measuring the Dependent Variable Behavioral measures:
Direct observations of behavior. Ex: observe how much exercise someone does; a Fitbit's accelerometer/pedometer data (the Fitbit observes for you!).
82
Measuring the Dependent Variable Physiological measures:
Recordings of responses of the body. Ex: Fitbit heart rate monitor data; blood lactate concentration.
84
Measuring the Dependent Variable Ceiling effect:
The maximum level is quickly reached (task too easy?). Ex: on a scale of 1-5, most responses are 4 or 5; we might have seen a better spread if the scale were 1-10.
85
Floor effect:
The minimum level is quickly reached; variability in the data is reduced and differences are difficult to detect because they are simply not captured by the DV measure.
86
Floor and ceiling
This also happens when there isn't much room for improvement in your sample. Ex: low depression scores to start, with nowhere to go; A1C; stress management; a 1550 SAT score.
87
Measuring the DV: Likert scale
Most people use "Likert-like" or "Likert-type" scales. A "true" Likert scale contains several items; response levels are arranged horizontally, anchored with consecutive integers and verbal labels that connote more-or-less evenly spaced gradations, and are bivalent and symmetrical about a neutral middle.
88
Participant expectations-Demand characteristics
Participants respond/behave how they think is expected; especially problematic if they know the hypothesis. Control with deception, filler items, asking about the perceived purpose of the research, and blinding!
89
Placebo effect
Just taking a pill makes a difference. A placebo condition is used to control for this: compare the placebo pill with the active pill; the active effect needs to be above and beyond the placebo response.
90
Balanced-placebo design
Used when specifically looking for the effect of expectations (what participants are told they are receiving is crossed with what they actually receive).
91
Nocebo
A nocebo response occurs when a participant's symptoms are worsened by the administration of an inert, sham, or dummy (simulator) treatment, due to negative expectations about the treatment or prognosis.
92
Experimenter expectancy
Subtle biases in how the experimenter interprets and records behaviors, or in how they interact with participants.
93
Controlling the expectancy problem
Train experimenters. Run all participants at the same time so the experimenter's behavior is consistent. Take the human out of the equation: automate procedures (printed or video directions).
94
Controlling demand characteristics and the expectancy problem
Use single-blind and double-blind procedures.