PSYC 301 Flashcards

1
Q

simplest explanation for difference is

A

chance

2
Q

independent samples t test equation

A

t = (x̄1 - x̄2) / SE

why no population term? under the null we assume the population mean difference = 0

3
Q

what’s the idea of SE

A

accuracy or precision of our estimates

when it’s small, our estimates are probably pretty good

as sample size increases, SE decreases
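
The inverse relation between n and SE can be sketched in Python (hypothetical scores, not from the course; `standard_error` is an illustrative helper):

```python
import math
import statistics

def standard_error(scores):
    """SE of the mean: sample SD divided by the square root of n."""
    return statistics.stdev(scores) / math.sqrt(len(scores))

# Same dispersion of scores, but the larger sample yields a smaller SE,
# i.e. a more precise estimate of the population mean.
small = [4, 6, 8, 10]         # n = 4
large = small * 4             # n = 16, identical spread
print(standard_error(small))  # bigger SE
print(standard_error(large))  # smaller SE
```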

4
Q

Abelson’s MAGIC criteria

A

M- Magnitude
A- Articulation
G- Generality
I- Interestingness
C- Credibility

5
Q

what is s (standard deviation)

A

dispersion of scores around the mean

6
Q

a bigger S (SD) means

A

a bigger spread in the scores; a worse (less precise) estimate

7
Q

focus of NHST

A

is on a qualitative decision: does a systematic difference exist, or is the difference merely a function of chance?

8
Q

NHST is a method for

A

deciding a difference likely exists, but does not speak to the size of that difference

9
Q

bayesian statistics argues that

A

just because the null is unlikely given our data, it does not necessarily mean the data are likely to be drawn from a population where our systematic difference is true (i.e., that the alt is true)

10
Q

in bayesian statistics, we calculate a?

A

Bayes factor

(the ratio of the likelihood of the alt hypothesis relative to the likelihood of the null hypothesis)

11
Q

what bayes factor is considered moderate and strong evidence for alternative hypothesis more likely than null

A

3 moderate (threshold for starting claims)
10 strong

12
Q

bayesian approach offers

A

an alternative method for assessing the viability of our random chance explanation vs a systematic explanation

13
Q

bayesian statistics has same problem as NHST which is

A

“does increase in confidence in the alternative relative to the null really translate into magnitude of the effect and how do I interpret that”

  • doesn’t tell you whether it’s big or small which is the same issue with NHST
14
Q

raw effect sizes

A
  • not used much in psych
  • look at the size of the difference between the two means and treat that as an index of magnitude
  • used in econ, where money provides a meaningful benchmark
15
Q

raw effect sizes are helpful when

A

when the outcome variable of interest (DV) is on a metric that is meaningful and readily interpretable in light of some clear criteria

16
Q

raw effect sizes are problematic when

A
  • the outcome variable is not easily interpretable with respect to specifiable criteria
  • one needs to compare effects with outcome variables that are on different metrics
17
Q

standardized effect sizes indices names

A

cohen’s d and pearson’s r

18
Q

independent samples t test cohens d formula

A

ds = (x̄1 - x̄2) / pooled s

19
Q

pooled S equation

A

Pooled s = √[((n1 - 1)s1² + (n2 - 1)s2²) / (n1 + n2 - 2)]
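
A minimal Python sketch combining this pooled-SD formula with the ds formula from the previous card (invented groups; `cohens_ds` is an illustrative helper, not a library function):

```python
import math
import statistics

def cohens_ds(x, y):
    """Independent-samples Cohen's d: mean difference over pooled SD."""
    n1, n2 = len(x), len(y)
    s1_sq = statistics.variance(x)  # sample variances (n - 1 denominator)
    s2_sq = statistics.variance(y)
    pooled_s = math.sqrt(((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2))
    return (statistics.mean(x) - statistics.mean(y)) / pooled_s

group1 = [5, 6, 7, 8, 9]  # mean 7
group2 = [3, 4, 5, 6, 7]  # mean 5, same spread
print(cohens_ds(group1, group2))  # mean difference of 2 in pooled-SD units
```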

20
Q

ds ____ as the mean difference ____ and the standard deviations ____

A

increases; increases; decrease

21
Q

ds is not influenced by

A

sample size

22
Q

cohens d is sensitive to 2 properties of the data

A
  • differences between means (two means really far apart give a bigger d than two closer together)
  • standard deviation (as the SD gets really small, effect sizes get bigger)
23
Q

cohens d is an index for

A

for how distinct 2 groups are from each other

24
Q

ds has a minimum value of __ and an upper boundary of ___

A

min value of 0 and no upper boundary

25
Q

ds can be interpreted as a proportion of the DV’s SD

A

0.5- difference between the means is half the size of the dependent variable’s SD

1.00- indicates the difference is as big as the SD of the dependent variable

2.00- indicates a mean difference twice the size of the standard deviation of the DV

26
Q

ds guidelines (cohen’s d guidelines)

A

0.2 small
0.5 medium
0.8 large

27
Q

dav equation

A

dav = D / avg. s (the mean difference divided by the average of the two conditions’ SDs)

28
Q

what does dav ignore and drm takes into account

A

ignores the magnitude of correlation between sets of observations

29
Q

drm equation

A

look @ ipad

30
Q

___ will tend to be more similar to ___ than ___ except when r is low and the difference between SD are large

A

dav will tend to be more similar to ds than drm except when r is low and differences between SDs are large

31
Q

___ is more conservative than ___ but is considered overly conservative when r is large

A

drm dav

32
Q

pearson r coefficient r is what

A

r is the strength of association between variables

33
Q

r can be calculated to express what

A

r can be calculated to express the strength and direction of association between two continuous variables and also the relationship between a dichotomous variable (ex. membership in one of two groups) and a continuous variable (ex. a dependent variable)

34
Q

biserial correlation

A

r express the relationship between a dichotomous variable (ex. membership in one of two groups) and a continuous variable (ex. a dependent variable)

in this context r can be conceptualized as the strength of association between membership in one of the two groups and scores on the dependent variable; when squared, it expresses the proportion of variance in the DV accounted for by group membership
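
As a sketch, this use of r can be computed as an ordinary Pearson r on a 0/1 dummy code for group membership (invented data; `pearson_r` is a hand-rolled helper):

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation from deviations about the means."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Dummy-code group membership (0 = control, 1 = treatment) and correlate
# it with the DV; squaring r gives the proportion of variance explained.
group = [0, 0, 0, 0, 1, 1, 1, 1]
dv    = [3, 4, 4, 5, 6, 6, 7, 7]
r = pearson_r(group, dv)
print(r, r ** 2)
```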

35
Q

interpreting r as an effect size index

A

r ranges from -1.00 to 1.00 with .00 indicating no association

36
Q

cohen’s guidelines for r

A

.10 (small)
.30 (medium)
.50 (large)

37
Q

if r gets bigger, our cohen’s d gets ___

A

smaller

will adjust down the more correlated the two scores are

38
Q

large effect sizes do not directly imply practical significance, why?

A
  1. metric can be hard to interpret without reference to more concrete reference criteria
  2. durability of an effect might also be relevant in addition to its size
  3. cost/benefit analysis also can determine practicality
39
Q

when are small effects impressive

A

when there are minimal manipulations of the IV

when it’s difficult to influence the DV

40
Q

conceptual consequences of an effect also critical to evaluating importance

A

existence of an effect differentiates between competing theories

existence of an effect challenges reigning theory

existence of an effect demonstrates a new or disputed phenomenon

41
Q

when computing confidence intervals, we typically specify

A

95% CIs

42
Q

The confidence interval is a

A

range of values where you expect the true difference between the population averages to fall.

43
Q

the width of confidence intervals will be determined by what

A

standard errors which are influenced by sample size and variability around the means
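
A hedged Python sketch of a 95% CI for a mean difference: the data are invented, and the critical t (2.101 for df = 18 at α = .05, two-tailed) is assumed from a standard t table:

```python
import math
import statistics

def ci_mean_difference(x, y, t_crit):
    """95% CI for the difference between two independent means.

    t_crit is the two-tailed critical t for df = n1 + n2 - 2,
    looked up from a t table (2.101 for df = 18 at alpha = .05).
    """
    n1, n2 = len(x), len(y)
    s1_sq, s2_sq = statistics.variance(x), statistics.variance(y)
    pooled_var = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))  # SE of the difference
    diff = statistics.mean(x) - statistics.mean(y)
    return diff - t_crit * se, diff + t_crit * se

a = [6, 7, 8, 9, 10, 6, 7, 8, 9, 10]  # n = 10
b = [4, 5, 6, 7, 8, 4, 5, 6, 7, 8]    # n = 10, mean difference = 2
low, high = ci_mean_difference(a, b, t_crit=2.101)
print(low, high)  # interval does not contain 0
```

Larger n or smaller variability shrinks the SE and therefore narrows the interval.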

44
Q

if a confidence interval around an effect size contains 0,

A

that indicates the effect is not statistically significant

45
Q

a really big t value says what

A

that it is unlikely to have come from a null population (chance alone is an implausible explanation)

46
Q

how unlikely does t-value need to be before concluding the chance explanation is no longer tenable

A

alpha (0.05)

47
Q

type 1 error

A

concluding a mean diff exists in the pops (rejecting the null) when there isn’t actually a difference

48
Q

type 2 error

A

concluding there is no mean difference between pops (failing to reject the null) when there is actually a difference in means between the pops

20% chance

49
Q

power

A

likelihood of finding an effect when it’s really there; the complement of β (power = 1 - β)

conventionally 80%

50
Q

determinants of power

A
  1. alpha level
    - the stricter the α, the lower the power (more likely to make a type 2 error); under the control of researchers
  2. sample size
    - as n increases, precision increases, which shrinks SE
    - the larger the n, the greater the power
    - under the control of the researchers
  3. magnitude of effect
    - the larger the effect of the IV, the greater the power
    - somewhat under the control of researchers
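
One way to see the sample-size determinant is a small simulation (all values assumed for illustration: d = 1.0, α = .05 two-tailed, critical t values taken from a t table):

```python
import math
import random

def t_stat(x, y):
    """Independent-samples t with pooled variance."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

def simulate_power(n, d, t_crit, reps=2000):
    """Fraction of simulated experiments that detect a true effect of size d."""
    hits = 0
    for _ in range(reps):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(d, 1) for _ in range(n)]
        if abs(t_stat(a, b)) > t_crit:
            hits += 1
    return hits / reps

random.seed(1)
# critical t from a table: df = 18 -> 2.101, df = 38 -> 2.024
power_n10 = simulate_power(n=10, d=1.0, t_crit=2.101)
power_n20 = simulate_power(n=20, d=1.0, t_crit=2.024)
print(power_n10, power_n20)  # power grows with n
```
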
51
Q

when is a between-subjects one way ANOVA used

A

when the factor has more than 2 levels, thus requiring comparison of means from 3 or more independent samples

52
Q

single factor experiment

A

an experiment with one IV

53
Q

one way anova

A

an anova with a single factor

54
Q

two factor experiment

A

an experiment with two IVS

55
Q

two way anova

A

an anova with two factors

56
Q

what does a one way anova test

A

tests if at least 1 mean difference exists among the levels

57
Q

null and alt hypothesis in ANOVA

A

null says all population means are equal
alt says at least one mean is different from the others

58
Q

anova used to test mean diffs but its calculations are based on

A

variances

it’s all about testing mean diffs, but the computations are really tests on variances

59
Q

total variability in scores can be divided into:

A

between treatments variability (captures mean diffs) and within treatments variability (variability within scores)

60
Q

between treatments variability: if one were to compare a single score drawn from each of two conditions, these 2 scores could be different for 3 reasons

A
  1. treatment effect: the manipulation distinguishing between conditions could influence scores
  2. individual differences: differences in backgrounds, abilities, attributes, and circumstances of individual people
  3. experimental error: chance errors that occur when measuring the construct of interest (ex. lack of attention)
    - researchers try to minimize this in their studies
61
Q

within treatment variability: if one were to compare two scores drawn from the same condition, these scores could be different for 2 reasons

A
  1. individual differences
  2. experimental error

note* no treatment effect listed bc this is a constant within the condition

62
Q

what test statistic is associated with an ANOVA and what’s its formula

A

F-ratio statistic (F test)

F= variance between treatments/variance within treatments

in other words:

F = (treatment effect + individual diffs + experimental error) / (individual diffs + experimental error)

63
Q

when the null is true for a between subjects one way ANOVA:

A

F = (0 + individual diffs + experimental error) / (individual diffs + experimental error)

results in a value nearly equal to 1

64
Q

when the null is false for a between subjects one way ANOVA:

A

F = (treatment effect + individual diffs + experimental error) / (individual diffs + experimental error)

results in a value larger than 1

65
Q

denominator of the f test

A

measures uncontrolled and unexplained (unsystematic) variability in the scores

called the error term

66
Q

numerator of the f test

A

measures same error variability, but also variability from systematic influences (treatment effect)

67
Q

other way to describe f test

A

systematic variability/error term

68
Q

k, n, N, T, G meaning in ANOVA

A

k- number of levels (conditions) in the factor

n- sample size for specific condition

N- sample size for whole study

T- sum of scores within a specific condition

G- sum of all scores in the experiment (all Ts)

69
Q

SS ANOVA

A

sums of squares (the sum of the squared deviations of each individual score from the mean)

an index of variability

70
Q

ANOVA involves two parts

A

analysis of sums of squares
analysis of dfs

71
Q

SS between formula

A

look at ipad

72
Q

what is SS means

A

deviation of group/condition means around a grand mean

represents how much spread there is

if condition means deviate a lot from the grand mean, the conditions are really different from each other

73
Q

in ANOVA the term for variance is

A

mean square

74
Q

sample variance equation

A

s² = SS / (n - 1) = SS / df

75
Q

general formula for mean square (MS) is

A

MS = SS/df

gives us variance

76
Q

f ratio formula

A

F = MS between/MS within
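
The whole decomposition behind this ratio can be sketched numerically (invented scores for three conditions; this also yields the η² effect size from a later card):

```python
import statistics

# Hypothetical scores for k = 3 conditions (n = 4 per group)
groups = [
    [2, 3, 4, 3],  # condition 1
    [5, 6, 5, 6],  # condition 2
    [8, 9, 8, 9],  # condition 3
]

all_scores = [s for g in groups for s in g]
grand_mean = statistics.mean(all_scores)

# SS between: squared deviations of group means around the grand mean,
# weighted by group size
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# SS within: squared deviations of scores around their own group mean
ss_within = sum((s - statistics.mean(g)) ** 2 for g in groups for s in g)

k = len(groups)
N = len(all_scores)
ms_between = ss_between / (k - 1)  # df between = k - 1
ms_within = ss_within / (N - k)    # df within = N - k
F = ms_between / ms_within
eta_sq = ss_between / (ss_between + ss_within)  # SS total = SS between + SS within
print(F, eta_sq)
```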

77
Q

if we get an F value for which there is only a 5% or less chance of obtaining a value that large or larger, we no longer consider what

A

no longer consider the null explanation tenable and conclude that at least 1 difference exists among the means

78
Q

the one way anova is an ____ test that _______

A

the one way anova is an omnibus test that evaluates a very global and diffuse question

means that it tells us at least 1 difference exists, but not the precise number of differences or where they occur; this presents a challenge for Articulation in Abelson’s MAGIC criteria, since as results get more complex, there are more ways in which they can be articulated

79
Q

two general approaches to follow-up tests for anovas

A

post hoc: follow up tests that are not based on prior planning or clear hypothesis

a priori tests (planned tests): planned or theoretically driven follow up tests

80
Q

when is a post hoc test considered appropriate and what do they assume

A

when the omnibus F test is significant

assume no clear conceptual basis for comparisons and thus explore all possible pairwise comparisons

81
Q

what do post hoc tests attempt to control for

A

attempt to control for familywise error (the type I error rate across tests conducted on the same data)
- once you do 5 of these tests, the error rate is about a 23% chance
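
The ~23% figure can be checked directly, assuming independent tests each at α = .05:

```python
# Familywise error: probability of at least one Type I error across m
# independent tests, each conducted at alpha = .05
alpha = 0.05
for m in (1, 3, 5, 10):
    familywise = 1 - (1 - alpha) ** m
    print(m, round(familywise, 3))

# With m = 5 tests the rate is about .226 (the ~23% above).
# A Bonferroni adjustment tests each comparison at alpha / m instead:
bonferroni_alpha = alpha / 5
print(bonferroni_alpha)
```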

82
Q

relationship between familywise error and power

A

stricter control of familywise error comes at the cost of reduced power

83
Q

common post hoc tests

A

LSD
- doesn’t control for familywise error
Bonferroni adjustment
- keeps familywise error down with a small number of comparisons
- takes the traditional α and divides it by the # of comparisons
Tukey HSD
- tests all pairwise comparisons with strong control of familywise error
- unequal sample sizes and differences in variances are a problem

84
Q

a priori tests used when

A

when there are expectations about specific differences or there are specific comparisons that are particularly important to the research question

planned contrasts allow us to test more specific patterns or comparisons within our omnibus f test

85
Q

the precise comparisons that are conducted in an a priori test are specified by

A

contrast weights

86
Q

what does an anova with a significant f test entail

A

tells that at least 1 difference exists among our 4 means

88
Q

if the sum of the products of two contrasts’ weights is 0 then

A

they are orthogonal

89
Q

if the sum of the products of two contrasts’ weights is anything other than 0 then

A

they are nonorthogonal

90
Q

orthogonal meaning

A

slices of variance completely independent of each other

91
Q

non orthogonal

A

when slices of variance overlap (the results of the contrasts are not independent of one another)

nothing wrong with using non-orthogonal contrasts so long as you recognize the lack of independence

92
Q

most commonly reported effect size in anova

A

eta-squared

93
Q

eta squared equation

A

η² = SS between / SS total

ranges from 0 to 1.00

94
Q

cohen’s f

A

another effect size used for anova

f = √(SS effect / SS error)

95
Q

assumptions of between-subjects one way anova

A
  • independence of observations
  • the distribution of the outcome variable should be normally distributed in each group
  • homogeneity (equality) of variance in the outcome variable across the groups