Chapters 9-11 Flashcards

1
Q

When are t distributions used?

A

(a) when we don’t know the population standard deviation and (b) when we compare two samples

2
Q

Sample standard deviation

A

an estimate of the population standard deviation; the only practical difference between the t test and the z test
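For reference, a standard textbook form of the formula (assuming the usual notation: X = each score, M = sample mean, N = sample size): $s = \sqrt{\frac{\sum (X - M)^2}{N - 1}}$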

3
Q

N-1

A

used in the denominator of the sample standard deviation to correct for the tendency of a sample to underestimate the population standard deviation

4
Q

t statistic

A

distance of a sample mean from a population mean in terms of the estimated standard error
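In symbols (the usual single-sample form, with $\mu_M$ the population mean and $s_M$ the estimated standard error): $t = \frac{M - \mu_M}{s_M}$, where $s_M = \frac{s}{\sqrt{N}}$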

5
Q

single-sample t test

A

hypothesis test in which we compare a sample from which we collect data to a population for which we know the mean but not the standard deviation

6
Q

degrees of freedom

A

number of scores that are free to vary when we estimate a population parameter from a sample (i.e. can take on different values when a given parameter is known)
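For the standard cases: a single-sample t test has $df = N - 1$; an independent-samples t test has $df_{total} = df_X + df_Y = (N_X - 1) + (N_Y - 1)$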

7
Q

paired-samples t test

A

aka dependent-samples t test; used to compare two means for a within-groups design, a situation in which every participant is in both samples
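Sketch of the usual procedure: compute a difference score D for each participant, then test those differences like a single sample: $t = \frac{M_D - \mu_D}{s_{M_D}}$, with $\mu_D$ typically 0 under the null hypothesis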

8
Q

replication

A

repetition of a study that gives us confidence that a particular observation is true

9
Q

Independent-samples or between-groups t test

A

used to compare two means for a between-groups design, wherein each participant is assigned to only one condition
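In the standard form (assuming samples X and Y): $t = \frac{M_X - M_Y}{s_{difference}}$, where $s_{difference}$ is the standard error of the difference between means, built from the pooled variance (see the next card)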

10
Q

Pooled variance

A

a weighted average of the two estimates of variance, one from each sample in an independent-samples t test
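The usual df-weighted form: $s^2_{pooled} = \frac{df_X}{df_{total}} s^2_X + \frac{df_Y}{df_{total}} s^2_Y$, where $df_{total} = df_X + df_Y$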

11
Q

Error bars

A

vertical lines added to bars or dots on a graph that represent the variability of those data and give us a sense of how precise an estimate the summary statistic is

12
Q

inflating alpha

A

the probability of a Type I error increases as the number of samples, and therefore the number of statistical comparisons, increases
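Rough illustration (assuming independent comparisons): the familywise error rate is about $1 - (1 - \alpha)^c$ for c comparisons, so at $\alpha = .05$, three comparisons give roughly $1 - .95^3 \approx .14$, not .05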

13
Q

What do the z, t, and F distributions have in common?

A

They all rely on the characteristics of the normal, bell-shaped curve; the F distribution is a more conservative and versatile version of the z and t distributions

14
Q

Analysis of variance (ANOVA)

A

a hypothesis test typically used with one or more nominal (sometimes ordinal) independent variables with at least three groups overall and a scale dependent variable

15
Q

F statistic

A

a ratio of two measures of variance: (1) between-groups variance, which reflects the differences among the sample means, and (2) within-groups variance, which is an average of the sample variances
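In symbols: $F = \frac{MS_{between}}{MS_{within}}$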

16
Q

Between-groups variance

A

an estimate of the population variance, based on the differences among the means

17
Q

Within-groups variance

A

an estimate of the population variance based on the differences within each of the three or more sample distributions; reflects the amount of difference among means we’d expect by chance

18
Q

one-way ANOVA

A

a hypothesis test that includes both one nominal independent variable with more than two levels and a scale dependent variable

19
Q

between-groups ANOVA

A

a hypothesis test in which there are more than two samples, and each sample is composed of different participants

20
Q

within-groups or repeated-measures ANOVA

A

a hypothesis test wherein there are more than two samples, and each sample is composed of the same participants

21
Q

Three assumptions for ANOVA

A

(1) Random selection is necessary if we want to generalize beyond the sample.
(2) A normally distributed population allows us to examine the distributions of the samples to get a sense of what the underlying population distribution might look like.
(3) Homoscedasticity assumes that the samples all come from populations with the same variance.

22
Q

Homoscedastic populations

A

those that have the same variance

23
Q

Heteroscedastic populations

A

those that have different variances

24
Q

Source table

A

presents the important calculations and final results of an ANOVA in a consistent and easy-to-read format

25
Q

MS

A

the conventional symbol for variance in ANOVA; stands for mean square, because a variance is the arithmetic mean of the squared deviations; used for both between-groups variance and within-groups variance
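Each mean square is a sum of squares divided by its degrees of freedom: $MS_{between} = \frac{SS_{between}}{df_{between}}$ and $MS_{within} = \frac{SS_{within}}{df_{within}}$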

26
Q

Grand mean

A

mean of every score in a study, regardless of which sample the score came from
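In symbols: $GM = \frac{\sum X}{N_{total}}$, summing every score in the study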

27
Q

What are the three sums of squares of ANOVA?

A

SSbetween, SSwithin, SStotal
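They are related by $SS_{total} = SS_{between} + SS_{within}$, where $SS_{total} = \sum (X - GM)^2$, $SS_{within} = \sum (X - M)^2$ (each score’s deviation from its own group mean), and $SS_{between} = \sum (M - GM)^2$ (each score’s group mean’s deviation from the grand mean)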

28
Q

R^2

A

an estimate of the proportion of variance in the dependent variable that is accounted for by the independent variable
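Computed as $R^2 = \frac{SS_{between}}{SS_{total}}$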

29
Q

Omega squared

A

a less biased estimate of effect size for ANOVA; an attempt to correct the bias of R^2
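One common formula, typically given for a one-way between-groups ANOVA: $\omega^2 = \frac{SS_{between} - (df_{between})(MS_{within})}{SS_{total} + MS_{within}}$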

30
Q

Post hoc test

A

a statistical procedure frequently carried out after the null hypothesis has been rejected in an analysis of variance; allows us to make multiple comparisons among means

31
Q

Tukey HSD test or q test

A

a widely used post hoc test that determines the differences between means in terms of standard error; allows us to make comparisons to identify differences that are “honestly” there
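One standard form (assuming equal sample sizes of N per group): $HSD = \frac{M_1 - M_2}{s_M}$, where $s_M = \sqrt{\frac{MS_{within}}{N}}$; the result is compared to a critical value of q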