T-tests Flashcards

1
Q

Why are repeated measures t-tests also called correlated t-tests?

A

In repeated measures designs, data from the same participants are likely to be correlated across conditions, e.g. someone with a good memory will tend to score highly regardless of condition

2
Q

What are t-tests?

A

Parametric tests of the significance of a difference between two sets of scores
For independent samples, use the unrelated/independent-samples t-test
For repeated measures, use the related/paired t-test

3
Q

What are the key assumptions that need to be met for use of parametric tests?

A

At least interval-level data
Samples drawn from normally-distributed populations
Homogeneity of variance, i.e. the variability of each group needs to be similar (especially for independent designs, where sample sizes may differ)

4
Q

What is the basis of a t-test?

A

The null hypothesis is that the two samples of scores come from populations with the same mean, i.e. the experimental condition has no effect
State the null in terms of DIFFERENCE VALUES: if there is no experimental effect, the differences should centre around 0 and be relatively small → “the population of differences has a mean of zero”

We are essentially asking through using this test whether a DIFFERENCE BETWEEN TWO MEANS IS MEANINGFUL

5
Q

What is the equation used in related t-tests?

A

t = (observed difference between the sample means, i.e. the mean of the difference scores) / (estimated standard error of the mean difference)
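A minimal Python sketch of this formula (not part of the deck), assuming we have the raw per-participant difference scores:

```python
import math

def related_t(diffs):
    """Related (paired) t: mean difference / estimated SE of the mean difference."""
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample SD of the difference scores (n - 1 in the denominator)
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)  # SE = SD / sqrt(n)
    return mean_d / se
```

Any list of difference scores works as input; the values used in testing it are illustrative.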

6
Q

So for t-tests we are interested in whether an observed difference between two conditions is significant, i.e. unlikely to occur by chance if the null hypothesis is true.
What information do we need to know?

A

The probability that a sample of difference scores with a difference mean as large as/larger than our calculated value would be drawn at random from a population with a difference mean of zero

7
Q

What is a sampling distribution of difference means?

A

The distribution we obtain when we dip into a population of differences centred on zero, taking samples of a certain size over and over and recording the mean difference each time - the means should vary only slightly around zero
This distribution will be narrower than a distribution of individual differences because it is composed of SAMPLES of differences taken in groups at a time - means don’t vary as much around zero as individual differences do
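This narrowing can be demonstrated with a quick simulation (illustrative, not from the deck; sample size of 20 is an arbitrary choice):

```python
import random
import statistics

random.seed(1)

# A population of individual difference scores centred on zero (null true)
population = [random.gauss(0, 1) for _ in range(100_000)]

# Dip in over and over, taking samples of 20 and recording each mean difference
sample_means = [statistics.mean(random.sample(population, 20))
                for _ in range(2_000)]

# The sample means cluster far more tightly around zero than the
# individual differences do (roughly the population SD / sqrt(20))
```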

8
Q

What do we want to know in a related t-test and how can we calculate it?

A

How many standard deviations our difference mean is from the population mean

Standard error=sd/square root of sample size
In a study where sample size is 13 and sd is 2.6, SE is 0.721

Now we ask how many SEs our difference mean is away from the hypothetical difference mean of zero - divide our difference mean (4.54) by our SE and we get our t-value (6.297)

So our difference mean, sampled at random from all difference means, would be 6.297 SEs away from the population mean of 0
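The arithmetic from this card, as a check in Python (numbers taken from the card itself):

```python
import math

# From the card: n = 13, SD of differences = 2.6, mean difference = 4.54
n, sd, mean_diff = 13, 2.6, 4.54

se = sd / math.sqrt(n)  # ≈ 0.721
t = mean_diff / se      # ≈ 6.296 (the card's 6.297 comes from rounding SE to 0.721 first)
```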

9
Q

Why can’t we interpret SDs away from the mean as a z-score in this example and what must we do instead?

A

We can’t look up a z-value to find the probability of a score that large occurring, because the distribution of t is not normal unless N is very large, at which point t-values become interchangeable with z-values
At lower N the t distribution is broader (heavier-tailed) than the normal distribution
So consult t-tables and find whether the calculated value exceeds the critical value at our alpha level and degrees of freedom

In our example our value exceeds the critical value by far so we confidently reject the null hypothesis and argue that the intervention does produce a difference between groups
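Consulting a t-table can be mimicked with a small lookup (sketch only; the two-tailed critical values at alpha = .05 are copied from standard tables for a few df):

```python
# Two-tailed critical values of t at alpha = .05, from standard t-tables
T_CRIT_05 = {5: 2.571, 10: 2.228, 12: 2.179, 20: 2.086, 30: 2.042}

def significant(t_obtained, df):
    """Reject the null if |t| exceeds the critical value for these df."""
    return abs(t_obtained) > T_CRIT_05[df]

# n = 13 gives df = 12; the example's t of 6.297 far exceeds 2.179
```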

10
Q

What are effect size and power?

A

EFFECT SIZE - estimate of size of effect we appear to have demonstrated (independent of sample size)
POWER - probability of not making a Type II error, i.e. the probability that a test will detect an effect that genuinely exists

11
Q

What are the interpretation values of Cohen’s d and Rosenthal’s R?

A

Cohen: 0.2 small, 0.5 medium, 0.8 large
Rosenthal: 0.1 small, 0.3 medium, 0.5 large

12
Q

What does a minus value for a t-statistic suggest?

A

The mean of the first variable listed in the SPSS Paired Samples output box is lower than the mean of the second variable

13
Q

How can you calculate Cohen’s D?

A

d = (mean1 − mean2)/s, where s is a combined SD for the two conditions (e.g. add the SDs for each condition and divide by two, as a simple stand-in for the pooled SD)
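As a sketch of that calculation (the numbers in the example are made up):

```python
def cohens_d(mean1, mean2, sd1, sd2):
    """Cohen's d with the two condition SDs averaged as the standardiser."""
    return (mean1 - mean2) / ((sd1 + sd2) / 2)

# e.g. means of 10 and 8 with SDs of 2 in each condition -> d = 1.0 (large)
```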

14
Q

When would we use an unrelated t-test?

A

On independent groups i.e. between subjects design or matched pairs design with TWO GROUPS

15
Q

What do independent t-tests do?

A

Compare MEANS for two sets of data from 2 completely DIFFERENT SAMPLES
Separate samples for each condition/population
Can have different n in each condition (but desirable for them to be roughly similar)

16
Q

What is a key difference from related t-test calculations?

A

We use a sampling distribution of difference between two sample means, rather than sampling distribution of difference means
We can’t simply look at pairs of scores for each participant as we do in related designs, and find the difference for each; instead we consider what would happen if we took two samples from two identical populations at random over and over again and recorded the difference between the sample means each time
Again we expect the differences to centre around zero i.e. two underlying population means are identical under the null hypothesis

17
Q

What is a key difference from the related t-test in how we go about finding our standard error?

A

In related t-tests we can use the central limit theorem to estimate standard error from the standard deviation of our sample
In this case, however, we have two standard deviations from two samples, so we have to ESTIMATE THE VARIANCE of the distribution from the POOLED variance of both samples - the square root of this value will then be our estimated standard error
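A sketch of the pooled estimate (this is the standard pooled-variance formula; any numbers passed in are illustrative):

```python
import math

def pooled_se(sd1, n1, sd2, n2):
    """Estimated SE of the difference between two independent sample means."""
    # pooled variance: weight each sample's variance by its degrees of freedom
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return math.sqrt(sp2 * (1 / n1 + 1 / n2))
```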

18
Q

How do we complete an unrelated t-test?

A

Again use tables: see whether the obtained value of t = (difference between the sample means)/SE exceeds the critical value and is therefore significant
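Putting the whole unrelated t-test together as one sketch (pooled SE, then t; the summary statistics in the test call are invented):

```python
import math

def unrelated_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t: difference between means / pooled SE."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# compare the result against the critical value at df = n1 + n2 - 2
```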

19
Q

What tests can we use if we have more than 2 groups?

A

ANOVA if parametric assumptions are met; otherwise Kruskal-Wallis (independent groups) or Friedman’s ANOVA (repeated measures)

20
Q

How does a single sample t-test differ from the related and unrelated t-tests?

A

In related/unrelated t-tests we don’t know the features of the underlying population, so we usually have a control group to compare against what would happen without treatment
We use a single-sample t-test when we DO know the features of the population - we don’t need a control group because we already know the population mean for the condition of no treatment
So our null hypothesis in this case is that the mean of our sample is the same as that of the population (not necessarily zero)

21
Q

What is the calculation for the t-value in single-sample t tests?

A

Difference between sample and population means divided by SE (which can be estimated from SD using central limit theorem)

Again use tables of critical values
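A sketch of that calculation (the example values are illustrative, e.g. testing a sample against a known population IQ mean of 100):

```python
import math

def one_sample_t(sample_mean, pop_mean, sample_sd, n):
    """Single-sample t: how many SEs the sample mean lies from the known population mean."""
    se = sample_sd / math.sqrt(n)  # SE estimated from the sample SD (central limit theorem)
    return (sample_mean - pop_mean) / se

# e.g. one_sample_t(105, 100, 15, 25) -> 5 / 3
```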