Weeks 5 and 6 Flashcards

1
Q

Note: “N” is the number of observations in the entire design; “ni” is the number of observations in sample i. In a balanced design, n1 = n2 = n.

1) The samples in an independent-samples t-test are said to be “independent” because the same participants are measured in both samples.
a) true
b) false

A

b) false

Explanation … The word “independent” here means that participants in one sample have no individual relationship with those in the other sample. In particular this means that if you know how much above or below the sample mean a particular participant scores in the first sample, this will tell you nothing about how much above or below the mean any participant in the second sample will score. The usual way to produce an independent-samples design is to ensure that different groups of participants appear in the 2 samples.

2
Q

The sampling distribution underlying an independent-samples t-test is a distribution of what values? (Note, the subscripts 1 and 2 in the answers refer to samples 1 and 2 in the design)

a) M1 - M2
b) <M1 - M2>
c) M1 + M2
d) sM1 - M2

A

a) M1 - M2

Explanation … In an independent-samples t-test you have 2 samples (often a treatment and a control). The statistic used in the test is the difference between these 2 means and the relevant sampling distribution is therefore the sampling distribution of differences between means. This makes a) the correct answer. Answer c) is the sum of the means rather than the difference. Answers b) and d) are descriptions of the centre and width, respectively, of the desired sampling distribution rather than the values that form it.

3
Q

Which of the following is the formula for degrees of freedom in an independent-samples t-test?

a) n1 + n2 - 1
b) N - 2
c) n1 - n2
d) N – 1

A

b) N - 2

Explanation … For tests with multiple independent samples, the rule is that each sample contributes ni - 1 degrees of freedom. Thus, in a 2-sample test there are df1 = n1 - 1 degrees of freedom in the first sample and df2 = n2 - 1 degrees of freedom in the second sample. Adding these together gives dftotal = df1 + df2 = (n1 - 1) + (n2 - 1) = (n1 + n2) - 2 = N - 2. Another way of thinking about this is that in these designs you start off with degrees of freedom equal to the total number of observations, N, in the design, and then you lose 1 degree of freedom for each distinct sample you have (a rule that works for ANOVA designs too).
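Purely as an illustration (not part of the original card, with made-up sample sizes), a minimal Python sketch checking that the two ways of writing the rule agree:

```python
# Hypothetical sample sizes, chosen only for illustration
n1, n2 = 12, 15
N = n1 + n2                          # total observations in the design

df_by_samples = (n1 - 1) + (n2 - 1)  # df contributed by each sample
df_by_total = N - 2                  # start with N, lose 1 df per distinct sample

print(df_by_samples, df_by_total)    # both print 25
assert df_by_samples == df_by_total
```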

4
Q

Which of the following is the formula for degrees of freedom in a balanced independent-samples t-test? (where n is the number of observations in each sample)

a) n - 2
b) 2n - 2
c) n – 1
d) none of the above

A

b) 2n - 2

Explanation … The overall rule for degrees of freedom is the same as in the last question; after all, this is still an independent-samples t-test. However, for a balanced design n1 = n2 = n, which in turn means that N = 2n. Substituting this into the expression for degrees of freedom from the last question gives dftotal = N - 2 = 2n - 2.

5
Q

Which of the following is not a formula for degrees of freedom in a balanced independent-samples t-test?

a) n - 2
b) N - 2
c) (n1 - 1)+(n2 - 1)
d) 2n – 2
e) df1 + df2

A

a) n - 2

Explanation … There are lots of equivalent ways of writing degrees-of-freedom formulas for 2-sample designs. When thinking about this, it may help to sketch out the samples involved and write out underneath them the sample sizes and corresponding degrees of freedom, so that you can see how the overall degrees of freedom corresponds to the degrees of freedom in each sample. You could also try a concrete example. For example, suppose n = 10 and work things out from there. If you have n = 10 observations in each sample, then there are df = 9 for each sample and, in a 2-sample design, this yields df = 18. You can now see that the only answer above that does not give df = 18 is a), where df = n - 2 = 10 - 2 = 8. So this must be the mistaken formula.

6
Q

6) In an independent-samples t-test with a balanced design, the larger the sample size, the larger the standard error of the difference between means.
a) true
b) false

A

b) false

Explanation … In an independent-samples t-test, the standard error of the difference between means is given by
sM1-M2 = √((s1²/n1) + (s2²/n2))
which, for a balanced design where n1 = n2 = n, becomes
sM1-M2 = √((s1² + s2²)/n)
From this you can see that the standard error depends inversely on the sample size, and so as the sample size grows larger, the standard error grows smaller.
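As a hedged sketch only (the variances and sample sizes below are made up), this shows the balanced-design formula in Python and how the standard error shrinks as n grows:

```python
import math

s1_sq, s2_sq = 20.0, 16.0            # made-up sample variances

def se_diff(n):
    """Standard error of the difference between means, balanced design (n1 = n2 = n)."""
    return math.sqrt((s1_sq + s2_sq) / n)

for n in (4, 9, 36, 100):
    print(n, round(se_diff(n), 3))   # the standard error shrinks as n grows
```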

7
Q

The standard error in an independent-samples t-test is larger than the standard error associated with either of the 2 samples

a) true
b) false

A

a) true

Explanation … The answer to this question is most easily seen by thinking about the variances of the sampling distributions involved rather than thinking about their standard errors. Since a standard error is by definition the standard deviation of a sampling distribution, the variance of any sampling distribution is just the square of its standard error.
The sampling distribution underlying an independent-samples t-test is the sampling distribution of differences between sample means (see question 2, above). The “variance sum law” says that the variance of this sampling distribution is the sum of the variances of the sampling distributions of the mean for the 2 samples. Thus, the variance of the overall sampling distribution must be larger than either of its 2 component variances. Since this is true of the variances, it is also true of the standard errors, and so the statement in the question is true.
Why does this matter? It means that the sampling distribution for an independent-samples t-test is wider than the sampling distribution for either of the samples. But in statistics wide sampling distributions are bad and narrow sampling distributions are preferred (because with a narrow sampling distribution it is easier to get out into the tails of the distribution and thereby reject the hypothesis being tested). Independent-samples t-tests, with their wide sampling distributions, therefore come with an inbuilt liability. The reason independent-samples t-tests are popular despite this liability is that they allow the use of a control group, and this strengthens the interpretation of the test result.
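A small simulation sketch in Python, assuming normal populations with arbitrary made-up parameters, illustrating the variance sum law: the variance of the distribution of M1 - M2 comes out close to the sum of the variances of the two distributions of the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 9, 100_000                 # made-up sample size and number of simulated experiments

# Draw many pairs of independent samples and record each sample's mean
m1 = rng.normal(loc=0, scale=4, size=(reps, n)).mean(axis=1)
m2 = rng.normal(loc=0, scale=5, size=(reps, n)).mean(axis=1)

print(np.var(m1 - m2))               # variance of the differences between means
print(np.var(m1) + np.var(m2))       # sum of the two component variances (approximately equal)
```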

8
Q

“Aggression internalization index scores were calculated for _____ participants following a session using active visual imagery. One randomly assigned group was asked to imagine positive visual images and a second randomly assigned group of equal size was asked to imagine neutral images. An independent-samples t-test showed that the mean internalized aggression score for those imagining positive images was significantly lower than for those imagining neutral images (positive image: M = 23, sX2 = 20; neutral image: M = 26, sX2 = 16; t(16) = _____, p = _____, 2 tailed)”

What is the total number of participants in the design?

a) 14
b) 16
c) 18
d) 36
e) 38

A

c) 18

Explanation … This is an independent-samples design with 16 degrees of freedom (as you can see from the report of the t-test). For designs like this, the appropriate degrees of freedom is N-2 where N is the total number of participants in the design (taking into account both samples). Thus, if df = 16 = N - 2 then N = 18.

9
Q

“Aggression internalization index scores were calculated for _____ participants following a session using active visual imagery. One randomly assigned group was asked to imagine positive visual images and a second randomly assigned group of equal size was asked to imagine neutral images. An independent-samples t-test showed that the mean internalized aggression score for those imagining positive images was significantly lower than for those imagining neutral images (positive image: M = 23, sX2 = 20; neutral image: M = 26, sX2 = 16; t(16) = _____, p = _____, 2 tailed)”

How many participants are there in each sample?

a) 4
b) 9
c) 12
d) 18

A

b) 9

Explanation … From the previous question, the total number of participants in the design is 18. But in the report of the results it says that the samples in this design were of equal size. Therefore the sample size in this design is n = 18/2 = 9.
Note that this question asks for the sample size, n, and not the overall number of participants in the design, N. For tests with more than 1 sample, n does not equal N.

10
Q

“Aggression internalization index scores were calculated for _____ participants following a session using active visual imagery. One randomly assigned group was asked to imagine positive visual images and a second randomly assigned group of equal size was asked to imagine neutral images. An independent-samples t-test showed that the mean internalized aggression score for those imagining positive images was significantly lower than for those imagining neutral images (positive image: M = 23, sX2 = 20; neutral image: M = 26, sX2 = 16; t(16) = _____, p = _____, 2 tailed)”

Ignoring its sign, what is the t-value for the test reported in the passage?

a) 1
b) 3/2
c) 2
d) 4

A

b) 3/2

Explanation … The formula for the t-value in an independent-samples design is …
t = (M1 - M2)/√((s1²/n1) + (s2²/n2))

To calculate t, all we need to do is harvest some values from the quoted passage and put them into the formula. We can make the following calculations …
M1 - M2 = 23 - 26 = -3,   and
sM1-M2 = √((20/9) + (16/9))
= √((20 + 16)/9)
= √(36/9)
= √4
= 2
(where n1 = n2 = n = 9 because N = 18 from the previous question). Putting everything together gives t = -3/2 which, ignoring the sign, is answer b).
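As a hedged cross-check (not part of the original report), the same summary statistics can be fed to scipy's ttest_ind_from_stats helper; with equal sample sizes the pooled formula it uses gives the same t as the calculation above:

```python
import math
from scipy import stats

# Summary statistics harvested from the passage
m1, var1, m2, var2, n = 23, 20, 26, 16, 9

# By hand, as in the explanation above
se = math.sqrt(var1 / n + var2 / n)          # = 2
t_by_hand = (m1 - m2) / se                   # = -1.5

# scipy helper takes standard deviations, not variances
result = stats.ttest_ind_from_stats(m1, math.sqrt(var1), n,
                                    m2, math.sqrt(var2), n)
print(t_by_hand, result.statistic)           # both -1.5
```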
11
Q

In the previous question, why were you asked to ignore the sign of the t-value you calculated?

a) Because in calculations like this you always use the treatment as sample 2 and the control as sample 1 whereas in the calculations you use M1 – M2
b) Because t-values can only be positive
c) Because whether a t-value is positive or negative makes no difference to the corresponding p-value in a 2-tailed test.

A

c) Because whether a t-value is positive or negative makes no difference to the corresponding p-value in a 2-tailed test.
Explanation … t-distributions are always symmetric and centred on zero. The p-value in a 2-tailed t-test, therefore, depends only on the absolute value of t and not on whether it is positive or negative.
In a test like this it is purely a matter of preference which sample to call sample 1 and which to call sample 2; there is no widely agreed-upon convention. But this decision is what determines whether the t-value will be positive or negative. For this reason, in published reports the sign of the t-value is often ignored and the direction of the difference in means is instead reported in words. In the example of aggression internalization scores, for instance, it is better to report that those who visualized positive images had a lower mean aggression score than those who visualized neutral images than it is to report that the t-value is positive or negative.
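A brief illustrative Python sketch (using the t and df values from the example above) showing that +t and -t give identical 2-tailed p-values:

```python
from scipy import stats

t_value, df = 1.5, 16                       # values from the example above

# Two-tailed p-value: twice the area beyond |t| in one tail
p_positive = 2 * stats.t.sf(abs(+t_value), df)
p_negative = 2 * stats.t.sf(abs(-t_value), df)

print(round(p_positive, 4), round(p_negative, 4))   # identical
```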

12
Q

Suppose you conduct a z-test on a set of data (n = 7) and fail to reject the hypothesis being tested. You would be more likely to reject the hypothesis if you conduct a t-test on the same data.

a) true
b) false

[Figure: a standard normal curve (equivalently, a t-distribution with df ≥ 30) overlaid with a flatter, heavier-tailed t-distribution with df = 6; the critical point for the t-distribution with df = 6 sits further out in the tail than the critical point for the normal distribution.]

A

b) false

Explanation … You should think of a t-distribution as being like a z-distribution that has been squished so that its tails have spread out to the sides. As you can see in the figure above, this means that the critical points for rejecting a hypothesis in a t-distribution are greater than, or equal to, those for the z-distribution. Thus, for the same data (i.e., for the same difference between the measured sample mean and the hypothesized mean) it is as hard or harder to reject a hypothesis using a t-test as with a z-test. This means that t-tests are inherently weaker than z-tests.

This is true particularly for small sample sizes: the smaller the sample size, the weaker a t-test is relative to a z-test. For large samples, however, the results of z-tests and t-tests become almost the same.
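A short illustrative Python sketch comparing 2-tailed critical values at α = .05: the t critical point is larger than the z critical point, and the gap closes as the degrees of freedom grow:

```python
from scipy import stats

z_crit = stats.norm.ppf(0.975)              # ≈ 1.96
for df in (6, 10, 30, 100):
    t_crit = stats.t.ppf(0.975, df)         # t critical value for this df
    print(df, round(t_crit, 3), round(z_crit, 3))
```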

13
Q

Suppose the result of a t-test is t(6) = 1.9. On the basis of this result, one would fail to reject the hypothesis under test.

a) true
b) false

[Figure: a standard normal curve (equivalently, a t-distribution with df ≥ 30) overlaid with a flatter, heavier-tailed t-distribution with df = 6; the critical point for the t-distribution with df = 6 sits further out in the tail than the critical point for the normal distribution.]

A

a) true

Explanation … The stated t-value of 1.9 is less than the critical point of 1.96 used for a design with many degrees of freedom. Now look at the figure above. You can see that as the degrees of freedom grow smaller, the critical points of the t-distribution become larger than 1.96, not smaller. This means that even for a small number of degrees of freedom like df = 6, a t-value of 1.9 will be less than the critical point of the t-distribution. Therefore we fail to reject the hypothesis under test.
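As a quick hedged check in Python (2-tailed, α = .05), |t| = 1.9 falls short of the critical value for df = 6, so the decision is to fail to reject:

```python
from scipy import stats

t_obs, df, alpha = 1.9, 6, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df)     # ≈ 2.447 for df = 6

print(round(t_crit, 3), abs(t_obs) >= t_crit)   # 2.447, False -> fail to reject
```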

14
Q

Which of the following qualifies as a formula for degrees of freedom in a single-sample t-test?
a) n - 2
b) n
c) n - 1
d) none of the above

A

c) n - 1

Explanation … In a single-sample test the degrees of freedom for a t-test is the number of observations minus 1. Since “n” is the number of observations, the formula is df = n - 1.

15
Q

There is no such thing as a t-test for the skew.

a) true
b) false

A

a) true

Explanation … When conducting tests of the mean it is necessary to find the standard error of the mean so that its value can go in the denominator of the test. According to the most often-used formula in PSYC3000, you find this standard error by taking either a hypothesized population standard deviation (σX, z-test) or an estimated population standard deviation (sX, t-test) and dividing by the square root of the sample size, i.e., …
σM = σX/√n or sM = sX/√n
For a test of the skew, you likewise need to find a standard error. This time it is the standard error of the skew you need, not the standard error of the mean. But finding the standard error of the skew is easy and you don’t need to worry about the population standard deviation at all. The expression for the standard error of the skew is simply …
σSkew = √(6/n)
This means that we never have to worry about estimating (or not estimating) the population standard deviation; it isn’t in the formula at all. And that means that the sampling distribution of the skew (for a skew = 0 population) is always Normal and never a t-distribution. Hence the test of the skew is always a z-test.
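A hedged Python sketch of this z-test for skew, using the √(6/n) standard error quoted above; the data are simulated and the sample size is made up purely for illustration:

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)                     # hypothetical sample, n = 50

observed_skew = stats.skew(x)               # sample skew
se_skew = math.sqrt(6 / len(x))             # standard error of the skew (no sd estimate needed)

z = observed_skew / se_skew                 # always a z-test, never a t-test
p = 2 * stats.norm.sf(abs(z))               # two-tailed p-value
print(round(z, 3), round(p, 3))
```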

16
Q

In a single-sample t-test, the larger the sample size, the larger the standard error of the mean.

a) true
b) false

A

b) false

Explanation … In a single-sample t-test, the standard error of the mean is given by sM = sX/√n. From this formula you can see that as n grows larger the standard error of the mean grows smaller, not larger. This is a fundamental result for experimental design: large samples yield small standard errors, and small standard errors make it easier to reject a hypothesis.
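A tiny illustrative Python sketch (made-up standard deviation) of sM = sX/√n shrinking as n grows:

```python
import math

s_x = 100.0                                  # hypothetical sample standard deviation
for n in (25, 100, 400):
    print(n, s_x / math.sqrt(n))             # standard error of the mean: 20.0, 10.0, 5.0
```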

17
Q

If you want to reject the hypothesis of a single-sample t-test (often the goal of research!), is it better for a sample to have a large standard deviation or a small standard deviation?

a) large sample standard deviation
b) small sample standard deviation

A

b) small sample standard deviation

Explanation … To reject the hypothesis in a single-sample t-test we need to show that the sample mean is more than some critical number of standard errors from the hypothesized population mean (i.e., that it is in the tails of the t-distribution). The larger the standard error of the sampling distribution of the mean, the harder this will be. But since our estimate of the standard error of the mean in a t-test is sM = sX/√n, you can see that a large sample standard deviation is associated with a large standard error. Therefore, to give ourselves a better chance of rejecting the hypothesis, we would like a nice small sample standard deviation. Normally, though, this is not under our control.

18
Q

Researchers ask participants to press a button as soon as they perceive a brief flash of light at the edge of their visual field. A spatial bias in perception is measured by the difference in reaction times for flashes on the left versus the right. If the difference is zero there is no bias. Suppose that a sample of 100 participants has a mean response-time difference of 20 milliseconds (favouring the right side) with a standard deviation of 100 milliseconds. Calculate the t-value for a test of the hypothesis that humans have no spatial bias in visual perception.

a) t = 1
b) t = 2
c) t = 20

A

b) t = 2

Explanation … This is a t-test with a sample size of n = 100, an observed mean of M = 20, a hypothesized mean of µ = 0 (i.e., no bias) and a standard error of the mean estimated by sM = sX/√n = 100/√100 = 100/10 = 10. Assembling all this information into the typical t-test formula gives t = (observed - expected)/(standard error) = (20 - 0)/10 = 2.
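A minimal hedged Python sketch of the same calculation from the reported summary statistics; the 2-tailed p-value line is an extra, added only to show how it would be looked up:

```python
import math
from scipy import stats

n, m, s, mu0 = 100, 20.0, 100.0, 0.0         # sample size, mean difference, sd, hypothesized mean

se = s / math.sqrt(n)                        # standard error of the mean = 10
t = (m - mu0) / se                           # = 2.0
df = n - 1

p_two_tailed = 2 * stats.t.sf(abs(t), df)    # ≈ 0.048
print(t, round(p_two_tailed, 3))
```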