Semester 2 Flashcards

1
Q

What does the difference between the means of 2 groups depend on?

A
  • the means, standard deviations, and variances of the groups, whether taken from the population or the sample

2
Q

What is Cohen’s D?

A

A measure of distance between 2 condition means which takes variability into account

3
Q

How do you calculate Cohen’s D?

A

(m1 - m2) / meanSD
meanSD = (s1 + s2) / 2

the same can be done using the population means and s.d.s
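The formula above can be sketched in Python. The values are hypothetical, and note that the card uses the simple average of the two s.d.s; many texts instead use a pooled s.d.

```python
def cohens_d(m1, m2, s1, s2):
    """Cohen's d using the simple average of the two SDs, as on the card.
    (Many texts instead pool the SDs, weighting by sample size.)"""
    mean_sd = (s1 + s2) / 2
    return (m1 - m2) / mean_sd

# Hypothetical condition means and SDs:
d = cohens_d(m1=10.0, m2=8.0, s1=2.0, s2=3.0)
print(d)  # 0.8 -> a large effect by the conventional benchmarks
```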

4
Q

for Cohen’s D:

as overlap decreases, does effect size increase or decrease?

A

increases

5
Q

Give an example of a small, medium and large effect size

A

0.2, 0.5, 0.8

6
Q

What are the two types of 2 sample t-tests? When are they used?

A
  • related (paired, repeated measures) t-test - used when ppts take part in both conditions (within-ppt design)
  • independent t-test - used when ppts perform only 1 of the 2 conditions (between-ppt design)
7
Q

How to calculate a related t-test for a 1 tail hypothesis?

A
  • calculate the mean change between the 2 conditions (post - pre)
  • calculate the s.d. of the change scores
  • assuming the null is correct means the pop. mean change = 0
  • calculate the e.s.e. (estimated standard error)
  • calculate the t statistic and use it to find p
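The steps above can be sketched as follows (the pre/post scores are made up for illustration; in practice the p-value comes from a t-table or software):

```python
import math

def paired_t(pre, post):
    """Related (paired) t-test following the card's steps."""
    diffs = [b - a for a, b in zip(pre, post)]          # change scores (post - pre)
    n = len(diffs)
    mean_d = sum(diffs) / n                             # mean change
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))  # s.d. of changes
    ese = sd_d / math.sqrt(n)                           # estimated standard error
    t = (mean_d - 0) / ese                              # null: pop. mean change = 0
    return t, n - 1                                     # t statistic, degrees of freedom

t, df = paired_t(pre=[5, 6, 4, 7], post=[7, 8, 6, 8])
print(t, df)  # t = 7.0 with 3 degrees of freedom
```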
8
Q

How to calculate a related t-test for 2 tailed hypothesis?

A

same method as for the 1-tailed calculation, but making sure to find the p-value for a two-tailed test rather than a one-tailed one

9
Q

What is the mean for a sampling distribution of difference?

A

pop. mean A - pop. mean B (= pop. mean D)

= 0 if assuming null is true

10
Q

What is the s.d. for a sampling distribution of difference?

A

SQRT(pop. s.d. A^2/nA + pop. s.d. B^2/nB)

11
Q

How do you calculate a z-score for an independent t-test?

Is this used often? Why?

A

z = ((mA - mB) - (pop. mean A - pop. mean B)) / SQRT(pop. s.d. A^2/nA + pop. s.d. B^2/nB) ~ N(0, 1)

not often used, as we usually don’t have access to the pop. s.d.s

12
Q

How do you calculate a t-statistic for an independent t-test?

A

t = ((mA - mB) - (pop. mean A - pop. mean B)) / SQRT(sA^2/nA + sB^2/nB)

v = nA + nB - 2
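A minimal sketch of this formula, assuming the null (so the pop. mean difference term is 0); the data are hypothetical:

```python
import math

def independent_t(a, b):
    """Independent t-test assuming the null (pop. mean difference = 0),
    using the unpooled standard error from the card."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)       # sample variance, group A
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)       # sample variance, group B
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    v = na + nb - 2                                     # degrees of freedom
    return t, v

t, v = independent_t(a=[4, 5, 6, 5], b=[2, 3, 2, 3])
print(t, v)  # t = 5.0 with 6 degrees of freedom
```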

13
Q

What is another way of writing SQRT(sA^2/nA + sB^2/nB)?

A

SQRT(e.s.e. A^2 + e.s.e. B^2)

14
Q

What is covariance?

A

The extent to which a change in one variable is associated with predictable change in another variable

15
Q

What would high and low covariance suggest?

A

high covariance = if scores for one variable change, then the scores for the other variable also change in a predictable manner

low covariance = changes in 1 variable aren’t accompanied by a predictable change in the other variable

16
Q

What does Pearson’s r determine?

A

If there is a linear relationship between variables

17
Q

How to calculate total covariance?

A

TC(x, y) = SUM( (xi - mx) x (yi - my) )

xi - mx = difference between x co-ord and the x mean
yi - my = difference between y co-ord and the y mean
multiply = multiply the differences for each co-ord pair
sum = add up the products across all co-ord pairs

18
Q

How to calculate sample co-variance?

A

C(x, y) = TC(x, y) / (n-1)

= (SUM((xi - mx) x (yi - my))) / (n - 1)
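The total and sample covariance formulas can be sketched together; the data are made up:

```python
def sample_covariance(x, y):
    """C(x, y) = TC(x, y) / (n - 1), as on the cards."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    tc = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # total covariance
    return tc / (n - 1)

print(sample_covariance([1, 2, 3], [2, 4, 6]))  # 2.0 (perfectly linear, positive)
print(sample_covariance([1, 2, 3], [1, 2, 3]))  # 1.0, i.e. Var(x) = C(x, x)
```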

19
Q

What does sample covariance describe?

A

How much 2 variables co-vary (amount of variance they share)

20
Q

What is positive, negative and zero covariance?

A
positive = higher than average values of 1 variable tend to be paired with higher than average values of the other variable
negative = higher than average values of one variable tend to be paired with lower than average values of the other variable
zero = 2 random variables are independent (note, not always independent, could instead have a non-linear relationship)
21
Q

How can covariance and variance be related?

A

Var (x) = C (x, x)

22
Q

How to calculate Pearson’s r?

A

r(x, y) = C(x, y) / (sx x sy)

sx x sy can also be written as:
SQRT(Var(x)) x SQRT(Var(y))
SQRT(C(x, x)) x SQRT(C(y, y))
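A minimal sketch of the formula (hypothetical data; it builds the covariance and s.d.s from scratch):

```python
import math

def pearson_r(x, y):
    """r(x, y) = C(x, y) / (sx * sy)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # ~1.0: perfect positive
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # ~-1.0: perfect negative
```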
23
Q

What are the strength descriptors for Pearson’s r?

A
Perfect = +- 1
Strong = +- 0.7, 0.8, 0.9
Moderate = +- 0.4, 0.5, 0.6
Weak = +- 0.1, 0.2, 0.3
Zero = 0
24
Q

What is the null, 1-tail and 2-tail hypothesis for a correlation?

A
null = no correlation 
1-tail = positive/ negative correlation
2-tail = a correlation (in either direction)
25
Q

What is NHST framework for a correlation?

A
  • formulate hypothesis
  • collect data from study
  • calculate Pearson’s r
  • compare with p value to determine whether to reject or fail to reject null
  • interpret in context
26
Q

How do you calculate a p-value for Pearson’s r?

A

Use a critical values table
need the number of tails and the sample size
compare your r value to the value in the table to see if it is significant or not (like in a t-test)

27
Q

What do you need to remember when interpreting a hypothesis in context for a correlation?

A

Need to describe strength of correlation using the strength descriptors
e.g., r = 0.3 may be significant and you can reject the null, but it is still only a weak positive correlation

28
Q

How do you calculate shared (explained) variance?

A

(Pearson’s r)^2

= r^2

29
Q

How do you calculate unshared (unexplained) variance?

A

1 - (Pearson’s r) ^2

= 1 -r^2

30
Q

What are degrees of freedom?

A

related to sample size –> tells you which distribution you need to use
relates to how much data you have and therefore how good your sample statistics are likely to be

31
Q

What are parametric tests?

A

Make certain assumptions about pops. from which data are sampled

32
Q

What are 3 common assumptions that parametric tests make?

A

pops. from which samples are drawn should be normally distributed
variances of pops. should be approx. equal
no extreme scores

33
Q

Why are parametric tests useful?

A

More powerful/sensitive than other approaches

34
Q

What are non-parametric tests?

A

Make fewer assumptions about pops. from which data are sampled

35
Q

Why are non-parametric tests useful?

A

The assumptions of parametric tests are sometimes violated

36
Q

How do you take tied scores into account when ranking data?

A

Find the average of the ranks the tied scores span; they then all get that same rank
e.g., 1, 4, 4, 4, 5, 7 would first be ranked as 1, 2, 3, 4, 5, 6
value 4 falls in rankings 2, 3, 4, so the average = 3
new rankings become: 1, 3, 3, 3, 5, 6
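This tie-handling rule can be sketched as a small helper, reproducing the card's example:

```python
def rank_with_ties(scores):
    """Rank scores; tied scores all receive the average of the ranks they span."""
    ordered = sorted(scores)
    avg_rank = {}
    for value in set(ordered):
        positions = [i + 1 for i, v in enumerate(ordered) if v == value]
        avg_rank[value] = sum(positions) / len(positions)  # mean of provisional ranks
    return [avg_rank[s] for s in scores]

print(rank_with_ties([1, 4, 4, 4, 5, 7]))  # [1.0, 3.0, 3.0, 3.0, 5.0, 6.0]
```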

37
Q

What are Mann-Whitney U tests the NP alternative to?

A

Independent t-test

38
Q

How do you calculate a Mann-Whitney U test?

A

rank the data irrespective of which condition it falls in
calc the sum of the ranks in each condition (taking ties into account)
consider what the smallest possible sum of ranks could’ve been for each condition
work out the difference between the smallest possible sum of ranks and the actual sum for each condition
Mann-Whitney U stat = the smallest difference out of the 2 conditions (U = x)
p-value calc by SPSS = Exact Sig. (2-tailed) - compare to 0.05
if you have a 1-tailed hypothesis, divide the p-value by 2 then compare
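The steps above (minus the SPSS p-value lookup) can be sketched as follows; the data are hypothetical:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U following the card's steps (tied scores get averaged ranks)."""
    combined = sorted(a + b)
    def avg_rank(v):
        positions = [i + 1 for i, x in enumerate(combined) if x == v]
        return sum(positions) / len(positions)
    ra = sum(avg_rank(v) for v in a)         # rank sum, condition A
    rb = sum(avg_rank(v) for v in b)         # rank sum, condition B
    na, nb = len(a), len(b)
    ua = ra - na * (na + 1) / 2              # actual minus smallest possible rank sum
    ub = rb - nb * (nb + 1) / 2
    return min(ua, ub)                       # U = smaller of the two differences

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0: the groups don't overlap at all
```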

39
Q

What is a Wilcoxon signed ranks test the NP alternative to?

A

Paired t-test

40
Q

How do you conduct a Wilcoxon signed ranks test?

A

calc the difference between the 2 conditions (post - pre)
rank the non-zero difference scores (ignoring signs but taking ties into account)
split the ranks into negative and positive difference ranks (2 columns)
the T statistic is the sum of the ranks for the least frequently occurring difference sign
use the SPSS output for the p-value –> Exact Sig. (1-t or 2-t) –> compare to 0.05
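A sketch of the steps above (hypothetical data; the card phrases T as the rank sum for the rarer sign, and the common convention used here is the smaller of the two rank sums, which agrees when one sign is clearly rarer):

```python
def wilcoxon_t(pre, post):
    """Wilcoxon signed-ranks statistic: rank the non-zero |differences|
    (ties averaged), then take the smaller of the two signed rank sums."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]   # drop zero differences
    abs_sorted = sorted(abs(d) for d in diffs)
    def avg_rank(v):
        positions = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(positions) / len(positions)
    pos = sum(avg_rank(abs(d)) for d in diffs if d > 0)    # positive-difference ranks
    neg = sum(avg_rank(abs(d)) for d in diffs if d < 0)    # negative-difference ranks
    return min(pos, neg)

print(wilcoxon_t(pre=[5, 5, 5, 5], post=[7, 8, 6, 4]))  # 1.5
```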

41
Q

What is Spearman’s rho the NP alternative for?

A

Pearson’s R

42
Q

How do you calculate Spearman’s rho?

A
  • convert scores to ranks (rank x and y values separately)
  • Calc difference in ranks (Rx - Ry)
  • Square the differences
  • Spearman’s rho = 1 - ((6 x sum of squared differences) / (n(n^2 - 1)))
  • use SPSS to find p-value
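The formula can be sketched as follows (hypothetical data; this version assumes no tied ranks, as noted below):

```python
def spearmans_rho(x, y):
    """rho = 1 - (6 * sum(d^2)) / (n * (n^2 - 1)); only valid with no tied ranks."""
    n = len(x)
    def ranks(values):
        ordered = sorted(values)
        return [ordered.index(v) + 1 for v in values]   # rank 1 = smallest
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

print(spearmans_rho([1, 2, 3, 4], [10, 30, 20, 40]))  # 0.8: monotonic-ish, not perfect
```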
43
Q

What values does Spearman’s rho fall between?

A

-1 and 1

44
Q

When can we use this specific Spearman’s rho equation?

A

When there are no tied ranks

45
Q

Why is a 1-variable Chi-squared test used?

A

to assess whether observed frequencies in categories are different from what might be expected

46
Q

What is the DoF from a 1-variable chi-squared test?

A

n - 1 (n = number of categories)

47
Q

What must the value of the 1-variable chi-squared test always be?

A

> 0

48
Q

How do you calculate a 1-variable chi-squared test?

A
  • calc the difference between observed and expected (if the null were true) values
  • square the differences
  • divide the squared differences by the expected value
  • chi-squared stat = sum of the values obtained from the step above: SUM((O - E)^2 / E)
  • use the DoF and sig. level in a table to compare to the p-value to determine if significant or not (similar to a t-test)
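The steps above (minus the table lookup) can be sketched as follows; the counts are hypothetical:

```python
def chi_squared_1var(observed, expected):
    """Chi-squared = SUM((O - E)^2 / E); DoF = number of categories - 1."""
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, len(observed) - 1

# e.g. 60 ppts choosing between 3 options; the null expects 20 per category:
stat, dof = chi_squared_1var(observed=[10, 20, 30], expected=[20, 20, 20])
print(stat, dof)  # 10.0 with 2 degrees of freedom
```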
49
Q

What is a 2 x 2 chi squared test for?

A

assess whether there is a relationship between 2 categorical variables

50
Q

How do you calculate a 2 x 2 chi squared test?

A
  • (row total x column total) / grand total –> gives the expected value for each cell
  • (O - E)^2 / E for each cell
  • sum these values to give the chi-squared stat
  • use a table to compare to the p-value to determine if significant or not (like a t-test)
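A minimal sketch of the steps above (hypothetical counts; the table lookup for p is left to a table or software):

```python
def chi_squared_2x2(table):
    """table = [[a, b], [c, d]] of observed counts.
    Expected value for each cell = row total x column total / grand total."""
    rows = [sum(r) for r in table]
    cols = [table[0][j] + table[1][j] for j in range(2)]
    total = sum(rows)
    stat = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
               / (rows[i] * cols[j] / total)
               for i in range(2) for j in range(2))
    dof = (2 - 1) * (2 - 1)                  # (rows - 1) x (columns - 1) = 1
    return stat, dof

stat, dof = chi_squared_2x2([[20, 10], [10, 20]])
print(stat, dof)  # every expected value is 15, so stat = 4 * 25/15 ~ 6.67, dof = 1
```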
51
Q

what is the DoF for a 2 x 2 chi squared test?

A

(rows - 1) x (columns - 1)