Exam Flashcards

1
Q

What are measures of variability?

A
  • Statistical procedures to show how spread out the data is around a point
  • The difference between the mean and the data point tells us how far away each data point is from the mean
  • Range, SD & variance
2
Q

What is standard deviation?

A

Represents how far, on average, each data point is from the mean

3
Q

What is variance?

A

A number indicating how spread out the data is

4
Q

What is the population?

A

The complete collection to be studied

5
Q

What is a Sample?

A

A section of the population

6
Q

Why do we need measures of variability?

A

You want the sample to reflect the overall population, so you use measures of variability to see how comparable your sample data is to the estimated population

7
Q

How do you prepare the data before calculating variance/SD

A
  1. Take each value and square it
  2. Add all of the original values together (Σx)
  3. Add the squared values together (Σx²)
8
Q

How do you calculate variance?

A
  1. Multiply the sample size by the sum of the squared values (n × Σx²)
  2. Square the sum of the values ((Σx)²)
  3. Subtract the second from the first
  4. Divide this by sample size times (sample size - 1)

n(Σx²) - (Σx)²
_______________
n(n - 1)
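
A minimal Python sketch of this computational formula, assuming NumPy is available; the data values are invented purely for illustration:

import numpy as np

x = np.array([4, 8, 6, 5, 3])   # invented sample values
n = len(x)

# n(sum of x squared) - (sum of x) squared, divided by n(n - 1)
variance = (n * np.sum(x**2) - np.sum(x)**2) / (n * (n - 1))

# Cross-check against NumPy's built-in sample variance (ddof=1)
print(variance, np.var(x, ddof=1))   # both give 3.7 for this data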

9
Q

How do you calculate the standard deviation?

A

Find the square root of the variance.

10
Q

What is the standard error (of the means)?

A

Tells you how well your sample mean estimates the population mean

The standard deviation of the sampling distribution

The average difference we would expect between our sample mean and the target population mean

Measure of uncertainty in the mean.

11
Q

What is the sampling distribution?

A

This is used to estimate how much deviation we will find between our sample and the target population.

A distribution of something like the sample mean - not the raw data values

12
Q

How do you calculate the standard error of the mean?

A

Standard deviation
______________
Square root of n
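
A short Python sketch of this formula (NumPy assumed, data invented):

import numpy as np

x = np.array([4, 8, 6, 5, 3])    # invented sample
sd = np.std(x, ddof=1)           # sample standard deviation
se = sd / np.sqrt(len(x))        # standard error of the mean
print(se)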

13
Q

What does the SD (high/low) for the SE tell us?

A

High SD = more variable = the mean is not as good an estimate of the population mean

Low SD = less variable = the mean is a much more reliable estimate of the population mean

14
Q

What does N (high/low) for the SE tell us?

A

Higher N = lower SE

Lower N = higher SE

15
Q

What does the 95% confidence interval tell us?

A

That we can be 95% confident that the population mean will fall between the upper and lower boundaries
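
A hedged sketch of one common way to compute those boundaries, assuming the usual normal approximation (mean ± 1.96 × SE); a small-sample analysis might use a t value instead of 1.96. The data are invented:

import numpy as np

x = np.array([4, 8, 6, 5, 3])                        # invented sample
mean = np.mean(x)
se = np.std(x, ddof=1) / np.sqrt(len(x))             # standard error of the mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se    # 95% CI boundaries
print(lower, upper)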

16
Q

What do error bar graphs represent?

A

The 95% confidence interval

17
Q

What does overlap of the error bars mean?

A

Less overlap indicates more effect.

18
Q

Statistical significance is…

A

A difference that is most probably not due to chance.

This is affected by the sample size: with a large enough sample, even a tiny difference can reach statistical significance.

19
Q

Effect size is…

A

The magnitude of the relationship or difference found.

Looks at whether the difference is enough to be of practical significance.

This is not affected by the sample size.

20
Q

How do you calculate the effect size (Cohen’s d)?

A

Mean of group 1 - Mean of group 2
___________________________
SD of the population
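
A minimal sketch of this calculation in Python; as an assumption (not stated on the card), the denominator here is the pooled SD of the two groups, a common stand-in when the population SD is unknown. The group scores are invented:

import numpy as np

g1 = np.array([5, 7, 6, 8, 9])    # invented scores, group 1
g2 = np.array([3, 4, 5, 4, 6])    # invented scores, group 2

# Simple pooled SD (assumes equal group sizes)
pooled_sd = np.sqrt((np.var(g1, ddof=1) + np.var(g2, ddof=1)) / 2)
d = (np.mean(g1) - np.mean(g2)) / pooled_sd
print(d)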

21
Q

What does the Effect value (high/low) tell us?

Cohen’s convention for interpretation

A

Large Effect = .8

Medium Effect = .5

Small Effect = .2

22
Q

What does Cohen’s d actually mean?

A

The difference between two group means expressed in units of SD.

(Can be thought of as the % of non-overlap between a condition group and a control group on a bell curve - with no effect, the two distributions would sit bang on top of each other.)

Larger effect = less overlap

Smaller effect = more overlap

23
Q

When would you use Cohen’s d?

A
  • When comparing 2 means

- T-test

24
Q

When would you use a partial eta squared?

A

When statistical tests are based on variance rather than SD

ANOVA

25
Q

What is partial eta squared?

A

An indication of the proportion of variance in the dependent variable that is explained by the independent variable.

26
Q

How do you interpret a partial eta squared value?

A

Ranges from 0 - 1

Small Effect = .01

Medium Effect = .06

Large Effect = .138

  • The % of variance in the DV that is explained by the IV
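
A small sketch of how partial eta squared is typically computed from ANOVA sums of squares; the SS values below are placeholders, not real output:

# partial eta squared = SS_effect / (SS_effect + SS_error)
ss_effect = 24.0    # placeholder sum of squares for the IV
ss_error = 150.0    # placeholder error sum of squares

partial_eta_sq = ss_effect / (ss_effect + ss_error)
print(partial_eta_sq)   # ~0.138, a large effect by the convention above
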
27
Q

What is the correlation coefficient?

A

Pearson’s r

The effect size for hypotheses that look at relationships.

28
Q

How do you interpret Pearson’s r?

correlation coefficient

A

No relationship = 0

Small = .10 - .29

Medium = .30 - .49

Large = .50 - 1

Positive or negative - still interpreted the same.

r squared = the % of variance in one variable that is explained by the other.
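
A quick sketch of getting r, its p value and r squared in Python with SciPy; the paired values are invented:

import numpy as np
from scipy.stats import pearsonr

x = np.array([1, 2, 3, 4, 5])    # invented variable 1
y = np.array([2, 1, 4, 5, 7])    # invented variable 2

r, p = pearsonr(x, y)
print(r, p, r**2)   # r squared = proportion of shared variance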

29
Q

What is the relationship between effect and error?

A

If you have more effect than error then you’re more likely to see the results in the target population

Effect
_____
Error

30
Q

How do you Interpret the t value?

A

t > 1 - There is more effect than error, so the result is more likely to be significant

t < 1 - There is more error than effect, so the result is NOT significant
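
A sketch of an independent t-test in Python with SciPy, just to show where a t and p value come from; the group scores are invented:

import numpy as np
from scipy.stats import ttest_ind

g1 = np.array([5, 7, 6, 8, 9])    # invented scores, condition 1
g2 = np.array([3, 4, 5, 4, 6])    # invented scores, condition 2

t, p = ttest_ind(g1, g2)          # assumes equal variances by default
print(t, p)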

31
Q

What is the p value?

A

The probability of obtaining results like ours if the null hypothesis were true (i.e. due to chance alone).

A proportion, reported to 3 decimal places.

32
Q

What is a Type 1 error?

A

When it looks like there is an effect but THERE IS NOT AN EFFECT

We have selected a sample that was better overall, before our manipulation.

33
Q

What is a Type 2 error?

A

Sampling from the correct group and thinking that there isn’t an effect but THERE IS AN EFFECT

34
Q

How do you interpret the p value?

A

p < 0.05 = significant

p bigger than 0.05 = accept (fail to reject) the null hypothesis - no effect

p smaller than 0.05 = reject the null hypothesis and accept the directional hypothesis - effect.

35
Q

What is a sampling error?

A

When you use the wrong population

36
Q

What is the standard deviation error?

A

A measure of how much error we would expect.

37
Q

What do you aim for with the sampling error and the standard deviation error?

A

The difference between the two to be low.

38
Q

When would you use the Kruskal-Wallis H test?

A

Violated assumptions of a 1 way ANOVA

NON parametric

Between subjects

39
Q

What is the Kruskal-Wallis H test?

A

The non parametric ANOVA for a between subjects design

40
Q

What is the post hoc test after a significant Kruskal-Wallis output?

A

Mann-Whitney U

41
Q

What output do you report for Mann-Whitney U?

A

U and p

42
Q

What is the non parametric ANOVA for a within participants design?

A

Friedman

43
Q

What is the post hoc test after a significant Friedman output?

A

Wilcoxon

44
Q

What output do you report for Wilcoxon?

A

Z and p

45
Q

When would you use a Friedman test?

A

Violated assumptions of a 1 way ANOVA

NON parametric

Within subjects design

46
Q

How do you run a Kruskal-Wallis test?

A

Analyse - non parametric tests - legacy dialogs - K independent samples - move variables - define range (number of groups) - options - descriptives (and quartiles if you want medians) - continue - ok.
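
As an alternative to the SPSS menus, a sketch of the same analysis in Python with SciPy (groups invented), including a Mann-Whitney U follow-up as described in the earlier cards:

import numpy as np
from scipy.stats import kruskal, mannwhitneyu

g1 = np.array([3, 5, 4, 6])   # invented scores, group 1
g2 = np.array([7, 8, 6, 9])   # invented scores, group 2
g3 = np.array([2, 3, 4, 3])   # invented scores, group 3

h, p = kruskal(g1, g2, g3)    # Kruskal-Wallis H test
print(h, p)

# Post hoc comparison for one of the pairs (default alternative varies by SciPy version)
u, p_u = mannwhitneyu(g1, g2)
print(u, p_u)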

47
Q

How do you run a Friedman test?

A

Analyse - non parametric tests - legacy dialogues - K RELATED samples - move groups across - tick Friedman - click statistics - select quartiles - ok - continue
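
And the equivalent sketch for the within-subjects case in Python with SciPy (condition scores invented), with a Wilcoxon follow-up:

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

c1 = np.array([3, 5, 4, 6, 5])   # invented scores, condition 1
c2 = np.array([7, 8, 6, 9, 7])   # invented scores, condition 2
c3 = np.array([2, 3, 4, 3, 4])   # invented scores, condition 3

chi2, p = friedmanchisquare(c1, c2, c3)   # Friedman test
print(chi2, p)

# Post hoc comparison for one pair (SciPy reports W rather than Z)
w, p_w = wilcoxon(c1, c2)
print(w, p_w)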

48
Q

When do you use a one way ANOVA? // What are the 4 assumptions of an ANOVA?

A

1 factor with more than 2 levels

IV = Factor
Conditions of the IV = Levels

Between participants design

Normally distributed data

Equal variance (sphericity)

49
Q

Why do we use the ANOVA?

A

Otherwise, we would need to do 3 (or more) separate pairwise t-tests, each with an alpha of .05, which gives an overall chance of a Type 1 error of roughly 15% (or more), which is far too high.

50
Q

What does the ANOVA find?

A

Whether the effect is bigger than the error.

51
Q

How does the ANOVA find out whether the effect is bigger than the error?

A

F = Between groups variance
____________________
Within groups variance
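
A sketch of a one way between subjects ANOVA in Python with SciPy, showing the F value that comes out of this ratio; the group scores are invented:

import numpy as np
from scipy.stats import f_oneway

g1 = np.array([5, 7, 6, 8])   # invented scores, group 1
g2 = np.array([3, 4, 5, 4])   # invented scores, group 2
g3 = np.array([8, 9, 7, 9])   # invented scores, group 3

f, p = f_oneway(g1, g2, g3)   # F = between groups variance / within groups variance
print(f, p)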

52
Q

What is the treatment effect?

A

The difference in something before and after manipulation.

53
Q

What is the grand mean and how do we calculate it?

A
  • The figure that the individual group mean scores are compared to, which gives us the between groups variance (the measure of effect)
  • Add up the group means, then divide by the number of groups
54
Q

How do you interpret the F value?

A

An F value greater than 1 means that there is more effect than error.

55
Q

How do you report the ANOVA results?

A

A one way ANOVA showed that there is an effect between the (conditions):
F(df between, df within) = F value, p = p value.

56
Q

What is the post hoc test for an ANOVA?

A

Independent t test.

57
Q

What is a Bonferroni correction?

A

Correcting the significance threshold to reflect the number of conditions being compared.

3 conditions: 0.05/3 = new Bonferroni corrected threshold of p = 0.017
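
A one-line sketch of the correction, using the 3-condition example above:

alpha = 0.05
n_comparisons = 3
corrected_alpha = alpha / n_comparisons   # 0.05 / 3
print(round(corrected_alpha, 3))          # 0.017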

58
Q

What is a test of homogeneity?

A

A method of finding whether there is equal variance between the groups.

59
Q

What test of homogeneity would you use for an independent t test?

A

Levene’s test
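
A sketch of Levene's test in Python with SciPy (invented groups); a significant p here would mean the equal-variance assumption is violated, as the next cards describe:

import numpy as np
from scipy.stats import levene

g1 = np.array([5, 7, 6, 8])     # invented scores, group 1
g2 = np.array([3, 9, 2, 10])    # invented scores, group 2

stat, p = levene(g1, g2)        # tests homogeneity of variance
print(stat, p)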

60
Q

What does a significant Levene’s test result mean?

A

The variances of the two groups are significantly different, so you’re violating one of the assumptions of the ANOVA.

This means you have to revert back to a non parametric ANOVA.

61
Q

What does a non-significant Levene’s test result mean?

A

There is equal variance between the groups and you have met the assumption of the ANOVA

Continue with post hoc tests.

62
Q

When would you use a related ANOVA?

A

Within participants design.

63
Q

What changes with a related ANOVA?

A

The between groups variance becomes the between conditions variance.

We don’t have to account for individual differences, so we don’t compare the grand mean with the condition means.

The F value uses the between conditions variance and the error variance.

64
Q

What is the Greenhouse-Geisser correction?

A

A correction for a related ANOVA, applied when the variances of the difference scores between conditions are not equal (sphericity is violated).

65
Q

What does it mean when the mean variances of the difference scores are significantly different for a related ANOVA?

A

We have violated the assumption of sphericity and therefore have to read the Greenhouse-Geisser correction

66
Q

When do we report the greenhouse geisser correction?

A

We always report this in front of the F value for related ANOVAs