Stats Flashcards

1
Q

Frequency

A

Number of times each score occurs.

2
Q

Normal distribution

A

Most scores cluster around the mean, with few outliers.

Recognisable by: bell-shaped curve

3
Q

Positively skewed distribution

A

Frequent scores are clustered at the lower end.

Recognisable by: a slide down to the right.

4
Q

Negatively skewed distribution

A

Frequent scores are clustered at the higher end.

Recognisable by: a slide down to the left.

5
Q

Platykurtic distribution

A

A flatter distribution, with scores spread more widely around the mean.

Recognisable by: thick “platypus” tail that’s low and flat in the graph.

6
Q

Leptokurtic distribution

A

Scores are tightly centralised, clustering close to the mean.

Recognisable by: skyscraper appearance - tall and pointy.

7
Q

Mode

A

Most common score.

If two scores are equally common, the distribution is bimodal and there is no single mode.

Can use mode for nominal data.

8
Q

Disadvantages of mode

A

1) Could be bimodal (or multimodal) and give no true mode (e.g. 3/10 and 7/10 are opposites but bimodal).
2) The mode can change dramatically when a single case is added.

9
Q

Median

A

Central point of scores arranged in ascending order. With an odd number of cases it is the middle score; with an even number, the mean of the two central scores.

+ Relatively unaffected by outliers
+ Less affected by skewed distributions
+ Works for ordinal, interval, and ratio data

− Can’t be used on nominal data
− Susceptible to sampling fluctuation
− Not mathematically useful

10
Q

Mean

A

Add all scores and divide by the total number of scores collected.

+ Good for scores grouped around the centre
+ Interval and ratio data
+ Uses every score
+ Can be used algebraically
+ Resistant to sampling variation - accurate

− Affected by extreme outliers
− Affected by skewed distributions
− Not used for nominal and ordinal data

11
Q

Range

A

Subtract smallest score from largest.

+ Good for clustered scores
+ Useful measure of variability for nominal & ordinal data
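
A minimal Python sketch (standard library only) pulling the last few cards together: mode, median, mean, and range. The scores list is made up for illustration:

import statistics

scores = [2, 3, 3, 5, 6, 7, 9]

mode = statistics.mode(scores)      # most common score (statistics.multimode lists ties)
median = statistics.median(scores)  # middle score; mean of the 2 central scores if N is even
mean = statistics.mean(scores)      # add all scores, divide by the number of scores
spread = max(scores) - min(scores)  # range: subtract smallest from largest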

12
Q

Symbols

A
x = score
x̅ = mean
x-x̅ = deviation (d)
∑ = sum
N = number in a sample 
s² = variance of a sample
s = standard deviation
13
Q

Accuracy of the mean

A

Hypothetical value - doesn’t have to translate to a real value (e.g. 2.5 children).

Its fit to the data is assessed using:

1) Standard deviation
2) Sum of squares
3) Variance

14
Q

Total error

A

Worked out by adding all the deviations. The deviations always sum to (approximately) zero, which is why they are squared to give the sum of squares.

15
Q

Deviation

A

Observed value - mean

Negative deviation = the mean overestimates this participant’s score

Positive deviation = the mean underestimates it

16
Q

Sum of squared errors (SS)

A

Square all deviations so they become positive.

Add them together to make a sum of squares.

The higher the sum of squares, the more variance in the data.

More variance = less reliable
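
A short sketch of the deviation → total error → sum of squares chain from the preceding cards, with made-up scores:

scores = [2, 4, 6, 8]
mean = sum(scores) / len(scores)        # 5.0

deviations = [x - mean for x in scores] # [-3.0, -1.0, 1.0, 3.0]
total_error = sum(deviations)           # 0.0 - the deviations cancel out
ss = sum(d ** 2 for d in deviations)    # 20.0 - squaring removes the signs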

17
Q

Standard deviation (σ)

A

A measure of spread - is a score statistically significant or expected variance?

Anything within one standard deviation of the mean would be expected variance. Anything beyond two standard deviations is statistically significant.

Calculated as SS / (N − 1), which gives the variance; square root the result.
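
Continuing the sketch above, variance and standard deviation follow directly from SS (using N − 1 for a sample):

n = len(scores)          # 4
variance = ss / (n - 1)  # 20.0 / 3 ≈ 6.67 (s²)
sd = variance ** 0.5     # ≈ 2.58 (s)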

18
Q

Sampling distribution

A

The frequency distribution of sample means from the same population.

19
Q

Standard error of the mean (SE)

A

The accuracy with which a sample mean reflects the population. It is the standard deviation of the sampling distribution of the mean.

Large value = sample differs from the population
Small value = sample reflective of the population
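
The usual estimate is the sample standard deviation divided by the square root of the sample size; a one-line sketch continuing the example above:

se = sd / n ** 0.5       # ≈ 2.58 / 2 ≈ 1.29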

20
Q

Confidence interval

A

If we can assess the accuracy of sample means, we can calculate the boundaries within which most sample means will fall. This is the confidence interval.

If the mean represents the data well, the confidence interval around it should be small.
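
A minimal sketch of an approximate 95% confidence interval, using the 1.96 critical value from the normal distribution (a small sample would use a t value instead):

ci_lower = mean - 1.96 * se   # ≈ 5.0 - 2.53 ≈ 2.47
ci_upper = mean + 1.96 * se   # ≈ 5.0 + 2.53 ≈ 7.53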

21
Q

Descriptive statistics

A

Shows what is happening in a given sample.

22
Q

Inferential statistics

A

Allow us to make inferences about the wider population based on the sample data we have analysed.

23
Q

At what probability value can we accept a hypothesis and reject a null hypothesis?

A

0.05 or less.

24
Q

Type 1 error

A

When we believe our experimental manipulation has been successful when the result is actually due to random error. E.g. if we accept 5% as the significance level and repeat the experiment 100 times, about 5 of those runs would still reach statistical significance purely through random error.

25
Q

Type 2 error

A

Concluding that the difference found was due to random error when it was actually due to the independent variable.

26
Q

Effect size

A

An objective and standardised measure of the magnitude of the observed effect. As it is standardised, we can compare effect sizes across different studies.

27
Q

Pearson’s correlation coefficient (r)

A

Measures the strength of a correlation between 2 variables. Also a versatile measure of the strength of an experimental effect. How big are the differences (the effect)?

0 = no effect 
1 = perfect effect
28
Q

Cohen’s guidelines to effect size

A

r = 0.10 (small effect): explains 1% of total variance (variance explained = r²).

r = 0.30 (medium effect): 9%

r = 0.50 (large effect): 25%

N.B. Not a linear scale (i.e. .6 is not double .3, because variance explained goes with r²).

29
Q

Why use effect size?

A

To show the magnitude of the effect we are observing - how substantial is the [p < .05] significance?

It is not affected by sample size in the way that p is.

30
Q

Properties linked to effect size:

A

1) Sample size on which the sample effect size is based.
2) The probability level at which we will accept an effect as being statistically significant.
3) The power of the test to detect an effect of that size.

31
Q

Statistical power

A

The probability that a given test will find an effect, assuming one exists in the population. A power analysis can be run before a test to reduce Type II error.

Power should be 80% (0.8) or higher.
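
A sketch of a power analysis run before data collection, assuming the statsmodels package is available; the effect size of 0.5 (Cohen's d) is made up for illustration:

from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect d = 0.5 at alpha = .05 with 80% power
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))   # roughly 64 per group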

32
Q

What assumptions need to be met for a parametric test?

A

1) Data measured at interval or ratio level.
2) Homogeneity of variance (Levene’s test)
3) Sphericity assumption (Mauchly’s test)
4) Sample should form a normal distribution.

33
Q

Checks for normal distribution

A

1) Plot a histogram to see if data is symmetrical.
2) Kolmogorov-Smirnov test or Shapiro-Wilk test.
3) Mean and median should be less than half a standard deviation different.
4) Kurtosis and skew figures should be less than 2x their standard error figure.

34
Q

Kolmogorov-Smirnov or Shapiro-Wilk

A

Compares your set of scores with a normally distributed set (with the same mean and standard deviation).

We do not want our data to differ significantly from the normal set: p < .05 indicates a significant departure from normality.
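
A sketch of this check, assuming SciPy and NumPy are available; the data here is generated for illustration:

from scipy import stats
import numpy as np

data = np.random.normal(loc=50, scale=10, size=100)   # illustrative sample

stat, p = stats.shapiro(data)   # Shapiro-Wilk test
if p < 0.05:
    print("Significantly different from normal - assumption violated")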

35
Q

Homogeneity of variance

A

Individual scores in samples vary from the mean in a similar way.

Tested using Levene’s test.

36
Q

Levene’s test

A

Tests homogeneity of variance, i.e. whether the individual scores in each sample vary around the mean in a similar way.

An assumption for a parametric test.

If group sizes are unequal, we must run additional tests: the Brown-Forsythe F and Welch’s F adjustments.
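
A sketch of Levene's test with SciPy, on two made-up groups:

from scipy import stats

group1 = [4, 5, 6, 5, 7, 6]
group2 = [3, 8, 2, 9, 1, 10]

stat, p = stats.levene(group1, group2)
if p < 0.05:
    print("Variances differ - homogeneity of variance violated")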

37
Q

T-test

A

The difference between means as a function of the degree to which those means would differ by chance alone.

38
Q

Independent t-test

A

An experiment on two groups in a between subjects test to see if the difference in means is statistically significant.

39
Q

Degrees of freedom

A

The number of independent values or quantities that can be assigned to a statistical distribution.

N - 1

40
Q

Dependent t-test

A

An experiment on two groups in a within subjects test to see if the difference in means is statistically significant.

Pearson’s correlation follows to measure effect size.
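
A sketch of both t-tests with SciPy; the scores are made up. ttest_ind is the independent (between-subjects) test, ttest_rel the dependent (within-subjects) one:

from scipy import stats

group_a = [12, 14, 11, 15, 13]
group_b = [16, 18, 15, 19, 17]

t, p = stats.ttest_ind(group_a, group_b)   # independent t-test: two separate groups
t, p = stats.ttest_rel(group_a, group_b)   # dependent t-test: same participants twice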

41
Q

Effect of sample size

A

With smaller samples, the t-test can only detect a bigger effect.

With larger samples, even a small effect can be detected.

42
Q

ANOVA

A

Analyses 3 or more levels of the independent variable in a single test.

Explores data from between-groups studies.

Done instead of a chain of t-tests to reduce the chance of a Type I error.
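
A sketch of a one-way independent ANOVA with SciPy, across three made-up groups:

from scipy import stats

low = [3, 4, 5, 4]
medium = [6, 7, 6, 8]
high = [9, 10, 9, 11]

f, p = stats.f_oneway(low, medium, high)   # F-ratio and its significance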

43
Q

F-ratio

A

Test statistic produced by ANOVA. Tells us if the means of three or more samples are equal or not.

Compares the systematic variance in the data (SSm, the model) to the amount of unsystematic variance (SSr, the residuals).

44
Q

Orthogonal

A

A planned ANOVA contrast used when there is a control group.

45
Q

Non-orthogonal

A

A planned ANOVA contrast when there is no control group.

46
Q

Follow-up tests

A

Done after an ANOVA to see where the difference lies.

47
Q

Planned comparisons

A

Specific predictions, made before the data are collected, about which group means should differ.

48
Q

Post-hoc tests

A

Done after the data has been collected and inspected. More cautious and generally easier to do.

49
Q

One-way independent ANOVA

A

Used when you’re going to test 3 or more experimental groups and different participants are used in each group (between subjects design).

When one independent variable is manipulated.

50
Q

One-way repeated measures ANOVA

A

Used when you’re going to test 3 or more experimental groups and the same participants are used in each group (within subjects design).

When one independent variable is manipulated.

51
Q

Two-way ANOVA

A

Analysing 3 or more samples with 2 independent variables.

52
Q

Three-way ANOVA

A

When 3 or more samples are tested with 3 independent variables.

53
Q

MANOVA

A

An ANOVA (an analysis of three or more samples) with multiple dependent variables. It tests for two or more vectors of means.

54
Q

Choosing Tukey HSD as post-hoc

A

Equal sample sizes and confident in homogeneity of variances assumption being met.

Most commonly used - conservative but good power.

55
Q

Choosing Bonferroni as post-hoc

A

For guaranteed control over the Type I error rate.

Very conservative so not as popular as Tukey but still favoured amongst researchers.

56
Q

Choosing Gabriel’s as post-hoc

A

If sample sizes across groups are slightly different.

57
Q

Choosing Hochberg’s GT2 as post-hoc

A

If sample sizes are greatly different.

58
Q

Choosing Games-Howell as post-hoc

A

If any doubt whether the population variances are equal.

Recommended to be run with other tests anyway because of uncertainty of variances.

59
Q

Choosing REGWQ as post-hoc

A

Useful for 4+ groups being analysed.

60
Q

Interpreting ANOVA

A

Report the degrees of freedom of the groups, then the degrees of freedom of the model’s residuals (participants minus groups), followed by the F value, then the significance.

E.g. 6 groups and 120 participants: F(5, 114) = 267, p < .001

61
Q

Sphericity

A

We assume that the relationship between one pair of conditions is similar to the relationship between another pair of conditions.

Measured by Mauchly’s test.

The effect of violating sphericity is a loss of power (increases chance of Type II error).

62
Q

Variance

A

How far values are spread out from the average.

63
Q

Mauchly’s test

A

Used to measure sphericity.

If it is significant we conclude that some pairs of conditions are more related than others and the conditions of sphericity have not been met.

64
Q

If sphericity is violated

A

Corrections can be made using:

1) Greenhouse-Geisser
2) Huynh-Feldt
3) The lower-bound

65
Q

Two-way mixed ANOVA

A

Measuring one independent variable with between groups, and one independent variable within groups.

Two overall independent variables.

66
Q

Main effects

A

The effect of an independent variable on a dependent variable, ignoring the other independent variables.

67
Q

Significant main effect of group (generally between subjects)

A

There are significant differences between groups.

68
Q

Significant main effect of time (generally within subject)

A

There are significant differences between repeated measures samples.

69
Q

A significant interaction effect

A

There are significant differences between groups (independent measures) and over time (within subjects).

So the change in scores over time is different dependent on group membership.

70
Q

Cohen’s kappa

A

Agreement coefficient for qualitative (categorical) inter-rater data.

71
Q

Two-way repeated measures ANOVA

A

Two independent variables measured using the same participants. Repeated measures.

72
Q

ANCOVA

A

ANOVA extended to also investigate the effect of covariates. We assume that our covariates have some effect on the dependent variable.

We assume the covariate has a correlation with the dependent variable in all groups.

Reduces error variance.

73
Q

Covariates

A

Variables that we already know will influence the study (e.g. age, memory, etc.).

74
Q

ANCOVA assumptions

A

Same parametric assumptions as all the basic tests, plus the additional assumption of homogeneity of regression.

75
Q

MANCOVA

A

An ANCOVA (an analysis of three or more samples) with multiple dependent variables affected by covariates. It tests for two or more vectors of means.

76
Q

Non-parametric tests

A

Also known as assumption-free tests. We would use these tests if the typical assumptions aren’t met.

Transforms raw scores into ordinal data so assumptions are not needed. Analysis is performed on ranks.

Uses medians instead of means.

77
Q

Power of non-parametric tests

A

Reduced, leading to increased chance of Type II error.

78
Q

Mann-Whitney

A

Non-parametric.

Equivalent of independent measures t-test (between-subjects).
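
A sketch with SciPy, on made-up between-subjects scores:

from scipy import stats

men = [27, 25, 30, 26]
dogs = [24, 22, 28, 23]

u, p = stats.mannwhitneyu(men, dogs, alternative='two-sided')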

79
Q

A good way to show non-parametric tests

A

Using a box-whisker plot.

80
Q

Box-whisker plot

A

A shaded box represents the interquartile range, the middle 50% of the data, with whisker lines either side.

The horizontal bar is the median. Each whisker reaches to the highest and lowest value.

The shape lets us see the limits within which most or all of the data fall.

81
Q

Reporting Mann-Whitney

A

E.g.

Men (Mdn = 27) and dogs (Mdn = 24) did not significantly differ in the extent to which they displayed dog-like behaviours, U = 194.5, ns, Z=-.15.

U = test statistic
Z = how far the result is from the mean in standard deviations (0 is close)
82
Q

Wilcoxon signed rank test

A

Non-parametric equivalent of dependent t-test (within-subjects studies).
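
A sketch with SciPy, on made-up within-subjects scores (the same participants measured twice):

from scipy import stats

before = [10, 12, 9, 14, 11]
after = [13, 15, 11, 16, 12]

t_stat, p = stats.wilcoxon(before, after)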

83
Q

Reporting Wilcoxon

A

E.g.

Men (Mdn = 27) and dogs (Mdn = 24) did not significantly differ in the extent to which they displayed dog-like behaviours, T = 111.50, p > .05.

84
Q

Kruskal-Wallis test

A

Non-parametric equivalent of one-way independent ANOVA (between-subjects).
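
A sketch with SciPy, across three made-up independent groups:

from scipy import stats

h, p = stats.kruskal([3, 4, 5], [6, 7, 8], [9, 10, 11])   # H statistic and significance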

85
Q

Following up non-parametric tests

A

No follow-up tests are in very common use.

We can use Mann-Whitney follow-ups for each pair of IV groups.

Must consider using a Bonferroni correction to reduce Type I error.

86
Q

Bonferroni correction

A

Used when running multiple follow-up tests (e.g. Mann-Whitney). Divide the critical value (.05) by the number of tests carried out.

E.g. following up a 3-group ANOVA would mean 3 pairwise tests, so p < .0167.
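
A sketch of the arithmetic, assuming three pairwise follow-up tests:

alpha = 0.05
n_tests = 3                   # e.g. 3 pairwise comparisons after a 3-group ANOVA
corrected = alpha / n_tests   # 0.0167 - each test must now beat this threshold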

87
Q

Reporting Kruskal-Wallis

A

Children’s fear beliefs about clowns were significantly affected by the format of info given (H(3) = 17.06, p < .01).

H = test statistic
(3) = degrees of freedom
88
Q

Chi-squared (χ2)

A

Tests for an association (correlation) between nominal variables.

89
Q

Friedman’s ANOVA

A

Non-parametric one-way dependent ANOVA (repeated measures).

Follow-ups with Wilcoxon tests, with the same Bonferroni corrections.
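
A sketch with SciPy; each list is one condition, measured on the same participants:

from scipy import stats

chi2, p = stats.friedmanchisquare([5, 6, 7], [6, 8, 7], [8, 9, 10])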

90
Q

Reporting Friedman’s ANOVA

A

Children’s fear beliefs about clowns were significantly affected by the format of info given (χ2(2) = 7.59, p < .05).

χ2 = test statistic
(2) = degrees of freedom
91
Q

χ2 test

A

Used when nominal data is between-subjects.

92
Q

Assumptions of χ2 (chi-squared)

A

1) Must be between-subjects.

2) Frequencies must be large.

93
Q

Binomial sign test

A

Nominal data but within-subjects. DV has 2 possible values: yes or no.

94
Q

Choosing a test - 5 questions

A

1) What kind of data will I collect?
2) How many IVs will I use?
3) What kind of design will I use?
4) Independent measures or repeated?
5) Is my data parametric or non?

95
Q

Spearman’s rho

A

Non-parametric equivalent to Pearson’s r.