T test Flashcards

1
Q

What does the T test assess?

A

Differences in the means of 2 data sets

2
Q

Assumptions of the T test

A
  1. Samples drawn from a normally distributed population
  2. Randomly selected samples
  3. Homogeneity of variance
3
Q

Why T test instead of Z test?

A

-Z test inaccurate with small sample sizes
-T distributions are similar in shape to normal distributions but have thicker (heavier) tails

4
Q

What does Student’s t-test account for?

A

Bias in the estimate of the SEm (standard error of the mean)

5
Q

Types of T tests

A
  1. Single Sample
  2. Independent
  3. Dependent
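
As a rough illustration of the three types, a minimal sketch using SciPy's t-test functions; the data values below are made up purely for the example.

import numpy as np
from scipy import stats

pretest  = np.array([5.1, 4.8, 5.5, 5.0, 4.9])   # hypothetical scores
posttest = np.array([5.6, 5.9, 5.4, 6.0, 5.7])   # same subjects, measured again
group_b  = np.array([4.2, 4.6, 4.4, 4.1, 4.8])   # a different group of people

# 1. Single sample: sample mean vs. a known population mean (here 5.0)
t1, p1 = stats.ttest_1samp(pretest, popmean=5.0)

# 2. Independent: two different groups of people
t2, p2 = stats.ttest_ind(pretest, group_b)

# 3. Dependent (paired / repeated measures): same subjects tested twice
t3, p3 = stats.ttest_rel(pretest, posttest)
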
6
Q

What does a single-sample t-test compare?

A

A sample mean against a known population mean
(the actual mean difference between the sample and the population)

7
Q

What does an independent t-test compare?

A

samples independent from one another

Usually compares two different groups of people

8
Q

What does a dependent t test compare?

A

Repeated measures

correlated samples (same subject tested twice)

9
Q

What is the t-ratio?

A

-Signal-to-noise ratio
-Signal = difference between means (numerator)
-Noise = standard error of the mean difference (denominator)
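
As a LaTeX sketch (assuming the independent-samples case, with the mean difference over its standard error):

t = \frac{\bar{X}_1 - \bar{X}_2}{SE_{\bar{X}_1 - \bar{X}_2}}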

10
Q

Important points about the T test

A

T test does not identify the causative factor

11
Q

Purpose of a repeated measures t-test

A

Used when subjects are tested more than once
(e.g., pretest-posttest)

12
Q

What is the Size of Effect, why is it relevant?

A

The magnitude of difference

Just because an effect is statistically significant does not necessarily mean that the effect is meaningful.

13
Q

SSW

A

Sum of Squares within: how much variation there is within the sample

14
Q

SSB

A

Sum of Squares between: the difference between each group mean and the grand mean (mean of means)
(group mean - grand mean) squared for each group

how much variation there is between the samples
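
A LaTeX sketch of the usual computation (where \bar{X}_j is a group mean, \bar{X}_G the grand mean, n_j the group size, and k the number of groups):

SS_B = \sum_{j=1}^{k} n_j (\bar{X}_j - \bar{X}_G)^2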

15
Q

What is the F statistic

A

Test value for an ANOVA

16
Q

What is the result if F stat is higher than F critical

A

Reject Ho; accept Ha

17
Q

What is alpha level

A

Probability of rejecting the null hypothesis when it is true (type I error)

18
Q

What is the result if p is less than alpha?

A

Reject Ho

19
Q

What does an ANOVA test?

A

The likelihood that the samples came from the same population

compares the means of two or more groups

20
Q

What is the F-ratio?

A

The ratio of the between group and within group variance
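
In LaTeX, using the SSB/SSW terms from the earlier cards (k = number of groups, N = total number of observations):

F = \frac{MS_B}{MS_W} = \frac{SS_B/(k-1)}{SS_W/(N-k)}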

21
Q

Why do an ANOVA instead of multiple t-tests?

A

Multiple t-tests increase the risk of a type I error

22
Q

What is the F value if the null hypothesis is true?

A

F = 1

23
Q

What is Scheffé’s CI?

A

The most conservative post hoc test
All possible comparisons, including combination comparisons (more than just pairwise)

24
Q

Example of Scheffé’s analysis

A

Comparing the average of several different treatments against a control

25
Q

Symbol for Scheffé’s CI?

A

I

26
Q

What is “k”?

A

number of groups

27
Q

What is F-alpha?

A

F critical value from tables

28
Q

HSD

A

Tukey’s Honestly Significant Difference
-calculates the minimum raw score mean difference that must be attained to declare statistical significance between any two groups.
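
A LaTeX sketch of the usual formula, with q, MSe, and n as defined in the following cards (assuming equal group sizes):

HSD = q \sqrt{\frac{MS_e}{n}}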

29
Q

Difference Between HSD and I

A

Tukey’s HSD does not make all possible comparisons; it makes only pairwise comparisons

30
Q

n (lowercase)

A

size of groups

*this assumes they are equal

31
Q

q (Tukey’s)

A

value from studentized range distribution

32
Q

MSe

A

Mean square error value from the ANOVA analysis

33
Q

Eta Squared

A

(η²) Same as R squared
The magnitude of the treatment effect
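
For a one-way ANOVA this is commonly computed as (LaTeX sketch):

\eta^2 = \frac{SS_B}{SS_{total}}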

34
Q

Eta Squared value of .52

A

52% of the total variance can be explained by the treatment effect

35
Q

RM ANOVA

A

Repeated Measures ANOVA
within subjects design
same subjects measured two or more times

36
Q

What is the T test analog of the RM ANOVA?

A

dependent t test

37
Q

Assumptions of an RM ANOVA

A

Normality and Homogeneity
Sphericity

38
Q

What does sphericity require?

A

The variances of the differences between all combinations of conditions are equal

39
Q

What happens if sphericity is violated?

A

It inflates the type I error rate

40
Q

Within Subjects Design

A

Repeated Measures ANOVA

41
Q

Between Subjects Design

A

Single Factor ANOVA (one way)

42
Q

Interindividual Variability

A

Variability between people in different groups

43
Q

Intraindividual Variability

A

Variability in a person’s scores

44
Q

Sources of Variability

A
  1. Interindividual Variability
  2. Intraindividual Variability
  3. Variability between groups due to treatment effects
  4. Variability due to error (inter, intra, unexplained)
45
Q

Variability due to error

A

unexplained variability

46
Q

Result of eliminating interindividual variability

A

Reduces the mean square error in the denominator of the F ratio (as in a dependent t-test)

47
Q

SStotal

A

total sum of squares

48
Q

SStime

A

variance due to differences between time periods

49
Q

SSsubjects

A

Variance due to differences between subjects (t is the number of time periods)

50
Q

F ratio for RM ANOVA

A

MStime / MSerror
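
A LaTeX sketch of how the sums of squares from the previous cards fit together, with the error term being what remains after removing subject and time effects:

SS_{total} = SS_{subjects} + SS_{time} + SS_{error}, \qquad F = \frac{MS_{time}}{MS_{error}}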

51
Q

Corrections to RM ANOVA

A

Greenhouse-Geisser (GG) adjustment
Huynh-Feldt (HF) adjustment

52
Q

Greenhouse-Geisser Adjustment

A

USED WHEN THE VIOLATION IS SEVERE!
Adjusts the degrees of freedom for the RM ANOVA
- based on an estimate of epsilon (sphericity)
Correction for lack of sphericity
*Assumes maximum violation

53
Q

Huynh-Feldt Adjustment

A

USED WHEN THE VIOLATION OF SPHERICITY IS LESS SEVERE
Adjusts the degrees of freedom for the RM ANOVA
Correction for violations of sphericity
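
Both corrections work by multiplying the degrees of freedom by an estimated epsilon; a LaTeX sketch (with k time points and n subjects):

df_{time} = \hat{\varepsilon}\,(k-1), \qquad df_{error} = \hat{\varepsilon}\,(k-1)(n-1)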

54
Q

Sphericity

A

assumes that the variances of the differences between all combinations of related groups (levels) are equal. In simpler terms, it assumes that the spread or dispersion in one condition is the same in all other conditions.

55
Q

When Can Post Hoc Tests be used in RM ANOVA

A

Tukey’s can be used when sphericity is not violated

56
Q

What can be used in place of post hoc analysis of RM ANOVA if sphericity is violated?

A

Dependent t-tests with a Bonferroni correction
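
A LaTeX sketch of the Bonferroni correction, where m is the number of pairwise comparisons being made:

\alpha_{adjusted} = \frac{\alpha}{m}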

57
Q

Factorial ANOVA

A

Analyzes the effects of multiple factors on the DV simultaneously (FACTORS AND LEVELS)

58
Q

Main Effects

A

Individual factor F values

59
Q

Interactions

A

Combined F values

60
Q

A graph of an ANOVA with no interaction should show lines that are

A

Always parallel and evenly spaced (same variance)

61
Q

A graph of an ANOVA with an interaction should show lines that

A

Are not always parallel or evenly spaced (different variance)

62
Q

Types of Factorial ANOVA

A
  • Between-Between
  • Between-Within (Mixed)
  • Within-Within
63
Q

What is an ANCOVA

A

Analysis of Covariance: adjusts the DV for the covariates, allowing you to assess the effect of the IV on the DV while controlling for the effects of the covariates

A combination of regression and ANOVA

64
Q

Covariate

A

A variable that might affect the DV but is not the variable of interest

65
Q

Axis of DV

A

The vertical (y) axis

66
Q

Axis of IV

A

The horizontal (x) axis

67
Q

When do you use an ANCOVA?

A

When analyzing the effects of multiple IVs on the DV in the same model
-1 or more interval / ratio IV (performance rating 1-10)
-1 or more nominal IV (Time to finish)

68
Q

ANCOVA assumptions

A

Homogeneity of regression
- the slope between the covariate and the DV is similar across groups

69
Q

If the slopes of a regression are not parallel what does that mean for ANCOVA?

A

The assumption is violated; ANCOVA is not safe to use

70
Q

Reliability

A

Does the test produce consistent, repeatable results?

71
Q

Inter rater reliability

A

Are the outcomes consistent from researcher to researcher for the same subject?

72
Q

intrarater reliability

A

are the outcomes consistent if the same rater administers the test to a given subject

73
Q

Test retest reliability

A

Are test scores from the same subjects similar between multiple occasions of taking the test?

74
Q

ICC

A

Reliability measurement
Intraclass Correlation Coefficient: a reliability coefficient
true score variance / total variance
how consistent ratings are
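
A LaTeX sketch of the conceptual form from this card (the specific ICC models plug different RM ANOVA mean squares into this ratio):

ICC = \frac{\sigma^2_{true}}{\sigma^2_{true} + \sigma^2_{error}}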

75
Q

what is required for ICC?

A

variance terms from RM ANOVA

76
Q

SEM

A

Standard Error of Measurement
reliability measurement
measure of the precision of individual test scores

77
Q

Nonparametric Tests

A

Distribution free
generally less powerful
do not make assumptions about the distributions of the population
used when data does not fit the criteria for parametric tests

78
Q

Examples of nonparametric tests

A

Mann-Whitney U Test
Kruskal-Wallis H test
Spearman’s Rank Correlation Coefficient
Chi-Square Test of Independence
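
A minimal SciPy sketch of the tests above; the data values are invented purely for illustration:

from scipy import stats

x = [1, 3, 2, 5, 4]            # ranked / ordinal scores, group 1
y = [2, 4, 6, 8, 7]            # ranked / ordinal scores, group 2
z = [3, 1, 2, 4, 5]            # a third group
table = [[10, 20], [30, 25]]   # nominal data arranged by frequency

stats.mannwhitneyu(x, y)       # two independent groups
stats.kruskal(x, y, z)         # two or more groups of ranked data
stats.spearmanr(x, y)          # rank correlation
stats.chi2_contingency(table)  # association between categorical variables
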

79
Q

What does a Chi Square Test compare?

A

two or more sets of NOMINAL data that have been arranged by frequency

significant association between two categorical variables

80
Q

Spearman Rho

A

(ρ) ORDINAL
Nonparametric equivalent to Pearson’s r

81
Q

Mann-Whitney U test

A

Quantifies the difference between two sets of ordinal data
Nonparametric equivalent of the independent t-test

82
Q

Kruskal-Wallis ANOVA

A

Quantifies difference between more than two groups of RANKED data
Nonparametric equivalent to the one way ANOVA

83
Q

Friedman’s two way ANOVA

A

Quantifies the difference between RANKED data when measured on subjects three or more times
Nonparametric equivalent to RM ANOVA

84
Q

IF THE P VALUE IS LESS THAN THE ALPHA LEVEL!!!!

A

REJECT HO!!!!!

85
Q

Meta Analysis

A

Procedure that allows an investigator to statistically combine the results of multiple studies