Parametric Statistics - Dr. Wofford Flashcards

1
Q

Mean

A

Mean: average

  • Sum of a set of scores divided by the number of scores
  • µ = average of a population; x-bar = average of a sample
  • Best measure to use with ratio or interval data
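The definition above can be sketched in a few lines of Python (the scores are made up for illustration):

```python
# Mean = sum of a set of scores / number of scores
scores = [70, 75, 80, 85, 90]      # a hypothetical sample
x_bar = sum(scores) / len(scores)  # x-bar: the sample mean
print(x_bar)                       # 80.0
```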
1
Q

Types of Data

A
  • Normally distributed data
    • Bell-shaped curve
    • Parametric statistics
  • Nonnormally distributed data
    • Skewed curve
      • Skewed to the left
      • Skewed to the right
    • Nonparametric statistics
1
Q

3 Post-hoc testing options

A
  • Different options
    1. Tukey (SPSS does this for you)
    2. Scheffe (SPSS does this for you)
      • Most flexible method
    3. Bonferroni t-test (by hand, but SPSS might be able to run it for you)
      • Alpha level / number of tests
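The Bonferroni adjustment in the last bullet is simple enough to compute directly; a minimal sketch (the number of tests is hypothetical):

```python
# Bonferroni correction: divide the alpha level by the number of tests
alpha = 0.05       # predetermined alpha level
n_tests = 3        # number of pairwise comparisons (hypothetical)
adjusted_alpha = alpha / n_tests  # each test is judged against this stricter cutoff
```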
2
Q

Three things about ANOVA (stuff besides assumptions):

A
  1. ANOVA determines whether the means of >2 samples are different
  2. Was developed to prevent Type I error from using multiple t-tests
  3. ANOVA partitions total variance within a sample (SST) into two sources: a treatment effect (between the groups) and unexplained sources of variation, or error variation, among the subjects (within the groups)
3
Q

Variability means

A

The dispersion of scores

3
Q

Homogeneity of variance

A

Homogeneity of variance: relatively similar degrees of variance between groups

  • Levene’s test for homogeneity of variance
    • Want the p-value to be > .05
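Levene’s test is available in SciPy as `scipy.stats.levene`; a quick sketch with made-up group scores:

```python
from scipy import stats

# Two hypothetical groups with similar spread
group_a = [12.1, 11.8, 12.5, 12.0, 11.9]
group_b = [12.3, 12.6, 11.7, 12.2, 12.4]

# Levene's test for homogeneity of variance
w_stat, p_homog = stats.levene(group_a, group_b)
# p_homog > .05 -> fail to reject equal variances; the assumption holds
```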
4
Q

Sum of Squares is squared because:

A

to get rid of the negative values

6
Q

Nonnormally distributed data

A
  1. Skewed curve
    • skewed to the left
    • skewed to the right
  2. Nonparametric statistics
6
Q

Prediction

A
  • Prediction
    • Simple and multiple linear regression
    • Logistic regression

A category of parametric test

6
Q

T-Stat for T-Test

A
  • T-stat = difference in means between groups / variance between groups
    • T-stat = (x-bar2 - x-bar1)/s²
  • Increased t-stat values = increased probability of having a significant result
    • Greater mean difference with less variance equates to a higher t-stat
  • Compare the t-stat to a critical value (located in the appendix) to determine whether it is significant at the predetermined alpha level
    • (will produce a p-value that can show whether results are statistically significant or not)
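The card’s logic can be demonstrated with SciPy’s independent t-test (`scipy.stats.ttest_ind`); the group scores below are invented:

```python
from scipy import stats

group1 = [5.0, 5.5, 6.0, 5.2, 5.8]  # hypothetical scores, group 1
group2 = [6.5, 7.0, 6.8, 7.2, 6.9]  # hypothetical scores, group 2

# Greater mean difference with less variance -> higher t-stat, lower p-value
t_stat, p_value = stats.ttest_ind(group2, group1)
```

With these numbers the mean difference is large relative to the variance, so the t-stat is high and the p-value falls below the usual 0.05 alpha level.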
6
Q

Post hoc testing – what is it for, and what else is it called?

A
  • Post hoc testing – used to find where the difference lies after you establish there is a significant difference somewhere with ANOVA
  • Also called unplanned comparisons
7
Q

If T-stat is high, _________ is low (or should be?).

A

p-value

The p-value cutoff is the alpha level (0.05 is the usual standard)

8
Q

ANOVA and t-test are really about the same thing?

A

yes

8
Q

T-Test: Test statistic

A

Test statistic: t-stat

8
Q

ANCOVA:

A

1 DV, 1 IV, 1+ covariates

  • Covariates must not be correlated
9
Q

In parametric statistics, every test has _________ and the researcher must _______________.

A

assumptions

meet the assumptions of the test

10
Q

Mean Square

A

Mean square (MS): SS/(n−1) = sample variance

  • Combats the sample-size problem of the sum of squares
  • Is called variance and is a true measure of variability
  • Sample variance is annotated as s²
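The formula MS = SS/(n−1) in a few lines of Python (scores invented for illustration):

```python
scores = [4.0, 6.0, 8.0, 10.0]                 # hypothetical sample
x_bar = sum(scores) / len(scores)              # sample mean
ss = sum((x - x_bar) ** 2 for x in scores)     # sum of squares
s2 = ss / (len(scores) - 1)                    # mean square = SS/(n-1) = sample variance
```

Dividing by n−1 rather than n is what "combats the problem of sample size": it averages the squared deviations instead of letting them pile up.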
10
Q

Meaning of ANOVA Assumption:
•Scores are independent- not correlated

A

They are not too related. It’s hard to explain without going into too much detail. You will meet that assumption.

If you have two scores that are similar, it could be that they are correlated, but that hardly ever happens.

We don’t have to worry about this right now.

11
Q

Three Parametric Statistics terms

A
  1. Difference in Groups
  2. Association/Strength of Relationship
  3. Prediction
11
Q

What is the problem with running multiple t-tests

A

If you run several t-tests, you increase the likelihood of getting a significant result by chance, so you can get a Type I error.

So ANOVA was created.

13
Q

Kolmogorov-Smirnov test

A

a stats test of normality

15
Q

Range

A

Range: Difference between the highest and lowest values in a distribution

  • Limited utility when describing variance
15
Q

T-Test Assumptions

A
  • Data is measured at least at the interval level
  • Scores are independent - not correlated
  • Normality
  • Homogeneity of variance
    • Independent t-test
15
Q

The analogue of homogeneity of variance for repeated-measures testing is _______________

A

Sphericity: similar to homogeneity of variance for repeated measures

  • Test using Mauchly’s test - want a nonsignificant result
  • Available corrections:
    • Huynh-Feldt, Greenhouse-Geisser
16
Q

What do you compare T-stat to?

A

The critical value (in the appendix of your book - that will tell us if it is statistically significant)

17
Q

Draw a Box Plot

A
17
Q

Mean, median, and mode are the same number when

A

data is normally distributed

19
Q

Draw the parametric/non-parametric test chart

A
20
Q

Levene’s test

A

a stats test for homogeneity of variance

Want the p-value to be > 0.05

21
Q

Draw a Scatterplot

A
22
Q

Two stats tests to assess normality

A
  1. Kolmogorov-Smirnov test
  2. Shapiro-Wilk test
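Both tests are available in SciPy; as a minimal sketch, the Shapiro-Wilk test (`scipy.stats.shapiro`) on a made-up sample:

```python
from scipy import stats

# Hypothetical sample; a nonsignificant result (p > .05) is consistent with normality
sample = [4.9, 5.1, 5.0, 5.2, 4.8, 5.3, 4.7, 5.0]
w, p_norm = stats.shapiro(sample)
```

(`scipy.stats.kstest` covers the Kolmogorov-Smirnov side, though it requires specifying the reference distribution.)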
23
Q

Mode

A

Mode: score that occurs most frequently in a distribution

  • Most easily seen using a frequency distribution/graph
  • Useful with nominal and ordinal data
24
Q

Three groups that Parametric Testing can be split into

A
  • Difference in Groups
    • Univariate: 1 DV
    • Multivariate
  • Association/Strength of Relationship
  • Prediction
25
Q

Normally Distributed Data

A
  1. Bell-shaped curve
  2. Parametric Statistics
26
Q

Can correct for finding a significant difference when checking sphericity with the following 2 methods (don’t have to know what they are):

A
  1. Huynh-Feldt
  2. Greenhouse-Geisser
27
Q

T/F: a T-test can be used for more than two samples.

A

False!

You can use it for two things:

the same people before and after,

or two different groups one time.

29
Q

What else is needed with ANOVA to find which comparisons have significant differences?

A

Post hoc testing

Because ANOVA just tells you there are easter eggs out there (there are significant relationships), but not where they are (which relationships are significant)

31
Q

Do you want T-stat to be high or low?

A

Higher the better

32
Q

If we have a high f-stat, what will p-value do?

A

it will be low

33
Q

•Repeated Measures (ANOVA type)

A
  • Sphericity: similar to homogeneity of variance for repeated measures
    • Test using Mauchly’s test - want a nonsignificant result
      • Available corrections: Huynh-Feldt, Greenhouse-Geisser
35
Q

ANOVA: everything

A
  • ANOVA determines whether the means of >2 samples are different
  • Was developed to prevent Type I error from using multiple t-tests
  • ANOVA partitions total variance within a sample (SST) into two sources: a treatment effect (between the groups) and unexplained sources of variation, or error variation, among the subjects (within the groups)
  • Assumptions:
    • Data is measured at least at the interval level
    • Scores are independent- not correlated
    • Normality
    • Homogeneity of variance- only really a problem with unequal sample sizes
      • Independent t-test
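A one-way ANOVA of the kind described can be run with `scipy.stats.f_oneway`; the three groups below are invented:

```python
from scipy import stats

# Three hypothetical groups with clearly different means
g1 = [10.0, 11.0, 9.5, 10.5]
g2 = [12.0, 12.5, 11.8, 12.2]
g3 = [14.0, 13.5, 14.2, 13.8]

# One test across all groups at once, instead of multiple t-tests
f_stat, p_value = stats.f_oneway(g1, g2, g3)
```

A significant result here only says the means differ somewhere; post hoc testing is what locates the difference.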
36
Q

Draw a Histogram

A
38
Q

Do we want sphericity to be significant or non-significant?

A

we want it to be non-significant (because we want individuals in a group to be similar)

Just like Levene’s.

Can correct for significance with the following methods (don’t have to know what they are):

  • Huynh-Feldt, Greenhouse-Geisser
39
Q

scatter plots are used in ______ analyses.

A

regression

40
Q

T-Tests: everything

A
  • T-test: tests whether the means of two samples are different
  • Assumptions:
    • Data is measured at least at the interval level
    • Scores are independent - not correlated
    • Normality
    • Homogeneity of variance
      • Independent t-test
  • Two types:
    • Independent (unpaired): comparison of two independent, different samples (like two groups of subjects)
    • Dependent (paired): comparison of the same sample on two different occasions (like the same people before and after treatment)
  • Test statistic: t-stat
41
Q

Sum of Squares

A

Sum of squares (SS): Σ(X − x-bar)²

  • As variability increases, the sum of squares will be larger
  • Influenced by sample size - as sample size increases, the sum of squares will increase because there are more scores
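The sample-size effect in the second bullet can be shown directly (made-up scores):

```python
def sum_of_squares(scores):
    """SS = sum of squared deviations from the mean."""
    x_bar = sum(scores) / len(scores)
    return sum((x - x_bar) ** 2 for x in scores)

small = [2.0, 4.0, 6.0]
large = small * 3               # same spread, three times as many scores
ss_small = sum_of_squares(small)   # 8.0
ss_large = sum_of_squares(large)   # 24.0 - larger only because there are more scores
```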
42
Q

Histogram can be used to assess what?

A

Visually assess normal distribution (or not) of data.

Should also back this up with a stats test to confirm/deny normality.

44
Q

ANOVA stands for

A

Analysis of variance

45
Q

what three things do we need to understand in order to understand a t-test and an ANOVA?

A
  1. Sum of squares
  2. Mean square
  3. Standard Deviation
46
Q

What is another name for box plot?

A

Whisker plot

47
Q

Shapiro-Wilk test

A

a stats test of normality

49
Q

why did they do the standard deviation?

A

So we would have standard units of measure.

Same units of measure as used in the original measurements.

So if you measured ROM in degrees, the SD will be in degrees too.

50
Q

Four ANOVA assumptions:

A

Assumptions:

  1. Data is measured at least at the interval level
  2. Scores are independent- not correlated
  3. Normality
  4. Homogeneity of variance - only really a problem with unequal sample sizes
    • Independent t-test
51
Q

Difference in Groups (2)

A
  • Difference in Groups
    • Univariate: 1 DV
      • T-test
      • Analysis of variance (ANOVA)
    • Multivariate
52
Q

quick way to tell your data is normally distributed

A

mean, median, and mode are the same value

53
Q

To use Parametric Statistics you need what kind of data?

A

normally distributed data

54
Q

Percentiles:

A

Percentiles: Description of a score’s position within a distribution

  • Helpful in converting actual scores into comparative scores or for a point of reference
56
Q

Three Measures of Central Tendency

A
  1. Mode: Score that occurs most frequently in a distribution
  2. Median: rank-ordered distribution split into two equal halves
  3. Mean: average
57
Q

Variability: Measures of Range

A
  1. Range
  2. Percentiles
  3. Quartiles
58
Q

Three Statistical Graphs

A
  1. Histogram
  2. Scatterplot
  3. Box plot
60
Q

One-way ANOVA:

A

1 DV, 1 IV

61
Q

Normality

A

Normality is an assumption of every parametric test

  • Ways to assess normality:
    • Histogram
    • Kolmogorov-Smirnov test
    • Shapiro-Wilk test
62
Q

Two assumptions that are neccessary for parametric statistics

A
  1. Normality is an assumption of every parametric test (normal distribution)
  2. Homogeneity of variance: relatively similar degrees of variance between groups
63
Q

What can you do instead of T-tests if you want to compare more than 2 groups?

A

ANOVA!

(if you run several t-tests, you increase the likelihood of getting a significant result by chance, so you can get a Type I error)

64
Q

ANOVA produces an __________, whereas a t-test produces a ___________.

A

F-stat

t-stat

(they are produced almost the same way)

65
Q

Association/strength of relationship

A

Association/Strength of Relationship

  • Linear = Pearson’s r

A group for parametric testing

66
Q

Two types of T-tests

A
  1. Independent (unpaired): comparison of two independent, different samples
  2. Dependent (paired): comparison of the same sample on two different occasions
67
Q

Quartiles

A

Quartiles: Divides a distribution into four equal parts, or quarters

  • Q1-Q3 (25%-75%)
  • Q2 is the median (50%)
  • Interquartile range: Q3 - Q1 - represents the boundaries of the middle 50% of the distribution
  • Visually depicted using a box plot
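Quartiles and the interquartile range can be computed with Python’s standard library (`statistics.quantiles`); the data are invented:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8]
q1, q2, q3 = statistics.quantiles(data, n=4)  # three cut points dividing the data into quarters
iqr = q3 - q1   # interquartile range: boundaries of the middle 50%
# q2 is the median (50%)
```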
69
Q

Mode is most useful with which kind of data?

A

Nominal and ordinal data

70
Q

Median

A

Median: rank-ordered distribution split into two equal halves

  • Useful for ordinal data
71
Q

Factorial ANOVA:

A

1+ IV, 1 DV

72
Q

Why did mean square come about?

A

It came about because of the problem of sample size affecting the sum of squares

It basically makes it an average value

This is called true variance!!

74
Q

Types of ANOVA

A

Types of ANOVA:

  1. One-way ANOVA: 1 DV, 1 IV
  2. ANCOVA: 1 DV, 1 IV, 1+ covariates
    • Covariates must not be correlated
  3. Factorial ANOVA: 1+ IV, 1 DV
  4. Repeated Measures
    • Sphericity: similar to homogeneity of variance for repeated measures
      • Test using Mauchly’s test - want a nonsignificant result
        • Available corrections: Huynh-Feldt, Greenhouse-Geisser
75
Q

Standard Deviation

A

Standard deviation: square root of s²

  • s, in the original units of measurement, is the standard deviation

A measure of Variability
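As a quick sketch of the definition (scores invented):

```python
import math

scores = [4.0, 6.0, 8.0, 10.0]                               # hypothetical sample
x_bar = sum(scores) / len(scores)
s2 = sum((x - x_bar) ** 2 for x in scores) / (len(scores) - 1)  # sample variance s²
sd = math.sqrt(s2)   # standard deviation: back in the original units of measurement
```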

76
Q

Test statistic: F-stat

A

Used for ANOVA. Calculated in basically the same way as the t-stat (even though there are some differences below).

Test statistic: F-stat

  1. F = MSb/MSe
  2. Mean square = SS/df
  3. Calculate mean square values for both the between-groups sum of squares (MSb) and the error sum of squares (MSe)
  4. Greater between-group difference with less variance = larger F-stat
  5. Compare the F-stat to the critical value (in the appendix) to determine whether the value is significant at the predetermined alpha level
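The steps above can be computed by hand in a short Python sketch (the group scores are invented):

```python
# Hand-computed F following the card's formulas: mean square = SS/df, F = MSb/MSe
groups = [[10.0, 11.0, 9.0], [14.0, 15.0, 13.0], [18.0, 19.0, 17.0]]

k = len(groups)                          # number of groups
n_total = sum(len(g) for g in groups)    # total number of scores
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-groups sum of squares: group means vs the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Error sum of squares: scores vs their own group mean
ss_error = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

ms_b = ss_between / (k - 1)        # between-groups mean square, df = k-1
ms_e = ss_error / (n_total - k)    # error mean square, df = n-k
f_stat = ms_b / ms_e               # greater between-group difference, less variance -> larger F
```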
77
Q

Variability

A

Variability: dispersion of scores

  • Measures of range
  • Measures of variance