ANOVA (Analysis of Variance) Flashcards

1
Q

Assumptions underlying analysis of variance

A
  • The measure taken is on an interval or ratio scale.
  • The populations are normally distributed.
  • The variances of the compared populations are the same.
  • The estimates of the population variance are independent.
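
These assumptions can be checked before running the analysis. A minimal sketch in Python, assuming scipy is available and using invented scores for three independent groups: Shapiro-Wilk tests each sample for normality, and Levene's test checks homogeneity of variance.

```python
from scipy import stats

# Invented interval-scale scores from three independently sampled groups.
group_a = [23, 25, 21, 27, 24]
group_b = [30, 28, 33, 29, 31]
group_c = [22, 26, 24, 23, 25]

# Normality of each population: Shapiro-Wilk test on each sample.
for g in (group_a, group_b, group_c):
    print(stats.shapiro(g))

# Equal variances across the compared populations: Levene's test.
print(stats.levene(group_a, group_b, group_c))

# Independence of the variance estimates is a design matter (separate,
# independently sampled groups), not something a test on the data confirms.
```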
2
Q

ANOVA

Analysis of variance uses the

A

ratio of two sources of variability (the F-ratio) to test the null hypothesis.
• Between-groups variability estimates both experimental error and treatment effects.
• Within-groups variability estimates experimental error alone.
• The assumptions that underlie this technique follow directly from the F-ratio.
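
As a concrete sketch of that ratio (assuming scipy is available; the scores are invented), a one-way ANOVA returns exactly this between-groups / within-groups ratio as its F statistic:

```python
from scipy import stats

# Invented scores for three treatment groups.
group_a = [23, 25, 21, 27, 24]
group_b = [30, 28, 33, 29, 31]
group_c = [22, 26, 24, 23, 25]

# f_oneway forms the F-ratio: between-groups variability / within-groups variability.
result = stats.f_oneway(group_a, group_b, group_c)
print(result.statistic, result.pvalue)
```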

3
Q

Variability and averages

A

Graph 1: peaks in the same place but with different widths
Different variability
Same averages
Graph 2: peaks in different places but with the same width
Same variability
Different averages

4
Q

The normal distribution is used in

A

statistical analysis in order to make standardized comparisons across different populations (treatments).

5
Q

The kinds of parametric statistical techniques we use

A

assume that a population is normally distributed. This allows us to compare two populations directly.

6
Q

The Normal Distribution is a mathematical function that

A

defines the distribution of scores in a population in terms of two population parameters.

7
Q

(Normal distribution)

The first parameter is the

A

Greek letter μ (mu), which represents the population mean.

8
Q

(Normal distribution)

The second parameter is the

A

Greek letter σ (sigma), which represents the population standard deviation.
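
For reference, the mathematical function the two cards above refer to is the normal probability density, written here with its mean μ and standard deviation σ:

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}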

9
Q

Different normal distributions are generated whenever

A

the population mean or the population standard deviation differs.

10
Q

Normal distributions with different population variances and the same population mean

A

Peaks of different heights and widths, but all centred in the same place.

11
Q

Normal distributions with different population means and the same population variance

A

Peaks of the same height and width, but centred in different places and only partly overlapping.

12
Q

Normal distributions with different population variances and different population means

A

Peaks of different heights and widths, centred in different places.
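
A small sketch of the situations on cards 9 to 12 (assuming numpy and scipy are available; the particular μ and σ values are arbitrary): evaluating normal densities with different parameters shows the three cases just described.

```python
import numpy as np
from scipy import stats

# Evaluate three normal densities on the same grid of x values.
x = np.linspace(-6, 6, 7)

baseline        = stats.norm.pdf(x, loc=0, scale=1)  # mu = 0, sigma = 1
wider_same_mean = stats.norm.pdf(x, loc=0, scale=2)  # same mean, larger variance
shifted_same_sd = stats.norm.pdf(x, loc=2, scale=1)  # different mean, same variance

print(baseline)
print(wider_same_mean)  # flatter and wider, but centred in the same place
print(shifted_same_sd)  # same shape as the baseline, centred somewhere else
```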

13
Q

Most samples of data are

A

approximately normally distributed (but not all).

14
Q

When the null hypothesis (H0) is approximately true we have the following:

A

There is almost complete overlap between the two distributions of scores.
The distributions have very similar variances and very similar averages: peaks of the same height and width, so close together that they almost completely overlap.

15
Q

When the alternative hypothesis (H1) is true we have the following:

A

There is very little overlap between the two distributions.

The peaks have similar heights and widths, but they are centred far apart and barely overlap.

16
Q

The crux of the problem of rejecting the null hypothesis is

A

the fact that we can always attribute some portion of the differences we observe among the treatment means to chance factors.
• These chance factors are known as experimental error.

17
Q

An experimental error

A

The chance factors to which we can always attribute some portion of the differences observed among the treatments.

18
Q

What are the potential contributors to experimental error?

A

All uncontrolled sources of variability in an experiment

19
Q

There are two basic kinds of experimental error:

A
  • individual differences error
  • measurement error

20
Q

In a real experiment both sources of experimental error will

A

influence and contribute to the scores of each subject.
• The variability of subjects treated alike, i.e. within the same treatment condition or level, provides a measure of experimental error.
• At the same time, the variability of subjects within each of the other treatment levels also offers an estimate of experimental error.

21
Q

Estimate of treatment effects

A
  • The means of the different groups in the experiment should reflect the differences in the population means, if there are any.
  • The treatments are viewed as a systematic source of variability, in contrast to the unsystematic source of variability, experimental error.
  • This systematic source of variability is known as the treatment effect.
22
Q

What is the treatment effect?

A

Systematic source of variability

23
Q

What is Partitioning?

A

Dividing each score's total deviation from the grand mean into component deviations: a between-groups part and a within-groups part.

24
Q

Each of the deviations from the grand mean has a specific name
AS₂₅ − T̄ …

A

Is called the total deviation

25
Q

Each of the deviations from the grand mean has a specific name
Ā₂ − T̄

A

Is called the between-groups deviation

26
Q

Each of the deviations from the grand mean has a specific name
AS₂₅ − Ā₂

A

Is called the within-subjects deviation

27
Q

The between groups deviation…

A

Ā₂ − T̄

Represents the effects of both error and treatment

28
Q

The within subjects deviation…

A

AS₂₅ − Ā₂

Represents the effect of error alone
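
A numerical sketch of the partition on cards 23 to 28, assuming numpy and using invented scores for two treatment groups (AS₂₅ is the score of subject 5 in level a₂, Ā₂ is that group's mean, T̄ is the grand mean). The total deviation splits exactly into the between-groups and within-groups parts:

```python
import numpy as np

# Invented data: two treatment levels (a1, a2) with five subjects each.
a1 = np.array([4.0, 6.0, 5.0, 7.0, 3.0])
a2 = np.array([8.0, 9.0, 7.0, 10.0, 11.0])

T_bar  = np.concatenate([a1, a2]).mean()  # grand mean, T-bar
A2_bar = a2.mean()                        # treatment mean of level a2
AS25   = a2[4]                            # score of subject 5 in level a2

total   = AS25 - T_bar     # total deviation
between = A2_bar - T_bar   # between-groups deviation (error + treatment)
within  = AS25 - A2_bar    # within-groups deviation (error alone)

print(total, between + within)  # the two values are identical
```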

29
Q

If we consider the ratio of the between groups variability and the within groups variability

A

Differences among treatment means divided by differences among subjects treated alike

Then we have…

(experimental error + treatment effects) divided by (experimental error)
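
Carrying the partition through to sums of squares gives exactly this ratio. A sketch with invented data for three equal-sized groups, assuming numpy and scipy; the hand-built F matches scipy's f_oneway:

```python
import numpy as np
from scipy import stats

# Invented scores: three treatment levels with five subjects each.
groups = [np.array([4.0, 6.0, 5.0, 7.0, 3.0]),
          np.array([8.0, 9.0, 7.0, 10.0, 11.0]),
          np.array([6.0, 5.0, 7.0, 6.0, 6.0])]

grand_mean = np.concatenate(groups).mean()
a = len(groups)      # number of treatment levels
n = len(groups[0])   # subjects per level (equal n assumed)

# Between-groups sum of squares: reflects treatment effects + experimental error.
ss_between = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-groups sum of squares: reflects experimental error alone.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

F = (ss_between / (a - 1)) / (ss_within / (a * (n - 1)))
print(F, stats.f_oneway(*groups).statistic)  # the two values agree
```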

30
Q

If the null hypothesis is true then

A
The treatment effect is equal to zero, so the ratio is
(experimental error + 0) / (experimental error) ≈ 1
31
Q

If the null hypothesis is false then the

A
treatment effect is greater than zero, so the ratio is
(experimental error + treatment effect) / (experimental error) > 1
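
To make the last two cards concrete, here is a small simulation sketch, assuming numpy and scipy; the group size, number of simulations, and effect size are arbitrary choices. With a zero treatment effect the F-ratio averages close to 1; with a non-zero effect it sits well above 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def average_f(effect, n_sims=2000, n_per_group=20):
    """Average F-ratio over simulated 3-group experiments.
    `effect` shifts the mean of the third group; 0 means H0 is true."""
    fs = []
    for _ in range(n_sims):
        g1 = rng.normal(0.0, 1.0, n_per_group)
        g2 = rng.normal(0.0, 1.0, n_per_group)
        g3 = rng.normal(effect, 1.0, n_per_group)
        fs.append(stats.f_oneway(g1, g2, g3).statistic)
    return float(np.mean(fs))

print(average_f(effect=0.0))  # H0 true: treatment effect = 0, F is close to 1 on average
print(average_f(effect=1.0))  # H0 false: treatment effect > 0, F is well above 1
```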