Week 5 Flashcards

1
Q

Are raw measurement scales always meaningful?

A

Not necessarily. The raw score may not be meaningful in itself, but the relative position may be. E.g., "you finished your race in 43 minutes" vs. "you finished in 3rd place".

2
Q

What are two good elements of standardised scales?

A
  1. They make it easy to determine how extreme/unusual a score is
  2. They make it easy to compare data from different scales

3
Q

What are two common standard scores?

A
  1. Z-scores: M = 0, SD = 1
  2. T-scores: M = 50, SD = 10

4
Q

What do Z-scores use as a ‘ruler’?

A

Standard deviation. Measured scores are re-expressed as standard deviation scores.

5
Q

What are two examples of scores being converted to Z-scores?

A
+1.0 = 1 SD above the mean
-2.5 = 2.5 SDs below the mean
6
Q

Does transforming data to Z scores change the distributional shape?

A

No, it does not.

7
Q

When data is normally distributed, what does this mean for percentages of Z-scores?

A

~68% of scores fall within ±1.0 SD of the mean
~95% of scores fall within ±2.0 SD of the mean
~99.7% of scores fall within ±3.0 SD of the mean
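
These percentages can be checked against the standard normal distribution; a minimal sketch using scipy (not part of the original card):

```python
# Verify the 68 / 95 / 99.7 rule from the standard normal distribution.
from scipy.stats import norm

for k in (1, 2, 3):
    # proportion of scores falling within +/- k SDs of the mean
    within = norm.cdf(k) - norm.cdf(-k)
    print(f"within +/- {k} SD: {within:.1%}")
# prints roughly 68.3%, 95.4%, 99.7%
```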

8
Q

How do you convert raw scores to Z-scores?

A

Subtract the mean from the individual score

Divide by the standard deviation
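
A minimal sketch of this conversion in Python (the score, mean, and SD below are made-up values for illustration):

```python
def to_z(score, mean, sd):
    """Raw score -> z-score: subtract the mean, then divide by the SD."""
    return (score - mean) / sd

# hypothetical scale with M = 50, SD = 10
print(to_z(65, 50, 10))  # 1.5, i.e. 1.5 SDs above the mean
```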

9
Q

How do you convert Z-scores back to raw scores?

A

Multiply z-score by standard deviation

Add mean of raw scores
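
The reverse conversion, sketched with the same hypothetical scale (M = 50, SD = 10):

```python
def to_raw(z, mean, sd):
    """Z-score -> raw score: multiply by the SD, then add the mean."""
    return z * sd + mean

print(to_raw(-2.0, 50, 10))  # 30.0, i.e. 2 SDs below the mean
```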

10
Q

Do Z-scores allow comparisons between count-based performance and time-based performance?

A

Yes.

11
Q

Does the size of the sample systematically affect the standard deviation?

A

No.

12
Q

For a normally distributed sample, M ± SD contains what percentage of observed scores?

A

~68%.

13
Q

What does the mean (M) ± the standard error (SE) describe?

A

a sampling distribution

  • theoretical distribution
  • expected distribution of statistics if sampling was repeated many times.
14
Q

What does standard error describe?

A

The variability of sample statistics across repeated samples

15
Q

With standard error, what does one sample provide?

A

One statistic (e.g. the mean). If many samples were collected from the same population, their statistics would vary.
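
A small simulation illustrates this (the population values M = 100, SD = 15 and n = 25 are arbitrary assumptions):

```python
# Each sample from the same population yields a slightly different mean.
import numpy as np

rng = np.random.default_rng()
for _ in range(5):
    sample = rng.normal(loc=100, scale=15, size=25)
    print(round(sample.mean(), 1))  # the five means will all differ somewhat
```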

16
Q

Is standard error systematically affected by sample size?

A

Yes, it is. There is an inverse relationship: bigger samples have smaller standard errors.
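
Since the standard error of the mean is SD / sqrt(n), the inverse relationship is easy to demonstrate (SD = 15 is an arbitrary choice):

```python
import math

sd = 15
for n in (10, 100, 1000):
    se = sd / math.sqrt(n)  # standard error shrinks as the sample grows
    print(f"n = {n:4d}  SE = {se:.2f}")
# n =   10  SE = 4.74
# n =  100  SE = 1.50
# n = 1000  SE = 0.47
```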

17
Q

What does the confidence interval length indicate?

A

It indicates the precision of the estimate.

18
Q

What are confidence intervals calculated from?

A

They are calculated from the standard error, which is also affected by sample size.
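
A minimal sketch of that calculation, using 1.96 as the normal-approximation critical value for 95% confidence (the sample summary values are made up):

```python
import math

mean, sd, n = 15.7, 8.3, 40   # hypothetical sample summary
se = sd / math.sqrt(n)        # standard error of the mean
lower = mean - 1.96 * se      # 1.96 = z critical value for 95% confidence
upper = mean + 1.96 * se
print(f"95% CI [{lower:.1f}, {upper:.1f}]")  # reported in the format of card 27
```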

19
Q

What is the three-point summary that the APA publication manual gives about confidence intervals?

A
  1. CIs can be an extremely effective way of reporting results
  2. CIs combine info about location and precision and can often be directly used to infer significance levels
  3. CIs are, in general, the BEST reporting strategy
20
Q

What range does the CI specify?

A

The range within which we can have a specified level of confidence that the true population value lies

21
Q

What is the most commonly reported confidence interval?

A

95%.

22
Q

What can sample means and differences between sample means provide? Is there a way to calculate a range within which one can be confident that the true value lies?

A

A point estimate of the value in the population. However, the estimate is unlikely to be exactly correct, and it falsely implies infinite precision.

Yes. This is what we term a confidence interval, and using it can be much more valuable.

23
Q

The 95% CI is the likely range within which what sits?

A

The true value of the population parameter

24
Q

Narrow 95% CIs indicate what?

A

High precision

25
Q

Wide 95% CIs indicate what?

A
Low precision.
(However, precision is about variability; accuracy is about location.)
26
Q

Should confidence intervals be used to describe sample distribution?

A

NO! You should use SD if you want to describe the distribution of your sample.

27
Q

How do you state a confidence interval?

A

Write the percentage, then "CI", then enclose the lower and upper CI limits in square brackets. E.g.,
95% CI [10.2, 21.2]

28
Q

Is it important to state what statistic a CI is constructed around?

A

Yes. You need to state whether it is constructed around a group or condition mean, a difference between two means, or a correlation coefficient.

29
Q

What is assumed when p-values are being calculated?

A

That random variation is the only cause of variability

30
Q

What is a p-value?

A

The probability that a sample statistic as extreme as or more extreme than the one observed would occur if random variation were the only cause of variability.
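
As an illustration (not from the card), if the test statistic is expressed as a z-score, the p-value is just a tail area of the standard normal distribution:

```python
from scipy.stats import norm

z = 2.1                              # hypothetical observed test statistic
p_one_tailed = norm.sf(z)            # P(Z >= 2.1) if only random variation is at work
p_two_tailed = 2 * norm.sf(abs(z))   # at least this extreme in either direction
print(round(p_one_tailed, 3), round(p_two_tailed, 3))  # ~0.018 and ~0.036
```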

31
Q

When is a result declared statistically significant?

A

When the p-value is small (conventionally < .05).

32
Q

Psychology tests theories using a statistical approach called hypothesis testing. Who developed this theory?

A

It was a combination of two statistical philosophies developed by (1) Fisher and (2) Neyman and Pearson.

33
Q

What is involved in hypothesis testing?

A

The procedure involves testing the null (nil) hypothesis.

34
Q

What two things are involved in testing the null (nil) hypothesis?

A
  1. Assume the size of the observed effect is purely a result of random sampling (no, or nil effect)
  2. If the null is true, what is the probability of an effect as or more extreme than is observed?
35
Q

What does NHST stand for?

A

Null hypothesis significance testing

36
Q

What should be assumed in nil hypothesis testing?

A

That there is no effect, e.g. correlation = 0, mean difference = 0, etc.

37
Q

What are the three points of logic in NHST?

A
  1. Use the sample variance to estimate the variance of the population
  2. Test how likely an effect of the observed or larger size would be for this sample size
  3. If the chance of an effect as large or larger is less than 5% (probability < .05), reject the null hypothesis
38
Q

Use the IQ of Tasmanians to provide an example of NHST

A

Imagine a study testing whether the IQ of Tasmanians is greater than the IQ of the general population.
Null hypothesis: assume the IQ of Tasmanians is not different from the general population.

39
Q

Using the example of testing whether Tasmanians' IQ is greater than that of the general population, what is the alternative hypothesis (Ha)?

A

The hypothesis you hope to support. Here, the alternative hypothesis would state that the IQ of Tasmanians is higher than that of the general population due to some non-random factor.

40
Q

What is the difference between a directional and a non-directional alternative hypothesis?

A

Ha can be directional (one-tailed)
- IQ is higher, or IQ is lower
Ha can be non-directional (two-tailed)
- IQ is different

41
Q

What is the variability of a distribution of sample statistics (e.g. M) called?

A

The standard error. Larger sample sizes have smaller standard errors (narrower sampling distribution).

42
Q

What are the two key elements of hypothesis testing?

A
  1. A process for making decisions about the value of statistics for the entire population (parameters) (i.e. mean, difference of means, r, SD)
  2. Calculating the probability of an effect of the observed size (or more extreme) if variation was due only to a random process
    - sampling error
43
Q

When should we reject the null hypothesis and accept the alternative hypothesis?

A

If p < .05: a statistic as large or larger would occur less than 5% of the time IF only random sampling were responsible for the variation.

44
Q

When should we not reject the null hypothesis?

A

If p > .05: a statistic as large or larger would occur more than 5% of the time IF only random sampling is responsible for the variation.

45
Q

Why should we NEVER accept the null hypothesis?

A

The observed effect size might not be rare IF random process was responsible for the variation, but this doesn’t mean the statistic was produced by a random process.
Could a non-random (systematic) process produce the statistic?

46
Q

In the Tasmanian IQ example, if we use the z distribution because we know the population SD, what should we then do?

A

Determine the probability of obtaining a sample mean z-score at least as extreme if H0 (null hypothesis) is true
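
A worked sketch of that step with made-up sample numbers (IQ is conventionally scaled so the population has M = 100, SD = 15):

```python
import math
from scipy.stats import norm

mu_0, sigma = 100, 15                # population parameters under H0
sample_mean, n = 104, 36             # hypothetical Tasmanian sample

z = (sample_mean - mu_0) / (sigma / math.sqrt(n))  # z-score of the sample mean
p = norm.sf(z)                       # one-tailed: P(mean at least this extreme | H0 true)
print(round(z, 2), round(p, 3))      # 1.6, ~0.055 -> p > .05, so do not reject H0
```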

47
Q

If the null hypothesis is false and we retain the null hypothesis, what is this called?

A

A type 2 error

48
Q

If a null hypothesis is true and we reject the null hypothesis, what do we call this?

A

Type 1 error

49
Q

If the null hypothesis is false and we fail to reject it, what is this called?

A

A type 2 error (beta)

50
Q

If a null hypothesis is true and we reject it, what is this called?

A

A type 1 error (alpha)

51
Q

What if rejecting the null hypothesis is wrong?

A

Generally, we reject the null hypothesis if p < .05.
This could be an error. (e.g. the null is really true and the sample was unusual purely by chance)
An error associated with rejecting the null hypothesis when it is actually true is a type 1 error. False positive.

52
Q

What if we wrongly fail to reject the null?

A

If p=.151, fail to reject the null hypothesis.
This could be an error. (e.g. the null is really false and the sample was from a different population)
An error associated with failing to reject the null hypothesis when it is actually false is a type 2 error. False negative.