Exam 3 Flashcards

1
Q

extraneous variable

A

a variable that is not systematically manipulated in an experiment but that still may affect the behavior being observed

2
Q

repeated measures design

A

same participants participate in different treatment conditions

3
Q

independent measures design

A

two different groups of participants receive different treatments

4
Q

null hypothesis (Ho)

A

the IV has NO EFFECT on the DV
Ho: mu1 = mu2

5
Q

alternative hypothesis (Ha)

A

the IV HAS AN EFFECT on the DV
Ha: mu1 ≠ mu2 (nondirectional), or mu1 > mu2 / mu1 < mu2 (directional)

6
Q

directional

A

increases or decreases
ONE-TAIL TEST

7
Q

nondirectional

A

has an effect or doesn’t have an effect
TWO-TAIL TEST

8
Q

alpha level (a)

A

defines the maximum probability of a Type 1 error (rejecting Ho when it is true) that the researcher is willing to accept

9
Q

p-value

A

the probability of obtaining a result at least as extreme as the one observed, assuming Ho is true (i.e., how likely the result is by chance alone)

10
Q

type 1 error

A

you reject Ho when you should have retained it (Ho is actually true)

11
Q

type 2 error

A

you retain Ho when you should have rejected it (Ho is actually false)

12
Q

effect size vs practical significance

A

statistical significance: whether or not there was a difference and how likely it would be to occur by chance alone
effect size: how large the difference was

13
Q

decision rule for obtained probability

A

obtained probability ≤ a → reject Ho
obtained probability > a → retain Ho
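The rule above can be sketched in Python; the helper name and the example p-values and alpha are hypothetical, not from the cards:

```python
# Sketch of the decision rule: compare the obtained p-value to alpha.
def decide(p_value, alpha=0.05):
    # p <= alpha -> reject Ho; otherwise retain Ho
    return "reject Ho" if p_value <= alpha else "retain Ho"

print(decide(0.03))  # reject Ho (p <= alpha)
print(decide(0.20))  # retain Ho (p > alpha)
```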

14
Q

why do we evaluate Ho first?

A

it is easier to disprove a hypothesis than to prove it, because we can never prove something with 100% certainty

15
Q

power

A

the ability to detect an effect when one is present
- value can vary from 0 to 1

16
Q

power (a priori use)

A

determine sample size necessary to detect an effect

17
Q

power (a posteriori use)

A

determining whether the sample size and research design were adequate to detect an effect

18
Q

the effect N has on power

A

as N increases, power increases

19
Q

the size of real effect on power

A

as effect size increases, power increases

20
Q

the effect of alpha level on power

A

the closer alpha is to 1, the greater the power
the closer alpha is to 0, the lower the power

21
Q

explain the relationship between power and beta

A

the power of a test is the probability of rejecting the Ho, given it is false
- power= 1-Beta
- power + beta = 1
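The relationship power = 1 - beta can be checked numerically; the beta value here is just a made-up example:

```python
# power = 1 - beta; beta = 0.20 is a hypothetical Type 2 error rate.
beta = 0.20          # probability of a Type 2 error
power = 1 - beta     # probability of detecting a real effect
print(power)         # 0.8
```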

22
Q

why do we never accept Ho and instead retain Ho?

A

retaining Ho does not prove it is true; we can never assert that Ho is true, only that we failed to reject it

23
Q

distribution of sample means

A

the collection of sample means for all the possible random samples of a particular size that can be obtained from a population

24
Q

sampling distribution of a statistic

A

a distribution of statistics obtained by selecting all the possible samples of a specific size (n) from a population

25
Q

characteristics of distributions of sample means

A
  • sample means pile up around the population mean
  • the distribution of sample means is approximately normal in shape
  • the larger the sample size, the closer the sample means should be to the population mean
26
Q

central limit theorem

A

when n is large, the distribution of sample means will approach a normal distribution, regardless of the shape of the population
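The theorem can be sketched with a quick simulation; the population (uniform, so clearly non-normal), the sample size of 40, and the 2000 repetitions are arbitrary choices for illustration:

```python
import random
import statistics

# Central limit theorem sketch: means of samples drawn from a non-normal
# population (Uniform(0, 1)) still pile up around the population mean 0.5.
random.seed(0)
means = [statistics.mean(random.random() for _ in range(40))
         for _ in range(2000)]
print(abs(statistics.mean(means) - 0.5) < 0.01)  # True: centered at mu
```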

27
Q

critical region

A

area under the curve that contains all the values of the statistic that allow rejection of the null hypothesis

28
Q

critical value

A

the value that bounds the critical region

29
Q

conditions for use of the one-sample z test

A
  • N ≥ 30 (normal distribution)
  • Xbar is known
  • mu and the population standard deviation are known
30
Q

t test vs z test (single sample)

A

z test: needs population mean and standard deviation
t test: needs population mean and sample standard deviation

31
Q

degrees of freedom (single sample t test)

A

the number of values that are free to vary when calculating a statistic
df = N - 1

32
Q

T distribution characteristics

A
  • flatter than the normal distribution
  • more spread out than the normal distribution
  • more variability in t distribution
33
Q

conditions to use t-test for single sample

A
  • only 1 sample
  • population mean is known but not population SD
  • sample SD is known
  • sampling dist. is normal
34
Q

sampling distribution of t

A

the distribution of t values obtained when comparing sample means to the population mean when the population standard deviation is not known

35
Q

cohen’s d effect size

A

small effect: d ≤ 0.2
medium effect: d = 0.21-0.79
large effect: d ≥ 0.8
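A hypothetical helper that applies these cutoffs to a d value (the function name and example values are not from the cards):

```python
# Label an effect size using the cutoffs on this card.
def effect_label(d):
    d = abs(d)  # sign of d does not affect magnitude
    if d <= 0.2:
        return "small"
    if d >= 0.8:
        return "large"
    return "medium"

print(effect_label(0.1), effect_label(0.5), effect_label(1.2))
# small medium large
```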

36
Q

confidence intervals

A

range of values that probably contains the population value
- are always TWO-TAILED AND NON DIRECTIONAL

37
Q

confidence limits

A

the values that bound the confidence interval

38
Q

correlated group designs

A

two related sets of data, with no population parameters known
1. repeated measures design
2. matched subject design

39
Q

repeated measures design

A

same participants receive every level of IV

40
Q

matched subject design

A

participants are matched on a specific variable to hold constant

41
Q

assumptions for a t test for correlated groups

A
  • N >= 30 (normally distributed)
  • participants are the same in each condition or matched
  • IV is nominal and DV is interval or ratio
42
Q

t test for independent samples

A

used to analyze the difference between the means of two independent groups

43
Q

assumptions for t test for independent samples

A
  • two independent sets of scores
  • IV is nominal with 2 levels; DV is interval or ratio
  • N >= 30
  • homogeneity of variance
  • each condition contains a separate group of people
  • participants receive only one level of the IV
44
Q

estimated standard error of the mean difference (independent groups)

A

tells us how far apart, on average, two sample means would be if the null hypothesis is true

45
Q

are t-test or z-test more powerful?

A

t-tests are less powerful because critical t values are larger than critical z values

46
Q

test decision rule

A

|tobt| > |tcrit| -> reject Ho
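The rule can be sketched as a small comparison; the function name and the example t values are hypothetical:

```python
# Decision rule: reject Ho when |t obtained| exceeds |t critical|.
def t_decision(t_obt, t_crit):
    return "reject Ho" if abs(t_obt) > abs(t_crit) else "retain Ho"

print(t_decision(-2.5, 2.06))  # reject Ho
print(t_decision(1.3, 2.06))   # retain Ho
```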

47
Q

z-test formula

A

z = (Xbar - mu) / (popSD / sqrt(N))
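A minimal sketch of the formula (the sample numbers are made-up):

```python
import math

# z = (Xbar - mu) / (popSD / sqrt(N))
def z_test(xbar, mu, pop_sd, n):
    return (xbar - mu) / (pop_sd / math.sqrt(n))

print(z_test(105, 100, 15, 36))  # 2.0
```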

48
Q

single sample t-test formula

A

t = (Xbar - mu) / (sampleSD / sqrt(N))
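The same formula as code; the sample data are made-up, and `statistics.stdev` gives the sample SD (N - 1 in the denominator) the card calls for:

```python
import math
import statistics

# t = (Xbar - mu) / (sampleSD / sqrt(N))
def one_sample_t(sample, mu):
    n = len(sample)
    s = statistics.stdev(sample)  # sample SD, N - 1 denominator
    return (statistics.mean(sample) - mu) / (s / math.sqrt(n))

print(round(one_sample_t([2, 4, 6], 3), 3))  # 0.866
```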

49
Q

correlated sample t-test formula

A

t = Dbar / sqrt(SSd / (N(N-1)))
SSd = ΣD^2 - (ΣD)^2 / N
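A sketch of the same computation on difference scores (the scores are made-up):

```python
import math

# t = Dbar / sqrt(SSd / (N(N-1))), with SSd = sum(D^2) - (sum D)^2 / N
def correlated_t(d_scores):
    n = len(d_scores)
    dbar = sum(d_scores) / n
    ssd = sum(d * d for d in d_scores) - sum(d_scores) ** 2 / n
    return dbar / math.sqrt(ssd / (n * (n - 1)))

print(round(correlated_t([1, 2, 3]), 3))  # 3.464
```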

50
Q

independent sample t-test (same size) formula

A

t = (Xbar1 - Xbar2) / sqrt((SS1 + SS2) / (n(n-1)))

51
Q

independent sample t-test (different size)

A

t = (Xbar1 - Xbar2) / sqrt(((SS1 + SS2) / (n1 + n2 - 2)) * (1/n1 + 1/n2))
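The pooled-variance version handles equal or unequal group sizes; a sketch with made-up scores:

```python
import math

# t = (Xbar1 - Xbar2) / sqrt(((SS1 + SS2) / (n1 + n2 - 2)) * (1/n1 + 1/n2))
def independent_t(x1, x2):
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)  # sum of squares, group 1
    ss2 = sum((x - m2) ** 2 for x in x2)  # sum of squares, group 2
    pooled = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance estimate
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

print(round(independent_t([1, 2, 3], [2, 4, 6]), 3))  # -1.549
```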

52
Q

degrees of freedom t independent sample test

A

n1 + n2 - 2

53
Q

t-test for correlated groups with raw data

A

t = (Dbar - mu) / sqrt(SSd / (N(N-1)))
where SSd = ΣD^2 - (ΣD)^2 / N
MU IS ZERO