Exam 3 Flashcards

1
Q

extraneous variable

A

a variable that is not systematically manipulated in an experiment but that still may affect the behavior being observed

2
Q

repeated measures design

A

same participants participate in different treatment conditions

3
Q

independent measures design

A

two different groups of participants participate in different treatments

4
Q

null hypothesis (Ho)

A

the IV has NO EFFECT on the DV
Ho: mu1 = mu2

5
Q

alternative hypothesis (Ha)

A

the IV HAS AN EFFECT on the DV
H1: mu1 ≠ mu2 (nondirectional), or mu1 > mu2 / mu1 < mu2 (directional)

6
Q

directional

A

predicts an increase or a decrease
ONE-TAIL TEST

7
Q

nondirectional

A

states that the IV has an effect, but does not specify the direction
TWO-TAIL TEST

8
Q

alpha level (a)

A

defines the maximum probability that the result was obtained by chance alone that we will accept and still reject Ho

9
Q

p-value

A

the probability of obtaining the result (or one more extreme) if Ho is true; indicates how likely it is that the result occurred by chance alone

10
Q

type 1 error

A

you reject Ho when you should have retained it (Ho is actually true)

11
Q

type 2 error

A

you retain Ho when you should have rejected it (Ho is actually false)

12
Q

effect size vs practical significance

A

statistical significance: whether or not there was a difference and how likely it would be to occur by chance alone
effect size (practical significance): how large the difference actually was

13
Q

decision rule for obtained probability

A

obtained probability <= a -> reject Ho
obtained probability > a -> retain Ho
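A minimal sketch of this decision rule in Python (the alpha level and p-value below are made up for illustration):

alpha = 0.05       # alpha level (a) chosen before the study
p_value = 0.03     # obtained probability from the test
if p_value <= alpha:
    print("reject Ho")   # obtained probability <= a
else:
    print("retain Ho")   # obtained probability > a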

14
Q

why do we evaluate Ho first?

A

it is easier to disprove a hypothesis than to prove it, because we can never prove something with 100% certainty

15
Q

power

A

the ability to detect an effect when one is present
- value can vary from 0 to 1

16
Q

power (a priori use)

A

determine sample size necessary to detect an effect

17
Q

power (a posteriori use)

A

determining whether the sample size and research design were adequate to detect an effect

18
Q

the effect N has on power

A

N increases = power increases

19
Q

the size of real effect on power

A

effect size increases = power increases

20
Q

the effect of alpha level on power

A

larger alpha level (closer to 1) = more power
smaller alpha level (closer to 0) = less power

21
Q

explain the relationship between power and beta

A

the power of a test is the probability of rejecting Ho, given that Ho is false
- power = 1 - beta
- power + beta = 1
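For example (the beta value is made up for illustration): if beta = 0.20, then power = 1 - 0.20 = 0.80.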

22
Q

why do we never accept Ho and instead retain or reject Ho?

A

we cannot assert that Ho is true; that is why we only retain or reject Ho, never accept it

23
Q

distribution of sample means

A

the collection of sample means for all the possible random samples of a particular size that can be obtained from a population

24
Q

sampling distribution of a statistic

A

a distribution of statistics obtained by selecting all the possible samples of a specific size (n) from a population

25
Q

characteristics of distributions of sample means

A

- sample means pile up around the population mean
- the distribution of sample means is approximately normal in shape
- the larger the sample size, the closer the sample means should be to the population mean

26
Q

central limit theorem

A

when n is large, the distribution of sample means approaches a normal distribution, regardless of the shape of the original population

27
Q

critical region

A

the area under the curve that contains all the values of the statistic that allow rejection of the null hypothesis

28
Q

critical value

A

the value that bounds the critical region

29
Q

conditions for use of the one-sample z test

A

- N >= 30 (normal distribution)
- Xbar is known
- mu and the population standard deviation are known

30
Q

t test vs z test (single sample)

A

z test: needs the population mean and the population standard deviation
t test: needs the population mean and the sample standard deviation

31
Q

degrees of freedom (single sample t test)

A

the number of values that are free to vary when calculating a statistic
df = N - 1

32
Q

t distribution characteristics

A

- flatter than the normal distribution
- more spread out than the normal distribution
- more variability in the t distribution

33
Q

conditions to use t-test for single sample

A

- only 1 sample
- population mean is known but not the population SD
- sample SD is known
- sampling distribution is normal

34
Q

sampling distribution of t

A

the distribution of t values obtained when sample means are compared to the population mean and the population standard deviation is not known

35
Q

Cohen's d effect size

A

small effect: d <= 0.2
medium effect: 0.2 < d < 0.8
large effect: d >= 0.8
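For reference, one common way to estimate Cohen's d for a single-sample comparison is d = (Xbar - mu) / sampleSD; the numbers in this Python sketch are made up for illustration:

xbar, mu, s = 105.0, 100.0, 10.0   # made-up sample mean, population mean, sample SD
d = (xbar - mu) / s                # Cohen's d for a single-sample comparison
print(d)                           # 0.5 -> a medium effect by these benchmarks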
36
Q

confidence intervals

A

range of values that probably contains the population value
- always TWO-TAILED AND NONDIRECTIONAL
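One common form for the single-sample case is CI = Xbar +/- tcrit * (sampleSD / sqrt(N)), where tcrit is the two-tailed critical t for the chosen confidence level.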
37
Q

confidence limits

A

the values that bound the confidence interval

38
Q

correlated group designs

A

two sets of data with no population parameter
1. repeated measures design
2. matched subjects design

39
Q

repeated measures design

A

same participants receive every level of the IV

40
Q

matched subjects design

A

participants are matched on a specific variable to hold it constant

41
Q

assumptions for a t test for correlated groups

A

- N >= 30 (normally distributed)
- participants are the same in each condition or matched
- IV is nominal and DV is interval or ratio

42
Q

t test for independent samples

A

used to analyze the mean difference between two groups

43
Q

assumptions for t test for independent samples

A

- two sets of scores (samples)
- IV is nominal with 2 levels; DV is interval or ratio
- N >= 30
- homogeneity of variance
- each condition contains groups of separate people
- participants receive only one level of the IV

44
Q

estimated standard error of the mean difference (independent groups)

A

tells us how far apart, on average, two sample means would be from each other if the null hypothesis is true

45
Q

are t-tests or z-tests more powerful?

A

t-tests are less powerful because the critical t values are larger than the critical z values

46
Q

test decision rule

A

|tobt| > |tcrit| -> reject Ho
47
Q

z-test formula

A

z = (Xbar - mu) / (PopSD / sqrt(N))
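A quick worked example in Python (all numbers made up for illustration):

import math
xbar, mu, pop_sd, n = 108.0, 100.0, 16.0, 64   # made-up sample mean, population mean, population SD, N
z = (xbar - mu) / (pop_sd / math.sqrt(n))      # z = (Xbar - mu) / (PopSD / sqrt(N))
print(z)                                       # 4.0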
48
Q

single sample t-test formula

A

t = (Xbar - mu) / (sampleSD / sqrt(N))
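A small Python sketch of the same formula (the scores and mu are made up; statistics.stdev is the sample SD, with N - 1 in the denominator):

import math
from statistics import mean, stdev
scores = [12, 15, 11, 14, 13, 16, 12, 15]   # made-up sample
mu = 12.0                                   # made-up population mean
t = (mean(scores) - mu) / (stdev(scores) / math.sqrt(len(scores)))
print(t, len(scores) - 1)                   # t statistic (about 2.39) and df = N - 1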
49
Q

correlated sample t-test formula

A

t = Dbar / sqrt(SSd / (N(N - 1)))
where SSd = sigma(D^2) - (sigma D)^2 / N
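A Python sketch of the same computation with made-up difference scores:

import math
d_scores = [2, 3, 1, 4, 2, 3]                                 # made-up difference scores (D)
n = len(d_scores)
ss_d = sum(d * d for d in d_scores) - sum(d_scores) ** 2 / n  # SSd = sigma(D^2) - (sigma D)^2 / N
t = (sum(d_scores) / n) / math.sqrt(ss_d / (n * (n - 1)))     # t = Dbar / sqrt(SSd / (N(N - 1)))
print(t, n - 1)                                               # t statistic and df = N - 1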
50
Q

independent sample t-test (same size) formula

A

t = (Xbar1 - Xbar2) / sqrt((SS1 + SS2) / (n(n - 1)))
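A Python sketch of the equal-n formula with made-up groups:

import math
group1 = [10, 12, 11, 13, 14, 12]                        # made-up scores, condition 1
group2 = [8, 9, 10, 9, 11, 10]                           # made-up scores, condition 2
n = len(group1)                                          # same n in each group
ss1 = sum(x * x for x in group1) - sum(group1) ** 2 / n  # SS1 = sigma(X^2) - (sigma X)^2 / n
ss2 = sum(x * x for x in group2) - sum(group2) ** 2 / n
t = (sum(group1) / n - sum(group2) / n) / math.sqrt((ss1 + ss2) / (n * (n - 1)))
print(t, 2 * n - 2)                                      # t statistic and df = n1 + n2 - 2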
51
Q

independent sample t-test (different size) formula

A

t = (Xbar1 - Xbar2) / sqrt(((SS1 + SS2) / (n1 + n2 - 2)) * (1/n1 + 1/n2))
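A Python sketch of the general (unequal-n) formula with made-up groups; with equal n it reduces to the formula above:

import math
group1 = [10, 12, 11, 13, 14]                             # made-up scores, condition 1
group2 = [8, 9, 10, 9, 11, 10, 8]                         # made-up scores, condition 2
n1, n2 = len(group1), len(group2)
ss1 = sum(x * x for x in group1) - sum(group1) ** 2 / n1  # SS = sigma(X^2) - (sigma X)^2 / n
ss2 = sum(x * x for x in group2) - sum(group2) ** 2 / n2
pooled = (ss1 + ss2) / (n1 + n2 - 2)                      # pooled variance estimate
t = (sum(group1) / n1 - sum(group2) / n2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
print(t, n1 + n2 - 2)                                     # t statistic and df = n1 + n2 - 2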
52
Q

degrees of freedom (independent samples t test)

A

df = n1 + n2 - 2
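For example (sample sizes made up for illustration): with n1 = 12 and n2 = 10, df = 12 + 10 - 2 = 20.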
53
Q

t-test for correlated groups with raw data

A

t = (Dbar - mu) / sqrt(SSd / (N(N - 1)))
where SSd = sigma(D^2) - (sigma D)^2 / N
and mu IS ZERO (under Ho)