Exam 3 Flashcards

1
Q

when to use an F distribution

A

when working with more than 2 samples

2
Q

when is ANOVA used?

A

with one or more nominal independent variables (each with 2+ levels) and an interval dependent variable. Analyzes whether 2+ groups differ from each other on one or more characteristics

3
Q

why not use multiple t-tests instead of an ANOVA?

A

the probability of a Type I error (rejecting the null when the null is true) increases with each additional test; running many t-tests inflates the overall chance of a false positive

4
Q

F statistic

A

a value you get when you run an ANOVA or a regression analysis, used to find out whether the means of two or more populations are significantly different

5
Q

F distribution

A

distribution of all the possible F statistics

6
Q

F =

A

variance between-groups / variance within-groups

s^2between / s^2within
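A minimal Python sketch of this ratio, using hypothetical scores for three groups (the data and group sizes are invented for illustration):

```python
# Hypothetical example data: three groups of three scores each.
groups = [[4, 5, 6], [7, 8, 9], [1, 2, 3]]

k = len(groups)                                   # number of groups
n_total = sum(len(g) for g in groups)             # total number of scores
grand_mean = sum(x for g in groups for x in g) / n_total

# SS_between: squared deviations of each group mean from the grand mean,
# weighted by that group's sample size.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# SS_within: squared deviations of each score from its own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = k - 1          # number of groups minus 1
df_within = n_total - k     # sum of the df for each group

# F = between-groups variance / within-groups variance
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)  # 27.0 for this toy data
```

A large F here reflects group means that differ far more than the scores vary within each group.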

7
Q

variance between-groups

A

estimate of the population variance based on differences among group means

8
Q

variance within-groups

A

estimate of the population variance based on differences within sample distributions

9
Q

another way to think about F ratios

A

each score in a sample is a combination of treatment effects and individual variability or error

10
Q

if between-groups variance is 8, and within-groups variance is 2, what would F be?

A

4

11
Q

between-groups variance equation

A

s^2 = SSbetween / dfbetween

12
Q

SSbetween / dfbetween

A

between-groups variance

13
Q

dfbetween=

A

Ngroups - 1 (the number of groups minus 1)

14
Q

within-groups variance equation

A

SSwithin / dfwithin

15
Q

SSwithin / dfwithin

A

within-groups variance

16
Q

dfwithin=

A

dfgroup1 + dfgroup2 + ... + dflast (the sum of the df for each group)

17
Q

one-way ANOVA

A

1 nominal variable with 2+ levels and a scale DV

18
Q

within-groups ANOVA

A

more than 2 samples with the same participants in each sample. Also called repeated-measures ANOVA

19
Q

between-groups ANOVA

A

more than 2 samples with different participants in each sample

20
Q

homoscedasticity

A

assumption of ANOVA. Samples come from populations with the same variance

21
Q

effect size for ANOVA

A

r^2

22
Q

formula for calculating effect size for ANOVA

A

r^2 = SSbetween / SStotal
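A quick sketch of this formula in Python, with invented sums of squares (and using the fact that SStotal = SSbetween + SSwithin):

```python
# Hypothetical sums of squares from an ANOVA source table.
ss_between = 54.0
ss_within = 6.0
ss_total = ss_between + ss_within   # SS_total = SS_between + SS_within

r_squared = ss_between / ss_total
print(r_squared)  # 0.9 -> well past the .25 cutoff for a large effect
```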

23
Q

small effect size for ANOVA

A

r^2 = .01

24
Q

medium effect size for ANOVA

A

r^2 = .09

25
Q

large effect size for ANOVA

A

r^2 = .25

26
Q

post-hoc tests determine…

A

which groups are different

27
Q

when you have three groups and F is significant, how do you know where the difference(s) are?

A

post-hoc tests

28
Q

types of post-hoc tests

A

Tukey HSD, Bonferroni

29
Q

Tukey HSD test

A

widely used post hoc test that uses means and standard error

30
Q

bonferroni test

A

post-hoc test that provides a stricter critical value for every comparison of means. We use a smaller critical region to make it more difficult to reject the null: determine the number of comparisons we plan to make, then divide the p level by that number of comparisons
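The last step of the card can be sketched in a couple of lines of Python (the alpha level and comparison count are example values):

```python
# Bonferroni correction: divide the p level by the number of planned comparisons.
alpha = 0.05
n_comparisons = 3                  # e.g., all pairwise comparisons among three groups
bonferroni_alpha = alpha / n_comparisons
print(round(bonferroni_alpha, 4))  # 0.0167 per comparison
```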

31
Q

one-way within-groups ANOVA

A

same participants do something multiple times. Used when we have one IV with at least 3 levels, a scale DV, and the same participants in each group

32
Q

benefits of within-groups ANOVA

A

we reduce error due to differences between the groups. Because the same participants are in each group, the groups are identical on participant characteristics, so we can reduce within-groups variability caused by differences among the people in our study across groups

33
Q

matched groups

A

use different people who are similar on all of the variables that we want to control. We can analyze our data as if the same people were in each group, giving us additional statistical power

34
Q

two-way ANOVAs

A

used to evaluate effects of more than one IV on a DV. Used to determine individual and combined effects of the IVs

35
Q

interaction

A

occurs when 2 IVs have an effect in combination that we do not see when looking at each IV individually

36
Q

when to use Two-Way ANOVAs

A

to evaluate effects of 2 IVs, it is more efficient to do a single study than two studies with 1 IV each. Can explore interactions between variables

37
Q

cell

A

box depicting a unique combination of levels of IVs in a factorial design

38
Q

main effect

A

when one IV influences the DV

39
Q

interaction effect

A

when the effect of one IV on the DV changes as a result of the level of a second IV

40
Q

two types of interactions in ANOVA

A

quantitative, qualitative

41
Q

correlation

A

co-variation or co-relation between two variables. These variables change together. usually scale (interval or ratio) variables

42
Q

correlation coefficient

A

a statistic that quantifies a relation between two variables. Can be either + or -. Falls between -1.00 and 1.00. The value of the number (not the sign) indicates the strength of the relation

43
Q

positive correlation

A

association between variables such that high scores on one variable tend to go with high scores on the other variable. A direct relation between the variables

44
Q

negative correlation

A

association between variables such that high scores on one variable tend to go with low scores on the other variable. An inverse relation between the variables

45
Q

Pearson Correlation Coefficient

A

a statistic that quantifies a linear relation between two scale variables. Symbolized by the italic r when based on sample data, and by the Greek letter ρ (“rho”) when it is a population parameter

46
Q

psychometrics

A

used in the development of tests and measures

47
Q

psychometricians

A

use correlation to examine reliability and validity

48
Q

reliability

A

consistent measure. Particular type: test-retest reliability. Ex: how fast a pitcher can throw a baseball

49
Q

validity

A

measures what it was designed or intended to measure. Correlation is used to calculate validity, and can be used to establish validity (much more difficult than establishing reliability)

50
Q

partial correlation

A

a technique that quantifies the degree of association between two variables after statistically removing the association of a third variable. Allows us to quantify the relation between two variables, controlling for the correlation of each of these variables with a third related variable

51
Q

regression _____, correlation _____

A

predicts, describes

52
Q

simple linear regression

A

statistical tool that lets us predict an individual’s score on the DV based on the score on one IV

53
Q

linear regression

A

intercept: predicted value of Y when X = 0
slope: the amount that Y is predicted to increase for an increase of 1 in X
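A minimal sketch of what the intercept and slope mean for prediction; the coefficients here are invented for illustration:

```python
# Hypothetical regression equation: Y-hat = intercept + slope * X
intercept = 2.0   # predicted Y when X = 0
slope = 3.0       # predicted increase in Y for an increase of 1 in X

def predict(x):
    return intercept + slope * x

print(predict(0))  # 2.0 (the intercept)
print(predict(1))  # 5.0 (one slope-unit higher)
```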

54
Q

regression with z scores

A

calculate the z score, multiply the z score by the correlation coefficient, then convert the resulting z score to a raw score

55
Q

determining the regression equation

A

1) find the z score for X
2) use the z score to calculate the predicted Y value
3) convert the z score to its raw score
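The three steps above can be sketched in Python; the correlation, means, and standard deviations are hypothetical sample statistics:

```python
# Hypothetical sample statistics.
r = 0.6                    # correlation between X and Y
mean_x, sd_x = 100.0, 15.0
mean_y, sd_y = 50.0, 10.0

x = 115.0
z_x = (x - mean_x) / sd_x        # 1) z score for X
z_y_hat = r * z_x                # 2) predicted z score on Y: zY-hat = r * zX
y_hat = mean_y + z_y_hat * sd_y  # 3) convert the z score back to a raw score
print(y_hat)  # 56.0
```

Note that the predicted score (z = 0.6) sits closer to the mean than the input (z = 1.0): regression to the mean.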

56
Q

determining the regression equation: calculating the slope

A

1) find the z score for an X of 1
2) use the z score to calculate the predicted score on Y
3) convert the z score to its raw score
4) find a predicted score

57
Q

standard error of the estimate

A

indicates the typical distance between the regression line and the actual data points

58
Q

multiple regression

A

statistical technique that includes 2+ predictor variables in a prediction equation

59
Q

stepwise regression

A

a type of multiple regression in which computer software determines the order in which IVs are included in the equation. The default in many computer software programs

60
Q

strength of using stepwise regression

A

relies on data, rather than theory. Especially good when a researcher is not certain of what to expect in a study

61
Q

structural equation modeling (SEM)

A

a statistical technique that quantifies how well sample data “fit” a theoretical model that hypothesizes a set of relations among multiple variables. Encourages researchers to think of variables as a series of connections

62
Q

chi square test is a ____ test

A

nonparametric

63
Q

when to use nonparametric tests

A
  • DV is nominal
  • either the DV or IV is ordinal
  • when sample size is small
  • when underlying pop isn’t normal
64
Q

limitations of nonparametric tests

A
  • can’t easily use confidence intervals or effect sizes
  • have less statistical power than parametric tests
  • nominal and ordinal data provide less info
  • more likely to commit Type II error
65
Q

chi-square test for goodness-of-fit

A

nonparametric test when we have 1 nominal variable. Determines whether or not the observed categories are similar or different from the hypothesized relative frequencies within those same categories
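A minimal sketch of the statistic itself, with invented observed counts compared against hypothesized equal frequencies:

```python
# Hypothetical counts for one nominal variable with three categories.
observed = [30, 20, 10]
expected = [20, 20, 20]   # hypothesized relative frequencies (equal here)

# Chi-square: sum of (observed - expected)^2 / expected across categories.
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_square)  # 10.0
```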

66
Q

chi-square test for independence

A

nonparametric test when we have 2 nominal variables. Determines whether the first variable is related to the second variable or not

67
Q

Cramer’s V (phi)

A

the effect size for chi-square test for independence

68
Q

when looking at an ANOVA source table, what value is of interest to researchers?

A

between-groups F

69
Q

if lines are separate but parallel when you draw lines to connect bars in a bar graph of a factorial ANOVA

A

there is a main effect

70
Q

if lines intersect when you draw lines to connect bars in a bar graph of a factorial ANOVA

A

there is an interaction

71
Q

MANOVA (multivariate analysis of variance)

A

ANOVA in which there is more than one dependent variable

72
Q

ANCOVA (analysis of covariance)

A

ANOVA that statistically subtracts the effect of a possible confounding variable

73
Q

MANCOVA (multivariate analysis of covariance)

A

an ANOVA with multiple dependent variables and the inclusion of a covariate

74
Q

quantitative interaction

A

treatment effect varies in magnitude, but is always the same direction

75
Q

qualitative interaction

A

treatment effect changes direction

76
Q

marginal mean

A

the mean of a row or a column in a table that shows the cells of a study with a two-way ANOVA design

77
Q

standardized regression coefficient

A

a standardized version of the slope in a regression equation. The predicted change in the dependent variable in terms of standard deviations for an increase of 1 standard deviation in the independent variable

78
Q

orthogonal variable

A

an independent variable that makes a separate and distinct contribution to the prediction of a dependent variable, as compared with the contributions of another variable

79
Q

multiple regression

A

statistical technique that includes 2+ predictor variables in a prediction equation

80
Q

hierarchical multiple regression

A

a type of multiple regression in which the researcher adds independent variables into the equation in an order determined by theory

81
Q

factorial ANOVA

A

catch-all phrase for two-way, three-way, and higher-order ANOVAs

82
Q

factor

A

a term used to describe an independent variable in a study with more than one independent variable

83
Q

when the p value is small (< .05)

A

strong evidence against the null

84
Q

when the p value is large (> .05)

A

weak evidence against the null

85
Q

z score

A

measure of how many standard deviations below or above the population mean a raw score is. Used in calculating regression lines

86
Q

standardized regression equation

A

zY hat = (rxy)(zx)

87
Q

regression to the mean

A

the tendency of scores that are particularly high or low to drift toward the mean over time

88
Q

The predicted z-score on the dependent variable will always be ____ to its mean than the z-score for the independent variable

A

closer

Why? Regression to the mean.

89
Q

proportionate reduction in error

A

also called the coefficient of determination. Quantifies how much more accurate our predictions are when we use a regression line vs. the mean
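A sketch of the comparison the card describes, with invented error totals:

```python
# Hypothetical squared-error totals.
ss_total = 100.0   # error when predicting every Y with the mean of Y
ss_error = 40.0    # error when predicting with the regression line

# Proportionate reduction in error (coefficient of determination).
prop_reduction = (ss_total - ss_error) / ss_total
print(prop_reduction)  # 0.6 -> the regression line cuts prediction error by 60%
```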

90
Q

adjusted standardized residuals

A

the difference between the observed frequency and the expected frequency for a cell in a chi-square research design, divided by the standard error. Also called adjusted residual