Prep for Psych 201 Exam 3 Flashcards
A type of ANOVA used for an independent-samples (between-subjects) design
One-way ANOVA
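The between-groups vs. within-groups logic of a one-way ANOVA can be sketched by hand. A minimal illustration with three small made-up groups (k = 3):

```python
# Hand-computed one-way ANOVA F-ratio on three made-up groups (k = 3).
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]

n = len(groups[0])                          # scores per group
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)
group_means = [sum(g) / n for g in groups]

# Between-group SS: each group mean compared to the grand mean.
ss_between = sum(n * (m - grand_mean) ** 2 for m in group_means)
# Within-group SS: each score compared to its own group mean.
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

df_between = len(groups) - 1                 # k - 1
df_within = len(all_scores) - len(groups)    # N - k
F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # 7.0 for these made-up data
```

A larger F means the between-group (treatment) variance is large relative to the within-group (error) variance.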
Compares a single independent variable with 2+ conditions measured on the same participants (paired-samples/within-subjects design)
Repeated measures ANOVA
Compares the effects of 2 independent variables on a single measured (dependent) variable
Two-way ANOVA/Factorial ANOVA
Designs that include more than one independent variable (or factor) and a single measured variable. Yield two kinds of information: main effects and interactions
Factorial designs
The effect of an independent variable on a dependent variable. Tested by analyzing the marginal means.
Main effect
Situation in which the effect of one independent variable on the dependent variable changes, depending on the level of another independent variable. Two types: crossover and spreading
Interaction
An interaction in which the effect of one IV reverses direction across the levels of the other. Ex: “It depends…”
Crossover interaction
An interaction in which an effect is present (or stronger) at one level of the other IV and weaker or absent at another. Ex: “Especially when…”
Spreading interaction
Arithmetic means for each level of one IV averaged across the levels of the other IV. Ex: there may be a significant main effect for A but no main effect for B.
Marginal means
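Marginal means and the interaction check can be illustrated with a 2×2 table of made-up cell means:

```python
# Made-up cell means for a 2x2 factorial design:
# rows = levels of factor A, columns = levels of factor B.
cell_means = [[10, 20],   # A1B1, A1B2
              [30, 40]]   # A2B1, A2B2

# Marginal means: average each level of one factor across the other factor.
marginal_A = [sum(row) / 2 for row in cell_means]        # [15.0, 35.0]
marginal_B = [sum(col) / 2 for col in zip(*cell_means)]  # [20.0, 30.0]

# Interaction check: does the effect of B change across levels of A?
effect_of_B_at_A1 = cell_means[0][1] - cell_means[0][0]  # 10
effect_of_B_at_A2 = cell_means[1][1] - cell_means[1][0]  # 10
interaction = effect_of_B_at_A2 - effect_of_B_at_A1      # 0 -> no interaction
```

Here both marginal comparisons differ (possible main effects of A and B), but the effect of B is the same at every level of A, so there is no interaction.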
Inferential statistical technique for comparing means by comparing variances and for assessing interactions. Tests systematic variance between groups against unsystematic variance within the groups themselves. Uses categorical data from a predictor variable (groups)
Analysis of Variance (ANOVA)
The variable (independent or quasi-independent) that designates the groups being compared
Factor
The individual conditions or values that make up the factor.
Level
Ex: k = 3, there are 3 groups
k = number of levels
The measured variable (continuous)
Dependent/outcome variable
Differences between the sample means that are caused by the type of treatment (e.g., differences in learning performance across treatment conditions)
Systematic Treatment Differences
Differences that exist even if there is no treatment effect. Includes individual differences and experimental error
Random, unsystematic differences
Source of variability between groups. The group mean is compared to the grand mean.
Between-group variability
Source of variability within groups. Each individual score is compared to the mean of the group to which it belongs.
Within-group variability
Additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not.
Post-hoc tests
Used for expressing the degree of relationship between two continuous variables (X and Y) that are measured as they naturally occur. Typically used in non-experimental research designs.
Correlation
Figure used to visualize correlations in which each dot represents a single person’s score on both X and Y
Scatterplot
Positive or negative correlation.
Direction of the Relationship
Linear or curvilinear
Form of the Relationship
Ranges from no correlation to a perfect correlation. The more tightly the dots cluster around a line, the stronger the relationship.
Strength of the relationship
Two variables change in the same direction at the same time (as one increases, the other increases)
Positive correlation
Two variables change in opposite directions at the same time (as one increases, the other decreases)
Negative correlation
No relationship or systematic variation between variables
Zero order
Relationships are fit with a straight line
Linear relationship
Relationships are modeled along a curve
Curvilinear relationship
If two variables are related, it is possible to use one variable to make a prediction about the other. Ex: if self-esteem tends to be lower at younger ages and higher at older ages, a person’s age can be used to predict their self-esteem.
Prediction
Relationship between a recently developed test and another test measuring the same construct. Ex: personality tests such as the Myers-Briggs, the Big Five, and the Enneagram.
Validity
Relationship between two sets of measurements taken with the same instrument. Ex: taking the Big Five personality test at age 15 and again at age 28 should yield similar, though not necessarily identical, results.
Reliability
Many theories make predictions about the relationship between two variables, and measuring that relationship is a way to test the theory. Ex: learning theory states that behavior is learned through experience, so measuring fear of heights on a 1–10 scale alongside relevant experiences can help verify the theory and show where it applies.
Theory verification
Measures the degree of and direction of linear relationship between two variables. It is the most commonly used correlation coefficient and serves as both a descriptive and inferential statistic.
Pearson’s r
Variability of scores around the mean (average) for a single variable.
Variance
A modified version of the variance formula, applied to two variables together
Covariance
Measures the relationship between two variables and the extent to which they change together (variables X and Y)
Covariance (COVxy)
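The link between variance, covariance, and Pearson's r can be worked through by hand. A minimal sketch with made-up scores:

```python
# Covariance and Pearson's r computed by hand on made-up X and Y scores.
x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Covariance: average cross-product of deviations (sample formula, n - 1).
cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)  # 2.0

# Variance of each variable: the one-variable version of the same formula.
var_x = sum((a - mx) ** 2 for a in x) / (n - 1)                    # 2.5
var_y = sum((b - my) ** 2 for b in y) / (n - 1)                    # 2.5

# Pearson's r standardizes the covariance by the two standard deviations.
r = cov_xy / (var_x ** 0.5 * var_y ** 0.5)                         # 0.8
```

Note how the covariance formula is just the variance formula with one of the squared deviations swapped for the other variable's deviation.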
When an observed correlation between two variables (X and Y) can be explained by a third variable (Z)
third variable problem
The problem of not knowing whether X causes Y or Y causes X (which came first)
Directionality problem
Statistical technique for finding the best-fitting straight line for a set of data. Used to model the relationship between two continuous variables and to predict one from the other (forecasting). Ex: given a value for variable X, what would Y be?
Simple Linear Regression
Minimizes the distance between each actual Y value and the Y value predicted by the regression line; i.e., minimizes the sum of the squared residuals
Line of Best Fit
Variance explained by the model (regression line). Distance from the prediction line to the mean. (Predicted Y - Mean Y)
SSreg(SSmodel)
Error. Distance from actual scores to the predicted line. (Y - Predicted Y)
SSres(SSerror)
Total variance (for Y). Distance from the observed score to the mean. (Y - Mean Y)
SSy(SStotal)
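The three sums of squares above partition the total variance: SStotal = SSreg + SSres. A hand-worked sketch with made-up scores:

```python
# SStotal = SSreg + SSres for a hand-fit least-squares line (made-up data).
x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Least-squares slope and intercept.
b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
a0 = my - b * mx
y_hat = [a0 + b * a for a in x]   # predicted Y values

ss_total = sum((c - my) ** 2 for c in y)              # (Y - mean Y)^2      -> 10.0
ss_reg = sum((p - my) ** 2 for p in y_hat)            # (pred Y - mean Y)^2 -> 6.4
ss_res = sum((c - p) ** 2 for c, p in zip(y, y_hat))  # (Y - pred Y)^2      -> 3.6
```

For these data ss_reg / ss_total = 0.64, which equals r² (r = 0.8 here): the proportion of Y's variance explained by the model.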
Uses continuous data from a predictor variable
Regression
Multiple X variables as predictors of a Y variable. Used to obtain more accurate predictions of Y and explain more variance. Including more predictors can help address the third variable problem by statistically controlling for those variables.
Multiple regression
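A minimal multiple-regression sketch using NumPy's least-squares solver, with made-up data constructed so the true weights are known:

```python
import numpy as np

# Made-up data generated so that y = 1 + 2*x1 + 3*x2 exactly;
# least squares should therefore recover those weights.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = 1 + 2 * x1 + 3 * x2

# Design matrix: a column of 1s for the intercept, then each predictor.
X = np.column_stack([np.ones_like(x1), x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs)  # approximately [1. 2. 3.]
```

Each coefficient is the predicted change in Y per unit change in that predictor, holding the other predictor constant.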
Inferential statistic for testing the distribution of, and differences between, categorical variables. Uses only nominal variables; because means and variances cannot be calculated for categorical data, it tests relationships between groups using frequency tables.
chi-square test
A test in which no population parameter is being estimated or tested against (it works only with the sample frequencies)
non-parametric test
Used to test whether the frequency distribution of a single categorical variable is different from expectations.
Goodness of Fit Test
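The goodness-of-fit statistic is easy to compute by hand. A sketch with made-up counts, testing whether 60 observations are spread evenly across three categories:

```python
# Chi-square goodness-of-fit on made-up frequency counts.
observed = [18, 22, 20]
expected = [sum(observed) / len(observed)] * len(observed)  # 20 per category under H0

# Sum of (observed - expected)^2 / expected over all categories.
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1   # categories - 1
print(chi_square, df)    # 0.4 2
```

The small chi-square here means the observed frequencies are close to the expected (uniform) distribution.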
Used to test whether two categorical variables are related to each other.
Test for Independence
Shows the frequency of each category in one variable, contingent upon the specific level of the other variable.
Contingency table
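The test for independence applies the same chi-square formula to a contingency table, with expected frequencies built from the row and column totals. A sketch with a made-up 2×2 table:

```python
# Chi-square test for independence on a made-up 2x2 contingency table.
table = [[10, 20],
         [20, 10]]

row_totals = [sum(row) for row in table]        # [30, 30]
col_totals = [sum(col) for col in zip(*table)]  # [30, 30]
grand_total = sum(row_totals)                   # 60

# Expected frequency per cell: (row total * column total) / grand total.
chi_square = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total  # 15 in every cell
        chi_square += (observed - expected) ** 2 / expected

df = (len(table) - 1) * (len(table[0]) - 1)  # (rows - 1)(cols - 1) = 1
print(round(chi_square, 3), df)              # 6.667 1
```

A large chi-square relative to its df suggests the two categorical variables are related rather than independent.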