Statistics and Research Design Flashcards
Alpha
Alpha determines the probability of rejecting the null hypothesis when it is true; i.e., the probability of making a Type I error. The value of alpha is set by the experimenter prior to collecting or analyzing the data. In psychological research, alpha is commonly set at .01 or .05.
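To make the definition concrete, here is a minimal simulation sketch (hypothetical data; assumes numpy and scipy are available): when the null hypothesis is true, roughly alpha (here, 5%) of significance tests reject it by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
rejections = 0

for _ in range(n_experiments):
    # Both samples are drawn from the same population, so the null hypothesis is true.
    a = rng.normal(loc=100, scale=15, size=30)
    b = rng.normal(loc=100, scale=15, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1  # rejecting a true null hypothesis = Type I error

print(rejections / n_experiments)  # expected to be close to alpha (.05)
```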
Chi-Square Test (Single-Sample And Multiple-Sample)
The chi-square test is a nonparametric statistical test that is used with nominal data (or data that are being treated as nominal data) - i.e., when the data to be compared are frequencies in each category. The single-sample chi-square test is used when the study includes one variable; the multiple-sample chi-square test when it includes two or more variables. (When counting variables for the chi-square test, independent and dependent variables are both included.)
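As an illustration, the sketch below uses scipy with hypothetical frequency data to run a single-sample (goodness-of-fit) test and a multiple-sample (contingency-table) test.

```python
import numpy as np
from scipy import stats

# Single-sample chi-square: observed frequencies for one variable's three categories,
# compared against equal expected frequencies (scipy's default).
observed = np.array([30, 50, 20])
chi2, p = stats.chisquare(f_obs=observed)

# Multiple-sample chi-square: a 2 x 3 contingency table for two nominal variables.
table = np.array([[10, 20, 30],
                  [20, 25, 15]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(p)
```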
Cluster Analysis
Cluster analysis is a multivariate technique that is used to group people or objects into a smaller number of mutually exclusive and exhaustive subgroups (clusters) based on their similarities - i.e., to group people or objects so that the identified subgroups have within-group homogeneity and between-group heterogeneity.
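A minimal k-means sketch with scikit-learn and made-up scores (hierarchical/agglomerative clustering is another common choice) shows the idea of assigning each case to exactly one cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 60 hypothetical "people" measured on two variables, drawn from three loose groups
scores = np.vstack([
    rng.normal([0, 0], 1.0, size=(20, 2)),
    rng.normal([5, 5], 1.0, size=(20, 2)),
    rng.normal([0, 5], 1.0, size=(20, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
print(kmeans.labels_)           # mutually exclusive cluster membership for each case
print(kmeans.cluster_centers_)  # within-group homogeneity around these centroids
```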
Cross-Validation/Shrinkage
Cross-validation refers to validating a correlation coefficient (e.g., a criterion-related validity coefficient) on a new sample. Because the same chance factors operating in the original sample are not operating in the subsequent sample, the correlation coefficient tends to “shrink” on cross-validation. In terms of the multiple correlation coefficient (R), shrinkage is greatest when the original sample is small and the number of predictors is large.
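The small simulation below (hypothetical predictors and criterion with no true relationship; assumes numpy and scikit-learn) illustrates shrinkage: R-squared computed in a small derivation sample with many predictors capitalizes on chance and drops when the equation is applied to a new sample.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_predictors = 10
X_small, y_small = rng.normal(size=(25, n_predictors)), rng.normal(size=25)  # derivation sample
X_new, y_new = rng.normal(size=(25, n_predictors)), rng.normal(size=25)      # cross-validation sample

model = LinearRegression().fit(X_small, y_small)
print(model.score(X_small, y_small))  # R^2 in the original sample (inflated by chance)
print(model.score(X_new, y_new))      # shrunken (near-zero or negative) R^2 on new data
```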
Discriminant Function Analysis
Discriminant function analysis is the appropriate multivariate technique when two or more continuous predictors will be used to predict or estimate a person’s status on a single discrete (nominal) criterion.
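A hedged sketch with scikit-learn's LinearDiscriminantAnalysis and hypothetical scores: two continuous predictors are used to estimate each person's status on a nominal (two-category) criterion.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Predictors: e.g., two test scores; criterion: group membership (0 = not hired, 1 = hired)
X = np.vstack([rng.normal([50, 50], 10, size=(30, 2)),
               rng.normal([70, 65], 10, size=(30, 2))])
y = np.array([0] * 30 + [1] * 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[60, 55]]))  # predicted group for a new person
```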
Effect Size
An effect size is a measure of the magnitude of the relationship between independent and dependent variables and is useful for interpreting the relationship’s clinical or practical significance (e.g., for comparing the clinical effectiveness of two or more treatments). Several methods are used to calculate an effect size, including Cohen’s d (which indicates the difference between two groups in terms of standard deviation units) and eta squared (which indicates the percent of variance in the dependent variable that is accounted for by variance in the independent variable).
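The sketch below works through Cohen's d and eta squared by hand with made-up scores (numpy only), matching the formulas described above.

```python
import numpy as np

treatment = np.array([12., 14., 15., 11., 13., 16.])
control   = np.array([10.,  9., 11., 12.,  8., 10.])

# Cohen's d: mean difference expressed in pooled-standard-deviation units
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

# Eta squared: proportion of total DV variance accounted for by group membership
all_scores = np.concatenate([treatment, control])
grand_mean = all_scores.mean()
ss_between = n1 * (treatment.mean() - grand_mean) ** 2 + n2 * (control.mean() - grand_mean) ** 2
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(round(d, 2), round(eta_squared, 2))
```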
Experimental Research (True and Quasi-Experimental)
Experimental research involves conducting an empirical study to test hypotheses about the relationships between independent and dependent variables. A true experimental study permits greater control over experimental conditions, and its “hallmark” is random assignment to groups. A quasi-experimental study permits less control.
Experimentwise Error Rate
The experimentwise error rate (also known as the familywise error rate) is the probability of making at least one Type I error across all of the statistical comparisons conducted in a study. As the number of statistical comparisons in a study increases, the experimentwise error rate increases.
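A quick arithmetic sketch (assuming the comparisons are independent and alpha is .05 for each): the experimentwise error rate is approximately 1 - (1 - alpha)^c for c comparisons.

```python
alpha = 0.05
for c in (1, 3, 5, 10):
    experimentwise = 1 - (1 - alpha) ** c
    print(c, round(experimentwise, 3))  # 1 -> .05, 3 -> .143, 5 -> .226, 10 -> .401
```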
External Validity (Pretest Sensitization, Reactivity, Multiple Treatment Interference)
External validity refers to the degree to which a study’s results can be generalized to other people, settings, conditions, etc. Threats include pretest sensitization (which occurs when pretesting affects how subjects react to the treatment), reactivity (which occurs when subjects respond differently to a treatment because they know they are participating in a research study), and multiple treatment interference (which occurs when subjects receive more than one level of an IV). Counterbalancing can be used to control multiple treatment interference and involves administering the different levels of the IV to different groups of subjects in different orders.
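The snippet below is a minimal counterbalancing sketch (hypothetical IV levels): with complete counterbalancing, each group of subjects receives the levels in a different order.

```python
from itertools import permutations

levels = ["A", "B", "C"]             # three hypothetical levels of the IV
orders = list(permutations(levels))  # complete counterbalancing: all 6 possible orders
for group, order in enumerate(orders, start=1):
    print(f"Group {group}: {' -> '.join(order)}")
```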
Factorial ANOVA
The factorial ANOVA is the appropriate statistical test when a study includes two or more IVs (i.e., when the study has used a factorial design) and a single DV that is measured on an interval or ratio scale. It is also referred to as a two-way ANOVA, three-way ANOVA, etc., with the words “two” and “three” referring to the number of IVs.
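A minimal two-way ANOVA sketch using statsmodels with made-up data (two IVs and one interval-scale DV); the resulting ANOVA table reports both main effects and the interaction.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: IVs = treatment and gender; DV = score
data = pd.DataFrame({
    "treatment": ["drug", "drug", "placebo", "placebo"] * 5,
    "gender":    ["f", "m"] * 10,
    "score":     [23, 20, 15, 14, 25, 22, 16, 13, 24, 21,
                  17, 15, 26, 19, 14, 16, 22, 23, 15, 12],
})

model = ols("score ~ C(treatment) * C(gender)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction effect
```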
Factorial Design (Main And Interaction Effects)
Factorial designs are research designs that include two or more “factors” (independent variables). They permit the analysis of main and interaction effects: A main effect is the effect of a single IV on the DV, while an interaction occurs when the effect of one IV on the DV differs at different levels of another IV.
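The arithmetic sketch below uses hypothetical cell means from a 2 x 2 design to show how marginal means reveal a main effect and how differing simple effects reveal an interaction.

```python
# Hypothetical cell means (IV A: therapy vs. no therapy; IV B: medication vs. none)
cell_means = {
    ("therapy", "medication"): 80,
    ("therapy", "none"):       70,
    ("no therapy", "medication"): 60,
    ("no therapy", "none"):       40,
}

# Main effect of therapy: compare marginal means, averaging over medication
therapy_mean    = (cell_means[("therapy", "medication")] + cell_means[("therapy", "none")]) / 2
no_therapy_mean = (cell_means[("no therapy", "medication")] + cell_means[("no therapy", "none")]) / 2
print(therapy_mean - no_therapy_mean)  # 75 - 50 = 25

# Interaction: the effect of medication differs across levels of therapy (10 vs. 20)
print(cell_means[("therapy", "medication")] - cell_means[("therapy", "none")])        # 10
print(cell_means[("no therapy", "medication")] - cell_means[("no therapy", "none")])  # 20
```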
Independent and Dependent Variables
The independent variable (IV) is the variable that is believed to have an effect on the dependent variable and is varied or manipulated by the researcher in an experimental research study. Each independent variable in a study must have at least two levels. The dependent variable (DV) is the variable that is believed to be affected by the independent variable and is observed and measured.
Internal Validity (Maturation, History, Statistical Regression, Selection)
Internal validity refers to the degree to which a research study allows an investigator to conclude that observed variability in a dependent variable is due to the independent variable rather than to other factors. Maturation is one threat to internal validity and occurs when a physical or psychological process or event occurs as the result of the passage of time (e.g., increasing fatigue, decreasing motivation) and has a systematic effect on subjects’ status on the DV. History is a threat when an event that is external to the research study affects subjects’ performance on the DV in a systematic way. Statistical regression is a threat when subjects are selected to participate because of their extreme status on the DV or a measure that correlates with the DV and refers to the tendency of extreme scores to “regress to the mean” on retesting. Selection threatens internal validity when groups differ at the beginning of the study because of the way subjects were assigned to groups and is a potential threat whenever subjects are not randomly assigned to groups.
Interval Recording/Event Sampling
Interval recording is a method of behavioral sampling that involves dividing a period of time into discrete intervals and recording whether the behavior occurs in each interval. It is particularly useful for behaviors that have no clear beginning or end. Event sampling is a method of behavioral sampling that is useful for behaviors that are rare or that leave a permanent product. It involves recording each occurrence of a behavior during a predefined or preselected event.
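A minimal interval-recording sketch with hypothetical observation times: a 60-second observation is divided into 10-second intervals, and each interval is scored for whether the behavior occurred at any point within it (partial-interval recording).

```python
behavior_times = [3, 4, 18, 41, 44, 45, 57]  # seconds at which the behavior was observed

interval_length = 10
n_intervals = 6
scored = []
for i in range(n_intervals):
    start, end = i * interval_length, (i + 1) * interval_length
    scored.append(any(start <= t < end for t in behavior_times))

print(scored)                     # [True, True, False, False, True, True]
print(sum(scored) / n_intervals)  # proportion of intervals in which the behavior occurred
```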
LISREL
LISREL is a structural equation (causal) modeling technique that is used to verify a predefined causal model or theory. It is more complex than path analysis: it allows two-way (non-recursive) paths and takes into account observed variables, the latent traits they are believed to measure, and the effects of measurement error.
MANOVA (Multivariate Analysis of Variance)
The MANOVA is a form of the ANOVA that is used when a study includes one or more IVs and two or more DVs that are each measured on an interval or ratio scale. Use of the MANOVA helps reduce the experimentwise error rate and increases power by simultaneously analyzing the effects of the IV(s) on all of the DVs.
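A hedged sketch using statsmodels' MANOVA with made-up data: one IV (group) and two interval-scale DVs (anxiety, depression) are analyzed simultaneously.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one IV (group) and two DVs (anxiety, depression)
data = pd.DataFrame({
    "group":      ["treatment"] * 10 + ["control"] * 10,
    "anxiety":    [12, 10, 9, 11, 13, 8, 10, 9, 12, 11,
                   18, 16, 17, 19, 15, 20, 18, 17, 16, 19],
    "depression": [14, 13, 12, 15, 13, 11, 14, 12, 13, 15,
                   20, 19, 21, 18, 22, 19, 20, 21, 18, 22],
})

manova = MANOVA.from_formula("anxiety + depression ~ group", data=data)
print(manova.mv_test())  # multivariate test statistics (Wilks' lambda, Pillai's trace, etc.)
```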
Measures of Central Tendency (Mean, Median, Mode)
The mean, median, and mode are the most commonly used measures of central tendency. The mean is the arithmetic average of a set of scores, and it can be used when scores represent an interval or ratio scale. The median is the middle score in a distribution when scores have been ordered from lowest to highest. It is used with ordinal data (and with interval and ratio data when the distribution is skewed or contains one or a few outliers). Finally, the mode is the most frequently occurring score or category, and it is used as a measure of central tendency for nominal variables or variables that are being treated as nominal variables.
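A quick sketch with Python's built-in statistics module and made-up scores shows how an outlier distorts the mean but not the median.

```python
import statistics

scores = [2, 3, 3, 4, 5, 6, 30]  # the outlier (30) pulls the mean above the median

print(statistics.mean(scores))    # about 7.57 (distorted by the outlier)
print(statistics.median(scores))  # 4 (better summary for this skewed distribution)
print(statistics.mode(scores))    # 3 (most frequently occurring score)
```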