PSYC 523: Statistics Flashcards
ANOVA
Analysis of variance: a statistical technique used to test whether the means of three or more groups differ significantly from one another. It determines whether a significant difference exists somewhere among the groups but does not reveal where that difference lies; follow-up (post hoc) tests are needed to locate it.
Clinical example: A group of psychiatric patients is trying three different therapies: counseling, medication, and biofeedback. You want to see if one therapy is more efficacious than the others. You gather data and run an ANOVA on the three groups (counseling, medication, and biofeedback) to see if there is a significant difference between any of them.
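A minimal sketch of how an ANOVA like this could be run in Python with scipy.stats (the scores below are hypothetical illustration data, not results from a real study):

```python
from scipy import stats

# Hypothetical symptom-improvement scores for each therapy group
counseling  = [12, 15, 11, 14, 13, 16]
medication  = [18, 17, 20, 16, 19, 21]
biofeedback = [13, 14, 12, 15, 16, 13]

# One-way ANOVA: tests whether at least one group mean differs from the others
f_stat, p_value = stats.f_oneway(counseling, medication, biofeedback)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant result says the groups differ somewhere, but not where;
# post hoc comparisons (e.g., Tukey's HSD) are needed to locate the difference.
```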
Clinical v. statistical significance
Clinical significance refers to the meaningfulness of change in a client’s life due to the treatment. Do the patient’s symptoms reduce in a meaningful or noticeable way? Does the quality of life improve for the patient?
Statistical significance refers to whether a treatment's impact on an outcome variable of interest is unlikely to be due to chance alone. A treatment can be statistically significant in research but not clinically significant.
Clinical example: If a randomized controlled trial does not show that a treatment is more effective than no treatment or a placebo, but that treatment produces a meaningful difference in a client’s life, it could be said to have clinical but not statistical significance.
You are trying to decide between two treatments for your client with treatment-resistant depression. One has demonstrated high clinical significance and high statistical significance in RCTs. The other shows high statistical significance but low clinical significance. You choose the one with high clinical significance because clinical significance reflects treatment efficacy from the patient's perspective.
Construct validity
In research design, construct validity is the degree to which a test or study measures the qualities or the constructs that it is claiming to measure.
There are two ways of collecting evidence for construct validity, both of which are statistical procedures: convergent validity is how strongly a measure of a construct correlates with other well-established measures of that same construct, and divergent validity is how weakly it correlates with measures of other constructs.
In order to have high construct validity, a test should correlate highly with measures of the same construct (convergent validity) and not correlate highly with measures of other constructs (divergent validity).
Clinical example: If people score significantly differently on a new test designed to measure intelligence compared to a recognized test of intelligence, the new test may be lacking construct validity.
A group of researchers create a new test to measure depression. They want to ensure that the test has construct validity, in that it actually measures the construct of depression. To do this, they measure how strongly the test correlates with the Beck Depression Inventory and how weakly it correlates with a measure of a different construct, such as anxiety.
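A minimal sketch of how convergent and divergent validity evidence might be gathered with correlations in Python (the scores here are hypothetical; real validation would use actual test data):

```python
from scipy import stats

# Hypothetical scores from the same participants on three measures
new_depression_test = [10, 14, 8, 20, 16, 12, 18, 9]
beck_depression     = [11, 15, 9, 22, 17, 13, 19, 10]   # established measure of the same construct
anxiety_measure     = [30, 12, 25, 14, 28, 22, 11, 27]  # measure of a different construct

# Convergent validity: the new test should correlate highly with the established measure
r_convergent, _ = stats.pearsonr(new_depression_test, beck_depression)

# Divergent validity: it should correlate weakly with a measure of a different construct
r_divergent, _ = stats.pearsonr(new_depression_test, anxiety_measure)

print(f"convergent r = {r_convergent:.2f}, divergent r = {r_divergent:.2f}")
```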
Content validity
In research design, content validity is the degree to which a measure or study includes all of the facets/aspects of the construct that it is attempting to measure. Content validity cannot be measured empirically but is rather assessed through logical analysis.
Clinical example: A depression scale may lack content validity if it only assesses the affective dimension of depression (emotion-related: decreased happiness, apathy, hopelessness) but fails to take into account the behavioral dimension (sleeping more or less, eating more or less, energy changes, etc.). Because of this, therapists end up using other scales.
Correlation v. causation
In the context of research, correlation means that a relationship exists between two variables. This relationship can be positive or negative; the correlation coefficient falls between -1.00 and +1.00. Causation means that a change in one variable produces a change in the other variable. Causality is usually determined via controlled studies, in which you can isolate the variables you want to examine and control for extraneous variables. Correlation does not indicate causation.
Clinical example: A study found that minutes spent exercising correlated with lower depression levels. This study was able to show that depression levels and exercise were correlated, but could not go so far as to claim that one causes the other.
Correlational research
Research method that examines the potential for relationships between variables that might logically seem to be related. The technique identifies a mathematical relationship and does not establish causal factors.
- Produces a correlation coefficient, which ranges from -1.0 to +1.0 depending on the strength and direction of the relationship between the two variables
- Very common in psychological research; usually cost-effective
- PROS - inexpensive, produces a wealth of data, encourages future research; precursor to an experiment determining causation
- CONS - cannot establish causation or control for confounds
- Statistical tests include Pearson, Spearman, & point-biserial
Clinical example: Shelia’s patient Donna suffers from an anxiety disorder. She brings Shelia an article claiming that eating out of plastic containers causes cancer. After reading the article, Shelia explains that the study referenced in the article is a correlational study, which only shows that there is a relationship between eating out of plastic containers and cancer, but it does not prove that eating out of plastic containers causes cancer.
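A minimal sketch of the correlational tests named above (Pearson, Spearman, and point-biserial), run in Python with scipy on made-up data:

```python
from scipy import stats

# Hypothetical data: weekly hours of exercise, depression scores, and smoking status
hours_exercised  = [0, 1, 2, 3, 4, 5, 6, 7]
depression_score = [22, 20, 19, 17, 15, 14, 12, 10]
smoker           = [1, 1, 0, 1, 0, 0, 1, 0]  # dichotomous variable for the point-biserial

r, p = stats.pearsonr(hours_exercised, depression_score)        # linear relationship
rho, p_rho = stats.spearmanr(hours_exercised, depression_score) # rank-order relationship
r_pb, p_pb = stats.pointbiserialr(smoker, depression_score)     # dichotomous vs. continuous

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, point-biserial r = {r_pb:.2f}")
```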
Cross-sectional design
A type of research that simultaneously compares individuals of different ages at one specific point in time. This type of design is very common and used in online surveys.
- Groups can be compared across a variety of dependent variables
- Advantages include collecting large amounts of data in a short amount of time & low cost
- Drawbacks include the inability to infer causation (because it is just a snapshot)
- Considered a quasi-experimental design (participants are not selected randomly; they are selected based on age)
EXAMPLE: George was looking to study the difference in peer relations and self-esteem in various age groups. He decided to use a cross-sectional design comparing 6-year-olds, 12-year-olds, 18-year-olds, and 25-year-olds.
EXAMPLE: You’re treating someone with depression. He is having a hard time finding the energy to carry out daily activities. The therapist shows him a cross-sectional study looking at depression levels and the use of behavioral activation, specifically the effectiveness of taking daily walks to increase energy level. The therapist explains that people who walk daily have been shown to have lower depression and higher energy levels, especially in his age group.
Dependent t-test
In psychological research, a type of statistical analysis that compares the means of two groups where the values in one sample are linked to the values in the other sample. Because the same (or matched) subjects are carried across the test (matched pairs or repeated measures), the two sets of scores are dependent on one another.
- Used when the design involves matched pairs or repeated measures, and only two conditions of the independent variable
- It is called “dependent” because the subjects carry across the manipulation; they take with them personal characteristics that impact the measurement at both points, so the measurements are “dependent” on those characteristics.
Clinical example: A researcher wants to determine the effects of caffeine on memory. They administer a memory test to a group of subjects, have the subjects consume caffeine, and then administer another memory test. Because they used the same subjects, this is a repeated-measures experiment that requires a dependent t-test during statistical analysis.
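A minimal sketch of a dependent (paired-samples) t-test for the caffeine-and-memory example, in Python with scipy (scores are hypothetical):

```python
from scipy import stats

# Memory-test scores from the SAME subjects before and after consuming caffeine
before_caffeine = [14, 17, 12, 15, 16, 13, 18, 14]
after_caffeine  = [16, 18, 13, 17, 18, 15, 19, 15]

# Paired (dependent) t-test: each subject's two scores are linked
t_stat, p_value = stats.ttest_rel(before_caffeine, after_caffeine)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```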
Descriptive v. inferential
Descriptive statistics are those which are used to describe and summarize a data set.
- Can only be used to describe the sample they are conducted on.
- Common tools include measures of central tendency, variance, and skew.
- We choose a group that we want to describe and then measure all subjects in that group
Inferential statistics take data from a sample and make inferences about the larger population from which the sample was drawn.
- Need to have confidence that sample accurately reflects the population (population must be defined) → importance of random sampling
- Common techniques include hypothesis testing, regression analysis, etc.
- The statistical results incorporate the uncertainty that is inherent in using a sample to understand an entire population.
EXAMPLE: A researcher conducts a study examining the rates of test anxiety in Ivy League students. This is a descriptive study because it is concerned with a specific population. However, this study cannot be generalized to represent all college students, so it is not an inferential study.
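A minimal sketch of the descriptive statistics mentioned above (central tendency, variance, and skew), computed in Python on hypothetical test-anxiety scores:

```python
import numpy as np
from scipy import stats

# Hypothetical test-anxiety scores for the sampled students
scores = np.array([32, 45, 38, 50, 41, 36, 47, 39, 44, 35])

print("mean:", np.mean(scores))              # central tendency
print("median:", np.median(scores))          # central tendency
print("variance:", np.var(scores, ddof=1))   # spread (sample variance)
print("skew:", stats.skew(scores))           # asymmetry of the distribution
```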
Double-blind study
A type of experimental design in which both the participants and the researchers are unaware of who is in the experimental condition and who is in the placebo condition.
- In contrast to a single-blind study, where only the participants are unaware of who is in the experimental condition.
- Double-blind studies eliminate the possibility that the researcher may somehow communicate (knowingly or unknowingly) to a participant which condition they are in, thereby contaminating the results.
Example: A study testing the efficacy of a new SSRI for anxiety uses a double-blind design. Neither the experimenters nor the participants are aware of who is in the treatment group and who is receiving a placebo. This setup ensures that the experimenters do not make subtle gestures accidentally signaling who is receiving the drug and who is not, and that experimenter expectations cannot affect the study's outcome.
Ecological validity
The extent to which an experimental situation approximates the real-life situation which is being studied.
- Researchers aim for high ecological validity in hopes that their findings will better generalize to the real world
- Different from external validity
- Experiments high in ecological validity tend to be low in reliability because there is less control of the variables in real-world-like settings
EXAMPLE: A researcher wants to study the effects of alcohol on sociability, so he administers beer to a group of subjects and has them interact with each other. To increase their ecological validity, he decides to carry out the study in an actual bar.
Effect size
Part of: research methods and statistical analysis
What: A quantitative measure of the strength of a relationship between two variables; refers to magnitude of an effect.
- It is also valuable for quantifying the effectiveness of a particular intervention, relative to some comparison - commonly used in meta-analyses
- Effect size can be based on the correlation between two variables, a regression coefficient, or a standardized mean difference (e.g., Cohen's d).
Example: A researcher conducts a correlational research study on the relationship between caffeine and anxiety ratings. The study produces a correlation coefficient of 0.8, which is considered a large effect size. The effect size reflects a strong relationship between caffeine and anxiety.
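A minimal sketch of two common effect-size calculations in Python: a correlation coefficient, as in the example above, and Cohen's d for a standardized mean difference (all data are made up):

```python
import numpy as np
from scipy import stats

# Effect size as a correlation: caffeine intake (mg) vs. anxiety rating
caffeine = [0, 50, 100, 150, 200, 250, 300, 350]
anxiety  = [2, 3, 3, 5, 6, 6, 8, 9]
r, _ = stats.pearsonr(caffeine, anxiety)  # r near .8 would be considered a large effect

# Effect size as a standardized mean difference (Cohen's d, equal group sizes)
treatment = np.array([14, 16, 15, 18, 17, 19])
control   = np.array([11, 12, 13, 12, 14, 13])
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd

print(f"r = {r:.2f}, Cohen's d = {d:.2f}")
```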
Experimental research
A form of research in which one variable (the independent variable) is manipulated in order to see what effect it will have on another variable (the dependent variable). Researchers try to control any other variables (confounds) that may affect the dependent variable(s). Experimental research is the only way to establish causation.
Example: A researcher conducts an experimental research study to examine the relationship between caffeine intake and anxiety ratings. The researcher administers various levels of caffeine (the independent variable) to no-caffeine, low-caffeine, and high-caffeine groups. The participants are then asked to report their anxiety levels (the dependent variable). Those who had more caffeine reported feeling more anxious.
Hypothesis
In the field of research, a hypothesis is a formally stated prediction that can be tested for its accuracy.
- Essential to the scientific method and testing in research
- Hypotheses help to focus the research and bring it to a meaningful conclusion.
- Without hypotheses, it is impossible to test theories.
- Specifically, a hypothesis is a statement or proposition about the characteristics or appearance of variables, or the relationship between variables, that acts as a working template for a particular research study.
EXAMPLE: A famous hypothesis in social psychology was generated from a news story, when a woman in New York City was murdered in full view of dozens of onlookers. Psychologists John Darley and Bibb Latané developed a hypothesis about the relationship between helping behavior and the number of bystanders present, and that hypothesis was subsequently supported by research. It is now known as the bystander effect.
Independent t-test
Statistical analysis that compares the means of two independent groups, typically taken from the same population (although they could be taken from separate populations).
- Determines if there is a statistical difference between the two groups’ means
- We make the assumption that, if randomly selected from the same population, the groups will mimic each other; the null hypothesis is that there is no difference between the two group means
EXAMPLE: Fred is analyzing the best treatment options for his patient Harold. He reads a study comparing two different types of therapies. Using an independent t-test, the researchers found that there was not a statistically significant difference between the treatment options. Fred decides that both are good options for his patient and thinks about client characteristics (person variables) that might make one better than the other.
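A minimal sketch of an independent-samples t-test comparing two therapy groups in Python with scipy (outcome scores are hypothetical):

```python
from scipy import stats

# Outcome scores for two independent groups receiving different therapies
therapy_a = [21, 25, 19, 23, 22, 24, 20]
therapy_b = [22, 24, 20, 23, 21, 25, 19]

# Independent t-test; null hypothesis: no difference between the two group means
t_stat, p_value = stats.ttest_ind(therapy_a, therapy_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A non-significant p-value, as in the study Fred read, means we fail to reject
# the null hypothesis of no difference between the treatments.
```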