II - Textbook Flashcards

1
Q

Coefficient of determination

A

Represents the proportion of the variance in one variable (x) that is accounted for by the other variable (y).
r2 (square the correlation coefficient).

If the correlation between two variables (x and y) is 0.3, then 0.3 squared = 0.09, so 9% of the variance in x is accounted for by y.

Proportion of variance in x that is systematic variance shared with y.
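The card's arithmetic can be checked in a couple of lines of Python (0.3 is the card's own example correlation):

```python
# Coefficient of determination: square the correlation coefficient r.
r = 0.3                        # example correlation between x and y from the card
r_squared = round(r ** 2, 2)   # proportion of shared (systematic) variance
# r_squared is 0.09, i.e. 9% of the variance in x is shared with y
```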

2
Q

Statistical Significance can be influenced by

A

- Sample size (larger samples make a correlation more likely to be significant)
- Magnitude of the correlation
- P value

3
Q

Partial Correlation

A

The correlation between two variables after the influence of a third variable is statistically removed.
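A minimal sketch of the standard first-order partial-correlation formula, computed from the three pairwise correlations; the numbers are made up for illustration:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical correlations: x and y look related (r = .50),
# but both also correlate .60 with a third variable z.
r_partial = partial_corr(0.5, 0.6, 0.6)
# After z is removed, the x-y correlation drops to about .22
```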

4
Q

Spearman Rank-order correlation

A

correlation between two variables when one or both of the variables is on an ordinal scale (the numbers reflect rank ordering).

E.g. correlation between a teacher's ranking of students from best to worst (ordinal scale) and the students' IQ scores (interval scale).
E.g. ask a teacher to rank the students in a class from 1-30 based on perceived intelligence, then correlate the rankings with actual measured IQ.
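A small pure-Python sketch of the idea behind Spearman's coefficient (rank both variables, then compute Pearson's r on the ranks). The data are invented and ties are not handled:

```python
def ranks(xs):
    """Assign ranks 1..n by value (no tie handling in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx)**2 for a in x)**0.5 * sum((b - my)**2 for b in y)**0.5)

# Teacher's rank ordering (ordinal) vs. measured IQ (interval), made-up data:
teacher_rank = [1, 2, 3, 4, 5]      # 1 = judged best
iq = [130, 121, 115, 112, 101]
# Spearman rho = Pearson r computed on the ranks of both variables.
rho = pearson(ranks(teacher_rank), ranks(iq))
# rho is -1.0 here: rank 1 means "best", so low rank numbers go with high IQs
```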
5
Q

Point-biserial correlation

A

Used when one variable is dichotomous.
Gender is dichotomous (male or female). To correlate gender with spatial memory you would assign all males a 1 and all females a 2.
A significant positive correlation would mean that females tend to score higher on spatial memory than males; a significant negative correlation would mean that males score higher.
1 dichotomous variable, 1 interval/ratio variable.
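The point-biserial r is just Pearson's r with the dichotomous variable coded numerically, so it can be sketched directly; the group coding follows the card, and the memory scores are invented:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx)**2 for a in x)**0.5 * sum((b - my)**2 for b in y)**0.5)

# Made-up data: group coded 1/2 as on the card, spatial-memory scores on an interval scale.
group = [1, 1, 1, 2, 2, 2]       # 1 = male, 2 = female
memory = [10, 12, 11, 15, 16, 14]
r_pb = pearson(group, memory)    # positive: the group coded 2 scores higher
```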

6
Q

Phi coefficient

A

used when both variables being correlated are dichotomous (e.g., gender, handedness, yes/no answer)
BOTH variables are dichotomous.
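With two dichotomous variables the data form a 2x2 table of counts, and phi has a closed-form expression. A sketch with invented counts (handedness vs. a yes/no answer):

```python
import math

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 table of counts [[a, b], [c, d]]."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts:
#             yes  no
# left-hand    20  10    -> a=20, b=10
# right-hand   15  25    -> c=15, d=25
coeff = phi(20, 10, 15, 25)   # about .29: a modest positive association
```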

7
Q

On-line outliers

A

Extreme on both variables (e.g. the very top right of the scatterplot); INFLATES r.

8
Q

off-line outliers

A

Extreme off the trend line: points in the bottom right or top left of the scatterplot; DEFLATES r.

9
Q

spurious correlation

A

A correlation between two variables that is not due to any direct relationship between them, but rather to their relation to other variables. If researchers suspect a correlation is spurious, they look for third variables.

10
Q

Factors that distort correlation coefficients

A

- Restricted range
- Outliers
- Reliability of measures (the less reliable the measures, the lower the coefficients)

11
Q

How restricted range distorts coefficients

A

Restricted range: the size of the correlation may be reduced by a restriction of the range in the variables being correlated.
A restricted range occurs when most participants have similar scores (less variability).
This can occur when you are correlating scores that are either high or low on one variable.
E.g. if you correlate the SAT scores of people who get into college with their college GPA, you may be dealing with a restricted range, because usually those with higher SAT scores get into college.
Must ensure you have a broad range of scores.
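The effect can be demonstrated numerically: the same data correlate strongly over their full range, but much more weakly once only the high scorers are kept. The scores below are invented to make the point deterministic:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx)**2 for a in x)**0.5 * sum((b - my)**2 for b in y)**0.5)

# Invented scores: y tracks x closely over the full range.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
r_full = pearson(x, y)                # about .94 over the full range
# Keep only the high scorers on x (like sampling only admitted students):
r_restricted = pearson(x[6:], y[6:])  # drops to .60
```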

12
Q

Regression

A

Predict scores on one variable from scores on another variable
Use GRE scores to predict success in grad school

13
Q

Regression line

A

A regression line is a straight line that summarizes the linear relationship between two variables.
The regression line minimizes the sum of the squared deviations around the line.

It describes how an outcome variable y changes as a predictor variable x changes.
A regression line is often used as a model to predict the value of the response y for a given value of the explanatory variable x.
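The least-squares slope and intercept described above can be computed directly; the x and y scores here are made up for illustration:

```python
def fit_line(x, y):
    """Least-squares slope and intercept (minimizes squared deviations around the line)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx)**2 for a in x)
    return slope, my - slope * mx

# Hypothetical predictor (x) and outcome (y) scores:
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
slope, intercept = fit_line(x, y)      # slope 0.6, intercept 2.2
predicted = slope * 6 + intercept      # use the line to predict y for a new x of 6
```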

14
Q

multiple regression

A

Multiple Regression is used when there is more than one predictor variable.
If you are predicting success in grad school you may use three predictor variables: GRE scores, University GPA, and IQ scores.

Then you can predict success in grad school based on all three predictors, which usually is more accurate than one predictor.

Allows the researcher to simultaneously consider the influence of all the predictor variables on the outcome variable.

15
Q

standard multiple regression

A

Standard multiple regression (simultaneous multiple regression): enter all the predictor variables at the same time.
You can predict grad school success by entering GPA, GRE, and IQ score simultaneously.

16
Q

stepwise multiple regression

A

Enter the predictor variables one at a time.
First enter the predictor variable that correlates the highest with the outcome variable.
Next, enter the variable that relates most strongly to the outcome variable after the first variable is entered.
It will account for the highest amount of variance in the outcome variable after the first predictor variable is entered.
This may or may not be the second-highest correlation: if the variable with the second-highest correlation is itself highly correlated with the first variable, then it may not predict a unique amount of the variance in the outcome variable.
Enter the strongest predictor variable first; then add the predictor variable that contributes most strongly to the criterion variable GIVEN THAT THE FIRST PREDICTOR VARIABLE IS ALREADY IN THE EQUATION (see page 166, second-last paragraph).

17
Q

hierarchical multiple regression

A

Enter the predictor variables in a predetermined order, based on hypotheses the researcher wants to test.
Can partial out the effects of predictor variables entered in early steps, to see if predictor variables entered later still contribute uniquely to the variance in the outcome variable.
Predictor variables are entered in a predetermined order, chosen to test whether they have any UNIQUE effects, based on a hypothesis the researcher wants to test.

18
Q

We want to determine the relation between drinking while pregnant and child’s IQ score.
But, we know that mothers who drink while pregnant also tend to smoke and do other drugs while pregnant, which could also decrease child’s IQ.

A

We can enter smoking and other drug use into the regression equation first and then enter drinking:
to see if after smoking and other drug use are accounted for (partialled out), if drinking uniquely predicts IQ scores above and beyond smoking and other drug use.

19
Q

Mediation Effects

A

occur when the effect of x on y is actually occurring because of a third variable, z.
First enter the possible mediator variables.
Then you can see if x uniquely predicts variance in y after z is accounted for and partialled out (statistically removed)
Correlation between drowning and eating ice cream, but this relation may be related to a mediator variable called summer (heat).
We could first enter heat into the regression to determine how strongly heat is uniquely related to drowning; then, after heat is partialled out, we can determine whether eating ice cream is actually uniquely related to drowning.

20
Q

Structural Equations Modeling

A

Allows you to test hypotheses about the pattern of correlations.
Researcher makes precise predictions about how three or more variables are causally related.
E.g. x causes y, which causes z.
Then you can compare your hypothesized correlation matrix against the real correlation matrix.
This analysis determines the degree to which the observed pattern of correlations matches (fits) the researcher's predictions or model.
Can also test two different models against each other to see which one fits best with the observed correlation matrix.

21
Q

Factor Analysis

A

Analyze the interrelationships among a number of variables.
Look for a pattern in the correlation matrix; look for correlations among the correlations.
Can determine if some variables are all highly correlated with each other but not with other variables that may only correlate with each other.

22
Q

Multiple correlation coefficient (R)

A

The ability of all the predictor variables together to predict the outcome variable.
Represents the degree of the relationship between the outcome variable and the set of predictor variables.
Ranges from .00 to 1.00; the larger the R, the better the set of predictor variables accounts for the variance in the outcome variable.
R can be squared to show the percent of the variance in the outcome variable (y) that is accounted for by the set of predictor variables.
R = .50, accounting for 25% of the variance in y.

23
Q

Randomized groups factorial design

A

participants are randomly assigned to one of the combinations of the independent variables

24
Q

Matched factorial design

A

match participants based on their score on a measure related to the dependent variable.
If there are 6 cells, take the six highest scorers and randomly assign one to each cell, then repeat with the next six, and so on.

25
Q

Repeated measure factorial design

A

all participants complete all conditions (cells)
So in a 2 x 2 (4 cells) each participant completes 4 conditions, in a 3 x 3 x 2 (18 cells) each participant completes all 18 conditions

26
Q

Mixed factorial design

A

has at least one between and one within subjects variable.

2 x 2, one independent variable between subjects (caffeine no caffeine) and other independent variable is within subjects (visual memory test, verbal memory test)

Randomly assign participants to the between subjects condition and all participants complete the within subjects condition.

27
Q

main effect

A

when there is an effect (or difference) for one independent variable, collapsing across the other independent variable.
2 (caffeine, no caffeine) x 3 design (short, medium, long words).
It is a two way design, so there will be two main effects (one for caffeine and one for word length)
If there is a main effect of caffeine, that means that memory is better in individuals who have caffeine regardless of the length of words (average across words).
If there is a main effect of word length it may be that memory is better for shorter than longer words, collapsing across caffeine intake.

28
Q

interaction

A

Occurs when the influence of one independent variable is different at different levels of another independent variable.
2 (caffeine, no caffeine) x 2 design (short, long words)
The effect of caffeine on memory may be different for short words than for long words.
Caffeine may only have a positive effect on memory for short words, but there may be no difference between caffeine and no caffeine groups for long words.

Caffeine interacted with word length

29
Q

subject variable

A

Based on individual personal characteristics that you cannot manipulate but can measure (age, IQ, gender).

30
Q

Expericorr (mixed) design

A

has at least one independent variable manipulated by the researcher and at least one subject variable (gender) measured but not manipulated.
Allows researchers to examine effects of a subject variable
Help to understand how personal characteristics relate to behavior (age, gender)
Dividing participants into groups based on a subject variable makes the participants in each group more homogeneous

31
Q

Classifying participants into groups:

Median-split procedure :

A

divide participants into two groups based on the median score (half of the participants above and half below this score).
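The median-split procedure is simple enough to sketch directly; the scores below are invented:

```python
import statistics

def median_split(scores):
    """Divide participants into low/high groups around the median score."""
    med = statistics.median(scores)
    low = [s for s in scores if s < med]
    high = [s for s in scores if s >= med]
    return low, high

scores = [3, 9, 7, 1, 8, 5]        # hypothetical scores; median is 6
low, high = median_split(scores)   # low group: 1, 3, 5; high group: 7, 8, 9
```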

32
Q

Extreme groups procedure

A

divide participants into two groups based on very high and low scores on a variable of interest.
These procedures can throw away valuable information.

33
Q

Null Hypothesis (H0):

A

states that the independent variable did not have an effect.

The data do not differ from what we would expect on the basis of chance or error variance

34
Q

Experimental Hypothesis (H1)

A

states that the independent variable did have an effect.

35
Q

Type I error

A

FALSE ALARM: when you reject the null hypothesis when it is in fact true.
The probability of making a Type I error is equal to alpha (α).

36
Q

Type II error

A

MISS: failing to reject the null hypothesis when the null hypothesis is really false.
The researcher concludes that the independent variable did not have an effect when in fact it did.
The probability of making a Type II error is equal to beta (β).

37
Q

Power

A

Power of a test is the probability that the researchers will be able to reject the null hypothesis if the null hypothesis is false.

The ability of the researchers to detect a difference if there is a difference
Power = 1 − β
Type II errors are more common when the power is low.

38
Q

T test

A

An inferential test used to compare two means.

39
Q

Paired t-test:

A

Used when you have a within-subjects design or matched-subjects design. The participants in the two conditions are either the same (within) or very similar (matched).
This test takes into account the similarity of the participants.
More powerful test because the pooled variance is smaller, resulting in a larger t.

40
Q

Bonferroni adjustment

A

Used to control the Type I error rate across multiple tests.
Divide the alpha level (.05) by the number of tests you conduct.
If doing 10 tests: .05/10 = .005, which means you must find a larger t for a result to be significant (more conservative).
!!But this also increases your chance of making a Type II error (missing an effect when there really is one).
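The adjustment itself is one division, using the card's numbers:

```python
alpha = 0.05
n_tests = 10
per_test_alpha = alpha / n_tests   # each of the 10 tests must now reach p < .005
```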

41
Q

Analysis of Variance (ANOVA)

A

Used when comparing more than two means in a single factor study (one-way) or in a study with more than one factor (two- and three-way etc.).
Analyzes differences between all the means simultaneously, so the Type I error rate is not inflated.
Calculates the variance within each condition (error variance) and the variance between each condition.
If we have an effect of the independent variable then there should be more variance between conditions than within conditions.
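The between/within variance comparison can be sketched as a one-way F ratio; the three groups below are invented, with a real mean difference built in:

```python
def one_way_f(groups):
    """One-way ANOVA F ratio: mean square between groups / mean square within groups."""
    k = len(groups)                                   # number of conditions
    n = sum(len(g) for g in groups)                   # total participants
    grand_mean = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical conditions; the third group's scores are clearly higher,
# so variance between conditions far exceeds variance within them and F >> 1.
f = one_way_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```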

42
Q

F-test

A

Tests whether the mean variance between groups is larger than the mean variance within groups.
If the independent variable has no effect, the F value will be 1 or close to 1; the larger the effect, the larger the F value.

43
Q

Multivariate Analysis of Variance (MANOVA)

A

Used when you have more than one dependent variable.
Test differences between two or more independent variables on two or more dependent variables
Why not just conduct separate ANOVAs?
MANOVA is usually used when the researcher has dependent variables that may be conceptually related.
Controls the Type I error rate (tests all dependent variables simultaneously).
MANOVA creates a new variable called the canonical variable (a composite variable that is a weighted sum of the dependent variables).

First, test to see if this is significant (multivariate F) and then conduct separate ANOVAs on each dependent variable.