Week 3: Correlation Flashcards

1
Q

A general approach is that our outcomes can be predicted by a model and what remains

A

is the error

2
Q

The i in the general model shows

A

e.g., outcome 1 is equal to model plus error 1 and outcome 2 is equal to model plus error 2 and so on…

3
Q

For correlation, the outcome is modelled by

A

scaling (multiplying by a constant) another variable

4
Q

Equation of correlation

A
5
Q

What does this equation of correlation mean and what does b1 mean? - (2)

A

‘the outcome for an entity is predicted from their score on the predictor variable plus some error’.

The model is described by a parameter, b1, which in this context represents the relationship between the predictor variable (X) and the outcome.

6
Q

If you have one continuous variable which meets the assumptions of a parametric test then you can conduct a

A

Pearson correlation or regression

7
Q

Variance is a feature of outcome measurements we have obtained and we want to predict with a model that

A

captures the effect of the predictor variables we have manipulated or measured

8
Q

Variance of a single variable represents the

A

average amount that the data vary from the mean

9
Q

Variance is the standard deviation

A

squared (s squared)

10
Q

Variance formula - (2)

A

xi minus the mean of all participants' scores, squared, then divided by the total number of participants minus 1

done for each participant and summed (the sigma): s^2 = Σ(xi − x̄)^2 / (N − 1)

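The verbal formula above can be sketched in code (a minimal illustration, assuming Python with NumPy; the scores are made-up values):

```python
import numpy as np

scores = np.array([5.0, 4.0, 4.0, 6.0, 8.0])  # made-up scores, one per participant

deviations = scores - scores.mean()                      # xi minus the mean
variance = (deviations ** 2).sum() / (len(scores) - 1)   # sum of squares over N - 1

# NumPy's ddof=1 applies the same N - 1 denominator
assert np.isclose(variance, scores.var(ddof=1))
print(variance)  # ≈ 2.8
```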
11
Q

Variance is SD squared meaning that it captures the

A

average of the squared differences of the outcome values from the mean of all outcomes (explaining what the formula of variance does)

12
Q

Covariance gathers information on whether

A

one variable covaries with another

13
Q

In covariance, if we are interested in whether 2 variables are related, then we are interested in whether changes in one variable are met with changes in the other

therefore.. - (2)

A

when one variable deviates from its mean we would expect the other variable to deviate from its mean in a similar way.

So, if one variable increases, the other, related variable should also increase (or, for a negative relationship, decrease) by a similar amount.

14
Q

The simplest way to look at whether 2 variables are associated is to look at whether they.. which means..

A

covary

look at the relationship between the 2 variables

15
Q

If one variable covaries with another variable then it means these 2 variables are

A

related

16
Q

To get SD from variance then you would

A

square root variance

17
Q

What would you do in covariance formula in proper words? - (5)

A
  1. Calculate the error between the mean and each subject’s score for the first variable (x).
  2. Calculate the error between the mean and their score for the second variable (y).
  3. Multiply these error values.
  4. Add these values and you get the product deviations.
  5. The covariance is the average product deviations
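The five steps above can be sketched as follows (a minimal illustration in Python with NumPy; the advert/packet values are illustrative, chosen to reproduce the covariance of 4.25 quoted elsewhere in this deck):

```python
import numpy as np

adverts = np.array([5.0, 4.0, 4.0, 6.0, 8.0])    # x: adverts watched (illustrative)
packets = np.array([8.0, 9.0, 10.0, 13.0, 15.0]) # y: packets bought (illustrative)

x_err = adverts - adverts.mean()          # 1. errors between mean and each x score
y_err = packets - packets.mean()          # 2. errors between mean and each y score
cross = x_err * y_err                     # 3. multiply the error values
total = cross.sum()                       # 4. sum: the cross-product deviations
covariance = total / (len(adverts) - 1)   # 5. average (dividing by N - 1)

assert np.isclose(covariance, np.cov(adverts, packets)[0, 1])
print(covariance)  # ≈ 4.25
```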
18
Q

Example of calculating covariance and what does the answer tell you?

A

The answer is positive: that tells us the x and y values tend to rise together.

19
Q

What does each element of covariance formula stand for? - (5)

A

X = the value of ‘x’ variable
Y = the value of ‘y’ variable
X̄ (X-bar) = mean of ‘x’ - e.g., green
Ȳ (Y-bar) = mean of ‘y’ - e.g., blue
n = the number of items in the data set

20
Q

covariance will be large when values below

A

the mean on one variable are consistently paired with values below the mean on the other (and values above the mean pair with values above), i.e., when the two variables deviate from their means together

21
Q

What does a positive covariance indicate?

A

as one variable deviates from the mean, the other
variable deviates in the same direction.

22
Q

What does this diagram show? - (5)

A
  • Green line is the average number of packets bought
  • Blue line is the average number of adverts watched. Vertical lines represent deviations/residuals between the observed values and the means, which are shown as circles
  • There is a similar pattern of deviations for both variables: if a person's score is below the mean for one variable, then their score on the other variable is below the mean too
  • The similarity we are seeing between the two variables is quantified by calculating the covariance: divide the cross-product deviations (the multiplied deviations of the 2 variables) by the number of observations minus 1
  • We divide by n − 1 because we are unsure of the true population mean; this relates to degrees of freedom.
23
Q

What does negative covariance indicate?

A

a negative covariance indicates that as one variable deviates from the mean (e.g. increases), the other deviates from the mean in the opposite direction (e.g. decreases).

24
Q

What is the problem of covariance as a measure of the relationship between 2 variables? - (5)

A

dependent upon the units/scales of measurement used

So covariance is not a standardised measure

e.g., if 2 variables are measured in miles and the covariance is 4.25, then if we convert the data to kilometres we have to calculate the covariance again and see it increase to 11.

Dependence on the scale of measurement is a problem as we cannot compare covariances in an objective way: we cannot say whether one covariance is large or small relative to another data set unless both data sets are measured in the same units

So we need to STANDARDISE it.

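The miles-to-kilometres point can be checked numerically (a sketch, assuming Python with NumPy and illustrative data; multiplying both variables by 1.609 scales the covariance by 1.609 squared, taking roughly 4.25 to 11):

```python
import numpy as np

x_miles = np.array([5.0, 4.0, 4.0, 6.0, 8.0])    # illustrative data
y_miles = np.array([8.0, 9.0, 10.0, 13.0, 15.0])

cov_miles = np.cov(x_miles, y_miles)[0, 1]               # ≈ 4.25
cov_km = np.cov(x_miles * 1.609, y_miles * 1.609)[0, 1]  # ≈ 11

# covariance scales with the units: multiplying both variables by k
# multiplies the covariance by k squared
assert np.isclose(cov_km, cov_miles * 1.609 ** 2)

# the correlation coefficient is unchanged by the conversion
assert np.isclose(np.corrcoef(x_miles, y_miles)[0, 1],
                  np.corrcoef(x_miles * 1.609, y_miles * 1.609)[0, 1])
```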
25
Q

What is the process of standardisation?

A

To overcome the problem of dependence on the measurement scale, we need to convert
the covariance into a standard set of units

26
Q

How to standardise the covariance?

A

dividing by product of the standard deviations of both variables.

27
Q

Formula of standardising covariance

A

The same formula as covariance, but divided by the product of the SD of x and the SD of y

28
Q

Formula of Pearson’s correlation coefficient, r

A
29
Q

Example of calculating Pearson’s correlation coefficient, r - (5)

A

The standard deviation for the number of adverts watched (sx) was 1.67

The SD of the number of packets of crisps bought (sy) was 2.92

If we multiply these together we get 1.67 × 2.92 = 4.88

Now, all we need to do is take the covariance, which we calculated a few pages ago as being 4.25, and divide by these multiplied standard deviations

This gives us r = 4.25/4.88 = .87

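The same arithmetic, as a sketch in Python (values taken from the example above):

```python
cov_xy = 4.25           # covariance of adverts watched and packets bought
s_x, s_y = 1.67, 2.92   # standard deviations of the two variables

r = cov_xy / (s_x * s_y)  # standardise the covariance
print(round(r, 2))  # 0.87
```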
30
Q

The standardised version of covariance is the

A

correlation coefficient or Pearson’s r

31
Q

Pearson’s R is … version of covariance meaning independent of units of measurement

A

standardised

32
Q

What does correlation describe? - (2)

A

Describes a relationship between variables

If one variable increases, what happens to the other variable?

33
Q

Pearson’s correlation coefficient r is also called the

A

product-moment correlation

34
Q

A linear relationship, normally distributed data, and interval/ratio continuous data are assumed in

A

Pearson’s r correlation coefficient

35
Q

Pearson Correlation Coefficient varies between

A

-1 and +1 (direction of relationship)

36
Q

The larger the r value, the closer the values will

A

be to each other and to the mean

37
Q

The smaller R values indicate

A

there is unexplained variance in the data, which results in the data points being more spread out.

38
Q

What do these two graphs show? - (2)

A
  • The graph on the left shows an example of a high negative correlation. The data points are close together and close to the mean.
  • On the other hand, the graph on the right shows a low positive correlation. The data points are more spread out and deviate more from the mean.
39
Q

The Pearson Correlation Coefficient measures the strength of a relationship

A

between one variable and another hence its use in calculating effect size

40
Q

A Pearson’s correlation coefficient of +1 indicates

A

two variables are perfectly positively correlated, so as one variable increases, the other increases by a proportionate amount.

41
Q

A Pearson’s correlation coefficient of -1 indicates

A

a perfect negative relationship: if one variable increases, the other decreases by a proportionate amount.

42
Q

Pearson’s r
+/- 0.1 means

A

small effect

43
Q

Pearson’s r
+/- 0.3 means

A

medium effect

44
Q

Pearson’s r
+/- 0.5 means

A

large effect

45
Q

In Pearson’s correlation, we can test the hypothesis that - (2)

A

correlation coefficient is different from zero

(i.e., different from ‘no relationship’)

46
Q

In Pearson’s correlation coefficient, we can test the hypothesis that the correlation is different from 0

If we find our observed coefficient was very unlikely to happen if there was no effect in the population, then we gain confidence that

A

the relationship that we have observed is statistically meaningful.

47
Q

In the case of a correlation coefficient we can test the hypothesis that the correlation is different from zero (i.e. different from ‘no relationship’).

There are 2 ways to test this hypothesis

A
  1. Z scores
  2. T-statistic
48
Q

from z scores we can know the probability of a given z score occurring if the distribution from which it comes is

A

normal

49
Q

The problem with using z scores for Pearson’s r is that its sampling distribution is known to not be

A

normally distributed

50
Q

There is one problem with z scores in Pearson’s r, which is that it is known to have a sampling distribution
that is not normally distributed.

A
  • This can be fixed by adjusting r so sampling distribution is normal as follows:
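The adjustment referred to here is commonly the Fisher z-transformation; a minimal sketch in Python (the r = .87 and N = 5 values are assumptions, matching the adverts example earlier in the deck):

```python
import math

def fisher_z(r):
    # Fisher z-transformation: z_r = (1/2) * ln((1 + r) / (1 - r));
    # the sampling distribution of z_r is approximately normal
    return 0.5 * math.log((1 + r) / (1 - r))

def se_z(n):
    # standard error of z_r: 1 / sqrt(N - 3)
    return 1 / math.sqrt(n - 3)

print(round(fisher_z(0.87), 2), round(se_z(5), 2))  # 1.33 0.71
```

These two numbers are exactly the zr = 1.33 and SEzr = 0.71 used in the confidence-interval example later in the deck.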
51
Q

Steps to calculate z score for Pearson’s r

A
52
Q

The hypothesis that correlation coefficient is different from 0 can be tested using t statistic with N-2 DF

A
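A sketch of this t-statistic in Python (the r and N values are assumed for illustration):

```python
import math

def t_from_r(r, n):
    # t = r * sqrt(N - 2) / sqrt(1 - r^2), tested against N - 2 degrees of freedom
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(round(t_from_r(0.87, 5), 2))  # 3.06
```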
53
Q

SPSS for Pearson’s correlation coefficient, r, does not compute

A

confidence intervals for r

54
Q

Confidence intervals tells us something about the

A

likely correlation in the population

55
Q

Can calculate confidence intervals for Pearson’s correlation coefficient by transforming the formula of the CI

A
56
Q

Example of calculating CIs for Pearson’s correlation coefficient, r

If we have zr as 1.33 and SEzr as 0.71 - (4)

A
  • LB = 1.33 − (1.96 × 0.71) = −0.062
  • UB = 1.33 + (1.96 × 0.71) = 2.72
  • Have to convert the values of LB and UB, which are in the z metric, back to the r correlation coefficient using the formula in the diagram
  • This gives a UB of 0.991 and an LB of −0.062 (since the value is so close to 0, the transformation from z to r has no impact)
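The steps above can be reproduced in Python (assuming the usual z-to-r conversion, the inverse Fisher transform):

```python
import math

z_r, se = 1.33, 0.71  # z-transformed r and its standard error, from the example

lb_z = z_r - 1.96 * se  # lower bound in the z metric
ub_z = z_r + 1.96 * se  # upper bound in the z metric

def z_to_r(z):
    # inverse Fisher transform: r = (e^(2z) - 1) / (e^(2z) + 1), i.e. tanh(z)
    return math.tanh(z)

lb_r, ub_r = z_to_r(lb_z), z_to_r(ub_z)
print(round(lb_r, 3), round(ub_r, 3))  # -0.062 0.991
```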
57
Q

As sample size increases, so the value of r at which a significant result occurs

A

decreases

58
Q

Example of As sample size increases, so the value of r at which a significant result occurs, decreases.
- (3)

A

Imagine you’re studying the relationship between hours of study and exam scores among students. You collect data from 50 students and find a correlation coefficient (r) of 0.3 between study hours and exam scores. With a sample size of 50, this correlation might not be statistically significant at a typical significance level (let’s say p < 0.05).

Now, if you increase your sample size to 500 students while keeping the relationship between study hours and exam scores the same, you might find that even a smaller correlation coefficient, let’s say 0.15, becomes statistically significant at the same significance level.

So, as you move from a smaller sample size to a larger one, you may find that weaker relationships between variables become statistically significant due to the increased precision and reliability provided by the larger sample size.

59
Q

Example of a negative relationship

A

The link between the age at which you die and the number of cigarettes you smoked

60
Q

Pearson’s r = 0 means - (2)

A

indicates no linear relationship at all

so if one variable changes, the other stays the same.

61
Q

Correlation coefficients give no indication of direction of… + example - (2)

A

causality

e.g., although we conclude that the number of adverts watched increases the number of toffees bought, we can’t say watching adverts caused us to buy toffees

62
Q

We have to be cautious about causality in terms of Pearson’s correlation r as - (2)

A
  • Third variable problem - causality between 2 variables cannot be assumed in any correlation because there might be other measured/unmeasured variables affecting the results. This is known as the third variable problem or tertium quid
  • Direction of causality: correlation coefficients say nothing about which variable causes the other to change. Even if we ignore the third variable problem and assume the 2 correlated variables were the only important ones, the correlation coefficient does not give the direction in which causality operates, e.g., if we conclude that watching adverts causes us to buy packets of toffees, there is no statistical reason why buying packets of toffees cannot cause us to watch more adverts
63
Q

If you get a weak correlation between 2 variables (a weak effect), then you need to take a lot of measurements for that relationship to be

A

significant

64
Q

correlation coefficient gives the ratio of

A

covariance to a measure of variance

65
Q

Example of correlations getting stronger

A
66
Q

R squared is known as the

A

coefficient of determination

67
Q

R^2 can be used to explain the

A

proportion of the variance for a dependent variable (outcome) that’s explained by an independent variable (predictor)

68
Q

Example of R^2 coefficient of determination - (2)

X = exam anxiety
Y = exam performance

If R^2 = 0.194

A

19.4% of variability in exam performance can be explained by exam anxiety

i.e., ‘the variance in y accounted for by x’
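A one-line check (assuming r = −0.441 for exam anxiety and exam performance, the value consistent with R^2 = 0.194):

```python
r = -0.441  # correlation between exam anxiety and exam performance
r_squared = r ** 2  # proportion of variance in performance shared with anxiety
print(round(r_squared, 3))  # 0.194, i.e., 19.4%
```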

69
Q

R^2 calculate the amount of shared

A

variance

70
Q

Example of r and R^2

A

Multiply r by itself, e.g., r = 0.1 gives R^2 = 0.1 × 0.1 = 0.01

71
Q

R^2 gives you the true strength of.. but without

A

the correlation but without an indication of its direction.

72
Q

What are the three types of correlations? - (3)

A
  1. Bivariate correlations
  2. Partial correlations
  3. Semi-partial or part correlations
73
Q

What’s a bivariate correlation?

A

A correlation between 2 variables

74
Q

What is a partial correlation?

A

looks at the relationship between two variables while ‘controlling’ the effect of one or more additional variables.

75
Q

The partial correlation partials out the

A

the effect of one or more variables on both X and Y

76
Q

A partial correlation controls for third variable which is made from

A
  • A correlation calculates each data point’s distance from the line (the residuals)
  • This is the error relative to the model (unexplained variance)
  • A third variable might predict some of that variation in the residuals
77
Q

The partial correlation compares the unique variation of one variable with the

A

unique variation of the other (the third variable’s effect is removed from both)

78
Q

The partial correlation holds the

A

third variable constant (but we don’t manipulate these)

79
Q

Example of partial correlation- (2)

A
80
Q

Example of Venn Diagram of Partial Correlation - (2)

A

Partial Correlation between IV1 and DV = D / D+C

Unique variance accounted for by the predictor (IV1) in the DV, after accounting for variance shared with other variables.

81
Q

Example of Partial Correlation - (2)

A

Partial correlation: Purple / Red + Purple

If we were doing just a partial correlation, we would see how much exam anxiety is influencing both exam performance and revision time.

82
Q

Example of partial correlation and semi-partial correlation - (2)

A

The partial correlation that we calculated took
account not only of the effect of revision on exam performance, but also of the effect of revision on anxiety.

If we were to calculate the semi-partial correlation for the same data, then this would control for only the effect of revision on exam performance (the effect of revision
on exam anxiety is ignored).

83
Q

In partial correlation, the third variable is typically not considered as the primary independent or dependent variable. Instead, it functions as a

A

control variable—a variable whose influence is statistically removed or controlled for when examining the relationship between the two primary variables (IV and DV).

84
Q

The partial correlation is

The amount of variance the variable explains

A

relative to the amount of variance in the outcome that is left to explain after the contribution of other predictors have been removed from both the predictor and outcome.

85
Q

These partial correlations can be done when variables are dichotomous (including third variable) e.g., - (2)

A

we could look at the relationship between bladder relaxation (did the person wet themselves or not?) and the number of large tarantulas crawling up your leg controlling for fear of spiders

(the first variable is dichotomous, but the second variable and ‘controlled for’ variables are continuous).

86
Q

What does this partial correlation output show?

Revision time = partial, controlling for its effect

Exam performance = DV

Exam anxiety = X - (5)

A
  • First, notice that the partial correlation between exam performance and exam anxiety is −.247, which is considerably less than the correlation when the effect of revision time is not controlled for (r = −.441).
  • . Although this correlation is still statistically significant (its p-value is still below .05), the relationship is diminished.
  • value of R2 for the partial correlation is .06, which means that exam anxiety can now account for only 6% of the variance in exam performance.
  • When the effects of revision time were not controlled for, exam anxiety shared 19.4% of the variation in exam scores and so the inclusion of revision time has severely diminished the amount of variation in exam scores shared by anxiety.
  • As such, a truer measure of the role of exam anxiety has been obtained.
87
Q

Partial correlations are most useful for looking at the unique
relationship between two variables when

A

other variables are ruled out

88
Q

In a semi-partial correlation we control for the

A

effect that
the third variable has on only one of the variables in the correlation

89
Q

The semi partial (part) correlation partials out the - (2)

A

Partials out the effect of one or more variables on either X or Y.

e.g. The amount revision explains exam performance after the contribution of anxiety has been removed from the one variable (usually the predictor- e.g. revision).

90
Q

The semi-partial correlation compares the

A

unique variation of one variable with the unfiltered variation of the other.

91
Q

Diagram of venn diagram of semi-partial correlation - (2)

A
  • Semi-Partial Correlation between IV1 and DV = D / D+C+F+G

Unique variance accounted for by the predictor (IV1) in the DV, after accounting for variance shared with other variables.

92
Q

Diagram of revision and exam performance and revision time on semi-partial correlation - (2)

A
  • Purple / (red + purple + white + orange)
  • When we use semi-partial correlation to look at this relationship, we partial out the variance accounted for by exam anxiety (the orange bit) and look for the variance explained by revision time (the purple bit).
93
Q

Summary of partial correlation and semi-partial correlation - (2)

A

A partial correlation quantifies the relationship between two variables while accounting for the effects of a third variable on both variables in the original correlation.

A semi-partial correlation quantifies the relationship between two variables while accounting for the effects of a third variable on only one of the variables in the original correlation.
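The summary above can be sketched with residuals (a minimal illustration in Python with NumPy, using simulated data; `residuals` is a hypothetical helper implementing simple linear regression):

```python
import numpy as np

def residuals(a, b):
    # residuals of a after removing the linear effect of b (simple regression)
    slope = np.cov(a, b)[0, 1] / np.var(b, ddof=1)
    intercept = a.mean() - slope * b.mean()
    return a - (intercept + slope * b)

def partial_r(x, y, z):
    # partial correlation: the third variable z is removed from BOTH x and y
    return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

def semipartial_r(x, y, z):
    # semi-partial correlation: z is removed from only one variable (here x)
    return np.corrcoef(residuals(x, z), y)[0, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=200)
x = z + rng.normal(size=200)        # x is driven partly by z
y = 0.5 * z + rng.normal(size=200)  # y is driven only by z (plus noise)

raw = np.corrcoef(x, y)[0, 1]
# controlling for z should shrink the x-y correlation toward zero
shrunk = abs(partial_r(x, y, z)) < abs(raw)
```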

94
Q

Pearson’s product-moment correlation coefficient (described earlier) and Spearman’s rho (see section 6.5.3) are examples of

A

bivariate correlation coefficients.

95
Q

Non-parametric tests of correlations are… (2)

A
  • Spearman’s rho
  • Kendall’s tau test
96
Q

In Spearman’s rho the variables are not normally distributed and measures are on an

A

ordinal scale (e.g., grades)

97
Q

If your data are non-normal and not measured at the interval level then

A

Deselect Pearson’s R tick box

98
Q

Spearman’s rho works on by

A

first ranking the data (numbers converted into ranks), and then running Pearson’s r on the ranked data
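That two-step recipe can be sketched directly (a minimal Python illustration assuming no tied ranks):

```python
import numpy as np

def ranks(a):
    # convert scores to ranks 1..N (assumes no tied scores, for simplicity)
    order = a.argsort()
    r = np.empty_like(order)
    r[order] = np.arange(1, len(a) + 1)
    return r

def spearman_rho(x, y):
    # Spearman's rho: rank the data, then run Pearson's r on the ranks
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3  # monotonic but non-linear relationship

rho = spearman_rho(x, y)  # ranks agree perfectly, so rho is 1
```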

99
Q

Spearman’s correlation coefficient, rs, is a non-parametric statistic and so can be used when the data have

A

violated parametric assumptions, such as non-normally distributed data

100
Q

The Spearman correlation coefficient is sometimes called

A

Spearman’s rho

101
Q

For Spearman’s rs we can get R squared but it is interpreted slightly differently as the

A

proportion of variance in the ranks that the two variables share.

102
Q

Kendall’s tau used rather than Spearman’s coefficient when - (2)

A

when you have a small data set with a large number of
tied ranks.

This means that if you rank all of the scores and many scores have the same rank, then Kendall’s tau should be used

103
Q

Kendall’s tau test - (2)

A

For small datasets, many tied ranks

Better estimate of correlation in population than Spearman’s ρ

104
Q

Kendall’s tau is not numerically similar to r or rs (spearman) and so tau squared does not tell us about

A

proportion of
variance shared by two variables (or the ranks of those two variables).

105
Q

The Kendall’s tau is 66-75% smaller than both Spearman’s r and Pearson’s r so

A

tau is not comparable to r and rs

106
Q

There is a benefit using Kendall’s statistic than Spearman as it shows - (2)

A

Kendall’s statistic is actually a better estimate of the correlation in the population

we can draw more accurate generalizations from Kendall’s statistic than from Spearman’s.

107
Q

Whats the decision tree for Spearman’s correlation? - (4)

A
  • What type of measurement = continuous
  • How many predictor variables = one
  • What type of continuous variable = continuous
  • Meets assumptions of parametric tests = no
108
Q

The output of Kendall and Spearman can be interpreted the same way as

A

Pearson’s correlation coefficient r output box

109
Q

The biserial and point-biserial correlation coefficients used when

A

one of the two variables is dichotomous (e.g., example of dichotomous variable is women being pregnant or not)

110
Q

What is the difference between biserial and point-biserial correlations?

A

depends on whether the dichotomous variable is discrete or continuous

111
Q

The point–biserial correlation coefficient (rpb) is used when

A

one variable is a
discrete dichotomy (e.g. pregnancy),

112
Q

biserial correlation coefficient (rb) is used
when - (2)

A

one variable is a continuous dichotomy (e.g. passing or failing an exam).

An example is passing or failing a statistics test: some people will only just fail while others will fail by a large margin; likewise some people will scrape a pass while others will clearly excel.

113
Q

The biserial correlation coefficient cannot be calculated directly in SPSS as - (2)

A

must calculate the point–biserial correlation coefficient

and then use an equation to adjust that figure

114
Q

Example of when point=biserial correlation used - (3)

A
  • Imagine we are interested in the relationship between the gender of a cat and how much time it spends away from home
  • Time spent away is measured at the interval level, so it meets the assumptions of parametric data
  • Gender is a discrete dichotomous variable coded with 0 for male and 1 for female
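A sketch of the cat example (made-up illustrative data in Python with NumPy; the point-biserial r is simply Pearson's r with a 0/1-coded variable):

```python
import numpy as np

# gender: 0 = male, 1 = female (discrete dichotomy); time in hours (made-up data)
gender = np.array([0, 0, 0, 0, 1, 1, 1, 1])
time_away = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 6.0, 6.0, 7.0])

# the point-biserial correlation is just Pearson's r on the 0/1 coding
r_pb = np.corrcoef(gender, time_away)[0, 1]

# the sign depends only on which group gets the 0 code:
# reversing the coding flips the sign but not the magnitude
r_flipped = np.corrcoef(1 - gender, time_away)[0, 1]
assert np.isclose(r_pb, -r_flipped)
```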
115
Q

What does this point-biserial correlation output from SPSS show? - (4)

A
  • The point-biserial correlation coefficient is r = 0.378 with a p value of 0.001
  • The sign of the correlation coefficient depends on which category you assign to which code, so ignore the direction of the relationship
  • R^2 = (0.378) squared = 0.143
  • Conclude that 14.3% of the variability in time spent away from home is explained by gender
116
Q

Can convert the point-biserial correlation coefficient into a

A

biserial correlation coefficient

119
Q

Point biserial and biserial correlation differ in size as

A

biserial correlation bigger than point biserial

121
Q

Example of a question conducting Pearson’s r - (4)

A

The researcher was interested in whether the amount someone gets paid and the number of holidays they take from work are related to their productivity at work

  • Pay: Annual salary
  • Holiday: Number of holiday days taken
  • Productivity: Productivity rating out of 10
122
Q

Example of Pearson’s r scatterplot :

relationship between pay and productivity

A
123
Q

If we have r = 0.313 what effect size is it?

A

medium effect size

±.1 = small effect
±.3 = medium effect
±.5 = large effect

124
Q

What does this scatterplot show?

A

This indicates very little correlation between the 2 variables

125
Q

What will a matrix scatterplot show?

A

the relationship between all possible combinations of your variables

126
Q

What does this scatterplot matrix show? - (2)

A
  • For Pay and Holiday, we can see the line is very flat and indicates the correlation between the two variables is quite low
  • For pay and productivity, the line is steeper, suggesting the correlation is fairly substantial between these 2 variables, and the same holds for productivity and holidays
127
Q

What is degrees of freedom for correlational analysis?

A

N-2

128
Q

What does this Pearson’s correlation r output show? - (4)

A
  • The relationship between pay and holidays is a very low correlation of r = −0.04
  • Between pay and productivity, there is a medium-size correlation of r = 0.313
  • Between holidays and productivity there is a medium-going-on-large effect size of 0.435
  • The relationships between pay and productivity and also holidays and productivity are significant, but the correlation between pay and holidays was not significant
129
Q

Another example of a Pearson’s correlation r question - (3)

A

A student was interested in the relationship between the time spent preparing an essay, the interestingness of the essay topic and the essay mark received.

He got 45 of his friends and asked them to rate, using a scale from 1 to 7, how interesting they thought the essay topic was (1 - I’ll kill myself of boredom, 4 - it’s not too bad!, 7 - it’s the most interesting thing in the world!) (interesting).

He then timed how long they spent writing the essay (hours), and got their percentage score on the essay (essay).

130
Q

Example of interval/ratio continous data needed for Pearson’s r for IV and DV - (2)

A
  • Interval scale: the difference between 10 and 20 degrees C is the same as the difference between 80 and 90 degrees F, but 0 degrees does not mean an absence of temperature
  • Ratio scale: 0 means an absence of the quantity, e.g., height (0 cm means no height), weight, time
131
Q

Pearson’s correlation r, Spearman and Kendall require

A

one IV and one DV

132
Q

Spearman and Kendall typically used on ordinal or ranked data - (3)

A

values are ordered and ranked but the gaps between them are not uniform

e.g., a Likert scale from strongly disagree to strongly agree,
education levels like elementary school and high school,
rankings like 1st place to 10th place

133
Q

What does this SPSS output show?

A. There was a non-significant positive correlation between interestingness of topic and the amount of time spent writing. There was a non-significant positive correlation between time spent writing an essay and essay mark. There was a significant positive correlation between interestingness of topic and essay mark, with a medium effect size.

B. There was a significant positive correlation between interestingness of topic and the amount of time spent writing, with a small effect size. There was a significant positive correlation between time spent writing an essay and essay mark, with a large effect size. There was a non-significant positive correlation between interestingness of topic and essay mark.

C. There was a significant negative correlation between interestingness of topic and the amount of time spent writing, with a medium effect size. There was a non-significant positive correlation between time spent writing an essay and essay mark. There was a non-significant positive correlation between interestingness of topic and essay mark.

D. There was a significant positive correlation between interestingness of topic and the amount of time spent writing, with a large effect size. There was a non-significant positive correlation between time spent writing an essay and essay mark. There was a non-significant positive correlation between interestingness of topic and essay mark.

A

D. There was a significant positive correlation between interestingness of topic and the amount of time spent writing, with a large effect size. There was a non-significant positive correlation between time spent writing an essay and essay mark. There was a non-significant positive correlation between interestingness of topic and essay mark.

134
Q

r = 0.21 effect size is..

A

in between a small and a medium effect

135
Q

Effect size is only meaningful if you evaluate it with regard to

A

your own research area

136
Q

Biserial correlation is when

A

one variable is dichotomous, but there is an underlying continuum (e.g. pass/fail on an exam)

137
Q

Point-biserial correlation is when

A

When one variable is dichotomous, and it is a true dichotomy (e.g. pregnancy)

138
Q

Example of dichotomous relationship - (3)

A
  • example of a true dichotomous relationship.
  • We can compare the differences in height between males and females.
  • Use dichotomous predictor of gender