Lecture 12 Flashcards

1
Q

Paired samples

A

Dependent (non-independent) observations of matched data.
Measure a variable twice on the same subject, e.g., before and after treatment.

Matched paired samples (e.g., matched on age and sex)
*****More likely to find significance between subjects than with two randomly selected “unpaired” samples

2
Q

Unpaired samples

A

Measure a result once and compare between separate subjects.
Less likely to reveal a significant relationship.
LARGER SAMPLE SIZE NEEDED WITH UNPAIRED THAN WITH PAIRED.
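The sample-size point can be sketched with hypothetical before/after data, computing the t statistics directly from the stdlib rather than a stats library:

```python
import math
from statistics import mean, stdev

# Hypothetical before/after measurements on the same 8 subjects.
before = [10, 12, 9, 11, 13, 10, 12, 11]
after = [11, 14, 10, 12, 15, 11, 13, 13]
n = len(before)

# Paired t: test whether the mean within-subject difference is zero.
diffs = [a - b for a, b in zip(after, before)]
t_paired = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Unpaired t: pool the two groups' variances and compare group means.
sp2 = ((n - 1) * stdev(before) ** 2 + (n - 1) * stdev(after) ** 2) / (2 * n - 2)
t_unpaired = (mean(after) - mean(before)) / math.sqrt(sp2 * (1 / n + 1 / n))

# Critical t values (alpha = 0.05, two-tailed): df = 7 -> 2.365, df = 14 -> 2.145.
print(t_paired > 2.365)    # paired design detects the shift
print(t_unpaired < 2.145)  # same data treated as unpaired: not significant
```

Same numbers, but only the paired analysis reaches significance, because pairing removes the between-subject variability.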

3
Q

Paired data with tails

A

Two-tailed test: more conservative approach. Ensures that either of two outcomes is covered.

One-tailed test: less robust. Assumes only one outcome is likely; rarely a safe assumption.
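A minimal sketch of why the one-tailed test is less conservative, using a hypothetical z statistic and the standard normal CDF (stdlib only):

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.7  # hypothetical test statistic, in the predicted direction
p_one_tailed = 1 - phi(z)        # ~0.045: "significant"
p_two_tailed = 2 * (1 - phi(z))  # ~0.089: not significant

print(p_one_tailed < 0.05 < p_two_tailed)
```

The one-tailed p-value is half the two-tailed one, so borderline results can look significant only because the second tail was assumed away.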

4
Q

Paired data with outliers

A

Extreme data points without obvious measurement-induced errors.
Deciding whether to retain them requires common sense.

5
Q

Relative strength of the relationship between variables, from top of pyramid to bottom

A
Top
Causal = predictive. Strongest. Parametric.
Correlation
Association
Random chance (not repeatable)
Bottom
6
Q

Association is not necessarily

A

Causative

7
Q

Most common test for correlation (association) and direction

A

Pearson’s product moment correlation coefficient (r)

8
Q

Assumptions for Pearson’s coefficient (r)

A
  1. The population from which the sample is drawn is normally distributed. (If not, use a non-parametric test of correlation.)
  2. The two variables are structurally independent (one is not forced to vary with the other).
  3. Only a single pair of measurements should be made on each subject.
  4. Every r value (sample) should be accompanied by a p-value or a confidence interval within which the “true” r value (population) is likely to lie.
9
Q

For a parametric test, use the r value (sample) accompanied by a p value. For a non-parametric test, what coefficient do you use

A

Spearman’s rank coefficient, r sub s (rₛ), instead of r

10
Q

perfect correlation value

A

1

11
Q

When to use Pearson’s r vs spearman’s rank

A

Pearson’s r: normally distributed data. Parametric.

Spearman’s r sub s (rₛ): non-normal distribution. Non-parametric.
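The distinction can be sketched in pure Python: Spearman’s rₛ is simply Pearson’s r applied to the ranks. The toy data below are chosen so the relationship is perfectly monotonic but not linear (tie handling is omitted for brevity):

```python
import math
from statistics import mean

def pearson_r(x, y):
    # Pearson's product moment correlation coefficient.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(values):
    # 1-based rank positions (no tie handling in this sketch).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rs(x, y):
    # Spearman's rank coefficient = Pearson's r on the ranks.
    return pearson_r(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]  # perfectly monotonic, but curved

print(round(spearman_rs(x, y), 3))  # 1.0 (perfect rank correlation)
print(round(pearson_r(x, y), 3))    # < 1 (a straight line fits imperfectly)
```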

12
Q

Correlation does not allow for ___ or ___

A

Prediction or causal relationships. Correlation only shows that a relationship exists between the two variables.

13
Q

How to interpret correlation

A
  1. 0.9 to 1.0: very high positive correlation
    - -0.9 to -1.0: very high negative correlation
  2. 0.7 to 0.9: high positive/negative correlation
  3. 0.5 to 0.7: moderate positive/negative correlation
  4. 0.3 to 0.5: low positive/negative correlation
  5. 0.0 to 0.3: negligible correlation
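The bands can be encoded as a small helper on |r| (assigning the shared cut points to the higher band is an assumption, since the card's ranges overlap at their endpoints):

```python
def interpret_r(r):
    # Map |r| to the descriptive correlation bands.
    a = abs(r)
    if a >= 0.9:
        return "very high"
    if a >= 0.7:
        return "high"
    if a >= 0.5:
        return "moderate"
    if a >= 0.3:
        return "low"
    return "negligible"

print(interpret_r(0.95))   # very high
print(interpret_r(-0.62))  # moderate
print(interpret_r(0.1))    # negligible
```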
14
Q

Regression analysis

A

Statistical modeling to estimate relationships among variables.
Used for prediction in parametric tests only.

15
Q

Linear regression

A

A mathematical equation that allows the target variable to be predicted from the independent variable. Continuous variables; linear relationship.

Slope intercept equation
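A minimal least-squares sketch of the slope-intercept fit, on toy data chosen to lie exactly on a line (stdlib only):

```python
from statistics import mean

def fit_line(x, y):
    # Ordinary least squares for y = m*x + c.
    mx, my = mean(x), mean(y)
    m = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    c = my - m * mx
    return m, c

x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]  # exactly y = 2x + 1

m, c = fit_line(x, y)
print(m, c)       # 2.0 1.0
print(m * 6 + c)  # predicted y at x = 6 -> 13.0
```

Prediction is just substitution into the fitted equation, which is exactly what correlation alone does not allow.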

16
Q

When to use the slope intercept equation

A

When determining linear regression

17
Q

Multiple regression

A

Non-linear relationship between the outcome and two or more independent variables (co-variables). May be quadratic or higher order in nature.

18
Q

Probability and confidence is defined by

A

Standard deviation, which defines the probability limits.

19
Q

SD probability limits

A

Approximately 2 SD (more precisely, 1.96 SD) above and below the mean defines the points within which 95% of observations lie.
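The 1.96-SD rule can be checked empirically with simulated normal data (fixed seed; the exact fraction is illustrative):

```python
import random
from statistics import mean, stdev

rng = random.Random(0)
data = [rng.gauss(100, 15) for _ in range(10_000)]  # simulated normal sample

m, s = mean(data), stdev(data)
lo, hi = m - 1.96 * s, m + 1.96 * s
inside = sum(lo <= x <= hi for x in data) / len(data)

print(round(inside, 2))  # ~0.95 of observations fall within the limits
```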

20
Q

Statistically significant P value

A

p < 0.05

21
Q

Statistically highly significant P value

A

p < 0.01

22
Q

Obtaining a significant or highly significant outcome means you should ___ the null

A

Reject.

23
Q

3 reasons you might fail to reject the null

A
  1. There is genuinely no difference between the groups.
  2. Too few subjects to demonstrate a difference that existed (small sample size).
  3. Logical fallacy: cut-off values are arbitrary assumptions; in reality, values fall on a continuum.
24
Q

Confidence intervals allow for a ___ of response values in the form of a ____, given repetition of the study.

A

Confidence intervals allow for a continuum of response values in the form of a range of responses expected, given repetition of the study.
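A sketch of a 95% confidence interval for a mean, applying the +/-1.96 rule to the standard error (toy data; normal approximation rather than the t distribution):

```python
import math
from statistics import mean, stdev

data = [8, 9, 10, 11, 12]  # toy sample

m = mean(data)
se = stdev(data) / math.sqrt(len(data))  # standard error of the mean
lo, hi = m - 1.96 * se, m + 1.96 * se    # normal-approximation 95% CI

print(round(lo, 2), round(hi, 2))  # 8.61 11.39
```

Repeating the study would be expected to give means falling in this range about 95% of the time.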

25
Q

The chance of a real difference given a CI lies between the

A

upper and lower limits. The difference is not statistically significant if the interval overlaps (contains) the null value in question.

26
Q

CI can be applied to almost all statistical tests to help us understand if the evidence is:

A

Strong, weak, or definitive.

27
Q

Do you want a narrow or wide CI?

A

Narrow: greater precision.

A wider CI is usually due to a small sample size

28
Q

CI. Differences in populations. How do you know if there is no difference in populations?

A

The CI crosses zero or contains zero

29
Q

CI. Differences in ratios. How do you know if there is no difference in ratios?

A

The interval contains one or crosses one.

30
Q

CI on left side of null value

A

Results show statistically significant decline

31
Q

CI on right side of null value

A

Results show statistically significant improvement.

32
Q

Relative risk (RR)

A

Risk in treatment group/risk in control group

33
Q

Relative risk reduction (RRR)

A

Percentage by which the risk of an adverse event is reduced in the experimental group compared with the control group.

(risk in controls - risk in experimental) / risk in controls

Ex: reduced the death rate by 20%

34
Q

Absolute risk reduction (ARR)

A

Absolute amount by which a negative outcome is reduced, comparing the experimental group with the control group.

Ex: over the study period, an absolute reduction in deaths of 3%, increasing the survival rate from 84% to 87%

35
Q

NNT number needed to treat

A

Number of subjects who would need to be treated to prevent one adverse outcome.

Reciprocal of the ARR (absolute risk reduction).

Presented with a CI.

Ex: 34 people would need to be treated to avoid one death
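The risk measures from the last few cards fit together in one sketch, using the survival figures from the ARR card (84% vs 87% survival, i.e., death risks of 0.16 and 0.13; rounding NNT up is a common convention, not stated on the card):

```python
import math

risk_control = 0.16    # death risk, control group (survival 84%)
risk_treatment = 0.13  # death risk, treatment group (survival 87%)

rr = risk_treatment / risk_control                    # relative risk
rrr = (risk_control - risk_treatment) / risk_control  # relative risk reduction
arr = risk_control - risk_treatment                   # absolute risk reduction
nnt = math.ceil(1 / arr)                              # number needed to treat

print(round(rr, 2))   # 0.81
print(round(rrr, 2))  # 0.19 (~20% reduction in the death rate)
print(round(arr, 2))  # 0.03
print(nnt)            # 34
```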

36
Q

1/ARR=

A

NNT number needed to treat

37
Q

Hawthorne effect (observer effect)

A

Participants alter their behavior because they know they are being observed.