Test 2 Flashcards

1
Q

Steps for making a prediction

A

-compose hypothesis
-generate predictions
-test predictions
-evaluate hypotheses
MUST MAKE TESTABLE PREDICTIONS

2
Q

Deductive reasoning

A
  • starts with a theory, test, revise
  • top down approach
  • general → specific
3
Q

Inductive reasoning

A
  • starts with observations, form a theory
  • specific → general
  • can be falsified with contradictory evidence
4
Q

Lakatos (1978)

A
  • individual tests are risky and arbitrary

- should have multiple competing hypotheses

5
Q

Kuhn paradigm (1970)

A
  • not linear discovery, but series of paradigm shifts

- scientists aren’t objective but rather come to a consensus

6
Q

Manipulative data

A

-data gathered after you change (manipulate) something in the system

7
Q

Observational data

A

-when you observe what’s happening in a system

8
Q

A priori

A

Ahead of time, before collection of data

9
Q

measures of central tendency

A

mean

median

10
Q

t test equation

A
t = (x - µ) / SEM
t = (sample mean - comparison mean) / standard error of the mean
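As a plain-Python sketch (hypothetical numbers; a real analysis would use a stats package):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu):
    """t = (sample mean - comparison mean) / SEM, with SEM = s / sqrt(n)."""
    n = len(sample)
    sem = stdev(sample) / sqrt(n)  # standard error of the mean
    return (mean(sample) - mu) / sem

# Hypothetical data: does this sample differ from a comparison mean of 10?
t = one_sample_t([12, 11, 14, 9, 13, 12], mu=10)
```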
11
Q

standard error of the mean

A

s / √n (i.e., the square root of variance / n)

12
Q

Confidence interval

A
  • use confidence interval to calculate sample size

- also need variance, alpha, t, df

13
Q

t test assumptions

A
  • Independent
  • random sample
  • normally distributed
  • equal variances (homogeneity)
  • must test these before any stats can be done!
14
Q

How can we test for normality?

A

Shapiro-Wilk

Kolmogorov-Smirnov
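In practice these are run in stats software; as a rough plain-Python sketch, the one-sample Kolmogorov-Smirnov statistic is the biggest gap between the sample's empirical CDF and a normal CDF (here fitted from the sample's own mean and st dev, which is strictly the Lilliefors variant):

```python
from statistics import NormalDist, mean, stdev

def ks_statistic(sample):
    """Max gap between the empirical CDF and a fitted normal CDF."""
    xs = sorted(sample)
    n = len(xs)
    norm = NormalDist(mean(xs), stdev(xs))
    d = 0.0
    for i, x in enumerate(xs):
        f = norm.cdf(x)
        # empirical CDF steps just below and at x
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Hypothetical data; small D -> looks close to normal (compare to a table of critical values)
d = ks_statistic([4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.95, 5.05])
```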

15
Q

Testing for variance?

A

Levene’s test for equality of variances

-similar bell curve shape

16
Q

What if normal distribution, but unequal variances

A

indep t test with equal variances not assumed (Welch's t test)

17
Q

not normal dis, but similar variances

A
  • nonparametric Mann-Whitney U test
  • doesn't use the calculated mean as a parameter
  • ranks the data and calculates a U statistic based on the difference in rankings
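The ranking idea can be sketched in plain Python (hypothetical data; no tie correction, so this only holds for tie-free values):

```python
def mann_whitney_u(a, b):
    """U from rank sums: pool the data, rank it, compare rank totals."""
    pooled = sorted(a + b)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks, no ties
    r1 = sum(ranks[v] for v in a)
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)  # report the smaller U

# Complete separation between groups gives the most extreme U of 0
u = mann_whitney_u([1, 2, 3, 4], [5, 6, 7, 8])
```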
18
Q

data not independent

A

paired t test

19
Q

Steps for t tests

A
  • identify question
  • state H0 and Ha with respect to your samples
  • set alpha level and direction of relationship
  • choose a test after exploring your data to check it complies with assumptions
20
Q

Statistical tests vary in:

A
  • number of IVs and DVs
  • levels of measurement (ordinal, continuous, category)
  • form of variables: scalars in univariate tests; vectors and matrices in multivariate tests
  • role of variables: DVs, IVs, Covariates?
21
Q

Univariate

A

single dependent variable

22
Q

Multivariate

A

employ more than one dependent variable

23
Q

Vectors and matrices

A

vectors: variables with magnitude and direction

matrices: 2-D arrays of vectors

24
Q

Power

A

-important to have high enough power to detect an effect
need to know:
-effect size
-alpha
-sample size
-data dispersion
Amount of power = % chance you can detect an effect
OR probability of not committing a type II error (a false negative)
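A rough sketch of how those pieces combine, for the simplest case (a two-sided one-sample z test; effect size, sd, n, and alpha are all hypothetical, and the far tail is ignored):

```python
from math import sqrt
from statistics import NormalDist

def power_one_sample_z(effect, sd, n, alpha=0.05):
    """Approximate probability of detecting a true difference of `effect`."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    return z.cdf(effect / (sd / sqrt(n)) - z_crit)

# Hypothetical: effect of 0.5 sd units with n = 32 gives roughly 80% power
p = power_one_sample_z(effect=0.5, sd=1.0, n=32)
```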

25
Q

Effect size

A
  • power
  • alpha
  • n
  • data dispersion (s)
  • all must be known to calculate it
26
Q

G power

A

-allows you to calculate the sample size needed for univariate and multivariate tests

27
Q

post hoc power calc

A
  • usually when your results were almost significant

- often in poor taste

28
Q

Linear relationship

A

-predictor and response
-bivariate = x and y
-relation can be positive, negative, or zero (no relation)

29
Q

scatterplot

A

-a scatter diagram is a graphical method to display the relationship between two variables

30
Q

Fitting a line

A

-least squares method
-distances from the potential line (residuals) are squared and summed over all points; the fitted line makes this total as low as possible
-always passes through the mean of y and x
WHY
to convert a value
standardize: calibration curve!
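The least squares fit can be sketched in plain Python (calibration-curve numbers are hypothetical):

```python
from statistics import mean

def least_squares(xs, ys):
    """Slope and intercept minimizing the summed squared residuals.
    The fitted line always passes through (mean x, mean y)."""
    xbar, ybar = mean(xs), mean(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return slope, intercept

# Hypothetical calibration curve: known concentrations vs. instrument readings
m, b = least_squares([0, 1, 2, 3, 4], [0.1, 2.0, 4.1, 5.9, 8.0])
```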

31
Q

regression significance

A

can we distinguish line with slope from line with no slope

zero slope or no relation is our null

32
Q

R^2

A

coefficient of determination

  • how much variation in y is determined by x
  • ranges from 0 to 1; want close to 1
33
Q

Assumptions of regression

A
  • each x and y are independent and random
  • normal distribution of x values
  • homogeneity
  • linear relation
  • measurements of x are free of error or small compared to y (error will make a relation hard to understand)
34
Q

Applications of Regression line

A

-can be used to predict
-R: measure of strength of linear association between x and y
-R = ±√(R^2), taking the sign of the slope
-want close to 1 or -1
R > 0: direct linear relation
R < 0: inverse linear relation
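R (Pearson's correlation) can be sketched in plain Python with hypothetical data:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """r = Sxy / sqrt(Sxx * Syy); sign matches the slope, r**2 is R^2."""
    xbar, ybar = mean(xs), mean(ys)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# A perfect direct linear relation gives r = 1
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```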

35
Q

Spearman Rank Correlation

A
  • used when data doesn't meet normality
  • or homogeneity of variance
  • rank correlation is also used when one or both variables consist of ranks
  • can also have multiple y values for one x
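The rank idea can be sketched in plain Python (hypothetical data; the shortcut formula below is only valid when there are no ties):

```python
def spearman_rho(xs, ys):
    """Rank each variable, then rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Monotonic but non-linear: Spearman still sees a perfect association
rho = spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25])
```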
36
Q

Parametric tests

A

-indep and paired t test
-correlation analysis
-linear regression
-ANOVAs

37
Q

Non Parametric tests

A

(also have their own assumptions!)

  • Mann Whitney u
  • Spearman Rho
38
Q

Transformation

A

-takes a non-normal distribution to normal
-there are a number of ways to do this depending on the original distribution
-WON'T MAKE UP FOR POOR SAMPLING, specifically non-random sampling; very sensitive to outliers
-KNOW YOUR LIT/FIELD
prepare to defend your choice

39
Q

Log Transformation

A

heterogeneity of variance (base 10 or natural)

40
Q

Square Root Transformation

A

heteroscedastic variance (data with non-constant variance); commonly used on count data

41
Q

Arcsine transformation

A

-binomial dis
-yes/no
-proportions or percentages
-take the arcsine of the square root of each proportion
-proportions range from 0 to 1; results are in radians
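The log, square root, and arcsine transformations above, sketched on hypothetical data:

```python
from math import log, sqrt, asin

counts = [1, 4, 9, 16]
proportions = [0.10, 0.25, 0.50, 0.90]

log10_vals = [log(x, 10) for x in counts]            # log transform (base 10)
sqrt_vals = [sqrt(x) for x in counts]                # square root (count data)
arcsine_vals = [asin(sqrt(p)) for p in proportions]  # arcsine of sqrt(proportion)
```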

42
Q

Back transform

A

-even though you've transformed, transformed values mean nothing to readers; back-transform when writing it up

43
Q

Outliers

A
  • data value different from the majority
  • need to report and state why you threw them out if you trim your data set
  • need to think about them
  • can't discard due to inconvenience
  • rerun the analysis without the outlier to see if the result is the same
  • run a rank test? categories?
  • transforming may help
44
Q

Lost, corrupted, removed data

A
  • reduces sample size

- a small sample decreases power and increases the chance of extreme values

45
Q

quantitative data

A

discrete data

3 of something

46
Q

Continuous data

A

3.14579 of something

47
Q

categories

A

I am a human

convert data into bins

48
Q

Types of data can be divided into groups

A

race, age, sex
-put into a contingency table
-categorical variables
-chi square analysis
must always use frequencies and see how they compare to expected
can use models! e.g. Mendelian genetics, Hardy-Weinberg
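The chi-square statistic for a contingency table can be sketched in plain Python (counts are hypothetical):

```python
def chi_square(table):
    """Sum of (observed - expected)^2 / expected, with expected counts
    built from the row and column totals of the observed frequencies."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical 2x2 table: treatment (rows) vs. response (columns)
x2 = chi_square([[30, 10], [20, 20]])
```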

49
Q

Chi square things

50
Q

Odds ratio

A

odds success/odds failure
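As a tiny sketch with hypothetical 2x2 counts:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]:
    (odds of success in row 1) / (odds of success in row 2)."""
    return (a / b) / (c / d)

# Hypothetical: 30 successes/10 failures vs. 15 successes/15 failures
orr = odds_ratio(30, 10, 15, 15)
```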

51
Q

Mosaic plot

A

graphical way to look at frequencies

  • column = “treatment”
  • row variable = “response”
52
Q

ANOVA

A
  • statistical test that exploits variance (s^2)

- uses normally distributed sets to compare differences between groups

53
Q

Basic one way ANOVA

A
Two variables:
-categorical 
-quantitative 
Question:
Do the means of the quantitative variable depend on which category the individual is in?
IF ONLY 2 categories:
2 sample t test
but you can have 3 or more :)
-determines p value from the F statistic
54
Q

What does ANOVA do?

A

Tests these hypotheses:

  1. means of the groups are equal (H0)
  2. not all means are equal (Ha)
    * doesn’t tell us which differ, have to follow up with post hoc testing
55
Q

ANOVA assumptions

A

-each group is approx normal
check graphically, or with normality tests. Can withstand some weirdness but not crazy outliers
-st devs are approx equal between groups
ratio of largest to smallest group's st dev should be less than 2:1
Levene's test checks this

56
Q

ANOVA notation

A

n = number of total individuals
I = number of groups
x = individual
X bar = mean for entire data set

57
Q

How does one way ANOVA work?

A

measures variation

  • between groups (group mean vs. overall mean)
  • within groups (each value vs. the mean of its group)
58
Q

ANOVA f statistic

A

ratio of between group mean square variation/mean square within group variation
between/within
MSG/MSE
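The F ratio can be sketched in plain Python (hypothetical groups):

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F = MSG / MSE: between-group mean square over
    within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    n, k = len(all_vals), len(groups)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    msg = ssb / (k - 1)  # between-group mean square
    mse = ssw / (n - k)  # within-group mean square
    return msg / mse

# Widely separated group means with tight within-group spread -> large F
f = anova_f([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```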

59
Q

R^2 statistic

A

sum of squares between/ sum of squares total

SSB/SST

60
Q

If ANOVA groups don’t have the same means

A
  • compare in twos: pairwise using two sample t test

- need to adjust the p value threshold because of multiple tests on the same data

61
Q

Tukey's Pairwise comparisons

A
  • if family error rate is 0.05 then

- individual alpha = 0.0199 w/ 95 % CI

62
Q

ANOVA data not normal?

A

Kruskal-Wallis Test

nonparametric procedure used to test the claim that 3+ indep samples come from pops with the same distribution

63
Q

Kruskal-Wallis Test

A
  • STRONGER hypothesis than ANOVA, which only compares means
  • samples are simple random samples from 3+ pops
  • data can be ranked
  • principle is pooling all the data, ranking it, and seeing if the ranks spread across groups as expected
  • large values of H indicate the Ri (sums of ranks of samples) are different than expected
  • if H is too large, we reject the null
  • K-W is always right tailed
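The H statistic can be sketched in plain Python (hypothetical groups; no tie correction, so this only holds for tie-free values):

```python
def kruskal_wallis_h(groups):
    """Pool all values, rank them, then compare each group's rank sum Ri
    to what identical distributions would give."""
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks, no ties
    n = len(pooled)
    h = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Completely separated groups give a large H
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```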
64
Q

K-W test critical values

A
  • 3 populations and every sample size 5 or less: critical value from the K-W table
  • 4 or more populations, or any sample size greater than 5: critical value from chi^2
65
Q

K-W hypothesis test steps

A

step 0: samples are indep random, data can be ranked
step 1: box plots to compare data
step 2: hypotheses
H0: data distributions are the same
H1: data distributions are not the same
step 3: rank observations smallest to largest
step 4: level of significance; critical value from either the K-W table or chi^2
step 5: compute test stat
step 6: compare to critical value
reject if the test stat is bigger than the crit value