Part One Flashcards
what are the two types of variability
- intrinsic (natural system)
- extrinsic (measurement error)
when do you define the population
the population must be defined before the sampling process begins, as it dictates how you sample
how are frequency curves characterised
by 2 key parameters: location (eg mean, median, mode) and dispersion (eg spread, variance)
why are the parameters of the frequency curve important
we can never know the true population parameters, so we infer them from samples
what is mu (μ)
population mean
what is x̄ (x-bar)
sample mean
what is σ
population standard deviation
what is s
sample standard deviation
what is SEM
standard error of the mean; it measures the variability of the sample mean
what are the 6 main steps of the logical framework?
- observations
- models
- hypothesis
- null hypothesis
- experiment and sampling
- interpretation and results
what is the next step after you retain the null
you refute model and hypothesis. therefore you go back to observations and find out what was missing
what is the next step after you reject the null
you retain model and hypothesis. you don't stop: you ask why this is the case, ie what are the mechanisms that make this model true?
what are the 2 types of observations
- casual (personally seen in nature with no prior knowledge)
- previously quantified in literature
what types of phrases must be used when making casual observations
it appears, it seems, it looks like
ie not certain
what is a model
the reason behind observations used to explain process
how do you state model from a casual observation
justify it by the observation itself, eg it appears to be the case because it happens in nature in the location where I saw it
how do you state a model from a quantified observation
cite the literature behind the process, eg this is because…
what is a hypothesis
what you predict if the model is true.
use the structure: if I do this, then I will observe/expect this
what is the difference between mensurative and manipulative
mensurative experiments are observational; they do not manipulate the system.
manipulative experiments change the system to understand patterns (you need prior literature first)
what is the point of a null hypothesis and what is this approach called
the falsificationist approach. the hypothesis can never be proved because the whole population cannot be measured. instead you test the null hypothesis: by rejecting everything outside the hypothesis, what remains is supported.
limited by its design, a mensurative study can only give certain interpretations. what are they
correlative, not causal
it doesn't let you understand cause and effect or mechanisms; it is merely descriptive/qualitative
what is required for appropriate manipulative studies
appropriate controls and adequate prior biological knowledge of the system
what is the difference between precision and accuracy
precision is a measure of spread (precise = narrow, imprecise = wide); you can assess it using the standard error of the mean
accuracy is a measure of how close the sample mean is to the population mean (usually you cannot test accuracy)
SEM
SEM = s / √n
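The SEM formula can be sketched in Python using a hypothetical sample (the data values are made up for illustration):

```python
import math

# Hedged sketch of SEM = s / sqrt(n) for a hypothetical sample
sample = [4.2, 5.1, 3.8, 4.9, 5.0]
n = len(sample)
x_bar = sum(sample) / n  # sample mean
# sample standard deviation s uses n - 1 (Bessel's correction)
s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
sem = s / math.sqrt(n)
```

Note that SEM shrinks as n grows, which is why larger samples give a more precise estimate of the mean.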
when should you use random sampling
when information is not known about the population
when should you use stratified sampling
when you know information about the population, to best represent that population. this increases precision and accuracy
is random sampling always representative
no
by chance it can be or it cannot be. therefore preliminary tests can be performed with lots of replicates to decide how to sample representatively
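The contrast between the two sampling schemes can be sketched as proportional stratified sampling; the population, strata names, and sizes here are hypothetical:

```python
import random

# Hedged sketch: stratified sampling with proportional allocation from a
# hypothetical population with two known strata (eg two habitats)
population = {"habitat_a": list(range(100)), "habitat_b": list(range(100, 160))}
total = sum(len(members) for members in population.values())

random.seed(1)  # fixed seed for a reproducible illustration
sample_size = 16
stratified = []
for stratum, members in population.items():
    # each stratum contributes in proportion to its share of the population
    k = round(sample_size * len(members) / total)
    stratified.extend(random.sample(members, k))
```

With purely random sampling, the stratum proportions in the sample would vary by chance; proportional allocation fixes them, which is what increases precision.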
assumptions that must be accounted for PRE sampling
independence
randomness
these are KEY
assumptions that are analysed POST sampling
homogeneity of variances
normality of residuals
how to ensure independent data
replicates need to be independent of each other (eg separated through space); look for possible relationships between replicates
what is pseudoreplication
'replicates' that are non-independent of each other and therefore not true replicates, as you are not accounting for relationships between individuals. this increases type 1 error
what is confounding
when you reject the null (ie your hypothesis is supported) however this is not because your model is correct rather you have not accounted for other factors/variables that cause this relationship
how to mitigate confounding effects
by performing a manipulative study where you can control the variables
why do we perform statistical tests
as we are taking a sample of the population that is subject to error, we can only make probabilistic statements rather than absolute statements. statistics allows us to quantify that uncertainty
what are the three components of a statistical test
a null hypothesis
a test statistic
a rejection region and critical value
what is the logical null
everything not included in hypothesis (eg equal or opposite)
what is the statistical null
there is no difference between groups
what is the t-test
testing the difference between 2 means
when do you use 2 tailed t test
when there is no direction in your hypothesis (eg there is no specified direction for the proposed difference)
when do you use a 1 tailed t test
when you have a directional hypothesis (eg this pop is greater than this pop)
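A two-sample t statistic (assuming homogeneity of variances, as the later cards discuss) can be computed by hand; the two groups below are made-up data for illustration:

```python
import math
import statistics

# Hedged sketch: pooled two-sample t statistic on hypothetical data
group_a = [5.1, 4.9, 5.3, 5.0, 4.8]
group_b = [4.2, 4.4, 4.1, 4.5, 4.3]
na, nb = len(group_a), len(group_b)
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
# pooled variance assumes the two groups share a common variance
sp2 = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
t = (mean_a - mean_b) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

The same t value is then compared against either a one-tailed or a two-tailed critical value, depending on whether the hypothesis is directional.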
what is a type 1 error
when the null is true however you reject it
what is a type 2 error
when the null is false however you retain it
how can you control type 1 error
by setting the significance level (eg alpha = 0.05), which determines the critical value
why are the rejection regions smaller for 2 tailed t tests
the total probability is always alpha (eg 0.05), so in a 2 tailed test alpha is halved between the two tails (eg 0.05/2 per tail)
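The effect of halving alpha can be illustrated with critical values; this sketch uses the standard normal as a stand-in for the t distribution (the large-sample case), via the stdlib `statistics.NormalDist`:

```python
from statistics import NormalDist

# Illustration: halving alpha pushes the two-tailed cutoff further out
alpha = 0.05
z = NormalDist()  # standard normal, a stand-in for t with large n
one_tailed_crit = z.inv_cdf(1 - alpha)      # all of alpha in one tail
two_tailed_crit = z.inv_cdf(1 - alpha / 2)  # alpha split across two tails
```

Because each tail only holds alpha/2, the two-tailed critical value (about 1.96) sits further from zero than the one-tailed one (about 1.64), making each rejection region smaller.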
why is the assumption of homogeneity important
if variances are not equal then the rejection regions will not be comparable across groups; this increases type 1 error
to reduce this: use large samples and balanced n
it can also be fixed by transforming the data
what is a residual
the difference between a data point and its predicted value (ie the mean)
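When the predicted value is just the group mean, residuals are simple to compute; the data values here are hypothetical:

```python
import statistics

# A residual is the observed value minus the fitted value (here, the mean)
data = [2.0, 3.0, 5.0, 6.0]
fitted = statistics.mean(data)           # the same predicted value for every point
residuals = [x - fitted for x in data]   # residuals around the mean sum to zero
```

These residuals are what the post-sampling normality assumption is checked against.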