Chapter 6 Flashcards
Statistics
Lying with statistics
The intentional misapplication of statistical methods
Statistical methodology
Justification of the choice of using a particular statistical method
Descriptive statistics
In descriptive statistics, one aims to display data and conclusions accurately
Inferential statistics
In inferential statistics, one aims to draw a justified conclusion from data
Stochastic hypothesis
A hypothesis whose implications come in the form of a probability distribution
Deterministic hypothesis
A hypothesis all of whose implications are certain
Quantitative measure of measurement error
The probability of a measurement error being made, expressed on a quantitative scale
Error based statistics
Determining the probability of an observation given that a certain hypothesis is true
Confidence in a hypothesis
The subjective estimation of the probability of a hypothesis
Fisher’s significance testing
A method of statistical hypothesis testing developed by Ronald Fisher
Test statistic
Any quantity, computed from values in a sample, that is considered for a statistical purpose
Sampling distribution
A distribution over the possible outcomes of the test statistic
p-value
The probability, assuming the hypothesis under test is true, of observing an outcome at least as extreme as the actually observed outcome
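The definition above can be illustrated by simulation. This is a minimal sketch, not from the text: the fair-coin hypothesis, the flip counts, and all names below are illustrative assumptions.

```python
import random

# Estimate the p-value for the hypothesis "the coin is fair" by
# simulating the sampling distribution of the test statistic
# (the number of heads). Illustrative assumption, not from the text.
random.seed(0)

def p_value(observed_heads, n_flips, n_simulations=20_000):
    """Two-sided p-value: the fraction of simulated outcomes at least
    as extreme (as far from the expected count) as the observed one,
    assuming the hypothesis under test is true."""
    expected = n_flips / 2
    observed_deviation = abs(observed_heads - expected)
    extreme = 0
    for _ in range(n_simulations):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - expected) >= observed_deviation:
            extreme += 1
    return extreme / n_simulations

p = p_value(observed_heads=60, n_flips=100)
```

For 60 heads in 100 flips the estimate lands around 0.06, above the conventional 0.05 significance level, so the fairness hypothesis would not be rejected here.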
Significance level
A conventionally set level for p-values, below which the associated hypothesis should be rejected
p-value abuse
Changing test setup, statistical method, or sample in order to make the p-value either higher or lower than the significance level (depending on what result is desired)
Neyman-Pearson hypothesis testing
A method of hypothesis testing developed by Jerzy Neyman and Egon Pearson
Original hypothesis (Hi)
Some claim that you are interested in
Alternative hypothesis (Ha)
A hypothesis that, by logical necessity, must be true if the original hypothesis is false, and vice versa; i.e. Ha is the negation of Hi
Type I error
Wrongly rejecting a true hypothesis Hi
Type II error
Wrongly accepting a false hypothesis Hi
Power of a test
The probability of correctly rejecting a false hypothesis Hi
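The error types and power defined above can be estimated by simulation. This is a minimal sketch; the decision rule (reject "the coin is fair" at 60 or more heads in 100 flips) and the alternative bias of 0.6 are illustrative assumptions, not from the text.

```python
import random

# Estimate the Type I error rate and the power of a simple test by
# repeated simulation. All numbers are illustrative assumptions.
random.seed(1)

def rejection_rate(p_heads, n_trials=5_000, n_flips=100, threshold=60):
    """Fraction of trials in which the test rejects the hypothesis
    'the coin is fair', given the true probability of heads."""
    rejections = 0
    for _ in range(n_trials):
        heads = sum(random.random() < p_heads for _ in range(n_flips))
        if heads >= threshold:
            rejections += 1
    return rejections / n_trials

type_i_rate = rejection_rate(0.5)  # rejecting although the hypothesis is true
power = rejection_rate(0.6)        # rejecting when the hypothesis is indeed false
```

The Type II error rate is 1 minus the power: the probability of failing to reject the hypothesis when it is false.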
Bayesian statistics
The posterior probability of a hypothesis is calculated from its prior probability together with the observed outcome, using Bayes’ theorem
Prior probability
The (estimated) probability of the hypothesis being true before the application of Bayes’ theorem
Subjective degrees of belief
The Bayesian view of what is meant by “probability” - that probability is the subjective estimation of likelihood rather than a property belonging to the world
Posterior probability
The (calculated) probability of the hypothesis being true after the application of Bayes’ theorem
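A single application of Bayes' theorem, turning a prior into a posterior, can be sketched as follows. The diagnostic-test numbers are illustrative assumptions, not from the text.

```python
# One Bayes update: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not-H) * (1 - P(H)).
# The scenario below is an illustrative assumption.
def posterior(prior, likelihood, likelihood_if_false):
    """Posterior probability of a hypothesis H given evidence E."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# A hypothesis with a 1% prior, and evidence that is 99% likely if the
# hypothesis is true but 5% likely if it is false:
p = posterior(prior=0.01, likelihood=0.99, likelihood_if_false=0.05)
```

Despite the strong evidence, the posterior is only about 0.17, because the low prior weighs heavily in the calculation.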
The problem of priors
Bayesianism does not offer a clear way to determine prior probabilities
The principal principle
A subject’s prior probability should be assigned on the basis of objective probability, if it is known
The principle of indifference
A subject’s prior probabilities should be assigned equally to the possible outcomes, if there is no information about the objective probabilities
The problem of slow convergence
If two subjects assign sufficiently different prior probabilities to the same hypothesis, it is possible that their respective posterior probabilities will not converge even though Bayes’ theorem has been applied to large amounts of data
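A minimal sketch of how an extreme prior blocks convergence: an agent whose prior probability is exactly 0 is never moved by any evidence, however often Bayes' theorem is applied. The coin hypotheses, priors, and data below are illustrative assumptions, not from the text.

```python
# Two agents repeatedly update the hypothesis "the coin is biased
# towards heads (P(heads) = 0.7)" against the alternative "the coin
# is fair". All numbers are illustrative assumptions.
def update(prior, heads):
    """One Bayes update on a single coin flip."""
    likelihood = 0.7 if heads else 0.3   # P(outcome | biased)
    likelihood_alt = 0.5                 # P(outcome | fair)
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

flips = [True] * 7 + [False] * 3         # observed data: 7 heads, 3 tails

open_minded, dogmatic = 0.5, 0.0
for heads in flips:
    open_minded = update(open_minded, heads)
    dogmatic = update(dogmatic, heads)
```

The open-minded agent's posterior rises above 0.5 in response to the data, while the dogmatic agent's posterior stays at exactly 0 no matter how much evidence arrives.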
The problem of old evidence
The problem that evidence which is already known has probability 1, so that applying Bayes’ theorem to it cannot raise the posterior probability of a hypothesis, even if the evidence intuitively supports that hypothesis
The problem of uncertain evidence
Bayesianism does not take uncertainty about evidence into account