Part 6: Statistics Flashcards

1
Q

Lying with statistics:

A

The intentional misapplication of statistical tools.

2
Q

Statistical methodology:

A

Justification of the choice between statistical methods.

3
Q

Descriptive statistics:

A

In descriptive statistics, one aims to display data and conclusions accurately.

4
Q

Inferential statistics:

A

In inferential statistics, one aims to draw a justified conclusion from data.

5
Q

Stochastic hypothesis:

A

A hypothesis whose implications come in the form of a probability distribution.

6
Q

Deterministic hypothesis:

A

A hypothesis all of whose implications are certain.

7
Q

Quantitative measure of measurement error:

A

The likelihood of a measurement error being made, presented on a quantitative scale.

8
Q

Error-based statistics:

A

Determining the probability of an observation given that a certain hypothesis is true.

9
Q

Confidence in a hypothesis:

A

The subjective estimation of the probability of a hypothesis.

10
Q

Fisher’s significance testing:

A

We have made a set of observations and want to decide whether to reject a certain hypothesis on the basis of the data. To do this, we calculate how probable the observed data would be if (i.e. under the assumption that) the hypothesis were true. If the data would be very improbable, we should reject the hypothesis.
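As a rough sketch of this recipe, suppose (purely for illustration) that we flip a coin 10 times, observe 9 heads, and test the hypothesis that the coin is fair, using a one-sided p-value and the conventional 0.05 threshold:

```python
from math import comb

# Hypothesis under test: the coin is fair, i.e. P(heads) = 0.5.
# Hypothetical observation: 9 heads in 10 flips.
n, observed_heads, p = 10, 9, 0.5

def prob_exactly(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n flips if the hypothesis is true."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# p-value: probability of an outcome at least as extreme as the one observed
# (taken one-sided here, i.e. at least as many heads).
p_value = sum(prob_exactly(k, n, p) for k in range(observed_heads, n + 1))
print(f"p-value = {p_value:.4f}")  # ~0.0107

# Fisher's recipe: if the data would be this improbable under the hypothesis,
# reject it (here using the conventional 0.05 threshold).
print("reject the hypothesis" if p_value < 0.05 else "do not reject")
```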

11
Q

Test statistic:

A

A quantity computed from the observed data that summarizes the outcome of the test (for example, the number of heads in a series of coin flips).

12
Q

Sampling distribution:

A

A distribution over the possible outcomes of the test statistic.
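As an illustration (reusing the hypothetical fair-coin example), the sampling distribution of the test statistic "number of heads in 10 flips" can be written out in full:

```python
from math import comb

# Sampling distribution of the test statistic "number of heads in 10 flips",
# computed under the hypothetical assumption that the coin is fair.
n, p = 10, 0.5
sampling_distribution = {
    k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)
}

for k, prob in sampling_distribution.items():
    print(f"{k:2d} heads: {prob:.4f}")
# The probabilities sum to 1; the extreme outcomes (0 or 10 heads) are the
# least probable, which is what makes them count as "extreme" in a test.
```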

13
Q

p-value:

A

The probability, assuming the hypothesis under test is true, of observing an outcome at least as extreme as the one actually observed.

14
Q

Significance level:

A

A conventionally set level for p-values, below which the associated hypothesis should be rejected.

15
Q

p-value abuse:

A

Changing the test setup, statistical method, or sample in order to push the p-value above or below the significance level, depending on which result is desired.
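A hypothetical simulation of one such abuse: repeatedly drawing fresh samples from a coin that is in fact fair and reporting only the samples that happen to fall below the significance level. The sample size, number of repetitions, and one-sided test are illustrative choices:

```python
import random
from math import comb

def p_value(heads: int, n: int, p: float = 0.5) -> float:
    """One-sided p-value: probability of at least `heads` heads in n fair flips."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(heads, n + 1))

random.seed(0)
n, samples = 20, 100

# The coin is in fact fair, so the hypothesis being tested is true in every run.
significant = 0
for _ in range(samples):
    heads = sum(random.random() < 0.5 for _ in range(n))
    if p_value(heads, n) < 0.05:
        significant += 1

# Reporting only the runs that crossed the threshold (and discarding the rest)
# would make a fair coin look biased.
print(f"{significant} of {samples} samples came out 'significant' by chance")
```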

16
Q

Neyman-Pearson hypothesis testing:

A

The test begins by formulating two hypotheses that are mutually exclusive and jointly exhaustive; that is, one is the negation of the other. For example, we might set the null hypothesis H0 as the claim that the coin is not fair and the alternative hypothesis Ha as the claim that it is fair. Either of them might be true, so accepting one or the other can yield one of four possible outcomes: correctly accepting a true H0, correctly rejecting a false H0, wrongly rejecting a true H0 (a Type I error), or wrongly accepting a false H0 (a Type II error).
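A minimal numerical sketch of such a test. The coin example follows the card, but the concrete numbers (20 flips, a rejection rule of "at least 15 heads") are illustrative assumptions, and for computability H0 is taken here as the point hypothesis that the coin is fair:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n flips when P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative decision rule: reject H0 ("the coin is fair") if 15 or more
# heads come up in 20 flips; otherwise accept it.
n, threshold = 20, 15

# Probability of a Type I error: rejecting H0 although it is true.
alpha = sum(binom_pmf(k, n, 0.5) for k in range(threshold, n + 1))
print(f"P(Type I error) = {alpha:.4f}")  # ~0.0207

# The four possible outcomes of the test:
#   accept H0 and H0 is true   -> correct acceptance
#   reject H0 and H0 is false  -> correct rejection
#   reject H0 and H0 is true   -> Type I error
#   accept H0 and H0 is false  -> Type II error
```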

17
Q

Null hypothesis (H0):

A

The negation of the test hypothesis.

18
Q

Alternative hypothesis (Ha):

A

The hypothesis that, by logical necessity, is true if the null hypothesis is false, and false if the null hypothesis is true.

19
Q

Type I error:

A

Wrongly rejecting a true null hypothesis.

20
Q

Type II error:

A

Wrongly accepting a false null hypothesis.

21
Q

Power of a test:

A

The probability of correctly rejecting a false null hypothesis.
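Continuing the illustrative coin test above, the power can be computed once a specific alternative is assumed; the bias P(heads) = 0.7 below is a purely hypothetical choice:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Same illustrative rule as before: reject the null hypothesis "the coin is
# fair" when 15 or more heads come up in 20 flips. Suppose, hypothetically,
# that the coin is in fact biased with P(heads) = 0.7.
n, threshold, true_p = 20, 15, 0.7

# Type II error: failing to reject the null hypothesis although it is false.
beta = sum(binom_pmf(k, n, true_p) for k in range(threshold))
power = 1 - beta  # probability of correctly rejecting the false null hypothesis
print(f"P(Type II error) = {beta:.3f}, power = {power:.3f}")  # power ~0.416
```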

22
Q

Bayesian statistics:

A

The posterior probability of a hypothesis is calculated from the prior probability of the hypothesis together with the observed outcome, using Bayes’ theorem.
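A small worked sketch of such an update, with two rival hypotheses about a coin, equal priors, and an illustrative observation (all numbers are assumptions, not from the card):

```python
from math import comb

def likelihood(heads: int, n: int, p: float) -> float:
    """P(observed outcome | hypothesis), here a binomial likelihood."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

# Two rival hypotheses about a coin (hypothetical values) and equal priors.
hypotheses = {"fair (p=0.5)": 0.5, "biased (p=0.7)": 0.7}
prior = {"fair (p=0.5)": 0.5, "biased (p=0.7)": 0.5}

# Illustrative observed outcome: 8 heads in 10 flips.
heads, n = 8, 10

# Bayes' theorem: posterior(H) = P(outcome | H) * prior(H) / P(outcome),
# where P(outcome) is summed over all hypotheses under consideration.
evidence = sum(likelihood(heads, n, p) * prior[h] for h, p in hypotheses.items())
posterior = {
    h: likelihood(heads, n, p) * prior[h] / evidence for h, p in hypotheses.items()
}
print(posterior)  # the biased hypothesis gains probability: ~0.84 vs ~0.16
```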

23
Q

Prior probability:

A

The (estimated) probability of the hypothesis being true before the application of Bayes’ theorem.

24
Q

Subjective degrees of belief:

A

The Bayesian view of what is meant by “probability” – that probability is the subjective estimation of likelihood rather than a property belonging to the world.

25
Q

Posterior probability:

A

The (calculated) probability of the hypothesis being true after the application of Bayes’ theorem.

26
Q

The problem of priors:

A

Bayesianism does not offer a clear way to determine prior probabilities.

27
Q

The principal principle:

A

A subject’s prior probability should be assigned on the basis of objective probability, if it is known.

28
Q

The principle of indifference:

A

A subject’s prior probabilities should be assigned equally to the possible outcomes, if there is no information about the objective probabilities.

29
Q

The problem of slow convergence:

A

If two subjects assign sufficiently different prior probabilities to the same hypothesis, it is possible that their respective posterior probabilities will not converge even though Bayes’ theorem has been applied to large amounts of data.
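A hypothetical illustration: two subjects update on exactly the same data with Bayes’ theorem but start from very different priors, and their posteriors remain far apart:

```python
from math import comb

def likelihood(heads: int, n: int, p: float) -> float:
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

def posterior_biased(prior_biased: float, heads: int, n: int) -> float:
    """Posterior for 'biased (p=0.7)' against 'fair (p=0.5)' after the data."""
    l_biased = likelihood(heads, n, 0.7)
    l_fair = likelihood(heads, n, 0.5)
    evidence = l_biased * prior_biased + l_fair * (1 - prior_biased)
    return l_biased * prior_biased / evidence

# Both subjects see the same hypothetical data: 14 heads in 20 flips.
heads, n = 14, 20
for prior in (0.5, 0.0001):
    print(f"prior {prior:<6} -> posterior {posterior_biased(prior, heads, n):.4f}")
# One subject ends up near 0.84 while the other stays near 0.0005: despite
# identical evidence, the posteriors have not (yet) come close to converging.
```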

30
Q

The problem of old evidence:

A

The problem of determining which evidence has already been used to determine posterior probabilities.

31
Q

The problem of uncertain evidence:

A

Bayesianism does not take uncertainty about evidence into account.