Chapter 3 Flashcards

1
Q

Bayes factor

A

the ratio of the probability of the observed data given the alternative hypothesis to the probability of the observed data given the null hypothesis, although SPSS Statistics tends to express it the other way around. Put another way, it is the likelihood of the alternative hypothesis relative to the null. A Bayes factor of 3, for example, means that the observed data are 3 times more likely under the alternative hypothesis than under the null hypothesis. A Bayes factor less than 1 supports the null hypothesis by suggesting that the probability of the data given the null is higher than the probability of the data given the alternative hypothesis. Conversely, a Bayes factor greater than 1 suggests that the observed data are more likely given the alternative hypothesis than the null. Values between 1 and 3 are considered evidence for the alternative hypothesis that is ‘barely worth mentioning’, values between 3 and 10 are considered ‘substantial evidence’ (‘having substance’ rather than ‘very strong’) for the alternative hypothesis, and values greater than 10 are strong evidence for the alternative hypothesis.
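As an illustrative sketch (the coin-flip data and the two point hypotheses below are invented, not from the text), a Bayes factor for two point hypotheses can be computed as a ratio of likelihoods:

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Probability of observing k successes in n trials given success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 8 heads in 10 coin flips.
k, n = 8, 10

# Two point hypotheses (both values invented for illustration):
# null = fair coin (p = 0.5), alternative = biased coin (p = 0.8).
like_null = binomial_likelihood(k, n, 0.5)
like_alt = binomial_likelihood(k, n, 0.8)

# Bayes factor: how many times more likely the data are under the
# alternative than under the null.
bayes_factor = like_alt / like_null
```

Here the Bayes factor works out to roughly 6.9, which by the scale above would count as ‘substantial evidence’ for the alternative hypothesis.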

2
Q

Bayesian statistics

A

a branch of statistics in which hypotheses are tested or model parameters are estimated using methods based on Bayes’ theorem.

3
Q

Cohen’s d

A

an effect size that expresses the difference between two means in standard deviation units. In general it can be estimated as d̂ = (X̄₁ − X̄₂)/s, where X̄₁ and X̄₂ are the two group means and s is a standard deviation (commonly that of a control group, or a pooled estimate from both groups).
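As a rough sketch (the scores below are invented), Cohen’s d using a pooled standard deviation can be computed like this:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation (one common choice of s)."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled standard deviation across both groups.
    s_pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / s_pooled

# Hypothetical scores for two groups.
d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

With these made-up scores the means differ by 2 and the pooled standard deviation is about 1.58, giving d of roughly 1.26, i.e. the means differ by about 1.26 standard deviations.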

4
Q

Contingency table

A

cross-classification of two or more categorical variables. The levels of each variable are arranged in a grid, and the number of observations falling into each category is noted in the cells of the table.
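As a minimal sketch (the data are invented), a contingency table can be tallied from paired observations of two categorical variables:

```python
from collections import Counter

# Hypothetical observations: (pet owned, fidelity status) per person.
observations = [
    ("cat owner", "unfaithful"), ("cat owner", "faithful"),
    ("dog owner", "faithful"), ("dog owner", "faithful"),
    ("cat owner", "faithful"), ("dog owner", "unfaithful"),
]

# Each cell of the grid holds the count of observations in that category pair.
cells = Counter(observations)
rows = sorted({r for r, _ in observations})
cols = sorted({c for _, c in observations})

# Print the table row by row.
for r in rows:
    print(r, {c: cells[(r, c)] for c in cols})
```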

5
Q

Credible interval

A

in Bayesian statistics, a credible interval is an interval within which a certain percentage of the posterior distribution falls (usually 95%). It can be used to express the limits within which a parameter falls with a fixed probability. For example, if we estimated the average length of a romantic relationship to be 6 years with a 95% credible interval of 1 to 11 years, then this would mean that 95% of the posterior distribution for the length of romantic relationships falls between 1 and 11 years. A plausible estimate of the length of romantic relationships would, therefore, be 1 to 11 years.
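A rough sketch of reading off a 95% credible interval, assuming we already have draws from a posterior distribution (the draws below are simulated with invented values, echoing the relationship-length example):

```python
import random

random.seed(42)
# Hypothetical draws from a posterior distribution for relationship
# length in years (invented: normal with mean 6, sd 2.5).
posterior_draws = sorted(random.gauss(6, 2.5) for _ in range(10_000))

# 95% credible interval: the central 95% of the posterior draws
# (approximate 2.5th and 97.5th percentile positions).
lower = posterior_draws[int(0.025 * len(posterior_draws))]
upper = posterior_draws[int(0.975 * len(posterior_draws))]
```

95% of this posterior falls between `lower` and `upper`, so that range is a plausible estimate of the parameter.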

6
Q

Effect size

A

an objective and (usually) standardized measure of the magnitude of an observed effect. Measures include Cohen’s d, Glass’s g and Pearson’s correlation coefficient, r.

7
Q

Empirical probability

A

the empirical probability is the probability of an event based on the observation of many trials. For example, if you define the collective as all men, then the empirical probability of infidelity in men will be the proportion of men who have been unfaithful while in a relationship. The probability applies to the collective and not to the individual events. You can talk about there being a 0.1 probability of men being unfaithful, but the individual men were either faithful or not, so their individual probability of infidelity was either 0 (they were faithful) or 1 (they were unfaithful).

8
Q

HARKing

A

the practice in research articles of presenting a hypothesis that was made after data were collected as though it were made before data collection.

9
Q

Informative prior distribution

A

in Bayesian statistics an informative prior distribution is a distribution representing your beliefs in a model parameter where the distribution narrows those beliefs to some degree. For example, a prior distribution that is normal with a peak at 5 and range from 2 to 8 would narrow your beliefs in a parameter such that you most strongly believe that its value will be 5, and you think it is impossible for the parameter to be less than 2 or greater than 8. As such, this distribution constrains your prior beliefs. Informative priors can vary from weakly informative (you are prepared to believe a wide range of values) to strongly informative (your beliefs are very constrained) (cf. uninformative prior distribution).

10
Q

Likelihood

A

the probability of obtaining a set of observations given the parameters of a model fitted to those observations. When using Bayes’ theorem to test a hypothesis, the likelihood is the probability that the observed data could be produced given the hypothesis or model being considered, p(data|model). It is the inverse conditional probability of the posterior probability.

11
Q

Marginal likelihood

A

when using Bayes’ theorem to test a hypothesis, the marginal likelihood (sometimes called evidence) is the probability of the observed data, p(data).

12
Q

Meta-analysis

A

this is a statistical procedure for assimilating research findings. It is based on the simple idea that we can take the findings from individual studies that research the same question, quantify the observed effect in a standard way (using effect sizes) and then combine these effects to get a more accurate idea of the true effect in the population.

13
Q

Odds

A

the probability of an event occurring divided by the probability of that event not occurring.
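A minimal sketch of converting between a probability and odds (the 0.8 value is just an example, not from the text):

```python
def odds(p):
    """Odds of an event: the probability it occurs divided by the probability it doesn't."""
    return p / (1 - p)

def probability(odds_value):
    """Convert odds back into a probability."""
    return odds_value / (1 + odds_value)

# A 0.8 probability corresponds to odds of 4: the event is 4 times
# more likely to occur than not to occur.
event_odds = odds(0.8)
```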

14
Q

Odds ratio

A

the ratio of the odds of an event occurring in one group to the odds of it occurring in another group. So, for example, if the odds of dying after writing a glossary are 4, and the odds of dying after not writing a glossary are 0.25, then the odds ratio is 4/0.25 = 16. This means that the odds of dying if you write a glossary are 16 times higher than if you don’t. An odds ratio of 1 would indicate that the odds of a particular outcome are equal in both groups.
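The glossary’s worked example can be reproduced from a hypothetical 2×2 contingency table (the counts below are invented so that the group odds come out at 4 and 0.25):

```python
# Hypothetical 2x2 table: rows = group, columns = outcome.
#                   died  survived
# wrote glossary:     80        20   -> odds of dying = 80/20 = 4
# no glossary:        20        80   -> odds of dying = 20/80 = 0.25
odds_wrote = 80 / 20
odds_not = 20 / 80

# Odds ratio: odds in one group divided by odds in the other.
odds_ratio = odds_wrote / odds_not
```

As in the definition, the odds of dying are 16 times higher in the glossary-writing group.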

15
Q

Open science

A

a movement to make the process, data and outcomes of scientific research freely available to everyone.

16
Q

p -curve

A

a curve summarizing the frequency distribution of p-values you’d expect to see in published research. On a graph that plots the value of p on the horizontal axis against the frequency (or proportion) on the vertical axis, the p-curve is the line reflecting how frequently (or in what proportion) each value of p should occur for a given effect size.

17
Q

p -hacking

A

research practices that lead to the selective reporting of significant p-values. Some examples of p-hacking are: (1) trying multiple analyses and reporting only the one that yields significant results; (2) stopping data collection at a point other than when the predetermined sample size is reached; (3) deciding whether to include data based on the effect they have on the p-value; (4) including (or excluding) variables in an analysis based on how they affect the p-value; (5) measuring multiple outcome or predictor variables but reporting only those for which the effects are significant; (6) merging groups of variables or scores to yield significant results; and (7) transforming, or otherwise manipulating, scores to yield significant p-values.

18
Q

Peer Reviewers’ Openness Initiative

A

an initiative to get scientists to commit to the principles of open science when they act as expert reviewers for journals. Signing up is a pledge to review submissions only if the data, stimuli, materials, analysis scripts and so on are made publicly available (unless there is a good reason, such as a legal requirement, not to).

19
Q

Posterior distribution

A

a distribution of posterior probabilities. This distribution should contain our subjective beliefs about a parameter or hypothesis after considering the data. The posterior distribution can be used to ascertain a value of the posterior probability (usually by examining some measure of where the peak of the distribution lies, or a credible interval).

20
Q

Posterior odds

A

the ratio of the posterior probability of one hypothesis to that of another. In Bayesian hypothesis testing the posterior odds are the ratio of the probability of the alternative hypothesis given the data, p(alternative|data), to the probability of the null hypothesis given the data, p(null|data).
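A small sketch of how posterior odds relate to the Bayes factor and prior odds (all numbers below are invented for illustration):

```python
# From Bayes' theorem, dividing the posterior for the alternative by the
# posterior for the null, the marginal likelihood p(data) cancels:
#   p(alt|data) / p(null|data) = [p(data|alt) / p(data|null)] * [p(alt) / p(null)]
#   posterior odds            =  Bayes factor                 *  prior odds

bayes_factor = 6.87   # hypothetical: data 6.87 times more likely under the alternative
prior_odds = 0.5      # hypothetical: alternative believed half as likely as the null
posterior_odds = bayes_factor * prior_odds

# Odds can be converted back to a posterior probability for the alternative.
p_alt_given_data = posterior_odds / (1 + posterior_odds)
```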

21
Q

Posterior probability

A

when using Bayes’ theorem to test a hypothesis, the posterior probability is our belief in a hypothesis or model after we have considered the data, p (model|data). This is the value that we are usually interested in knowing. It is the inverse conditional probability of the likelihood .

22
Q

Pre-registration

A

a term referring to the practice of making all aspects of your research process (rationale, hypotheses, design, data processing strategy, data analysis strategy) publicly available before data collection begins. This can be done in a registered report in an academic journal or more informally (e.g., on a public website such as the Open Science Framework). The aim is to encourage adherence to an agreed research protocol, thus discouraging threats to the validity of scientific results such as researcher degrees of freedom.

23
Q

Prior distribution

A

a distribution of prior probabilities. This distribution should contain our subjective beliefs about a parameter or hypothesis before (or prior to) considering the data. The prior distribution can be an informative prior or an uninformative prior.

24
Q

Prior odds

A

the ratio of the probability of one hypothesis/model to a second. In Bayesian hypothesis testing, the prior odds are the probability of the alternative hypothesis, p(alternative), divided by the probability of the null hypothesis, p(null). The prior odds should reflect your belief in the alternative hypothesis relative to the null before you look at the data.

25
Q

Prior probability

A

when using Bayes’ theorem to test a hypothesis, the prior probability is our belief in a hypothesis or model before (or prior to) considering the data, p(model).
See also posterior probability, likelihood, marginal likelihood.

26
Q

Publication bias

A

the fact that articles published in scientific journals tend to over-represent positive findings. This can be because (1) non-significant findings are less likely to be published; (2) scientists don’t submit their non-significant results to journals; (3) scientists selectively report their results to focus on significant findings and exclude non-significant ones; and (4) scientists capitalize on researcher degrees of freedom to present their results in the most favorable light possible.

27
Q

Registered reports

A

an article in a journal usually outlining an intended research process (rationale, hypotheses, design, data processing strategy, data analysis strategy). The report is reviewed by relevant expert scientists, ensuring that authors get useful feedback before data collection. If the protocol is accepted by the journal editor it typically comes with a guarantee to publish the findings no matter what they are, thus reducing publication bias and discouraging researcher degrees of freedom aimed at achieving significant results.

28
Q

Researcher degrees of freedom

A

the analytic decisions a researcher makes that potentially influence the results of the analysis. Some examples are: when to stop data collection, which control variables to include in the statistical model, and whether to exclude cases from the analysis.

29
Q

Tests of excess success

A

a procedure designed to identify sets of results within academic articles that are ‘too good to be true’. For an article reporting multiple scientific studies examining the same effect, the test computes (based on the size of the effect being measured and the sample sizes of the studies) the probability of obtaining significant results in all of the studies. If this probability is low, the full set of significant results is unlikely and so appears ‘too good to be true’, implying p-hacking (Francis, 2013). It is noteworthy that the TES is not universally accepted as testing what it sets out to test (e.g., Morey, 2013).

30
Q

Uninformative prior distribution

A

in Bayesian statistics an uninformative prior distribution is a distribution representing your beliefs in a model parameter where the distribution assigns equal probability to all values of the parameter. For example, a prior distribution that is uniform across all potential values of a parameter suggests that you are prepared to believe that the parameter can take on any value with equal probability. As such, this distribution does not constrain your prior beliefs (cf. informative prior distribution).