Chapter 3 Flashcards
Bayes factor
the ratio of the probability of the observed data given the alternative hypothesis to the probability of the observed data given the null hypothesis, although SPSS Statistics tends to express it the other way around. Put another way, it compares how well the alternative hypothesis and the null hypothesis predict the observed data. A Bayes factor of 3, for example, means that the observed data are 3 times more likely under the alternative hypothesis than under the null hypothesis. A Bayes factor less than 1 supports the null hypothesis by suggesting that the probability of the data given the null is higher than the probability of the data given the alternative hypothesis. Conversely, a Bayes factor greater than 1 suggests that the observed data are more likely given the alternative hypothesis than the null. Values between 1 and 3 are considered evidence for the alternative hypothesis that is ‘barely worth mentioning’, values between 3 and 10 are considered ‘substantial evidence’ (‘having substance’ rather than ‘very strong’) for the alternative hypothesis, and values greater than 10 are considered strong evidence for the alternative hypothesis.
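To make the ratio concrete, the sketch below compares two simple point hypotheses using hypothetical binomial data (8 successes in 10 trials). This is a minimal illustration only; real Bayes factor tests, including those in SPSS Statistics, average the likelihood over a prior distribution rather than using a single point value for the alternative.

```python
from scipy.stats import binom

# Hypothetical data: 8 successes in 10 trials.
k, n = 8, 10

# Two simple point hypotheses about the success probability (illustrative only).
p_null, p_alt = 0.5, 0.8

# Probability of the observed data under each hypothesis.
likelihood_null = binom.pmf(k, n, p_null)  # p(data | null)
likelihood_alt = binom.pmf(k, n, p_alt)    # p(data | alternative)

# Bayes factor in favour of the alternative hypothesis.
bf = likelihood_alt / likelihood_null
print(f"Bayes factor = {bf:.2f}")  # ~6.87: 'substantial' evidence for the alternative
```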
Bayesian statistics
a branch of statistics in which hypotheses are tested or model parameters are estimated using methods based on Bayes’ theorem.
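For reference, Bayes’ theorem expresses the posterior belief in a model as the likelihood times the prior, scaled by the marginal likelihood (see the separate entries for likelihood and marginal likelihood):

```latex
\underbrace{p(\text{model} \mid \text{data})}_{\text{posterior}}
  = \frac{\overbrace{p(\text{data} \mid \text{model})}^{\text{likelihood}}
          \times \overbrace{p(\text{model})}^{\text{prior}}}
         {\underbrace{p(\text{data})}_{\text{marginal likelihood}}}
```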
Cohen’s d
an effect size that expresses the difference between two means in standard deviation units. In general it can be estimated using:

\[ \hat{d} = \frac{\bar{X}_1 - \bar{X}_2}{s} \]

where \(\bar{X}_1\) and \(\bar{X}_2\) are the two group means and \(s\) is a standard deviation (typically that of a control group, or a pooled estimate from both groups).
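A minimal Python sketch, assuming the pooled standard deviation as the standardizer; the scores are made up:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation of the two groups."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation, weighting each group by its degrees of freedom.
    s_pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                       / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / s_pooled

# Hypothetical scores for two groups.
group1 = [12, 15, 14, 16, 13]
group2 = [10, 11, 12, 9, 13]
print(f"d = {cohens_d(group1, group2):.2f}")  # ~1.90 for these made-up data
```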
Contingency table
cross-classification of two or more categorical variables. The levels of each variable are arranged in a grid, and the number of observations falling into each category is noted in the cells of the table.
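For example, a contingency table can be built with pandas’ crosstab function; the data below are made up:

```python
import pandas as pd

# Hypothetical data: two categorical variables observed on six cases.
df = pd.DataFrame({
    "pet": ["cat", "dog", "cat", "dog", "cat", "dog"],
    "happy": ["yes", "yes", "no", "yes", "yes", "no"],
})

# Contingency table: counts of observations in each combination of categories.
table = pd.crosstab(df["pet"], df["happy"])
print(table)
```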
Credible interval
in Bayesian statistics, a credible interval is an interval within which a certain percentage of the posterior distribution falls (usually 95%). It can be used to express the limits within which a parameter falls with a fixed probability. For example, if we estimated the average length of a romantic relationship to be 6 years with a 95% credible interval of 1 to 11 years, then this would mean that 95% of the posterior distribution for the length of romantic relationships falls between 1 and 11 years. A plausible estimate of the length of romantic relationships would, therefore, be 1 to 11 years.
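A minimal sketch, assuming the posterior is represented by samples (as it would be after MCMC estimation); the numbers are made up to mirror the 6-year example:

```python
import numpy as np

# Hypothetical posterior samples for relationship length (e.g., from MCMC).
rng = np.random.default_rng(42)
posterior_samples = rng.normal(loc=6, scale=2.55, size=10_000)

# 95% credible interval: the central 95% of the posterior distribution.
lower, upper = np.percentile(posterior_samples, [2.5, 97.5])
print(f"95% credible interval: {lower:.1f} to {upper:.1f} years")  # ~1 to 11
```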
Effect size
an objective and (usually) standardized measure of the magnitude of an observed effect. Measures include Cohen’s d, Glass’s g and Pearson’s correlation coefficient, r.
Empirical probability
the empirical probability is the probability of an event based on the observation of many trials. For example, if you define the collective as all men, then the empirical probability of infidelity in men will be the proportion of men who have been unfaithful while in a relationship. The probability applies to the collective and not to the individual events. You can talk about there being a 0.1 probability of men being unfaithful, but the individual men were either faithful or not, so their individual probability of infidelity was either 0 (they were faithful) or 1 (they were unfaithful).
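In code, the empirical probability is simply the proportion of trials on which the event occurred; the outcomes below are hypothetical:

```python
# Hypothetical outcomes across 10 men: 1 = unfaithful, 0 = faithful.
outcomes = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

# Empirical probability: the proportion of trials on which the event occurred.
empirical_p = sum(outcomes) / len(outcomes)
print(empirical_p)  # 0.1 for this made-up collective
```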
HARKing
the practice in research articles of presenting a hypothesis that was made after data were collected as though it were made before data collection (Hypothesizing After the Results are Known).
Informative prior distribution
in Bayesian statistics an informative prior distribution is a distribution representing your beliefs about a model parameter, where the distribution narrows those beliefs to some degree. For example, a prior distribution that is normal with a peak at 5 and a range from 2 to 8 would narrow your beliefs about a parameter such that you most strongly believe that its value will be 5, and you think it is impossible for the parameter to be less than 2 or greater than 8. As such, this distribution constrains your prior beliefs. Informative priors can vary from weakly informative (you are prepared to believe a wide range of values) to strongly informative (your beliefs are very constrained) (cf. uninformative prior distribution).
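One way to encode the example above is a truncated normal distribution, peaked at 5 and ruling out values outside 2 to 8; the specific shape in this scipy sketch is an assumption for illustration:

```python
from scipy.stats import truncnorm

# Hypothetical informative prior: peaked at 5, impossible outside [2, 8].
peak, spread, lower, upper = 5, 1, 2, 8
a, b = (lower - peak) / spread, (upper - peak) / spread  # bounds in standard units
prior = truncnorm(a, b, loc=peak, scale=spread)

print(prior.pdf(5))  # highest density at the peak
print(prior.pdf(1))  # 0.0: values below 2 are ruled out entirely
```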
Likelihood
the probability of obtaining a set of observations given the parameters of a model fitted to those observations. When using Bayes’ theorem to test a hypothesis, the likelihood is the probability that the observed data could be produced given the hypothesis or model being considered, p(data|model). It is the inverse conditional probability of the posterior probability.
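A minimal sketch evaluating p(data|model) for a few candidate models, using hypothetical binomial data:

```python
from scipy.stats import binom

# Hypothetical data: 8 successes in 10 trials.
k, n = 8, 10

# The likelihood asks: how probable are these data under each candidate model?
for p in (0.2, 0.5, 0.8):
    print(f"p(data | p = {p}) = {binom.pmf(k, n, p):.4f}")
```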
Marginal likelihood
when using Bayes’ theorem to test a hypothesis, the marginal likelihood (sometimes called evidence) is the probability of the observed data, p(data).
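In symbols, the marginal likelihood averages the likelihood over the prior, which is why it appears as the denominator of Bayes’ theorem:

```latex
p(\text{data}) = \int p(\text{data} \mid \theta)\, p(\theta)\, d\theta
```

When the candidate models form a discrete set, the integral becomes a sum over those models.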
Meta-analysis
this is a statistical procedure for assimilating research findings. It is based on the simple idea that we can take effect sizes from individual studies that research the same question, quantify the observed effect in a standard way (using effect sizes) and then combine these effects to get a more accurate idea of the true effect in the population.
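A minimal sketch of the simplest combination rule, a fixed-effect inverse-variance weighted average; the effect sizes and variances below are made up, and real meta-analyses often use random-effects models instead:

```python
import numpy as np

# Hypothetical effect sizes (Cohen's d) and their variances from five studies.
effects = np.array([0.40, 0.55, 0.30, 0.62, 0.45])
variances = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

# Weight each study by the inverse of its variance, so that more precise
# studies contribute more to the combined estimate.
weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))

print(f"combined effect = {pooled:.2f} (SE = {se:.2f})")
```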
Odds
the probability of an event occurring divided by the probability of that event not occurring.
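In symbols:

```latex
\text{odds} = \frac{p(\text{event})}{p(\text{no event})} = \frac{p(\text{event})}{1 - p(\text{event})}
```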
Odds ratio
the ratio of the odds of an event occurring in one group compared to another. So, for example, if the odds of dying after writing a glossary are 4, and the odds of dying after not writing a glossary are 0.25, then the odds ratio is 4/0.25 = 16. This means that the odds of dying if you write a glossary are 16 times higher than if you don’t. An odds ratio of 1 would indicate that the odds of a particular outcome are equal in both groups.
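The arithmetic in the example corresponds to hypothetical counts like these:

```python
# Hypothetical 2x2 counts: outcome (died vs. survived) by group.
died_writers, survived_writers = 40, 10  # wrote a glossary
died_others, survived_others = 20, 80    # did not write a glossary

# Odds of dying in each group: p(event) / p(no event) reduces to a count ratio.
odds_writers = died_writers / survived_writers  # 4.0
odds_others = died_others / survived_others     # 0.25

print(odds_writers / odds_others)  # odds ratio = 16.0, matching the example above
```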
Open science
a movement to make the process, data and outcomes of scientific research freely available to everyone.