2 Flashcards
- What is DV?
The dependent variable: the proposed effect or outcome variable; the measure that is not manipulated in the experiment
- Null Hypothesis Significance Testing (NHST) computes the probability of…
A test statistic is computed, along with the probability (the p-value) of obtaining a statistic at least that extreme by chance, assuming the null hypothesis is true
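A minimal sketch of what this looks like in practice, using made-up scores and SciPy's independent-samples t-test (any NHST procedure returns the same two pieces: a test statistic and its p-value):

```python
import numpy as np
from scipy import stats

group_a = np.array([5.1, 6.2, 5.8, 7.0, 6.5])  # hypothetical scores, condition A
group_b = np.array([4.0, 4.8, 5.2, 4.5, 5.0])  # hypothetical scores, condition B

# The test returns a statistic and the probability of a statistic at least
# this extreme occurring by chance if H0 (no difference) were true.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```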
- What are common misconceptions of NHST?
That a significant result means the effect is important, that a non-significant result means the null hypothesis is true, and that a significant result means the null hypothesis is false
- Effect size can help with the issues of NHST; what is effect size?
Effect size is a quantitative measure of the magnitude of an experimental effect; the larger the effect size, the stronger the relationship between the two variables, and studies can be compared on the basis of their effect sizes
- How do you calculate effect size using Cohen's d?
Mean 1 minus mean 2, divided by the standard deviation (typically the pooled standard deviation of the two groups)
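A rough sketch of the calculation in Python; the group data are made up, and the pooled standard deviation is assumed as the denominator:

```python
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    # Pooled SD: weighted average of the two sample variances
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

treatment = [5.1, 6.2, 5.8, 7.0, 6.5]  # hypothetical scores
control = [4.0, 4.8, 5.2, 4.5, 5.0]
print(cohens_d(treatment, control))  # d of roughly 0.2 small, 0.5 medium, 0.8 large
```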
- What distribution do you need for parametric tests?
A normal distribution, which can be described by the mean (central tendency) and SD (dispersion)
- Is the mean a good measure of central tendency?
The mean can be a misleading measure of central tendency in skewed distributions, as it is greatly influenced by extreme scores
- Aside from the mean, which measures of central tendency can be used, and when? (2)
The median is unaffected by extreme scores and can be used with ordinal, interval and ratio data
The mode is only used with nominal data, is greatly subject to sampling fluctuations, and many distributions have more than one mode
- What happens in positively skewed distributions?
The mean is greater than the median, which is greater than the mode (right-skewed: the tail points to the right)
- What happens in negatively skewed distributions?
The mode is greater than the median, which is greater than the mean (left-skewed: the tail points to the left)
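A quick illustrative check with made-up scores: one extreme value drags the mean above the median, which is the pattern of a positively skewed distribution:

```python
import numpy as np

scores = np.array([2, 3, 3, 4, 4, 4, 5, 5, 6, 30])  # 30 is an extreme score
print("mean:", scores.mean())        # 6.6, pulled upward by the outlier
print("median:", np.median(scores))  # 4.0, unaffected by the extreme score
```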
- What are tests of normality dependent on?
Sample size: with a very large sample, normality tests can come out significant even when the data look normally distributed on a plot (so you should usually trust the visual plot)
- Two ways a distribution can deviate from normal
Lack of symmetry (skew) and pointiness (kurtosis)
- What does kurtosis tell you?
It tells you how much of the data lies in the tails of the histogram and helps us to identify when outliers may be present in the data
- What is the difference between parametric and non-parametric tests? (4)
Parametric tests assume specific distributions, like the normal distribution, and require adherence to certain statistical assumptions, such as homogeneity of variances and independence of observations.
They tend to be more powerful when these assumptions are met, making them suitable for analyzing data that closely aligns with their requirements, such as interval or ratio data.
On the other hand, non-parametric tests make fewer distributional assumptions, making them robust and applicable to a wider range of data types, including ordinal or skewed data.
While non-parametric tests are generally less powerful than their parametric counterparts when assumptions are met, they provide reliable results in situations where assumptions are violated or when dealing with non-normally distributed data. These differences in assumptions and robustness make each type of test valuable in different research contexts
- Non-parametric equivalent of correlation
Spearman’s rho or Kendall’s tau
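A minimal sketch with made-up data, assuming SciPy is available:

```python
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 6, 8]

rho, p_rho = stats.spearmanr(x, y)   # Spearman's rho (rank correlation)
tau, p_tau = stats.kendalltau(x, y)  # Kendall's tau
print(f"rho = {rho:.2f} (p = {p_rho:.3f}), tau = {tau:.2f} (p = {p_tau:.3f})")
```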
- If the skewness value is between -1 and 1 then… but…
It's fine… but if it is below -1 the distribution is negatively skewed, and if it is above 1 it is positively skewed
- If skewness and kurtosis are both 0, this tells you your data have a
Normal distribution
- If your kurtosis value is between -2 and 2 then… but…
All good, but if it is less than -2 the distribution is platykurtic, and if it is above 2 it is leptokurtic
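A small sketch of checking these rules of thumb on deliberately skewed, made-up data with SciPy (note that scipy.stats.kurtosis reports excess kurtosis, which is 0 for a normal distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(scale=2, size=500)  # deliberately right-skewed sample

skew = stats.skew(data)
kurt = stats.kurtosis(data)      # excess kurtosis
print(f"skew = {skew:.2f}")      # above 1 here, so positively skewed
print(f"kurtosis = {kurt:.2f}")  # above 2 here, so leptokurtic
```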
- A correlational study does not rule out the presence of a third variable (tertium quid), which can be ruled out using
RCTs, which even out confounding variables between groups
- What is variance?
The average squared deviation of each score from the mean (SD squared)
- To get SD from variance you:
Take the square root of the variance
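A tiny illustrative check with made-up scores:

```python
import numpy as np

scores = np.array([4, 6, 8, 10, 12])
deviations = scores - scores.mean()
variance = np.mean(deviations ** 2)  # average squared deviation from the mean
sd = np.sqrt(variance)               # SD is the square root of the variance
print(variance, sd)                  # matches np.var(scores) and np.std(scores)
```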
- What is the central limit theorem?
It states that the sampling distribution of the mean approaches a normal distribution as the sample size increases, and this is especially the case for sample sizes over 30
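A quick simulation sketch (made-up exponential population) showing the idea: even though the population is skewed, the distribution of sample means for n = 30 is close to symmetric:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Draw 5000 samples of n = 30 from a skewed population and take each sample's mean
sample_means = [rng.exponential(scale=2, size=30).mean() for _ in range(5000)]

# Skewness of the sampling distribution of the mean is close to 0 (roughly normal)
print(f"skew of sample means: {stats.skew(sample_means):.2f}")
```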
- What is a type 1 error?
A false positive: thinking there is a significant effect when there isn't one = alpha
- What is a type II error?
A false negative: much of the variance is unaccounted for by the model, so we conclude there is no significant effect when in fact there is one = beta
- Acceptable level of type I error is:
An alpha level of 0.05; the alpha level is the probability of making a type I error
- Non-parametric equivalent of multi-way repeated-measures ANOVA
Loglinear analysis
- Non-parametric equivalent of one-way repeated-measures ANOVA
Friedman’s ANOVA
- Non-parametric equivalent of one-way independent ANOVA
Kruskal-Wallis
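A minimal sketch of these two tests with made-up group scores, assuming SciPy (note that the Friedman test expects the same participants measured in every condition):

```python
from scipy import stats

g1 = [3, 4, 5, 6, 7]
g2 = [6, 7, 8, 9, 10]
g3 = [1, 2, 3, 4, 5]

# Kruskal-Wallis: independent groups (one-way independent ANOVA equivalent)
h, p_kw = stats.kruskal(g1, g2, g3)

# Friedman: repeated measures on the same cases (one-way repeated-measures ANOVA equivalent)
chi2, p_fr = stats.friedmanchisquare(g1, g2, g3)

print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.3f}")
print(f"Friedman chi2 = {chi2:.2f}, p = {p_fr:.3f}")
```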
- We accept results as true (accept H1) when
There is a low probability of the test statistic occurring by chance, e.g., p less than 0.05 means a low probability of obtaining results at least this extreme given that H0 is true
- The acceptable probability of a Type II error is the:
Beta level (often 0.2)
- r = 0.1, d = 0.2 (small effect) means
The effect explains 1% of the total variance
- r = 0.3, d = 0.5 (medium effect) means
The effect accounts for 9% of the total variance
- r = 0.5, d = 0.8 (large effect) means
The effect accounts for 25% of the total variance
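These percentages are simply r squared (the proportion of variance explained); a one-line check in Python:

```python
# r squared gives the proportion of variance the effect explains
for r in (0.1, 0.3, 0.5):
    print(f"r = {r}: {r**2:.0%} of the variance")  # 1%, 9%, 25%
```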
- Non-parametric equivalent of the paired t-test
The Wilcoxon signed-rank test compares two dependent groups of scores: compute the differences between the scores of the two conditions, note the sign of each difference (positive or negative), rank the differences, and sum the ranks for the positive and negative differences separately
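A minimal sketch with made-up before/after scores, assuming SciPy (scipy.stats.wilcoxon handles the differencing and ranking internally):

```python
from scipy import stats

before = [10, 12, 9, 14, 11, 13, 10, 12]   # same participants in both conditions
after  = [12, 14, 10, 16, 13, 15, 11, 14]

# Differences are computed, signed, and ranked inside the function
stat, p = stats.wilcoxon(before, after)
print(f"W = {stat:.1f}, p = {p:.3f}")
```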