SMCR Flashcards

1
Q
  • Sampling dist. –> central element in estimation and null hypothesis testing. It’s the dist. of the outcome scores of many samples –> used to make inferences about the population.
  • Data collected in a random sample.
  • Sample statistic –> characteristic we’re interested in.
  • Sampling space –> range of values sample statistic can take.
  • Units of analysis –> samples.
A
  • Probabilities can either be referred to as a proportion (0-1) or percentages (0% - 100%).
  • The mean of the sampling dist. is equal to the expected value of the sample statistic. The mean of the sampling dist. also equals the pop. proportion –> hence, the expected value also equals the pop. proportion ONLY IF the sample statistic is an UNBIASED ESTIMATOR.
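The idea above can be sketched with a small simulation; the population proportion (0.30), sample size, and number of replications below are all made-up illustration values:

```python
# Sketch of a sampling distribution for a sample proportion.
# POP_PROP, N and REPS are hypothetical numbers for illustration only.
import random
import statistics

random.seed(1)
POP_PROP = 0.30   # assumed population proportion
N = 100           # sample size
REPS = 10_000     # number of samples drawn

# Each replication: draw a random sample and record the sample proportion.
sample_props = [
    sum(random.random() < POP_PROP for _ in range(N)) / N
    for _ in range(REPS)
]

# The mean of the sampling dist. (the expected value of the statistic)
# should be close to the pop. proportion, because the sample proportion
# is an unbiased estimator.
mean_of_sampling_dist = statistics.mean(sample_props)
print(round(mean_of_sampling_dist, 2))  # close to 0.30
```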
2
Q

Representative Sample:

  • Sample is representative of the pop. if variables in the sample are dist. the same way as in the pop.
  • However, a random sample is likely to differ due to CHANCE.
  • Nevertheless, we expect it to be representative, so we say it is IN PRINCIPLE REPRESENTATIVE of the pop.
A

Continuous sample statistic - continuous probabilities:

  • instead of looking at single values we look at ranges of values.
  • the curve is called a probability density function.
  • Right-hand prob. –> concerns the right-hand tail of the sampling dist.
  • Left-hand prob. –> concerns the left-hand tail of the sampling dist.
3
Q

Means at 3 levels:

  • the pop.
  • the sample.
  • the sampling dist.
A

How to create a sampling dist.?

  • exact approaches.
  • bootstrapping.
  • theoretical approximations.
4
Q

Bootstrapping:

  • You draw an original sample from the pop. (a large sample) and from the original sample you draw samples (bootstrap samples) with replacement (usually).
  • e.g. 5000 bootstrap samples.
  • if the proportion of the sample statistic in the initial sample equals the population proportion, then the bootstrapped sampling dist. will be very similar to the true sampling dist.
  • any sample statistic can be bootstrapped; some sample statistics even must be bootstrapped in order to create a sampling dist. for them. However, SPSS doesn’t bootstrap all sample statistics.
A

Limitations of bootstrapping:

  • the smaller the initial sample size is, the greater the chance of having a sample without the sample statistic we are interested in; hence, the bootstrapped sampling dist. will be quite diff. from the true sampling dist.
  • Solution: have a large initial sample.
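A minimal sketch of the procedure, using a made-up original sample and the 5000 resamples mentioned above:

```python
# Bootstrap sketch: resample WITH replacement from the original sample
# and record the statistic of interest (here the mean) each time.
# The data values are made up for illustration.
import random
import statistics

random.seed(2)
original_sample = [4.2, 5.1, 5.5, 4.8, 5.9, 5.0, 4.6, 5.3, 5.7, 4.9]

boot_means = [
    statistics.mean(random.choices(original_sample, k=len(original_sample)))
    for _ in range(5000)
]

# The spread of boot_means approximates the sampling dist. of the mean,
# centred near the original sample mean (5.1 here).
print(round(statistics.mean(boot_means), 1))  # close to 5.1
```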
5
Q

Exact approaches:

  • if you know (or think you know) the population proportion, we can calculate exactly the probability of each possible value of the sample statistic.
  • only works with categorical or discrete variables.
  • Combinations and outcome tables are used.
  • e.g. coin flips.
  • exact approaches use the binomial prob. formula to calculate probabilities.
  • exact approaches are also available for 2 categorical variables in a contingency table:
  • Fisher’s exact test.
A

Theoretical Approximations to the sampling distribution:

  • Most tests use this approach.
  • if the curve fits the histogram of observed sample means, then the normal function is a good approximation of the sampling dist.
  • Left and right tails used for significance (2.5% on each side).
  • Width/peakedness of the sampling dist. expresses the variation (SD) in it.
  • Probability Density Function.
  • Bell shape of ND –> symmetrical –> sampling dist. of sample mean should be symmetrical.
  • -> hence, a ND is a reasonable model for the prob. dist. of sample means.
  • Sampling is subject to chance - we may draw samples that don’t really cover the sampling dist. well.
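The binomial formula behind the exact approach (see the coin-flip example above) can be written out directly:

```python
# Exact approach for a discrete statistic, using the binomial formula:
# P(k successes in n trials) = C(n, k) * p**k * (1 - p)**(n - k).
# Coin-flip example from the card: a fair coin, so p = 0.5.
import math

def binom_prob(k, n, p):
    """Exact probability of k successes in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 5 heads in 10 flips of a fair coin.
print(round(binom_prob(5, 10, 0.5), 3))  # 0.246
```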
6
Q

Conditions for the use of TPD:

- rules of thumb (table in book).

A
  • The larger the sample –> the closer the sample statistic is to the pop. proportion –> more peaked dist.
  • The sampling dist. is more skewed/less symmetrical when the pop. prop. is near 0 or 1.
7
Q
  • Large sample is very important.

- Less imp. if true prop. is closer to 0.5.

A
  • Rule of thumb for using the ND as the sampling dist. of a sample prop.:
    true prop. * sample size = product; the product must be larger than 5.
  • REMEMBER: this rule of thumb uses 1 - the prop. IF the prop. is larger than 0.5. So for any prop. larger than 0.5, we subtract it from 1 and then multiply the resulting smaller prop. by the sample size and see whether the product is larger than 5.
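The rule of thumb can be sketched as a small check; the example inputs are made up:

```python
# Sketch of the card's rule of thumb for using the normal distribution
# as the sampling distribution of a sample proportion.
def normal_approx_ok(prop, n):
    """True if the rule of thumb holds: (smaller of p and 1 - p) * n > 5."""
    smaller = min(prop, 1 - prop)  # use 1 - p when p > 0.5
    return smaller * n > 5

print(normal_approx_ok(0.5, 20))   # True  (0.5 * 20 = 10 > 5)
print(normal_approx_ok(0.9, 30))   # False (0.1 * 30 = 3, not > 5)
```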
8
Q
  • Sometimes we just need to assume (an educated guess) that the assumptions about the dist. of the scores in the pop. are met when we decide to use a TPD.
A

Independent Samples –> IS T test
Dependent Samples –> DS T test
- Special sampling dist. for dependent samples:
+ mean diff. as the sample statistic.

9
Q
  • if the conditions for using a TPD are not met, use an exact approach or bootstrapping.
  • if SPSS doesn’t have a test for the sample statistic you’re interested in, use bootstrapping.
A

Point estimate: the best estimate of the parameter, but only if the sample statistic is an unbiased est. –> the pop. value would equal the mean of the sampling dist. and the expected value of the sample statistic.

  • Not really accurate, due to chance in random samples.
  • Hence, it’s better to estimate a range within which the pop. value falls –> interval estimate.
10
Q

Interval estimate:

  • selects sample statistic values closest to the average of the sampling dist.
  • the most popular is the 95% CI –> and we want to know the boundary values for it.
  • the width of the estimated interval represents the precision of our estimate.
  • The higher the confidence, the lower the precision and vice versa.
A

How to increase precision?

  • decreasing the confidence level doesn’t really do you any good, so the best thing to do is to increase the sample size.
  • A larger sample provides more information about the pop. and is more likely to resemble it, AND also yields a lower standard error.
  • Large samples –> more peaked distribution because values are closer to the centre and more concentrated around the pop. value.
11
Q
  • Concentration of sample statistic values is expressed by SD of the sampling dist. –> tells us how precise our estimate is.
  • SD of sampling dist. is our standard error.
A
  • When using the point estimate approach, your sample mean is the point estimate, and the distance between the point estimate and the pop. proportion is the SE.
  • When using more than one sample mean, the distance between those and the pop. proportion expresses the size of the error we would make if we generalize the sample mean to the pop.
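A sketch of how the SE shrinks with sample size, assuming the standard formula SE = SD / sqrt(n) for a sample mean (the formula itself is not stated on the card):

```python
# Standard error of the sample mean: s / sqrt(n) (standard textbook
# formula, assumed here). Quadrupling n halves the SE, which is why
# larger samples give more precise (more peaked) sampling dists.
import math

def standard_error(sd, n):
    """Standard error of the sample mean for sample SD `sd` and size n."""
    return sd / math.sqrt(n)

print(standard_error(10.0, 25))   # 2.0
print(standard_error(10.0, 100))  # 1.0
```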
12
Q
  • More variation –> larger SE.

- However, we can’t control variation in the scores.

A
  • Critical values are the boundary values of the interval.

- if we know the interval boundaries, we know the critical values (and vice versa).

13
Q
  • Standardization of the sampling dist. formula = (sample mean - average of sampling dist.) / SE
  • Now the sampling dist. consists of standardized scores.
  • The mean is always 0 (z dist.)
  • The critical values in this dist. are -1.96 and 1.96.
    (95% confidence)
A

How to calculate interval estimates from critical values and SE
UB = point estimate + (critical value * SE)
LB = point estimate - (critical value * SE)
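The UB/LB formulas can be sketched as follows; the centre value and SE below are illustrative, and the critical value is taken from the normal dist.:

```python
# Interval estimate from a critical value and SE, as in the card's
# UB/LB formulas. The centre (5.5) and SE (0.2) are made-up numbers.
from statistics import NormalDist

def interval_estimate(center, se, confidence=0.95):
    """Bounds = center +/- critical value * SE (normal approximation)."""
    critical = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    return center - critical * se, center + critical * se

lb, ub = interval_estimate(5.5, 0.2)
print(round(lb, 2), round(ub, 2))  # 5.11 5.89
```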

14
Q
  • Binary decision - either accept or reject H0.
  • H0 specifies one value for the pop. statistic.
  • The sampling dist. is then centered around it.
  • Test is sig. if the value falls in the tails (rejection region).
A

Rejection region:

  • 2 tails - 2.5% each.
  • Reject H0 if the sample mean falls there.
  • Result is sig.
15
Q
  • Type 1 error - rejecting a true null hypothesis.

- The prob. of making this error is 5% (actually it depends on the sig. level you choose for your test)

A
  • All the probabilities we calculate are based on the assumption that the null hypothesis is true.
  • The p value of a test (THE ONE WE GET), the location of the rejection regions and, as a consequence, the sig. level of the test depend on the value of the pop. statistic that we specified in H0.
  • –> if the hypothesized pop. mean is moved toward the sample mean –> the p value becomes larger, making the result insignificant, so we don’t reject H0.
  • –> if the hypothesized pop. mean is moved away from the sample mean –> the p value becomes smaller, making the result significant, so we reject H0.
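A sketch of this behaviour under a normal (z) approximation, with made-up numbers:

```python
# How the p value of a (z-type) test depends on the hypothesized pop.
# mean: moving it toward the sample mean raises p, moving it away
# lowers p. Sample mean 5.5 and SE 0.2 are illustrative values.
from statistics import NormalDist

def two_sided_p(sample_mean, hyp_mean, se):
    """Two-sided p value under the normal approximation."""
    z = (sample_mean - hyp_mean) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothesized mean close to the sample mean -> larger p (not sig.).
print(round(two_sided_p(5.5, 5.4, 0.2), 3))  # 0.617
# Hypothesized mean far from the sample mean -> smaller p (sig.).
print(round(two_sided_p(5.5, 4.9, 0.2), 3))  # 0.003
```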
16
Q

Golden Rule of H0 Testing:
P value > Sig. level –> Accept H0.
P value < Sig. level –> Reject H0.
- specifying H0 is necessary for calculating the P value.
- P value is a prob. under the assumption that the null hypothesis is TRUE.

A
  • We have null hypothesis (H0) and Alternative hypothesis (H1)
  • If a hypothesis does not specify a single value for the pop., it is the alternative hypothesis.
  • If the hypothesis equates the pop. value to 0 it is labelled a NIL HYPOTHESIS.
17
Q
  • The case we’ve covered so far is TWO-SIDED TESTS/ TWO-TAILED HYPOTHESIS.
  • when the hypothesis provides a value and a direction e.g. 5.5 or higher it is a ONE-SIDED TEST/ ONE- TAILED HYPOTHESIS
  • It is a left-sided test if H1 states that the pop. proportion is smaller than the value in H0, and a right-sided test if H1 states that it is larger.
A
  • The boundary value of a one sided hypothesis is used as the value for a one sided test (test value in SPSS)
18
Q
  • If a one-sided p value is reported but we need a two-sided one, we double the one-sided one.
  • If we divide a two-sided p value by 2 (since it is divided equally between the left and right tails), we get the one-sided p value.
A
  • Theoretical probability dist. links sample outcomes such as sample mean to probabilities by means of a test statistic.
  • A test statistic is named after the TPD to which it belongs:
    Z –> for z or normal dist.
    T –> for T dist.
    F –> for F dist.
    Chi-Square –> for Chi-Square dist.
19
Q

T distributions:

  • the tails contain the 5% of samples that are most diff. from the hypothesized pop. prop.
  • Tc is the label for the critical values in a t dist.
  • Smaller samples have larger Tc values (as we learned before)
  • The mean of a t dist., just like a normal/z dist., equals 0.
  • so, the t value of a sample mean of 5.5 is zero if the hypothesized pop. mean is 5.5.
A
  • observed value –> sample outcome.
  • expected value –> hypothesized pop. value.
  • the larger the distance between the two, the larger the t value –> which makes it less likely (lower p-value) to draw a sample with outcome even more different from the pop. mean, and the more likely we are to reject H0.
  • Reject H0 if sample outcome falls in rejection regions.
20
Q
  • The t dist. and the z/normal dist. have the same mean of the sampling dist. = 0.
  • However, only the z/normal dist. has fixed critical values (-1.96, 1.96); for the other sampling dists. the critical values depend on the DF, and the DF depend on a few things:
  • sample size.
  • no. of groups being compared.
  • no. of rows and columns in contingency table.
A
  • In a t dist. DF depend on sample size.
  • Larger samples –> more DF –> lower critical values.
  • Smaller samples –> less DF –> (usually critical values are near 2.0)
21
Q

Testing H0 with exact approaches:

- uses the binomial formula

A

Testing H0 with bootstrapping:

  • first we must understand the relation between H0 tests and CIs
  • UB –> highest hypothesized pop. value for which the observed sample mean is not statistically sig.
  • LB –> lowest hypothesized pop. value for which the observed sample mean is not statistically sig.
  • CI contains all hypothesized pop. values for which the observed sample is plausible among the 95% of samples with values closest to the pop. value.
  • we reject H0 if the hypothesized value falls outside the CI and accept it if it falls within.
  • the hypothesized value can be too high or too low for the CI, so a hypothesis test using a CI is always 2-sided.
  • Using the CI is the easiest and sometimes the only way to test a H0 if we calculate a sampling dist. with bootstrapping.
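A minimal sketch of testing H0 with a bootstrap percentile CI (made-up sample data):

```python
# Testing H0 via a bootstrap percentile CI, as the card describes:
# reject the hypothesized value if it falls outside the 95% CI.
# The sample values are made up for illustration.
import random
import statistics

random.seed(3)
sample = [5.2, 5.8, 6.1, 5.5, 6.4, 5.9, 5.6, 6.0, 5.7, 6.2]

boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(5000)
)
# Percentile bounds: cut off roughly 2.5% of resamples in each tail.
lb, ub = boot_means[125], boot_means[4874]

def reject_h0(hypothesized):
    """Two-sided test of a hypothesized pop. value via the bootstrap CI."""
    return not (lb <= hypothesized <= ub)

print(reject_h0(5.8))  # inside the CI -> do not reject
print(reject_h0(4.0))  # far below the CI -> reject
```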
22
Q

Capitalization on chance:

  • when running more than one test on the same data, the risk of a type 1 error accumulates; this is called capitalization on chance, and we need to correct for it.
  • one way to do that is to run a Bonferroni correction in SPSS.
  • It happens in an ANOVA in the second step, since it tests 2 hypotheses.
  • That’s why we run a Bonferroni correction for the post-hoc tests.
A
  • If we wanna use a TPD to approximate a sampling dist., a few conditions need to be met in order for the TPD to resemble the sampling dist. sufficiently.
  • Check the big table in the book for that.
  • Sample size is one of those conditions.
  • Variation of sample size across groups is important to ANOVA. Aim to end up with more or less equal group sizes.
23
Q

Effect Size:

  • a test with a larger sample is more likely to be statistically sig. so we are more likely to reject H0 with larger samples.
  • The diff. between the sample outcome and the hypothesized pop. value is the effect size.
A
  • probability of rejecting H0 depends on both effect size and sample size.
  • Larger samples will pick up on weaker effects.
  • Smaller samples will only pick up on larger effects, as these are more easily detected by a statistical test.
24
Q

Practical significance

- what we’re really interested in; statistical sig. is just a tool to use to signal practically significant effects.

A
  • the diff. between the sample outcome and hypothesized pop value is the UNSTANDARDIZED EFFECT SIZE.
  • UES depends on the scale on which we’re measuring the sample outcome e.g. grams.
  • Hence, there are no rules of thumb for interpreting an unstandardized effect size in terms of small, moderate or large effects.
25
Q
  • In order to examine the meaningfulness of the unstandardized effect size, we need to standardize it.
  • Cohen’s d is a standardized effect size for tests on means.
    –> it divides the difference between the sample mean and the hypothesized pop. mean by the SD in the sample.
    (mean diff. / SD)
  • you can calculate it from the output of tests like:
    + one sample t test.
    + paired samples t test.
    + independent samples t test.
A
Interpretation of Cohen’s d:
0.2 --> weak effect.
0.5 --> moderate effect.
0.8 --> strong effect.
1+ --> very strong effect.
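The formula and rules of thumb above can be sketched together; the score data are made up:

```python
# Cohen's d for a one-sample case, plus the card's rules of thumb.
# The scores and the hypothesized pop. mean (5.5) are illustrative.
import statistics

def cohens_d(sample, hyp_mean):
    """(sample mean - hypothesized pop. mean) / sample SD."""
    return (statistics.mean(sample) - hyp_mean) / statistics.stdev(sample)

def interpret(d):
    """Map |d| onto the card's weak/moderate/strong labels."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "weak"
    if d < 0.8:
        return "moderate"
    return "strong"

scores = [5.0, 5.4, 5.8, 6.2, 5.6, 5.2, 6.0, 5.8]
print(interpret(cohens_d(scores, 5.5)))  # weak
```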
26
Q

Calculating Cohen’s D in a one sample T test
- you get the sample mean and pop. mean separately in this test, so you must calculate:
(sample mean - hypothesized pop. value) / SD

A

Calculating Cohen’s D in a paired samples T test

  • you get the mean diff. already in the output (if negative, just drop the minus sign).
  • Mean diff. / SD.
27
Q

Calculating Cohen’s D in an independent samples T test
- you already get the mean diff. here in the output too.
- however, we don’t follow the same formula to calculate Cohen’s d because there are 2 SD values.
- therefore, to standardize the effect and get Cohen’s d we use the t value.
- there are 2 t values; depending on whether Levene’s test is sig. or not, we determine which row of the output we use to interpret the results, and based on that we know which t value to use.
Cohen’s d = (2*t) / square root of the DF

A

Associations as effect size

  • measures of association such as Pearson’s and Spearman’s correlation coefficients already measure the standardized strength of an association, so we don’t need to calculate anything; we just interpret R2, as it is the standardized effect size of the test.
  • Similarly, in an ANOVA, we use eta2 as standardized effect size.
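The independent samples shortcut from the previous card, d = 2t / sqrt(DF), as a one-liner with illustrative inputs:

```python
# The card's shortcut for Cohen's d in an independent samples t test:
# d = 2t / sqrt(DF), using the t value from the appropriate row of the
# SPSS output. The t value (2.5) and DF (48) below are made up.
import math

def cohens_d_from_t(t, df):
    """Approximate Cohen's d from an independent samples t test."""
    return (2 * t) / math.sqrt(df)

print(round(cohens_d_from_t(2.5, 48), 2))  # 0.72
```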
28
Q
Rules of thumb for interpreting R2 and Eta2: 
0 - 0.10 --> no/ very weak association. 
0.10 - 0.30 --> weak. 
0.30 - 0.50 --> moderate. 
0.50 - 0.80 --> strong. 
0.80 - 1.00 --> very strong. 
1.00 --> perfect association. 
(ignore minus sign if answer has a minus sign)
A
  • if we already know the standardized effect size in the sample for which we want stat. sig. results, we can figure out the minimum sample size for which the test statistic is statistically sig.
  • effect size and test statistic both reflect the diff between the sample outcome and the hypothesized pop. mean (according to H0) –> as a consequence effect size indicators and test statistic are related.
29
Q
Hypothetical world vs imaginary world
- Hypothetical world:
  + world of the researcher.
  + H0.
  + Type 1 error.
- Imaginary world:
  + Alternative world.
  + H1.
  + Type 2 error.
A
  • Type 1 error
    + rejecting a true H0.
    + probability of a type 1 error equals sig. level (alpha).
    –> so with a higher sig. level, probability of making a type 1 error increases as we are taking a larger risk at rejecting a H0 if it’s true.
  • Type 2 error
    + accepting/ not rejecting a false H0.
    + probability is equal to Beta
30
Q

Relationship between Type 1 and Type 2 error:
- as the probability of making a type 1 error increases, the probability of making a type 2 error decreases and vice versa.

A

In the case of a true H0,
- if it’s rejected, we are committing a type 1 error (alpha).
- if it’s not rejected, there’s no error (1-alpha).
In the case of a false H0,
- if it’s rejected, no error; POWER (1-beta).
- if it’s not rejected, type 2 error (beta).

31
Q

Sample size, statistical significance, effect size and test power are related.

  • we usually keep sig. level at 0.05 because if it’s any smaller, it decreases the test power.
  • we have to select the effect size (weak/ moderate/ strong); to decide on that we refer to previous research.
  • power set at at least 80% prob.
A
  • Power is not equal to the probability of accepting a true H0.
  • probability of accepting (not rejecting) a true H0 is (1 - prob. of making type 1 error) = 95%
  • this probability is larger than the power; power is usually set to a lower level because H0 is usually assumed to reflect our best knowledge about the world.
  • –> from this perspective, we would rather risk a type 2 error than a type 1 error.