Unit 11: T test, z test and more Flashcards

1
Q

what does the t test do that the z test doesn't

A

The one-sample t-test compares a sample to a population to
determine statistical significance, without knowing σ (the population
SD).
* Not knowing σ also means you cannot calculate the standard error of the mean
* So, we need an estimate for σM

1
Q

what do z tests and t tests have in common

A

Both tests answer:
* is a particular sample likely to have been drawn from a population that has a
specified mean?

2
Q

The Estimated Standard Error of the Mean

A

the estimate used in place of σM when σ is unknown: the sample standard deviation divided by the square root of the sample size, s / √n
3
Q
A

0.59

4
Q

when do we use a t test

A

When σ is unknown and we need to estimate SEm (the standard error of the mean), a test statistic other than z must be used: the t statistic

5
Q

Degrees of freedom

A

Degrees of freedom (df) is a value indicating the number of independent pieces of information a sample of observations can provide for the purpose of statistical inference.

In other words, the degrees of freedom indicate the number of independent values that can vary in an analysis without breaking any constraints.

A change in the df indicates a different t distribution, each with its own critical value.
* As sample size (and therefore degrees of freedom) increases, the t distribution becomes increasingly like z.

6
Q

example of one sample t test

A
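
A minimal Python sketch with made-up data, computing t = (x̄ − μ0) / (s / √n) directly from the formula rather than via a stats library:

```python
import math
import statistics

# Hypothetical data: 6 measured values, tested against a hypothesized mean
sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.0]
mu_0 = 4.5  # hypothesized population mean

n = len(sample)
x_bar = statistics.mean(sample)   # sample mean
s = statistics.stdev(sample)      # sample SD (n - 1 in the denominator)
se = s / math.sqrt(n)             # estimated standard error of the mean
t = (x_bar - mu_0) / se           # one-sample t statistic
df = n - 1
```

Comparing t against the critical value for df = n − 1 then decides significance.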
7
Q

what is the formula for standard error

A

the sample standard deviation divided by the square root of the sample size: s / √n
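
As a quick sketch with hypothetical scores, using only the standard library:

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample
s = statistics.stdev(scores)        # sample standard deviation
se = s / math.sqrt(len(scores))     # standard error = s / sqrt(n)
```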

8
Q

standard error

A

The standard error tells you how accurately the mean of any given sample from a population is likely to represent the true population mean. When the standard error increases, i.e., the sample means are more spread out, it becomes more likely that any given mean is an inaccurate representation of the true population mean.

9
Q
A

0.59

10
Q

when do we use the test statistic t

A

When σ (the population SD) is unknown and we need to estimate SEm (the standard error of the mean), a test statistic other than z must be used

11
Q
A
12
Q

effect size

A

An effect size is a specific numerical nonzero value used to represent the extent to which a null hypothesis is false.
As an effect size, Cohen’s d is typically used to represent the magnitude
of differences between two (or more) groups on a given variable, with
larger values representing a greater differentiation between the two
groups on that variable.
When comparing means in a scientific study, the reporting of an effect
size such as Cohen’s d is considered complementary to the reporting of
results from a test of statistical significance

13
Q

Null hypothesis

A

In scientific research, the null hypothesis is the claim that no relationship exists between two sets of data or variables being analyzed. The null hypothesis is that any experimentally observed difference is due to chance alone, and an underlying causative relationship does not exist, hence the term “null.”

(in a statistical test) the hypothesis that there is no significant difference between specified populations, any observed difference being due to sampling or experimental error.

14
Q

alternative hypothesis

A

the hypothesis that we are trying to find evidence for; it is accepted if we have sufficient evidence to reject the null hypothesis.

15
Q

type 1 error

A

false positive: you reject the null hypothesis (saying there is a relationship) when it's actually true (there's no relationship)

16
Q

type 2 error

A

false negative: you fail to reject the null hypothesis when it is actually false

A type II error is a statistical term used within the context of hypothesis testing that describes the error that occurs when one fails to reject a null hypothesis that is actually false. A type II error produces a false negative, also known as an error of omission. For example, a test for a disease may report a negative result when the patient is infected. This is a type II error because we accept the conclusion of the test as negative, even though it is incorrect.

17
Q

One sample t-test

A

Difference between the sample mean and the hypothesized mean

18
Q

Independent-samples t-test

A

Difference between the means of 2 independent groups

the independent variable is the grouping variable, which means its scale
is nominal or categorical, specifically dichotomous

the dependent variable has to be continuous/metric

  • You have 2 groups, each with its own number of observations, n1 and n2
  • df = (n1 − 1) + (n2 − 1) = n1 + n2 − 2, i.e. the total number of observations − 2
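A sketch of the pooled-variance (equal variances) t computation with hypothetical groups; a stats library would normally do this:

```python
import math
import statistics

# Hypothetical scores for two independent groups
g1 = [10, 12, 11, 13]
g2 = [8, 9, 7, 10]

n1, n2 = len(g1), len(g2)
df = n1 + n2 - 2  # total observations - 2

# Pooled variance: the two groups' variances weighted by their df
sp2 = ((n1 - 1) * statistics.variance(g1)
       + (n2 - 1) * statistics.variance(g2)) / df
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (statistics.mean(g1) - statistics.mean(g2)) / se
```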
19
Q

Paired t-test

A

Matched pairs (e.g. siblings, same individuals at 2 time points)
* Difference between the means of the pairs

  • Your n stands for the number of pairs
  • Df = n – 1
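A paired t-test amounts to a one-sample t-test on the pair differences; a sketch with hypothetical before/after scores:

```python
import math
import statistics

# Hypothetical scores for the same individuals at two time points
before = [10, 12, 14, 11, 13]
after = [12, 15, 15, 15, 15]

diffs = [a - b for a, b in zip(after, before)]  # pair differences
n = len(diffs)                                  # n = number of pairs
df = n - 1
se = statistics.stdev(diffs) / math.sqrt(n)
t = statistics.mean(diffs) / se                 # paired t statistic
```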
20
Q

what must an independent sample t test have

A
  • Assumptions:
  • Categorical (binary or dichotomous) independent variable
  • Dependent variable Y is continuous or metric
  • Random sample
  • Independence of observations within and across factors: one observation should not have any effect on, or be related to, another observation
  • Y should be normally distributed within each group (for small samples)
  • Homogeneity of variances = homoscedasticity: equal variances across groups
21
Q

what does an apa table need

A

horizontal lines only (no vertical rules); the label "Table 1" above the table, with the title of the table underneath it

22
Q

Homoscedasticity

A

Homoscedasticity, or homogeneity of variances, is an assumption of equal or similar variances in different groups being compared. This is an important assumption of parametric statistical tests because they are sensitive to any dissimilarities. Uneven variances in samples result in biased and skewed test results.

23
Q

Shapiro-Wilk test of normality

A

the null hypothesis is that the distribution is normal, so a significant result suggests the data are not normally distributed

24
Q

cohen’s d cut-off

A

cut-offs apply to the absolute value of d:

  • Small: 0.2
  • Medium: 0.5
  • Large: 0.8
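These cut-offs can be sketched in code (hypothetical data; the `label` helper and its "negligible" category for |d| < 0.2 are illustrative conveniences, not part of the cut-off scheme itself):

```python
import math
import statistics

def cohens_d(g1, g2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * statistics.variance(g1)
           + (n2 - 1) * statistics.variance(g2)) / (n1 + n2 - 2)
    return (statistics.mean(g1) - statistics.mean(g2)) / math.sqrt(sp2)

def label(d):
    d = abs(d)  # cut-offs apply to |d|
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

d = cohens_d([10, 12, 11, 13], [8, 9, 7, 10])
```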
25
Q

what must be needed for a paired t test

A
  • Dependent variable Y is continuous
  • Categorical (dichotomous) IV
  • The 2 sets of observations are NOT independent of each other: what you're testing is matched (pairs)
26
Q

Nonparametric Tests

A

In statistics, nonparametric tests are methods of statistical analysis that do not require the data to follow a particular distribution (especially useful when the data are not normally distributed). For this reason, they are sometimes referred to as distribution-free tests

27
Q

parametric tests

A

Parametric tests are based on assumptions about the distribution of the underlying population from which the sample was taken. The most common parametric assumption is that data are approximately normally distributed.

28
Q

why parametric over nonparametric tests

A

parametric tests have higher power: they can detect differences more easily (when their assumptions hold)

29
Q

Wilcoxon Rank Sum Test (also known as the Mann-Whitney U-test)

A

nonparametric version of independent sample t test

The Wilcoxon rank-sum test is commonly used for the comparison of two groups of nonparametric (interval or not normally distributed) data, such as those which are not measured exactly but rather as falling within certain limits (e.g., how many animals died during each hour of an acute study).

use it if the data deviate heavily from normality, especially in small samples
(think n < 30)
* Assumptions
* Responses need to be at least ordinal
* Not too many ties and 0s
* Independent samples
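
The U statistic behind the test can be sketched by direct counting (hypothetical data with no ties; a real analysis would use a library routine that also computes the p-value):

```python
# Hypothetical data for two independent groups, no tied values
g1 = [3, 5, 7]
g2 = [1, 2, 4]

# U1 counts, over all (x, y) pairs, how often a g1 value beats a g2 value
u1 = sum(1 for x in g1 for y in g2 if x > y)
u2 = len(g1) * len(g2) - u1  # with no ties, U1 + U2 = n1 * n2
u = min(u1, u2)              # statistic compared against critical values
```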

30
Q

Wilcoxon Signed Rank Test

A

nonparametric version of paired t test

  • Compares medians
  • 𝐻0: the distribution location is the same for both groups (roughly: no
    differences in the medians)
  • 𝐻1 : the distribution location is different across groups (roughly: the
    medians are different)
  • Assumptions
  • At least ordinal
  • Not too many ties or 0s
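
The signed-rank statistic can be sketched on hypothetical paired differences (no zeros and no tied absolute values, so no tie-handling is needed):

```python
# Hypothetical paired differences (no zeros, distinct absolute values)
diffs = [2, -1, 3, -4, 5]

# Rank the differences by absolute size, smallest = rank 1
ranked = sorted(diffs, key=abs)
ranks = {d: i + 1 for i, d in enumerate(ranked)}

w_plus = sum(ranks[d] for d in diffs if d > 0)   # rank sum, positive diffs
w_minus = sum(ranks[d] for d in diffs if d < 0)  # rank sum, negative diffs
w = min(w_plus, w_minus)                         # test statistic
```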