Lecture 2 Flashcards

1
Q

Anytime you use the entire population to calculate a quantity such as the mean, what is that quantity called?
When you use a sample to calculate a quantity such as the mean, what is that quantity called?

A

Population parameter

A statistic

2
Q

A range of values in which there is a degree of certainty that it contains the population parameter is called what?

What provides a range of values that is likely to contain the population parameter, based on a sample statistic?

A value calculated from a sample that is used to estimate the population parameter (e.g., sample mean, sample proportion) is called what?

The degree of variability around an estimate of a population parameter is termed:
A. Accuracy
B. Precision
C. Bias
D. Standard error

What is the meaning of a 95 percent confidence interval?

A

Full MCQ with answer and explanation:

Question:
The degree of variability around an estimate of a population parameter is termed as:
A. Accuracy
B. Precision
C. Standard Error
D. Bias

Correct Answer:
B. Precision

Explanation:
- Accuracy refers to how close an estimate is to the true population parameter.
- Precision refers to the degree of variability around the estimate. Lower variability means higher precision.
- Standard Error quantifies the precision but is not the general term for the degree of variability.
- Bias refers to systematic error that skews results away from the true population parameter.

Confidence interval (CI) is a range of values in which there is a degree of certainty that it contains the population parameter

CI indicates the precision of an estimated statistic of a population parameter. (NB: Precision is the degree of variability around an estimate of a population parameter)
• Population Parameter: A value that describes a characteristic of the entire population (e.g., population mean, population proportion).
• Sample Statistic: A value calculated from a sample that is used to estimate the population parameter (e.g., sample mean, sample proportion).

Understanding the Confidence Interval

A confidence interval (CI) provides a range of values that is likely to contain the population parameter based on a sample statistic.

Explanation of the 95% Confidence Interval

When we say that we have a 95% confidence interval for a population parameter, it means:

1.	Sampling and Intervals: If we were to take 100 different random samples from the same population and compute a confidence interval for each sample:
•	Each interval is based on a sample statistic (e.g., sample mean).
•	We use these sample statistics to estimate the population parameter.
2.	Containment of the Population Parameter:
•	About 95 of these 100 confidence intervals will contain the true population parameter (e.g., population mean).
•	About 5 of these intervals will not contain the true population parameter.

Why Mention Population Parameter with Samples?

The purpose of calculating a confidence interval from a sample is to estimate the population parameter. While the interval is based on the sample, it is used to infer information about the population.

95% CI means that if we were to select 100 random samples from the population and use these samples to calculate 100 different confidence intervals for a given population parameter, approximately 95 of the intervals will contain the parameter and 5 will not.

Put another way, let's use a very simple example with the concepts of sample, population, and different samples to explain confidence intervals.

Population: Suppose we want to know the average height of all students in a school. The population is all the students in the school.

Sample: Since measuring every student’s height is impractical, we take a sample, which is a smaller group of students. Let’s say we randomly select 30 students and measure their heights.

  1. Sample 1: We measure the heights of our first sample of 30 students and find the average (sample mean) height is 150 cm.

To create a 95% confidence interval:
1. Calculate the sample mean (already given as 150 cm).
2. Calculate the margin of error (using statistical formulas, let’s say it’s 5 cm).

Confidence Interval = 150 cm ± 5 cm = (145 cm, 155 cm)

We are 95% confident that the true average height of all students in the school (the population) is between 145 cm and 155 cm based on this sample.

Now, let’s take another sample of 30 different students to see how the confidence interval might change:

  1. Sample 2: We measure the heights of another group of 30 students and find the average height is 148 cm.
  2. Calculate the margin of error (let’s assume it remains 5 cm).

Confidence Interval = 148 cm ± 5 cm = (143 cm, 153 cm)

We are 95% confident that the true average height of all students in the school is between 143 cm and 153 cm based on this second sample.

  1. Different Samples, Different Intervals: Each sample gives us a slightly different estimate and a slightly different confidence interval.
  2. Confidence Level: The 95% confidence level means that if we repeated this process many times (taking many samples and calculating confidence intervals), 95% of the intervals would contain the true population mean.
  3. True Population Mean: We never know the exact true population mean, but the confidence interval gives us a range where it is likely to be.
  • Population: All students in the school.
  • Sample: A group of 30 students selected randomly.
  • Different Samples: Taking multiple samples can give slightly different results.
  • Confidence Interval: Provides a range that likely contains the true population mean with a certain level of confidence.

Using different samples, we see that the confidence interval helps us understand the uncertainty and variability in our estimates of the population mean.
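
The margin-of-error arithmetic above can be sketched in code. This is a minimal illustration (not from the lecture): it computes a 95% confidence interval for a sample mean using the normal-approximation margin of error, 1.96 × s/√n; the five heights are made-up data.

```python
import math

def confidence_interval_95(data):
    """95% CI for the mean: sample mean ± 1.96 * standard error."""
    n = len(data)
    mean = sum(data) / n
    # sample standard deviation (divide by n - 1)
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    margin = 1.96 * sd / math.sqrt(n)  # margin of error
    return mean - margin, mean + margin

# Hypothetical heights (cm) from a small sample of students
low, high = confidence_interval_95([148, 150, 152, 149, 151])
```

With a real sample of 30 students, the same formula would produce an interval like the (145 cm, 155 cm) one above; a different random sample would give a slightly different interval.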

3
Q

In hypothesis testing we have the research hypothesis and the statistical hypothesis.
What is the difference?

What are the two types of statistical hypotheses? Define them.
What is hypothesis testing?

A

Research hypothesis: the prediction or supposition that motivates the study.
Once the research hypothesis is known, it has to be framed in such a way that statistical techniques can be used to evaluate it.

This new statement of the hypothesis is called the statistical hypothesis.
A statistical hypothesis is a statement, derived from the research hypothesis, that can be evaluated using statistical methods.
If you cannot use statistical methods to evaluate a hypothesis, it is not a statistical hypothesis; it is just a research hypothesis.
The statistical hypothesis involves the null and alternative hypotheses.

The null hypothesis is often stated as "any observed differences are entirely due to sampling error (i.e., chance)," because there is no relationship between the variables. The alternative hypothesis contradicts this statement: it says the differences are not due to chance, because there is a relationship between the two variables.

Imagine a study aiming to test the effectiveness of a new medication compared to a placebo.

1.	Null Hypothesis (H₀):
•	“The new medication has the same effect as the placebo.”
•	This means any observed difference in effects between the medication and the placebo is attributed to random sampling error or chance.
2.	Alternative Hypothesis (H₁):
•	“The new medication has a different effect (better or worse) than the placebo.”
•	This implies that the observed difference in effects is real and not due to chance.

Hypothesis testing: This is a method used by statisticians to determine how likely it is that the observed differences in data are entirely due to sampling error (i.e chance) rather than to underlying population differences.

4
Q

Hypothesis continued:
Example of research hypothesis and statistical hypothesis
Between the alternative and null hypotheses, which one states that the proportion of the outcome (e.g., success rate, incidence of an event) is the same in both the exclusive group and the non-exclusive group?

A

Examples
Research hypothesis: There is no statistically significant relationship between exclusive breastfeeding and infant mortality

Statistical Hypothesis
Null Hypothesis (H₀)

H_0: P_{exclusive} = P_{not-exclusive}

•	Meaning: The null hypothesis states that the proportion of the outcome (e.g., success rate, incidence of an event) is the same in both the exclusive group and the non-exclusive group. Any observed difference in proportions is due to random chance.


Alternative Hypothesis (H₁)

H_1: P_{exclusive} \neq P_{not-exclusive}

•	Meaning: The alternative hypothesis states that the proportion of the outcome is different between the exclusive group and the non-exclusive group. The observed difference in proportions is not due to chance but reflects a real difference.

Pexclusive= Proportion of deaths among exclusively breastfed
infants
Pnot-exclusive= Proportion of deaths among infants not exclusively
breastfed

Hypotheses

1.	Null Hypothesis (H₀)
•	 H_0: P_{exclusive} = P_{not-exclusive} 
•	This means that the proportion of deaths is the same for both exclusively breastfed infants and non-exclusively breastfed infants.
2.	Alternative Hypothesis (H₁)
•	 H_1: P_{exclusive} \neq P_{not-exclusive} 
•	This means that the proportion of deaths is different between exclusively breastfed infants and non-exclusively breastfed infants.

Possible Outcomes and Inferences:

  1. Rejecting the Null Hypothesis (H₀):
    • Decision: If the p-value obtained from the statistical test (e.g., two-proportion z-test) is less than the significance level (commonly 0.05), we reject the null hypothesis.
    • Inference: There is sufficient evidence to conclude that there is a statistically significant difference in the proportion of deaths between exclusively breastfed infants and non-exclusively breastfed infants.
    • Implication: Exclusive breastfeeding is associated with a different mortality rate compared to non-exclusive breastfeeding.
  2. Failing to Reject the Null Hypothesis (H₀):
    • Decision: If the p-value is greater than the significance level, we fail to reject the null hypothesis.
    • Inference: There is insufficient evidence to conclude that there is a statistically significant difference in the proportion of deaths between exclusively breastfed infants and non-exclusively breastfed infants.
    • Implication: The data do not provide enough evidence to suggest that exclusive breastfeeding impacts infant mortality differently compared to non-exclusive breastfeeding. However, this does not prove that there is no difference; it only indicates that any observed difference might be due to random chance based on the sample data.
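
The two-proportion z-test mentioned above can be sketched with only the standard library. The counts below are hypothetical (invented for illustration, not from the lecture); the two-sided p-value comes from the standard normal distribution via the error function.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for H0: P1 = P2.
    x1, x2 = number of events (e.g., deaths); n1, n2 = group sizes."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 30/400 deaths among exclusively breastfed infants,
# 60/400 among infants not exclusively breastfed
z, p = two_proportion_z_test(30, 400, 60, 400)
```

If p < 0.05, we reject H₀: P_exclusive = P_not-exclusive at the 0.05 significance level.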
5
Q

What is the p value
What is significance level?
Which p value is statistically significant and which isn’t?
Which p value is preferred?

A

P-value is the probability of obtaining a result as extreme as (or more extreme than) the observed result if the null hypothesis were true.

P-values have a cut-off called the significance level; if the p-value is less than the cut-off, the null hypothesis is rejected, otherwise it is not rejected.

Traditionally, a cut-off of p = 0.05 is preferred. Therefore:
p < 0.05 is considered statistically significant
p ≥ 0.05 is not considered statistically significant
For example, p = 0.008 is statistically significant

So if the p-value is greater than 0.05, you do not have enough evidence against the null hypothesis, so you fail to reject the null hypothesis at the 0.05 significance level, and you say there is no statistically significant association between A and B.

If the p-value is less than 0.05, the data provide strong evidence against the null hypothesis, so you reject the null hypothesis at the 0.05 significance level, and you say there is an association between A and B.

In statistical hypothesis testing, if you fail to reject the null hypothesis, it means you do not have enough evidence to support the alternative hypothesis. However, this does not mean that you accept the null hypothesis as true. Instead, you conclude that there is insufficient evidence to make a definitive statement about the effect or difference you were testing for.

  1. Evidence Limitation: Failing to reject the null hypothesis simply indicates that the data did not provide strong enough evidence against it. It does not prove that the null hypothesis is true.
  2. Sample and Power: The result could be due to a small sample size or insufficient power in the study, meaning that even if there is an effect, your study might not be able to detect it.
  3. Scientific Skepticism: In science, we usually avoid claiming certainty. Just as rejecting the null hypothesis doesn’t prove the alternative hypothesis absolutely, failing to reject the null hypothesis doesn’t prove it is true.

Imagine you’re testing a new drug to see if it lowers blood pressure compared to a placebo.

  • Null Hypothesis (H₀): The new drug has no effect on blood pressure.
  • Alternative Hypothesis (H₁): The new drug lowers blood pressure.

After conducting the study, you calculate the p-value and find it to be 0.08 (greater than 0.05).

  • Fail to Reject H₀: You conclude that there isn’t enough evidence to support that the new drug lowers blood pressure. However, this does not mean you accept that the drug has no effect. It means you don’t have enough evidence to show that it does.
  • Further Research: More data or a more powerful study might be needed to detect an effect if it exists.
  • Decision Making: Based on the current evidence, you might decide not to approve the drug, but you wouldn’t claim definitively that the drug doesn’t work. Further studies might be warranted.
  • Failing to reject the null hypothesis: Indicates insufficient evidence against the null hypothesis.
  • Not accepting the null hypothesis: You remain neutral regarding the null hypothesis; you neither confirm it as true nor prove it false.
  • Future Research: Often, this result suggests the need for additional research, larger sample sizes, or different experimental designs to further investigate the question.

This cautious approach helps maintain scientific rigor and avoids drawing premature or incorrect conclusions from insufficient data.

The p-value is like the strength of your evidence in court against the presumption of innocence (the null hypothesis). The evidence may be strong enough to overturn that presumption, or it may not be.

So the hypotheses for the courtroom analogy:
Alternative hypothesis: he stole the boat
Null hypothesis: he didn't steal the boat

Building on that courtroom analogy with the hypotheses clearly defined:

  1. Null Hypothesis (H₀): He didn’t steal the boat.
    • This is the assumption of innocence. Just like in court, where we assume the defendant is innocent until proven guilty.
  2. Alternative Hypothesis (H₁): He stole the boat.
    • This is the claim we are trying to prove. We need enough evidence to reject the null hypothesis and conclude that he is guilty.
  • Investigation/Trial: Think of this as conducting an experiment or study to gather data.
  • Observed Results: These are the pieces of evidence we present in court. For example, witnesses, fingerprints on the boat, surveillance footage, etc.
  • The p-value is like the strength of the evidence presented in court.
  • Small p-value (e.g., 0.01 or 1%):
    • This means the evidence is very strong.
    • It’s very unlikely that we would see this evidence if the defendant were actually innocent.
    • In our analogy: There is strong evidence that he stole the boat, so we reject the null hypothesis (he didn’t steal the boat) and conclude that he is likely guilty.
  • Large p-value (e.g., 0.4 or 40%):
    • This means the evidence is weak.
    • It’s quite likely that we would see this evidence even if the defendant were innocent.
    • In our analogy: There is not enough evidence to conclude he stole the boat, so we do not reject the null hypothesis (he didn’t steal the boat).
  1. Start with the Assumption: He didn’t steal the boat (null hypothesis).
  2. Collect Evidence: Gather testimonies, physical evidence, and other relevant information.
  3. Evaluate Evidence (p-value):
    • Small p-value: Strong evidence against the null hypothesis (evidence is unlikely to occur by chance if he didn’t steal the boat). We reject the null hypothesis and conclude he likely stole the boat.
    • Large p-value: Weak evidence against the null hypothesis (evidence could easily occur by chance if he didn’t steal the boat). We do not reject the null hypothesis and conclude there isn’t enough proof that he stole the boat.
  1. Null Hypothesis (H₀): The new drug has no effect (like assuming he didn’t steal the boat).
  2. Alternative Hypothesis (H₁): The new drug lowers blood pressure (like claiming he stole the boat).
  3. Collect Data: Conduct a clinical trial and measure the blood pressure reduction.
  4. Calculate the p-value: This tells us how likely it is to see the observed reduction (or more extreme) if the drug really has no effect.
    • Small p-value: Strong evidence against the null hypothesis. We conclude the drug likely has an effect (like concluding he stole the boat).
    • Large p-value: Weak evidence against the null hypothesis. We conclude there isn’t enough evidence to say the drug works (like concluding there isn’t enough proof he stole the boat).
  • p-value: Measures the strength of the evidence against the null hypothesis.
  • Small p-value: Strong evidence, likely to reject the null hypothesis (like finding the defendant guilty based on strong evidence).
  • Large p-value: Weak evidence, unlikely to reject the null hypothesis (like finding the defendant not guilty due to insufficient evidence).

Using the court analogy, the p-value helps us determine whether the evidence (data) is strong enough to reject our initial assumption (null hypothesis) and accept an alternative conclusion.
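
The definition of the p-value ("the probability of a result as extreme as the observed one if the null hypothesis were true") can be made concrete with a small simulation. The scenario below is invented for illustration: we observe 60 heads in 100 tosses and ask how often a fair coin (the null hypothesis) would produce something that extreme.

```python
import random

random.seed(42)  # reproducible simulation

# Null hypothesis: the coin is fair. Observed: 60 heads out of 100 tosses.
observed_heads, n, sims = 60, 100, 10_000

# Simulate the null: toss a fair coin n times, many times over
extreme = 0
for _ in range(sims):
    heads = sum(random.random() < 0.5 for _ in range(n))
    # two-sided: at least as far from 50 as the observed count (<= 40 or >= 60)
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

p_value = extreme / sims  # roughly 0.05-0.06 for this scenario
```

A small simulated p-value means such an extreme result rarely happens under the null hypothesis, i.e., the evidence against "the coin is fair" is strong.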

6
Q

What does a p-value of more than 0.05 mean?
What does a p-value of less than or equal to 0.05 mean?

A

After performing a statistical test, the result will include a p-value. Here’s how you interpret it:

  • p-value ≤ 0.05: This is typically considered statistically significant. It suggests that the observed data is unlikely under the null hypothesis, so you reject the null hypothesis.
  • p-value > 0.05: This is typically considered not statistically significant. It suggests that the observed data is not unusual under the null hypothesis, so you do not reject the null hypothesis.
  • p-value = 0.03: Since 0.03 is less than 0.05, it indicates strong evidence against the null hypothesis. You would reject the null hypothesis.
  • p-value = 0.08: Since 0.08 is greater than 0.05, it indicates weak evidence against the null hypothesis. You would not reject the null hypothesis.

Note that 0.008 is less than 0.05.

When comparing p-values to a significance level (commonly 0.05), a p-value of 0.008 indicates that the result is statistically significant. This means there is strong evidence against the null hypothesis, and you would reject the null hypothesis in favor of the alternative hypothesis.

  • p-value = 0.008: Since 0.008 is less than 0.05, the evidence against the null hypothesis is strong enough to reject it. This suggests that the observed effect is unlikely to have occurred by chance, and there is likely a real effect.

The calculation of a p-value depends on the type of statistical test being performed. Here’s a simplified explanation using a common test:

A t-test is used to compare the means of two groups.

  1. Formulate Hypotheses:
    • Null Hypothesis (H₀): There is no difference in means between the two groups.
    • Alternative Hypothesis (H₁): There is a difference in means between the two groups.
  2. Collect Data:
    • Assume you have two groups with sample sizes n₁ and n₂.
    • Calculate the means (X̄₁, X̄₂) and standard deviations (s₁, s₂) of each group.
  3. Calculate the Test Statistic:
    • The t-statistic measures the difference between the two sample means relative to the variation in the sample data.
    • Formula for the t-statistic for two independent samples:
      t = (X̄₁ − X̄₂) / √(s₁²/n₁ + s₂²/n₂)
    • The p-value is then obtained by comparing this t-statistic to the t-distribution with the appropriate degrees of freedom.
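
The t-statistic formula for two independent samples can be sketched in pure Python (the cholesterol values are made up; getting the p-value additionally requires the t-distribution, e.g. from scipy.stats):

```python
import math

def t_statistic(x, y):
    """t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2)."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    # sample variances (divide by n - 1)
    s1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    s2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    return (m1 - m2) / math.sqrt(s1 / n1 + s2 / n2)

# Hypothetical mean cholesterol levels (mmol/L) for two groups of students
t = t_statistic([4.8, 5.1, 5.0, 5.3], [5.6, 5.9, 5.7, 6.0])
```

A large |t| relative to the t-distribution with the appropriate degrees of freedom gives a small p-value, i.e., strong evidence of a difference in means.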
7
Q

What is the chi square test
What is it used for

A

The chi-squared test is used to determine the association between two categorical variables

In effect, it determines the difference in proportions between two or more categories

The Chi-Square Test is used to determine whether there is a significant difference in the proportions between two or more categories of a categorical variable. It checks if the observed distribution of categorical data differs from what we would expect by chance.

For example, if you’re analyzing whether there is an association between gender (male, female) and voting preference (party A, party B), the Chi-Square Test can help determine if the proportion of males and females voting for each party is significantly different from what we would expect if there were no association.

If you have multiple categories or levels within your variables, the Chi-Square Test can still be applied to determine whether there are differences in proportions across all categories.

In an MCQ setting that states “Chi-Square measures association between multiple categories,” you would typically choose True if the context is about comparing proportions across multiple categories within two categorical variables.

However, if the question implies that Chi-Square can measure associations among more than two categorical variables (i.e., three or more variables simultaneously), then the statement would be False because the standard Chi-Square Test of Independence is specifically for testing the association between two categorical variables.

So, to clarify:
- If “multiple categories” refers to different levels within two categorical variables (e.g., different age groups and different income brackets), then the answer is True.
- If the question suggests that Chi-Square tests association between more than two separate categorical variables (e.g., gender, education level, and voting preference), then the answer is False.

The key is in how “multiple categories” is being interpreted.

8
Q

You have three categorical variables and you want to test the association between them.
Can you use the chi-squared test, and why?

A

No, you can't use it, because the chi-squared test checks for an association between two CATEGORICAL variables only.

To clarify the distinction between the chi-square test and the t-test:

  1. Chi-Square Test: This test is indeed used to check for an association between two categorical variables. It is one of the most common tests used in such scenarios.
  2. T-Test: This test is typically used to compare the means of two groups to see if they are statistically different from each other. The T-Test is generally used for continuous data, not categorical data.

If you want to check associations between more than two categorical variables, you would use methods like Logistic Regression (if the outcome is binary) or Multinomial Logistic Regression (if the outcome has more than two categories), or you might consider more advanced tests like Cochran’s Q Test or Fisher’s Exact Test for small sample sizes.

If you’re working with more than two groups and want to compare means, you’d use an ANOVA (Analysis of Variance) rather than a T-Test.

9
Q

There's a table on the chi-square slide (a contingency table with cells a, b, c, d).
What are the observed frequencies in this table?
How are the degrees of freedom for a chi-square test on a contingency table calculated?

A

a, b, c, d are the observed frequencies

Degrees of freedom (df) = (number of rows (r) − 1) × (number of columns (c) − 1)

10
Q

How are expected frequencies in the chi-square test calculated?

A

Check the slide in the Biostat 2 slides.
For the value a, the expected frequency is ((a+b) × (a+c)) / n:
(a+b) is the row total for the row containing a, (a+c) is the column total for the column containing a, and n is the grand total of observed frequencies (the same whether summed by rows or by columns).
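
Applying the expected-frequency formula to every cell and summing (O − E)²/E gives the chi-square statistic. A sketch with an invented 2×2 table (the counts are hypothetical):

```python
# Hypothetical 2x2 table: rows = helmet worn (yes/no), columns = head injury (yes/no)
observed = [[10, 20],
            [30, 40]]

def chi_square(table):
    """Chi-square statistic: sum over cells of (O - E)^2 / E,
    where E = (row total x column total) / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

chi2 = chi_square(observed)
# df = (rows - 1) x (cols - 1) = 1 for a 2x2 table
```

The statistic is then compared to the chi-square distribution with the df given above to obtain a p-value.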

11
Q

Explain the difference between observed frequencies and expected frequencies in chi squared testing
Which is based on the assumption that there is no association between the two variables in the contingency table

A

In the context of the chi-square test of independence or goodness-of-fit, the terms “observed frequency” and “expected frequency” refer to different concepts:

  1. Observed Frequency:
    • This refers to the actual counts or frequencies observed in your data.
    • It represents the number of times a particular outcome or category appears in your sample or dataset.
    • For example, if you are conducting a survey and tallying responses, the observed frequency is the number of respondents who chose a specific option.
  2. Expected Frequency:
    • This refers to the frequencies that you would expect to observe under a null hypothesis of no association (in chi-square test of independence) or under a specified distribution (in chi-square test of goodness-of-fit).
    • It is calculated based on the assumption that there is no relationship between the variables (in independence test) or that the data follow a specified distribution (in goodness-of-fit test).
    • For example, in a chi-square test of independence, if there is no relationship between two categorical variables, the expected frequency for each cell (combination of categories) is calculated based on the marginal totals of the table and assuming no association between the variables.

Key Differences:

  • Purpose: Observed frequencies are the actual data you have collected or observed, whereas expected frequencies are calculated based on a null hypothesis assumption or a specified distribution.
  • Calculation: Observed frequencies are directly counted from your data, while expected frequencies are computed using mathematical formulas or assumptions (e.g., based on marginal totals, proportions, or theoretical distributions).
  • Testing: In chi-square tests, the comparison between observed and expected frequencies helps determine whether the observed data significantly differ from what would be expected under the null hypothesis (independence or specified distribution).

In summary, observed frequencies are the actual counts observed in your data, while expected frequencies are the theoretical counts or frequencies you would expect under certain assumptions or hypotheses. The chi-square test assesses whether the observed frequencies significantly deviate from the expected frequencies, providing insights into the relationship between variables or the fit of data to a specified distribution.

12
Q

Interpret this p-value for this hypothesis:
There is an association between dogs and cats
The p-value is 0.001

A

Because the p-value (0.001) is less than 0.05, we reject the null hypothesis at the 0.05 significance level and conclude that there is an association between the two variables.
(We reject the null hypothesis because a p-value this small means the observed result would be very unlikely if the null hypothesis of no association were true.)

If the p-value had been greater than 0.05, we would not have had sufficient evidence against the null hypothesis, so we would fail to reject it and conclude that there is no statistically significant association.

13
Q

Question: use the table in the slides to examine the association between wearing helmet and head injury
Second question: which category had higher proportion of head injury?

A

Answer to the second question:
Focus on the numbers in the "yes, there was a head injury" category. Between those who wore helmets and those who didn't, which category has the higher proportion of head injury?
It's those who didn't wear helmets, because the proportion with head injury among non-wearers is 33.75 while among helmet wearers it is 11.56.

14
Q

State two other ways of comparing categorical variables

A

ANOVA: analysis of variance
t-test (e.g., performed in the Stata app)

15
Q

Use the second table in the slides and Check whether there is an association between the variables gender (gender) and hypertension status (hyp)

Find the percentage of males who are hypertensive

Is there any association between gender and hypertension status?

Interpret your results

A

Pearson chi2(1) = 22.2109
Pr = 0.000

The output "Pearson chi2(1) = 22.2109" with "Pr = 0.000" is the result of a chi-squared test for independence or goodness of fit. Here's what these results generally indicate:

  • Pearson chi2(1) = 22.2109: This represents the value of the Pearson chi-squared statistic with 1 degree of freedom. It suggests that there is a significant difference or association between the observed and expected frequencies in the data.
  • Pr = 0.000: This indicates the p-value associated with the chi-squared statistic. A p-value of 0.000 (or very close to it) means that the observed result is statistically significant at conventional levels (usually p < 0.05 or p < 0.01). In other words, there is strong evidence to reject the null hypothesis in favor of the alternative hypothesis, suggesting a relationship or difference exists.

In summary, based on these results, there is strong statistical evidence to conclude that there is a significant relationship or difference between the variables being tested, as indicated by the chi-squared test.

16
Q

What is the t test
Give three examples
What are the two types of t tests

A

The t-test is a statistical test used to compare the means of two groups.
Examples are:
comparing the mean cholesterol levels of male and female PRE-GEM students
Comparing the mean systolic blood pressure of drivers and lecturers in UCC
Comparing the mean weights of level 200 and 300 GEM students in the School of Medical Sciences.

The two types of t-tests are independent/unpaired t-test and dependent/paired t-test.

17
Q

What is the independent or unpaired t test

A

Independent (Unpaired) t-test: This test compares the means of two independent groups to determine if there is a statistically significant difference between them. It assumes that the observations in each group are independent of each other. Examples include comparing the mean scores of two different groups of students on a test.

Independent (Unpaired) t-test: This is like comparing how fast everyone in Group A can run with how fast everyone in Group B can run. You want to see if one group is generally faster than the other.

Independent (Unpaired) t-test: You want to see if there is a difference in how quickly Group A and Group B get better when they take different medicines. The t-test helps you decide if one medicine makes people get better faster than the other.

So for a paired test, you are testing the differences within the same group, usually before and after something was introduced.

A paired t-test is used when you have two related measurements for the same group of individuals, and you want to see if there is a significant difference between those two measurements.

Modified Example with Paired t-test:

Imagine you have one group of athletes (instead of two separate groups like in the independent t-test example). Let’s call this Group A. You want to test whether a new type of running shoe improves their running speed.

1.	Before Using the New Shoes:
•	You measure how fast each athlete in Group A can run a 100-meter race while wearing their regular running shoes. You record each athlete’s time. This is the first measurement.
2.	After Using the New Shoes:
•	The same athletes in Group A then put on the new running shoes. You measure how fast each athlete can run the 100-meter race again. This is the second measurement.
3.	Paired Data:
•	For each athlete, you have two measurements:
•	Measurement 1: Time running with regular shoes.
•	Measurement 2: Time running with the new shoes.
4.	Key Idea of the Paired t-test:
•	The paired t-test compares the difference in running times for each athlete between the two conditions (regular shoes vs. new shoes).
•	It checks if, on average, the difference in times (after minus before) is significantly different from zero. Essentially, it tells us if the new shoes made a meaningful difference in performance.

Key Points:

•	Same Group, Measured Twice: You are looking at the same athletes (Group A) both before and after a change (using new shoes).
•	Paired Differences: You calculate the difference in running time for each athlete between their first and second races.
•	Within-Subject Comparison: The paired t-test focuses on the change within each athlete, not comparing separate groups of athletes.

Conclusion:

While the independent t-test compares the average running times of two different groups, the paired t-test compares the change in running time for the same group before and after using something new (like new shoes). It is like seeing if each person improved with the new shoes compared to their performance without them.
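The independent-samples comparison described above can be sketched in Python with SciPy (the running times below are invented for illustration):

```python
from scipy.stats import ttest_ind

# Hypothetical 100 m times (seconds) for two separate groups of runners
group_a = [12.1, 11.8, 12.4, 12.0, 11.9]
group_b = [12.8, 13.1, 12.9, 13.3, 12.7]

# equal_var=True encodes the equal-standard-deviations assumption of the
# classical unpaired t-test; equal_var=False gives Welch's t-test instead.
t, p = ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t:.3f}, p = {p:.4f}")
```

A p-value below 0.05 would suggest the two groups’ mean times genuinely differ rather than varying by chance.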

18
Q

State the three assumptions that must be satisfied before using the unpaired t-test.
See the examples of the calculations in the slides.

A

Independent/unpaired t-test
The following assumptions must be satisfied before using the unpaired t-test:

-The data must be independent

-Observations of the groups must be normally distributed

-The standard deviations of the groups should be equal

19
Q

What are paired or dependent t tests

A

Dependent (Paired) t-test: Also known as a paired samples t-test, this test compares the means of two related groups to determine if there is a statistically significant difference between their means. It is used when the observations are paired or matched in some way (e.g., repeated measures on the same subjects). Examples include comparing the before and after treatment scores of the same group of patients
Dependent (Paired) t-test: This is like comparing how much each person in Group A has improved in running speed after practicing for a month. You’re checking to see if practicing made everyone in Group A faster.

In both cases, the t-test helps us figure out if there’s a big difference between the two groups or if any differences we see could just be by chance. It’s like using a special tool to decide if the differences are real and important
Dependent (Paired) t-test: Now, imagine you have one group of patients who take medicine A for a week and then switch to medicine B for another week. You want to see if they feel better after switching medicines. The t-test helps you figure out if switching medicines actually makes a difference in how they feel.

In both cases, the t-test helps doctors and scientists figure out if the treatments they’re studying really work and if the differences they see in patients are because of the treatment or just by chance. It’s like a tool that helps them make sure they’re making the right decisions to help people feel better.

These tests are commonly used in hypothesis testing to assess whether the difference between groups or conditions is statistically significant, based on the t-statistic and its associated p-value.

In an independent t-test, the degrees of freedom are calculated as:
• a) n₁ + n₂ − 1
• b) n₁ + n₂ − 2
• c) n₁ − 1
• d) n₁ × n₂ − 2
• Answer: b) n₁ + n₂ − 2

Which assumption is common to both independent and dependent t-tests?
• a) Homogeneity of variances.
• b) Independence of observations.
• c) Normality of the distribution.
• d) Equal sample sizes.
• Answer: c) Normality of the distribution.

20
Q

What assumptions for the paired t test must be satisfied before it is used
Give an example of a paired t test
What’s the difference between paired t test and unpaired t test

A

Paired/dependent t-test
Paired t-test compares the means of paired observations. Examples are:
Pre- and post-BMIs of 300 obese people in a city X after an intervention (education on diet, exercise)

Comparison of baseline and endline duration of sleeping among 400 individuals after taking a sleeping tablet X.

Matching individuals to compare the mean psychiatric scores after taking narcotic drugs A and B.

The following are the assumptions for the paired t-test:
Measurements across groups are dependent
The differences in observations must be normally distributed

When conducting a paired samples t-test, the null hypothesis is typically:
• a) The means of the two groups are equal.
• b) The differences between paired observations are zero.
• c) The variances of the two groups are equal.
• d) The correlation between paired observations is zero.
• Answer: b) The differences between paired observations are zero.
In a paired t-test, we are looking at the differences in the means of the paired observations, not at the group means themselves as is done in the unpaired t-test. Also, the paired test focuses on the differences before and after something, or after an event has occurred.

These two groups typically consist of measurements taken from the same subjects at two different times (e.g., before and after a treatment) or under two different conditions.

Explanation:

1.	Null Hypothesis in a Paired Samples t-test:
•	The null hypothesis (H_0) in a paired samples t-test is that there is no difference between the paired observations. This means that the average of the differences between each pair of observations is zero. If the null hypothesis is true, any difference observed between the paired observations is due to random chance rather than a significant effect or treatment.
2.	Why the Correct Answer is (b): “The differences between paired observations are zero.”:
•	When conducting a paired samples t-test, you are working with paired differences. Instead of looking at the raw scores of each group, you calculate the difference between each pair and then test whether the mean of these differences is significantly different from zero.
•	Therefore, stating “the differences between paired observations are zero” is precisely what the null hypothesis represents in this context.

Sure! Let me explain it in a simpler way:

Imagine you have two friends, Alice and Bob, and you want to see if eating a healthy breakfast helps them run faster. You measure how fast they can run before they eat a healthy breakfast, and then you measure how fast they run after they eat a healthy breakfast.

For each friend, you now have two times: one before breakfast and one after breakfast.

Now, you want to know if the healthy breakfast really made a difference. Here’s what you do:

1.	Calculate the Difference: For each friend, you find the difference between their before and after times. Maybe Alice ran 2 seconds faster after eating, and Bob ran 3 seconds faster.
2.	The Big Question (Null Hypothesis): We want to know if these differences are just by luck or if the healthy breakfast actually helped. The big question we start with is, “What if these differences are actually zero?” That would mean the breakfast didn’t help at all.
3.	Checking the Differences: We check the differences to see if they are close to zero or not. If they are far from zero, we think, “Hey, maybe the breakfast really did help them run faster!”

So, when we say, “The differences between paired observations are zero,” we mean: “Let’s assume for now that the healthy breakfast did not make a difference and see if our results prove otherwise.”

In simple words, we are checking if there’s really a difference or if it’s just like nothing changed at all.

Great question! Let’s think about how the paired t-test and the independent t-test are different by using an example with your friends:

Think of a paired t-test like a “before and after” test for the same people or things.

For example, let’s say Alice and Bob try a new kind of shoe that is supposed to make them run faster. You first measure how fast they run without the new shoes and then measure again with the new shoes.

  • You’re looking at the same people (Alice and Bob) running twice — once before the shoes and once after.
  • You’re comparing how much each person improved or changed, not just looking at their separate times.

Key idea: The paired t-test checks if there’s a difference in performance within the same group of people or things.

Now, an independent t-test is more like comparing two completely different groups of people or things.

For example, let’s say you have two groups of friends: one group eats a healthy breakfast (Group A), and the other group does not (Group B). You want to see if Group A runs faster than Group B.

  • You have two separate groups: Group A and Group B.
  • You’re comparing the average running times of the two groups to see if one group is faster than the other.

Key idea: The independent t-test checks if there’s a difference between two different groups that have no direct connection to each other.

  • Paired t-test: Same group of people tested twice (before and after something). You compare the change in each person.
  • Independent t-test: Two different groups of people tested once. You compare the averages between the two groups.

Think of it this way:
- Paired = Pairs of measurements from the same person (like a twin test!).
- Independent = Two separate groups that don’t mix (like comparing apples to oranges).

So, the main difference is whether you’re comparing the same group before and after something (paired) or two different groups to each other (independent).

In a paired samples t-test, the degrees of freedom are calculated as:
• a) n₁ + n₂ − 2
• b) n − 1
• c) n₁ × n₂ − 1
• d) n₁ + n₂ − 1
• Answer: b) n − 1

Let’s break this down to understand why the answer is b) (n - 1) for the degrees of freedom in a paired samples t-test.

In a paired samples t-test, we are comparing two sets of measurements from the same subjects. For example, we might measure the blood pressure of a group of people before and after taking a certain medication. Because these measurements come from the same subjects, they are “paired” — meaning each “before” measurement is directly related to an “after” measurement.

Degrees of freedom (df) represent the number of independent values that can vary in an analysis without violating any given constraints. For a paired samples t-test:

  1. Single Sample of Differences: In a paired samples t-test, we calculate the differences between each pair of measurements (e.g., before vs. after). These differences are treated as a single sample. Let’s call the number of paired observations (n).
  2. Formula for Degrees of Freedom: The degrees of freedom for this single sample of differences is calculated as:
    df = n − 1
    This formula works because we have n pairs, and when calculating the sample variance or standard deviation, we lose one degree of freedom (due to estimating the mean difference from the sample itself).
  • a) n₁ + n₂ − 2: This formula is used for independent t-tests, where n₁ and n₂ are the sample sizes of two independent groups. It is not applicable to paired samples.
  • b) n − 1: Correct answer. In a paired samples t-test, n represents the number of paired observations (not the total number of data points), and the degrees of freedom are n − 1.
  • c) n₁ × n₂ − 1: This is not a standard formula for degrees of freedom in t-tests.
  • d) n₁ + n₂ − 1: This is incorrect for both independent and paired samples t-tests. It does not correctly represent the degrees of freedom for either test.

For a paired samples t-test, the degrees of freedom are calculated based on the number of pairs, n, and since we use up one degree of freedom when calculating the mean of the differences, the degrees of freedom are n − 1. This is why the correct answer is b) n − 1.
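The paired design can be sketched in Python with SciPy. The times below are invented; with n = 5 pairs the test has n − 1 = 4 degrees of freedom:

```python
from scipy.stats import ttest_rel, ttest_1samp

# Hypothetical 100 m times (seconds) for the SAME five athletes
before = [13.0, 12.5, 13.4, 12.8, 13.1]  # regular shoes
after = [12.6, 12.2, 13.0, 12.5, 12.8]   # new shoes

# Paired t-test on the two related measurements
t, p = ttest_rel(before, after)

# Equivalent view: a one-sample t-test of the paired differences against 0
diffs = [b - a for b, a in zip(before, after)]
t2, p2 = ttest_1samp(diffs, 0.0)
print(f"paired t = {t:.3f}, p = {p:.4f}")
```

ttest_rel and the one-sample test on the differences give the same statistic, which is exactly why the degrees of freedom are n − 1 (one sample of n differences) rather than n₁ + n₂ − 2.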

21
Q

What is one way ANOVA

A

The one-way Analysis of Variance (ANOVA) is an extension of the t-test and is therefore used to compare the means of three or more groups.
Examples are:
Comparing mean glucose levels of non-diabetics, pre-diabetics and diabetics.
Comparing incidence rates of COVID-19 in three metropolitan areas
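A one-way ANOVA for the glucose example can be sketched in Python with SciPy (the values below are invented for illustration):

```python
from scipy.stats import f_oneway

# Hypothetical fasting glucose values (mmol/L) for three groups
non_diabetic = [4.8, 5.1, 4.9, 5.0, 5.2]
pre_diabetic = [5.9, 6.2, 6.0, 6.3, 6.1]
diabetic = [7.5, 7.9, 8.1, 7.6, 7.8]

# One-way ANOVA: are the three group means equal?
f, p = f_oneway(non_diabetic, pre_diabetic, diabetic)
print(f"F = {f:.2f}, p = {p:.4g}")
```

A significant F-test says at least one group mean differs, but not which one; identifying the differing groups is the job of post hoc tests.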

22
Q

State the three assumptions of one way ANOVA
When will ANOVA be inappropriate

A

The following are the underlying assumptions of the one-way ANOVA:

The groups must be independent

The dependent variable should be normally distributed for each level of the independent variable

The variances of the groups must be the same (homoscedasticity): Homoscedasticity, also known as homogeneity of variances, is an assumption in ANOVA (Analysis of Variance) that the variances within each of the groups being compared are equal. This is important because ANOVA tests whether there are any statistically significant differences between the means of three or more independent (unrelated) groups.

NB: The one-way ANOVA is inappropriate when the equal variances assumption is violated

23
Q

State the three assumptions of one way ANOVA
When will ANOVA be inappropriate
What is homoscedasticity?

A

The following are the underlying assumptions of the one-way ANOVA:

The groups must be independent

The dependent variable should be normally distributed for each level of the independent variable

The variances of the groups must be the same (homoscedasticity). Homoscedasticity, also known as homogeneity of variances, means that the variances within each of the groups being compared are equal.

NB: The one-way ANOVA is inappropriate when the equal variances assumption is violated

24
Q

Answer the question in one way ANOVA
What is a post hoc test

You have two p-values: one with the F statistic and one with the Bartlett test. Which do you use to know whether the ANOVA is valid, and how will you know?
Which do you use to check for association

A

What is a Post Hoc Test?

A post hoc test is used after finding a significant result in an ANOVA to determine which specific groups are different from each other. Here’s a simple explanation:

•	Purpose: ANOVA tells you that there is a significant difference among the group means, but it doesn’t tell you which groups are different. Post hoc tests help identify exactly which groups differ.
•	Example: Imagine you compared the test scores of three different teaching methods and found a significant result. A post hoc test helps you figure out whether Method A is significantly different from Method B, Method C, or both.

Post Hoc Tests (if significant): If ANOVA shows significant results, you may need to conduct post hoc tests (like Tukey’s HSD or Bonferroni correction) to determine which specific groups are significantly different from each other.

1. Post Hoc with Bonferroni: If the ANOVA test indicates a significant difference (p-value < 0.05), you would use the Bonferroni correction to perform pairwise comparisons between group means. This helps to determine which specific groups differ from each other while controlling the overall Type I error rate.
2. ANOVA Validity:
• The Bonferroni correction is used regardless of whether the ANOVA test was valid or not, as long as you are performing multiple comparisons following a significant ANOVA result. The validity of the ANOVA is about the assumptions (e.g., homogeneity of variances), while the Bonferroni correction is about controlling Type I error when making multiple comparisons.

You use the p-value near the F statistic to check for association, and the one from the Bartlett test to check whether the ANOVA is valid. It is valid if the Bartlett p-value is more than 0.05, because this shows that the variances are not different and can be treated as equal; there is no significant difference between the variances of the categories.
If it is less than 0.05, it is statistically significant, hence there is a significant difference between the variances and the test is invalid.

But if the p-value near the F statistic is less than 0.05, there is an association between the variables; if it is more than 0.05, there is no association.
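The two p-values described above can be computed side by side in Python with SciPy (group values are invented, constructed so that variances are similar but means differ):

```python
from scipy.stats import bartlett, f_oneway

# Hypothetical measurements for three groups
g1 = [4.8, 5.1, 4.9, 5.0, 5.2]
g2 = [5.9, 6.2, 6.0, 6.3, 6.1]
g3 = [7.5, 7.9, 8.1, 7.6, 7.8]

# Bartlett's test: checks the equal-variances assumption (ANOVA validity)
bart_stat, bart_p = bartlett(g1, g2, g3)

# ANOVA F-test: checks for a difference in means (association)
f_stat, f_p = f_oneway(g1, g2, g3)

print(f"Bartlett p = {bart_p:.3f} (validity), ANOVA p = {f_p:.4g} (association)")
```

Here a Bartlett p-value above 0.05 supports the equal-variances assumption (ANOVA valid), while an ANOVA p-value below 0.05 indicates a difference in means.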

25
Q

Answer the question in one way ANOVA

A
26
Q

A test gives a p-value of 0.0520 in a table (which also includes a chi-square statistic) shown in one of the questions.

Which of the statements about this is TRUE?
A. The equal variances assumption was invalid
B. The oneway anova test was valid
C. The posthoc test was not appropriate
D. There was a significant difference in mean BMI

A

Based on the output provided, here’s an analysis of the statements about the one-way ANOVA test:

  • Between Groups Source:
    • Sum of Squares (SS): 148,202,562
    • Degrees of Freedom (df): 16
    • Mean Square (MS): 16.06
    • F-statistic: 3.12
    • p-value (Prob > F): 0.0520
  • Bartlett’s Test for Equal Variances:
    • Chi-square (χ²): 0.0583
    • p-value (Prob > χ²): 0.971
  1. The equal variances assumption was invalid:
    • False. Bartlett’s test is used to test the equality of variances across groups. A p-value of 0.971 indicates that there is no significant evidence against the assumption of equal variances. Therefore, the assumption of equal variances was not invalidated.
  2. The one-way ANOVA test was valid:
    • True. Since Bartlett’s test indicates that the assumption of equal variances is met (p-value = 0.971), and assuming normality of residuals and independence, the one-way ANOVA test was valid.
  3. The post-hoc test was not appropriate:
    • False. This statement cannot be determined from the given output alone. The decision to perform a post-hoc test depends on whether there was a significant difference found in the ANOVA. In this case, the p-value for the ANOVA is 0.0520, which is marginally above the common significance level of 0.05. A post-hoc test is generally conducted if the ANOVA test indicates a significant difference (typically p < 0.05), so it may not have been appropriate here.
  4. There was a significant difference in mean BMI:
    • False. The p-value from the ANOVA test is 0.0520, which is not below the conventional significance level of 0.05. Thus, there is no statistically significant difference in the group means (note: the outcome in the slide output is actually mean FEV, Forced Expiratory Volume, rather than BMI).

The true statement is:
- The one-way ANOVA test was valid.

Let’s clarify why the equal variances assumption was actually valid in the context of the output you provided.

Bartlett’s test assesses whether the variances across groups are equal. Here’s how to interpret the results:

  • Bartlett’s Test Result:
    • Chi-square (χ²) Statistic: 0.0583
    • p-value: 0.971
  • p-value > 0.05: This indicates that there is no significant evidence against the null hypothesis of equal variances. In other words, Bartlett’s test suggests that the assumption of equal variances across the groups is not violated.
  1. Bartlett’s Test Validity:
    • The p-value from Bartlett’s test is very high (0.971), which means the test did not find significant evidence to reject the null hypothesis of equal variances. Therefore, based on this test, the assumption of equal variances is valid for the one-way ANOVA test.
  2. Differences in Standard Deviations:
    • It is possible to observe differences in standard deviations among groups, but Bartlett’s test is a formal statistical test designed to assess whether these differences are statistically significant. The high p-value suggests that the observed differences in standard deviations are not large enough to be statistically significant.
  • Statistical Test Result: Bartlett’s test result with a high p-value indicates that the variances are statistically equal, supporting the validity of the equal variance assumption for the ANOVA.
  • Practical Observations: While practical differences in standard deviations may exist, Bartlett’s test provides a formal test of this assumption.

If you still see practical concerns about variance differences, consider using a different test, such as Welch’s ANOVA, which is robust to unequal variances.

In this case, the correct statement is that the one-way ANOVA test was valid, as Bartlett’s test confirmed that the equal variance assumption was met. Differences in standard deviations observed practically do not necessarily invalidate the test if Bartlett’s result is not significant.

Yes, if the p-value from a test for equal variances (such as Levene’s test or Bartlett’s test) is not statistically significant, it suggests that the variances across groups are similar and the assumption of equal variances is met. This means that the ANOVA test is considered valid under the assumption of equal variances.

Key Points:

1.	Non-Significant p-value in Variance Tests:
•	Implication: A p-value greater than 0.05 in tests like Levene’s or Bartlett’s indicates that there is no significant evidence to reject the null hypothesis of equal variances. Therefore, the variances are considered to be approximately equal across the groups.
•	ANOVA Validity: With equal variances assumed, the one-way ANOVA can be considered valid for comparing group means.
2.	When p-value is Significant:
•	Implication: A p-value less than 0.05 suggests significant differences in variances. This means the equal variance assumption is violated.
•	Alternative Actions: In such cases, alternative methods like Welch’s ANOVA (for unequal variances) or non-parametric tests (such as the Kruskal-Wallis test) should be used.

Summary:

•	If no significant p-value is found: The variance across groups is considered equal, and the one-way ANOVA test is valid.
•	If a significant p-value is found: The equal variance assumption is violated, and alternative statistical methods should be considered.
27
Q

A test has a p-value of 0.0520 in a table that also includes a chi-square statistic.

Which of the statements about the table is TRUE?
A. The equal variances assumption was invalid
B. The oneway anova test was valid
C. The posthoc test was not appropriate
D. There was a significant difference in mean BMI

A


To determine the validity of the one-way ANOVA test based on the provided p-values, you should consider both the ANOVA p-value and the p-value from the test for equal variances (Bartlett’s test in your case). Here’s how you use these p-values:

  • Purpose: This p-value tests whether there are any significant differences in means among the groups.
  • Provided p-value: 0.0520
  • Interpretation: Since this p-value is greater than the typical significance level of 0.05, it suggests that there is no statistically significant difference in the means of the groups. This p-value tells you if the overall ANOVA test result is significant or not.
  • Purpose: This p-value tests whether the variances are equal across the groups.
  • Provided p-value: 0.971
  • Interpretation: A high p-value (greater than 0.05) from Bartlett’s test suggests that the assumption of equal variances (homoscedasticity) is not violated. This means that variances among the groups are considered to be equal, making the ANOVA test valid in terms of this assumption.
  • Use the ANOVA p-value (0.0520) to assess whether there is a significant difference in means across the groups.
  • Use the Bartlett’s test p-value (0.971) to check if the assumption of equal variances is valid.

In your case, the ANOVA p-value (0.0520) indicates no significant difference in means, while Bartlett’s test p-value (0.971) supports that the variances are equal. The ANOVA test remains valid given that Bartlett’s test confirms the homogeneity of variances.

28
Q

Which of the following is NOT an assumption of the oneway anova? The
A. distribution must be normal for all the groups
B. observations are not correlated
C. differences in observations must be normally distributed
D.variance must be the same for all the groups

A

Here’s a clearer explanation:

Assumptions of One-Way ANOVA:

1.	Normality: The data within each group should be approximately normally distributed.
2.	Independence: Observations should be independent of each other. This means that each observation is not influenced by or related to another.
3.	Homoscedasticity (Equal Variances): The variances among the groups should be approximately equal.

Not an Assumption (Answer: C):

•	Differences in observations must be normally distributed: this is NOT an assumption of the one-way ANOVA; it is an assumption of the paired t-test, where the paired differences must be normal. The one-way ANOVA requires the observations themselves to be normally distributed within each group. (Option B, “observations are not correlated,” is the independence assumption stated another way, so it IS an assumption of the ANOVA.)