Statistics Flashcards
- Sources: OpenStax Introductory Statistics; Statistics (4th ed.) by Freedman, Pisani, and Purves
Average
A number that describes the central tendency of the data
average = sum of entries / number of entries
Blinding
Not telling participants which treatment they are receiving
Categorical Variable
Variables that take on values that are names or labels
Cluster Sampling
A method for selecting a random sample: divide the population into groups (clusters), then use simple random sampling to select a set of clusters. Every individual in the chosen clusters is included in the sample.
Continuous Random Variable
A random variable (RV) whose outcomes are measured; the height of trees in the forest is a continuous RV.
Control Group
A group in a randomized experiment that receives an inactive treatment but is otherwise managed exactly as the other groups
Convenience Sampling
A nonrandom method of selecting a sample; this method selects individuals that are easily accessible and may result in biased data.
Cumulative Relative Frequency
The term applies to an ordered set of observations from smallest to largest. The cumulative relative frequency is the sum of the relative frequencies for all values that are less than or equal to the given value.
Data
A set of observations (a set of possible outcomes); most data can be put into two groups: qualitative (an attribute whose value is indicated by a label) or quantitative (an attribute whose value is indicated by a number). Quantitative data can be separated into two subgroups: discrete and continuous. Data is discrete if it is the result of counting (such as the number of students of a given ethnic group in a class or the number of books on a shelf). Data is continuous if it is the result of measuring (such as distance traveled or weight of luggage)
Double-blind experiment
An experiment in which both the subjects of an experiment and the researchers who work with the subjects are blinded
Triple-blind experiment
An experiment in which the subjects, the researchers who work with the subjects, and the analysts who analyze the data are all blinded
Experimental Unit
Any individual or object to be measured
Explanatory Variable
The independent variable in an experiment; the value controlled by researchers
Frequency
The number of times a value of the data occurs
Institutional Review Board
A committee tasked with oversight of research programs that involve human subjects
Informed Consent
Any human subject in a research study must be cognizant of any risks or costs associated with the study. The subject has the right to know the nature of the treatments included in the study, their potential risks, and their potential benefits. Consent must be given freely by an informed, fit participant.
Lurking Variable
A variable, not included in the experiment, that has an effect on a study even though it is neither an explanatory variable nor a response variable
Confounding Variable
Difference between the treatment and control groups - other than the treatment - which affects the responses being studied. A third variable, associated with both the explanatory and response variables.
“The idea is a bit subtle: a gene that causes cancer but is unrelated to smoking is not a confounder and is sideways to the argument”
Gene needs to A) cause cancer AND B) get people to smoke
Sometimes controlled for by cross-tabulation
How is a Lurking Variable different from a Confounding Variable?
Lurking = Unknown or unconsidered
Confounding = Known but not controlled for
Nonsampling Error/Systematic Error/Bias
An issue that affects the reliability of sampling data other than natural variation; it includes a variety of human errors including poor study design, biased sampling methods, inaccurate information provided by study participants, data entry errors, and poor analysis.
Numerical Variable
Variables that take on values that are indicated by numbers
Population Parameter
A number that is used to represent a population characteristic and that generally cannot be determined easily
Placebo
An inactive treatment that has no real effect on the explanatory variable
Population
All individuals, objects, or measurements whose properties are being studied
Probability
A number between zero and one, inclusive, that gives the likelihood that a specific event will occur
Proportion
The number of successes divided by the total number in the sample
Qualitative Data
Data that has an attribute whose value is indicated by a label
Quantitative Data
Data with an attribute whose value is indicated by a number; quantitative data can be separated into two subgroups: discrete and continuous.
Data is discrete if it is the result of counting (such as the number of students of a given ethnic group in a class or the number of books on a shelf).
Data is continuous if it is the result of measuring (such as distance traveled or weight of luggage).
Random Assignment
The act of organizing experimental units into treatment groups using random methods
Random Sampling
A method of selecting a sample that gives every member of the population an equal chance of being selected.
Relative Frequency
The ratio of the number of times a value of the data occurs in the set of all outcomes to the total number of outcomes
Representative Sample
A subset of the population that has the same characteristics as the population
Response Variable
The dependent variable in an experiment; the value that is measured for change at the end of an experiment
Sample
A subset of the population studied
Sampling Error (Chance Variation)
The natural variation that results from selecting a sample to represent a larger population; this variation decreases as the sample size increases, so selecting larger samples reduces sampling error.
Sampling with Replacement
Once a member of the population is selected for inclusion in a sample, that member is returned to the population for the selection of the next individual.
Sampling without Replacement
A member of the population may be chosen for inclusion in a sample only once. If chosen, the member is not returned to the population before the next selection.
Simple Random Sampling
A straightforward method for selecting a random sample; give each member of the population a number. Use a random number generator to select a set of labels. These randomly selected labels identify the members of your sample.
Sample Statistic
A numerical characteristic of the sample; a statistic estimates the corresponding population parameter.
sample estimate = parameter + bias + chance error/sampling error
Stratified Sampling
A method for selecting a random sample used to ensure that subgroups of the population are represented adequately; divide the population into groups (strata). Use simple random sampling to identify a proportionate number of individuals from each stratum.
Systematic Sampling
A method for selecting a random sample; list the members of the population. Use simple random sampling to select a starting point in the population. Let k = (number of individuals in the population)/(number of individuals needed in the sample). Choose every kth individual in the list starting with the one that was randomly selected. If necessary, return to the beginning of the population list to complete your sample.
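A minimal Python sketch of this procedure; the function name is illustrative, and the wrap-around behavior follows the description above:

```python
import random

def systematic_sample(population, sample_size):
    """Select every k-th member, starting from a random point in the list."""
    k = len(population) // sample_size          # k = population size / sample size
    start = random.randrange(len(population))  # random starting point
    # Choose every k-th individual, wrapping to the beginning if necessary.
    return [population[(start + i * k) % len(population)]
            for i in range(sample_size)]

print(systematic_sample(list(range(1, 101)), 10))
```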
Treatments
Different values or components of the explanatory variable applied in an experiment
Variable
A characteristic of interest for each person or object in a population.
Box plot
A graph that gives a quick picture of the middle 50% of the data plus the minimum and maximum value.
First Quartile
The value that is the median of the lower half of the ordered data set
Frequency Polygon
A graph that shows the frequencies of grouped data. It is a type of frequency diagram that plots the midpoints of the class intervals against the frequencies and then joins up the points with straight lines.
Frequency Table
A data representation in which grouped data is displayed along with the corresponding frequencies
Histogram
A graphical representation in x-y form of the distribution of data in a data set; x represents the data and y represents the frequency, or relative frequency. The graph consists of contiguous rectangles, the area of each representing the percent of the data in that block/class.
Interquartile Range (IQR)
The range of the middle 50 percent of the data values; the IQR is found by subtracting the first quartile from the third quartile.
Interval
Also called a class interval; an interval represents a range of data and is used when displaying large data sets
Mean
The sum of all values in the population divided by the number of values
Median
A number that separates ordered data into halves; half the values are the same number or smaller than the median and half the values are the same number or larger than the median. The median may or may not be part of the data.
Midpoint
The mean of an interval in a frequency table
Mode
The value that appears most frequently in a set of data
Outlier
An observation that does not fit the rest of the data.
A common rule of thumb: a value that lies 2 or more standard deviations from the mean may be flagged as an outlier
Paired Data Set
Two data sets that have a one-to-one relationship so that 1) each data set is the same size and 2) each data point in one data set is matched with exactly one data point in the other data set.
Percentile
A number that divides ordered data into hundredths; percentiles may or may not be part of the data. The median of the data is the second quartile and the 50th percentile. The first and third quartiles are the 25th and the 75th percentiles, respectively.
FORMULA 1: Percentile rank of a value x
P(x) = (n/N)*100, where n = number of values below x and N = total number of values
FORMULA 2: Ordinal rank (index) of a given percentile P
n = (P*N)/100
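A direct transcription of the two formulas into Python. Textbook percentile definitions vary (some interpolate or count ties differently); this follows the convention above:

```python
def percentile_rank(values, x):
    """FORMULA 1: P(x) = (n / N) * 100, n = count of values below x."""
    n = sum(1 for v in values if v < x)
    return n / len(values) * 100

def percentile_index(values, p):
    """FORMULA 2: ordinal rank n = (P * N) / 100 for the P-th percentile."""
    return p * len(values) / 100

data = [15, 20, 35, 40, 50]
print(percentile_rank(data, 40))   # 60.0: three of five values fall below 40
print(percentile_index(data, 60))  # 3.0: the 60th percentile sits at rank 3
```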
Quartiles
The numbers that separate the data into quarters; quartiles may or may not be part of the data. The second quartile is the median of the data.
Skewed
Used to describe data that is not symmetrical; when the right side of a graph looks “chopped off” compared to the left side, we say it is “skewed to the left.” When the left side of the graph looks “chopped off” compared to the right side, we say the data is “skewed to the right.” Alternatively: when the lower values of the data are more spread out, we say the data are skewed to the left. When the greater values are more spread out, the data are skewed to the right.
Outliers in the tail pull the mean from the center towards the longer tail.
Standard Deviation
A number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation.
Measures how far the data values are from their mean, on average
SD = the rms size of the deviations from the average
Deviation = Value - Average
SD = sqrt(avg(deviations^2))
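The recipe above in Python. This is the population SD, dividing by n; the sample SD in the Variance card below divides by n − 1:

```python
from math import sqrt

def average(values):
    return sum(values) / len(values)

def standard_deviation(values):
    """SD = sqrt(avg(deviations^2)): the rms size of the deviations."""
    avg = average(values)
    deviations = [v - avg for v in values]
    return sqrt(average([d ** 2 for d in deviations]))

print(standard_deviation([1, 2, 3, 4, 5]))  # sqrt(2) ≈ 1.414
```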
Variance
Mean of the squared deviations from the mean, or the square of the standard deviation; for a set of data, a deviation can be represented as x − x̄, where x is a value of the data and x̄ is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the difference of the sample size and one.
AND Event
An outcome is in the event A AND B if the outcome is in both A AND B at the same time.
Complement Event
The complement of event A consists of all outcomes that are NOT in A.
Contingency Table
The method of displaying a frequency distribution as a table with rows and columns to show how two variables may be dependent (contingent) upon each other; the table provides an easy way to calculate conditional probabilities.
Conditional Probability
The likelihood that an event will occur given that another event has already occurred
Dependent Events
If two events are NOT independent, then we say that they are dependent.
Equally Likely
Each outcome of an experiment has the same probability.
Event
A subset of the set of all outcomes of an experiment; the set of all outcomes of an experiment is called a sample space and is usually denoted by S. An event is an arbitrary subset in S. It can contain one outcome, two outcomes, no outcomes (empty subset), the entire sample space, and the like. Standard notations for events are capital letters such as A, B, C, and so on.
Experiment
A planned activity carried out under controlled conditions
Independent Events
The occurrence of one event has no effect on the probability of the occurrence of another event. Events A and B are independent if one of the following is true:
P(A|B) = P(A)
P(B|A) = P(B)
P(A AND B) = P(A)P(B)
Mutually Exclusive
Two events are mutually exclusive if the probability that they both happen at the same time is zero. If events A and B are mutually exclusive, then P(A AND B) = 0.
Or Event
An outcome is in the event A OR B if the outcome is in A or is in B or is in both A and B.
Outcome
A particular result of an experiment
Sample Space
The set of all possible outcomes of an experiment
Tree Diagram
The useful visual representation of a sample space and events in the form of a “tree” with branches marked by possible outcomes together with associated probabilities (frequencies, relative frequencies)
Venn Diagram
The visual representation of a sample space and events in the form of circles or ovals showing their intersections
Bernoulli Trials
An experiment with the following characteristics:
- There are only two possible outcomes called “success” and “failure” for each trial.
- The probability p of a success is the same for any trial (so the probability q = 1 − p of a failure is the same for any trial).
Binomial Experiment
A statistical experiment that satisfies the following three conditions:
- There are a fixed number of trials, n.
- There are only two possible outcomes, called “success” and “failure,” for each trial. The letter p denotes the probability of a success on one trial, and q denotes the probability of a failure on one trial.
- The n trials are independent and are repeated using identical conditions.
Expected Value
Expected arithmetic average when an experiment is repeated many times.
The average value of values generated by a chance process
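For a discrete random variable, the expected value is the probability-weighted average, E[X] = sum of x * P(x). A small sketch, with a fair die as the example:

```python
def expected_value(outcomes):
    """E[X] = sum of (value * probability) over all outcomes."""
    return sum(x * p for x, p in outcomes)

# A fair six-sided die: each face 1..6 has probability 1/6.
die = [(face, 1 / 6) for face in range(1, 7)]
print(expected_value(die))  # 3.5
```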
Binomial Probability Distribution
A discrete random variable (RV) that arises from Bernoulli trials; there are a fixed number, n, of independent trials. “Independent” means that the result of any trial (for example, trial one) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV X is defined as the number of successes in n trials. The notation is: X ~ B(n, p)
Geometric Distribution
A discrete random variable (RV) that arises from the Bernoulli trials; the trials are repeated until the first success. The geometric variable X is defined as the number of trials until the first success.
Geometric Experiment
A statistical experiment with the following properties:
- There are one or more Bernoulli trials with all failures except the last one, which is a success.
- In theory, the number of trials could go on forever. There must be at least one trial.
- The probability, p, of a success and the probability, q, of a failure do not change from trial to trial.
Hypergeometric Experiment
A statistical experiment with the following properties:
- You take samples from two groups.
- You are concerned with a group of interest, called the first group.
- You sample without replacement from the combined groups.
- Each pick is not independent, since sampling is without replacement.
- You are not dealing with Bernoulli Trials.
Hypergeometric Probability
A discrete random variable (RV) that is characterized by:
- A fixed number of trials.
- The probability of success is not the same from trial to trial.
We sample from two groups of items when we are interested in only one group. X is defined as the number of successes out of the total number of items chosen.
Probability Distribution Function (PDF)
A mathematical description of a discrete random variable (RV), given either in the form of an equation (formula) or in the form of a table listing all the possible outcomes of an experiment and the probability associated with each outcome.
Poisson Probability Distribution
A discrete random variable (RV) that counts the number of times a certain event will occur in a specific interval; characteristics of the variable:
- The probability that the event occurs in a given interval is the same for all intervals.
- The events occur with a known mean and independently of the time since the last event.
The Poisson distribution is often used to approximate the binomial distribution, when n is “large” and p is “small” (a general rule is that n should be greater than or equal to 20 and p should be less than or equal to 0.05).
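A quick check of the approximation rule using only the standard library; n = 100, p = 0.02 satisfies the rule, so λ = np = 2 (the numbers are illustrative):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

n, p = 100, 0.02  # n >= 20 and p <= 0.05, so lambda = n * p = 2
for k in range(4):
    # The two columns come out close: e.g., 0.1326 vs 0.1353 for k = 0.
    print(k, round(binomial_pmf(k, n, p), 4), round(poisson_pmf(k, n * p), 4))
```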
Random Variable (RV)
A characteristic of interest in a population being studied; common notation for variables is upper case Latin letters X, Y, Z, …; common notation for a specific value from the domain (the set of all possible values of a variable) is lower case Latin letters x, y, and z. For example, if X is the number of children in a family, then x represents a specific integer 0, 1, 2, 3, …. Variables in statistics differ from variables in intermediate algebra in the two following ways.
- The domain of the random variable (RV) is not necessarily a numerical set; the domain may be expressed in words; for example, if X = hair color then the domain is {black, blond, gray, green, orange}.
- We can tell what specific value x the random variable X takes only after performing the experiment.
The Law of Large Numbers
Sample mean approaches population mean as sample size increases.
As the number of trials in a probability experiment increases, the difference between the theoretical probability of an event and the relative frequency probability approaches zero.
Decay parameter
The decay parameter describes the rate at which probabilities decay to zero for increasing values of x.
Exponential Distribution
A continuous random variable (RV) that appears when we are interested in the intervals of time between some random events, for example, the length of time between emergency arrivals at a hospital.
Memoryless property
For an exponential random variable X, the memoryless property is the statement that knowledge of what has occurred in the past has no effect on future probabilities.
Poisson Distribution
If there is a known average of λ events occurring per unit time, and these events are independent of each other, then the number of events X occurring in one unit of time has the Poisson distribution.
Uniform Distribution
A discrete or continuous random variable (RV) that has equally likely outcomes over the domain, a < x < b.
Normal Distribution
aka the Gaussian distribution; Visualized as a symmetrical bell-shaped curve. The most frequent values cluster around the center, and the probability of finding values far away from the center tapers off gradually in both directions.
Standard Normal Distribution / Z-Distribution
A normal distribution with mean 0 and standard deviation of 1
Z-score
Statistical value that describes a specific data point’s relative position within a standard normal distribution (aka Z-Distribution). It tells you how many standard deviations a particular point is away from the mean (average) of the data, expressed in standard deviation units.
Central Limit Theorem (CLT)
The sampling distribution of the mean for a variable approaches a normal distribution as the sample size increases.
Given a random variable (RV) with known mean μ, known standard deviation σ, and sample size n: if n is sufficiently large, then the distribution of the sample means and the distribution of the sample sums will approximate a normal distribution regardless of the shape of the population. The mean of the sample means will equal the population mean, and the mean of the sample sums will equal n times the population mean.
As sample size, n, increases, the standard deviation of the sampling distribution becomes smaller, because the square root of the sample size is in the denominator. In other words, the sampling distribution clusters more tightly around the mean as sample size increases.
Why is it important?
1. Normality Assumption
Allows us to use hypothesis tests that rely on normally distributed data even when the underlying data are not normally distributed
2. Precision of Estimates
We can make our estimates more precise by increasing our sample size
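A small simulation illustrating both points: sample means from a skewed (exponential) population cluster around the population mean and spread less as n grows. The seed and sample sizes are arbitrary choices:

```python
import random
from math import sqrt

random.seed(0)
for n in (5, 50, 500):
    # 2000 sample means, each from n draws of a skewed exponential population.
    means = [sum(random.expovariate(1.0) for _ in range(n)) / n
             for _ in range(2000)]
    avg = sum(means) / len(means)
    sd = sqrt(sum((m - avg) ** 2 for m in means) / len(means))
    print(f"n={n:3d}: mean of sample means ≈ {avg:.3f}, SD ≈ {sd:.3f}")
# The SDs shrink roughly like 1/sqrt(n): about 0.45, 0.14, 0.045.
```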
Sampling Distribution of the Mean
Describes the probability distribution of all possible means you could get if you draw multiple random samples of size n from a population.
Given simple random samples of size n from a given population with a measured characteristic such as mean, proportion, or standard deviation for each sample, the probability distribution of all the measured characteristics is called a sampling distribution.
Standard Error of the Mean
The standard deviation of the distribution of the sample means around the population mean.
Binomial Distribution
A discrete random variable (RV) which arises from Bernoulli trials; there are a fixed number, n, of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV X is defined as the number of successes in n trials.
Confidence Interval (CI)
An interval estimate for an unknown population parameter. This depends on:
1. the desired confidence level,
2. information that is known about the distribution (for example, known standard deviation),
3. the sample and its size.
Confidence Level (CL)
The percent expression for the probability that the confidence interval contains the true population parameter; for example, if the CL = 90%, then in 90 out of 100 samples the interval estimate will enclose the true population parameter.
Confidence Level = 1 − Alpha (α)
Degrees of Freedom (df)
The number of objects in a sample that are free to vary. Usually, df = n -1
Error Bound for a Population Mean (EBM) / Margin of Error
The margin of error; depends on the confidence level, sample size, and known or estimated population standard deviation.
Inferential Statistics
Also called statistical inference or inductive statistics; this facet of statistics deals with estimating a population parameter based on a sample statistic. For example, if four out of the 100 calculators sampled are defective we might infer that four percent of the production is defective.
Point Estimate
A single number computed from a sample and used to estimate a population parameter
Student’s t-Distribution
Investigated and reported by William S. Gosset in 1908 and published under the pseudonym Student; the major characteristics of the random variable (RV) are:
- It is continuous and assumes any real values.
- The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
- It approaches the standard normal distribution as n gets larger (for n >= 30 it closely follows the normal distribution).
- There is a “family” of t-distributions: each representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data values.
Hypothesis
A statement about the value of a population parameter; in the case of two hypotheses, the statement assumed to be true is called the null hypothesis (notation H0) and the contradictory statement is called the alternative hypothesis (notation Ha).
Hypothesis Testing
Statistical analysis that uses sample data to assess two mutually exclusive theories about the properties of a population.
Based on sample evidence, a procedure for determining whether the hypothesis stated is a reasonable statement and should not be rejected or is unreasonable and should be rejected.
- Understand Sampling Distribution
- Understand test statistic given sampling distribution
- Run Test and receive p-value
- Evaluate statistical significance and decision
Works by taking the observed test statistic from a sample and using the sampling distribution to calculate the probability of obtaining that test statistic if the null hypothesis is correct.
Level of Significance of the Test
Probability of a Type I error (reject the null hypothesis when it is true). Notation: α. In hypothesis testing, the Level of Significance is called the preconceived α or the preset α.
Q: Is this true? See Hypothesis Testing by Jim Frost.
p-value
The probability that an event will happen purely by chance assuming the null hypothesis is true. The smaller the p-value, the stronger the evidence is against the null hypothesis.
Probability of observing a sample statistic that is at least as extreme as our sample statistic when we assume that the null hypothesis is correct
Indicates the strength of the sample evidence against the null hypothesis. Probability we would obtain the observed effect, or larger, if the null hypothesis is correct.
If the p-value is less than or equal to the significance level, we reject the null hypothesis in favor of the alternative hypothesis, and the results are statistically significant.
- When the p-value is low, the null must go
- If the p-value is high, the null will fly
Whenever we see a p-value, we know we are looking at the results of a hypothesis test.
Determines statistical significance for a hypothesis test
P-values are NOT an error rate!
The chance of getting a test statistic assuming the null hypothesis is true. Not the chance of the null hypothesis being correct.
Type I Error
Alpha (α)
The decision is to reject the null hypothesis when, in fact, the null hypothesis is true.
Ex. Saying there is a change when there is not
Ex. Fire alarm going off when there is no fire
Type II Error
Beta (β)
The decision is not to reject the null hypothesis when, in fact, the null hypothesis is false.
Ex. Saying there is not a change when there is.
Ex. Fire alarm does NOT ring when there is a fire
Descriptive Statistics
Describes/Summarizes a dataset for a particular group of objects, observations, or people. No attempt to generalize beyond the set of observations.
Statistics
Science concerned with collecting, organizing, analyzing, interpreting, and presenting data. It equips us with the tools and methods to extract meaningful insights from information.
Continuous Data
TODO
Discrete Data
TODO
Interval Scale Data
TODO
Ratio Scale Data
TODO
Binary Data
TODO
Ordinal Data
TODO
Multimodal Distribution
A distribution that has multiple peaks
4 Categories of Descriptive/Summary Statistics
- Central Tendency
- Spread/Dispersion
- Correlation/Dependency
- Shape of Distribution
Second Quartile
aka the Median of the data. Splits the entire ordered dataset into 2 equal parts
Third Quartile
The value that is the median of the upper half of the ordered data set
Central Tendency
Measure of the center point or typical value of a dataset
Variability
Measure of the amount of dispersion in a dataset. How spread out the values in the dataset are.
Range
The difference between the MAX and MIN value in a dataset.
Spurious Correlation
Situation where two variables appear to have an association, but in reality, a third factor (confounding/lurking variable) causes that association.
Probability Distribution
Mathematical function that describes the probabilities for all possible outcomes of a random variable
Correlation
Indicates how much one variable (the dependent variable) changes in response to change in another variable (the independent variable). Value between -1 and 1.
Strength of association between 2 variables.
Strength (absolute value of r):
- 1 = perfect relationship
- 0.8 = strong
- 0.6 = moderate
- 0.4 = weak
- 0 = no relationship
Direction:
- Positive = upward slope
- Negative = downward slope
Discrete Probability Distribution / Probability Mass Functions
Probability distributions for discrete data (assume a set of distinct values)
Continuous Probability Distribution / Probability Density Functions
Probability distribution for continuous data (assumes infinite number of values between any two values)
Negative Binomial Distribution
Discrete probability distribution used to calculate the number of trials that are required to observe an event a specific number of times. In other words, given a known probability of an event occurring and the number of events that we specify, this distribution calculates the probability for observing that number of events within N trials.
Empirical Rule
Can be used to determine the proportion of values that fall within a specific number of standard deviations from the mean for data that is normally distributed.
STDDEV | PERCENTAGE OF DATA
1 | 68%
2 | 95%
3 | 99.7%
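The three percentages can be recovered from the standard normal curve, since P(|Z| <= k) = erf(k/sqrt(2)):

```python
from math import erf, sqrt

# P(|Z| <= k) for a standard normal Z equals erf(k / sqrt(2)).
for k in (1, 2, 3):
    print(f"within {k} SD: {erf(k / sqrt(2)) * 100:.1f}%")
# within 1 SD: 68.3%   within 2 SD: 95.4%   within 3 SD: 99.7%
```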
Standardization
Data transformation that creates a standard normal distribution (Z-distribution) with a mean of 0 and standard deviation of 1. Allows for easier comparison between features with different scales.
Normalization
Data transformation that rescales the data to a specific range, often between 0 and 1 or -1 and 1 (min-max normalization). Ensures that all features are on a similar scale, preventing features with larger ranges from dominating the model during training.
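Both transforms side by side in plain Python; the sample data are illustrative:

```python
from math import sqrt

def standardize(values):
    """Z-scores: result has mean 0 and SD 1."""
    avg = sum(values) / len(values)
    sd = sqrt(sum((v - avg) ** 2 for v in values) / len(values))
    return [(v - avg) / sd for v in values]

def normalize(values):
    """Min-max rescaling: result lies in [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(standardize(data))  # e.g., first entry: (2 - 5) / 2 = -1.5
print(normalize(data))    # e.g., first entry: (2 - 2) / (9 - 2) = 0.0
```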
Tools for Inferential statistics:
- Hypothesis Tests
- Confidence Intervals
- Regression Analysis
Precision
TODO
Random Sampling Methodologies
- Simple Random Sampling
- Stratified Sampling
- Cluster Sampling
Simple Random Sampling
TODO
Stratified Sampling
TODO
Cluster Sampling
TODO
Independent Variable
Variables that we include in the experiment to explain or predict changes in the dependent/response variable.
Dependent Variable / Response Variable
Variable in the experiment that we want to explain or predict. Values of this variable are dependent on other variables
Causality
One event (the cause) brings about another event (the effect)
Parametric Statistics
Branch of statistics that assumes sample data come from populations that are adequately modeled by probability distributions with a set of parameters.
Nonparametric Statistics
Branch of statistics that does not assume sample data come from populations that are adequately modeled by probability distributions with a set of parameters.
Null Hypothesis
H0; One of the two mutually exclusive theories about the population’s properties. Typically states that there is no effect.
Alternative Hypothesis
HA; The other of the two mutually exclusive theories about the population’s properties. Typically states that the population parameter does not equal the null hypothesis value. In other words, there is a non-zero effect.
Effect / Population Effect / Difference
The difference between the population value and the null hypothesis value.
Represents the ‘signal’ in the data
Significance Level (Alpha, α)
Defines how strong the sample evidence must be to conclude an effect exists in the population; an evidentiary standard set before the study begins. Defined as the probability of rejecting a true null hypothesis - in other words, the probability that we say there is an effect when there is none.
Evidentiary standard set to determine whether our sample is strong enough to reject the null hypothesis.
Confidence Level = 1 - Alpha(α)
Critical Region
Defines sample values on the sampling distribution that are improbable enough to warrant rejecting the null hypothesis and therefore represent statistically significant results.
Student’s t-Distribution
TODO
t-test
Type of hypothesis test for the mean that uses the Student’s t-distribution as its sampling distribution and the t-value as its test statistic to determine statistical significance
t-statistic / t-value
Test statistic for a t-test. Used alongside the Student’s t-Distribution
1-Sample t-Test
Hypothesis test where we examine a single population and compare it to a hypothesized base value.
Null: The population mean equals the hypothesized mean
Alternative: The population mean does not equal the hypothesized mean
Assumptions:
1. Representative, random sample
2. Continuous Data
3. Sample data is normally distributed or Sample Size >= 20
Two-Tailed 1-Sample t-Test
aka nondirectional or two-sided test because we are testing for effects in both directions. When we perform a two-tailed test, we split the significance level percentage between both tails of the distribution.
Null: The effect equals zero
Alternative: The effect does not equal zero (nothing about direction)
We can detect both positive and negative results. Standard in scientific research where discovering ANY type of effect is usually of interest to researchers.
Default choice
One-Tailed 1-Sample t-Test
aka directional or one-sided test because we can test for effects in ONLY one direction. When we perform a one-tailed test, the entire significance level percentage goes into one tail of the distribution.
Null: The effect is less than or equal to 0
Alternative: the effect is greater than zero
One-tailed tests have more statistical power to detect an effect in one direction than a two-tailed test with the same design and significance level.
One-tailed tests occur most frequently for studies where one of the following is true:
1. Effects can exist in only one direction
2. Effects can exist in both directions but we only care about an effect in one direction
2-Sample t-Test
Hypothesis test where we examine two populations and compare them to each other.
Null: The means for the two populations are equal
Alternative: The means for the two populations are not equal
Assumptions:
1. Representative, random sample
2. Continuous Data
3. Sample data is normally distributed or each groups size >= 15
4. Groups are independent
5a. Groups have equal variances
5b. Groups have unequal variances –> Use Welch’s t-test
Paired t-Test
Hypothesis test where we assess dependent samples, which are two measurements on the same person or item.
A paired t-test is a 1-sample t-test where the hypothesized value is a 0 effect for the difference between the pre- and post-observations.
Null: The mean difference between pre- and post-observations equals zero
Alternative: The mean difference between pre- and post-observations does not equal zero
Assumptions:
1. Representative, random sample
2. Independent observations
3. Continuous data
4. Data is normally distributed or sample size >= 20
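A minimal sketch of the computation; the pre/post numbers are made up, and the resulting t-value would be compared against a t-distribution with df = n − 1 (e.g., via scipy.stats, if available):

```python
from math import sqrt

def paired_t(pre, post):
    """t = mean(d) / (s_d / sqrt(n)), where d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    s_d = sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))  # sample SD of d
    return mean_d / (s_d / sqrt(n)), n - 1

pre = [200, 210, 190, 205, 198]
post = [195, 200, 188, 196, 190]
t, df = paired_t(pre, post)
print(f"t = {t:.2f}, df = {df}")  # t ≈ -4.65, df = 4
```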
Why not ACCEPT the null hypothesis?
If our test fails to detect an effect, that’s not proof it doesn’t exist. It just means our sample contains an insufficient amount of evidence to conclude that an effect exists.
Lack of proof doesn’t represent proof that something doesn’t exist
Test Statistic
A value that hypothesis tests calculate from our sample data. This value boils our data down into a single number, which measures the overall DIFFERENCE between our sample data and our null hypothesis.
Represents the signal-to-noise ratio for a particular sample = signal/noise
Measures the difference between the data and what is expected by the null hypothesis
Ex. (hypothesis test, test statistics) = (t-test, t-value)
Power
A hypothesis test’s ability to detect an effect that actually exists. The test correctly rejects a false null hypothesis.
80% is the standard benchmark for studies
Power = 1 - Beta(β), the opposite of a Type II error
Beta(β) = probability of a Type II error
Factors that affect power:
1. Sample Size - Larger = higher power
2. Variability - Lower = higher power
3. Effect Size - Larger = higher power
Chi-Square Distribution
TODO
Goodness-of-Fit Test
To assess whether a data set fits a specific distribution, we can apply the goodness-of-fit hypothesis test that uses the chi-square distribution.
Null hypothesis states that the data comes from the assumed distribution.
Test compares observed values against the values you would expect to have if the data followed the assumed distribution.
Test is right-tailed. Each observation or cell category must have an expected value of at least 5.
Test of Independence
To assess whether two factors are independent, we can apply the test of independence that uses the chi-square distribution.
Null hypothesis states that the two factors are independent.
Test compares observed values to expected values. Test is right-tailed. Each observation or cell category must have an expected value of at least 5.
Test of Homogeneity
To assess whether two datasets are derived from the same distribution, which can be unknown, we can apply the test of homogeneity that uses the chi-square distribution.
Null hypothesis states that the populations of the two data sets come from the same distribution.
Test compares the observed values against the expected values if the two populations followed the same distribution.
Test is right-tailed. Each observation or cell category must have an expected value of at least 5.
Test of Single Variance
TODO
Z-test
Hypothesis test used when you know the population standard deviation (companion of the t-test, which is used when you only have an estimate of the population standard deviation).
An argument by contradiction, designed to show that the null hypothesis will lead to an absurd conclusion and must therefore be rejected.
Statistical Significance
Indicates that our sample provides enough evidence to conclude that the effect exists in the population (via p-value).
Factors that influence statistical significance:
1. Effect Size
2. Sample Size
3. Variability
Practical Significance
Asking whether a statistically significant effect is substantial enough to be meaningful.
Power Analysis
Managing the tradeoffs between effect size, sample size, and variability to settle on a sample size needed for a hypothesis test.
Our goal is to collect a large enough sample to have sufficient power to detect a meaningful effect but not too large to be wasteful.
Linear Equation
y = mx+b
y = dependent variable
m = slope
x = independent variable
b = y-intercept
Slope (Linear Equation)
Tells us how much the dependent variable (y) changes for every one unit increase in the independent variable (x), on average.
y-Intercept (Linear Equation)
The value of the dependent variable when the independent variable equals zero.
Residual
TODO
Least-Squares Regression
TODO
Sum of Squared Errors
TODO
Correlation Coefficient, r
Measures the strength of the linear relationship between x and y.
Value between -1 and 1
Coefficient of Determination, r^2
Equal to the square of the correlation coefficient, r. When expressed as a percent, r^2 represents the percent of variation in the dependent variable, y, that can be explained by variation in the independent variable x using the regression line.
Linear Regression Assumptions
- Linear
- Independent
- Normal
- Equal Variance
- Random
TODO - expand
ANOVA (Analysis of Variance)
Method of testing whether or not the means of three or more populations are equal
TODO - expand
One-Way ANOVA
Hypothesis test that allows us to compare multiple (3 or more) group means.
Requires one categorical factor for the independent variable and a continuous variable for the dependent variable. The values of the categorical factor divide the continuous data into groups. The test determines whether the mean differences between these groups are statistically significant.
Null: All group means are equal (F-value = 1)
Alternative: Not all group means are equal (F-value != 1)
Assumptions:
1. Random Samples
2. Independent Groups
3. Dependent variable is continuous
4. Independent variable is categorical
5. Sample data is normally distributed or each group has 15 or more observations
6. Groups should have roughly equal variances, or use Welch’s ANOVA
Two-Way ANOVA
Used to assess differences between group means that are defined by two categorical factors.
More closely resembles Linear Regression. (Just use linear regression)
Assumptions:
1. Dependent variable is continuous
2. The 2 independent variables are categorical
3. Random residuals with constant variance
F-Distribution
Distribution used in an F-test.
TODO
F-statistic / F-value / F-ratio
Test statistic used in an F-test alongside an F-Distribution.
Ratio of two variances, or technically, two mean squares.
Test of Two Variances
TODO
F-test
Type of hypothesis test that uses the F-distribution as its sampling distribution and the F-statistic to determine statistical significance
Post Hoc Tests with ANOVA
Used to explore differences between multiple group means while controlling the experiment-wise error rate, as ANOVA only tells us IF there exists a difference between ANY of the group means.
Types:
- Tukey’s Method
- Dunnett’s Method
- Hsu’s MCB
The statistical lesson: the treatment and control groups should be as _______ as possible, except for the _________.
similar, treatment
The treatment and control groups must be drawn from the same ________ population in order to remain _______.
eligible, unbiased
Historical Controls
Patients that were part of a control group in the past. These observations are not contemporaneous with the current treatment group
Method of Comparison
Fundamental statistical method where we understand the effect of a treatment by establishing a CONTROL and a TREATMENT group whose “only” difference is the treatment. Any difference between these two groups is then associated with/caused by the treatment.
Caution: Confounding variables
Controlled Experiment
There is a treatment and a control group in the experiment, but the members of the groups are not randomly assigned from an eligible group.
Randomized Controlled Experiment
Experiment where members of control and treatment group are randomly drawn from an “eligible” group
Contemporaneous Controls
Control groups that are drawn at the same time as the treatment group. Opposite of “Historical Controls”.
Observational Study
A study where researchers do not assign experimental units into the treatment and control groups but rather observe experimental units that assign themselves to the different groups.
Some subjects have the condition whose effects are being studied; this is the treatment group. The other subjects are the controls.
Can only establish association.
Association
A formal relationship between two variables measured by correlation.
Circumstantial evidence for causation.
Difference between Association and Causation?
Association = two variables (X, Y) have a correlation. When one variable moves, the other moves predictably. Unknown whether variable X causes the move in Y or not; we just know Y responds predictably.
Causation = same as association but we have performed a randomized-controlled experiment to confirm causation. X causes the move in Y.
Simpson’s Paradox
A statistical phenomenon where a trend appears in separate groups of data but disappears or reverses when the groups are combined (aggregated). This can occur when there is a confounding variable that distorts the relationship between the variables of interest.
Ex. Admissions rates between men and women at UC Berkeley
In a histogram, the _____ of the blocks represents percentages
Area
Distribution Table
A table that shows the percentage of data found in each class interval
To figure out the height of a block in a histogram over a class interval, divide the percentage the block represents by the ______ of the class interval
Length
Density Scale
Y-axis on a histogram representing “percent per” horizontal unit
Ex. Percent per Thousand Dollars
In a histogram, the height of a block represents ________ - percentage per horizontal unit
Crowding
With the density scale on the vertical axis, the areas of the blocks come out as a _______. The area under the histogram over an interval equals the percentage of cases in that ________. The total area under the histogram = ___%
Percent
Interval
100
Root Mean Square (rms)
Measures how big the entries of a list are, neglecting signs
rms size of a list = sqrt(avg(entries^2))
Read the name in reverse to compute the rms:
1. SQUARE the entries
2. Take their MEAN
3. Take the square ROOT
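The three steps in Python, with an example showing why the plain average misleads when signs cancel:

```python
from math import sqrt

def rms(entries):
    """Read the name backwards: Square the entries, take the Mean, then Root."""
    return sqrt(sum(e ** 2 for e in entries) / len(entries))

data = [0, 5, -8, 7, -3]
print(sum(data) / len(data))  # 0.2: signs cancel, hiding the typical size
print(rms(data))              # ≈ 5.42: the typical size, ignoring signs
```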
Why do statisticians use the rms instead of the average when trying to understand the average size of values in a list?
The average is not a good measure of size if the list entries are both positive AND negative, because the positives and the negatives can cancel each other out, leaving the average near 0.
rms gets around this problem by squaring all entries first
The Standard Deviation (SD) is the ____ size of the deviations from the _______.
rms
average
Cross-Sectional Study
Study where subjects are compared to each other at one point in time
Longitudinal Study
Study where subjects are compared to themselves at different points in time
What is the difference between a “Cross-Sectional” and “Longitudinal” Study?
Cross-Sectional compares many subjects at the same time and longitudinal looks at the same subjects but over many periods of time.
When your data is skewed, it is better to use the ______ instead of the _______ as a measure of central tendency for the data.
Median
Average
__________ can be used to summarize data that does not follow the normal distribution.
Percentiles
If you ADD the same number, n, to every entry in a list, the average increases by ________
n
If you ADD the same number, n, to every entry in a list, the standard deviation increases by ________
Does not change.
If you MULTIPLY the same number, n, to every entry in a list, the average increases by ________
AVG*n
If you MULTIPLY the same number, n, to every entry in a list, the standard deviation increases by ________
SD*n
Change of Scale
When you change from one unit to another by performing a constant operation on all entries in a list
In an ideal world, if the same thing is measured several times, the same result would be obtained each time. In practice, there are differences, and each result is thrown off by _____________, and the error changes from measurement to measurement.
Chance Error
The SD of a series of repeated measurements estimates the likely size of the _______________ in a single measurement
Chance Error
Chance Error
Refers to the random fluctuations that occur in data due to sampling variability.
The likely size of this value being the SD of a sequence of repeated measurements made under the same conditions.
For a chance model, the chance error is measured by the standard error
Measurement Error
Occurs when the value obtained from a measurement differs from the true value of the quantity being measured. This error can arise due to various factors, including chance error and systematic error (or bias)
Individual Measurement = exact value + _______ + _______
Bias
Chance Error
_________ affects all measurements the same way, pushing them in the same direction. _____________ changes from measurement to measurement, sometimes up and sometimes down.
Bias
Chance Error
Slope
Rise/Run
The rate at which y increases with x
Intercept
The height of the line (y-value) at x = 0, where the line intersects the y-axis.
If there is a ______ association between two variables, then knowing one helps in predicting the other. However, when there is a _______ association, information about one variable does not help much in determining the other.
Strong
Weak
Point of Averages
The dot on a scatter plot that represents the point (average of x, average of y)
5 Summary Statistics of Scatter Plot
- Average of x-values
- SD of x-values
- Average of y-values
- SD of y-values
- Correlation (r)
Correlation Coefficient
r = avg( (x in standard units) * (y in standard units) )
Convert each variable to standard units and then take the average product
Measurement of LINEAR association between two variables.
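The standard-units recipe in Python, using the population SD as in the convention above:

```python
from math import sqrt

def standard_units(values):
    avg = sum(values) / len(values)
    sd = sqrt(sum((v - avg) ** 2 for v in values) / len(values))
    return [(v - avg) / sd for v in values]

def correlation(xs, ys):
    """r = average of (x in standard units) * (y in standard units)."""
    products = [a * b for a, b in zip(standard_units(xs), standard_units(ys))]
    return sum(products) / len(products)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0: perfect linear association
print(correlation([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0: perfect negative association
```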
SD Line
Line on a scatter plot that goes through all (x, y) points that are an equal number of SDs away from the average for both variables
Will pass through the Point of Averages (x = 0 SD, y = 0 SD)
slope = (SD of y)/(SD of x), taking the sign of r
How do you convert to standard units / standardize?
Subtract mean from value and then divide by the standard deviation.
z = ( x - avg ) / SD
The correlation coefficient is a pure number, without units. It is not affected by:
1. ___________ the two variables
2. ________ the same number to all the values of one variable
3. _________ all the values of one variable by a constant positive number
Interchanging
Adding
Multiplying
The correlation coefficient, r, measures clustering not in absolute terms but in relative terms - relative to the ________
Standard Deviations
r measures ________ association, not association __________.
Linear
In General
Regression
Technique that models the relationship between a dependent variable and one or more independent variables.
Way of using the correlation coefficient to estimate the average value of y for each value of x
Associated with each increase of one SD in x, there is an increase of r * SDs in y, on average
Regression Line
Estimates the average value of y for a corresponding value of x
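A sketch of the prediction recipe from the previous two cards, using only the five summary statistics of the scatter plot; the height/weight numbers are illustrative:

```python
def regression_estimate(x, avg_x, sd_x, avg_y, sd_y, r):
    """Each SD of x above average predicts r SDs of y above average."""
    z_x = (x - avg_x) / sd_x       # x in standard units
    return avg_y + r * z_x * sd_y  # regression estimate of y

# Illustrative: predict weight (lb) from height (in) with r = 0.4.
print(regression_estimate(x=73, avg_x=70, sd_x=3, avg_y=180, sd_y=45, r=0.4))
# 198.0: x is 1 SD above average, so y is predicted 0.4 SDs above average.
```

The rms error of such predictions works out to sqrt(1 − r^2) × SD of y.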
Regression Effect
In virtually all test-retest situations, the bottom group on the first test will, on average, show some improvement on the second test and the top group will, on average, fall back.
Regression Fallacy
Thinking that the Regression Effect is due to something important and not just spread around the regression line
Error / Residual
The distance of the predicted value from the actual value.
Error = actual - predicted
rms Error for Regression
Tells you how much error you can expect from your predictions vs the actual, on average.
Measures spread around the regression line in absolute terms
The units for the rms error are the same as the units for the variable ______________
Being predicted
Homoscedasticity
TODO
Heteroscedasticity
TODO
Least Squares Line
Among all lines, the one that makes the smallest rms error in predicting y from x
aka the regression line
Method of Least Squares
The process of computing least squares estimates in order to arrive at the least squares line
Least Squares Estimates
The slope and intercept of a regression line; the values that minimize the rms error define the least squares line
Frequency Theory
TODO
Addition Rule
To find the chance that AT LEAST ONE OF TWO things will happen, check to see if they are mutually exclusive. If they are, add the chances.
Multiplication Rule
The chance that two independent events will both happen equals the chance that the first will happen, multiplied by the chance that the second will happen given the first has happened.
What is the difference between independent and mutually exclusive?
Independent means the probability of event A is not affected by event B; mutually exclusive is the opposite: if event B happens, the chance of event A becomes 0.
If you want to find the chance that AT LEAST ONE OF TWO events will occur, and the events are NOT mutually exclusive, ___________ add the chances: the sum will be ___________.
Do not
Too Big
If you are having trouble working out the chance of an event, try to figure out the chance of its ____________; then subtract from 100%
Opposite
Chance Variability
Refers to the random fluctuations that occur in data due to sampling or measurement errors. It’s the inherent uncertainty that exists in any data set, even when collected and analyzed carefully.
Box Model
A model that can assist in modeling a population
You have a box with a series of tickets in it that you are drawing at random. An operation is then performed on the draws (sum, average, etc.)
Chance Process
TODO
What is the difference between the standard deviation and the standard error of a box model?
Standard Deviation = rms of value deviations from the average
Standard Error = the likely size of the difference between the sample average and the expected value (population average)
Square Root Law
Principle that states that the standard error of a sample proportion, or mean, decreases in proportion to the square root of the sample size.
In simpler terms, this means that as you increase the sample size, the accuracy of your estimate (whether it’s a proportion or a mean) improves, but not at a linear rate. Instead, the improvement gets smaller and smaller as the sample size gets larger.
Part of a formula used to compute the SE for a sum of draws made at random with replacement (independent) from a box.
The square root law is the mathematical explanation for the law of averages.
When should you use a z-Test vs a t-Test?
TODO
Box Model: Expected Value for the Sum
= number of draws * avg of box
Box Model: Sum
= expected value + chance error
In general, the sum is likely to be around its expected value, give or take the chance error (measured with standard error)
Box Model: Standard Error (SE) for the Sum
How big is the chance error around the expected value of the sum likely to be?
= sqrt(num draws) * SD_of_box
Each draw adds some extra variability to the sum, because you don’t know how it is going to turn out. As the number of draws goes up, the sum gets harder to predict, the chance error (absolute) gets bigger, and so does the standard error
Here we can see the square root law in effect
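Putting the box-model formulas together, with a die as the box (the seed is arbitrary):

```python
import random
from math import sqrt

box = [1, 2, 3, 4, 5, 6]   # the box: tickets for one die roll
draws = 100

avg_box = sum(box) / len(box)
sd_box = sqrt(sum((t - avg_box) ** 2 for t in box) / len(box))

ev_sum = draws * avg_box       # expected value for the sum = 350
se_sum = sqrt(draws) * sd_box  # SE for the sum ≈ 17.1 (square root law)
print(ev_sum, round(se_sum, 1))

random.seed(1)
total = sum(random.choice(box) for _ in range(draws))
print(total)  # likely to be around 350, give or take 17 or so
```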
For a box model, the sum of draws is likely to be around the ___________, give or take a __________ or so.
Expected value for the sum
Standard Error for the sum
Observed Values
Values that are returned by the chance process/ box model
= expected value + chance error
Box Model Attributes
Number
Standard Deviation (SD)
Expected Value (of sum) (EV)
Standard Error (of sum) (SE)
What is the difference between standardization and normalization?
TODO
Box Model: Counting
Instead of the chance process producing a value between A and B, it will produce either a 0 or a 1.
Our same formulas from before can be used.
Provided the number of draws is sufficiently large, the _____________ can be used to figure chances for the sum of the draws.
How large is sufficient?
normal curve
30 observations
Probability Histogram
A new kind of graph that represents chance rather than data. Each rectangle represents the chance of a given interval occurring. Sums to 100%.
x-Axis = Sum of Box Model
y-Axis = Percent per standard unit
With enough draws, the probability histogram for the sum will be close to the normal curve
Empirical
Experimentally Observed
Converge
Gets closer and closer to
As the number of samples increases, the probability histogram will get closer and closer to the ________________.
Normal Curve
Selection/Sample Bias
The systematic tendency on the part of the sampling procedure to exclude one kind of person or another from the sample.
Taking a larger sample does not help this problem; it just repeats the basic mistake on a larger scale
Non-Response Bias
Bias for members of the sample to be people that respond rather than all members selected to be in the sample initially
Non-respondents can be very different from respondents.
Quota Sampling
A nonrandom method in which interviewers fill preset quotas for key subgroups (e.g., age, sex, income) until the desired sample size is met; within each quota, the choice of subjects is left to the interviewer, which can introduce bias
With a simple random sample, the expected value for the sample percentage equals the ____________________. However, the sample percentage will be off by a chance error.
Population Percentage
Standard Error (SE) for Sample Percentage
= (SE for # / Size of sample)*100%
First get the SE for the corresponding number; then convert to a percentage, relative to the size of the sample
Multiplying the size of the sample by some factor divides the SE for a percentage not by the whole factor - but by its ________________.
Square Root
The SE for the sample number goes _____ like the ______________ of the sample size
Up
Square Root
The SE for the sample percentage goes ______ like the _____________ of the sample size.
Down
Square Root
When drawing at random from a box of 0’s and 1’s, the percentage of 1’s among the draws is likely to be around ___________, give or take _____________ or so.
Expected Value
SE for the Sample Percentage
Box Model Classifying & Counting: Standard Deviation
SD = sqrt( p * (1-p) )
SD = sqrt(percentage of 1s * percentage of 0s)
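Computing the SD of a 0-1 box and the resulting SEs; the numbers are illustrative:

```python
from math import sqrt

p, draws = 0.6, 400              # fraction of 1s in the box; number of draws

sd_box = sqrt(p * (1 - p))       # SD of the 0-1 box ≈ 0.49
se_count = sqrt(draws) * sd_box  # SE for the number of 1s ≈ 9.8
se_pct = se_count / draws * 100  # SE for the percentage ≈ 2.45%
print(round(sd_box, 2), round(se_count, 1), round(se_pct, 2))
```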
When estimating percentages, it is the _______________ of the sample which determines accuracy, not the ________________ to the population. This is true only if the sample is a small part of the population (which is the usual case).
Absolute Size
Size Relative
Chance error is not affected by the _______________ but rather the _______________ of the sample and the _________ of the box.
Population Size
Absolute Size of the Sample
Standard Deviation (SD), which determines the SE
(20.4)
Box Model: Population Percentage Coverage
= Sample Percentage ± N SEs
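A sketch of the interval computation, estimating the SD of the box from the sample itself (the sample numbers are made up):

```python
from math import sqrt

draws = 400
sample_pct = 60.0        # observed percentage of 1s

p = sample_pct / 100     # estimate the fraction of 1s in the box from the sample
se_pct = sqrt(draws) * sqrt(p * (1 - p)) / draws * 100  # ≈ 2.45%

# 95% confidence interval: sample percentage ± 2 SEs.
print(f"{sample_pct - 2 * se_pct:.1f}% to {sample_pct + 2 * se_pct:.1f}%")
# 55.1% to 64.9%
```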
A ____________________ is used when estimating an unknown parameter from sample data. The interval gives a range for the parameter and a confidence level that the range covers the true value.
Confidence Interval
_______________ are used when you reason forward, from the box to the draws; ________________ are used when reasoning backwards, from draws to the box.
Probabilities
Confidence levels
A sample percentage will be off the ____________________, due to chance error. The _______ tells you the likely size of the amount. Confidence levels were introduced to make this idea more quantitative.
Population Percentage
Standard Error (SE)
With a simple random sample, the ___________ percentage is used to estimate the ____________ percentage
Sample
Population
A __________________ for the population percentage is obtained by going the right number of ______________ either way from the sample percentage. The _______________ is read off the normal curve.
Confidence Interval
SEs
Confidence Level
When ___________ operates more or less evenly across the sample, it cannot be detected just by looking at the data.
Bias
Box Model: Expected Value of Average
= Average of Box
Box Model: Standard Error (SE) for the Average
= SE for sum / # of draws
Tells you the likely size of the difference between the sample average and the population average
Q: What is the difference between the SD of the sample and the SE for the sample average? (23.2)
- The SD says how far values are from the average - on average
- The SE says how far sample average are from the population average - on average
Q: Why is it OK to use the normal curve in figuring confidence levels? (23.2)
Because with simple random sampling, the probability histogram for the sample average follows the normal curve when the number of draws is sufficiently large (the central limit theorem).
Box Model: Expected Value of Count
= number of draws * fraction of 1s in the box
Box Model: Standard Error (SE) of Count
SE for sum, from a 0-1 box
Box Model: Expected Value of Percentage
= fraction of 1s in the box * 100% (the population percentage)
Box Model: Standard Error (SE) of Percentage
= (SE for count / # of draws)*100%
With a simple random sample, the __________ of the sample can be used to estimate the SD of the box. A __________ for the average of the box can be found by going the right number of SEs either way from the average of the draws. The __________ is read off the normal curve.
SD
Confidence Interval
Confidence Level
Q: How should an experiment be designed to test the effectiveness of a treatment? (1)
A randomized-controlled double/triple blind experiment
Q: How do we measure the “center” of the data?
Average
Q: How do we measure the “spread” of a dataset?
Standard Deviation
= rms of deviations from average
Q: How big is the chance error likely to be? (6)
We can estimate it using the SD of a sequence of repeated measurements made under the same conditions
Q: How much chance error is likely to cancel out in the average? (24)
More precise than any individual measurement, by a factor equal to the square root of the number of measurements.
Q: Why is the correlation coefficient a pure number?
Because the first step to calculating it is to convert to standard units
Q: How do we determine if the results of an experiment are true or just because of chance?
We can run tests of significance to understand how likely our results are given the assumption of “no effect” (null hypothesis)
The __________ corresponds to the idea that an observed difference is due to chance. To make a test of significance, the null hypothesis has to be set up as a box model for the data. The __________ is another statement about the box, corresponding to the idea that the observed difference is real.
Null Hypothesis
Alternative Hypothesis
Observed Significance Level (P-value)
Chance of getting a test statistic as extreme as, or more extreme than, the observed one. The chance is computed on the basis that the null hypothesis is true. The smaller this chance is, the stronger the evidence against the null.
Test of Significance
Procedure used to determine whether an observed effect or difference is statistically significant or due to chance. It helps us decide if our findings are meaningful or if they could have occurred by chance.
Steps:
1. Setup Null & Alternative Hypothesis
2. Pick a test statistic, to measure the difference between the observed data and what is expected by the null hypothesis.
3. Compute the observed significance level P
Q: How do we find the expected value and the standard error for the difference between two sample averages?
- Compute the EV and SE for each sample average independently, then combine them (see the following cards and the sketch below)
Standard Error for the difference between two independent quantities
= sqrt(a^2 + b^2)
a = SE for average_A
b = SE for average_B
The two-sample z-statistic is computed from:
- the sizes of the two samples
- the averages of the two samples
- the SDs of the two samples
Assumes two independent random samples
Expected value for the difference between two quantities
= EV_A - EV_B
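Combining the two cards above into the two-sample z-statistic; all the inputs are illustrative:

```python
from math import sqrt

def two_sample_z(n_a, avg_a, sd_a, n_b, avg_b, sd_b):
    """z = (difference of sample averages) / (SE for the difference)."""
    se_a = sd_a / sqrt(n_a)  # SE for each sample average
    se_b = sd_b / sqrt(n_b)
    return (avg_a - avg_b) / sqrt(se_a ** 2 + se_b ** 2)

# Two independent random samples of 400 each.
print(round(two_sample_z(400, 68.0, 9.0, 400, 66.5, 9.5), 2))  # ≈ 2.29
```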
Chi-Square Test (X^2-test)
Statistical hypothesis test used to determine if there is a significant difference between observed and expected frequencies in one or more categories. It’s commonly used to analyze categorical data.
a) Categorical Data
b) Chance model
c) Frequency Table
d) X^2 Statistic
e) Degrees of Freedom
f) Observed Significance Level (P)
X^2-Statistic
sum of ( (observed freq. - expected freq)^2 / expected freq. )
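The statistic in Python, with a fair-die chance model as the example (the observed counts are made up; compare the result to the chi-square distribution with df = 5):

```python
def chi_square(observed, expected):
    """X^2 = sum of (observed - expected)^2 / expected, over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Chance model: a fair die rolled 60 times should show each face about 10 times.
observed = [4, 6, 17, 16, 8, 9]
expected = [10] * 6
print(chi_square(observed, expected))  # 14.2, with df = 6 - 1 = 5
```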
The __________ test says whether the data are like the result of drawing at random from a box whose CONTENTS are given
X^2
The __________ says whether the data are like the result of drawing at random from a box whose AVERAGE is given.
z-test
The P-value of a test depends on the _________. With a __________ sample, even a small difference can be “statistically significant”. Conversely, an important difference may be statistically insignificant if the sample is too __________.
Sample Size
Large
Small