AP Statistics Final Exam Flashcards

1
Q

SOCS - fairly symmetric

A
  • Shape
  • Center - mean
  • Spread - SD
  • Outliers - too small if below Q1 - 1.5(IQR)
    too big if above Q3 + 1.5(IQR)
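The 1.5 × IQR outlier fences can be checked quickly in Python. A minimal sketch; the data list is made up, and the `inclusive` quartile method is an assumption (calculators may use a slightly different quartile convention):

```python
from statistics import quantiles

def outlier_fences(data):
    """Return (low, high) fences using the 1.5 * IQR rule."""
    q1, _, q3 = quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 50]   # hypothetical data
low, high = outlier_fences(data)
print(low, high)                          # fences: -3.5 and 14.5
print([x for x in data if x < low or x > high])   # only 50 is flagged
```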
2
Q

SOCS - slightly/strongly skewed

A
  • Shape
  • Center - median
  • Spread - IQR
  • Outliers - too small if below Q1 - 1.5(IQR)
    too big if above Q3 + 1.5(IQR)
3
Q

How to find SOCS easily

A

Enter data in a list –> STAT –> CALC –> 1-Var Stats

4
Q

True or false: you can determine the shape with a boxplot

A

False - a boxplot can suggest skewness, but it hides gaps, clusters, and peaks

5
Q

Interpret SD

A

The (context) typically varies by (SD) from the mean of (mean).

6
Q

Interpret percentile

A

(Percentile) % of (context) are less than or equal to (value).

7
Q

Interpret z-score

A

(Specific value w/ context) is (z-score) standard deviations (above/below) the mean.

8
Q

Describing a scatterplot (association)

A

DUFS:
- Direction - (+) or (-)
- Unusual features - outliers or clusters
- Form - linear or nonlinear
- Strength - weak, moderate, or strong

9
Q

Calculate slope

A

use b in y(hat) = a + bx –> slope b is the change in predicted y per one-unit increase in x (ex. per year)

10
Q

Interpret slope

A

The predicted (y-context) (increases/decreases) by (slope) for each additional (x-context).

11
Q

Interpret coefficient of determination (r^2)

A

(r^2 as percentage) of the variation in (y-context) can be explained by the linear relationship with (x-context).

12
Q

Calculate residual (r)

A

r = (actual) - (predicted)
predicted = y(hat) = a + bx
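The residual arithmetic is just subtraction once the line is known; a minimal sketch with hypothetical values for a, b, and one observed point:

```python
# Residual for a least-squares line y_hat = a + b*x
a, b = 6.5, 1.2          # hypothetical intercept and slope
x, actual = 10, 20.0     # one hypothetical observed point

predicted = a + b * x            # y_hat = a + b*x
residual = actual - predicted    # actual minus predicted
print(predicted, residual)       # 18.5 1.5
```

A positive residual means the line under-predicted; a negative one means it over-predicted.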

13
Q

Interpret residual (r)

A

The actual (y-context) was (r) less/greater than the predicted (y-context & predicted value).

14
Q

Convenience sample

A

Selected for inclusion because they are easy to access
(ex. first 30 people to walk through the door)
- Underestimates or overestimates true proportion
- Not representative of population

15
Q

Voluntary response

A

People choose to participate in a survey or experiment (often those with strong opinions)

16
Q

Simple random sample

A

A sample chosen so that every group of n individuals in the population has an equal chance of being selected.

17
Q

Stratified sampling

A

Takes the population and splits it into groups (strata) based on a characteristic that we think has some effect.
(ex. SRS within grades)
- Homogeneous groups
- SRS within each group

18
Q

Cluster sample

A

Each group (cluster) mirrors the population; an SRS of clusters is taken and everyone in the chosen clusters is included.
(ex. SRS of classrooms)
- Heterogeneous groups
- SRS of groups

19
Q

Undercoverage bias

A

Some members of the population have no chance of being selected (no access to the survey)

20
Q

Nonresponse bias

A

Chosen individuals can't be contacted or refuse to reply

21
Q

Response bias

A

Participants answer untruthfully, or question wording pushes them toward an answer

22
Q

Confounding variable

A

A variable associated with both the explanatory and response variables, making it impossible to tell which one causes the observed association

23
Q

Observational study

A

Variables are observed to determine if there’s a correlation

24
Q

Experimental study

A

Using controlled variables to determine if there’s causation

25

Q

Use blocking for...

A

Experimental designs

26

Q

Use stratifying and clustering for...

A

Observational studies
27

Q

Mutually exclusive

A

A and B cannot occur together, so P(A and B) = 0.
For A or B, think ADDITION: P(A U B) = P(A) + P(B).
(If events can overlap, subtract the overlap: P(A) + P(B) - P(A and B).)

28

Q

Independent

A

One event does not affect the probability of the other.
For A and B, think MULTIPLY: P(A and B) = P(A) x P(B).
(If A already happened, the probability of B is unchanged: P(B|A) = P(B).)
29

Q

How to find probability when given percentages

A

Tree diagram

30

Q

Interpret probability P(A)

A

After many, many (context), the proportion of times that (context A) will occur is about (P(A)).
31

Q

Describing a binomial distribution

A

Conditions (BINS):
1) Binary - success or failure
2) Independent trials
3) Number of trials fixed - n = ___
4) Same probability - p = ___
Describe:
- Shape - approx. normal if np ≥ 10 and n(1-p) ≥ 10
- Center - mean: μx = np
- Variability - SD: σx = √(np(1-p))
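The binomial center and spread formulas can be verified directly; n and p are hypothetical, and binompdf here is a hand-rolled stand-in for the calculator function:

```python
from math import comb, sqrt

n, p = 20, 0.3   # hypothetical number of trials and success probability

mean = n * p                   # mu_x = np
sd = sqrt(n * p * (1 - p))     # sigma_x = sqrt(np(1-p))

def binompdf(n, p, k):
    """P(X = k) for a binomial random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(mean, round(sd, 3))      # 6.0 2.049
print(round(binompdf(n, p, 4), 4))
```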
32

Q

Calculate binomial probability (exactly k)

A

binompdf(n, p, k) (clearly identify parameters)

33

Q

Calculate binomial probability (at least k)

A

1 - binomcdf(n, p, k-1) (clearly identify parameters)
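The "at least" complement trick can be sketched in Python; binompdf/binomcdf are hand-rolled stand-ins for the calculator functions, and n, p, k are hypothetical:

```python
from math import comb

def binompdf(n, p, k):
    """P(X = k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binomcdf(n, p, k):
    """P(X <= k): sum of pdf values from 0 through k."""
    return sum(binompdf(n, p, i) for i in range(k + 1))

n, p, k = 10, 0.5, 7                      # hypothetical values
p_at_least = 1 - binomcdf(n, p, k - 1)    # P(X >= 7) = 1 - P(X <= 6)
print(round(p_at_least, 4))               # 0.1719
```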
34

Q

Interpret conditional probability (independence)

A

Given (context B), there is a P(A|B) probability of (context A).

35

Q

Interpret expected value (mean, μ)

A

If the random process of (context) is repeated many times, the average number of (x context) we can expect is (expected value).

36

Q

Interpret binomial mean (μx)

A

After many, many trials, the average number of (success context) out of (n) is (μx).

37

Q

Interpret binomial SD (σx)

A

The number of (success context) out of (n) typically varies by (σx) from the mean of (μx).
38

Q

Transforming random variables (multiply/divide by A)

A

- Mean - multiplied or divided by A
- SD - multiplied or divided by A
- Variance - multiplied or divided by A^2

39

Q

Transforming random variables (add/subtract A)

A

- Mean - add or subtract A
- SD - no change
- Variance - no change
40

Q

Combining random variables (S = X + Y)

A

- μs = μx + μy
- σs = √(σx^2 + σy^2)
- σs^2 = σx^2 + σy^2

41

Q

Combining random variables (D = X - Y)

A

- μd = μx - μy
- σd = √(σx^2 + σy^2) (variances still ADD)
- σd^2 = σx^2 + σy^2
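The add-the-variances rule for both S = X + Y and D = X - Y, assuming X and Y are independent (all numbers hypothetical):

```python
from math import sqrt

mu_x, sigma_x = 50.0, 4.0   # hypothetical random variable X
mu_y, sigma_y = 30.0, 3.0   # hypothetical Y, independent of X

mu_s = mu_x + mu_y          # mean of the sum
mu_d = mu_x - mu_y          # mean of the difference
# SDs never subtract: variances ADD for both the sum and the difference
sigma_s = sqrt(sigma_x**2 + sigma_y**2)
sigma_d = sqrt(sigma_x**2 + sigma_y**2)
print(mu_s, mu_d, sigma_s, sigma_d)   # 80.0 20.0 5.0 5.0
```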
42

Q

Calculate normal distribution probability (more/less than)

A

normalcdf(lower, upper, μ, σ)

43

Q

Calculate normal distribution value (given area under curve)

A

invNorm(area to the left, μ, σ)
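Python's statistics.NormalDist can play the role of normalcdf/invNorm; μ and σ below are hypothetical:

```python
from statistics import NormalDist

mu, sigma = 100, 15         # hypothetical normal distribution
dist = NormalDist(mu, sigma)

# normalcdf(lower, upper, mu, sigma): area between two values
p = dist.cdf(130) - dist.cdf(115)
# invNorm(area, mu, sigma): value with the given area to its left
cutoff = dist.inv_cdf(0.90)
print(round(p, 4), round(cutoff, 2))
```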
44

Q

Identifying when to use geometric distributions

A

"On any given ___, there is a ___% probability..." (counting trials until the first success)

45

Q

Describing a geometric distribution

A

Conditions (BIFS):
- Binary - success or failure
- Independent trials
- First success is what we count
- Same probability
Describe:
- Shape - skewed right
- Center - μx = 1/p
- Variability - σx = √(1-p)/p
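The geometric center and spread formulas, with a hypothetical p:

```python
from math import sqrt

p = 0.2   # hypothetical success probability per trial

mu = 1 / p                  # mu_x = 1/p: expected trial of the first success
sigma = sqrt(1 - p) / p     # sigma_x = sqrt(1-p)/p
print(mu, round(sigma, 3))  # 5.0 4.472
```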
46

Q

Find the probability of a geometric distribution (first success on trial x)

A

geometpdf(p, x)

47

Q

Find the probability of a geometric distribution (first success within x trials)

A

geometcdf(p, x)
48

Q

Sampling distribution

A

Take many, many samples and calculate a statistic for each of those samples.

49

Q

What makes a good statistic?

A

- No bias
- Low variability
50

Q

Z score for one sample proportion

A

z = (p̂ - p) / √(p(1-p)/n)

51

Q

Z score for one sample mean

A

z = (x̄ - μ) / (σ/√n)
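Both z formulas with hypothetical sample results (for the mean version, σ is assumed known):

```python
from math import sqrt

# One-sample z statistic for a proportion
p_hat, p0, n = 0.55, 0.50, 100      # hypothetical sample and null values
z_prop = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sample z statistic for a mean (same hypothetical n)
x_bar, mu0, sigma = 103.0, 100.0, 15.0
z_mean = (x_bar - mu0) / (sigma / sqrt(n))
print(round(z_prop, 2), round(z_mean, 2))   # 1.0 2.0
```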
52

Q

Calculate z score into p-value

A

normalcdf(z score, 1E99, 0, 1) (right tail; use -1E99 as the lower bound for a left tail)

53

Q

Interpret standard deviation of sample proportions (σp̂)

A

The sample proportion of (success context) typically varies by (σp̂) from the true proportion of (p).
54

Q

Interpret standard deviation of sample means (σx̄)

A

The sample mean amount of (x-context) typically varies by (σx̄) from the true mean of (μx).
55

Q

One sample confidence interval for mean (μ)

A

1) State: μ = true mean (context); CL = ___
2) Plan: name: one sample t interval for μ
   Conditions: 1) random 2) 10% rule 3) normal - pop. distribution is normal, CLT (n ≥ 30), or graph shows no strong skew or outliers
3) Do: x̄ ± t*(s/√n)
4) Conclude: We are (CL)% confident that the interval from ___ to ___ captures the true mean of (context).
56

Q

One sample confidence interval for proportions (p)

A

1) State: p = true proportion (context); CL = ___
2) Plan: name: one sample z interval for p
   Conditions: 1) random 2) 10% rule 3) Large Counts - np̂ ≥ 10 and n(1-p̂) ≥ 10
3) Do: p̂ ± z*√(p̂(1-p̂)/n)
4) Conclude: We are (CL)% confident that the interval from ___ to ___ captures the true proportion of (context).
1) State: p = true proportion (context) CL = ___ 2) Plan: name: one sample z interval for p Conditions: 1) random 2) 10% rule 3) normal - CLT (n ≥ 30) 3) Do: p(hat) +/- z* (√(p(hat)(1-p(hat))/n 4) Conclude: We are (CL)% confident that the interval from ___ to ___ captures the true proportion of (context).