Chapter 4 - Introduction to Probability p.173 Flashcards
Addition law p.191
A probability law used to compute the probability of the union of two events. It is
Probability(Union(A,B)) = Probability(A) + Probability(B) - Probability(Intersect(A,B))
For mutually exclusive events, Probability(Intersect(A,B)) = 0; in this case the addition law reduces to Probability(Union(A,B)) = Probability(A) + Probability(B)
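Worked example (made-up numbers): if Probability(A) = 0.40, Probability(B) = 0.30, and Probability(Intersect(A,B)) = 0.10, then Probability(Union(A,B)) = 0.40 + 0.30 - 0.10 = 0.60.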
Basic requirements for assigning probabilities p.180
Two requirements restrict the manner in which probability assignments can be made:
1. For each experimental outcome Ei we must have 0 <= P(Ei) <= 1; 2. Considering all experimental outcomes, we must have P(E1) + P(E2) + . . . + P(En) = 1.0
Bayes’ theorem p.204
A method used to compute posterior probabilities.
Classical method p.180
A method of assigning probabilities that is appropriate when all the experimental outcomes are equally likely.
Combination p.179
In an experiment we may be interested in determining the number of ways n objects may be selected from among N objects without regard to the order in which the n objects are selected. Each selection of n objects is called a combination, and the total number of combinations of N objects taken n at a time is
Combination(N,n) = N!/(n!(N - n)!) for n = 0, 1, 2, …, N.
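Worked example: the number of ways to select 2 objects from 5 without regard to order is Combination(5,2) = 5!/(2!3!) = 120/12 = 10.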
Complement of A p.189
The event consisting of all sample points that are not in A
Conditional probability p.196
The probability of an event given that another event has already occurred. The conditional probability of A given B is
P(A | B) = P(Intersect(A,B))/P(B)
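Worked example (made-up numbers): if P(Intersect(A,B)) = 0.12 and P(B) = 0.30, then P(A | B) = 0.12/0.30 = 0.40.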
Event
A collection of sample points.
Experiment
A process that generates well-defined outcomes.
Independent events p.199
Two events A and B where P(A|B) = P(A) or P(B|A) = P(B); that is, the events have no influence on each other.
Intersection of A and B p.191
The event containing the sample points belonging to both A and B.
The intersection is denoted Intersect(A,B)
Joint probability p.197
The probability of two events both occurring; that is, the probability of the intersection of two events.
Marginal probability p.197
The values in the margins of a joint probability table that provide the probabilities of each event separately.
Multiple-step experiment p.176
An experiment that can be described as a sequence of steps. If a multiple-step experiment has k steps with n1 possible outcomes on the first step, n2 possible outcomes on the second step, and so on, the total number of experiment outcomes is given by (n1)(n2)…(nk)
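Worked example: tossing a coin twice is a two-step experiment with n1 = 2 and n2 = 2, so the total number of experimental outcomes is (2)(2) = 4: (Head, Head), (Head, Tail), (Tail, Head), (Tail, Tail).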
Multiplication law p.199
A probability law used to compute the probability of the intersection of two events. It is
P(Intersect(A,B)) = P(B)P(A|B), or
P(Intersect(A,B)) = P(A)P(B|A).
For independent events it reduces to
P(Intersect(A,B)) = P(A)P(B).
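Worked example (made-up numbers): if A and B are independent with P(A) = 0.5 and P(B) = 0.4, then P(Intersect(A,B)) = (0.5)(0.4) = 0.20.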
Mutually exclusive events p.193
Events that have no sample points in common; that is, Intersect(A,B) is empty and Probability(Intersect(A,B)) = 0.
Permutation p.179
In an experiment we may be interested in determining the number of ways n objects may be selected from among N objects when the order in which the n objects are selected is important. Each ordering of n objects is called a permutation and the total number of permutations of N objects taken n at a time is
Permutations(N,n) = n! Combination(N,n) = N!/(N - n)!, for n = 0, 1, 2, …, N.
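Worked example: the number of ways to select 2 objects from 5 when order matters is Permutations(5,2) = 5!/3! = 120/6 = 20.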
Posterior probabilities p.205
Revised probabilities of events based on additional information.
Prior probabilities p.204
Initial estimates of the probabilities of events.
Probability p.174
A numerical measure of the likelihood that an event will occur.
Relative frequency method p.181
A method of assigning probabilities that is appropriate when data are available to estimate the proportion of the time the experimental outcome will occur if the experiment is repeated a large number of times.
Sample point p.176
An element of the sample space. A sample point represents an experimental outcome.
Sample space p.176
The set of all experimental outcomes.
For example, for the experiment of tossing a coin, S = {Head, Tail}.
Subjective method p.181
A method of assigning probabilities on the basis of judgement.
Tree diagram p.177
A graphical representation that helps in visualizing a multiple-step experiment.
Union of A and B p.190
The event containing all sample points belonging to A or B or both.
The union is denoted: Union(A,B)
Venn diagram p.189
A graphical representation for showing symbolically the sample space and operations involving events in which the sample space is represented by a rectangle and events are represented as circles within the sample space.
4.1 Counting Rule for Combinations
The number of combinations of N objects taken n at a time is
Combination(N,n) = N!/(n!(N - n)!)
Where
N! = N(N - 1)(N - 2) . . . (2)(1)
n! = n(n - 1)(n - 2) . . . (2)(1)
And, by definition,
0! = 1
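As a quick check of this rule, here is a minimal Python sketch; the values N = 5 and n = 2 are just illustrative, and math.comb requires Python 3.8 or later:

    import math

    N, n = 5, 2
    # Combination(N, n) = N! / (n! (N - n)!)
    by_factorials = math.factorial(N) // (math.factorial(n) * math.factorial(N - n))
    print(by_factorials)      # 10
    print(math.comb(N, n))    # 10, the same count computed directly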
4.2 Counting Rule for Permutations
The number of permutations of N objects taken n at a time is given by
Permutations(N,n) = n! Combination(N,n) = N!/(N - n)!, for n = 0, 1, 2, …, N.
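A matching Python sketch for the permutation rule, again with illustrative values N = 5 and n = 2 (math.perm requires Python 3.8 or later):

    import math

    N, n = 5, 2
    # Permutations(N, n) = N! / (N - n)!
    by_factorials = math.factorial(N) // math.factorial(N - n)
    print(by_factorials)      # 20
    print(math.perm(N, n))    # 20, the same count computed directly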
4.5 Computing Probability Using the Complement
Probability(A) = 1 - Probability(Complement(A))
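Worked example (made-up number): if Probability(Complement(A)) = 0.20, then Probability(A) = 1 - 0.20 = 0.80.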
4.6 Addition Law
Probability(Union(A,B)) = Probability(A) + Probability(B) - Probability(Intersect(A,B))
4.7 Conditional Probability
P(A|B) = P(Intersect(A,B))/P(B)
P(B|A) = P(Intersect(A,B))/P(A)
4.11 Multiplication Law
P(Intersect(A,B)) = P(B)P(A|B)
P(Intersect(A,B)) = P(A)P(B|A)
4.13 Multiplication Law for Independent Events
P(Intersect(A,B)) = P(A)P(B)
4.19 Bayes’ Theorem
P(Ai|B) = P(Ai)P(B|Ai) / [P(A1)P(B|A1) + P(A2)P(B|A2) + . . . + P(An)P(B|An)]
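As an illustration, a short Python sketch of this formula for mutually exclusive and collectively exhaustive events A1, …, An; the prior and conditional probabilities below are made-up values, not taken from the text:

    # made-up priors P(Ai) and conditional probabilities P(B | Ai)
    priors = [0.65, 0.35]
    conditionals = [0.02, 0.05]

    # denominator: P(B) = P(A1)P(B|A1) + . . . + P(An)P(B|An)
    p_b = sum(p * c for p, c in zip(priors, conditionals))

    # posterior P(Ai | B) for each event Ai
    posteriors = [p * c / p_b for p, c in zip(priors, conditionals)]
    print(posteriors)   # approximately [0.4262, 0.5738]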
Random experiment
A random experiment is a process that generates well-defined experimental outcomes. On any single repetition or trial, the outcome that occurs is determined completely by chance.
1. The experimental outcomes are well defined, and in many cases can even be listed prior to conducting the experiment.
2. On any single repetition or trial of the experiment, one and only one of the possible experimental outcomes will occur.
3. The experimental outcome that occurs on any trial is determined solely by chance.
Sample space
The sample space for a random experiment is the set of all experimental outcomes.
Counting rule for multiple-step experiments
If an experiment can be described as a sequence of k steps with n1 possible outcomes on the first step, n2 possible outcomes on the second step, and so on, the total number of experiment outcomes is given by (n1)(n2)…(nk)
Basic Requirements for Assigning Probabilities
- The probability assigned to each experimental outcome must be between 0 and 1, inclusive. If we let Exp(i) denote the ith experimental outcome and Probability(Exp(i)) its probability, then this requirement can be written as
• 0 <= Probability(Exp(i)) <= 1 for all i
- The sum of the probabilities for all the experimental outcomes must equal 1.0. For n experimental outcomes, this requirement can be written as
• Probability(Exp(1)) + Probability(Exp(2)) + . . . + Probability(Exp(n)) = 1
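Worked example: the assignment Probability(Exp(1)) = 0.20, Probability(Exp(2)) = 0.50, Probability(Exp(3)) = 0.30 satisfies both requirements, since each probability lies between 0 and 1 and 0.20 + 0.50 + 0.30 = 1.0.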
Event
An event is a collection of sample points.
Probability of an event
The probability of any event is equal to the sum of the probabilities of the sample points in the event.
For example, if event C consists of the sample points (2, 6), (2, 7), (2, 8), (3, 6), and (3, 7), then the probability of event C is
Probability(C) = P(2, 6) + P(2, 7) + P(2, 8) + P(3, 6) + P(3, 7)
Union of two events
The union of A and B is the event containing all sample points belonging to A or B or both.
The union is denoted by Union(A,B)
Intersection of Two Events
Given two events A and B, the intersection of A and B is the event containing the sample points belonging to both A and B.
The intersection is denoted Intersect(A,B)
Mutually Exclusive Events
Two events are said to be mutually exclusive if the events have no sample points in common.
Addition Law for Mutually Exclusive Events
P(Union(A,B)) = P(A) + P(B)
Independent events
Two events A and B are independent if
P(A|B) = P(A), or
P(B|A) = P(B)
Otherwise, the events are dependent.
Bayes’ Theorem (Two-Event Case)
P(A1|B) = P(A1)P(B|A1) / [P(A1)P(B|A1) + P(A2)P(B|A2)]
P(A2|B) = P(A2)P(B|A2) / [P(A1)P(B|A1) + P(A2)P(B|A2)]
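Worked example (the same made-up numbers as in the sketch above): with priors P(A1) = 0.65 and P(A2) = 0.35 and conditional probabilities P(B|A1) = 0.02 and P(B|A2) = 0.05, the denominator is (0.65)(0.02) + (0.35)(0.05) = 0.0305, so P(A1|B) = 0.0130/0.0305 ≈ 0.43 and P(A2|B) = 0.0175/0.0305 ≈ 0.57.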