FRM Level 1 Part 2 Flashcards
Chapter 1: Probability
1. random event and probability
basic concept of probability
- outcome and sample space
2. relationships among events: mutually exclusive events, exhaustive events, independent events (the occurrence of B has no influence on the occurrence of A)
types of probability
Joint probability is the probability of two events occurring simultaneously.
Marginal probability is the probability of an event irrespective of the outcome of another variable.
Conditional probability is the probability of one event occurring in the presence of a second event.
unconditional probability
p(A)
conditional probability
p(A|B)
joint probability
p(AB)
two important rules
multiplication rule
p(AB) = p(A|B) * p(B)
if they are independent
p(AB) = p(A) * p(B)
addition rule
p(A+B) = p(A) + p(B) - p(AB)
if mutually exclusive
p(A+B) = p(A) + p(B)
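A minimal Python sketch of both rules; the probabilities below are hypothetical numbers chosen only for illustration:

```python
# Hypothetical probabilities for two events A and B (illustration only).
p_B = 0.40          # p(B)
p_A_given_B = 0.25  # p(A|B)
p_A = 0.30          # p(A)

# Multiplication rule: p(AB) = p(A|B) * p(B)
p_AB = p_A_given_B * p_B
print("p(AB) =", p_AB)  # 0.10

# Addition rule: p(A+B) = p(A) + p(B) - p(AB)
p_A_or_B = p_A + p_B - p_AB
print("p(A+B) =", p_A_or_B)  # 0.60

# If A and B were independent, p(AB) would instead be p(A) * p(B) = 0.12.
print("independent case p(AB) =", p_A * p_B)
```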
- discrete and continuous random variable
discrete random variable
number of possible outcomes can be counted
continuous random variable
it can take on any value within a given finite or infinite range
P(X = x) = 0 even though the event X = x can occur
probability density function:
discrete random variable (probability that a discrete random variable will take on the value x)
continuous random variable
the PDF is f(x), the value of the density function at X
p(x1 <= X <= x2): the area under the PDF over the interval [x1, x2] gives the probability
cumulative distribution function
concept: the probability that a random variable will be less than or equal to a given value ----> F(x) = P(X <= x)
characteristics: monotonically non-decreasing; bounded: F(x) -> 0 as x -> negative infinity and F(x) -> 1 as x -> positive infinity; P(a < X <= b) = F(b) - F(a)
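A quick sketch of the CDF idea using a toy example of my own (a fair six-sided die), checking the identity P(a < X <= b) = F(b) - F(a):

```python
# Fair six-sided die (toy example): P(X = k) = 1/6 for k = 1..6.
pmf = {k: 1 / 6 for k in range(1, 7)}

def F(x):
    """Cumulative distribution function F(x) = P(X <= x)."""
    return sum(p for k, p in pmf.items() if k <= x)

# P(2 < X <= 4) computed directly and via the CDF identity F(4) - F(2).
direct = pmf[3] + pmf[4]
via_cdf = F(4) - F(2)
print(direct, via_cdf)  # both 0.333...
```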
Chapter 2 Bayesian Analysis
Total probability theorem
if A1,….,An are mutually exclusive and exhaustive
p(B) = the sum of p(Aj)p(B|Aj) from j = 1 to n
Bayes' Theorem
p(A|B) = p(B|A) * p(A) / p(B)
p(A|B): updated (posterior) probability
p(A): prior probability
p(B): computed with the total probability theorem
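A minimal sketch of a Bayesian update in Python; the scenario and all numbers are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical example: A = "fund manager is skilled", B = "fund beats its benchmark".
p_A = 0.20             # prior p(A)
p_B_given_A = 0.70     # likelihood p(B|A)
p_B_given_notA = 0.40  # likelihood p(B|not A)

# Total probability theorem: p(B) = p(B|A)p(A) + p(B|not A)p(not A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: p(A|B) = p(B|A) * p(A) / p(B)
p_A_given_B = p_B_given_A * p_A / p_B
print("posterior p(A|B) =", round(p_A_given_B, 4))  # ~0.3043
```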
Chapter 3: Basic Statistics
arithmetic mean
population mean: mu = (the sum of Xi from i=1 to N)/N
sample mean: x̄ = (the sum of Xi from i=1 to n)/n
median
the middle item of a set of items sorted into ascending or descending order
if n is odd, the median is the (n+1)/2-th item; if n is even, it is the average of the n/2-th and (n/2 + 1)-th items
mode
most frequently occurring value of the distribution
expected value
definition: E(X) = x1*P(X=x1) + ... + xn*P(X=xn)
properties:
if c is any constant, then E(cX + a) = cE(X) + a
E(X + Y) = E(X) + E(Y)
if X and Y are independent random variables, then E(XY) = E(X)*E(Y)
in general, E(X^2) != [E(X)]^2
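A short NumPy sketch (a discrete distribution with made-up values and probabilities) verifying the expected-value properties listed above:

```python
import numpy as np

# Hypothetical discrete random variable X (values and probabilities are illustrative).
x = np.array([1.0, 2.0, 3.0])
p = np.array([0.2, 0.5, 0.3])

E_X = np.sum(x * p)
E_X2 = np.sum(x**2 * p)

# E(cX + a) = cE(X) + a
c, a = 2.0, 5.0
print(np.sum((c * x + a) * p), c * E_X + a)  # both 9.2

# E(X^2) != [E(X)]^2 in general
print(E_X2, E_X**2)                          # 4.9 vs 4.41
```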
- dispersion
variance for data:
population variance: sigma^2 = (the sum of (Xi - mu)^2 from i=1 to N)/N
sample variance: s^2 = (the sum of (Xi - x̄)^2 from i=1 to n)/(n - 1)
standard deviation
population standard deviation: sigma = sqrt(population variance)
sample standard deviation: s = sqrt(sample variance)
variance for random variable
formula: Var(X) = E[(X - mu)^2] = E(X^2) - [E(X)]^2
properties:
if c is any constant, Var(X + c) = Var(X)
Var(cX) = c^2 * Var(X)
in general:
Var(X+Y) = Var(X) + Var(Y) + 2Cov(X,Y)
Var(X-Y) = Var(X) + Var(Y) - 2Cov(X,Y)
if X and Y are independent random variables, Cov(X,Y) = 0, so Var(X+Y) = Var(X-Y) = Var(X) + Var(Y)
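A simulation sketch (synthetic correlated data of my own making) checking Var(X±Y) = Var(X) + Var(Y) ± 2Cov(X,Y):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated data: Y depends on X plus independent noise (illustration only).
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)

var_x, var_y = x.var(), y.var()
cov_xy = np.cov(x, y, ddof=0)[0, 1]   # ddof=0 to match .var()'s default

print((x + y).var(), var_x + var_y + 2 * cov_xy)  # equal up to floating point
print((x - y).var(), var_x + var_y - 2 * cov_xy)  # equal up to floating point
```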
square root rule (Baumol model): at the start of the period, bond holdings are (n-1)Y/n and cash holdings are Y/n; at the end of the period, bond holdings are 0 and cash holdings are Y/n. Average bond holdings are therefore (n-1)Y/2n. Consider maximizing the interest earned on bond holdings, with the number of conversions n as the control variable: max (n-1)Yr/2n - nb. The first-order condition is Yr/2n^2 - b = 0, which gives n = sqrt(Yr/2b). Average cash holdings are Y/2n; substituting the result gives Md = sqrt(Yb/2r). This is the mathematical statement of the square root rule and the solution of the Baumol model.
covariance
definition:
the relationship between the deviation of two variables
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} = E(XY) - E(X)E(Y)
properties
1. Cov(X, Y) can take any value from negative infinity to positive infinity (it is unbounded)
2. if X and Y are independent, then E(XY) = E(X)E(Y) and Cov(X, Y) = 0
3. if X = Y, then Cov(X, X) = E{[X - E(X)][X - E(X)]} = Var(X)
4. Cov(a+bX, c+dY) = bdCov(X,Y)
5. Var(w1X1 + w2X2) = [w1^2]Var(X1) + [w2^2]Var(X2) + 2w1w2Cov(X1,X2)
where w1 and w2 are the weights of X1 and X2 respectively
correlation
definition:
linear relationship between two variables: ρ(X,Y) = Cov(X,Y)/[std(X) * std(Y)]. ρ lies between -1 and +1 and has no units
properties
ρ = 0 indicates the absence of any linear relationship, but a non-linear relationship may still exist
the bigger the absolute value of ρ, the stronger the linear relationship
correlation coefficient with interpretation
ρ = +1: perfect positive linear correlation
0 < ρ < +1: positive linear correlation
ρ = 0: no linear correlation
-1 < ρ < 0: negative linear correlation
ρ = -1: perfect negative linear correlation
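A brief NumPy sketch (illustrative made-up data) computing covariance and the correlation coefficient from the definition above:

```python
import numpy as np

# Illustrative paired observations (e.g., returns of two assets); numbers are made up.
x = np.array([0.02, -0.01, 0.03, 0.00, 0.01])
y = np.array([0.01, -0.02, 0.02, 0.00, 0.02])

cov_xy = np.cov(x, y, ddof=1)[0, 1]
rho = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))

print(rho)                      # correlation from the definition
print(np.corrcoef(x, y)[0, 1])  # same value from NumPy's built-in
```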
- Skewness
definition
how symmetrical the distribution is around the mean
skewness = E[(X - mu)^3]/std^3
properties
symmetrical distribution:
Skewness = 0
positively skewed distribution (right skew): Skewness>0
outliers in the right tail: mean > median > mode
negatively skewed distribution (left skew):
Skewness < 0
outliers in the left tail: many financial assets exhibit negative skew (more risky): mean < median < mode
- Kurtosis
definition: the degree of weight in the tails of the distribution, i.e., how much probability is placed on extreme points
Kurtosis =E[(X - mu)^4]/std^4
leptokurtic: kurtosis > 3, excess kurtosis > 0
mesokurtic: kurtosis = 3, excess kurtosis = 0
platykurtic: kurtosis < 3, excess kurtosis < 0
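A sketch using SciPy on simulated data (my own example) to compute sample skewness and kurtosis; note scipy.stats.kurtosis returns excess kurtosis unless fisher=False:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
normal_sample = rng.normal(size=100_000)
right_skewed = rng.lognormal(size=100_000)   # lognormal: positively skewed

print(skew(normal_sample), kurtosis(normal_sample, fisher=False))  # ~0 and ~3
print(skew(right_skewed), kurtosis(right_skewed, fisher=False))    # >0 and >3
```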
Chapter 4: Distributions
1. discrete probability distribution
Bernoulli distribution:
definition: a trial produces one of two outcomes (success or failure)
properties
E(X) = p*1 + (1-p)*0 = p
Var(X) = p*(1-p)
Binomial Distribution
definition: the distribution of a binomial random variable, defined as the number of successes in n Bernoulli trials
properties
the probability of success is constant for all trials
the trials are all independent
E(X) = np
Var(X) = np(1-p)
p(x) = P(X = x) = n!/[(n - x)!x!] * p^x(1-p)^(n-x)
as n increases and p —> 0.5, the binomial distribution approximates the normal distribution
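A sketch with scipy.stats (parameters are arbitrary) matching the binomial formula, mean, and variance given above:

```python
from math import comb
from scipy.stats import binom

n, p = 10, 0.3   # arbitrary illustrative parameters
x = 4

# PMF from the formula n!/[(n-x)! x!] * p^x * (1-p)^(n-x)
manual = comb(n, x) * p**x * (1 - p)**(n - x)
print(manual, binom.pmf(x, n, p))        # same value

print(binom.mean(n, p), n * p)           # E(X) = np
print(binom.var(n, p), n * p * (1 - p))  # Var(X) = np(1-p)
```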
Poisson distribution
definition: used to model the occurrence of events over time
properties:
f(x) = P(X = x) = (v^x*e^(-v))/x!
v —> the average or expected number of events in the interval
X —> the number of events in the interval
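A short sketch (arbitrary rate v) evaluating the Poisson formula both by hand and with scipy.stats.poisson:

```python
from math import exp, factorial
from scipy.stats import poisson

v = 3.0   # expected number of events in the interval (arbitrary choice)
x = 5     # number of events whose probability we want

manual = (v**x) * exp(-v) / factorial(x)
print(manual, poisson.pmf(x, v))   # same value, ~0.1008
```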
2. continuous probability distribution
uniform distribution
definition:
the probabilities for all possible outcomes are equal
graph:
probability density function: f(x) = 1/(b-a) for a <= x <= b; 0 otherwise
cumulative distribution function: F(x) = 0 for x <= a; (x - a)/(b - a) for a < x < b; 1 for x >= b
properties
E(x) =(a + b)/2 Var(X) = (b - a)^2/12
For all a <= x1 < x2 <= b: P(x1 <= X <= x2) = (x2 - x1)/(b - a)
standard uniform distribution: a = 0, b = 1
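A sketch for the uniform distribution (bounds chosen arbitrarily) checking the mean, variance, and interval-probability formulas with scipy.stats.uniform, which is parameterized by loc = a and scale = b - a:

```python
from scipy.stats import uniform

a, b = 2.0, 10.0                      # arbitrary bounds
dist = uniform(loc=a, scale=b - a)    # scipy parameterization: loc=a, scale=b-a

print(dist.mean(), (a + b) / 2)            # E(X) = (a+b)/2
print(dist.var(), (b - a) ** 2 / 12)       # Var(X) = (b-a)^2/12

x1, x2 = 3.0, 7.0
print(dist.cdf(x2) - dist.cdf(x1), (x2 - x1) / (b - a))  # P(x1 <= X <= x2)
```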
normal distribution
properties:
completely described by mean and variance
X~N(mean, variance)
Skewness = 0, kurtosis = 3
1. a linear combination of independent normally distributed random variables is also normally distributed
2. probabilities decrease further from the mean, but the tails go on forever (they never reach zero)
some special data
68% confidence interval is [X - 1std, X + 1std]
90% confidence interval is [X - 1.65std, X + 1.65std]
95% confidence interval is [X - 1.96std, X + 1.96std]
98% confidence interval is [X - 2.33std, X + 2.33std]
99% confidence interval is [X - 2.58std, X + 2.58std]
a normal distribution with mean = 0 and std = 1 is the standard normal distribution; any normal variable can be standardized: Z = (X - mean)/std
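A sketch reproducing the z-values behind the confidence intervals above and the standardization step, using scipy.stats.norm (the illustrative mean/std/x values are my own):

```python
from scipy.stats import norm

# Two-sided z-values for the confidence levels listed above.
for conf in (0.68, 0.90, 0.95, 0.98, 0.99):
    z = norm.ppf(1 - (1 - conf) / 2)   # e.g. ~1.65 for 90%, ~1.96 for 95%
    print(f"{conf:.0%}: z = {z:.2f}")

# Standardization: Z = (X - mean)/std, then use the standard normal CDF.
mean, std, x = 100.0, 15.0, 120.0      # illustrative numbers
z = (x - mean) / std
print(norm.cdf(z), norm.cdf(x, loc=mean, scale=std))   # same probability
```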
lognormal distribution
definition:
if ln(X) is normal, then X is lognormal
if Y is normal, then e^Y is lognormal
properties
if ln(X) is normally distributed, then X is lognormally distributed
if Y is normally distributed, then X = e^Y is lognormally distributed
chart
Right skewed
Bounded from below by zero
Sampling distribution
Student's t-distribution
definition:
if Z is a standard normal random variable and U is a chi-square variable with K degrees of freedom that is independent of Z, then the random variable X below follows a t-distribution with K degrees of freedom
X = Z/sqrt(U/K)
Z: standard normal variable
U: chi-square variable
K: degree of freedom
tips:
chi-square variable could be the sum of squares
Y = S1^2+…+Sn^2
where S1,…, Sn are independent standard normal random variables
properties:
1. symmetrical (bell shaped), skewness = 0
2. defined by a single parameter: degrees of freedom (df), where df = n - 1 and n is the sample size
3. comparison with normal distribution
fatter tails
as df increases, the t-distribution approaches the standard normal distribution
for a given degree of confidence, the t-distribution gives a wider confidence interval
as df increases, the t-distribution becomes more peaked with thinner tails, which means smaller probabilities for extreme values
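A simulation sketch (df chosen arbitrarily) constructing a t variable from its definition X = Z/sqrt(U/K) and showing the fatter tails versus the standard normal:

```python
import numpy as np
from scipy.stats import norm, t

k = 5                                   # degrees of freedom (arbitrary choice)
rng = np.random.default_rng(2)
n = 1_000_000

# Build t-distributed draws from the definition X = Z / sqrt(U/K).
z = rng.standard_normal(n)
u = rng.chisquare(k, n)                 # chi-square variable with k df
x = z / np.sqrt(u / k)

# Fatter tails: P(X > 2) is larger under t(5) than under N(0,1).
print((x > 2).mean(), t.sf(2, df=k), norm.sf(2))
```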
Chi-square (χ²) distribution
definition:
if we have k independent standard normal variables Z1,…,Zk, then the sum of their squares, S, has a chi-square distribution
S = Z1^2 + … + Zk^2
k is the degree of freedom (df = n-1 when sampling)
properties
Asymmetrical, bounded below by zero
as df increases, it converges to the normal distribution
the sum of two independent chi-square variables with k1 and k2 degrees of freedom follows a chi-square distribution with k1 + k2 degrees of freedom
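A simulation sketch (degrees of freedom chosen arbitrarily) confirming that a sum of squared standard normals behaves like a chi-square variable and that independent chi-squares add their degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2

k1, k2 = 3, 4                          # arbitrary degrees of freedom
rng = np.random.default_rng(3)
n = 1_000_000

# Sum of k1 squared standard normals ~ chi-square(k1): mean should be ~k1.
s1 = (rng.standard_normal((n, k1)) ** 2).sum(axis=1)
print(s1.mean(), chi2.mean(k1))

# Sum of independent chi-square(k1) and chi-square(k2) ~ chi-square(k1 + k2).
s2 = (rng.standard_normal((n, k2)) ** 2).sum(axis=1)
print((s1 + s2).var(), chi2.var(k1 + k2))   # both ~2(k1 + k2)
```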
F-distribution
definition:
if U1 and U2 are two independent chi-square variables with K1 and K2 degrees of freedom, then the random variable X below follows an F-distribution
X = (U1/K1)/(U2/K2)
properties
as K1 and K2 —> positive infinity, the F-distribution approaches the normal distribution
if X follows t(k), then X^2 has an F-distribution: X^2 ~ F(1, k)
when sampling, the degrees of freedom are n1 - 1 and n2 - 1
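A sketch (arbitrary df and threshold) checking that the square of a t(k) variable follows F(1, k) by comparing tail probabilities:

```python
from scipy.stats import t, f

k = 8          # arbitrary degrees of freedom
c = 2.5        # arbitrary threshold

# P(X^2 > c^2) for X ~ t(k) equals the two-sided tail 2*P(X > c) ...
p_t = 2 * t.sf(c, df=k)
# ... and should match the F(1, k) tail probability at c^2.
p_f = f.sf(c**2, dfn=1, dfd=k)
print(p_t, p_f)   # equal (up to floating point)
```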
degrees of freedom
df = N − 1
df = degrees of freedom
N = sample size