Quantitative Methods Flashcards
Future Value of a Single Cash Flow
FV = PV(1+r)^N
Continuous Compounding Lump Sum
FV = PV * e^(r_s * N), where r_s is the stated annual rate
Effective Annual Rate
EAR = ((1 + Periodic Rate)^m) - 1
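A minimal Python sketch of the three lump-sum formulas above, using made-up inputs (variable names are mine, not from the curriculum):

import math

pv = 1_000          # present value
r = 0.08            # stated annual rate
n = 5               # number of years

fv_discrete = pv * (1 + r) ** n        # FV = PV(1+r)^N
fv_continuous = pv * math.exp(r * n)   # FV = PV * e^(r_s * N)
ear = (1 + r / 12) ** 12 - 1           # EAR with monthly compounding (m = 12)

print(round(fv_discrete, 2), round(fv_continuous, 2), round(ear, 4))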
Future Value Ordinary Annuity
FV = A * [(((1 + r)^N) - 1)/r]
Future Value of Annuity Factor
[(((1 + r)^N) - 1)/r]
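A quick sketch of the future value annuity factor with assumed numbers:

a = 100                                 # level end-of-period payment
r = 0.05                                # periodic rate
n = 10                                  # number of payments

fv_factor = ((1 + r) ** n - 1) / r      # future value annuity factor
fv_annuity = a * fv_factor              # FV of the ordinary annuity, about 1257.79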
Present Value Single Cash Flow
PV = FV * (1 + r)^-N
Present Value Factor
(1 + r)^-N
Reciprocal of future value factor
Present Value Ordinary Annuity
PV = A * [(1-(1/((1+r)^N)))/r]
Present Value of a Perpetuity
PV = A/r
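A sketch comparing the PV of an ordinary annuity with the PV of a perpetuity, using the same assumed payment and rate:

a = 100                                       # level end-of-period payment
r = 0.05
n = 10

pv_annuity = a * (1 - 1 / (1 + r) ** n) / r   # PV of an ordinary annuity, about 772.17
pv_perpetuity = a / r                         # PV of a level perpetuity, 2000.0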
Solving for interest rate/growth rate
G = (FV/PV)^(1/N) - 1
Solving for number of periods
N = [ln(FV/PV)/ln(1+r)]
Solving for annuity Payments
A = PV/PV Annuity Factor
= PV/ [(1-(1/(1+r)^N))/r]
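A sketch of all three solved-for quantities with hypothetical inputs:

import math

pv, fv = 1_000, 2_000
r = 0.07
n = 10

growth_rate = (fv / pv) ** (1 / n) - 1          # rate that doubles money in 10 periods, about 7.18%
periods = math.log(fv / pv) / math.log(1 + r)   # periods needed to double at 7%, about 10.24
payment = pv / ((1 - 1 / (1 + r) ** n) / r)     # payment that amortizes PV over 10 periods, about 142.38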
Numerical Data
Can be continuous or discrete (discrete means a countable number of values)
Categorical Data
Values that describe a quality or characteristic (e.g., dividend vs. no dividend)
Nominal vs. Ordinal Data
Nominal cannot be logically ordered or ranked, while ordinal can
Cross Sectional vs. Time Series vs. Panel Data
Cross-sectional: observations of one variable for multiple units at a single point in time (e.g., January inflation rates for each country)
Time series: observations of one variable over successive time periods (e.g., daily closing prices)
Panel data: a mix of the two (a table where each column is a time series and each row is a cross section for one period)
Structured vs. Unstructured Data
Structured is highly organized (Financial Statements for example)
Unstructured does not follow conventional organization (financial news for example)
1 Dimensional Array and 2 Dimensional Rectangular Array
A one-dimensional array holds observations of a single variable (e.g., a date paired with each observation)
A two-dimensional rectangular array (data table) is simply a table: rows are observations, columns are variables
Geometric Mean and Geometric Mean Return Formulas
Geometric mean: multiply the n numbers together, then take the nth root
Geometric mean return: multiply (1 + return) for each period, take the nth root, then subtract 1
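A sketch of the geometric mean return on three hypothetical periodic returns:

returns = [0.10, -0.05, 0.20]           # hypothetical periodic returns

growth = 1.0
for r in returns:
    growth *= 1 + r                     # compound the (1 + return) terms
geo_mean_return = growth ** (1 / len(returns)) - 1
print(round(geo_mean_return, 4))        # about 0.0784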
Variance and Standard Deviation
s^2 = SUM[(observation-mean)^2]/(n-1)
Standard Deviation is the square root of this
Downside deviation/target semideviation
Uses only the observations below the chosen target
Square root of SUM[((observation - target)^2)/(n-1)], summing only over observations below the target
n is the total number of observations, not just the ones below the target
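A sketch of sample variance, standard deviation, and target semideviation on four hypothetical observations (note the n-1 denominator in both):

obs = [4.0, 6.0, 8.0, 10.0]             # hypothetical returns (%)
n = len(obs)
mean = sum(obs) / n

variance = sum((x - mean) ** 2 for x in obs) / (n - 1)
std_dev = variance ** 0.5

target = 7.0                            # chosen target return
semidev = (sum((x - target) ** 2 for x in obs if x < target) / (n - 1)) ** 0.5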
Coefficient of Variation
CV = standard dev/mean
Sample Covariance
SUM[(obs x - mean x)*(obs y - mean y)]
All divided by (n-1)
Correlation Coefficient
= Covariance/(standard dev x * standard dev y)
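A sketch of sample covariance and the correlation coefficient on two hypothetical series:

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 1.5, 3.5, 5.0]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)
std_x = (sum((a - mean_x) ** 2 for a in x) / (n - 1)) ** 0.5
std_y = (sum((b - mean_y) ** 2 for b in y) / (n - 1)) ** 0.5
corr_xy = cov_xy / (std_x * std_y)      # always lands between -1 and +1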
Conditional and Joint Probability
P(A|B) = P(AB)/P(B)
Joint probability is P(AB) = P(A|B)*P(B)
Probability that A happens or B happens (or both) — the addition rule
P(A or B) = P(A) + P(B) - P(AB)
Expected Value Variance
= SUM OF:
probability* [(X-Expected value)^2]
Expected Value with Conditional Probabilities
E(X|S) = P(X1|S)X1 + P(X2|S)X2 etc.
Weighting each conditional expected value E(X|Si) by its scenario probability P(Si) and summing over all scenarios (S1, S2, etc.) gives the total expected value E(X)
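A sketch of the total probability rule for expected value, using two made-up scenarios:

p_s = [0.6, 0.4]                        # P(S1), P(S2)
# for each scenario: (P(outcome | scenario), outcome value) pairs
cond = [
    [(0.5, 10.0), (0.5, 20.0)],         # E(X|S1) = 15
    [(0.25, 0.0), (0.75, 40.0)],        # E(X|S2) = 30
]

e_x_given_s = [sum(p * x for p, x in dist) for dist in cond]
e_x = sum(ps * e for ps, e in zip(p_s, e_x_given_s))   # E(X) = SUM of E(X|Si)*P(Si) = 21.0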
Covariance with expected values
Covariance between X and Y: (population Covariance)
Cov(x,y) = E[(X-EX)*(Y-EY)]
Sample Covariance:
Cov(x,y) = SUM OF {[(X-X mean)* (Y-Y mean)]/(n-1)}
Correlation
= covariance/(standard dev X * standard dev y)
Portfolio variance
= (weight1^2)(variance1) + (weight2^2)(variance2) + 2(weight1)(weight2)(covariance 1,2)
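A sketch of two-asset portfolio variance with assumed weights, variances, and covariance:

w1, w2 = 0.6, 0.4                       # portfolio weights
var1, var2 = 0.04, 0.09                 # asset variances
cov12 = 0.006                           # covariance between the two assets

port_var = w1**2 * var1 + w2**2 * var2 + 2 * w1 * w2 * cov12
port_std = port_var ** 0.5              # portfolio standard deviation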
Bayes Formula
= (prob of new information given event/unconditional prob of new information)* prior probability
P(Event|Info) = [P(Info|Event)/P(Info)]*P(Event)
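A sketch of Bayes' formula with assumed probabilities; P(Info) is built with the total probability rule:

p_event = 0.3                           # prior P(Event)
p_info_given_event = 0.8                # P(Info | Event)
p_info_given_not = 0.2                  # P(Info | no Event)

# unconditional P(Info) via the total probability rule
p_info = p_info_given_event * p_event + p_info_given_not * (1 - p_event)
p_event_given_info = p_info_given_event / p_info * p_event   # updated (posterior) probability, about 0.63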
Number of combinations binomial distribution
= (n!)/[(n-x)! * x!]
Binomial probability function
= (n!)/[(n-x)! * x!] * [p^x * (1-p)^(n-x)]
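A sketch of the binomial probability function, using math.comb for the combinations term:

from math import comb

n, x, p = 10, 3, 0.25                   # 10 trials, 3 successes, success probability 0.25
prob = comb(n, x) * p**x * (1 - p)**(n - x)
print(round(prob, 4))                   # about 0.2503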
Z Score
Z = (X - mean)/standard dev
Central Limit Theorem
With a large enough sample size, the distribution of sample means is approximately normal
The variance of the sample mean is (population variance)/(sample size), so it shrinks as the sample gets bigger
Standard error
Standard deviation/root of sample size
= standard dev/sqrt(n)
Measures how much inaccuracy comes from sampling
Confidence interval z scores 90%, 95%, 99%
90%: z = 1.65
95%: z = 1.96
99%: z = 2.58
Confidence intervals
Mean +/- Z*(standard dev/sqrt(n))
*Z is the value that leaves (1 - confidence level)/2 in each tail (two-tailed)
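A sketch of the standard error and a 95% confidence interval for a sample mean, with assumed sample statistics:

sample_mean = 5.0
sample_std = 2.0
n = 64

std_error = sample_std / n ** 0.5       # standard error of the sample mean = 0.25
z = 1.96                                # 95% confidence (2.5% in each tail)
ci_low = sample_mean - z * std_error
ci_high = sample_mean + z * std_error   # interval is (4.51, 5.49)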