PART III: Computational Mathematics For Physics Flashcards
What’s the idea of numerical integration?
An approximation method for determining the area under a curve
There's always some error, since it's an approximation
Riemann sum
$I = \sum_{i=0}^{n-1} h\, f\!\left(\frac{x_{i+1} + x_i}{2}\right)$
From using the midpoint rule (evaluate $f$ at the midpoint of each slice)
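A minimal Python sketch of this sum; the function, interval, and slice count are illustrative choices, not from the cards.

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Midpoint Riemann sum: each of the n slices of width h contributes
    h * f(midpoint of the slice)."""
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)   # (x_i + x_{i+1}) / 2
    return h * np.sum(f(midpoints))

# Example: integral of x^2 on [0, 1]; exact value is 1/3
print(midpoint_rule(lambda x: x**2, 0.0, 1.0, 100))
```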
Derivation of the trapezoid rule
+ equation for the trapezoid rule
Each slice is a rectangle of height $f(x_i)$ plus a triangle of area $A = \tfrac{1}{2}bh$, with base $h$ and height $f(x_{i+1}) - f(x_i)$, giving $\tfrac{h}{2}\left[f(x_{i+1}) + f(x_i)\right]$ per slice. Summing over slices:
$I = \sum_{i=0}^{n-1} \frac{h}{2}\left[f(x_{i+1}) + f(x_i)\right]$
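A matching Python sketch of the trapezoid rule, again with an illustrative test integrand.

```python
import numpy as np

def trapezoid_rule(f, a, b, n):
    """Trapezoid rule: h * [f(a)/2 + f(b)/2 + sum over interior sample points]."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)   # sample points x_0, ..., x_n
    return h * (0.5 * f(x[0]) + 0.5 * f(x[-1]) + np.sum(f(x[1:-1])))

# Example: integral of x^2 on [0, 1]; exact value is 1/3
print(trapezoid_rule(lambda x: x**2, 0.0, 1.0, 100))
```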
What is a Taylor series?
An expansion of a function into an infinite sum of terms with increasing powers of the variable: $x$, $x^2$, $x^3$, …
When it converges, it expresses the exact value of a function
What is a Taylor series expansion?
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$
(the expansion of $f$ about a point $a$)
What’s a Taylor polynomial?
$T_n(x)$, a truncation of a convergent series, lets us approximate the value of a function up to a certain order
- any higher orders we don't care about get lumped into $O(x^{n+1})$
- the error is $\epsilon = f(x) - T_n(x)$, so $\epsilon$ is of order $O(x^{n+1})$
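A small sketch of a Taylor polynomial in practice, using $e^x$ about 0 as an illustrative example (not taken from the cards).

```python
import math

def taylor_exp(x, n):
    """Taylor polynomial T_n(x) of e^x about 0: sum of x^k / k! for k = 0..n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The leftover error epsilon = f(x) - T_n(x) shrinks as more orders are kept
x = 0.5
for n in (2, 4, 8):
    print(n, taylor_exp(x, n), abs(math.exp(x) - taylor_exp(x, n)))
```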
What’s special about Taylor expansions?
- they're smooth
- they let us rewrite a smooth function as an infinite sum of polynomial terms
- they let us predict what a function will look like around a point
- they combine the function's derivatives of every order into a single infinite sum
Calculating error in integration methods
Taylor expand $f$ around $a = x_{i-1}$ (the left end of a slice)
Integrate the expansion over the slice
Substitute $u = x - a$ and change the integration bounds to match
Truncate at the desired order in $h$
Expand around the second point $x_i$ and repeat
Average the two results
Add up all the subintervals from $a$ to $b$
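A compressed version of these steps for the trapezoid rule, on one slice $[x_{i-1}, x_i]$ with $u = x - a$ and $a = x_{i-1}$:

$\int_{x_{i-1}}^{x_i} f(x)\,dx = \int_0^h \left[f(a) + u f'(a) + \tfrac{1}{2}u^2 f''(a) + \cdots\right] du = h f(a) + \tfrac{1}{2}h^2 f'(a) + \tfrac{1}{6}h^3 f''(a) + O(h^4)$

Expanding around $x_i$ instead gives $h f(x_i) - \tfrac{1}{2}h^2 f'(x_i) + \tfrac{1}{6}h^3 f''(x_i) + O(h^4)$. Averaging the two, summing over all slices from $a$ to $b$, and using $h\sum_i f''(x_i) \approx \int_a^b f''\,dx = f'(b) - f'(a)$ turns the leftover terms into the error quoted on the next card.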
Error in trapezoid rule
$\epsilon = \frac{1}{12}h^2\left[f'(a) - f'(b)\right]$
- accurate to and including terms proportional to h (first order)
- error O(h^2)
Accuracy of a first order rule
O(h) and error O(h^2)
Simpson’s rule: formula
$I(a,b) = \frac{1}{3}h\left[f(a) + f(b) + 4\sum_{k\ \mathrm{odd}} f(a+kh) + 2\sum_{k\ \mathrm{even}} f(a+kh)\right]$
(odd $k$ runs over $1, 3, \dots, n-1$; even $k$ over $2, 4, \dots, n-2$)
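A sketch implementation of this formula, again with illustrative names and a made-up test integrand.

```python
import numpy as np

def simpson_rule(f, a, b, n):
    """Simpson's rule with n slices (n must be even):
    (h/3) * [f(a) + f(b) + 4*(sum over odd k) + 2*(sum over even k)]."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of slices")
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    return (h / 3) * (f(x[0]) + f(x[-1])
                      + 4 * np.sum(f(x[1:-1:2]))    # k = 1, 3, ..., n-1
                      + 2 * np.sum(f(x[2:-1:2])))   # k = 2, 4, ..., n-2

# Example: integral of x^4 on [0, 1]; exact value is 0.2
print(simpson_rule(lambda x: x**4, 0.0, 1.0, 10))
```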
What’s special about Simpson’s rule?
- convenient to code
- more precise with relatively little effort
- moderately accurate with small number of steps
Error in simpson’s rule
- $\epsilon = \frac{1}{90}h^4\left[f'''(a) - f'''(b)\right]$
- accurate to third order (terms proportional to $h^3$)
- approximation error is fourth order, $O(h^4)$
Is Simpson’s rule always better than trapezoid rule?
No
When $h > 1$, $h^4 > h^2$, so the trapezoid rule can be better
Both error terms depend on derivatives of $f$: Simpson's rule depends on $f'''$ while the trapezoid rule depends on $f'$, so if the third derivative is large, the error can be larger for Simpson's rule
Adaptive integration
Answers the question: how many steps are enough?
Double $N$ (the number of slices) each time
The error on estimate $I_i$ is $\epsilon_i = \frac{1}{3}(I_i - I_{i-1})$; stop once it's below the target accuracy
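A sketch of this doubling scheme for the trapezoid rule; the target accuracy and starting slice count here are arbitrary choices for illustration.

```python
def adaptive_trapezoid(f, a, b, target=1e-8, n=10):
    """Keep doubling the number of slices N until the error estimate
    (I_i - I_{i-1}) / 3 falls below the target accuracy."""
    h = (b - a) / n
    I_old = h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))
    while True:
        n *= 2
        h = (b - a) / n
        # when N doubles, only the new (odd-numbered) sample points need evaluating
        I_new = 0.5 * I_old + h * sum(f(a + k * h) for k in range(1, n, 2))
        if abs(I_new - I_old) / 3 < target:
            return I_new
        I_old = I_new

# Example: integral of x^4 on [0, 1]; exact value is 0.2
print(adaptive_trapezoid(lambda x: x**4, 0.0, 1.0))
```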
Romberg integration
Repeatedly improving trapezoid-rule estimates (one improvement step already gives the accuracy of Simpson's method)
Calculate $I_1 = R_{1,1}$ and $I_2 = R_{2,1}$ with the trapezoid rule
Use $R_{i,m+1} = R_{i,m} + \dfrac{R_{i,m} - R_{i-1,m}}{4^m - 1}$ to get a more accurate estimate
Then $I_3 = R_{3,1}$, and use it to get $R_{3,2}$ and $R_{3,3}$
At each successive stage, compute one more trapezoid-rule estimate
For each estimate, calculate the error and stop once the target accuracy is reached (see the sketch below)
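A sketch of the full procedure; columns are 0-indexed here, so the card's $R_{i,m+1} = R_{i,m} + (R_{i,m} - R_{i-1,m})/(4^m - 1)$ appears with $4^{m+1} - 1$ in the code. The stopping check simply compares successive estimates.

```python
def romberg(f, a, b, target=1e-10, max_levels=20):
    """Romberg integration: row i starts from a trapezoid estimate with 2^i
    slices, then each extra column applies the extrapolation formula."""
    def trapezoid(n):
        h = (b - a) / n
        return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))

    rows = [[trapezoid(1)]]                          # R(1,1)
    for i in range(1, max_levels):
        row = [trapezoid(2 ** i)]                    # one new trapezoid estimate per stage
        for m in range(i):                           # fill in the rest of the row
            row.append(row[m] + (row[m] - rows[i - 1][m]) / (4 ** (m + 1) - 1))
        rows.append(row)
        if abs(row[-1] - row[-2]) < target:          # stop once successive estimates agree
            return row[-1]
    return rows[-1][-1]

# Example: integral of x^4 on [0, 1]; exact value is 0.2
print(romberg(lambda x: x**4, 0.0, 1.0))
```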
Error in Romberg integration
After $n$ levels of this process, the last estimate is $R_{n,n}$, which is accurate to order $h^{2n}$
Why is calculating numerical derivatives less common than integrals?
- basic techniques for it are pretty simple
- derivatives can usually be easily calculated analytically
- noisy data poses a lot of problems
Forward difference method
$f'(x) \approx \dfrac{f(x+h) - f(x)}{h}$
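In code the forward difference is a one-liner; the default step size here anticipates the optimal $h$ discussed a few cards below.

```python
import math

def forward_difference(f, x, h=1e-8):
    """Forward-difference estimate of f'(x): [f(x + h) - f(x)] / h."""
    return (f(x + h) - f(x)) / h

# Example: derivative of sin at x = 1, compared with the exact value cos(1)
print(forward_difference(math.sin, 1.0), math.cos(1.0))
```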
What are the types of error associated with numerical differentiation?
Rounding error (due to the computer's finite floating-point precision)
Approximation error (due to the technique itself)
Derivation of approximation error for forward difference method
Taylor expand $f(x+h)$ around $x$
Rearrange to solve for $f'(x)$
You get $f'(x)$ = forward difference + error terms (written out below)
$\epsilon = \frac{1}{2}h\,|f''(x)|$
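The same steps written out:

$f(x+h) = f(x) + h f'(x) + \tfrac{1}{2}h^2 f''(x) + \cdots \;\Rightarrow\; f'(x) = \dfrac{f(x+h) - f(x)}{h} - \tfrac{1}{2}h f''(x) + \cdots$

so the leading approximation error is $\epsilon = \tfrac{1}{2}h\,|f''(x)|$.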
Derivation of round off error for forward difference method
$C$ = the Python error constant (machine precision, roughly $10^{-16}$)
Apply it to both $f(x+h)$ and $f(x)$: each value carries a rounding error of size $C|f(x)|$
Since $f(x)$ is close to $f(x+h)$, the combined round-off error on the numerator is $2C|f(x)|$
Take the division by $h$ into account: $\epsilon = \dfrac{2C|f(x)|}{h}$
Total error in forward difference method
$\epsilon$ = approximation error + round-off error
$\epsilon = \dfrac{2C|f(x)|}{h} + \dfrac{1}{2}h\,|f''(x)|$
Find the $h$ that minimizes it: take the derivative with respect to $h$, set it to zero, solve for $h$, and plug back in
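Carrying that out:

$\dfrac{d\epsilon}{dh} = -\dfrac{2C|f(x)|}{h^2} + \tfrac{1}{2}|f''(x)| = 0 \;\Rightarrow\; h = \sqrt{4C\left|\dfrac{f(x)}{f''(x)}\right|}, \qquad \epsilon_{\min} = \sqrt{4C\,|f(x)\,f''(x)|}$

With $C \approx 10^{-16}$, this gives $h \sim 10^{-8}$, which is why making the step much smaller makes the forward difference worse, not better.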