PART III: Computational Mathematics For Physics Flashcards
What’s the idea of numerical integration?
An approximation method for finding the area underneath a curve
There's always some error, since it's an approximation
Riemann sum
$I = \sum_{i=0}^{n-1} h\, f\!\left(\frac{x_{i+1} + x_i}{2}\right)$
From using the midpoint rule
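A minimal sketch of the midpoint rule (the name midpoint_rule and the test integrand are mine, not from the cards):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Midpoint-rule estimate of the integral of f from a to b using n slices."""
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)  # midpoint of each slice
    return h * np.sum(f(midpoints))

# Example: integral of x^2 from 0 to 1 (exact value 1/3)
print(midpoint_rule(lambda x: x**2, 0.0, 1.0, 100))
```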
Derivation of the trapezoid rule
+ equation for the trapezoid rule
Approximate the curve on each slice by a straight line, so each slice is a trapezoid (a rectangle plus a triangle of area $A = \frac{1}{2}bh$)
Each slice has area $\frac{h}{2}[f(x_i) + f(x_{i+1})]$; summing over all slices:
$I = \sum_{i=0}^{n-1} \frac{h}{2}\left[f(x_{i+1}) + f(x_i)\right]$
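A minimal sketch of the trapezoid rule (trapezoid_rule and the test integrand are my choices):

```python
def trapezoid_rule(f, a, b, n):
    """Trapezoid-rule estimate of the integral of f from a to b using n slices."""
    h = (b - a) / n
    s = 0.5 * f(a) + 0.5 * f(b)       # endpoints get weight 1/2
    for k in range(1, n):
        s += f(a + k * h)             # interior points get weight 1
    return h * s

# Example: integral of x^2 from 0 to 1 (exact value 1/3)
print(trapezoid_rule(lambda x: x**2, 0.0, 1.0, 100))
```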
What is a Taylor series?
An expansion of a function into an infinite sum of terms, with increasing orders of a variable like x, x^2, x^3, ...
When it converges, it expresses the exact value of a function
What is a Taylor series expansion?
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$ (the expansion of f about a point a)
What’s a Taylor polynomial?
$T_n(x)$, the truncation of a convergent Taylor series, lets us approximate the value of a function up to a certain order n
- any orders we drop are $O(x^{n+1})$
- $\epsilon = f(x) - T_n(x)$, so $\epsilon \propto O(x^{n+1})$
What’s special about Taylor expansions?
- they're smooth
- they let us rewrite a smooth function as an infinite sum of polynomial terms
- they let us guess what a function will look like around a point
- adding an infinite number of derivative terms collapses into a single infinite sum
Calculating error in integration methods
1. Taylor expand f around $a = x_{i-1}$
2. Integrate it all
3. Substitute $u = x - a$ and change the integration bounds
4. Truncate at the appropriate order $O(h^k)$
5. Expand around the second point $x_i$ and repeat
6. Average the two results together
7. Add up all subintervals from a to b
Error in trapezoid rule
$\epsilon = \frac{1}{12}h^2\left[f'(a) - f'(b)\right]$
- accurate to and including terms proportional to h (a first-order rule)
- error is $O(h^2)$
Accuracy of a first order rule
Accurate to and including terms of order $O(h)$; error is $O(h^2)$
Simpson’s rule: formula
$I(a,b) = \frac{1}{3}h\left[f(a) + f(b) + 4\sum_{k\ \mathrm{odd}} f(a + kh) + 2\sum_{k\ \mathrm{even}} f(a + kh)\right]$
(k runs over the interior points; the number of slices must be even)
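A minimal sketch (simpson_rule is my name for it; the even-n guard is my assumption about how you'd protect the formula):

```python
def simpson_rule(f, a, b, n):
    """Simpson's-rule estimate of the integral of f from a to b; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of slices")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))  # odd k
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))  # even k
    return h * s / 3

# Example: integral of x^4 - 2x + 1 from 0 to 2 (exact value 4.4)
print(simpson_rule(lambda x: x**4 - 2*x + 1, 0.0, 2.0, 100))
```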
What’s special about Simpson’s rule?
- convenient to code
- more precise with relatively little effort
- moderately accurate with small number of steps
Error in simpson’s rule
- $\epsilon = \frac{1}{90}h^4\left[f'''(a) - f'''(b)\right]$
- a 3rd-order rule (accurate to and including terms proportional to $h^3$)
- the approximation error is 4th order, $O(h^4)$
Is Simpson’s rule always better than trapezoid rule?
No
When h > 1, $h^4 > h^2$, so the trapezoid rule can be better
Both error terms depend on a derivative, so if the 3rd derivative (which Simpson's rule depends on) is large, the error is larger for Simpson's rule
Adaptive integration
How many steps are enough? Let the error estimate decide:
- double N each time and recompute
- the error on the new estimate is $\epsilon_i = \frac{1}{3}(I_i - I_{i-1})$ (for the trapezoid rule)
- stop when the error falls below the target accuracy
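A minimal sketch of adaptive trapezoidal integration (adaptive_trapezoid is my name; the reuse-the-old-sum trick is standard for slice doubling):

```python
def adaptive_trapezoid(f, a, b, target=1e-8):
    """Double the number of slices until the error estimate (I_i - I_{i-1})/3
    falls below the target accuracy."""
    n = 1
    I_old = 0.5 * (b - a) * (f(a) + f(b))   # trapezoid rule with one slice
    while True:
        n *= 2
        h = (b - a) / n
        # Halving h only adds the odd interior points, so reuse the old sum.
        I_new = 0.5 * I_old + h * sum(f(a + k * h) for k in range(1, n, 2))
        if abs(I_new - I_old) / 3 < target:
            return I_new
        I_old = I_new
```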
Romberg integration
Repeatedly refining the trapezoid rule (the first refinement already has the accuracy of Simpson's method)
Calculate $I_1 = R(1,1)$ and $I_2 = R(2,1)$ with the trapezoid rule
Use $R(i, m+1) = R(i,m) + \frac{R(i,m) - R(i-1,m)}{4^m - 1}$ to get a more accurate estimate
Then $I_3 = R(3,1)$, used to get $R(3,2)$ and $R(3,3)$
At each successive stage, compute one more trapezoid-rule estimate
For each estimate, calculate the error and stop when we hit the target
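A minimal sketch (romberg and its fixed number of levels are my framing; a full version would stop at a target error as described above):

```python
def romberg(f, a, b, levels=8):
    """Romberg table: R[i][0] is the trapezoid estimate with 2**i slices,
    and R[i][m] = R[i][m-1] + (R[i][m-1] - R[i-1][m-1]) / (4**m - 1)."""
    def trap(n):
        h = (b - a) / n
        return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + k*h) for k in range(1, n)))

    R = [[trap(1)]]
    for i in range(1, levels):
        row = [trap(2**i)]                 # one new trapezoid estimate per stage
        for m in range(1, i + 1):
            row.append(row[m-1] + (row[m-1] - R[i-1][m-1]) / (4**m - 1))
        R.append(row)
    return R[-1][-1]                       # R(n,n), the most refined estimate

print(romberg(lambda x: x**4 - 2*x + 1, 0.0, 2.0))   # ~4.4
```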
Error in Romberg integration
After n levels of this process, the last estimate is $R(n,n)$, which is accurate to order $h^{2n}$
Why is calculating numerical derivatives less common than integrals?
- basic techniques for it are pretty simple
- derivatives can usually be easily calculated analytically
- noisy data poses a lot of problems
Forward difference method
$f'(x) \approx \frac{f(x+h) - f(x)}{h}$
What are the types of error associated with numerical differentiation?
- Rounding error (from the computer's finite floating-point precision)
- Approximation error (from the technique itself)
Derivation of approximation error for forward difference method
Taylor expand f around x
Rearrange to solve for f’
You get f’ = forward diff + error
$\epsilon = \frac{h}{2}|f''(x)|$
Derivation of round off error for forward difference method
- C = Python's rounding-error constant (about $10^{-16}$ for double precision)
- it applies to both $f(x+h)$ and $f(x)$: each carries an error of about $C|f(x)|$
- since $f(x)$ is close to $f(x+h)$, the combined round-off error is $2C|f(x)|$
- taking the division by h into account: $\epsilon = \frac{2C|f(x)|}{h}$
Total error in forward difference method
$\epsilon$ = approximation error + round-off error:
$\epsilon = \frac{2C|f(x)|}{h} + \frac{h}{2}|f''(x)|$
Find the h that minimizes it: differentiate, set $= 0$, solve for h, plug back in (worked out below)
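Working out that minimization (standard calculus, not spelled out on the cards):

$$\frac{d\epsilon}{dh} = -\frac{2C|f(x)|}{h^2} + \frac{1}{2}|f''(x)| = 0
\quad\Rightarrow\quad h = \sqrt{4C\left|\frac{f(x)}{f''(x)}\right|}$$

Plugging back in, both terms contribute equally, giving the minimum total error

$$\epsilon_{\min} = \sqrt{4C\,|f(x)\,f''(x)|}$$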
Backward difference method
$f'(x) \approx \frac{f(x) - f(x-h)}{h}$
Central difference method
A difference taken symmetrically about x, half a step each way (like averaging the forward and backward differences)
$f'(x) \approx \frac{f(x + h/2) - f(x - h/2)}{h}$
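A minimal sketch comparing the two (function names are mine; np.sin is just a convenient test function):

```python
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + 0.5*h) - f(x - 0.5*h)) / h

# Derivative of sin at x = 1; exact value is cos(1) ~ 0.540302
x, h = 1.0, 1e-5
print(forward_diff(np.sin, x, h))   # good to ~5 digits
print(central_diff(np.sin, x, h))   # noticeably closer
```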
Error on central difference method
- write 2 Taylor expansions, one at $x + h/2$ and one at $x - h/2$
- subtract the second from the first and rearrange for $f'$
- same method as the others from here
- the approximation error $\epsilon$ is much smaller ($O(h^2)$ instead of $O(h)$)
- the rounding error is the same
Alternate central difference method: what is it and when do you use it?
Used if you have a discrete set of data points
$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$
When is central difference more accurate?
$h < \left|\frac{f''(x)}{f'''(x)}\right|$
Define interpolation
When we want to approximate values in between known data points
Define extrapolation
When we want to approximate values outside our range of data points
Define interpolant
The model/function we fit through the known data points and then evaluate
Define basis functions
Functions that determine how much each data point is weighted
Multiply the set of data values $y_i$ by a set of basis functions $W_i$ and sum, $f(x) \approx \sum_i y_i W_i(x)$, to get the inter/extrapolation equation
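A minimal sketch of that weighted-sum picture for linear interpolation ("hat" basis functions; linear_interp is my name for it):

```python
def linear_interp(xs, ys, x):
    """Linear interpolation as a weighted sum of data values: each point
    contributes ys[i] times a 'hat' weight that is 1 at xs[i] and falls
    linearly to 0 at the neighboring points."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])   # weight on the right point
            return (1 - w) * ys[i] + w * ys[i + 1]
    raise ValueError("x is outside the data range (that's extrapolation)")

print(linear_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.5
```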
Higher order interpolation
Instead of 2 points, 3 points
More points = more weights
Piecewise interpolation
For complex data
- break function into easier/manageable components
Piece wise interpolation: how do we piece it together?
Splines!
Define splines
A special basis set of cubic polynomials with two kinds of constraints:
- the curve must pass through each data point, so two points are constrained
- the tangent to the curve must have a particular angle at each join, so two derivatives are constrained
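A minimal sketch using one common library implementation (scipy's CubicSpline; the sample data is mine):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 10, 6)        # a few known data points
y = np.sin(x)
spline = CubicSpline(x, y)       # piecewise cubics, smooth at every join

# The spline passes through every data point, with continuous first and
# second derivatives where the pieces meet.
print(spline(3.7), np.sin(3.7))
```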
How is a matrix different from an array?
Matrices are strictly 2D (arrays can be multi-dimensional)
Multiplication is different (* on matrices is the matrix product, not element-wise)
The matrix class has methods for calculating the inverse, transpose, etc.
The syntax is different, e.g. np.matrix('row1; row2')
Solving linear equations
- use matrices
- convert system to matrix format
- define Ax=v
- use Gaussian elimination
- back substitute
- can use solve from numpy.linalg
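A minimal sketch with numpy.linalg.solve (the 2x2 system is just an example I made up):

```python
import numpy as np

# System: 2x + y = 5 and x - 3y = -1, written as A @ r = v
A = np.array([[2.0,  1.0],
              [1.0, -3.0]])
v = np.array([5.0, -1.0])

r = np.linalg.solve(A, v)   # does the elimination and substitution for us
print(r)                    # [2. 1.], i.e. x = 2, y = 1
```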
Relaxation method
For equations of form x=f(x)
Start with an initial guess, e.g. x = 1
Solve x' = f(x) using the x from the previous step
Then solve x'' = f(x') using the result of that step
Keep repeating until convergence, i.e. until the accuracy reaches the desired threshold
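A minimal sketch (relax is my name; the example equation $x = 2 - e^{-x}$ is an illustration I chose because its iteration converges):

```python
import numpy as np

def relax(f, x=1.0, tol=1e-10, max_iter=1000):
    """Iterate x -> f(x) until successive values agree to within tol."""
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("relaxation did not converge")

print(relax(lambda x: 2 - np.exp(-x)))   # ~1.8414
```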
When can we use the relaxation method?
When the iteration converges (it isn't oscillating between different values or running away); if it doesn't, the equation needs to be invertible so we can relax the inverted form instead
Relaxation method: what happens if the function is oscillating or not invertible?
Use an alternate form of the equation:
- invert it (e.g. take the log of both sides) to get an alternative expression for x
- apply relaxation to that
Steps for binary search method
1. Get an initial pair of points x1, x2 and pick a target accuracy; check that f(x1) and f(x2) have opposite signs (their product is negative)
2. Calculate the midpoint x' = 0.5(x1 + x2) and evaluate f(x')
3. If f(x') has the same sign as f(x1), set x1 = x'; otherwise set x2 = x'
4. If |x1 - x2| > target accuracy, repeat from step 2; otherwise take 0.5(x1 + x2) as the root
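A minimal sketch of those steps (bisection is my name; the test equation $x^2 - 2 = 0$ is an example):

```python
def bisection(f, x1, x2, target=1e-10):
    """Binary search for a root of f between x1 and x2."""
    if f(x1) * f(x2) >= 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    while abs(x2 - x1) > target:
        xm = 0.5 * (x1 + x2)
        if f(xm) * f(x1) > 0:   # same sign as f(x1): root lies in [xm, x2]
            x1 = xm
        else:                   # opposite sign: root lies in [x1, xm]
            x2 = xm
    return 0.5 * (x1 + x2)

print(bisection(lambda x: x**2 - 2, 0.0, 2.0))   # sqrt(2) ~ 1.41421
```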
Binary search: how many steps are required?
$N = \log_2(\Delta/\epsilon)$, where $\Delta$ = initial interval size and $\epsilon$ = desired accuracy
Problems with binary search method
- multiple roots in the interval
- a local extremum that touches zero but doesn't cross it
- endpoints that don't bracket a zero crossing
Derivation for Newton’s method
Approximate the slope at x: $f'(x) \approx \frac{f(x)}{\Delta x}$, so $\Delta x = \frac{f(x)}{f'(x)}$ and the improved estimate is $x' = x - \Delta x = x - \frac{f(x)}{f'(x)}$
What do we need to know to use Newton’s method?
Derivative
Steps for Newton’s method
Start with initial guess of x
Evaluate f(x) and f'(x), plug into $x' = x - f(x)/f'(x)$
Use x' as the new x to get x''
Repeat until the desired accuracy
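A minimal sketch (newton is my name; the derivative is passed in explicitly since the method needs it):

```python
def newton(f, fprime, x, target=1e-10, max_iter=100):
    """Newton's method: repeat x' = x - f(x)/f'(x) until the step is tiny."""
    for _ in range(max_iter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < target:
            return x
    raise RuntimeError("Newton's method did not converge")

print(newton(lambda x: x**2 - 2, lambda x: 2*x, x=1.0))   # sqrt(2)
```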
Error on Newton’s method + what does it mean?
$\epsilon' = \left[\frac{-f''(x)}{2f'(x)}\right]\epsilon^2$
The error on the next estimate is proportional to $\epsilon^2$ (quadratic convergence)
Derivation for Euler’s method
For $dx/dt = f(x,t)$ with an initial condition, Taylor expand:
$x(t+h) = x(t) + hf(x,t) + O(h^2)$
Assume $O(h^2) \to 0$:
$x(t+h) = x(t) + hf(x,t)$
Steps for Euler’s method
1. Take the x(t) that we know (the initial condition)
2. Plug into $x(t+h) = x(t) + hf(x,t)$ to solve for $x(t+h)$
3. Repeat that for another interval h after that
4. Can go on as long as we want
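A minimal sketch (euler is my name; the decay equation $dx/dt = -x$ is a test case I chose):

```python
def euler(f, x0, a, b, n):
    """Euler's method for dx/dt = f(x, t), starting from x(a) = x0."""
    h = (b - a) / n
    x, t = x0, a
    for _ in range(n):
        x += h * f(x, t)   # x(t + h) = x(t) + h f(x, t)
        t += h
    return x

# dx/dt = -x with x(0) = 1; exact answer at t = 5 is exp(-5) ~ 0.006738
print(euler(lambda x, t: -x, 1.0, 0.0, 5.0, 10000))
```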
Error with Euler’s method
The total error is the sum of the errors at each individual step:
$\epsilon = \frac{1}{2}h\left[f(x(b), b) - f(x(a), a)\right]$
Linear in h, so not great convergence
Derivation for Runge-Kutta method
Instead of truncating the Taylor expansion at $O(h^2)$ as in Euler's method, keep terms up to $O(h^3)$
Use the slope at time $t + h/2$ instead of at t exactly:
$x(t+h) = x(t) + h\left.\frac{dx}{dt}\right|_{t+h/2} + O(h^3)$
The $h^2$ term cancels, leaving error $O(h^3)$, so it's more accurate now
Steps for Runge-Kutta method
Approximate $x(t + h/2)$ with Euler's method (don't forget to substitute $h/2$ everywhere h appears in the equation):
$k_1 = hf(x, t)$
$k_2 = hf(x + \frac{1}{2}k_1,\ t + \frac{1}{2}h)$
$x(t+h) = x(t) + k_2$
This is SECOND ORDER (RK2)
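A minimal sketch of RK2 (rk2 is my name; same test equation as the Euler sketch for comparison):

```python
def rk2(f, x0, a, b, n):
    """Second-order Runge-Kutta: step using the slope at the midpoint."""
    h = (b - a) / n
    x, t = x0, a
    for _ in range(n):
        k1 = h * f(x, t)
        k2 = h * f(x + 0.5 * k1, t + 0.5 * h)   # slope at t + h/2
        x += k2
        t += h
    return x

# dx/dt = -x with x(0) = 1; far fewer steps than Euler for similar accuracy
print(rk2(lambda x, t: -x, 1.0, 0.0, 5.0, 100))   # ~exp(-5)
```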
Higher order runge-Kutta method
A 4th-order version exists and is the most common (its error is 5th order, $O(h^5)$); formulas below
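For reference, the standard RK4 formulas (textbook form, not from the cards):

$$k_1 = hf(x, t),\qquad k_2 = hf\!\left(x + \tfrac{1}{2}k_1,\ t + \tfrac{1}{2}h\right)$$
$$k_3 = hf\!\left(x + \tfrac{1}{2}k_2,\ t + \tfrac{1}{2}h\right),\qquad k_4 = hf(x + k_3,\ t + h)$$
$$x(t+h) = x(t) + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$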
Problems with Runge-Kutta method
Infinite ranges - have to change variables
Multiple variables - have to use vector equations and this gets complicated
What is the Runge-Kutta method and why do we care about it?
Extension of Euler’s method with smaller error
Gets better estimates!
Second order DEs
Rewrite it as a first-order vector DE
Solve the condensed 1st-order DE
Solve again, replacing the solution with the original definitions
RK methods are better and faster here
An equation we can rewrite this way is called reducible
(see the sketch below)
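A minimal sketch of the reduction (the simple harmonic oscillator $\ddot{x} = -x$ is my example; RK2 step as above):

```python
import numpy as np

# d2x/dt2 = -x becomes a first-order vector DE for r = (x, v):
# dr/dt = f(r) = (v, -x)
def f(r, t):
    x, v = r
    return np.array([v, -x])

def rk2_vector(f, r0, a, b, n):
    h = (b - a) / n
    r, t = np.array(r0, dtype=float), a
    for _ in range(n):
        k1 = h * f(r, t)
        k2 = h * f(r + 0.5 * k1, t + 0.5 * h)
        r += k2
        t += h
    return r

# One full period (2*pi) should bring (x, v) back near (1, 0)
print(rk2_vector(f, [1.0, 0.0], 0.0, 2 * np.pi, 1000))
```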
Concerns with solving second order DEs?
Energy conservation and time reversal
Reducible method might not conserve energy
Meaning/interpretation of solutions could be lost
Leapfrog method
Apply RK2, but in a way that conserves energy
Instead of recalculating the midpoint from scratch at each t + h, we keep the first midpoint and step it forward by h, basing every future midpoint on the previous one
This makes the method time reversible, which is why energy is conserved (no long-term drift)
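A minimal sketch of leapfrog for the same oscillator (leapfrog is my name; the staggered half-step start is the standard trick):

```python
import numpy as np

def leapfrog(x0, v0, h, n):
    """Leapfrog for d2x/dt2 = -x: x lives at integer steps, v at half steps."""
    x = x0
    v = v0 + 0.5 * h * (-x0)    # half step to put v at t + h/2
    xs = [x0]
    for _ in range(n):
        x += h * v              # full step for x using the midpoint v
        v += h * (-x)           # full step for v using the new x
        xs.append(x)
    return np.array(xs)

xs = leapfrog(1.0, 0.0, 0.01, 100000)
print(xs.max())   # amplitude stays ~1 over many periods: no energy drift
```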