PART III: Computational Mathematics For Physics Flashcards

1
Q

What’s the idea of numerical integration?

A

An approximation method for determining the area under a curve
Because it is an approximation, it always carries some error

2
Q

Riemann sum

A

I = \sum_{i=0}^{n-1} h\, f\!\left(\frac{x_i + x_{i+1}}{2}\right)

From using the midpoint rule

3
Q

Derivation of the trapezoid rule

+ equation for the trapezoid rule

A
Approximate each slice by a trapezoid of width h whose parallel sides are f(x_i) and f(x_{i+1}), so the slice area is A = \frac{h}{2}\left[f(x_i) + f(x_{i+1})\right]
Summing over all slices:
I = \sum_{i=0}^{n-1} \frac{h}{2}\left[f(x_{i+1}) + f(x_i)\right]
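A minimal sketch of the composite trapezoid rule (the integrand, interval, and slice count below are illustrative choices, not from the card):

```
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n slices of width h = (b - a) / n."""
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)   # the n + 1 sample points x_0 ... x_n
    y = f(x)
    # Endpoints count once, interior points twice: h * [f(a)/2 + f(x_1) + ... + f(b)/2]
    return h * (0.5 * y[0] + 0.5 * y[-1] + np.sum(y[1:-1]))

# Example: integrate x^4 - 2x + 1 on [0, 2]; the exact value is 4.4
print(trapezoid(lambda x: x**4 - 2*x + 1, 0.0, 2.0, 100))
```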
4
Q

What is a Taylor series?

A

An expansion of a function into an infinite sum of terms with increasing powers of a variable: x, x^2, x^3, ...
When it converges, it expresses the exact value of the function

5
Q

What is a Taylor series expansion?

A

Expanded about a point a:
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n

6
Q

What’s a Taylor polynomial?

A

T_n(x), the partial sum of a convergent Taylor series, lets us approximate the value of a function up to a chosen order n

  • any orders we don't care about are collected in the term O(x^{n+1})
  • the error is \epsilon = f(x) - T_n(x), so \epsilon is of order O(x^{n+1})
7
Q

What’s special about Taylor expansions?

A
  • they apply to smooth functions
  • they let us rewrite a smooth function as an infinite sum of polynomial terms
  • they let us guess what a function will look like around a point
  • an infinite number of derivatives is condensed into a single infinite sum
8
Q

Calculating error in integration methods

A
Taylor expand f around a = x_{i-1}
Integrate the expansion term by term
Substitute u = x - a
Change the integration bounds to match
Truncate the result at the appropriate order O(h^k)
Expand around the second point x_i and repeat
Average the two results together
Add up all the subintervals from a to b
9
Q

Error in trapezoid rule

A

\epsilon = \frac{1}{12} h^2 \left[f'(a) - f'(b)\right]

  • accurate to and including terms proportional to h (first order)
  • error is O(h^2)
10
Q

Accuracy of a first order rule

A

Accurate to and including terms of order h, with error O(h^2)

11
Q

Simpson’s rule: formula

A

I(a,b) = \frac{h}{3}\left[f(a) + f(b) + 4\sum_{k\ \mathrm{odd}} f(a + kh) + 2\sum_{k\ \mathrm{even}} f(a + kh)\right]
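A sketch of composite Simpson's rule implementing the formula above (the integrand and interval are illustrative choices; n must be even):

```
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule; n (the number of slices) must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    h = (b - a) / n
    y = f(np.linspace(a, b, n + 1))
    # (h/3) * [f(a) + f(b) + 4*(odd-index points) + 2*(even-index interior points)]
    return (h / 3) * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))

print(simpson(lambda x: x**4 - 2*x + 1, 0.0, 2.0, 100))  # exact value is 4.4
```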

12
Q

What’s special about Simpson’s rule?

A
  • convenient to code
  • more precise for relatively little effort
  • moderately accurate with a small number of steps
13
Q

Error in simpson’s rule

A
  • \epsilon = \frac{1}{90} h^4 \left[f'''(a) - f'''(b)\right]
  • accurate to third order in h
  • error is fourth order, O(h^4)
14
Q

Is Simpson’s rule always better than trapezoid rule?

A

No
When h > 1, higher powers of h grow rather than shrink, so the trapezoid rule can be better
Both error formulas depend on a derivative of f, so if the third derivative (which Simpson's rule depends on) is large, the error is larger for Simpson's rule

15
Q

Adaptive integration

A

How many steps are enough? Double N each time and compare successive estimates
For the trapezoid rule, the error on the i-th estimate is
\epsilon_i = \frac{1}{3}\left(I_i - I_{i-1}\right)
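A sketch of the adaptive scheme for the trapezoid rule, reusing each estimate when the slice count doubles (the tolerance is an illustrative choice):

```
def adaptive_trapezoid(f, a, b, target=1e-8):
    """Double the slice count until the error estimate (I_i - I_{i-1})/3 is below target."""
    n = 1
    I_old = 0.5 * (b - a) * (f(a) + f(b))   # one-slice trapezoid estimate
    while True:
        n *= 2
        h = (b - a) / n
        # Only the new midpoints need evaluating; the old points live inside I_old
        I_new = 0.5 * I_old + h * sum(f(a + k * h) for k in range(1, n, 2))
        if abs(I_new - I_old) / 3 < target:
            return I_new
        I_old = I_new
```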

16
Q

Romberg integration

A

Refining the trapezoid rule (reaching the accuracy of Simpson's method and beyond):
Calculate I_1 = R_{1,1} and I_2 = R_{2,1} with the trapezoid rule
Use R_{i,m+1} = R_{i,m} + \frac{R_{i,m} - R_{i-1,m}}{4^m - 1} to get a more accurate estimate
Compute I_3 = R_{3,1} and use it to get R_{3,2} and R_{3,3}
At each successive stage, compute one more trapezoid rule estimate
For each estimate, calculate the error and stop when we reach the target
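A sketch of the Romberg table built from the recurrence above (running a fixed number of levels is an illustrative simplification; in practice you would stop at a target error):

```
import numpy as np

def romberg(f, a, b, levels=6):
    """Fill the Romberg table: column 0 from the trapezoid rule, the rest by extrapolation."""
    R = np.zeros((levels, levels))
    R[0, 0] = 0.5 * (b - a) * (f(a) + f(b))
    n = 1
    for i in range(1, levels):
        n *= 2
        h = (b - a) / n
        # One more trapezoid estimate per stage, reusing the previous one (doubling trick)
        R[i, 0] = 0.5 * R[i - 1, 0] + h * sum(f(a + k * h) for k in range(1, n, 2))
        for m in range(1, i + 1):
            R[i, m] = R[i, m - 1] + (R[i, m - 1] - R[i - 1, m - 1]) / (4**m - 1)
    return R[-1, -1]
```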

17
Q

Error in Romberg integration

A

After n levels of this process, the last estimate is R_{n,n}, which is accurate to order h^{2n}

18
Q

Why is calculating numerical derivatives less common than integrals?

A
  • basic techniques for it are pretty simple
  • derivatives can usually be easily calculated analytically
  • noisy data poses a lot of problems
19
Q

Forward difference method

A

f'(x) \approx \frac{f(x+h) - f(x)}{h}

20
Q

What types of error are associated with numerical differentiation?

A
Rounding error (due to the computer's finite floating-point precision)
Approximation error (due to the technique itself)
21
Q

Derivation of approximation error for forward difference method

A

Taylor expand f around x
Rearrange to solve for f'
You get f' = (forward difference) + error
\epsilon = \frac{h}{2}\left|f''(x)\right|

22
Q

Derivation of round off error for forward difference method

A
C = the machine error constant (for Python floats, C \approx 10^{-16})
Apply it to both f(x+h) and f(x): each is off by up to C|f(x)|
Since f(x) is close to f(x+h), the combined round-off error is 2C|f(x)|
Dividing by h:
\epsilon = \frac{2C|f(x)|}{h}
23
Q

Total error in forward difference method

A

\epsilon = (approximation error) + (round-off error)
\epsilon = \frac{2C|f(x)|}{h} + \frac{h}{2}\left|f''(x)\right|
Find the h that minimizes it: differentiate with respect to h, set the derivative to zero, solve for h, and plug back in, giving h = \sqrt{4C\left|f(x)/f''(x)\right|}

24
Q

Backward difference method

A

f'(x) \approx \frac{f(x) - f(x-h)}{h}

25
Q

Central difference method

A

The average of the forward and backward differences:
f'(x) \approx \frac{f(x + h/2) - f(x - h/2)}{h}
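A one-line sketch of the central difference (the step size h and test function are illustrative choices):

```
from math import sin

def central_diff(f, x, h=1e-5):
    """Central difference: f'(x) ~ [f(x + h/2) - f(x - h/2)] / h."""
    return (f(x + 0.5 * h) - f(x - 0.5 * h)) / h

print(central_diff(sin, 0.0))  # derivative of sin at 0 is 1
```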
26
Q

Error on the central difference method

A

Write two Taylor expansions, one with +h/2 and one with -h/2
Subtract the second from the first and rearrange for f'
Same method as the other derivations from here
The approximation error \epsilon is much smaller
The rounding error is the same
27
Q

Alternate central difference method: what is it and when do you use it?

A

Used if you have a discrete set of data points:
f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}
28
Q

When is the central difference more accurate?

A

When h < \left|f''(x)/f'''(x)\right|
29
Q

Define interpolation

A

Approximating values in between known data points
30
Q

Define extrapolation

A

Approximating values outside the range of our data points
31
Q

Define interpolant

A

The model function used to generate the approximated values
32
Q

Define basis functions

A

Functions that determine how much each data point is weighted
Multiply the set of data values y_i by a set of basis functions W_i(x) to get the interpolation/extrapolation equation:
f(x) = \sum_i y_i W_i(x)
33
Q

Higher order interpolation

A

Instead of 2 points, use 3 (or more)
More points = more weights
34
Q

Piecewise interpolation

A

For complex data:
  • break the function into easier, more manageable components
35
Q

Piecewise interpolation: how do we piece it together?

A

Splines!
36
Q

Define splines

A

A special basis set of cubic polynomials with two kinds of constraints:
  • the curve must pass through each data point, so two points are constrained
  • the tangent to the curve must have a particular angle, so two derivatives are constrained
37
Q

How is a matrix different from an array?

A

  • matrices are strictly 2D (arrays can be multi-dimensional)
  • multiplication is different (for matrices, * is the dot product)
  • the matrix class has methods for calculating the inverse, transpose, etc.
  • the construction syntax is different, e.g. mat("[row1; row2]")
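A quick demonstration of the differences (note that NumPy now discourages numpy.matrix in favor of arrays with the @ operator):

```
import numpy as np

A = np.array([[1, 2], [3, 4]])
M = np.matrix("1 2; 3 4")   # string construction syntax, matrices only

print(A * A)   # element-wise product for arrays
print(A @ A)   # matrix product for arrays
print(M * M)   # '*' already means matrix product for the matrix class
print(M.I)     # the matrix class carries inverse (.I) and transpose (.T)
```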
38
Q

Solving linear equations

A

  • use matrices: convert the system to matrix format
  • define Ax = v
  • use Gaussian elimination
  • back substitute
  • or use solve from numpy.linalg
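A minimal example with numpy.linalg.solve (the 2x2 system is an illustrative choice):

```
import numpy as np

# The system 2x + y = 5, x - y = 1 written as Ax = v
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
v = np.array([5.0, 1.0])

x = np.linalg.solve(A, v)   # Gaussian elimination (LU factorization) internally
print(x)                    # [2. 1.]
```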
39
Q

Relaxation method

A

For equations of the form x = f(x):
Start with an initial guess, e.g. x = 1
Solve x' = f(x) using the x from the previous step
Solve x'' = f(x') using the result of step 2
Keep repeating until convergence, or until the accuracy reaches the desired threshold
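A sketch of the relaxation loop (the example equation x = 2 - e^{-x} and the tolerance are illustrative choices):

```
from math import exp

def relaxation(f, x=1.0, tol=1e-6, max_iter=1000):
    """Iterate x' = f(x) until successive estimates agree to within tol."""
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("relaxation did not converge")

print(relaxation(lambda x: 2 - exp(-x)))  # solves x = 2 - e^(-x)
```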
40
Q

When can we use the relaxation method?

A

When the equation is invertible and the iteration converges rather than oscillating between different values
41
Q

Relaxation method: what happens if the function is oscillating or not invertible?

A

Use the alternate method:
take the log of both sides to solve for an alternative expression for x, then apply relaxation to that
42
Q

Steps for the binary search method

A

  • get an initial pair of points x_1, x_2; check that f(x_1) and f(x_2) have opposite signs (their product is negative); pick a target accuracy
  • calculate the midpoint x' = \frac{1}{2}(x_1 + x_2) and evaluate f(x')
  • if f(x') has the same sign as f(x_1), set x_1 = x'; otherwise set x_2 = x'
  • if |x_1 - x_2| > target accuracy, repeat from the midpoint calculation; otherwise take the final midpoint as the root
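A sketch of the loop above (the test function and tolerance are illustrative choices):

```
def bisect(f, x1, x2, tol=1e-10):
    """Binary search for a root of f bracketed by [x1, x2]."""
    f1, f2 = f(x1), f(x2)
    if f1 * f2 > 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    while abs(x2 - x1) > tol:
        xm = 0.5 * (x1 + x2)
        fm = f(xm)
        if fm * f1 > 0:          # midpoint is on the same side as x1
            x1, f1 = xm, fm
        else:
            x2, f2 = xm, fm
    return 0.5 * (x1 + x2)

print(bisect(lambda x: x**2 - 2, 0.0, 2.0))  # sqrt(2)
```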
43
Q

Binary search: how many steps are required?

A

N = \log_2(\Delta/\epsilon)
where \Delta is the initial interval and \epsilon is the desired accuracy
44
Q

Problems with the binary search method

A

  • multiple roots in the interval
  • a local extremum that reaches 0 but doesn't cross it
  • endpoints that don't bracket a zero crossing
45
Q

Derivation of Newton's method

A

Approximate the derivative by the slope over one step: f'(x) = \frac{f(x)}{\Delta x}, so \Delta x = \frac{f(x)}{f'(x)}
Step to the improved estimate: x' = x - \Delta x = x - \frac{f(x)}{f'(x)}
46
Q

What do we need to know to use Newton's method?

A

The derivative of the function
47
Q

Steps for Newton's method

A

Start with an initial guess x
Compute f(x) and f'(x), and plug them into the equation for x'
Plug x' back into the equation to get x''
Repeat until the desired accuracy is reached
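A sketch of the iteration (the test function, its derivative, and the tolerance are illustrative choices):

```
def newton(f, fprime, x, tol=1e-12, max_iter=100):
    """Newton's method: repeat x' = x - f(x)/f'(x) until the step is below tol."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Root of x^2 - 2 starting from x = 1
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```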
48
Q

Error on Newton's method + what does it mean?

A

\epsilon' = \left[\frac{-f''(x)}{2f'(x)}\right]\epsilon^2
The size of the error on the next estimate is \epsilon^2 (quadratic convergence)
49
Q

Derivation of Euler's method

A

With an initial condition:
x(t+h) = x(t) + h f(x,t) + O(h^2)
Assume the O(h^2) term is negligible:
x(t+h) = x(t) + h f(x,t)
50
Q

Steps for Euler's method

A

Take the x(t) that we know (the initial condition)
Plug it into x(t+h) = x(t) + h f(x,t) to solve for x(t+h)
Repeat for the next interval h after that
Can go on as long as we want
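A sketch of the Euler loop (the DE dx/dt = -x^3 + sin t, the range, and the step count are illustrative choices):

```
from math import sin

def euler(f, x0, a, b, n):
    """Integrate dx/dt = f(x, t) from t = a to t = b in n steps of size h."""
    h = (b - a) / n
    x, t = x0, a
    for _ in range(n):
        x += h * f(x, t)   # x(t + h) = x(t) + h f(x, t)
        t += h
    return x

print(euler(lambda x, t: -x**3 + sin(t), 0.0, 0.0, 10.0, 1000))
```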
51
Q

Error with Euler's method

A

The summation of the errors at each individual step:
\epsilon = \frac{1}{2} h \left[f(x(b), b) - f(x(a), a)\right]
Linear in h, so not great convergence
52
Q

Derivation of the Runge-Kutta method

A

Instead of stopping the Taylor expansion at O(h^2) as in Euler's method, keep terms to O(h^3)
Use the slope at time t + h/2 instead of at t exactly:
x(t+h) = x(t) + h \left.\frac{dx}{dt}\right|_{t+h/2} + O(h^3)
The h^2 term cancels, so the error is O(h^3) and the method is more accurate
53
Q

Steps for the Runge-Kutta method

A

Approximate x(t + h/2) with Euler's method (don't forget to substitute h/2 everywhere h appears in the equation)
k_1 = h f(x, t)
k_2 = h f(x + \frac{1}{2}k_1,\ t + \frac{1}{2}h)
x(t+h) = x(t) + k_2
This is the SECOND-order method
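The same Euler loop upgraded to second-order Runge-Kutta; only the update inside the loop changes (the example DE is the same illustrative one as above):

```
from math import sin

def rk2(f, x0, a, b, n):
    """Second-order Runge-Kutta: advance each step with the midpoint slope k2."""
    h = (b - a) / n
    x, t = x0, a
    for _ in range(n):
        k1 = h * f(x, t)
        k2 = h * f(x + 0.5 * k1, t + 0.5 * h)   # slope evaluated at the midpoint
        x += k2                                  # x(t + h) = x(t) + k2
        t += h
    return x

print(rk2(lambda x, t: -x**3 + sin(t), 0.0, 0.0, 10.0, 1000))
```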
54
Q

Higher-order Runge-Kutta methods

A

A 4th-order version exists (it's the most common, because its error is 5th order)
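For reference, the standard fourth-order update (the card doesn't spell it out) is:

k_1 = h f(x, t)
k_2 = h f(x + \frac{1}{2}k_1,\ t + \frac{1}{2}h)
k_3 = h f(x + \frac{1}{2}k_2,\ t + \frac{1}{2}h)
k_4 = h f(x + k_3,\ t + h)
x(t+h) = x(t) + \frac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)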
55
Q

Problems with the Runge-Kutta method

A

  • infinite ranges: we have to change variables
  • multiple variables: we have to use vector equations, and this gets complicated
56
Q

What is the Runge-Kutta method and why do we care about it?

A

An extension of Euler's method with smaller error
It gets better estimates!
57
Q

Second-order DEs

A

Rewrite the equation as a first-order vector DE (see the sketch after this card)
Solve the condensed 1st-order DE
Recover the solution by replacing the vector components with their original definitions
RK methods are better and faster here
An equation this works for is called reducible
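A sketch of the reduction for one concrete case: the simple harmonic oscillator d^2x/dt^2 = -omega^2 x (an illustrative choice) rewritten as a first-order vector DE and solved with RK2:

```
import numpy as np

OMEGA = 1.0

def f(r, t):
    """r = (x, v); the second-order DE becomes dr/dt = (v, -omega^2 x)."""
    x, v = r
    return np.array([v, -OMEGA**2 * x])

def rk2_vector(f, r0, a, b, n):
    h = (b - a) / n
    r, t = np.array(r0, dtype=float), a
    for _ in range(n):
        k1 = h * f(r, t)
        k2 = h * f(r + 0.5 * k1, t + 0.5 * h)
        r, t = r + k2, t + h
    return r

# One full period: should land back near the initial condition (1, 0)
print(rk2_vector(f, [1.0, 0.0], 0.0, 2 * np.pi, 1000))
```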
58
Q

Concerns with solving second-order DEs?

A

Energy conservation and time reversal:
  • the reducible method might not conserve energy
  • the meaning/interpretation of the solutions could be lost
59
Leapfrog method
Apply RK2 but conserve energy Instead of calculating midpoint at each t+h, we use first midpoint and add h to base future midpoints off the first one This is now time reversible! Aka energy is conserved