Chapter 1 - Some General Theory of Ordinary Differential Equations Flashcards
Existence and Uniqueness Theorem
Theorem
-consider the differential equation
y’’ + p(x)y’ + q(x)y = 0
with p and q continuous on some interval I given by a≤x≤b
-let α, β be any two real numbers and xo be any point in the interval I
-then the equation has a unique solution defined on the interval I which satisfies:
y(xo) = α
y’(xo) = β
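A quick check of the theorem on a concrete equation (a sketch assuming sympy is available; the equation y'' + y = 0 and the sample values α=2, β=3 are illustrative choices, not from the text):

```python
import sympy as sp

# For y'' + y = 0 with initial data at x0 = 0, dsolve returns exactly one
# solution once y(0) and y'(0) are specified, as the theorem guarantees.
x = sp.symbols('x')
y = sp.Function('y')
alpha, beta = 2, 3  # sample values for y(0) and y'(0)

sol = sp.dsolve(y(x).diff(x, 2) + y(x),
                ics={y(0): alpha, y(x).diff(x).subs(x, 0): beta})
# the unique solution is alpha*cos(x) + beta*sin(x)
print(sp.simplify(sol.rhs - (alpha*sp.cos(x) + beta*sp.sin(x))))  # 0
```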
Existence and Uniqueness Theorem
General Solution
y’’ + p(x)y’ + q(x)y = 0
-since the equation is linear and homogeneous, its solution set forms a vector space
-since it is second order, the dimension of this space is 2
-any two linearly independent solutions y1(x), y2(x) can be used as a basis
-the general solution is given as the linear superposition:
y(x) = c1 y1(x) + c2 y2(x)
-the existence and uniqueness theorem says that c1 and c2 can be fixed by specifying initial conditions
y’’ + p(x)y’ + q(x)y = 0
Boundary Conditions
- it is also possible to fix c1 and c2 by specifying 2-point boundary conditions such as y(0) = y(1) = 0
- but in this case we do not have such a powerful general theorem telling us that a (nontrivial) solution always exists
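To see why no such general theorem can hold, here is a minimal sketch (assuming sympy; the equation y'' + π²y = 0 is an illustrative choice) of a boundary-value problem with infinitely many nontrivial solutions:

```python
import sympy as sp

# For y'' + pi^2*y = 0, every multiple of sin(pi*x) satisfies the
# 2-point boundary conditions y(0) = y(1) = 0, so the constants are
# NOT uniquely fixed by these boundary conditions.
x, c = sp.symbols('x c')
y = c * sp.sin(sp.pi * x)  # one-parameter family of candidates

assert sp.simplify(y.diff(x, 2) + sp.pi**2 * y) == 0  # solves the ODE for every c
assert y.subs(x, 0) == 0 and y.subs(x, 1) == 0        # both boundary conditions hold for every c
```

By contrast, the same boundary conditions applied to y'' + y = 0 force A·sin x + B·cos x to be identically zero, so a nontrivial solution may fail to exist at all.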
y’’ + p(x)y’ + q(x)y = 0
Why only two initial conditions?
-the equation implies that:
y''(x) = -p(x)y'(x) - q(x)y(x) for any x ∈ I
-in particular:
y''(xo) = -p(xo)y'(xo) - q(xo)y(xo)
-so y''(xo) cannot be independently specified
The Wronskian
Derivation
given:
y(x) = c1 y1(x) + c2 y2(x)
and
y(xo) = α , y’(xo) = β
-we have a pair of linear equations for c1 and c2:
c1 y1(xo) + c2y2(xo) = α
c1y1’(xo) + c2y2’(xo) =β
-rewrite in a matrix form:
AB = C
-where A is the 2x2 matrix with top row y1(xo), y2(xo) and bottom row y1’(xo), y2’(xo)
-B is the 2x1 column vector with entries c1, c2
-C is the 2x1 column vector with entries α, β
-to solve for c1 and c2 we multiply both sides by A inverse, so A inverse must exist
-the Wronskian is the determinant of matrix A, so if the Wronskian is 0 we cannot determine c1 and c2
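A numerical sketch of this linear system (assuming numpy; the basis y1 = cos x, y2 = sin x for y'' + y = 0 and the values xo = 0, α = 2, β = 3 are illustrative):

```python
import numpy as np

# Solving A c = (alpha, beta) for c1, c2 requires det A -- the Wronskian
# at x0 -- to be nonzero.
x0, alpha, beta = 0.0, 2.0, 3.0
A = np.array([[np.cos(x0), np.sin(x0)],    # y1(x0), y2(x0)
              [-np.sin(x0), np.cos(x0)]])  # y1'(x0), y2'(x0)

wronskian = np.linalg.det(A)
assert abs(wronskian - 1.0) < 1e-12  # W != 0, so A is invertible

c1, c2 = np.linalg.solve(A, np.array([alpha, beta]))
print(c1, c2)  # 2.0 3.0
```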
Abel’s Formula
Formula
-let y1(x) and y2(x) be solutions of:
y'' + p(x)y' + q(x)y = 0
-then:
W'(x) = -p(x)W(x)
-and so:
W(x) = Wo*exp(-∫p(ξ)dξ)
-where the integral is taken from xo to x
Abel’s Formula
Proof
-start with the definition of the Wronskian :
W(x) = y1y2’ - y1’y2
-differentiate:
W’(x) = y1’y2’ + y1y2’’ - y1’y2’ - y1’’y2
W’(x) = y1y2’’ - y1’’y2
-from the original ODE: y’’ + p(x)y’ + q(x)y = 0
we can write
y2’’ = -py2’ - qy2
y1’’ = -py1’ - qy1
-sub in and cancel:
W’(x) = -p(x)W(x)
-therefore, defining Wo = W(xo), the equation W’(x) = -p(x)W(x) is solved by W(x) = Wo*exp(-∫p(ξ)dξ)
-where the integral is taken from xo to x
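The substitution step of the proof can be redone symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

# Substitute yi'' = -p*yi' - q*yi into W' = y1*y2'' - y1''*y2 and check
# that it collapses to -p*W, exactly as in the proof above.
x = sp.symbols('x')
y1, y2, p, q = (sp.Function(n)(x) for n in ('y1', 'y2', 'p', 'q'))

W = y1 * y2.diff(x) - y1.diff(x) * y2
Wprime = W.diff(x)

# impose the ODE on y1 and y2
subs = {y1.diff(x, 2): -p*y1.diff(x) - q*y1,
        y2.diff(x, 2): -p*y2.diff(x) - q*y2}
assert sp.simplify(Wprime.subs(subs) + p*W) == 0  # W' = -p(x)W(x)
```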
Linear Dependence
Definition
-two functions y1(x) and y2(x) are linearly dependent if there exists γ≠0 such that y2 = γ*y1
Wronskian of Linearly Dependent Functions
- if two functions y1 and y2 are linearly dependent then y2 can be written as y2 = γ*y1 for some non-zero γ
- in this case the Wronskian is 0
Is this statement true?
‘ W[y1,y2] = 0 implies y1 and y2 are linearly dependent’
- no, this is not always true, as can be shown by counterexample
- if y1=x^3 and y2=|x|^3 then y1 and y2 are linearly independent but W[y1,y2]=0
- however this statement does hold true subject to the condition that y1 and y2 are solutions of a second order homogeneous linear differential equation
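A quick numerical sketch of the counterexample (plain Python; the derivative of |x|³ is 3x|x|, which is also 0 at x = 0):

```python
# W[y1, y2] vanishes at every sample point, yet no single constant gamma
# gives y2 = gamma*y1 on all of R (gamma would have to be +1 for x > 0
# and -1 for x < 0), so y1 and y2 are linearly independent.
def y1(x): return x**3
def dy1(x): return 3*x**2
def y2(x): return abs(x)**3
def dy2(x): return 3*x*abs(x)

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(y1(x)*dy2(x) - dy1(x)*y2(x)) < 1e-12  # Wronskian is 0
```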
Linear Dependence and Wronskian Theorem
-let y1(x) and y2(x) be two non-zero solutions of:
y’’ + p(x)y’ + q(x)y = 0
-with p(x) and q(x) continuous on some interval I given by a≤x≤b
-then y1(x) and y2(x) are linearly dependent if and only if their Wronskian vanishes identically on I
Euler’s Equation
-the general form of Euler's equation:
x²y'' + axy' + by = 0
-where a and b are constants
-dividing by x² gives p(x)=a/x and q(x)=b/x²
-the point x=0 is badly defined, so we seek solutions only for x>0
-calculating the Wronskian via Abel's formula gives:
W(x) = Wo*x^(-a)
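A sketch verifying W(x) = Wo·x^(-a) on a concrete Euler equation (assuming sympy; the choice a = 2, b = 0, with solutions y1 = 1 and y2 = 1/x, is illustrative):

```python
import sympy as sp

# With a = 2, b = 0 the equation is x^2*y'' + 2x*y' = 0, solved by
# y1 = 1 and y2 = x^(-1) for x > 0.
x = sp.symbols('x', positive=True)
a = 2
y1, y2 = sp.Integer(1), x**-1

for y in (y1, y2):  # both really are solutions
    assert sp.simplify(x**2*sp.diff(y, x, 2) + a*x*sp.diff(y, x)) == 0

W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)
assert sp.simplify(W - (-1)*x**(-a)) == 0  # W = Wo*x^(-a) with Wo = -1
```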
Singular Point
Definition
-points which cannot be included in the interval I are called singular points
-e.g. if p(x)=1/x² and q(x)=1/x
then x=0 would be a singular point as the functions are not continuous over an interval including x=0
Wronskian of Three Functions
- determinant of a 3x3 matrix
- first row with entries: y1, y2, y3
- second row with entries: y1’, y2’, y3’
- third row with entries: y1’’, y2’’, y3’’
Wronskian of Three Functions to Find the Homogeneous Linear Differential Equation Whose Solution Space is Spanned by Two Given Functions
-two functions y1(x) and y2(x) are given as linearly independent solutions of:
y'' + p(x)*y' + q(x)*y = 0
-let y(x) be a general solution of this equation, then:
y(x) = c1*y1(x) + c2*y2(x)
-for some c1 and c2
-since y is a linear combination of y1 and y2, the three functions are linearly dependent, so:
W[y1,y2,y] = 0
-expanding this 3x3 determinant along its last column and dividing by W[y1,y2] recovers:
y'' + p(x)y' + q(x)y = 0
-where:
p(x) = -W'(x)/W(x)
q(x) = W[y1',y2'] / W[y1,y2]
y’’ + p(x)y’ + q(x)y = 0
Given One Solution, Find the Other
-given one solution y1(x), the definition of the Wronskian is a first order differential equation for y2(x):
y1*y2' - y1'*y2 = W(x)
-this equation is linear in y2; dividing by y1² gives:
d/dx (y2/y1) = W(x) / y1²
-integrating and rearranging:
y2 = y1 * ∫ [W(x)/y1²] dx
y2 = y1 * ∫ [exp(-∫p(x)dx)/y1²] dx
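The formula can be run on a concrete example (a sketch assuming sympy; the equation y'' - y = 0 with known solution y1 = e^x is an illustrative choice):

```python
import sympy as sp

# Reduction of order: y2 = y1 * Integral(exp(-Integral(p))/y1^2).
# For y'' - y = 0 we have p = 0, and the second solution should come
# out proportional to e^(-x).
x = sp.symbols('x')
y1 = sp.exp(x)
p = sp.Integer(0)

integrand = sp.exp(-sp.integrate(p, x)) / y1**2    # e^(-2x)
y2 = sp.simplify(y1 * sp.integrate(integrand, x))  # -e^(-x)/2

assert sp.simplify(y2.diff(x, 2) - y2) == 0  # y2 solves y'' - y = 0
```

The overall constant is irrelevant: the second solution is only defined up to scale.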
Constant Coefficient Second Order ODE with Double Root
y’’ - 2my’ + m²y = 0
-using y=e^(λx) gives an auxiliary equation:
(λ-m)² = 0
-the double root gives only one solution:
y1 = e^(mx)
-calculate W using W'=-p(x)W; in this case p(x)=-2m:
W' = 2mW => W = Wo*e^(2mx)
-taking Wo=1 and substituting into d/dx (y2/y1) = W(x) / y1²:
d/dx (y2/y1) = e^(2mx)/e^(2mx) = 1
-so:
y2 = x*e^(mx)
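A symbolic check of the result (a sketch assuming sympy is available):

```python
import sympy as sp

# Verify that y2 = x*e^(mx) solves y'' - 2m*y' + m^2*y = 0, and that the
# pair {e^(mx), x*e^(mx)} has Wronskian e^(2mx), matching Abel's formula
# with p = -2m and Wo = 1.
x, m = sp.symbols('x m')
y1 = sp.exp(m*x)
y2 = x*sp.exp(m*x)

assert sp.simplify(y2.diff(x, 2) - 2*m*y2.diff(x) + m**2*y2) == 0
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)
assert sp.simplify(W - sp.exp(2*m*x)) == 0
```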
Basis of Solutions
-to find the general solution of:
y’’ + p(x)y’ + q(x)y = 0
-we just need to find two linearly independent solutions
-i.e. solutions y1 and y2 such that W[y1,y2]≠0
-since all other solutions are then just a linear combination of these two, the pair of functions y1 and y2 form a basis in the linear algebra sense
Standard Basis for The Solution Space of the Second Order Equation
y’’ + p(x)y’ + q(x)y = 0
-just like choosing a convenient basis e1 and e2 for 2-D space, we can choose:
->y1 to be the unique solution satisfying:
y1(xo) = 1
AND y1’(xo) = 0
->y2 to be the unique solution satisfying:
y2(xo) = 0
AND y2’(xo) = 1
-in this case W[y1,y2]=1≠0 so y1 and y2 are linearly independent
Standard Basis for the Solution Space of a Second Order Equation Proof
-for the standard basis we choose:
y1(xo) = 1, y1'(xo) = 0
AND
y2(xo) = 0, y2'(xo) = 1
-in this case W[y1,y2]=1≠0 so y1 and y2 are linearly independent
-with this basis, the coefficients in the existence and uniqueness theorem can be read off directly:
let y(x) = αy1(x) + βy2(x)
-calculate y(xo):
y(xo) = αy1(xo) + βy2(xo) = α*1 + 0 = α
-calculate y'(xo):
y'(xo) = αy1'(xo) + βy2'(xo) = 0 + β*1 = β
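For a concrete case (a sketch assuming sympy; the equation y'' + y = 0 with xo = 0 is an illustrative choice), the standard basis is y1 = cos x, y2 = sin x:

```python
import sympy as sp

# cos x satisfies y1(0)=1, y1'(0)=0 and sin x satisfies y2(0)=0, y2'(0)=1,
# so they form the standard basis at x0 = 0, with W[y1, y2] = 1.
x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)

assert (y1.subs(x, 0), y1.diff(x).subs(x, 0)) == (1, 0)
assert (y2.subs(x, 0), y2.diff(x).subs(x, 0)) == (0, 1)

W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)  # cos^2 + sin^2
assert W == 1
```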
Change of Basis
- let y1 and y2 be a basis
- we can form a new basis by non-singular linear transformation
- i.e. take the 2x1 column vector with entries y1, y2 and multiply by a 2x2 matrix with entries a, b, c, d such that ad-bc≠0
Change of Basis
Hyperbolic Function
y’’ - y = 0
-this equation has solutions:
y1 = e^x
y2 = e^(-x)
-an alternative basis is:
^y1 = coshx
^y2 = sinhx
-these bases are related by the 2x2 matrix with entries a=1/2, b=1/2, c=1/2, d=-1/2
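A numerical sketch of this change of basis (assuming numpy):

```python
import numpy as np

# M sends (e^x, e^-x) to (cosh x, sinh x); det M = -1/2 != 0, so it is a
# valid (non-singular) change of basis.
M = np.array([[0.5, 0.5],
              [0.5, -0.5]])
assert abs(np.linalg.det(M)) > 0

for x in (-1.0, 0.0, 2.0):
    old = np.array([np.exp(x), np.exp(-x)])
    new = M @ old
    assert np.allclose(new, [np.cosh(x), np.sinh(x)])
```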
Finding a Solution to Euler’s (Cauchy’s) Equation
Deriving the Indicial Equation
x²y'' + axy' + by = 0
-try y = x^r, since:
y = x^r => x*y' = r*x^r => x²*y'' = r(r-1)*x^r
-we have:
x²y'' + axy' + by = [ r(r-1) + ar + b ] * x^r = 0
-if r is chosen to satisfy the indicial equation:
F(r) = r² + (a-1)r + b = 0
-then y=x^r is a solution of Euler's Equation
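A sketch running this recipe on a concrete Euler equation (assuming sympy; the choice a = 2, b = -6 is illustrative):

```python
import sympy as sp

# For x^2*y'' + 2x*y' - 6y = 0 the indicial equation is
# r^2 + (a-1)r + b = r^2 + r - 6 = 0, with roots r = 2 and r = -3.
x = sp.symbols('x', positive=True)
r = sp.symbols('r')
a, b = 2, -6

roots = sorted(sp.solve(r**2 + (a - 1)*r + b, r))
assert roots == [-3, 2]

for rv in roots:  # each x^r really solves the equation for x > 0
    y = x**rv
    assert sp.simplify(x**2*y.diff(x, 2) + a*x*y.diff(x) + b*y) == 0
```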