Find Area Where Solution Exists and Is Unique

Differential equation containing derivatives with respect to only one variable

In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions.[1] The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.[2]

Differential equations

A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form

$$a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} + b(x) = 0,$$

where $a_0(x)$, ..., $a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of the unknown function y of the variable x.

Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example, the Riccati equation).

Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
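For example, a numerical integrator approximates the solution of an initial value problem directly from the right-hand side of the equation. Below is a minimal sketch using SciPy's solve_ivp; the test equation y' = -2y with y(0) = 1 is an illustrative assumption, chosen because its exact solution e^(-2x) is available for comparison.

```python
# Minimal sketch of a numerical ODE solution with SciPy's solve_ivp.
# The equation y' = -2y is an illustrative assumption, not a prescribed example.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    # Right-hand side of the ODE y' = -2y.
    return -2.0 * y

sol = solve_ivp(f, t_span=(0.0, 2.0), y0=[1.0], dense_output=True)

xs = np.linspace(0.0, 2.0, 5)
print(sol.sol(xs)[0])        # numerical approximation of y(x)
print(np.exp(-2.0 * xs))     # exact solution y = exp(-2x), for comparison
```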

Background

Figure: parabolic projectile motion showing the velocity vector. The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.

Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, so that a differential equation describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.

Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates),[3] biology (infectious diseases, genetic variation), ecology and population modeling (population competition), and economics (stock trends, interest rates, and changes in the market equilibrium price).

Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.

A simple example is Newton's second law of motion: the relationship between the displacement x and the time t of an object under the force F is given by the differential equation

$$m\,\frac{\mathrm{d}^2 x(t)}{\mathrm{d}t^2} = F(x(t))$$

which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).[4] [5] [6] [7]
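In practice such an equation of motion is usually integrated numerically once a concrete force law is chosen. The sketch below assumes, purely for illustration, a linear restoring force F(x) = -kx and integrates m x'' = F(x) with SciPy after rewriting it as a first-order system in position and velocity.

```python
# Hedged example: Newton's second law m*x'' = F(x(t)) for an illustrative
# restoring force F(x) = -k*x (a linear spring; any F could be substituted).
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0                      # assumed mass and spring constant

def newton(t, state):
    x, v = state                     # state = (position, velocity)
    return [v, -k * x / m]           # x' = v,  v' = F(x)/m

sol = solve_ivp(newton, (0.0, 10.0), [1.0, 0.0], t_eval=np.linspace(0, 10, 201))
print(sol.y[0][:5])                  # first few positions x(t)
```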

Definitions

In what follows, let y be a dependent variable, x an independent variable, and y = f(x) an unknown function of x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, the Leibniz notation ($dy/dx$, $d^2y/dx^2$, ..., $d^ny/dx^n$) is more useful for differentiation and integration, whereas the Lagrange notation ($y'$, $y''$, ..., $y^{(n)}$) is more useful for representing derivatives of any order compactly, and Newton's notation ($\dot{y}$, $\ddot{y}$, $\dddot{y}$) is often used in physics for representing derivatives of low order with respect to time.

General definition

Let F be a given function of x, y, and derivatives of y. Then an equation of the form

$$F\left(x, y, y', \ldots, y^{(n-1)}\right) = y^{(n)}$$

is called an explicit ordinary differential equation of order n.[8] [9]

More generally, an implicit ordinary differential equation of order n takes the form:[10]

$$F\left(x, y, y', y'', \ldots, y^{(n)}\right) = 0$$

There are further classifications:

Autonomous
A differential equation not depending on x is called autonomous.
Linear
A differential equation is said to be linear if F can be written as a linear combination of y and its derivatives:
$$y^{(n)} = \sum_{i=0}^{n-1} a_i(x)\, y^{(i)} + r(x)$$
where $a_i(x)$ and $r(x)$ are continuous functions of x.[8] [11] [12] The function r(x) is called the source term, leading to two further important classifications:[11] [13]
Homogeneous
If r(x) = 0, then one "automatic" solution is the trivial solution y = 0. The solution of a linear homogeneous equation is a complementary function, denoted here by yc.
Nonhomogeneous (or inhomogeneous)
If r(x) ≠ 0, the additional solution to the complementary function is the particular integral, denoted here by yp.
Non-linear
A differential equation that cannot be written in this form. (A short computational sketch of these classifications appears after this list.)
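These labels can also be explored computationally. As a hedged illustration, SymPy's classify_ode lists which of its solution strategies apply to a given equation (its hint names are SymPy's own, not the exact terminology above); the two sample equations are illustrative assumptions.

```python
# Sketch: ask SymPy which solution strategies apply to each equation.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

linear_eq    = sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(x))   # linear, nonhomogeneous
nonlinear_eq = sp.Eq(y(x).diff(x), y(x)**2)              # non-linear, autonomous

print(sp.classify_ode(linear_eq, y(x)))
print(sp.classify_ode(nonlinear_eq, y(x)))
```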

System of ODEs

A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions, $\mathbf{y}(x) = [y_1(x), y_2(x), \ldots, y_m(x)]$, and F is a vector-valued function of y and its derivatives, then

$$\mathbf{y}^{(n)} = \mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n-1)}\right)$$

is an explicit system of ordinary differential equations of order n and dimension m. In column vector form:

$$\begin{pmatrix} y_1^{(n)} \\ y_2^{(n)} \\ \vdots \\ y_m^{(n)} \end{pmatrix} = \begin{pmatrix} f_1\left(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n-1)}\right) \\ f_2\left(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n-1)}\right) \\ \vdots \\ f_m\left(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n-1)}\right) \end{pmatrix}$$

These are not necessarily linear. The implicit analogue is:

$$\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n)}\right) = \boldsymbol{0}$$

where 0 = (0, 0, ..., 0) is the zero vector. In matrix form

$$\begin{pmatrix} f_1(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n)}) \\ f_2(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n)}) \\ \vdots \\ f_m(x, \mathbf{y}, \mathbf{y}', \mathbf{y}'', \ldots, \mathbf{y}^{(n)}) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

For a system of the form $\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}'\right) = \boldsymbol{0}$, some sources also require that the Jacobian matrix $\frac{\partial \mathbf{F}(x, \mathbf{u}, \mathbf{v})}{\partial \mathbf{v}}$ be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential-algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems.[14] [15] [16] Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme,[citation needed] although note that any ODE of order greater than one can be (and usually is) rewritten as a system of first-order ODEs,[17] which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.

The behavior of a system of ODEs can be visualized through the use of a phase portrait.
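As an illustration, a phase portrait of a planar system can be drawn by sampling the vector field on a grid. The sketch below assumes the pendulum system x' = y, y' = -sin x and uses matplotlib's streamplot; any planar system could be substituted.

```python
# Sketch of a phase portrait via matplotlib's streamplot for the illustrative
# pendulum system x' = y, y' = -sin(x).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2*np.pi, 2*np.pi, 40), np.linspace(-3, 3, 40))
u = y                    # dx/dt
v = -np.sin(x)           # dy/dt

plt.streamplot(x, y, u, v, density=1.2)
plt.xlabel('x')
plt.ylabel("x'")
plt.title('Phase portrait of the pendulum equation')
plt.show()
```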

Solutions

Given a differential equation

$$F\left(x, y, y', \ldots, y^{(n)}\right) = 0$$

a function $u\colon I \subset \mathbb{R} \to \mathbb{R}$, where I is an interval, is called a solution or integral curve for F, if u is n-times differentiable on I, and

$$F(x, u, u', \ldots, u^{(n)}) = 0, \quad x \in I.$$

Given two solutions $u\colon J \subset \mathbb{R} \to \mathbb{R}$ and $v\colon I \subset \mathbb{R} \to \mathbb{R}$, u is called an extension of v if $I \subset J$ and

$$u(x) = v(x), \quad x \in I.$$

A solution that has no extension is called a maximal solution. A solution defined on all of R is called a global solution.

A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions.[18] A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.[19]

In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.

Theories

Singular solutions

The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.

Reduction to quadratures

The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.

Fuchsian theory

Two memoirs by Fuchs[20] inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.

Lie's theory

From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.

Lie's group theory of differential equations offers two main benefits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.[21]

A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and nonlinear (partial) differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.

Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.

Sturm–Liouville theory

Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equations. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering.[22] SLPs are also useful in the analysis of certain partial differential equations.

Existence and uniqueness of solutions

There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are

  • Peano existence theorem: if F is continuous, local existence only.
  • Picard–Lindelöf theorem: if F is Lipschitz continuous, local existence and uniqueness.

In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.

Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.[23]

Local existence and uniqueness theorem simplified

The theorem can be stated simply as follows.[24] For the equation and initial value problem:

$$y' = F(x, y), \quad y_0 = y(x_0)$$

if F and ∂F/∂y are continuous in a closed rectangle

$$R = [x_0 - a, x_0 + a] \times [y_0 - b, y_0 + b]$$

in the x–y plane, where a and b are real (symbolically a, b ∈ ℝ), × denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval

$$I = [x_0 - h, x_0 + h] \subset [x_0 - a, x_0 + a]$$

for some h ∈ ℝ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form F(x, y), and it can also be applied to systems of equations.
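The standard constructive route to this result is Picard iteration: the initial value problem is rewritten as an integral equation and solved by successive approximations y_{k+1}(x) = y_0 + ∫ from x_0 to x of F(t, y_k(t)) dt. A minimal SymPy sketch follows, using the illustrative problem y' = y, y(0) = 1, whose exact solution is e^x.

```python
# Sketch of Picard iteration (successive approximations) for y' = F(x, y):
#   y_{k+1}(x) = y0 + integral from x0 to x of F(t, y_k(t)) dt.
# The test problem y' = y, y(0) = 1 is an illustrative assumption.
import sympy as sp

x, t = sp.symbols('x t')
F = lambda t, y: y           # right-hand side of y' = F(x, y)
x0, y0 = 0, 1

yk = sp.Integer(y0)          # start from the constant initial guess
for _ in range(5):
    yk = y0 + sp.integrate(F(t, yk.subs(x, t)), (t, x0, x))

print(sp.expand(yk))         # 1 + x + x**2/2 + ... : partial sums of exp(x)
```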

Global uniqueness and maximum domain of solution

When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:[25]

For each initial condition (x 0, y 0) there exists a unique maximum (possibly infinite) open interval

$$I_{\max} = (x_-, x_+), \qquad x_\pm \in \mathbb{R} \cup \{\pm\infty\}, \qquad x_0 \in I_{\max}$$

such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain $I_{\max}$.

In the case that $x_\pm \neq \pm\infty$, there are exactly two possibilities:

  • the solution blows up in finite time: $\limsup_{x \to x_\pm} \|y(x)\| \to \infty$;
  • the solution leaves the domain of definition: $\lim_{x \to x_\pm} \big(x, y(x)\big) \in \partial \bar{\Omega}$;

where Ω is the open set in which F is defined, and $\partial \bar{\Omega}$ is its boundary.

Note that the maximum domain of the solution

  • is always an interval (to have uniqueness)
  • may be smaller than $\mathbb{R}$
  • may depend on the specific choice of $(x_0, y_0)$.
Example.
$$y' = y^2$$

This means that $F(x, y) = y^2$, which is $C^1$ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.

Even in such a simple setting, the maximum domain of solution cannot be all of $\mathbb{R}$ since the solution is

$$y(x) = \frac{y_0}{(x_0 - x)\,y_0 + 1}$$

which has maximum domain:

$$\begin{cases} \mathbb{R} & y_0 = 0 \\[4pt] \left(-\infty,\ x_0 + \tfrac{1}{y_0}\right) & y_0 > 0 \\[4pt] \left(x_0 + \tfrac{1}{y_0},\ +\infty\right) & y_0 < 0 \end{cases}$$

This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being $\mathbb{R} \setminus \{x_0 + 1/y_0\}$, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.

The maximum domain is not $\mathbb{R}$ because

$$\lim_{x \to x_\pm} \|y(x)\| \to \infty,$$

which is one of the two possible cases according to the above theorem.
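A numerical experiment makes the finite maximal interval concrete. The sketch below (an illustration, using SciPy's event mechanism) integrates y' = y² from y(0) = 1 and stops once the solution exceeds 10³, which happens just before the blow-up point x₀ + 1/y₀ = 1.

```python
# Numerical illustration of the blow-up above: for y' = y^2 with x0 = 0, y0 = 1
# the closed-form solution is y = 1/(1 - x), so the maximal interval is (-inf, 1).
import numpy as np
from scipy.integrate import solve_ivp

def big(t, y):                 # event: fires when |y| reaches 1e3
    return abs(y[0]) - 1e3
big.terminal = True

sol = solve_ivp(lambda t, y: y**2, (0.0, 2.0), [1.0], events=big, rtol=1e-8)
print(sol.t_events[0])         # approximately 0.999 = x0 + 1/y0 - 1/1000
```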

Reduction of order

Differential equations can usually be solved more easily if the order of the equation can be reduced.

Reduction to a first-order system

Any explicit differential equation of order n,

$$F\left(x, y, y', y'', \ldots, y^{(n-1)}\right) = y^{(n)}$$

can be written as a system of n first-order differential equations by defining a new family of unknown functions

$$y_i = y^{(i-1)}.$$

for i = 1, 2,..., n. The n-dimensional system of first-order coupled differential equations is then

$$\begin{aligned} y_1' &= y_2 \\ y_2' &= y_3 \\ &\;\;\vdots \\ y_{n-1}' &= y_n \\ y_n' &= F(x, y_1, \ldots, y_n). \end{aligned}$$

or, more compactly, in vector notation:

$$\mathbf{y}' = \mathbf{F}(x, \mathbf{y})$$

where

$$\mathbf{y} = (y_1, \ldots, y_n), \quad \mathbf{F}(x, y_1, \ldots, y_n) = \big(y_2, \ldots, y_n, F(x, y_1, \ldots, y_n)\big).$$
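A short sketch of this reduction in code may be helpful. The helper below (its name and the sample equation y''' = -y are illustrative assumptions) builds the first-order system y_1' = y_2, ..., y_n' = F and passes it to a standard integrator.

```python
# Sketch of the order-reduction recipe: given y^(n) = F(x, y, y', ..., y^(n-1)),
# build the equivalent first-order system y_i' = y_{i+1}, y_n' = F.
import numpy as np
from scipy.integrate import solve_ivp

def as_first_order(F):
    """Turn y^(n) = F(x, y, y', ..., y^(n-1)) into a first-order system."""
    def g(x, Y):                         # Y = (y, y', ..., y^(n-1))
        return list(Y[1:]) + [F(x, *Y)]
    return g

# Illustrative example: y''' = -y with y(0) = 1, y'(0) = 0, y''(0) = 0.
F = lambda x, y, yp, ypp: -y
sys = as_first_order(F)
sol = solve_ivp(sys, (0.0, 5.0), [1.0, 0.0, 0.0])
print(sol.y[0][-1])                      # y(5), computed from the reduced system
```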

Summary of exact solutions

Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.

In the tables below, P(x), Q(x), P(y), Q(y), M(x, y), and N(x, y) are any integrable functions of x or y (or both); b and c are given real constants; and $C_1, C_2, \ldots$ are arbitrary constants (complex in general). The differential equations are given in equivalent alternative forms that lead to the solution through integration.

In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation $\int^x F(\lambda)\,d\lambda$ just means to integrate F(λ) with respect to λ and then, after the integration, substitute λ = x, without adding a constant of integration (constants are written explicitly where they appear).

Separable equations

First-order, separable in x and y (general case, see below for special cases)[26]

$$P_1(x)Q_1(y) + P_2(x)Q_2(y)\,\frac{dy}{dx} = 0 \qquad\Longleftrightarrow\qquad P_1(x)Q_1(y)\,dx + P_2(x)Q_2(y)\,dy = 0$$

Solution method: separation of variables (divide by $P_2 Q_1$).
General solution: $$\int^x \frac{P_1(\lambda)}{P_2(\lambda)}\,d\lambda + \int^y \frac{Q_2(\lambda)}{Q_1(\lambda)}\,d\lambda = C$$

First-order, separable in x[24]

$$\frac{dy}{dx} = F(x) \qquad\Longleftrightarrow\qquad dy = F(x)\,dx$$

Solution method: direct integration.
General solution: $$y = \int^x F(\lambda)\,d\lambda + C$$

First-order, autonomous, separable in y[24]

$$\frac{dy}{dx} = F(y) \qquad\Longleftrightarrow\qquad dy = F(y)\,dx$$

Solution method: separation of variables (divide by F).
General solution: $$x = \int^y \frac{d\lambda}{F(\lambda)} + C$$

First-order, separable in x and y[24]

$$P(y)\,\frac{dy}{dx} + Q(x) = 0 \qquad\Longleftrightarrow\qquad P(y)\,dy + Q(x)\,dx = 0$$

Solution method: integrate throughout.
General solution: $$\int^y P(\lambda)\,d\lambda + \int^x Q(\lambda)\,d\lambda = C$$
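As a quick check of the separable recipe, a computer algebra system reproduces the tabulated answer. The sketch below uses SymPy's dsolve on the illustrative equation dy/dx = xy, whose separated form ∫ dy/y = ∫ x dx integrates to y = C e^{x²/2}.

```python
# Hedged check of the separable recipe with SymPy on the sample ODE dy/dx = x*y.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode, y(x)))          # y(x) = C1*exp(x**2/2)
```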

General first-order equations

First-order, homogeneous[24]

$$\frac{dy}{dx} = F\!\left(\frac{y}{x}\right)$$

Solution method: set y = ux, then solve by separation of variables in u and x.
General solution: $$\ln(Cx) = \int^{y/x} \frac{d\lambda}{F(\lambda) - \lambda}$$

First-order, separable[26]

$$y\,M(xy) + x\,N(xy)\,\frac{dy}{dx} = 0 \qquad\Longleftrightarrow\qquad y\,M(xy)\,dx + x\,N(xy)\,dy = 0$$

Solution method: separation of variables (divide by xy).
General solution: $$\ln(Cx) = \int^{xy} \frac{N(\lambda)\,d\lambda}{\lambda\,\big[N(\lambda) - M(\lambda)\big]}$$
If N = M, the solution is xy = C.

Exact differential, first-order[24]

$$M(x, y)\,\frac{dy}{dx} + N(x, y) = 0 \qquad\Longleftrightarrow\qquad M(x, y)\,dy + N(x, y)\,dx = 0$$

where $\dfrac{\partial M}{\partial x} = \dfrac{\partial N}{\partial y}$

Solution method: integrate throughout.
General solution: $$F(x, y) = \int^y M(x, \lambda)\,d\lambda + \int^x N(\lambda, y)\,d\lambda + Y(y) + X(x) = C$$
where Y(y) and X(x) are functions from the integrals rather than constant values, which are set to make the final function F(x, y) satisfy the initial equation.

Inexact differential, first-order[24]

$$M(x, y)\,\frac{dy}{dx} + N(x, y) = 0 \qquad\Longleftrightarrow\qquad M(x, y)\,dy + N(x, y)\,dx = 0$$

where $\dfrac{\partial M}{\partial x} \neq \dfrac{\partial N}{\partial y}$

Solution method: find an integrating factor μ(x, y) satisfying $$\frac{\partial(\mu M)}{\partial x} = \frac{\partial(\mu N)}{\partial y}$$
General solution: if μ(x, y) can be found, $$F(x, y) = \int^y \mu(x, \lambda)\,M(x, \lambda)\,d\lambda + \int^x \mu(\lambda, y)\,N(\lambda, y)\,d\lambda + Y(y) + X(x) = C$$

General second-order equations

Second-order, autonomous[27]

$$\frac{d^2 y}{dx^2} = F(y)$$

Solution method: multiply both sides of the equation by 2 dy/dx, substitute $2\,\frac{dy}{dx}\frac{d^2 y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right)^2$, then integrate twice.
General solution: $$x = \pm \int^y \frac{d\lambda}{\sqrt{2\int^\lambda F(\varepsilon)\,d\varepsilon + C_1}} + C_2$$

Linear to the nth order equations

First-order, linear, inhomogeneous, function coefficients[24]

$$\frac{dy}{dx} + P(x)\,y = Q(x)$$

Solution method: integrating factor $e^{\int^x P(\lambda)\,d\lambda}$.
General solution: $$y = e^{-\int^x P(\lambda)\,d\lambda}\left[\int^x e^{\int^\lambda P(\varepsilon)\,d\varepsilon}\,Q(\lambda)\,d\lambda + C\right]$$

Second-order, linear, inhomogeneous, function coefficients

$$\frac{d^2 y}{dx^2} + 2p(x)\frac{dy}{dx} + \left(p(x)^2 + p'(x)\right)y = q(x)$$

Solution method: integrating factor $e^{\int^x p(\lambda)\,d\lambda}$.
General solution: $$y = e^{-\int^x p(\lambda)\,d\lambda}\left[\int^x\!\left(\int^\xi e^{\int^\lambda p(\varepsilon)\,d\varepsilon}\,q(\lambda)\,d\lambda\right)d\xi + C_1 x + C_2\right]$$

Second-order, linear, inhomogeneous, constant coefficients[28]

$$\frac{d^2 y}{dx^2} + b\,\frac{dy}{dx} + c\,y = r(x)$$

Solution method: complementary function $y_c$: assume $y_c = e^{\alpha x}$, substitute, and solve the polynomial in α to find the linearly independent functions $e^{\alpha_j x}$. Particular integral $y_p$: in general the method of variation of parameters, though for very simple r(x) inspection may work.[24]
General solution: $$y = y_c + y_p$$

If $b^2 > 4c$, then
$$y_c = C_1 e^{-\frac{x}{2}\left(b + \sqrt{b^2 - 4c}\right)} + C_2 e^{-\frac{x}{2}\left(b - \sqrt{b^2 - 4c}\right)}$$

If $b^2 = 4c$, then
$$y_c = (C_1 x + C_2)\,e^{-\frac{bx}{2}}$$

If $b^2 < 4c$, then
$$y_c = e^{-\frac{bx}{2}}\left[C_1 \sin\!\left(x\,\frac{\sqrt{4c - b^2}}{2}\right) + C_2 \cos\!\left(x\,\frac{\sqrt{4c - b^2}}{2}\right)\right]$$

nth-order, linear, inhomogeneous, constant coefficients[28]

$$\sum_{j=0}^{n} b_j\,\frac{d^j y}{dx^j} = r(x)$$

Solution method: complementary function $y_c$: assume $y_c = e^{\alpha x}$, substitute, and solve the polynomial in α to find the linearly independent functions $e^{\alpha_j x}$. Particular integral $y_p$: in general the method of variation of parameters, though for very simple r(x) inspection may work.[24]
General solution: $$y = y_c + y_p$$

Since the $\alpha_j$ are the solutions of the polynomial of degree n, $\prod_{j=1}^{n}(\alpha - \alpha_j) = 0$, then:

for $\alpha_j$ all different,
$$y_c = \sum_{j=1}^{n} C_j\,e^{\alpha_j x}$$

for each root $\alpha_j$ repeated $k_j$ times,
$$y_c = \sum_{j=1}^{n}\left(\sum_{\ell=1}^{k_j} C_{j,\ell}\,x^{\ell - 1}\right) e^{\alpha_j x}$$

for some $\alpha_j$ complex, setting $\alpha_j = \chi_j + i\gamma_j$ and using Euler's formula allows some terms in the previous results to be written in the form
$$C_j e^{\alpha_j x} = C_j e^{\chi_j x} \cos(\gamma_j x + \varphi_j)$$

where $\varphi_j$ is an arbitrary constant (phase shift).
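The constant-coefficient recipe is easy to mechanize: the complementary function is assembled from the roots of the characteristic polynomial. A hedged sketch with NumPy (the coefficients b = 3, c = 2 and the constants C₁, C₂ are illustrative assumptions):

```python
# Sketch: build the complementary function of y'' + 3y' + 2y = 0 from the
# roots of the characteristic polynomial alpha**2 + 3*alpha + 2 = 0.
import numpy as np

coeffs = [1.0, 3.0, 2.0]
alphas = np.roots(coeffs)             # array([-2., -1.]) (ordering may vary)
print(alphas)

# y_c(x) = C1*exp(alpha1*x) + C2*exp(alpha2*x); check it satisfies the ODE.
C1, C2 = 1.0, -0.5
y   = lambda x: C1*np.exp(alphas[0]*x) + C2*np.exp(alphas[1]*x)
yp  = lambda x: C1*alphas[0]*np.exp(alphas[0]*x) + C2*alphas[1]*np.exp(alphas[1]*x)
ypp = lambda x: C1*alphas[0]**2*np.exp(alphas[0]*x) + C2*alphas[1]**2*np.exp(alphas[1]*x)

xs = np.linspace(0, 1, 5)
print(np.max(np.abs(ypp(xs) + 3*yp(xs) + 2*y(xs))))   # ~0 up to round-off
```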

The guessing method

When all other methods for solving an ODE fail, or when we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and checking that it is correct. To use this method, we guess a solution to the differential equation and then plug it into the equation to check whether it is satisfied. If it is, we have a particular solution to the DE; otherwise we start over and try another guess. For instance, we could guess that the solution has the form $y = Ae^{\alpha t}$, since exponentials of this form (with α possibly complex) are very common solutions and can describe both exponential and oscillatory behavior.

In the case of a first-order ODE that is non-homogeneous, we first need to find a solution to the homogeneous portion of the DE (for constant coefficients, via its characteristic equation), and then find a particular solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the total solution to the ODE, that is:

$$\text{total solution} = \text{homogeneous solution} + \text{particular solution}$$
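A small SymPy sketch of this procedure; the equation y' + 2y = 3e^t and the guess y_p = A e^t are illustrative assumptions.

```python
# Sketch of the guessing step: for y' + 2y = 3*exp(t), guess y_p = A*exp(t),
# substitute, solve for A, then add the homogeneous solution C*exp(-2t).
import sympy as sp

t, A, C = sp.symbols('t A C')
guess = A * sp.exp(t)

residual = sp.diff(guess, t) + 2*guess - 3*sp.exp(t)   # plug the guess into the ODE
A_value = sp.solve(sp.Eq(residual, 0), A)[0]
print(A_value)                                         # 1

total = C * sp.exp(-2*t) + guess.subs(A, A_value)
print(sp.simplify(sp.diff(total, t) + 2*total - 3*sp.exp(t)))   # 0, so it solves the ODE
```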

Software for ODE solving

  • Maxima, an open-source computer algebra system.
  • COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
  • MATLAB, a technical computing application (MATrix LABoratory)
  • GNU Octave, a high-level language, primarily intended for numerical computations.
  • Scilab, an open source application for numerical computation.
  • Maple, a proprietary application for symbolic calculations.
  • Mathematica, a proprietary application primarily intended for symbolic calculations.
  • SymPy, a Python package that can solve ODEs symbolically
  • Julia (programming language), a high-level language primarily intended for numerical computations.
  • SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
  • SciPy, a Python package that includes an ODE integration module.
  • Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
  • GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.

See also

  • Boundary value problem
  • Examples of differential equations
  • Laplace transform applied to differential equations
  • List of dynamical systems and differential equations topics
  • Matrix differential equation
  • Method of undetermined coefficients
  • Recurrence relation

Notes

  1. ^ Dennis G. Zill (15 March 2012). A First Course in Differential Equations with Modeling Applications. Cengage Learning. ISBN 978-1-285-40110-2. Archived from the original on 17 January 2020. Retrieved 11 July 2019.
  2. ^ "What is the origin of the term "ordinary differential equations"?". hsm.stackexchange.com. Stack Exchange. Retrieved 2016-07-28 .
  3. ^ Mathematics for Chemists, D.M. Hirst, Macmillan Press, 1976, (No ISBN) SBN: 333-18172-7
  4. ^ Kreyszig (1972, p. 64)
  5. ^ Simmons (1972, pp. 1, 2)
  6. ^ Halliday & Resnick (1977, p. 78)
  7. ^ Tipler (1991, pp. 78–83)
  8. ^ a b Harper (1976, p. 127)
  9. ^ Kreyszig (1972, p. 2)
  10. ^ Simmons (1972, p. 3)
  11. ^ a b Kreyszig (1972, p. 24)
  12. ^ Simmons (1972, p. 47)
  13. ^ Harper (1976, p. 128)
  14. ^ Kreyszig (1972, p. 12)
  15. ^ Ascher & Petzold (1998, p. 12)
  16. ^ Achim Ilchmann; Timo Reis (2014). Surveys in Differential-Algebraic Equations II. Springer. pp. 104–105. ISBN 978-3-319-11050-9.
  17. ^ Ascher & Petzold (1998, p. 5)
  18. ^ Kreyszig (1972, p. 78)
  19. ^ Kreyszig (1972, p. 4)
  20. ^ Crelle, 1866, 1868
  21. ^ Dresner (1999, p. 9)
  22. ^ Logan, J. (2013). Applied mathematics (Fourth ed.).
  23. ^ Ascher & Petzold (1998, p. 13)
  24. ^ a b c d e f g h i j Elementary Differential Equations and Boundary Value Problems (4th Edition), W.E. Boyce, R.C. Diprima, Wiley International, John Wiley & Sons, 1986, ISBN 0-471-83824-1
  25. ^ Boscain & Chitour (2011, p. 21)
  26. ^ a b Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M. R. Spiegel, J. Liu, Schaum's Outline Series, 2009, ISBN 978-0-07-154855-7
  27. ^ Further Elementary Analysis, R. Porter, G. Bell & Sons (London), 1978, ISBN 0-7135-1594-5
  28. ^ a b Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3

References

  • Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
  • Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
  • Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
  • Polyanin, A. D.; Zaitsev, V. F. (2003), Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.), Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-297-2
  • Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill, LCCN 75173716
  • Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
  • Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (PDF) (in French)
  • Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 978-0750305303
  • Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2


External links

  • "Differential equation, ordinary", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  • EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
  • Online Notes / Differential Equations by Paul Dawkins, Lamar University.
  • Differential Equations, S.O.S. Mathematics.
  • A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
  • Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
  • Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
  • Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
  • Solving an ordinary differential equation in Wolfram|Alpha


Source: https://en.wikipedia.org/wiki/Ordinary_differential_equation
