CHAPTER 9

ORDINARY DIFFERENTIAL EQUATIONS

    9.1. Introduction

      9.1-1. Survey

      9.1-2. Ordinary Differential Equations

      9.1-3. Systems of Differential Equations

      9.1-4. Existence and Desirable Properties of Solutions

      9.1-5. General Hints

    9.2. First-order Equations

      9.2-1. Existence and Uniqueness of Solutions

      9.2-2. Geometrical Interpretation. Singular Integrals

      9.2-3. Transformation of Variables

      9.2-4. Solution of Special Types of First-order Equations

      9.2-5. General Methods of Solution

(a) Picard’s Method of Successive Approximations

(b) Taylor-series Expansion

    9.3. Linear Differential Equations

      9.3-1. Linear Differential Equations. Superposition Theorems

      9.3-2. Linear Independence and Fundamental Systems of Solutions

      9.3-3. Solution by Variation of Constants. Green’s Functions

      9.3-4. Reduction of Two-point Boundary-value Problems to Initial-value Problems

      9.3-5. Complex-variable Theory of Linear Differential Equations. Taylor-series Solution and Effects of Singularities

      9.3-6. Solution of Homogeneous Equations by Series Expansion about a Regular Singular Point

      9.3-7. Integral-transform Methods

      9.3-8. Linear Second-order Equations

      9.3-9. Gauss’s Hypergeometric Differential Equation and Riemann’s Differential Equation

      9.3-10. Confluent Hypergeometric Functions

      9.3-11. Pochhammer’s Notation

    9.4. Linear Differential Equations with Constant Coefficients

      9.4-1. Homogeneous Linear Equations with Constant Coefficients

      9.4-2. Nonhomogeneous Equations. Normal Response, Steady-state Solution, and Transients

      9.4-3. Superposition Integrals and Weighting Functions

      9.4-4. Stability

      9.4-5. The Laplace-transform Method of Solution

      9.4-6. Periodic Forcing Functions and Solutions. The Phasor Method

(a) Sinusoidal Forcing Functions and Solutions. Sinusoidal Steady-state Solutions

(b) The Phasor Method

(c) Rotating Phasors

(d) More General Periodic Forcing Functions

      9.4-7. Transfer Functions and Frequency-response Functions

(a) Transfer Functions

(b) Frequency-response Functions

(c) Relations between Transfer Functions or Frequency-response Functions and Weighting Functions

      9.4-8. Normal Coordinates and Normal-mode Oscillations

(a) Free Oscillations

(b) Forced Oscillations

    9.5. Nonlinear Second-order Equations

      9.5-1. Introduction

      9.5-2. The Phase-plane Representation. Graphical Method of Solution

      9.5-3. Critical Points and Limit Cycles

(a) Ordinary and Critical Phase-plane Points

(b) Periodic Solutions and Limit Cycles

(c) Poincaré’s Index and Bendixson’s Theorems

      9.5-4. Poincaré-Lyapounov Theory of Stability

      9.5-5. The Approximation Method of Krylov and Bogoliubov

(a) The First Approximation

(b) The Improved First Approximation

      9.5-6. Energy-integral Solution

    9.6. Pfaffian Differential Equations

      9.6-1. Pfaffian Differential Equations

      9.6-2. The Integrable Case

    9.7. Related Topics, References, and Bibliography

      9.7-1. Related Topics

      9.7-2. References and Bibliography

9.1. INTRODUCTION

9.1-1. Survey.  Differential equations are used to express relations between changes in physical quantities and are thus of great importance in many applications. Sections 9.1-2 to 9.3-10 present a straightforward classical introduction to ordinary differential equations, including some complex-variable theory. Sections 9.4-1 to 9.4-8 introduce the linear differential equations with constant coefficients used in the analysis of vibrations, electric circuits, and control systems, with emphasis on solutions by Laplace-transform methods. Sections 9.5-1 to 9.5-6 deal with nonlinear second-order equations. Sections 9.6-1 and 9.6-2 introduce Pfaffian differential equations, although these are not ordinary differential equations.

Some naturally related material is treated in other chapters of this handbook, particularly in Chap. 8 and Secs. 13.6-1 to 13.6-7. Boundary-value problems, eigenvalue problems, and orthogonal-function expansions of solutions are discussed in Chap. 15, and a number of differential equations defining special functions are treated in Chap. 21.

The notation used in the various subdivisions of this chapter has been chosen so as to simplify reference to standard textbooks in different special fields. Thus the usually real variables in Secs. 9.2-1 to 9.2-5 are denoted by x, y = y(x); the frequently complex variables encountered in the general theory of linear ordinary differential equations (Secs. 9.3-1 to 9.3-10) are denoted by z, w = w(z). The variables in Secs. 9.4-1 to 9.5-6 usually represent physical time and various mechanical or electrical variables and are thus introduced as t, yk = yk(t).

9.1-2. Ordinary Differential Equations.  An ordinary differential equation of order r is an equation

image

to be satisfied by the function y = y(x) together with its derivatives y′(x), y′′(x), . . . , y(r)(x) with respect to a single independent variable x. To solve (integrate) a given differential equation (1) means to find functions (solutions, integrals) y(x) which satisfy Eq. (1) for all values of x in a specified bounded or unbounded interval (a, b). Note that solutions can be checked by resubstitution.

The complete primitive (complete integral, general solution) of an ordinary differential equation of order r has the form

image

where C1, C2, . . . , Cr are r arbitrary constants (constants of integration, see also Sec. 4.6-4). Each particular choice of these r constants yields a particular integral (2) of the given differential equation. Typical problems require one to find the particular integral (2) subject to r initial conditions

image

which determine the r constants C1, C2, . . . , Cr. Alternatively, one may be given r boundary conditions on y (x) and its derivatives for x = a and x = b (see also Sec. 9.3-4).*

Many ordinary differential equations admit additional solutions known as singular integrals which are not included in the complete primitive (2) (see also Sec. 9.2-2b).

A differential equation is homogeneous if and only if ay(x) is a solution for all a whenever y(x) is a solution (see also Secs. 9.1-5 and 9.3-4).

* Strictly speaking, initial and boundary conditions refer to unilateral derivatives (Sec. 4.5-1).

Given an r-parameter family of suitably differentiable functions (2), one can eliminate C1, C2, . . . , Cr from the r + 1 equations y(j) = y(j)(x, C1, C2, . . . , Cr) (j = 0, 1, 2, . . . , r) to obtain an rth-order differential equation describing the family.

NOTE: An ordinary differential equation is a special instance of a functional equation imposing conditions on the functional dependence y = y(x) for a set of values of x. OTHER EXAMPLES OF FUNCTIONAL EQUATIONS: y(x1x2) = y(x1) + y(x2) [logarithmic property, satisfied by y(x) = A log x], partial differential equations (Sec. 10.1-1), integral equations (Sec. 15.3-2), and difference equations (Sec. 20.4-3).

9.1-3. Systems of Differential Equations (see also Secs. 13.6-1 to 13.6-7).  A system of ordinary differential equations

image

involves a set of unknown functions y1 = y1(x), y2 = y2(x), . . . and their derivatives with respect to a single independent variable x. The order ri of each differential equation (4) is that of the highest derivative occurring. In general, one will require n differential equations (4) to determine n unknown functions yk(x); and the general solution y1 = y1(x), y2 = y2(x), . . . will involve a number of arbitrary constants equal to r = r1 + r2 + . . . + rn.

The solution of a system (4) can be reduced to that of a single ordinary differential equation of order r through elimination of n − 1 variables yk and their derivatives. More importantly, one can reduce every system (4) to an equivalent system of r first-order equations by introducing higher-order derivatives as new variables.

9.1-4. Existence and Desirable Properties of Solutions.  A properly posed differential-equation problem requires an existence proof indicating the construction of a solution subject to the given type of initial or boundary conditions. The existence of physical phenomena described by a given differential equation may suggest but does not prove the existence of a solution; an existence proof checks the self-consistency of the mathematical model (see also Secs. 4.2-1b and 12.1-1; see Secs. 9.2-1 and 9.3-5 for examples of existence theorems).

It is desirable to design mathematical models involving differential equations so that the solutions are continuous functions of numerical coefficients, initial conditions, etc., so as to avoid excessive errors in solutions due to small errors in numerical data (see also Sec. 9.2-1a).

9.1-5. General Hints.  (a) Substitution of a Taylor series (Sec. 4.10-4) or other series expansion for y(x) in a given differential equation may yield equations for the unknown coefficients (see also Secs. 9.2-5b and 9.3-5). Many differential equations can be simplified through transformation of variables (Secs. 9.1-5b, 9.1-3, and 9.3-8c). Every differential equation or system of differential equations can be reduced to a system of first-order equations, so that the methods of Sec. 9.2-5 apply.

(b) The following special types of differential equations reduce easily to equations of lower order (see also Secs. 9.2-3 and 9.5-6):

image

If a given differential equation F(x, y, y′, y′′, . . . , y(r)) = 0 is homogeneous in the arguments y, y′, y′′, . . . , y(r) (Sec. 4.5-5; this does not necessarily imply that the differential equation is homogeneous in the sense of Sec. 9.1-2), introduce ȳ = y′/y.

9.2. FIRST-ORDER EQUATIONS

9.2-1. Existence and Uniqueness of Solutions.  (a) A given first-order differential equation expressible in the form

image

has a solution y = y(x) through every “point” (x = x0, y = y0) with a neighborhood throughout which f(x, y) is continuous. More specifically, let D be a region of “points” (x, y), (x, ȳ) where f(x, y) is single-valued, bounded, and continuous and

image

for some real M independent of y and ȳ. Then the given differential equation (1) has a unique solution y = y(x) through every point (x = x0, y = y0) of D, and y(x) is a continuous function of the given value y0 = y(x0). Each solution extends to the boundary of D.

The Lipschitz condition (2) is satisfied, in particular, whenever f(x, y) has a bounded and continuous derivative ∂f/∂y in D.

(b) (See also Sec. 9.1-3). An analogous existence theorem applies to systems of first-order differential equations

image

if the Lipschitz condition (2) is replaced by

image

9.2-2. Geometrical Interpretation. Singular Integrals (see also Secs. 17.1-1 to 17.1-7). (a) If x, y are regarded as rectangular cartesian coordinates, a first-order differential equation

image

describes a “field” of line elements (x, y, p) or elements of straight lines through (x, y) with slope p = dy/dx = f(x, y). Each line element is tangent to a curve of the one-parameter family of solutions

image

where λ is a constant of integration.

A plot of the field of tangent directions permits at least rough graphical determination of solutions; the general character of the family of solutions may be further discussed in the manner of Sec. 9.5-2. It may be helpful to know that the curves F(x, y, p1) = 0 or f(x, y) = p1 are isoclines where the solution curves have a specified fixed slope p1.

The curves image are loci of points of inflection (see also Sec. 9.5-2).

(b) Singular Integrals (see also Sec. 9.1-2).  Let F(x, y, p) be twice continuously differentiable with respect to x and y, and let ∂F/∂y ≠ 0. Elimination of p from

image

yields a curve or set of curves called the p discriminant of the given differential equation (locus of singular line elements). A curve defined by Eq. (7) is a singular integral of the given differential equation if image on this curve, unless both ∂F/∂x and ∂F/∂y vanish at a point of the curve. Geometrically, such singular integrals are frequently envelopes of the family of solution curves (6), and may thus be obtained from the complete primitive (6) in the manner of Sec. 17.1-7.

9.2-3. Transformation of Variables (see Sec. 9.2-4 for examples). (a) A suitable continuously differentiable transformation

image

will transform the given differential equation (1) or (5) into a new differential equation relating image and image. The new equation may be simpler, or a solution image may be known. Once image is found, y = y(x) is given implicitly, or by inverse transformation.

(b) Contact Transformations (see also Secs. 10.2-5, 10.2-7, and 11.6-8). A set of twice continuously differentiable transformation equations

image

with the special property

image

or

image

defines a contact transformation associating line elements (Sec. 9.2-2a) (x, y, p) and image so that line elements forming regular arcs are mapped onto regular arcs, and contact of regular arcs is preserved. It is then legitimate to write p = dy/dx and p̄ = dȳ/dx̄, and to use suitable contact transformations (9) to simplify the differential equation (1) or (5). Once a solution image of the transformed equation is known, y = y(x) is given implicitly or by inverse transformation.

In particular, g(x, y, p) = 1 yields the easily reversible contact transformation

image

which transforms a given differential equation (5) into

image

Equation (11) may be a simpler differential equation or, indeed, an ordinary equation relating x̄ and ȳ.

9.2-4. Solution of Special Types of First-order Equations.  (a) The following special types of first-order equations are relatively easy to solve.

      1. The variables are separable: y' = f1(x)/f2(y). Obtain the solution from ∫f2(y) dy = ∫f1(x) dx + C.

      2. “Homogeneous” first-order equations:* y′ = f(y/x). Introduce ȳ = y/x to reduce to type 1.

      3. Exact differential equations can be written in the form

image

where the expression on the left is an exact differential dΦ(x, y) (Sec. 5.7-1). Obtain the solution from

image

* Note that the expression “homogeneous” differential equation is here not used in the sense defined in Sec. 9.1-2.

If the expression on the left of Eq. (12) is not an exact differential, one may be able to find an integrating factor µ = µ(x, y) such that multiplication of Eq. (12) by µ(x, y) yields an exact differential equation. The integrating factor µ(x, y) satisfies the partial differential equation

image

      4. The linear first-order equation y′ + a(x)y = f(x) (see also Secs. 9.3-1 and 9.3-3) admits the integrating factor

image

The complete primitive is then

image

Many first-order equations can be reduced to one of the above types by transformation of variables (Sec. 9.2-3). In particular

y′ = f(αx + βy) reduces to type 1 if one introduces x̄ = αx + βy.

image reduces to type 2 by a coordinate translation if α1β2 − α2β1 ≠ 0; otherwise introduce image to separate the variables.

y′ = f1(x)y + f2(x)yn (BERNOULLI’S DIFFERENTIAL EQUATION) reduces to a linear equation if one introduces ȳ = y1−n.
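As a numerical illustration of type 4, the following Python sketch (function name and quadrature rule are illustrative choices, not part of the handbook) evaluates the complete primitive of y′ + a(x)y = f(x) by the integrating-factor formula, accumulating ∫a dx and ∫e^∫a f dx with the trapezoidal rule:

```python
import math

def linear_first_order(a, f, x0, y0, x1, n=4000):
    """Evaluate y(x1) for y' + a(x)y = f(x), y(x0) = y0, using the
    integrating factor exp(integral of a):
    y = exp(-A(x)) * (integral_{x0}^{x} exp(A(t)) f(t) dt + y0),
    with both integrals accumulated by the trapezoidal rule."""
    h = (x1 - x0) / n
    A = 0.0                       # running integral of a
    I = 0.0                       # running integral of exp(A) * f
    prev_a = a(x0)
    prev_g = f(x0)                # exp(0) * f(x0)
    for k in range(1, n + 1):
        x = x0 + k * h
        A += 0.5 * h * (prev_a + a(x))
        g = math.exp(A) * f(x)
        I += 0.5 * h * (prev_g + g)
        prev_a, prev_g = a(x), g
    return math.exp(-A) * (I + y0)

# y' + y = x, y(0) = 0 has the exact solution y = x - 1 + e^(-x)
y_at_1 = linear_first_order(lambda x: 1.0, lambda x: x, 0.0, 0.0, 1.0)
```

For this test problem the exact value is y(1) = e⁻¹, so the quadrature error is easy to check.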

(b) Given a first-order equation of the form

image

it may be advantageous to differentiate both sides with respect to x. The resulting differential equation

image

might be easy to solve for y′ = y′(x) or y′ = y′(y), respectively; substitution of this result into the given Eq. (16) yields the desired relation of x and y. If the solution of Eq. (17) takes the form u(x, y′) = 0 or u(y, y′) = 0, the desired relation of x and y is given in terms of a parameter p = y′.

EXAMPLES: Clairaut’s differential equation y = y′x + f(y′) yields the complete primitive y = Cx + f(C) and the singular integral (in parametric representation) x = −f′(p), y = −pf′(p) + f(p). Lagrange’s differential equation y = xf1(p) + f2(p) is solved in the same manner.
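For f(y′) = y′², say, the complete primitive is y = Cx + C² and the singular integral is x = −2p, y = −p², i.e., the parabola y = −x²/4, which envelops the family of lines. A small Python check (all names are illustrative) verifies that each line of the family touches the envelope at x = −2C:

```python
# Clairaut example y = y'x + y'^2, i.e. f(p) = p^2:
#   complete primitive:  y = C*x + C^2
#   singular integral:   x = -f'(p) = -2p, y = -p*f'(p) + f(p) = -p^2,
#                        i.e. y = -x^2/4 (the envelope of the lines)
def line(C, x):
    return C * x + C * C

def envelope(x):
    return -x * x / 4.0

# each line touches the parabola where x = -2C; there the slopes also
# agree (line slope C, envelope slope -x/2 = C), so the contact is tangential
touches = [abs(line(C, -2 * C) - envelope(-2 * C)) for C in (-1.0, 0.5, 2.0)]
```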

(c)Riccati Equations.   The differential equation

image

is sometimes simplified by the transformation y = 1/ȳ; alternatively,

image

leads to a homogeneous second-order equation for image:

image

If a particular integral y1(x) of Eq. (18) is known, the transformation

image

yields a linear differential equation. If one knows two particular integrals y1, y2 or three particular integrals y1, y2, y3, one has, respectively,

image

For any four particular integrals y1, y2, y3, y4, the double ratio (y1 − y2)(y3 − y4)/(y1 − y3)(y2 − y4) is constant.
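The constancy of the double ratio is easy to check numerically. For instance, y′ = 1 − y² is a Riccati equation with particular integrals y = 1, y = −1, and y = tanh (x + const); a small Python sketch (names illustrative):

```python
import math

# Four particular integrals of the Riccati equation y' = 1 - y^2:
#   y1 = 1, y2 = -1 (constant solutions), y3 = tanh x, y4 = tanh (x + 1)
def double_ratio(x):
    y1, y2 = 1.0, -1.0
    y3, y4 = math.tanh(x), math.tanh(x + 1.0)
    return (y1 - y2) * (y3 - y4) / ((y1 - y3) * (y2 - y4))

r_a = double_ratio(0.0)
r_b = double_ratio(0.7)
# the double ratio takes the same value at every x
```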

The special Riccati equation

image

can be reduced to type 1 if m = 4k/(1 − 2k) (k = 0, ±1, ±2, . . .). For k > 0, the transformation image reduces Eq. (21) to a similar equation

image

image

The procedure is repeated until (after k steps) the right side of the differential equation is constant.

Similarly, for k < 0, the transformation x = −1/(m + 1), image yields a differential equation of the form (22a) with

image

9.2-5. General Methods of Solution.  (a) Picard’s Method of Successive Approximations. To solve the differential equation y′ = f(x, y) for a given initial value y(x0) = y0, start with a trial solution y[0](x) and compute successive approximations

image

to the desired solution y(x). The process converges subject to the conditions of Sec. 9.2-1. Picard’s method is useful mainly if the integrals in Eq. (23) can be evaluated in closed form, although numerical integration can, in principle, be used.

A completely analogous procedure applies to systems (3) of first-order differential equations.
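For the simple initial-value problem y′ = y, y(0) = 1, the iteration (23) can be carried out exactly in polynomial arithmetic; the successive approximations are the partial sums of the Taylor series of eˣ. A minimal Python sketch (the representation by coefficient lists and the name picard_step are illustrative choices):

```python
from fractions import Fraction

def picard_step(coeffs, y0=Fraction(1)):
    """One Picard iteration for y' = y, y(0) = 1:
    y[n+1](x) = y0 + integral_0^x y[n](t) dt,
    with each iterate stored as its polynomial coefficients."""
    integral = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integral[0] = y0              # constant of integration = initial value
    return integral

y = [Fraction(1)]                 # trial solution y[0](x) = 1
for _ in range(6):                # six successive approximations
    y = picard_step(y)

# the coefficients approach 1/k!, i.e. the Taylor series of e^x
```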

(b) Taylor-series Expansion (see also Sec. 4.10-4). If the given function f(x, y) is suitably differentiable, obtain the coefficients y(m)(x0)/m! of the Taylor series

image

by successive differentiations of the given differential equation:

image

with x = x0, y = y(x0) = y0.

An analogous procedure applies to systems of first-order equations.
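For y′ = x + y, y(0) = 1, successive differentiation gives y′′ = 1 + y′ and y⁽ᵐ⁾ = y⁽ᵐ⁻¹⁾ for m ≥ 3, so the Taylor coefficients follow mechanically; the exact solution y = 2eˣ − x − 1 provides a check. An illustrative Python sketch (names are arbitrary):

```python
import math

# Successive differentiation of y' = x + y at x0 = 0, y0 = 1:
#   y'(0) = x0 + y0 = 1,  y''(0) = 1 + y'(0) = 2,  y'''(0) = y''(0), ...
def taylor_solution(x, x0=0.0, y0=1.0, terms=15):
    derivs = [y0, x0 + y0]           # y(x0), y'(x0)
    derivs.append(1.0 + derivs[1])   # y'' = 1 + y'
    for _ in range(3, terms):
        derivs.append(derivs[-1])    # y^(m) = y^(m-1) for m >= 3
    return sum(d * (x - x0) ** m / math.factorial(m)
               for m, d in enumerate(derivs))

# exact solution of y' = x + y, y(0) = 1:  y = 2*e^x - x - 1
```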

9.3. LINEAR DIFFERENTIAL EQUATIONS

9.3-1. Linear Differential Equations. Superposition Theorems (see also Secs. 10.4-2, 13.6-2, 13.6-3, 14.3-1, and 15.4-2). A linear ordinary differential equation of order r relating the real or complex variables z and w = w(z) has the form

image

where the ak(z) and f(z) are real or complex functions of z. The general solution (Sec. 9.1-2) of a linear differential equation (1) can be expressed as the sum of any particular integral and the general solution of the homogeneous linear differential equation (Sec. 9.1-2)

image

For any given nonhomogeneous or “complete” linear differential equation (1), the homogeneous equation (2) is known as the complementary equation or reduced equation, and its general solution as the complementary function.

Let w1(z) and w2(z) be particular integrals of the linear differential equation (1) for the respective “forcing functions” f(z) ≡ f1(z) and f(z) ≡ f2(z). Then αw1(z) + β w2(z) is a particular integral for the forcing function f(z) ≡ αf1(z) + βf2(z) (Superposition Principle). In particular, every linear combination of solutions of a homogeneous linear differential equation (2) is also a solution.

The superposition theorems often represent some physical superposition principle. Mathematically, they permit one to construct solutions of Eq. (1) or (2) subject to given initial or boundary conditions by linear superposition.

Analogous theorems apply to systems of linear differential equations (see also Sec. 9.4-2).

9.3-2. Linear Independence and Fundamental Systems of Solutions (see also Secs. 1.9-3, 14.2-3, and 15.2-1a). (a) Let w1(z), w2(z), . . . , wr(z) be r − 1 times continuously differentiable solutions of a homogeneous linear differential equation (2) with continuous coefficients in a domain D of values of z. The r solutions wk(z) are linearly independent in D if and only if image in D implies λ1 = λ2 = . . . = λr = 0 (Sec. 1.9-3). This is true if and only if the Wronskian determinant (Wronskian)

image

differs from zero throughout D. W = 0 for any z in D implies W = 0 for all z in D.*

(b) A homogeneous linear differential equation (2) of order r has at most r linearly independent solutions; r linearly independent solutions w1(z), w2(z), . . . , wr(z) constitute a fundamental system of solutions

whose linear combinations image include all particular integrals of Eq. (2).

(c) Use of Known Solutions to Reduce the Order. If m < r linearly independent solutions w1(z), w2(z), . . . , wm(z) of the homogeneous equation (2) are known, then the transformation w̄ = W[w1, w2, . . . , wm, w]φ(x) reduces Eq. (2) to a homogeneous linear differential equation of order r − m for any conveniently chosen φ(x).

9.3-3. Solution by Variation of Constants. Green’s Functions. (a) Given r linearly independent solutions w1(z), w2(z), . . . , wr(z) of the homogeneous linear differential equation (2), the general solution of the complete nonhomogeneous equation (1) is

image

* Note that the theorem in this simple form does not apply to every set of r − 1 times continuously differentiable functions wk(z); they must be solutions of a suitable differential equation (2).

image

After solving the r simultaneous equations (5) for the r unknown derivatives C′k(z), one obtains each Ck(z) = ∫C′k(z) dz + Kk by a simple integration. In principle, this procedure reduces the solution of any linear ordinary differential equation to the solution of a homogeneous linear differential equation.

(b) Assuming real variables z ≡ x and w ≡ w(x) for simplicity, particular integrals of the complete differential equation (1) can often be written as

image

where G(x, ξ) is known as the Green’s function (sometimes called the weighting function, Sec. 9.4-3) yielding the specific particular integral in question. The complete integral of Eq. (1) is then

image

where the wk(z) are r linearly independent solutions of Eq. (2), and the Ak are r constants of integration to be determined by suitable initial or boundary conditions.

Any given set of r linearly independent solutions wk(x) of the complementary equation (2) permits one to construct a particular integral (6) with

image

where the C′k(x) are obtained from Eq. (5), and U(x) is the unit-step function defined in Sec. 21.9-1.

For linear differential equations of order r = 2, the complete integral is given by Eq. (4) or (7) with

image
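For example, for w′′ + w = f(x) one may take w₁ = cos x, w₂ = sin x (Wronskian 1), so that the second-order formulas yield G(x, ξ) = sin (x − ξ)U(x − ξ). A Python sketch (names and the trapezoidal quadrature are illustrative) evaluating the particular integral (6):

```python
import math

def particular_integral(f, x, n=2000):
    """Particular integral of w'' + w = f(x) from the Green's-function
    form (Eq. 6) with G(x, xi) = sin(x - xi) U(x - xi):
        w_p(x) = integral_0^x sin(x - xi) f(xi) d xi
    evaluated by the trapezoidal rule."""
    h = x / n
    s = 0.5 * (math.sin(x) * f(0.0) + math.sin(0.0) * f(x))
    for k in range(1, n):
        xi = k * h
        s += math.sin(x - xi) * f(xi)
    return s * h

# with f = 1 the integral gives w_p(x) = 1 - cos x,
# which indeed satisfies w_p'' + w_p = 1
```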

(c) While the general solution (7) obtained with the aid of the particular Green’s function (8) is only another way of writing Eq. (4), it is often possible to construct a Green’s function G(x, ξ) such that the particular integral (6) satisfies the specific initial or boundary conditions of a given problem. Assuming boundary conditions that are linear and homogeneous in w(x) and its derivatives, the required Green’s function G(x, ξ) must satisfy the given boundary conditions and

image

for x in (a, b), with ∂r−2G/∂xr−2 continuous in (a, b), and

image

The existence and properties of such Green’s functions are discussed from a more general point of view in Sec. 15.5-1; see also Sec. 9.4-3. Table 9.3-1 lists the Green’s functions for a number of boundary-value problems.

9.3-4. Reduction of Two-point Boundary-value Problems to Initial-value Problems.  The general theory of boundary-value problems and eigenvalue problems involving ordinary differential equations is treated in Secs. 15.4-1 to 15.5-2 (see also Secs. 9.3-3, 20.9-2, and 20.9-3). The following method is often useful in connection with numerical solution methods.

Given an rth-order linear differential equation Lw = f(z) with r suitable boundary conditions to be satisfied by w(z) and its derivatives for z = a, z = b, write the solution as

image

where the wk(z) are defined by the r + 1 initial-value problems

image

Apply the r given boundary conditions to the general solution (12a) to obtain r simultaneous equations for the r unknown coefficients αk.

NOTE: Given a nonlinear boundary-value problem like

image

one can often calculate w(b) for two or three trial values of the unknown initial value w′(a); the correct value of w′(a) is then approximated by interpolation.
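The interpolation idea in the note above is the basis of the “shooting” approach to two-point problems. A self-contained Python sketch (classical Runge-Kutta integration plus secant iteration on the unknown initial slope; all names are illustrative), applied to w′′ = −w, w(0) = 0, w(π/2) = 1, whose exact initial slope is w′(0) = 1:

```python
import math

def rk4_second_order(f, a, w_a, slope_a, b, n=400):
    """Integrate w'' = f(x, w, w') from x = a to x = b by classical
    fourth-order Runge-Kutta; returns w(b)."""
    h = (b - a) / n
    x, w, v = a, w_a, slope_a
    for _ in range(n):
        k1w, k1v = v, f(x, w, v)
        k2w, k2v = v + 0.5*h*k1v, f(x + 0.5*h, w + 0.5*h*k1w, v + 0.5*h*k1v)
        k3w, k3v = v + 0.5*h*k2v, f(x + 0.5*h, w + 0.5*h*k2w, v + 0.5*h*k2v)
        k4w, k4v = v + h*k3v, f(x + h, w + h*k3w, v + h*k3v)
        w += h * (k1w + 2*k2w + 2*k3w + k4w) / 6
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6
        x += h
    return w

def shoot(f, a, w_a, b, w_b, s0, s1):
    """Secant iteration on the unknown initial slope w'(a) so that the
    computed w(b) matches the boundary value w_b."""
    g0 = rk4_second_order(f, a, w_a, s0, b) - w_b
    g1 = rk4_second_order(f, a, w_a, s1, b) - w_b
    for _ in range(20):
        if abs(g1) < 1e-12 or g1 == g0:
            break
        s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
        g0, g1 = g1, rk4_second_order(f, a, w_a, s1, b) - w_b
    return s1

# w'' = -w, w(0) = 0, w(pi/2) = 1  ->  w = sin x, so w'(0) = 1
slope = shoot(lambda x, w, v: -w, 0.0, 0.0, math.pi / 2, 1.0, 0.5, 2.0)
```

Since the problem is linear, w(b) depends linearly on the trial slope, and the secant (interpolation) step converges essentially in one iteration.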

9.3-5. Complex-variable Theory of Linear Differential Equations. Taylor-series Solution and Effects of Singularities.  (a) A given

Table 9.3-1. Green’s Functions for Linear Boundary-value Problems

Each boundary-value problem listed has the solution image.

Use G(x, ξ) to obtain solutions for other initial or boundary conditions from Eq. (9.3-7) (see also Secs. 9.3-3, 9.4-3, 10.4-2, 15.4-8, and 15.5-1). The table yields solutions for other intervals (a, b) with the aid of suitable coordinate transformations.

image

* This is a modified Green’s function in the sense of Sec. 15.5-1b and does not satisfy Eq. (9.3-10).

linear differential equation (1) has an analytic solution w = w(z) at every regular point z where the functions ak(z) and f(z) are analytic (see also Sec. 7.3-3). If these functions are single-valued, and if D is a simply connected region of regular points, a given set of values w(z0), w′(z0), . . . , w(r−1)(z0) for some point z0 in D defines a unique solution w(z) in D.

To obtain this solution in Taylor-series form (see also Sec. 4.10-4), substitute

image

and

image

into the given differential equation (1); comparison of coefficients will yield recurrence relations for the unknown coefficients image (k = r, r + 1, . . .). The series (14) converges absolutely and uniformly within every circle |z − z0| < R in D.

(b) Analytic continuation of any solution w(z) of a linear differential equation around singularities of one or more coefficients ak(z) will, in general, yield different branches of a multiple-valued solution (see also Secs. 7.4-2, 7.6-2, and 7.8-1).

In particular, one complete circuit around a singularity will transform a fundamental system of solutions w1(z), w2(z), . . . , wr(z) of the homogeneous linear differential equation (2) into a new fundamental system image. The two fundamental systems are necessarily related by a nonsingular linear transformation

image

The eigenvalues λk of the matrix [aik] (Sec. 13.4-2) are independent of the particular fundamental system w1(z), w2(z), . . . , wr(z) in question.

9.3-6. Solution of Homogeneous Equations by Series Expansion about a Regular Singular Point.  (a) A singularity z = z1 of one or more ak(z) is an isolated singularity of the homogeneous linear differential equation (2) if and only if z1 has a neighborhood containing no other singular point. An isolated singularity z = z1 is a regular singular point of the homogeneous linear differential equation if and only if none of its solutions w(z) has an essential singularity at z1; otherwise z = z1 is an essential singularity of the given differential equation. z = z1 is a regular singular point if and only if ak(z)/a0(z) has at worst a pole of order k at z = z1 (k = 1, 2, . . . , r) (Fuchs’s Theorem). In this case,

Eq. (2) can be rewritten as

image

where all pk(z) are analytic in a neighborhood D1 of z1.

It follows that a given homogeneous linear differential equation (2) admits a solution of the form

image

whenever z = z1 is a regular point or a regular singular point. The exponent µ must satisfy the rth-degree algebraic equation

image

The first coefficient a0 may be chosen at will, and the other coefficients ak are found successively from a set of recurrence relations obtained on substitution of the series (17) into Eq. (2) or (16). The series converges absolutely and uniformly within every circle |z − z1| < R in D1.

Different roots µ = µ1, µ2, . . . , µr of the indicial equation (18) yield linearly independent solutions (17) of the given differential equation, unless two roots µk coincide or differ by an integer. In such cases, one may use the known solutions to reduce the order of the given differential equation in the manner of Sec. 9.3-2c or 9.3-7a, or use Frobenius’s method (Ref. 9.14); see also Sec. 9.3-8a.

The exponents µk are related to the eigenvalues λ obtained from Eq. (15) for one circuit about the regular singular point z1 with λk = e^(2πiµk).

(b) Regular Singular Points at Infinity (see also Secs. 7.2-1 and 7.6-3). z = ∞ is a regular singular point of Eq. (2) if and only if the transformation

image

yields a differential equation having a regular singular point at z̄ = 0. In this case, one may obtain solutions of the transformed equation in the manner of Sec. 9.3-6a.

(c) Generalization. If z = z1 is not a regular singular point (e.g., if the functions pk have poles at z = z1), one may still write solutions similar to Eq. (17) by replacing each power series by a Laurent series (Sec. 7.5-3) admitting negative as well as positive powers of (z − z1).

9.3-7. Integral-transform Methods.  The solution of a linear differential equation (1) with polynomial coefficients image and given initial conditions w(i)(0) = w0(i) is often simplified through the use of the unilateral Laplace transformation in the manner of Sec. 9.4-5. Apply the formula

image

to obtain a new and possibly simpler differential equation for the Laplace transform £[w(z); s] of the solution. Boundary-value problems can be transformed to initial-value problems by the method of Sec. 9.3-4.

More general integral transformations (Table 8.6-1) may be similarly employed in various special cases (see also Sec. 10.5-1 and Ref. 9.14).

9.3-8. Linear Second-order Equations (see also Secs. 15.4-3c and 15.5-4). (a) The theory of Secs. 9.3-1 to 9.3-4 applies to linear second-order equations, so that one is mainly interested in the solution of homogeneous linear second-order equations

image

Equation (21) is equivalent to

image

If any solution w1(z) of Eq. (21) or (22) is known, the complete primitive is

image

(b) Series Expansion of Solutions (see also Secs. 9.3-5, 9.3-6, 9.3-9, and 9.3-10). In the important special case of a second-order equation expressible in the form

image

where p1(z) and p2(z) are analytic at z = z1, the indicial equation (18) reduces to

image

The indicial equation has two roots. The root µ = µ1 having the larger real part yields a solution of the form

image

The second root µ2 yields a similar linearly independent solution, which is replaced by

image

if µ1 and µ2 are identical or differ by an integer. Substitute each solution (26) or (27) into the given differential equation (24) to obtain recurrence relations for the coefficients.
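As an illustration, Bessel’s differential equation of order zero, zw′′ + w′ + zw = 0, has a regular singular point at z = 0 with the double indicial root µ = 0; substituting the series gives the recurrence a₂ₖ = −a₂ₖ₋₂/(2k)², with all odd coefficients vanishing. A Python sketch summing the resulting series (the function name is, of course, arbitrary):

```python
# Bessel's equation of order zero:  z w'' + w' + z w = 0.
# Regular singular point at z = 0, double indicial root mu = 0.
# Substituting w = sum_k a_k z^k gives a_{2k} = -a_{2k-2} / (2k)^2,
# a_k = 0 for odd k, which yields the Bessel function J0(z) for a_0 = 1.
def bessel_J0(z, terms=30):
    total, coeff = 1.0, 1.0
    for k in range(1, terms):
        coeff *= -1.0 / (2 * k) ** 2     # a_{2k} = -a_{2k-2} / (2k)^2
        total += coeff * z ** (2 * k)
    return total
```

The rapid decay of the coefficients reflects the absolute convergence of the Frobenius series for all finite z.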

(c) Transformation of Variables.  The following transformations may simplify a given differential equation (21) or reduce it to a differential equation with a known solution.

      1. w = exp ∫φ(z) dz yields

image

One attempts to choose φ(z) so as to simplify the new differential equation; in particular, φ(z) = −α1(z)/2 eliminates the coefficient of dw̄/dz. A suitable substitution z = z() may also yield a simpler differential equation.

      2. The substitution

image

transforms Eq. (21) into a first-order differential equation of the Riccati type (Sec. 9.2-4c).

(d) Existence and Zeros of Solutions for Real Arguments.  Let x be a real variable. The homogeneous linear differential equation

image

has a solution w = w(x) in every interval [a, b] where a1(x) and a2(x) are real and continuous; the solution is uniquely determined by the values w(x0), w′(x0) for some x0 in [a, b]. Unless w ≡ 0 in [a, b], w(x) has at most a finite number of zeros in any finite interval [a, b]; the zeros of any two linearly independent solutions alternate in [a, b].

9.3-9. Gauss’s Hypergeometric Differential Equation and Riemann’s Differential Equation (see also Secs. 9.3-5 and 9.3-6). (a) The homogeneous linear differential equation

image

has regular singular points at z = ∞ (exponents µ = a, µ = b), z = 1 (exponents µ = 0, µ = c − a − b), and z = 0 (exponents µ = 0, µ = 1 − c), and no other singularities. The solutions of Eq. (31) include many elementary functions, as well as many of the special transcendental functions of Chap. 21, as special cases.

Series expansion about z = z1 = 0 yields solutions (hypergeometric functions) (17) for µ = 0 and µ = 1 − c, with

image

For µ = 0 one obtains the special hypergeometric function

image

The series converges uniformly and absolutely for |z| < 1; the convergence extends to the unit circle if Re (a + b − c) < 1, except for the point z = 1 if Re (a + b − c) ≥ 0. The series reduces to a geometric series (Sec. 4.10-2) for a = 1, b = c, and to a Jacobi polynomial (Sec. 21.7-8) if a and/or b equals zero or any negative integer. The function (32) is undefined if one of the denominators c, c + 1, . . . equals zero and does not cancel out.
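Direct summation of the hypergeometric series is easy to sketch in Python (the function name is illustrative); for a = 1, b = c the partial sums should approach the geometric series 1/(1 − z), which provides a check:

```python
def hypergeometric_F(a, b, c, z, terms=200):
    """Partial sum of the hypergeometric series
    F(a, b; c; z) = sum_k (a)_k (b)_k / ((c)_k k!) z^k,  valid for |z| < 1,
    accumulated via the term ratio (a+k)(b+k) / ((c+k)(k+1)) * z."""
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        total += term
    return total

# for a = 1, b = c the series reduces to the geometric series 1/(1 - z)
```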

A second (linearly independent) solution of Eq. (31) may be obtained in the manner of Sec. 9.3-8b; in particular, the hypergeometric function of the second kind

image

is a solution whenever c is not an integer.

Note the following relations (|z| < 1):

image

The following formulas serve for analytic continuation of F(a, b; c; z) outside the unit circle:

image

image

See Table 9.3-1 and Refs. 21.9 and 21.11 for additional formulas.

(b) Riemann’s Differential Equation.  Papperitz Notation.  Equation (31) is a special case of the linear homogeneous differential equation

image

whose only singularities are distinct regular singular points at z = z1 (exponents α, α′), z = z2 (exponents β, β′), and z = z3 (exponents γ, γ′); note

α + α′ + β + β′ + γ + γ′ = 1

A solution of Eq. (40), written in the so-called Papperitz notation, is

image

which reduces to Eq. (32) for z1 = 0, z2 = ∞, z3 = 1 and α = 0, β = a, γ = 0; α′ = 1 − c, β′ = b, γ′ = c − a − b. See Table 9.3-2 and Ref. 21.11 for additional formulas.

9.3-10. Confluent Hypergeometric Functions.  One can move the singularity z = 1 of the hypergeometric differential equation (31) to z = b by substituting z/b for z; the singularity at z = b will then approach the original singularity at z = ∞ as b → ∞ (confluence of singularities). One thus obtains the new differential equation

image

whose only singularities are a regular singular point at z = 0 and an essential singularity at z = ∞. Many special transcendental functions are solutions of Eq. (42) for special values of a and c (Chap. 21).

Series expansion about z = z1 = 0 yields solutions (confluent hypergeometric functions) (17) for µ = 0 and µ = 1 − c, with

image

Table 9.3-1. Additional Formulas Relating to Hypergeometric Functions

image

Table 9.3-1. Additional Formulas Relating to Hypergeometric Functions

(Continued)

image

Table 9.3-2. Additional Formulas Relating to Confluent Hypergeometric Functions

image

For µ = 0 one obtains Kummer’s confluent hypergeometric function

F(a; c; z) ≡ 1 + (a/1!c)z + [a(a + 1)/2!c(c + 1)]z2 + · · ·      (43)

(see also Sec. 21.7-5).

A second solution may be obtained in the manner of Sec. 9.3-8b; in particular, the confluent hypergeometric function of the second kind

image

is a solution whenever c is not an integer.

Note the following relations:

image

See Refs. 21.9 and 21.11 for additional formulas.

9.3-11. Pochhammer’s Notation.  The infinite series (32) and (43) are special cases of

pFq(a1, . . . , ap; c1, . . . , cq; z) ≡ Σk≥0 [(a1)k(a2)k · · · (ap)k/(c1)k(c2)k · · · (cq)k] zk/k!

where (x)k ≡ x(x + 1) . . . (x + k − 1), (x)0 ≡ 1. In this notation the hypergeometric function (32) becomes 2F1(a, b; c; z), and the confluent hypergeometric function (43) is written as 1F1(a; c; z).
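A direct transcription of Pochhammer's notation (names are my own; the series is truncated, so this is only a sketch valid inside the region of convergence). The identities 1F1(a; a; z) = e^z and 2F1(1, b; b; z) = 1/(1 − z) serve as checks:

```python
import math

def poch(x, k):
    """Pochhammer symbol (x)_k = x (x + 1) ... (x + k - 1), with (x)_0 = 1."""
    p = 1.0
    for j in range(k):
        p *= x + j
    return p

def pFq(a_list, c_list, z, terms=80):
    """Partial sums of the generalized hypergeometric series in Pochhammer's notation."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in a_list:
            num *= poch(a, k)
        den = math.factorial(k)
        for c in c_list:
            den *= poch(c, k)
        total += num / den * z ** k
    return total
```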

9.4. LINEAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS

9.4-1. Homogeneous Linear Equations with Constant Coefficients (see also Sec. 9.3-1). (a) The first-order differential equation

a1(dy/dt) + a0y = 0      (1)

has the solution

y = y(0) exp [−(a0/a1)t]      (2)

For a0/a1 > 0

image

a1/a0 is often referred to as the time constant.
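In numerical terms (the coefficient and initial values below are my own illustration), the free solution decays by the factor 1/e over each time constant T = a1/a0:

```python
import math

a1, a0 = 2.0, 0.5            # a1 dy/dt + a0 y = 0, with a0/a1 > 0
T = a1 / a0                  # time constant
y0 = 3.0                     # initial value y(0)

def y(t):
    """Solution y(t) = y(0) exp[-(a0/a1) t] = y(0) exp(-t/T)."""
    return y0 * math.exp(-t / T)
```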

(b) The second-order equation

image

has the solution

image

If a0, a1, and a2 are real, s1 and s2 become complex for a12 − 4a0a2 < 0; in this case, Eq. (4a) can be written as

image

where the quantities

image

are respectively known as the damping constant and the natural (characteristic) circular frequency. The constants C1, C2, A, B, R, and α are chosen so as to match given initial or boundary conditions (see also Sec. 9.4-5a).

If a0a2 > 0, the quantity image is called the damping ratio; for ζ > 1, ζ = 1, 0 < ζ < 1 one obtains, respectively, an overdamped solution (4a), a critically damped solution (4b), or an underdamped (oscillatory) solution (4c). In the latter case, the logarithmic decrement 2πσ1/ω is the natural logarithm of the ratio of successive maxima of y(t).
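The logarithmic-decrement relation can be checked numerically from an underdamped free response; the parameter values below are my own, and the comparison uses the equivalent form 2πσ1/ω = 2πζ/√(1 − ζ2) in terms of the damping ratio:

```python
import math

zeta, wN = 0.1, 2.0                    # damping ratio and undamped natural frequency
sigma = zeta * wN                      # damping constant
w = wN * math.sqrt(1 - zeta ** 2)      # damped natural circular frequency

def y(t):
    """Underdamped free oscillation with zero initial displacement."""
    return math.exp(-sigma * t) * math.sin(w * t)

# locate the first two maxima on a fine time grid
dt = 1e-4
samples = [y(k * dt) for k in range(int(8 / dt))]
maxima = [samples[k] for k in range(1, len(samples) - 1)
          if samples[k - 1] < samples[k] > samples[k + 1]][:2]

decrement = math.log(maxima[0] / maxima[1])              # ln of ratio of successive maxima
predicted = 2 * math.pi * zeta / math.sqrt(1 - zeta ** 2)
```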

Equation (3) is often written in the nondimensional form

image

with

image

image is called the undamped natural circular frequency; for weak damping (ζ2 << 1), ω ≈ ωN (see also Fig. 9.4-1).

image

FIG. 9.4-1. Solution of the second-order differential equation

image

for y(0) = 0, dy/dt]0 = 1. Response is overdamped for ζ > 1, critically damped for ζ = 1, and underdamped for 0 < ζ < 1.

(c) To solve the rth-order differential equation

image

find the roots of the rth-degree algebraic equation

image

obtained, for example, on substitution of a trial solution y = est. If the r roots s1, s2, . . . of the characteristic equation (6) are distinct, the given differential equation (5) has the general solution

image

If a root sk is of multiplicity mk, replace the corresponding term in Eq. (7a) by

image

The various terms of the solution (7) are known as normal modes of the given differential equation. The r constants Ck and Ckj must be chosen so as to match given initial or boundary conditions (see also Sec. 9.4-5a).

If the given differential equation (5) is real, complex roots of the characteristic equation appear as pairs of complex conjugates σ ± iω. The corresponding pairs of solution terms will also be complex conjugates and may be combined to form real terms:

image

where A and B, or R and α, are new real constants of integration.

(d) Given a system of n homogeneous linear differential equations with constant coefficients

image

where the image are polynomials in d/dt, each of the n solution functions yk = yk (t) (k = 1, 2, . . . , n) has the form (7); the sk are now the roots of the algebraic equation

image

The constants of integration must again be matched to the given initial or boundary conditions (see also Secs. 9.4-5b and 13.6-2).

9.4-2. Nonhomogeneous Equations. Normal Response, Steadystate Solution, and Transients (see also Sec. 9.3-1). (a) The superposition theorems and solution methods of Secs. 9.3-1 to 9.3-4 apply to all linear ordinary differential equations. Thus the general solution of the nonhomogeneous differential equation

image

can be expressed as the sum of the general solution (7) of the reduced equation (5) and any particular integral of Eq. (10).

If, as in many applications, f(t) = 0 for t ≤ 0,* the particular integral y = yN(t) of Eq. (10) with yN = y′N = y′′N = . . . = yN(r−1) = 0 for t ≤ 0 will be called the normal response to the given forcing function f(t). To solve Eq. (10) for t > 0 with given initial values for y, y′, y′′, . . . , y(r−1), one adds the solution of the corresponding initial-value problem for Eq. (5) to the normal response yN(t).

In many applications (stable electric circuits, vibrations), all roots of the characteristic equation (9) have negative real parts, and the complementary function (7) dies out more or less rapidly (stable “transient solution”). In such cases, one is often mainly interested in a suitable nontransient particular integral y = yss(t), the “steady-state solution” due to the given forcing function f(t). In other cases, yss(t) is not uniquely defined by the given differential equation but depends on the initial conditions. The normal response yN(t) may or may not include a transient term.

(b) In the same manner, each solution function yk = yk(t) of a system of linear differential equations with constant coefficients,

image

can be expressed as the sum of the corresponding solution function of the complementary homogeneous system (8) and a particular solution function of the given system (11). The normal response of the system (11) to a set of forcing functions fj(t) equal to zero for t ≤ 0 is the particular solution such that all yk vanish for t ≤ 0 together with all derivatives which can be arbitrarily chosen (see also Sec. 13.6-2).

(c) If a forcing function contains a periodic term whose frequency equals that of an undamped sinusoidal term in the complementary function (7), then the differential equation or system may not have a finite solution (resonance, see also Secs. 9.4-5 and 15.4-12).

9.4-3. Superposition Integrals and Weighting Functions (see also Secs. 9.3-3, 9.4-7c, and 15.5-1). (a) Physically Realizable Initial-

* This means one considers only forcing functions of the type f(t) ≡ f(t)U+(t), where U+(t) is the asymmetrical unit-step function defined in Sec. 21.9-1 (see also the footnote to Sec. 9.4-3).

value Problems. Application of the Green's-function method of Sec. 9.3-3b to the differential equation (10) yields the normal-response solution (Sec. 9.4-2) as a weighted mean over past values of f(t) in the form

image

if one assumes (1) f(t) = 0 for t ≤ 0 (initial-value problem), and (2) h+(t − τ) = 0 for t ≤ τ, so that “future” values of f(t) cannot affect “earlier” values of y(t), and “instantaneous” effects are also ruled out (physically realizable systems). More general problems are considered in Sec. 9.4-3d.

If the derivative on the right exists,

image

The weighting function h+(t — τ) is the special Green’s function defined by

image

h+(t — τ) is the normal response to an asymmetrical unit impulse δ+(t — τ) (Sec. 21.9-6); Eq. (14a) may be rewritten as a “symbolic differential equation”

image

Note that image is the normal response to the asymmetrical unit-step function U+(t — τ) (Sec. 21.9-1). The “symbolic differential equation” (14c) is often easily solved for h+(t — τ) by the Laplace-transform method of Sec. 9.4-5 (see also Secs. 8.5-1 and 9.4-7); alternatively, h+(t) can be found as that solution of the homogeneous differential equation

image

which satisfies the initial conditions

image

EXAMPLE: For Ly ≡ a(dy/dt) + y, one has image.
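For this operator the weighting function is h+(t) = (1/a)e^(−t/a), and the superposition integral (12) can be checked numerically: with a unit-step forcing function it must reproduce the familiar step response 1 − e^(−t/a). Parameter values below are my own:

```python
import math

a = 0.8                                  # L y = a dy/dt + y

def h_plus(t):
    """Weighting function (1/a) exp(-t/a) for t >= 0, zero for t < 0."""
    return math.exp(-t / a) / a if t >= 0 else 0.0

def normal_response(t, f, n=4000):
    """Superposition integral y(t) = integral_0^t h_plus(t - tau) f(tau) dtau (trapezoid rule)."""
    d = t / n
    s = 0.5 * (h_plus(t) * f(0.0) + h_plus(0.0) * f(t))
    for k in range(1, n):
        tau = k * d
        s += h_plus(t - tau) * f(tau)
    return s * d

t = 1.5
exact = 1.0 - math.exp(-t / a)           # known step response of a y' + y = f
approx = normal_response(t, lambda tau: 1.0)
```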

(b) Under similar conditions, the normal-response solution of a system of linear differential equations (11) can be expressed in the form

image

where (h+)kj(t − τ) is the kth solution function obtained on substitution of fj(t) = δ+(t − τ), fi(t) ≡ 0 (i ≠ j) in Eq. (11). The weighting-function matrix [(h+)kj(t − τ)] is often called a state-transition matrix (Sec. 13.6-2).

(c) A superposition integral (12) also yields the normal response if the “input function” f(t) and the “output function” y(t) are related by a differential equation of the form

image

Such relations may result, in particular, if a system (11) is reduced to a single differential equation by elimination of all but one of the unknown functions.

Any (h+)kj(t), and also h+(t) if relations of the type (16) are considered, may contain delta-function-type singularities (modified Green’s function, Sec. 15.5-1b). Thus, if h+(t) = c1δ+(t − t1) + c2δ+(t − t2) + . . . + h0(t), then Eq. (12) yields the normal response

image

(d) More General Problems. “Symmetrical” vs. “Asymmetrical” Weighting Functions. To deal with forcing functions different from zero for t ≤ 0, one may introduce the “symmetrical” weighting function h(t — τ) defined by

image

or Lh(t − τ) = δ(t − τ) (see also Sec. 21.9-2) with suitable initial or boundary conditions. The resulting solution

image

will, in particular, satisfy Eq. (10) for t ≥ 0 with

image

If f(t) ≡ f(t)U(t) (Sec. 21.9-1), and one adds the condition

image

h+(t) and h(t), and the solutions (12) and (18), are easily confused. The “asymmetrical” weighting function h+(t) is particularly convenient for use with the unilateral Laplace transforms employed by most engineers, while h(t) fits the context of Fourier analysis or bilateral Laplace transforms (see also Sec. 18.10-5). In the usual physical applications, Eq. (19a) holds, since “future” values of f(t) cannot affect the solution; h+(t) and h(t) are then identical wherever they are continuous. Frequently, forcing functions cannot even affect the solution instantaneously, so that h(t − τ) satisfies the stronger condition

image

and h(t) and h+ (t) are identical.

EXAMPLE: In a purely resistive electric circuit, the current y(t) and the voltage f(t) are related by y(t) = f(t)/R, so that h+(t) = δ+(t)/R and h(t) = δ(t)/R. But for image

9.4-4. Stability. A linear differential equation (10) or a system (11) will be called completely stable if and only if all roots of the corresponding characteristic equation (6) or (9) have negative real parts, so that effects of small changes in the initial conditions tend to zero with time (refer to Sec. 13.6-5 for a more general discussion of stability). The nature of the roots may be investigated with the aid of Secs. 1.6-6 and 7.6-9 (stability criteria for electric circuits and control systems). A differential equation (10) is completely stable if and only if image [or, equivalently, image] exists; a similar condition for every weighting function of a system (11) is necessary and sufficient for complete stability of the system.
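The root criterion can be tested directly by computing the characteristic roots; the root finder below is a plain Durand-Kerner iteration (my own sketch, adequate for well-separated roots, not a production algorithm):

```python
def char_roots(coeffs, iters=300):
    """Roots of a0 s^r + a1 s^(r-1) + ... + ar = 0 by Durand-Kerner iteration."""
    c = [x / coeffs[0] for x in coeffs]          # make the polynomial monic
    n = len(c) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]   # standard starting guesses
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            p = 0j
            for coef in c:
                p = p * r + coef                 # Horner evaluation of the polynomial
            q = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    q *= r - s
            new.append(r - p / q)
        roots = new
    return roots

def completely_stable(coeffs):
    """True if every characteristic root has a negative real part."""
    return all(r.real < 0 for r in char_roots(coeffs))
```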

9.4-5. The Laplace-transform Method of Solution (see also Secs. 8.1-1, 8.4-1 to 8.4-5, 9.3-7, and 13.6-2). (a) To solve a linear differential equation (10) with given initial values y(0 + 0), y′(0 + 0), y′′(0 + 0), . . . , y(r−1)(0 + 0), apply the Laplace transformation (8.2-1) to both sides, and let image. The resulting linear algebraic equation (subsidiary equation)

image

is easily solved to yield the Laplace transform of the desired solution y(t) in the form

image

Here the first term is the Laplace transform YN(s) of the normal response yN(t) (Sec. 9.4-2a), and the second term represents the effects of nonzero initial values of y(t) and its derivatives. The solutions y(t) and yN(t) are found as inverse Laplace transforms by reference to tables (Appendix D), or by one of the methods of Secs. 8.4-2 to 8.4-9. In particular, each of the r terms in the partial-fraction expansion of G(s)/(a0sr + a1sr−1 + . . . + ar) (Sec. 8.4-5) yields a corresponding term of the force-free solution (7).

This solution method applies without essential changes to differential equations of the type (16).

(b) In the same manner, one applies the Laplace transformation to a system of linear differential equations (11) to obtain

image

where the functions Gj(s) depend on the given initial conditions. The linear algebraic equations (22) are solved by Cramer’s rule (1.9-4) to yield the unknown solution transforms

image

where Ajk(s) is the cofactor of φjk(s) in the system determinant D(s) ≡ det [φjk(s)] (see also Sec. 1.9-2). The first sum in Eq. (23) is the Laplace transform of the normal-response solution, while the second sum represents the effect of the initial conditions.

The desired solutions yk(t) are obtained from Eq. (23) by inverse Laplace transformation.

In problems involving unstable differential equations (Sec. 9.4-4) and/or impulse-type forcing functions, the solutions may contain delta-function-type singularities (see also Secs. 8.5-1 and 21.9-6).

9.4-6. Periodic Forcing Functions and Solutions. The Phasor Method. (a) Sinusoidal Forcing Functions and Solutions. Sinusoidal Steady-state Solutions. Every system of linear differential equations (11) with sinusoidal forcing functions of equal frequency,

image

admits a unique particular solution of the form

image

In particular, if all roots of the characteristic equation (9) have negative real parts (stable systems, Sec. 9.4-4), the sinusoidal solution (24b) is the unique steady-state solution obtained after all transients have died out (Sec. 9.4-2).

(b) The Phasor Method. Given a system of linear differential equations (11) relating sinusoidal forcing functions and solutions (24), one introduces a reciprocal one-to-one representation of these sinusoids by corresponding complex numbers (vectors, phasors)

image

The absolute value of each phasor equals the root-mean-square value of the corresponding sinusoid, while the phasor argument defines the phase of the sinusoid. The phasors (25) are related by the (complex) linear algebraic equations (phasor equations)

image

which correspond to Eq. (11) and may be solved for the unknown phasors

image

(see also Sec. 9.4-5b). In the case of resonance (Sec. 9.4-2c), the expression (27) may not exist (may become “infinitely large”).
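For a single first-order equation (coefficients and signal values below are my own illustration), the phasor algebra can be verified exactly. The sketch uses peak-value phasors; the text's root-mean-square convention differs only by a factor √2:

```python
import cmath, math

a1, a0 = 0.5, 2.0                   # a1 dy/dt + a0 y = f(t)
omega, B, beta = 3.0, 1.5, 0.4      # f(t) = B sin(omega t + beta)

F = B * cmath.exp(1j * beta)        # phasor of f (peak-value convention)
Y = F / (a0 + 1j * omega * a1)      # phasor equation: (a0 + i omega a1) Y = F

def y(t):
    """Steady-state sinusoid recovered from the rotating phasor (Im part for sine)."""
    return (Y * cmath.exp(1j * omega * t)).imag

def residual(t):
    """a1 y' + a0 y - f, with the derivative taken exactly via the rotating phasor."""
    ydot = (1j * omega * Y * cmath.exp(1j * omega * t)).imag
    return a1 * ydot + a0 * y(t) - B * math.sin(omega * t + beta)
```

Since d/dt merely multiplies the rotating phasor by iω, the residual vanishes identically.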

(c) Rotating Phasors. A set of sinusoidal functions (24) satisfies any given system of linear differential equations (11) if and only if the same is true for the corresponding set of complex exponential functions (rotating phasors)

image

which are often more convenient to handle than the real sinusoids (24).

(d) More General Periodic Forcing Functions (see also Secs. 4.11-4, 4.11-5, and 9.4-5c). Given a stable system (11) with more general periodic forcing functions expressible in the form

image

one can apply the phasor method of Sec. 9.4-6a separately for each sinusoidal term and superimpose the resulting sinusoidal solutions to obtain the steady-state periodic solution. This procedure may be more convenient than the Laplace-transform method if only a few harmonics of the periodic solution are needed.

9.4-7. Transfer Functions and Frequency-response Functions. (a) Transfer Functions. The function

image

in Eq. (21) is known as a transfer function. The transfer function “represents” a linear operator (Sec. 15.2-7) which operates on the forcing function (input) to yield the normal response (output; see also Fig. 9.4-2).

image

FIG. 9.4-2. Transfer-function representation of linear differential equations with constant coefficients. If yN(t) in turn serves as the forcing function for a second differential equation to produce the normal response zN(t), the two transfer functions multiply, i.e., ZN(s)/F(s) = H1(s)H2(s).

More generally, each function Ajk(s)/D(s) in Eq. (23) is the transfer function relating the normal-response “output” yk(t) of the system (11) to the “input” fj(t) when all other forcing functions vanish identically. The transfer functions Ajk(s)/D(s) together constitute the transfer matrix.

The transfer function corresponding to Eq. (16) is

image

(b) Frequency-response Functions (see also Sec. 9.4-6a). The frequency-response functions H(iω) and Ajk(iω)/D(iω) similarly relate the phasors representing sinusoidal forcing functions and steady-state solutions of given circular frequency ω. Specifically, the absolute value and the argument of a frequency-response function respectively relate the amplitudes and the phases of the input and output sinusoids; thus, for f(t) = B sin (ωt + β), y(t) = A sin (ωt + α),

image

If frequency-response functions are “cascaded” in the manner of Fig. 9.4-2, the amplitude responses |H(iω)| multiply, and the phase responses arg H(iω) add.

(c) Relations between Transfer Functions or Frequency-response Functions and Weighting Functions (see also Secs. 4.11-4e, 9.4-3, and the convolution theorem of Table 8.3-1). The transfer function H(s) is the unilateral Laplace transform of the asymmetrical weighting function h+(t), and the bilateral Laplace transform (Sec. 8.6-2) of the symmetrical weighting function h(t):

image

Hence the frequency-response function H() is related to the symmetrical weighting function h(t) by the Fourier transformation*

image

Equations (33) and (34) indicate the possibility of obtaining weighting functions as inverse Laplace or Fourier transforms of rational functions.
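Equation (33) can be spot-checked numerically for the first-order example of Sec. 9.4-3a, where h+(t) = (1/a)e^(−t/a) and H(s) = 1/(as + 1); the quadrature parameters below are my own:

```python
import math

a = 0.8

def h_plus(t):
    """Weighting function of L y = a dy/dt + y."""
    return math.exp(-t / a) / a

def laplace(h, s, T=30.0, n=30000):
    """Unilateral transform integral_0^inf h(t) exp(-s t) dt, truncated at T (trapezoid rule)."""
    dt = T / n
    total = 0.5 * (h(0.0) + h(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += h(t) * math.exp(-s * t)
    return total * dt

s = 2.0
H_exact = 1.0 / (a * s + 1.0)       # transfer function of a y' + y = f
H_numeric = laplace(h_plus, s)
```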

9.4-8. Normal Coordinates and Normal-mode Oscillations. (a) Free Oscillations. Small oscillations of undamped mechanical or electrical systems are often described by a set of n linear second-order differential equations of the form

image

where the matrices [ajk] and [bjk] are both symmetric, positive-definite (Sec. 13.5-2), and such that the resulting characteristic equation (9) has 2n distinct, nonzero, purely imaginary roots ±iω1, ±iω2, . . . , ±iωn. Pairs of these roots correspond to sinusoidal free oscillations at the n normal-mode frequencies ω1/2π, ω2/2π, . . . , ωn/2π.

One may introduce normal coordinates ȳ1, ȳ2, . . . , ȳn for the given system (35) by a linear transformation

image

with coefficients tkh chosen in the manner of Secs. 13.5-5 and 14.8-7 so as to diagonalize the matrices [ajk] and [bjk] simultaneously; the transformed system takes the simple form

image

The resulting free sinusoidal normal-mode oscillations

image

do not affect one another (are “uncoupled”). The normal coordinates (38) may have an intuitive physical interpretation.

The problem is a generalized eigenvalue problem involving sets of n functions [y1(t), y2(t), . . . , yn(t)] as eigenvectors (see also Secs. 13.6-2a, 14.8-7, and 15.4-5).

* See the footnote to Sec. 4.11-2.

EXAMPLE: For a pair of similar coupled oscillators described by

image

the normal coordinates are simply image. Given y1 = 1, y2 = dy1/dt = dy2/dt = 0 for t = 0, the normal-mode equations yield

image

For α2 << ω02 (weak coupling), this solution describes the so-called beat phenomenon.
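Since the coupled-oscillator equations themselves are not reproduced above, the sketch below assumes one common form, ÿ1 + ω02y1 + α2(y1 − y2) = 0 and ÿ2 + ω02y2 + α2(y2 − y1) = 0, for which the normal coordinates are y1 ± y2; with the stated initial conditions the energy then shuttles slowly between the two oscillators when α2 << ω02:

```python
import math

w0, alpha = 2.0, 0.3                       # assumed coupling form (see lead-in)
wb = math.sqrt(w0 ** 2 + 2 * alpha ** 2)   # frequency of the second normal mode

# Normal coordinates y1 + y2 and y1 - y2 oscillate independently at w0 and wb.
# With y1(0) = 1, y2(0) = dy1/dt(0) = dy2/dt(0) = 0:
def y1(t): return 0.5 * (math.cos(w0 * t) + math.cos(wb * t))
def y2(t): return 0.5 * (math.cos(w0 * t) - math.cos(wb * t))

def res1(t):
    """Residual of y1'' + w0^2 y1 + alpha^2 (y1 - y2) for the solution above."""
    y1dd = -0.5 * (w0 ** 2 * math.cos(w0 * t) + wb ** 2 * math.cos(wb * t))
    return y1dd + w0 ** 2 * y1(t) + alpha ** 2 * (y1(t) - y2(t))
```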

(b) Forced Oscillations. The corresponding forced-oscillation problem

image

can, in principle, be solved in the manner of Sec. 14.8-10 through normal-mode expansion of the forcing functions fi(t). The Laplace-transform method of Sec. 9.4-5 is usually more convenient.

9.5. NONLINEAR SECOND-ORDER EQUATIONS

9.5-1. Introduction. Sections 9.5-2 to 9.5-5 introduce the general terminology and the most easily summarized approximation method of the theory of nonlinear oscillations. References 9.15 to 9.17, 9.22, and 9.23 are recommended for further study; for better or for worse, many solution methods are closely tied to specific applications.

The perturbation method of Sec. 10.2-7c is often used to simplify nonlinear problems, especially in celestial mechanics. See Secs. 20.7-4 and 20.7-5 for numerical methods of solution.

9.5-2. The Phase-plane Representation. Graphical Method of Solution (see also Sec. 9.2-2). The second-order differential equation

image

is equivalent to the system of first-order equations

image

The general solution y = y(t), image of Eq. (1) or (2) can be represented geometrically by a family of directed phase-trajectory curves in the yẏ plane, or phase plane. The phase-plane representation is most useful if the given function f(t, y, dy/dt) does not involve the independent variable t explicitly (e.g., “free” oscillations). In this case, the system (2) is of the general form

image

and the phase trajectories satisfy the first-order differential equation

image

which specifies the slope of the phase trajectory through each ordinary point (y, ẏ). The resulting field of tangent directions (“phase-plane portrait” of the given differential equation) permits one to sketch ẏ(y) and hence y(t) for given initial values of y and ẏ; one may begin by drawing loci of constant slope dẏ/dy = m (isoclines, Fig. 9.5-1).
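Instead of graphical sketching, phase trajectories of a system (3) can also be traced numerically. The sketch below (step size and run length are my own choices) integrates Van der Pol's equation with ε = 1 (see Fig. 9.5-1) by a fourth-order Runge-Kutta stepper; the trajectory settles onto a limit cycle of amplitude approximately 2:

```python
def vdp_step(y, v, dt, eps=1.0):
    """One RK4 step for dy/dt = v, dv/dt = eps (1 - y^2) v - y (Van der Pol)."""
    def f(y, v): return v, eps * (1 - y * y) * v - y
    k1 = f(y, v)
    k2 = f(y + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = f(y + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = f(y + dt * k3[0], v + dt * k3[1])
    y += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y, v

y, v, dt = 0.1, 0.0, 0.01         # start near the unstable critical point
trace = []
for k in range(6000):             # 60 time units
    y, v = vdp_step(y, v, dt)
    if k >= 5000:                 # discard the transient, keep the last 10 units
        trace.append(abs(y))
amplitude = max(trace)            # close to 2 on the stable limit cycle
```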

9.5-3. Critical Points and Limit Cycles (see also Sec. 9.5-4). (a) Ordinary and Critical Phase-plane Points. Given a differential equation (1) reducible to a system (3), a phase-plane point (y, ẏ) is an ordinary point if and only if P(y, ẏ) and Q(y, ẏ) are analytic and not both equal to zero; there exists a unique phase trajectory through each ordinary point. Phase-plane points (y0, ẏ0) such that

image

are critical points (or singular points) where the trajectory is not uniquely determined. Critical points are classified according to the nature of the phase trajectories in their neighborhood; Fig. 9.5-2 illustrates the most important types. Physically, critical points are equilibrium points admitting stable or unstable equilibrium solutions y = y0 (Sec. 9.5-4).

(b) Periodic Solutions and Limit Cycles. Periodic solutions y = y(t) correspond to closed phase-trajectory curves, and vice versa. A closed phase trajectory C is called a limit cycle if each trajectory point has a neighborhood of ordinary points in which all phase trajectories spiral into C (stable limit cycle, see also Sec. 9.5-4) or out of C (unstable limit cycle), or into C on one side of C and out of C on the other side (half-stable limit cycle). For an example, see Secs. 9.5-4c and 9.5-5 (see also Fig. 9.5-3).

image

FIG. 9.5-1. Isoclines, tangent directions, and some solutions of the differential equation

image

corresponding to Van der Pol’s differential equation

image

with ẏ ≡ dy/dt, ε = 1. Only the right half-plane is shown.

image

FIG. 9.5-2. Phase trajectories in the neighborhood of six types of critical points (Secs. 9.5-3 and 9.5-4).

image

FIG. 9.5-3. (a) A stable limit cycle enclosing an unstable critical point at the origin. “Soft” self-excitation of oscillations for arbitrarily small initial values of y and ẏ. (b) A stable limit cycle enclosing a stable critical point at the origin and an unstable limit cycle (shown in broken lines). “Hard” self-excitation of oscillations for initial values outside the unstable limit cycle.

(c) Poincaré’s Index and Bendixson’s Theorems. The possible existence of limit cycles (stable oscillations) is of interest in many applications. In addition to the analytical criteria of Sec. 9.5-4, the following theory is sometimes helpful.

For any given phase-plane portrait, the index of a closed curve C containing only ordinary points is the number of revolutions of the solution tangent as the point (y, ẏ) makes one complete cycle around C. The index of an isolated critical point P is the index of any closed curve enclosing P and no other critical point. Then

      1. The index of any closed curve C equals the sum of the indices of all the (isolated) critical points enclosed by C; if C encloses only ordinary points, its index is zero.

      2. The index of a nodal point, focal point, or vortex point is 1; the index of a saddle point is −1 (see also Fig. 9.5-2).

      3. The index of every closed phase trajectory is 1; hence a limit cycle must enclose at least one critical point other than a saddle point (Fig. 9.5-3).

No closed phase trajectories exist within any phase-plane domain where ∂P/∂y + ∂Q/∂ẏ is of one sign (Bendixson’s First Theorem). A trajectory which remains in a bounded region of the phase plane and does not approach any critical point for t ≥ 0 is either closed or approaches a closed trajectory asymptotically (Bendixson’s Second Theorem).

9.5-4. Poincaré-Lyapunov Theory of Stability (see also Secs. 13.6-5 to 13.6-7). (a) The solution yi = y1i(t) (i = 1, 2, . . . , n) of a system of ordinary differential equations reduced to the form

image

is stable in the sense of Lyapunov if and only if each yi(t) → y1i(t) as yi(t0) → y1i(t0) (i = 1, 2, . . . , n), the convergence being uniform for t > 0 (Sec. 4.4-4); sufficiently small initial-condition changes then cannot cause large solution changes. A stable solution is asymptotically stable if and only if there exists a bound r(t0) such that |yi(t0) − y1i(t0)| < r(t0) for all i implies image for all i. An asymptotically stable solution is asymptotically stable in the large (completely stable) if and only if r(t0) = ∞. This is, in particular, true for every solution of a completely stable system of linear differential equations with constant coefficients (all roots of the characteristic equation have negative real parts, Secs. 9.4-4 to 9.4-7).

(b) Stability of Equilibrium. For a system of the form

image

with suitably differentiable fi, an equilibrium solution

image

is asymptotically stable whenever the linearized system

image

is completely stable, where the partial derivatives are computed for y1= y11, y2 = y12, . . . , yn = y1n. This is true whenever all roots s of the characteristic equation

image

have negative real parts. The equilibrium is unstable if Eq. (9) has a root with positive real part; if the real parts of all roots are negative or zero, one requires a more detailed stability investigation (Sec. 13.6-7).

In particular, for a second-order differential equation (1) reducible to a system (3) the characteristic equation is

image

where all derivatives are computed at the equilibrium point (y0, ẏ0). The equilibrium point is

      A stable or unstable nodal point if both roots s1 and s2 of Eq. (10) are real and negative or positive, respectively

      A saddle point if s1 and s2 are real and of opposite sign

      A stable or unstable focal point if s1 and s2 are complex conjugates with negative or positive real parts, respectively

      A vortex point if s1 and s2 are purely imaginary

(see also Sec. 9.5-3a and Fig. 9.5-2).

Examples for the six types of equilibrium points listed are most easily obtained from the linear differential equation (9.4-3) for different values of the coefficients (Sec. 9.4-1b).
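The classification above can be automated for a characteristic equation normalized to s2 + ps + q = 0 (the normalization and function names are my own):

```python
import cmath

def classify(p, q, tol=1e-12):
    """Critical-point type from the roots of s^2 + p s + q = 0."""
    disc = cmath.sqrt(complex(p * p - 4 * q))
    s1, s2 = (-p + disc) / 2, (-p - disc) / 2
    if abs(s1.imag) > tol:                     # complex-conjugate pair
        if abs(s1.real) <= tol:
            return "vortex point"              # purely imaginary roots
        return "stable focal point" if s1.real < 0 else "unstable focal point"
    r1, r2 = s1.real, s2.real                  # real roots
    if r1 * r2 < 0:
        return "saddle point"
    return "stable nodal point" if r1 < 0 else "unstable nodal point"
```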

(c) Stability of a Periodic Solution. The stability of a periodic solution y = yp(t) ≢ 0, ẏ = ẏp(t) of Eq. (3) depends on that of the linearized system

image

which is satisfied by small variations (Sec. 11.4-1) δy, δẏ of the periodic solution. This is a system of linear differential equations with periodic coefficients having the same period T as the given solution; Eq. (11) admits two linearly independent solutions of the form

image

where the hik(t) are periodic functions. The periodic solution is stable if

image

is less than zero, and unstable if λ > 0; the case λ = 0 requires additional investigation.

See Sec. 13.6-6 for additional stability criteria.

EXAMPLE: The differential equation

image

yields a stable limit cycle in the neighborhood of the approximate periodic solution y = yp(t) ≈ 2 cos t, ẏ = ẏp(t) ≈ −2 sin t, with λ = −μ[1 + O(μ)] (see also Sec. 9.5-5).

9.5-5. The Approximation Method of Krylov and Bogolyubov. (a) The First Approximation. Equivalent Linearization. To solve a differential equation of the form

image

where ω is a given constant, and the last term is a small nonlinear perturbation, write

image

Assuming that errors of the order of μ2 are negligible, the “amplitude” r(t) and the “total phase” φ(t) are then obtained from the first-order differential equations

image

For a given initial value r(0) = r0, the solution (9.4-4) of the equivalent linear differential equation

image

approximates the solution of the given differential equation (15) with an error of the order of μ2. For a periodic solution such as a limit cycle (Sec. 9.5-3b), the approximate amplitude rL is obtained from a1(rL) = 0, and the circular frequency is approximated by image. The limit cycle is stable if da1/dr]r=rL > 0 and unstable if da1/dr]r=rL < 0. For self-excitation from rest, one must have a1(0) < 0.
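Writing Eq. (15) in the form d2y/dt2 + ω2y = μg(y, dy/dt) (my sign convention; for Van der Pol's equation, g = (1 − y2)dy/dt and ω = 1), the first approximation gives the standard averaged amplitude equation dr/dt = −(μ/2πω) ∫0^2π g(r cos φ, −rω sin φ) sin φ dφ. A numerical sketch locating the limit-cycle amplitude:

```python
import math

mu, omega = 0.5, 1.0

def g(y, ydot):
    """Van der Pol perturbation: y'' + y = mu (1 - y^2) y'."""
    return (1 - y * y) * ydot

def r_dot(r, n=2000):
    """Averaged amplitude equation, evaluated by the midpoint rule over one cycle."""
    s = 0.0
    for k in range(n):
        phi = 2 * math.pi * (k + 0.5) / n
        s += g(r * math.cos(phi), -r * omega * math.sin(phi)) * math.sin(phi)
    return -mu / omega * s / n

# bisect for the limit-cycle amplitude r_L, where r_dot changes sign
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if r_dot(lo) * r_dot(mid) <= 0:
        hi = mid
    else:
        lo = mid
rL = 0.5 * (lo + hi)          # close to 2 for Van der Pol's equation
```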

The first approximation is of considerable interest in connection with periodic nonlinear oscillations. In such cases, the equivalent linear differential equation (18) yields the same energy storage and dissipation per cycle as the given nonlinear equation (15). The equivalent linear equation can therefore be used in many investigations of nonlinear resonance phenomena (Ref. 9.17).

(b) The Improved First Approximation. An improved first-order approximation is given by

image

where r(t) and φ(t) are given by Eq. (17), and

image

EXAMPLE: In the case of Van der Pol’s differential equation (14), Eq. (17) yields

image

There is a stable limit cycle for r = rL = 2. The coefficients (20) all vanish except for β3; the improved first approximation is

image

The Krylov-Bogolyubov approximation (19) is an improvement on an earlier method due to Van der Pol, who derived an approximation of the form

image

in an analogous manner.

The Krylov-Bogolyubov approximation method can be extended to apply in the case of a periodic forcing function on the right side of the differential equation (15) (nonlinear forced oscillations, subharmonic resonance, entrainment of frequency). For this and other methods, see Refs. 9.15 to 9.17.

9.5-6. Energy-integral Solution. Differential equations of the form

d2y/dt2 = f(y)      (23)

which are of considerable interest in dynamics, can be reduced to first-order equations through multiplication by dy/dt and integration:

(dy/dt)2 = 2 ∫ f(y) dy + C,      t = ∫ dy/√[2 ∫ f(η) dη + C] + C′
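The resulting first integral can be checked against a direct numerical integration: with the pendulum-type choice f(y) = −sin y (my own example), the quantity ½(dy/dt)2 − ∫f(y)dy stays constant along a Runge-Kutta solution of d2y/dt2 = f(y):

```python
import math

def f(y):
    """Example restoring term: d^2y/dt^2 = -sin y."""
    return -math.sin(y)

def F(y):
    """An antiderivative of f: integral of f(y) dy = cos y - 1."""
    return math.cos(y) - 1.0

def energy(y, v):
    """First integral (1/2)(dy/dt)^2 - integral f(y) dy."""
    return 0.5 * v * v - F(y)

y, v, dt = 1.0, 0.0, 0.001
E0 = energy(y, v)
for _ in range(5000):                      # RK4 integration to t = 5
    k1 = (v, f(y))
    k2 = (v + 0.5 * dt * k1[1], f(y + 0.5 * dt * k1[0]))
    k3 = (v + 0.5 * dt * k2[1], f(y + 0.5 * dt * k2[0]))
    k4 = (v + dt * k3[1], f(y + dt * k3[0]))
    y += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
drift = abs(energy(y, v) - E0)             # should be near machine precision
```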

9.6. PFAFFIAN DIFFERENTIAL EQUATIONS

9.6-1. Pfaffian Differential Equations (see also Secs. 3.1-16 and 17.2-2). A Pfaffian differential equation (first-order linear total differential equation)

image

with continuously differentiable coefficients P, Q, R may be interpreted geometrically as a condition P · dr = 0 on the tangent vector dr ≡ (dx, dy, dz) of a solution curve (integral curve) described by two equations f(x, y, z) = 0, g(x, y, z, C) = 0, where C is a constant of integration. To find the integral curves lying on an arbitrary regular surface

image

solve the ordinary differential equation obtained by elimination of z and dz from Eq. (1) and df(x, y, z) = 0.

9.6-2. The Integrable Case (see also Sec. 9.2-4). The Pfaffian differential equation (1) is integrable if and only if there exists an integrating factor μ = μ(x, y, z) such that μ(P dx + Q dy + R dz) is an exact differential dΦ(x, y, z); this is true if and only if

(9.6-3)        P(∂Q/∂z − ∂R/∂y) + Q(∂R/∂x − ∂P/∂z) + R(∂P/∂y − ∂Q/∂x) = 0
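The integrability condition can be checked mechanically (a Python/sympy sketch, not from the handbook; the condition is written here as P(∂Q/∂z − ∂R/∂y) + Q(∂R/∂x − ∂P/∂z) + R(∂P/∂y − ∂Q/∂x) = 0). The exact form yz dx + xz dy + xy dz = d(xyz) passes, while y dx + dz, which admits no integrating factor, fails:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def integrability(P, Q, R):
    """Left-hand side of the integrability condition
    P*(dQ/dz - dR/dy) + Q*(dR/dx - dP/dz) + R*(dP/dy - dQ/dx)."""
    P, Q, R = map(sp.sympify, (P, Q, R))
    return sp.simplify(P*(sp.diff(Q, z) - sp.diff(R, y))
                       + Q*(sp.diff(R, x) - sp.diff(P, z))
                       + R*(sp.diff(P, y) - sp.diff(Q, x)))

print(integrability(y*z, x*z, x*y))   # 0: integrable (in fact d(xyz))
print(integrability(y, 0, 1))         # 1: not integrable
```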

In this case, every curve on an integral surface

(9.6-4)        Φ(x, y, z) = C

orthogonal to the family of curves described by

(9.6-5)        dx/P = dy/Q = dz/R

is a solution. It follows that the solutions found in the manner of Sec. 9.6-1 from a conveniently chosen family of surfaces (usually planes)

(9.6-6)        f(x, y, z; λ) = 0

lie on an integral surface (4) obtainable by elimination of λ from the resulting solution f(x, y, z; λ) = 0, g(x, y, z, C; λ) = 0 (Mayer’s Method of Solution).

To find integral surfaces (4) by another method, hold z constant and obtain the solution of the ordinary differential equation P dx + Q dy = 0 in the form u(x, y, z) − K = 0, where K is a constant. Then

(9.6-7)        u(x, y, z) = φ(z)

describes an integral surface, where φ(z) is the solution of the ordinary differential equation obtained by elimination of x and y from

(9.6-8)        dφ/dz = ∂u/∂z − μR        μ ≡ (1/P)(∂u/∂x)

with the aid of Eq. (7). Note that the function μ(x, y, z) in Eq. (8) is the required integrating factor.
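A worked instance of this second method (a Python/sympy sketch, not from the handbook; the example is hypothetical, and the reduction is assumed in the form μ = (1/P)(∂u/∂x), dφ/dz = ∂u/∂z − μR): for yz dx + xz dy + xy dz = 0, holding z constant gives y dx + x dy = 0, so u = xy; then μ = 1/z, the equation for φ becomes dφ/dz = −φ/z, and u = φ(z) yields the integral surface xyz = const.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
P, Q, R = y*z, x*z, x*y          # Pfaffian  y z dx + x z dy + x y dz = 0

# Step 1: hold z fixed; P dx + Q dy = 0 reduces to y dx + x dy = 0,
# with solution x*y = K, so take u(x, y, z) = x*y.
u = x*y
# Step 2 (assumed form of the reduction): mu = (1/P) * du/dx.
mu = sp.cancel(sp.diff(u, x) / P)               # -> 1/z
# Step 3 (assumed): on u = phi(z), dphi/dz = du/dz - mu*R;
# eliminating x, y via x*y -> phi(z) leaves an ODE in z alone.
phi = sp.Function('phi')
rhs = (sp.diff(u, z) - mu*R).subs(x*y, phi(z))  # -> -phi(z)/z
sol = sp.dsolve(sp.Eq(phi(z).diff(z), rhs), phi(z))
print(mu, rhs, sol)   # u = phi gives x*y = C1/z, i.e. x*y*z = C1
```

Note that μ = 1/z indeed makes (1/z)(yz dx + xz dy + xy dz) = y dx + x dy + (xy/z) dz the exact differential of xy... times nothing further for this simple example; the surface family xyz = C is the answer one expects from d(xyz) = 0.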

An important application is found in thermodynamics, where the reciprocal of the absolute temperature T is an integrating factor for the adiabatic condition δq = 0 of the form (1); δq / T is the (exact) differential of the entropy. See Ref. 10.23 for a discussion of total differential equations involving more than three variables.
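This can be made concrete for the ideal gas (a Python/sympy sketch, not from the handbook; c_v and R denote the specific heat and gas constant): δq = c_v dT + (RT/V) dV fails the exactness test, but δq/T passes, its integral being the entropy S = c_v ln T + R ln V + const.

```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)
c_v, R = sp.symbols('c_v R', positive=True)    # ideal-gas constants

# Adiabatic form delta-q = M dT + N dV for an ideal gas:
M, N = c_v, R * T / V
# Exactness would require dM/dV = dN/dT:
print(sp.diff(M, V), sp.diff(N, T))            # 0 and R/V: delta-q is not exact
# After division by the integrating factor T:
M1, N1 = M / T, N / T
print(sp.diff(M1, V) - sp.diff(N1, T))         # 0: delta-q / T = dS is exact
S = sp.integrate(M1, T) + sp.integrate(N1, V)  # entropy up to a constant
print(S)
```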

9.7. RELATED TOPICS, REFERENCES, AND BIBLIOGRAPHY

9.7-1. Related Topics. The following topics related to the study of ordinary differential equations are treated in other chapters of this handbook:

The Laplace transformation  Chap. 8

Calculus of variations  Chap. 11

Matrix notation for systems of differential equations  Chap. 13

Boundary-value problems and eigenvalue problems  Chap. 15

Numerical solutions   Chap. 20

Special transcendental functions Chap. 21

    9.7-2. References and Bibliography.

      9.1. Agnew, R. P.: Differential Equations, 2d ed., McGraw-Hill, New York, 1960.

      9.2. Andronow, A. A., and C. E. Chaikin: Theory of Oscillations, Princeton University Press, Princeton, N.J., 1949.

      9.3. Bellman, R.: Stability Theory of Differential Equations, McGraw-Hill, New York, 1953.

      9.4. Bieberbach, L.: Differentialgleichungen, 2d ed., Springer, Berlin, 1965.

      9.5. Birkhoff, G., and G. Rota: Ordinary Differential Equations, Blaisdell, New York, 1962.

      9.6. Coddington, E. A.: An Introduction to Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, N.J., 1961.

9.7. Coddington, E. A., and N. Levinson: Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.

      9.8. Ford, L. R.: Differential Equations, 2d ed., McGraw-Hill, New York, 1955.

      9.9. Golomb, M., and M. E. Shanks: Elements of Ordinary Differential Equations, 2d ed., McGraw-Hill, New York, 1965.

      9.10. Hale, J. K.: Oscillations in Nonlinear Systems, McGraw-Hill, New York, 1963.

      9.11. Hartman, P.: Ordinary Differential Equations, Wiley, New York, 1964.

      9.12. Hochstadt, H.: Differential Equations, Holt, New York, 1964.

      9.13. Hurewicz, W.: Lectures on Ordinary Differential Equations, M.I.T., Cambridge, Mass., 1958.

9.14. Kamke, E.: Differentialgleichungen: Lösungsmethoden und Lösungen, vol. I, Chelsea, New York, 1948.

      9.15. Krylov, N., and N. Bogolyubov: Nonlinear Oscillations, translated by S. Lefschetz, Princeton University Press, Princeton, N.J., 1943.

      9.16. Lefschetz, S.: Differential Equations: Geometric Theory, 2d ed., Interscience, New York, 1963.

9.17. Minorsky, N.: Nonlinear Oscillations, Van Nostrand, Princeton, N.J., 1962.

      9.18. Petrovsky, I. G.: Lectures on Partial Differential Equations, Interscience, New York, 1955.

      9.19. Pontryagin, L. S.: Ordinary Differential Equations, Addison-Wesley, Reading, Mass., 1962.

      9.20. Saaty, T. L., and J. Bram: Nonlinear Mathematics, McGraw-Hill, New York, 1964.

      9.21. Sansone, G., and R. Conti: Nonlinear Differential Equations, Macmillan, New York, 1964.

      9.22. Stoker, J. J.: Nonlinear Vibrations, Interscience, New York, 1950.

      9.23. Struble, R.: Nonlinear Differential Equations, McGraw-Hill, New York, 1961.

9.24. Tenenbaum, M., and H. Pollard: Ordinary Differential Equations, Harper and Row, New York, 1963.

      9.25. Tricomi, F. G.: Differential Equations, Hafner, New York, 1961.