XVIII. Partial differential operators.

Linear Methods of Applied Mathematics

Evans M. Harrell II and James V. Herod*

*(c) Copyright 1994,1995,1996,2000 by Evans M. Harrell II and James V. Herod. All rights reserved.


version of 20 April 2000

Many of the calculations in this chapter are available in the format of a Mathematica notebook; a Maple worksheet is planned.


XVIII. Partial differential operators - classification and adjoints.

Language and Classification

In this chapter we will take a look at the language of partial differential equations. As in any technical subject, we shall need some standard terms in order to carefully describe the things we are working with.

Several terms are probably familiar already. Suppose that we have a partial differential equation in u. The order of the equation is the order of the highest partial derivative of u that appears, the number of variables is simply the number of independent variables of u in the equation, and the equation has constant coefficients if the coefficients of u and of all the partial derivatives of u are constant. If all the terms involving u are moved to the left side of the equation, then the equation is called homogeneous if the right side is zero, and nonhomogeneous if it is not zero. The equation is linear if the left side involves u only in a "linear way," that is, through some linear operator L[u].

Examples XVIII.1.

  1. 4uxx - 24uxy + 11uyy - 12ux - 9uy - 5u = 0
    is a linear, homogeneous, second order partial differential equation in two variables with constant coefficients.
  2. ut = x uxx + y uyy + u²
    is a nonlinear, second order partial differential equation in two variables and does not have constant coefficients.
  3. utt - uxx = sin(πt)
    is a nonhomogeneous equation.
In thinking of partial differential equations, we shall carry over the language that we used for matrix equations and ordinary differential equations as far as possible.

So, for partial differential equations, we consider linear equations

Lu = 0, or u' = Lu, only now L is a linear operator on a space of functions. For example, it may be that L(u) = uxx + uyy, and the corresponding notation for the heat equation u' = Lu is ut = uxx + uyy. We shall be interested in a rather general second order differential operator. In two variables, we consider the operator

$$L(u) = \sum_{p=1}^{2}\sum_{q=1}^{2} A_{pq}\,\frac{\partial^2 u}{\partial x_p\,\partial x_q} \;+\; \sum_{p=1}^{2} B_p\,\frac{\partial u}{\partial x_p} \;+\; C u. \qquad (19.1)$$

There is an analogous formula for three variables.

We suppose that u is smooth enough so that

$$\frac{\partial^2 u}{\partial x_1\,\partial x_2} = \frac{\partial^2 u}{\partial x_2\,\partial x_1}; \qquad (19.2)$$

that is, we can interchange the order of differentiation. In this first consideration, the matrix A, the vector B, and the number C do not depend on u; we take the matrix A to be symmetric. Because of (19.2) we are free to arrange this, since the coefficient of the cross term is A12 + A21, and we may assign A12 and A21 any way we wish, so long as the sum has the right value.

Model Problem XVIII.2. Write the following constant coefficient operator in the matrix representation:

L[u] = 4uxx - 24uxy + 11uyy - 12ux - 9uy - 5u. Answer:

$$L[u] = \begin{pmatrix} \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} \end{pmatrix}
\begin{pmatrix} 4 & -12 \\ -12 & 11 \end{pmatrix}
\begin{pmatrix} \dfrac{\partial}{\partial x} \\ \dfrac{\partial}{\partial y} \end{pmatrix} u
\;+\; \begin{pmatrix} -12 & -9 \end{pmatrix}
\begin{pmatrix} \dfrac{\partial}{\partial x} \\ \dfrac{\partial}{\partial y} \end{pmatrix} u \;-\; 5u.$$


Many equations of interest have the form L(u) = f or u' = L(u) + f. In this section, u is a function on R² or R³ and, for definiteness, the equations are to hold in an open, connected region D of the plane. We will also assume that the boundary of the region is piecewise smooth, and denote this boundary by ∂D. Just as in ordinary differential equations, some boundary conditions will be needed in order to solve the equations. We will take the boundary conditions to be linear and of the general form B(u) = a u + b un, where un is the derivative taken in the direction of the outward normal to the boundary of the region.

The techniques of studying partial differential operators and the properties of these operators change depending on the "type" of operator. These operators have been classified into three principal types. The classifications are made according to the nature of the coefficients in the equation which defines the operator. The operator is called an elliptic  operator if the eigenvalues of A are non-zero and have the same algebraic sign. The operator is hyperbolic  if the eigenvalues have opposite signs and is parabolic  if at least one of the eigenvalues is zero.
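For instance, the operator of Model Problem XVIII.2 above can be classified this way. Here is a minimal Maple sketch (using the linalg package, as in the worksheets later in this chapter):

> with(linalg):
> A := matrix([[4,-12],[-12,11]]):   # the symmetric matrix from Model Problem XVIII.2
> eigenvals(A);

The eigenvalues are 20 and -5: non-zero and of opposite sign, so that operator is hyperbolic. (The discriminant test described later in this chapter gives the same verdict, since (-24)² - 4·4·11 = 400 > 0.)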

The classification makes a difference both in what we expect from the solutions of the equation and in how we go about solving them. Each type is modeled on an important equation of physics.

The basic example of a parabolic equation is the one dimensional heat equation. Here, u(t,x) represents the temperature on a line at time t and position x. One should be given an initial distribution of temperature, which is denoted u(0,x), and some boundary conditions which arise in the context of the problem. For example, it might be assumed that the ends are held at some fixed temperature for all time. In this case, boundary conditions for a line of length L would be u(t,0) = α and u(t,L) = β. Or, one might assume that the ends are insulated. A mathematical statement of this is that the rate of flow of heat through the ends is zero:

$$\frac{\partial u}{\partial x}(t,0) = \frac{\partial u}{\partial x}(t,L) = 0.$$

The manner in which u changes in time is derived from the physical principle which states that the heat flux at any point is proportional to the temperature gradient at that point and leads to the equation

$$\frac{\partial u}{\partial t}(t,x) = \frac{\partial^2 u}{\partial x^2}(t,x).$$

Geometrically, one may think of the problem as one of defining the graph of u whose domain is the infinite strip bounded in the first quadrant by the parallel lines x = 0 and x = L. The function u is known along the x axis between x = 0 and x = L. To define u on the infinite strip, move in the t direction according to the equation

$$\frac{\partial u}{\partial t}(t,x) = \frac{\partial^2 u}{\partial x^2}(t,x),$$

while maintaining the boundary conditions.

(graph)

We could also have a source term. Physically, this could be thought of as a heater (or refrigerator) adding or removing heat at some rate along the strip. Such an equation could be written as

$$\frac{\partial u}{\partial t}(t,x) = \frac{\partial^2 u}{\partial x^2}(t,x) + Q(t,x).$$

Boundary and initial conditions would be as before. In order to rewrite this equation in the context of this course, we should conceive of the equation as L[u] = f , with appropriate boundary conditions. The operator L is

$$L[u] = \frac{\partial u}{\partial t}(t,x) - \frac{\partial^2 u}{\partial x^2}(t,x).$$

This is a parabolic operator according to the definition given above; in fact, the matrix A in (19.1) is given by

A= {{0, 0}, {0, -1}}.

The basic example of an elliptic operator is the Laplace operator, or Laplacian:

$$L[u] = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}.$$

Laplace's equation states that

L[u] = 0

and Poisson's equation states that

L[u] = f(x,y)

for some given function f. A physical situation in which it arises is in the problem of finding the shape of a drum under force. Suppose that the bottom of the drum sits on the unit disc in the xy-plane and that the sides of the drum lie above the unit circle. We do not suppose that the sides are at a uniform height, but that the height is specified on the circle.

(picture)

That is, we know u(x,y) for (x,y) on the boundary of the drum. We also suppose that there is a force pulling down, or pushing up, on the drum at each point and that this force is not changing in time. An example of such a force might be the pull of gravity. The question is, what is the shape of the drum? As we shall see, the appropriate equations take the form: Find u if

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y) \quad\text{for } x^2 + y^2 < 1,$$

with

u(x,y) specified for x² + y² = 1.

Laplace's and Poisson's equations also arise in electromagnetism and fluid mechanics as the equations for potential functions. Still another place where you may encounter Laplace's equation is as the equation for a temperature distribution in 2 or 3 dimensions, when thermal equilibrium has been reached. The Laplace operator is elliptic by our definition, for the matrix A in (19.1) is given by

{{ 1, 0}, {0, 1}}.

Finally, the model for a hyperbolic equation is the one dimensional wave equation. One can think of this equation as describing the motion of a taut string after an initial perturbation and subject to some outside force. Appropriate boundary conditions are given. To think of this as being a plucked string with the observer watching the up and down motion in time is not a bad perspective, and certainly gives intuitive understanding. Here is another perspective, however, which will be more useful in the context of finding the Green function to solve this one dimensional wave equation:

$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + f(t,x) \quad\text{for } 0 < x < L,$$
     u(t,0) = u(t,L) = 0  for t > 0,
     u(0,x) = g(x),
     and    ∂u/∂t(0,x) = h(x) for 0 < x < L.

As in the heat equation example, the problem is to describe u in the infinite strip within the first quadrant of the xt-plane bounded by the x axis and the lines x = 0 and x = L. Both u and its first derivative in the t direction are known along the x axis. Along the other boundaries, u is zero. What must be the shape of the graph above the infinite strip?

To classify this as a hyperbolic problem, think of the operator L as

$$L[u] = \frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2}$$

and re-write it in the appropriate form for classification. The matrix A in (19.1) is given by

A = {{ 1, 0}, {0, -1}}.

(A physical derivation of the wave equation and further discussion of it)

There are standard procedures for changing more general partial differential equations to the familiar standard forms, which we shall investigate in the next section.

These very names and ideas suggest a connection with quadratic forms in analytic geometry. We will make this connection a little clearer. Rather than finding geometric understanding of the partial differential equation from this connection, we will more likely develop algebraic understanding. In particular, we will see that there are some standard forms. Because of the nearly error-free arithmetic that Maple is able to do, we will offer syntax in Maple that enables the reader to use this computer algebra system to change second order, linear equations into the standard forms.

If presented with a quadratic equation in x and y, one could likely decide if the equation represented a parabola, hyperbola, or ellipse in the plane. However, if asked to draw a graph of this conic section in the plane, one would start recalling that there are several forms that are easy to draw:

   a x² + b y² = c², and the special case x² + y² = c²,

   a x² - b y² = c², and the special case x² - y² = c²,

or
   y - a x² = 0 and x - b y² = 0.

These quadratic equations represent the familiar conic sections: ellipses, hyperbolas and parabolas, respectively. If a quadratic equation is given that is not in these special forms, then one may recall procedures to transform the equations algebraically into these standard forms. This will be the topic of the next section.

The purpose for doing the classification is that the techniques for solving equations are different in the three classes, if it is possible to solve the equation at all. Even more, there are important resemblances among the solutions within one class, and there are striking differences between the solutions of one class and those of another class. The remainder of these notes will be primarily concerned with finding solutions to hyperbolic, second order, partial differential equations. As we progress, we will see that the techniques of these notes depend on the equation being hyperbolic.

Before comparing the procedures for changing a partial differential equation to standard form with the preceding algebra, we pause to emphasize the differences in geometry among the three types: elliptic, hyperbolic, and parabolic.

(sketch)

                                                             Figure 19.1

Here are three equations from analytic geometry:

x² + y² = 4 is an ellipse,
x² - y² = 4 is a hyperbola,
and x² + 2xy + y² = 4 is a parabola. Figure 19.1 contains the graphs of all three of these. Their shapes and their geometry are strikingly different. Even more, naively, one might say that the graph of the third of those above is not the graph of a parabola: it is the pair of parallel lines x + y = ±2, a degenerate conic. It does, however, meet the criterion b² - 4ac = 0. One might think of the graph as that of a parabola with vertex at the "point at infinity."

The criterion for classifying second order partial differential equations is the same: ask for the sign of b² - 4ac in the equation

$$a\,\frac{\partial^2 u}{\partial x^2} + b\,\frac{\partial^2 u}{\partial x\,\partial y} + c\,\frac{\partial^2 u}{\partial y^2} + d\,\frac{\partial u}{\partial x} + e\,\frac{\partial u}{\partial y} + f u = 0.$$
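This test is easy to automate. Here is a minimal Maple sketch (the name disc is ours, and the sample coefficients are chosen to match the three model equations above):

> disc := (a,b,c) -> b^2 - 4*a*c:
> disc(1,0,1);    # Laplace operator: -4 < 0, elliptic
> disc(1,0,-1);   # wave operator: 4 > 0, hyperbolic
> disc(1,2,1);    # uxx + 2uxy + uyy: 0, parabolic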

We now present solutions for three equations that have the same start - the same initial conditions. However the equations are of the three types. There is no reason you should know how to solve the three equations yet. There is no reason you should even understand that solving the hyperbolic equation by the method of characteristics is appropriate. But, you should be able to check the solutions - to see that they solve the specified equations. Each equation has initial conditions

u(0,y) = sin(y) and ux(0,y) = 0.

The equations are

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0,$$

Laplace's equation - elliptic;

$$\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0,$$

a form of the wave equation - hyperbolic; and

$$\frac{\partial^2 u}{\partial x^2} + 2\,\frac{\partial^2 u}{\partial x\,\partial y} + \frac{\partial^2 u}{\partial y^2} = 0,$$

a parabolic partial differential equation.

These have solutions cosh(x) sin(y), cos(x) sin(y), and x cos(x-y) + sin(y-x), respectively. Figure 19.2 has the graphs of these three solutions in order.
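It is easy to check these claims with Maple; here is a minimal sketch (run in a fresh session, so that x and y are unassigned):

> u1 := cosh(x)*sin(y):   u2 := cos(x)*sin(y):   u3 := x*cos(x-y) + sin(y-x):
> simplify( diff(u1,x,x) + diff(u1,y,y) );                    # Laplace's equation: returns 0
> simplify( diff(u2,x,x) - diff(u2,y,y) );                    # wave equation: returns 0
> simplify( diff(u3,x,x) + 2*diff(u3,x,y) + diff(u3,y,y) );   # parabolic equation: returns 0

Checking the initial conditions u(0,y) = sin(y) and ux(0,y) = 0 is just as direct.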


                                                             Figure 19.2 a


                                                             Figure 19.2 b


                                                             Figure 19.2 c

Canonical (Standard) Forms

In this section, we review the procedures for getting second-degree algebraic equations into standard form, and compare these with the techniques for getting second order partial differential equations with constant coefficients into a few standard forms, called the canonical forms.

Performing these algebraic procedures corresponds to the geometric notions of translations and rotations.

For example, the equation

x² - 3y² - 8x + 30y = 60

represents a hyperbola. To draw the graph of the hyperbola, one completes the square algebraically or, geometrically, translates the axes:

(x - 4)² - 3(y - 5)² = 1.

Now, the response that this is a hyperbola with center (4,5) is expected. More detailed information about the direction of the major and minor axes could be given, but these are not notions that we will wish to carry over to the process of getting second order partial differential equations into canonical forms.

There is another idea more appropriate. Rather than keeping the hyperbola in the Euclidean plane, where it now has the equation

x² - 3y² = 1

in the translated form, think of this hyperbola in the Cartesian plane, and do not insist that the x axis and the y axis have the same scale. In this particular case, keep the x axis the same size and expand the y axis so that every unit is the old unit multiplied by √3. Algebraically, one commonly writes that there are new coordinates (x', y') related to the old coordinates by

x = x',   √3 y = y'.

The algebraic effect is that the equation is transformed into an equation in (x', y') coordinates:

x'² - y'² = 1.

Pay attention to the fact that it is a mistake to carry over too much of the geometric language. For example, if the original quadratic equation had been

x² + 3y² - 8x + 30y = 50

and we had translated axes to produce

x² + 3y² = 141,

and then rescaled the axes to get

x² + y² = 141,

we would not have changed an ellipse into a circle, for a circle is a geometric object whose very definition involves the notion of distance. The process of changing the scale on the x axis and the y axis certainly destroys the notion of distance being the same in all directions.

Rather, rescaling is an algebraic simplification.

Before we pursue the idea of rescaling and translating in second order partial differential equations in order to come up with canonical forms, we need to recall that there is also the troublesome need to rotate the axes in order to get some quadratic forms into standard form. For example, if the equation is

xy = 2,

we quickly recognize this as a quadratic equation. Even more, we could draw the graph. If pushed, we would identify the resulting geometric figure as a hyperbola. We ask for more here since these geometric ideas are more readily transformed into ideas about partial differential equations if they are converted into algebraic ideas. The question, then, is how do we achieve the algebraic representation of the hyperbola in standard form?

One recalls from analytic geometry, or recognizes from simply looking at the picture of the graph of the equation, that this hyperbola has been rotated out of standard form. To see it in standard form, we must rotate the axes. One forgets the details of how this rotation is performed, but should know a reference to find the scheme. From here on, the following may serve as your reference!

Here is the rotation needed to remove the xy term in the equation

    a x² + b xy + c y² + d x + e y + f = 0.

                                                                       (19.3)

The new coordinates (x', y') are given by

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}, \qquad (19.4)$$

where

$$\alpha = \begin{cases} \pi/4 & \text{if } a = c, \\ \dfrac{1}{2}\,\arctan\Big(\dfrac{b}{a-c}\Big) & \text{if } a \neq c. \end{cases} \qquad (19.5)$$

And "quivalent way to write (11.2) is to multiply both sides by the inverse of the matrix:

B(A(x,y))
= B(ACO2(cos(a), -sin(a),sin(a),  cos(a))) B(A(x',y')).

                                                                       (19.6)

Thus, substitute x = x' cos(a) - y' sin(a) and y = x' sin(a) + y' cos(a) into the equation (11.1), where a is as indicated above and the cross term, bxy, will disappear.

Given a general quadratic in two variables, there are three things that need to be done to get it into standard form: get rid of the xy term, factor all the x terms and the y terms separately, and rescale the axes so that the coefficients of the x² term and the y² term are the same. Geometrically this corresponds, as we have recalled, to a rotation, translation, and expansion, respectively. From the geometric point of view, it does not matter which is done first: the rotation and then the translation, or vice versa. Algebraically, it is better to remove the xy term first, for then the factoring is easier.

Finally, it should also be remembered that the sign of b² - 4ac can be used to predict whether the curve is a hyperbola, parabola, or ellipse.

What follows is a Maple program for removing the cross term. Using the program assures fast, accurate computation for otherwise tedious calculations to determine the rotation for equations of the form

a x² + b xy + c y² + d x + e y + f = 0.

The coefficients are read in first.

> a:=?: b:=?: c:=?: d:=?: e:=?: f:=?:

The angle is determined and the rotation is performed.

> if a=c then alpha := Pi/4
          else alpha := arctan(b/(a-c))/2 fi;
> si := sin(alpha);
> co := cos(alpha);
> x := co*u - si*v;  y := si*u + co*v;
> q := a*x^2 + b*x*y + c*y^2 + d*x + e*y + f;
> simplify(");
> evalf(");

The purpose of the previous paragraphs, recalling how to change the algebraic equations representing two dimensional conic sections into standard form, was to suggest that the same ideas carry over almost unchanged to second order partial differential equations. The techniques will change these equations into the canonical forms for elliptic, hyperbolic, or parabolic partial differential equations.

Here are the forms we want:

Elliptic Equations:

$$L[u] = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \pm k u = \nabla^2 u \pm k u,$$

Hyperbolic Equations:

$$L[u] = \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} \pm k u,$$

Parabolic Equations:

$$L[u] = \frac{\partial u}{\partial x} - \frac{\partial^2 u}{\partial y^2} \pm u.$$

The choice for which form to use is determined by the techniques used to solve the equations.

We turn now to the techniques of arriving at standard forms for partial differential equations.

If one has a second order partial differential equation with constant coefficients that is not in standard form, there is a method to change it into this form. The techniques are similar to those used in analytic geometry. Having the standard form, one might then solve the equation. Finally, the solution should be transformed back into the original coordinate system.

We will illustrate the procedure for transformation of a second order equation into standard form. Consider the equation

$$11\,\frac{\partial^2 u}{\partial x^2} + 4\sqrt{3}\,\frac{\partial^2 u}{\partial x\,\partial y} + 7\,\frac{\partial^2 u}{\partial y^2} - 5u = 0.$$

If we think of the original equation as

$$a\,\frac{\partial^2 u}{\partial x^2} + b\,\frac{\partial^2 u}{\partial x\,\partial y} + c\,\frac{\partial^2 u}{\partial y^2} + d\,\frac{\partial u}{\partial x} + e\,\frac{\partial u}{\partial y} + f u = 0,$$

then a = 11, b = 4√3, and c = 7, so that b² - 4ac = -260, and we identify this as an elliptic equation.

We would like to transform the equation into the form

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + k u = 0.$$

Introduce new coordinates (ξ, η) by a rotation of axes so that in the transformed equation the mixed second partial derivative does not appear. Let

$$\begin{pmatrix} \xi \\ \eta \end{pmatrix} = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

or,

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}.$$

Using the chain rule,

$$\frac{\partial}{\partial x} = \cos\alpha\,\frac{\partial}{\partial \xi} - \sin\alpha\,\frac{\partial}{\partial \eta}$$

and

$$\frac{\partial}{\partial y} = \sin\alpha\,\frac{\partial}{\partial \xi} + \cos\alpha\,\frac{\partial}{\partial \eta}.$$

It follows that

$$\frac{\partial^2}{\partial x^2} = \frac{\partial}{\partial x}\,\frac{\partial}{\partial x} = \Big(\cos\alpha\,\frac{\partial}{\partial \xi} - \sin\alpha\,\frac{\partial}{\partial \eta}\Big)\Big(\cos\alpha\,\frac{\partial}{\partial \xi} - \sin\alpha\,\frac{\partial}{\partial \eta}\Big)$$

so that

$$\frac{\partial^2}{\partial x^2} = \cos^2\alpha\,\frac{\partial^2}{\partial \xi^2} - 2\sin\alpha\cos\alpha\,\frac{\partial^2}{\partial \xi\,\partial \eta} + \sin^2\alpha\,\frac{\partial^2}{\partial \eta^2}.$$

In a similar manner,

$$\frac{\partial^2}{\partial x\,\partial y} = \sin\alpha\cos\alpha\,\frac{\partial^2}{\partial \xi^2} + (\cos^2\alpha - \sin^2\alpha)\,\frac{\partial^2}{\partial \xi\,\partial \eta} - \sin\alpha\cos\alpha\,\frac{\partial^2}{\partial \eta^2},$$

and

$$\frac{\partial^2}{\partial y^2} = \sin^2\alpha\,\frac{\partial^2}{\partial \xi^2} + 2\sin\alpha\cos\alpha\,\frac{\partial^2}{\partial \xi\,\partial \eta} + \cos^2\alpha\,\frac{\partial^2}{\partial \eta^2}.$$

The original equation described u as a function of x and y. We now define v as a function of ξ and η by v(ξ,η) = u(x,y). The variables ξ and η are related to x and y as described by the rotation above:

$$v(\xi,\eta) = u(x(\xi,\eta),\, y(\xi,\eta)) = u(\xi\cos\alpha - \eta\sin\alpha,\ \xi\sin\alpha + \eta\cos\alpha).$$

Of course, we have not specified α yet. This comes next.

The equation satisfied by v is

$$[\,11c^2 + 4\sqrt{3}\,sc + 7s^2\,]\,\frac{\partial^2 v}{\partial \xi^2} + [\,-8sc + 4\sqrt{3}\,(c^2 - s^2)\,]\,\frac{\partial^2 v}{\partial \xi\,\partial \eta} + [\,11s^2 - 4\sqrt{3}\,sc + 7c^2\,]\,\frac{\partial^2 v}{\partial \eta^2} - 5v = 0,$$

where we have used the abbreviations s = sin(α) and c = cos(α). The coefficient of the mixed partials will vanish if α is chosen so that

$$-8\sin\alpha\cos\alpha + 4\sqrt{3}\,(\cos^2\alpha - \sin^2\alpha) = 0,$$

that is,

$$\tan(2\alpha) = \sqrt{3}.$$

This means

$$\alpha = \pi/6, \qquad \sin\alpha = \sqrt{\frac{1-\cos(2\alpha)}{2}}\,, \quad \cos\alpha = \sqrt{\frac{1+\cos(2\alpha)}{2}}\,.$$

After substitution of these values, the equation satisfied by v becomes

$$13\,\frac{\partial^2 v}{\partial \xi^2} + 5\,\frac{\partial^2 v}{\partial \eta^2} - 5v = 0.$$
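As a quick check of this arithmetic, here is a minimal Maple sketch (s and c are our abbreviations for sin(π/6) and cos(π/6)):

> s := sin(Pi/6):  c := cos(Pi/6):
> simplify( [ 11*c^2 + 4*sqrt(3)*s*c + 7*s^2,
              -8*s*c + 4*sqrt(3)*(c^2 - s^2),
              11*s^2 - 4*sqrt(3)*s*c + 7*c^2 ] );

Maple returns [13, 0, 5], which are the coefficients in the equation above.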

This special example, together with the foregoing discussions of analytic geometry, makes the following statement believable: Every second order partial differential equation with constant coefficients can be transformed into one in which mixed partials are absent. The angle of rotation is, again, given by (19.5).

It is left as an exercise to see that, in the general case, b² - 4ac is left unchanged by this rotation. Surely, Maple should be used to do this calculation. (See Exercise 8.)

We are now ready for the second step: to remove the first order term. For economy of notation, let us assume that the given equation is already in the form

$$\frac{\partial^2 u}{\partial x^2} - 4\,\frac{\partial^2 u}{\partial y^2} + 3\,\frac{\partial u}{\partial x} + u = 0.$$

Define v by

$$v(x,y) = e^{-bx}\,u(x,y) \qquad\text{or}\qquad u(x,y) = e^{bx}\,v(x,y),$$

where b will be chosen so that the transformed equation will have the first order derivative removed. Differentiating u and substituting into the equation we get that

$$\frac{\partial^2 v}{\partial x^2} - 4\,\frac{\partial^2 v}{\partial y^2} + (2b+3)\,\frac{\partial v}{\partial x} + (b^2 + 3b + 1)\,v = 0.$$

If we choose b = - 3/2, we have

$$\frac{\partial^2 v}{\partial x^2} - 4\,\frac{\partial^2 v}{\partial y^2} - \frac{5}{4}\,v = 0.$$
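Maple can carry out this substitution for us; here is a minimal sketch (run in a fresh session, so that u, v, b, x, and y are unassigned):

> u := (x,y) -> exp(b*x)*v(x,y):
> eq := diff(u(x,y),x,x) - 4*diff(u(x,y),y,y) + 3*diff(u(x,y),x) + u(x,y):
> simplify( subs(b = -3/2, eq/exp(b*x)) );

Maple returns diff(v(x,y),x,x) - 4*diff(v(x,y),y,y) - 5/4*v(x,y), possibly with the terms in a different order, matching the displayed equation.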

Notice that this transformation to achieve an equation lacking the first derivative with respect to x is generally possible when the coefficient on the second derivative with respect to x is not zero, and is otherwise impossible. The same statements hold for derivatives with respect to y.

The final step is rescaling. We choose variables ξ and η by ξ = μx and η = νy, where μ and ν are chosen so that in the transformed equation the coefficients of v_ξξ, v_ηη, and v are equal in absolute value. Writing v for the function expressed in the new variables as well, we have

$$\frac{\partial^2 v}{\partial x^2} = \mu^2\,\frac{\partial^2 v}{\partial \xi^2} \qquad\text{and}\qquad \frac{\partial^2 v}{\partial y^2} = \nu^2\,\frac{\partial^2 v}{\partial \eta^2}.$$

Our equation becomes

$$\mu^2\,\frac{\partial^2 v}{\partial \xi^2} - 4\nu^2\,\frac{\partial^2 v}{\partial \eta^2} - \frac{5}{4}\,v = 0.$$

The condition that

μ² = 4ν² = 5/4 will be satisfied if μ = (5/4)^(1/2) = √5/2 and ν = (5/16)^(1/2) = √5/4. Dividing by the common factor 5/4, we obtain the standard form v_ξξ - v_ηη - v = 0.

Generalized Rotations to Remove Cross Terms

The removal of cross terms as typically presented is complicated by the necessity of finding the angle α of rotation and then working with the ensuing trigonometric functions. These ideas seem inappropriate for higher dimensions.

An alternative approach is to address the problem from the perspective of linear algebra, where the ideas generalize even to higher dimensions. We will need the linear algebra package to make this work.

> with(linalg);

We choose the constants as the coefficients for the partial differential equation

    a uxx + b uxy + c uyy + d ux + e uy + f u = 0.

> a:=2: b:=3: c:=-2: d:=1: e:=1: f:=3/25:

Make a symmetric matrix with a, b/2, and c as the entries.

> A:=matrix([[a,b/2],[b/2,c]]):

It is a theorem in linear algebra that symmetric matrices have a diagonal Jordan form.

> jordan(A,P);

We want K to be a matrix of eigenvectors.

> K:=inverse(P);

In order to make this process work, we ask that the eigenvectors - which form the columns of K - should have norm 1. Since A is symmetric, eigenvectors belonging to distinct eigenvalues are already orthogonal, so normalizing makes the columns of K orthonormal.

> N1:=norm([K[1,1],K[2,1]],2): N2:=norm([K[1,2],K[2,2]],2):

> L:=matrix([[K[1,1]/N1,K[1,2]/N2],[K[2,1]/N1,K[2,2]/N2]]);

It will now be the case that (transpose(L) * A * L) is the Jordan form.

> evalm(transpose(L) &* A &* L);

To remove the cross-product term, we now perform a change of variables defining s and t.

> s:=(x,y)->evalm(transpose(L) &* vector([x,y]))[1]:

t:=(x,y)->evalm(transpose(L) &* vector([x,y]))[2]:

If we define v as indicated below, then it will satisfy a PDE with the cross-term missing and with u satisfying the original partial differential equation.

> u:=(x,y)->v(s(x,y),t(x,y)):

> a*diff(u(x,y),x,x) + b*diff(u(x,y),x,y) + c*diff(u(x,y),y,y) +

      d*diff(u(x,y),x) + e*diff(u(x,y),y) + f*u(x,y):

> simplify(");

> collect(",D[1,2](v)(s(x,y),t(x,y)));

> pde:=collect(

      collect(

          collect(

             collect(",D[1](v)(s(x,y),t(x,y)))

                ,D[2](v)(s(x,y),t(x,y)))

                   ,D[2,2](v)(s(x,y),t(x,y)))

                      ,D[1,1](v)(s(x,y),t(x,y)));

Thus, we solve this simpler system and create u that satisfies the original equation.
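Continuing the session above (with a, b, c, and the matrix A still assigned), it is worth confirming the classification of this example; the eigenvalue test and the discriminant test agree:

> eigenvals(A);     # returns 5/2 and -5/2: opposite signs, so the example is hyperbolic
> b^2 - 4*a*c;      # returns 25 > 0: the discriminant test says hyperbolic as well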


Adjoints

In the chapters that came before, an understanding of the adjoints of linear functions was critical in determining when certain linear equations would have a solution, and even in computing the solutions in some cases. It is, then, no surprise that we shall be interested in the computation of adjoints in this setting, too.

In a general inner product space, the adjoint of the linear operator L is defined as the operator L* such that for all u, v,

< L(u) , v > = < u , L*(v) >.

For boundary value problems in ordinary differential equations, the inner product came with the problem in a sense: it was an integral over an appropriate interval on which the functions were defined. For partial differential equations with boundary conditions, the inner product will be of the standard kind, < f , g > = ∫∫_D f(x,y) g(x,y) dA, but now the integral is multidimensional and the variables run over the region of interest. For ordinary differential equations, integration by parts played a key role in deciding the appropriate boundary conditions to impose so that the formal adjoint would be the real adjoint. Now, Green's identities provide the appropriate calculus.

In fact, Green's second identity can be used to compute adjoints of the Laplacian. We will see that the divergence theorem is useful for the more general second-order, differential operators.
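For reference, Green's second identity, in the notation used below (with η the outward normal on ∂D), reads

$$\iint_D \big(v\,\nabla^2 u - u\,\nabla^2 v\big)\,dA = \int_{\partial D}\Big(v\,\frac{\partial u}{\partial \eta} - u\,\frac{\partial v}{\partial \eta}\Big)\,ds.$$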

Model Problem. Consider

∇²u = f,
u(x,0) = u(x,b) = 0,
∂u/∂x(0,y) = ∂u/∂x(a,y) = 0

on the rectangle [0,a] x [0,b] in the plane. This problem invites consideration of the operator L defined on a manifold as given below: L(u) = ∇²u
and
M = {u: u(x,0) = u(x,b) = 0, ∂u/∂x(0,y) = ∂u/∂x(a,y) = 0}.
Green's second identity presents a natural setting in which the operator L is self-adjoint in the sense that L = L*. Let u and v be in M:

$$\iint_D [\,v\,L(u) - L(v)\,u\,]\,dA = \int_{\partial D} \Big[\,v\,\frac{\partial u}{\partial \eta} - \frac{\partial v}{\partial \eta}\,u\,\Big]\,ds = 0.$$

The last equality holds because on each piece of the boundary either u and v both vanish (the sides y = 0 and y = b) or their normal derivatives ∂u/∂η and ∂v/∂η both vanish (the sides x = 0 and x = a).

In order to discuss adjoints of more general second order partial differential equations, let A, B, C, and c be scalar valued functions. Let b be a vector valued function. Let L(u) be given by

$$L(u) = A\,\frac{\partial^2 u}{\partial x^2} + 2B\,\frac{\partial^2 u}{\partial x\,\partial y} + C\,\frac{\partial^2 u}{\partial y^2} + \langle\, b\,,\,\nabla u\,\rangle + c\,u.$$

DEFINITION: The FORMAL ADJOINT is given by

$$L^*(v) = \frac{\partial^2 (Av)}{\partial x^2} + 2\,\frac{\partial^2 (Bv)}{\partial x\,\partial y} + \frac{\partial^2 (Cv)}{\partial y^2} - \nabla\cdot(b\,v) + c\,v.$$

Take A, B, and C to be constant. What would it mean to say that L is formally self-adjoint, that is, L = L* (formally)? Then < b , ∇u > must equal -∇·(bu) = < -b , ∇u > - u ∇·b, so 2 < b , ∇u > = -u (∇·b) for all u. Since this must hold for all u, it must hold in the special case that u = 1 identically, which implies that ∇·b = 0, and hence that < b , ∇u > = 0 for every u. Taking u(x,y) to be x, or to be y, then gives b1 = 0 and b2 = 0. Hence, if L is formally self-adjoint, then b = 0.

Examples XVIII.3.

  1. Let L[u] = 3 ∂²u/∂x² + 5 ∂²u/∂y². The formal adjoint of L is L. Note that

    L[u] v - u L[v] = ∂/∂x ( 3[ (∂u/∂x) v - u (∂v/∂x) ] ) + ∂/∂y ( 5[ (∂u/∂y) v - u (∂v/∂y) ] )
            = ∇·( 3[ (∂u/∂x) v - u (∂v/∂x) ], 5[ (∂u/∂y) v - u (∂v/∂y) ] ).

  2. Let L[u] = 3 ∂²u/∂x² + 5 ∂²u/∂y² + 7 ∂u/∂x + 11 ∂u/∂y + 13 u. The formal adjoint of L is

    L*[v] = 3 ∂²v/∂x² + 5 ∂²v/∂y² - 7 ∂v/∂x - 11 ∂v/∂y + 13 v. Note that

    L[u] v - u L*[v] = ∂/∂x ( 3[ (∂u/∂x) v - u (∂v/∂x) ] + 7uv ) + ∂/∂y ( 5[ (∂u/∂y) v - u (∂v/∂y) ] + 11uv )

    = ∇·( 3[ (∂u/∂x) v - u (∂v/∂x) ] + 7uv, 5[ (∂u/∂y) v - u (∂v/∂y) ] + 11uv ).

  3. Let L[u] = e^x ∂²u/∂x² + 5 ∂u/∂y + 3u. The formal adjoint of L is L* given by

    L*[v] = e^x ∂²v/∂x² + 2e^x ∂v/∂x - 5 ∂v/∂y + (e^x + 3)v. Note that

    L[u] v - u L*[v] = ∂/∂x ( e^x v (∂u/∂x) - u ∂(e^x v)/∂x ) + ∂/∂y ( 5uv )

    = ∇·( e^x v (∂u/∂x) - u ∂(e^x v)/∂x , 5uv ).
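The divergence identity in Example 3 is easy to verify symbolically. Here is a minimal Maple sketch (fresh session; Lu, Lsv, F1, and F2 are our names for L[u], L*[v], and the two components of the vector field):

> Lu  := exp(x)*diff(u(x,y),x,x) + 5*diff(u(x,y),y) + 3*u(x,y):
> Lsv := exp(x)*diff(v(x,y),x,x) + 2*exp(x)*diff(v(x,y),x) - 5*diff(v(x,y),y) + (exp(x)+3)*v(x,y):
> F1  := exp(x)*v(x,y)*diff(u(x,y),x) - u(x,y)*diff(exp(x)*v(x,y),x):
> F2  := 5*u(x,y)*v(x,y):
> simplify( v(x,y)*Lu - u(x,y)*Lsv - diff(F1,x) - diff(F2,y) );    # returns 0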

THE CONSTRUCTION OF M*

We now come to the important part of the construction of the real adjoint: how to construct the appropriate adjoint boundary conditions. We are given L and M; we have discussed how to construct L*. We now construct M*. To see what M* is in the general case, the divergence theorem is recalled:

$$\iint_D \nabla\cdot F \;dx\,dy = \int_{\partial D} \langle\, F, \eta\,\rangle\,ds.$$

The hope, then, is to write v L(u) - L*(v) u as ∇·F for some suitably chosen F.

Theorem. If L is a second order differential operator and L* is the formal adjoint, then there is F such that vL(u) - L*(v)u = grad .F.

Here is how to see that. Note that

$$\Big(vA\,\frac{\partial^2 u}{\partial x^2} + vC\,\frac{\partial^2 u}{\partial y^2}\Big) - \Big(u\,\frac{\partial^2 (Av)}{\partial x^2} + u\,\frac{\partial^2 (Cv)}{\partial y^2}\Big)
= \frac{\partial}{\partial x}\Big(vA\,\frac{\partial u}{\partial x} - u\,\frac{\partial (Av)}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(vC\,\frac{\partial u}{\partial y} - u\,\frac{\partial (Cv)}{\partial y}\Big)$$

$$= \nabla\cdot\Big\{\, v\Big(A\,\frac{\partial u}{\partial x},\ C\,\frac{\partial u}{\partial y}\Big) - u\Big(\frac{\partial (Av)}{\partial x},\ \frac{\partial (Cv)}{\partial y}\Big)\Big\}.$$

The mixed term is handled in the same way, since

$$2Bv\,\frac{\partial^2 u}{\partial x\,\partial y} - 2u\,\frac{\partial^2 (Bv)}{\partial x\,\partial y} = \frac{\partial}{\partial x}\Big(Bv\,\frac{\partial u}{\partial y} - u\,\frac{\partial (Bv)}{\partial y}\Big) + \frac{\partial}{\partial y}\Big(Bv\,\frac{\partial u}{\partial x} - u\,\frac{\partial (Bv)}{\partial x}\Big).$$

Also,

$$v\,\langle\, b\,,\,\nabla u\,\rangle + u\,\nabla\cdot(bv) = v\,b_1\,\frac{\partial u}{\partial x} + v\,b_2\,\frac{\partial u}{\partial y} + u\,\frac{\partial (b_1 v)}{\partial x} + u\,\frac{\partial (b_2 v)}{\partial y} = \nabla\cdot(u v\, b).$$

COROLLARY: With the mixed term absent (B = 0, as in the examples below),

$$\iint_D [\,vL(u) - uL^*(v)\,]\,dA = \iint_D \nabla\cdot\Big\{\, v\Big(A\,\frac{\partial u}{\partial x},\ C\,\frac{\partial u}{\partial y}\Big) - u\Big(\frac{\partial (Av)}{\partial x},\ \frac{\partial (Cv)}{\partial y}\Big) + uvb \Big\}\,dA$$

$$= \int_{\partial D} \Big\langle\, v\Big(A\,\frac{\partial u}{\partial x},\ C\,\frac{\partial u}{\partial y}\Big) - u\Big(\frac{\partial (Av)}{\partial x},\ \frac{\partial (Cv)}{\partial y}\Big) + uvb \,,\ \eta\,\Big\rangle\,ds.$$

EXAMPLES.

1 (continued). Let L[u] be as in Example 1 above for (x,y) in the rectangle D = [0,1]x[0,1] and M = {u: u = 0 on ∂D}. Then, according to Example 1, L = L* and

$$\iint_D [\,vL(u) - uL^*(v)\,]\,dA = \int_{\partial D} \Big\langle \Big( 3\Big[\frac{\partial u}{\partial x}v - u\frac{\partial v}{\partial x}\Big],\ 5\Big[\frac{\partial u}{\partial y}v - u\frac{\partial v}{\partial y}\Big] \Big),\ \eta \Big\rangle\,ds.$$

Recalling that the outward unit normal on the faces of the rectangle D is (0,-1), (1,0), (0,1), or (-1,0), and that u = 0 on ∂D, we have that

$$\iint_D [\,vL(u) - uL^*(v)\,]\,dA = -\int_0^1 5\,\frac{\partial u}{\partial y}\,v\Big|_{y=0}\,dx + \int_0^1 3\,\frac{\partial u}{\partial x}\,v\Big|_{x=1}\,dy + \int_0^1 5\,\frac{\partial u}{\partial y}\,v\Big|_{y=1}\,dx - \int_0^1 3\,\frac{\partial u}{\partial x}\,v\Big|_{x=0}\,dy.$$

In order for this integral to be zero for all u in M, it must be that v = 0 on ∂D, so M* = M. Hence, {L, M} is (really) self adjoint.

2 (continued). Let L[u] be as in Example 2 above and M = {u: u(x,0) = u(0,y) = 0, and ∂u/∂x(1,y) = ∂u/∂y(x,1) = 0, for 0 < x < 1, 0 < y < 1}. Using the results from above,

$$\iint_D [\,L(u)v - uL^*(v)\,]\,dA = -\int_0^1 5\,\frac{\partial u}{\partial y}\,v\Big|_{y=0}\,dx + \int_0^1 \Big[-3u\,\frac{\partial v}{\partial x} + 7uv\Big]\Big|_{x=1}\,dy + \int_0^1 \Big[-5u\,\frac{\partial v}{\partial y} + 11uv\Big]\Big|_{y=1}\,dx - \int_0^1 3\,\frac{\partial u}{\partial x}\,v\Big|_{x=0}\,dy.$$

It follows that M* = {v: v(x,0) = 0, v(0,y) = 0, v(1,y) = (3/7) ∂v/∂x(1,y), and v(x,1) = (5/11) ∂v/∂y(x,1)}.

3 (continued). Let L[u] be as in Example 3 above for (x,y) in the first quadrant. Let M = {u: u = 0 on ∂D}. Then

$$\iint_D [\,vL(u) - uL^*(v)\,]\,dA = \int_{\partial D} \Big\langle \Big( e^x v\,\frac{\partial u}{\partial x} - u\,\frac{\partial (e^x v)}{\partial x},\ 5uv \Big),\ \eta \Big\rangle\,ds = -\int_0^{\infty} v\,\frac{\partial u}{\partial x}\Big|_{x=0}\,dy.$$

Thus, M* = {v: v(0,y) = 0 for y > 0}.


Exercises XVIII
  1. Classify each of the following as hyperbolic, parabolic, or elliptic:
     ∂²u/∂x² + 3 ∂²u/∂x∂y + 2 ∂²u/∂y² - ∂u/∂x - ∂u/∂y = 0.
  2. ∂²u/∂x² + 3 ∂²u/∂x∂y + 2 ∂²u/∂y² - 2 ∂u/∂x - 4 ∂u/∂y = 0.
  3. ∂²u/∂x² + 2 ∂²u/∂x∂y + 2 ∂²u/∂y² = 0.
  4. ∂²u/∂x² + 2 ∂²u/∂x∂y + ∂²u/∂y² + ∂u/∂x + ∂u/∂y = 0.
  5. Transform the following equations into standard form:

     (a)  3 ∂²u/∂x² + 4 ∂²u/∂y² - u = 0.
     (b)  4 ∂²u/∂x² + ∂²u/∂x∂y + 4 ∂²u/∂y² + u = 0.
     (c)  ∂²u/∂x² + ∂²u/∂y² + 3 ∂u/∂x - 4 ∂u/∂y + 25 u = 0.
     (d)  ∂²u/∂x² - 3 ∂²u/∂y² + 2 ∂u/∂x - ∂u/∂y + u = 0.
     (e)  ∂²u/∂x² - 2 ∂²u/∂x∂y + ∂²u/∂y² + 3 u = 0.

  6. Show that the equation

    uxx - uy + g u = f(x,y), where g is any constant, can be transformed into

    vxx - vy = h(x,y) for a suitable function h.

  7. Show that by a rotation of the axes by 45 degrees the equations

    uxx - uyy = 0   and   uxy = 0

    can be transformed into one another. Find the general solution of both equations.

  8. Show that upon making a rotation to remove the cross term, the discriminant b² - 4ac is unchanged.
  9. Suppose that L(u) = ∂²u/∂x² - ∂²u/∂t², restricted to M = {u: u(0,t) = u(a,t) = 0 and u(x,0) = ∂u/∂t(x,0) = 0}. Classify L as parabolic, hyperbolic, or elliptic. Find L*. Find F such that v L[u] - L*[v] u = ∇·F. What is M*?
  10. Suppose that L[u] = ∂²u/∂x² + ∂²u/∂y² - ∂²u/∂z², restricted to

    M = {u: u(0,y,z) = 0, u(1,y,z) = 0, u(x,y,0) = u(x,y,1), ∂u/∂z(x,y,0) = ∂u/∂z(x,y,1),

    ∂u/∂y(x,0,z) = 3 u(x,0,z), ∂u/∂y(x,1,z) = 5 u(x,1,z)}.

    Classify L as parabolic, hyperbolic, or elliptic. Give L*. Find F such that v L[u] - L*[v] u = ∇·F. What is M*?

