Solutions

Solution to Exercise 0.1, page ix

We have images.

On the other hand, images.

Clearly f(x2) > f(x1), and so the mining operation x1 is preferred to x2 because it incurs a lower cost.

Solutions to the exercises from Chapter 1

Solution to Exercise 1.1, page 7

True. Indeed we have:

(V1)For all x, y, z > 0, x ⊕ (y ⊕ z) = x ⊕ (yz) = x(yz) = (xy)z = (xy) ⊕ z = (x ⊕ y) ⊕ z.

(V2)For all x > 0, x ⊕ 1 = x·1 = x = 1·x = 1 ⊕ x.
(So 1 serves as the zero vector in this vector space!)

(V3)If x > 0, then 1/x > 0 too, and x ⊕ (1/x) = x(1/x) = 1 = (1/x)x = (1/x) ⊕ x.
(Thus 1/x acts as the inverse of x with respect to the operation ⊕.)

(V4)For all x, y > 0, x ⊕ y = xy = yx = y ⊕ x.

(V5)For all x > 0, 1 · x = x^1 = x.

(V6)For all x > 0 and all α, β ∈ R, α · (β · x) = α · x^β = (x^β)^α = x^(αβ) = (αβ) · x.

(V7)For all x > 0 and all α, β ∈ R, (α + β) · x = x^(α+β) = x^α x^β = (α · x) ⊕ (β · x).

(V8)For all x, y > 0 and all α ∈ R, α · (x ⊕ y) = (xy)^α = x^α y^α = (α · x) ⊕ (α · y).

We remark that V is isomorphic to the one-dimensional vector space R (with the usual operations): indeed, it can be checked that the maps log : V → R and exp : R → V are linear transformations, and are inverses of each other.

Solution to Exercise 1.2, page 7

We prove this by contradiction. Suppose that C[0, 1] has dimension d. Consider the functions xn(t) = t^n, t ∈ [0, 1], n = 1, ···, d. Since polynomials are continuous, we have xn ∈ C[0, 1] for all n = 1, ···, d.

First we prove that xn, n = 1, ···, d, are linearly independent in C[0, 1]. Suppose not. Then there exist αn ∈ R, n = 1, ···, d, not all zeros, such that α1 · x1 + ··· + αd · xd = 0. Let m ∈ {1, ···, d} be the smallest index such that αm ≠ 0. Then for all t ∈ [0, 1], αm t^m + ··· + αd t^d = 0. In particular, dividing by t^m, we have for all t ∈ (0, 1] that αm = −(αm+1 t + ··· + αd t^(d−m)).

Thus, taking t = 1/n, for all n ∈ N we have |αm| ≤ (|αm+1| + ··· + |αd|)/n.

Passing the limit as n → ∞, we obtain αm = 0, a contradiction. So the functions xn, n = 1, ···, d, are linearly independent in C[0, 1].

Next, we get the contradiction to C[0, 1] having dimension d. Since any independent set of cardinality d in a d-dimensional vector space is a basis for this vector space, {xn : n = 1, ··· , d} is a basis for C[0, 1]. Since the constant function 1 (taking value 1 everywhere on [0, 1]) belongs to C[0, 1], there exist βnR, n = 1, ··· , d, such that 1 = β1 · x1 + ··· + βd · xd. In particular, putting t = 0, we obtain the contradiction that 1 = 0: 1 = 1(0) = (β1 · x1 + ··· + βd · xd)(0) = 0.

Solution to Exercise 1.3, page 7

(“If ” part.) Suppose that ya = yb = 0. Then we have:

(S1)If x1, x2S, then x1 + x2S. As x1, x2C1[a, b], also x1 + x2C1[a, b]. Moreover, x1(a) + x2(a) = 0 + 0 = 0 = ya and x1(b) + x2(b) = 0 + 0 = 0 = yb.

(S2)If xS and αR, then α · xS. Indeed, as xC1[a, b], and αR,
we have α·xC1[a, b], and (α·x)(a) = α0 = 0 = ya, (α·x)(b) = α0 = 0 = yb.

(S3)0S, since 0C1[a, b] and 0(a) = 0 = ya = yb = 0(b).

Hence, S is a subspace of the vector space C1[a, b].

(“Only if ” part.) Suppose that S is a subspace of C1[a, b]. Let xS.

Then 2 · xS. Therefore, (2 · x)(a) = ya, and so ya = (2 · x)(a) = 2x(a) = 2ya.

Thus ya = 0. Moreover, (2 · x)(b) = yb, and so yb = (2 · x)(b) = 2x(b) = 2yb.

Hence also yb = 0.

Solution to Exercise 1.4, page 10

images

Solution to Exercise 1.5, page 14

From the triangle inequality, we have that ||x|| = ||y + x − y|| ≤ ||y|| + ||x − y||, for all x, y ∈ X. So for all x, y ∈ X, ||x|| − ||y|| ≤ ||x − y||.

Interchanging x and y, we get ||y|| − ||x|| ≤ ||y − x|| = ||x − y||.

So for all x, y ∈ X, −(||x|| − ||y||) ≤ ||x − y||.

Combining the results from the first two paragraphs, we obtain | ||x|| − ||y|| | ≤ ||x − y|| for all x, y ∈ X.

Solution to Exercise 1.6, page 14

No, since for example (N2) fails if we take x = 1 and α = 2:

images

Solution to Exercise 1.7, page 15

We verify that (N1), (N2), (N3) are satisfied by || · ||Y:

(N1)For all y ∈ Y, ||y||Y = ||y||X ≥ 0.
If y ∈ Y and ||y||Y = 0, then ||y||X = 0, and so y = 0X.
But 0X = 0Y (as Y is a subspace of X), and so y = 0Y.

(N2)If y ∈ Y and α ∈ R, then α · y ∈ Y and ||α · y||Y = ||α · y||X = |α| ||y||X = |α| ||y||Y.

(N3)If y1, y2 ∈ Y, then y1 + y2 ∈ Y.
Also, ||y1 + y2||Y = ||y1 + y2||X ≤ ||y1||X + ||y2||X = ||y1||Y + ||y2||Y.

Solution to Exercise 1.8, page 15

(1) We first consider the case 1 images p < ∞, and then p = ∞. Let 1 images p < ∞.

(N1)If x = (x1, ···, xd) ∈ Rd, then ||x||p = (|x1|^p + ··· + |xd|^p)^(1/p) ≥ 0.

If x ∈ Rd and ||x||p = 0, then ||x||p^p = 0, that is, |x1|^p + ··· + |xd|^p = 0.

So |xn| = 0 for 1 ≤ n ≤ d, that is, x = 0.

(N2)Let x = (x1, ···, xd) ∈ Rd, and α ∈ R.

Then ||α · x||p = (|αx1|^p + ··· + |αxd|^p)^(1/p) = |α| (|x1|^p + ··· + |xd|^p)^(1/p) = |α| ||x||p.

(N3)Let x = (x1, ···, xd) ∈ Rd and y = (y1, ···, yd) ∈ Rd.

If p = 1, then we have |xn + yn| ≤ |xn| + |yn| for 1 ≤ n ≤ d.

By adding these, ||x + y||1 ≤ ||x||1 + ||y||1, establishing (N3) for p = 1.

Now consider the case 1 < p < ∞.

If x + y = 0, then ||x + y||p = ||0||p = 0 ≤ ||x||p + ||y||p trivially.

So we assume that x + y ≠ 0. By Hölder’s Inequality, we have

Σ(n=1 to d) |xn| |xn + yn|^(p−1) ≤ (Σ(n=1 to d) |xn|^p)^(1/p) (Σ(n=1 to d) |xn + yn|^(q(p−1)))^(1/q) = ||x||p (||x + y||p^p)^(1/q),

where we used q(p − 1) = p in order to obtain the last equality.

Similarly, Σ(n=1 to d) |yn| |xn + yn|^(p−1) ≤ ||y||p (||x + y||p^p)^(1/q). Consequently,

||x + y||p^p = Σ(n=1 to d) |xn + yn|^p ≤ Σ(n=1 to d) |xn| |xn + yn|^(p−1) + Σ(n=1 to d) |yn| |xn + yn|^(p−1) ≤ (||x||p + ||y||p)(||x + y||p^p)^(1/q).

   Dividing throughout by (||x + y||p^p)^(1/q) > 0, and using p − p/q = 1, we obtain ||x + y||p ≤ ||x||p + ||y||p. This completes the proof that (Rd, || · ||p) is a normed space for 1 ≤ p < ∞.

Now we consider the case p = ∞.

(N1)If x = (x1, ···, xd) ∈ Rd, then ||x||∞ = max{|x1|, ···, |xd|} ≥ 0.

If x ∈ Rd and ||x||∞ = 0, then max{|x1|, ···, |xd|} = 0, and so |xn| = 0 for 1 ≤ n ≤ d, that is, x = 0.

(N2)Let x = (x1, ···, xd) ∈ Rd, and α ∈ R.
Then ||α · x||∞ = max{|αx1|, ···, |αxd|} = |α| max{|x1|, ···, |xd|} = |α| ||x||∞.

(N3)Let x = (x1, ···, xd) ∈ Rd and y = (y1, ···, yd) ∈ Rd.

We have |xn + yn| ≤ |xn| + |yn| ≤ ||x||∞ + ||y||∞ for 1 ≤ n ≤ d.
So it follows that ||x + y||∞ ≤ ||x||∞ + ||y||∞, establishing (N3) for p = ∞.

(2) See the following pictures.

images

(3) We have for x = (a, b) ∈ R2 that

images

So images.

We have images. We have

images

giving 1/p images hp images 0 for all p, and so hp → 0 as p → ∞.) So it follows by the Sandwich Theorem1 that images.

The balls Bp(0, 1) grow to B(0, 1) as p increases.
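
The convergence ||x||p → ||x||∞ as p → ∞, and the triangle inequality proved above, can be spot-checked numerically. A minimal Python sketch (the sample points x = (1, 2) and y = (−3, 0.5) are arbitrary illustrative choices, not the ones from the exercise):

```python
# For a fixed x in R^2, ||x||_p decreases towards ||x||_oo as p grows.
def p_norm(x, p):
    """Return ||x||_p for a finite p >= 1."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def max_norm(x):
    """Return ||x||_oo = max_n |x_n|."""
    return max(abs(t) for t in x)

x = (1.0, 2.0)
for p in (1, 2, 4, 8, 16, 32, 64):
    print(f"p = {p:3d}:  ||x||_p = {p_norm(x, p):.6f}")
print(f"p = oo:   ||x||_oo = {max_norm(x):.6f}")

# Spot-check the triangle inequality ||x + y||_p <= ||x||_p + ||y||_p:
y = (-3.0, 0.5)
s = tuple(a + b for a, b in zip(x, y))
for p in (1, 2, 3):
    assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-12
```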

Solution to Exercise 1.9, page 16

(1)If x, yB(0, 1), then for all α ∈ (0, 1), (1 − α) · x + α · yB(0, 1) too, since

images

(2)See the following picture.

images

(3)B(0, 1) is not convex: taking x = (1, 0), y = (0, 1) and α = 1/2, we obtain images, and so images.

Solution to Exercise 1.10, page 16

We’ll verify that (N1), (N2), (N3) hold.

(N1)If x ∈ C[a, b], then |x(t)| ≥ 0 for all t ∈ [a, b], and so ||x||1 = ∫_a^b |x(t)| dt ≥ 0.

Let xC[a, b] be such that ||x||1 = 0. If x(t) = 0 for all t ∈ (a, b), then by the continuity of x on [a, b], it follows that x(t) = 0 for all t ∈ [a, b] too, and we are done! So suppose that it is not the case that for all t ∈ (a, b), x(t) = 0. Then there exists a t0 ∈ (a, b) such that x(t0) ≠ 0. As x is continuous at t0, there exists a δ > 0 small enough so that a < t0δ, t0 + δ < b, and such that for all t ∈ [a, b] such that t0δ < t < t0 + δ, |x(t) − x(t0)| < |x(t0)|/2. Then for t0δ < t < t0 + δ, we have, using the “reverse” Triangle Inequality from Exercise 1.5, page 14, that

images

So ||x||1 = ∫_a^b |x(t)| dt ≥ ∫_(t0−δ)^(t0+δ) |x(t)| dt ≥ 2δ · |x(t0)|/2 = δ|x(t0)| > 0.

This is a contradiction. Hence x = 0.

(N2)For x ∈ C[a, b] and α ∈ R, ||α · x||1 = ∫_a^b |αx(t)| dt = |α| ∫_a^b |x(t)| dt = |α| ||x||1.

(N3)Let x, y ∈ C[a, b]. Then

||x + y||1 = ∫_a^b |x(t) + y(t)| dt ≤ ∫_a^b (|x(t)| + |y(t)|) dt = ||x||1 + ||y||1.

Solution to Exercise 1.11, page 17

(N1)For xCn[a, b], clearly images.

If x ∈ Cn[a, b] is such that ||x||n,∞ = 0, then ||x||∞ + ··· + ||x^(n)||∞ = 0, and since each term in this sum is nonnegative, we have ||x||∞ = 0, and so x = 0.

(N2)Let xCn[a, b] and αR. Then

images

(N3)Let x, yCn[a, b]. For all 0 images k images n, ||x(k) + y(k)|| images ||x(k)|| + ||y(k)||, by the Triangle Inequality for || · || Consequently,

images

Solution to Exercise 1.12, page 17

(1)Let k1, k2, m1, m2, n1, n2Z, p ł m1, m2, n1, n2 and images.

If k1 > k2, then p^(k1−k2) m1 n2 = m2 n1, which implies that p | m2 n1, and as p is prime, this would mean p | m2 or p | n1, a contradiction. Hence k1 ≤ k2. Similarly, we also obtain k2 ≤ k1.

Thus k1 = k2. Consequently, p^(−k1) = p^(−k2), and so | · |p is well-defined.

(2)If 0 ≠ rQ, then we can express r as images, with k, m, nZ, and p image m, n.

We see that |r|p = p^(−k) > 0. If r = 0, then |r|p = |0|p = 0 by definition.
Thus |r|p ≥ 0 for all r ∈ Q. Also, if r ≠ 0, then |r|p > 0. Hence |r|p = 0 implies that r = 0.

(3)The claim is obvious if r1 = 0 or r2 = 0. Suppose that r1 ≠ 0 and r2 ≠ 0.

Let images and images.

So r1 r2 = p^(k1+k2) (m1 m2)/(n1 n2). As p ∤ m1, p ∤ m2, and p is prime, we have p ∤ m1 m2.

Similarly p ∤ n1 n2. Thus |r1 r2|p = p^(−(k1+k2)) = p^(−k1) p^(−k2) = |r1|p |r2|p.

(4)The inequality is trivially true if r1 = 0 or r2 = 0 or if r1 + r2 = 0.

Assume r1 ≠ 0, r2 ≠ 0, and r1 + r2 ≠ 0.

Let images, with k1, k2, m1, m2, n1, n2Z, p image m1, m2, n1, n2. We have

images

where images := pk1−min{k1,k2} m1n2 + pk2−min{k1,k2} n1m2 (≠ 0, since r1 + r2 ≠ 0). By the Fundamental Theorem of Arithmetic, there exists a unique integer images images 0 and an integer m such that images and p image m. Clearly p image n1n2.

Hence r1 + r2 = images, with p image m, n1n2.

So images

images

This yields the Triangle Inequality:

images
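
The p-adic absolute value is easy to experiment with. A minimal Python sketch (the prime p = 5 and the sample rationals are arbitrary illustrative choices) that extracts the power of p and spot-checks multiplicativity and the triangle inequality established above:

```python
from fractions import Fraction

def p_adic_abs(r, p):
    """|r|_p = p**(-k), where r = p**k * m/n with p dividing neither m nor n; |0|_p = 0."""
    if r == 0:
        return Fraction(0)
    num, den = r.numerator, r.denominator
    k = 0
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

p = 5
r1, r2 = Fraction(50, 3), Fraction(7, 25)     # arbitrary sample rationals
print(p_adic_abs(r1, p), p_adic_abs(r2, p))   # 1/25 and 25

# |r1*r2|_p = |r1|_p * |r2|_p, and |r1+r2|_p <= max(|r1|_p, |r2|_p) <= |r1|_p + |r2|_p
assert p_adic_abs(r1 * r2, p) == p_adic_abs(r1, p) * p_adic_abs(r2, p)
assert p_adic_abs(r1 + r2, p) <= max(p_adic_abs(r1, p), p_adic_abs(r2, p))
```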

Solution to Exercise 1.13, page 17

(N1)Clearly images for all M = [mij] ∈ Rm×n.

If ||M|| = 0, then |mij| = 0 for all 1 images i images m, 1 images j images n, that is, M = [mij] = 0, the zero matrix.

(N2)For M = [mij] ∈ Rm×n and αR, we have

images

(N3)For P = [pij], Q = [qij] ∈ Rm×n, |pij + qij| images |pij| + |qij| images ||P|| + ||Q||.

As this holds for all i, j, ||P + Q|| = images.

Solution to Exercise 1.14, page 19

Consider the open ball B(x, r) = {y ∈ X : ||x − y|| < r} in X. If y ∈ B(x, r), then ||x − y|| < r. Define r′ = r − ||x − y|| > 0. We claim that B(y, r′) ⊂ B(x, r). Let z ∈ B(y, r′). Then ||z − y|| < r′ = r − ||x − y||, and so ||x − z|| ≤ ||x − y|| + ||y − z|| < r. Hence z ∈ B(x, r). The following picture illustrates this.

images

Solution to Exercise 1.15, page 19

The point images, but for each r > 0, the point images belongs to the ball B(c, r), but not to I, since ||yc||2 = images, but images ≠ 0. See the following picture.

images

Solution to Exercise 1.16, page 19

Using the following picture, it can be seen that the collections O1, O2, O of open sets in the normed spaces (R2, || · ||1), (R2, || · ||2), (R2, || · ||), respectively, coincide.

images

Solution to Exercise 1.17, page 20

If Fi, i ∈ I, is a family of closed sets, then X\Fi, i ∈ I, is a family of open sets. Hence ∪(i ∈ I) (X\Fi) is open. So ∩(i ∈ I) Fi = X\∪(i ∈ I) (X\Fi) is closed.

If F1, ···, Fn are closed, then X\F1, ···, X\Fn are open, and so the intersection (X\F1) ∩ ··· ∩ (X\Fn) of these finitely many open sets is open as well.

Thus F1 ∪ ··· ∪ Fn = X\((X\F1) ∩ ··· ∩ (X\Fn)) is closed.

For showing that the finiteness condition cannot be dropped, we’ll consider the normed space X = R, and simply rework Example 1.15, page 20, by taking complements.

We know that Fn := R\(−1/n, 1/n), n ∈ N, is closed, and the union of these, ∪(n ∈ N) Fn = R\{0}, is not closed, since if it were, its complement R\(R\{0}) = {0} would be open, which is false.

Solution to Exercise 1.18, page 20

Consider the closed ball B(x, r) = {y ∈ X : ||x − y|| ≤ r} in X. To show that B(x, r) is closed, we’ll show its complement, U := {y ∈ X : ||x − y|| > r}, is open. If y ∈ U, then ||x − y|| > r. Define r′ = ||x − y|| − r > 0. We claim that B(y, r′) ⊂ U. Let z ∈ B(y, r′). Then ||z − y|| < r′ = ||x − y|| − r, and so ||x − z|| ≥ ||x − y|| − ||y − z|| > ||x − y|| − (||x − y|| − r) = r. Hence z ∈ U.

Solution to Exercise 1.19, page 20

(1)False.

For example, in the normed space R, consider the set [0, 1). Then [0, 1) is not open, since every open ball B with centre 0 contains at least one negative real number, and so B has points not belonging to [0, 1).

    On the other hand, this set [0, 1) is not closed either, as its complement is C := (−∞, 0) ∪ [1, ∞), which is not open, since every open ball B′ with centre 1 contains at least one positive real number strictly less than one, and so B′; contains points that do not belong to C.

(2)False. R is open in R, and it is also closed.

(3)True. Ø and X are both open and closed in any normed space X.

(4)True. [0, 1) is neither open nor closed in R.

(5)False.

0 ∈ Q, but every open ball centred at 0 contains irrational numbers; just consider √2/n, with a sufficiently large n.

(6)False.

Consider the sequence (an)n∈N given by a1 := 2, and for n ≥ 1, an+1 := (an + 2/an)/2. Then it can be shown, using induction on n, that (an)n∈N is bounded below by √2, and that (an)n∈N is monotone decreasing. (Example 1.19, page 31.) So (an)n∈N is convergent with a limit L satisfying L = (L + 2/L)/2, and so L^2 = 2.

As L must be positive (the sequence is bounded below by √2), it follows that L = √2. So every ball with centre √2 and a positive radius contains elements from Q (the terms an for large n), showing that R\Q is not open, and hence Q is not closed.

(Alternately, let cR have the decimal expansion c = 0.101001000100001 ···. The number c is irrational because2 it has a nonterminating and nonrepeating decimal expansion. The sequence of rational numbers obtained by truncation, namely 0.1, 0.101, 0.101001, 0.1010010001, 0.101001000100001, ··· converges with limit c, and so every ball with centre c and a positive radius contains elements from Q, showing again that R \ Q is not open, and hence Q is not closed.)

(7)True. R\Z = ∪(n ∈ Z) (n, n + 1). As each (n, n + 1) is open, so is their union.
Hence Z = R\(R\Z) is closed.

Solution to Exercise 1.20, page 21

We have already seen in Exercise 1.14, page 19, that the interior of S, namely the open ball B(0, 1) = {xX : ||x|| < 1} is open. Also, it follows from Exercise 1.18, page 20, that the exterior of the closed ball B(0, 1), namely the set U = {xX : ||x|| > 1} is open as well. Thus, the complement of S, being the union of the two open sets B(0, 1) and U, is open. Consequently, S is closed.

Solution to Exercise 1.21, page 21

If X = {0}, then {0} is clearly closed, since X\{0} = Ø is open.

Now suppose that X ≠ {0}, and let x ∈ X. We want to show that U := X\{x} is open. Let y ∈ U := X\{x}, and set r := ||x − y|| > 0. We claim that the open ball B(y, r) is contained in U. If z ∈ B(y, r), then ||y − z|| < r, and so ||z − x|| ≥ ||x − y|| − ||y − z|| = r − ||y − z|| > r − r = 0. Hence z ≠ x, and so z ∈ X\{x} = U. Consequently U is open, and so {x} = X\U is closed.

If F is empty, then it is closed.

If F is not empty, then F = {x1, ···, xn} = ∪(i=1 to n) {xi}, for some x1, ···, xn ∈ X.

As F is the finite union of the closed sets {x1}, ···, {xn}, F is closed too.

Solution to Exercise 1.22, page 21

Let x, y ∈ R and x < y. By the Archimedean property of R, there is a positive integer n such that n > 1/(y − x), that is, n(y − x) > 1. Also, there are positive integers m1, m2 such that m1 > nx and m2 > −nx, so that −m2 < nx < m1. Thus we have nx ∈ [−m2, −m2 + 1) ∪ [−m2 + 1, −m2 + 2) ∪ ··· ∪ [m1 − 1, m1). Hence there is an integer m such that m − 1 ≤ nx < m. We have nx < m ≤ 1 + nx < ny, and so dividing by n, we have x < q := m/n < y. Consequently, between any two real numbers, there is a rational number.

Let x ∈ R and let ε > 0. Then there is a rational number y such that x − ε < y < x + ε, that is, |x − y| < ε. Hence Q is dense in R.

Solution to Exercise 1.23, page 21

Let x ∈ R and let ε > 0. If x ∈ R\Q, then taking y = x, we have |x − y| = 0 < ε. If on the other hand x ∈ Q, then let n ∈ N be such that n > √2/ε, so that with y := x + √2/n, we have y ∈ R\Q, and |x − y| = √2/n < ε. So R\Q is dense in R.

Solution to Exercise 1.24, page 21

Let x = (xn)n∈N ∈ ℓ2, and ε > 0. Let N ∈ N be such that Σ(n > N) |xn|^2 < ε^2.

Then y := (x1, ···, xN, 0, 0, ···) ∈ c00, and ||x − y||2^2 = Σ(n > N) |xn|^2 < ε^2.

Thus ||x − y||2 < ε. Consequently, c00 is dense in ℓ2.
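
The truncation argument can be watched numerically: for a square-summable sequence, the ℓ2-distance to its truncations tends to 0. A minimal sketch, using the sample sequence xn = 1/n (an assumption made only for illustration):

```python
import math

# x_n = 1/n is square-summable; if y_N = (x_1, ..., x_N, 0, 0, ...) in c_00,
# then ||x - y_N||_2^2 = sum_{n > N} 1/n^2.
def tail_norm(N, terms=10**6):
    """Approximate ||x - y_N||_2 by summing the tail sum_{n=N+1}^{terms} 1/n^2."""
    return math.sqrt(sum(1.0 / n**2 for n in range(N + 1, terms + 1)))

for N in (1, 10, 100, 1000):
    print(f"N = {N:5d}:  ||x - y_N||_2 ≈ {tail_norm(N):.6f}")
# The printed values decrease towards 0, matching the density of c_00 in l^2.
```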

Solution to Exercise 1.25, page 21

Consider the set D of all finitely supported sequences with rational terms. Then D is a countable set since it is a countable union of countable sets. We now show that D is dense in 1. Let x := (xn)nN1 and let r > 0.

Let N ∈ N be large enough so that Σ(n > N) |xn| < r/2.

As Q is dense in R, there exist q1, ···, qN ∈ Q such that |x1 − q1| + ··· + |xN − qN| < r/2.

With x′ := (q1, ···, qN, 0, 0, ···) ∈ D, ||x − x′||1 = Σ(n=1 to N) |xn − qn| + Σ(n > N) |xn| < r/2 + r/2 = r.

Solution to Exercise 1.26, page 22

By the Binomial Theorem, we have

image

Putting s = 1 − t, we get 1 = (t + (1 − t))^n, expanded as in (7.1).

Keeping s fixed, and differentiating (7.1) with respect to t yields

image

Multiplying throughout by t gives

image

With image

Differentiating (7.2) with respect to t yields

image

Multiplying throughout by t yields

image

Setting s = 1 − t now gives

image

Hence

image
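
Although the displayed identities are not reproduced above, the standard Bernstein-type identities this computation produces, namely Σk C(n,k) t^k (1−t)^(n−k) = 1, Σk k C(n,k) t^k (1−t)^(n−k) = nt and Σk k(k−1) C(n,k) t^k (1−t)^(n−k) = n(n−1)t^2, can be spot-checked numerically. A minimal sketch (these identities are assumed, and n = 20, t = 0.3 are arbitrary test values):

```python
from math import comb, isclose

def bernstein_sums(n, t):
    """Return (S0, S1, S2): sums of C(n,k) t^k (1-t)^(n-k) weighted by 1, k, and k(k-1)."""
    w = [comb(n, k) * t**k * (1 - t) ** (n - k) for k in range(n + 1)]
    S0 = sum(w)                                         # should be 1
    S1 = sum(k * w[k] for k in range(n + 1))            # should be n*t
    S2 = sum(k * (k - 1) * w[k] for k in range(n + 1))  # should be n*(n-1)*t^2
    return S0, S1, S2

n, t = 20, 0.3
S0, S1, S2 = bernstein_sums(n, t)
assert isclose(S0, 1.0) and isclose(S1, n * t) and isclose(S2, n * (n - 1) * t**2)

# They combine into the second-moment identity sum_k (k - n t)^2 C(n,k) t^k (1-t)^(n-k) = n t (1-t):
second_moment = (S2 + S1) - 2 * n * t * S1 + (n * t) ** 2 * S0
assert isclose(second_moment, n * t * (1 - t))
```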

Solution to Exercise 1.27, page 33

(1)We check that the relation ~ is reflexive, symmetric and transitive.

(ER1)(Reflexivity) If ||·|| is a norm on X, then for all xX, we have that 1 · ||x|| = ||x|| = 1 · ||x||, and so ||·|| ~ ||·||.

(ER2)(Symmetry) If ||·||a ~ ||·||b, then there exist positive m, M such that for all x ∈ X, m||x||b ≤ ||x||a ≤ M||x||b. A rearrangement of this gives (1/M)||x||a ≤ ||x||b ≤ (1/m)||x||a, x ∈ X, and so ||·||b ~ ||·||a.

(ER3)(Transitivity) If ||·||a ~ ||·||b and ||·||b ~ ||·||c, then there exist positive constants Mab, Mbc, mab, mbc such that for all xX, we have that mab||x||b image ||x||a image Mab||x||b and mbc ||x||c image ||x||b image Mbc ||x||c.
Thus mabmbc||x||c image mab||x||b image ||x||a image Mab||x||b image MabMbc ||x||c, and so ||·||a ~ ||·||c.

(2)Suppose that ||·||a ~ ||·||b. Because ~ is an equivalence relation, it is enough to just prove that if U is open in (X, ||·||b), then U is open in (X, ||·||a) too, and similarly, if (xn)nN is Cauchy (respectively) convergent in (X, ||·||b), then it is Cauchy (respectively convergent) in (X, ||·||a) as well. Let m, M > 0 be such that for all xX, m||x||b image ||x||a image M||x||b.

Let U be open in (X, ||·||b), and xU. Then as U is open in (X, ||·||b), there exists an r > 0 such that Bb(x, r) := {yX : ||yx||b < r} ⊂ U. But if yX satisfies ||yx||a < mr, then ||yx||b image (1/m)||yx||a < (1/m)mr = r, and so yBb(x, r) ⊂ U. Hence Ba(x, mr) := {yX : ||yx||a < mr} ⊂ U. So it follows that U is open in (X, ||·||a) too.

Now suppose that (xn)n∈N is a Cauchy sequence in (X, ||·||b). Let ε > 0. Then there exists an N ∈ N such that for all n, m > N, ||xn − xm||b < ε/M. Hence for all n, m > N, ||xn − xm||a ≤ M||xn − xm||b < M · (ε/M) = ε.

Consequently, (xn)nN is a Cauchy sequence in (X, ||·||a) as well.

If (xn)nN is a convergent sequence in (X, ||·||b) with limit L, then for image > 0, there exists an NN such that for all n > N, ||xnL||b < image/M. Thus for all n > N, ||xnL||a image M ||xnL||b < M · (image/M) = image. So (xn)nN is convergent with limit L in (X, ||·||a) too.

Solution to Exercise 1.28, page 42

(1)Let L > 0 be such that for all x, y ∈ R, |f(x) − f(y)| ≤ L|x − y|.

Then in particular, with image nN, and y = 0, we obtain image

Thus n image L for all nN, which is absurd. So f is not Lipschitz.

(2)x1(0) = 0 and x2(0) = 02/4 = 0, and so x1, x2 satisfy the initial condition.

For all t image 0, image

So x1, x2 are both solutions to the given Initial Value Problem.

Solution to Exercise 1.29, page 43

Let F be closed, and let (xn)n∈N be a sequence in F which converges to x. Suppose that x ∉ F. Since F is closed, X\F is open, and so there is an open ball B(x, r) := {z ∈ X : ||z − x|| < r} with r > 0 which is contained in X\F. But with ε := r > 0, there exists an N ∈ N such that for all n > N, ||xn − x|| < r. In particular, ||xN+1 − x|| < r, so that F ∋ xN+1 ∈ B(x, r) ⊂ X\F, a contradiction. Hence x ∈ F.

Now suppose that for every sequence (xn)nN in F, convergent in X with a limit xX, we have that the limit xF. We want to show that X\F is open. Suppose it isn’t. Then3 ¬[∀xX\F, ∃r > 0 such that B(x, r) ⊂ X\F]. In other words, ∃xX\F such that ∀r > 0, B(x, r) ∩ F ≠ Ø. So with r = 1/n, nN, we can find an xnB(x, r) ∩ F. Then we obtain a sequence (xn)nN in F satisfying ||xnx|| < 1/n for all nN. Thus (xn)nN converges to x. But xF, contradicting the hypothesis. Hence X\F is open, that is, F is closed.

Solution to Exercise 1.30, page 43

Let (xn)nN in c00 be given by image nN.

Then with image we have

image

showing that c00 is not closed.

Solution to Exercise 1.31, page 43

(1)Suppose that F is a closed set containing S. Let L be a limit point of S.

Then there exists a sequence (xn)nN in S\{L} which converges to L.

As each xn belongs to S\{L} ⊂ S ⊂ F, and since F is closed, it follows that L ∈ F.

So all the limit points of S belong to F. Hence the closure of S is contained in F.

S is closed. Suppose that (xn)nN is a sequence in S that converges to L.

We would like to prove that LS. If LS, then LS, and we are done.

So suppose that LS. Now for each n, we define the new term xn as follows:

If xnS, then xn := xn.

If xnS, then xn must be a limit point of S, and so B(xn, 1/n) must contain some element, say xn, of S.

Hence we have image

Thus (xn)nN is a sequence in S\{L} which converges to L, and so L is a limit point of S, that is, LS. Consequently S is closed.

(2)We first note that if yY, then there exists a (yn)nN in Y that converges to y. Indeed, this is obvious if y is a limit point of Y, and if yY, then we may just take (yn)nN as the constant sequence with all terms equal to y. We have:

(S1)Let x, yY. Let (xn)nN, (yn)nN be sequences in Y that converge to x, y, respectively. Then xn + ynYY for each nN, and (xn + yn)nN converges to x + y. But as Y is closed, it follows that x + yY too.

(S2)Let αK, yY. Let (yn)nN be a sequence in Y that converges to y. Then α · ynYY for each nN, and (α · yn)nN converges to α · y.
But as Y is closed, it follows that α · yY too.

(S3)0YY.

Hence Y is a closed subspace.

(3)The proof is similar to part (2). Let x and y belong to the closure of C. Then there exist sequences (xn)n∈N and (yn)n∈N in C that converge to x, y, respectively. If α ∈ (0, 1), then (1 − α)x + αy = (1 − α) lim xn + α lim yn = lim ((1 − α)xn + αyn).

As (1 − α)xn + αyn ∈ C for all n ∈ N, and since the closure of C is closed, it follows that (1 − α)x + αy belongs to the closure of C too.

(4)Suppose that D is dense in X. Let xX\D. If nN, then the ball B(x, 1/n) must contain an element dnD. The sequence (dn)nN converges to x because ||xdn|| < 1/n, nN. Hence x is a limit point of D, that is, xD.

So X\DD. Also DD. Thus X = D ∪ (X\D) ⊂ DX, and so X = D. Now suppose that X = D. If xX\D = D\D, then x is a limit point of D, and so there is a sequence (dn)nN in D that converges to x. Thus given an image > 0, there is an N such that ||xdN|| < image, that is, dNDB(x, image).

On the other hand, if xD, and image > 0, then xB(x, image) ∩ D.

Hence D is dense in X.

Solution to Exercise 1.32, page 43

Let (xn)n∈N ∈ ℓ1. Then Σ(n=1 to ∞) |xn| < ∞, and so lim(n→∞) |xn| = 0.

Thus there exists an N ∈ N such that |xn| ≤ 1 for all n ≥ N. For all n ≥ N, |xn|^2 = |xn| · |xn| ≤ |xn| · 1 = |xn|. By the Comparison Test4, Σ(n=1 to ∞) |xn|^2 < ∞.

Hence (xn)n∈N ∈ ℓ2.

The inclusion is strict: the sequence (1/n)n∈N belongs to ℓ2 but not to ℓ1, since Σ 1/n^2 < ∞ while the Harmonic Series Σ 1/n diverges.

(1, ||·||2) is not a Banach space: Let us suppose, on the contrary, that it is a Banach space, and we will arrive at a contradiction by showing a Cauchy sequence which is not convergent in (1, ||·||2).

Consider, for n ∈ N, xn := (1, 1/2, 1/3, ···, 1/n, 0, 0, ···) ∈ ℓ1. Then (xn)n∈N converges in ℓ2 to x := (1/k)k∈N, because ||xn − x||2^2 = Σ(k > n) 1/k^2 → 0 as n → ∞.

So (xn)n∈N is a Cauchy sequence in (ℓ2, ||·||2), and so it is also Cauchy in (ℓ1, ||·||2). As we have assumed that (ℓ1, ||·||2) is a Banach space, it follows that the Cauchy sequence (xn)n∈N must be convergent to some element x′ ∈ ℓ1 ⊂ ℓ2. But by the uniqueness of limits (when we consider (xn)n∈N as a sequence in ℓ2), we must have x = x′ ∈ ℓ1, which is false, since we know that the Harmonic Series diverges. This contradiction proves that (ℓ1, ||·||2) is not a Banach space.
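
The contrast between Σ 1/n^2 (convergent) and the Harmonic Series (divergent) that drives this argument is easy to see numerically; a minimal sketch:

```python
import math

# Partial sums: sum 1/n^2 approaches pi^2/6, while sum 1/n grows like log(N) without bound.
for N in (10, 10**3, 10**5):
    sq = sum(1.0 / n**2 for n in range(1, N + 1))
    harm = sum(1.0 / n for n in range(1, N + 1))
    print(f"N = {N:7d}:  sum 1/n^2 = {sq:.6f}   sum 1/n = {harm:.4f}   log N = {math.log(N):.4f}")
# So (1/n) lies in l^2 but not in l^1, which is what the uniqueness-of-limits argument above uses.
```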

Solution to Exercise 1.33, page 43

Let (an)nN be a Cauchy sequence in c0. Then this is also a Cauchy sequence in , and hence convergent to a sequence in , say a. We’ll show that ac0. We write image and image Let image > 0. Then there exists an NN such that ||aNa|| < image. In particular, for all mN, image < image. But as aNc0, we can find an M such that for all m > M, image Consequently, for m > M, we have from the above that image Thus ac0 too.

Solution to Exercise 1.34, page 44

Given ε > 0, let N ∈ N be large enough so that for all n > N, ||xn − x|| < ε. Then for all n > N, we have | ||xn|| − ||x|| | ≤ ||xn − x|| < ε, and so it follows that the sequence (||xn||)n∈N in R is convergent, with limit ||x||.

Solution to Exercise 1.35, page 44

First consider the case 1 image p < ∞.

(N1)image for all x = (x1, x2, x3, ···) ∈ p.

If image then |xn| = 0 for all n, and so x = 0.

(N2)||α · x||p = image = |α| ||x||p, for xp, αK.

(N3)Let x = (x1, x2, ···) and y = (y1, y2, ···) belong to p. Let dN.

By the Triangle Inequality for the ||·||p-norm on Rd,

image

Passing the limit as d tends to ∞ yields ||x + y||p image ||x||p + ||y||p.

Now consider the case p = ∞.

(N1)image for all x = (x1, x2, x3, ···) ∈ .

If image then |xn| = 0 for all n, that is, x = 0.

(N2)image for x, αK.

(N3)Let x = (x1, x2, ···) and y = (y1, y2, ···) belong to .

Then for all k, |xk + yk| image |xk| + |yk| image ||x|| + ||y||, and so

image

Solution to Exercise 1.36, page 44

From Exercise 1.11, page 17, taking n = 1, (C1[a, b], ||·||1,∞) is a normed space. We show that (C1[a, b], ||·||1,∞) is complete. Let (xn)n∈N be a Cauchy sequence in C1[a, b]. Then ||xn − xm||∞ ≤ ||xn − xm||∞ + ||xn′ − xm′||∞ = ||xn − xm||1,∞, and so (xn)n∈N is a Cauchy sequence in (C[a, b], ||·||∞), and hence convergent to, say, x ∈ C[a, b]. Also, ||xn′ − xm′||∞ ≤ ||xn − xm||∞ + ||xn′ − xm′||∞ = ||xn − xm||1,∞ shows that (xn′)n∈N is a Cauchy sequence in (C[a, b], ||·||∞), and hence convergent to, say, y ∈ C[a, b]. We will now show that x ∈ C1[a, b] and x′ = y. Let t ∈ [a, b]. By the Fundamental Theorem of Calculus, xn(t) = xn(a) + ∫_a^t xn′(τ) dτ, and so

image

Passing the limit as n goes to ∞ gives, for all t ∈ [a, b], image

By the Fundamental Theorem of Calculus, x′ = y ∈ C[a, b]. So x ∈ C1[a, b]. Finally, we’ll show that (xn)n∈N converges to x in C1[a, b]. Let ε > 0, and let N be such that for all m, n > N, ||xn − xm||1,∞ < ε. Then for all t ∈ [a, b], we have |xn(t) − xm(t)| + |xn′(t) − xm′(t)| ≤ ||xn − xm||∞ + ||xn′ − xm′||∞ = ||xn − xm||1,∞ < ε. Letting m go to ∞, it follows that for all n > N, |xn(t) − x(t)| + |xn′(t) − x′(t)| ≤ ε. As the choice of t ∈ [a, b] was arbitrary, it follows that

image

that is, ||xn − x||1,∞ ≤ 2ε.

Solution to Exercise 1.37, page 44

Let (xn)n∈N be any Cauchy sequence in X. We construct a subsequence (xnk)k∈N inductively, possessing the property that if n ≥ nk, then ||xn − xnk|| < 1/2^k, k ∈ N. Choose n1 large enough so that if n, m ≥ n1, then ||xn − xm|| < 1/2. Suppose xn1, ···, xnk have been constructed. Choose nk+1 > nk such that if n, m ≥ nk+1, then ||xn − xm|| < 1/2^(k+1). In particular, for n ≥ nk+1, ||xn − xnk+1|| < 1/2^(k+1).

Now define u1 = xn1, uk+1 = xnk+1xnk, kN.

We have image Thus image converges.

But the partial sums of image are image

So (xnk)kN converges in X, to, say xX. As (xnk)kN is a convergent subsequence of the Cauchy sequence (xn)nN, it now follows that (xn)nN is itself convergent with the same limit x. Indeed, given image > 0, first let N be such that for all n, m > N, ||xnxm|| < image/2, and next let nK > N be such that ||xnKx|| < image/2, which yields that for all n > N,

image

Solution to Exercise 1.38, page 44

(N1)For (x, y) ∈ X × Y, ||(x, y)|| = max{||x||, ||y||} image 0.
If ||(x, y)|| = 0, then 0 ≤ ||x|| ≤ max{||x||, ||y||} = ||(x, y)|| = 0, and so ||x|| = 0, giving x = 0. Similarly, y = 0 too, and so (x, y) = 0X×Y.

(N2)For αK, and (x, y) ∈ X × Y,

image

(N3)Let (x1, y1), (x2, y2) ∈ X × Y. Then

image

and so max{||x1 + x2||, ||y1 + y2||} ≤ ||(x1, y1)|| + ||(x2, y2)||. Thus

image

Hence (x, y) ↦ max{||x||, ||y||}, (x, y) ∈ X × Y, defines a norm on X × Y.

Let ((xn, yn))nN be Cauchy in X × Y. As ||x|| image max{||x||, ||y||} = ||(x, y)||, (xn)nN is Cauchy in X. As X is Banach, (xn)nN converges to some xX. Similarly (yn)nN converges to a yY. Let image > 0. Then there exists an Nx such that for all n > Nx, ||xnx|| < image, and there is an Ny such that for all n > Ny, ||yny|| < image. So with N := max{Nx, Ny}, for all n > N, we have ||xnx|| < image and ||yny|| < image. Thus ||(xn, yn) – (x, y)|| = ||(xnx, yny)|| = max{||xnx||, ||yny||} < image, showing that ((xn, yn))nN converges to (x, y) in X × Y. So X × Y is Banach.

Solution to Exercise 1.39, page 50

Since K is compact in Rd, it is closed and bounded. Let R > 0 be such that for all xK, ||x||2 image R. In particular, for every xKF, we have ||x||2 image R. Thus KF is bounded. Also, since both K and F are closed, it follows that even KF is closed. Hence KF is closed and bounded, and so by Theorem 1.10, page 45, we conclude that KF is compact.

Solution to Exercise 1.40, page 50

Clearly Sd–1 is bounded. It is also closed, and we prove this below. Let (xn)nN be a sequence in Sd–1 which converges to L in Rd. Let L = (L1, ···, Ld) and image for nN. Then image xn(k) = Lk (k = 1, ..., d).

Since xnSd–1 for each nN, we have image Passing the limit as n → ∞, we obtain image Hence LSd–1. So Sd–1 is closed. As Sd–1 is closed and bounded, it follows from Theorem 1.10, page 45, that it is compact.

Solution to Exercise 1.41, page 50

(1)Let (Rn)nN be a sequence in O(2).
Using image then image and image

So each of the sequences (an)nR, (bn)nR, (cn)nR, (dn)nR is bounded.

By successively refining subsequences of these sequences, we can choose a sequence of indices n1 < n2 < n3 <···, such that the sequences (ank)kN, (bnk)kN, (cnk)kN, (dnk)kN are convergent, to, say, a, b, c, d, respectively.

Hence (Rnk)kN is convergent with the limit image

From (Rn)images Rn = I (nN), it follows that also RimagesR = I, that is, RO(2).

(2)The hyperbolic rotations image belong to O(1, 1) because

image

But ||R(t)|| ≥ |cosh(t)| = cosh t → ∞ as t → ∞, showing that O(1, 1) is not bounded. Hence O(1, 1) can’t be compact (as every compact set is necessarily bounded).
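
Both facts can be checked numerically. The sketch below assumes the standard description of O(1, 1) as the matrices R with Rᵀ J R = J, where J = diag(1, −1) (the displayed computation above is not reproduced, so this is an assumption), and shows that the hyperbolic rotations satisfy it while their entries grow without bound:

```python
import math

def hyperbolic_rotation(t):
    """R(t) = [[cosh t, sinh t], [sinh t, cosh t]] as a nested list."""
    c, s = math.cosh(t), math.sinh(t)
    return [[c, s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

J = [[1.0, 0.0], [0.0, -1.0]]
for t in (0.5, 2.0, 5.0):
    R = hyperbolic_rotation(t)
    G = matmul(transpose(R), matmul(J, R))   # should equal J (up to rounding)
    assert all(abs(G[i][j] - J[i][j]) < 1e-6 for i in range(2) for j in range(2))
    print(f"t = {t:4.1f}:  largest entry of R(t) = {max(max(row) for row in R):.3e}")
# The largest entry is cosh(t), which tends to infinity with t, so O(1,1) is unbounded.
```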

Solution to Exercise 1.42, page 51

Let K := {0} ∪ {1/n : n ∈ N}. Since K ⊂ [0, 1], clearly K is bounded.

Moreover, R\K = (−∞, 0) ∪ (1, ∞) ∪ ∪(n ∈ N) (1/(n + 1), 1/n).

Thus R\K, being the union of open intervals, is open, that is, K is closed. Since K is closed and bounded, it is compact.

Solutions to the exercises from Chapter 2

Solution to Exercise 2.1, page 58

If 1C[0, 1] denotes the constant function taking value 1 everywhere, then

image

and so image

So (L2) is violated, showing that S1 is not a linear transformation.

On the other hand, S2 is a linear transformation. For all x1, x2C[0, 1],

image

and so (L1) holds. Moreover, for all αR and xC[0, 1] we have

image

and so (L2) holds as well.

Solution to Exercise 2.2, page 58

(1)Let α1, α2R be such that α1f1 + α2f2 = 0, that is,

image

In particular, with t = 0, we obtain α1 = 0. Thus α2eat sin(bt) = 0 for all tR. With t = π/2b, we see that image and so α2 = 0. Consequently, f1, f2 are linearly independent.

(2)First of all, D is a well-defined map from Sf1,f2 to itself, since

image

Thus DSf1,f2Sf1,f2.

Furthermore, it is clear that D(g1 + g2) = D(g1) + D(g2) for all g1, g2 ∈ C1(R) (and in particular for g1, g2 ∈ Sf1,f2 ⊂ C1(R)), and also that D(α · g) = α · D(g) for all α ∈ R and all g ∈ C1(R) (and in particular, for all g ∈ Sf1,f2).

Hence D is a linear transformation from Sf1,f2 to itself.

(3)We have Df1 = aeat cos(bt) – eatb sin(bt) = af1bf2, and
Df2 = aeat sin(bt) + eatb cos(bt) = bf1 + af2.

So the matrix of D with respect to the basis B = (f1, f2) is image

(4)As det[D]B = a2 + b2 ≠ 0, [D]B is invertible, and image
Hence D is invertible, and the inverse D–1 : Sf1,f2 → Sf1,f2 has the matrix [D–1]B (with respect to B) given by [D–1]B = ([D]B)–1, found above.

(5)We note that image and so

image

By the definition of D, image

So image any constant.

Similarly, as image we have

image

and so image

So image any constant.
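
Parts (3) and (4) can be checked symbolically. A minimal sketch with sympy, using the concrete values a = 1, b = 2 (arbitrary choices); note that the matrix written below follows the columns-are-coordinates-of-images convention and may be the transpose of the one displayed in the text:

```python
import sympy as sp

t = sp.symbols('t')
a, b = 1, 2                              # arbitrary sample parameters with a^2 + b^2 != 0
f1 = sp.exp(a * t) * sp.cos(b * t)
f2 = sp.exp(a * t) * sp.sin(b * t)

# D f1 = a f1 - b f2 and D f2 = b f1 + a f2, as computed in part (3):
assert sp.simplify(sp.diff(f1, t) - (a * f1 - b * f2)) == 0
assert sp.simplify(sp.diff(f2, t) - (b * f1 + a * f2)) == 0

# The matrix of D in the basis (f1, f2) is invertible since its determinant a^2 + b^2 != 0:
D = sp.Matrix([[a, b], [-b, a]])
assert sp.simplify(D.det() - (a**2 + b**2)) == 0
print(D.inv())                           # = 1/(a^2+b^2) * [[a, -b], [b, a]]
```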

Solution to Exercise 2.3, page 61

(1)We have image

(As expected, the arc length is simply the length of the line segment [0, 1].)

(2)We have image and so

image

(3)Suppose that f is continuous at 0. Then with image := 1 > 0, there exists a δ > 0 such that whenever xC1[0, 1] and ||x0|| < δ, we have |f(x) – f(0)| < 1.

We have image for all image

Hence for such n there must hold that |f(xn) – f(0)| = |f(xn) – 1| < 1.

So for all image we have image image |f(xn)| image |f(xn) – 1| + 1 < 1 + 1 = 2,

which is a contradiction. Hence f is not continuous at 0.

Let x0, xC1[a, b]. Using the triangle inequality in (R2, ||·||2), we obtain

image

and so

image

Thus given image > 0, if we set δ := image, then we have for all xC1[0, 1] satisfying ||xx0||1,∞ < δ that |f(x) – f(x0)| image ||xx0||1,∞ < δ = image.

So f is continuous at x0. As the choice of x0 was arbitrary, f is continuous.

Solution to Exercise 2.4, page 62

Let x0 ∈ X. Given ε > 0, set δ := ε. Then for all x ∈ X satisfying ||x − x0|| < δ, we have | ||x|| − ||x0|| | ≤ ||x − x0|| < δ = ε. Thus ||·|| is continuous at x0. As x0 ∈ X was arbitrary, it follows that ||·|| is continuous on X.

Solution to Exercise 2.5, page 62

f–1({–1, 1}) = { : nZ}, f–1({1}) = {2 : nZ}, f–1([–1, 1]) = R, and image

Solution to Exercise 2.6, page 62

Since cos is periodic with period 2π (that is, f(x) = f(x + 2π) for all xR), we have f(R) = f([0, 2π]) = f([δ, δ + 2π]) = [–1, 1].

Solution to Exercise 2.7, page 64

(“If” part) Suppose that for every closed F in Y, f–1(F) is closed in X.

Now let V be open in Y. Then Y\V is closed in Y.

Thus f–1(Y\V) = f–1(Y)\f–1(V) = X\f–1(V) is closed in X.

Hence f–1(V) = X\(X\f–1(V)) is open in X.

So for every open V in Y, f–1(V) is open in X.

By Theorem 2.1, page 63, f is continuous on X.

(“Only if” part) Suppose that f is continuous.

Let F be closed in Y, that is, Y\F is open in Y.

Hence f–1(Y\F) = f–1(Y\f–1(F) = X\f–1(F) is open in X.

Consequently, we have that f–1(F) is closed in X.

Solution to Exercise 2.8, page 64

If x ∈ (g ∘ f)–1(W), then (g ∘ f)(x) ∈ W, that is, g(f(x)) ∈ W. So f(x) ∈ g–1(W), that is, x ∈ f–1(g–1(W)). Thus (g ∘ f)–1(W) ⊂ f–1(g–1(W)).

If x ∈ f–1(g–1(W)), then f(x) ∈ g–1(W), that is, (g ∘ f)(x) = g(f(x)) ∈ W. Hence x ∈ (g ∘ f)–1(W). So we have f–1(g–1(W)) ⊂ (g ∘ f)–1(W).

Consequently, (g ∘ f)–1(W) = f–1(g–1(W)).

Solution to Exercise 2.9, page 64

(1)True.

Since (–∞, 1) is open in R and f : XR is continuous, it follows that {xX : f(x) < 1} = f–1(–∞, 1) is open in X by Theorem 2.1, page 63.

(2)True.

Because (1, ∞) is open in R, and f : XR is continuous, it follows by Theorem 2.1, page 63, that {xX : f(x) > 1} = f–1 (1, ∞) is open in X.

(3)False.

Take for example X = R with the usual Euclidean norm, and consider the continuous function f(x) = x for all xR. Then {xX : f(x) = 1} = {1}, which is not open in R.

(4)True.

(–∞, 1] is closed in R because its complement is (1, ∞), which is open in R. As f : XR is continuous, {xX : f(x) image 1} = f–1 (–∞, 1] is closed in X by Corollary 2.1, page 64.

(5)True.

Since {1} is closed in R and since f : XR is continuous, it follows by Corollary 2.1, page 64, that {xX : f(x) = 1} = f–1{1} is closed in X.

(6)True.

Each of the sets f–1{1} and f–1{2} are closed, and so their finite union, namely {xX : f(x) = 1 or 2} is closed as well.

(7)False.

Take for example X = R with the usual Euclidean norm, and consider the continuous function f(x) = 1 (xR). Then {xX : f(x) = 1} = R, which is not bounded, and hence can’t be compact.

Solution to Exercise 2.10, page 65

For all xX, we have f(2x) = –f(x), and so

image

Since the sequence image converges to 0, it follows that

image

So we obtain that ((−1)^n f(x))n∈N is convergent with limit f(0). Thus the subsequence (f(x))n∈N = ((−1)^(2n) f(x))n∈N of ((−1)^n f(x))n∈N is also convergent with limit f(0). Hence f(x) = f(0) for all x ∈ X. As f(0) = f(2 · 0) = −f(0), it follows that f(0) = 0. Hence f(x) = 0 for all x ∈ X. So if f is continuous and it satisfies the given identity, then it must be the constant function x ↦ 0 : X → Y.

Conversely, the constant function x ↦ 0 : X → Y is indeed continuous, and also f(2x) + f(x) = 0 + 0 = 0 for all x ∈ X.

Solution to Exercise 2.11, page 65

The determinant of M = [mij] is given by the sum of expressions of the type

image

where p : {1, 2, 3, ···, n} → {1, 2, 3, ···, n} is a permutation. Since each of the maps M images m1p(1) m2p(2) m3p(3) ... mnp(n) is easily seen to be continuous using the characterisation of continuous functions provided by Theorem 2.3, page 64, it follows that their linear combination is also continuous.

{0} is closed in R, and so its inverse image det–1{0} = {MRn×n : det M = 0} under the continuous map det is also closed. Thus its complement, namely the set {MRn×n : det M ≠ 0}, is open. But this is precisely the set of invertible matrices, since MRn×n is invertible if and only if det M ≠ 0.

Solution to Exercise 2.12, page 73

We’d seen in Exercise 1.21, page 21, that a singleton set in any normed space is closed. So {0} is closed in Rm. As the linear transformation TA : RnRm is continuous, its inverse image under TA, T–1A({0}) = {xRn : Ax = 0} = ker A, is closed in Rn.

Solution to Exercise 2.13, page 73

Let V be a subspace of Rn, and let {v1, ···, vk} be a basis for V. Extend this to a basis {v1, ···, vk, vk+1, ···, vn} for Rn. By using the Gram-Schmidt orthogonalisation procedure, we can find an orthonormal5 set of vectors {u1, ···, un} such that for each k ∈ {1, ···, n}, the span of the vectors v1, ···, vk coincides with the span of u1, ···, uk. Now define A ∈ R^((n−k)×n) as follows:

image

It is clear from the orthonormality of the ujs that Au1 = · · · = Auk = 0, and so it follows that also any linear combination of u1, · · ·, uk lies in the kernel of A. In other words, V ⊂ ker A.

On the other hand, if x = α1u1 + · · · + αnun, where α1, · · ·, αn are scalars and if Ax = 0, then it follows that

image

So x = α1u1 + · · · + αkukV. Hence ker AV.

Consequently V = ker A, and by the result of the previous exercise, it now follows that V is closed.

Solution to Exercise 2.14, page 73

(1)The linearity of T follows immediately from the properties of the Riemann integral. Continuity follows from the straightforward estimate

image

(2)The partial sums sn of the series converge to f. Thus, since the continuous map T preserves convergent sequences, it follows that

image

Solution to Exercise 2.15, page 73

We have for all tR that

image

Thus ||fg|| image ||g||||f||1 for all gL(R). So f∗ is well-defined. Linearity is easy to see. From the above estimate, it follows that the linear transformation f∗ is continuous as well.

Solution to Exercise 2.16, page 73

Consider the reflection map R : L2(R) → L2(R), defined by (Rf)(x) := f(−x) for x ∈ R. Then it is straightforward to check that R ∈ L(L2(R)), and moreover it is continuous since ||Rf||2 = ||f||2 for all f ∈ L2(R). Clearly Y = ker(I − R), and so, being the inverse image of the closed set {0} under the continuous map I − R, it follows that Y is closed.

Solution to Exercise 2.17, page 76

For image

image

and so Λ ∈ CL(2) and ||Λ|| image image|λn|.

Moreover, for en := (0, ···, 0, 1, 0, ···) ∈ ℓ2 (the sequence with all terms equal to 0 and nth term equal to 1), we have

image

for all n, and so ||Λ|| is an upper bound for {|λn| : nN}. Hence ||Λ|| image image |λn|.

From the above, it now follows that ||Λ|| = image|λn|.

If λn = 1 –, image nN, then ||Λ|| = image image = 1.

Suppose that x = (an)nN2 is such that ||x||2 image 1 and ||Λx||2 = ||Λ|| = 1.

If 0 = a2 = a3 = · · ·, then Λx = 0, and this contradicts the fact that ||Λx||2 = 1.

So at least one of the terms a2, a3, · · · must be nonzero.

image

a contradiction. So the operator norm is not attained for this particular Λ.
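
For the particular Λ considered here, the failure to attain the operator norm can be watched numerically. The sketch below assumes the diagonal entries are λn = 1 − 1/n (read off from the partly garbled line above, so this is an assumption) and evaluates ||Λx||2 on unit vectors supported on ever later coordinates:

```python
import math

def lam(n):
    return 1.0 - 1.0 / n              # assumed diagonal entries lambda_n = 1 - 1/n

def norm_Lambda_x(support):
    """||Lambda x||_2 for the unit vector x spread evenly over the given coordinates."""
    m = len(support)
    return math.sqrt(sum(lam(n) ** 2 / m for n in support))

for k in (1, 10, 100, 10000):
    # unit vector e_k supported on the single coordinate k (so ||x||_2 = 1)
    print(f"x = e_{k:<6d}:  ||Lambda x||_2 = {norm_Lambda_x([k]):.6f}")
# The values approach sup_n (1 - 1/n) = 1 = ||Lambda||, but |lambda_n| < 1 for every n,
# so no unit vector attains the operator norm, in line with the contradiction above.
```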

Solution to Exercise 2.18, page 76

Let x = (xn)nNp, and let image > 0.

Then there exists an N such that image |xk|p < imagep. Let sn := image xkek.

Then for n > N, xsn = (0, · · ·, 0, xn+1, xn+2, xn+3, · · ·).

So image giving ||xsn||p < image.

So (sn)nN converges in p to x, that is, x = image xnen.

The map x = (x1, x2, x3, · · ·) image xn : pK is easily seen to be linear.

It’s continuous as for all xp, |φn(x)| = |xn| = (|xn|p)1/p image = |x|p.

If x = image, where the ξis and images are scalars, then applying φn,

image

As the choice of n was arbitrary, ξn = image for all n.

Solution to Exercise 2.19, page 77

(1)Let x = (xn)nN. Then for all nN, |xn| image ||x|.

Thus image = ||x||.

Consequently Ax. So A is a well-defined map.

The linearity is easy to check.

Also, we see that for all x that ||Ax|| = image image ||x||.

So ACL(), and ||A|| image 1. Also, with 1 := (1, 1, 1, · · ·) ∈ , we have

image

Consequently, ||A|| = 1.

(2)Let x = (xn)nNc, and let its limit be denoted by L.

We’ll show that Axc as well.

We will prove that Ax is convergent with the same limit L! (Intuitively, this makes sense since for large n, all xns look alike, imagesL, and the average of these is approximately L, since the first few terms do not “contribute much” if we take a large collection to take an average.)

Let ε > 0. Then there exists an N1 ∈ N such that for all n > N1, |xn − L| < ε/2. Since (xn)n∈N is convergent, it is bounded, and so there exists an M > 0 such that for all n ∈ N, |xn| ≤ M.

Choose NN such that N > max image

(This ghastly choice of N is arrived at by working backwards. Since we wish to make image less than image for n > N, we manipulate this, as shown in the chain of inequalities below, and then choose N large enough to achieve this.)

So N > N1 and image Then for all n > N, we have:

image

So image is a convergent sequence with limit L.

Hence Axc. Consequently Acc, and c is an invariant subspace of A.
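
The claim that the Cesàro averages of a convergent sequence converge to the same limit is easy to watch numerically. A minimal sketch, using the sample sequence xn = L + (−1)^n/n with L = 3 (an arbitrary illustrative choice):

```python
L = 3.0

def x(n):
    """A sample convergent sequence with limit L."""
    return L + (-1) ** n / n

def cesaro_average(N):
    """(A x)_N = (x_1 + ... + x_N) / N, the averaging operator from the exercise."""
    return sum(x(n) for n in range(1, N + 1)) / N

for N in (10, 100, 10000):
    print(f"N = {N:6d}:  x_N = {x(N):+.6f}   (A x)_N = {cesaro_average(N):+.6f}")
# Both columns tend to L = 3, illustrating that A maps c into c and preserves limits.
```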

Solution to Exercise 2.20, page 85

(If part:) Since image|λn| > 0, we have |λk| image image|λn| > 0, and so λk ≠ 0 for all k.

Moreover, image < ∞, and so V : 22 given by

image

belongs to CL(2). Moreover for all (an)nN we have

image

and so VΛ = I = ΛV. Hence Λ is invertible in CL(2), with Λ–1 = V.

(Only if part:) Let Λ be invertible in CL(2). Then there exists a Λ–1CL(2) such that Λ–1Λ = I = ΛΛ–1. So ||x||2 = ||Λ–1Λx||2 image ||Λ–1||||Λx||2, for all x2.

Hence ||Λx||2 image image for all x2. So with x := ek (kth term 1, others 0),

image

Thus image

Solution to Exercise 2.21, page 86

We have

image

Similarly,

image

Solution to Exercise 2.22, page 86

(1)If there exist matrices A, B such that ABBA = I, then

image

a contradiction.

(2)If n = 1, then AB^n − B^nA = AB − BA = I = 1 · B^0 = nB^(n−1).

If for some n ∈ N we have AB^n − B^nA = nB^(n−1), then

image

and so the result follows by induction.

Suppose that AB − BA = I. Then for all n ∈ N, AB^n − B^nA = nB^(n−1). Taking the operator norm on both sides yields

image

We claim that B^(n−1) ≠ 0 for all n ∈ N. Indeed, if n = 1, then B^0 := I ≠ 0. If B^(n−1) ≠ 0 for some n ∈ N, then B^n = 0 gives the contradiction that

image

and so we must have B^n ≠ 0 too. By induction, our claim is proved. Thus in (7.3), we may cancel ||B^(n−1)|| > 0 on both sides of the inequality, obtaining n ≤ 2||A|| ||B|| for all n ∈ N, which is absurd. Consequently, our original assumption that AB − BA = I must be false.

(3)If Ψ ∈ C(R), then

images

and so ABBA = I.

Solution to Exercise 2.23, page 87

(1)For x = (x1, x2) ∈ R2, we have, using the Cauchy-Schwarz inequality, that

images

So images

By the Neumann Series Theorem, (IK)−1 exists in CL(R2).

So there is a unique solution xR2 to (IK)x = y, given by x = (IK)−1y.

(2)We have images and so images

Thus images

(3)A computer program yielded the following numerical values:

images
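
Part (3) refers to a computer program; since the entries of K and y are not reproduced above, the sketch below uses a hypothetical 2×2 matrix K with ||K|| < 1 and a hypothetical y, purely to illustrate how the Neumann series x = Σ K^n y (equivalently the fixed-point iteration x ← y + Kx) produces the unique solution of (I − K)x = y guaranteed by the Neumann Series Theorem:

```python
# Hypothetical data (NOT the exercise's K and y): any K with norm < 1 behaves the same way.
K = [[0.2, 0.3],
     [0.1, 0.4]]
y = [1.0, 2.0]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# Fixed-point iteration x_{k+1} = y + K x_k; its limit is (I - K)^{-1} y = sum_n K^n y.
x = [0.0, 0.0]
for _ in range(100):
    x = [y[i] + Kx_i for i, Kx_i in enumerate(apply(K, x))]

residual = [xi - (yi + Kxi) for xi, yi, Kxi in zip(x, y, apply(K, x))]
print("x ≈", x, "  residual of (I-K)x = y:", residual)
```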

Solution to Exercise 2.24, page 88

If n = 1, then (I − A)P1 = (I − A)(I + A)(I + A^2) = I − A^4 = I − A^(2^(1+1)).

If the claim is true for some kN, then

images

So the claim follows by induction for all nN.

(IA2n+1)nN converges to I in L(X) since ||A|| < 1 and

images

Also, since ||A|| < 1, IA is invertible in CL(X). We have

images

and so ((I − A)^(−1)(I − A^(2^(n+1))))n∈N = ((I − A)^(−1)(I − A)Pn)n∈N = (Pn)n∈N is convergent with limit (I − A)^(−1).
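
The identity (I − A)Pn = I − A^(2^(n+1)) and the convergence Pn → (I − A)^(−1) can be sanity-checked numerically. A minimal sketch with numpy, using an arbitrary small matrix A with ||A|| < 1 (an illustrative choice, not the exercise's A):

```python
import numpy as np

A = np.array([[0.1, 0.3],
              [0.2, 0.2]])           # arbitrary matrix with norm < 1
I = np.eye(2)

P = I + A                            # first product (I + A), with (I - A)(I + A) = I - A^2
power = A @ A                        # holds A^(2^n), starting at A^2
for n in range(1, 6):
    P = P @ (I + power)              # P_n = (I + A)(I + A^2)...(I + A^(2^n))
    power = power @ power            # A^(2^n) -> A^(2^(n+1))
    assert np.allclose((I - A) @ P, I - power)   # (I - A) P_n = I - A^(2^(n+1))

print("P_5          =\n", P)
print("(I - A)^{-1} =\n", np.linalg.inv(I - A))  # the two agree to machine precision
```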

Solution to Exercise 2.25, page 88

(1)Let T0GL(X). Then T0−1CL(X), and also r := ||T0−1|| ≠ 0.

If Timages, and in particular,

images

and so by the Neumann Series Theorem, I + (TT0)T0−1 belongs to GL(X).

But as T0GL(X) too, it now follows that

images

This completes the proof that GL(X) is an open subset of CL(X).

(2)Let T0GL(X) and images > 0. Set images

Let TCL(X) be such that ||TT0|| < δ.

Then in particular ||Timages and so by part (1), TGL(X), with

images

Moreover, we have

images

Thus using the estimate from the Neumann Series Theorem,

images

Solution to Exercise 2.26, page 92

A2 = B2 = 0, and so A, B are nilpotent.

Hence images and images

We note that images

Also, images

We have images and images Thus

images

and so images
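
Since the specific A and B are not reproduced above, the sketch below uses the standard nilpotent pair A = [[0,1],[0,0]], B = [[0,0],[1,0]] (an assumption, purely for illustration) to show the kind of computation involved: e^A and e^B reduce to I + A and I + B because A^2 = B^2 = 0, and e^A e^B differs from e^(A+B):

```python
import numpy as np
from scipy.linalg import expm

# Assumed illustrative nilpotent matrices (A^2 = B^2 = 0); not necessarily the exercise's A, B.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

I = np.eye(2)
assert np.allclose(A @ A, 0) and np.allclose(B @ B, 0)
# For a matrix with square zero, the exponential series terminates: e^A = I + A.
assert np.allclose(expm(A), I + A) and np.allclose(expm(B), I + B)

print("e^A e^B  =\n", expm(A) @ expm(B))   # [[2, 1], [1, 1]]
print("e^(A+B)  =\n", expm(A + B))         # [[cosh 1, sinh 1], [sinh 1, cosh 1]]
# The two differ because A and B do not commute, so e^A e^B != e^(A+B) in general.
```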

Solution to Exercise 2.27, page 94

Suppose that the Banach space X has an infinite countable Hamel basis {x1, x2, x3, ···}. We can ensure that for all n ∈ N, we have ||xn|| = 1. Let Fn := span{x1, x2, ···, xn}. Then each Fn is a finite dimensional normed space (with the induced norm from X), and so it is a Banach space. It follows that Fn is a closed subspace of X. By the Baire Lemma, there is an n ∈ N such that Fn contains an open set U, and in particular, an open ball B(x, 2r) for some x ∈ Fn and r > 0. The vector y := r·xn+1 + x belongs to B(x, 2r) since ||y − x|| = ||r·xn+1|| = r < 2r. Since y, x ∈ B(x, 2r) ⊂ Fn, and as Fn is a subspace, we conclude that (y − x)/r ∈ Fn too, that is, xn+1 ∈ Fn = span{x1, ···, xn}, a contradiction.

Solution to Exercise 2.28, page 96

In light of the Open Mapping Theorem, such a function must necessarily be nonlinear. If the function is constant on an open interval I, then the image f(I) will be a singleton, which is not open. The following function does the job:

images

If I := (−1, 1), then f(I) = {0}, which is not open. f is surjective and continuous, and its graph is depicted in the following picture.

images

Solution to Exercise 2.29, page 96

From Exercise 1.38, page 44, X × Y is a Banach space. Since G(T) is a closed subspace of the Banach space X × Y, it is a Banach space too. Let us now consider the map p : G(T) → X defined by p(x, Tx) = x for x ∈ X. Then p is a linear transformation:

images

for α ∈ K and x, x1, x2 ∈ X. Moreover, p is continuous because

images

p is also injective, since if p(x, Tx) = 0, then x = 0.

Furthermore, if xX, then x = p(x, Tx), showing that p is surjective too.

Thus, pCL(G(T), X) is bijective, and hence invertible in CL(G(T), X), with inverse p−1CL(X, G(T)). Hence for all xX,

images

showing that TCL(X, Y).

Solution to Exercise 2.30, page 102

We have

images

Solution to Exercise 2.31, page 102

(1)We know that σ(T) ⊂ {λ ∈ C : |λ| ≤ ||T||}, and so ||T|| is an upper bound for {|λ| : λ ∈ σ(T)}. Thus rσ(T) = sup{|λ| : λ ∈ σ(T)} ≤ ||T||.

(2)We have σ(TA) = {eigenvalues of A} = {1}, and so rσ(TA) = 1.

On the other hand, with images we have ||x1||2 = 1, and so

images

Solution to Exercise 2.32, page 103

Suppose that λ^2 ∉ σ(T^2). Then λ^2 ∈ ρ(T^2), that is, λ^2 I − T^2 is invertible in CL(X). From the identity λ^2 I − T^2 = (λI − T)(λI + T) = (λI + T)(λI − T), we then obtain a P ∈ CL(X) with (λI − T)P = I and a Q ∈ CL(X) with Q(λI − T) = I.

But then Q = QI = Q(λIT)P = IP = P, and so P = QCL(X) is the inverse of λIT, a contradiction to the fact that λσ(T).

Solution to Exercise 2.33, page 103

If en ∈ ℓ2 denotes the sequence with the nth term equal to 1, and all others equal to 0, then Λen = λn en, and so each λn is an eigenvalue of Λ with eigenvector en ≠ 0. Thus {λn : n ∈ N} ⊂ σp(Λ).

Next we will show that σ(Λ) ⊂ {λn : n ∈ N} ∪ {0}. To this end, suppose that μ ∉ {λn : n ∈ N} ∪ {0}. Then we claim that μI − Λ is invertible in CL(ℓ2). By a previous exercise, we know that in order to show the invertibility of

images

it is enough to show that |μλn| is bounded away from 0. To see this, note that since images there is an N large enough such that |λn| < |μ|/2 for all n > N, and so

images

But also |μλ1|, ···, |μλN| are all positive, so that we do have

images

Hence μI − Λ ∈ CL(2) is invertible in CL(2), that is, μρ(Λ).

Thus σ(Λ) ⊂ {λn : n ∈ N} ∪ {0}.

But the spectrum σ(Λ) is closed, and since it contains σp(Λ) ⊃ {λn : n ∈ N}, it must contain the limit of (λn)n∈N, which is 0.

So we also obtain {λn : n ∈ N} ∪ {0} ⊂ σp(Λ) ∪ {0} ⊂ σ(Λ).

Thus σ(Λ) = {λn : n ∈ N} ∪ {0}.

Consequently, {λn : n ∈ N} ⊂ σp(Λ) ⊂ {λn : n ∈ N} ∪ {0} = σ(Λ).

Solution to Exercise 2.34, page 103

(1)Suppose that λ ∈ σap(T). Then there exists a sequence (xn)n∈N of vectors in X such that ||xn|| = 1 for all n ∈ N, and lim(n→∞) ||(T − λI)xn|| = 0.

We will just prove that λ ∉ ρ(T), and so by definition it will follow that λ ∈ σ(T). Suppose, on the contrary, that λ ∈ ρ(T). Then T − λI is invertible in CL(X). Thus

images

a contradiction. Consequently, λρ(T), that is, λσ(T).

(2)For kN, let ek denote the sequence in 2 whose kth term is 1 and all other terms are zeros. Then ||ek||2 = 1, and Λek = λkek, so that

images

that is, images Consequently, images

Solution to Exercise 2.35, page 103

Let λC and Ψ ∈ DQ be such that xΨ(x) = λΨ(x) for almost all xR, that is, (xλ)Ψ(x) = 0 for almost all xR. Now xλ ≠ 0 for all xR\{λ}. Hence for almost all xR, Ψ(x) = 0, that is, Ψ = 0 in L2(R). Consequently, λ can’t be an eigenvalue of Q, and so σp(Q) = ∅.

Solution to Exercise 2.36, page 105

For simplicity we’ll assume K = R. If a = (an)nN1, then define the functional φaCL(c0, R) = (c0)′ by

images

Then a images φa : 1 → (c0)′ is an injective linear transformation, and it is also continuous because |φa(b)| images ||b|| ||a||1 for all bc0, and ||φa|| images ||a||1. To see the surjectivity of this map, we need to show that given φ ∈ (c0)′, there exists an a1 such that φ = φa. Let enc0 being the sequence with nth term 1 and all others 0. Set a = (φ(e1), φ(e2), φ(e3), ···). We’ll show that a1, and that φ = φa.

Define the scalars αn, nN, by images

Then for all n we have αnφ(en) = |φ(en)|.

We have ||(α1, ···, αn, 0, ···)|| images 1, and so

images

for all nN. Hence a1.

Finally, we need to show φ = φa . Let b = (bn)nNc0 and images > 0. Then there exists an N such that for all n > N, |bn| < images. Set bimages = (b1, ···, bN, 0, ···) ∈ c0. Then ||bbimages|| = ||(0, ··· , 0, bN+1, ···)|| images images. Moreover, we have that

images

Hence

images

As the choice of images > 0 was arbitrary, it follows that φ(b) = φa(b) for all bc0, that is, φ = φa.

Solution to Exercise 2.37, page 105

(1)BV [a, b] is a vector space: We prove that BV [a, b] is a subspace of the vector space R[a,b] of all real valued functions on [a, b] with pointwise operations.

(S1)The zero function 0 belongs to BV [a, b].

Indeed, for any partition images and so var(0) = 0 < ∞.

(S2)Let μ1, μ2BV [a, b]. Then we have

images

and so μ1 + μ2BV [a, b].

(S3)Let αR and μBV [a, b]. Then

images

and so αμBV [a, b].

(2)We show that μ images ||μ|| defines a norm on BV [a, b].

(N1)If μBV [a, b], then ||μ|| = |μ(a)| + var(μ) images 0.

Let μBV [a, b] be such that ||μ|| = 0. Then var(μ) = 0, and |μ(a)| = 0.

Hence μ(a) = 0. Suppose that μ0. Then there exists a c ∈ [a, b] such that μ(c) ≠ 0. Clearly ca, since μ(a) = 0. Now consider the partition

images

Then var images

images

a contradiction. Hence μ = 0.

(N2)Let αR and μBV [a, b]. Then αμBV [a, b], and we have seen earlier that varαμ = |α|var(μ). Hence

images

(N3)Let μ1, μ2BV [a, b]. Then μ1 + μ2BV [a, b], and we’ve seen above that var(μ1 + μ2) images var(μ1) + var(μ2). Thus

images

Consequently BV [a, b] is a normed space with the norm ||·||.

(3)Let xC[a, b] and μBV [a, b]. Given images > 0, let δ > 0 be such that for every partition P satisfying δP < δ, we have

images

Then

images

As the choice of images > 0 was arbitrary, it follows that images

(4)For all xC[a, b], |φµx| images ||x|| var(μ).

From the linearity of the Riemann-Stieltjes integral, it follows that φµ is a linear transformation from C[a, b] to R. From the above estimate, we also see that φµ is continuous. Consequently φµ ∈ CL(C[a, b], R) = (C[a, b])′.

Moreover ||φµ|| images var(μ).

(5)We will show that (x images x(a)) = φµ, where images

First of all, μBV [a, b], since var(μ) = 1 < ∞.

Let xC[a, b], and images > 0. Let δ > 0 be such that for all t such that ta < δ, we have |x(t) − x(a)| < images.

Then for all partitions P with δP < δ, we have

images

where the last inequality follows from the fact that |at1| images δP < δ.

So images (μ is not unique: for any cR, μ + c also works!)

Solution to Exercise 2.38, page 109

On the one dimensional subspace Y :=span{x} ⊂ X, we have a continuous linear map φ : YC. (Simply define φ(αx) = α, then |φ(αx)| = |α| = ||αx||/||x||, and so ||φ|| = 1/||x|| < ∞.) By the Hahn-Banach Theorem, there exists an extension φCL(X, C) of φ, and so φ(x) = φ(x) = 1 ≠ 0. (Alternatively, one could just use Corollary 2.7, page 109, with x = x and y = 0: there exists a functional φCL(X, C) such that φ(x) ≠ φ(0) = 0.)

Solution to Exercise 2.39, page 115

Consider the collection P of all linearly independent subsets SX. Consider the partial order which is simply set inclusion ⊂. Then every chain in P has an upper bound, as explained below.

If C is a chain in P, then images is an upper bound of C.

We just need to show the linear independence of this set U. To this end, let v1, ···, vn be any set of vectors from U for which there exist scalars α1, ···, αn in F such that α1v1 + ··· + αnvn = 0. Let the sets S1, ···, SnC be such that v1S1, ···, vnSn. As C is a chain, we can arrange the finitely many Sks in “ascending order”, and there exists a k ∈ {1, ···, n} such that S1, ···, SnSk. Then v1, ···, vnSk. But by the linear independence of Sk, we conclude that α1 = ··· = αn = 0. Thus U is linearly independent, showing that every chain in P has an upper bound.

By Zorn’s Lemma, P has a maximal element B. We claim that span B = X. For if not, then there exists an xX\span B. We will show B′ := B ∪ {x} is linearly independent. Suppose that α1, ···, αn, αK and v1, ···, vnB are such that αx + α1v1 + ··· + αnvn = 0. First we note that α = 0, since otherwise

images

which is false. As α = 0, the equality αx + α1v1 + ··· + αnvn = 0 now becomes α1v1 + ··· + αnvn = 0. But by the independence of the set B, we conclude that α1 = ··· = αn = 0 too. Hence B′ is linearly independent, and so B′ belongs to P. As B′ = B ∪ {x} images B, we obtain a contradiction (to the maximality of B). Consequently, span B = X, and as BP, B is also linearly independent.

Solution to Exercise 2.40, page 115

Let B = {vi : iI}. Every xX has a unique decomposition

images

for some finite number of indices i1, ···, inI and scalars α1, ···, αn in F. Define F(x) = α1f(vi1) + ··· + αnf(vin). It is clear that F(vi) = f(vi), iI. Let us check that F : XY is linear.

(L1)Given x1, x2X, there exist scalars α1, ···, αn and β1, ···, βn (possibly several of them equal to zero) and indices i1, ···, inI, such that

images

(L2)Let αF. Given xX, there exist β1, ··· , βnF and i1, ···, inI, such that x = β1vi1 + ··· + βnvin. Then αx = (αβ1)vi1 + ··· + (αβn)vin.

images

Solution to Exercise 2.41, page 115

Let B be a Hamel basis for X. As X is infinite dimensional, B is an infinite set. Let {vn : nN} be a countable subset of B. Let yY be any nonzero vector.

Let f : BY be defined by images

By the previous exercise, this f extends to a linear transformation F from X to Y. We claim that F ∉ CL(X, Y). Suppose, on the contrary, that F ∈ CL(X, Y). Then there exists an M > 0 such that for all x ∈ X, ||F(x)|| ≤ M||x||. But if we put x = vn, n ∈ N, this yields n||vn|| ||y|| = ||f(vn)|| = ||F(vn)|| ≤ M||vn||, and so for all n ∈ N, n ≤ M/||y||, which is absurd. Thus F is a linear transformation from X to Y, but is not continuous.

Solution to Exercise 2.42, page 115

If R were finite dimensional, say d-dimensional over Q, then there would exist a one-to-one correspondence between R and Qd. But Qd is countable, while R isn’t, a contradiction. So R is an infinite dimensional vector space over Q.

Suppose that R has a countable basis B = {vn : nN} over Q.

We will define an injective map images yielding a contradiction.

Set f(0) := 0 ∈ Q1. If x ≠ 0, then x has a decomposition x = q1v1 + ··· + qnvn, where q1, ···, qnQ and qn ≠ 0. In this case, set f(x) = (q1, ···, qn) ∈ Qn. It can be seen that if f(x) = f(y), for some x, yR, then x = y. So f is injective.

As ∪(n ∈ N) Q^n is countable, it follows that R is countable too, a contradiction.

Hence B can’t be countable.

Solution to Exercise 2.43, page 115

The set R is an infinite dimensional vector space over Q. Let {vi : iI} be a Hamel basis for this vector space. Fix any iI.

We define a function f : BR on the basis elements: images

Let F be an extension of f from B to R, as provided by Exercise 2.40, page 115. Then F is linear, and in particular, additive. So F(x + y) = F(x) + F(y) for all x, yR.

We now show that F is not continuous on R: for otherwise, for any vivi, if (qn)nN is a sequence in Q converging to the real number vi/vi (vi ≠ 0 since it is a basis vector), then we would have

images

a contradiction!

Solution to Exercise 2.44, page 116

(1)By the Algebra of Limits, the map l is linear.

Let (xn)nNc. For all nN, |xn| images ||(xn)nN||∞.

Passing the limit as images

Thus lCL(c, K).

(2)Y is a subspace of . Indeed we have:

(S1)Clearly (0)nNY, since images

(S2)Let (xn)nN, (yn)nNY.

Then images and images exist.

As images

we conclude that images exists as well.

Thus (xn)nN + (yn)nNY too.

(S3)Let (xn)nNY and αK. Then images exists.

As images it follows that

images exists, and so α · (xn)nNY.

Consequently, Y is a subspace of .

(3)For all x, xSxY : Let x = (xn)nN. Then we have

images

We have images

As x, it follows that images and so xSxY.

(4)If x = (xn)nNc, then Axc, where A denotes the averaging operator (Exercise 2.19, page 77).

Hence images exists, and so xY. Consequently, cY.

(5)Define L0 : YK by L0(xn)nN = images

Then it is easy to check that L0 : YK is a linear transformation.

Moreover, if xY, then images

But images

Hence |L0x| images ||x||∞. Consequently, L0CL(Y, K).

We had seen that if xc, then Axc, and that l(Ax) = l(x).

Hence for all xc, L0(x) = l(Ax) = l(x), that is, L0|c = l.

Using the Hahn-Banach Theorem, there exists an LCL(, K) such that L|Y = L0 (and ||L|| = ||L0||).

In particular, if xc, then xY and so Lx = L0x = lx. Thus L|c = l.

Also, if x = (xn)n∈N ∈ ℓ∞, then x − Sx ∈ Y.

Hence images

Thus Lx = LSx for all x, that is, L = LS.

(6)We have images

images

Consequently, images

Solutions to the exercises from Chapter 3

Solution to Exercise 3.1, page 124

f is a continuous linear transformation. Thus it follows that f′(x0) = f for all x0, and in particular also for x0 = 0.

Solution to Exercise 3.2, page 125

Suppose that f′(x0) = LCL(X, Y). Let M > 0 be such that ||Lh|| images M||h||, for all hX. Let images > 0. Then there exists a δ1 > 0 such that whenever xX satisfies 0 < ||xx0|| < δ1, we have

images

So if xX satisfies ||xx0|| < δ1, then ||f(x) − f(x0) − L(xx0)|| images images||xx0||.

Let images Then for all xX satisfying ||xx0|| < δ, we have

images

Hence f is continuous at x0.

Solution to Exercise 3.3, page 125

(Rough work: We have for xC1[0, 1] that

images

where L : C1[0, 1] → R is the map given by Lh = 2x0(1)h′(1), h′ ∈ C1[0, 1]. So we make the guess that f′(x0) = L.)

Let us first check that L is a continuous linear transformation. L is linear because:

(L1)For all h1, h2C1[0, 1], we have

images

(L2)For all hC1 [0, 1] and αR, we have

images

Also, L is continuous since for all hC1[0, 1], we have

images

So L is a continuous linear transformation. Moreover, for all xC1[0, 1],

images

so that images

Given images > 0, set δ = images. Then if xC1[0, 1] satisfies 0 < ||xx0||1, ∞ < δ, we have

images

Solution to Exercise 3.4, page 125

Given images > 0, let images′ > 0 be such that images′||x2x1|| < images. Let δ′ > 0 such that whenever 0 < ||xγ(t0)|| < δ′, we have

images

Let δ > 0 be such that δ||x2 − x1|| < δ′. For all t ∈ R satisfying 0 < |t − t0| < δ,

images

and so ||γ(t) − γ(t0)|| = |tt0|||x2x1|| images δ||x2x1|| < δ′. Thus for all tR satisfying 0 < |tt0| < δ, we have

images

Thus f ∘ γ is differentiable at t0 and images

Suppose, for a contradiction, that there exist x1, x2 ∈ X such that g(x1) ≠ g(x2). With γ the same as above, we have for all t ∈ R that

images

So g ∘ γ is constant, and hence g(x2) = (g ∘ γ)(1) = (g ∘ γ)(0) = g(x1), a contradiction. Consequently, g is constant.

Solution to Exercise 3.5, page 128

Suppose that f′(x0) = 0. Then for every images

In particular, setting h = x0, we have images giving x0 = 0C[a, b].

Vice versa, if x0 = 0, then

images

for all hC[a, b], that is, f′(0) = 0.

Consequently, f′(x0) = 0 if and only if x0 = 0.

So we see that if x is a minimiser, then f′(x) = 0, and so from the above x = 0. We remark that 0 is easily seen to be the minimiser because

images

Solution to Exercise 3.6, page 129

If x1, x2 ∈ S and α ∈ (0, 1), then x1, x2 ∈ C1[a, b]. So (1 − α)x1 + αx2 ∈ C1[a, b]. Moreover, as x1(a) = x2(a) = ya and x1(b) = x2(b) = yb, we also have that

images

Thus (1 − α)x1 + αx2S. Consequently, S is convex.

Solution to Exercise 3.7, page 129

For x1, x2X and α ∈ (0, 1) we have by the triangle inequality that

images

Thus || · || is convex.

Solution to Exercise 3.8, page 129

(If part:) Let x1, x2 ∈ C and α ∈ (0, 1). Then we have that (x1, f(x1)) ∈ U(f) and (x2, f(x2)) ∈ U(f). Since U(f) is convex,

images

Consequently, (1 − α)f(x1) + αf(x2) = y images f(x) = f((1 − α) · x1 + α · x2). Hence f is convex.

(Only if part:) Let (x1, y1), (x2, y2) ∈ U(f) and α ∈ (0, 1). Then we know that y1 images f(x1) and y2 images f(x2) and so

images

Consequently, images that is,

images

So U(f) is convex.

Solution to Exercise 3.9, page 129

We prove this using induction on n. The result is trivially true when n = 1, and in fact we have equality in this case. Suppose the inequality has been established for some nN. If x1, ···, xn, xn+1 are n + 1 vectors, and images then

images

and so the claim follows for all n.

Solution to Exercise 3.10, page 130

We have for all xR

images

Thus f is convex.

(Alternately, one could note that images is a norm on R2, and so it is convex. Now fixing y = 1, and keeping x variable, we get convexity of images

Solution to Exercise 3.11, page 132

For x1, x2C1[0, 1] and α ∈ (0, 1), we have, using the convexity of function images (Exercise 3.10, page 130), that

images

Solution to Exercise 3.12, page 133

(If:) Suppose that x0(t) = 0 for all t ∈ [0, 1]. Then we have that for all hC[0, 1],

images

and so f′(x0) = 0.

(Only if:) Now suppose that f′(x0) = 0. Thus for every hC[0, 1], we have

images

In particular, taking h := x0C[0, 1], we obtain images

So images As x0 is continuous on [0, 1], it follows that x0 = 0.

By the necessary condition for x0 to be a minimiser, we have that f′(x0) = 0 and so x0 must be the zero function 0 on [0, 1]. Furthermore, as f is convex and f′(0) = 0, it follows that the zero function is a minimiser. Consequently, there exists a unique solution to the optimisation problem, namely the zero function 0C[0, 1]. The conclusion is also obvious from the fact that for all xC[0, 1],

images

Solution to Exercise 3.13, page 141

We have images Then images and images

The Euler-Lagrange equation is images

Upon integrating, we obtain images on [a, b] for some constant C.

Thus images, for all t ∈ [a, b].

So A images 0, and images for each t ∈ [a, b]. As images is continuous, we can conclude that images must be either everywhere equal to images, or everywhere equal to −images. In either case, images is constant, and so x is given by x(t) = αt + β, t ∈ [a, b]. Since x(a) = xa and x(b) = xb, we have

images

and images for all t ∈ [a, b].

That this xS is indeed a minimiser can be concluded by noticing that the map x images L(γx) : SR is convex, thanks to the convexity of images images for all ηR (Exercise 3.10, page 130).

(The fact that x is a minimiser, is of course expected geometrically, since the straight line is the curve of shortest length between two points in the Euclidean plane.)
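
As a quick numerical illustration of this (with hypothetical endpoint data a = 0, b = 1, xa = 0, xb = 1, chosen only for this sketch), the straight line gives a smaller arc length than a perturbed curve with the same endpoints:

```python
import numpy as np

def arc_length(x, a=0.0, b=1.0, n=20001):
    # Approximate L(gamma_x) = int_a^b sqrt(1 + x'(t)^2) dt by the trapezoidal rule.
    t = np.linspace(a, b, n)
    dxdt = np.gradient(x(t), t)
    return np.trapz(np.sqrt(1.0 + dxdt**2), t)

straight = lambda t: t                              # x(t) = t: the straight line with x(0) = 0, x(1) = 1
wiggly = lambda t: t + 0.1 * np.sin(2 * np.pi * t)  # same endpoints, a perturbed curve in S

print(arc_length(straight))   # ~ sqrt(2) ~ 1.4142
print(arc_length(wiggly))     # strictly larger
```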

Solution to Exercise 3.14, page 141

We have images

Solution to Exercise 3.15, page 141

With images we have

images

Then images and images

The Euler-Lagrange equation is images

Upon integrating, we obtain images on [a, b] for some constant C.

Thus images

So A images 0, and images for each t ∈ [a, b]. As images is continuous, we can conclude that images must be either everywhere equal to images, or everywhere equal to −images. In either case, images is constant, and so x is given by x(t) = αt + β, t ∈ [a, b]. Since x(a) = xa and x(b) = xb, we have

images

and images for all t ∈ [a, b].

We will now show that this x is a maximiser of x images L(γx) : SR, that is, it is a minimiser of x imagesL(γx). Note that the map images is convex because

images

Hence x imagesL(γx) : SR is convex too, and this proves our claim.

Solution to Exercise 3.16, page 142

We have images Thus

images

So the Euler-Lagrange equations are

images

that is,

images

Solution to Exercise 3.17, page 143

(1)With images we have that

images

We have images

So the Euler-Lagrange equation is

images

We have

images

Similarly images

Thus the Euler-Lagrange equation becomes (using uxy = uyx)

images

If u = Ax + By + C, then uxx = 0, uxy = 0 and uyy = 0, so that all the three

summands on the left-hand side of the Euler-Lagrange equation vanish, and so we see that the Euler-Lagrange equation is satisfied.

If u = tan–1 (y/x), then we have

images

Thus uxx = images, uxy = uyx = images, and uyy = images.

Hence

images

With s := images and t = tan–1(y/x) = u, we have tan t = images, and so

images

Thus x = images · cos t = s · cos t. Then

images

Vice versa, if x = s · cos t, y = s · sin t and u = t, then

images

and so s = images. Also images = tan t, and so u = tan–1(y/x) = t.

Using the Maple command given in the exercise we obtain the following:

images

(2)If L(X1, X2, U, V1, V2) := images, then I(u) = images.

We have images.

So the Euler-Lagrange equation is:

images

Thus u satisfies the wave equation images = 0.

We can check by direct differentiation that the given u, expressed in terms of f, satisfies the wave equation. We have

images

Differentiating again with respect to t, we obtain

images

Similarly, by differentiating u with respect to x we obtain

images

Differentiating again with respect to x, we obtain

images

It follows from (∗) and (∗∗) that images = 0.

Let us check that the boundary conditions are satisfied.

Note that u(0, t) = images = 0 since f is odd.

Now we would like to check u(1, t) = 0 too.

Using the oddness and 2-periodicity of f, we have

images

So u(1, t) = images = 0.

Finally, we check that the initial conditions are satisfied.

We have u(x, 0) = images = f(x) for all x.

Also, from our previous calculation, we have

images

for all x.

For a fixed t, the graph of f(· –t) is just a shifted version of the graph of f by t units to the right. As t increases, the graph travels to the right, representing a travelling wave, moving to the right with a speed 1. Similarly the graph of f(·+t) with increasing t represents a travelling wave moving to the left with speed 1. The solution of the wave equation is an average of these two travelling waves moving in opposite directions, and the shape of the wave is determined by the initial shape of the string.
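
A small numerical sketch of this travelling-wave picture (the initial shape f below, f(x) = sin(πx), is a hypothetical choice: it is odd and 2-periodic, as required):

```python
import numpy as np

def f(x):
    # Hypothetical initial shape: odd and 2-periodic, as required.
    return np.sin(np.pi * x)

def u(x, t):
    # d'Alembert's formula: the average of a right- and a left-moving travelling wave.
    return 0.5 * (f(x - t) + f(x + t))

x = np.linspace(0.0, 1.0, 5)
print(u(x, 0.0))                  # equals f(x): the initial condition
print(u(0.0, 0.3), u(1.0, 0.3))   # both ~ 0: the boundary conditions, by oddness and 2-periodicity
```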

Solution to Exercise 3.18, page 153

We have (suppressing the argument (q, p) everywhere)

images

Also,

images

Finally, we will prove the Jacobi Identity. In order to simplify the notation, we will use subscripts to denote partial derivatives; for example, Fp will mean ∂F/∂p. First we note that

images

Similarly, by making cyclic substitutions FGH above, we obtain

images

Thanks to the symmetry of the left-hand side of the expression in Jacobi’s Identity in F, G, H, it is enough to show that after collecting all the Fq, Fp terms, their overall coefficients are zero.

The overall coefficient of Fq is

images

Since Gpq = Gqp and Hpq = Hqp, we see that the above expression is 0.

The overall coefficient of Fp is

images

This completes the proof of the Jacobi Identity.

Solution to Exercise 3.19, page 153

We have {Q, P} = (∂Q/∂q)(∂P/∂p) − (∂Q/∂p)(∂P/∂q) = 1 · 1 − 0 · 0 = 1.

Solutions to the exercises from Chapter 4

Solution to Exercise 4.1, page 162

With x := 1 = (t ↦ 1) and y := (t ↦ t), we have 2||x||² + 2||y||² = 2 · 1² + 2 · 1² = 4, while ||x + y||² + ||x − y||² = ||1 + t||² + ||1 − t||² = 2² + 1² = 5. So ||·|| does not obey the Parallelogram Law, and hence ||·|| cannot be a norm induced by some inner product on C[0, 1].
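
The computation can also be checked numerically by approximating the sup norm on a fine grid (the grid size below is an arbitrary choice):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
x = np.ones_like(t)   # x(t) = 1
y = t                 # y(t) = t
sup = lambda f: np.max(np.abs(f))   # grid approximation of the sup norm on C[0, 1]

print(sup(x + y)**2 + sup(x - y)**2)   # ||x+y||^2 + ||x-y||^2 = 4 + 1 = 5
print(2*sup(x)**2 + 2*sup(y)**2)       # 2||x||^2 + 2||y||^2 = 4, so the Parallelogram Law fails
```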

Solution to Exercise 4.2, page 162

Let x, y, zX. Then

images

Adding these, we obtain

images

Geometric interpretation in R2: If x, y, z are the vertices of a triangle ABC, then images is the length of the median AD (see the picture).

The Apollonius Identity gives AB² + AC² = ½BC² + 2AD².

images

Solution to Exercise 4.3, page 162

Let images > 0. Let N1N be such that for all n > N1, ||xnx|| < images.

Let N2N be such that for all n > N2, ||yny|| < images, where the number M := images ||xn|| < ∞ (this exists since (xn)nN, being convergent, is bounded).

Consequently, for all n > N := max{N1, N2},

images

Hence (〈xn, yn〉)nN is convergent in K, with limit 〈x, y〉.

Solution to Exercise 4.4, page 162

If the ellipse has major and minor axis lengths 2a and 2b, respectively, then observe that the perimeter is given by

images

where the last expression is obtained by rotating the ellipse through 90°, obtaining a new ellipse with the same perimeter.

images

Using Cauchy-Schwarz Inequality we obtain

images

Thus P ≥ 2π√(ab). Since the areas of the circle and the ellipse are equal, it follows that πr² = πab, where r denotes the radius of the circle. Hence r = √(ab). So we have P ≥ 2π√(ab) = 2πr, that is, the perimeter P of the ellipse is at least as large as the circumference of the circle.
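
A numerical sanity check (with hypothetical semi-axes a = 2, b = 1): the ellipse's perimeter, computed by quadrature, indeed exceeds the circumference 2π√(ab) of the circle of equal area.

```python
import numpy as np

a, b = 2.0, 1.0   # hypothetical semi-axes of the ellipse
theta = np.linspace(0.0, 2*np.pi, 200001)
# Perimeter P = int_0^{2 pi} sqrt(a^2 sin^2(theta) + b^2 cos^2(theta)) d(theta)
P = np.trapz(np.sqrt(a**2*np.sin(theta)**2 + b**2*np.cos(theta)**2), theta)
r = np.sqrt(a*b)  # radius of the circle with the same area pi*a*b
print(P, 2*np.pi*r)   # P ~ 9.688 exceeds 2*pi*sqrt(2) ~ 8.886
```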

Solution to Exercise 4.5, page 163

(IP1)If A ∈ Rm×n, then 〈A, A〉 = tr(AᵀA) = Σk,i aki aki = Σk,i aki² ≥ 0.
If A ∈ Rm×n and 〈A, A〉 = 0, then Σk,i aki² = 0, and so for all
k ∈ {1, ···, m} and all i ∈ {1, ···, n}, aki = 0, that is, A = 0.

(IP2)For all A1, A2, BRm×n,

images

For all A, BRm×n and αR,

images

(IP3)For all A, BRm×n,

images

This is a Hilbert space, since finite-dimensional normed spaces are complete.

Solution to Exercise 4.6, page 163

Let x, yX. Then

images

Also,

images

From (∗) and (∗∗) it follows that for all x, yX, 〈Tx, Ty〉 = 0.

In particular, with y = Tx, we get 〈Tx, Tx〉 = 0, that is, ||Tx||2 = 0.

Hence for all xX, Tx = 0, that is, T = 0.

We have 〈Tx, x〉 = images = –x2x1 + x1x2 = 0, for all x = imagesR2.

There is no contradiction to the previous part since the vector space R2 is a vector space over the real scalars.

Solution to Exercise 4.7, page 163

R is an equivalence relation on C:

(ER1)If x = (xn)nNC, then images ||xnxn||X = images 0 = 0, and so (x, x) ∈ R.

(ER2)If x = (xn)nN, y = (yn)nNC, and (x, y) ∈ R, then images ||xnyn||X = 0.
So images ||ynxn||X = images |–1| ||xnyn||X = images ||xnyn||X = 0.
Hence (y, x) ∈ R.

(ER3)Let x = (xn)nN, y = (yn)nN, z = (zn)nN be in C, such that (x, y) ∈ R and (y, z) ∈ R. Then images ||xnyn||X = 0 and images ||ynzn||X = 0.
As 0 images ||xnzn||X images ||xnyn||X + ||ynzn||X , we get images ||xnzn||X = 0.
So (x, z) ∈ R.

Consequently, R is an equivalence relation on C.

images is well-defined:

If [(xn)nN] = [(x′n)nN] and [(yn)nN] = [(y′n)nN], then we wish to show that [(xn + yn)nN] = [(x′n + y′n)nN]. We have that (xn + yn)nNC, since (xn)nN, (yn)nNC and ||xn + yn – (xm + ym)||X images ||xnxm||X + ||ynym||X.

Similarly, (x′n + y′n)nNC.

Furthermore, 0 images ||(xn + yn) – (x′n + y′n)||X + ||xn + x′n||X + ||yn + y′n)||X, and so

images

that is, ((xn + yn)n∈N, (x′n + y′n)n∈N) ∈ R. So [(xn + yn)n∈N] = [(x′n + y′n)n∈N].

images is well-defined:

Let α ∈ K and [(xn)n∈N] = [(x′n)n∈N]. Since ||αxn − αxm||X = |α| ||xn − xm||X, clearly (αxn)n∈N ∈ C. Similarly, (αx′n)n∈N ∈ C. We have

images

and so ((αxn)n∈N, (αx′n)n∈N) ∈ R. So [(αxn)n∈N] = [(αx′n)n∈N].

images is well-defined:

Since Cauchy sequences are bounded, given (xn)nN, (yn)nN in C, we have that Mx := images ||xn||X < ∞ and My := images ||yn||X < ∞.

Let N be large enough so that if m, n > N, then

images

Thus for m, n > N,

images

So (〈xn, ynX)nN is a Cauchy sequence in K, and as K (= R or C) is complete, it follows that imagesxn, ynX exists.

Now suppose that [(xn)nN] = [(x′n)nN] and [(yn)nN] = [(y′n)nN].

Given images > 0, let N be such that for all n > N,

images

where Mx = images ||xn||X < ∞. For n > N, we have

images

Passing the limit as n → ∞, we obtain images

〈·, ·〉 defines an inner product on X:

(IP1)If images, then images.

Let images be such that images = 0.

Then imagesxn, xnX = images ||xn||2X = 0.

(0)nNC and images ||xn0||X = images ||xn||X = 0 (using the above).

Thus [(xn)nN] = [(0)nN].

(IP2)For all x1, x2, yX,

images

For all αK and x, yX, we have

images

(IP3)For all x, yX, 〈x, yX = images.

images

ι is a linear transformation:

images

ι is injective:

If ι(x) = [(x)nN] = [(0)nN], then ||x|| = images ||x0|| = 0, and so x = 0.

ι preserves inner products: For x, yX, 〈ι(x), ι(y)〉X = imagesx, yX = 〈x, yX.

Solution to Exercise 4.8, page 168

As span{v1} = span{x1} = span{u1}, it follows that v1 = α1u1.

Thus 1 = ||v1|| = |α1|||u1|| = |α1| · 1 = |α1|.

For n > 1, vn ∈ span{v1, ···, vn} = span{x1, ···, xn} = span{u1, ···, un}.

So there are scalars β1, ···, βn–1, αn such that vn = β1u1 + ··· + βn–1un–1 + αnun. We also know that for all k < n, 〈vn, vk〉 = 0. So it follows that 〈vn, v〉 = 0 for all v ∈ span{v1, ···, vn–1} = span{x1, ···, xn–1} = span{u1, ···, un–1}. Thus 〈vn, uk〉 = 0 for all k < n. This gives β1 = ··· = βn–1 = 0, and vn = αnun. Moreover, 1 = ||vn|| = |αn| ||un|| = |αn| · 1 = |αn|.
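
For concreteness, here is a minimal Gram–Schmidt sketch in R^m (a finite-dimensional stand-in; the vectors below are hypothetical). Rerunning it on the same x1, ···, xn can only change the output by unimodular scalars — in the real case, by signs — which is exactly the statement above.

```python
import numpy as np

def gram_schmidt(X):
    # Columns of X are x1, ..., xn; returns orthonormal u1, ..., un with
    # span{u1, ..., uk} = span{x1, ..., xk} for every k.
    U = []
    for x in X.T:
        v = x - sum(np.dot(x, u) * u for u in U)
        U.append(v / np.linalg.norm(v))
    return np.column_stack(U)

X = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 2.0]])      # two hypothetical vectors x1, x2 in R^3
U = gram_schmidt(X)
print(np.round(U.T @ U, 10))    # the 2 x 2 identity: u1, u2 are orthonormal
```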

Solution to Exercise 4.9, page 171

Let us first note that the derivative of an even monomial t2k is odd, and that of an odd monomial t2k+1 is even. From here it follows that the derivative of a polynomial with only even monomials is a polynomial consisting of only odd monomials, while that of a polynomial with only odd monomials is a polynomial with only even monomials.

By the Binomial Theorem, we see that the polynomial (t2 – 1)n is the sum of even monomials of the form ckt2k, for suitable scalars ck, k = 0, ···, n.

So images (t2 – 1)n will be a polynomial p with:

(1) only even monomials if n is even,

(2) only odd monomials if n is odd.

In the former case, when n is even, p, being the sum of even functions will be even, while in the latter case, p, being the sum of odd functions, will be odd. Thus Pn is even when n is even, and odd if n is odd.

If n is odd, then each of the terms ckt2kn is an odd polynomial, and hence so is their sum. Consequently, Pn is odd if n is odd.

We have Pn(–1) = (–1)nPn(1) = (–1)n · 1 = (–1)n for all n images 0.

Solution to Exercise 4.10, page 171

With y(t) := (t2 – 1)n, we have y′(t) = n(t2 – 1)n–1 · 2t. So

images

By differentiating the left-hand side of (∗), we obtain

images

and by differentiating the right-hand side of (∗), we have

images

Equating the final expressions from the above calculations, we obtain

images

Multiplying by images, we get (1 − t²)Pn″(t) − 2tPn′(t) + n(n + 1)Pn(t) = 0.

Solution to Exercise 4.11, page 171

(t² − 1)ⁿ is zero at ±1. By Rolle's Theorem, it follows that (d/dt)(t² − 1)ⁿ is zero at some t(1) ∈ (−1, 1). But we had seen that (d/dt)(t² − 1)ⁿ is also zero at the end points ±1. So by Rolle's Theorem applied to the function (d/dt)(t² − 1)ⁿ on the two intervals [−1, t(1)] and [t(1), 1], we get the existence of points t1(2) ∈ (−1, t(1)) and t2(2) ∈ (t(1), 1), where (d/dt)²(t² − 1)ⁿ is zero. Proceeding in this manner, we get the existence of points t1(n), ···, tn(n) ∈ (−1, 1) where (d/dt)ⁿ(t² − 1)ⁿ vanishes. So Pn has at least n zeros in (−1, 1). But Pn has degree n, and hence it can have at most n zeros in C. This shows that all the zeros of Pn are real, and all of them lie in the open interval (−1, 1).
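
This can be checked numerically with NumPy's Legendre routines (the degree n = 7 below is an arbitrary choice):

```python
import numpy as np

n = 7                                    # arbitrary degree
coeffs = [0.0]*n + [1.0]                 # P_n expressed in the Legendre basis
roots = np.polynomial.legendre.legroots(coeffs)
print(roots)                             # n numbers, all real
print(np.all(np.abs(roots) < 1))         # True: every zero lies in (-1, 1)
```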

Solution to Exercise 4.12, page 171

The set {eij : 1 images i images m, 1 images j images n}, where eij is the matrix with 1 in the ith row and jth column, and all other entries 0, is a basis for Rm×n. To see that this basis is in fact orthonormal, observe that the map ι : Rm×nRmn given by A = [aij] images (a11, ···, a1n, a21, ···, a2n, ···, am1, ···, amn) (that is, lay out the rows of A next to each other in one long row), is an isomorphism that preserves inner products:

images

{ι(eij) : 1 images i images m, 1 images j images n} is orthonormal, and so it follows that the set {eij : 1 images i images m, 1 images j images n} is orthonormal as well.

Solution to Exercise 4.13, page 172

(1)We have H0 = ex2ex2 = 1. For n images 0,

images

Thus if Hn is a polynomial, then 2xHn and Hn′ are polynomials too, and so is Hn+1 = 2xHn − Hn′. Since H0 = 1 is a nonzero polynomial of degree 0, it follows by induction on n that each Hn, n ≥ 0, is a polynomial. Moreover, if Hn has degree d, and its leading term is cd · x^d, then Hn′ has degree d − 1, while 2xHn has degree d + 1 with the leading term 2cd · x^(d+1). Consequently, the recurrence relation together with H0 = 1 also reveals that Hn has the leading term 2^n x^n, and in particular has degree n.

Using the recursion relation, we get H1 = 2x, H2 = 4x2 – 2, H3 = 8x3 – 12x.
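
A sympy cross-check of this recurrence, assuming (as the values above suggest) that these are the physicists' Hermite polynomials:

```python
import sympy as sp

x = sp.symbols('x')
H = [sp.Integer(1)]                                      # H_0 = 1
for n in range(5):
    H.append(sp.expand(2*x*H[n] - sp.diff(H[n], x)))     # H_{n+1} = 2x H_n - H_n'

print(H[1], H[2], H[3])   # 2*x, 4*x**2 - 2, 8*x**3 - 12*x, as above
print(all(sp.expand(H[n] - sp.hermite(n, x)) == 0 for n in range(6)))   # True
```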

(2)Let m < n. Then we have

images

As (d/dx)n–1ex2 is a sum of terms of the form ckxkex2, and because Hm is a polynomial, it follows that the first summand in the right-hand side is 0.
So we have 〈φm, φn〉 = (–1)n+1 images.

We can continue this process of integration by parts, until we arrive at

images

But as Hm has degree m < n, (d/dx)n Hm = 0, so that 〈φm, φn〉 = 0.

The case m > n also follows from here, since the inner product is conjugate symmetric. Finally,

images

(The last equality can be justified as follows. With I := images, we have

images

So I = images

(3)For n images 0, we have

images

(4)First let us note that if n images 1, then we have

images

Hence for n images 1,

images

(5)We have for all φ

images

Hence for all n images 0,

images

(6)We have images and images.

From the previous part, we have (–(d/dx)2 + x2)φn = (2n + 1)φn, giving

images

We have

images

In Schrödinger’s equation, a2 = images, and so images = a(2n + 1).

So En = images, for n images 0.

Solution to Exercise 4.14, page 172

Since images diverges, images does not converge absolutely.

If sn is the nth partial sum of images, then for n > m, we have

images

and this can be made as small as we please since images.

Hence (sn)nN is Cauchy in H, and since H is a Hilbert space, it converges.

Solution to Exercise 4.15, page 172

For all NN, we have

images

Thus images, and as N was arbitrary, images.

Solution to Exercise 4.16, page 173

Let y ∈ Y ∩ Y⊥. As y ∈ Y⊥, we know that for all y′ ∈ Y, 〈y, y′〉 = 0. Taking y′ := y ∈ Y, we obtain 0 = 〈y, y′〉 = 〈y, y〉 = ||y||², and so ||y|| = 0, giving y = 0. So Y ∩ Y⊥ ⊂ {0}. Also, since Y, Y⊥ are subspaces, it follows that each contains the zero vector 0. So Y ∩ Y⊥ = {0}.

Solution to Exercise 4.17, page 173

(1)If y ∈ Y, then for each x ∈ Y⊥, 〈y, x〉 = 0 (being the conjugate of 〈x, y〉 = 0), and so y ∈ (Y⊥)⊥. Thus Y ⊂ (Y⊥)⊥.

(2)Let x ∈ Z⊥. Then 〈x, z〉 = 0 for all z ∈ Z. As Y ⊂ Z, we also have 〈x, y〉 = 0 in particular for all y ∈ Y. Hence x ∈ Y⊥. This shows that Z⊥ ⊂ Y⊥.

(3)As Y ⊂ cl(Y) (the closure of Y), it follows from part (2) that (cl(Y))⊥ ⊂ Y⊥.

Now let x ∈ Y⊥. Then 〈x, y〉 = 0 for all y ∈ Y.

If y′ ∈ cl(Y), then there exists a sequence (yn)n∈N in Y such that lim n→∞ yn = y′. Thus 〈x, y′〉 = lim n→∞ 〈x, yn〉 = 0.

Hence x ∈ (cl(Y))⊥, showing that Y⊥ ⊂ (cl(Y))⊥ as well.

(4)Suppose that x ∈ Y⊥.

As Y is dense in X, there is a sequence (yn)n∈N in Y converging to x in X. Thus 〈x, x〉 = lim n→∞ 〈x, yn〉 = lim n→∞ 0 = 0. So x = 0, and hence Y⊥ = {0}.

(5)Suppose x = (xn)n∈N ∈ (Yeven)⊥. Since e2n ∈ Yeven for each n ∈ N, x2n = 〈x, e2n〉 = 0. Hence (Yeven)⊥ ⊂ Yodd, where Yodd denotes the subspace of ℓ2 of all sequences whose evenly indexed terms are 0.

Vice versa, if x ∈ Yodd, it is clear that for all y ∈ Yeven, 〈x, y〉 = 0. Thus Yodd ⊂ (Yeven)⊥.

Consequently, (Yeven)⊥ = Yodd.

Similarly, (Yodd)⊥ = Yeven. And so, ((Yeven)⊥)⊥ = (Yodd)⊥ = Yeven.

(6)We know that c00 is dense in 2. (Just truncate the series to the desired accuracy to get a finitely supported approximation!)

So images. But then images.

Solution to Exercise 4.18, page 176

Let images.

Then E(m, b) = images.

Thus the problem of finding the least square regression line is:

images

It follows from Theorem 4.5, page 174, that a minimiser Y is given by

images

where {U1, U2} is any orthonormal basis for the subspace Y := span{Y1, Y2} of Rn with the usual Euclidean inner product. By the Gram-Schmidt Orthonormalisation Procedure, U1 = images, and U2 = images.

images

images

For the given data, using the above formulae, we obtain m = –0.3184 million tonnes coal per °C, and b = 10.4667 million tonnes of coal. The y-intercept is b = 10.4667 million tonnes of coal, and this is the inland energy consumption when the mean temperature is 0°C (that is when it is freezing!). The x-intercept is 10.4667/0.3184 = 32.8728, which is the mean temperature when the inland consumption is 0 (that is, no heating required). The slope is m = –0.3184 million tonnes of coal per °C. Thus for each °C drop in temperature, the inland energy consumption increases by 0.3184 million tonnes of coal. Finally, the forecast of the energy consumption for a month with mean temperature 9°C is given by y = mx + b = (–0.3184)(9) + 10.4667 = 7.6011 million tonnes of coal.
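
A short numerical sketch of the same least squares computation; the (temperature, consumption) pairs below are made-up illustration data, not the table from the exercise:

```python
import numpy as np

# Hypothetical (mean temperature, consumption) pairs -- illustration only.
X = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])
Y = np.array([10.2, 9.6, 8.4, 7.8, 6.5, 5.9])

# Least squares line y = m*x + b: project Y orthogonally onto span{Y1, Y2},
# where Y1 = X and Y2 = (1, ..., 1).
A = np.column_stack([X, np.ones_like(X)])
(m, b), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(m, b)

# Cross-check via the normal equations A^T A [m, b]^T = A^T Y.
print(np.linalg.solve(A.T @ A, A.T @ Y))
```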

Solution to Exercise 4.19, page 179

Let C := L2+(R). Then C is convex. Thus C is convex too. We will show that g := max{f, 0} ∈ L2+(R) = CC satisfies: for all gC, 〈fg, ggimages 0.

We have f = max{f, 0} + min{f, 0}. So fg = min{f, 0}. Also,

images

Hence we obtain for all gC that

images

So for all gC, ||fg|| images ||fg||. In particular, for all gL2+(R) = CC, we also have ||fg|| images ||fg||.

Solution to Exercise 4.20, page 182

We’d seen in Exercise 4.17, page 173, that images. So images, where the last equality follows from Corollary 4.1, page 182, since Y is closed.

Solution to Exercise 4.21, page 182

For all fL2(R), it is easy to check that fe := (f + images)/2 is even, and fo := (fimages)/2 is odd. Thus for all gY, we have

images

Thus, by Theorem 4.7, page 180, PYf = fe for all fL2(R).

By Theorem 4.8, page 180, we have

images

PY⊥ = I − PY, and so for all f ∈ L2(R), PY⊥f = f − fe = fo.

We have f = If = PYf + PY⊥f = fe + fo.

Moreover, by Theorem 4.8, this decomposition is unique.

Solution to Exercise 4.22, page 182

Y = ker(I − S), and so Y is a closed subspace of H.

For all xH, images.

So image for all xH. Moreover, for all yY, we have

images

Thus, by Theorem 4.7, page 180, PYx = image for all xH.

By Theorem 4.8, page 180, we have

images

Thus Z = (Y) = Y.

PY = IPY, and so for all xH, PYx = ximage
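
A finite-dimensional sketch of the same mechanism (assuming, as seems to be the setting here, that S is self-adjoint with S² = I; the coordinate swap below is a hypothetical such S): P := (I + S)/2 is idempotent and self-adjoint, and projects onto Y = ker(I − S).

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # hypothetical S: symmetric and S @ S = I (swaps coordinates)
I2 = np.eye(2)
P = (I2 + S) / 2                 # candidate orthogonal projection onto Y = ker(I - S)

print(np.allclose(P @ P, P), np.allclose(P, P.T))   # idempotent and self-adjoint
x = np.array([3.0, 1.0])
print(P @ x)                                        # [2., 2.]: the S-fixed vector closest to x
print(np.allclose(S @ (P @ x), P @ x))              # True: P x lies in ker(I - S)
```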

Solution to Exercise 4.23, page 182

Consider the map image, where image is the indicator function of image.

As images < ∞, MfL2(R).

It is also easy to see that M is linear. The above inequality then establishes that MCL(L2(R)). We have

images

Thus YA is closed.

For fL2(R), 1AfYA, and moreover, for any gYA,

images

Thus PAf = 1Af for all fL2(R).

Solution to Exercise 4.24, page 182

Suppose that D⊥ = {0}. Then cl(D) = (D⊥)⊥ = {0}⊥ = H. So D is dense in H.

Now suppose that D is dense in H. Then cl(D) = H. Thus D⊥ = (cl(D))⊥ = H⊥ = {0}.

Solution to Exercise 4.25, page 184

Let xC[–1, 1] and image > 0. By Weierstrass’s Approximation Theorem (Exercise 1.26, page 22), there is a polynomial pC[–1, 1] such that ||xp|| < image.

Then image

Hence ||xp||2 < image. Consequently the polynomials are dense in C[–1, 1] (with the usual inner product).

Solution to Exercise 4.26, page 185

images

Moreover ι is continuous because ||ι(x)||2 = image for all xH.

If x ∈ H is such that ι(x) = 0, then ||x|| = ||ι(x)|| = 0, and so x = 0.

Hence ι is injective.

If (cn)nN2, then x := image cnunH, and for all kN,

images

So ι(x) = (cn)nN, showing that ι is surjective too.

As ιCL(H, 2) is a bijection, it has a continuous inverse ι–1CL(2, H) (by Corollary 2.4 on page 96). Moreover, ||ι(x)|| = ||x|| for all xH, and so ι is an isometry.

Solution to Exercise 4.27, page 187

Let xC[0, 1] be the function t images t.

For n ≠ 0, we have 〈x, Tn〉 = image, using integration by parts.

Also 〈x, T0〉 = 1/2. By Parseval’s Identity,

images

which yields image

Solution to Exercise 4.28, page 187

Let [(xn)nN] ∈ X. Consider the sequence (xn)nN in X. Since (xn)nNC, given any image > 0, there exists an NN such that for all m, n > N, ||xnxm|| < image. Consequently, for all m > N, ||ι(xm) – [(xn)nN]||X = image ||xmxn|| images image.

Hence image ι(xn) = [(xn)nN].

Solution to Exercise 4.29, page 187

We have

images

with equality if and only if image.

Thus the curve enclosing the maximum area is given by

images

with image.

Let α ∈ [0, 2π) be such that cos α = image and sin α = image. Then

images

Hence (x(s) – a0)2 + (y(s) – c0)2 = image.

Consequently, s images (x(s), y(s)) : [0, L] → R2 is the parametric representation of a circle with centre at (a0, c0) ∈ R2 and radius equal to image.

Solution to Exercise 4.30, page 188

(1)Call un the nth vector in the list. If {un : nN} were an orthonormal basis, then

images

a contradiction. So the given set is not an orthonormal basis.

(2)Let us call the evenly indexed vectors vn, and the oddly indexed ones wn. Then clearly 〈vi, vj〉 = 〈wi, wj〉 = 〈vi, wj〉 = 0 whenever i ≠ j, since there are no overlapping nonzero terms. Also 〈vi, wi〉 = 0.

Finally ||vi|| = ||wi|| = 1. This shows that the given set B is orthonormal. In order to show density, we note that image and image. Thus span B = span{en : nN}, and the latter is dense in 2.

Solution to Exercise 4.31, page 188

If X is a real vector space, then let KQ := Q, while if X is a complex vector space, then let KQ := Q + iQ. Set

images

Then D is countable. Let xX, and image > 0.

Then there exists an N such that image

Let cnKQ, n = 1, ···, N, be such that image.

Then with y := image cnunB, we have

images

Thus X is separable.

Solution to Exercise 4.32, page 188

We have for λμ that

images

On the other hand, ||eiλx||2 = 1. Thus

images

Hence image

Suppose now that X is separable, with a dense subset D = {d1, d2, d3, ···}. Then for each λR, there exists a dλD such that ||eiλxdλ|| < 1/√2.

This gives us the existence6 of a map λ image dλ : RD.

This map is injective since if λμ, then

image

giving ||dλ − dμ|| > 0, and in particular dλ ≠ dμ.

But this is absurd, since R is uncountable, while D is countable!

So image is not separable.

Solution to Exercise 4.33, page 189

For nN, set image

If Un has more than n − 1 elements, then for any distinct ui1, · · · , uinUn,

image

(where the former inequality is by virtue of the fact that the uik ’s belong to Un, and the latter is Bessel’s Inequality). So we obtain ||x||2 < ||x||2, which is absurd. Thus Un has at most n − 1 elements. Hence each Un is finite. But

image

and as each Un is finite, their union U is at most countable.

Consequently, 〈x, ui〉 is nonzero for at most a countable number of the ui ’s.

Solution to Exercise 4.34, page 190

(1)We have for all xH that |φy(x)| = |〈x, y〉| image ||x|| ||y||, and so ||φy|| image ||y||.

If y = 0, then ||φy|| image ||y|| = 0, and so ||φy|| = 0 = ||y||.

If y ≠ 0, then define z = image, and observe that ||z|| = 1, so that

image

Hence it follows that ||φy|| = ||y||.

(2)Let yH\{0}. Then for xH,

image

and so φiy = −iφy.

Also ||φy|| = ||y|| ≠ 0, so that φy ≠ 0, the zero linear functional.

If the map η ↦ φη : H → CL(H, C) were linear, then in particular, we would have φiy = iφy, and from the above, we would then get iφy = −iφy, giving φy = 0, which is absurd.

Solution to Exercise 4.35, page 195

We will show that Y := ran P = ker(IP), and since the kernel of the continuous linear transformation IP is closed, it follows that Y is closed.

That ran P = ker(IP): If y ∈ ran P, then y = Px for some xH. Then

image

So y ∈ ker(IP). Hence ran P ⊂ ker(IP).

On the other hand, if y ∈ ker(IP), then (IP)y = 0 and so y = Py ∈ ran P. Thus ker(IP) ⊂ ran P as well.

It remains to show that P = PY. We will use (ran P)⊥ = ker(P*) = ker P, where the last equality follows thanks to the self-adjointness of P. Let x ∈ H. Then x = PY x + PY⊥x. But PY⊥x ∈ Y⊥ = ker P, and so

image

As PY xY = ran P, PY x = Px1 for some x1H.

Thus P (PY x) = P (Px1) = P2 x1 = Px1 = PY x. Hence Px = P (PY x) = PY x.

Solution to Exercise 4.36, page 195

image

and so T1 is self-adjoint, while T2 is skew-adjoint.

Moreover, image

In order to show uniqueness, suppose that T1′, T2′ are self-adjoint and skew-adjoint respectively, and such that T = T1′ + T2′. Then T1 + T2 = T1′ + T2′, and so we obtain T1 − T1′ = T2′ − T2. As the left-hand side is self-adjoint, and the right-hand side is skew-adjoint, both sides must be zero. (Indeed, if S := T1 − T1′ = T2′ − T2 is the common value, then S = S* = −S, and so 2S = 0, that is, S = 0.)
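
The matrix analogue of this decomposition, with the natural candidates T1 = (T + T*)/2 and T2 = (T − T*)/2 (the matrix T below is arbitrary illustration data):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [5.0, 3.0]])   # arbitrary illustration matrix
T1 = (T + T.T) / 2           # self-adjoint (symmetric) part
T2 = (T - T.T) / 2           # skew-adjoint (antisymmetric) part

print(np.allclose(T1, T1.T), np.allclose(T2, -T2.T), np.allclose(T, T1 + T2))
# prints: True True True
```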

Solution to Exercise 4.37, page 195

image. Define T : 22 by Tk = image.

Then T is well-defined and TCL(2). We will show that Λ = T.

For all h = (hn)nN and k = (kn)nN in 2, we have

image

Thus Λ = T.

Solution to Exercise 4.38, page 195

We’ll show that I is given by image

ICL(L2 [0, 1)] by Example 2.10 (page 70), with image

For h, kL2 [0, 1], we have

image

and so ICL(L2 [0, 1]) is given by image

Solution to Exercise 4.39, page 195

T*A = TA, where image

Thus TA is clockwise rotation through an angle θ in the plane.

Solution to Exercise 4.40, page 196

For xH, we have image

(Note that x′ ∈ Yn because 〈x′, ui〉 = 0 for all i = 1, · · · ,n.)

So image for all xH. For all xH, we have

image

since image

Solution to Exercise 4.41, page 196

(1)If B′ = {un: nN} is another orthonormal basis, then

image

On the other hand, we also have

image

and so image

(2)We will verify simultaneously the norm and subspace axioms:

(N1/S3) For all TS2 (H) that image

Now let TS2 (H) and ||T||HS = 0. Then image

So Tun = 0 for all n. But then for all xH, we have

image

Consequently T = 0.

Clearly 0S2 (H) since ||0||HS = 0 < ∞.

(N2/S2) For all TS2 (H) and αK, we have

image

and so ||α · T||HS = |α| ||T||HS.

Note that we’ve also shown for all TS2 (H), αK, that α · TS2 (H).

(N3/S1) Finally, if T1, T2S2 (H), then we have

image

and so ||T1 + T2||HS image ||T1||HS + ||T2||HS.

Also, this shows that for all T1, T2S2 (H), T1 + T2S2(H).

(3)We have for all xH that

image

and so ||T || = ||T|| image ||T||HS.

Solution to Exercise 4.42, page 197

As CL(H) is an algebra, Λ(T) ∈ CL(H). We verify linearity:

(L1) For T1, T2CL(H),

image

(L2) Λ(αT) = A (αT)+(αT)A = α(A T + T A) = αΛT, TCL(H), αK.

Continuity: For TCL(H),

image

and so Λ ∈ CL(CL(H)).

If TCL(H) is such that T = T, then

image

So Λ(T) is self-adjoint.

Solution to Exercise 4.43, page 197

Let (Tn)n∈N be a sequence of self-adjoint operators in CL(H) that converges to T ∈ CL(H). We'd like to show that for all x, y ∈ H, 〈Tx, y〉 = 〈x, Ty〉. As we have ||Tnx − Tx|| ≤ ||Tn − T|| ||x||, it follows that (Tnx)n∈N converges to Tx, and similarly, (Tny)n∈N converges to Ty. Thus

image

Solution to Exercise 4.44, page 197

Let μρ(T). Then there is an SCL(H) such that S(μIT) = I = (μIT)S. Taking adjoints, we obtain

image

Thus μ IT is invertible in CL(H), and so μρ(T).

So we have proved that image for all TCL(H).

Applying this to T instead of T gives:

image

Consequently, for all T ∈ CL(H), μ ∈ ρ(T) if and only if the complex conjugate of μ belongs to ρ(T*).

We had seen that R = L and that σ(L) = {zC : |z| image 1}.

From the above, we obtain σ(R) = C\ρ(R) = C\(ρ(L)) = C\ρ(L) = σ(L).

Consequently the spectrum of R is the same as that of L, namely the closed unit disc {z ∈ C : |z| ≤ 1} in the complex plane.

Solution to Exercise 4.45, page 197

We have for λ ∉ {0, 1},

image

and similarly image

The previous part shows that σ(PY) ⊂ {0, 1}.

We now show that both 0 and 1 are eigenvalues, so that σ(PY) = σp(PY) = {0, 1}.

As Y is a proper subspace, Y ≠ {0}. So there exist nonzero vectors y in Y, and all of these are eigenvectors of PY with eigenvalue 1: PY y = y = 1 · y.

Also, as Y is a proper subspace, Y ≠ H.

If Y⊥ = {0}, then we have that Y = (Y⊥)⊥ = {0}⊥ = H, a contradiction.

Thus Y⊥ ≠ {0}. But this means that there exist nonzero vectors x in Y⊥.

All of these are eigenvectors of PY with eigenvalue 0, since PY x = 0 = 0 · x.

Solution to Exercise 4.46, page 197

Let λ ∈ σp(U) with eigenvector v ≠ 0.

Then Uv = λv, and so |λ|² ||v||² = 〈λv, λv〉 = 〈Uv, Uv〉 = 〈U*Uv, v〉 = 〈Iv, v〉 = ||v||².

Thus |λ| = 1, that is, λ lies on the unit circle with centre 0 in the complex plane.

If v1, v2H\{0} are eigenvectors of U corresponding to distinct eigenvalues λ1, λ2, then we have

image

and so 〈v1, v2〉 = 0.

Solution to Exercise 4.47, page 199

The spectrum of T is real, and hence T + iI is invertible in CL(H). Since (T + iI)(TiI) = T2 + I = (TiI)(T + iI), it follows by pre- and post-multiplying with (T + iI)−1 that (TiI)(T + iI)−1 = (T + iI)−1(TiI) =: U. Hence we have

image

So

image

Thus U is unitary. We have

image

Hence IU is invertible in CL(H) with inverse image. Similarly,

image

So image

Solution to Exercise 4.48, page 199

(1)Suppose that PY image PZ. If yY, then

image

So PZ⊥y = 0, giving y = PZ y + PZ⊥y = PZ y + 0 = PZ y ∈ Z. Thus Y ⊂ Z.

(2)Now let Y ⊂ Z and x ∈ H. We have PZ x = PY x + (PZ x − PY x).

We first show that PZ x − PY x is perpendicular to PY x.

As x = PY x + PY⊥x = PZ x + PZ⊥x, we have PZ x − PY x = PY⊥x − PZ⊥x.

So 〈PY x, PZ xPY x〉 = 〈PY x, PYxPZx〉 = image = 0.

Hence

image

Consequently, PY image PZ.

Solution to Exercise 4.49, page 204

(1)By the Fundamental Theorem of Calculus, image

So image.

As f (x) image 0 for all x, we must have that L image 0. Suppose that L > 0.

Then there exists an R > 0 such that for all x > R, Lf (x) image |f (x) − L| < image, and in particular, f (x) > image for all x > R. Hence for all x > R,

image

which is absurd. Hence L = 0.

(2)We apply part (1) with f (x) := |Ψ(x)|2.

We note that f′ = (|Ψ|2)′ = (ΨΨ*)′ = Ψ′Ψ* + Ψ(Ψ′)*, and so |f′| image 2||Ψ|| ||Ψ′||.

Thus image

So image

To show that image, we apply the above to x image Ψ(−x), and note that if Ψ, Ψ′ ∈ L2 (R), then so do Ψ(−·), (Ψ(−·))′ = −Ψ′(−·).

Solution to Exercise 4.50, page 205

We have for self-adjoint A, B that

[A, B]* = (AB − BA)* = B*A* − A*B* = BA − AB = −(AB − BA) = −[A, B].

Solution to Exercise 4.51, page 205

We have

image

Similarly, Hence

image

Hence

image

Solution to Exercise 4.52, page 205

If n = 1, then [Q, P] = −[P, Q] = −(−iℏI) = iℏ · 1 · Q^(1−1), and so the claim is true. If [Qn, P] = iℏnQ^(n−1) for some n ∈ N, then we have

image

and so the claim follows for all nN by induction.

Solution to Exercise 4.53, page 207

We have in the classical case that

image

Thus {Q2, P2} = 4QP.

In the quantum mechanical case, we have, using Exercise 4.52, page 205, that

image

Thus image (since otherwise QP = PQ, which is false since [Q, P] = iimageI0).

QP is not self-adjoint, since if it were self-adjoint, then for all compactly supported Ψ and Φ, we would have

image

which would give iℏΦ = [Q, P]Φ = 0, which is clearly false for nonzero Φ! On the other hand, for all Ψ and Φ, we have

image

Solution to Exercise 4.54, page 207

We have

image

and so t ↦ ||ψ(t)||² is constant, giving ||ψ(t)||² = ||ψ(0)||² = 1.

Solution to Exercise 4.55, page 207

As V ≡ 0 for x ∈ (0, π), we have image, that is, image.

Depending on the sign of E, the solution is given by

image

If E = 0, then the conditions X(0) = X(π) = 0 give A = B = 0. So X ≡ 0.

If E < 0, then the conditions X(0) = X(π) = 0 imply that A = B = 0, so that X ≡ 0.

So only the case E > 0 remains. The condition X(0) = 0 gives A = 0.

The condition X(π) = 0 implies B sin image

As we want nontrivial solutions, we know B ≠ 0 (otherwise X ≡ 0).

So sin image, giving image

Thus image (discrete/“quantised” energy levels!).

We have |Ψ(x, t)| = |X(x)||T (t)| = |X(x)| · |C| = |C| · |B| · | sin(nx)|.

The plots of |Ψ|² = constant · (sin(nx))² when n = 1, 2 are shown below.

image

When n = 1, the probability is

image

When n = 2, the probability is

image

Solutions to the exercises from Chapter 5

Solution to Exercise 5.1, page 215

(1)Tm is linear:

(L1) For all x1, x2H,

image

(L2) For all xH and αK,

image

So Tm is a linear transformation. Next we prove continuity: for all xH,

image

Conclusion: TmCL(H).

For xH we have

image

(2)As image, we have image

Thus (Tm)mN converges to T in CL(H). Since the range of Tm is contained in the span of Tu1, · · ·, Tum, it follows that Tm has finite rank, and so Tm is compact. As T is the limit in CL(H) of a sequence of compact operators, it follows that T is compact.
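
A finite-dimensional illustration of this truncation argument (the eigenvalue sequence λn = 1/n below is a hypothetical choice with λn → 0): the finite-rank truncations converge to the diagonal operator in the operator norm.

```python
import numpy as np

N = 200
lam = 1.0 / np.arange(1, N + 1)     # hypothetical eigenvalues lambda_n = 1/n -> 0
T = np.diag(lam)

for m in (5, 20, 80):
    lam_m = lam.copy()
    lam_m[m:] = 0.0                 # keep only the first m eigenvalues: a finite-rank truncation
    Tm = np.diag(lam_m)
    print(m, np.linalg.norm(T - Tm, 2))   # operator norm of T - Tm = 1/(m+1), tending to 0
```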

Solution to Exercise 5.2, page 216

(1)(L1) For x1, x2H, we have

image

(L2) For αK and xH, we have

image

Continuity: For xH, we have

image

So x0 image y0CL(H), and ||x0 image y0|| image ||x0 || ||y0||.

(2)As ran(x0 image y0) ⊂ span{x0}, we have that x0 image y0 has finite rank, and so it is compact.

(3)For all xH,

image

Since this is true for all xH, we conclude that A(x0imagey0)B = (Ax0)image(B y0).

Solution to Exercise 5.3, page 217

(1)Let H = ℓ2, and let T be block diagonal with the 2 × 2 nilpotent blocks [0 1; 0 0].

More explicitly, T (a1, a2, a3, a4, a5, a6, · · ·) = (a2, 0, a4, 0, a6, 0, · · ·), for all (an)nN2. Thus TCL(2). Also, T2 = 0 is compact.

But if we take the bounded sequence (e2n)nN, then (Te2n)nN = (e2n−1)nN, and this has no convergent subsequence. Hence T is not compact.

(2)Suppose that (xn)nN is a bounded sequence in H, and ||xn|| image M for all n. Since T2 is compact, (T2xn)nN has a convergent subsequence, say (T2 xnk)kN. We will show that (T xnk)kN is also convergent, by showing that it is Cauchy. We have for j, k that

image

and so (T xnk)kN is Cauchy. As H is a Hilbert space, it follows that (T xnk)kN is convergent. Hence T is compact.

Solution to Exercise 5.4, page 217

(1)True.

(2)False.

Neither I nor −I is compact, but their sum is 0, which is compact.

(3)True.

(4)False.

See the example in the solution to Exercise 5.3, part (1), page 217.

Alternately, we could take two diagonal operators on 2 corresponding to the sequences (1, 0, 1, 0, 1, 0, · · ·) and (0, 1, 0, 1, 0, 1, · · ·).

Solution to Exercise 5.5, page 217

If TK(H), then as ACL(H), we have ATK(H). Also, TAK(H) because TK(H) and ACL(H). Since AT and TA are in K(H), also their sum AT + TAK(H), that is, Λ(T) ∈ K(H). Thus K(H) is Λ-invariant.

Solution to Exercise 5.6, page 226

We have ker T = {0}. So ran T = (ker T) = (ker T) = {0} = H.

So T has infinite rank. Let xH = ranT, and image > 0. Then there exists a y ∈ ran T, such that ||xy|| < image/2.

As y ∈ ran T, we have y = T x′, for some x′ ∈ H, and image

So there exists an N such that, with

image

we have ||yz|| < image/2. Consequently, ||xz|| image ||xy|| + ||yz|| < image/2 + image/2 = image, and so span{un : nN} is dense in H. Since {un : nN} is also an orthonormal set, it follows that it is an orthonormal basis for H.

Solution to Exercise 5.7, page 226

We note that each eigenvalue λ of T is nonnegative because if u is a corresponding unit-norm eigenvector, then λ = λ · 1 = λ〈u, u〉 = 〈λu, u〉 = 〈Tu, u〉 ≥ 0. By the spectral theorem, we know that there exists a sequence of orthonormal eigenvectors u1, u2, u3, · · · of T with corresponding eigenvalues λ1 ≥ λ2 ≥ λ3 ≥ · · · ≥ 0.

We will show that for all xH, image converges in H.

For N > M, we have,

image

In the above, we have used Bessel’s Inequality to get the last inequality.

Hence image is Cauchy in H. As H is a Hilbert space,

image

converges in H. Consequently imagex is well-defined for all xH.

Also, it is easy to see that image is a linear transformation.

Continuity: For all NN,

image

Passing the limit N → ∞, we obtain ||imagex||2 image λ1 ||x||2, and so imageCL(H).

We have for all xH that

image

So (image)2 = T.
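
The matrix analogue of this construction (the symmetric positive definite matrix below is illustration data): diagonalise, take square roots of the eigenvalues, and reassemble.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # symmetric, positive definite (eigenvalues 1 and 3)
lam, U = np.linalg.eigh(A)                # orthonormal eigenvectors, nonnegative eigenvalues
sqrtA = U @ np.diag(np.sqrt(lam)) @ U.T   # sqrt(A)x = sum_n sqrt(lam_n) <x, u_n> u_n

print(np.allclose(sqrtA @ sqrtA, A))      # True: (sqrt A)^2 = A
print(np.allclose(sqrtA, sqrtA.T))        # True: sqrt A is again self-adjoint
```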

Solutions to the exercises from Chapter 6

Solution to Exercise 6.1, page 232

(1)Since images exists, given an images > 0, there exists a δ > 0 such that

image

whenever 0 < |h| < δ. Consider the interval [0, h] for some h which satisfies 0 < h < δ. Since f is differentiable in (0, h) and continuous on [0, h], it follows from the Mean Value Theorem that

image

Thus |θh| < δ and so images

So for all h ∈ (0, δ), images

Applying the Mean Value Theorem on [−h, 0], where 0 < h < δ, we also get

image

for all h ∈ (−δ, 0). Consequently, for all h satisfying 0 < |h| < δ, we have

image

that is, f is differentiable at 0, and images

shows that f′ is continuous at 0. It was given that f′ is also continuous on R. So f is continuously differentiable on R.

(2)Applying the result from part (1) above, to the function f(n−1) : RR, we obtain that f(n−1) is continuously differentiable on R, that is, f is n times continuously differentiable on R.

(3)We’ll show that for x > 0, images where pk is a polynomial.

This holds for k = 1: f(x) = e−1/x for x > 0, and so images

If the claim holds for some k, then

image

where images is a polynomial.

Now e1/x e−1/x = 1, and since we have images it follows that images for x > 0. So 0 < x−2n e−1/x < (2n + 1)!x for x > 0. Thus images Consequently

image

By the previous part, it follows that fC(R).
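
A small numerical check of the estimate used in part (3): for f(x) = e^(−1/x) (x > 0), the quotients f(x)/x^k still tend to 0 as x → 0+, even though we divide by x^k. The exponents below are arbitrary choices.

```python
import numpy as np

def f(x):
    # f(x) = exp(-1/x) for x > 0, and 0 for x <= 0.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

xs = np.array([1e-1, 1e-2, 1e-3])
for k in (1, 2, 5):
    print(k, f(xs) / xs**k)   # each row tends to 0 as x -> 0+, despite the division by x^k
```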

Solution to Exercise 6.2, page 232

The equation images says that u is constant along the lines parallel to the x-axis. So for each fixed y, there is a number Cy such that u(x, y) = Cy for all xR. But uD(R2) must have compact support, and so it is zero outside a ball B(0, R) with a large enough radius R. So Cy is forced to be 0 for all y! Hence u ≡ 0 is the only solution.

Solution to Exercise 6.3, page 232

It is clear that if Φ ∈ D(R), then Φ′ ∈ D(R). Moreover,

image

So we have images

Now suppose that φD(R) is such that images

Define Φ by images for xR. Then Φ′ = φ, and so Φ ∈ C.

If a > 0 is such that φ is zero outside [−a, a], then we have for x < −a that

image

On the other hand, for images

So φ also vanishes outside [−a, a], and hence Φ ∈ D(R).

Finally, let φY, and suppose that Φ1, Φ2D(R) are such that images Then (Φ1 − Φ2)′ = 0, and so Φ1 − Φ2 = C, where C is a constant. But as Φ1, Φ2 both have compact supports, it follows that C must be zero. Hence Φ1 = Φ2.

Solution to Exercise 6.4, page 233

From the solution to Exercise 6.3, page 232, we know that the Φns are given by

image

As images there is some a > 0 such that all the φn vanish outside [−a, a].

Then it follows that each Φn also vanishes outside [−a, a]. Also,

image

Hence it follows that (Φn)nN converges uniformly to 0 as n → ∞. Since images it follows that images for k image 1. Thus for each k image 1, we have that images converges uniformly to 0 (thanks to the fact that images This completes the proof that images

Solution to Exercise 6.5, page 236

Suppose that such a function δ exists. Let images

For nN, and let φn : RR be defined by φn(x) := φ(nx), xR.

Then φn is smooth, takes values in [0, 1], and vanishes outside [−1/n, 1/n]. So we have

image

a contradiction.

Solution to Exercise 6.6, page 237

(1)For all φD(R), there exists an NN such that φ = 0 on R\[−n, n].

So the sum in the definition of 〈T, φ〉 is finite: images

Hence 〈T, φ〉 is well defined for each φD(R). The linearity is obvious.

Now suppose that images Then there exists a K ∈ N such that each φn vanishes outside [−K, K]. Also, for all |k| ≤ K, images

Thus images and so TD′(R).

(2)Take any φ ∈ D(R) that is positive in (0, 1) and zero outside [0, 1].

(From Example 6.1, page 230, there is a ψD(R) that is positive on (−1, 1) and zero outside [−1, 1]. By shifting and scaling, we see that the function φ defined by φ(x) := ψ(2x − 1), xR, is one such function.)

Now define φnD(R), nN, by images

We have for kN that images

Thus for all images

Hence for all k image 0, we have images uniformly. However, we have

image

(3)There is no contradiction to our conclusion from (1) that T is a distribution, since we observe that there is no compact set KR such that for all nN, φn is zero outside K: Indeed, images

Solution to Exercise 6.7, page 241

The function f(x) := H(x) cos x is continuously differentiable on R\{0}, and has a jump f(0+) − f(0−) = 1 at 0. For x < 0, H(x) = 0 and so (H(x) cos x)′ = 0.

For x > 0, H(x) = 1, and so (H(x) cos x)′ = (cos x)′ = − sin x.

Moreover, images

Consequently, images

The function g(x) := H(x) sin x is continuously differentiable on R\{0}, and has a jump g(0+) − g(0−) = 0 at 0. For x < 0, H(x) = 0 and so (H(x) sin x)′ = 0.

For x > 0, H(x) = 1, and so (H(x) sin x)′ = (sin x)′ = cos x.

Moreover, images

Consequently, images
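
sympy's symbolic calculus, which treats Heaviside and DiracDelta formally, reproduces the same Jump Rule computations (a sketch; the printed form of the output may differ slightly between sympy versions):

```python
import sympy as sp

x = sp.symbols('x')
H = sp.Heaviside

d1 = sp.diff(H(x) * sp.cos(x), x)
print(d1)   # cos(x)*DiracDelta(x) - sin(x)*Heaviside(x); since cos(0) = 1, the first term
            # acts on test functions exactly like DiracDelta(x), i.e. (H cos)' = delta - H sin

d2 = sp.diff(H(x) * sp.sin(x), x)
print(d2)   # sin(x)*DiracDelta(x) + cos(x)*Heaviside(x); since sin(0) = 0, the delta term
            # vanishes, i.e. (H sin)' = H cos
```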

Solution to Exercise 6.8, page 241

The function f(x) := |x|/2 is continuously differentiable on R\{0}, and has a jump f(0+) − f(0−) = 0 at 0. Moreover, for x > 0, |x|/2 = x/2, and so we have (|x|/2)′ = (x/2)′ = 1/2 for x > 0. On the other hand, for x < 0, |x|/2 = −x/2, and so we obtain (|x|/2)′ = (−x/2)′ = −1/2 for x < 0.

Also, images

Hence images where images

Again, g is continuously differentiable on R\{0}.

g has a jump of g(0+) − g(0−) = 1/2 − (−1/2) = 1 at 0.

Also g is constant for x > 0 (respectively for x < 0), and so g′(x) = 0 for x > 0 (respectively for x < 0).

Also, images

Hence images

Solution to Exercise 6.9, page 241

(1)Let us first consider the case when ℓ = 0.

Then V = ker ℓ ⊂ ker L implies that ker L = V too, and so L = 0 as well.

Thus we may simply take c = 0, and then clearly L = 0 = 0 · ℓ is valid.

Now let us suppose that ℓ ≠ 0.

Then there is a vector v0 ∈ V such that ℓ(v0) ≠ 0.

This vector v0 must be nonzero, for otherwise ℓ(v0) = 0.

(To show the desired decomposition of an arbitrary vector as v = cvv0 + w,

with w ∈ ker ℓ, we need to find the appropriate scalar cv, because then we can set w := v − cvv0. To find what cv might work, we apply ℓ on both sides to obtain ℓ(v) = cvℓ(v0) + ℓ(w) = cvℓ(v0) + 0 = cvℓ(v0).

So it seems that cv := ℓ(v)/ℓ(v0) should do the trick!)

Given vV, we now proceed to show that images

We have images and so w ∈ ker ℓ.

As w ∈ ker ℓ ⊂ ker L, we have L(w) = 0, and

image

Hence with images we have L = cℓ.

(2)For φD(R), 0 = 〈0, φ〉 = 〈T′, φ〉 = −〈T, φ〉. So {φ′ : φD(R)} ⊂ ker T.

Let 1 denote the constant function Rx images 1. By Exercise 6.3, page 232

image

Finally, by part (1), applied to the vector space V = D(R), with L := T and := T1, we get the existence of a cC so that T = cT1 = Tc.

(Here Tc denotes the regular distribution corresponding to the constant function taking value c everywhere on R.)

Solution to Exercise 6.10, page 242

Fix any φ0D(R)\{0} which is nonnegative everywhere. For ψD(R), set

image

As ψ and φ0 belong to D(R), so does φ. Moreover,

image

Thus images

By Exercise 6.3, page 232, there is a unique Φ ∈ D(R) such that Φ′ = φ.

We define S : D(R) → C by 〈S, ψ〉 = −〈T, Φ〉. Let us check that S is linear.

Let ψ1, ψ2D(R), and let Φ1, Φ2D(R) be such that

image

Then images

So images

Similarly, 〈S, αψ〉 = αS, ψ〉 for all ψD(R) and all αC.

Now we check the continuity of S. Let (ψn)nN be a sequence in D(R) such that images. Then there exists an a > 0 such that all the ψn vanish outside [−a, a], and (ψn)nN converges uniformly to 0 as n → ∞, giving

image

Now set images

Then there exists a b > 0 such that each φn vanishes outside [−b, b].

Also, for k image 0, images

So for each k image 0, images converges uniformly to 0. Thus images

Let Φn be the unique element in D(R) such that images

From Exercise 6.4, page 233, we can conclude that images

Consequently, 〈S, ψn〉 = −〈T, Φn〉 → 0 as n → ∞. Hence SD′(R).

Finally, we’ll show that S′ = T.

If Φ ∈ D(R), then images

Thus images

Solution to Exercise 6.11, page 242

Let φD(R) be such that φ(0) ≠ 0. (For example we can simply take the test function from Example 6.1, page 230.) Then xn φD(R) too, and we have

image

So δ(n)0.

Solution to Exercise 6.12, page 242

It is enough to show the linear independence of δ, δ′, · · ·, δ(n) for each n. Suppose that there are scalars c0, c1, · · ·, cn such that images Let φD(R), and for λ > 0, set φλ(x) := φ(λx), for all xR. Then

image

The polynomial images is zero on {λ : λ > 0}, and hence must be identically zero. So c0φ(0) = · · · = cn φ(n)(0) = 0. As the choice of φ was arbitrary, we have that for all test functions φD(R),

image

But if we look at the φ from Example 6.1, page 230, then φ(0) ≠ 0, and also xnφ, nN, belongs to D(R), which moreover satisfies

image

So using φ, xφ, · · ·, xnφ as the test functions in (∗), we obtain c0 = · · · = cn = 0.

Solution to Exercise 6.13, page 242

For any φD(Rd), we have

image

So images

Solution to Exercise 6.14, page 242

images and so it defines a regular distribution on R2.

For φD(R2), with a > 0 such that φ ≡ 0 on R2\(−a, a)2, we have

image

Thus images

Solution to Exercise 6.15, page 242

If u : R2R is a radial function, say u(x) = f(r), where r = ||x||2, then

image

Thus images Since for all R > 0 we have

image

we conclude that images

For φD(R2) which vanishes outside the ball B(0, R), we have

image

(log r)(Δφ) is integrable, as logr is locally integrable, and Δφ = 0 outside a ball.

Let images > 0.

image

Using Green’s formula in the annulus Ω := {xR2 : images < ||x||2 < R} (with the boundary ∂Ω being the union of the two circles S(images) = {x : ||x||2 = images} and S(R) = {x : ||x||2 = R}), for the functions u = log r and v = φ, we obtain

image

We’ll show below that the first integral on the right-hand side is O(images), and thus it tends to 0 as images → 0.

As images and ||n(x)||2 = 1, the Cauchy-Schwarz Inequality gives

image

where images Finally, images

Next we will look at the second integral images

First, images Moreover,

image

Given η > 0,

image

where first images0 > 0 is chosen small enough so that |φ(x) − φ(0)| image η if ||x||2 image images0, and images satisfies 0 < images image images0.

Thus images

So images Hence images

Solution to Exercise 6.16, page 248

u is continuous on R, and continuously differentiable on R\{0}.

For x < 0, we have u′(x) = 0. For x > 0, u′(x) = 1.

Also, images

Thus by the Jump Rule, images in the sense of distributions.

So u is a weak solution of u′ = H.

Solution to Exercise 6.17, page 251

We view H(x) cosx as the product of the C function cos with the regular distribution H. Using the Product Rule, we have

image

Similarly,

image

Solution to Exercise 6.18, page 251

(1)We have

image

Hence images

(2)When images

If the claim is true for some nN, then

image

So the claim follows for all nN by induction.

(3)We have

image

Thus

image

Consequently, images

Solution to Exercise 6.19, page 252

We have for all φD(R) that

image

So αδ′ = α(0)δ′ − α′(0)δ. In particular, xδ′ = 0 · δ′ − 1 · δ = −δ.

Solution to Exercise 6.20, page 252

For all φD(R), we have that

image

So images

Solution to Exercise 6.21, page 252

With u := e−3yxH(y), we have

image

For all φD(R2), we have

image

So images Hence images

Moreover, u(0, y) = e−0 H(y) = 1 · H(y) = H(y).

Solution to Exercise 6.22, page 252

First we note that for all φD(R), we have

image

where 1 is the constant function Rx images 1.

Suppose, on the contrary, that it were possible to define an associative and commutative product such that for α ∈ C∞(R) and T ∈ D′(R), it agrees with Definition 6.6, page 249. Then

image

whereas

image

and so images violating associativity.

Solution to Exercise 6.23, page 252

(1)Let images Then we have

image

From Exercise 6.9, page 241, there exists a cC such that eλxT = c, that is, T = ceλx.

(2)Since fC, there exists an FC such that images

(In fact, an explicit expression for one such F (for which F(0) = 0), is given by images This can be checked by differentiation using the Product Rule and the Fundamental Theorem of Calculus.)

Hence we obtain images

From part (1), TF = ceλx for some cC. Hence T = F + ceλxC.

(3)Let images with an ≠ 0.

Then images

So P(ξ) = (ξλ)Q(ξ), where λ = λn, and a suitable polynomial Q.

Correspondingly, with images

We’ll use induction (on the order n of D) to prove

image

This is true for n = 1, from part (2) above.

Suppose that the claim is true for all differential operators of order n.

Let D have order n + 1, and write images where D1 is order n.

If DT = fC, then images and so D1T = Tg for some gC.

But by the induction hypothesis, it now follows that T = TF, with FC.

(4)If E′ is also a fundamental solution, then DE′ = δ.

But also DE = δ, and so D(E′ − E) = 0.

Thus E′ − E = F, where F is a classical solution of the homogeneous equation DF = 0. So E′ = E + F.

Conversely, if F is a classical solution of the homogeneous equation DF = 0, then E′ := E + F is a fundamental solution of D too: indeed, we have that DE′ = DE + DF = δ + 0 = δ.

So we conclude that: E′ ∈ D′(R) satisfies DE′ = δ if and only if

image

Solution to Exercise 6.24, page 252

If T = cδ, where c ∈ C, then clearly xT = x(cδ) = c(xδ) = 0.

Now suppose that TD′(R) is such that xT = 0.

This means that for all φD(R), we have 0 = 〈xT, φ〉 = 〈T, xφ〉.

Hence {xφ : φ ∈ D(R)} ⊂ ker T. We will now identify the set on the left-hand side as ker δ = {ψ ∈ D(R) : ψ(0) = 0}, and then use part (1) of Exercise 6.9, page 241.

First, let us note that if ψ = xφ, where φ ∈ D(R), then ψ ∈ D(R), and moreover, ψ(0) = 0 · φ(0) = 0. So we have {xφ : φ ∈ D(R)} ⊂ {ψ ∈ D(R) : ψ(0) = 0}.

Next, let us show the reverse inclusion. Let ψD(R) be such that ψ(0) = 0.

We have, by the Fundamental Theorem of Calculus:

image

Set images Then ψ(x) = xφ(x).

By differentiating under the integral sign we see that φC.

If ψ is zero outside [−a, a] for some a > 0, then as images

it follows that φ also vanishes outside [−a, a]. Thus φD(R).

So we have {ψ ∈ D(R) : ψ(0) = 0} ⊂ {xφ : φ ∈ D(R)} as well.

Thus ker δ = {xφ : φ ∈ D(R)} ⊂ ker T, and by part (1) of Exercise 6.9, page 241, there exists a c ∈ C such that T = cδ.

Solution to Exercise 6.25, page 254

First, we prove by induction that images where pn is a polynomial.

This is indeed true for n = 0 and images

If it is true for some n, then

image

where images is a polynomial.

This finishes the proof of our claim.

Now to show ex2S(R), it is enough to show that images for all nonnegative integers . For = 0, this is clear since |ex2| image 1 for all xR.

We have images and so for images

Since images is a continuous function, there is an M > 0 such that images

for x ∈ [−1, 1]. Consequently, images

Solution to Exercise 6.26, page 254

Since images we know that there exists an a > 0 such that all the φn vanish outside [−a, a], and moreover, φn and all its derivatives converge uniformly to 0 on [−a, a]. So for any nonnegative integers m, k, we have that

image

So images

Solution to Exercise 6.27, page 255

We have for φS(R) that

image

From here it follows that if (φn)nN is a sequence in S(R) such that images as n → ∞, then 〈Tf, φn〉 → 0. Thus TfS′(R).

Solution to Exercise 6.28, page 256

For φS(R), we have that

image

Note that in the last step, we have used the fact that the Fourier transform of an L′(R) function is bounded on R, and hence it defines a tempered distribution.

1 See for example [Sasane (2015), §2.4].

2 See for example [Sasane (2015), Chapter 6].

3 The symbol ¬ stands for “negation”. It is read as: “It is not the case that · · ·”.

4 See for example [Sasane (2015), page 311].

5 uiᵀuj = 0 unless i = j, in which case uiᵀui = 1. Here ·ᵀ denotes transpose.

6 By the Axiom of Choice!