CHAPTER III

Cartan’s Criterion and Its Consequences

In this chapter we shall get at the heart of the structure theory of finite-dimensional Lie algebras of characteristic zero. We shall obtain the structure of semi-simple algebras of this type, prove complete reducibility of finite-dimensional representations and prove the Levi radical splitting theorem. All these results are consequences of certain trace criteria for solvability and semi-simplicity. One of these is Cartan’s criterion that a finite-dimensional Lie algebra 𝔏 of characteristic 0 is solvable if and only if tr (ad a)² = 0 for every a in 𝔏′. Clearly, this is a weakening of Engel’s condition, that ad a is nilpotent for every a ∈ 𝔏′. The method which we employ to establish this result and others of the same type is classical and is based on the study of certain nilpotent subalgebras, called Cartan subalgebras. We shall pursue this method further in the next chapter to obtain the classification of simple Lie algebras over algebraically closed fields of characteristic 0.

1. Cartan subalgebras

If 𝔅 is a subalgebra of a Lie algebra 𝔏, then the normalizer of 𝔅 is the set 𝔑 of elements x ∈ 𝔏 such that [x𝔅] ⊆ 𝔅, that is, [xb] ∈ 𝔅 for every b ∈ 𝔅. It is immediate that 𝔑 is a subalgebra containing 𝔅, and 𝔅 is an ideal in 𝔑. In fact, as in group theory, 𝔑 is the largest subalgebra of 𝔏 in which 𝔅 is contained as an ideal. We now give the following

DEFINITION 1. A subalgebra ℌ of a Lie algebra 𝔏 is called a Cartan subalgebra if (1) ℌ is nilpotent and (2) ℌ is its own normalizer in 𝔏.

Let ℌ be a nilpotent subalgebra of a finite-dimensional Lie algebra 𝔏 and let 𝔏 = 𝔏0 ⊕ 𝔏1 be the Fitting decomposition of 𝔏 relative to ad ℌ. We recall that 𝔏0 = {x | x(ad h)^k = 0, h ∈ ℌ, for some integer k}. We can now establish the following criterion.

PROPOSITION 1. Let ℌ be a nilpotent subalgebra of a finite-dimensional Lie algebra 𝔏. Then ℌ is a Cartan subalgebra if and only if ℌ coincides with the Fitting null component 𝔏0 of 𝔏 relative to ad ℌ.

Proof: We note first that ℌ ⊆ 𝔑, the normalizer of ℌ. Thus if x ∈ 𝔑 then [xh] ∈ ℌ for any h ∈ ℌ. Since ℌ is nilpotent,

    x(ad h)^k = ([xh])(ad h)^{k−1} = 0

for some k. Hence x ∈ 𝔏0, so that 𝔑 ⊆ 𝔏0. Hence if ℌ = 𝔏0, then 𝔑 = ℌ and ℌ is a Cartan subalgebra. Next assume ℌ ≠ 𝔏0. Now 𝔏0 is invariant under ad ℌ and every restriction ad h|𝔏0, h ∈ ℌ, is nilpotent. Also ℌ is an invariant subspace of 𝔏0 relative to ad ℌ. Hence we obtain an induced Lie algebra of linear transformations acting in the non-zero space 𝔏0/ℌ. Since these transformations are nilpotent, one of the versions of Engel’s theorem implies that there exists a non-zero vector x + ℌ in 𝔏0/ℌ such that (x + ℌ) ad h = ℌ for every h ∈ ℌ. This means that we have [xh] ∈ ℌ for every h ∈ ℌ; hence x ∈ 𝔑 and x ∉ ℌ, so that 𝔑 ≠ ℌ. Thus ℌ = 𝔑 if and only if ℌ = 𝔏0, which is what we wished to prove.

PROPOSITION 2. Let ℌ be a nilpotent subalgebra of the finite-dimensional Lie algebra 𝔏 and let 𝔏 = 𝔏0 ⊕ 𝔏1 be the Fitting decomposition of 𝔏 relative to ad ℌ. Then 𝔏0 is a subalgebra and [𝔏0𝔏1] ⊆ 𝔏1.

Proof: Let h ∈ ℌ and a ∈ 𝔏0. Then a(ad h)^k = 0 for some k. Hence

    (ad a)(ad ad h)^k = ad (a(ad h)^k) = 0,

where ad ad h denotes the mapping X → [X, ad h] of linear transformations. This relation and Lemma 1 of § 2.4 imply that the Fitting spaces 𝔏0(ad h) and 𝔏1(ad h) of 𝔏 relative to ad h are invariant under ad a. Since 𝔏0 = ∩h 𝔏0(ad h) and 𝔏1 = Σh 𝔏1(ad h), it follows that 𝔏0 ad a ⊆ 𝔏0 and 𝔏1 ad a ⊆ 𝔏1. Since a is any element of 𝔏0, 𝔏0 is a subalgebra and [𝔏1𝔏0] ⊆ 𝔏1.

DEFINITION 2. An element h ∈ 𝔏 is called regular if the dimensionality of the Fitting null component of 𝔏 relative to ad h is minimal. If this dimensionality is l, then n − l, where n = dim 𝔏, is called the rank of 𝔏.

We have seen that the dimensionality of the Fitting null component of a linear transformation A is the multiplicity of the root 0 of the characteristic polynomial of A. Hence h is regular if and only if the multiplicity of the characteristic root 0 of ad h is minimal. Since [hh] = 0 for every h it is clear that ad h is singular for every h ∈ 𝔏. Hence the number l of the above definition is > 0. We remark also that 𝔏 is of rank 0 if and only if every ad h is nilpotent. By Engel’s theorem this holds if and only if 𝔏 is a nilpotent Lie algebra. Regular elements can be used to construct Cartan subalgebras, for we have the following

THEOREM 1. If 𝔏 is a finite-dimensional Lie algebra over an infinite field Φ and a is a regular element of 𝔏, then the Fitting null component ℌ of 𝔏 relative to ad a is a Cartan subalgebra.

Proof. Let 𝔏 = ℌ ⊕ 𝔎 be the Fitting decomposition of 𝔏 relative to ad a, ℌ the null component. Then, by Proposition 2, ℌ is a subalgebra and [ℌ𝔎] ⊆ 𝔎. We assert that every ad_ℌ b, b ∈ ℌ, is nilpotent. Otherwise let b be an element of ℌ such that ad_ℌ b is not nilpotent. We choose a basis for 𝔏 which consists of a basis for ℌ and a basis for 𝔎. Then the matrix of any ad h, h ∈ ℌ, relative to this basis has the form

    diag ((ρ1), (ρ2)),

where (ρ1) is a matrix of ad_ℌ h and (ρ2) is a matrix of ad_𝔎 h. Let

    A = diag ((α1), (α2)),   B = diag ((β1), (β2))

respectively be the matrices for ad a and ad b. Then we know that (α2) is non-singular; hence det (α2) ≠ 0. Also, by assumption, (β1) is not nilpotent. Hence if n − l is the rank, then dim ℌ = l and the characteristic polynomial of (β1) is not divisible by λ^l. Now let λ, μ, ν be (algebraically independent) indeterminates and let F(λ, μ, ν) be the characteristic polynomial F(λ, μ, ν) = det (λ1 − μA − νB). We have F(λ, μ, ν) = F1(λ, μ, ν)F2(λ, μ, ν) where

    F1(λ, μ, ν) = det (λ1 − μ(α1) − ν(β1)),   F2(λ, μ, ν) = det (λ1 − μ(α2) − ν(β2)).

We have seen that F2(λ, 1, 0) = det (λ1 − (α2)) is not divisible by λ and F1(λ, 0, 1) = det (λ1 − (β1)) is not divisible by λ^l. Hence the highest power of λ dividing F(λ, μ, ν) is λ^{l′}, l′ < l. Since Φ is infinite we can choose μ0, ν0 in Φ such that F(λ, μ0, ν0) is not divisible by λ^{l′+1}. Set c = μ0a + ν0b. Then the characteristic polynomial det (λ1 − ad c) = det (λ1 − μ0A − ν0B) = F(λ, μ0, ν0) is not divisible by λ^{l′+1}. Hence the multiplicity of the characteristic root 0 of ad c is l′ < l. This contradicts the regularity of a. We have therefore proved that for every b ∈ ℌ, ad_ℌ b is nilpotent. Consequently, by Engel’s theorem, ℌ is a nilpotent Lie algebra. Let 𝔏0 be the Fitting null component of 𝔏 relative to ad ℌ. Then 𝔏0 ⊆ ℌ since the latter is the Fitting null component of ad a and a ∈ ℌ. On the other hand, we always have ℌ ⊆ 𝔏0 for a nilpotent subalgebra. Hence 𝔏0 = ℌ, and ℌ is a Cartan subalgebra by Proposition 1.

Another useful remark about regular elements and Cartan subalgebras is that if a Cartan subalgebra ℌ contains a regular element a, then ℌ is uniquely determined by a as the Fitting null component 𝔎 of 𝔏 relative to ad a. Thus if 𝔎 is this component, then it is clear that ℌ ⊆ 𝔎 since ℌ is nilpotent. On the other hand, we have just seen that 𝔎 is nilpotent, so that if ℌ ≠ 𝔎, then 𝔎 contains an element x ∉ ℌ such that [xℌ] ⊆ ℌ (cf. Exercise 1). This contradicts the assumption that ℌ is a Cartan subalgebra. An immediate consequence of our result is that if two Cartan subalgebras have a regular element in common then they coincide. We shall see later (Chapter IX) that if Φ is algebraically closed of characteristic zero, then every Cartan subalgebra contains a regular element. We now indicate a fairly concrete way of determining the regular elements, assuming again that Φ is infinite. For this purpose we need to introduce the notion of a generic element and the characteristic polynomial of a Lie algebra.

Let 𝔏 be a Lie algebra with basis (e1, e2, …, en) over the field Φ. Let ξ1, ξ2, …, ξn be indeterminates and let P = Φ(ξ1, ξ2, …, ξn), the field of rational expressions in the ξi. We form the extension 𝔏P of 𝔏. The element x = Σξiei of 𝔏P is called a generic element of 𝔏 and the characteristic polynomial fx(λ) of ad x (in 𝔏P) is called the characteristic polynomial of the Lie algebra 𝔏. If we use the basis (e1, e2, …, en) for 𝔏P, then we can write

    ad x = (pij),

where the pij are homogeneous expressions of degree one in the ξk. It follows that

(4)    fx(λ) = det (λ1 − ad x) = λ^n + τ1λ^{n−1} + ⋯ + τ_{n−l}λ^l,

where τi is a homogeneous polynomial of degree i in the ξ’s and τ_{n−l} ≠ 0 but τ_{n−l+k} = 0 if k > 0. Since x ad x = [xx] = 0 and x ≠ 0, det (pij) = 0 and l > 0. The characteristic polynomial of any a = Σαiei in 𝔏 is obtained by specializing ξi = αi, i = 1, 2, …, n, in (4). Hence it is clear that the multiplicity of the root 0 for the characteristic polynomial of ad a is at least l. On the other hand, if Φ is an infinite field then, since the polynomial τ_{n−l}(ξ1, ξ2, …, ξn) ≠ 0 in the polynomial algebra Φ[ξ1, ξ2, …, ξn], we can choose αi in Φ so that τ_{n−l}(α1, α2, …, αn) ≠ 0. Then ad a for a = Σαiei has exactly l characteristic roots 0, and so a is regular. Thus we see that for an infinite field, a is regular if and only if

(5)    τ_{n−l}(α1, α2, …, αn) ≠ 0.

In this sense “almost all” the elements of 𝔏 are regular. (In the sense of algebraic geometry the regular elements form an open set.) It is also clear that n − l is the rank of 𝔏.
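For a concrete instance, the generic characteristic polynomial can be computed by machine. The following sympy sketch does this for the three-dimensional algebra with basis (e, f, h) and multiplication [ef] = h, [he] = 2e, [hf] = −2f; the basis, the bracket, and the left-action convention for ad are assumptions of this illustration, not part of the text:

```python
# Generic characteristic polynomial of a 3-dimensional Lie algebra
# (basis (e, f, h), [e,f] = h, [h,e] = 2e, [h,f] = -2f); conventions
# here are this sketch's assumptions.
import sympy as sp

lam, x1, x2, x3 = sp.symbols('lambda xi1 xi2 xi3')

# matrices of ad e, ad f, ad h on coordinate columns w.r.t. (e, f, h)
ad_e = sp.Matrix([[0, 0, -2], [0, 0, 0], [0, 1, 0]])
ad_f = sp.Matrix([[0, 0, 0], [0, 0, 2], [-1, 0, 0]])
ad_h = sp.diag(2, -2, 0)

ad_x = x1*ad_e + x2*ad_f + x3*ad_h             # ad of the generic element
f_x = sp.expand((lam*sp.eye(3) - ad_x).det())  # characteristic polynomial

print(f_x)   # equals lambda**3 - 4*(xi1*xi2 + xi3**2)*lambda
```

Here l = 1 and τ_{n−l} = −4(ξ1ξ2 + ξ3²), so an element is regular exactly when ξ1ξ2 + ξ3² ≠ 0: specializing (ξ1, ξ2, ξ3) = (0, 0, 1) gives λ³ − 4λ, so h is regular, while (1, 0, 0) gives λ³, so e is not.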

All of this depends on the choice of the basis (e). However, it is easy to see what happens if we change to another basis (f1, f2, …, fn), where fi = Σμijej and the matrix (μ) is non-singular. Thus if η1, η2, …, ηn are indeterminates, then y = Σηifi = Σi,jηiμijej is a generic element relative to the new basis. Hence the characteristic polynomial fy(λ) is obtained from fx(λ) by the substitutions ξj → Σiηiμij in its coefficients (of the powers of λ).

If Ω is any extension field of Φ, then (e) is a basis for 𝔏Ω over Ω. Hence x = Σξiei can be considered also as a generic element of 𝔏Ω and the characteristic polynomial fx(λ) is unchanged on extending the base field Φ to Ω. It is clear from this also that if Φ is infinite, a is regular in 𝔏 if and only if a is regular as an element of 𝔏Ω. (In either case, (5) is the condition for regularity.) We have seen that the Fitting null component ℌ of 𝔏 relative to ad a, a regular, is a Cartan subalgebra. The dimensionality of ℌ is l, which is the multiplicity of the characteristic root 0 of ad a. It follows that the Cartan subalgebra determined by a in 𝔏Ω is ℌΩ.

2. Products of weight spaces

It is convenient to carry over the notion of weights and weight spaces for a Lie algebra of linear transformations to an abstract Lie algebra 𝔏 and a representation R of 𝔏. Let 𝔐 be the module for 𝔏 determined by R. A mapping a → α(a) of 𝔏 into Φ is called a weight of 𝔐 if there exists a non-zero vector x in 𝔐 such that

    x(aR − α(a)1)^k = 0,   a ∈ 𝔏,

for a suitable k. The set of vectors satisfying this condition together with 0 is a subspace 𝔐α called the weight space corresponding to the weight α. If 𝔏 is nilpotent, then Lemma 2.1 shows that 𝔐α is a submodule. If 𝔐 = 𝔐α, then we shall say that 𝔐 is a weight module for 𝔏 corresponding to the weight α.

Let 𝔏 be a Lie algebra and let 𝔐 be a finite-dimensional weight module for 𝔏 relative to the weight α. Then for any x ∈ 𝔐, x(aR − α(a)1)^k = 0 if k is sufficiently large. Moreover, if dim 𝔐 = n, then (λ − α(a))^n is the characteristic polynomial of aR. Hence we have x(aR − α(a)1)^n = 0 for all x ∈ 𝔐. We consider the contragredient module 𝔐* which carries the representation R* satisfying

(7)    (xaR, y*) + (x, y*aR*) = 0,

x ∈ 𝔐, y* ∈ 𝔐*. We have

(8)    (x(−α(a)1), y*) + (x, y*(α(a)1)) = 0,

which we can add to (7) to obtain

(9)    (x(aR − α(a)1), y*) + (x, y*(aR* + α(a)1)) = 0.

Iteration of this gives

(10)    (x(aR − α(a)1)^k, y*) = (−1)^k(x, y*(aR* + α(a)1)^k).

If k = n, x(aR − α(a)1)^n = 0 for all x and consequently, by (10), (x, y*(aR* + α(a)1)^n) = 0 for all x. Hence y*(aR* + α(a)1)^n = 0 for all y* ∈ 𝔐*. This shows that 𝔐* is a weight module with the weight −α.

PROPOSITION 3. If 𝔐 is a finite-dimensional weight module for 𝔏 with the weight α, then the contragredient module 𝔐* is a weight module with the weight −α.

We consider next what happens if we take the tensor product of two weight modules. Thus let 𝔐, 𝔑 be weight modules of 𝔏 relative to the weights α and β. Let R and S denote the representations in 𝔐 and 𝔑, respectively. Then any x ∈ 𝔐 satisfies

(11)    x(aR − α(a)1)^k = 0

for some positive integer k, and every y ∈ 𝔑 satisfies

(12)    y(aS − β(a)1)^{k′} = 0

for some positive integer k′. Let 𝔓 = 𝔐 ⊗ 𝔑 and denote the representation of 𝔏 in 𝔓 by T. Then we have

(13)    (x ⊗ y)a = xa ⊗ y + x ⊗ ya,

or aT = aR ⊗ 1 + 1 ⊗ aS. Hence

(14)    aT − (α(a) + β(a))1 = (aR − α(a)1) ⊗ 1 + 1 ⊗ (aS − β(a)1).

Since the two transformations in the parentheses commute we can apply the binomial theorem to obtain

(15)    (aT − (α(a) + β(a))1)^m = Σ_{i=0}^{m} C(m, i)(aR − α(a)1)^i ⊗ (aS − β(a)1)^{m−i},

where C(m, i) denotes the binomial coefficient. If we apply this to x ⊗ y, we obtain

(16)    (x ⊗ y)(aT − (α(a) + β(a))1)^m = Σ_{i=0}^{m} C(m, i) x(aR − α(a)1)^i ⊗ y(aS − β(a)1)^{m−i}.

If we take m = k + k′ − 1, then for every i either x(aR − α(a)1)^i = 0 or y(aS − β(a)1)^{m−i} = 0. Hence (x ⊗ y)(aT − (α(a) + β(a))1)^m = 0 and we have the following

PROPOSITION 4. If 𝔐 and 𝔑 are weight modules for 𝔏 with the weights α and β, respectively, then 𝔓 = 𝔐 ⊗ 𝔑 is a weight module with the weight α + β.
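Proposition 4 can be watched numerically in a special case. In the sketch below (a hypothetical single-transformation illustration: aR and aS are modelled by matrices with sole characteristic roots α and β), the Kronecker-sum transformation aT = aR ⊗ 1 + 1 ⊗ aS has the sole root α + β, and the exponent m = k + k′ − 1 from the binomial argument already annihilates:

```python
# Numerical illustration of Proposition 4 for one transformation:
# if (A - alpha)^k = 0 on M and (B - beta)^k' = 0 on N, then
# T = A (x) 1 + 1 (x) B satisfies (T - (alpha+beta))^(k+k'-1) = 0.
import numpy as np

alpha, beta, k, kp = 2.0, 5.0, 3, 2
A = alpha*np.eye(3) + np.eye(3, k=1)   # sole characteristic root alpha
B = beta*np.eye(2) + np.eye(2, k=1)    # sole characteristic root beta

T = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)

print(np.linalg.eigvals(T).real)       # six copies of alpha + beta = 7.0

N = T - (alpha + beta)*np.eye(6)
print(np.linalg.matrix_power(N, k + kp - 1))   # the zero matrix
```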

We suppose now that 𝔏 is a finite-dimensional Lie algebra, ℌ a nilpotent subalgebra of 𝔏, and 𝔐 is a finite-dimensional module for 𝔏, hence for ℌ. If R denotes the representation of ℌ in 𝔐 and ad the adjoint representation of ℌ in 𝔏, then we assume that ℌR and ad ℌ are split Lie algebras of linear transformations, that is, the characteristic roots of hR and ad h, h ∈ ℌ, are in the base field Φ. This will be automatically satisfied if Φ is algebraically closed. If hR → ρ(hR) is a weight for ℌR, then h → ρ(h) ≡ ρ(hR) is a weight for ℌ in the module 𝔐. The result on weight spaces for a split nilpotent Lie algebra of linear transformations (Theorem 2.7) implies that 𝔐 is a direct sum of weight modules 𝔐ρ. Similarly, we have a decomposition of 𝔏 into weight modules 𝔏α relative to ad ℌ. Thus we have

(17)    𝔐 = 𝔐ρ ⊕ 𝔐σ ⊕ ⋯ ⊕ 𝔐τ

and

(18)    𝔏 = 𝔏α ⊕ 𝔏β ⊕ ⋯ ⊕ 𝔏δ,

where ρ, σ, …, τ, α, β, …, δ are mappings of ℌ into Φ such that if xρ ∈ 𝔐ρ then xρ(hR − ρ(h)1)^m = 0 for some m, etc. The weights α, β, …, δ associated with ad ℌ will be called the roots of ℌ in 𝔏. Since h → hR and h → ad h are linear, it is clear that in the characteristic 0 case the weights ρ, …, τ and the roots α, …, δ are linear functions on ℌ which vanish on ℌ′.

The following important result relates the decompositions (17) and (18) relative to ℌ.

PROPOSITION 5. 𝔐ρ𝔏α ⊆ 𝔐_{ρ+α} if ρ + α is a weight of 𝔐 relative to ℌ; otherwise 𝔐ρ𝔏α = 0.

Proof. The elements of 𝔐ρ𝔏α are sums of products xρaα, xρ ∈ 𝔐ρ, aα ∈ 𝔏α. The characteristic property of the tensor product of two spaces shows that we have a linear mapping π of 𝔓 = 𝔐ρ ⊗ 𝔏α onto 𝔐ρ𝔏α such that

    π: xρ ⊗ aα → xρaα.

We shall now show that π is in fact a homomorphism of ℌ-modules. Thus let h ∈ ℌ. Then we have

    (xρ ⊗ aα)h = xρh ⊗ aα + xρ ⊗ [aαh],

and the image of this element under π is (xρh)aα + xρ[aαh] = (xρaα)h, by the module condition x[ah] = (xa)h − (xh)a. On the other hand, the image of xρ ⊗ aα under π is xρaα. Following this with the module product by h we again obtain (xρaα)h. We have therefore proved that 𝔐ρ𝔏α is a homomorphic image of 𝔓 = 𝔐ρ ⊗ 𝔏α. Moreover, the latter is a weight module for the weight ρ + α. Now it is clear from the definition that the homomorphic image of a weight module with weight β is either 0 or it is a weight module with weight β. The result stated follows from this.

If we apply the last result to the case in which 𝔐 = 𝔏 and the representation is the adjoint representation of 𝔏, we obtain the

COROLLARY. [𝔏α𝔏β] ⊆ 𝔏_{α+β} if α + β is a root, and [𝔏α𝔏β] = 0 otherwise.

3. An example

Before plunging into the structure theory it will be well to look at an example. Let 𝔐 be an n-dimensional vector space over an algebraically closed field Φ, 𝔈 the associative algebra of linear transformations in 𝔐, 𝔏 = 𝔈L the corresponding Lie algebra. We wish to determine the regular elements of 𝔏 and the corresponding Cartan subalgebras.

Let A ∈ 𝔏 and let 𝔐 = 𝔐α1 ⊕ 𝔐α2 ⊕ ⋯ ⊕ 𝔐αs be the decomposition of 𝔐 into the weight spaces relative to ΦA. The αi are distinct elements of Φ and these are just the characteristic roots of the linear transformation A, and the 𝔐αi are the corresponding characteristic spaces. It is well known that a decomposition of a space as a direct sum leads to a decomposition of the dual space into a direct sum of subspaces which can be considered as the conjugate spaces of the components. Hence we have

    𝔐* = 𝔐1* ⊕ 𝔐2* ⊕ ⋯ ⊕ 𝔐s*,

where 𝔐i* can be identified with the conjugate space of 𝔐αi. 𝔐i* is invariant under −A* and Proposition 3 shows that this is a weight module for the Lie algebra ΦA with the weight −αi. Accordingly, we write 𝔐−αi for 𝔐i* and we have 𝔐* = 𝔐−α1 ⊕ ⋯ ⊕ 𝔐−αs.

We now consider the module 𝔐 ⊗ 𝔐* relative to ΦA. As is well known, we have 𝔐 ⊗ 𝔐* = Σi,j 𝔐αi ⊗ 𝔐−αj. By Proposition 4, 𝔐αi ⊗ 𝔐−αj is a weight module for ΦA for the weight αi − αj. If Aij denotes the linear transformation in 𝔐αi ⊗ 𝔐−αj corresponding to A, then (Aij − (αi − αj)1)^{kij} = 0 for suitable kij. Hence Aij is non-singular if αi ≠ αj and Aij is nilpotent if αi = αj. This implies that the Fitting null component of 𝔐 ⊗ 𝔐* relative to the linear transformation corresponding to A is Σi 𝔐αi ⊗ 𝔐−αi. If dim 𝔐αi = ni, then the dimensionality of this Fitting space is Σni². Since Σni = n, Σni² is minimal if and only if every ni = 1, which is equivalent to saying that A has n distinct characteristic roots. Now, we recall that the module 𝔐 ⊗ 𝔐* is isomorphic to the module 𝔏 = 𝔈L relative to the adjoint mapping X → [XA]. Hence we see that the dimensionality of the Fitting null component of 𝔏 relative to ad A is minimal if and only if A has n distinct characteristic roots. Thus A is regular in 𝔏 = 𝔈L if and only if A has n distinct characteristic roots. The corresponding Cartan subalgebra ℌ is the Fitting null component of 𝔏 relative to ad A and dim ℌ = n.

Since A has n distinct characteristic roots we can choose a basis for 𝔐 such that the matrix of A relative to this basis is diagonal. Let ℌ1 be the set of linear transformations whose matrices relative to this basis are diagonal. Then dim ℌ1 = n and [HA] = 0 for every H ∈ ℌ1, so that ℌ1 ⊆ ℌ. On the other hand, dim ℌ = n. Hence ℌ = ℌ1 and the Cartan subalgebra determined by A is just the centralizer of the linear transformation A.

Let (e1, e2, …, en) be a basis of 𝔐 such that eiA = αiei. Then we have seen that the Cartan subalgebra ℌ is the set of linear transformations which have diagonal matrices relative to the basis (ei). Hence if H ∈ ℌ then we have eiH = λi(H)ei, and we may assume the notation is such that λi(A) = αi. The spaces Φei are weight spaces and Φei corresponds to the weight λi. We shall therefore write 𝔐λi for Φei. As for the single linear transformation A we can now write 𝔐* = Σ𝔐−λj, and 𝔐λi ⊗ 𝔐−λj is a weight space for the Cartan subalgebra ℌ corresponding to the weight λi − λj. We can summarize our results in the following

THEOREM 2. Let 𝔏 = 𝔈L, the Lie algebra of linear transformations in an n-dimensional vector space 𝔐 over an algebraically closed field Φ. Then A ∈ 𝔏 is a regular element if and only if the characteristic polynomial det (λ1 – A) has n distinct roots. The Cartan subalgebra ℌ determined by A is the set of linear transformations H such that [HA] = 0. If the weights of H ∈ ℌ acting in 𝔐 are λ1(H), λ2(H), …, λn(H), then the roots (weights of the adjoint representation in 𝔏) are the functions λi(H) − λj(H), i, j = 1, 2, …, n.

If Φ is not algebraically closed but is infinite then the extension field argument of § 1 shows that again A is regular in 𝔏 = 𝔈L if and only if A has n distinct characteristic roots (in the algebraic closure Ω of Φ). It is well known that the centralizer of such an A is the algebra Φ[A] of polynomials in A and that dim Φ[A] = n. Since the dimensionality of the Cartan subalgebra determined by A is n, it follows that this subalgebra is Φ[A].
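The multiplicity computation behind this example is easy to carry out numerically. The sketch below (the column-stacking convention for matrices and the numerical tolerance are assumptions of this illustration) builds ad A on gl(3) as a 9 × 9 matrix and compares the multiplicity of the characteristic root 0 for a regular and a non-regular A:

```python
# ad A on gl(3) as a 9x9 matrix: X -> [X, A] = XA - AX on column-stacked X.
# For A with n = 3 distinct characteristic roots, the root 0 of ad A has
# multiplicity 3 = n (A regular); a repeated root raises the multiplicity.
import numpy as np

n = 3
I = np.eye(n)

def ad(A):
    return np.kron(A.T, I) - np.kron(I, A)

def mult0(A):                  # multiplicity of the characteristic root 0
    return int(np.sum(np.abs(np.linalg.eigvals(ad(A))) < 1e-9))

print(mult0(np.diag([1.0, 2.0, 4.0])))   # 3: distinct roots, A regular
print(mult0(np.diag([1.0, 1.0, 4.0])))   # 5: repeated root, A not regular
```

In the regular case the null space of ad A consists exactly of the (column-stacked) diagonal matrices, that is, the centralizer of A, in agreement with the text.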

If 𝔏 is the orthogonal or symplectic Lie algebra in a 2l-dimensional space over an infinite field, one can show that the rank is N − l, N = dim 𝔏 (cf. Exercises 4 and 5). It is not difficult to determine Cartan subalgebras for the other important examples of Lie algebras which we have encountered (cf. Exercise 3).

4. Cartan’s criteria

We now consider a finite-dimensional Lie algebra 𝔏 over an algebraically closed field Φ, ℌ a nilpotent subalgebra of 𝔏 and 𝔐 a module for 𝔏, which is finite-dimensional over Φ. Let

(19)    𝔐 = 𝔐ρ ⊕ 𝔐σ ⊕ ⋯ ⊕ 𝔐τ,   𝔏 = 𝔏α ⊕ 𝔏β ⊕ ⋯ ⊕ 𝔏δ

be the decompositions of 𝔐 and 𝔏 into weight modules relative to ℌ. We have seen that

(20)    𝔐ρ𝔏α ⊆ 𝔐_{ρ+α},   𝔐ρ𝔏α = 0 if ρ + α is not a weight.

Now suppose ℌ is a Cartan subalgebra. Then ℌ = 𝔏0, the root module corresponding to the root 0. Also we have 𝔏′ = [𝔏𝔏] = Σ[𝔏α𝔏β] where the sum is taken over all the roots α, β. The formula for [𝔏α𝔏β] shows that

(21)    ℌ ∩ 𝔏′ = Σ[𝔏α𝔏−α],

where the summation is taken over all the α such that −α is also a root (e.g., α = 0).

We now prove the following

LEMMA 1. Let Φ be algebraically closed of characteristic 0, ℌ a Cartan subalgebra of a finite-dimensional Lie algebra 𝔏 over Φ, 𝔐 a finite-dimensional module for 𝔏. Suppose α is a root such that −α is also a root. Let hα = [eαe−α], eα ∈ 𝔏α, e−α ∈ 𝔏−α. Then ρ(hα) is a rational multiple of α(hα) for every weight ρ of ℌ in 𝔐.

Proof: Consider the functions of the form ρ(h) + iα(h), i = 0, ±1, ±2, …, which are weights of ℌ in 𝔐, and form the subspace 𝔎 = Σ𝔐_{ρ+iα} summed over the corresponding weight spaces. This space is invariant relative to ℌR and, by (20), it is also invariant relative to the linear transformations eαR, e−αR, where R is the representation of 𝔏 determined by 𝔐. Hence, if tr denotes the trace of an induced mapping in 𝔎, then tr hαR = tr [eαR e−αR] = 0. On the other hand, the restriction of hαR to 𝔐σ has the single characteristic root σ(hα). Hence

    tr hαR = Σi n_{ρ+iα}(ρ(hα) + iα(hα)) = 0,

where, in general, nσ = dim 𝔐σ. Thus we have

    (Σi n_{ρ+iα})ρ(hα) + (Σi i n_{ρ+iα})α(hα) = 0.

Since Σi n_{ρ+iα} is a positive integer this shows that ρ(hα) is a rational multiple of α(hα).

We can now prove

Cartan’s criterion for solvable Lie algebras. Let 𝔏 be a finite-dimensional Lie algebra over a field of characteristic 0. Suppose 𝔏 has a finite-dimensional module 𝔐 such that (1) the kernel of the associated representation R is solvable and (2) tr (aR)² = 0 for every a in 𝔏′. Then 𝔏 is solvable.

Proof: Assume first that the base field Φ is algebraically closed. It suffices to prove that 𝔏′ ≠ 𝔏 if 𝔏 ≠ 0; for, conditions (1) and (2) carry over to 𝔏′ and to 𝔐 as 𝔏′-module. Hence we shall have that

    𝔏 ⊃ 𝔏′ ⊃ 𝔏″ ⊃ ⋯,

and 𝔏 is solvable. We therefore suppose that 𝔏′ = 𝔏. Let ℌ be a Cartan subalgebra and let the decompositions of 𝔐 and 𝔏 relative to ℌ be as in (19). Then (21) implies that ℌ = Σ[𝔏α𝔏−α] summed on the α such that −α is also a root. Choose such an α, let eα ∈ 𝔏α, e−α ∈ 𝔏−α, and consider the element hα = [eαe−α]. The formula ℌ = Σ[𝔏α𝔏−α] implies that every element of ℌ is a sum of terms of the form [eαe−α]. The restriction of hαR to 𝔐ρ has the single characteristic root ρ(hα). Hence the restriction of (hαR)² has the single characteristic root ρ(hα)², and if nρ is the dimensionality of 𝔐ρ, then we have

    tr (hαR)² = Σρ nρρ(hα)² = 0.

By the lemma, ρ(hα) = rρα(hα), rρ rational. Hence α(hα)²(Σnρrρ²) = 0. Since the nρ are positive integers, this implies that α(hα) = 0 and hence ρ(hα) = 0. Since the ρ are linear functions and every h ∈ ℌ is a sum of elements of the form hα, hβ, …, we see that ρ(h) = 0 for every h. Thus 0 is the only weight for ℌ in 𝔐; that is, we have 𝔐 = 𝔐0. If α ≠ 0 is a root, then (20) now implies that 𝔐𝔏α = 0. This means the kernel of R contains all the 𝔏α, α ≠ 0. Hence 𝔏R is a homomorphic image of ℌ. Thus 𝔏R is nilpotent and 𝔏 is solvable contrary to 𝔏′ = 𝔏.

If the base field Φ is not algebraically closed, then let Ω be its algebraic closure. Then 𝔐Ω is a module for 𝔏Ω and the kernel of the corresponding representation is 𝔎Ω, where 𝔎 is the kernel of R. Since 𝔎 is solvable, 𝔎Ω is solvable. Next we note that the condition tr (aR)² = 0, a ∈ 𝔏′, and tr aRbR = tr bRaR imply that

    tr aRbR = 0,   a, b ∈ 𝔏′.

Hence if the ai ∈ 𝔏′ and ωi ∈ Ω, then tr (ΣωiaiR)² = Σωiωj tr aiRajR = 0. Hence the condition (2) holds also in 𝔏Ω. The first part of the proof therefore implies that 𝔏Ω is solvable. Hence 𝔏 is solvable and the proof is complete.

COROLLARY. If Φ is of characteristic 0 then 𝔏 is solvable if and only if tr (ad a)² = 0 for every a ∈ 𝔏′.

Proof: The sufficiency of the condition is a consequence of Cartan’s criterion since the kernel of the adjoint representation is the center, which is solvable. Conversely, assume 𝔏 solvable. Then, by Corollary 2 to Theorem 2.8, applied to ad 𝔏, the elements ad a, a ∈ 𝔏′, are in the radical of (ad 𝔏)*. Hence ad a is nilpotent and tr (ad a)² = 0.
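As a sanity check on the corollary, one can compute tr (ad a)² on the derived algebra directly for two small algebras. In the sympy sketch below (the basis choices are assumptions of the example), the non-abelian two-dimensional algebra with [e1e2] = e1 has derived algebra Φe1 and passes the criterion, while the three-dimensional simple algebra with basis (e, f, h), which is its own derived algebra, fails it at h:

```python
# Cartan's solvability criterion, checked on the derived algebra:
# (i) two-dimensional algebra [e1, e2] = e1: L' = span{e1}, solvable;
# (ii) sl(2) with basis (e, f, h): L' = sl(2), not solvable.
import sympy as sp

ad_e1 = sp.Matrix([[0, 1], [0, 0]])    # ad e1 in the basis (e1, e2)
print((ad_e1 * ad_e1).trace())          # 0: criterion holds on L'

ad_h = sp.diag(2, -2, 0)                # ad h in the basis (e, f, h)
print((ad_h * ad_h).trace())            # 8 != 0: sl(2) is not solvable
```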

Let R be a representation of a Lie algebra 𝔏 in a finite-dimensional space 𝔐. Then the function

    f(a, b) = tr aRbR

is evidently a symmetric bilinear form on 𝔏 with values in Φ. Such a form will be called a trace form for 𝔏. In particular, if 𝔏 is finite-dimensional, then we have the trace form tr (ad a)(ad b), which we shall call the Killing form of 𝔏. If f is the trace form determined by the representation R then

    f([ac], b) + f(a, [bc]) = tr ([aRcR]bR + aR[bRcR]) = 0.

A bilinear form f(a, b) on 𝔏 which satisfies this condition

    f([ac], b) + f(a, [bc]) = 0

is called an invariant form on 𝔏. Hence we have verified that trace forms are invariant. We note next that if f(a, b) is any symmetric invariant form on 𝔏, then the radical of the form, that is, the set of elements z such that f(a, z) = 0 for all a ∈ 𝔏, is an ideal. This is clear since f(a, [zb]) = −f([ab], z) = 0.

We can now derive

Cartan’s criterion for semi-simplicity. If 𝔏 is a finite-dimensional semi-simple Lie algebra over a field of characteristic 0, then the trace form of any 1:1 representation of 𝔏 is non-degenerate. If the Killing form of 𝔏 is non-degenerate, then 𝔏 is semi-simple.

Proof: Let R be a 1:1 representation of 𝔏 in a finite-dimensional space 𝔐 and let f(a, b) be the associated trace form. Then the radical 𝔏⊥ of f is an ideal of 𝔏 and f(a, a) = tr (aR)² = 0 for every a ∈ 𝔏⊥. Hence 𝔏⊥ is solvable by the first Cartan criterion. Since 𝔏 is semi-simple, 𝔏⊥ = 0 and f(a, b) is non-degenerate. Next suppose that 𝔏 is not semi-simple. Then 𝔏 has an abelian ideal 𝔅 ≠ 0. If we choose a basis for 𝔏 such that the first vectors form a basis for 𝔅, then the matrices of ad a, a ∈ 𝔏, and ad b, b ∈ 𝔅, are, respectively, of the forms

    ( (α1)   0   )        (  0    0 )
    ( (α2)  (α3) ),       ( (β2)  0 ).

This implies that tr (ad b)(ad a) = 0. Hence 𝔅 ⊆ 𝔏⊥ and the Killing form is degenerate.

If f(a, b) is a symmetric bilinear form in a finite-dimensional space and (e1, e2, …, en) is a basis for the space, then it is well known that f is non-degenerate if and only if det (f(ei, ej)) ≠ 0. If 𝔏 is a finite-dimensional Lie algebra of characteristic zero with basis (e1, e2, …, en) and we set βij = tr (ad ei)(ad ej), then 𝔏 is semi-simple if and only if det (βij) ≠ 0. This is the determinant form of Cartan’s criterion which we have just proved. If Ω is an extension of the base field of 𝔏 then (e1, e2, …, en) is a basis for 𝔏Ω over Ω. Hence it is clear that we have the following consequence of our criterion.

COROLLARY. A finite-dimensional Lie algebra 𝔏 over a field Φ of characteristic zero is semi-simple if and only if 𝔏Ω is semi-simple for every extension field Ω of Φ.
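The determinant form of the criterion is immediately machine-checkable for a specific algebra. The following sympy sketch computes (βij) for the three-dimensional simple algebra with basis (e, f, h), [ef] = h, [he] = 2e, [hf] = −2f; the basis and bracket conventions are this example's assumptions:

```python
# Killing matrix beta_ij = tr (ad e_i)(ad e_j) for sl(2), basis (e, f, h):
# det(beta_ij) != 0, so the algebra is semi-simple by the criterion.
import sympy as sp

ads = [sp.Matrix([[0, 0, -2], [0, 0, 0], [0, 1, 0]]),   # ad e
       sp.Matrix([[0, 0, 0], [0, 0, 2], [-1, 0, 0]]),   # ad f
       sp.diag(2, -2, 0)]                               # ad h

beta = sp.Matrix(3, 3, lambda i, j: (ads[i]*ads[j]).trace())
print(beta)         # Matrix([[0, 4, 0], [4, 0, 0], [0, 0, 8]])
print(beta.det())   # -128
```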

5. Structure of semi-simple algebras

We are now in a position to obtain the main structure theorem on semi-simple Lie algebras. The proof of this result which we shall give is a simplification, due to Dieudonné, of Cartan’s original proof. The argument is actually applicable to arbitrary non-associative algebras and we shall give it in this general form.

Let 𝔄 be a non-associative algebra over a field Φ. A bilinear form f(a, b) on 𝔄 (to Φ) is called associative if

    f(ab, c) = f(a, bc).

If f is an invariant form in a Lie algebra then f([ac], b) + f(a, [bc]) = 0. Hence f([ac], b) − f(a, [cb]) = 0 and f is associative. Now let f(a, b) be a symmetric associative bilinear form on 𝔄 and let 𝔅 be an ideal in 𝔄. Let 𝔅⊥ be the set of elements a such that f(a, b) = 0 for all b ∈ 𝔅, and let a ∈ 𝔅⊥. Then for any c in 𝔄, f(ac, b) = f(a, cb) = 0 since cb ∈ 𝔅. Also f(ca, b) = f(b, ca) = f(bc, a) = 0 since bc ∈ 𝔅. Hence 𝔅⊥ is an ideal.

The importance of associative forms is indicated in the following result.

THEOREM 3. Let 𝔄 be a finite-dimensional non-associative algebra over a field Φ such that (1) 𝔄 has a non-degenerate symmetric associative form f and (2) 𝔄 has no ideals 𝔅 ≠ 0 with 𝔅² = 0. Then 𝔄 is a direct sum of ideals which are simple algebras.

(We recall that 𝔄 simple means that 𝔄 has no ideals ≠ 0, 𝔄, and 𝔄² ≠ 0.)

Proof: Let 𝔅 be a minimal ideal (≠ 0) in 𝔄. Then 𝔅 ∩ 𝔅⊥ is an ideal contained in 𝔅. Hence either 𝔅 ∩ 𝔅⊥ = 𝔅 or 𝔅 ∩ 𝔅⊥ = 0. Suppose the first case holds and let b1, b2 ∈ 𝔅, a ∈ 𝔄. Then f(b1b2, a) = f(b1, b2a) = 0, since b2a ∈ 𝔅 and b1 ∈ 𝔅 ⊆ 𝔅⊥. Since f is non-degenerate, b1b2 = 0 and 𝔅² = 0 contrary to hypothesis. Hence 𝔅 ∩ 𝔅⊥ = 0. It is well known that this implies that 𝔄 = 𝔅 ⊕ 𝔅⊥, and 𝔅⊥ is an ideal. This decomposition implies that 𝔅𝔅⊥ = 0 = 𝔅⊥𝔅; hence every 𝔅-ideal is an 𝔄-ideal. Consequently, 𝔅 is simple. Moreover, 𝔅⊥ satisfies the same conditions as 𝔄 since the restriction of f to 𝔅⊥ is non-degenerate and any 𝔅⊥-ideal is an 𝔄-ideal. Hence, induction on dim 𝔄 implies that 𝔅⊥ = 𝔄2 ⊕ ⋯ ⊕ 𝔄r where the 𝔄i are ideals and are simple algebras. Then for 𝔄1 = 𝔅 we have 𝔄 = 𝔄1 ⊕ 𝔄2 ⊕ ⋯ ⊕ 𝔄r, the 𝔄i simple and ideals.

This result and the non-degeneracy of the Killing form for a semi-simple Lie algebra of characteristic zero imply the difficult half of the fundamental

Structure theorem. A finite-dimensional Lie algebra 𝔏 over a field of characteristic 0 is semi-simple if and only if 𝔏 = 𝔏1 ⊕ 𝔏2 ⊕ ⋯ ⊕ 𝔏r where the 𝔏i are ideals which are simple algebras.

Proof: If 𝔏 is semi-simple, then 𝔏 has the structure indicated. Conversely, suppose 𝔏 = 𝔏1 ⊕ 𝔏2 ⊕ ⋯ ⊕ 𝔏r, the 𝔏i ideals and simple. We consider the set of linear transformations ad 𝔏 = {ad a | a ∈ 𝔏} acting in 𝔏. The invariant subspaces relative to this set are the ideals of 𝔏. Since 𝔏 = 𝔏1 ⊕ ⋯ ⊕ 𝔏r where the 𝔏i are irreducible, we see that the set ad 𝔏 is completely reducible. Hence if 𝔅 is any ideal ≠ 0 in 𝔏, then 𝔏 = 𝔅 ⊕ 𝔅′ where 𝔅′ is an ideal (Theorem 2.9). Moreover, the proof of the theorem referred to shows that we can take 𝔅′ to be of the form 𝔏i1 ⊕ 𝔏i2 ⊕ ⋯ for a subset {𝔏iu} of the 𝔏i. Then 𝔅 ≅ 𝔏/𝔅′ ≅ 𝔏j1 ⊕ 𝔏j2 ⊕ ⋯ where the 𝔏jv are the remaining 𝔏i. Since 𝔏i is simple, 𝔏i² = 𝔏i. Hence (𝔏j1 ⊕ 𝔏j2 ⊕ ⋯)² = 𝔏j1 ⊕ 𝔏j2 ⊕ ⋯ and consequently 𝔅² = 𝔅. Thus 𝔅 is not solvable. We have therefore proved that 𝔏 has no non-zero solvable ideals; so 𝔏 is semi-simple.

The argument just given has the following consequence.

COROLLARY 1. Any ideal in a semi-simple Lie algebra of characteristic 0 is semi-simple.

If 𝔏i is simple, then the derived algebra 𝔏i′ = 𝔏i; hence the structure theorem implies the following

COROLLARY 2. If 𝔏 is semi-simple of characteristic 0, then 𝔏′ = 𝔏.

Remark. We have proved in Chapter II that if 𝔏 is a completely reducible Lie algebra of linear transformations in a finite-dimensional vector space over a field of characteristic 0, then 𝔏 = 𝔠 ⊕ 𝔏1 where 𝔠 is the center and 𝔏1 is a semi-simple ideal. Then 𝔏′ = 𝔏1′ = 𝔏1. Hence 𝔏 = 𝔠 ⊕ 𝔏′, 𝔏′ semi-simple.

We prove next the following general uniqueness theorem.

THEOREM 4. If 𝔄 is a non-associative algebra and

    𝔄 = 𝔄1 ⊕ 𝔄2 ⊕ ⋯ ⊕ 𝔄r = 𝔅1 ⊕ 𝔅2 ⊕ ⋯ ⊕ 𝔅s,

where the 𝔄i and 𝔅j are ideals and are simple, then r = s and the 𝔄i’s coincide with the 𝔅j’s (except for order).

Proof: Consider 𝔄1 ∩ 𝔅j. This is an ideal contained in 𝔄1 and in 𝔅j. Hence if 𝔄1 ∩ 𝔅j ≠ 0 then 𝔄1 = 𝔄1 ∩ 𝔅j = 𝔅j since 𝔄1 and 𝔅j are simple. It follows that 𝔄1 ∩ 𝔅j ≠ 0 for at most one j. On the other hand, if 𝔄1 ∩ 𝔅j = 0 for all j then 𝔄1𝔅j = 0 = 𝔅j𝔄1 for all j. Since 𝔄 = 𝔅1 ⊕ ⋯ ⊕ 𝔅s, this implies that 𝔄1𝔄 = 0 = 𝔄𝔄1; hence 𝔄1² = 0 contrary to the assumption that 𝔄1² ≠ 0. Hence there is a j such that 𝔄1 = 𝔅j. Similarly, we have that every 𝔄i coincides with one of the 𝔅j and every 𝔅j coincides with one of the 𝔄i.

The result follows from this.

It is easy to see also that if 𝔄 is as in Theorem 4, then 𝔄 has just 2^r ideals, namely, the ideals 𝔄i1 ⊕ 𝔄i2 ⊕ ⋯, {i1, i2, …} a subset of {1, 2, …, r}. We omit the proof.

The main structure theorem fails if the characteristic is p ≠ 0. To obtain a counter example we consider the Lie algebra 𝔈L of linear transformations in a vector space 𝔐 whose dimensionality n is divisible by p. It is easy to prove (Exercise 1.20) that the only proper non-zero ideals in 𝔈L are 𝔈L′ and Φ1, the set of multiples of 1. Since 𝔈L′ is the set of linear transformations of trace 0 and tr 1 = n = 0, Φ1 ⊆ 𝔈L′. Hence 𝔏 = 𝔈L/Φ1 has only one proper non-zero ideal, namely, 𝔅 = 𝔈L′/Φ1, and the latter is simple. This implies that 𝔏 is semi-simple; but since 𝔅 is the only proper non-zero ideal in 𝔏, 𝔏 is not a direct sum of simple ideals. This and Theorem 3 imply that 𝔈L/Φ1 possesses no non-degenerate symmetric associative bilinear form.

We conclude this section with the following characterization of the radical in the characteristic 0 case.

THEOREM 5. If 𝔏 is a finite-dimensional Lie algebra over a field of characteristic 0, then the radical 𝔖 of 𝔏 is the orthogonal complement 𝔏′⊥ of the derived algebra 𝔏′ relative to the Killing form f(a, b).

Proof: 𝔅 = 𝔏′⊥ is an ideal, and if b ∈ 𝔅′, then b ∈ 𝔏′ and tr (ad b)² = f(b, b) = 0. The kernel of the representation b → ad b of 𝔅 acting in 𝔏 is abelian. Hence 𝔅 is solvable, by Cartan’s criterion, and 𝔅 ⊆ 𝔖. Next let s ∈ 𝔖, a, b ∈ 𝔏. Then f(s, [ab]) = f([sa], b). We have seen (Corollary 2 to Theorem 2.8) that ad [sa] is contained in the radical of the enveloping associative algebra (ad 𝔏)*. Consequently, ad [sa] ad b is nilpotent for every b and hence f([sa], b) = 0. Thus f(s, [ab]) = 0 and s ∈ 𝔏′⊥ = 𝔅. Thus 𝔖 = 𝔅 = 𝔏′⊥.
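Theorem 5 can be illustrated numerically for gl(2), whose radical is the set of scalar matrices. The numpy sketch below (the basis, the column-stacking convention, and the use of a singular-value decomposition to extract the orthogonal complement are assumptions of this illustration) computes the Killing form from the multiplication and recovers Φ1 as the orthogonal complement of the derived algebra:

```python
# Radical of gl(2) as the Killing-orthogonal complement of L' = sl(2).
import numpy as np

n = 2
E = [np.zeros((n, n)) for _ in range(n*n)]
for k in range(n*n):
    E[k][k // n, k % n] = 1.0              # basis E11, E12, E21, E22

def ad(X):                                  # Y -> [X, Y] on column-stacked Y
    return np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n))

kappa = np.array([[np.trace(ad(X) @ ad(Y)) for Y in E] for X in E])

# gl(2)' = sl(2), spanned (in these coordinates) by E12, E21, E11 - E22
Lp = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, -1.0]])

_, _, Vt = np.linalg.svd(Lp @ kappa)        # orthogonal complement of L'
z = Vt[-1]                                   # null vector: kappa(L', z) = 0
print(z / z[0])                              # [1. 0. 0. 1.] = the identity
```

The recovered null vector is the identity matrix in coordinates, i.e., the radical Φ1 of gl(2), as the theorem predicts.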

6. Derivations

We recall that ad a is a derivation of 𝔏, called inner, and that the set ad 𝔏 of these derivations is an ideal in the derivation algebra 𝔇(𝔏). In fact, we have the formula [ad a, D] = ad aD for D a derivation. Hence

    ad aD = (ad a)D − D(ad a),

which implies that

    tr (ad aD)(ad b) + tr (ad a)(ad bD) = 0.

Thus, for the Killing form f(a, b) = tr (ad a)(ad b) we have

    f(aD, b) + f(a, bD) = 0;

that is, every derivation is a skew-symmetric transformation relative to the Killing form.
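The skew-symmetry can be checked mechanically for a small algebra. In the sympy sketch below (basis and coordinate conventions are this example's assumptions), the condition f(aD, b) + f(a, bD) = 0 for all a, b becomes the matrix identity Dᵀβ + βD = 0, where β is the Killing matrix; it holds for a generic inner derivation of the algebra with basis (e, f, h), [ef] = h, [he] = 2e, [hf] = −2f:

```python
# Every inner derivation D = ad x of sl(2) is skew relative to the
# Killing form: D^T beta + beta D = 0, with beta the Killing matrix.
import sympy as sp

x1, x2, x3 = sp.symbols('xi1 xi2 xi3')
ads = [sp.Matrix([[0, 0, -2], [0, 0, 0], [0, 1, 0]]),   # ad e
       sp.Matrix([[0, 0, 0], [0, 0, 2], [-1, 0, 0]]),   # ad f
       sp.diag(2, -2, 0)]                               # ad h

beta = sp.Matrix(3, 3, lambda i, j: (ads[i]*ads[j]).trace())
D = x1*ads[0] + x2*ads[1] + x3*ads[2]     # generic inner derivation

print((D.T*beta + beta*D).expand())       # the 3x3 zero matrix
```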

We prove next the following theorem which is due to Zassenhaus.

THEOREM 6. If 𝔏 is a finite-dimensional Lie algebra which has a non-degenerate Killing form, then every derivation D of 𝔏 is inner.

Proof: The mapping x → tr (ad x)D is a linear mapping of 𝔏 into Φ; that is, it is an element of the conjugate space 𝔏* of 𝔏. Since f(a, b) is non-degenerate it follows that there exists an element d ∈ 𝔏 such that f(d, x) = tr (ad x)D for all x ∈ 𝔏.

Let E be the derivation D − ad d. Then

    tr (ad x)E = tr (ad x)D − tr (ad x)(ad d) = f(d, x) − f(x, d) = 0.

Thus

(26)    tr (ad x)E = 0,   x ∈ 𝔏.

Now consider

    f(xE, y) = tr (ad xE)(ad y) = tr ((ad x)E − E(ad x))(ad y) = tr E((ad y)(ad x) − (ad x)(ad y)) = tr (ad [yx])E = 0

by (26). Since f is non-degenerate, this implies that xE = 0 for every x; hence E = 0 and D = ad d is inner.

This result implies that the derivations of any finite-dimensional semi-simple Lie algebra over a field of characteristic zero are all inner. We recall also that if 𝔏 is solvable and finite-dimensional of characteristic 0, then 𝔏 is mapped into the nil radical by every derivation of 𝔏 (Corollary 2 to Theorem 2.13). We can now prove

THEOREM 7. (1) Let 𝔏 be a finite-dimensional Lie algebra over a field of characteristic 0, 𝔖 the radical, 𝔑 the nil radical of 𝔏. Then any derivation D of 𝔏 maps 𝔖 into 𝔑. (2) Let 𝔏 be an ideal in a finite-dimensional Lie algebra 𝔏1, and let 𝔖1, 𝔑1 be the radical and nil radical of 𝔏1.

Then 𝔖 = 𝔏 ∩ 𝔖1 and 𝔑 = 𝔏 ∩ 𝔑1.

Proof: We first prove (2) for the radical. Thus it is clear that 𝔏 ∩ 𝔖1 is a solvable ideal in 𝔏, hence 𝔏 ∩ 𝔖1 ⊆ 𝔖. On the other hand, (𝔏 + 𝔖1)/𝔖1 is an ideal in 𝔏1/𝔖1 and is isomorphic to 𝔏/(𝔏 ∩ 𝔖1). Hence 𝔏1/𝔖1 and 𝔏/(𝔏 ∩ 𝔖1) are semi-simple. Hence 𝔖 ⊆ 𝔏 ∩ 𝔖1 and 𝔖 = 𝔏 ∩ 𝔖1. Now let 𝔏1 be the holomorph of 𝔏, 𝔖1, 𝔑1 the radical and nil radical of 𝔏1. Then we know that [𝔏1𝔖1] ⊆ 𝔑1 (Theorem 2.13). Since 𝔖 ⊆ 𝔖1 by the first part of the argument, [𝔏1𝔖] ⊆ 𝔑1 ∩ 𝔏 ⊆ 𝔑. This implies that every derivation of 𝔏 maps 𝔖 into 𝔑, which proves (1). Now let 𝔏1 be any finite-dimensional Lie algebra containing 𝔏 as an ideal. If a1 ∈ 𝔏1 then ad a1 induces a derivation in 𝔏. Hence [𝔑a1] ⊆ 𝔖 ad a1 ⊆ 𝔑. This means that 𝔑 is an ideal in 𝔏1 so that 𝔑 ⊆ 𝔑1, the nil radical of 𝔏1. Since the reverse inequality 𝔏 ∩ 𝔑1 ⊆ 𝔑 is clear, 𝔑 = 𝔏 ∩ 𝔑1.

This result fails for characteristic p ≠ 0. To construct a counter example we consider first the commutative associative algebra 𝔄 with the basis (1, z, z², …, z^{p−1}), where z^p = 0. The radical 𝔑 of 𝔄 has the basis (z, z², …, z^{p−1}) and 𝔄 = Φ1 ⊕ 𝔑. It is easy to prove that if w is any element of 𝔄 then there exists a derivation of 𝔄 mapping z into w. In particular, there is a derivation D such that zD = 1. Now let 𝔎 be any simple Lie algebra and let 𝔏 be the Lie algebra 𝔄 ⊗ 𝔎. The elements of this algebra have the form Σai ⊗ ki, ai ∈ 𝔄, ki ∈ 𝔎, and [a ⊗ k, b ⊗ l] = ab ⊗ [kl]. Then 𝔏 is a Lie algebra (Exercise 1.23) and 𝔑 ⊗ 𝔎 is a nilpotent ideal in 𝔏. Moreover, 𝔏/(𝔑 ⊗ 𝔎) ≅ 𝔎 is simple. Hence 𝔑 ⊗ 𝔎 is the radical and the nil radical of 𝔏. If D is any derivation in the associative algebra 𝔄, then the mapping Σai ⊗ ki → ΣaiD ⊗ ki is a derivation in 𝔏. If we take D so that zD = 1 and let b ≠ 0 in 𝔎, then (z ⊗ b)D = 1 ⊗ b ∉ 𝔑 ⊗ 𝔎. Hence we have a derivation which does not leave the radical invariant.

7. Complete reducibility of the representations of semi-simple algebras

In this section we shall prove the main structure theorem for modules of a semi-simple Lie algebra of characteristic zero and we shall obtain its most important consequences. The main theorem is due to Weyl and was proved by him by transcendental methods based on the connection between Lie algebras and compact groups.

The first algebraic proof of the result was given by Casimir and van der Waerden. The proof we shall give is in essence due to Whitehead. It should be mentioned that Whitehead’s proof was one of the stepping stones to the cohomology theory of Lie algebras which we shall consider in § 10. We note also that in the characteristic p case there appears to be little connection between the structure of a Lie algebra and the structure of its modules since, as will be shown later, every finite-dimensional Lie algebra of characteristic p ≠ 0 has faithful representations which are not completely reducible and also faithful representations which are completely reducible.

We obtain first a criterion that a set Σ of linear transformations in a finite-dimensional vector space be completely reducible. We have seen (Theorem 2.9) that Σ is completely reducible if and only if every invariant subspace of has a complement which is invariant relative to Σ. Now let ’ be any complementary subspace to : = ’. Such a decomposition is associated with a projection E of onto . Thus if x, then we can write x in one and only one way as x = y + y', y, y’ ∈ ’, and E is the linear mapping x → y. Conversely, if E is any idempotent linear mapping such that = E then ’ = (1 — E) is a complement of in . Now let A ∈ Σ and consider the linear transformation [AE] = AE — EA. If x, xAE and xEA hence [AE] maps into . If y then yAE = yA. Hence [AE] maps into 0. Then if denotes the set of linear transformations of which map into and into 0, [AE] ∈ . It is clear that is a subspace of the space of linear transformations in . We now prove the following

LEMMA 2. has a complement which is invariant if and only if there exists a D ∈ such that [AE] = [AD] for all A ∈ Σ. Here E is any projection onto .

Proof: Let be a complement of which is invariant relative to Σ and let F be the projection of onto determined by the decomposition = . Since is invariant, F commutes with every A ∈ Σ, that is, [AF] = 0. Hence [AD] = [AE] for D = E — F. Also, since E and F are projections on , E — F maps into and into 0. Hence D ∈ as required. Conversely, suppose there exists a D such that [AE] = [AD]. Then F = E — D commutes with every A ∈ Σ. If x then xF = x(E — D) ∈ and if y then yF = yE = y. Hence F2 = F and = F. Then = (1 — F) is a complement of and is invariant under Σ since F commutes with every A ∈ Σ.
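The criterion of Lemma 2 can be checked numerically. The following Python sketch is an illustration of our own (the single matrix A, the invariant subspace W spanned by the first basis vector, and the projection E are all hypothetical choices, not taken from the text): it solves the linear system [A, D] = [A, E] for D ranging over the maps sending the whole space into W and W to 0, and verifies that F = E − D is a projection onto W commuting with A.

```python
import numpy as np

# V = R^2, Sigma = {A}; W = span(e1) is A-invariant (A is upper triangular).
A = np.array([[1., 1.],
              [0., 2.]])
E = np.array([[1., 0.],
              [0., 0.]])   # an arbitrary projection onto W (along span(e2))

# Basis of the space of maps sending V into W and W to 0:
U1 = np.array([[0., 1.],
               [0., 0.]])
basis_K = [U1]

comm = lambda X, Y: X @ Y - Y @ X

# Solve the linear system [A, D] = [A, E] for D = sum_j d_j U_j.
M = np.column_stack([comm(A, U).ravel() for U in basis_K])
b = comm(A, E).ravel()
coeffs, *_ = np.linalg.lstsq(M, b, rcond=None)
D = sum(c * U for c, U in zip(coeffs, basis_K))

F = E - D                      # projection onto W commuting with A
assert np.allclose(F @ F, F)
assert np.allclose(comm(A, F), 0)
# The column space of 1 - F is the invariant complement of W:
assert np.allclose(F @ A @ (np.eye(2) - F), 0)
```

Here the complement recovered is spanned by the eigenvector (1, 1) of A, as the lemma predicts.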

Suppose now that is a Lie algebra and is an -module, a submodule. We can apply our considerations to the set R of representing transformations aR determined by . Let E be a projection of onto . If a, set f(a) = [aR, E]. Then a → f(a) is a linear mapping of into the space of linear transformations of which map into and into 0. If X and a then [XaR]. Thus if x then x[XaR] and if y then yXaR = 0 and yaRX = 0. We denote the mapping X → [XaR] by aR. It is immediate that a → aR is a representation of whose associated module is the space . We have

We are now led to consider the following situation: We have a module for and a linear mapping a → f(a) of into such that

A “trivial” example of such a mapping is obtained by taking f(a) = da, where d is an element of . For, we have

The key result for the proof of complete reducibility of the modules for a semi-simple Lie algebra of characteristic zero is the following

LEMMA 3. (Whitehead). Let be finite-dimensional semi-simple of characteristic zero and let be a finite-dimensional module for and a → f(a) a linear mapping of into satisfying (27). Then there exists a d such that f(a) = da.

Proof. The proof will be based on the important notion of a Casimir operator. First, suppose that is a Lie algebra and 1 and 2 are ideals in such that the representations of in 1 and 2 are contragredient. Thus we are assuming that the spaces 1 and 2 are connected by a bilinear form (b1, b2), bii, (b1 b2) ∈ Φ, which is non-degenerate and that for any a, we have

If (u1, …, um) is a basis for 1 then we can choose a complementary or dual basis (u1, u2, …, um) for 2 satisfying (ui uj) = δij. Let [uia] = Σj αijuj and [uia] = Σj βijuj. Then (28) implies that αik = — βki, that is, the matrices (α) and (β) determined by dual bases satisfy (β) = — (α)′ ((α)′ the transpose of (α)). Now let R be a representation of . Then the element

is called a Casimir operator of R. We have

Hence we have the important property that Γ commutes with all the representing transformations aR.
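The construction of the Casimir operator can be made concrete. The sketch below is our own illustration (the split three-dimensional simple algebra in its natural two-dimensional representation is a hypothetical choice): the basis dual to (e, f, h) relative to the trace form is computed by hand, Γ is formed from the two bases, and the commuting property together with tr Γ = m is verified.

```python
import numpy as np

# Natural 2-dimensional representation of the split 3-dimensional algebra.
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

# Trace form (a, b) = tr(AB): (e, f) = 1, (h, h) = 2, all other products 0.
# Hence the basis dual to (e, f, h) is (f, e, h/2).
basis = [E, F, H]
dual  = [F, E, H / 2]

Gamma = sum(U @ V for U, V in zip(basis, dual))   # Casimir operator

for A in basis:                    # Gamma commutes with every a^R
    assert np.allclose(Gamma @ A, A @ Gamma)
assert np.isclose(np.trace(Gamma), 3)   # tr Gamma = m = dim of the algebra
```

Since this representation is irreducible, Γ comes out as the scalar 3/2, consistent with tr Γ = m = 3 in dimension 2.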

Now let satisfy the hypotheses of the lemma. Let be the kernel of the representation R determined by . Then we can write where 1 is an ideal. Then the restriction of R to is semi-simple. Hence the trace form (b1 b2) ≡ is non-degenerate on 1. Also we know that the trace form of a representation is invariant. Hence the equation (28) holds for bi ∈ 1 and a. Thus the representation of in 1 coincides with its contragredient and if (u1, …, um), (u1, …, um) are bases for 1 satisfying (ui uj) = δij, then Γ = Σi uiRuiR is a Casimir operator which commutes with every aR. We note also that tr Γ = m.

We now decompose into its Fitting components 0 and 1 relative to Γ so that Γ induces a nilpotent linear transformation in 0 and a non-singular one in 1. Since so that the j are submodules. We can write f(a) = f0(a) + f1(a) where and it is immediate that a → fj(a) is a linear mapping of into j satisfying (27). Now if both spaces are ≠ 0, then dim < dim for j = 0, 1. Hence we can use induction on dim to conclude that there is a dj ∈ j such that fj(a) = dja. Then d = d0 + d1 satisfies f(a) = da as required. Thus it remains to consider the following two cases: = 0 and = 1.

= 0: In this case Γ is nilpotent. Hence m = tr Γ = 0. This means that the kernel of R is the whole of , that is, aR = 0 for all a. Then the condition (27) is that f([ab]) = 0, a, b ∈ . Thus f(a’) = 0 for all a’ ∈ '. Since ' = , this implies that f(a) = 0 so that d = 0 satisfies the condition.

= 1: Set d′ = Σi f(ui)uiR where the (ui) and (ui) are dual bases for i as before. Then

Since Γ is non-singular, d = d′Γ–1 satisfies the required condition f(a) = da. This completes the proof of Whitehead’s lemma.

We can now prove the following fundamental theorem:

THEOREM 8. If is finite-dimensional semi-simple of characteristic 0, then every finite-dimensional module for is completely reducible.

Proof: Let be a finite-dimensional -module, a submodule. Let be the space of linear transformations of which map into , into 0, and consider as -module relative to the composition Xa ≡ [X, aR], R the representation of . Let E be any projection of onto and set f(a) = [aR, E]. Then f(a) satisfies the conditions of Whitehead’s lemma. Hence there exists a D such that f(a) = Da = [D, aR]. As we saw before, this implies that has a complementary subspace which is invariant under . Since this applies to every submodule , is completely reducible.

If is a subalgebra of a Lie algebra , then a derivation D of into is a linear mapping of into such that

for every l1, l2. It is immediate that the set (, ) of derivations of into is a subspace of the space of linear transformations of into . Whitehead’s lemma, which underlies the theorem on complete reducibility, has the following important consequence on derivations:

THEOREM 9. Let be a finite-dimensional Lie algebra of characteristic 0 and let be a semi-simple subalgebra of . Then every derivation of into can be extended to an inner derivation of .

Proof: Consider as -module relative to the multiplication [bl], l, b. Then a derivation D of into defines f(l) = lD satisfying the condition

of Whitehead’s lemma. Hence there exists a d such that lD = f(l) = [d, l]. Then D can be extended to the inner derivation determined by the element — d.

We recall that we have shown in Chapter II (Theorem 2.11) that if is a completely reducible Lie algebra of linear transformations in a finite-dimensional vector space over a field of characteristic 0, then = 1 where 1 is a semi-simple ideal and is the center. Moreover, the elements C are semi-simple in the sense that their minimum polynomials are products of distinct irreducible polynomials. We are now in a position to establish the converse of this result. Our proof will be based on a field extension argument of the following type: Suppose we have a set Σ of linear transformations in over Φ. If Ω is an extension field of Φ, every A ∈ Σ has a unique extension to a linear transformation, denoted again by A, in Ω. In this way we get a set Σ = {A} of linear transformations in Ω over Ω. We shall now prove the following

LEMMA 4. Let Σ be a set of linear transformations in a finite-dimensional vector space over Φ and let Σ be the set of extensions of these transformations to Ω over Ω, Ω an extension field of Φ. Suppose the set Σ in Ω is completely reducible. Then the original Σ is completely reducible in .

Proof: Let be a subspace of which is invariant under Σ and let E be a projection on . Then our criterion for complementation (Lemma 2) shows that will have a complement which is invariant relative to Σ if and only if there exists a linear transformation D of mapping into , into 0 such that [AE] = [AD] for all A ∈ Σ. If A1, A2, …, Ak is a maximal set of linearly independent elements in Σ, and we set Bi = [AiE], then it suffices to find a D such that [AiD] = Bi, i = 1, 2, …, k. This is a system of k linear equations for D in the finite-dimensional space of linear transformations of mapping into , into 0. Thus if we have the basis (U1, U2, …, Ur) for we can write Bi = then our equations are equivalent to the ordinary system: s = 1, 2, …, k for the δj in Φ. Hence has a Σ-invariant complement if and only if this system has a solution. We now pass to Ω and the invariant subspace Ω relative to the set Σ of extensions of the A ∈ Σ. Then our hypothesis is that Ω has a Σ-invariant complement in Ω. Now the extension E of E is a projection of Ω onto Ω. Hence we have a linear mapping D of Ω mapping Ω into Ω, Ω into 0, such that [AD] = [AE], A ∈ Σ. The extensions U1, U2, …, Ur form a basis for the space of linear transformations of Ω mapping Ω into Ω, Ω into 0. Hence if satisfy the system . Since the γihs and βis belong to Φ, it follows that this system has a solution (δ1, …, δr), δ’s in Φ. Hence there exists a D such that [AD] = [AE], A ∈ Σ, and so has a Σ-invariant complement in .

We can now prove the following

THEOREM 10. Let be a Lie algebra of linear transformations in a finite-dimensional vector space over a field of characteristic zero. Then is completely reducible in if and only if the following conditions hold: (1) = 1, 1 a semi-simple ideal and the center and (2) the elements of are semi-simple.

Proof: The necessity has been proved before. Now assume (1) and (2) and let Ω be the algebraic closure of the base field. Then the lemma shows that it suffices to prove that the set of extensions of the elements of is completely reducible in Ω. The set of Ω-linear combinations of the elements of can be identified with Ω and similar statements hold for 1 and Now let C Since the minimum polynomial of C in has distinct irreducible factors and since the field is of characteristic 0, the minimum polynomial of C in Ω has distinct linear factors in Ω. Consequently, we can decompose Ω as α1 ⊕ … ⊕ αk where

and α1, α2, …, αk are the different characteristic roots of C. Since AC = CA for A, αi Aαi. We can apply the same procedure to the αi relative to any other D. This leads to a decomposition of Ω = 1 ⊕ 2 ⊕ … ⊕ r into -invariant subspaces such that the transformation induced in the i by every C is a scalar multiplication. To prove completely reducible in Ω it suffices to show that the sets of induced transformations in the i are completely reducible and since the elements of are scalars in i it suffices to show that 1 is completely reducible in every i. The invariant subspaces of i relative to 1 are invariant relative to Ω1, the set of Ω-linear combinations of the elements of 1. Now Ω1 is a homomorphic (actually isomorphic) image of the extension algebra 1 Ω, which is semi-simple. Hence Ω1 is semi-simple and consequently this Lie algebra of linear transformations is completely reducible by Theorem 8. Thus we have proved that is completely reducible in Ω and hence in .

We now shift our point of view and consider a finite-dimensional Lie algebra of characteristic 0 and two finite-dimensional completely reducible modules and for . We shall show that ⊗ is completely reducible. Now the space = ⊕ is a module relative to the product (x + y)l = xl + yl, x, y. Evidently is completely reducible and ⊗ is a submodule of ⊗ . Hence it suffices to prove that ⊗ is completely reducible. If we replace by / where is the kernel of the representation in , then we may assume that the associated representation R in is 1:1. Then we know that = 1 ⊕ where 1 is a semi-simple ideal and is the center. Moreover, the elements CR, C ∈ , are semi-simple. Now, in general, if R is a faithful representation of a Lie algebra , then the representation R ⊗ R in ⊗ is also faithful. Thus, if a and aR is not a scalar multiplication, then, since the algebra of linear transformations in ⊗ is the tensor product of the algebras of linear transformations in the two factors, aR ⊗ 1, 1 ⊗ aR and 1 ⊗ 1 are linearly independent, so aR ⊗ 1 + 1 ⊗ aR ≠ 0. Hence if aR ⊗ R = 0, aR must be a scalar, say aR = α. Then aR ⊗ R = 2α (in ⊗ ) and α = 0. Since R is 1:1 this implies that a = 0. We can now conclude that where is semi-simple and R ⊗ R is the center. Our result will therefore follow from the criterion of Theorem 10 provided that we can prove that every CR ⊗ R, C ∈ , is semi-simple.

Let Ω be the algebraic closure of the base field and let α1, α2, …, αk be the different characteristic roots of CR. Then the proof of Theorem 10 shows that where xσiCR = for Hence and for every It follows that the minimum polynomial of CR ⊗ R has distinct roots in ( ⊗ )Ω. Since this is also the minimum polynomial of CR ⊗ R in ⊗ , it follows that this polynomial is a product of distinct irreducible factors. Thus CR ⊗ R is semi-simple and we have proved

THEOREM 11. Let be a finite-dimensional Lie algebra over a field of characteristic zero and let and be finite-dimensional completely reducible modules for . Then ⊗ is completely reducible.

8. Representations of the split three-dimensional simple Lie algebra

In § 1.4 we called a three-dimensional simple Lie algebra split if contains an element h such that ad h has a non-zero characteristic root ρ belonging to the base field. We showed that any such algebra has a basis (e, f, h) with the multiplication table

The representation theory of this algebra is the key for unlocking the deeper parts of the structure and representation theory of semi-simple Lie algebras (Chapters IV, VII, and VIII). We consider this now for the case of a field Φ of characteristic 0. We suppose first that Φ is algebraically closed and that is a finite-dimensional module for . The representation in is determined by the images E, F, H of the base elements e, f, h and we have

Conversely, any three linear transformations E, F, H satisfying these relations determine a representation of and hence a module. Let α be a characteristic root of H and x a corresponding characteristic vector: x ≠ 0, xH = αx. Then

If xE ≠ 0 then (32) shows that α + 2 is a characteristic root for H and xE a corresponding characteristic vector. We can replace x by xE and repeat the process. This leads to a sequence of nonzero vectors x, xE, xE2, …, belonging to the characteristic roots α, α + 2, α + 4, …, respectively, for H. Now H has only a finite number of distinct characteristic roots; hence, our sequence breaks off and this means that we obtain a k such that xEk ≠ 0 and xEk + 1 = 0.

If we replace x by xEk we may suppose at the start that x ≠ 0 and

Now set x0 = x and let xi = xi – 1F. Then, analogous to (32), we obtain

and the argument used for the vectors xEi shows that there exists a non-negative integer m such that x0, x1, …, xm are ≠ 0 but xm + 1 = 0. Thus xFm + 1 = 0, xFm ≠ 0.

Then xi, 0 ≤ i ≤ m, is a characteristic vector of H belonging to the characteristic root α – 2i. Since α, α – 2, α – 4, …, α – 2m are all different it follows that the xi are linearly independent. Let so that is an (m + 1)-dimensional subspace of . We shall now show that is invariant and irreducible relative to . We first establish the formula

Thus we have x0E = 0 as given in (35). Assume (35) for i – 1. Then

as required. It is now clear from (34), (35), and xiF = xi + 1 that is a -subspace of . Since H = [EF] we must have tr H = 0. This, using (34), gives (m + 1)α – m(m + 1) = 0. Hence we obtain the result that α = m. Our formulas now read

and we note that in the last equation the coefficient i(i – 1) – mi ≠ 0 for i = 1, 2, …, m.

Now let 1 be a non-zero invariant subspace of and let

βi ≠ 0, be in 1. Then . Hence by the last equation of (36) every xi ∈ 1 and 1 = . Hence if is -irreducible to begin with, then = . In general, the theorem on complete reducibility shows that is a direct sum of irreducible invariant subspaces which are like the space .
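The formulas (36) can be realized by explicit matrices. The following sketch is our own illustration, under the stated conventions xiH = (m – 2i)xi, xiF = xi+1, xiE = (i(i – 1) – mi)xi–1, with operators acting on the right of row vectors so that composition is ordinary matrix multiplication; it verifies the commutation relations and tr H = 0 for several values of m.

```python
import numpy as np

def irrep(m):
    """Matrices of E, F, H on the basis (x_0, ..., x_m), acting on rows."""
    n = m + 1
    H = np.diag([float(m - 2 * i) for i in range(n)])
    F = np.zeros((n, n))
    E = np.zeros((n, n))
    for i in range(m):
        F[i, i + 1] = 1.0                    # x_i F = x_{i+1}, x_m F = 0
    for i in range(1, n):
        E[i, i - 1] = i * (i - 1) - m * i    # x_i E, x_0 E = 0
    return E, F, H

comm = lambda X, Y: X @ Y - Y @ X

for m in range(6):
    E, F, H = irrep(m)
    assert np.allclose(comm(E, F), H)        # [ef] = h
    assert np.allclose(comm(E, H), 2 * E)    # [eh] = 2e
    assert np.allclose(comm(F, H), -2 * F)   # [fh] = -2f
    assert np.isclose(np.trace(H), 0)        # tr H = tr [EF] = 0
```

The diagonal of H exhibits the characteristic roots m, m − 2, …, −m, in agreement with Lemma 5 below.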

We can now drop the hypothesis that Φ is algebraically closed, assuming only that Φ is of characteristic 0. We note first the following

LEMMA 5. Let be the split three-dimensional simple Lie algebra over a field Φ of characteristic zero and let e → E, f → F, h → H define a finite-dimensional representation of . Then the characteristic roots of H are integers.

Proof. If is the module of the representation and Ω is the algebraic closure of Φ then Ω is a module for Ω which satisfies the same conditions over Ω as over Φ. Then Ω is a direct sum of irreducible subspaces with bases (x0, x1, …, xm) satisfying (36). Hence if we choose a suitable basis for Ω then the matrix of H relative to this is a diagonal matrix with integral entries. Hence the characteristic roots of H in Ω are integers. These are also the characteristic roots of H in .

We can now prove the following

THEOREM 12. Let be the split three-dimensional simple Lie algebra over a field of characteristic 0. Then for each integer m = 0, 1, 2, … there exists one and, in the sense of isomorphism, only one irreducible -module of dimension m + 1. has a basis (x0, x1, …, xm) such that the representing transformations (E, F, H) corresponding to the canonical basis (e, f, h) are given by (36).

Proof. Let be a finite-dimensional irreducible module for . Then the characteristic roots of H are integers. Hence we can find an integer α and a vector x ≠ 0 in such that xH = αx. As before we may suppose xE = 0. Then we obtain that α = m and that has a basis (x0, x1, …, xm) such that (36) holds. These formulas are completely determined by the dimensionality m + 1 of . Hence any two (m + 1)-dimensional irreducible modules for are isomorphic. It remains to show that there is an irreducible (m + 1)-dimensional module for for every m = 0, 1, …. To see this we let be a space with the basis (x0, x1, …, xm) and we define the linear transformations E, F, H by (36). Then we have

Hence E, F, and H satisfy the required commutation relations and so they define a representation of . As before, is -irreducible.

The theorem of complete reducibility applies here also and together with the foregoing result gives the structure of any finite-dimensional -module.

9. The theorems of Levi and Malcev-Harish-Chandra

The “radical splitting” theorem of Levi asserts that if is a finite-dimensional Lie algebra of characteristic 0 with solvable radical , then contains a semi-simple subalgebra = + such that = + . It will follow that = 0 so that = and /. Thus the subalgebra is isomorphic to the difference algebra of modulo its radical. Conversely, if contains a subalgebra isomorphic to /, then is semi-simple. Hence = 0 and since dim = dim + dim / = dim + dim , = + .

We note next that it suffices to prove the theorem for the case 2 = 0, that is is abelian. Thus suppose 2 ≠ 0. Then if = /2, dim < dim . Hence if we use induction on the dimensionality we may assume the result for . Now = /2 is the radical of and / /. Hence contains a subalgebra . As subalgebra of , has the form 1/2 where 1 is a subalgebra of containing 2. Now 2 is the radical of 1 and so that dim 1 < dim . The induction hypothesis can therefore be used to conclude that 1 contains a subalgebra /, and this completes the proof for .

We now assume that 2 = 0 and for the moment we drop the assumption that = / is semi-simple. Now is a submodule of for (adjoint representation). Since 2 = 0, is in the kernel of the representation of determined by the module . Hence we have an induced representation for = /. For the corresponding module we have s = [s, b], s, b.

We can find a 1:1 linear mapping of into such that Such a mapping is obtained by writing = where is a subspace. Then we have a projection of onto defined by this decomposition. Since is the kernel, we have an induced linear isomorphism σ of onto hence into . If b = s + g, s, then by definition and so that as required. Conversely, let be any 1: 1 linear mapping of into such that Then is a complement of in . If s and b, then holds for the module multiplications in .

Let and consider the element

If we apply the algebra homomorphism of onto and make use of the property we obtain and Hence we see that

One verifies immediately that is a bilinear mapping of into .

Now suppose is a subalgebra of . Then hence so that we must have for all The converse is also clear since implies that Hence is a subalgebra if and only if the bilinear mapping g is 0.

If σ is not a subalgebra, then we seek to modify σ to obtain a second mapping τ of so that τ is a subalgebra. Suppose this is possible. Then we have a 1: 1 linear mapping τ of into such that and for all Now let ρ = σ – τ. Then ρ is a linear mapping of into such that

Hence , and we can consider ρ as a linear mapping of into . Also we have

If s is defined as before, we have Thus, if we can somehow choose a complement of which is a subalgebra then the bilinear mapping of × into can be expressed in terms of the linear mapping ρ of into by the formula

Conversely, suppose we have a linear mapping ρ of into satisfying this condition. Then τ = σ – ρ is another 1: 1 linear mapping of into such that and one can re-trace the steps to show that so that τ is a subalgebra.

Our results can be stated in the following way:

Criterion. Let be a Lie algebra, an ideal in such that 2 = 0 and set Then is a -module relative to the composition s = [sb]. Also there exist 1: 1 linear mappings σ of into such that If σ is such a mapping then

Moreover, has a complementary space which is a subalgebra if and only if there exists a linear mapping ρ of into such that

We observe next that the bilinear mapping g, which we shall call a factor set in , satisfies certain conditions which are consequences of the special properties of the multiplication in a Lie algebra. Thus it is clear that

which implies . We next write

and calculate

If we permute l1, l2, l3 cyclically, add, and make use of the Jacobi identities in and , we obtain

Our proof of Levi’s theorem will be completed by proving the following lemma, which is due to Whitehead.

LEMMA 6. Let be a finite-dimensional semi-simple Lie algebra of characteristic 0, a finite-dimensional -module and (l1, l2) → g(l1, l2) a bilinear mapping of × into such that

Then there exists a linear mapping l → lρ of into such that

Proof: Let , 1, ui, ui, Γ be as in the proof of Whitehead’s first lemma: is the kernel of the representation, 1 is an ideal such that = 1, (ui) and (ui), i = 1, …, m, are dual bases of 1 relative to the trace form of the given representation, and Γ is the Casimir operator determined by the ui and ui. We recall that Γ is the mapping x → Σi xuiRuiR in . Set l3 = ui in (ii) and take the module product with ui. Summing on i gives

If we make use of and recall that βij = – αji (cf. (28)) we can verify that

These and the skew symmetry of g permit the cancellation of four terms in the foregoing equations. Hence we obtain

If Γ is non-singular we define

Then (43) gives the required relation (iii). If Γ is nilpotent, then, as in the proof of Whitehead’s first lemma, m = 0, = , so that the representation is a zero representation. Then (ii) reduces to

Now let denote the vector space of linear mappings of into . We make this into an -module by defining, for A, x, l, x(Al) ≡ – [xl]A, that is, Al ≡ – (ad l)A. It is easy to see that this satisfies the module conditions (cf. § 1.6). For each l we define an element Al as the mapping x → g(x, l) ∈ . Then l → Al is a linear mapping of into and

Hence the skew symmetry of g and (ii′) imply that

Thus the hypothesis of Whitehead’s first lemma holds. The conclusion states that there exists a ρ such that Al = ρl. This means that we have a linear mapping ρ of into such that

By definition of as module, this gives (iii). This proves the result for the case Γ nilpotent. If Γ is neither non-singular nor nilpotent, then we have the decomposition of as 0 ⊕ 1 where the i are the Fitting components of relative to Γ and these are ≠ 0. These spaces are submodules and we can write g(l1, l2) = g0(l1, l2) + g1(l1, l2), gi ∈ i. Then the gi satisfy the conditions imposed on g, so we can represent these in the form (iii), by virtue of an induction hypothesis on the dimensionality of . This gives the result for by adding the linear transformations for the i.

As we have noted before, the lemma completes our proof of

Levi’s theorem. If is a finite-dimensional Lie algebra of characteristic zero with radical then there exists a semi-simple subalgebra of such that = .

A subalgebra satisfying these conditions is called a Levi factor of . A first consequence of Levi’s theorem is the following result:

COROLLARY 1. Let , and be as in the theorem. Then .

Proof: We have = so that . since have .

We have seen that the nil radical of (Theorem 2.13), so we can now state that . We know also that the radical of an ideal is the intersection of the ideal with the radical of the containing algebra. Hence is the radical of ′. We therefore have the following

COROLLARY 2. The radical of the derived algebra of a finite-dimensional Lie algebra of characteristic 0 is nilpotent.

We take up next the question of uniqueness of the Levi factors. It will turn out that these are not usually unique; however, they are conjugate in a rather strong sense which we shall now define. We recall that if z, the nil radical of , then ad z is nilpotent. Since ad z is a derivation we know also that A = exp (ad z) is an automorphism. Let denote the group of automorphisms generated by the elements exp (ad z), z. Then we have the following conjugacy
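That exp (ad z) is an automorphism when ad z is nilpotent can be verified in coordinates. The following sketch is our own illustration (for simplicity z = e is taken in the split three-dimensional simple algebra, rather than in a nil radical; ad e is nilpotent there as well): it computes A = exp (ad e) as a finite sum and checks the automorphism property on a basis.

```python
import numpy as np
from itertools import product

# Bracket in coordinates (e, f, h), from [ef] = h, [eh] = 2e, [fh] = -2f.
def bracket(x, y):
    e1, f1, h1 = x
    e2, f2, h2 = y
    return np.array([2 * (e1 * h2 - h1 * e2),    # e-component
                     -2 * (f1 * h2 - h1 * f2),   # f-component
                     e1 * f2 - f1 * e2])         # h-component

basis = np.eye(3)
# ad e as a matrix acting on coordinate row vectors: row i = [x_i, e].
ad_e = np.array([bracket(b, basis[0]) for b in basis])
assert np.allclose(np.linalg.matrix_power(ad_e, 3), 0)   # ad e is nilpotent

A = np.eye(3) + ad_e + ad_e @ ad_e / 2                   # exp(ad e)

# Automorphism property [xA, yA] = [x, y]A on all basis pairs:
for x, y in product(basis, repeat=2):
    assert np.allclose(bracket(x @ A, y @ A), bracket(x, y) @ A)
```

Since ad e cubes to zero, the exponential series terminates and no convergence question arises, exactly as in the text.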

Theorem of Malcev-Harish-Chandra. Let = where is a solvable ideal and is a semi-simple subalgebra and let 1 be a semi-simple subalgebra of . Assume finite-dimensional and of characteristic 0. Then there exists an automorphism A such that .

Proof: Any l11 can be written in one and only one way as where and so that we have the linear mappings λ and σ of 1 into and , respectively. since 1 is semi-simple, 1 = 0; hence λ is 1:1. If l21 then

Hence

The second of these equations shows that the nil radical of . Since this implies that for every l11 and so We shall prove by induction that there exists an automorphism such that where (i) is the ith derived algebra of . Since is solvable this will prove the result. Since we have proved that it suffices to prove the inductive step and we may simplify the notation and assume that . Then we shall show that there exists A such that . If we use the notation introduced before, 1 + (k) implies that The first equation in (48) implies that if we set then this makes (k) into an 1-module. Now (k + 1) is a submodule so that (k)/(k + 1) is an 1-module relative to where z(k) and . We now take the cosets relative to (k + 1) of the terms in the second equation of (48). Since we have

Now set is a linear mapping of 1 into the 1-module (k)/(k + 1) and the foregoing equation can be re-written as

Hence by Whitehead’s first lemma there exists a such that which means that

Let A = exp (ad z). Then

Now and (51) shows that We can therefore prove the result by induction on k.

COROLLARY 1. Any semi-simple subalgebra of a finite-dimensional Lie algebra of characteristic zero can be imbedded in a Levi factor.

Proof: If A is as in the theorem, then 1 is contained in the Levi factor A–1.

COROLLARY 2. If where 1 and 2 are semi-simple subalgebras then there exists an automorphism A such that .

This is an immediate consequence of the theorem.

10. Cohomology groups of a Lie algebra

The two lemmas of Whitehead can be formulated as theorems in the cohomology theory of Lie algebras. Historically, these constituted one of the clues which led to the discovery of this theory. Another impetus to the theory came from the study of the topology of Lie groups which was initiated by Cartan. In this section we give a concrete definition of the cohomology groups and we indicate an extension of the “Γ non-singular” case of Whitehead’s lemmas to a general cohomology theorem. Later (Chapter V) we shall give a definition of the cohomology groups which follows the general pattern of derived functors of Cartan-Eilenberg.

Let be a Lie algebra, an -module. If i ≥ 1, an i-dimensional -cochain for is a skew symmetric i-linear mapping of × × … × (i times) into . Such a mapping f sends an i-tuple (l1, l2, …, li), lq ∈ , into f(l1, …, li) ∈ in such a way that for fixed values of l1, …, lq – 1, lq + 1, …, li the mapping lq → f(l1, …, li) is a linear mapping of into . The skew symmetry means that f is changed to –f if any two of the li are interchanged (the remaining ones unchanged). If i = 0 one defines a 0-dimensional -cochain for as a “constant” function from to , that is, a mapping l → u, u a fixed element of . If f is an i-dimensional cochain (or simply “an i-cochain”), i ≥ 0, f determines an (i + 1)-dimensional cochain fδ, called the coboundary of f, defined by the formula

Here the ^ over an argument means that this argument is omitted (e.g., ). For i = 0 this is to be interpreted as (fδ)(l) = ul if f is the mapping x → u.

The set Ci(, ) of i-cochains for is a vector space relative to the usual definitions of addition and scalar multiplication of functions. Moreover f → fδ is a linear mapping, the coboundary operator, of Ci(, ) into Ci + 1(, ), i ≥ 0. Besides the case

we have

An i-cochain f is called a cocycle if fδ = 0 and a coboundary if f = gδ for some (i – 1)-cochain g. The set Zi(, ) of i-cocycles is the kernel of the homomorphism δ of Ci into Ci + 1, so Zi is a subspace of Ci. Similarly, the set Bi(, ) of i-coboundaries is a subspace of Ci since it is the image under δ of Ci – 1. It can be proved fairly directly that Bi(, ) ⊆ Zi(, ), that is, coboundaries are cocycles.

This amounts to the fundamental property δ2 = 0 of the coboundary operator. We shall not give the verification in the general case at this point since it will follow from the abstract point of view later on. At this point we shall be content to verify fδ2 = 0 for f a 0- or a 1-cochain. Thus if f = u, that is, f is the mapping x → u, then fδ(l) = ul and fδ2(l1, l2) = – ul2l1 + ul1l2 – u[l1l2] = 0 by the definition of a module. If f is a 1-cochain, fδ(l1, l2) is given by (54). Hence, by (55),

One checks that this sum is 0; hence fδ2 = 0 for any 1-cochain f.
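The verification for a 1-cochain can also be carried out numerically. The sketch below is our own illustration: the split three-dimensional algebra acts on row vectors of a two-dimensional module, a random 1-cochain g is extended linearly, and one consistent choice of signs is used for the degree-two coboundary (it may differ from (54)–(55) by an overall sign); (gδ)δ = 0 is then checked on all basis triples.

```python
import numpy as np
from itertools import product

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [E, F, H]
brk = lambda A, B: A @ B - B @ A     # bracket in the Lie algebra
act = lambda x, A: x @ A             # right module action x -> xA

rng = np.random.default_rng(0)
g_vals = [rng.standard_normal(2) for _ in range(3)]   # g on (e, f, h)

def g(A):
    """The 1-cochain extended linearly, via trace-form coordinates."""
    cE = np.trace(A @ F)             # coefficient of E in A
    cF = np.trace(A @ E)             # coefficient of F in A
    cH = np.trace(A @ H) / 2         # coefficient of H in A
    return cE * g_vals[0] + cF * g_vals[1] + cH * g_vals[2]

def d1(g, a, b):                     # (g delta)(l1, l2)
    return act(g(a), b) - act(g(b), a) - g(brk(a, b))

def d2(h, a, b, c):                  # coboundary of a 2-cochain
    return (-act(h(b, c), a) + act(h(a, c), b) - act(h(a, b), c)
            - h(brk(a, b), c) + h(brk(a, c), b) - h(brk(b, c), a))

h = lambda a, b: d1(g, a, b)
for a, b, c in product(basis, repeat=3):
    assert np.allclose(d2(h, a, b, c), 0)    # delta^2 g = 0
```

The cancellation observed here is exactly the one in the text: the module terms cancel by the module condition and the bracket terms by the Jacobi identity.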

Once the verification δ2 = 0 has been made, one can define the i-dimensional cohomology group (space) of relative to the module as the factor space Hi(, ) = Zi(, )/Bi(, ). If i = 0 we agree to take Bi = 0 since there are no (i – 1)-cochains. Hence in this case it is understood that H0(, ) = Z0(, ). This can be identified with the subspace I() of elements u ∈ such that ul = 0 for all l. Such elements are called invariants of the module . Hi(, ) = 0 means that Zi(, ) = Bi(, ), that is, every i-cocycle is a coboundary. For i = 1 this states that if l → f(l) is a linear mapping of into such that , then there exists a u in such that f(l) = ul. This is just the type of statement which appears in Whitehead’s first lemma. Similarly, Whitehead’s second lemma is a statement about the second cohomology groups. In fact, these two results can now be stated in the following way.

THEOREM 13. If 𝔏 is finite-dimensional semi-simple of characteristic 0, then H^1(𝔏, 𝔐) = 0 and H^2(𝔏, 𝔐) = 0 for every finite-dimensional module 𝔐 of 𝔏.

It is easy to see that if 𝔐 = 𝔐₁ ⊕ 𝔐₂ where the 𝔐ᵢ are submodules of 𝔐, then H^i(𝔏, 𝔐) ≅ H^i(𝔏, 𝔐₁) ⊕ H^i(𝔏, 𝔐₂). This and the theorem of complete reducibility permit the reduction of the H^i(𝔏, 𝔐), for finite-dimensional 𝔐, to the case 𝔐 irreducible, and here one distinguishes two cases: (1) 𝔏𝔐 ≠ 0 and (2) 𝔏𝔐 = 0. In the second case irreducibility implies dim 𝔐 = 1, so 𝔐 can be identified with the field Φ. Then an i-cochain is a skew symmetric i-linear function of (l₁, …, lᵢ) with values in Φ, and since the representation is a zero representation, the coboundary formula reduces to

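With zero module action the terms involving the module multiplication drop out of the coboundary formula, and only the bracket terms survive. In the common Chevalley–Eilenberg convention (the sign placement should be compared with the book's (55), since conventions vary) the reduced formula reads:

```latex
(f\delta)(l_1,\ldots,l_{i+1})
  \;=\; \sum_{p<q} (-1)^{p+q}\,
        f\bigl([l_p l_q],\, l_1,\ldots,\hat{l}_p,\ldots,\hat{l}_q,\ldots,l_{i+1}\bigr),
```

where a circumflex indicates an omitted argument.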
It turns out that for semi-simple Lie algebras the cohomology groups with values in 𝔐 = Φ are the really interesting ones, since these correspond to cohomology groups of Lie groups. On the other hand, the case 𝔏𝔐 ≠ 0 is not very interesting (for semi-simple 𝔏, finite-dimensional irreducible 𝔐) except for its applications to the theorem of complete reducibility and the Levi theorem, since one has the following general result.

THEOREM 14 (Whitehead). Let 𝔏 be a finite-dimensional semi-simple Lie algebra over a field of characteristic 0 and let 𝔐 be a finite-dimensional irreducible module such that 𝔏𝔐 ≠ 0. Then H^i(𝔏, 𝔐) = 0 for all i ≥ 0.

If i = 0, the irreducibility and 𝔏𝔐 ≠ 0 imply that ul = 0 for all l ∈ 𝔏 holds only for u = 0. This means that H^0(𝔏, 𝔐) = 0. The proof for i > 0 is similar to the proof of the case Γ non-singular in the two Whitehead lemmas. We leave the details to the reader.

11. More on complete reducibility

For our further study of this question we require a notion of a type of closure for Lie algebras of linear transformations and an imbedding theorem for nilpotent elements in three-dimensional split simple algebras. The first of these is based on a special case of a property of associative algebras (the so-called Wedderburn principal theorem), which is the analogue of Levi's theorem on Lie algebras. The result is the following.

THEOREM 15. Let 𝔄 be a finite-dimensional algebra (associative with identity 1) generated by a single element x over Φ of characteristic zero and let 𝔑 be the radical of 𝔄. Then 𝔄 contains a semi-simple subalgebra 𝔄₁ such that 𝔄 = 𝔄₁ ⊕ 𝔑.

Proof: Let f(λ) be the minimum polynomial of x and let

be the factorization of f(λ) into irreducible polynomials with leading coefficients one, such that πᵢ(λ) ≠ πⱼ(λ) if i ≠ j and deg πᵢ(λ) > 0. We note first that if all the eᵢ = 1, 𝔄 has no non-zero nilpotent elements (cf. p. 47), so 𝔄 is semi-simple and there is nothing to prove. In any case, set

and z = f₁(x). Then z^e = 0 if e = max(eᵢ), so that z is nilpotent. Since 𝔄 is commutative, the ideal (z) generated by z is nilpotent; hence (z) ⊆ 𝔑. On the other hand, f₁(x) ≡ 0 (mod (z)). Hence the minimum polynomial of the coset x̄ = x + (z) in 𝔄/(z) is a product of distinct prime factors. Since x̄ generates 𝔄/(z), this means that 𝔄/(z) is semi-simple. Hence (z) = 𝔑. It follows also easily that the minimum polynomial of x̄ is f₁(λ). Hence it suffices to prove that 𝔄 contains an element y whose minimum polynomial is f₁(λ). We shall obtain such an element by a method of "successive approximations" beginning with x₁ = x. To begin with we have f₁(x₁) ≡ 0 (mod 𝔑) and x ≡ x₁ (mod 𝔑). Suppose we have already determined x_k such that f₁(x_k) ≡ 0 (mod 𝔑^k) and x ≡ x_k (mod 𝔑). Set x_{k+1} = x_k + w where w is to be determined in 𝔑^k so that f₁(x_{k+1}) ≡ 0 (mod 𝔑^{k+1}). We have, by Taylor's theorem for polynomials,

Since the base field is of characteristic 0, f₁(λ) has distinct roots in the algebraic closure of Φ. Hence f₁(λ) is prime to its derivative f₁′(λ). It follows that the coset f₁′(x_k) + 𝔑 has an inverse v + 𝔑 in 𝔄/𝔑. Set w = −f₁(x_k)v. Then w ≡ 0 (mod 𝔑^k) so that

Thus we have determined x_{k+1} such that f₁(x_{k+1}) ≡ 0 (mod 𝔑^{k+1}) and x_{k+1} ≡ x (mod 𝔑). Since 𝔑 is nilpotent this process leads to a y such that f₁(y) = 0 and y ≡ x (mod 𝔑). Hence 𝔄₁ = Φ[y] satisfies 𝔄₁ + 𝔑 = 𝔄. Since the minimum polynomial of x̄ is f₁(λ), it follows that the minimum polynomial of y is f₁(λ) also.
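The successive-approximation step is exactly a Newton iteration y ↦ y − f₁(y)f₁′(y)⁻¹ applied to the square-free part f₁ of the minimum polynomial. A minimal numerical sketch for matrices over the reals (the function names, the use of the characteristic polynomial in place of the minimum polynomial, and the fixed iteration count are our own choices, not from the text):

```python
import numpy as np

def trim(p, tol=1e-9):
    """Drop leading (near-)zero coefficients; coefficients are highest degree first."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    i = 0
    while i < p.size and abs(p[i]) < tol:
        i += 1
    return p[i:]

def poly_gcd(a, b):
    """Monic gcd of two coefficient arrays via the Euclidean algorithm."""
    a, b = trim(a), trim(b)
    while b.size:
        _, r = np.polydiv(a, b)
        a, b = b, trim(r)
    return a / a[0]

def matpoly(p, X):
    """Evaluate the polynomial p (highest degree first) at the matrix X (Horner)."""
    Y = np.zeros_like(X)
    for c in p:
        Y = Y @ X + c * np.eye(X.shape[0])
    return Y

def semisimple_part(X, iters=30):
    """Newton-iterate y <- y - f1(y) f1'(y)^{-1}, f1 the square-free part of the
    characteristic polynomial; the result Y is diagonalizable and X - Y nilpotent."""
    f = np.poly(X)                                     # characteristic polynomial
    f1 = np.polydiv(f, poly_gcd(f, np.polyder(f)))[0]  # square-free part
    df1 = np.polyder(f1)
    Y = np.asarray(X, dtype=float)
    for _ in range(iters):
        # f1' (Y) is invertible since f1 has distinct roots (characteristic zero)
        Y = Y - matpoly(f1, Y) @ np.linalg.inv(matpoly(df1, Y))
    return Y

# A single Jordan block: semisimple part 2*I, nilpotent part the shift matrix.
X = np.array([[2., 1.], [0., 2.]])
Y = semisimple_part(X)
Z = X - Y
```

Here the iteration stabilizes after one step; in general the order of vanishing of f₁(Y) modulo the radical doubles at each step, mirroring the passage from 𝔑^k to 𝔑^{k+1} in the proof.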

We can now prove

THEOREM 16. Let X be a linear transformation in a finite-dimensional vector space over a field of characteristic zero. Then we can write X = Y + Z where Y and Z are polynomials in X such that Y is semi-simple and Z is nilpotent. Moreover, if X = Y₁ + Z₁ where Y₁ is semi-simple, Z₁ is nilpotent, and Y₁ and Z₁ commute with X, then Y₁ = Y, Z₁ = Z.

Proof: The existence of the decomposition X = Y + Z is obtained by applying Theorem 15 to the algebra Φ[X]. Now suppose X = Y₁ + Z₁ where Y₁ and Z₁ have the properties stated in the theorem. Since Y and Z are polynomials in X they commute with Y₁ and Z₁. We have Y − Y₁ = Z₁ − Z. Since Z and Z₁ are nilpotent and commute, Z₁ − Z is nilpotent. Since Y and Y₁ are semi-simple and the base field is of characteristic zero, the proof of Theorem 11 shows that Y − Y₁ is semi-simple. Since the only transformation which is both semi-simple and nilpotent is 0,

Hence Y = Y₁, Z = Z₁.

We call the uniquely determined linear transformations Y and Z of Theorem 16 the semi-simple and the nilpotent components of X.

DEFINITION 3. A Lie algebra 𝔏 of linear transformations of a finite-dimensional vector space over a field of characteristic 0 is called almost algebraic* if it contains the nilpotent and semi-simple components of every X ∈ 𝔏.

To prove our imbedding theorem we require the following two lemmas.

LEMMA 7 (Morozov). Let 𝔏 be a finite-dimensional Lie algebra of characteristic 0 and suppose 𝔏 contains elements f, h such that [fh] = −2f and h ∈ [𝔏f]. Then there exists an element e ∈ 𝔏 such that

* This concept is due to Malcev, who used the term splittable. We have changed the term to “almost algebraic” since this is somewhat weaker than Chevalley’s notion of an algebraic Lie algebra of linear transformations. Moreover, we have preferred to use the term “split Lie algebra” in a connection which is totally unrelated to Malcev’s notion.

Proof: There exists a z such that h = [zf]. Set F = ad f, H = ad h, Z = ad z so that we have

The first of these relations implies that F is nilpotent (Lemma 2.4). Also

Hence [zh] = 2z + x₁ where x₁ ∈ 𝔎, the subalgebra of elements x such that [xf] = 0. Since [FH] = −2F, if b ∈ 𝔎 then

Hence bH ∈ 𝔎 and so 𝔎H ⊆ 𝔎. Also we have

and since HFk = FkH + 2kFk, we have

Let b ∈ 𝔎 ∩ 𝔏F^{i−1}. Then b = aF^{i−1} for some a ∈ 𝔏, and bF = aF^i = 0. Hence

Hence b(H + (i − 1)) ∈ 𝔎 ∩ 𝔏F^i. It follows from this relation and the nilpotency of F that if b is any element of 𝔎 then

for some positive integer m. Thus the characteristic roots of the restriction of H to 𝔎 are non-positive integers. Hence H − 2 induces a non-singular linear transformation in 𝔎 and consequently there exists a y₁ ∈ 𝔎 such that y₁(H − 2) = x₁, where x₁ is the element such that [zh] = 2z + x₁. Then [y₁h] = 2y₁ + x₁. Hence if we set e = z − y₁ we have [eh] = 2e. Also [ef] = [zf] = h, since [y₁f] = 0. Hence (59) holds.
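As a sanity check on the relations (59) — [eh] = 2e, [fh] = −2f, [ef] = h — here is a 2 × 2 matrix realization; the particular matrices are our choice, and the bracket is the commutator [A, B] = AB − BA:

```python
import numpy as np

# A 2x2 realization of the relations (59): [e h] = 2e, [f h] = -2f, [e f] = h.
# The matrices below are one choice; any basis of a split three-dimensional
# simple algebra satisfying (59) would do.
e = np.array([[0., 0.], [-1., 0.]])
f = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def br(a, b):
    """Commutator bracket of matrices."""
    return a @ b - b @ a
```

One can verify directly that br(e, h) = 2e, br(f, h) = −2f, and br(e, f) = h.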

LEMMA 8. Let 𝔏 be a Lie algebra of linear transformations in a finite-dimensional vector space over a field of characteristic 0. Suppose every nilpotent element F ≠ 0 of 𝔏 can be imbedded in a subalgebra with basis (E, F, H) such that [EH] = 2E, [FH] = −2F, [EF] = H. Let 𝔎 be any subalgebra of 𝔏 which has a complementary space 𝔓 in 𝔏 invariant under multiplication by 𝔎. Then 𝔎 has the property stated for 𝔏.

Proof: Let F be a non-zero nilpotent element of 𝔎. Then we can choose E and H in 𝔏 so that the indicated relations hold. Write H = H₁ + H₂, H₁ ∈ 𝔎, H₂ ∈ 𝔓, and E = E₁ + E₂, E₁ ∈ 𝔎, E₂ ∈ 𝔓. Then we have −2F = [FH] = [FH₁] + [FH₂] and [FH₁] ∈ 𝔎, [FH₂] ∈ 𝔓. Hence −2F = [FH₁]. Also H = [EF] = [E₁F] + [E₂F]. This implies that H₁ = [E₁F] ∈ [𝔎F]. Thus H₁ satisfies, for F, the conditions on h in Lemma 7. Hence there exist E′, H′ in 𝔎 such that [FH′] = −2F, [E′H′] = 2E′, [E′F] = H′. The subalgebra generated by F, E′, H′ is a homomorphic image of the split three-dimensional simple algebra. Since F ≠ 0 we have an isomorphism, so that F, E′, H′ are linearly independent and satisfy the required conditions.

We can now establish our second criterion for complete reducibility.

THEOREM 17. Let 𝔏 be a Lie algebra of linear transformations in a finite-dimensional vector space 𝔙 over a field of characteristic 0. (1) Assume 𝔏 completely reducible. Then every non-zero nilpotent element of 𝔏 can be imbedded in a three-dimensional split simple subalgebra of 𝔏 and 𝔏 is almost algebraic. (2) Assume that every non-zero nilpotent element of 𝔏 can be imbedded in a three-dimensional simple subalgebra of 𝔏 and that the center of 𝔏 is almost algebraic. Then 𝔏 is completely reducible.

Proof: (1) Assume 𝔏 is completely reducible and let 𝔈 denote the complete algebra of linear transformations in 𝔙. Let F be a nilpotent linear transformation and let 𝔙 = 𝔙₁ ⊕ 𝔙₂ ⊕ ⋯ be a decomposition of 𝔙 into cyclic invariant subspaces relative to F. Thus in 𝔙ᵢ we have a basis (x₀, x₁, …, x_{mᵢ}) such that xⱼF = x_{j+1}, x_{mᵢ}F = 0. We define H and E to be the linear transformations leaving every 𝔙ᵢ invariant and satisfying xⱼH = (mᵢ − 2j)xⱼ, x₀E = 0, xⱼE = (−mᵢj + j(j − 1))x_{j−1}, j > 0 (cf. (36)). Then as in §8, [EH] = 2E, [FH] = −2F, [EF] = H. This shows that F can be imbedded in a subalgebra ΦE + ΦF + ΦH of the type indicated. We shall show next that we can write 𝔈 = 𝔏 ⊕ 𝔓 where 𝔓 is a subspace such that [𝔓𝔏] ⊆ 𝔓. It will then follow from Lemma 8 that every nilpotent element ≠ 0 of 𝔏 can be imbedded in a split three-dimensional simple subalgebra of 𝔏. We recall that 𝔈 as module relative to 𝔏 (adjoint representation) is equivalent to 𝔙 ⊗ 𝔙*, 𝔙* the contragredient module. It is also easy to see that 𝔙* is completely reducible. Hence, by Theorem 11, 𝔙 ⊗ 𝔙*, and consequently 𝔈, is completely reducible relative to 𝔏. Since 𝔏 is a submodule of 𝔈 relative to 𝔏, there exists a complement 𝔓 such that 𝔈 = 𝔏 ⊕ 𝔓. This completes the proof of the first assertion in (1). Now let X be any element of 𝔏 and let Y and Z be the semi-simple and nilpotent components of X. Then

and ad Z is nilpotent (Z^m = 0 implies (ad Z)^{2m−1} = 0). Also the identification of 𝔈 with 𝔙 ⊗ 𝔙* and the proof of Theorem 11 show that ad Y is semi-simple. Hence ad Y and ad Z are the semi-simple and nilpotent components of ad X and so these are polynomials in ad X. Since [𝔏X] ⊆ 𝔏 and ad Y, ad Z are polynomials in ad X, [𝔏Y] ⊆ 𝔏 and [𝔏Z] ⊆ 𝔏. Thus L → [LY], L → [LZ] are derivations in 𝔏. We can write 𝔏 = 𝔏′ ⊕ ℭ where 𝔏′ is semi-simple and ℭ is the center. Since the derivations of 𝔏′ are all inner, it follows that any derivation of 𝔏 which maps ℭ into 0 is the inner derivation determined by an element of 𝔏′. Since Z is a polynomial in X, [XC] = 0 implies [ZC] = 0. This implies that the derivation L → [LZ] maps ℭ into 0. Hence there exists a Z₁ ∈ 𝔏′ such that [LZ] = [LZ₁], L ∈ 𝔏. Since Z is nilpotent, the restriction of ad Z₁ = ad Z to 𝔏 is nilpotent. Since 𝔏′ is semi-simple, the result just proved (applied to ad 𝔏′) implies that there exists an element U′ ∈ 𝔏′ such that [ad Z₁, ad U′] = 2 ad Z₁. Then [Z₁U′] = 2Z₁, which implies that Z₁ is nilpotent. Since [XZ] = 0, [XZ₁] = 0, and since Z is a polynomial in X, [ZZ₁] = 0. It now follows that Z − Z₁ is nilpotent. Since [L, Z − Z₁] = 0 for all L ∈ 𝔏, and Z − Z₁ is in the enveloping associative algebra 𝔏*, Z − Z₁ is in the center of 𝔏*. Since 𝔏* is completely reducible and Z − Z₁ is nilpotent, this implies Z − Z₁ = 0, so Z = Z₁ ∈ 𝔏. Hence also Y = X − Z ∈ 𝔏. This completes the proof that 𝔏 is almost algebraic. (2) Assume 𝔏 has an almost algebraic center and has the property stated for nilpotent elements. Let 𝔖 be the radical of 𝔏 and let F ∈ [𝔖𝔏]. Then we know that F is nilpotent (Corollary 2 to Theorem 2.8). If F is not zero it can be imbedded in a three-dimensional simple subalgebra 𝔎. Since 𝔎 ∩ 𝔖 ≠ 0 and 𝔎 is simple, 𝔎 ⊆ 𝔖, which is impossible because of the solvability of 𝔖. Hence F = 0 and [𝔖𝔏] = 0. This implies that 𝔖 = ℭ, the center. By Levi's theorem 𝔏 = 𝔏₁ ⊕ ℭ where 𝔏₁ is a semi-simple subalgebra. Since ℭ is the center, this implies that 𝔏₁ is an ideal. We can now invoke Theorem 10 to prove that 𝔏 is completely reducible, provided that we can show that every C ∈ ℭ is semi-simple. Now we are assuming that C = D + E where D is semi-simple, E is nilpotent, and D and E are in ℭ. If E ≠ 0 we can imbed it in a three-dimensional simple subalgebra. Clearly this is impossible since E is in the center. This completes the proof of (2).
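The construction of the triple (E, F, H) on each cyclic subspace in the proof of (1) can be checked numerically. A sketch under our own conventions (the matrices act on row vectors, so that matrix products compose in the same order as the book's right operators; the coefficient formula for E is reconstructed from the stated relations):

```python
import numpy as np

def sl2_triple(m):
    """Matrices E, F, H on the (m+1)-dimensional cyclic space with basis
    x_0, ..., x_m, acting on row vectors:
        x_j F = x_{j+1}  (x_m F = 0),
        x_j H = (m - 2j) x_j,
        x_j E = (-m*j + j*(j-1)) x_{j-1}  (x_0 E = 0).
    """
    n = m + 1
    E = np.zeros((n, n)); F = np.zeros((n, n)); H = np.zeros((n, n))
    for j in range(n):
        if j < m:
            F[j, j + 1] = 1.0          # x_j F = x_{j+1}
        H[j, j] = m - 2 * j            # x_j H = (m - 2j) x_j
        if j > 0:
            E[j, j - 1] = -m * j + j * (j - 1)
    return E, F, H

def br(a, b):
    """Commutator bracket of matrices."""
    return a @ b - b @ a

# A three-dimensional cyclic space (one Jordan block of size 3 for F).
E, F, H = sl2_triple(2)
```

The relations [EH] = 2E, [FH] = −2F, [EF] = H hold for every m, and F is nilpotent of index m + 1.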

It is immediate that if 𝔏 is almost algebraic, then the center of 𝔏 is almost algebraic. Hence we can replace the assumption in (2) that the center is almost algebraic by the assumption that 𝔏 is almost algebraic. We recall that the centralizer of a subset S is the set of elements y such that [sy] = 0 for all s ∈ S. This is a subalgebra of 𝔏. We shall now use the foregoing criterion to prove

THEOREM 18. Let 𝔏 be a completely reducible Lie algebra of linear transformations in a finite-dimensional vector space over a field of characteristic zero and let 𝔏₁ be a completely reducible subalgebra of 𝔏. Then the centralizer 𝔏₂ of 𝔏₁ in 𝔏 is completely reducible.

Proof: Let X ∈ 𝔏₂. Then since 𝔏 is almost algebraic, the semi-simple and nilpotent parts Y, Z of X are in 𝔏. Since these are polynomials in X, [CX] = 0 for C ∈ 𝔏₁ implies [CY] = 0 = [CZ]. Hence Y, Z ∈ 𝔏₂ and 𝔏₂ is almost algebraic. We shall show next that 𝔏 = 𝔏₂ ⊕ 𝔏₃ where 𝔏₃ is a subspace of 𝔏 such that [𝔏₃𝔏₂] ⊆ 𝔏₃. It will then follow from Theorem 17 and Lemma 8 that every nilpotent element of 𝔏₂ can be imbedded in a three-dimensional split simple algebra. Then 𝔏₂ will be completely reducible by Theorem 17. Now we know that ad 𝔏₁ is completely reducible (proof of Theorem 17). Since 𝔏 is a submodule of 𝔈 relative to 𝔏₁, 𝔏 is completely reducible relative to ad 𝔏₁. Thus we may write

where [𝔎ᵢ𝔏₁] ⊆ 𝔎ᵢ, i = 1, …, k, and 𝔎ᵢ is irreducible relative to ad 𝔏₁. We assume the 𝔎ᵢ are ordered so that [𝔎ᵢ𝔏₁] = 0, i = 1, …, h, and [𝔎ⱼ𝔏₁] ≠ 0 if j > h. Since the subset of elements zᵢ ∈ 𝔎ᵢ such that [zᵢ𝔏₁] = 0 is a submodule of 𝔎ᵢ, it is immediate that

Set 𝔏₃ = 𝔎_{h+1} + ⋯ + 𝔎_k. Then 𝔏 = 𝔏₂ ⊕ 𝔏₃. If i > h, then [𝔎ᵢ𝔏₁] ≠ 0 and [𝔎ᵢ𝔏₁] + [[𝔎ᵢ𝔏₁]𝔏₁] + ⋯ is an 𝔏₁-submodule ≠ 0 of 𝔎ᵢ. Hence 𝔎ᵢ = [𝔎ᵢ𝔏₁] + [[𝔎ᵢ𝔏₁]𝔏₁] + ⋯. This implies that

On the other hand, 𝔏 = 𝔏₂ ⊕ 𝔏₃; hence [𝔏𝔏₁] = [𝔏₃𝔏₁] ⊆ 𝔏₃. Hence 𝔏₃ = [𝔏𝔏₁] and

This shows that 𝔏₃ is a complement of 𝔏₂ in 𝔏 such that [𝔏₃𝔏₂] ⊆ 𝔏₃, which is what we needed to prove.
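Theorem 18 concerns centralizers, and computing a centralizer is a linear problem: the condition [A, X] = 0 is linear in X. A minimal numerical sketch (the helper name and tolerance are our own, not from the text):

```python
import numpy as np

def centralizer_basis(mats, n, tol=1e-10):
    """Basis of the centralizer {X in gl_n : [A, X] = 0 for all A in mats}.
    With row-major (numpy) flattening, vec(AX - XA) = (kron(A, I) - kron(I, A^T)) vec(X),
    so the centralizer is the null space of the stacked matrix."""
    M = np.vstack([np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T) for A in mats])
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return [vt[k].reshape(n, n) for k in range(rank, n * n)]

# Example: the centralizer of the span of h = diag(1, -1) in gl_2 is the
# (completely reducible) algebra of diagonal matrices.
h = np.array([[1., 0.], [0., -1.]])
basis = centralizer_basis([h], 2)
```

In the example the centralizer is two-dimensional and consists of diagonal matrices, a completely reducible set, as Theorem 18 predicts.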

Exercises

In all these exercises the characteristic of the base field will be zero and, unless the exercise number is marked with an asterisk, the dimensionalities of the spaces will be finite.

1. Show that if ℌ is a Cartan subalgebra of 𝔏 then ℌ is a maximal nilpotent subalgebra of 𝔏. Show that the converse is false for Φ_nL (n ≥ 2).

2. Let 𝔏 be a nilpotent Lie algebra of linear transformations in 𝔙 and let 𝔙 = 𝔙₀ ⊕ 𝔙₁ be the Fitting decomposition of 𝔙 relative to 𝔏. Show that if Φ is infinite, then there exists an A ∈ 𝔏 such that 𝔙₀ = 𝔙₀A and 𝔙₁ = 𝔙₁A, the Fitting components of 𝔙 relative to A.

3. Show that the diagonal matrices of trace 0 form a Cartan subalgebra of the Lie algebra of triangular matrices of trace 0. Show that this algebra is complete.

4. Let 𝔏 be the subalgebra of Φ_{2l}L of matrices A satisfying S⁻¹A′S = −A where

(This is isomorphic to an orthogonal Lie algebra.) Show that the diagonal matrices

form a Cartan subalgebra of 𝔏.

5. Same as Exercise 4 but with S replaced by

6. Generalize Exercise 2.9 to the following: Let 𝔏 be a Lie algebra and 𝔇 a nilpotent subalgebra of the derivation algebra of 𝔏. Suppose the only element l ∈ 𝔏 such that lD = 0 for all D ∈ 𝔇 is l = 0. Then prove that 𝔏 is nilpotent.

7. Show that if 𝔏₁ is a semi-simple ideal in 𝔏 then 𝔏 = 𝔏₁ ⊕ 𝔏₂ where 𝔏₂ is a second ideal.

8. Let 𝔅 be an ideal in 𝔏 such that 𝔏/𝔅 is semi-simple. Show that there exists a subalgebra 𝔏₁ of 𝔏 such that 𝔏 = 𝔏₁ ⊕ 𝔅.

9. Let 𝔏 be simple over an algebraically closed field Φ and let f(a, b) be an invariant symmetric bilinear form on 𝔏. Show that f is a multiple of the Killing form. Generalize this to semi-simple 𝔏.

10. Let 𝔐_n be the n-dimensional irreducible module for a split three-dimensional simple algebra. Obtain a decomposition of 𝔐_n ⊗ 𝔐_r into irreducible submodules.

11*. Let e, h be elements of an associative algebra such that [[eh]h] = 0. Show that if h is algebraic, that is, there exists a non-zero polynomial φ(λ) such that φ(h) = 0, then [eh] is nilpotent.

12*. Let 𝔄 be an associative algebra with an identity element 1 and suppose 𝔄 contains elements e, f, h such that [eh] = 2e, [fh] = −2f, [ef] = h. Show that if φ(h) ∈ 𝔄 is a polynomial in h then Also prove that if r and n are positive integers, r ≤ n, then

where

13*. 𝔄, h, e, f as in Exercise 12. Show that if e^m = 0, then

14. Prove that if e is an element of a semi-simple Lie algebra 𝔏 of characteristic zero such that ad e is nilpotent, then eR is nilpotent for every representation R of 𝔏.

15. Prove that if 𝔏 is semi-simple over an algebraically closed field then 𝔏 contains an e ≠ 0 with ad e nilpotent.

16. Prove that every finite-dimensional Lie algebra 𝔏 ≠ 0 over an algebraically closed field has indecomposable modules of arbitrarily high finite dimensionalities. (Hint: Show that there exists an e ∈ 𝔏 and a representation R such that eR is nilpotent and ≠ 0. If 𝔐 is the corresponding module, then the dimensionalities of the indecomposable components of 𝔐, 𝔐 ⊗ 𝔐, 𝔐 ⊗ 𝔐 ⊗ 𝔐, … are not bounded.)

17. Prove that any semi-simple algebra has irreducible modules of arbitrarily high dimensionalities.

18. Show that the derivation algebra of any Lie algebra is almost algebraic. (Hint: Use Exercise 2.8.)

19. A Lie algebra 𝔏 is called reductive if ad 𝔏 is completely reducible. Show that 𝔏 is reductive if and only if 𝔏 has a 1:1 completely reducible representation.

20. A subalgebra 𝔅 of 𝔏 is called reductive in 𝔏 if the adjoint representation of 𝔅 in 𝔏 is completely reducible. Prove that if 𝔏 is a completely reducible Lie algebra of linear transformations and 𝔅 is reductive in 𝔏, then 𝔅 is completely reducible.

21. Show that any reductive commutative subalgebra of a semi-simple Lie algebra can be imbedded in a Cartan subalgebra.

22. Show that any semi-simple Lie algebra contains commutative Cartan subalgebras.

23. Let A be an automorphism of a semi-simple Lie algebra 𝔏. Show that the subalgebra of elements y such that y(A − 1)^m = 0 for some m is a reductive subalgebra. (Hint: Use Exercise 2.5.)

24. (Mostow–Taft). Let G be a finite group of automorphisms of a Lie algebra 𝔏. Show that 𝔏 has a Levi factor which is invariant under G.

25. Let f_a(λ) = det(λ1 − ad a), the characteristic polynomial of ad a in a Lie algebra 𝔏, and let D be a derivation of 𝔏. Show that if t is an indeterminate, then

(Hint: Use the fact that f_{aA}(λ) = f_a(λ) if A is an automorphism, and the fact that exp tD = 1 + tD + (t²D²/2!) + ⋯ is a well-defined automorphism in 𝔏_P, P the field of power series in t with coefficients in Φ.)

26. Write and let τi(a1, …, ai) be the linearized form of τi defined by

Show that τi(a1, …, ai) is a symmetric i-linear function and that

for any derivation D.