CHAPTER I

Basic Concepts

The theory of Lie algebras is an outgrowth of the Lie theory of continuous groups. The main result of the latter is the reduction of “local” problems concerning Lie groups to corresponding problems on Lie algebras, thus to problems in linear algebra. One associates with every Lie group a Lie algebra over the reals or complexes and one establishes a correspondence between the analytic subgroups of the Lie group and the subalgebras of its Lie algebra, in which invariant subgroups correspond to ideals, abelian subgroups to abelian subalgebras, etc. Isomorphism of the Lie algebras is equivalent to local isomorphism of the corresponding Lie groups. We shall not discuss these matters in detail since excellent modern accounts of the Lie theory are available. The reader may consult one of the following books: Chevalley’s Theory of Lie Groups, Cohn’s Lie Groups, Pontrjagin’s Topological Groups.

More recently, two other types of group theory have been aided by the introduction of appropriate Lie algebras in their study. The first of these is the theory of free groups which can be studied by means of free Lie algebras using a method which was originated by Magnus. Although the connection here is not so close as in the Lie theory, significant results on free groups and other types of discrete groups have been obtained using Lie algebras. Particularly noteworthy are the results on the so-called restricted Burnside problem: Is there a bound for the orders of the finite groups with a fixed number r of generators and satisfying the relation x^m = 1, m a fixed positive integer? It is worth mentioning that Lie algebras of prime characteristic play an important role in these applications to discrete group theory. Again we shall not enter into the details but refer the interested reader to two articles which give a good account of this method in group theory. These are: Lazard [2] and Higman [1].

The type of correspondence between subgroups of a Lie group and subalgebras of its Lie algebra which obtains in the Lie theory has a counterpart in Chevalley’s theory of linear algebraic groups. Roughly speaking, a linear algebraic group is a subgroup of the group of non-singular n × n matrices which is specified by a set of polynomial equations in the entries of the matrices. An example is the orthogonal group, which is defined by the set of equations Σk αikαjk = δij on the entries αij of the matrix (αij). With each linear algebraic group Chevalley has defined a corresponding Lie algebra (see Chevalley [2]) which gives useful information on the group and is decisive in the theory of linear algebraic groups of characteristic zero.

In view of all this group theoretic background it is not surprising that the basic concepts in the theory of Lie algebras have a group-theoretic flavor. This should be kept in mind throughout the study of Lie algebras and particularly in this chapter, which gives the foundations that are adequate for the main structure theory to be developed in Chapters II to IV. Questions on foundations are taken up again in Chapter V. These concern some concepts that are necessary for the representation theory, which will be treated in Chapters VI and VII.

1. Definition and construction of Lie and associative algebras

We recall the definition of a non-associative algebra (= not necessarily associative algebra) over a field Φ. This is just a vector space 𝔄 over Φ in which a bilinear composition is defined. Thus with every pair (x, y), x, y in 𝔄, we can associate a product xy, and this satisfies the bilinearity conditions

(1) (x1 + x2)y = x1y + x2y,  x(y1 + y2) = xy1 + xy2,

(2) α(xy) = (αx)y = x(αy),  α ∈ Φ.
A similar definition can be given for a non-associative algebra over a commutative ring Φ having an identity element (unit) 1. This is a left Φ-module 𝔄 with a product xy satisfying (1) and (2). We shall be interested mainly in the case of algebras over fields and, in fact, in such algebras which are finite-dimensional as vector spaces. For such an algebra we have a basis (e1, e2, …, en) and we can write

(3) eiej = Σk γijkek,

where the γ’s are in Φ. The n³ elements γijk are called the constants of multiplication of the algebra (relative to the chosen basis). They give the values of every product eiej, i, j = 1, 2, …, n. Moreover, these products determine every product in 𝔄. Thus let x and y be any two elements of 𝔄 and write x = Σi ξiei, y = Σj ηjej. Then, by (1) and (2),

xy = Σi,j ξiηj(eiej),

and this is determined by the eiej.

This reasoning indicates a universal construction for finite-dimensional non-associative algebras. We begin with any vector space 𝔄 and a basis (ei) in 𝔄. For every pair (i, j) we define in any way we please eiej as an element of 𝔄. Then if x = Σi ξiei and y = Σj ηjej, we define

xy = Σi,j ξiηj(eiej).

One checks immediately that this product is bilinear in the sense that (1) and (2) are valid. The choice of the eiej is equivalent to the choice of the elements γijk in Φ such that eiej = Σk γijkek.
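The construction just described is easy to mechanize. The following sketch (the helper name and the toy multiplication table are mine, not the text's) computes an arbitrary product from a table of constants of multiplication γijk:

```python
# A finite-dimensional non-associative algebra is determined by its structure
# constants gamma[i][j][k], where e_i e_j = sum_k gamma[i][j][k] e_k.

def multiply(x, y, gamma):
    """Product of the coordinate vectors x, y relative to the basis (e_i)."""
    n = len(x)
    z = [0.0] * n
    for i in range(n):
        for j in range(n):
            if x[i] and y[j]:
                for k in range(n):
                    z[k] += x[i] * y[j] * gamma[i][j][k]
    return z

# Toy example: 2-dimensional algebra with e1 e1 = e1, e1 e2 = e2, other
# products of basis elements 0 (an arbitrary choice, as the text allows).
n = 2
gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
gamma[0][0][0] = 1.0   # e1 e1 = e1
gamma[0][1][1] = 1.0   # e1 e2 = e2

x = [2.0, 3.0]         # x = 2 e1 + 3 e2
y = [1.0, 4.0]         # y = e1 + 4 e2
xy = multiply(x, y, gamma)   # bilinear expansion: 2 e1 + 8 e2
```

The triple loop is just the bilinear expansion xy = Σ ξiηj(eiej) written out coordinate by coordinate.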

The notion of a non-associative algebra is too general to lead to interesting structural results. In order to obtain such results one must impose some further conditions on the multiplication. The most important ones, and the ones which will concern us here, are the associative laws and the Lie conditions.

DEFINITION 1. A non-associative algebra 𝔄 is said to be associative if its multiplication satisfies the associative law

(4) (xy)z = x(yz).

A non-associative algebra 𝔏 is said to be a Lie algebra if its multiplication satisfies the Lie conditions

(5) x² = 0,  (xy)z + (yz)x + (zx)y = 0.

The second of these is called the Jacobi identity.

Since these types of non-associative algebras are defined by identities, it is clear that subalgebras and homomorphic images are of the same type, i.e., associative or Lie. If 𝔏 is a Lie algebra and x, y ∈ 𝔏, then 0 = (x + y)² = x² + xy + yx + y² = xy + yx, so that

(6) xy = − yx

holds in any Lie algebra. Conversely, if this condition holds, then 2x² = 0, so that, if the characteristic is not two, x² = 0. Hence for algebras of characteristic ≠ 2 the condition (6) can be used for the first of (5) in the definition of a Lie algebra.

PROPOSITION 1. A non-associative algebra 𝔄 with basis (e1, e2, …, en) over Φ is associative if and only if (eiej)ek = ei(ejek) for i, j, k = 1, 2, …, n. These conditions are equivalent to

(7) Σl γijlγlkm = Σl γjklγilm,  i, j, k, m = 1, 2, …, n.

The algebra 𝔄 is a Lie algebra if and only if

ei² = 0,  eiej = − ejei,  (eiej)ek + (ejek)ei + (ekei)ej = 0

for i, j, k = 1, 2, …, n. These conditions are equivalent to

(8) γiil = 0,  γijl + γjil = 0,  Σl (γijlγlkm + γjklγlim + γkilγljm) = 0.
Proof: If 𝔄 is associative, then certainly (eiej)ek = ei(ejek). Conversely, assume these conditions hold for the ei. If x = Σi ξiei, y = Σj ηjej, z = Σk ζkek, then (xy)z = Σi,j,k ξiηjζk(eiej)ek and x(yz) = Σi,j,k ξiηjζk ei(ejek). Hence (xy)z = x(yz) and 𝔄 is associative. If eiej = Σl γijlel, then (eiej)ek = Σl,m γijlγlkmem and ei(ejek) = Σl,m γjklγilmem. Hence the linear independence of the ei implies that the conditions (eiej)ek = ei(ejek) are equivalent to (7). The proof in the Lie case is similar to the foregoing and will be omitted.

In actual practice the general procedure we have indicated is not often used in constructing examples of associative and of Lie algebras except for algebras of low dimensionalities. We shall employ it in determining the Lie algebras of one, two, and three dimensions in § 4. There are a couple of simplifying remarks that can be made in the Lie case. First, we note that if eiej = − ejei in an algebra, then the validity of (eiej)ek + (ejek)ei + (ekei)ej = 0 for a particular triple i, j, k implies (ejei)ek + (eiek)ej + (ekej)ei = 0. Since cyclic permutations of i, j, k are clearly allowed, it follows that the validity of the Jacobi identity for ei, ej, ek implies its validity for ei′, ej′, ek′, where i′, j′, k′ is any permutation of i, j, k. Next let i = j. Then (eiei)ek + (eiek)ei + (ekei)ei = 0 + (eiek)ei − (eiek)ei = 0. Hence, what is the same thing, x² = 0 in 𝔄 implies that the Jacobi identities are satisfied whenever two of the three basis elements coincide. In particular, the Jacobi identities are consequences of x² = 0 if dim 𝔄 ≤ 2, and if dim 𝔄 = 3, then the only identity we have to check is (e1e2)e3 + (e2e3)e1 + (e3e1)e2 = 0.
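These simplifications can be exercised directly. In the sketch below (helper names are mine), the table c[i][j] holds the coordinates of [eiej] with eiei = 0 and eiej = − ejei built in, so only triples i < j < k need to be tested:

```python
# Jacobi check on basis triples i < j < k, given a skew multiplication table.

def jacobi_holds(c, tol=1e-12):
    n = len(c)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                # [[e_i e_j] e_k] + [[e_j e_k] e_i] + [[e_k e_i] e_j]
                total = [0.0] * n
                for a, b, w in ((i, j, k), (j, k, i), (k, i, j)):
                    inner = c[a][b]          # coordinates of [e_a e_b]
                    for l in range(n):
                        for m in range(n):
                            total[m] += inner[l] * c[l][w][m]
                if any(abs(t) > tol for t in total):
                    return False
    return True

# The table [e1 e2] = e3, [e2 e3] = e1, [e3 e1] = e2 (this algebra reappears
# as example (b) in section 5).
n = 3
c = [[[0.0] * n for _ in range(n)] for _ in range(n)]

def set_bracket(i, j, vec):
    c[i][j] = list(vec)
    c[j][i] = [-v for v in vec]   # skew symmetry built into the table

set_bracket(0, 1, (0.0, 0.0, 1.0))
set_bracket(1, 2, (1.0, 0.0, 0.0))
set_bracket(2, 0, (0.0, 1.0, 0.0))
```

For dim 3 the loop runs over the single triple (1, 2, 3), exactly as the remark predicts.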

2. Algebras of linear transformations. Derivations

Actually, it is unnecessary to sit down and construct examples of associative and Lie algebras by the method of bases and multiplication tables since these algebras occur “in nature.” The prime examples of associative algebras are obtained as follows. Let 𝔐 be a vector space over a field Φ and let 𝔈 denote the set of linear transformations of 𝔐 into itself. We recall that if A, B ∈ 𝔈 and α ∈ Φ, then A + B, αA and AB are defined by x(A + B) = xA + xB, x(αA) = α(xA), x(AB) = (xA)B for x in 𝔐. Then it is well known that 𝔈 is a vector space relative to + and the scalar multiplication and that multiplication is associative and satisfies (1) and (2). Hence 𝔈 is an associative algebra. It is well known also that if 𝔐 is m-dimensional, m < ∞, then 𝔈 is m²-dimensional over Φ. If (e1, e2, …, em) is a basis for 𝔐 over Φ, then the linear transformations Eij such that eiEij = ej, erEij = 0 if r ≠ i, i, j = 1, …, m, form a basis for 𝔈 over Φ. If A ∈ 𝔈, then we can write eiA = Σj αijej, i = 1, …, m, and (α) = (αij) is the matrix of A relative to the basis (ei). The correspondence A → (α) is an isomorphism of 𝔈 onto the algebra Φm of m × m matrices with entries αij in Φ.

The algebra 𝔈 is called the (associative) algebra of linear transformations in 𝔐 over Φ. Any subalgebra of 𝔈, that is, a subspace of 𝔈 which is closed under multiplication, is called an algebra of linear transformations.

If 𝔄 is an arbitrary non-associative algebra and a ∈ 𝔄, then the mapping aR which sends any x into xa is a linear transformation. It is well known and easy to check that (a + b)R = aR + bR, (αa)R = αaR and, if 𝔄 is associative, (ab)R = aRbR. Hence if 𝔄 is an associative algebra, the mapping a → aR is a homomorphism of 𝔄 into the algebra 𝔈 of linear transformations in the vector space 𝔄. If 𝔄 has an identity (or unit) 1, then a → aR is an isomorphism of 𝔄 into 𝔈. Hence 𝔄 is isomorphic to an algebra of linear transformations. If 𝔄 does not have an identity, we can adjoin one in a simple way to get an algebra 𝔄* with an identity such that dim 𝔄* = dim 𝔄 + 1 (cf. Jacobson [2], vol. I, p. 84). Since 𝔄* is isomorphic to an algebra of linear transformations, the same is true for 𝔄. If 𝔄 is finite-dimensional, the argument shows that 𝔄 is isomorphic to an algebra of linear transformations in a finite-dimensional vector space.

Lie algebras arise from associative algebras in a very simple way. Let 𝔄 be an associative algebra. If x, y ∈ 𝔄, then we define the Lie product or (additive) commutator of x and y as

[xy] = xy − yx.

One checks immediately that

[x1 + x2, y] = [x1y] + [x2y],  [x, y1 + y2] = [xy1] + [xy2],  α[xy] = [(αx)y] = [x(αy)].

Moreover,

[xx] = 0,  [[xy]z] + [[yz]x] + [[zx]y] = 0.
Thus the product [xy] satisfies all the conditions on the product in a Lie algebra. The Lie algebra obtained in this way is called the Lie algebra of the associative algebra 𝔄. We shall denote this Lie algebra as 𝔄L. In particular, we have the Lie algebra 𝔈L obtained from 𝔈. Any subalgebra of 𝔈L is called a Lie algebra of linear transformations. We shall see later that every Lie algebra is isomorphic to a subalgebra of a Lie algebra 𝔄L, 𝔄 associative. In view of the result just proved on associative algebras this is equivalent to showing that every Lie algebra is isomorphic to a Lie algebra of linear transformations.
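A quick numerical sanity check of these identities (not in the text, and no substitute for the two-line algebraic verification): for matrices, [xy] = xy − yx satisfies [xx] = 0 and the Jacobi identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def brk(x, y):
    """Lie product (additive commutator) in the associative matrix algebra."""
    return x @ y - y @ x

x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# [[xy]z] + [[yz]x] + [[zx]y] should vanish identically
jacobi = brk(brk(x, y), z) + brk(brk(y, z), x) + brk(brk(z, x), y)
```

Expanding the twelve triple products shows why the sum cancels in pairs; the numerics merely confirm the bookkeeping.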

We shall consider now some important instances of subalgebras of the Lie algebra 𝔈L, 𝔈 the associative algebra of linear transformations in a vector space 𝔐 over a field Φ.

Orthogonal Lie algebra. Let 𝔐 be equipped with a non-degenerate symmetric bilinear form (x, y) and assume 𝔐 finite-dimensional. Then any linear transformation A in 𝔐 has an adjoint A* relative to (x, y); that is, A* is linear and satisfies (xA, y) = (x, yA*). The mapping A → A* is an anti-automorphism in the algebra 𝔈: (A + B)* = A* + B*, (αA)* = αA*, (AB)* = B*A*. Let 𝔖 denote the set of A which are skew in the sense that A* = − A. Then 𝔖 is a subspace of 𝔈 and if A* = − A, B* = − B, then [AB]* = (AB − BA)* = B*A* − A*B* = BA − AB = [BA] = − [AB]. Hence [AB] ∈ 𝔖 and 𝔖 is a subalgebra of 𝔈L.

If Φ is the field of real numbers, then the Lie algebra 𝔖 is the Lie algebra of the orthogonal group of 𝔐 relative to (x, y). This is the group of linear transformations O in 𝔐 which are orthogonal in the sense that (xO, yO) = (x, y), x, y in 𝔐. For this reason we shall call 𝔖 the orthogonal Lie algebra relative to (x, y).

Symplectic Lie algebra. Here we suppose (x, y) is a non-degenerate alternate form: (x, x) = 0, and again dim 𝔐 < ∞. We recall that these conditions imply that dim 𝔐 = 2l is even. Again let A* be the adjoint of A (∈ 𝔈) relative to (x, y). Then the set of skew (A* = − A) linear transformations is a subalgebra of 𝔈L. This is related to the symplectic group and so we shall call it the symplectic Lie algebra of the alternate form (x, y).
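Both constructions can be tested numerically. The sketch below uses conventions I am assuming (row vectors, maps acting on the right, so (x, y) = x B yᵀ and A* = B Aᵀ B⁻¹ for symmetric non-singular B); the skew transformations are then closed under the Lie product:

```python
import numpy as np

def adjoint(a, b):
    """Adjoint of a relative to the symmetric form with matrix b:
    (xA, y) = (x, yA*) with (x, y) = x b y^T and row vectors x, y."""
    return b @ a.T @ np.linalg.inv(b)

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
B = B + B.T + 8 * np.eye(4)        # symmetric and safely non-singular

def skew_part(m):
    # since A** = A for symmetric B, m - m* always satisfies (m - m*)* = -(m - m*)
    return m - adjoint(m, B)

A1 = skew_part(rng.standard_normal((4, 4)))
A2 = skew_part(rng.standard_normal((4, 4)))
C = A1 @ A2 - A2 @ A1              # the Lie product [A1 A2]
```

The test that C is again skew is exactly the computation [AB]* = B*A* − A*B* = − [AB] carried out in coordinates.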

Triangular linear transformations. Let 𝔐1 ⊂ 𝔐2 ⊂ ⋯ ⊂ 𝔐m = 𝔐 be a chain of subspaces of 𝔐 such that dim 𝔐i = i, and let 𝔗 be the set of linear transformations T such that 𝔐iT ⊆ 𝔐i, i = 1, …, m. It is clear that 𝔗 is a subalgebra of the associative algebra 𝔈; hence 𝔗L is a subalgebra of 𝔈L. We can choose a basis (x1, x2, …, xm) for 𝔐 so that (x1, x2, …, xi) is a basis for 𝔐i. Then if T ∈ 𝔗, xiT ∈ 𝔐i implies that the matrix of T relative to (x1, x2, …, xm) has αij = 0 for j > i, that is, all entries above the main diagonal are 0. Such a matrix is called triangular and correspondingly we shall call any T ∈ 𝔗 a triangular linear transformation.

Derivation algebras. Let 𝔄 be an arbitrary non-associative algebra. A derivation D in 𝔄 is a linear mapping of 𝔄 into 𝔄 satisfying

(xy)D = (xD)y + x(yD).
Let 𝔇 denote the set of derivations in 𝔄. If D1, D2 ∈ 𝔇, then

(xy)(D1 + D2) = (xD1)y + x(yD1) + (xD2)y + x(yD2) = (x(D1 + D2))y + x(y(D1 + D2)).

Hence D1 + D2 ∈ 𝔇. Similarly, one checks that αD ∈ 𝔇 if α ∈ Φ, D ∈ 𝔇. We have

(xy)D1D2 = ((xD1)y + x(yD1))D2 = (xD1D2)y + (xD1)(yD2) + (xD2)(yD1) + x(yD1D2).
Interchange of 1, 2 and subtraction gives

(xy)[D1D2] = (x[D1D2])y + x(y[D1D2]).

Hence [D1D2] ∈ 𝔇 and so 𝔇 is a subalgebra of 𝔈L, where 𝔈 is the algebra of linear transformations in the vector space 𝔄. We shall call this the Lie algebra of derivations or derivation algebra of 𝔄.

The Lie algebra 𝔇 is the Lie algebra of the group of automorphisms of 𝔄 if 𝔄 is a finite-dimensional algebra over the field of real numbers. We shall not prove any of our assertions on the relation between Lie groups and Lie algebras but refer the reader to the literature on Lie groups for this. However, in the present instance we shall indicate the link between the group of automorphisms and the Lie algebra of derivations.

Let D be a derivation. Then induction on n gives the Leibniz rule:

(12) (xy)Dⁿ = Σi=0,…,n (n!/(i!(n − i)!))(xDⁱ)(yDⁿ⁻ⁱ).

If the characteristic of Φ is 0 we can divide by n! and obtain

(12′) (xy)Dⁿ/n! = Σi+j=n (xDⁱ/i!)(yDʲ/j!).
If 𝔄 is finite-dimensional over the field of reals, then it is easy to prove (cf. Jacobson [2], vol. II, p. 197) that the series

(13) exp D = 1 + D + D²/2! + D³/3! + ⋯

converges for every linear mapping D in 𝔄, and the linear mapping exp D defined by (13) is 1:1. Also it is easy to see, using (12′), that if D is a derivation, then G = exp D satisfies (xy)G = (xG)(yG). Hence G is an automorphism of 𝔄.

A connection between automorphisms and derivations can be established in a purely algebraic setting which has important applications. Here we suppose the base field Φ of 𝔄 is arbitrary of characteristic 0. Let D be a nilpotent derivation, say, Dᴺ = 0. Consider the mapping

G = exp D = 1 + D + D²/2! + ⋯ + Dᴺ⁻¹/(N − 1)!.

We write this as G = 1 + Z, Z = D + (D²/2!) + ⋯ + (Dᴺ⁻¹/(N − 1)!) and note that Zᴺ = 0. Hence G = 1 + Z has the inverse 1 − Z + Z² − ⋯ ± Zᴺ⁻¹ and so G is a 1:1 mapping of 𝔄 onto 𝔄. We have, by (12′),

(xy)G = Σn (xy)Dⁿ/n! = Σn Σi+j=n (xDⁱ/i!)(yDʲ/j!) = (xG)(yG).

Hence G is an automorphism of 𝔄.
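A minimal concrete instance of this construction (my example, not the book's): in the two-dimensional algebra with basis (e, f) and [ef] = e, the mapping ad e (a derivation, as shown in § 3) is nilpotent with square 0, so G = exp(ad e) = 1 + ad e, and G preserves the product.

```python
import numpy as np

def brk(x, y):
    # [xy] in the algebra Phi e + Phi f with [ef] = e; coordinates are (e, f),
    # vectors are rows, and linear maps act on the right: x -> x @ D.
    return np.array([x[0] * y[1] - x[1] * y[0], 0.0])

D = np.array([[0.0, 0.0],
              [-1.0, 0.0]])     # matrix of ad e: e -> 0, f -> -e
G = np.eye(2) + D               # exp D, since D @ D = 0 (N = 2)

x = np.array([2.0, 5.0])
y = np.array([-1.0, 3.0])
lhs = brk(x, y) @ G             # (xy)G
rhs = brk(x @ G, y @ G)         # (xG)(yG)
```

Since Z = D here, the inverse of G is simply 1 − D, in agreement with the geometric series above.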

3. Inner derivations of associative and Lie algebras

If a is any element of a non-associative algebra 𝔄, then a determines two mappings aL: x → ax and aR: x → xa of 𝔄 into itself. These are called the left multiplication and right multiplication determined by a. The defining conditions (1) and (2) for an algebra show that aL and aR are linear mappings and that the mappings a → aL, a → aR are linear mappings of 𝔄 into the space of linear transformations in 𝔄. Now let 𝔄 be associative and set Da = aR − aL. Thus Da is the linear mapping x → xa − ax. We have

(xy)Da = xya − axy = (xa − ax)y + x(ya − ay) = (xDa)y + x(yDa);

hence Da is a derivation in the associative algebra 𝔄. We shall call this the inner derivation determined by a.

Next let 𝔏 be a Lie algebra. Because of the way Lie algebras arise from associative ones it is customary to denote the product in 𝔏 by [xy] and we shall do this from now on. Also, it is usual to denote the right multiplication aR (= − aL since [xa] = − [ax]) by ad a and to call this the adjoint mapping determined by a. By the Jacobi identity and skew symmetry we have

[xy] ad a = [[xy]a] = [[xa]y] + [x[ya]];

hence ad a: x → [xa] is a derivation. We call this also the inner derivation determined by a.

A subset 𝔅 of a non-associative algebra 𝔄 is called an ideal if (1) 𝔅 is a subspace of the vector space 𝔄, (2) ab, ba ∈ 𝔅 for any a in 𝔄, b in 𝔅. Consider the set of elements of the form Σaibi, ai, bi in 𝔄. We denote this set as 𝔄² and one can check that this is an ideal in 𝔄. If 𝔄 = 𝔏 is a Lie algebra, then it is customary to write 𝔏′ for 𝔏² and to call this the derived algebra (or ideal) of 𝔏. If 𝔏 is a Lie algebra, then the skew symmetry of the multiplication implies that a subspace 𝔅 of 𝔏 is an ideal if and only if [ab] (or [ba]) is in 𝔅 for every a ∈ 𝔏, b ∈ 𝔅. It follows that the subset ℭ of elements c such that [ac] = 0 for all a ∈ 𝔏 is an ideal. This is called the center of 𝔏. 𝔏 is called abelian if 𝔏 = ℭ, which is equivalent to 𝔏′ = 0.

PROPOSITION 2. If 𝔄 is associative or Lie, then the inner derivations form an ideal in the derivation algebra 𝔇 of 𝔄.

Proof. In any non-associative algebra we have (a + b)L = aL + bL, (αa)L = αaL, (a + b)R = aR + bR, (αa)R = αaR. Hence if Da = aR − aL, then Da+b = Da + Db, Dαa = αDa, and the inner derivations of an associative or of a Lie algebra form a subspace ℑ of 𝔇. Let D be a derivation in 𝔄. Then (ax)D = (aD)x + a(xD), or (ax)D − a(xD) = (aD)x. In operator form this reads (xaL)D − (xD)aL = x(aD)L, or [aLD] = aLD − DaL = (aD)L. Similarly, [aRD] = (aD)R and consequently also [Da, D] = DaD, the inner derivation determined by aD. These formulas show that if 𝔄 is associative or Lie and I is an inner derivation and D any derivation, then [ID] is an inner derivation. Hence ℑ is an ideal in 𝔇.

Example. Let 𝔏 be the algebra with basis (e, f) such that [ef] = e = − [fe] and all other products of base elements are 0. Then [aa] = 0 in 𝔏 and, since dim 𝔏 = 2, 𝔏 is a Lie algebra. The derived algebra 𝔏′ = Φe. If D is a derivation in any algebra 𝔄, then 𝔄²D ⊆ 𝔄², since (ab)D = (aD)b + a(bD) ∈ 𝔄². Hence if D is a derivation in 𝔏, then eD = δe for some δ ∈ Φ. Also ad(δf) has the property e ad(δf) = [e, δf] = δe. Hence if E = D − ad δf, then E is a derivation and eE = 0. Then e = [ef] gives 0 = eE = [e, fE]. It follows that fE = γe. Now ad(− γe) satisfies e ad(− γe) = 0, f ad(− γe) = [f, − γe] = γ[ef] = γe. Hence E = ad(− γe) is inner and D = E + ad δf = ad(δf − γe) is inner. Thus every derivation of 𝔏 = Φe + Φf is inner.
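The example can be verified by a direct computation (a sketch with hypothetical helper names; rows of a matrix are the images of e and f, maps act on the right). Since the products [ee] and [ff] vanish and the bracket is skew, the derivation condition only needs to be imposed on the one product [ef] = e; solving it forces D to have the form below, which is exactly ad(δf − γe):

```python
def brk(x, y):
    """[xy] = (x0 y1 - x1 y0) e in Phi e + Phi f with [ef] = e."""
    return (x[0] * y[1] - x[1] * y[0], 0.0)

def apply(x, d):
    """x @ D for a 2x2 matrix d whose rows are the images of e, f."""
    return (x[0] * d[0][0] + x[1] * d[1][0],
            x[0] * d[0][1] + x[1] * d[1][1])

def is_derivation(d):
    # the only non-trivial instance of (xy)D = (xD)y + x(yD) is x = e, y = f
    e, f = (1.0, 0.0), (0.0, 1.0)
    lhs = apply(brk(e, f), d)
    rhs = tuple(p + q for p, q in zip(brk(apply(e, d), f),
                                      brk(e, apply(f, d))))
    return lhs == rhs

def ad(x):
    """Matrix of ad x: rows are [e, x] and [f, x]."""
    e, f = (1.0, 0.0), (0.0, 1.0)
    return [list(brk(e, x)), list(brk(f, x))]

# the general solution of the derivation condition is [[a, 0], [c, 0]],
# which coincides with ad(-c e + a f):
a, c = 3.0, -2.0
D = [[a, 0.0], [c, 0.0]]
```

In the notation of the text, a = δ and c = γ up to sign conventions, so the solve reproduces D = ad(δf − γe).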

In group theory one defines a group to be complete if all of its automorphisms are inner and its center is the identity. If H is complete and invariant in G then H is a direct factor of G. By analogy we shall call a Lie algebra complete if its derivations are all inner and its center is 0.

PROPOSITION 3. If 𝔎 is complete and an ideal in a Lie algebra 𝔏, then 𝔏 = 𝔎 ⊕ 𝔅 where 𝔅 is an ideal.

Proof: We note first that if 𝔎 is an ideal in 𝔏, then the centralizer ℭ of 𝔎, that is, the set of elements b such that [kb] = 0 for all k ∈ 𝔎, is an ideal. ℭ is evidently a subspace and if b ∈ ℭ and a ∈ 𝔏, then [k[ba]] = − [a[kb]] − [b[ak]] = 0 − [b, k′], k′ = [ak] ∈ 𝔎; hence [k[ba]] = 0 for all k ∈ 𝔎 and [ba] ∈ ℭ. Hence ℭ is an ideal. Now let 𝔎 be complete. If c ∈ 𝔎 ∩ ℭ, then c is in the center of 𝔎 and so c = 0. Hence 𝔎 ∩ ℭ = 0. Next let a ∈ 𝔏. Since 𝔎 is an ideal in 𝔏, ad a maps 𝔎 into itself and hence it induces a derivation D in 𝔎. This is inner and so we have a k ∈ 𝔎 such that xD = [xa] = [xk] for every x ∈ 𝔎. Then b = a − k ∈ ℭ and a = b + k, b ∈ ℭ, k ∈ 𝔎. Thus 𝔏 = 𝔎 + ℭ = 𝔎 ⊕ ℭ as required.

Example. The algebra Φe + Φf of the last example is complete.

4. Determination of the Lie algebras of low dimensionalities

We shall now determine all the Lie algebras 𝔏 such that dim 𝔏 ≤ 3. If (e1, e2, …, en) is a basis for a Lie algebra 𝔏, then [eiei] = 0 and [ejei] = − [eiej]. Hence in giving the multiplication table for the basis, it suffices to give the products [eiej] for i < j. We shall use these abbreviated multiplication tables in our discussion.

I. dim 𝔏 = 1. Then 𝔏 = Φe, [ee] = 0.

II. dim 𝔏 = 2.

(a) 𝔏′ = 0, 𝔏 is abelian.

(b) 𝔏′ ≠ 0. Since 𝔏 = Φe + Φf, 𝔏′ = Φ[ef] is one-dimensional. We may choose e so that 𝔏′ = Φe. Then [ef] = αe ≠ 0 and replacement of f by α⁻¹f permits us to take [ef] = e. Then 𝔏 is the algebra of the example of § 3. This can now be characterized as the non-abelian two-dimensional Lie algebra.

III. dim 𝔏 = 3.

(a) 𝔏′ = 0, 𝔏 abelian.

(b) dim 𝔏′ = 1, 𝔏′ ⊆ ℭ the center. If 𝔏′ = Φe, we write 𝔏 = Φe + Φf + Φg. Since e is central, 𝔏′ = Φ[fg]. Hence we may suppose [fg] = e. Thus 𝔏 has basis (e, f, g), with multiplication table

(16) [ef] = 0, [eg] = 0, [fg] = e.

We have only one Lie algebra satisfying our conditions. (If we have (16), then the Jacobi condition is satisfied.)

(c) dim 𝔏′ = 1, 𝔏′ ⊄ ℭ the center. If 𝔏′ = Φe, then there is an f such that [ef] ≠ 0. Then [ef] = βe ≠ 0 and we may suppose [ef] = e. Hence 𝔎 = Φe + Φf is the non-abelian two-dimensional algebra. Since 𝔎 ⊇ 𝔏′, 𝔎 is an ideal, and since 𝔎 is complete, 𝔏 = 𝔎 ⊕ ℭ, ℭ = Φg. Hence 𝔏 has basis (e, f, g) with multiplication table

(17) [ef] = e, [eg] = 0, [fg] = 0.
(d) dim 𝔏′ = 2. 𝔏′ cannot be the non-abelian two-dimensional Lie algebra. For then 𝔏′ would be complete and, by Proposition 3, 𝔏 = 𝔏′ ⊕ ℭ; this would give 𝔏′ = [𝔏𝔏] = [𝔏′𝔏′] = 𝔏″. But 𝔏″ ⊂ 𝔏′, since the derived algebra of the non-abelian two-dimensional algebra is one-dimensional. Hence 𝔏′ is abelian. Let 𝔏′ = Φe + Φf and 𝔏 = Φe + Φf + Φg. Then 𝔏′ = Φ[eg] + Φ[fg] and so ad g induces a 1:1 linear mapping in 𝔏′. Hence we have a basis (e, f, g) with

(18) [ef] = 0, [eg] = αe + βf, [fg] = γe + δf,

where

A = (α  β)
    (γ  δ)

is a non-singular matrix. Conversely, in any space with basis (e, f, g) we can define a product [ab] so that [aa] = 0 and (18) holds. Then [[ef]g] + [[fg]e] + [[ge]f] = 0, and hence 𝔏 is a Lie algebra. What changes can be made in the multiplication table (18)? Our choice of basis amounts to this: we have chosen a basis (e, f) for 𝔏′ and supplemented this with a g to get a basis for 𝔏. A change of basis in 𝔏′ will change A to a similar matrix M⁻¹AM. The type of change allowable for g is to replace it by ρg + x, ρ ≠ 0 in Φ, x in 𝔏′. Then [e, ρg + x] = ρ[eg], [f, ρg + x] = ρ[fg], so this changes A to ρA. Hence the different matrices A which can be used in (18) are the non-zero multiples of the matrices similar to A. This means that we have a 1:1 correspondence between the algebras 𝔏 satisfying dim 𝔏 = 3, dim 𝔏′ = 2 and the conjugacy classes in the two-dimensional collineation group.

If the field is algebraically closed we can choose A in one of the following forms:

A = (α  0)          A = (1  0)
    (0  1) ,            (1  1) .

These give the multiplication tables

[ef] = 0, [eg] = αe, [fg] = f;     [ef] = 0, [eg] = e, [fg] = e + f.

Different choices of α give different algebras unless αα′ = 1. Hence we get an infinite number of non-isomorphic algebras.

(e) dim 𝔏′ = 3. Let (e1, e2, e3) be a basis and set [e2e3] = f1, [e3e1] = f2, [e1e2] = f3. Then (f1, f2, f3) is a basis. Write fi = Σj αijej, A = (αij) non-singular. The only Jacobi condition which has to be imposed is that [f1e1] + [f2e2] + [f3e3] = 0. This gives

(α32 − α23)f1 + (α13 − α31)f2 + (α21 − α12)f3 = 0.

Hence αij = αji and so A is a symmetric matrix. Let (ē1, ē2, ē3) be a second basis, ēi = Σj μijej, where M = (μij) is non-singular. Set f̄1 = [ē2ē3], f̄2 = [ē3ē1], f̄3 = [ē1ē2]. We have, for (i, j, k) any cyclic permutation of (1, 2, 3),

f̄i = [ējēk] = Σl νilfl.

The matrix N = (νij) = adj M′ = (M′)⁻¹ det M′. The matrix relating the f’s to the e’s is A and that relating the e’s to the ē’s is M⁻¹. Hence if Ā is the matrix (ᾱij) such that f̄i = Σj ᾱijēj, then

(19) Ā = NAM⁻¹ = (det M)(M⁻¹)′AM⁻¹.
Two matrices A, B are called multiplicatively cogredient if B = ρN′AN where N is non-singular and ρ ≠ 0 in Φ. In this case set σ = ρ det N and M = σN⁻¹. Since the matrices here have three rows and columns, det M = σ³/det N = ρσ², and B = ρσ²(M⁻¹)′AM⁻¹ = μ(M⁻¹)′AM⁻¹ with μ = ρσ² = det M. Thus we have the relation (19). Thus the conditions on A and Ā are that these symmetric matrices are multiplicatively cogredient. Hence with each 𝔏 satisfying our conditions we can associate a unique class of non-singular multiplicatively cogredient symmetric matrices. We have as many algebras as there are classes of such matrices. For the remainder of this section we assume the characteristic is not two. Then each cogredience class contains a diagonal matrix of the form diag {α, β, 1}, αβ ≠ 0. This implies that the basis can be chosen so that

(20) [e2e3] = αe1, [e3e1] = βe2, [e1e2] = e3.
If the base field is the field of reals, then we have two different algebras obtained by taking α = β = 1 and α = − 1, β = 1. If the field is algebraically closed we can take α = β = 1.

We shall now single out a particular algebra in the family of algebras satisfying dim 𝔏 = 3 = dim 𝔏′. We impose the condition that 𝔏 contains an element h such that ad h has a characteristic root α ≠ 0 belonging to Φ. Then we have a vector e ≠ 0 such that [eh] = e ad h = αe ≠ 0 and, since [hh] = 0, e and h are linearly independent and are part of a basis (e1, e2, e3) = (e, h, f). If (f1, f2, f3) are defined as before, the symmetric matrix (αij) is now

(21)  A = (α11  α12  α)
          (α12  α22  0)
          (α    0    0) .

Then we have [eh] = αe, [hh] = 0, [fh] = − αf − α12h − α11e, which implies that the characteristic roots of ad h are 0, α and − α. We may replace f by a characteristic vector belonging to the root − α of ad h. This is linearly independent of (e, h) and may be used for f. Hence we may suppose that [eh] = αe, [fh] = − αf. If we replace h by 2α⁻¹h we obtain [eh] = 2e, [fh] = − 2f. The form of (21) now gives [ef] = βh ≠ 0. If we replace f by β⁻¹f, then we obtain the basis (e, f, h) such that

(22) [eh] = 2e, [fh] = − 2f, [ef] = h.
Thus we see that there is a unique algebra satisfying our condition. We shall see soon that any 𝔏 such that dim 𝔏 = dim 𝔏′ = 3 is simple, that is, 𝔏 has no ideals other than itself and 0 and 𝔏′ ≠ 0. The particular algebra we have singled out by the condition that it contains an h with ad h having a non-zero characteristic root in Φ is called the split three-dimensional simple Lie algebra. It will play an important role in the sequel.
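This algebra has a familiar matrix model (a standard realization by 2 × 2 matrices of trace 0; the particular signs below are my choice, made so that the relations [eh] = 2e, [fh] = − 2f, [ef] = h derived above come out exactly with the commutator [XY] = XY − YX):

```python
import numpy as np

e = np.array([[0.0, 1.0],
              [0.0, 0.0]])
f = np.array([[0.0, 0.0],
              [-1.0, 0.0]])
h = np.array([[-1.0, 0.0],
              [0.0, 1.0]])

def brk(x, y):
    """Lie product in the associative matrix algebra."""
    return x @ y - y @ x
```

Since e, f, h are linearly independent and span the trace-0 matrices, this exhibits the split three-dimensional simple algebra as a Lie algebra of linear transformations.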

5. Representations and modules

If 𝔄 is an associative algebra over a field Φ, then a representation of 𝔄 is a homomorphism of 𝔄 into an algebra of linear transformations of a vector space 𝔐 over Φ. If a → A, b → B in the representation, then, by definition, a + b → A + B, αa → αA, α ∈ Φ, and ab → AB. A right 𝔄-module for the associative algebra 𝔄 is a vector space 𝔐 over Φ together with a binary product of 𝔐 × 𝔄 into 𝔐 mapping the pair (x, a), x ∈ 𝔐, a ∈ 𝔄, into an element xa such that

1. (x1 + x2)a = x1a + x2a,  x(a1 + a2) = xa1 + xa2,
2. α(xa) = (αx)a = x(αa),  α ∈ Φ,
3. x(ab) = (xa)b.
If a → A is a representation of 𝔄 acting in the vector space 𝔐, then 𝔐 can be made into a right 𝔄-module by defining xa = xA. Thus we will have, for instance, x(ab) = x(AB) = (xA)B = (xa)b, and 1 and 2 follow similarly. Conversely, if 𝔐 is any right 𝔄-module, then for any a we let A denote the mapping x → xa. Then the first part of 1 and the first part of 2 show that A is a linear transformation in 𝔐 over Φ. The rest of the conditions in 1, 2, and 3 imply that a → A is a representation.

In the theory of representations and in other parts of the theory of associative algebras, algebras with an identity play a preponderant role. In fact, for most considerations it is convenient to confine attention to these algebras and to consider only those homomorphisms which map the identity into the identity of the image algebra. In the sequel we shall find it useful at times to deal with associative algebras which need not have identities. We shall therefore adopt the following conventions on terminology: “algebra” without any modifier will mean “associative algebra with an identity.” For these, “subalgebra” will mean a subalgebra in the usual sense containing the identity, and “homomorphism” will mean a homomorphism in the usual sense mapping 1 into 1. In particular, this will be understood for representations. The corresponding notion of a module is defined by 1 through 3 above, together with the condition

4. x1 = x, x ∈ 𝔐.

If we wish to allow the possibility that 𝔄 does not have an identity, then we shall speak of the “associative algebra 𝔄,” and if we wish to drop 4, then we shall speak of a “module for the associative algebra 𝔄” rather than a module for “the algebra 𝔄.”

The algebra 𝔄 can be considered as a right 𝔄-module by taking xa to be the product as defined in 𝔄. Then 1, 2, and 3 hold as a consequence of the axioms for an algebra and 4 holds since 1 is the identity. The representation a → A where A is the linear transformation x → xa is called the regular representation. We have seen (§ 2) that the regular representation is faithful, that is, an isomorphism.

Now let 𝔏 be a Lie algebra. Then we define a representation of 𝔏 to be a homomorphism l → L of 𝔏 into a Lie algebra 𝔈L, 𝔈 the algebra of linear transformations of a vector space 𝔐 over Φ. The conditions here are that if l1 → L1, l2 → L2, then

(23) l1 + l2 → L1 + L2,  αl1 → αL1,  [l1l2] → [L1L2] = L1L2 − L2L1.

We now define xl for x ∈ 𝔐, l ∈ 𝔏 by xl = xL. Then (23) and the linearity of L give the following conditions:

1. (x1 + x2)l = x1l + x2l,  x(l1 + l2) = xl1 + xl2,
2. α(xl) = (αx)l = x(αl),  α ∈ Φ,
3. x[l1l2] = (xl1)l2 − (xl2)l1.
We shall now use these conditions to define the concept of an 𝔏-module, 𝔏 a Lie algebra. This is a vector space 𝔐 over Φ with a mapping of 𝔐 × 𝔏 into 𝔐 such that the result xl satisfies 1, 2, and 3 above.

As in the associative case, the concepts of module and representation are equivalent. Thus we have indicated that if l → L is a representation, then the representation space 𝔐 can be considered as a module. On the other hand, if 𝔐 is any module, then for any l we let L denote the mapping x → xl. Then L is linear in 𝔐 over Φ and l → L is a representation of 𝔏 by linear transformations in 𝔐 over Φ.

We note next that 𝔏 itself can be considered as an 𝔏-module by taking xl to be the product [xl] in 𝔏. Then 1 and 2 are consequences of the axioms for an algebra and 3 follows from the Jacobi identity and skew symmetry. We have denoted the representing transformation of l: x → [xl] by ad l. The representation l → ad l determined by this module is called the adjoint representation of 𝔏. We recall that the mappings ad l are derivations in 𝔏.

If 𝔐 is a module for the Lie algebra 𝔏, then we can consider 𝔐 as an abelian Lie algebra with product [xy] = 0. Then the mappings x → xl are linear transformations in 𝔐 and, because of the triviality of the multiplication in 𝔐, these are derivations. More generally, we suppose now that 𝔐 is an 𝔏-module which is at the same time a Lie algebra, and we assume that the module mappings x → xl are derivations in 𝔐. Thus, in addition to the axioms for a Lie algebra in 𝔐 and in 𝔏 and 1, 2, and 3 above, we have also

4. [x1x2]l = [x1l, x2] + [x1, x2l].

Now let 𝔎 be the direct sum of the two vector spaces 𝔏 and 𝔐. We introduce in 𝔎 a multiplication [uv] by means of the formula

(24) [x1 + l1, x2 + l2] = ([x1x2] + x1l2 − x2l1) + [l1l2],  xi ∈ 𝔐, li ∈ 𝔏.
It is clear that this composition is bilinear, so it makes the vector space 𝔎 into a non-associative algebra. Moreover,

[x + l, x + l] = [xx] + xl − xl + [ll] = 0.

If we permute 1, 2, and 3 cyclically in [[x1 + l1, x2 + l2], x3 + l3] and add, then the terms involving three x’s or three l’s add up to 0 by the Jacobi identity in 𝔐 and in 𝔏. The terms involving two x’s and one l are

[[x1x2]l] + [[x2l]x1] + [[lx1]x2] = [x1x2]l − [x1, x2l] − [x1l, x2] = 0

by 4. The terms involving two l’s and one x are

[[l1l2]x] + [[l2x]l1] + [[xl1]l2] = − x[l1l2] − (xl2)l1 + (xl1)l2 = 0

by 3.
This shows that 𝔎 = 𝔏 ⊕ 𝔐 is a Lie algebra. It is immediate from (24) and the fact that [xl] ∈ 𝔐 for x in 𝔐, l in 𝔏, that 𝔏 is a subalgebra of 𝔎 and 𝔐 is an ideal in 𝔎. We shall call 𝔎 the split extension of 𝔏 by 𝔐.

An important special case of this is obtained by taking 𝔐 = 𝔏0 to be any Lie algebra and 𝔏 = 𝔇 the derivation algebra of 𝔏0. Since 𝔇 is, by definition, a Lie algebra of linear transformations in 𝔏0, the identity mapping is a representation of 𝔇 acting in 𝔏0. 𝔏0 becomes a 𝔇-module by defining the module product lD, l ∈ 𝔏0, D ∈ 𝔇, to be the image of l under D. The split extension of 𝔇 by 𝔏0 is called the holomorph of 𝔏0. This is the analogue of the holomorph of a group, which is an extension of the group of automorphisms of a group by the group.
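As a concrete check (my construction, not the book's), here is the holomorph of the two-dimensional non-abelian algebra: its derivation algebra is spanned by the matrices of ad e and ad f (every derivation is inner, by the example of § 3), and the bracket (24) makes the vector-space direct sum into a Lie algebra.

```python
import numpy as np

def brk0(x, y):
    """[xy] in Phi e + Phi f, [ef] = e; rows are coordinate vectors."""
    return np.array([x[0] * y[1] - x[1] * y[0], 0.0])

# matrices of ad e and ad f (maps act on the right, rows are images of e, f)
ad_e = np.array([[0.0, 0.0], [-1.0, 0.0]])
ad_f = np.array([[1.0, 0.0], [0.0, 0.0]])

def brk(u, v):
    """Bracket (24) on pairs (x, D), x in the algebra, D a derivation."""
    x, D1 = u
    y, D2 = v
    return (brk0(x, y) + x @ D2 - y @ D1, D1 @ D2 - D2 @ D1)

rng = np.random.default_rng(2)

def rand_elt():
    return (rng.standard_normal(2),
            rng.standard_normal() * ad_e + rng.standard_normal() * ad_f)

u, v, w = rand_elt(), rand_elt(), rand_elt()
# the Jacobi sum in the split extension, componentwise (module part, derivation part)
j = tuple(a + b + c for a, b, c in zip(brk(brk(u, v), w),
                                       brk(brk(v, w), u),
                                       brk(brk(w, u), v)))
```

The vanishing of both components of j is exactly the case analysis above: condition 4 kills the two-x terms and condition 3 kills the two-D terms.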

We can make the same construction for any Lie algebra 𝔏0 and any subalgebra 𝔇1 of its derivation algebra. In particular, it is often useful to do this for 𝔇1 = ΦD, the subalgebra of multiples of one derivation D of 𝔏0.

Another important special case of a split extension is obtained by taking 𝔐 and 𝔏 to be arbitrary Lie algebras and considering 𝔐 as a trivial 𝔏-module by defining ml = 0, m ∈ 𝔐, l ∈ 𝔏. The Lie algebra 𝔎 = 𝔏 ⊕ 𝔐 is the direct sum of 𝔏 and 𝔐. More generally, if 𝔏1, 𝔏2, …, 𝔏r are Lie algebras, then the direct sum 𝔏 = 𝔏1 ⊕ 𝔏2 ⊕ … ⊕ 𝔏r is the vector space direct sum of the 𝔏i with the Lie product [Σli, Σmi] = Σ[limi], li, mi ∈ 𝔏i. As in group theory, if 𝔏 is a Lie algebra and contains ideals 𝔏i such that 𝔏 = 𝔏1 ⊕ 𝔏2 ⊕ … ⊕ 𝔏r as vector space, then [lilj] ∈ 𝔏i ∩ 𝔏j = 0 if li ∈ 𝔏i, lj ∈ 𝔏j and i ≠ j. Then 𝔏 is isomorphic to the direct sum of the Lie algebras 𝔏i and we shall say that 𝔏 is a direct sum of the ideals 𝔏i of 𝔏.

The kernel ℜ of a homomorphism η of a Lie algebra 𝔏 into a Lie algebra 𝔏1 is an ideal in 𝔏, and the image 𝔏η is a subalgebra of 𝔏1. The fundamental theorem of homomorphisms states that 𝔏η ≅ 𝔏/ℜ under the correspondence lη ↔ l + ℜ. We recall that 𝔏/ℜ is the vector space 𝔏/ℜ considered as an algebra relative to the multiplication [l1 + ℜ, l2 + ℜ] = [l1l2] + ℜ. This is a Lie algebra. The kernel of the adjoint representation is the set of elements c such that [xc] = 0 for all x. This is just the center ℭ of 𝔏. The image ad 𝔏 is the algebra of inner derivations and we have ad 𝔏 ≅ 𝔏/ℭ. If ℭ = 0, ad 𝔏 is a Lie algebra of linear transformations isomorphic to 𝔏. Thus in this case we obtain in an easy way a faithful representation of 𝔏. We shall see later that every 𝔏 has a faithful representation and that every finite-dimensional 𝔏 has a faithful finite-dimensional representation, that is, a faithful representation acting in a finite-dimensional space.

Examples. We shall now determine the matrices of the adjoint representations of two of our examples.

(a) 𝔏 the Lie algebra with basis (e, f), [ef] = e. We have e ad e = 0, f ad e = − e; e ad f = e, f ad f = 0. Hence relative to the basis (e, f) the matrices are

ad e = ( 0  0)        ad f = (1  0)
       (−1  0) ,             (0  0) .
(b) the Lie algebra with basis (e1e2e3) such that [e1e2] = e3, [e2e3] = e1, [e3e1] = e2. Here e1 ad e1 = 0, e2 ad e1 = – e3, e3 ad e1 = e2; e1 ad e2 = e3, e2 ad e2 = 0, e3 ad e2 = 0, e3 ad e2 = –e1; e1 ad e3 = –e2, e2 ad e3 = e1, e3 ad e3 = 0. Hence the representation by matrices is

Note that the matrices are skew symmetric and form a basis for the space of skew symmetric matrices. Hence we see that 𝔏 is isomorphic to the Lie algebra of skew symmetric matrices in the matrix algebra Φ3.
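The computation in example (b) can be checked by machine, since its multiplication table is that of the vector cross product on 3-space. The following is a minimal numerical sketch (assuming NumPy is available; the helper names are ours, not the text's):

```python
import numpy as np

# basis e1, e2, e3 with [e1e2] = e3, [e2e3] = e1, [e3e1] = e2:
# the multiplication is the vector cross product on 3-space.
basis = np.eye(3)

def ad(a):
    # matrix of ad a, the i-th row being the coordinates of [e_i, a]
    return np.array([np.cross(basis[i], a) for i in range(3)])

ads = [ad(basis[k]) for k in range(3)]
for A in ads:
    assert np.allclose(A, -A.T)                       # skew symmetric
# the three matrices are independent, hence span the skew 3 x 3 matrices
assert np.linalg.matrix_rank(np.array([A.flatten() for A in ads])) == 3
# ad is a homomorphism: ad [e1e2] = [ad e1, ad e2]
X, Y = ads[0], ads[1]
assert np.allclose(ad(np.cross(basis[0], basis[1])), X @ Y - Y @ X)
```

With the row convention (operators on the right) the commutator of the matrices is taken as XY − YX, and the three assertions reproduce the observations of the text.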

6. Some basic module operations

The notion of a submodule of a module 𝔐 for a Lie or associative algebra is clear: a submodule 𝔑 is a subspace of 𝔐 closed under the composition by elements of the algebra. If 𝔑 is a submodule, then we obtain the factor module 𝔐/𝔑 which is the coset space 𝔐/𝔑 with the module compositions (x + 𝔑)a = xa + 𝔑, a in the algebra. If 𝔐1 and 𝔐2 are two modules for an associative or a Lie algebra, then the space 𝔐1 ⊕ 𝔐2 is a module relative to the composition (x1 + x2)a = x1a + x2a, xi ∈ 𝔐i. This module is called the direct sum 𝔐1 ⊕ 𝔐2 of the two given modules. A similar construction can be given for any number of modules.

The module concepts we have just indicated are applicable to both associative and Lie algebras. We shall now consider some notions which apply only in the Lie case. These are analogous to well-known concepts of the representation theory of groups. The principal ones we shall consider are the tensor product of modules and the contragredient module.

We suppose first that l → L1 and l → L2 are two representations of a Lie algebra 𝔏 by linear transformations acting in the same vector space 𝔐 over Φ. We assume, moreover, that if L1 is any representing transformation from the first representation and M2 is any representing transformation from the second representation, then [L1M2] = L1M2 − M2L1 = 0. We shall now define a new mapping of 𝔏 into the algebra 𝔈 of linear transformations in 𝔐 by

l → L1 + L2.

Since this is the sum of the linear mappings l → L1 and l → L2 it is linear. Now let m → M1, m → M2, so that in the new mapping we have m → M1 + M2. Then

[L1 + L2, M1 + M2] = [L1M1] + [L1M2] + [L2M1] + [L2M2] = [L1M1] + [L2M2],

and since [lm] → [L1M1] + [L2M2], the new mapping is a representation. (Note: Nothing like this holds in the associative case.)

We suppose next that 𝔐1 and 𝔐2 are any two modules for a Lie algebra 𝔏. Consider 𝔐1 ⊗ 𝔐2 (≡ 𝔐1 ⊗Φ 𝔐2), the tensor (or Kronecker or direct) product of 𝔐1 and 𝔐2. We recall that if Ai is a linear transformation in 𝔐i, then the mapping A1 ⊗ A2: x1 ⊗ x2 → x1A1 ⊗ x2A2 is a linear transformation in 𝔐1 ⊗ 𝔐2. We have the rules

(A1 + A1′) ⊗ A2 = A1 ⊗ A2 + A1′ ⊗ A2,  A1 ⊗ (A2 + A2′) = A1 ⊗ A2 + A1 ⊗ A2′,
(αA1) ⊗ A2 = α(A1 ⊗ A2) = A1 ⊗ (αA2),  (A1 ⊗ A2)(B1 ⊗ B2) = A1B1 ⊗ A2B2.

It is clear from these relations that the mapping A1 → A1 ⊗ 12, 12 the identity in 𝔐2, is a homomorphism of the algebra 𝔈(𝔐1) of linear transformations in 𝔐1 into the algebra 𝔈(𝔐1 ⊗ 𝔐2). Similarly, A2 → 11 ⊗ A2 is a homomorphism of 𝔈(𝔐2) into 𝔈(𝔐1 ⊗ 𝔐2). Now we have the representations Ri determined by the 𝔐i. The linear transformation lRi associated with l is xi → xil. The resultant of the Lie homomorphism l → lRi with the associative (hence Lie) homomorphism of 𝔈(𝔐i) into 𝔈(𝔐1 ⊗ 𝔐2) is a representation of 𝔏 acting in 𝔐1 ⊗ 𝔐2. The two representations of 𝔏 obtained in this way are

l → lR1 ⊗ 12,  l → 11 ⊗ lR2.

If l → lR1 ⊗ 12 and m → 11 ⊗ mR2, then

[lR1 ⊗ 12, 11 ⊗ mR2] = lR1 ⊗ mR2 − lR1 ⊗ mR2 = 0.

Hence the commutativity condition of the last paragraph holds. It now follows that

l → lR1 ⊗ 12 + 11 ⊗ lR2

is a representation of 𝔏 acting in 𝔐1 ⊗ 𝔐2. In this way 𝔐1 ⊗ 𝔐2 is an 𝔏-module with the module composition defined by

(x1 ⊗ x2)l = x1l ⊗ x2 + x1 ⊗ x2l.

The module 𝔐1 ⊗ 𝔐2 obtained in this way is called the tensor product of the two modules 𝔐i. The same terminology is applied to the representation, which we denote as R1 ⊗ R2.
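The homomorphism property of l → lR1 ⊗ 12 + 11 ⊗ lR2 can be illustrated numerically with Kronecker products. A minimal sketch (assuming NumPy; we take both modules to be the adjoint module of the two-dimensional algebra of example (a), [ef] = e, so the matrices are illustrative choices):

```python
import numpy as np

# adjoint representation of the algebra with basis (e, f), [ef] = e
# (rows are images; operators act on the right as in the text)
E = np.array([[0., 0.], [-1., 0.]])   # ad e
F = np.array([[1., 0.], [0., 0.]])    # ad f

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(E, F), E)     # [ad e, ad f] = ad [ef] = ad e

# On M1 (x) M2 the element l acts as  lR1 (x) 1  +  1 (x) lR2;
# with matrices this is a sum of Kronecker products.
I = np.eye(2)
def tensor_rep(L):
    return np.kron(L, I) + np.kron(I, L)

TE, TF = tensor_rep(E), tensor_rep(F)
# homomorphism property: the image of [ef] = e equals [TE, TF]
assert np.allclose(comm(TE, TF), TE)
```

The cross terms of [TE, TF] vanish exactly as in the commutativity computation above, which is why the sum of the two Kronecker factors is again a representation.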

We consider next a Lie module 𝔐 and the dual space 𝔐* of linear functions on 𝔐 (to the base field). We shall denote the value of a linear function y* at the vector x by <x, y*>. Then <x, y*> ∈ Φ and this product is bilinear:

<x1 + x2, y*> = <x1, y*> + <x2, y*>,  <x, y1* + y2*> = <x, y1*> + <x, y2*>,
<αx, y*> = α<x, y*> = <x, αy*>.

Also the product is non-degenerate. Any linear transformation A in 𝔐 determines an adjoint (or transpose) transformation A* in 𝔐* such that

<xA, y*> = <x, y*A*>.

The mapping A → A* is an associative anti-homomorphism of 𝔈(𝔐) into 𝔈(𝔐*). Now consider the mapping A → −A*. This is linear and

[AB] = AB − BA → −(AB − BA)* = A*B* − B*A* = [(−A*)(−B*)].

Hence A → −A* is a homomorphism of 𝔈(𝔐)L into 𝔈(𝔐*)L. If we now take the resultant of the representation l → lR determined by 𝔐 with A → −A*, we obtain a new representation l → −(lR)* of 𝔏. For the corresponding module 𝔐* we have

y*l = −y*(lR)*.

Hence the characteristic property relating the two modules is

<xl, y*> + <x, y*l> = 0.

We call the module 𝔐* defined in this way the contragredient module of 𝔐 and denote the corresponding representation by R*.
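That A → −A* reverses the order of products twice, and hence preserves Lie products, is easy to test numerically. A minimal sketch (assuming NumPy; relative to dual bases the adjoint is the matrix transpose, and the matrices are the illustrative ones from example (a)):

```python
import numpy as np

# representing transformations of the algebra [ef] = e (adjoint matrices)
E = np.array([[0., 0.], [-1., 0.]])
F = np.array([[1., 0.], [0., 0.]])

def comm(A, B):
    return A @ B - B @ A

def contragredient(A):
    # relative to dual bases the adjoint A* is the transpose, so the
    # contragredient representation sends A to -A^T
    return -A.T

# A -> -A* preserves Lie products: [(-A*)(-B*)] = -[AB]*
assert np.allclose(comm(contragredient(E), contragredient(F)),
                   contragredient(comm(E, F)))
```

The single anti-homomorphism A → A* would reverse brackets; the extra sign restores them, which is the point of the construction.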

We recall that if 𝔐 is a finite-dimensional space, then there is a natural isomorphism of 𝔐 ⊗ 𝔐* onto the vector space 𝔈(𝔐) of linear transformations in 𝔐. If x ⊗ y* ∈ 𝔐 ⊗ 𝔐*, then the corresponding linear transformation in 𝔐 is u → <u, y*>x. If 𝔐 is a module for 𝔏, then 𝔐* and 𝔐 ⊗ 𝔐* are modules. By definition,

(x ⊗ y*)l = xl ⊗ y* + x ⊗ y*l.

If we denote x → xl by lR, then y* → y*l is −(lR)*. Then in the representation in 𝔐 ⊗ 𝔐* the mapping corresponding to l sends x ⊗ y* into xl ⊗ y* + x ⊗ y*l. The elements of 𝔈(𝔐) associated with x ⊗ y* and its image are

A: u → <u, y*>x,  B: u → <u, y*>(xl) − <ul, y*>x.

It is clear from these formulas that B = [A, lR]. We can interpret this result as follows: Consider an arbitrary 𝔏-module 𝔐 and the algebra 𝔈(𝔐). If lR is the representing transformation of l, then the mapping X → [X, lR] is a representation of 𝔏 acting in 𝔈(𝔐). The result we have shown is that this representation is equivalent to the representation in 𝔐 ⊗ 𝔐*; that is, the module 𝔈(𝔐) is isomorphic to 𝔐 ⊗ 𝔐*.

The result just indicated can be generalized to a pair of vector spaces 𝔐1, 𝔐2. We recall that the set of linear transformations of 𝔐2 into 𝔐1 is a vector space under the usual compositions of addition and scalar multiplication. If the spaces are finite-dimensional, then there is a natural isomorphism of 𝔐1 ⊗ 𝔐2* onto this space of transformations, mapping the element x1 ⊗ y2*, x1 ∈ 𝔐1, y2* ∈ 𝔐2*, into the linear mapping u → <u, y2*>x1 of 𝔐2 into 𝔐1. If 𝔐1 and 𝔐2 are 𝔏-modules, then the space of linear transformations of 𝔐2 into 𝔐1 is an 𝔏-module relative to the composition Xl = XlR1 − lR2X. As for a single space, this module is isomorphic to 𝔐1 ⊗ 𝔐2* under the space isomorphism we defined.

7. Ideals, solvability, nilpotency

If 𝔅1 and 𝔅2 are subspaces of a Lie algebra 𝔏, then we write 𝔅1 ∩ 𝔅2, 𝔅1 + 𝔅2, respectively, for the intersection and the space spanned by 𝔅1 and 𝔅2. The latter is just the collection of elements of the form b1 + b2, bi in 𝔅i. We now define [𝔅1𝔅2] to be the subspace spanned by all products [b1b2], bi ∈ 𝔅i. It is immediate that this is the set of sums Σ[b1b2], bi ∈ 𝔅i. We assume the reader is familiar with the (lattice) properties of the set of subspaces relative to the compositions ∩ and + and we proceed to state the main properties of the composition [𝔅1𝔅2]. We list these as follows and leave the verification to the reader:

(1) [𝔅1𝔅2] = [𝔅2𝔅1],  (2) [𝔅1, 𝔅2 + 𝔅3] = [𝔅1𝔅2] + [𝔅1𝔅3],  (3) [[𝔅1𝔅2]𝔅3] ⊆ [[𝔅1𝔅3]𝔅2] + [𝔅1[𝔅2𝔅3]].

A subspace 𝔅 is an ideal if and only if [𝔅𝔏] ⊆ 𝔅. The intersection and sum of ideals is an ideal, and the relation [[𝔅1𝔅2]𝔏] ⊆ [[𝔅1𝔏]𝔅2] + [𝔅1[𝔅2𝔏]], a consequence of the Jacobi identity, shows that the same is true of the (Lie) product of ideals. In particular, it is clear that the terms of the derived series

𝔏 ⊇ 𝔏′ = [𝔏𝔏] ⊇ 𝔏″ = [𝔏′𝔏′] ⊇ 𝔏‴ = [𝔏″𝔏″] ⊇ …

are ideals. The same is true of the terms of the lower central series

𝔏 = 𝔏1 ⊇ 𝔏2 = [𝔏𝔏] ⊇ 𝔏3 = [𝔏2𝔏] ⊇ … ,  𝔏i = [𝔏i−1𝔏].

These series are analogous to the derived series and the lower central series of a group. More generally, if 𝔅 is an ideal in 𝔏, then the derived algebras 𝔅(i) and the powers 𝔅i are ideals. We note also that if η is a homomorphism of 𝔏 into a second Lie algebra, then (𝔏(i))η = (𝔏η)(i) and (𝔏i)η = (𝔏η)i. These are readily proved by induction on i.

A Lie algebra 𝔏 is said to be solvable if 𝔏(h) = 0 for some positive integer h. Any abelian algebra is solvable, and it is immediate from the list of Lie algebras of dimensions ≤ 3 that all of these are solvable with the exception of those such that dim 𝔏 = 3 = dim 𝔏′. The algebra of triangular matrices is another example of a solvable Lie algebra (Exercise 12 below).
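The solvability of the triangular matrices can be checked mechanically by computing the dimensions of the terms of the derived series. A minimal numerical sketch (assuming NumPy; the greedy basis-extraction helper and all names are ours) for n = 3:

```python
import numpy as np

n = 3
# basis of the Lie algebra t of upper triangular n x n matrices
t = []
for i in range(n):
    for j in range(i, n):
        M = np.zeros((n, n)); M[i, j] = 1.0
        t.append(M)

def span_dim(mats):
    return int(np.linalg.matrix_rank(np.array([M.flatten() for M in mats]))) if mats else 0

def spanning_subset(mats):
    # greedily extract a linearly independent spanning subset
    out = []
    for M in mats:
        if span_dim(out + [M]) > span_dim(out):
            out.append(M)
    return out

# derived series: t, t' = [t t], t'' = [t' t'], ...
current, dims = t, [span_dim(t)]
while span_dim(current) > 0:
    current = spanning_subset([A @ B - B @ A for A in current for B in current])
    dims.append(span_dim(current))

print(dims)   # -> [6, 3, 1, 0]
```

The dimensions drop 6, 3, 1, 0: the derived algebra is the strictly triangular set, the second derived algebra is spanned by the matrix unit with a 1 in the upper right corner, and the third is 0, so the algebra is solvable.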

LEMMA. Every subalgebra and every homomorphic image of a solvable Lie algebra is solvable. If 𝔏 contains a solvable ideal 𝔅 such that 𝔏/𝔅 is solvable, then 𝔏 is solvable.

Proof: The first two statements are clear. If 𝔅 is an ideal such that 𝔏/𝔅 is solvable, then (𝔏/𝔅)(h) = 0 for some positive integer h. Thus let η be the canonical homomorphism l → l + 𝔅 of 𝔏 onto 𝔏/𝔅. Then (𝔏(h))η = (𝔏η)(h) = 0 if h is sufficiently large. Hence 𝔏(h) ⊆ 𝔅. If 𝔅 is solvable we have 𝔅(k) = 0 for some k. Hence 𝔏(h) ⊆ 𝔅 implies 𝔏(h+k) = (𝔏(h))(k) ⊆ 𝔅(k) = 0 and 𝔏 is solvable.

PROPOSITION 4. The sum of any two solvable ideals is a solvable ideal.

Proof: Let 𝔅1 and 𝔅2 be solvable ideals. By one of the standard isomorphism theorems 𝔅1 ∩ 𝔅2 is an ideal in 𝔅1 and (𝔅1 + 𝔅2)/𝔅2 ≅ 𝔅1/(𝔅1 ∩ 𝔅2). This is solvable since it is a homomorphic image of the solvable algebra 𝔅1. Since 𝔅2 is solvable the lemma applies to prove 𝔅1 + 𝔅2 solvable.

Now suppose 𝔏 is finite-dimensional and let 𝔖 be a solvable ideal of maximum dimensionality of 𝔏. Then Proposition 4 implies that if 𝔅 is any solvable ideal, 𝔖 + 𝔅 is solvable and an ideal. Hence 𝔖 + 𝔅 = 𝔖 since dim 𝔖 is maximal. Consequently, 𝔅 ⊆ 𝔖. We therefore have the existence of a solvable ideal 𝔖 which contains every solvable ideal. We call 𝔖 the radical of 𝔏. If 𝔖 = 0, that is, 𝔏 has no non-zero solvable ideal, then 𝔏 is called semi-simple. If 𝔏 has no ideals ≠ 0 and 𝔏, and if 𝔏′ ≠ 0, then 𝔏 is called simple. If 𝔏 is simple and 𝔖 is its radical, then we must have 𝔖 = 𝔏 or 𝔖 = 0. But if 𝔖 = 𝔏, then 𝔏 is solvable, so 𝔏′ ≠ 𝔏; since 𝔏′ is an ideal this gives 𝔏′ = 0, contrary to the definition. Hence 𝔖 = 0; so simplicity implies semi-simplicity. If 𝔖 is the radical, then any solvable ideal of 𝔏/𝔖 has the form 𝔗/𝔖, 𝔗 an ideal in 𝔏. But 𝔗 is solvable by the lemma. Hence 𝔗 ⊆ 𝔖 and 𝔗/𝔖 = 0. Thus 𝔏/𝔖 is semi-simple. If 𝔅 is a non-zero solvable ideal in 𝔏 and 𝔅(h−1) ≠ 0, 𝔅(h) = 0, then 𝔅(h−1) is an abelian ideal ≠ 0 in 𝔏. Hence 𝔏 is semi-simple if it has no non-zero abelian ideals.

The three-dimensional Lie algebras with dim 𝔏′ = 3 (or 𝔏′ = 𝔏) are simple. For, if 𝔅 is an ideal in 𝔏 such that 0 ≠ 𝔅 ≠ 𝔏, then 𝔅 and 𝔏/𝔅 are both one- or two-dimensional, hence solvable. Then 𝔏 is solvable by the lemma, contrary to 𝔏′ = 𝔏.

A Lie algebra 𝔏 is called nilpotent if 𝔏k = 0 for some positive integer k.

PROPOSITION 5. [𝔏i𝔏j] ⊆ 𝔏i+j.

Proof: By definition [𝔏i𝔏1] = [𝔏i𝔏] = 𝔏i+1. We assume [𝔏i𝔏j] ⊆ 𝔏i+j for all i and prove the same relation with j replaced by j + 1. Then

[𝔏i𝔏j+1] = [𝔏i[𝔏j𝔏]] ⊆ [[𝔏i𝔏j]𝔏] + [[𝔏i𝔏]𝔏j] ⊆ [𝔏i+j𝔏] + [𝔏i+1𝔏j] ⊆ 𝔏i+j+1.

This result implies that any product of k factors 𝔏, in any association, is contained in 𝔏k. Since 𝔏(k) is a product of 2^k such factors it follows that 𝔏(k) ⊆ 𝔏2^k. Hence if 𝔏 is nilpotent, say 𝔏m = 0, then 𝔏(k) = 0 for 2^k ≥ m and 𝔏 is solvable. The converse does not hold since the two-dimensional non-abelian Lie algebra is solvable but not nilpotent. The set of nil triangular matrices, that is, the triangular matrices with 0's on the diagonal, is a nilpotent subalgebra of ΦnL, where Φn is the algebra of n × n matrices.
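The nilpotency of the nil triangular matrices can be exhibited by computing the dimensions of the terms of the lower central series. A minimal sketch (assuming NumPy; names are illustrative) for n = 4:

```python
import numpy as np

n = 4
# basis of the nil triangular (strictly upper triangular) n x n matrices
nil = []
for i in range(n):
    for j in range(i + 1, n):
        M = np.zeros((n, n)); M[i, j] = 1.0
        nil.append(M)

def span_dim(mats):
    return int(np.linalg.matrix_rank(np.array([M.flatten() for M in mats]))) if mats else 0

# lower central series: L^1 = L, L^(k+1) = [L^k L]
current, dims = nil, [span_dim(nil)]
while span_dim(current) > 0:
    current = [A @ B - B @ A for A in current for B in nil]
    dims.append(span_dim(current))

print(dims)   # -> [6, 3, 1, 0], so L^4 = 0 and the algebra is nilpotent
```

Each bracket pushes the non-zero entries one diagonal further from the main diagonal, so the series terminates. For the full triangular algebra, by contrast, the lower central series stabilizes at the strictly triangular part and never reaches 0, illustrating a solvable algebra that is not nilpotent.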

PROPOSITION 6. The sum of nilpotent ideals is nilpotent.

Proof: We note first that if 𝔅 is an ideal in 𝔏, then any Lie product of k factors in which h of the factors are equal to 𝔅 and the remaining k − h are equal to 𝔏 is contained in 𝔅h. A simple induction on k establishes this result. Now consider two ideals 𝔅1 and 𝔅2 and the ideal 𝔅1 + 𝔅2. Then (𝔅1 + 𝔅2)m is contained in a sum of products of m factors 𝔅i, where i = 1 or 2. Any such term contains at least [m/2] factors 𝔅1 or at least [m/2] factors 𝔅2, where [m/2] is the integral part of m/2; hence it is contained in 𝔅1[m/2] or in 𝔅2[m/2]. Consequently,

(𝔅1 + 𝔅2)m ⊆ 𝔅1[m/2] + 𝔅2[m/2].

It follows that if 𝔅1 and 𝔅2 are nilpotent, then 𝔅1 + 𝔅2 is nilpotent.

As for solvability, we can now conclude that if 𝔏 is finite-dimensional, then 𝔏 contains a nilpotent ideal 𝔑 which contains every nilpotent ideal of 𝔏. We call 𝔑 the nil radical of 𝔏. This is contained in the radical 𝔖. For the two-dimensional non-abelian algebra with basis (e, f), [ef] = e, we have 𝔖 = 𝔏 while 𝔑 = Φe. Also 𝔏/𝔑 is abelian, hence nilpotent. Thus we may have 𝔑 ≠ 𝔖 and 𝔏/𝔑 may have a non-zero nil radical.

The theory of nilpotent ideals and radicals has a parallel for associative algebras. If 𝔅1 and 𝔅2 are subspaces of an associative algebra 𝔄, then 𝔅1𝔅2 denotes the subspace spanned by all products b1b2, bi ∈ 𝔅i. 𝔅 is nilpotent if there exists a positive integer k such that 𝔅k = 0 (𝔅1 = 𝔅, 𝔅k = 𝔅k−1𝔅). This is equivalent to saying that every product of k elements of 𝔅 is 0. If 𝔅1 and 𝔅2 are nilpotent ideals of 𝔄, then it is easy to see that 𝔅1 + 𝔅2 is a nilpotent ideal. Hence if 𝔄 is finite-dimensional, then 𝔄 contains a maximal nilpotent ideal 𝔑 which contains every nilpotent ideal. The ideal 𝔑 is called the radical of 𝔄. The algebra 𝔄/𝔑 is semi-simple in the sense that it has no nilpotent ideals ≠ 0. The proofs of these statements are similar to the corresponding ones for Lie algebras and will be left as exercises.

8. Extension of the base field

We are assuming that the reader is familiar with the basic definitions and results on tensor products and extension of the field of operators of vector spaces and non-associative algebras. We now recall without proofs the main properties which will be needed in the sequel.

If 𝔄 and 𝔅 are arbitrary non-associative algebras over Φ, then the vector space 𝔄 ⊗ 𝔅 can be made into a non-associative algebra by defining (a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2 for ai ∈ 𝔄, bi ∈ 𝔅, and extending bilinearly. If 𝔄 and 𝔅 are associative, then 𝔄 ⊗ 𝔅 is associative also. If P is a field extension of Φ and 𝔄 is arbitrary, then the Φ-algebra P ⊗ 𝔄 can be considered as a non-associative algebra over P by defining ρ(Σσi ⊗ ai) = Σρσi ⊗ ai, ρ, σi ∈ P. We denote this "extension" of 𝔄 as 𝔄P. Such extensions for Lie algebras will play an important role from time to time in the sequel.

We recall some of the main properties of 𝔄P and of 𝔙P where 𝔙 is any vector space over Φ and 𝔙P is P ⊗ 𝔙 considered as a vector space over P relative to ρ(Σσi ⊗ xi) = Σρσi ⊗ xi. If (uα) is a basis for 𝔙 over Φ, then (1 ⊗ uα) is a basis for 𝔙P over P. The set of Φ-linear combinations of these elements coincides with the subset {1 ⊗ x | x ∈ 𝔙} of 𝔙P. This set is a Φ-subspace of 𝔙P which is isomorphic to 𝔙. We may identify 1 ⊗ x with x and the set {1 ⊗ x} with 𝔙. In this way 𝔙 becomes a Φ-subspace of 𝔙P which has the following two properties: (1) the P-space spanned by 𝔙 is 𝔙P, (2) any subset of 𝔙 which is linearly independent over Φ is linearly independent over P. These imply that any basis for 𝔙 over Φ is a basis for 𝔙P over P. If 𝔄 is a non-associative algebra, then the identification just made permits us to consider 𝔄 as a Φ-subalgebra of 𝔄P. The properties (1) and (2) are characteristic. Thus, let 𝔚 be any vector space over P, Φ a subfield of P, so that 𝔚 can be considered also as a space over Φ. Suppose 𝔙 is a subspace of 𝔚 over Φ satisfying (1) and (2). Then the mapping Σσi ⊗ xi → Σσixi, σi ∈ P, xi ∈ 𝔙, is an isomorphism of 𝔙P onto 𝔚. Similarly, if 𝔚 is a non-associative algebra over P and 𝔄 is a Φ-subalgebra satisfying (1) and (2), then we have the isomorphism indicated of 𝔄P onto 𝔚.

If 𝔄 is associative, then (aa′)a′′ = a(a′a′′) in 𝔄 implies that 𝔄P is associative. Similarly, if 𝔏 is a Lie algebra, then [aa] = 0, [aa′] = −[a′a], [[aa′]a′′] + [[a′a′′]a] + [[a′′a]a′] = 0 imply that 𝔏P is a Lie algebra.

If 𝔅 is a subspace of 𝔄, then the P-subspace P𝔅 generated by 𝔅 may be identified with 𝔅P. If 𝔅 is a subalgebra (ideal) of 𝔄, then 𝔅P (= P𝔅) is a subalgebra (ideal) of 𝔄P. The ideal (𝔄P)2 of 𝔄P is the set of P-linear combinations of the elements aa′, a, a′ ∈ 𝔄. Hence (𝔄2)P = (𝔄P)2. If 𝔏 is a Lie algebra, 𝔏r is the set of linear combinations of the products [⋯[[a1a2]a3]⋯ar], ai ∈ 𝔏. It follows that (𝔏r)P = (𝔏P)r. Similarly, the derived algebra 𝔏″ is the set of linear combinations of the products of the form [[a1a2][a3a4]], 𝔏‴ the set of linear combinations of products

[[[a1a2][a3a4]][[a5a6][a7a8]]],

etc., for 𝔏(4), …. It follows that (𝔏(i))P = (𝔏P)(i). These observations imply that a Lie algebra 𝔏 is commutative, nilpotent, or solvable if and only if 𝔏P is, respectively, commutative, nilpotent, or solvable.

If Σ is an extension field of the field P, then we can form 𝔄Σ and (𝔄P)Σ. The first of these is Σ ⊗Φ 𝔄 while the second is Σ ⊗P (P ⊗Φ 𝔄). It is well known that there is a natural isomorphism of (𝔄P)Σ onto 𝔄Σ mapping σ ⊗ (ρ ⊗ a) into σρ ⊗ a. Hence we can identify (𝔄P)Σ with 𝔄Σ by means of this isomorphism.

If A is a linear transformation of 𝔙 into a second vector space 𝔚 over Φ, then A has a unique extension, which we shall usually denote by A again, to a linear transformation of 𝔙P into 𝔚P. We have (Σσi ⊗ xi)A = Σσi ⊗ xiA. The image of the extension A is (𝔙A)P and its kernel is 𝔎P, 𝔎 the kernel of A in 𝔙. Hence A is surjective (onto) if and only if its extension is surjective and A is 1:1 if and only if its extension is 1:1. If 𝔄 is a non-associative algebra and A is a homomorphism (anti-homomorphism, derivation), then its extension A in 𝔄P is a homomorphism (anti-homomorphism, derivation).

Now let 𝔏 be a Lie algebra, 𝔐 an 𝔏-module and R the associated representation. If a ∈ 𝔏, aR has a unique extension, again denoted aR, which is a linear transformation in 𝔐P. We have (a + b)R = aR + bR, (αa)R = α(aR), [ab]R = [aR, bR], α ∈ Φ, for these extensions. This implies that the mapping Σρi ⊗ ai → Σρi(aiR) is a homomorphism of 𝔏P into the Lie algebra 𝔈(𝔐P)L of linear transformations in 𝔐P. Thus R has an extension which is a representation of 𝔏P acting in 𝔐P. For the module 𝔐P which is determined in this way we have x(1 ⊗ a) = xa for x ∈ 𝔐, a ∈ 𝔏, and if x = Σσj ⊗ xj, a = Σρi ⊗ ai, then xa = Σσjρi ⊗ xjai. In a similar fashion a representation R of an associative algebra 𝔄 defines a representation of the extension 𝔄P and a right module 𝔐 for 𝔄 defines a right module 𝔐P for 𝔄P.

Exercises

1. Let 𝔄 and 𝔅 be associative algebras. Show that if θ is a homomorphism of 𝔄 into 𝔅, then θ is a homomorphism of 𝔄L into 𝔅L and if θ is an anti-homomorphism of 𝔄 into 𝔅, then −θ is a homomorphism of 𝔄L into 𝔅L. Show that if θ is an anti-homomorphism of 𝔄 into 𝔄, then the subset 𝔖(𝔄, θ) of θ-skew elements (aθ = −a) is a subalgebra of 𝔄L. Show that if D is a derivation of 𝔄, then D is a derivation of 𝔄L. Give examples of 𝔄 and 𝔅 which are neither isomorphic nor anti-isomorphic but such that 𝔄L ≅ 𝔅L (≅ indicates isomorphism). (Hint: Take 𝔄, 𝔅 to be commutative.) Give an example of a derivation in 𝔄L which is not a derivation of 𝔄.

2. If S is a subset of a Lie algebra 𝔏 the centralizer ℭ(S) is the set of elements c ∈ 𝔏 such that [sc] = 0 for all s ∈ S. Show that ℭ(S) is a subalgebra. If 𝔅 is a subspace of 𝔏, then the normalizer of 𝔅 is the set of l ∈ 𝔏 such that [bl] ∈ 𝔅 for every b ∈ 𝔅. Show that the normalizer of 𝔅 is a subalgebra.

3. Let D be a derivation in a non-associative algebra 𝔄. Show that the set of elements z of 𝔄 satisfying zD = 0 is a subalgebra. (Such elements are called D-constants.) Show that the set of elements z satisfying zDi = 0 for some i is a subalgebra.

4. Show that if D is a derivation of a Lie algebra 𝔏 such that D commutes with every inner derivation, then the image 𝔏D lies in ℭ, the center. (If ℭ = 0 this implies that the center of the derivation algebra 𝔇(𝔏) is 0. Hence also the center of 𝔇(𝔇(𝔏)) is 0, etc.)

5. Show that any three-dimensional simple Lie algebra of characteristic not two is complete.

6. Prove that any four-dimensional nilpotent Lie algebra has a three-dimensional ideal. Use this to classify the four-dimensional nilpotent Lie algebras.

7. Verify that if 𝔏 has basis (e1, e2, …, e8) with the multiplication table

then 𝔏 is a nilpotent Lie algebra.

8. A subalgebra 𝔅 of 𝔏 is called subinvariant in 𝔏 if there exists a chain of subalgebras 𝔏 = 𝔏1 ⊇ 𝔏2 ⊇ ⋯ ⊇ 𝔏s+1 = 𝔅 such that 𝔏i is an ideal in 𝔏i−1. Show that if the chain is as indicated, then [⋯[[𝔏𝔅]𝔅]⋯𝔅] ⊆ 𝔅 if there are s 𝔅's in the term on the left-hand side.

9. (Schenkman). Show that if and are subspaces of a Lie algebra then . Use this to prove that if is subinvariant in , then is an ideal in .

10. Let 𝔑 be a nilpotent Lie algebra. Show that a subset S of 𝔑 generates 𝔑 if and only if the cosets s + 𝔑2, s ∈ S, generate 𝔑/𝔑2. Hence show that for finite-dimensional 𝔑 the minimum number of generators of 𝔑 is dim 𝔑/𝔑2.

11. Show that if 𝔑 is a nilpotent Lie algebra, then every subalgebra of 𝔑 is subinvariant. Show also that if 𝔅 is a non-zero ideal in 𝔑, then 𝔅 ∩ ℭ ≠ 0 for ℭ the center of 𝔑.

12. Let 𝔗 be the Lie algebra of triangular matrices of n rows and columns. Determine the derived series and the lower central series for 𝔗.

13. Prove that a finite-dimensional Lie algebra 𝔏 is solvable if and only if there exists a chain 𝔏 = 𝔏1 ⊇ 𝔏2 ⊇ ⋯ ⊇ 𝔏s+1 = 0 where dim 𝔏i = dim 𝔏i−1 − 1 and 𝔏i is an ideal in 𝔏i−1.

14. If 𝔏 is a Lie algebra the upper central series ℭ1 ⊆ ℭ2 ⊆ ⋯ is defined as follows: ℭ1 is the center of 𝔏 and ℭi is the ideal in 𝔏 such that ℭi/ℭi−1 is the center of 𝔏/ℭi−1. Show that the upper central series leads to 𝔏, that is, there is an s such that ℭs = 𝔏, if and only if 𝔏 is nilpotent. Show that the minimum s satisfying ℭs = 𝔏 is the same as the minimum t such that 𝔏t+1 = 0.

15. Prove that every nilpotent Lie algebra 𝔏 ≠ 0 has an outer (= non-inner) derivation. (Hint: Write 𝔏 = 𝔅 + Φe where 𝔅 is an ideal. If z is in the center ℭ, the linear mapping such that 𝔅 → 0, e → z is a derivation. If z is chosen in ℭ ∩ 𝔏n but not in 𝔏n+1, where n satisfies ℭ ∩ 𝔏n ⊄ 𝔏n+1, then the derivation defined by z is outer.)

16. Let A be a linear transformation in an n-dimensional vector space. Suppose A has n distinct characteristic roots ξ1, ξ2, …, ξn. Show that ad A, acting in the algebra of linear transformations of the space, has the n2 characteristic roots ξi − ξj, i, j = 1, 2, …, n.
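Exercise 16 can be spot-checked numerically. A minimal sketch (assuming NumPy; the characteristic roots 1, 2, 5 and the random similarity are arbitrary illustrative choices):

```python
import numpy as np

# A 3 x 3 transformation with the distinct characteristic roots 1, 2, 5
rng = np.random.default_rng(0)
xs = np.array([1.0, 2.0, 5.0])
P = rng.standard_normal((3, 3))
A = np.linalg.inv(P) @ np.diag(xs) @ P

# matrix of ad A : X -> [XA] = XA - AX on the 9-dimensional space,
# rows being the images of the matrix units E_ij
rows = []
for i in range(3):
    for j in range(3):
        X = np.zeros((3, 3)); X[i, j] = 1.0
        rows.append((X @ A - A @ X).flatten())
adA = np.array(rows)

eig = np.sort(np.linalg.eigvals(adA).real)
expected = np.sort(np.array([xi - xj for xi in xs for xj in xs]))
assert np.allclose(eig, expected)   # the 9 roots are the differences xi - xj
```

In a basis of characteristic vectors of A the matrix units are characteristic vectors of ad A with roots ξi − ξj, which is the content of the exercise.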

17. Let 𝔐 be a finite-dimensional module, 𝔐* the contragredient module. Show that if 𝔑 is a submodule of 𝔐, then the subspace of y* ∈ 𝔐* such that <x, y*> = 0 for all x ∈ 𝔑 is a submodule of 𝔐*. Hence show that 𝔐 is irreducible if and only if 𝔐* is irreducible.

18. Let 𝔐 and 𝔐* be as in Exercise 17. Show that 𝔐 ⊗ 𝔐* contains a u ≠ 0 such that ul = 0 for all l ∈ 𝔏. Assume 𝔐 irreducible and suppose 𝔑 is any module such that 𝔑 ⊗ 𝔐 contains a u ≠ 0 such that ul = 0, l ∈ 𝔏. Show that 𝔑 contains a submodule isomorphic to 𝔐*.

19. Let 𝔄 be a non-associative algebra such that 𝔄 = 𝔄1 ⊕ 𝔄2 ⊕ ⋯ ⊕ 𝔄s where the 𝔄i are ideals satisfying 𝔄i2 = 𝔄i. Show that the derivation algebra 𝔇(𝔄) = 𝔇1 ⊕ 𝔇2 ⊕ ⋯ ⊕ 𝔇s where 𝔇i is an ideal in 𝔇(𝔄) and 𝔇i is isomorphic to the derivation algebra of 𝔄i.

20. Show that the derived algebra of the Lie algebra ΦnL is the set of matrices of trace 0. Show that the center of ΦnL is the set Φ1 of multiples of 1 and that the only ideals in ΦnL are ΦnL, the set of matrices of trace 0, Φ1, and 0, unless n = 2 and the characteristic is 2.

21. Give an example of a Lie algebra over the field C of complex numbers which is not of the form 𝔎C where 𝔎 is a Lie algebra over the field R of real numbers. (Hint: Consider the Lie algebras satisfying dim 𝔏 = 3, dim 𝔏′ = 2.)

22. Let 𝔅 be an ideal in a non-associative algebra 𝔄, D a derivation in 𝔄. Show that 𝔅 + 𝔅D is an ideal. Show that if 𝔄 is finite-dimensional associative of characteristic zero with radical 𝔑, then 𝔑D ⊆ 𝔑 for every derivation D of 𝔄. (This fails for characteristic p; see p. 75.) Prove the same result for Lie algebras.

23. Show that if 𝔄 is a commutative, associative algebra (with 1) and 𝔏 is a Lie algebra, then 𝔄 ⊗ 𝔏 is a Lie algebra. Give an example to show that the tensor product of an associative algebra and a Lie algebra need not be a Lie algebra and an example to show that the tensor product of two Lie algebras need not be a Lie algebra. (Hint: For the first of these, take the associative algebra to be Φ2 and note that Φn ⊗ 𝔅 ≅ 𝔅n where 𝔅 is any non-associative algebra and 𝔅n is the algebra of n × n matrices with entries in 𝔅.)