Representation Theory IV

6 Solutions

Solution to (1).

  1. We have
     \[ \exp\begin{pmatrix} s & 0 \\ 0 & t \end{pmatrix} = \sum_{n=0}^{\infty}\begin{pmatrix} s^n/n! & 0 \\ 0 & t^n/n! \end{pmatrix} = \begin{pmatrix} \exp(s) & 0 \\ 0 & \exp(t) \end{pmatrix}. \]

     Since
     \[ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^n = \begin{cases} (-1)^{n/2}\,I & n \text{ even} \\[4pt] (-1)^{(n-1)/2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} & n \text{ odd} \end{cases} \]

     we have
     \begin{align*} \exp\begin{pmatrix} 0 & t \\ -t & 0 \end{pmatrix} &= \Big(1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots\Big) I + \Big(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots\Big)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{pmatrix}. \end{align*}

     A very similar calculation (just lose all the minus signs) shows
     \[ \exp\begin{pmatrix} 0 & t \\ t & 0 \end{pmatrix} = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix}. \]

  2. If $a \neq b$, then $E_{a,b}^n = 0$ for $n \geq 2$, so $\exp(tE_{a,b}) = I + tE_{a,b}$. If $a = b$, then $E_{a,a}^n = E_{a,a}$ for $n \geq 1$, so $\exp(tE_{a,a})$ is diagonal with $e^t$ in the $a$-th entry and $1$ in the other diagonal entries.
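The closed forms above are easy to sanity-check numerically. Below is a minimal sketch (not part of the original solution); it assumes numpy is available and uses an ad hoc helper `expm_series` that just sums the defining power series.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential computed from the defining series sum_k A^k / k!."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k  # term is now A^k / k!
        out = out + term
    return out

t = 0.7
# The skew-symmetric generator exponentiates to a rotation matrix...
R = expm_series(np.array([[0.0, t], [-t, 0.0]]))
rotation = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
# ...and the symmetric generator to a hyperbolic rotation.
S = expm_series(np.array([[0.0, t], [t, 0.0]]))
hyperbolic = np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
```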

Solution to (2).

Using the definition we expand the LHS and the RHS; then it is enough to check that they agree up to terms involving $t^3$ or higher powers of $t$. Left hand side:

\begin{align*} \mathrm{LHS} &= \Big(I + tX + \frac{t^2X^2}{2} + O(t^3)\Big)\Big(I + tY + \frac{t^2Y^2}{2} + O(t^3)\Big) \\ &= I + t(X + Y) + \frac{t^2}{2}\big(X^2 + 2XY + Y^2\big) + O(t^3). \end{align*}

Right hand side:

\begin{align*} \mathrm{RHS} &= I + t(X + Y) + \frac{t^2}{2}(XY - YX) + \frac{t^2}{2}(X + Y)^2 + O(t^3) \\ &= I + t(X + Y) + \frac{t^2}{2}\big(XY - YX + X^2 + XY + YX + Y^2\big) + O(t^3) \\ &= I + t(X + Y) + \frac{t^2}{2}\big(X^2 + 2XY + Y^2\big) + O(t^3). \end{align*}
Solution to (3).

1. Let $V = \mathbb{K}^n$ and, for $0 \leq i \leq n$, let $V_i$ be the span of $e_1, \ldots, e_i$, so

\[ V_0 = \{0\} \subset V_1 = \langle e_1 \rangle \subset V_2 = \langle e_1, e_2 \rangle \subset \cdots \subset V_n = V. \]

Then $Xe_i \in V_{i-1}$ for all $i$, so $XV = XV_n \subseteq V_{n-1}$, $X^2V \subseteq XV_{n-1} \subseteq V_{n-2}$, etc. Thus $X^nV = 0$, so $X^n = 0$.

2. Since each $X^i/i!$ lies in $\mathfrak{n}$, and $I \in N$, we have

\[ \exp(X) = I + \sum_{i=1}^{\infty} \frac{X^i}{i!} \in N \]

as $N$ is a closed subset of $GL_n(\mathbb{K})$ (alternatively, note that by part (1) the sum is actually a finite sum).

3. The sum is finite since, for $g \in N$, $g - I \in \mathfrak{n}$ and so $(g - I)^k = 0$ for all $k \geq n$.

4. We have, as power series in $t$, $\exp(\log(1 + t)) = 1 + t$, since this holds for all real $t$ sufficiently small. Similarly, $\log(\exp(t)) = t$ as power series in $t$. Since all powers of $X$ commute with each other and there are no convergence issues (by the previous part and the fact that $\exp$ converges absolutely everywhere), we may substitute in $X$.

In fact, this shows that $\exp$ and $\log$ define diffeomorphisms between a neighbourhood of zero in $\mathfrak{gl}_{n,\mathbb{K}}$ and a neighbourhood of the identity in $GL_n(\mathbb{K})$.
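For a strictly upper triangular matrix the whole solution can be checked in a few lines. A sketch assuming numpy is available, with $\exp$ and $\log$ computed as the finite sums guaranteed by parts (1) and (3):

```python
import math
import numpy as np

n = 4
# A strictly upper triangular (hence nilpotent) matrix X
X = np.triu(np.arange(1.0, n * n + 1).reshape(n, n), k=1)

# Part (1): X^n = 0
Xn = np.linalg.matrix_power(X, n)

# Parts (2)/(3): exp(X) and log(exp(X)) are finite sums here
expX = sum(np.linalg.matrix_power(X, k) / math.factorial(k) for k in range(n))
g = expX  # g is unipotent: g - I is nilpotent
logg = sum((-1) ** (k + 1) * np.linalg.matrix_power(g - np.eye(n), k) / k
           for k in range(1, n))
```

Part (4)'s identity $\log(\exp(X)) = X$ comes out exactly (up to float rounding), since both series terminate.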

Solution to (4).

2. I claim that $g = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix} \in SL_2(\mathbb{R})$ is not in the image of the exponential map from $\mathfrak{sl}_{2,\mathbb{R}}$. Indeed, suppose it were of the form $g = \exp(A)$ where $\operatorname{tr}(A) = 0$. Then the eigenvalues of $A$ must add to zero; since they are nonzero (as they exponentiate to $-1$), they must be distinct. But then $A = PDP^{-1}$ for some diagonal $D$ and some $P \in GL_2(\mathbb{C})$, since every matrix with distinct eigenvalues is diagonalizable. Then $g = P\exp(D)P^{-1}$ would be diagonalizable, which it is not.

For the second part, we take the same $g = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix} \in GL_2^+(\mathbb{R})$ and suppose it is of the form $\exp(A)$ for $A \in \mathfrak{gl}_{2,\mathbb{R}}$. Then the eigenvalues of $A$ are imaginary, since they exponentiate to $-1$, and since $A$ is real they must be a complex conjugate pair. But then we get a contradiction in the same way: $A$ is diagonalisable but $g$ is not.

Solution to (5).

The function $f(\theta)$ is clearly continuous, and it is a one-parameter subgroup, as $f(\theta + \theta')$ is rotation by $\theta + \theta'$ about $v$, which is the same as rotation by $\theta'$ followed by rotation by $\theta$:

\[ f(\theta + \theta') = f(\theta)f(\theta'). \]

We compute $\frac{d}{d\theta}f(\theta)(e_1)\big|_{\theta = 0}$. If $e_1$ is parallel to $v$, this is zero.

If $y$ is the projection of $e_1$ on $v$, and $x = e_1 - y$, then rotation about $v$ initially moves $e_1$ in the direction $v \times e_1$, perpendicular to $v$ and $x$ (note that we consider rotations as being anticlockwise when looking from the `nose' of $v$ towards the origin).

We have

\[ f(\theta)(e_1) = y + \sin(\theta)\,|x|\,\frac{v \times e_1}{|v \times e_1|} + \cos(\theta)\,x. \]

Thus

\[ f'(0)(e_1) = |x|\,\frac{v \times e_1}{|v \times e_1|} = v \times e_1 \]

since $|v \times e_1| = |x| = \sin(\alpha)$, where $\alpha$ is the angle between $v$ and $e_1$. This formula also holds in the parallel case.

Thus the infinitesimal generator is

\[ f'(0) = \begin{pmatrix} v \times e_1 & v \times e_2 & v \times e_3 \end{pmatrix} = \begin{pmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{pmatrix}, \]

where $v = (x, y, z)^T$.
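The claim that the infinitesimal generator is the matrix of $w \mapsto v \times w$ can be verified directly. A small numerical check (an addition, assuming numpy; the sample vector is arbitrary):

```python
import numpy as np

x, y, z = 0.3, -1.2, 0.8
v = np.array([x, y, z])
# The generator computed in the solution
A = np.array([[0.0, -z, y],
              [z, 0.0, -x],
              [-y, x, 0.0]])

# Check A e_i = v x e_i for each standard basis vector
checks = [np.allclose(A @ e, np.cross(v, e)) for e in np.eye(3)]
```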

Solution to (6).

If $X + X^* = 0$ then, for all $t$,

\[ \exp(tX)^* = \exp(tX^*) = \exp(-tX) = (\exp(tX))^{-1} \]

and so $\exp(tX) \in U(n)$.

Conversely, if $\exp(tX) \in U(n)$ for all $t$, we have

\[ \exp(tX^*)\exp(tX) = \exp(tX)^*\exp(tX) = I \]

for all $t$. Taking the derivative at $t = 0$ gives

\[ X^* + X = 0. \]

Thus the Lie algebra of $U(n)$ is

\[ \mathfrak{u}_n = \{X \in \mathfrak{gl}_{n,\mathbb{C}} : X + X^* = 0\}. \]

We have that $X + X^* = 0$ if and only if $x_{ii}$ is imaginary for all $i$ and

\[ x_{ji} = -\overline{x_{ij}} \]

for all $i < j$. Thus $X$ is determined by its $n$ imaginary diagonal entries and its $n(n-1)/2$ complex entries above the diagonal. Its real dimension is thus

\[ 2\cdot\frac{n(n-1)}{2} + n = n^2. \]
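Both halves of the argument lend themselves to a quick check: a random skew-Hermitian matrix should exponentiate to a unitary matrix, and the parameter count should give $n^2$. A sketch assuming numpy (`expm_series` is an ad hoc helper, not a library routine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Build a random skew-Hermitian matrix: X + X^* = 0
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (M - M.conj().T) / 2

def expm_series(A, terms=40):
    """Matrix exponential via the truncated power series."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

U = expm_series(X)
unitary_defect = np.linalg.norm(U.conj().T @ U - np.eye(n))

# Real dimension: n imaginary diagonal entries plus n(n-1)/2 complex entries
dim = n + 2 * (n * (n - 1) // 2)
```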

If $X \in \mathfrak{u}_n$ is nonzero, then

\[ (iX)^* = -iX^* = iX \neq -iX, \]

so $iX \notin \mathfrak{u}_n$. Thus $\mathfrak{u}_n$ is not a complex subspace of $\mathfrak{gl}_{n,\mathbb{C}}$.

For a challenge, try to show that there is no complex structure on $\mathfrak{u}_n$: there is no linear map

\[ J : \mathfrak{u}_n \to \mathfrak{u}_n \]

such that

\[ [J(X), Y] = J([X, Y]) = [X, J(Y)] \]

for all $X, Y \in \mathfrak{u}_n$ and

\[ J^2(X) = -X \]

for all $X \in \mathfrak{u}_n$ (for $n$ odd this is easy, but it is trickier for $n$ even).

Solution to (7).

If $XI_{p,q} + I_{p,q}X^T = 0$ then $I_{p,q}^{-1}(tX)I_{p,q} = -tX^T$ for all $t$, and so

\[ I_{p,q}^{-1}\exp(tX)I_{p,q} = \exp(-tX^T) = \big(\exp(tX)^T\big)^{-1}, \]

whence $\exp(tX)\,I_{p,q}\exp(tX)^T = I_{p,q}$, i.e. $\exp(tX) \in O(p,q)$. Conversely, if $\exp(tX) \in O(p,q)$ for all $t$ then

\[ \exp(tX)\,I_{p,q}\exp(tX^T) = I_{p,q} \]

for all $t$. Differentiating with respect to $t$ at $t = 0$ gives

\[ XI_{p,q} + I_{p,q}X^T = 0 \]

as required.

For the last part, we need only show that $X \in \mathfrak{o}_{p,q}$ implies $\operatorname{tr} X = 0$. But if $X \in \mathfrak{o}_{p,q}$ then

\[ \operatorname{tr}(X) = \operatorname{tr}(I_{p,q}^{-1}XI_{p,q}) = \operatorname{tr}(-X^T) = -\operatorname{tr}(X), \]

so $\operatorname{tr}(X) = 0$ as required.

Solution to (8).

1. Problem 55 suggests that we consider the following basis $J_x, J_y, J_z$ of infinitesimal rotations around the axes:

\[ J_x = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad J_y = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad J_z = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \]

In fact, these are the images of $e_1, e_2, e_3$ under an isomorphism $(\mathbb{R}^3, \times) \to \mathfrak{so}_3$. By calculation we have

\[ [J_x, J_y] = J_z, \qquad [J_y, J_z] = J_x, \qquad [J_z, J_x] = J_y. \]

These may remind you of the quaternion group, whose irreducible two-dimensional representation leads us to consider the following basis for $\mathfrak{su}_2$:

\[ \mathcal{I} = \frac{1}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad \mathcal{J} = \frac{1}{2}\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \qquad \mathcal{K} = \frac{1}{2}\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}. \]

We have

\[ [\mathcal{I}, \mathcal{J}] = \mathcal{K}, \qquad [\mathcal{J}, \mathcal{K}] = \mathcal{I}, \qquad [\mathcal{K}, \mathcal{I}] = \mathcal{J} \]

(we need the factors of $\frac{1}{2}$ for this; otherwise the right hand sides would be doubled).

It follows that the linear map $\mathfrak{su}_2 \to \mathfrak{so}_3$ taking $\mathcal{I}$ to $J_x$, $\mathcal{J}$ to $J_y$ and $\mathcal{K}$ to $J_z$ is an isomorphism of Lie algebras.
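Both sets of commutation relations can be confirmed by direct matrix arithmetic. A short check, assuming numpy (the names `I2, J2, K2` stand for $\mathcal{I}, \mathcal{J}, \mathcal{K}$):

```python
import numpy as np

# Infinitesimal rotations in so(3)
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

# Basis of su(2), with the factors of 1/2
I2 = np.array([[0, 1], [-1, 0]], dtype=complex) / 2
J2 = np.array([[0, 1j], [1j, 0]]) / 2
K2 = np.array([[1j, 0], [0, -1j]]) / 2

def bracket(A, B):
    return A @ B - B @ A

ok_so3 = (np.allclose(bracket(Jx, Jy), Jz)
          and np.allclose(bracket(Jy, Jz), Jx)
          and np.allclose(bracket(Jz, Jx), Jy))
ok_su2 = (np.allclose(bracket(I2, J2), K2)
          and np.allclose(bracket(J2, K2), I2)
          and np.allclose(bracket(K2, I2), J2))
```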

2. By the problems class (or problem 7), we know that $\mathfrak{so}_{2,1}$ has a basis $A, B, C$ with

\[ A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \]

We calculate that $[A, B] = -C$, $[A, C] = -B$ and $[B, C] = A$. It follows that $2A$, $B - C$, $B + C$ satisfy the same commutation relations as

\[ H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad E = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad F = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \]

so that there is a Lie algebra isomorphism sending $2A \mapsto H$, $B - C \mapsto E$, $B + C \mapsto F$.

How might you think of this? Well, the eigenvalues of the linear map $X \mapsto [H, X]$ are $2, 0, -2$, with $E$ and $F$ the eigenvectors for $\pm 2$. So you might look for an element $H'$ of $\mathfrak{so}_{2,1}$ such that the eigenvalues of $X \mapsto [H', X]$ are $2, 0, -2$, and $2A$ above works; $B \mp C$ are then the eigenvectors!

For a more conceptual approach, let $\langle X, Y \rangle = \operatorname{tr}(XY)$, a bilinear form on $\mathfrak{sl}_{2,\mathbb{R}}$. For each $g \in SL_2(\mathbb{R})$, $\rho(g) : X \mapsto gXg^{-1}$ is a linear map $\mathfrak{sl}_{2,\mathbb{R}} \to \mathfrak{sl}_{2,\mathbb{R}}$ preserving this bilinear form. But it is possible to write down a basis $e_1, e_2, e_3 = H, E + F, E - F$ of $\mathfrak{sl}_{2,\mathbb{R}}$ such that

\[ \langle e_i, e_j \rangle = \begin{cases} 2 & i = j = 1, 2 \\ -2 & i = j = 3 \\ 0 & i \neq j. \end{cases} \]

With respect to this basis, $\langle\,,\,\rangle$ is the bilinear form determined by $2I_{2,1}$ and so $\rho(g) \in O(2,1)$ for all $g$. The derived map $D\rho$ on Lie algebras is the desired isomorphism.

3. Here is a possible approach. Show that $SL_2(\mathbb{C})$ acts on the four-dimensional real vector space $V$ of Hermitian $2 \times 2$ matrices by $g \cdot X = gXg^*$ for $g \in SL_2(\mathbb{C})$ and $X$ a Hermitian matrix. The quadratic form $-\det$ on $V$ is preserved by this action. It has signature $(3,1)$; indeed, it is positive definite on the space of matrices

\[ \left\{\begin{pmatrix} x & z \\ \bar{z} & -x \end{pmatrix} : x \in \mathbb{R},\ z \in \mathbb{C}\right\} \]

and negative definite on the subspace of matrices

\[ \{\operatorname{diag}(x, x) : x \in \mathbb{R}\}. \]

We obtain a map $SL_2(\mathbb{C}) \to O(3,1)$; its derivative is the required isomorphism.

Solution to (9).

We prove the first part. Let $J$ be the matrix defining the standard symplectic form, so that

\[ \mathfrak{sp}_{2n} = \{X \in \mathfrak{gl}_{2n} : X^TJ + JX = 0\}. \]

If $X \in \mathfrak{sp}_{2n}$, then $JX = -X^TJ$, and hence

\[ X = -J^{-1}X^TJ. \]

Taking traces and using that trace is invariant under conjugation gives

\[ \operatorname{tr}(X) = -\operatorname{tr}(J^{-1}X^TJ) = -\operatorname{tr}(X^T) = -\operatorname{tr}(X). \]

Therefore $\operatorname{tr}(X) = 0$.

For the second part, try induction.

Solution to (10).

Suppose that $h \in G^0$ and $g \in G$. We must show $ghg^{-1} \in G^0$. By assumption, there is a path $\gamma : [0,1] \to G$ with $\gamma(0) = I$, $\gamma(1) = h$. Then $t \mapsto g\gamma(t)g^{-1}$ is a path from $I$ to $ghg^{-1}$, so $ghg^{-1} \in G^0$ as required.

Alternatively, we have that, for any $g \in G$, $gG^0g^{-1}$ is open and closed in $G$ and contains the identity. It therefore contains $G^0$. But by the same argument $G^0$ contains $gG^0g^{-1}$. So $G^0 = gG^0g^{-1}$.

Solution to (11).

1. Let $R(v, \theta)$ be rotation by $\theta$ about $v$, for $v \in \mathbb{R}^3$ — every element of $SO(3)$ has this form. Then, for any $v$ and $\theta$, the path

\[ t \mapsto R(v, t\theta) \]

is a continuous path in $SO(3)$ from the identity to $R(v, \theta)$, showing that this group is connected.

2. Induction on $n$. Base case: $n = 2$, clear from the explicit description of $SO(2)$.

Suppose the claim is true for $n - 1$. Let $v \in \mathbb{R}^n$ be a unit vector and let $R \in SO(n)$. Let $w = Rv$, and let $V$ be a plane containing $v, w$. Then there is some rotation $S \in SO(V)$ such that $w = Sv$. Extend $S$ to an element of $SO(n)$ by letting it act as the identity on $V^\perp$: explicitly, $S(v' + v'') = S(v') + v''$ for $v' \in V$, $v'' \in V^\perp$. Since $SO(V) \cong SO(2)$ is connected, there is a path $\gamma(t)$ from $I$ to $S$. Then $\gamma(t)^{-1}R$ is a path from $R$ to $S^{-1}R$, and it suffices to show that $T = S^{-1}R$ is connected to the identity. Note that $T$ fixes $v$ and so is determined by its action on $W = v^\perp$. By the induction hypothesis applied to $SO(W) \cong SO(n-1)$, the restriction of $T$ to $W$ is connected to the identity in $SO(W)$, and therefore $T$ is connected to the identity in $SO(n)$.

Solution to (12).

Let $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SU(2)$. Then $g^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$ since $\det(g) = 1$. But also $g^{-1} = g^*$ and so $d = \bar{a}$ and $c = -\bar{b}$. Thus $g = \begin{pmatrix} a & b \\ -\bar{b} & \bar{a} \end{pmatrix}$ and the determinant condition gives $a\bar{a} + b\bar{b} = 1$.

We deduce that the map $SU(2) \to \{(a, b) \in \mathbb{C}^2 : |a|^2 + |b|^2 = 1\}$ sending $g$ to its first row is a diffeomorphism (it and its inverse are clearly smooth). Moreover, writing $a = w + ix$, $b = y + iz$, the latter space is

\[ \{(w, x, y, z) \in \mathbb{R}^4 : w^2 + x^2 + y^2 + z^2 = 1\}, \]

which is the three-sphere.

Solution to (14).

Firstly we will show that the Lie algebra of $Z$ is contained in $\mathfrak{z}$. Indeed, suppose that $X \in \mathfrak{g}$ with

\[ \exp(tX) \in Z \]

for all $t$. Then for all $Y \in \mathfrak{g}$,

\[ \exp(tX)\exp(sY)\exp(-tX) = \exp(sY) \]

for all $s, t$. Taking the derivative at $s = 0$ gives

\[ \exp(tX)\,Y\exp(-tX) = Y \]

for all $t$, and taking the derivative of this at $t = 0$ gives $[X, Y] = 0$ for all $Y \in \mathfrak{g}$, whence $X \in \mathfrak{z}$.

Conversely, if $G$ is connected and $X \in \mathfrak{z}$, then I claim that $\exp(tX) \in Z$ for all $t$. Indeed, for $Y \in \mathfrak{g}$,

\[ \exp(tX)\exp(Y) = \exp(tX + Y) = \exp(Y)\exp(tX) \]

as $[tX, Y] = 0$. So $\exp(tX)$ commutes with all elements of $G$ of the form $\exp(Y)$. Since these generate $G$ by the connectedness assumption, we see $\exp(tX) \in Z$.

Solution to (16).

1. I claim that for $Z \in \mathfrak{z}$, the map $v \mapsto \rho(Z)v$ is a $\mathfrak{g}$-homomorphism. Indeed, if $X \in \mathfrak{g}$ then

\begin{align*} \rho(Z)\rho(X)v &= [\rho(Z), \rho(X)]v + \rho(X)\rho(Z)v \\ &= \rho([Z, X])v + \rho(X)\rho(Z)v \\ &= 0 + \rho(X)\rho(Z)v \\ &= \rho(X)\rho(Z)v. \end{align*}

By Schur's lemma, as $V$ is irreducible, there is $\alpha(Z) \in \mathbb{C}$ such that $\rho(Z)v = \alpha(Z)v$ for all $v \in V$. As $\rho$ is linear, so is $\alpha$.

2. If $A = (a_{ij}) \in \mathfrak{z}$, then it commutes with every $n \times n$ matrix. Let $D_i$ be the diagonal matrix with `1' in position $i$ and `0' elsewhere. Then $D_iA - AD_i = 0$ implies that $a_{ij} = 0$ for $j \neq i$. Hence $A$ is diagonal, with entries $a_1, \ldots, a_n$. But then $A$ commutes with the elementary matrix $E_{ij}$ (with `1' in row $i$ and column $j$ and `0' elsewhere) if and only if $a_i = a_j$. So all the $a_i$ are equal, so $A$ is scalar. Conversely, all scalar matrices are in $\mathfrak{z}$. So

\[ \mathfrak{z} = \{\lambda I : \lambda \in \mathbb{C}\}. \]

If $V = \Lambda^k\mathbb{C}^n$, then for all $v_1 \wedge \cdots \wedge v_k$,

\[ (\lambda I)(v_1 \wedge \cdots \wedge v_k) = \sum_{i=1}^k v_1 \wedge \cdots \wedge (\lambda v_i) \wedge \cdots \wedge v_k = k\lambda\,(v_1 \wedge \cdots \wedge v_k), \]

so $\alpha$ is the linear map

\[ \alpha(\lambda I) = k\lambda. \]
Solution to (17).

We use induction on $m$. The case $m = 1$ is clear: in that case, we just get

\[ [X, Y] = XY - YX. \]

Suppose the result is true for $m$. Then

\begin{align*} \operatorname{ad}_X^{m+1}(Y) &= \operatorname{ad}_X\big(\operatorname{ad}_X^m(Y)\big) \\ &= \sum_{k=0}^m \binom{m}{k}\Big(X^{k+1}Y(-X)^{m-k} - X^kY(-X)^{m-k}X\Big) \\ &= \sum_{k=0}^m \binom{m}{k}\Big(X^{k+1}Y(-X)^{m-k} + X^kY(-X)^{m-k+1}\Big) \\ &= \sum_{k=0}^{m+1}\left(\binom{m}{k-1} + \binom{m}{k}\right)X^kY(-X)^{m+1-k} \\ &= \sum_{k=0}^{m+1}\binom{m+1}{k}X^kY(-X)^{m+1-k} \end{align*}

as required.

For the second part, for all $X$ and $Y$ in $\mathfrak{gl}_n$, we have

\begin{align*} \exp(\operatorname{ad}_X)(Y) &= \sum_{m=0}^{\infty}\frac{\operatorname{ad}_X^m}{m!}(Y) \\ &= \sum_{m=0}^{\infty}\sum_{k=0}^m \frac{1}{m!}\binom{m}{k}X^kY(-X)^{m-k} \\ &= \sum_{l=0}^{\infty}\sum_{k=0}^{\infty}\frac{1}{(k+l)!}\binom{k+l}{k}X^kY(-X)^l \\ &= \sum_{l=0}^{\infty}\sum_{k=0}^{\infty}\frac{1}{k!\,l!}X^kY(-X)^l \\ &= \left(\sum_{k=0}^{\infty}\frac{X^k}{k!}\right)Y\left(\sum_{l=0}^{\infty}\frac{(-X)^l}{l!}\right) \\ &= \exp(X)\,Y\exp(-X) \\ &= \operatorname{Ad}_{\exp(X)}(Y) \end{align*}

as required.

Solution to (18).

1. Let $h = e^{is}$. Then

\[ \int_G \varphi(hg)\,dg = \int_0^{2\pi}\varphi(e^{is}e^{it})\,dt. \]

Making the linear substitution $u = s + t$ and using the periodicity of the integrand, we obtain

\[ \int_0^{2\pi}\varphi(e^{iu})\,du = \int_G \varphi(g)\,dg \]

as required. Similarly for $\int_G \varphi(gh)\,dg$.

2. Fix $v, w \in V$. Then the $G$-invariance is simply the previous part applied to the function

\[ \varphi(g) = (\rho(g)v, \rho(g)w). \]

It is clear that the form $(\,,\,)_\rho$ is sesquilinear as $(\,,\,)$ is. Finally, it is positive definite: if $v = w \neq 0$ then

\[ (v, v)_\rho = \int_G (\rho(g)v, \rho(g)v)\,dg > 0 \]

since the integrand is positive for all $g$.

3. This is the proof of Maschke's theorem. Suppose $V$ is a finite dimensional representation and $W$ is a subrepresentation. By part (2), there is a $G$-invariant Hermitian inner product on $V$, and then $V = W \oplus W^\perp$. By the $G$-invariance, $W^\perp$ is also a subrepresentation. This implies that $V$ is completely reducible.

Solution to (19).

1. Every orthogonal matrix has determinant $\pm 1$ (which follows from taking $\det$ of $AA^T = I$). So $\det$ is a surjective homomorphism $O(2) \to \{\pm 1\}$ (it is surjective because $\det(s) = -1$), and its kernel is $SO(2)$. This implies that $SO(2)$ has index $2$ in $O(2)$, by the first isomorphism theorem. As $s$ is an element in the non-identity coset, we have $O(2) = SO(2) \sqcup SO(2)s$; as every element of $SO(2)$ has the form $r_\theta$, we get the second claim.

Finally,

\begin{align*} sr_\theta s^{-1} &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix} \\ &= r_{-\theta} \end{align*}

as required. Geometrically, $r_\theta s$ is reflection about a line through the origin making an angle of $\pi/4 + \theta/2$ with the $x$-axis (it is the reflection that maps $(\cos(\pi/4), \sin(\pi/4))$ to $(\cos(\pi/4 + \theta), \sin(\pi/4 + \theta))$).

2. Suppose that $V$ is an irreducible finite-dimensional representation of $O(2)$. As a representation of $SO(2)$ it is completely reducible; every irrep of $SO(2)$ is one-dimensional, of the form $r_\theta \mapsto e^{in\theta}$ for some integer $n$. So there is an integer $n$ and nonzero $v \in V$ such that

\[ r_\theta(v) = e^{in\theta}v \]

for all $\theta$. I claim that $v, sv$ span a subrepresentation; clearly their span is preserved by $s$, and

\[ r_\theta(sv) = sr_{-\theta}(v) = se^{-in\theta}v = e^{-in\theta}sv, \]

so it is preserved by $r_\theta$ for all $\theta$ as well. As $V$ is irreducible, we have

\[ V = \langle v, sv \rangle. \]

Now, if $n \neq 0$ then, since $r_\theta$ has different eigenvalues on $v$ and $sv$ (for general $\theta$), they are linearly independent and $\dim V = 2$. If $n < 0$ then we switch $v$ and $sv$, so that now $n > 0$. Now $V$ is determined up to isomorphism by $n$, as in the basis $v, sv$ we have

\[ \rho(r_\theta) = \begin{pmatrix} e^{in\theta} & 0 \\ 0 & e^{-in\theta} \end{pmatrix} \]

and

\[ \rho(s) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \]

Different $n$ give different eigenvalues for $r_\theta$ and hence non-isomorphic two-dimensional representations.

If $n = 0$ then $v$, $sv$ are fixed by $SO(2)$. In this case, either $v + sv$ or $v - sv$ is nonzero and spans a one-dimensional subrepresentation, which must then be all of $V$. Thus $V$ is one-dimensional, and either isomorphic to the trivial representation or the determinant representation (according as $sv = v$ or $sv = -v$).

Solution to (20).

We start with the definition: if $A \in \mathfrak{gl}_{2,\mathbb{R}}$, then

\[ (A \cdot \phi)(v) = \frac{d}{dt}\phi(\exp(-At)v)\Big|_{t=0}. \]

For typesetting reasons I'll write $(x, y)^T$ for the column vector $\begin{pmatrix} x \\ y \end{pmatrix}$.

Starting with $A = X$:

\begin{align*} (X \cdot \phi)((x, y)^T) &= \frac{d}{dt}\phi\big(\exp(-tX)(x, y)^T\big)\Big|_{t=0} \\ &= \frac{d}{dt}\phi\big((x - ty, y)^T\big)\Big|_{t=0} \\ &= -y\frac{\partial\phi}{\partial x} \end{align*}

by the multivariable chain rule. Thus $X$ acts as $-y\frac{\partial}{\partial x}$, and a very similar calculation shows that $Y$ acts as $-x\frac{\partial}{\partial y}$.

Finally,

\begin{align*} (H \cdot \phi)((x, y)^T) &= \frac{d}{dt}\phi\big(\exp(-tH)(x, y)^T\big)\Big|_{t=0} \\ &= \frac{d}{dt}\phi\big((e^{-t}x, e^ty)^T\big)\Big|_{t=0} \\ &= \frac{d}{dt}(e^ty)\Big|_{t=0}\frac{\partial\phi}{\partial y} + \frac{d}{dt}(e^{-t}x)\Big|_{t=0}\frac{\partial\phi}{\partial x} \\ &= y\frac{\partial\phi}{\partial y} - x\frac{\partial\phi}{\partial x}, \end{align*}

so that $H$ acts as $y\frac{\partial}{\partial y} - x\frac{\partial}{\partial x}$.

An alternative solution would be to compute

\[ \frac{d}{dt}\phi(\exp(-At)v)\Big|_{t=0} \]

using the multivariate chain rule. The derivative of $t \mapsto \exp(-At)v$ at $t = 0$ is $-Av$, and the derivative of $\phi$ is $\nabla\phi = \big(\frac{\partial\phi}{\partial x}, \frac{\partial\phi}{\partial y}\big)$, so we get that the required derivative is

\[ -\nabla\phi \cdot Av, \]

which one can check agrees with the answers from before.

Remark. Another possible convention is to use $g^T$ rather than $g^{-1}$, which leads to slightly different formulas. This second convention is the same as if we considered elements of $\mathbb{R}^2$ as row vectors, with matrices acting on the right, and defined instead

\[ (g \cdot \phi)(v) = \phi(vg). \]
Solution to (21).

1. Let $e_1 \wedge e_2$ be the standard basis vector of $\Lambda^2(\mathbb{C}^2)$. Then

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot(e_1 \wedge e_2) = (ae_1 + ce_2)\wedge(be_1 + de_2). \]

Multiplying out, and noting that $e_1 \wedge e_1 = e_2 \wedge e_2 = 0$ while $e_2 \wedge e_1 = -e_1 \wedge e_2$, we get that

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot(e_1 \wedge e_2) = (ad - bc)\,e_1 \wedge e_2, \]

which is what we want (since $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$).

2. This is similar. We get, with $X = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$,

\[ X \cdot (e_1 \wedge e_2) = (Xe_1)\wedge e_2 + e_1 \wedge (Xe_2). \]

This simplifies to

\[ (a + d)(e_1 \wedge e_2) = \operatorname{tr}(X)(e_1 \wedge e_2) \]

as required.

3. Compute the action of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ on the basis vectors $e_1^2, e_1e_2, e_2^2$. Skipping the working, the result is

\[ \rho\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right) = \begin{pmatrix} a^2 & ab & b^2 \\ 2ac & ad + bc & 2bd \\ c^2 & cd & d^2 \end{pmatrix}. \]
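One way to test the $3 \times 3$ matrix in part 3 is to verify that it is multiplicative, i.e. that $\rho(g)\rho(h) = \rho(gh)$ holds identically. A symbolic sketch assuming sympy is available (`sym2` is a helper name introduced here):

```python
import sympy as sp

a, b, c, d, p, q, r, s = sp.symbols('a b c d p q r s')

def sym2(g):
    """Matrix of g acting on Sym^2(C^2) in the basis e1^2, e1*e2, e2^2."""
    a, b, c, d = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    return sp.Matrix([[a**2, a*b, b**2],
                      [2*a*c, a*d + b*c, 2*b*d],
                      [c**2, c*d, d**2]])

g = sp.Matrix([[a, b], [c, d]])
h = sp.Matrix([[p, q], [r, s]])

# rho(g) rho(h) - rho(g h) should vanish identically
diff = sp.expand(sym2(g) * sym2(h) - sym2(g * h))
is_rep = diff == sp.zeros(3, 3)
```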
Solution to (22).

We have

\[ H(v \otimes w) = Hv \otimes w + v \otimes Hw = (\alpha v)\otimes w + v \otimes \beta w = (\alpha + \beta)\,v \otimes w \]

as required. If $v, w$ are highest weight vectors then

\[ X(v \otimes w) = (Xv)\otimes w + v \otimes Xw = 0 \otimes w + v \otimes 0 = 0, \]

so $v \otimes w$ is a highest weight vector.

Solution to (23).

  1. If $v_1, \ldots, v_n$ is a basis of weight vectors of $V$ with weights $\lambda_1, \ldots, \lambda_n$, then the dual basis $v_1^*, \ldots, v_n^*$ is a basis of weight vectors of $V^*$ with weights $-\lambda_1, \ldots, -\lambda_n$. Proof: we have

     \[ (Hv_i^*)(v_j) = -v_i^*(Hv_j) = -\lambda_j\delta_{ij}, \]

     so that

     \[ Hv_i^* = -\lambda_iv_i^* \]

     as required.

  2. Since the weights of every irreducible representation of $\mathfrak{sl}_{2,\mathbb{C}}$ are symmetrical about the origin, and every representation is a direct sum of irreducible representations, the weights of $V$ are symmetrical about the origin. Thus the weights of $V^*$ are the same as the weights of $V$. Since the weights determine the representation up to isomorphism, we have $V^* \cong V$.

Solution to (24).

  1. We will show that $\mathcal{C}$ commutes with each of $\pi(X)$, $\pi(Y)$ and $\pi(H)$. Note that, since $\pi$ is a Lie algebra representation, we have

     \begin{align*} \pi(X)\pi(Y) &= \pi(Y)\pi(X) + \pi(H) \\ \pi(H)\pi(X) &= \pi(X)\pi(H) + 2\pi(X) \\ \pi(H)\pi(Y) &= \pi(Y)\pi(H) - 2\pi(Y). \end{align*}

     For instance, the first equation follows from $[\pi(X), \pi(Y)] = \pi([X, Y]) = \pi(H)$. So we get

     \begin{align*} \mathcal{C}\pi(X) &= \pi(X)\pi(Y)\pi(X) + \pi(Y)\pi(X)^2 + \tfrac12\pi(H)^2\pi(X) \\ &= 2\pi(X)\pi(Y)\pi(X) - \pi(H)\pi(X) + \tfrac12\pi(H)\pi(X)\pi(H) + \pi(H)\pi(X) \\ &= 2\pi(X)\pi(Y)\pi(X) + \tfrac12\pi(H)\pi(X)\pi(H) \end{align*}

     and therefore

     \begin{align*} \pi(X)\mathcal{C} &= \pi(X)^2\pi(Y) + \pi(X)\pi(Y)\pi(X) + \tfrac12\pi(X)\pi(H)^2 \\ &= 2\pi(X)\pi(Y)\pi(X) + \pi(X)\pi(H) + \tfrac12\pi(H)\pi(X)\pi(H) - \pi(X)\pi(H) \\ &= 2\pi(X)\pi(Y)\pi(X) + \tfrac12\pi(H)\pi(X)\pi(H) \\ &= \mathcal{C}\pi(X). \end{align*}

     Similarly, $\mathcal{C}$ commutes with $\pi(Y)$. Finally,

     \[ \mathcal{C}\pi(H) = \pi(H)\mathcal{C} = \pi(X)\pi(H)\pi(Y) + \pi(Y)\pi(H)\pi(X) + 2\pi(X)\pi(Y) - 2\pi(Y)\pi(X) + \tfrac12\pi(H)^3. \]

     If $V$ is irreducible, then since $\mathcal{C}$ commutes with all elements of $\mathfrak{sl}_{2,\mathbb{C}}$ it is an $\mathfrak{sl}_{2,\mathbb{C}}$-homomorphism $V \to V$ and so is scalar by Schur's lemma.

  2. The representation $V = V(n)$ is irreducible, and so $\mathcal{C}$ acts as a scalar. To find the scalar, we just need to evaluate $\mathcal{C}$ on a single element of $V$; I will use the highest weight vector $e_1^n$. We have

     \begin{align*} \mathcal{C}e_1^n &= \pi(X)\big(n\,e_1^{n-1}e_{-1}\big) + \pi(Y)\cdot 0 + \tfrac12n^2e_1^n \\ &= \big(n + \tfrac12n^2\big)e_1^n, \end{align*}

     so $\mathcal{C}$ acts as the scalar $\frac{n^2 + 2n}{2}$ on $V(n)$. Here we see why we might want to use $1 + 2\mathcal{C}$ instead: it acts as $(n + 1)^2$.

  3. Recall that $X$ acts as $-y\partial_x$, $Y$ acts as $-x\partial_y$, and $H$ acts as $y\partial_y - x\partial_x$. We see that

     \begin{align*} \mathcal{C} &= y\partial_x\,x\partial_y + x\partial_y\,y\partial_x + \tfrac12\big(y\partial_y - x\partial_x\big)^2 \\ &= y\partial_y + xy\,\partial_x\partial_y + x\partial_x + xy\,\partial_x\partial_y + \tfrac12\big(y\partial_y + y^2\partial_y^2 - 2xy\,\partial_x\partial_y + x\partial_x + x^2\partial_x^2\big) \\ &= \tfrac12\big(3x\partial_x + 3y\partial_y + x^2\partial_x^2 + 2xy\,\partial_x\partial_y + y^2\partial_y^2\big). \end{align*}

     If we apply this to a monomial $x^ay^b$ of degree $n = a + b$, we find that

     \begin{align*} \mathcal{C}x^ay^b &= \tfrac12\big(3(a + b) + a(a - 1) + 2ab + b(b - 1)\big)x^ay^b \\ &= \tfrac12\big(2n + n^2\big)x^ay^b. \end{align*}

     We can explain this as follows: the space of homogeneous polynomial functions of degree $n$ is isomorphic to $V(n)$, and so the calculation from the previous part applies!
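The scalar $(n^2 + 2n)/2$ from part 2 can be confirmed by building the $(n+1)$-dimensional irreducible explicitly. A numerical sketch assuming numpy; the basis conventions (`rep_matrices` is a name introduced here) follow the formulas used in these solutions:

```python
import numpy as np

def rep_matrices(n):
    """Matrices of X, Y, H on the (n+1)-dimensional irreducible, in a basis
    w_0, ..., w_n with H w_i = (n-2i) w_i, Y w_i = w_{i+1}, X w_i = i(n-i+1) w_{i-1}."""
    dim = n + 1
    H = np.diag([n - 2 * i for i in range(dim)]).astype(float)
    X = np.zeros((dim, dim))
    Y = np.zeros((dim, dim))
    for i in range(dim - 1):
        Y[i + 1, i] = 1.0
        X[i, i + 1] = (i + 1) * (n - i)
    return X, Y, H

results = []
for n in range(1, 6):
    X, Y, H = rep_matrices(n)
    C = X @ Y + Y @ X + H @ H / 2
    ok = (np.allclose(X @ Y - Y @ X, H)                      # [X, Y] = H
          and np.allclose(C, (n**2 + 2 * n) / 2 * np.eye(n + 1)))
    results.append(ok)
```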

Solution to (25).

The idea is to use the formulae we found for the actions of $\rho(Y)$ and $\rho(X)$ on a weight basis for a representation of $\mathfrak{sl}_{2,\mathbb{C}}$, in the course of proving their classification, to just define a representation with highest weight $\lambda$.

Let $V$ be an infinite-dimensional vector space with basis $v_0, v_1, v_2, \ldots$. Define a representation $\rho = \rho_\lambda$ of $\mathfrak{sl}_{2,\mathbb{C}}$ on $V$ by

\begin{align*} \rho(H)v_i &= (\lambda - 2i)v_i \\ \rho(Y)v_i &= v_{i+1} \\ \rho(X)v_i &= i(\lambda - i + 1)v_{i-1} \end{align*}

for all $i \geq 0$ (so $\rho(X)v_0 = 0$). To check that this does define a representation, we must check that $[\rho(A), \rho(B)] = \rho([A, B])$ for all $A, B \in \{X, Y, H\}$ (it suffices to consider each pair $(A, B)$ of distinct elements in one particular order). And indeed we have:

\begin{align*} [\rho(X), \rho(Y)]v_i &= \big((i + 1)(\lambda - i) - i(\lambda - i + 1)\big)v_i \\ &= (\lambda - 2i)v_i \\ &= \rho(H)v_i \\ [\rho(H), \rho(X)]v_i &= i(\lambda - i + 1)\big(\lambda - 2i + 2 - (\lambda - 2i)\big)v_{i-1} \\ &= 2i(\lambda - i + 1)v_{i-1} \\ &= 2\rho(X)v_i \\ [\rho(H), \rho(Y)]v_i &= \big(\lambda - 2i - 2 - (\lambda - 2i)\big)v_{i+1} \\ &= -2\rho(Y)v_i \end{align*}

as required.

Finally, $v_0$ is killed by $\rho(X)$ and is a weight vector of weight $\lambda$, so $V$ has highest weight $\lambda$.

Advanced remark: $(V, \rho_\lambda)$ will be irreducible unless $\lambda$ is a nonnegative integer, in which case the vector $v_{\lambda+1}$ generates a subrepresentation $W$ isomorphic to $(V, \rho_{-\lambda-2})$ and the quotient representation $V/W$ is exactly the finite-dimensional irreducible representation with highest weight $\lambda$.
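The three commutator checks reduce to identities in the coefficients, which can be verified symbolically. A sketch assuming sympy is available:

```python
import sympy as sp

lam, i = sp.symbols('lambda i')

# Coefficients from the definition: rho(H) v_i = (lam - 2i) v_i,
# rho(Y) v_i = v_{i+1}, rho(X) v_i = i (lam - i + 1) v_{i-1}.
h = lambda j: lam - 2 * j          # eigenvalue of rho(H) on v_j
x = lambda j: j * (lam - j + 1)    # coefficient of v_{j-1} in rho(X) v_j

# [X, Y] v_i = (x(i+1) - x(i)) v_i should equal rho(H) v_i = h(i) v_i
check_XY = sp.simplify(x(i + 1) - x(i) - h(i))

# [H, X] v_i = x(i) (h(i-1) - h(i)) v_{i-1} should equal 2 rho(X) v_i
check_HX = sp.simplify(x(i) * (h(i - 1) - h(i)) - 2 * x(i))

# [H, Y] v_i = (h(i+1) - h(i)) v_{i+1} should equal -2 rho(Y) v_i
check_HY = sp.simplify((h(i + 1) - h(i)) + 2)
```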

Solution to (26).

We use the notation $v_{2k-n} = e_1^ke_{-1}^{n-k}$ for the usual weight basis of $\operatorname{Sym}^n(\mathbb{C}^2)$, so $v_{2k-n}$ has weight $2k - n$, for $0 \leq k \leq n$. We have the formulas $Xv_{2k-n} = (n - k)v_{2k-n+2}$ and $Yv_{2k-n} = kv_{2k-n-2}$. We also abbreviate $\operatorname{Sym}^n = \operatorname{Sym}^n(\mathbb{C}^2)$.

  1. The weights of $\operatorname{Sym}^2(\mathbb{C}^2)$ are $\{2, 0, -2\}$. To obtain the weights of $\operatorname{Sym}^2(\operatorname{Sym}^2(\mathbb{C}^2))$ we add together all possible (unordered, possibly equal) pairs of these and get:

     \[ \{4, 2, 0, 0, -2, -4\}. \]

     Thus

     \[ \operatorname{Sym}^2(\operatorname{Sym}^2(\mathbb{C}^2)) \cong \operatorname{Sym}^4(\mathbb{C}^2) \oplus \mathbb{C}. \]

     A highest weight vector of weight $4$ is $v_2^2$ (clear as it is a symmetric product of highest weight vectors). To get a highest weight vector of weight $0$ we must take a linear combination of $v_0^2$ and $v_2v_{-2}$ which is killed by $X$. Since $Xv_0^2 = 2v_0v_2$ and $X(v_2v_{-2}) = 2v_2v_0$, we see that

     \[ v_0^2 - v_2v_{-2} \]

     is a weight vector of weight $0$ killed by $X$, so a highest weight vector of weight $0$.

  2. This time we must take all sums of unordered pairs of *distinct* elements of $\{2, 0, -2\}$. This gives

     \[ \{2, 0, -2\}, \]

     so that $\Lambda^2(\operatorname{Sym}^2(\mathbb{C}^2)) \cong \operatorname{Sym}^2(\mathbb{C}^2)$. A highest weight vector of weight $2$ is $v_2 \wedge v_0$.

  3. We have to add together all pairs of weights from $\{3, 1, -1, -3\}$ and $\{2, 0, -2\}$, giving

     \[ \{5, 3, 3, 1, 1, 1, -1, -1, -1, -3, -3, -5\} \]

     as the weights of $\operatorname{Sym}^3 \otimes \operatorname{Sym}^2$. Thus the decomposition is

     \[ \operatorname{Sym}^5 \oplus \operatorname{Sym}^3 \oplus \operatorname{Sym}^1. \]

     A highest weight vector of weight $5$ is $v_3 \otimes v_2$. We can now apply $Y$ repeatedly (and divide out by constant factors where possible to keep the numbers small) to obtain a weight basis of the copy of $\operatorname{Sym}^5$ in the representation, as shown in the table.

     weight | vector
     5      | $v_3 \otimes v_2$
     3      | $3v_1 \otimes v_2 + 2v_3 \otimes v_0$
     1      | $3v_{-1} \otimes v_2 + 6v_1 \otimes v_0 + v_3 \otimes v_{-2}$
     -1     | $3v_1 \otimes v_{-2} + 6v_{-1} \otimes v_0 + v_{-3} \otimes v_2$
     -3     | $3v_{-1} \otimes v_{-2} + 2v_{-3} \otimes v_0$
     -5     | $v_{-3} \otimes v_{-2}$

     We have $X(v_3 \otimes v_0) = v_3 \otimes v_2$ and $X(v_1 \otimes v_2) = v_3 \otimes v_2$, so that

     \[ v_3 \otimes v_0 - v_1 \otimes v_2 \]

     is a highest weight vector of weight $3$. We apply $Y$ repeatedly (and divide out scalars where possible) to obtain a weight basis of the copy of $\operatorname{Sym}^3$ in the representation:

     weight | vector
     3      | $v_3 \otimes v_0 - v_1 \otimes v_2$
     1      | $v_3 \otimes v_{-2} + v_1 \otimes v_0 - 2v_{-1} \otimes v_2$
     -1     | $v_{-3} \otimes v_2 + v_{-1} \otimes v_0 - 2v_1 \otimes v_{-2}$
     -3     | $v_{-3} \otimes v_0 - v_{-1} \otimes v_{-2}$

     Notice that we can `cheat' and obtain just the weight vectors with nonpositive weight, and then apply the symmetry sending $v_i$ to $v_{-i}$ to obtain those of nonnegative weight.

     Finally, we have $X(v_3 \otimes v_{-2}) = 2v_3 \otimes v_0$, $X(v_1 \otimes v_0) = v_3 \otimes v_0 + v_1 \otimes v_2$, and $X(v_{-1} \otimes v_2) = 2v_1 \otimes v_2$, so that

     \[ v_3 \otimes v_{-2} - 2v_1 \otimes v_0 + v_{-1} \otimes v_2 \]

     is a highest weight vector of weight $1$. Applying $Y$ (or the symmetry discussed above) we see that this vector together with

     \[ v_{-3} \otimes v_2 - 2v_{-1} \otimes v_0 + v_1 \otimes v_{-2} \]

     is a weight basis for the copy of $\operatorname{Sym}^1 = \mathbb{C}^2$.

  4. We must add together all unordered triples of (not necessarily distinct) elements of $\{2, 0, -2\}$. We get that the weights of $\operatorname{Sym}^3(\operatorname{Sym}^2(\mathbb{C}^2))$ are:

     \[ \{6, 4, 2, 2, 0, 0, -2, -2, -4, -6\}, \]

     so that

     \[ \operatorname{Sym}^3(\operatorname{Sym}^2(\mathbb{C}^2)) \cong \operatorname{Sym}^6(\mathbb{C}^2) \oplus \operatorname{Sym}^2(\mathbb{C}^2). \]

     A highest weight vector of weight $6$ is $v_2^3$. We have $X(v_2^2v_{-2}) = 2v_2^2v_0$ and $X(v_2v_0^2) = 2v_2^2v_0$, so that

     \[ v_2^2v_{-2} - v_2v_0^2 \]

     is a highest weight vector of weight $2$.

Solution to (27).

  1. The weights of $\operatorname{Sym}^a\mathbb{C}^2$ are $a, a-2, \ldots, 2-a, -a$ and the weights of $\operatorname{Sym}^b\mathbb{C}^2$ are $b, b-2, \ldots, 2-b, -b$. Without loss of generality, $a \geq b$. Adding these lists together, remembering multiplicity, we see that in the tensor product:

     • For the weights $a+b, a+b-2, \ldots, a-b$, each $a + b - 2k$ (with $0 \leq k \leq b$) occurs $k + 1$ times, as

       \[ a + (b - 2k),\ (a - 2) + (b - 2k + 2),\ \ldots,\ (a - 2k) + b; \]

       the same holds for their negatives.

     • Each weight $a-b, a-b-2, \ldots, b-a+2, b-a$ occurs $b + 1$ times; specifically, $a - b - 2k$ occurs as

       \[ (a - 2k) + (-b),\ (a - 2k - 2) + (2 - b),\ \ldots,\ (a - 2k - 2b) + b. \]

     This agrees with the weights of

     \[ \operatorname{Sym}^{a+b}\mathbb{C}^2 \oplus \operatorname{Sym}^{a+b-2}\mathbb{C}^2 \oplus \cdots \oplus \operatorname{Sym}^{a-b}\mathbb{C}^2, \]

     and so this is the decomposition of $\operatorname{Sym}^a\mathbb{C}^2 \otimes \operatorname{Sym}^b\mathbb{C}^2$ into irreducibles.

  2. Omitted (for now).
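The weight-multiset bookkeeping in part 1 is easy to mechanise. A check of the Clebsch–Gordan decomposition for small $a, b$, using only the Python standard library:

```python
from collections import Counter

def weights_sym(n):
    """Weights of Sym^n(C^2): n, n-2, ..., -n."""
    return [n - 2 * k for k in range(n + 1)]

def clebsch_gordan_ok(a, b):
    """Weights of Sym^a (x) Sym^b match those of Sym^{a+b} + ... + Sym^{a-b}."""
    assert a >= b
    tensor = Counter(u + v for u in weights_sym(a) for v in weights_sym(b))
    decomp = Counter()
    for m in range(a - b, a + b + 1, 2):
        decomp.update(weights_sym(m))
    return tensor == decomp

checks = [clebsch_gordan_ok(a, b) for a in range(6) for b in range(a + 1)]
```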

Solution to (28).

Let $\mathcal{I} = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}$, $\mathcal{J} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, $\mathcal{K} = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ be a basis for $\mathfrak{su}_2$, so that $[\mathcal{I}, \mathcal{J}] = 2\mathcal{K}$ etc. Let $A = x\mathcal{I} + y\mathcal{J} + z\mathcal{K}$ be a general element of $\mathfrak{su}_2$. Then we compute that the matrix for the adjoint action of $A$ with respect to the basis $\mathcal{I}, \mathcal{J}, \mathcal{K}$ is:

\[ 2\begin{pmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{pmatrix}. \]

This has characteristic polynomial $T(T^2 + 4(x^2 + y^2 + z^2))$, which has distinct roots (unless $x = y = z = 0$, in which case $A = 0$), so the adjoint action of $A$ is diagonalizable over $\mathbb{C}$. So, for every $A \in \mathfrak{su}_2$, the map $B \mapsto [A, B]$ is diagonalizable. This is not true for $\mathfrak{sl}_{2,\mathbb{R}}$, since the matrix of the adjoint action of $X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, with respect to the standard basis $X, H, Y$, is

\[ \begin{pmatrix} 0 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \]

which is nilpotent and nonzero, hence not diagonalizable.

Here is another, related solution. In every finite-dimensional representation $(\rho, V)$ of $\mathfrak{sl}_{2,\mathbb{R}}$, the operator $\rho(X)$ is nilpotent, so it is not diagonalizable unless it is zero. But in the standard representation of $\mathfrak{su}_2$, every element acts diagonalizably. Indeed, if $A \in \mathfrak{su}_2$ then $iA$ is Hermitian, and hence diagonalizable, so $A$ is diagonalizable. So $\mathfrak{su}_2$ cannot be isomorphic to $\mathfrak{sl}_{2,\mathbb{R}}$.

Solution to (29).

The eigenvalues of $\begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix}$ on the weight basis $\{e_1^ke_{-1}^{n-k} : 0 \leq k \leq n\}$ for $\operatorname{Sym}^n\mathbb{C}^2$ are $e^{kit}e^{-(n-k)it} = e^{(2k-n)it}$ for $0 \leq k \leq n$. The trace is then (using the formula for the sum of a geometric progression)

\begin{align*} \chi_n(t) &= \sum_{k=0}^n e^{(2k-n)it} \\ &= e^{-nit}\,\frac{e^{(2n+2)it} - 1}{e^{2it} - 1} \\ &= \frac{e^{(n+1)it} - e^{-(n+1)it}}{e^{it} - e^{-it}} \\ &= \frac{\sin((n+1)t)}{\sin(t)}. \end{align*}
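The closed form for the character can be compared against the defining sum at a few sample points; a sketch using the standard library only:

```python
import cmath
import math

def chi_direct(n, t):
    """Character of Sym^n as the sum of eigenvalues e^{(2k-n)it}."""
    return sum(cmath.exp((2 * k - n) * 1j * t) for k in range(n + 1))

def chi_closed(n, t):
    """The closed form sin((n+1)t) / sin(t)."""
    return math.sin((n + 1) * t) / math.sin(t)

checks = [abs(chi_direct(n, t) - chi_closed(n, t)) < 1e-9
          for n in range(6) for t in (0.3, 1.1, 2.5)]
```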
Solution to (30).

  1. This follows from Schur's lemma, since $Z$ is in the centre of $\mathfrak{gl}_{2,\mathbb{C}}$.

  2. Suppose otherwise. Then there is a proper, nonzero subspace $W \subseteq V$ preserved by $\rho(A)$ for all $A \in \mathfrak{sl}_{2,\mathbb{C}}$. But $W$ is also preserved by $\rho(Z)$, as $\rho(Z)$ is scalar. So $W$ is preserved by

     \[ \mathfrak{sl}_{2,\mathbb{C}} \oplus \mathbb{C}Z = \mathfrak{gl}_{2,\mathbb{C}}, \]

     so it is a $\mathfrak{gl}_{2,\mathbb{C}}$-subrepresentation, contradicting that $V$ is irreducible.

  3. Let $V(n)$ be the irreducible representation of $\mathfrak{sl}_{2,\mathbb{C}}$ of dimension $n + 1$. To extend the action of $\mathfrak{sl}_{2,\mathbb{C}}$ to an action of $\mathfrak{gl}_{2,\mathbb{C}}$ we have to send $Z$ to a map $V(n) \to V(n)$ commuting with the action of $\mathfrak{sl}_{2,\mathbb{C}}$. Since $V(n)$ is irreducible, we must send $Z$ to a scalar map, and any scalar $\lambda$ is possible.

  4. If the representation $\rho$ from the third part exponentiates to a representation of $GL_2(\mathbb{C})$, then we must have that

     \[ e^{\pi i\lambda}I = \exp(\rho(\pi iZ)) = \rho(\exp(\pi iZ)) = \rho\left(\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\right) = (-1)^nI. \]

     This implies that $\lambda$ is an integer of the same parity as $n$. Moreover, if $\lambda$ is such an integer then we get a representation of $GL_2(\mathbb{C})$ with the required derivative by taking

     \[ \operatorname{Sym}^n(\mathbb{C}^2) \otimes {\det}^{\frac{\lambda - n}{2}}, \]

     where $\mathbb{C}^2$ is the standard representation of $GL_2(\mathbb{C})$ and ${\det}^k$ is the one dimensional representation on which $g$ acts as $\det(g)^k$.

     Thus the irreducible finite-dimensional representations of $GL_2(\mathbb{C})$ are parametrised by pairs $(n, \lambda)$ where $n \geq 0$ is an integer and $\lambda$ is an integer of the same parity as $n$.

Solution to (31).

  1. We have $J_x = z\partial_y - y\partial_z$ and so

     \begin{align*} J_x^2 &= z\partial_yz\partial_y - z\partial_yy\partial_z - y\partial_zz\partial_y + y\partial_zy\partial_z \\ &= z^2\partial_y^2 + y^2\partial_z^2 - 2yz\,\partial_y\partial_z - z\partial_z - y\partial_y. \end{align*}

     Summing cyclically over $x$, $y$ and $z$ we see that

     \[ J_x^2 + J_y^2 + J_z^2 = \Big(\sum x^2\Big)\Big(\sum \partial_x^2\Big) - \sum x^2\partial_x^2 - 2\sum x\partial_x - 2\sum_{\text{pairs}} yz\,\partial_y\partial_z. \]

     Note that

     \[ \Big(\sum x\partial_x\Big)^2 = \sum x^2\partial_x^2 + \sum x\partial_x + 2\sum_{\text{pairs}} yz\,\partial_y\partial_z \]

     since

     \[ x\partial_xx\partial_x = x^2\partial_x^2 + x\partial_x. \]

     Thus

     \[ J_x^2 + J_y^2 + J_z^2 = r^2\Delta - \Big(\sum x\partial_x\Big)^2 - \sum x\partial_x. \]

     On $\mathcal{P}(\ell)$, $\sum x\partial_x$ acts as multiplication by $\ell$ (Euler's identity), and so

     \[ J_x^2 + J_y^2 + J_z^2 = r^2\Delta - \ell^2 - \ell \]

     as required.

  2. The isomorphism $\mathfrak{sl}_{2,\mathbb{C}} \to \mathfrak{so}_{3,\mathbb{C}}$ sends

     \begin{align*} X = \mathcal{I} - i\mathcal{J} &\mapsto J_x - iJ_y \\ Y = -(\mathcal{I} + i\mathcal{J}) &\mapsto -(J_x + iJ_y) \\ H = -2i\mathcal{K} &\mapsto -2iJ_z. \end{align*}

     The Casimir for $\mathfrak{sl}_{2,\mathbb{C}}$ is

     \[ \mathcal{C} = XY + YX + \tfrac12H^2 \]

     (where this multiplication is not as $2 \times 2$ matrices but as operators on a fixed representation!), and under the isomorphism above this goes to

     \[ -(J_x - iJ_y)(J_x + iJ_y) - (J_x + iJ_y)(J_x - iJ_y) - 2J_z^2, \]

     which multiplies out to

     \[ -2(J_x^2 + J_y^2 + J_z^2). \]

     Thus, as operators on $\mathcal{P}(\ell)$,

     \[ \mathcal{C} = 2(\ell^2 + \ell - r^2\Delta). \]
Solution to (32).

  1. We first check that $(x - iy)^\ell$ is a weight vector for $J_z = y\partial_x - x\partial_y$:

     \begin{align*} J_z(x - iy)^\ell &= \ell y(x - iy)^{\ell-1} + i\ell x(x - iy)^{\ell-1} \\ &= \ell(y + ix)(x - iy)^{\ell-1} \\ &= i\ell(x - iy)^\ell, \end{align*}

     so that it is a weight vector of weight $i\ell$.

     Recall that the raising operator is

     \[ J_x - iJ_y = z(\partial_y + i\partial_x) - (y + ix)\partial_z. \]

     Thus

     \[ (J_x - iJ_y)(x - iy)^\ell = z(-i + i)\ell(x - iy)^{\ell-1} = 0, \]

     so that $(x - iy)^\ell$ is a highest weight vector as required.

  2. The lowering operator is (up to sign)

     \[ J_x + iJ_y = z(\partial_y - i\partial_x) - (y - ix)\partial_z. \]

     Applying this once to $(x - iy)^\ell$ we find that

     \[ z(-i - i)\ell(x - iy)^{\ell-1} = -2i\ell z(x - iy)^{\ell-1} \]

     is a weight vector for $J_z$ of weight $i(\ell - 1)$; we may throw away the constants to get just

     \[ z(x - iy)^{\ell-1} \]

     (for $\ell \geq 1$; for $\ell = 0$ we just get $0$!). Applying the lowering operator to this, we get

     \[ -2i(\ell - 1)z^2(x - iy)^{\ell-2} - (y - ix)(x - iy)^{\ell-1}, \]

     which simplifies to

     \[ i\big(x^2 + y^2 - 2(\ell - 1)z^2\big)(x - iy)^{\ell-2}, \]

     a weight vector of weight $i(\ell - 2)$ (for $\ell \geq 2$; for $\ell = 1$ we just get $-(y - ix) = i(x + iy)$ here, while for $\ell = 0$ we get $0$).

  3. When $\ell = 1$ we apply the procedure from the previous part to get a weight basis

     \[ (x - iy),\quad z,\quad (x + iy). \]

     When $\ell = 2$ the procedure from the previous part gives weight vectors

     \[ (x - iy)^2,\quad z(x - iy),\quad (x^2 + y^2 - 2z^2). \]

     We can now either continue applying the lowering operator, or observe that the substitution $y \mapsto -y$ preserves harmonic polynomials and sends weight vectors for $J_z$ of weight $ik$ to weight vectors for $J_z$ of weight $-ik$, so that we get a weight basis

     \[ (x - iy)^2,\quad z(x - iy),\quad (x^2 + y^2 - 2z^2),\quad z(x + iy),\quad (x + iy)^2. \]

     (Proof: let $f(x, y, z)$ be a harmonic polynomial and weight vector for $J_z$, and let $g(x, y, z) = f(x, -y, z)$. Then $\Delta g = 0$, so $g$ is harmonic, while

     \[ J_zg = (y\partial_x - x\partial_y)g = y(\partial_xf)(x, -y, z) + x(\partial_yf)(x, -y, z) = -(J_zf)(x, -y, z), \]

     and so $g$ is a weight vector with weight minus that of $f$.)
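The harmonicity and weight computations above can be replayed symbolically. A sketch assuming sympy is available (`laplacian` and `Jz` are helper names introduced here):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

def Jz(f):
    return y * sp.diff(f, x) - x * sp.diff(f, y)

checks = []
for l in range(1, 5):
    f = (x - sp.I * y) ** l
    checks.append(sp.expand(laplacian(f)) == 0)           # harmonic
    checks.append(sp.expand(Jz(f) - sp.I * l * f) == 0)   # J_z-weight is i*l

# The weight-zero vector found for l = 2 is harmonic and J_z-invariant
g = x**2 + y**2 - 2 * z**2
checks.append(sp.expand(laplacian(g)) == 0)
checks.append(sp.expand(Jz(g)) == 0)
```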

Solution to (33).

  1. Note that

     \[ \partial_x(r^2f) = r^2\partial_xf + 2xf \]

     and so

     \[ \partial_x^2(r^2f) = r^2\partial_x^2f + 4x\partial_xf + 2f. \]

     Summing over $x, y, z$ and using Euler's identity

     \[ \sum x\partial_xf = \ell f \]

     for $f \in \mathcal{P}(\ell)$ gives

     \[ \Delta r^2 = r^2\Delta + 4\ell + 6 \]

     on $\mathcal{P}(\ell)$, as required.

  2. Notice that, if $f \in \mathcal{P}(\ell)$, then $r^{2k-2}f \in \mathcal{P}(\ell + 2k - 2)$ and so

     \[ \Delta(r^{2k}f) = r^2\Delta(r^{2k-2}f) + 2(2\ell + 4k - 1)r^{2k-2}f. \]

     Iterating, we see that

     \begin{align*} \Delta(r^{2k}f) &= r^{2k}\Delta(f) + \sum_{i=1}^k 2(2\ell + 4i - 1)r^{2k-2}f \\ &= r^{2k}\Delta(f) + 2k(2\ell + 2k + 1)r^{2k-2}f. \end{align*}
  3. Suppose $f \in \mathcal{H}(\ell) \cap r^2\mathcal{P}(\ell - 2)$ is nonzero. Then we may write $f = r^{2k}g$ where $\ell/2 \geq k \geq 1$ and $g$ is not divisible by $r^2$. Then

     \[ 0 = \Delta f = \Delta(r^{2k}g) = r^{2k}\Delta(g) + 2k\big(2(\ell - 2k) + 2k + 1\big)r^{2k-2}g \]

     by the previous part and the fact that $f$ is harmonic. Since $2k(2(\ell - 2k) + 2k + 1) \neq 0$, we see that

     \[ r^{2k-2}g = cr^{2k}\Delta g \]

     and so $g = cr^2\Delta g$ for some constant $c$. But this contradicts our assumption that $g$ is not divisible by $r^2$!
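The identity from part (1) can be spot-checked symbolically on homogeneous polynomials. A sketch assuming sympy is available (the sample polynomials and degrees are chosen here for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r2 = x**2 + y**2 + z**2

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

checks = []
for f, l in [(x**3 - 3 * x * y**2, 3), (x * y * z, 3), (x**2 - y**2, 2)]:
    # Part (1): Delta(r^2 f) = r^2 Delta(f) + (4 l + 6) f on P(l)
    lhs = sp.expand(laplacian(r2 * f))
    rhs = sp.expand(r2 * laplacian(f) + (4 * l + 6) * f)
    checks.append(lhs == rhs)
```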

Solution to (34).

  1. For the first part, we first find a basis of weight vectors for $V$ (with respect to $J_z = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$). This amounts to finding the eigenvectors of $J_z$; we see that $e_3$ is an eigenvector (weight vector) of eigenvalue (weight) $0$ and that $e_1 \mp ie_2$ are eigenvectors of eigenvalue $\pm i$ (note the signs!!).

     It follows that a weight basis of $\operatorname{Sym}^2(V)$ is given by taking the products of any two of $\{e_1 - ie_2, e_1 + ie_2, e_3\}$, with the weights being the corresponding sums of two of $\{\pm i, 0\}$. We find that the weights are

     \[ \{2i, i, 0, 0, -i, -2i\}. \]

     A weight vector of weight $\pm 2i$ is given by $(e_1 \mp ie_2)^2 = e_1^2 \mp 2ie_1e_2 - e_2^2$. A weight vector of weight $\pm i$ is given by $(e_1 \mp ie_2)e_3 = e_1e_3 \mp ie_2e_3$. Finally, a basis of weight vectors of weight $0$ is given by $(e_1 + ie_2)(e_1 - ie_2) = e_1^2 + e_2^2$ and $e_3^2$.

     Since the weights of $\operatorname{Sym}^2(V)$ agree with the weights of $V(2) \oplus V(0)$ (notation as in Theorem 3.29), this is the decomposition into irreducible representations. We now have to find $V(0)$ and $V(2)$ as subrepresentations of $\operatorname{Sym}^2(V)$.

     To find the copy of $V(0)$, we need to look for a weight vector of weight $0$ killed by $J_x$, $J_y$ and $J_z$. Note that

     \[ J_xe_3^2 = 2(J_xe_3)e_3 = -2e_2e_3, \qquad J_ye_3^2 = 2e_1e_3, \]

     and $J_ze_3^2 = 0$. Similar calculations show $J_x(e_1^2 + e_2^2) = 2e_2e_3$ and $J_y(e_1^2 + e_2^2) = -2e_1e_3$. It follows that $e_1^2 + e_2^2 + e_3^2$ is killed by each of $J_x$, $J_y$ and $J_z$ and therefore spans a copy of the trivial representation.

     The copy of $V(2)$ inside $\operatorname{Sym}^2(V)$ is spanned by the weight vectors of nonzero weight together with a weight vector of weight $0$, which may be obtained by applying the lowering operator to the weight vector of weight $i$. This gives

     \[ (J_x + iJ_y)(e_1e_3 - ie_2e_3) = -e_1e_2 - i(e_3^2 - e_2^2) + i(e_1^2 - e_3^2) + e_1e_2, \]

     simplifying to $i(e_1^2 + e_2^2 - 2e_3^2)$. Thus a basis of weight vectors for the copy of $V(2)$ in $\operatorname{Sym}^2(V)$ is

     \[ (e_1 \mp ie_2)^2,\quad (e_1 \mp ie_2)e_3,\quad e_1^2 + e_2^2 - 2e_3^2. \]

     An alternative approach is to notice that if we write $x = e_1$, $y = e_2$ and $z = e_3$, then $V$ is simply isomorphic to the representation of $\mathfrak{so}_3$ on homogeneous linear polynomials. Then the symmetric square of $V$ is exactly the space $\mathcal{P}(2)$ of homogeneous quadratic polynomials, which we know decomposes as

     \[ \mathcal{H}(2) \oplus r^2\mathcal{H}(0). \]

     The basis for $\mathcal{H}(2)$ we wrote down above then corresponds exactly to that in example 3.40, while the copy of the trivial representation is spanned by

     \[ r^2 = x^2 + y^2 + z^2 = e_1^2 + e_2^2 + e_3^2. \]
  2. The $J_z$ weights of $\mathcal{H}(2)$ are $2i, i, 0, -i, -2i$. Thus the $J_z$ weights of $\mathcal{H}(2) \otimes \mathcal{H}(2)$ are obtained by adding all possible ordered pairs of these, giving

     \[ \{\pm 4i, \pm 3i, \pm 3i, \pm 2i, \pm 2i, \pm 2i, \pm i, \pm i, \pm i, \pm i, 0, 0, 0, 0, 0\}. \]

     This agrees with the multiset of weights of

     \[ \mathcal{H}(4) \oplus \mathcal{H}(3) \oplus \mathcal{H}(2) \oplus \mathcal{H}(1) \oplus \mathcal{H}(0), \]

     so that this is the required decomposition (because representations of $\mathfrak{so}_3$ are determined by their $J_z$ weights). Note that, thankfully, we aren't asked to find these as subrepresentations!

Solution to (35).

This is just matrix multiplication:

\[ \begin{pmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{pmatrix}E_{ij} = a_iE_{ij} \]

and

\[ E_{ij}\begin{pmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{pmatrix} = a_jE_{ij}, \]

whence

\[ \left[\begin{pmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{pmatrix}, E_{ij}\right] = (a_i - a_j)E_{ij}. \]

Similarly $E_{12}E_{23} = E_{13}$ and $E_{23}E_{12} = 0$, so that

\[ [E_{12}, E_{23}] = E_{13}. \]
Solution to (36).

For $\mathfrak{sl}_2$ we have $\mathfrak{h} = \mathbb{C}H$, and a weight, i.e. a linear map $\lambda : \mathfrak{h} \to \mathbb{C}$, is uniquely determined by $\lambda(H)$. Thus we identify weights with complex numbers (actually integers, for finite-dimensional representations).

With this identification, the roots are $\{\pm 2\}$ and the root spaces are $\mathbb{C}X$ and $\mathbb{C}Y$.

Solution to (37).

Let ρ* be the dual representation. If A𝔰𝔩3,, then the matrix of ρ*(A) with respect to the dual basis is -AT. From this, we see that ρ*(Eij)e3*=0 if i3, while ρ*(E31)e3*=-e1* and ρ*(E32)e3*=-e2*.

Moreover, if H is diagonal with entries a1,a2,a3, then ρ*(H)e3*=-a3e3*. Thus e3* is a weight vector with weight -L3. Since e3* is killed by E12 and E23, it is a highest weight vector.

It is worth thinking about how you derive the formula for the matrix of ρ*(A) with respect to the dual basis. It is defined so that, for v*∈V*, w∈V, and A=(aij)∈𝔤,

(ρ*(A)v*)(w)=-v*(ρ(A)w).

We apply this with v*=ei*, recalling that ei*(ek)=δik. Then

$(\rho^*(A)e_i^*)(e_j)=-e_i^*(\rho(A)e_j)=-e_i^*\Big(\sum_k a_{kj}e_k\Big)=-a_{ij}.$

This implies that

$\rho^*(A)e_i^*=-\sum_j a_{ij}e_j^*$

which exactly says that the matrix of ρ*(A) with respect to the dual basis is minus the transpose of the matrix of ρ(A) with respect to the original basis.
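It is also worth checking that A↦-Aᵀ really defines a representation, i.e. that it preserves brackets; a quick numerical check (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

def br(X, Y):
    return X @ Y - Y @ X

# rho*([A, B]) = [rho*(A), rho*(B)], since [-A^T, -B^T] = -[A, B]^T
print(np.allclose(br(-A.T, -B.T), -br(A, B).T))  # True
```

Indeed [-Aᵀ,-Bᵀ]=AᵀBᵀ-BᵀAᵀ=(BA)ᵀ-(AB)ᵀ=-[A,B]ᵀ, which is what the check confirms.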

Solution to (38).

By definition ΛW consists of elements of 𝔥* that may be written as an integer linear combination of the Li. Any such linear combination clearly satisfies the given condition. Conversely, if a1L1+a2L2+a3L3 satisfies a1-a2∈ℤ and a2-a3∈ℤ, then

a1L1+a2L2+a3L3=(a1-a2)L1-(a2-a3)L3∈ΛW

since L1+L2+L3=0.

The ai need not be integers; indeed, zL1+zL2+zL3=0∈ΛW for any z∈ℂ.

Solution to (39).

  1. 1.

    See Figure 12.

    Figure 12: Root lattice (blue) inside weight lattice (black).
  2. 2.

    A complete set of coset representatives for ΛW/ΛR is 0, L1, 2L1 (please check this!), so the index is three.

    Alternatively, there is a homomorphism ΛW→ℤ/3ℤ sending aL1+bL2+cL3 to a+b+c mod 3, and one can show that the kernel of this homomorphism is ΛR.
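A few lines of Python make this alternative argument concrete (my addition): record a weight aL1+bL2+cL3 by its coefficient triple and reduce a+b+c mod 3.

```python
def phi(a, b, c):
    # the homomorphism Lambda_W -> Z/3 sending a*L1 + b*L2 + c*L3 to a+b+c mod 3
    return (a + b + c) % 3

# well defined: the relation L1 + L2 + L3 = 0 maps to 1 + 1 + 1 = 0 mod 3
print(phi(1, 1, 1))  # 0
# every root L_i - L_j lies in the kernel
roots = [(1, -1, 0), (1, 0, -1), (0, 1, -1), (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]
print({phi(*r) for r in roots})  # {0}
# the coset representatives 0, L1, 2*L1 hit all three residues
print([phi(0, 0, 0), phi(1, 0, 0), phi(2, 0, 0)])  # [0, 1, 2]
```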

  3. 3.

    Here 𝔥=⟨H⟩. We have an isomorphism 𝔥*≅ℂ taking α to α(H). The weight lattice is then ℤ. The root lattice is 2ℤ, since the eigenvalues of H on the adjoint representation are 0,±2. The index is therefore two.

  4. 4.

    If v∈V is a weight vector of weight α then the subrepresentation generated by v must be all of V. Thus V is spanned by

    $\{X_1\cdots X_nv:X_i\in\{E_{ij}\}\cup\mathfrak{h}\}$

    and we see from the fundamental weight calculation that V is spanned by weight vectors all of whose weights differ from α by an integer linear combination of roots, as required.

Solution to (40).

The weights of Sym3(ℂ3) are a1L1+a2L2+a3L3 with a1,a2,a3 non-negative integers summing to 3. By Weyl symmetry it is enough to find the dominant weights. Since

a1L1+a2L2+a3L3=(a1-a2)L1-(a2-a3)L3

these are those with a1a2a3. The only possibilities for (a1,a2,a3) are then (3,0,0),(2,1,0) and (1,1,1) corresponding to

3L1,2L1+L2(=L1-L3),L1+L2+L3(=0).

Applying Weyl symmetry we see that the weights are

{3L1, 3L2, 3L3, Li-Lj for i≠j, 0}.

It is left to you to draw these; for a similar picture of Sym5(ℂ3) see figure 11.
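Enumerating the weights by machine (my addition, not part of the original solution) confirms the count: ten weights, all distinct, with exactly the three dominant triples found above. A weight aL1+bL2+cL3 is reduced via L1+L2+L3=0 to coordinates (a-c, b-c) in the basis L1, L2:

```python
# all exponent triples (a, b, c) with a + b + c = 3
triples = [(a, b, 3 - a - b) for a in range(4) for b in range(4 - a)]
# reduce a*L1 + b*L2 + c*L3 mod L1 + L2 + L3 = 0 to coordinates in L1, L2
coords = [(a - c, b - c) for (a, b, c) in triples]
print(len(coords), len(set(coords)))  # 10 10  (all weights distinct)

dominant = [t for t in triples if t[0] >= t[1] >= t[2]]
print(dominant)  # [(1, 1, 1), (2, 1, 0), (3, 0, 0)]
```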

Solution to (41).

The representation ℂ3⊗(ℂ3)* has a highest weight vector e1⊗e3* of weight L1-L3. It therefore contains a subrepresentation isomorphic to V(1,1)=𝔰𝔩3,ℂ (which will be the subrepresentation generated by e1⊗e3*).

The weights of ℂ3⊗(ℂ3)* are {Li-Lj:i≠j}∪{0,0,0} (as we can write 0=Li-Li in three ways). Therefore, looking at weights, we have

ℂ3⊗(ℂ3)*≅𝔰𝔩3,ℂ⊕W

where W is a one-dimensional representation with a single weight, 0. Therefore W is trivial and

ℂ3⊗(ℂ3)*≅𝔰𝔩3,ℂ⊕ℂ.

To find the trivial representation inside ℂ3⊗(ℂ3)*, we look for a highest weight vector of weight 0. The following works:

e1⊗e1*+e2⊗e2*+e3⊗e3*.
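In the Hom picture of the conceptual proof that follows, this vector corresponds to the identity matrix, on which every X acts by the commutator XT-TX and hence by zero; a quick numerical check (my addition):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((3, 3))
X -= (np.trace(X) / 3) * np.eye(3)  # a random trace-zero matrix in sl_3

# e1@e1* + e2@e2* + e3@e3* corresponds to T = identity in Hom(C^3, C^3),
# on which the action is T -> XT - TX
T = np.eye(3)
print(np.allclose(X @ T - T @ X, 0))  # True
```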

For a conceptual proof, if V and W are any representations of a Lie algebra 𝔤 then we can define a representation on Hom(V,W) by

(XT)(v)=X(Tv)-T(Xv)

for X∈𝔤, v∈V, T∈Hom(V,W). Then we have a map

V*⊗W→Hom(V,W)

sending

λ⊗w↦Tλ,w

where Tλ,wHom(V,W) is defined by

Tλ,w(v)=λ(v)w.

One can check that this is a 𝔤-isomorphism (if V and W are finite-dimensional). In the case at hand we get

(ℂ3)*⊗ℂ3≅Hom(ℂ3,ℂ3)=𝔤𝔩3,ℂ

where the right hand side is a representation of 𝔰𝔩3, by the same formula defining the adjoint representation. Then

𝔤𝔩3,ℂ=𝔰𝔩3,ℂ⊕ℂ

as representations of 𝔰𝔩3,ℂ, with ℂ corresponding to the subspace of scalar matrices and 𝔰𝔩3,ℂ⊆𝔤𝔩3,ℂ in the obvious way.

An intuitive way to see the isomorphism W⊗V*≅Hom(V,W) is that, if dim V=m and dim W=n, then V* is ‘row vectors of length m’, W is ‘column vectors of length n’, and Hom(V,W) is ‘n×m matrices’; the isomorphism takes a column vector (a1,…,an)T tensored with a row vector (b1,…,bm) to the matrix (aibj)i,j.
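Numerically the isomorphism is just the outer product; a small check (my addition, with the arbitrary choices dim V = 3, dim W = 4):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.random(4)      # a column vector in W
lam = rng.random(3)    # a row vector, i.e. an element of V*
v = rng.random(3)

# T_{lam, w}(v) = lam(v) * w is the outer product matrix w lam^T applied to v
T = np.outer(w, lam)   # an element of Hom(V, W)
print(np.allclose(T @ v, lam.dot(v) * w))  # True
```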

Solution to (42).

First I claim that if ρ~ is a representation of a Lie group G on V with derivative ρ, then

ρ~(g)ρ(H)v=ρ(Adg(H))ρ~(g)v.

Indeed, we may assume that G⊆GL(V), when this just becomes gH=(gHg-1)g. Alternatively, the left hand side is the derivative at t=0 of

ρ~(g)ρ~(exp(tH))=ρ~(gexp(tH))=ρ~(exp(tAdg(H))g)=ρ~(exp(tAdg(H)))ρ~(g)

which gives the result.

Let v∈Vα. By the above, we have that, for H∈𝔥,

ρ(H)(ρ~(σ3)v)=ρ~(σ3)ρ(Adσ3(H))v=α(Adσ3H)ρ~(σ3)v.

If α=a1L1+a2L2+a3L3 and H is diagonal with entries x,y,z, then α(H)=a1x+a2y+a3z. Since Adσ3(H) is diagonal with entries y,x,z,

α(Adσ3H)=a1y+a2x+a3z=(s3α)(H)

where s3α=a2L1+a1L2+a3L3.

This shows that ρ~(σ3) maps Vα to Vs3α; since the inverse provides a map in the other direction, this is an isomorphism.

This shows that the weights of a representation, with their multiplicities, are preserved under swapping L1 and L2. Similarly for any transposition, and hence for any permutation of the Li.

Solution to (43).

It is easy to see that the weight of e1a⊗(e3*)b is aL1-bL3. Moreover, as e1 and e3* are highest weight vectors, so are e1a and (e3*)b, and so is their tensor product.

(General lemmas: if v∈V is a highest weight vector, then Evn=n(Ev)vn-1=0 for each positive root vector E, so vn is a highest weight vector in SymnV (you should also check it is a weight vector!). If v∈V and w∈W are highest weight vectors, then

E(v⊗w)=(Ev)⊗w+v⊗(Ew)=0

for each positive root vector E so vw is a highest weight vector (you should also check it is a weight vector!).)

Solution to (44).

Suppose that V is reducible, so that there is a nonzero proper subrepresentation W⊆V. Then by complete reducibility, there is another subrepresentation W′ with V=W⊕W′. Then W and W′ both have nonzero highest weight vectors, which must be linearly independent from each other, contradicting the assumption on V.

For the standard representation, the weight spaces are spanned by e1, e2, and e3 respectively and only e1 (or a scalar multiple of it) is a highest weight vector. So it is irreducible. Similarly for the dual representation.

For the adjoint representation, out of the nonzero weights only L1-L3 is a highest weight, and it has a unique highest weight vector (up to scalar), namely E13. We have to check there are no highest weight vectors of weight zero. Such a vector would be a nonzero element H∈𝔥 such that [Eij,H]=0 for all i<j. This would imply that H is scalar, but since H has trace zero this is impossible.

Remark: this is not the simplest way to see that the standard representation is irreducible; indeed, the action of 𝔰𝔩3, on 3 is transitive, which implies irreducibility. Similarly for the dual. Can you prove that the adjoint representation is irreducible without using weights?

Solution to (45).

  1. 1.

    The weights of Sym2(ℂ3) are 2L1,2L2,2L3,L1+L2,L1+L3,L2+L3 while the weights of (ℂ3)* are -L1, -L2, -L3. Adding everything from the first list to everything from the second, we see that Sym2(ℂ3)⊗(ℂ3)* has weights

    L1,L1,L1,L2,L2,L2,L3,L3,L3,2L1-L2,2L1-L3,2L2-L1,2L2-L3,2L3-L1,2L3-L2,L1+L2-L3,L2+L3-L1,L3+L1-L2.

    The weight diagram is shown in figure 13.

    Figure 13: Weights for Sym2(ℂ3)⊗(ℂ3)*.
  2. 2.

    Since e12 is a weight vector of weight 2L1 and e1* is a weight vector of weight -L1, their tensor product e12⊗e1* is a weight vector of weight 2L1-L1=L1. Similarly, the other terms are also weight vectors of weight L1. Therefore their sum is also a weight vector of weight L1.

    We can hit it with E12 and E23, using Eij ek*=-δik ej*:

    $E_{12}(e_1^2\otimes e_1^*+e_1e_2\otimes e_2^*+e_1e_3\otimes e_3^*)=e_1^2\otimes(-e_2^*)+e_1^2\otimes e_2^*=0,$

    and

    $E_{23}(e_1^2\otimes e_1^*+e_1e_2\otimes e_2^*+e_1e_3\otimes e_3^*)=e_1e_2\otimes(-e_3^*)+e_1e_2\otimes e_3^*=0$

    so it is a highest weight vector.
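These computations can be automated; the sketch below (my addition) stores an element Σ pk⊗ek* of Sym2(ℂ3)⊗(ℂ3)* as a list of three quadratics [p1, p2, p3], with Eab acting as the derivation xa∂/∂xb on the symmetric factor and via Eab ek* = -δak eb* on the dual factor:

```python
import sympy as sp

x = sp.symbols('x1:4')

def act(a, b, p):
    # action of E_ab on [p1, p2, p3], representing p1@e1* + p2@e2* + p3@e3*
    q = [sp.expand(x[a - 1] * sp.diff(pk, x[b - 1])) for pk in p]  # x_a d/dx_b on Sym^2
    q[b - 1] = sp.expand(q[b - 1] - p[a - 1])                      # E_ab e_k* = -delta_ak e_b*
    return q

v = [x[0]**2, x[0]*x[1], x[0]*x[2]]  # e1^2@e1* + e1e2@e2* + e1e3@e3*
print(act(1, 2, v), act(2, 3, v))  # [0, 0, 0] [0, 0, 0]
```

Both outputs vanish, confirming that v is killed by E12 and E23.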

  3. 3.

    Taking v=e12⊗e3*, we find E21v=2 e1e2⊗e3*, whence

    $E_{32}E_{21}v=2e_1e_3\otimes e_3^*-2e_1e_2\otimes e_2^*,$

    while E32v=-e12⊗e2*, so

    $E_{21}E_{32}v=e_1^2\otimes e_1^*-2e_1e_2\otimes e_2^*.$

    These are linearly independent, as e12⊗e1* only appears in the second while e1e3⊗e3* only appears in the first.

  4. 4.

    Let V=Sym2(ℂ3)⊗(ℂ3)*. Note that V is completely reducible, and the possible irreducible constituents are V(2,1), V(0,2), and V(1,0), since these are the only highest weights among the dominant weights of V. Since V has a highest weight vector of weight 2L1-L3, and this is not a weight of V(0,2) or V(1,0), V must have a subrepresentation V1≅V(2,1). As e12⊗e3* is the unique highest weight vector of weight 2L1-L3 (up to scalar), it must lie in V1. Similarly, since (by part 2) V has a highest weight vector of weight L1, V must have a subrepresentation V2≅V(1,0)≅ℂ3. In fact, one can check that V2 has basis

    e12⊗e1*+e1e2⊗e2*+e1e3⊗e3*

    together with the two similar vectors obtained by permuting the roles of e1,e2,e3.

    So we have V1⊕V2⊆V and we want to show equality. Note that v=e12⊗e3*∈V1 is a highest weight vector in V1 of weight 2L1-L3, and E21v is a nonzero weight vector in V1 of weight -2L3. Moreover, part (3) implies that L1 occurs with multiplicity at least two in V1. We also have that L1 occurs in V2 with multiplicity one. All the dominant weights of V are now accounted for by V1⊕V2 with the correct multiplicities. Thus, by Weyl symmetry, the weights of V agree with the weights of V1⊕V2 and we have equality. We see that the multiplicities of L1, L2, L3 are exactly two in V1 and all other weights in V1 occur with multiplicity one, and we have

    V=V1⊕V2≅V(2,1)⊕ℂ3.

    The weight diagram of V1V(2,1) is obtained from that of V (see part (1)) by removing one circle around each of L1, L2, and L3.
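As a final dimension check (my addition), one can use the standard dimension formula dim V(a,b)=(a+1)(b+1)(a+b+2)/2 for 𝔰𝔩3 irreducibles, a fact not derived in this solution:

```python
def dim(a, b):
    # Weyl dimension formula for the sl_3 irreducible V(a, b)
    return (a + 1) * (b + 1) * (a + b + 2) // 2

sym2_dim, dual_dim = 6, 3  # dim Sym^2(C^3) and dim (C^3)*
print(sym2_dim * dual_dim, dim(2, 1) + dim(1, 0))  # 18 18
```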

Solution to (46).

  1. 1.

    We have that

    $\begin{aligned}H(e_1^ae_2^be_3^c)&=aL_1(H)e_1\cdot e_1^{a-1}e_2^be_3^c+e_1^a\cdot bL_2(H)e_2\cdot e_2^{b-1}e_3^c+e_1^ae_2^b\cdot cL_3(H)e_3\cdot e_3^{c-1}\\&=(aL_1+bL_2+cL_3)(H)\,e_1^ae_2^be_3^c.\end{aligned}$

    So e1ae2be3c is a weight vector, and we know that these are a basis for Symn(ℂ3).

    To see that they are distinct, suppose that a+b+c=n and a′+b′+c′=n for nonnegative integers a,a′, etc., and that

    aL1+bL2+cL3=a′L1+b′L2+c′L3.

    Then

    (a-a′)L1+(b-b′)L2+(c-c′)L3=0

    which implies that a-a′=b-b′=c-c′, since the only linear relation among the Li is that their sum is zero. But as a+b+c=a′+b′+c′, this common difference must be zero, so a=a′, b=b′ and c=c′ as required.

  2. 2.

    Suppose that v is a highest weight vector. Since all the weights from the first part are distinct, the weight spaces are one-dimensional, so (after scaling) v=e1ae2be3c for some a,b,c. We have

    $E_{12}v=b\,e_1^{a+1}e_2^{b-1}e_3^c=0$

    as v is a highest weight vector, so b=0. Similarly, applying E23 shows c=0. Thus v=e1n (up to scalar) is the unique highest weight vector.

  3. 3.

    By part 2 and the previous question, Symn(ℂ3) has a unique highest weight vector (up to scalar), and so is irreducible. Its highest weight is the weight of e1n, which is nL1.
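The uniqueness claim can be double-checked by linear algebra (my addition): realise Sym3(ℂ3) as cubic polynomials, write E12 and E23 as matrices in the monomial basis, and compute their common kernel.

```python
import sympy as sp

x = sp.symbols('x1:4')
basis = [x[0]**a * x[1]**b * x[2]**(3 - a - b)
         for a in range(4) for b in range(4 - a)]  # monomial basis of Sym^3(C^3)

def matrix_of(a, b):
    # E_ab acts on polynomials as the derivation x_a d/dx_b
    cols = []
    for m in basis:
        img = sp.Poly(sp.expand(x[a - 1] * sp.diff(m, x[b - 1])), *x)
        cols.append([img.coeff_monomial(mon) for mon in basis])
    return sp.Matrix(cols).T  # columns are the images of the basis vectors

joint = sp.Matrix.vstack(matrix_of(1, 2), matrix_of(2, 3))
print(len(joint.nullspace()))  # 1 -- the joint kernel is the line spanned by e1^3
```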