2 Character theory

2.6 Linear algebra constructions

We construct new representations from old in various ways, using linear algebra. We continue to work over k = ℂ throughout, though many of these constructions work over any field.

2.6.1 The dual representation

Recall that we earlier defined, for any G-representations V and W, a G-representation on the space Hom(V,W) of linear maps from V to W. It has character χ̄V·χW.

Definition 2.37.

If V is any vector space, then the dual space of V is

V* = Hom(V, ℂ).

We have dim V* = dim V. To see this, given a basis v1,…,vn of V, we have the dual basis v1*,…,vn* of V* given by

vi*(vj) = δij = {1 if i=j, 0 otherwise}.

There is a bilinear map V*×V → ℂ, (ϕ,v) ↦ ϕ(v). The choice of basis identifies V with ℂn (as column vectors), and then the dual basis realizes V* as 1×n matrices (row vectors), with the above pairing being the usual matrix product.

If V has a G-representation ρ, then we take ℂ to have the trivial representation and get an action ρ* of G on V* defined by (ρ*(g)ϕ)(v) = ϕ(ρ(g)⁻¹v). From the formula for the character of Hom(V,W), we see

χV*=χ¯V.

If the matrix of ρ(g) with respect to some basis is A, then the matrix of ρ*(g) with respect to the dual basis is (Aᵀ)⁻¹, the inverse of the transpose of A.
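As a sanity check, one can verify numerically that g ↦ (Aᵀ)⁻¹ is again a homomorphism. Here is a minimal Python sketch; the rotation representation of the cyclic group C4 is an illustrative choice, not taken from the text.

```python
# Sketch: check that the dual representation g -> (A^T)^(-1) is again a
# homomorphism, using the 2-dimensional representation of C4 generated by
# a rotation through 90 degrees (an illustrative choice).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inverse2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def dual(A):
    # matrix of the dual representation: inverse of the transpose
    return inverse2(transpose(A))

g = [[0, -1], [1, 0]]        # rotation by 90 degrees, generates C4
h = mat_mul(g, g)            # rotation by 180 degrees

# dual(gh) == dual(g) dual(h): the dual is a genuine representation
lhs = dual(mat_mul(g, h))
rhs = mat_mul(dual(g), dual(h))
print(lhs == rhs)            # True
```

The identity ((AB)ᵀ)⁻¹ = (Aᵀ)⁻¹(Bᵀ)⁻¹ is exactly what makes the dual a homomorphism rather than an anti-homomorphism.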

2.6.2 Tensor products

Let V, W be two vector spaces. Then the tensor product

V⊗W

is the ℂ-vector space generated by the symbols v⊗w for v∈V and w∈W, with the “bilinear” relations

λ(v⊗w) = (λv)⊗w = v⊗(λw);
(v+v′)⊗w = v⊗w + v′⊗w;
v⊗(w+w′) = v⊗w + v⊗w′.
Remark 2.38.

Rigorously, we are taking the (infinite dimensional) vector space with a basis element v⊗w for every v∈V and w∈W and then forming its quotient by the (infinite dimensional) subspace generated by vectors of the form (v+v′)⊗w − v⊗w − v′⊗w and by similar expressions corresponding to the other relations; this quotient is then finite dimensional, as the proposition below shows.

Proposition 2.39.

Let e1,…,en be a basis of V and f1,…,fm be a basis of W. Then

{ei⊗fj : i=1,…,n, j=1,…,m}

is a basis of V⊗W.

Proof.

Omitted.∎

In particular, the dimension of the tensor product is the product of the dimensions of the vector spaces:

dim(V⊗W) = dim V · dim W.

Contrast this with the direct sum, whose dimension is the sum of the dimensions of the vector spaces.

Exercise 2.40.

It is not true that every vector in V⊗W is of the form v⊗w. For example, if V = W is two-dimensional with basis e, f then e⊗e + f⊗f ∈ V⊗V cannot be written in this form.
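One concrete way to see this: identify a general element Σ cij ei⊗ej of V⊗V with its 2×2 coefficient matrix (cij). A simple tensor v⊗w corresponds to the rank-one matrix (vi wj), whose determinant vanishes, while e⊗e + f⊗f corresponds to the identity matrix. A quick Python sketch (the helper names are our own):

```python
# Sketch: identify sum(c_ij * ei (x) ej) in V (x) V with its 2x2 coefficient
# matrix (c_ij).  A simple tensor v (x) w gives the matrix (v_i * w_j), which
# always has determinant zero; e(x)e + f(x)f gives the identity matrix, whose
# determinant is 1, so it cannot be a simple tensor.
import itertools

def coeff_matrix(v, w):
    # coefficient matrix of the simple tensor v (x) w
    return [[v[i] * w[j] for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# every simple tensor (sampled over small integer coordinates) is singular
for v in itertools.product([-1, 0, 1, 2], repeat=2):
    for w in itertools.product([-1, 0, 1, 2], repeat=2):
        assert det2(coeff_matrix(v, w)) == 0

identity = [[1, 0], [0, 1]]   # coefficients of e(x)e + f(x)f
print(det2(identity))          # 1, nonzero: not of the form v (x) w
```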

The next remark is non-examinable.

Remark 2.41.

Tensor products can be difficult to get used to. Perhaps the most important principle for understanding them is the following:

A linear map V⊗W → U is the same as a bilinear map V×W → U.

The dictionary is as follows: given a linear map ϕ: V⊗W → U, we define a bilinear map V×W → U by sending (v,w) to ϕ(v⊗w). The bilinearity is then a consequence of the relations that hold in the tensor product. Conversely, given a bilinear map ψ: V×W → U, define a linear map V⊗W → U by sending v⊗w to ψ(v,w). We have to check that this is well-defined, i.e. that the bilinear relations are respected, and this is equivalent to the bilinearity of ψ.

This all seems very abstract, but it is useful: defining bilinear maps is ‘easy’! In fact, tensor products ‘in real life’ often arise in situations where you have a number that depends bilinearly on a choice of two vectors (in V, say); this dependency can then be expressed as a linear map V⊗V → ℂ.

All that said, for this course it will not be important to have such a deep theoretical understanding of tensor products as long as you are able to do calculations with them and are willing to take Proposition 2.39 on trust.

Naturally, we can consider more factors V1⊗V2⊗⋯⊗Vr, with linearity in each slot.

Now, if V and W are both representations of G then V⊗W becomes a representation via

g(v⊗w) = gv⊗gw.

We also write ρV⊗ρW for this representation. If we have bases e1,…,en of V and f1,…,fm of W, with respect to which the matrices of g acting on V and W are A and B, and we order the resulting basis of V⊗W as

e1⊗f1, e1⊗f2, …, e1⊗fm, e2⊗f1, …,

then the matrix of g on V⊗W is A⊗B, the block matrix

( a11B  a12B  ⋯
  a21B  a22B  ⋯
  ⋮     ⋮    ⋱ ).
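In coordinates, A⊗B is the Kronecker product of matrices. The following Python sketch (helper names are ours, not from the text) builds it for small matrices and checks that dimensions multiply, and that traces are multiplicative as Proposition 2.42 below asserts:

```python
# Sketch: the Kronecker product A (x) B, matching the block-matrix
# description above.

def kron(A, B):
    n, m = len(A), len(B)
    # rows indexed by pairs (i, k), columns by pairs (j, l):
    # entry A[i][j] * B[k][l], giving blocks a_ij * B
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
C = kron(A, B)

print(len(C))                            # 4: dimensions multiply, 2 * 2
print(C[0])                              # [0, 5, 0, 10]: first row of (1*B | 2*B)
print(trace(C) == trace(A) * trace(B))   # True: the character is multiplicative
```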
Proposition 2.42.

If ρV and ρW have characters χV and χW, then

χV⊗W = χV·χW.
Proof.

Let g∈G and choose bases e1,…,en and f1,…,fm such that gei = λiei and gfj = μjfj. Then

g(ei⊗fj) = λiμj ei⊗fj,

so with respect to the basis {ei⊗fj} of V⊗W, g acts diagonally with entries λiμj. So

χV⊗W(g) = Σi,j λiμj = (Σi λi)(Σj μj) = χV(g)χW(g). ∎

The tensor product generalises the ‘twisting’ construction earlier. If V is any vector space then V⊗ℂ is isomorphic to V via the map v⊗λ ↦ λv. If (χ, ℂ) is a 1-dimensional representation and (ρ, V) is any representation of G, then ρ⊗χ is a representation acting on V⊗ℂ. We have

(ρ⊗χ)(g)(v⊗λ) = (ρ(g)v)⊗(χ(g)λ) ↦ χ(g)ρ(g)(λv)

via the above isomorphism, so that

ρ⊗χ ≅ χρ.

Note that if ρ is irreducible, so is χρ. Furthermore, χρ might or might not be isomorphic to ρ.

Lemma 2.43.

Let V and W be two finite-dimensional representations of a group G. Then

V*⊗W ≅ Hom(V,W)

as G-modules.

Proof.

The best way to prove this is to show that the map ϕ⊗w ↦ T, where T(v) = ϕ(v)w, is a G-isomorphism. This is straightforward but a bit technical.

Another proof, which works in our situation, is simply to observe that both sides have character χ̄V·χW. ∎
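In coordinates, the lemma says: if g acts on V by A and on W by B, then g acts on Hom(V,W) by T ↦ B T A⁻¹, and the trace of this map on the nm-dimensional space of matrices is tr(B)·tr(A⁻¹), the character of V*⊗W. A small numerical sketch (the matrix values are illustrative):

```python
# Sketch: g acts on Hom(V, W) by T -> B T A^(-1).  Its trace as a linear map
# on the 4-dimensional space of 2x2 matrices equals tr(B) * tr(A^(-1)),
# matching the character of V* (x) W.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [0, 1]]      # matrix of g on V (chosen to have an exact inverse)
B = [[2, 1], [1, 1]]      # matrix of g on W
Ainv = inverse2(A)

def basis_matrix(p, q):
    # the elementary matrix E_pq, a basis vector of Hom(V, W)
    return [[1 if (i, j) == (p, q) else 0 for j in range(2)] for i in range(2)]

# trace of T -> B T A^(-1): sum the (p, q) coefficient of the image of E_pq
tr = sum(matmul(matmul(B, basis_matrix(p, q)), Ainv)[p][q]
         for p in range(2) for q in range(2))

trA_inv = Ainv[0][0] + Ainv[1][1]
trB = B[0][0] + B[1][1]
print(tr == trB * trA_inv)     # True
```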

2.6.3 Symmetric and alternating powers

The symmetric square Sym²(V) of V is the vector space spanned by symbols vv′ subject to the bilinear relations above and, additionally,

vv′ = v′v

for all v, v′ ∈ V.

Remark 2.44.

Formally, Sym²(V) is the quotient of V⊗V by the subspace spanned by all elements of the form v⊗v′ − v′⊗v, and then vv′ is the image of v⊗v′ in Sym²V.

Proposition 2.45.

Given a basis e1,…,en of V, the ei ej with i ≤ j are a basis of Sym²(V). ∎

Hence

dim Sym²(V) = n(n+1)/2.

The alternating square Λ²(V) of V is spanned by elements of the form v∧v′ subject to the bilinear relations above and, additionally,

v∧v′ = −v′∧v

for all v, v′ ∈ V.

Remark 2.46.

Formally, it is the quotient of V⊗V by the subspace spanned by all elements of the form

v⊗v′ + v′⊗v,

and then v∧v′ (= −v′∧v) is the image of v⊗v′ in Λ²(V).

If V is a representation of G, we define an action of G on each of these spaces just as for the tensor product.

We can define linear maps Sym²V → V⊗V and Λ²(V) → V⊗V sending vv′ ↦ v⊗v′ + v′⊗v and v∧v′ ↦ v⊗v′ − v′⊗v. Any v⊗w ∈ V⊗V can be written

v⊗w = ½((v⊗w + w⊗v) + (v⊗w − w⊗v))

and this shows that

V⊗V ≅ Sym²(V) ⊕ Λ²(V).

In fact this decomposition holds as G-representations.

Remark 2.47.

The space V⊗V has an involution (a map whose square is the identity)

σ: v⊗w ↦ w⊗v.

As σ² = I, its eigenvalues are ±1. The decomposition above is the eigenspace decomposition for σ: Sym²V is the (+1)-eigenspace, Λ²(V) the (−1)-eigenspace.

Proposition 2.48.

If (ρ,V) has character χ, then

χSym²V(g) = ½(χ(g)² + χ(g²))

and

χΛ²V(g) = ½(χ(g)² − χ(g²)).
Proof.

If ρ(g) has eigenvalues λ1,…,λn, then diagonalise it as usual to get an eigenvector basis v1,…,vn. Using the basis {vi∧vj : i < j} of Λ²(V) you find that

χΛ²V(g) = Σi<j λiλj.

This is

½((Σi λi)² − Σi λi²)

as required.

The proof for the symmetric square is similar; alternatively, use the decomposition of V⊗V. ∎
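A quick numerical check of Proposition 2.48: pick eigenvalues for ρ(g) that are roots of unity (as they must be for an element of finite order) and compare both sides. The specific eigenvalues below are an arbitrary illustrative choice:

```python
# Sketch: check the character formulas of Proposition 2.48 on a diagonalised
# element whose eigenvalues are 5th roots of unity (illustrative choice).
import cmath

n = 3
lam = [cmath.exp(2j * cmath.pi * k / 5) for k in range(n)]  # eigenvalues of rho(g)

chi_g = sum(lam)                      # chi(g)   = sum of eigenvalues
chi_g2 = sum(l * l for l in lam)      # chi(g^2) = sum of squared eigenvalues

# characters computed directly from the bases {v_i v_j : i <= j} and
# {v_i ^ v_j : i < j}
sym2 = sum(lam[i] * lam[j] for i in range(n) for j in range(i, n))
alt2 = sum(lam[i] * lam[j] for i in range(n) for j in range(i + 1, n))

print(abs(sym2 - (chi_g**2 + chi_g2) / 2) < 1e-12)   # True
print(abs(alt2 - (chi_g**2 - chi_g2) / 2) < 1e-12)   # True
```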

Remark 2.49.

One can also define spaces Symᵏ(V) and Λᵏ(V) for any k, the latter vanishing if k > dim V. The former is spanned by expressions v1v2⋯vk where ‘the order doesn’t matter’, while the latter is spanned by expressions v1∧v2∧⋯∧vk where ‘switching two vectors introduces a minus sign’.

A particular special case is k = dim(V) = n. In this case, Λⁿ(V) is exactly one dimensional: it is spanned by e1∧⋯∧en for e1,…,en any basis of V, showing that the dimension is at most one, and there is an injective map to V⊗⋯⊗V (n factors) given by

v1∧⋯∧vn ↦ Σσ∈Sn ε(σ) vσ(1)⊗⋯⊗vσ(n),

which shows that the dimension is at least one.

If T: V → V is any linear map, then we get a linear map Λᵏ(T): Λᵏ(V) → Λᵏ(V) by setting Λᵏ(T)(v1∧⋯∧vk) = T(v1)∧⋯∧T(vk). Taking k = n, since Λⁿ(T) is a map from a one-dimensional vector space to itself, it is just multiplication by some scalar. This scalar is exactly the determinant of T!

2.6.4 Matrices in dimension 2

Suppose that (ρ,V) is a representation of G and that dim V = 2, with e1, e2 being a basis of V. Let g∈G, and let

ρ(g) = ( a  b
         c  d )

be the matrix of ρ(g) in this basis. We compute the matrices of Λ²ρ(g) and Sym²ρ(g).

The space Λ²V is one-dimensional, with basis vector e1∧e2. Then

g(e1∧e2) = (ge1)∧(ge2)
= (ae1+ce2)∧(be1+de2)
= (ad−bc) e1∧e2
= det(ρ(g)) e1∧e2

using that e1∧e1 = e2∧e2 = 0 and e1∧e2 = −e2∧e1. We see that

Λ²ρ ≅ det∘ρ.

The space Sym²V is three-dimensional with basis e1², e1e2, e2², and

g(e1²) = (ae1+ce2)² = a²e1² + 2ac e1e2 + c²e2²
g(e1e2) = (ae1+ce2)(be1+de2) = ab e1² + (ad+bc) e1e2 + cd e2²
g(e2²) = (be1+de2)² = b²e1² + 2bd e1e2 + d²e2²

whence the matrix of Sym²ρ(g) (in this basis) is

( a²    ab     b²
  2ac   ad+bc  2bd
  c²    cd     d² ).
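As a check, the trace of this 3×3 matrix is a² + (ad+bc) + d², which agrees with χSym²V(g) = ½(χ(g)² + χ(g²)) from Proposition 2.48. A short Python sketch (the integer entries are chosen arbitrarily):

```python
# Sketch: build the matrix of Sym^2 rho(g) from the 2x2 matrix (a b; c d)
# as computed above, and check its trace against Proposition 2.48:
# tr Sym^2 rho(g) = (chi(g)^2 + chi(g^2)) / 2.

a, b, c, d = 1, 2, 3, 4               # illustrative entries of rho(g)

sym2 = [[a * a,     a * b,         b * b],
        [2 * a * c, a * d + b * c, 2 * b * d],
        [c * c,     c * d,         d * d]]

chi_g = a + d                         # chi(g)   = tr rho(g)
chi_g2 = a * a + 2 * b * c + d * d    # chi(g^2) = tr rho(g)^2

print(sum(sym2[i][i] for i in range(3)) == (chi_g**2 + chi_g2) // 2)  # True
```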