Quantum Mechanics

Overview of linear vector spaces

Here we generalize the familiar notion of a vector as an arrow (with magnitude and direction) into a more abstract definition that is mathematically far more powerful.

Def. A linear vector space $\mathbf{V}$ is a collection of objects $|1\rangle, |2\rangle, \ldots, |V\rangle, |W\rangle, \ldots$, called vectors, for which there exists

  • a definite rule for forming the vector sum $|V\rangle + |W\rangle$, and

  • a definite rule for multiplication by scalars, $a|V\rangle$ (which we may equivalently write as $|V\rangle a$),

with the following features:

  • closure: $|V\rangle + |W\rangle \in \mathbf{V}$

  • scalar multiplication distributive in vectors: $a(|V\rangle + |W\rangle) = a|V\rangle + a|W\rangle$

  • scalar multiplication distributive in scalars: $(a+b)|V\rangle = a|V\rangle + b|V\rangle$

  • scalar multiplication associative: $a(b|V\rangle) = (ab)|V\rangle$

  • addition is commutative: $|V\rangle + |W\rangle = |W\rangle + |V\rangle$

  • addition is associative: $|V\rangle + (|W\rangle + |Z\rangle) = (|V\rangle + |W\rangle) + |Z\rangle$

  • a null vector $|0\rangle$ exists, s.t. $\forall\, |V\rangle$, $|V\rangle + |0\rangle = |V\rangle$

  • $\forall\, |V\rangle$ there exists an inverse under addition, $|-V\rangle$, s.t. $|V\rangle + |-V\rangle = |0\rangle$

The numbers $a, b, \ldots$ are elements of the field over which the vector space is defined. If the field consists of real [complex] numbers, $\mathbf{V}$ is a real [complex] vector space.

Exercise: Prove the following properties:

  1. $|0\rangle$ is unique.

  2. $0\,|V\rangle = |0\rangle$

  3. $|-V\rangle = -|V\rangle$

  4. $|-V\rangle$ is the unique additive inverse of $|V\rangle$.

Example: the following are vector spaces:

  • arrows in $\mathbb{R}^n$, or $n$-tuples $(a, b, \ldots)$, with addition $(a, b, \ldots) + (c, d, \ldots) = (a+c,\, b+d,\, \ldots)$ and scalar multiplication $\alpha(a, b, \ldots) = (\alpha a,\, \alpha b,\, \ldots)$

  • $n \times n$ matrices, with addition and scalar multiplication acting in the usual way on each component

  • functions $f(x)$ defined on the interval $0 \le x \le L$
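
As a concrete (if informal) check, the sketch below uses Python/NumPy to verify a few of the axioms for 3-tuples and for $2\times 2$ matrices; the particular vectors and scalars are of our own choosing.

```python
import numpy as np

# 3-tuples as vectors: addition and scalar multiplication are componentwise.
V = np.array([1.0, 2.0, 3.0])
W = np.array([-4.0, 0.5, 2.0])
a, b = 2.0, -1.5

print(np.allclose(a * (V + W), a * V + a * W))   # distributive in vectors
print(np.allclose((a + b) * V, a * V + b * V))   # distributive in scalars
print(np.allclose(V + np.zeros(3), V))           # null vector
print(np.allclose(V + (-V), np.zeros(3)))        # additive inverse

# 2x2 matrices form a vector space in exactly the same way.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
N = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.allclose(a * (M + N), a * M + a * N))
```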

Def. A set of vectors $|1\rangle, |2\rangle, \ldots, |n\rangle$ is linearly independent iff the only solution to

$$\sum_{j=1}^{n} a_j\, |j\rangle = |0\rangle \qquad (1.1)$$

is the trivial one, with all $a_j = 0$. (Note: one often abbreviates the notation, writing $0$ for the null vector $|0\rangle$.)
The vector space has dimension $n$ if it can accommodate a maximum of $n$ linearly independent vectors.
A set of $n$ linearly independent vectors in an $n$-dimensional vector space is called a basis.

Example: In the previous examples of vector spaces, the $n$-tuples form an $n$-dimensional vector space, the $n \times n$ matrices form an $n^2$-dimensional vector space, and the functions form an $\infty$-dimensional vector space.
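
For $n$-tuples, linear independence can be tested numerically by stacking the vectors as columns of a matrix and checking its rank; this rank-based test is a standard trick rather than something introduced above, and the sample vectors are ours.

```python
import numpy as np

# Three vectors in a 3-dimensional space; the third is the sum of the first two,
# so the set is linearly dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))        # 2 < 3: a nontrivial combination gives zero

# Replacing v3 by an independent vector restores a full-rank set, i.e. a basis.
B = np.column_stack([v1, v2, np.array([0.0, 0.0, 1.0])])
print(np.linalg.matrix_rank(B) == 3)   # True: these three vectors form a basis of R^3
```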

Thm: Any vector $|V\rangle$ in an $n$-dimensional vector space can be written as a linear combination of $n$ linearly independent vectors $|1\rangle, |2\rangle, \ldots, |n\rangle$:

$$|V\rangle = \sum_{j=1}^{n} v_j\, |j\rangle \qquad (1.2)$$

The coefficients $v_j$ are the components of the vector $|V\rangle$ in the basis $\{|j\rangle\}$.

Note:

  • For a given vector and a specified basis, the components are uniquely determined.

  • However, if we change the basis, the components of $|V\rangle$ will change.

  • Nevertheless, any vector equation, such as $|a\rangle + |b\rangle = |c\rangle$, is independent of the basis.

It follows that to add vectors we simply add their components, and to multiply vectors by scalars we correspondingly multiply their components (in any basis $\{|j\rangle\}$), i.e., for $|V\rangle = \sum_{j=1}^{n} v_j |j\rangle$ and $|W\rangle = \sum_{j=1}^{n} w_j |j\rangle$,

$$|V\rangle + |W\rangle = \sum_{j=1}^{n} (v_j + w_j)\, |j\rangle \qquad\text{and}\qquad a|V\rangle = \sum_{j=1}^{n} (a v_j)\, |j\rangle \qquad (1.3)$$
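
A small sketch of (1.2)–(1.3): for $n$-tuples, the components $v_j$ in a given (not necessarily orthonormal) basis can be found by solving a linear system, and adding vectors indeed adds their components. The basis and vectors below are arbitrary choices of ours.

```python
import numpy as np

# A (non-orthonormal) basis of R^3, stored as the columns of E.
E = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])

V = np.array([2.0, 3.0, -1.0])
W = np.array([0.0, 1.0, 4.0])

# Components in the basis {|j>}: solve E @ v = V, i.e. V = sum_j v_j |j>.
v = np.linalg.solve(E, V)
w = np.linalg.solve(E, W)

# Components of V+W are v+w, and components of a*V are a*v  (eq. 1.3).
print(np.allclose(np.linalg.solve(E, V + W), v + w))
print(np.allclose(np.linalg.solve(E, 2.5 * V), 2.5 * v))
```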

Inner product spaces:

Even though a general vector is not necessarily an arrow, we can still have a generalized notion of length and direction, by suitably generalizing the dot product formula

$$\vec{A} \cdot \vec{B} = |\vec{A}|\,|\vec{B}| \cos\theta \qquad (1.4)$$

Let us start with the length (or norm) of a vector, by noting that we can also express the dot product in terms of components: $\vec{A} \cdot \vec{B} = A_x B_x + A_y B_y + A_z B_z$.

We formulate a generalization of the dot product, called the inner product or scalar product, between two vectors $|V\rangle$ and $|W\rangle$, denoted by $\langle V|W\rangle$, by insisting that it obey the following axioms:

  • skew symmetry: $\langle V|W\rangle = \langle W|V\rangle^{*}$

  • positive semidefiniteness: $\langle V|V\rangle \ge 0$, and $\langle V|V\rangle = 0$ iff $|V\rangle = |0\rangle$

  • linearity in the ket: $\langle V|\left(a|W\rangle + b|Z\rangle\right) \equiv \langle V|aW + bZ\rangle = a\langle V|W\rangle + b\langle V|Z\rangle$

Exercise: Use the axioms to argue that

$$\langle aW + bZ|V\rangle = a^{*}\langle W|V\rangle + b^{*}\langle Z|V\rangle \qquad (1.5)$$

Def. A vector space with an inner product is called an inner product space.

Note: the first axiom guarantees that $\langle V|V\rangle$ is real, while the second restricts it further to be positive semidefinite.

Def.

  • We’ll define the length or norm of the vector by $|V| = \sqrt{\langle V|V\rangle}$. A normalized vector has unit norm, $\langle V|V\rangle = 1$.

  • We say two vectors $|V\rangle$ and $|W\rangle$ are orthogonal (or perpendicular) if their inner product vanishes, $\langle V|W\rangle = 0$.

  • An orthonormal (ON) basis is a set of basis vectors, all of unit norm, which are pairwise orthogonal,

    $$\langle j|k\rangle = \delta_{jk} \equiv \begin{cases} 1, & j = k \\ 0, & j \neq k \end{cases} \qquad (1.6)$$

Thm (Gram-Schmidt): Given any basis, we can form linear combinations of the basis vectors to obtain an orthonormal basis.
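
A minimal sketch of the Gram-Schmidt procedure for complex $n$-tuples, using the component form of the inner product (eq. (1.8) below, which is what NumPy's `np.vdot` computes); the function name and the starting basis are our own choices, and the code assumes the input vectors are linearly independent.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of complex n-tuples."""
    ortho = []
    for v in vectors:
        w = v.astype(complex)
        # Subtract the projection onto each orthonormal vector found so far.
        for e in ortho:
            w = w - np.vdot(e, w) * e      # np.vdot conjugates its first argument
        ortho.append(w / np.linalg.norm(w))
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0j])]

e = gram_schmidt(basis)
# Check orthonormality <j|k> = delta_jk.
print(np.allclose([[np.vdot(a, b) for b in e] for a in e], np.eye(3)))
```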

Exercise: Show that for an arbitrary basis $\{|j\rangle\}$, with the usual vector decompositions $|V\rangle = \sum_j v_j |j\rangle$ and $|W\rangle = \sum_j w_j |j\rangle$,

$$\langle V|W\rangle = \sum_j \sum_k v_j^{*}\, w_k\, \langle j|k\rangle, \qquad (1.7)$$

whereas if $\{|j\rangle\}$ is an orthonormal basis,

$$\langle V|W\rangle = \sum_j v_j^{*}\, w_j. \qquad (1.8)$$

Note that the latter automatically guarantees that the norm of a vector is a real non-negative number:

$$\langle V|V\rangle = \sum_j |v_j|^2 \ge 0 \qquad (1.9)$$

(hence the need for complex conjugation and skew symmetry in our axioms…)
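
For complex $n$-tuples, NumPy's `np.vdot` conjugates its first argument, so it computes exactly the combination (1.8); here is a quick numerical check of skew symmetry and of the real, non-negative norm (1.9), with vectors chosen arbitrarily.

```python
import numpy as np

V = np.array([1 + 2j, 3 - 1j, 0.5j])
W = np.array([2 - 1j, -1j, 4.0])

# <V|W> = sum_j v_j^* w_j   (np.vdot conjugates the first argument)
print(np.isclose(np.vdot(V, W), np.conj(np.vdot(W, V))))    # skew symmetry (first axiom)
print(np.isclose(np.vdot(V, V).imag, 0.0))                  # <V|V> is real
print(np.vdot(V, V).real >= 0)                              # ... and non-negative
print(np.isclose(np.vdot(V, V).real, np.sum(np.abs(V)**2))) # equals sum_j |v_j|^2  (1.9)
```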

Representation in terms of n-tuples:

Just as for arrows, we can represent a general vector $|V\rangle$ in an $n$-dimensional vector space as the $n$-tuple specified by the usual column vector,

$$|V\rangle = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \qquad (1.10)$$

so that in the basis decomposition (1.2), each basis vector $|j\rangle$ is simply the column vector with $1$ in the $j$th position and $0$ everywhere else.

When we’re taking an inner product, there is no way to get a number by matrix-multiplying two column vectors, but there is a way to get a number by matrix-multiplying a row vector with a column vector. In particular, we reproduce the inner product formula (in some ON basis) by associating

$$\langle V|W\rangle = (v_1^{*},\, v_2^{*},\, \ldots,\, v_n^{*}) \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} \qquad (1.11)$$
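
In this representation the bra is the conjugate-transpose row vector and (1.11) is an ordinary matrix product; a small sketch with components chosen arbitrarily:

```python
import numpy as np

w = np.array([[1.0 + 1j], [2.0], [-1j]])       # column vector representing |W>
v = np.array([[2.0], [1j], [3.0 - 1j]])        # column vector representing |V>

bra_v = v.conj().T                             # row vector representing <V|  (adjoint)
inner = (bra_v @ w).item()                     # (1.11): row times column gives a number

print(np.isclose(inner, np.vdot(v.ravel(), w.ravel())))   # agrees with sum_j v_j^* w_j
```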

Dual spaces and the Dirac notation:

We can therefore proceed backwards, and identify a new object $\langle V|$ with the row vector

$$\langle V| = (v_1^{*},\, v_2^{*},\, \ldots,\, v_n^{*}) \qquad (1.12)$$

(in the given ON basis $\{|j\rangle\}$), which is the adjoint, i.e. the transpose complex conjugate, of the column vector corresponding to $|V\rangle$.

In the Dirac notation,

  • a vector $|V\rangle$ is called a ket,

  • its associated adjoint $\langle V|$ is called a bra,

  • and the inner product $\langle V|W\rangle$ is called a bracket.

The bras and kets then form distinct (dual) vector spaces, with a ket for every bra and vice-versa. The inner products are really only defined between bras and kets, whereas we can add together two bras or two kets, but not a ket and a bra. (If this is confusing, just think of the allowed operations involving row-vectors and column-vectors.)

Exercise: Show that the inner product $\langle V|W\rangle$ is independent of the choice of basis.

Exercise: Show that for the expansion (1.2) in an orthonormal basis $\{|j\rangle\}$, we can write a given component $v_j$ as

$$v_j = \langle j|V\rangle \qquad (1.13)$$

Note that using the above result, we may write the expansion (1.2) as

$$|V\rangle = \sum_j |j\rangle \langle j|V\rangle. \qquad (1.14)$$
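
A numerical check of (1.13)–(1.14): take any orthonormal basis of $\mathbb{C}^3$ (here generated, purely for convenience, from a QR decomposition of a random matrix, a choice of ours), extract the components as inner products, and resum them to recover the vector.

```python
import numpy as np

# An orthonormal basis of C^3: the columns of Q from a QR decomposition.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)                    # columns of Q are orthonormal

V = np.array([1.0 + 2j, -1j, 0.5])

# Components v_j = <j|V>  (eq. 1.13), then resum V = sum_j |j><j|V>  (eq. 1.14).
components = np.array([np.vdot(Q[:, j], V) for j in range(3)])
V_rebuilt = sum(components[j] * Q[:, j] for j in range(3))

print(np.allclose(V_rebuilt, V))          # the expansion reproduces the original vector
```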

Adjoint operation:

The adjoint of $|V\rangle$ is $\langle V|$.
Since taking an adjoint entails taking a transpose conjugate, this then implies that the adjoint of $a|V\rangle = |aV\rangle$ is $\langle aV| = \langle V|\,a^{*} = a^{*}\langle V|$.

Exercise: Find the adjoint of the equation $a|V\rangle = b|W\rangle + c|Z\rangle$.

This further implies that the adjoint of

$$|V\rangle = \sum_j v_j\, |j\rangle = \sum_j |j\rangle \langle j|V\rangle \qquad\text{is}\qquad \langle V| = \sum_j \langle j|\, v_j^{*} = \sum_j \langle V|j\rangle \langle j|.$$
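
In the $n$-tuple representation, taking the adjoint is just conjugate transposition, so the rule that the adjoint of $a|V\rangle$ is $a^{*}\langle V|$ can be checked directly; the numbers below are arbitrary.

```python
import numpy as np

a = 2.0 - 3.0j
v = np.array([[1.0 + 1j], [2j], [-1.0]])   # column vector for |V>

lhs = (a * v).conj().T                     # adjoint of a|V>
rhs = np.conj(a) * v.conj().T              # a^* <V|

print(np.allclose(lhs, rhs))               # adjoint of a|V> equals a^* <V|
```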

Useful properties:

Two powerful theorems apply to any inner product space which obeys our axioms:

  • Schwarz inequality: $|\langle V|W\rangle| \le |V|\,|W|$

  • Triangle inequality: $|V + W| \le |V| + |W|$

Exercise: Prove these two inequalities by using the axioms.
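
The exercise asks for a proof from the axioms; purely as a sanity check (not a proof), both inequalities can be verified numerically for random complex $n$-tuples:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    V = rng.normal(size=4) + 1j * rng.normal(size=4)
    W = rng.normal(size=4) + 1j * rng.normal(size=4)
    # Schwarz inequality: |<V|W>| <= |V| |W|
    assert abs(np.vdot(V, W)) <= np.linalg.norm(V) * np.linalg.norm(W) + 1e-12
    # Triangle inequality: |V + W| <= |V| + |W|
    assert np.linalg.norm(V + W) <= np.linalg.norm(V) + np.linalg.norm(W) + 1e-12
print("both inequalities hold for all samples")
```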

Linear Operators:

An operator $\hat{O}$ is an instruction for transforming any given vector $|V\rangle$ into another vector $|V'\rangle$, represented by the equation

$$\hat{O}\,|V\rangle = |V'\rangle \qquad (1.15)$$

We say that the operator $\hat{O}$ has transformed the vector $|V\rangle$ into the vector $|V'\rangle$. (Operators can act on both bras and kets.) We will restrict ourselves to operators which do not take us out of the vector space.

Furthermore, we will only be interested in linear operators, which obey the following rules:

  • $\hat{O}\,a|V\rangle = a\,\hat{O}|V\rangle$ and $\langle V|\hat{O}\,a = \langle V|a\,\hat{O}$

  • $\hat{O}\left(a|V\rangle + b|W\rangle\right) = a\,\hat{O}|V\rangle + b\,\hat{O}|W\rangle$ and $\left(\langle V|a + \langle W|b\right)\hat{O} = a\,\langle V|\hat{O} + b\,\langle W|\hat{O}$

Just as we can represent vectors by $n$-tuples once we specify a particular basis of our $n$-dimensional vector space, we can represent operators by $n \times n$ matrices.
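
A minimal sketch of an operator as an $n \times n$ matrix acting on kets (columns) from the left and on bras (rows) from the right; the particular matrix and vectors are arbitrary examples of ours.

```python
import numpy as np

O = np.array([[0.0, 1.0], [1.0, 0.0]])   # a 2x2 matrix representing some operator O-hat
V = np.array([[1.0], [2.0]])             # ket |V> as a column vector
bra_V = V.conj().T                        # bra <V| as a row vector

V_prime = O @ V                           # O|V> = |V'>   (eq. 1.15)
print(V_prime.ravel())                    # -> [2. 1.]

# Linearity: O(a|V> + b|W>) = a O|V> + b O|W>
W = np.array([[0.0], [-1.0]])
a, b = 3.0, -2.0
print(np.allclose(O @ (a * V + b * W), a * (O @ V) + b * (O @ W)))

# Acting on a bra from the right: <V| O is again a row vector.
print((bra_V @ O).ravel())
```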

Note that the ‘outer product’ of two vectors, say $|V\rangle\langle W|$, is a linear operator. (In the matrix representation, it is obtained by matrix-multiplying the column vector representing $|V\rangle$ with the row vector representing $\langle W|$.)
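
In components, the outer product $|V\rangle\langle W|$ is the column for $|V\rangle$ times the conjugated row for $\langle W|$. A small sketch of our own (note that `np.outer` does not conjugate, so the conjugation of the second factor is supplied explicitly):

```python
import numpy as np

V = np.array([1.0 + 1j, 2.0])
W = np.array([0.5, -1j])
Z = np.array([3.0, 1.0 + 2j])

P = np.outer(V, np.conj(W))          # matrix representing the operator |V><W|

# Acting on a ket |Z> gives |V><W|Z>, i.e. |V> scaled by the number <W|Z>.
print(np.allclose(P @ Z, V * np.vdot(W, Z)))
```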