Quantum Mechanics
Overview of linear vector spaces
Here we generalize the familiar notion of a vector as an arrow (with magnitude and direction) into a more abstract definition which is mathematically far more powerful.
Def. A linear vector space $\mathbb{V}$ is a collection of objects $|1\rangle, |2\rangle, \ldots, |V\rangle, |W\rangle, \ldots$, called vectors, for which there exists
– a definite rule for forming the vector sum $|V\rangle + |W\rangle$, and
– a definite rule for multiplying by scalars $a$, forming $a|V\rangle$ (which we may equivalently write as $|V\rangle a$),
with the following features:
– closure: $|V\rangle + |W\rangle \in \mathbb{V}$
– scalar multiplication distributive in vectors: $a(|V\rangle + |W\rangle) = a|V\rangle + a|W\rangle$
– scalar multiplication distributive in scalars: $(a + b)|V\rangle = a|V\rangle + b|V\rangle$
– scalar multiplication associative: $a(b|V\rangle) = (ab)|V\rangle$
– addition is commutative: $|V\rangle + |W\rangle = |W\rangle + |V\rangle$
– addition is associative: $|V\rangle + (|W\rangle + |Z\rangle) = (|V\rangle + |W\rangle) + |Z\rangle$
– $\exists$ a null vector $|0\rangle$ s.t. $|V\rangle + |0\rangle = |V\rangle$,
– $\forall\,|V\rangle$ $\exists$ an inverse under addition, $|-V\rangle$, s.t. $|V\rangle + |-V\rangle = |0\rangle$.
The numbers $a$, $b$ are elements of the field $\mathbb{F}$ over which the vector space is defined. If the field consists of real [complex] numbers, $\mathbb{V}$ is a real [complex] vector space.
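As a quick numerical illustration (a sanity check, not a substitute for the axioms), complex 3-tuples with component-wise operations satisfy all of the above; a minimal numpy sketch, with arbitrarily chosen vectors and scalars:

```python
import numpy as np

# Spot-check a few vector-space axioms for C^3 (complex 3-tuples).
# These are numerical illustrations, not proofs.
rng = np.random.default_rng(0)
V = rng.normal(size=3) + 1j * rng.normal(size=3)
W = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2.0 - 1.0j, 0.5 + 3.0j

assert np.allclose(a * (V + W), a * V + a * W)   # distributive in vectors
assert np.allclose((a + b) * V, a * V + b * V)   # distributive in scalars
assert np.allclose(a * (b * V), (a * b) * V)     # associative
assert np.allclose(V + W, W + V)                 # addition commutative
assert np.allclose(V + np.zeros(3), V)           # null vector
assert np.allclose(V + (-1) * V, np.zeros(3))    # additive inverse
```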
Exercise: Prove the following properties:
1. $|0\rangle$ is unique
2. $0\,|V\rangle = |0\rangle$
3. $(-1)\,|V\rangle = |-V\rangle$
4. $|-V\rangle$ is the unique additive inverse of $|V\rangle$
Example: the following are vector spaces:
• arrows in $\mathbb{R}^2$, or $n$-tuples $(v_1, v_2, \ldots, v_n)$ with component-wise addition and scalar multiplication
• $2 \times 2$ matrices, with addition and scalar multiplication acting in the usual way on each component
• functions $f(x)$, defined on $0 \le x \le L$
Def. A set of vectors $|1\rangle, |2\rangle, \ldots, |n\rangle$ is linearly independent iff the only solution to

$$\sum_{i=1}^{n} a_i\,|i\rangle = |0\rangle \qquad (1.1)$$

is the trivial one, with all $a_i = 0$.
(Note: one often abbreviates the notation by writing the null vector $|0\rangle$ simply as $0$.)
The vector space has dimension $n$ if it can accommodate a maximum of $n$ linearly independent vectors.
A set of $n$ linearly independent vectors in an $n$-dimensional vector space is called a basis.
Example: In the previous examples of vector spaces, $n$-tuples form an $n$-dimensional vector space, $2 \times 2$ matrices form a $4$-dimensional vector space, and functions $f(x)$ form an $\infty$-dimensional vector space.
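For $n$-tuples, linear independence can be checked numerically by stacking the candidate vectors as columns of a matrix and comparing its rank to the number of vectors. A minimal sketch (the specific vectors are arbitrary examples):

```python
import numpy as np

def linearly_independent(vectors):
    """True iff the given n-tuples admit only the trivial solution of (1.1)."""
    A = np.column_stack(vectors)                 # one vector per column
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([np.array([1, 0, 0]),
                            np.array([0, 1, 0]),
                            np.array([0, 0, 1])]))   # True: independent
print(linearly_independent([np.array([1, 1, 0]),
                            np.array([2, 2, 0])]))   # False: second = 2 * first
```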
Thm: Any vector $|V\rangle$ in an $n$-dimensional vector space can be written as a linear combination of $n$ linearly independent vectors $|1\rangle$, $|2\rangle$, ..., $|n\rangle$:

$$|V\rangle = \sum_{i=1}^{n} v_i\,|i\rangle \qquad (1.2)$$

The coefficients $v_i$ are the components of the vector $|V\rangle$ in the basis $\{|i\rangle\}$.
Note:
– For a given vector $|V\rangle$ and specified basis $\{|i\rangle\}$ the components $v_i$ are uniquely determined.
– However, if we change the basis, the components of $|V\rangle$ will change.
– Nevertheless, any vector equation, such as $|V\rangle + |W\rangle = |Z\rangle$, is independent of the basis.
It follows that to add vectors we simply add their components, and to multiply vectors by scalars we correspondingly multiply their components (in any basis $\{|i\rangle\}$), i.e., for $|V\rangle = \sum_i v_i\,|i\rangle$ and $|W\rangle = \sum_i w_i\,|i\rangle$,

$$|V\rangle + |W\rangle = \sum_i (v_i + w_i)\,|i\rangle, \qquad a|V\rangle = \sum_i a\,v_i\,|i\rangle \qquad (1.3)$$
Inner product spaces:
Even though a general vector is not necessarily an arrow, we can still have a generalized notion of length and direction, by suitably generalizing the dot product formula

$$\vec{A} \cdot \vec{B} = |\vec{A}|\,|\vec{B}|\cos\theta \qquad (1.4)$$

Let us start with the length (or norm) of a vector, by noting that we can also express the dot product in terms of components: $\vec{A} \cdot \vec{B} = \sum_i A_i B_i$.
We formulate a generalization of the dot product, called the inner product or scalar product, between two vectors $|V\rangle$ and $|W\rangle$, denoted by $\langle V|W\rangle$, by insisting it obey the following axioms:
– skew symmetry: $\langle V|W\rangle = \langle W|V\rangle^*$
– positive semidefiniteness: $\langle V|V\rangle \geq 0$, and $\langle V|V\rangle = 0$ iff $|V\rangle = |0\rangle$
– linearity in ket: $\langle V|\big(a|W\rangle + b|Z\rangle\big) = a\,\langle V|W\rangle + b\,\langle V|Z\rangle$
Exercise: Use the axioms to argue that

$$\langle aV + bW\,|\,Z\rangle = a^*\langle V|Z\rangle + b^*\langle W|Z\rangle \qquad (1.5)$$
Def. A vector space with an inner product is called an inner product space.
Note: the first axiom guarantees that $\langle V|V\rangle$ is real, while the second restricts it further to be positive semidefinite.
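For complex $n$-tuples, the standard choice satisfying all three axioms is $\langle V|W\rangle = \sum_i v_i^* w_i$; in numpy this is np.vdot, which conjugates its first argument. A quick spot-check with arbitrarily chosen vectors:

```python
import numpy as np

V = np.array([1.0 + 2.0j, 3.0 - 1.0j])
W = np.array([0.5j, 2.0 + 1.0j])
Z = np.array([1.0, 1.0j])
a, b = 1.0 - 2.0j, 3.0 + 0.5j

# np.vdot conjugates its first argument: <V|W> = sum_i conj(v_i) * w_i
assert np.isclose(np.vdot(V, W), np.conj(np.vdot(W, V)))        # skew symmetry
assert np.vdot(V, V).real >= 0 and np.isclose(np.vdot(V, V).imag, 0.0)  # positivity
assert np.isclose(np.vdot(V, a * W + b * Z),
                  a * np.vdot(V, W) + b * np.vdot(V, Z))        # linearity in ket
```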
Def.
– We'll define the length or norm of the vector $|V\rangle$ by $|V| \equiv \sqrt{\langle V|V\rangle}$. A normalized vector has unit norm, so $\langle V|V\rangle = 1$.
– We say two vectors $|V\rangle$ and $|W\rangle$ are orthogonal (or perpendicular) if their inner product vanishes, $\langle V|W\rangle = 0$.
– An orthonormal (ON) basis is a set of basis vectors, all of unit norm, which are pairwise orthogonal,

$$\langle i|j\rangle = \delta_{ij} \qquad (1.6)$$
Thm (Gram-Schmidt): Given any basis, we can form linear combinations of the basis vectors to obtain an orthonormal basis.
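The proof is constructive. A minimal numerical sketch of the procedure for complex $n$-tuples (the function name gram_schmidt is ours, and the input vectors are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent complex n-tuples."""
    ortho = []
    for v in basis:
        w = v.astype(complex)
        for e in ortho:
            w = w - np.vdot(e, w) * e                   # subtract projection onto e
        ortho.append(w / np.sqrt(np.vdot(w, w).real))   # normalize
    return ortho

e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(np.vdot(e1, e1), np.vdot(e1, e2))   # ~1 and ~0: orthonormal
```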
Exercise: Show that for an arbitrary basis $\{|i\rangle\}$, with the usual vector decompositions $|V\rangle = \sum_i v_i\,|i\rangle$ and $|W\rangle = \sum_j w_j\,|j\rangle$,

$$\langle V|W\rangle = \sum_{i,j} v_i^*\,w_j\,\langle i|j\rangle \qquad (1.7)$$

whereas if $\{|i\rangle\}$ is an orthonormal basis,

$$\langle V|W\rangle = \sum_i v_i^*\,w_i \qquad (1.8)$$
Note that the latter automatically guarantees that the norm of a vector is a real non-negative number:

$$\langle V|V\rangle = \sum_i |v_i|^2 \geq 0 \qquad (1.9)$$

(hence the need for complex conjugation and skew symmetry in our axioms…)
Representation in terms of n-tuples:
Just as for arrows, we can represent a general vector $|V\rangle$ in an $n$-dimensional vector space as the $n$-tuple specified by a usual column vector,

$$|V\rangle \;\leftrightarrow\; \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \qquad (1.10)$$

so that in the basis decomposition (1.2), each basis vector $|i\rangle$ is simply the column vector with $1$ in the $i$-th position and $0$ everywhere else.
When we’re taking an inner product, there is no way to get a number out of two column vectors, but there is a way to get a number by matrix-multiplying a row vector with a column vector. In particular, we reproduce the inner product formula (in some ON basis) by associating

$$\langle V|W\rangle = (v_1^*, v_2^*, \ldots, v_n^*) \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} \qquad (1.11)$$
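In numpy this association is simply the conjugate transpose followed by matrix multiplication; a short sketch with arbitrary vectors:

```python
import numpy as np

V = np.array([[1.0 + 1.0j], [2.0 - 1.0j]])   # ket as a column vector
W = np.array([[0.5j], [1.0]])                # another ket

bra_V = V.conj().T                           # bra: conjugate-transposed row vector
print(bra_V @ W)                             # 1x1 matrix holding <V|W>
print(np.vdot(V, W))                         # same number via vdot
```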
Dual spaces and the Dirac notation:
We can therefore proceed backwards, and identify a new object $\langle V|$ with the row vector

$$\langle V| \;\leftrightarrow\; (v_1^*, v_2^*, \ldots, v_n^*) \qquad (1.12)$$

(in the given ON basis $\{|i\rangle\}$), which is the adjoint, i.e. transpose complex conjugate, of the column vector corresponding to $|V\rangle$.
In the Dirac notation,
– a vector $|V\rangle$ is called a ket,
– its associated adjoint $\langle V|$ is called a bra,
– and the inner product $\langle V|W\rangle$ is called a bracket.
The bras and kets then form distinct (dual) vector spaces, with a ket for every bra and vice-versa. The inner products are really only defined between bras and kets, whereas we can add together two bras or two kets, but not a ket and a bra. (If this is confusing, just think of the allowed operations involving row-vectors and column-vectors.)
Exercise: Show that the inner product is independent of the choice of basis.
Adjoint operation:
The adjoint of $a|V\rangle$ is $\langle V|\,a^* = a^*\,\langle V|$.
Since taking an adjoint entails taking a transpose conjugate, this then implies that the adjoint of $|V\rangle = \sum_i v_i\,|i\rangle$ is $\langle V| = \sum_i v_i^*\,\langle i|$.
Exercise: Find the adjoint of the equation $a|V\rangle = b|W\rangle + c|Z\rangle$.
This further implies that the adjoint of a general linear combination $\sum_i a_i\,|V_i\rangle$ is $\sum_i a_i^*\,\langle V_i|$.
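In the column-vector representation the adjoint is just the conjugate transpose, so these rules can be verified directly; a small sketch (the helper adjoint is ours):

```python
import numpy as np

def adjoint(x):
    """Transpose complex conjugate of a column (ket) or row (bra) vector."""
    return x.conj().T

V = np.array([[1.0 + 2.0j], [3.0j]])
W = np.array([[2.0], [1.0 - 1.0j]])
a, b = 2.0 - 1.0j, 0.5j

# adjoint(a|V> + b|W>) = a* <V| + b* <W|
lhs = adjoint(a * V + b * W)
rhs = np.conj(a) * adjoint(V) + np.conj(b) * adjoint(W)
assert np.allclose(lhs, rhs)
```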
Useful properties:
Two powerful theorems apply to any inner product space which obeys our axioms:
• Schwarz inequality: $|\langle V|W\rangle| \leq |V|\,|W|$ \qquad (1.13)
• Triangle inequality: $|V + W| \leq |V| + |W|$ \qquad (1.14)
Exercise: Prove these two inequalities by using the axioms.
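A numerical spot-check of both inequalities on random complex vectors (an illustration only, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
norm = lambda x: np.sqrt(np.vdot(x, x).real)

for _ in range(1000):
    V = rng.normal(size=4) + 1j * rng.normal(size=4)
    W = rng.normal(size=4) + 1j * rng.normal(size=4)
    assert abs(np.vdot(V, W)) <= norm(V) * norm(W) + 1e-12   # Schwarz
    assert norm(V + W) <= norm(V) + norm(W) + 1e-12          # triangle
```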
Linear Operators:
An operator $\Omega$ is an instruction for transforming any given vector $|V\rangle$ into another vector $|V'\rangle$, represented by the equation

$$\Omega|V\rangle = |V'\rangle \qquad (1.15)$$

We say that the operator $\Omega$ has transformed the vector $|V\rangle$ into the vector $|V'\rangle$. (Operators can act on bras and kets.) We will restrict ourselves to operators which do not take us out of the vector space.
Furthermore, we will only be interested in linear operators, which obey the following rules:
– $\Omega\,a|V\rangle = a\,\Omega|V\rangle$ and $\Omega\,(a|V\rangle + b|W\rangle) = a\,\Omega|V\rangle + b\,\Omega|W\rangle$
– $\langle V|a\,\Omega = a\,\langle V|\Omega$ and $(\langle V|a + \langle W|b)\,\Omega = a\,\langle V|\Omega + b\,\langle W|\Omega$
Just as we can represent vectors by $n$-tuples once we specify a particular basis of our $n$-dimensional vector space, we can represent operators by $n \times n$ matrices.
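A brief numpy illustration of this correspondence: the operator becomes an $n \times n$ matrix and (1.15) becomes matrix-vector multiplication (the rotation matrix below is just an arbitrary example):

```python
import numpy as np

# Example operator on R^2: rotation by 90 degrees, as a 2x2 matrix.
Omega = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
V = np.array([[1.0], [0.0]])     # ket as a column vector

V_prime = Omega @ V              # (1.15): Omega|V> = |V'>
print(V_prime.ravel())           # [0., 1.]

# Linearity: Omega(a|V> + b|W>) = a Omega|V> + b Omega|W>
W = np.array([[2.0], [3.0]])
a, b = 1.5, -0.5
assert np.allclose(Omega @ (a * V + b * W), a * (Omega @ V) + b * (Omega @ W))
```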
Note that the ‘outer product’ of two vectors, say $|V\rangle\langle W|$, is a linear operator. (In the matrix representation, it is obtained by matrix-multiplying the column vector representing $|V\rangle$ with the row vector representing $\langle W|$.)
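A minimal numpy sketch of the outer product acting as an operator (arbitrary vectors; note the conjugation on the bra side):

```python
import numpy as np

V = np.array([1.0 + 1.0j, 2.0])
W = np.array([0.0, 1.0 - 1.0j])
Z = np.array([3.0j, 1.0])

P = np.outer(V, W.conj())        # |V><W| as an n x n matrix

# Acting on a ket: (|V><W|)|Z> = |V> <W|Z>
assert np.allclose(P @ Z, V * np.vdot(W, Z))
```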