Quantum Information
1. Introduction
2. Quantum mechanics essentials
3. Measurement and uncertainty
3.1. Observables
3.2. Density matrices
3.3. Pure versus mixed states
3.4. Problems
4. Qubits and the Bloch sphere
5. Bipartite systems
6. Entanglement applications
7. Information theory
8. Changelog
9. Bibliography
\(\newcommand{\Hint}[1]{{\flushleft{\bf Hint:}} #1}\)
\(\newcommand{\Tr}{\mathrm{Tr}}\)
\(\renewcommand{\d}{\mathrm{d}}\)
\(\renewcommand{\vec}[1]{\mathbf{#1}}\)
\(\newcommand{\cvec}[1]{\left( \begin{array}{c} #1 \end{array} \right)}\)
\(\newcommand{\ket}[1]{\left\vert #1 \right\rangle}\)
\(\newcommand{\bra}[1]{\left\langle #1 \right\vert}\)
\(\newcommand{\ipop}[3]{\left\langle #1 \left\vert #2 \vphantom{#1 #3} \right\vert #3 \right\rangle}\)
\(\newcommand{\ip}[2]{\left\langle #1 \left\vert #2 \vphantom{#1} \right\rangle \right.}\)
\(\newcommand{\ev}[1]{\left\langle #1 \right\rangle}\)
\(\newcommand{\com}[2]{\left[ \hat{#1}, \hat{#2} \right]}\)
\(\newcommand{\acom}[2]{\left\{ \hat{#1}, \hat{#2} \right\}}\)
\(\newcommand{\norm}[1]{\vert\vert \ket{#1} \vert \vert}\)
3. Measurement and uncertainty
3.1. Observables
In quantum mechanics an observable is a quantity that could be measured in an experiment; the term also refers to the self-adjoint operator associated with such a measurement. Specifically, there is a one-to-one correspondence between measurable quantities \(M\) and self-adjoint operators \(\hat{M}\). One example is energy and the Hamiltonian operator \(\hat{H}\).
Now, in quantum mechanics the possible values of a measurement of \(M\) are the eigenvalues of \(\hat{M}\) (ignoring experimental error). Typically, for a given state \(\ket{\psi}\), we cannot predict with certainty the result of a measurement; instead we can give probabilities for the different outcomes.
Observables correspond to Hermitian matrices. Their eigenvalues are the possible measurement outcomes.
Note that for a Hilbert space \(\mathcal{H}\) of finite dimension \(N\), the operator \(\hat{M}\) has a finite number of eigenvalues; in fact the number is precisely \(N\) (if we count the multiplicity of any degenerate eigenvalues), since this is equivalent to asking for the spectrum of an \(N \times N\) Hermitian matrix.
Dictionary Linear Algebra \(\leftrightarrow\) QM:
Def: The spectrum of an operator \(\hat{H}\) is the set
\begin{equation} \mbox{Spec}(\hat{H}) = \{\lambda \in \mathbb{C}\,\text{s.t.}\, \hat{H}-\lambda \hat{I} \,\text{is not invertible}\}. \end{equation}
For a finite-dimensional Hilbert space this is precisely the finite set of eigenvalues of \(\hat{H}\).
Using basic results from linear algebra, the spectrum of a self-adjoint operator \(\hat{M}\) is a set of real eigenvalues \(\lambda_n\), each with a corresponding eigenstate \(\ket{n}\). Eigenstates corresponding to different eigenvalues are automatically orthogonal. If there is degeneracy, then for each eigenspace of dimension greater than one we can always choose a basis of orthogonal eigenstates (apply the Gram-Schmidt procedure eigenspace by eigenspace). Of course, we can always normalise the eigenstates; we then have \(N\) orthonormal eigenstates giving us an orthonormal basis for \(\mathcal{H}\).
The spectral representation of an operator expresses it in terms of its eigenvectors and eigenvalues.
This also gives us the spectral representation of \(\hat{M}\) (corresponding to diagonalisation of the matrix \(M\))
\[ \hat{M} = \sum_n \lambda_n \ket{n}\bra{n} . \]
Note that for the identity operator, the only eigenvalue is \(1\) with degeneracy \(N\), so we can choose any orthonormal basis of \(\mathcal{H}\) and
\[ \hat{I} = \sum_n \ket{n}\bra{n} . \]
This is a very useful expression. We can often use it in calculations by “inserting the identity as a complete sum of states.”
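As a concrete check, here is a short numpy sketch showing that the eigenvectors of a self-adjoint operator reproduce both the spectral representation of \(\hat{M}\) and the resolution of the identity. The particular \(2\times 2\) Hermitian matrix is an illustrative choice, not one from the text.

```python
import numpy as np

# A hypothetical 2x2 Hermitian operator (illustrative choice only).
M = np.array([[1.0, 1j], [-1j, 1.0]])

# np.linalg.eigh returns real eigenvalues and orthonormal eigenvectors
# (as columns) for a Hermitian matrix.
eigvals, eigvecs = np.linalg.eigh(M)

# Spectral representation: M = sum_n lambda_n |n><n|
M_rebuilt = sum(lam * np.outer(v, v.conj())
                for lam, v in zip(eigvals, eigvecs.T))

# Resolution of the identity: I = sum_n |n><n|
identity = sum(np.outer(v, v.conj()) for v in eigvecs.T)

assert np.allclose(M_rebuilt, M)
assert np.allclose(identity, np.eye(2))
```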
Now, when a measurement of \(M\) is made on a state
\[\ket{\psi}=\sum_n c_n \ket{n}\,,\]
we will get the result \(\lambda_n\) with probability \(p_n = \left\vert \ip{n}{\psi} \right\vert^2 = \vert c_n\vert^2\), which is just the magnitude squared of the coefficient of \(\ket{n}\) if we write \(\ket{\psi}\) in the basis \(\{\ket{n}\}\). After the measurement, if the result is \(\lambda_n\), the state will then have the definite value \(\lambda_n\) for \(M\), so measuring \(M\) again will give the same result. Therefore the state is no longer \(\ket{\psi}\) but is \(\ket{n}\). Note that this “collapse of the wavefunction” is not a unitary process, and is not reversible.
One way to describe this measurement process is in terms of the set of projection operators \(\hat{P}_n = \ket{n}\bra{n}\) formed from the eigenstates of \(\hat{M}\). Then the probability of result \(\lambda_n\) is \(p_n = \ipop{\psi}{\hat{P}_n}{\psi}\) and the resulting state is \(\frac{1}{\sqrt{p_n}} {\hat{P}_n} \ket{\psi}\) which is the state \(\ket{n}\) up to an irrelevant overall phase.
Remember that a projector \(\hat{P}\) is a linear operator such that \(\hat{P}^\dagger = \hat{P}\) and \(\hat{P}^2 = \hat{P}\) and check that indeed \(\hat{P}_n = \ket{n}\bra{n}\) has all these properties.
Projection operators can be used to express the measurement process.
The above discussion of measurement assumes the spectrum of \(\hat{M}\) is non-degenerate. If we have degeneracy then we can generalise the definition of the projection operators. Consider an eigenvalue \(\lambda\). We define the projection operator to be the sum over the eigenstates with that eigenvalue, i.e.
\[ \hat{P}_{\lambda} = \sum_{n : \lambda_n = \lambda} \ket{n}\bra{n} . \]
Then we still have the result that the probability is \(p_{\lambda} = \ipop{\psi}{\hat{P}_{\lambda}}{\psi}\) and the resulting state is \(\frac{1}{\sqrt{p_{\lambda}}} {\hat{P}_{\lambda}} \ket{\psi}\).
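The projector description of measurement can be sketched numerically. The qubit state and basis below are hypothetical illustrative choices, not an example from the text.

```python
import numpy as np

# Projectors onto the basis states |0>, |1> of a qubit.
n0 = np.array([1.0, 0.0])
n1 = np.array([0.0, 1.0])
P0 = np.outer(n0, n0.conj())   # P_n = |n><n|
P1 = np.outer(n1, n1.conj())

# Defining projector properties: P^dagger = P and P^2 = P.
for P in (P0, P1):
    assert np.allclose(P.conj().T, P)
    assert np.allclose(P @ P, P)

# For a normalised state |psi>, p_n = <psi|P_n|psi> and the
# post-measurement state is P_n|psi> / sqrt(p_n).
psi = np.array([3.0, 4.0]) / 5.0
p0 = np.real(psi.conj() @ P0 @ psi)     # probability of outcome 0
post = P0 @ psi / np.sqrt(p0)           # collapsed state

assert np.isclose(p0, 9 / 25)
assert np.allclose(post, n0)            # state collapses to |0>
```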
An important point to note is that a state can only have definite values for two observables, say \(A\) and \(B\), if it is a simultaneous eigenstate of \(\hat{A}\) and \(\hat{B}\). This is not possible for two generic operators. However, if \(\com{A}{B} = 0\) then we can always find simultaneous eigenstates. In this case we say that the observables \(A\) and \(B\) are compatible. If the observables are not compatible then measuring \(A\), then \(B\), then \(A\) again will not necessarily give the same result for the two measurements of \(A\). That is because the state after the first measurement of \(A\) is not an eigenstate of \(\hat{B}\), and so a measurement of \(B\) will change the state (to some eigenstate of \(\hat{B}\).) This will not be an eigenstate of \(A\), so the result of the second measurement of \(A\) cannot be determined with certainty.
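A minimal numerical illustration of (in)compatibility, using two standard \(2\times 2\) Hermitian matrices (the Pauli matrices \(\sigma_x\) and \(\sigma_z\), chosen here for illustration):

```python
import numpy as np

# Two Hermitian 2x2 operators (the Pauli matrices sigma_x, sigma_z).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# [sx, sz] != 0, so the corresponding observables are NOT compatible:
# they have no common eigenbasis.
comm = sx @ sz - sz @ sx
assert not np.allclose(comm, 0)

# sz and sz^2 commute, so they ARE compatible and share an eigenbasis.
assert np.allclose(sz @ (sz @ sz) - (sz @ sz) @ sz, 0)
```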
Let \(\mathcal{H} = \mbox{span}\{\ket{-2},\ket{-1}, \ket{1},\ket{2}\}\) be a four-dimensional Hilbert space with orthonormal basis vectors given by the eigenvectors of a Hermitian operator \(\hat{A}\) as
\begin{align*} &\hat{A} \ket{-2}= -2 \ket{-2}\,,\qquad \hat{A} \ket{-1} = -1 \ket{-1}\,,\\ &\hat{A} \ket{1} = 1 \ket{1}\,,\qquad\qquad \hat{A} \ket{2} = 2 \ket{2}\,, \end{align*}
giving the spectral decomposition
\begin{align*}\hat{A} &= -2\ket{-2}\bra{-2}-1 \ket{-1}\bra{-1}+1 \ket{1}\bra{1}+2\ket{2}\bra{2}\\ &= ({\color{red}-2}) \hat{P}_{{\color{red}-2}} +({\color{orange}-1}) \hat{P}_{{\color{orange}-1}} + ({\color{blue}+1}) \hat{P}_{{\color{blue}+1}} +({\color{violet}+2}) \hat{P}_{{\color{violet}+2}}\\ & = \sum_{\lambda \in \mbox{Spec}(\hat{A})}{\color{red}\lambda} \hat{P}_{\color{red}\lambda} \,. \end{align*}
in terms of the projectors \(\hat{P}_{\lambda} = \ket{\lambda}\bra{\lambda}\).
If we prepare a state \(\ket{\psi} \in \mathcal{H}\) and measure the observable \(\hat{A}\) we can only find one of the values \(\{-2,-1,1,2\}\).
Suppose we prepare the state
\[\ket{\psi} = 2 \ket{-2} + (1+i) \ket{-1} +3i \ket{1}\]
We know that if we were to measure \(\hat{A}\) on \(\ket{\psi}\) we would never find the outcome \(+2\), since the coefficient of \(\ket{2}\) in the expansion of \(\ket{\psi}\) vanishes.
To compute the probabilities of measuring \(\{-2,-1,1,2\}\) we have to normalise the state, i.e. we need to impose \(\ip{\psi}{\psi} = 1\). We compute
\begin{multline} \ip{\psi}{\psi} = \Big[ 2\bra{-2} +(1-i) \bra{-1} +(-3i) \bra{1} \Big] \\ \times \Big[ 2 \ket{-2} + (1+i) \ket{-1} +3i \ket{1}\Big]= 4+2+9=15 \end{multline}
and use this result to define the normalised state
\[\ket{\tilde{\psi}} = \frac{\ket{\psi}}{\sqrt{15}}=\frac{2}{\sqrt{15}} \ket{-2} + \frac{(1+i)}{\sqrt{15}} \ket{-1} + \frac{3i}{\sqrt{15}} \ket{1}\,.\]
The probability of measuring \(\hat{A}\) and finding outcome \(-2\) is then the modulus squared of the coefficient in front of \(\ket{-2}\), i.e. \(p_{-2} = \frac{4}{15}\); similarly \(p_{-1} = \frac{2}{15},\,p_{+1}= \frac{9}{15}\) and of course \(p_{+2} =0\). The total probability is \(1\), as it should be since \(\ip{\tilde{\psi}}{\tilde{\psi}}=1\). Using the projector \(\hat{P}_{-2} = \ket{-2}\bra{-2}\) we have \(p_{-2} = \bra{\tilde{\psi}} \hat{P}_{-2}\ket{\tilde{\psi}}\).
If we prepare many copies of the same state \(\ket{\psi}\), measure \(\hat{A}\) and then average, we find the expectation value
\begin{align*} \langle A \rangle_\psi &= \frac{\bra{\psi} \hat{A} \ket{\psi}}{\ip{\psi}{\psi}} = \bra{\tilde{\psi}} \hat{A} \ket{\tilde{\psi}} \\ &= \frac{4}{15} \bra{-2} \hat{A} \ket{-2}+ \frac{2}{15} \bra{-1} \hat{A} \ket{-1}+ \frac{9}{15} \bra{1} \hat{A} \ket{1}\\ &= p_{{\color{red}-2}}({\color{red}-2}) + p_{{\color{orange}-1}}({\color{orange}-1})+ p_{{\color{blue}+1}}({\color{blue}+1})+ p_{{\color{violet}+2}}({\color{violet}+2}) = -\frac{1}{15}\,. \end{align*}
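The probabilities and the expectation value \(-\frac{1}{15}\) computed above can be verified with a few lines of numpy, working in the ordered basis \((\ket{-2},\ket{-1},\ket{1},\ket{2})\):

```python
import numpy as np

# The state 2|-2> + (1+i)|-1> + 3i|1> in the basis (|-2>, |-1>, |1>, |2>).
psi = np.array([2, 1 + 1j, 3j, 0])
norm2 = np.real(psi.conj() @ psi)        # <psi|psi> = 15
psi_t = psi / np.sqrt(norm2)             # normalised state

probs = np.abs(psi_t) ** 2               # (4/15, 2/15, 9/15, 0)
assert np.isclose(norm2, 15)
assert np.allclose(probs, [4/15, 2/15, 9/15, 0])
assert np.isclose(probs.sum(), 1)

# Expectation value <A> = sum_lambda p_lambda * lambda = -1/15.
eigenvalues = np.array([-2, -1, 1, 2])
assert np.isclose(probs @ eigenvalues, -1/15)
```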
3.2. Density matrices
The above sections give the standard Dirac notation description of QM. The states previously described are what we will now call pure states. This means that the states are definite, i.e. we assume that (at least in principle) we know what the state of the system is. Any uncertainties in predictions are due to the nature of QM.
Pure states are fully determined; mixed states arise when we do not know the state of a system. A sum of basis states is still a pure state.
However, we can also consider mixed states, which arise when we do not know with certainty the state of a system. Here we assume that we have some probabilistic knowledge, such as: the system is in state \(\ket{\psi}\) with probability \(p\), and in state \(\ket{\phi}\) with probability \(1-p\). This type of uncertainty is ‘classical uncertainty’ in the sense that it just describes our lack of knowledge about a system. Indeed, whether the state is pure or mixed may be a matter of perspective, since one person may have more knowledge about the system than another (we will see this later when we discuss the reduced density matrix for a bipartite system).
For a pure state \(\ket{\psi}\) we define the density operator or, as more commonly called, the density matrix to be
\[ \hat{\rho} = \ket{\psi}\bra{\psi} . \]
Note that when our Hilbert space is \(n\)-dimensional, if we think of ket vectors \(\ket{\psi}\) as \(n\)-component column vectors \(\textbf{z}\) and bra vectors \(\bra{\phi}\) as \(n\)-component row vectors \(\textbf{w}^\dagger\), then an operator of the form \(\ket{\psi}\bra{\phi}\) can be thought of as \(\textbf{z} \textbf{w}^\dagger\), i.e. an \(n\times 1\) matrix times a \(1\times n\) matrix, giving an \(n\times n\) matrix, while the inner product \( \textbf{w}^\dagger \textbf{z}\) is a \(1\times n\) matrix times an \(n\times 1\) matrix, resulting in a \(1\times 1\) matrix, i.e. a complex number.
For pure states there is a one-to-one mapping between the density matrix and the state, so we can work with one or the other. For example we have the following correspondence:
\[ \begin{array}{lcl} \hat{M}\ket{\psi} = \lambda\ket{\psi} & \longleftrightarrow & \hat{M}\hat{\rho} = \lambda\hat{\rho} \\ \ket{\psi} \rightarrow \hat{U}\ket{\psi} & \longleftrightarrow & \hat{\rho} \rightarrow \hat{U}\hat{\rho}\hat{U}^{\dagger} \end{array} \]
Inner products of states arise when multiplying operators or when taking traces. In particular, if we label the orthonormal basis states \(\ket{n}\) for some range of integers \(n\), we define the trace of \(\hat{A}\) to be:
\[ \Tr(\hat{A}) = \sum_n \ipop{n}{\hat{A}}{n} \,, \]
you can think of \(\ipop{m}{\hat{A}}{n}\) as the entry in the \(m\)-th row and \(n\)-th column of the matrix representing the operator \(\hat{A}\) in this basis, hence the trace just defined indeed corresponds to the sum of the diagonal entries.
The trace of the density matrix \(\hat{\rho}\) is equal to one, \(\Tr(\hat{\rho})=1\).
Note that
\begin{equation} \begin{aligned} \Tr(\hat{\rho}) &= \sum_n \ipop{n}{\hat{\rho}}{n} = \sum_n \ip{n}{\psi} \ip{\psi}{n}\\ & = \sum_n \ip{\psi}{n} \ip{n}{\psi} = \ipop{\psi}{\hat{I}}{\psi} = 1\,, \end{aligned} \end{equation}
where in the last two steps we used the spectral representation of the identity operator and the fact that \(\ket{\psi}\) is normalised. Similar manipulations show that in general \(\Tr (\ket{\phi}\bra{\psi}) = \ip{\psi}{\phi} \). Also note that for a pure state \(\Tr(\hat{\rho}^2) = 1\) since \(\hat{\rho}\) is a projector and we know that for projectors we have \(\hat{\rho}^2 = \hat{\rho}\).
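These trace identities are easy to check numerically. The pure qubit state below is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

# Density matrix of a pure qubit state (arbitrary illustrative choice).
psi = np.array([1.0, 1j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # rho = |psi><psi|

assert np.isclose(np.trace(rho), 1)                 # Tr(rho) = 1
assert np.allclose(rho @ rho, rho)                  # rho is a projector
assert np.isclose(np.trace(rho @ rho).real, 1)      # Tr(rho^2) = 1 (pure)

# Tr(|phi><psi|) = <psi|phi>
phi = np.array([1.0, 0.0])
assert np.isclose(np.trace(np.outer(phi, psi.conj())), psi.conj() @ phi)
```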
Mixed states describe situations where there is uncertainty about the state of the system due to lack of knowledge, i.e. the usual ‘classical’ uncertainty we have when we do not know everything about the system.
The density operator (or density matrix) can be used to describe the state of a system, for both pure and mixed states.
We can describe mixed states in terms of an ensemble of pure states, each with a given probability of being the state of the system, e.g. \(\{ ( p_i, \ket{i})\}\) with the \(\ket{i}\) not necessarily orthogonal but chosen with unit norm (if not, just normalise them one by one). The density matrix is then the linear combination of the density matrices for each of the pure states, weighted by the probabilities, i.e.
\[ \hat{\rho} = \sum_i p_i \ket{i}\bra{i} . \]
Note that there is no requirement for the states \(\ket{i}\) to be orthogonal (although we assume they are normalised), and such a mixed-state density matrix does not correspond to a unique ensemble: in general there will be more than one ensemble \(\{ ( p_i, \ket{i})\}\) giving rise to the same density matrix for the same mixed state.
Of course, the probabilities \(p_i\) cannot be negative and must sum to \(1\). We can also generalise the definition of the mixed-state density matrix to allow ensembles including mixed states, i.e. we can have \(\hat{\rho} = \sum_i p_i \hat{\rho}_i\) where the \(\hat{\rho}_i\) are mixed and/or pure state density matrices.
Such ensembles can only give a pure state in the trivial case where there is only one pure state, which must then have probability \(1\). However, given a density matrix it is often not immediately obvious whether it describes a pure or a mixed state. A test for this (see later discussion) is to calculate \(\Tr(\hat{\rho}^2)\) which will be \(1\) for a pure state and less than 1 for a mixed state.
By construction, density matrices are Hermitian, positive semi-definite operators with unit trace.
If we measure an observable \(M\), the results for pure states in Dirac notation generalise to all pure or mixed density matrices: the probability of outcome \(\lambda_n\) is \(p_n = \Tr(\hat{P}_n \hat{\rho})\), the state after the measurement is \(\frac{1}{p_n}\hat{P}_n \hat{\rho} \hat{P}_n\), and the expectation value is \(\ev{M} = \Tr(\hat{\rho}\hat{M})\).
Given a two dimensional Hilbert space \(\mathcal{H} = \mbox{span}\{\ket{0},\ket{1}\}\) decide whether the matrix \(\rho = \frac{1}{9}\left(\begin{matrix} 5 & 2-4i \\ 2+4i & 4\end{matrix}\right)\) is
  • a density matrix for a pure state,
  • a density matrix for a mixed state,
  • not a density matrix,
when we represent the basis vectors using the standard basis.
First of all, to be a density matrix the matrix \(\rho\) needs to have unit trace \(\mbox{Tr}\rho =1\), to be Hermitian, \(\rho^\dagger= \rho\), and to be positive semi-definite, i.e. \(\textbf{z}^\dagger \rho\, \textbf{z} \geq 0\) for all \(\textbf{z}\in\mathbb{C}^2\).
It is simple to check that \(\rho\) is indeed Hermitian with trace equal to \(1\). Instead of checking directly that \(\rho\) is positive semi-definite, let us see what the density matrix of a pure state \(\ket{\psi} = a\ket{0}+b\ket{1}\) looks like. Let us assume the state is normalised, so \(|a|^2+|b|^2=1\), and pass to the vector representation \(\ket{\psi}\to \left(\begin{matrix} a\\b\end{matrix}\right)\); then the matrix associated to its density operator is
\[ \hat{\rho_\psi} = \ket{\psi}\bra{\psi} \to \rho_\psi = \left(\begin{matrix} a\\b\end{matrix}\right) \left(\begin{matrix} a\\b\end{matrix}\right)^\dagger = \left(\begin{matrix} |a|^2 & a b^*\\ b a^* & |b|^2 \end{matrix}\right)\,. \]
It is simple to see that if we choose \(b=\frac{2}{3}\) and \(a=\frac{1-2i}{3}\) we obtain precisely the matrix \(\rho\) in question. Note that this is not the only possibility! We can multiply \(\ket{\psi} = \frac{1-2i}{3}\ket{0}+\frac{2}{3}\ket{1}\) by any phase \(e^{i \alpha}\) without changing its density matrix.
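A quick numerical check of this example, building \(\rho_\psi\) from the coefficients \(a\) and \(b\) found above and verifying the phase freedom:

```python
import numpy as np

# The matrix from the example and the pure state found in the text.
rho = np.array([[5, 2 - 4j], [2 + 4j, 4]]) / 9
a, b = (1 - 2j) / 3, 2 / 3
psi = np.array([a, b])

assert np.isclose(np.abs(a)**2 + np.abs(b)**2, 1)    # state is normalised
assert np.allclose(np.outer(psi, psi.conj()), rho)   # rho = |psi><psi|

# Multiplying |psi> by a phase leaves the density matrix unchanged.
psi2 = np.exp(0.7j) * psi
assert np.allclose(np.outer(psi2, psi2.conj()), rho)
```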
3.3. Pure versus mixed states
In the example above we were able to find explicitly the pure state whose density operator was the matrix provided. In general, however, we would like to know whether a given density operator comes from a pure state or a mixed one. To this end we have the following theorem.
Let \(\hat{\rho}\) be a density operator on a Hilbert space \(\mathcal{H}\), i.e. \(\mbox{Tr}\,\hat{\rho}=1\), \(\hat{\rho}^\dagger =\hat{\rho}\) and \(\hat{\rho}\) is a positive operator. The density operator \(\hat{\rho}\) corresponds to a pure state if and only if \(\mbox{Tr} \,\hat{\rho}^2 =1\).
Proof:\((\Rightarrow)\) Let us assume that \(\hat{\rho} = \ket{\psi}\bra{\psi}\) is the density matrix associated to a pure state \(\ket{\psi}\in \mathcal{H}\). It is simple to compute \(\hat{\rho}^2 = \ket{\psi}\ip{\psi}{\psi} \bra{\psi} =\hat{\rho}\) since the state is normalised, hence \(\mbox{Tr}\,\hat{\rho}^2 = \mbox{Tr} \hat{\rho} = 1\).
\((\Leftarrow)\) Conversely let us suppose that \(\hat{\rho}\) is the density operator corresponding to the ensemble \(\{p_i,\ket{\psi_i}\}\), i.e. \(\hat{\rho} = \sum_i p_i \hat{\rho}_i = \sum_i p_i \ket{\psi_i}\bra{\psi_i}\). We want to compute \(\mbox{Tr} \hat{\rho}^2\):
\begin{align*} \mbox{Tr} \hat{\rho}^2 &= \sum_n \bra{n} \hat{\rho}^2 \ket{n} = \sum_{n,i,j} p_i p_j \ip{n}{\psi_i} \ip{\psi_i}{\psi_j} \ip{\psi_j}{n}\\ & = \sum_{ij} p_i p_j \ip{\psi_i}{\psi_j} \left(\sum_n \ip{\psi_j}{n}\ip{n}{\psi_i}\right) \\ & = \sum_{ij} p_i p_j \ip{\psi_i}{\psi_j} \bra{\psi_j}\hat{I} \ket{\psi_i} \\ &= \sum_{ij} p_i p_j \ip{\psi_i}{\psi_j}\ip{\psi_j}{\psi_i } \\ &= \sum_{ij} p_i p_j \vert \ip{\psi_i}{\psi_j}\vert^2\leq \sum_{ij} p_i p_j = 1\,. \end{align*}
For a pure state, the density matrix has \(\Tr(\hat{\rho}^2)=1\). For a mixed state, we have instead \(\Tr(\hat{\rho}^2) \lt{} 1\).
In the second line we used the spectral decomposition of the identity operator \(\hat{I} = \sum_n \ket{n}\bra{n}\), while in the last line we made use of the complex Cauchy-Schwarz inequality
\[ \vert \ip{\psi_i}{\psi_j}\vert^2 \leq \ip{\psi_i}{\psi_i} \ip{\psi_j}{\psi_j} = 1 \]
since the states \(\ket{\psi_i}\) are normalised. Finally, in the last step we used the fact that the \(p_i\) are probabilities with \(\sum_i p_i =1\), so that \(\sum_{ij} p_i p_j = \left(\sum_i p_i\right)^2 = 1\).
We also know that the equality in the Cauchy-Schwarz inequality holds if and only if the vectors \(\ket{\psi_i}\) and \(\ket{\psi_j}\) are collinear, i.e. \(\ket{\psi_i} = a \ket{\psi_j}\) for some complex number \(a\in\mathbb{C}\) that can only be a phase \(a=e^{i \alpha}\) since all the vectors must have length one.
Hence we have that \(\mbox{Tr} \,\hat{\rho}^2 \leq 1\), with equality if and only if all the vectors are collinear with one another, i.e. they are all a phase times, say, the first one: \(\ket{\psi_i} = e^{i \alpha_i} \ket{\psi_1}\). But this means that the density matrix is
\[ \hat{\rho} = \sum_i p_i \ket{\psi_i}\bra{\psi_i} = \sum_i p_i \ket{\psi_1}\bra{\psi_1} = \ket{\psi_1}\bra{\psi_1}\,, \]
hence \(\mbox{Tr}\, \hat{\rho}^2 =1 \) and \(\hat{\rho}\) is the density matrix of a pure state.
We then have a complete characterization of pure vs mixed states! We just need to compute \(\mbox{Tr}\,\hat{\rho}^2\): if this number is less than one we have a mixed state, and if it equals one the state is pure.
We will shortly give a geometric characterization of pure and mixed states in the simplest case of a two-dimensional Hilbert space, i.e. what we call a qubit.
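In practice the purity test is a one-liner. Here is a sketch on two qubit density matrices, both chosen purely for illustration:

```python
import numpy as np

# A pure qubit state and the maximally mixed qubit state.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
rho_mixed = 0.5 * np.eye(2)

def purity(rho):
    """Tr(rho^2): equals 1 for pure states, < 1 for mixed states."""
    return np.trace(rho @ rho).real

assert np.isclose(purity(rho_pure), 1.0)     # pure
assert purity(rho_mixed) < 1.0               # mixed
assert np.isclose(purity(rho_mixed), 0.5)
```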
Suppose we have a three-dimensional Hilbert space with orthonormal basis, \(\mathcal{H} =\mbox{span}\{\ket{1},\ket{3},\ket{5}\}\). Compute the density matrix associated to the ensemble \(\{ (2/3, \ket{\psi_1}),(1/3,\ket{\psi_2})\}\) where \(\ket{\psi_1} = \frac{1}{\sqrt{2}} (\ket{1}-\ket{3})\) and \(\ket{\psi_2} = \frac{1}{\sqrt{2}}(\ket{3} + i \ket{5}) \).
First of all we notice that the states \(\ket{\psi_1},\ket{\psi_2}\) are normalised; had they not been, we would have had to normalise them before proceeding. The density operator associated with this mixed state is then
\begin{align*} \hat{\rho} &= \frac{2}{3} \ket{\psi_1}\bra{\psi_1} + \frac{1}{3} \ket{\psi_2} \bra{\psi_2}\\ &=\frac{2}{6}( \ket{1}-\ket{3})(\bra{1}-\bra{3}) + \frac{1}{6} (\ket{3} + i \ket{5})(\bra{3} -i \bra{5})\\ &= \frac{2}{6} \ket{1}\bra{1} -\frac{2}{6} \ket{1}\bra{3}-\frac{2}{6}\ket{3}\bra{1} \\ &\qquad +\frac{1}{2}\ket{3}\bra{3}-\frac{i}{6}\ket{3}\bra{5} +\frac{i}{6} \ket{5}\bra{3} +\frac{1}{6}\ket{5}\bra{5}\,. \end{align*}
If we represent the three basis vectors using the standard basis we can write the density operator as the \(3\times3 \) matrix
\begin{align*} \rho &= \frac{2}{3}\left(\begin{matrix}1/\sqrt{2}\\-1/\sqrt{2}\\0\end{matrix}\right) \left(\begin{matrix}1/\sqrt{2}\\-1/\sqrt{2}\\0\end{matrix}\right) ^\dagger+ \frac{1}{3}\left(\begin{matrix}0\\1/\sqrt{2}\\i/\sqrt{2} \end{matrix}\right) \left(\begin{matrix}0\\1/\sqrt{2}\\i/\sqrt{2} \end{matrix}\right) ^\dagger = \\ & =\left(\begin{matrix}\frac{1}{3} & -\frac{1}{3} & 0\\ -\frac{1}{3} & \frac{1}{2} & -\frac{i}{6} \\ 0& \frac{i}{6} & \frac{1}{6}\end{matrix}\right)\,. \end{align*}
It is simple to check now that the trace of this matrix is indeed one, while \(\mbox{Tr}\rho^2 = \frac{2}{3}\lt{}1\), as expected since the state is mixed.
Finally if we have the observable \(\hat{A}\) with spectrum \(\{1,\ket{1}; 3,\ket{3}; 5,\ket{5} \}\), we can easily compute the expectation value on this state
\[ \mbox{Tr} \left[ \hat{\rho} \hat{A} \right]= \mbox{Tr} \left[ \rho \left(\begin{matrix} 1 & 0 & 0\\ 0 & 3 & 0\\ 0& 0& 5\end{matrix}\right) \right]= \frac{8}{3}\,, \]
which you can also check using the abstract operator formalism.
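Indeed, the whole example can be checked in a few lines of numpy, building \(\hat{\rho}\) directly from the ensemble in the ordered basis \((\ket{1},\ket{3},\ket{5})\):

```python
import numpy as np

# The two ensemble states in the ordered basis (|1>, |3>, |5>).
psi1 = np.array([1, -1, 0]) / np.sqrt(2)
psi2 = np.array([0, 1, 1j]) / np.sqrt(2)

# rho = (2/3)|psi1><psi1| + (1/3)|psi2><psi2|
rho = (2/3) * np.outer(psi1, psi1.conj()) + (1/3) * np.outer(psi2, psi2.conj())

assert np.isclose(np.trace(rho).real, 1)             # unit trace
assert np.isclose(np.trace(rho @ rho).real, 2/3)     # mixed: Tr(rho^2) < 1

# Expectation value of A with spectrum {1, 3, 5} on this state.
A = np.diag([1.0, 3.0, 5.0])
assert np.isclose(np.trace(rho @ A).real, 8/3)
```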
Let \(\mathcal{H} =\mbox{span}\{\ket{1},...,\ket{6}\}\) be a six-dimensional Hilbert space with orthonormal basis \(\ket{i}\) given by the eigenvectors with eigenvalues \(\{1,2,3,4,5,6\}\) of the Hermitian operator \(\hat{A}\). Consider the normalised mixed state given by the ensemble \(\{(\frac{1}{6},\ket{1}),(\frac{1}{6},\ket{2}),(\frac{1}{6},\ket{3}),(\frac{1}{6},\ket{4}),(\frac{1}{6},\ket{5}),(\frac{1}{6},\ket{6})\}\)
\[ \hat{\rho} = \sum_{i=1}^6 \frac{1}{6} \ket{i}\bra{i} = \frac{1}{6}\hat{I}\,,\qquad \rho =\frac{\mathbb{I}_6}{6}\,, \]
where we used the standard basis to represent the basis vectors and obtain the matrix representation for \(\hat{\rho}\) given by \(\rho\).
This state is, in a certain sense (that we will quantify later on), the most mixed: it is an equally probable ensemble of the six basis vectors. Its trace is clearly one, while \(\mbox{Tr}(\hat{\rho}^2) =\mbox{Tr}(\rho^2) = \frac{1}{6}\lt{}1\).
The expectation value of the observable \(\hat{A}\), with spectrum precisely \(\{1,\ket{1};\,...\,; 6,\ket{6}\}\), on this state is given by
\[ \langle A\rangle = \mbox{Tr}( \hat{\rho} \hat{A} ) = \sum_{i=1}^6 \frac{1}{6} \times i =\frac{7}{2}\,, \]
If you like, the state \(\hat{\rho}\) is the most “classically” uncertain of all states: it is exactly the ensemble of a fair six-faced die, for which the average outcome is precisely \(7/2\).
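A short numerical check of the maximally mixed “die” state:

```python
import numpy as np

# Maximally mixed state on a six-dimensional Hilbert space: rho = I/6.
rho = np.eye(6) / 6

assert np.isclose(np.trace(rho), 1)              # unit trace
assert np.isclose(np.trace(rho @ rho), 1/6)      # Tr(rho^2) = 1/6 < 1

# <A> for A with eigenvalues 1..6 equals the average roll of a fair die.
A = np.diag(np.arange(1, 7, dtype=float))
assert np.isclose(np.trace(rho @ A), 7/2)
```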
3.4. Problems
  1.
    Consider a two-dimensional Hilbert space with states represented by two-component column vectors, and with the standard inner product.
    1. Find the (normalised) density matrix for each of the following states:
      \[ \cvec{1 \\ 0} \; , \;\; \cvec{0 \\ 1} \; , \;\; \cvec{1 \\ 1} \; , \;\; \cvec{1 \\ -1} \; . \]
    2. Find states corresponding to the following density matrices, if possible:
      \[ \left( \begin{array}{cc} \frac{3}{4} & \frac{\sqrt{3}}{4}i \\ -\frac{\sqrt{3}}{4}i & \frac{1}{4} \end{array} \right) \; , \;\; \left( \begin{array}{cc} \frac{3}{4} & \frac{\sqrt{3}}{4}i \\ \frac{\sqrt{3}}{4}i & \frac{1}{4} \end{array} \right) \; , \;\; \left( \begin{array}{cc} \frac{3}{4} & \frac{\sqrt{3}}{4} \\ \frac{\sqrt{3}}{4} & \frac{1}{4} \end{array} \right) \; , \;\; \left( \begin{array}{cc} \frac{3}{4} & 0 \\ 0 & \frac{1}{4} \end{array} \right) \; . \]
    3. Calculate \(\Tr(\rho^2)\) for each of the density matrices \(\rho\) in parts (a) and (b).
    Solution:
    1. Recall that for a pure state \(\ket{\psi}\) the density operator is \(\hat{\rho} = \ket{\psi}\bra{\psi}\). For the usual vector representation of states this means that for a vector \(u\) we have the density matrix \(\rho = uu^{\dagger}\). If the state/vector is not normalised we need to either first normalise it, or divide by the norm squared when calculating the density operator/matrix. For the four given vectors we have the following density matrices \(\rho\):
      \begin{eqnarray*} \cvec{1 \\ 0} \left( \begin{array}{cc} 1 & 0 \end{array} \right) & = & \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \\ \cvec{0 \\ 1} \left( \begin{array}{cc} 0 & 1 \end{array} \right) & = & \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) \\ \frac{1}{2} \cvec{1 \\ 1} \left( \begin{array}{cc} 1 & 1 \end{array} \right) & = & \frac{1}{2} \left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right) \\ \frac{1}{2} \cvec{1 \\ -1} \left( \begin{array}{cc} 1 & -1 \end{array} \right) & = & \frac{1}{2} \left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right) \end{eqnarray*}
      Note that in all cases the density matrices are properly normalised, i.e. \(\Tr \rho = 1\) and the matrices are Hermitian.
    2. You can just consider an arbitrary normalised pure state which is represented by a vector \(\cvec{a \\ b}\) for some \(a, b \in \mathbb{C}\) with \(|a|^2 + |b|^2 = 1\). This will have density matrix
      \[ \rho = \cvec{a \\ b} \left( \begin{array}{cc} a^* & b^* \end{array} \right) = \left( \begin{array}{cc} |a|^2 & ab^* \\ ba^* & |b|^2 \end{array} \right) . \]
      It is then easy to see that the first and the third matrices are given by the vectors \(\cvec{\frac{\sqrt{3}}{2} \\ \frac{-i}{2}}\) and \(\cvec{\frac{\sqrt{3}}{2} \\ \frac{1}{2}}\) (and these vectors are unique up to multiplication by a phase \(\exp(i \phi)\) for any \(\phi \in \mathbb{R}\)).
      The second matrix is not Hermitian so it cannot be a density matrix (for a pure or a mixed state.)
      The final matrix clearly cannot be the density matrix for a pure state, but it is Hermitian and has trace one. In fact it is also clearly a positive matrix so it is a mixed state density matrix. For mixed states the ensemble is not unique, but an obvious example here is \(\cvec{1 \\ 0}\) with probability \(\frac{3}{4}\) and \(\cvec{0 \\ 1}\) with probability \(\frac{1}{4}\).
    3. For the density matrices from part (a), and for the first and third matrices in (b) you should find \(\Tr (\rho^2) = 1\) since they are pure states. For the final matrix in (b) \(\Tr (\rho^2) = \frac{5}{8} \lt{} 1\) as expected for a mixed state.
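Parts (b) and (c) can be checked numerically. Here is a sketch for the first and last matrices of part (b):

```python
import numpy as np

# First matrix of part (b): pure, corresponding to (sqrt(3)/2, -i/2).
rho1 = np.array([[3/4, (np.sqrt(3)/4) * 1j],
                 [-(np.sqrt(3)/4) * 1j, 1/4]])
u = np.array([np.sqrt(3)/2, -0.5j])

assert np.allclose(np.outer(u, u.conj()), rho1)       # rho1 = u u^dagger
assert np.isclose(np.trace(rho1 @ rho1).real, 1)      # pure: Tr(rho^2) = 1

# Last matrix of part (b): mixed, with Tr(rho^2) = 5/8 < 1.
rho4 = np.diag([3/4, 1/4])
assert np.isclose(np.trace(rho4 @ rho4), 5/8)
```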