Density Operator

Alexei Gilchrist

3 States
The description of physical states as vectors in a Hilbert space can be extended to a probabilistic description when we don’t have complete information. It turns out we are forced to use such a description in certain situations.

1 Motivation

We saw previously that the state of a physical system is represented as a norm-1 vector in a Hilbert space. The normalisation was due to the measurement postulate, which associates a probability with the absolute square of the inner product between two states. This description assumes we know exactly what the physical state is, but what if we have only incomplete knowledge of the state? The obvious thing to do in this case is to describe the physical state in terms of a probability distribution over exact, or “pure”, states. Representing this mathematically is not as straightforward as it might seem, because the obvious construction does not work. Say in classical physics the state of some system was \(s_1\) with probability \(p\) or \(s_2\) with probability \(1-p\). I could conveniently represent the state by the average \(s=ps_1+(1-p)s_2\), and this would work for any linear function of the state. If I use the same construction with vectors it fails: \(|s\rangle = p|s_1\rangle+(1-p)|s_2\rangle\) is just another vector, representing an exact state of the system rather than a probabilistic mixture.

We also came across a puzzling situation when we combined two or more systems: there were states of the combined system that couldn’t be written as states of the individual systems. This should make you feel uncomfortable—you can know exactly what the physical state of two systems is, but if you choose to focus on only one of them, the formalism covered so far cannot represent its state! The formalism is clearly missing something, but the good news is that we can extend it in a way that solves both this and the previous problem.

In fact, the second problem is one of the key distinguishing features of quantum mechanics over classical mechanics. In essence, quantum states can encode information in the global degrees of freedom of multiple parties in a way that is not possible in classical physics. When we discard one of the component systems we lose that global information and are reduced to an incomplete, probabilistic description over physical states. We will make these notions much more precise in due course.

First, note that a state \(|\psi\rangle\) and the rank-1 projector onto that state, \(|\psi \rangle\langle \psi|\), carry the same information up to a global phase (which isn’t measurable anyway); from one you can recover the other:

\[ |\psi\rangle \longleftrightarrow |\psi \rangle\langle \psi|. \]

Say a device prepares a system in one of several states \(\{|\psi_1\rangle, |\psi_2\rangle,\ldots , |\psi_n\rangle \}\), each with a given probability \(\{p_1, p_2, \ldots, p_n\}\), and \(\sum_j p_j = 1\). For a moment, let’s imagine we know the device prepared state \(|\psi_k\rangle\), then the expectation of some observable \(A\) will be

\[\langle A \rangle_{k} = \langle \psi_k|A|\psi_k\rangle = \mathrm{Tr}(A |\psi_k \rangle\langle \psi_k|).\]
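As a quick numerical sketch of this identity, here is a minimal NumPy check on a hypothetical qubit example (the state, the observable, and all variable names below are illustrative choices, not from the notes):

```python
import numpy as np

# Hypothetical example: |psi> = (|0> + |1>)/sqrt(2), observable A = Pauli-Z
psi = np.array([1.0, 1.0]) / np.sqrt(2)
A = np.array([[1.0, 0.0], [0.0, -1.0]])

# Expectation as an inner product: <psi|A|psi>
exp_inner = psi.conj() @ A @ psi

# Expectation as a trace: Tr(A |psi><psi|)
projector = np.outer(psi, psi.conj())
exp_trace = np.trace(A @ projector)

print(np.isclose(exp_inner, exp_trace))  # the two forms agree
```

The equality holds for any state and observable; it is just the cyclic property of the trace applied to \(\langle \psi|A|\psi\rangle\).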
Knowing the probabilities with which each state occurs, we can calculate the overall expectation value too:
\[\begin{align*}\langle A \rangle &= \sum_k p_k \langle A \rangle_{k} = \sum_k p_k \mathrm{Tr}(A |\psi_k \rangle\langle \psi_k|) \\ &= \mathrm{Tr}\left(A \sum_k p_k |\psi_k \rangle\langle \psi_k|\right) = \mathrm{Tr}(A \rho),\end{align*}\]
where in the last expression we’ve defined the linear operator \(\rho = \sum_k p_k |\psi_k \rangle\langle \psi_k|\). The operator \(\rho\), known as the “density operator”, is the extension of the vector picture that allows a probabilistic description.
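The derivation above can be sketched numerically. The ensemble and observable below are hypothetical choices for illustration (the states \(|0\rangle, |1\rangle\) with probabilities 0.75 and 0.25, and \(A\) the Pauli-Z matrix):

```python
import numpy as np

ps = [0.75, 0.25]
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # |0>, |1>
A = np.array([[1.0, 0.0], [0.0, -1.0]])                # Pauli-Z observable

# Density operator rho = sum_k p_k |psi_k><psi_k|
rho = sum(p * np.outer(s, s.conj()) for p, s in zip(ps, states))

# Overall expectation two ways:
# (1) probability-weighted average of per-state expectations
avg = sum(p * (s.conj() @ A @ s) for p, s in zip(ps, states))
# (2) single trace against the density operator
tr = np.trace(A @ rho)

print(np.isclose(avg, tr))  # True; both equal 0.75 - 0.25 = 0.5
```

The point is that all the ensemble information needed for expectation values is packaged into the single operator \(\rho\).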

2 Density operator

We’ve introduced the density operator as the description of a probabilistic preparation of quantum states:

\[\rho = \sum_k p_k |\psi_k \rangle\langle \psi_k|.\]
The formalism covered so far can be extended to use the density operator as a more general representation of a quantum state.

Thinking of the density operator as only being a convenience to represent a probabilistic preparation will turn out to be a bit narrow, but first let’s test the definition for consistency. Given we have prepared the state \(\rho\) as above, and assuming the states \(|\psi_k\rangle\) are orthonormal, we should find that the probability of measuring the state \(|\psi_k\rangle\) is the same as the preparation probability \(p_k\).

\[\langle \psi_k|\rho|\psi_k\rangle = \sum_j p_j \langle \psi_k | \psi_j\rangle \langle \psi_j | \psi_k\rangle = \sum_j p_j \;\delta_{k,j}\delta_{j,k} = p_k.\]
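This consistency check is easy to reproduce numerically. A minimal sketch, assuming an orthonormal pair \(|0\rangle, |1\rangle\) prepared with hypothetical probabilities 0.6 and 0.4:

```python
import numpy as np

ps = [0.6, 0.4]
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # orthonormal |0>, |1>

# Density operator for the preparation
rho = sum(p * np.outer(s, s.conj()) for p, s in zip(ps, states))

# <psi_k| rho |psi_k> recovers each preparation probability p_k
recovered = [float(s.conj() @ rho @ s) for s in states]
print(recovered)  # [0.6, 0.4]
```

Note that the orthonormality assumption matters: for non-orthogonal preparation states the cross terms \(\langle \psi_k|\psi_j\rangle\) no longer vanish, and \(\langle \psi_k|\rho|\psi_k\rangle\) is not simply \(p_k\).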

© Copyright 2022 Alexei Gilchrist