Dirac Braket Notation

Alexei Gilchrist

The Dirac braket notation is popular in Quantum Mechanics, but at its core it is just an elegant notation for vectors of complex numbers. Here it is introduced in a finite-dimensional complex vector space.

1 Ket

A ket is simply a vector of complex numbers:

\[\begin{equation}|a\rangle = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \end{pmatrix}\end{equation}\]
Note that the argument of the ket is simply a label for the vector and doesn’t carry any intrinsic meaning. Writing \(|\mathrm{boom!}\rangle\) or \(|+\rangle\) is just as valid as writing \(|\psi\rangle\). Sometimes the label will be used as part of a calculation, so you may find expressions like
\[\begin{equation*}\mathbf{A}|\alpha \psi\rangle = \alpha \mathbf{A}|\psi\rangle\end{equation*}\]
but bear in mind this is just assuming the additional notation \(|\alpha \psi\rangle\equiv\alpha|\psi\rangle\) and is not something inherent to the braket notation. Often the label will be an index into a set of vectors like a basis.
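As a concrete (if informal) illustration, a ket can be modelled in Python as a plain list of complex numbers; the names below are just labels for this example:

```python
# A ket |psi> modelled as a column of complex numbers (here, a plain list).
ket_psi = [1 + 2j, 0, 3 - 1j]

# The additional notation |alpha psi> = alpha |psi> is just scalar multiplication.
def scale(alpha, ket):
    """Multiply every component of the ket by the scalar alpha."""
    return [alpha * c for c in ket]

print(scale(2j, ket_psi))  # [(-4+2j), 0j, (2+6j)]
```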

2 Bra

The bra \(\langle a|\) is the transpose complex-conjugate of the ket vector, an operation that is usually denoted by a dagger symbol:

\[\begin{equation}\langle a| = \begin{pmatrix} a_1^* & a_2^* & \cdots \end{pmatrix} = |a\rangle^\dagger\end{equation}\]
The complex-conjugation operation simply replaces each \(i\) with \(-i\): \((a+ib)^*=a-ib\), and the transpose turns the column into a row.

Note that \(\langle a|\) is in a sense a different object from \(|a\rangle\). Whereas \(|a\rangle\) is a vector, \(\langle a|\) is sometimes called a co-vector, covariant vector, or a one-form. It lives in a dual space to the vector space that \(|a\rangle\) lives in.
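In code, forming the bra from a ket is just conjugating each entry; the transpose is implicit if we agree to treat the result as a row. A minimal Python sketch with illustrative names:

```python
def dagger(ket):
    """Bra <a| from ket |a>: complex-conjugate each entry.
    The transpose is implicit: we treat the result as a row vector."""
    return [c.conjugate() for c in ket]

ket_a = [1 + 1j, 2 - 3j]
bra_a = dagger(ket_a)
print(bra_a)  # [(1-1j), (2+3j)]
```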

3 Inner-product

Multiplying a bra and ket yields the usual inner product, and the resulting braket is the source of the whimsical names for the notation:

\[\begin{equation}\label{eq:braket} \langle a | b\rangle = \begin{pmatrix} a_1^* & a_2^* & \cdots \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ \vdots \end{pmatrix} = \sum_j a_j^* b_j = c\end{equation}\]
It is usual to drop the extra bar and write \(\langle a | b\rangle\) instead of \(\langle a||b\rangle\), and there is actually some subtle “syntactic sugar” at work here. \(\langle a | b\rangle\) is a complex number, whereas \(\langle a||b\rangle\) is the dot product of two vectors; implicit in the notation is that the dot product will produce a number. It’s these small tricks of perception that make the braket notation a very elegant representation of vectors. We’ll see some more below.
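Equation \eqref{eq:braket} translates directly into code. A small Python sketch (the helper name `braket` is just for this example):

```python
def braket(a, b):
    """Inner product <a|b> = sum_j a_j^* b_j."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

print(braket([1, 1j], [1, 1j]))  # <a|a> = 1 + (-1j)(1j), prints (2+0j)
print(braket([1, 0], [0, 1]))    # orthogonal vectors, prints 0
```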

First let’s confirm that the braket really is an inner product. Apart from the requirement that the inner product takes two vectors and produces a number, which is demonstrated in \eqref{eq:braket}, there are only three properties to check:

  1. Complex conjugation swaps the order:
    \[\begin{equation*}\langle a | b\rangle^*=\langle a|^*|b\rangle^*=(|b\rangle^{*T}\langle a|^{*T})^T=\langle b | a\rangle^T=\langle b | a\rangle.\end{equation*}\]
  2. Linear in the second argument: say \(|c\rangle=\sum_j \gamma_j|c_j\rangle\)
    \[\begin{equation*}\langle a | c\rangle = \langle a| \sum_j \gamma_j|c_j\rangle = \sum_j \gamma_j\langle a | c_j\rangle.\end{equation*}\]
    (Some conventions have an inner product \((\cdot,\cdot)\) which is linear in its first argument, in which case \((b,a)\equiv\langle a | b\rangle\)).
  3. Self-inner product is a non-negative real number:
    \[\begin{equation*}\langle a | a\rangle = (a_1^* \ldots a_n^*)\cdot \begin{pmatrix} a_1\\ \vdots \\ a_n\end{pmatrix} = |a_1|^2 + \ldots + |a_n|^2 \ge 0.\end{equation*}\]
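These three properties can be spot-checked numerically. A Python sketch, with an ad-hoc `braket` helper and arbitrary example vectors:

```python
def braket(a, b):
    """Inner product <a|b> = sum_j a_j^* b_j."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

a = [1 + 2j, 3j]
b = [2 - 1j, 1 + 1j]

# 1. Complex conjugation swaps the order: <a|b>* == <b|a>
assert braket(a, b).conjugate() == braket(b, a)

# 2. Linear in the second argument:
#    <a| (g1|c1> + g2|c2>) == g1 <a|c1> + g2 <a|c2>
c1, c2 = [1, 1j], [2j, 3]
g1, g2 = 2 + 1j, -3j
combo = [g1 * x + g2 * y for x, y in zip(c1, c2)]
assert braket(a, combo) == g1 * braket(a, c1) + g2 * braket(a, c2)

# 3. Self-inner product is a non-negative real number
aa = braket(a, a)
assert aa.imag == 0 and aa.real >= 0
```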

Now that we have an inner product, we can describe vectors as being orthogonal if \(\langle a | b\rangle=0\), and normalised if \(\langle a | a\rangle=1\), and so we can define an orthonormal basis \(\{|a_j\rangle\}\) with the property that

\[\begin{equation}\label{eq:orthonormal} \langle a_j | a_k\rangle = \delta_{jk}.\end{equation}\]
Any vector from that vector space can then be expanded in the basis
\[\begin{equation*}|b\rangle = \sum_j a_j |a_j\rangle.\end{equation*}\]
Note that we have used the symbol \(a_j\) to denote two things in the above expression. The bare \(a_j\) is the complex coefficient in the expansion, and the \(a_j\) in the ket is a label for the basis vector. In practice, the context is more than sufficient to avoid ambiguity.

Since the \(\{|a_j\rangle\}\) basis we have introduced is orthonormal, we have

\[\begin{equation*}\langle a_k | b\rangle = \sum_j a_j \langle a_k | a_j\rangle = a_k,\end{equation*}\]
so we can write the expansion in basis vectors as
\[\begin{equation}\label{eq:basisexpansion} |b\rangle = \sum_j \langle a_j | b\rangle|a_j\rangle.\end{equation}\]
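Equation \eqref{eq:basisexpansion} can be tried out numerically. A Python sketch (the `braket` helper and the particular basis are just for this example) expands a vector in the orthonormal basis \(\{(1,1)/\sqrt{2},\,(1,-1)/\sqrt{2}\}\) of \(\mathbb{C}^2\) and reconstructs it:

```python
import math

def braket(a, b):
    """Inner product <a|b> = sum_j a_j^* b_j."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

s = 1 / math.sqrt(2)
basis = [[s, s], [s, -s]]   # an orthonormal basis of C^2

b = [3 + 1j, 1 - 2j]
coeffs = [braket(aj, b) for aj in basis]              # the <a_j|b>
reconstructed = [sum(cj * aj[k] for cj, aj in zip(coeffs, basis))
                 for k in range(2)]
# `reconstructed` agrees with `b` up to floating-point rounding
```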

4 Tensor-product

We can define the product of two kets to be the tensor-product, with the result that it forms a larger ket. If the dimensions of \(|a\rangle\) and \(|b\rangle\) are \(m\) and \(n\) respectively, the dimension of the product will be \(mn\):

\[\begin{equation*}|a\rangle|b\rangle \equiv |a\rangle\otimes|b\rangle=(a_1b_1, a_1b_2, \dots, a_1b_n, a_2b_1, a_2b_2, \ldots, a_2b_n, \ldots, a_mb_n)^T\end{equation*}\]
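Continuing the list-based sketch from before, the tensor product is a one-liner in Python (names are illustrative):

```python
def tensor(a, b):
    """Tensor (Kronecker) product of two kets, in the order
    (a1*b1, a1*b2, ..., a1*bn, a2*b1, ...); the dimension is m*n."""
    return [x * y for x in a for y in b]

print(tensor([1, 2], [3, 4]))          # [3, 4, 6, 8]
print(len(tensor([1, 2, 3], [5, 6])))  # 6 = 3*2
```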

A ket multiplying a bra forms a matrix and again you can imagine there is an implicit tensor-product between the two:

\[\begin{equation}|a \rangle\langle b| \equiv \begin{pmatrix} a_1 \\ a_2 \\ \vdots \end{pmatrix}\otimes\begin{pmatrix} b_1^* & b_2^* & \cdots \end{pmatrix} = \begin{pmatrix} a_1 b_1^* & a_1 b_2^* & \cdots \\ a_2 b_1^* & a_2 b_2^* & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}\end{equation}\]

Every matrix can be written as a sum of outer-products of this form. It’s easiest to see this with the standard basis composed of vectors that have a single `1’ in the \(j^\mathrm{th}\) row: \(|e_j\rangle=(0,\ldots,1,\ldots,0)^T\). The matrix formed by \(|e_j \rangle\langle e_k|\) will have a single `1’ in the \(j^\mathrm{th}\) row and \(k^\mathrm{th}\) column. So

\[\begin{equation*}\mathbf{M} = \sum_{jk} m_{jk}|e_j \rangle\langle e_k|\end{equation*}\]
where \(m_{jk}\) is the element on the \(j^\mathrm{th}\) row and \(k^\mathrm{th}\) column.
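This decomposition can be verified with a small Python sketch (kets as lists, matrices as lists of rows; all names illustrative), rebuilding a matrix from the outer products \(|e_j\rangle\langle e_k|\):

```python
def outer(a, b):
    """Outer product |a><b|: the matrix with entries a_j * b_k^*."""
    return [[x * y.conjugate() for y in b] for x in a]

M = [[1, 2j], [3 - 1j, 4]]
n = len(M)

# Rebuild M as sum_{jk} m_jk |e_j><e_k| using the standard basis vectors.
rebuilt = [[0] * n for _ in range(n)]
for j in range(n):
    for k in range(n):
        ej = [1 if i == j else 0 for i in range(n)]
        ek = [1 if i == k else 0 for i in range(n)]
        P = outer(ej, ek)          # single 1 in row j, column k
        for r in range(n):
            for c in range(n):
                rebuilt[r][c] += M[j][k] * P[r][c]

assert rebuilt == M
```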

5 In the eye of the beholder

Notice that by judicious rearrangement of the elements we can split up an expression between matrices, vectors and numbers:

\[\begin{equation}|a \rangle\langle b||c\rangle = |a\rangle\langle b | c\rangle = \langle b | c\rangle |a\rangle.\end{equation}\]
This is where the notation comes into its own.

Have another look at the expansion of a vector in an orthonormal basis, Eq.\eqref{eq:basisexpansion}. With a trivial rearrangement we get:

\[\begin{equation*}|b\rangle = \sum_j \langle a_j | b\rangle|a_j\rangle = \sum_j|a_j \rangle\langle a_j||b\rangle.\end{equation*}\]
Since this is true for any \(|b\rangle\), this proves that \begin{equation*} \sum_j|a_j \rangle\langle a_j| = I \end{equation*} for an orthonormal basis, a result that may not have been obvious but is very useful.
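This completeness relation can also be checked numerically. A Python sketch using the orthonormal basis \(\{(1,1)/\sqrt{2},\,(1,-1)/\sqrt{2}\}\) of \(\mathbb{C}^2\) (chosen only for illustration):

```python
import math

def outer(a, b):
    """Outer product |a><b|: the matrix with entries a_j * b_k^*."""
    return [[x * y.conjugate() for y in b] for x in a]

s = 1 / math.sqrt(2)
basis = [[s, s], [s, -s]]   # an orthonormal basis

# sum_j |a_j><a_j| should give the identity matrix
total = [[0.0, 0.0], [0.0, 0.0]]
for aj in basis:
    P = outer(aj, aj)
    for r in range(2):
        for c in range(2):
            total[r][c] += P[r][c]

# total is [[1, 0], [0, 1]] up to floating-point rounding
```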

© Copyright 2022 Alexei Gilchrist