# Bra-Ket Notation

## Table of contents

- **Wave function as a vector in Hilbert space**: Here you will learn how the wave function from quantum mechanics can be understood as an infinite-dimensional vector living in Hilbert space.
- **Bra and Ket vectors**: Here you will learn how bra and ket vectors are defined and how they are related to each other.
- **Scalar Product and Inner Product in Bra-Ket notation**: Here you will learn how to use bra-ket notation to write the overlap integral as the scalar product of bra and ket vectors.
- **Tensor product (density matrix)**: Here you will learn how the tensor product of bra and ket vectors yields the density matrix, projectors, and the 'insertion of the identity'.

## Wave function as a vector in Hilbert space

Consider any one-dimensional wave function \( \mathit{\Psi}(x)\) describing a quantum mechanical particle. The value of the wave function at location \( \class{red}{x_1} \), for example, is \( \mathit{\Psi}(\class{red}{x_1}) \); at location \(\class{green}{x_2}\) the function value is \( \mathit{\Psi}(\class{green}{x_2}) \); at location \(\class{blue}{x_3}\) it is \( \mathit{\Psi}(\class{blue}{x_3}) \); and so on. In this way you can assign a function value to each \(x\) value. We can then collect all these function values in a list and think of this list as a column vector \( \mathit{\Psi}\) that lives in an abstract space. The vector then has the components:

**Wave function values as column vector**

$$ \begin{align} \mathit{\Psi} ~=~ \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \mathit{\Psi}(\class{blue}{x_3}) \end{bmatrix} \end{align} $$

This vector has the coordinates \( \mathit{\Psi}(\class{red}{x_1}) \), \( \mathit{\Psi}(\class{green}{x_2}) \), \( \mathit{\Psi}(\class{blue}{x_3}) \) and lives in a three-dimensional abstract space. We can even illustrate this approximated vector (see Illustration 1, right).

As soon as we add an additional function value \( \mathit{\Psi}(x_4) \), the space becomes four-dimensional and can no longer be represented graphically. The principle of how a function can be understood as a vector should now be clear. In the case of the wave function, we call this vector the **state vector**.

Theoretically there are of course *infinitely many* \( x \) values. Therefore there are also infinitely many associated function values \( \mathit{\Psi}(x) \). If there are infinitely many function values \( \mathit{\Psi}(x) \), the space in which the vector \( \mathit{\Psi}\) lives is *infinite-dimensional*. Remember, this is not an infinite-dimensional position space, but an abstract space, as shown in Illustration 1. This abstract space in which quantum mechanical state vectors live is called **Hilbert space**. In general, this is an infinite-dimensional vector space. But it can also be finite-dimensional, as for example in the case of spin states.

If you approximate the wave function by *finitely* many function values, for example for numerical calculations on a computer (otherwise it does not work), the \( \mathit{\Psi}\) vector will have *finitely* many components. It is clear that if you choose a larger \(n\), the representation of the wave function as a vector becomes more accurate:

**Wave function values as finite column vector**

$$ \begin{align} \mathit{\Psi} ~=~ \begin{bmatrix} \mathit{\Psi}(x_1) \\ \mathit{\Psi}(x_2) \\ \vdots \\ \mathit{\Psi}(x_n) \end{bmatrix} \end{align} $$

If the Hilbert space is finite-dimensional, then a \( \mathit{\Psi}\) vector with \(n\) components lives in an \(n\)-dimensional Hilbert space.
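To make this concrete, here is a small numerical sketch (not part of the original article; the Gaussian shape and the grid are illustrative choices): sampling a wave function at \(n\) grid points turns it into an \(n\)-component vector.

```python
import numpy as np

# Illustrative example: sample a Gaussian-shaped wave function at n grid
# points, turning the function Psi(x) into an n-component vector.
def wavefunction(x):
    return np.exp(-x**2)           # unnormalized Gaussian, chosen for illustration

n = 5                              # coarse grid -> rough approximation
x = np.linspace(-2.0, 2.0, n)      # positions x_1 ... x_n
psi = wavefunction(x)              # the n components Psi(x_1) ... Psi(x_n)

print(psi.shape)                   # (5,) -- a vector in a 5-dimensional space
```

Choosing a larger \(n\) refines the grid and makes the vector a better approximation of the continuous wave function.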

## Bra and Ket vectors

So, as you learned, we can represent a quantum mechanical particle in two ways:

- as a wave function
- as a state vector

In order to distinguish the description of the particle as a state vector from the description as a wave function, we write the state vector as follows:

**Ket vector**

$$ \begin{align} |\mathit{\Psi}\rangle ~=~ \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \mathit{\Psi}(\class{blue}{x_3}) \\ \vdots \end{bmatrix} \end{align} $$

The wave function \(\mathit{\Psi}(x)\) represented as a *column vector* is called a **ket vector** \(|\mathit{\Psi}\rangle\). It doesn't matter what you write between \( | ~~ \rangle \); for example, you could also write \(|\mathit{\Psi}(x)\rangle\). The main thing is that it is clear from the ket notation which quantum mechanical system the ket vector represents.

So when you see the ket notation \(|\mathit{\Psi}\rangle\), you know that it means the representation of the particle state as a *state vector*. On the other hand, if you see \(\mathit{\Psi}(x)\), you know that it means the representation of the particle state as a *wave function*.

The vector \(|\mathit{\Psi}\rangle^\dagger\) *adjoint* to the ket vector is called **bra vector**. The symbol '\(\dagger\)' is pronounced as 'dagger'. For a clever, compact notation, we write the bra vector with an inverted arrow: \(|\mathit{\Psi}\rangle^\dagger ~:=~ \langle\mathit{\Psi}|\). Note that '*adjoint*' is sometimes also called '*Hermitian adjoint*'.

To get the bra vector \( \langle\mathit{\Psi}| \) *adjoint* to the ket vector \( |\mathit{\Psi}\rangle \), you need to do two things:

1. *Transpose* the ket vector. This makes it a row vector:

    **Transposed ket vector**

    $$ \begin{align} \left[ \mathit{\Psi}(\class{red}{x_1}),~ \mathit{\Psi}(\class{green}{x_2}),~ \mathit{\Psi}(\class{blue}{x_3}),~~... \right] \end{align} $$

2. *Complex-conjugate* the transposed ket vector. This operation 'adds asterisks' to the components:

**Bra vector**

$$ \begin{align} \langle\mathit{\Psi}| ~=~ \left[ \mathit{\Psi}(\class{red}{x_1})^{*},~ \mathit{\Psi}(\class{green}{x_2})^{*},~ \mathit{\Psi}(\class{blue}{x_3})^{*},~~... \right] \end{align} $$

Since we have interpreted the wave function \(\mathit{\Psi}\) as a ket vector \(| \mathit{\Psi} \rangle\), we can calculate with it in practically the same way as with the usual vectors you know from mathematics. For example, we can form a scalar product or a tensor product between the bra and ket vectors. What is probably new to you is that the components of the vector can be *complex* and the number of components can be *infinite*. In that case, you would have a vector with infinitely many complex components. Practically, that is, in the numerical representation of a ket vector, it always has *finitely* many components. In finite-dimensional Hilbert spaces, which are just as important in quantum mechanics, the bra and ket vectors have finitely many components anyway. The spin state vectors, for example, live in a two-dimensional Hilbert space, so the bra and ket vectors have only two components.
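As a small illustration of the adjoint operation (the concrete component values are made up for this example), here is a sketch in NumPy: transposing a complex column vector and conjugating its entries yields the bra vector.

```python
import numpy as np

# Sketch: a ket as a complex column vector and its adjoint (bra).
ket = np.array([[1 + 2j],
                [3 - 1j]])         # |Psi> as a 2x1 column (e.g. a spin state)

bra = ket.conj().T                 # <Psi| = transpose + complex conjugation

print(bra.shape)                   # (1, 2) -- a row vector with conjugated entries
```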

## Scalar Product and Inner Product in Bra-Ket notation

You can form the **scalar product** of the bra and ket vectors. By the way, in an abstract infinite-dimensional space the scalar product is called an *inner product*. In a finite-dimensional Hilbert space, the scalar product between the bra and ket vectors looks like this:

**Row bra vector multiplied by column ket vector**

$$ \begin{align} \langle\mathit{\Psi}|\mathit{\Psi}\rangle ~=~ \left[ \mathit{\Psi}(x_1)^{*},~ \mathit{\Psi}(x_2)^{*},~ ...,~ \mathit{\Psi}(x_n)^{*} \right] \begin{bmatrix} \mathit{\Psi}(x_1) \\ \mathit{\Psi}(x_2) \\ \vdots \\ \mathit{\Psi}(x_n) \end{bmatrix} \end{align} $$

Here you can see one of the many examples of why notating the bra vector with a reversed arrow is convenient. The notation is suitable for scalar products in finite-dimensional as well as inner products in infinite-dimensional Hilbert spaces, and we do not need to write out the scalar product dot or a second vertical line: we write \(\langle\mathit{\Psi} | \mathit{\Psi} \rangle\) instead of \( \langle\mathit{\Psi} | ~\cdot~ | \mathit{\Psi} \rangle \).

Of course, you can also form the scalar product between two different state vectors:

**Row bra vector multiplied by another column ket vector**

$$ \begin{align} \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \left[ \mathit{\Phi}(x_1)^{*},~ \mathit{\Phi}(x_2)^{*},~ ...,~ \mathit{\Phi}(x_n)^{*} \right] \begin{bmatrix} \mathit{\Psi}(x_1) \\ \mathit{\Psi}(x_2) \\ \vdots \\ \mathit{\Psi}(x_n) \end{bmatrix} \end{align} $$

You can multiply out the vectors just as you do in usual matrix multiplication:

**Scalar product of two states written out**

$$ \begin{align} \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \mathit{\Phi}(x_1)^{*}\,\mathit{\Psi}(x_1) ~+~ \mathit{\Phi}(x_2)^{*}\,\mathit{\Psi}(x_2) ~+~ ... ~+~ \mathit{\Phi}(x_n)^{*}\,\mathit{\Psi}(x_n) \end{align} $$

You can write this even more compactly with a sum sign (if the dimension of the Hilbert space is infinite, then the sum is of course only an approximation):

**Scalar product of two states**

$$ \begin{align} \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \sum_{i=1}^{n} \mathit{\Phi}(x_i)^{*}\, \mathit{\Psi}(x_i) \end{align} $$

Here \(n\) is the dimension of the Hilbert space, i.e. the number of components of a state vector living in this Hilbert space.
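As a quick numerical check of this sum formula (the component values below are arbitrary illustrative numbers): NumPy's `vdot` conjugates its first argument, exactly like the bra vector in the sum.

```python
import numpy as np

# Sketch: the scalar product <Phi|Psi> as the component sum Phi(x_i)* Psi(x_i).
phi = np.array([1.0 + 1.0j, 0.5j, 2.0])
psi = np.array([0.5, 1.0 - 1.0j, 1.0j])

inner = np.vdot(phi, psi)          # np.vdot conjugates its first argument
manual = np.sum(phi.conj() * psi)  # explicit sum over conjugated components

print(np.isclose(inner, manual))   # True
```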

If we take two normalized and orthogonal states \( \mathit{\Psi}_i \) and \( \mathit{\Psi}_j \) and give them indices rather than different letters, then the scalar product gives either 0 or 1 (imagine the scalar product of two basis vectors).

- The scalar product of two different orthonormal states (\(i \neq j\)) yields 0.
- The scalar product of two equal orthonormal states (\(i = j\)) yields 1.

This orthonormality of states can be combined into a single equation with a Kronecker delta:

**Scalar product of two orthonormal states**

$$ \begin{align} \langle\mathit{\Psi}_i|\mathit{\Psi}_j\rangle ~=~ \delta_{ij} \end{align} $$

We can make the illustrative transition to an infinite-dimensional Hilbert space as follows: we scale the scalar product with \(\Delta x\) and let \(\Delta x \rightarrow 0 \) go to zero. Then \(\Delta x\) turns into \(\text{d}x \) and the sum sign into an integral sign, which corresponds to a continuous sum:

**Inner product between equal states**

$$ \begin{align} \langle\mathit{\Psi}|\mathit{\Psi}\rangle ~=~ \int \mathit{\Psi}(x)^{*}\, \mathit{\Psi}(x) \, \text{d}x \end{align} $$

This integral is sometimes called the **overlap integral** because, like the scalar product, it indicates how much the two states \( \langle\mathit{\Psi} | \) and \(| \mathit{\Psi} \rangle\) overlap.
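Here is a sketch of this limiting process (the normalized Gaussian is an illustrative choice, not from the article): the \(\Delta x\)-scaled sum over a fine grid approximates the overlap integral, which equals 1 for a normalized state.

```python
import numpy as np

# Sketch: the Delta-x scaled scalar product approximates the overlap integral.
# We use a normalized Gaussian, whose inner product with itself should be 1.
x, dx = np.linspace(-10.0, 10.0, 2001, retstep=True)
psi = np.pi**(-0.25) * np.exp(-x**2 / 2)   # normalized: integral of |psi|^2 is 1

overlap = np.sum(psi.conj() * psi) * dx    # discrete sum scaled by Delta x

print(round(float(overlap.real), 6))       # approximately 1.0
```

Refining the grid (larger number of points, smaller \(\Delta x\)) drives the sum toward the exact integral.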

Of course, we can also calculate the overlap between two different states; the bra and ket vectors do not have to describe the same state:

**Inner product of two states**

$$ \begin{align} \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \int \mathit{\Phi}(x)^{*}\, \mathit{\Psi}(x) \, \text{d}x \end{align} $$

## Tensor product (density matrix)

Another important operation between bra-ket vectors is the **tensor product**: \(|\mathit{\Phi} \rangle ~\otimes~ \langle\mathit{\Psi} |\). We can omit the tensor sign '\(\otimes\)' because it is immediately clear from the bra-ket notation that it is not a scalar product (the bra and ket vectors are swapped).

The result of the tensor product is a *matrix*. In the case of the tensor product of two quantum mechanical states, this matrix is called the **density matrix** or, basis-independently, the **density operator**:

**Density matrix**

$$ \begin{align} |\mathit{\Phi} \rangle \, \langle\mathit{\Psi}| ~=~ \begin{bmatrix} \mathit{\Phi}(x_1) \\ \mathit{\Phi}(x_2) \\ \vdots \end{bmatrix} \left[ \mathit{\Psi}(x_1)^{*},~ \mathit{\Psi}(x_2)^{*},~~... \right] ~=~ \begin{bmatrix} \mathit{\Phi}(x_1)\,\mathit{\Psi}(x_1)^{*} & \mathit{\Phi}(x_1)\,\mathit{\Psi}(x_2)^{*} & \cdots \\ \mathit{\Phi}(x_2)\,\mathit{\Psi}(x_1)^{*} & \mathit{\Phi}(x_2)\,\mathit{\Psi}(x_2)^{*} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \end{align} $$

If the state \( |\mathit{\Psi} \rangle \) is normalized and we form the tensor product \(|\mathit{\Psi} \rangle \langle\mathit{\Psi} | \), then this product is called a **projector** or **projection matrix**. If we apply the projection matrix \(|\mathit{\Psi} \rangle \langle\mathit{\Psi} | \) to a ket vector \( |\mathit{\Phi} \rangle \) (imagine a matrix which is applied to a vector): \(|\mathit{\Psi} \rangle \langle\mathit{\Psi} | \, \mathit{\Phi} \rangle \), then we get a new ket vector giving the projection of \( |\mathit{\Phi} \rangle \) onto \( |\mathit{\Psi} \rangle \). Thus, we can find out how much of the state \( |\mathit{\Phi} \rangle \) is in the state \( |\mathit{\Psi} \rangle \).
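A minimal sketch of the projector in a two-dimensional Hilbert space (the concrete states are made-up examples): the outer product of a normalized ket with its own bra gives a matrix satisfying the projector property \(P^2 = P\).

```python
import numpy as np

# Sketch: tensor product |Psi><Psi| as an outer product, used as a projector.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # normalized ket in a 2D Hilbert space
phi = np.array([1.0, 0.0])                 # state to be projected

P = np.outer(psi, psi.conj())              # projection matrix |Psi><Psi|
projected = P @ phi                        # |Psi><Psi|Phi>: projection onto |Psi>

print(np.allclose(P @ P, P))               # True: a projector satisfies P^2 = P
```

The length of `projected` tells us how much of the state \(|\mathit{\Phi}\rangle\) is contained in \(|\mathit{\Psi}\rangle\).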

Another important application of the projection matrix is the **insertion of identity**. Consider a set \( \{|\Psi_i\rangle \} \) of state vectors \(|\Psi_i\rangle\) forming an orthonormal basis. That is, these state vectors are orthogonal to each other, normalized, and they span the Hilbert space in which they live. The sum of the projection matrices of these basis vectors gives the \(n \times n\) unit matrix:

**Sum of projectors gives unit matrix**

$$ \begin{align} \sum_{i=1}^{n} |\mathit{\Psi}_i \rangle \langle\mathit{\Psi}_i | ~=~ I \end{align} $$

We can apply this unit matrix to any state \( |\mathit{\Phi} \rangle \):

**'Insertion of the identity' for a finite basis**

$$ \begin{align} |\mathit{\Phi} \rangle ~=~ \sum_{i=1}^{n} |\mathit{\Psi}_i \rangle \langle\mathit{\Psi}_i | \mathit{\Phi} \rangle \end{align} $$

Here the scalar product \(\langle\mathit{\Psi}_i | \mathit{\Phi} \rangle\) picks out the \(i\)th component of the state \(| \mathit{\Phi} \rangle\). Let us abbreviate the \(i\)th component as \(\langle\mathit{\Psi}_i | \mathit{\Phi} \rangle := \varphi_i \):

**Representation of a state in a basis**

$$ \begin{align} |\mathit{\Phi} \rangle ~=~ \sum_{i=1}^{n} \varphi_i \, |\mathit{\Psi}_i \rangle \end{align} $$

With the trick 'insertion of the identity' we are able to represent a state \(|\mathit{\Phi} \rangle\) in the basis \( \{|\Psi_i\rangle \} \).
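Here is a sketch of the 'insertion of the identity' in a two-dimensional Hilbert space (the orthonormal basis below is an illustrative choice): the summed projectors reproduce the unit matrix, and the coefficients \(\langle\mathit{\Psi}_i | \mathit{\Phi} \rangle\) rebuild the state in that basis.

```python
import numpy as np

# Sketch: sum of projectors over an orthonormal basis gives the identity,
# and the coefficients <Psi_i|Phi> expand a state in that basis.
basis = [np.array([1.0, 1.0]) / np.sqrt(2),     # |Psi_1>
         np.array([1.0, -1.0]) / np.sqrt(2)]    # |Psi_2>

identity = sum(np.outer(b, b.conj()) for b in basis)
print(np.allclose(identity, np.eye(2)))         # True: projectors sum to I

phi = np.array([0.6, 0.8])                      # normalized example state
coeffs = [np.vdot(b, phi) for b in basis]       # phi_i = <Psi_i|Phi>
reconstructed = sum(c * b for c, b in zip(coeffs, basis))
print(np.allclose(reconstructed, phi))          # True: state rebuilt in the basis
```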

An infinite-dimensional Hilbert space is spanned by infinitely many basis states. To translate the 'insertion of the identity' to an infinite-dimensional Hilbert space, we scale the sum with \(\Delta x\) and let it go to zero. Then \(\Delta x\) turns into the infinitesimal interval \(\text{d}x\) and the sum sign becomes an integral sign:

**Unit matrix for an infinite basis**

$$ \begin{align} \int \text{d}x \, |\mathit{\Psi} \rangle \langle\mathit{\Psi}| ~=~ I \end{align} $$

This unit matrix now has infinitely many columns and rows. With it, we are able to represent a ket vector \( | \Phi \rangle \) living in an *infinite-dimensional* Hilbert space:

**'Insertion of the identity' for an infinite basis**

$$ \begin{align} |\mathit{\Phi} \rangle ~=~ \int \text{d}x \, |\mathit{\Psi} \rangle \langle\mathit{\Psi} | \mathit{\Phi} \rangle \end{align} $$

With this basic knowledge, you should now be able to understand the equations of quantum mechanics in bra-ket notation.