Basic Notion: Entanglement Entropy
Updated: May 20, 2021
In this short post I’ll discuss the definition of entanglement, its relation to the density matrix formalism, and how the von Neumann entropy connects to information theory.
What is entanglement?
Let’s start with the definition of entanglement. A state is said to be entangled if it is not separable. For example, the state in the first example below is not entangled, because it can be written as a tensor product of a state in particle A's Hilbert space with a state in particle B's Hilbert space. The state in the second example is entangled, because it is not possible to write it as (system A) otimes (system B). This might not be immediately obvious, but try writing it as a product state with coefficients a, b, c, d and you will find that no such coefficients exist.
Take particle A to be in a superposition of a spin-up state (represented below by 1) and a spin-down state (represented below by 0). These states are represented by kets, which form the basis vectors of its Hilbert space.
Particle B is defined in an identical way.
The state of the full system is the tensor product of these two states, so it can be written as (system A) otimes (system B). This is what is meant by a separable system.
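A minimal reconstruction of Example 1 (the amplitude symbols are placeholders, not taken from the original figures): with

$$
|\psi\rangle_A = \alpha\,|1\rangle_A + \beta\,|0\rangle_A, \qquad
|\psi\rangle_B = \gamma\,|1\rangle_B + \delta\,|0\rangle_B,
$$

the full system is

$$
|\Psi\rangle = |\psi\rangle_A \otimes |\psi\rangle_B
= \alpha\gamma\,|11\rangle + \alpha\delta\,|10\rangle + \beta\gamma\,|01\rangle + \beta\delta\,|00\rangle .
$$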
Consider now one of the Bell states, which are known to be maximally entangled. It is usually written with the tensor product sign omitted for brevity. This state cannot be written as a product of (system A) otimes (system B), therefore it is not separable and thus entangled.
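The Bell state in question (a standard reconstruction; which of the four Bell states the original figure used is my assumption) is

$$
|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\left(|1\rangle_A \otimes |1\rangle_B + |0\rangle_A \otimes |0\rangle_B\right)
= \tfrac{1}{\sqrt{2}}\left(|11\rangle + |00\rangle\right).
$$

Trying to match this to $(a|1\rangle + b|0\rangle)\otimes(c|1\rangle + d|0\rangle)$ requires $ac = bd = 1/\sqrt{2}$ while $ad = bc = 0$; but $ad = 0$ forces either $a = 0$ (contradicting $ac \neq 0$) or $d = 0$ (contradicting $bd \neq 0$), so no such coefficients exist.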
Because checking if a state is separable can be a tedious task, especially for large systems, we turn to the density matrix formalism for help.
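As a complementary check on separability (not part of the original post's method): for two qubits, the Schmidt rank, i.e. the rank of the state's 2x2 coefficient matrix, is 1 exactly when the state is separable. A NumPy sketch, where the basis ordering is my assumption:

```python
import numpy as np

# Amplitudes of a two-qubit state in the basis {|11>, |10>, |01>, |00>}
# (this ordering is an assumption; any consistent ordering works).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                   # (|11> + |00>)/sqrt(2)
product = np.kron([3/5, 4/5], [1/np.sqrt(2), 1/np.sqrt(2)])  # a separable state

def schmidt_rank(state):
    """Number of nonzero singular values of the 2x2 coefficient matrix.
    Rank 1 means separable; rank > 1 means entangled."""
    coeffs = state.reshape(2, 2)  # c_ab for |a>_A |b>_B
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

print(schmidt_rank(product))  # 1 -> separable
print(schmidt_rank(bell))     # 2 -> entangled
```

This works because the singular value decomposition of the coefficient matrix is exactly the Schmidt decomposition of the state.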
Density matrix formalism
The density matrix (or rather the density operator, represented by a matrix) is defined for a pure state |psi> as the outer product |psi><psi|. We can see what this looks like for the second example above.
For the Bell state above, the density matrix follows directly from this definition: expanding the outer product gives four terms, which can be arranged as a 4x4 matrix in the two-qubit basis.
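A reconstruction of the omitted expressions (the basis ordering $\{|11\rangle, |10\rangle, |01\rangle, |00\rangle\}$ is my assumption):

$$
\rho = |\Phi^+\rangle\langle\Phi^+|
= \tfrac{1}{2}\left(|11\rangle\langle 11| + |11\rangle\langle 00| + |00\rangle\langle 11| + |00\rangle\langle 00|\right)
$$

$$
\rho = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}.
$$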
You can check that the square of the density matrix is equal to the density matrix, meaning that this is a pure state. A pure state is described by a single ket with probability amplitudes as coefficients, as in Ex. 2 above. Note that purity of the full system does not rule out entanglement (the Bell state is both pure and entangled); it is the purity of a subsystem's reduced state that will diagnose entanglement below. A nice trick we can perform on the density matrix is to "integrate out" the second system so that we can study only the first system. We do this by taking the partial trace. There are two ways to visualize this, shown in Example 4.
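As a quick numerical check of this purity condition (a NumPy sketch; the state vector ordering is my assumption, not from the original figures):

```python
import numpy as np

# Bell state (|11> + |00>)/sqrt(2) in the basis {|11>, |10>, |01>, |00>}.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # density matrix |psi><psi|

# For a pure state, rho squared equals rho and Tr(rho^2) = 1.
print(np.allclose(rho @ rho, rho))           # True
print(np.isclose(np.trace(rho @ rho), 1.0))  # True
```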
The partial trace is denoted as the trace with a subscript that denotes the system that is traced over. Following up on Example 3:
where the subscripts make clear that the bras and kets sandwiching the density matrix act only on the second system, so each term can be written as a tensor product. The cross terms, such as |11><00|, are equal to zero because the basis vectors |0> and |1> are orthonormal, and the same holds for the other cross term due to the orthonormality of the basis vectors.
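Sketching this out for the Bell state's density matrix, the partial trace over system B is

$$
\mathrm{Tr}_B(\rho) = \langle 0|_B\, \rho\, |0\rangle_B + \langle 1|_B\, \rho\, |1\rangle_B ,
$$

and a cross term such as $|11\rangle\langle 00|$ gives

$$
\langle 0|_B \left(|1\rangle_A |1\rangle_B \langle 0|_A \langle 0|_B\right) |0\rangle_B
= |1\rangle_A \langle 0|_A \,\langle 0|1\rangle\,\langle 0|0\rangle = 0 ,
$$

since $\langle 0|1\rangle = 0$.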
Another way to work out the partial trace is by using the cyclic property of the trace: first we use that the partial trace acts only on the second system, and then the cyclic property lets us use the orthonormality of the states again. Even without the cyclic property it is clear that the trace of a term like |1><1| is one.
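For a diagonal term such as $|11\rangle\langle 11|$ this looks like (a reconstruction of the omitted equation):

$$
\mathrm{Tr}_B\left(|1\rangle_A\langle 1|_A \otimes |1\rangle_B\langle 1|_B\right)
= |1\rangle_A\langle 1|_A\, \mathrm{Tr}\left(|1\rangle_B\langle 1|_B\right)
= |1\rangle_A\langle 1|_A\, \langle 1|1\rangle
= |1\rangle_A\langle 1|_A ,
$$

where cyclicity of the trace gives $\mathrm{Tr}(|1\rangle\langle 1|) = \langle 1|1\rangle = 1$.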
Doing this for all the terms in the density matrix gives the reduced density matrix for the first system, which is no longer a pure state.
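Explicitly, for the Bell state (a standard reconstruction of the omitted result):

$$
\rho_A = \mathrm{Tr}_B(\rho)
= \tfrac{1}{2}\left(|1\rangle\langle 1| + |0\rangle\langle 0|\right)
= \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
\rho_A^2 = \tfrac{1}{4}\,\mathbb{1} \neq \rho_A .
$$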
For the reduced density matrix it is no longer the case that the square of the density matrix is equal to the density matrix. Therefore this state is not pure, which tells us the original state is entangled to some degree. To quantify this entanglement we turn to the von Neumann entropy.
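The partial trace can be checked numerically as well (a NumPy sketch; the basis ordering is again my assumption):

```python
import numpy as np

# Density matrix of the Bell state (|11> + |00>)/sqrt(2), in the basis
# {|11>, |10>, |01>, |00>}.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reshape the 4x4 matrix into indices (a, b, a', b') for qubits A and B,
# then sum over the matching B indices to trace out system B.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)                              # one half times the identity
print(np.allclose(rho_A @ rho_A, rho_A))  # False -> the reduced state is mixed
```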
Von Neumann entropy
The von Neumann entropy is given by minus the trace of the density matrix times the logarithm of the density matrix, which looks very much like the Shannon entropy in classical information theory. To see exactly where this definition comes from, see this short Khan Academy video on 'Journey into information theory' https://www.youtube.com/watch?v=2s3aJfRr9gE .
If the density matrix is not diagonal, taking the logarithm of a matrix can become difficult very quickly. However, since the density matrix is Hermitian it can be diagonalized as U Lambda U-dagger, where U is unitary and Lambda is a diagonal matrix of eigenvalues. Using the cyclic property of the trace and the fact that U is unitary, the entanglement entropy can then be written as minus the sum of the density matrix's eigenvalues times the logarithms of those eigenvalues.
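Written out, this standard derivation reads:

$$
S(\rho) = -\mathrm{Tr}\left(\rho \log \rho\right), \qquad \rho = U \Lambda U^\dagger ,
$$

$$
S = -\mathrm{Tr}\left(U \Lambda U^\dagger\, U (\log\Lambda)\, U^\dagger\right)
= -\mathrm{Tr}\left(\Lambda \log \Lambda\right)
= -\sum_i \lambda_i \log \lambda_i ,
$$

using $\log(U\Lambda U^\dagger) = U(\log\Lambda)U^\dagger$, the unitarity $U^\dagger U = \mathbb{1}$, and cyclicity of the trace.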
Using either of these two expressions we find that the entanglement entropy for the Bell state given in Example 2 is log(2) (one bit, if the logarithm is taken base 2), which indeed means that the state is maximally entangled.
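Numerically (a sketch reproducing the log(2) result, using natural logarithms):

```python
import numpy as np

# Reduced density matrix of one qubit of the Bell state: the maximally
# mixed state, half the identity (derived above).
rho_A = np.eye(2) / 2

# Von Neumann entropy S = -sum_i lambda_i * log(lambda_i) over eigenvalues.
eigvals = np.linalg.eigvalsh(rho_A)
S = -np.sum(eigvals * np.log(eigvals))

print(S, np.log(2))  # both ~0.6931: maximal entanglement for a qubit
```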
So let’s go back to basics: what physically happens with entanglement? Entanglement means that there is a measurement correlation between two (or more) particles. For example, when the first particle is measured with spin up the second particle is also always measured with spin up and if the first particle is measured with spin down, then the second particle is always measured with spin down (the Bell state shown in Example 2).
This means that entangled particles share mutual information. If there is no mutual information between the particles then there is also no correlation in their measurements, and we say the entanglement entropy is zero. If there is maximal mutual information between the subsystems then the entanglement entropy is log k, where k is the dimension of the subsystem's Hilbert space. This is very much in line with what we discussed for the Shannon entropy!
This blog post was written subsequent to a presentation in the Basic Notions seminar. For more information about this seminar, go to the seminar's webpage.