# Maximum Entropy

## Alexei Gilchrist

An argument for maximising the entropy

### 1 Partial knowledge

We’ve seen how, if we know nothing about a problem, we can use symmetry arguments to assign probabilities. For instance, if the symmetry implies that we can reshuffle the probability assignments without being able to distinguish between the different permutations, then we should assign all the possibilities the same probability, i.e. a ‘flat’ distribution.
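As an illustrative sketch (the die example is an assumption, not from the text): for six outcomes that symmetry makes indistinguishable, the flat assignment gives each outcome probability $$1/6$$, and its entropy is $$\log 6$$.

```python
import math

# Six outcomes that symmetry makes indistinguishable (e.g. die faces):
# permutation symmetry forces the flat assignment p_j = 1/n.
n = 6
flat = [1 / n] * n

def entropy(p):
    """Shannon entropy H({p_j}) = -sum_j p_j log p_j (natural log)."""
    return -sum(pj * math.log(pj) for pj in p if pj > 0)

print(entropy(flat))  # equals log(6) for the flat distribution
```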

Often, however, we know something about a problem, and the problem cannot be partitioned into a part we know with certainty and a part about which we know nothing. How then should we proceed in assigning probabilities?

At the very least there is something we should not do: we should not imply assertions for which our background information provides no basis.

We will see below that this requirement leads to maximising the entropy $$H(\{p_{j}\})$$ of the distribution:

$$H(\{p_{j}\}) = - \sum_{j} p_{j} \log p_{j}.$$
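A quick numerical check of this claim (a sketch, not part of the original argument): among distributions over $$n$$ outcomes, none has entropy exceeding that of the flat distribution, $$\log n$$.

```python
import math
import random

def entropy(p):
    """Shannon entropy H({p_j}) = -sum_j p_j log p_j (natural log)."""
    return -sum(pj * math.log(pj) for pj in p if pj > 0)

n = 4
uniform = [1 / n] * n

# Sample many random distributions on n outcomes and record the
# largest entropy found; it never exceeds H(uniform) = log(n).
random.seed(0)
best = 0.0
for _ in range(10_000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    best = max(best, entropy([x / s for x in w]))

print(best <= entropy(uniform))
```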

To be completed