Written by Alexei Gilchrist
An argument for maximising the entropy
Level: 2, Subjects: Probability, Maximum Entropy

1 Partial knowledge

We've seen that if we know nothing about a problem we can use symmetry arguments to assign probabilities. For instance, when the symmetry implies that we can reshuffle the probability assignments without being able to distinguish between the different permutations, we should assign all the possibilities the same probability, i.e. a `flat' distribution.

Often we know something about a problem, but the problem cannot be partitioned into a part we know for certain and a part about which we know nothing. How then should we proceed in order to assign probabilities?

At the very least there is something we should not do: we should not imply assertions for which our background information gives no basis.

We will see below that this requirement leads to maximising the entropy \(H(\{p_{j}\})\) of the distribution: \begin{equation} H(\{p_{j}\}) = - \sum_{j} p_{j} \log p_{j}. \end{equation}
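As a concrete illustration of this quantity, here is a minimal sketch (the distributions over four possibilities are made up for the example) showing that the entropy is largest for the flat distribution and smallest for a distribution that asserts one outcome with certainty:

```python
import math

def entropy(p):
    """Shannon entropy H = -sum_j p_j log p_j (natural log, 0 log 0 = 0)."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Three example distributions over four possibilities.
flat = [0.25, 0.25, 0.25, 0.25]      # asserts nothing beyond the possibilities
skewed = [0.7, 0.1, 0.1, 0.1]        # favours one outcome
peaked = [1.0, 0.0, 0.0, 0.0]        # asserts one outcome with certainty
```

The flat distribution attains the maximum \(H = \log 4\), while the certain distribution has \(H = 0\): the more a distribution asserts, the lower its entropy.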

2 The Kangaroo Problem

To be completed

This is an example where the entropy arises as the natural function of probabilities to maximise when given partial information about a problem.
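Since this section is not yet written out, the following is only a sketch of the standard version of the problem (the kangaroo numbers are the usual ones from the literature, not taken from this text): suppose 1/3 of kangaroos are blue-eyed and 1/3 are left-handed, and we must assign a joint distribution. The marginal constraints leave one free parameter \(t = P(\text{blue-eyed and left-handed})\); maximising the entropy over \(t\) selects the uncorrelated assignment \(t = 1/9\), i.e. it implies no correlation that the information doesn't warrant:

```python
import math

def entropy(p):
    """Shannon entropy with the convention 0 log 0 = 0."""
    return -sum(x * math.log(x) for x in p if x > 0)

def joint(t):
    """Joint distribution [both, blue-only, left-only, neither]
    with both marginals fixed at 1/3, parametrised by t."""
    return [t, 1/3 - t, 1/3 - t, 1/3 + t]

# Grid search over the valid range 0 <= t <= 1/3 for the entropy maximiser.
best_t = max((i * 1e-4 for i in range(3334)), key=lambda t: entropy(joint(t)))
```

The maximiser comes out at \(t \approx 1/9\), which is exactly the independent assignment \(P(\text{blue}) \times P(\text{left}) = 1/3 \times 1/3\).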

3 The Monkey Argument

To be completed

How should you assign probabilities if you know some function of the probabilities, such as the average value? We'll employ a team of monkeys and a whole lot of coins to answer this question.
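Since the argument itself is still to be written, here is only a sketch of the kind of constrained problem it addresses, using the standard dice example (the target mean of 4.5 is an assumption for illustration): find the distribution over the faces 1–6 that maximises the entropy subject to a fixed average. The solution has the exponential form \(p_j \propto e^{-\lambda j}\), and the multiplier \(\lambda\) can be found by a simple bisection on the mean:

```python
import math

faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.5  # assumed constraint for the example

def probs(lam):
    """Maximum-entropy distribution p_j proportional to exp(-lam * j)."""
    w = [math.exp(-lam * j) for j in faces]
    z = sum(w)
    return [x / z for x in w]

def mean(lam):
    return sum(j * p for j, p in zip(faces, probs(lam)))

# mean(lam) decreases monotonically in lam, so bisect for the multiplier
# that satisfies the average-value constraint.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
p = probs(lam)
```

Because the target mean exceeds the flat-distribution value of 3.5, the resulting probabilities increase with the face value, but only as strongly as the constraint demands.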