Probability permeates much of physics. It appears in quantifying the errors in every measurement, in the dynamics of stochastic processes, in statistical mechanics as a way of coping with a vast number of variables, and even intrinsically in quantum mechanics. At first glance the use of probability may seem natural and even `obvious', but matters become much more lively once you realise that it is still not settled what a probability actually is. Different interpretations of probability affect the meaning of every area it touches.
This material supports a second- and third-year advanced physics unit at Macquarie University.
In these notes I will take the view that probabilities are a measure of plausibility and probability theory is the extension of deductive logic to incomplete information. This view follows Laplace, Jeffreys, Cox, and Jaynes.
Variables are introduced and then some consequences of the sum rule are explored.
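As a sketch of the rule in question (in notation that may differ from the body of the notes): for a proposition $A$ and background information $I$, the sum rule and its marginalisation form read
\[
p(A \mid I) + p(\bar{A} \mid I) = 1,
\qquad
p(A \mid I) = \sum_{i} p(A, B_i \mid I),
\]
where the propositions $B_i$ are mutually exclusive and exhaustive.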
Some consequences of the product rule are explored, including the famous Bayes' rule.
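For concreteness (again in notation that may differ from the notes themselves): the product rule $p(A, B \mid I) = p(A \mid B, I)\, p(B \mid I)$, applied in both orders and rearranged, gives Bayes' rule,
\[
p(H \mid D, I) = \frac{p(D \mid H, I)\, p(H \mid I)}{p(D \mid I)},
\]
relating the plausibility of a hypothesis $H$ after seeing data $D$ to its plausibility before.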
Given that we are interpreting probability as a measure of plausibility, just what is the relationship of probabilities to frequencies?
Model comparison is one of the principal tasks of inference. Given some data, how does the plausibility of different models change? Does the data single out a particular model as better than the rest?
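One standard way to pose this question (a sketch; the notes may set the problem up differently): compare two models $M_1$ and $M_2$ through their posterior odds,
\[
\frac{p(M_1 \mid D, I)}{p(M_2 \mid D, I)}
= \frac{p(M_1 \mid I)}{p(M_2 \mid I)} \times \frac{p(D \mid M_1, I)}{p(D \mid M_2, I)},
\]
where the second factor, the Bayes factor, measures how strongly the data favour one model over the other.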
We find the probability distribution that maximises the entropy subject to the requirement that certain averages be fixed.
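As a sketch of the standard Lagrange-multiplier result (assuming a discrete distribution $\{p_i\}$ and a single constrained average $\langle f \rangle$): maximising the entropy $S = -\sum_i p_i \ln p_i$ subject to $\sum_i p_i = 1$ and $\sum_i p_i f(x_i) = \langle f \rangle$ yields
\[
p_i = \frac{e^{-\lambda f(x_i)}}{Z(\lambda)},
\qquad
Z(\lambda) = \sum_i e^{-\lambda f(x_i)},
\]
with the multiplier $\lambda$ fixed by the constraint. The Boltzmann distribution of statistical mechanics has exactly this form, with $f$ the energy and $\lambda = 1/k_B T$.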