Bayesian Inference and Graphical Models

Introduction

Reading time: ~20 min

Exercise
Consider the following two scenarios.

  1. You pull a coin out of your change purse and flip it five times. It comes up heads all five times.
  2. You meet a magician who flips a coin five times and shows you that it came up heads all five times.

In which situation would you be more inclined to be skeptical of the null hypothesis that the coin being flipped is a fair coin?

Solution. We'd be more inclined to be skeptical in the magician scenario, since it isn't unusual for a magician to have a trick coin or card deck. Given a random coin from our change purse, it is extraordinarily unlikely that the coin is actually significantly biased towards heads. Although it's also unlikely for a fair coin to turn up heads five times in a row, that isn't going to be enough evidence to be persuasive.

This example illustrates one substantial shortcoming of the statistical framework—called frequentism—used in our statistics course. Frequentism treats parameters as fixed constants rather than random variables, and as a result it does not allow for the incorporation of information we might have about the parameters beyond the data observed in the random experiment (such as the real-world knowledge that a magician is not so unlikely to have a double-headed coin).

Bayesian statistics is an alternative framework in which we do treat model parameters as random variables. We specify a prior distribution for a model's parameters, and this distribution is meant to represent what we believe about the parameters before we observe the results of the random experiment. Then the results of the experiment serve to update our beliefs, yielding a posterior distribution.

The theorem in probability which specifies how probability distributions update in light of new evidence is called Bayes' theorem.

For example, if your prior assessment of the probability that the magician's coin is double-headed is 5%, then your posterior estimate of that probability after observing five heads would shoot up to

\begin{align*}\mathbb{P}(\text{two-headed}|\text{5 heads}) = \frac{\mathbb{P}(\text{5 heads}|\text{two-headed})\mathbb{P}(\text{two-headed})}{\mathbb{P}(\text{5 heads})} = \frac{(1)(5\%)}{(1)(5\%) + (1/2^5)(95\%)} \approx 62.7\%.\end{align*}

Meanwhile, if the prior probability of double-headedness for the coin in your change purse is 0.1%, then the posterior is only \frac{(1)(0.1\%)}{(1)(0.1\%) + (1/2^5)(99.9\%)} \approx 3.1\%.
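To check this arithmetic, here is a minimal Python sketch of the same two-hypothesis update; the priors (5% and 0.1%) are the ones assumed above:

```python
# A minimal sketch of the two Bayes' rule computations above.
def posterior_two_headed(prior, n_heads=5):
    """P(two-headed | n_heads heads in a row), for a coin that is
    either fair or two-headed."""
    likelihood_trick = 1.0            # a two-headed coin always shows heads
    likelihood_fair = 0.5 ** n_heads  # a fair coin shows n heads with prob 1/2^n
    numerator = likelihood_trick * prior
    denominator = numerator + likelihood_fair * (1 - prior)
    return numerator / denominator

print(posterior_two_headed(0.05))    # magician's coin: ≈ 0.627
print(posterior_two_headed(0.001))   # coin from your change purse: ≈ 0.031
```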

The quantity \mathbb{P}(\text{5 heads}|\text{two-headed}) is called the likelihood of the observed result. So we can summarize Bayes' theorem with the mnemonic posterior is proportional to likelihood times prior.

Bayes' rule takes an especially simple form when our distributions are supported on two values (for example, "fair" and "double-headed"), but we can apply the same idea to other probability mass functions as well as probability density functions.
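The same computation works for any discrete prior: multiply the prior by the likelihood and renormalize. Here is a small illustrative sketch; the grid of heads-probabilities is hypothetical:

```python
# A sketch of "posterior ∝ likelihood × prior" for a prior supported on
# more than two values.
def bayes_update(prior, likelihood):
    """prior, likelihood: dicts keyed by hypothesis. Returns the normalized
    product, i.e. the posterior distribution."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Prior over a few possible heads-probabilities, updated on "5 heads in a row":
prior = {0.3: 0.25, 0.5: 0.25, 0.7: 0.25, 1.0: 0.25}
likelihood = {p: p ** 5 for p in prior}
print(bayes_update(prior, likelihood))  # mass shifts sharply toward larger p
```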

Example
Suppose that the heads probability of a coin is p. Consider a uniform prior distribution for p, and suppose that n flips of the coin are observed. Express the posterior density in terms of the number of heads H(x) and tails T(x) in the observed sequence x of n flips.

Solution. We calculate the posterior density f as proportional to the likelihood times the prior. Let's call X the random sequence of flips, and suppose x is a possible value of X. We get

\begin{align*}\overbrace{f(p|x)}^{\text{posterior}} \varpropto \overbrace{f(x|p)}^{\text{likelihood}}\overbrace{f(p)}^{\text{prior}} = p^{H(x)}(1-p)^{T(x)}(1)\end{align*}

In this formula we are employing a common abuse of notation by using the same letter (f) for three different densities. For example, f(p|x) refers to the conditional density of p given x; more precisely, it refers to the density of the conditional distribution of the random variable P given the event X = x, evaluated at the value p. It might be written more conventionally as f_{P|X = x}(p). Likewise, f(p) refers to the marginal density of p, and so on.

The continuous distribution on [0,1] whose density is proportional to p^{H}(1-p)^{T} is called the Beta distribution with parameters \alpha = H + 1 and \beta = T + 1. So the coin flip posterior for a uniform prior is a Beta distribution.

[Interactive figure: density of the Beta distribution for adjustable parameters α and β]
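To make this concrete, here is a short sketch using scipy.stats.beta (assuming SciPy is available); the observed flip sequence is hypothetical:

```python
# A sketch of the coin-flip posterior under a uniform prior.
from scipy.stats import beta

flips = "HHTHHHTH"                      # hypothetical data: 6 heads, 2 tails
H, T = flips.count("H"), flips.count("T")

# A uniform prior is Beta(1, 1), so the posterior is Beta(H + 1, T + 1).
posterior = beta(H + 1, T + 1)

print(posterior.mean())                 # posterior mean (H+1)/(H+T+2) = 0.7
print(posterior.interval(0.95))         # a 95% credible interval for p
```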

Exercise
Show that the coin flip posterior for a Beta prior is also a Beta distribution. How does the evidence alter the parameters of the Beta distribution?

Solution. If the prior density is proportional to p^{\alpha-1}(1-p)^{\beta-1}, then the posterior distribution is proportional to p^{\alpha + H(x) -1}(1-p)^{\beta + T(x)-1}, following the same calculation as above. In other words, each head in the observed sequence increments the \alpha parameter of the distribution, while each tail increments the \beta parameter.

When the posterior distribution has the same parametric form as the prior distribution, we call this property conjugacy. For the example above, we say that the Beta distribution is a conjugate family for the binomial likelihood.
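Here is a minimal sketch of the conjugate update, assuming a hypothetical Beta(2, 2) prior and a hypothetical sequence of flips:

```python
# Conjugate Beta update: each head increments α, each tail increments β.
from scipy.stats import beta

alpha, beta_param = 2, 2                      # hypothetical Beta(2, 2) prior
flips = "HHTHH"                               # hypothetical data: 4 heads, 1 tail

H, T = flips.count("H"), flips.count("T")
posterior = beta(alpha + H, beta_param + T)   # posterior is Beta(6, 3)

print(posterior.mean())                       # (α+H)/(α+β+H+T) = 6/9 ≈ 0.667
```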
