Approximation of the normalization constant
Background
Let $\pi(x)$ denote a probability density called the target. In many problems, e.g. in Bayesian statistics, the density $\pi$ is known only up to a normalization constant,
\[\pi(x) = \frac{\gamma(x)}{Z},\]
where $\gamma$ can be evaluated pointwise, but $Z$ is unknown.
In many applications, it is useful to approximate the constant $Z$. For example, in Bayesian statistics, $Z$ corresponds to the marginal likelihood, which is used for model selection.
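Concretely, in the Bayesian setting, if $x$ denotes the parameters with prior $p(x)$ and $y$ the observed data with likelihood $p(y \mid x)$, then taking $\gamma(x) = p(y \mid x)\, p(x)$ yields
\[Z = \int p(y \mid x)\, p(x)\, \mathrm{d}x = p(y),\]
the marginal likelihood of the data $y$.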
Normalization constant approximation in Pigeons
As a by-product of parallel tempering, we automatically obtain an approximation of the logarithm of the normalization constant, $\log Z$. This is computed using the stepping stone estimator implemented in stepping_stone().
It is shown in the standard output report produced at each round:
using DynamicPPL
using Pigeons
# example target: Binomial likelihood with parameter p = p1 * p2
an_unidentifiable_model = Pigeons.toy_turing_unid_target(100, 50)
pt = pigeons(target = an_unidentifiable_model)
┌ Info: Neither traces, disk, nor online recorders included.
│ You may not have access to your samples (unless you are using a custom recorder, or maybe you just want log(Z)).
└ To add recorders, use e.g. pigeons(target = ..., record = [traces; record_default()])
──────────────────────────────────────────────────────────────────────────────────────────────────
     scans          Λ    time(s)    allc(B) log(Z₁/Z₀)     min(α)    mean(α)    min(αₑ)   mean(αₑ)
────────── ────────── ────────── ────────── ────────── ────────── ────────── ────────── ──────────
         2       3.24    0.00103   1.06e+06      -8.14    0.00178       0.64          1          1
         4       1.64    0.00198   2.08e+06      -5.04     0.0352      0.818          1          1
         8       1.17    0.00381   4.07e+06      -4.42      0.708      0.871          1          1
        16        1.2    0.00801   8.59e+06      -4.03      0.549      0.867          1          1
        32       1.11     0.0157   1.68e+07      -4.77      0.754      0.877          1          1
        64       1.35     0.0312   3.37e+07      -4.79      0.698       0.85          1          1
       128        1.6        0.1   6.68e+07      -4.97      0.725      0.823          1          1
       256       1.51      0.143   1.32e+08      -4.92      0.758      0.832          1          1
       512       1.46      0.352   2.66e+08         -5      0.806      0.838          1          1
  1.02e+03       1.49      0.634   5.33e+08      -4.92      0.798      0.834          1          1
──────────────────────────────────────────────────────────────────────────────────────────────────
and can also be accessed using:
stepping_stone(pt)
-4.9221510045598675
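The run above used 10 rounds (the default), with each round doubling the number of scans. For a more precise estimate, the number of rounds can be increased; here is a sketch reusing the target defined above (n_rounds is a standard pigeons() keyword argument, and the exact output will vary):
# more rounds: the final round performs 2^12 scans
pt = pigeons(target = an_unidentifiable_model, n_rounds = 12)
stepping_stone(pt)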
From ratios to normalization constants
To be more precise, the stepping stone estimator computes the log of a ratio, $\log (Z_1/Z_0)$, where $Z_1$ and $Z_0$ are the normalization constants of the target and the reference, respectively.
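At a high level, the estimator exploits a telescoping decomposition along the annealing schedule $0 = \beta_0 < \beta_1 < \dots < \beta_N = 1$ (a sketch of the underlying identity, not of the exact implementation details):
\[\log \frac{Z_1}{Z_0} = \sum_{i=0}^{N-1} \log \frac{Z_{\beta_{i+1}}}{Z_{\beta_i}}, \qquad \frac{Z_{\beta_{i+1}}}{Z_{\beta_i}} = \mathbb{E}_{\pi_{\beta_i}}\!\left[\frac{\gamma_{\beta_{i+1}}(X)}{\gamma_{\beta_i}(X)}\right],\]
where $\gamma_\beta$ denotes the unnormalized density of the tempered distribution $\pi_\beta$ interpolating between the reference ($\beta = 0$) and the target ($\beta = 1$), and each expectation is approximated by a Monte Carlo average over the samples generated at inverse temperature $\beta_i$.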
Hence, to estimate $\log Z_1$, the reference distribution $\pi_0$ should have a known normalization constant. In cases where the reference is a proper prior distribution, for example in Turing.jl models, this is typically the case.
In scenarios where the reference is specified manually, e.g. for black-box functions or Stan models, more care is needed. One alternative in such cases is to use variational PT, where the built-in variational distribution is constructed so that its normalization constant is one.
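As a sketch of what this looks like (GaussianReference is Pigeons' built-in variational family, and first_tuning_round controls when its adaptation starts; we reuse the running example for concreteness, assuming it is supported by variational PT, even though its prior reference is already normalized):
using Pigeons
pt = pigeons(
        target = an_unidentifiable_model,
        variational = GaussianReference(first_tuning_round = 5))
# the variational reference has normalization constant one by construction,
# so this estimates log(Z₁) directly
stepping_stone(pt)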
BridgeStan offers an option, propto, to skip terms in the log density that do not depend on the sampled parameters. Every call to BridgeStan made by Pigeons disables this option, to make it easier to design reference distributions with a known normalization constant.
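For reference, here is a sketch of what propto controls when calling BridgeStan directly; the model and data paths below are hypothetical placeholders:
using BridgeStan
model = StanModel("path/to/model_model.so", "path/to/data.json") # hypothetical paths
q = zeros(param_unc_num(model)) # a point on the unconstrained scale
# propto = true may drop additive terms that do not depend on q;
# Pigeons always passes propto = false so that all terms of the
# log density are kept
lp_full = log_density(model, q; propto = false)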