Turing.jl model as input to pigeons

To target the posterior distribution specified by a Turing.jl model, first load Turing or DynamicPPL and use `TuringLogPotential`:
```julia
using DynamicPPL, Pigeons, Distributions, DistributionsAD

DynamicPPL.@model function my_turing_model(n_trials, n_successes)
    p1 ~ Uniform(0, 1)
    p2 ~ Uniform(0, 1)
    n_successes ~ Binomial(n_trials, p1*p2)
    return n_successes
end

pt = pigeons(target = TuringLogPotential(my_turing_model(100, 50)));
```
```
┌ Info: Neither traces, disk, nor online recorders included.
│   You may not have access to your samples (unless you are using a custom recorder, or maybe you just want log(Z)).
└   To add recorders, use e.g. pigeons(target = ..., record = [traces; record_default()])
──────────────────────────────────────────────────────────────────────────────────────────────────
     scans        Λ     time(s)   allc(B)  log(Z₁/Z₀)    min(α)   mean(α)    min(αₑ)  mean(αₑ)
────────── ────────── ────────── ────────── ────────── ────────── ────────── ────────── ──────────
         2       3.24      0.027   1.21e+06      -8.14    0.00178       0.64          1          1
         4       1.64    0.00206   1.37e+06      -5.04     0.0352      0.818          1          1
         8       1.17    0.00396   2.67e+06      -4.42      0.708      0.871          1          1
        16        1.2    0.00858   5.61e+06      -4.03      0.549      0.867          1          1
        32       1.11     0.0167    1.1e+07      -4.77      0.754      0.877          1          1
        64       1.35     0.0331   2.19e+07      -4.79      0.698       0.85          1          1
       128        1.6     0.0941   4.34e+07      -4.97      0.725      0.823          1          1
       256       1.51      0.158   8.58e+07      -4.92      0.758      0.832          1          1
       512       1.46      0.297   1.73e+08         -5      0.806      0.838          1          1
  1.02e+03       1.49        0.6   3.47e+08      -4.92      0.798      0.834          1          1
──────────────────────────────────────────────────────────────────────────────────────────────────
```
At the moment, only Turing models with fixed dimensionality are supported. Both real- and integer-valued random variables are supported. For a `TuringLogPotential`, the `default_explorer()` is the `SliceSampler`, and the `default_reference()` is the prior distribution encoded in the Turing model.
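These defaults can be overridden through the `explorer` keyword argument of `pigeons`. A minimal sketch, assuming the `my_turing_model` defined above and the `SliceSampler` keyword `n_passes` (the number of slice-sampling passes per exploration step); check the Pigeons.jl API reference for the exact constructor signature:

```julia
# A sketch: passing an explicitly configured SliceSampler explorer
# instead of relying on default_explorer().
pt = pigeons(
    target = TuringLogPotential(my_turing_model(100, 50)),
    explorer = SliceSampler(n_passes = 5))
```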
Manipulating the output
Internally, the Turing target's states (of type `DynamicPPL.TypedVarInfo`) are stored in an unconstrained parameterization provided by Turing (for example, bounded-support variables are mapped to the full real line). However, sample post-processing functions such as `sample_array()` and `process_sample()` convert back to the original ("constrained") parameterization via `extract_sample()`.
As a result, parameterization issues can essentially be ignored when post-processing. Some common post-processing steps are shown below; see the section on output processing for more information.
```julia
using MCMCChains
using StatsPlots
plotlyjs()

pt = pigeons(
    target = TuringLogPotential(my_turing_model(100, 50)),
    record = [traces])
samples = Chains(pt)
my_plot = StatsPlots.plot(samples)
StatsPlots.savefig(my_plot, "turing_posterior_densities_and_traces.html");
```
```julia
samples
```

```
Chains MCMC chain (1024×3×1 Array{Float64, 3}):

Iterations        = 1:1:1024
Number of chains  = 1
Samples per chain = 1024
parameters        = p1, p2
internals         = log_density

Summary Statistics
  parameters      mean       std      mcse    ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64     Float64    Float64   Float64     ⋯

          p1    0.7239    0.1520    0.0065    574.1139   703.2334    0.9999     ⋯
          p2    0.7086    0.1507    0.0062    614.1593   920.7039    0.9998     ⋯
                                                                 1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          p1    0.4735    0.5934    0.7108    0.8487    0.9874
          p2    0.4647    0.5850    0.6959    0.8270    0.9815
```
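Besides constructing an `MCMCChains.Chains` object, the recorded traces can also be accessed as a plain array via `sample_array()`. A minimal sketch, assuming the `pt` object from the run above (with the `traces` recorder enabled); like `Chains(pt)`, this goes through `extract_sample()`, so values are reported in the original constrained parameterization:

```julia
# Samples as a 3-dimensional array: (scans, parameters + log-density, chains).
a = sample_array(pt)
size(a)   # expected shape here: 1024 scans × 3 columns × 1 target chain
```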