SGHMC

Inherits From: MonteCarlo

Aliases:
ed.SGHMC
ed.inferences.SGHMC

Defined in edward/inferences/sghmc.py.
Stochastic gradient Hamiltonian Monte Carlo (Chen, Fox, & Guestrin, 2014).
In conditional inference, we infer \(z\) in \(p(z, \beta \mid x)\) while fixing inference over \(\beta\) using another distribution \(q(\beta)\). SGHMC substitutes the model's log marginal density with a single-sample Monte Carlo estimate,

\(\log p(x, z) = \log \mathbb{E}_{q(\beta)} [ p(x, z, \beta) ] \approx \log p(x, z, \beta^*),\)

where \(\beta^* \sim q(\beta)\). This is unbiased (and therefore asymptotically exact as a pseudo-marginal method) if \(q(\beta) = p(\beta \mid x)\).
Examples

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Empirical, Normal

# Normal-Normal model; qmu holds 500 samples approximating p(mu | x).
mu = Normal(loc=0.0, scale=1.0)
x = Normal(loc=mu, scale=1.0, sample_shape=10)

qmu = Empirical(tf.Variable(tf.zeros(500)))
inference = ed.SGHMC({mu: qmu}, {x: np.zeros(10, dtype=np.float32)})
```
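The notes above describe conditional inference over a nuisance variable \(\beta\). Below is a minimal sketch of that setting, assuming a hypothetical hierarchical model and a previously fit approximation qbeta (the model and variable names are illustrative, not from the source):

```python
# Hypothetical hierarchical model: x depends on z, which depends on beta.
beta = Normal(loc=0.0, scale=1.0)
z = Normal(loc=beta, scale=1.0)
x = Normal(loc=z, scale=1.0, sample_shape=10)

qbeta = Normal(loc=tf.Variable(0.0), scale=1.0)  # fixed approximation q(beta)
qz = Empirical(tf.Variable(tf.zeros(500)))

# Binding beta to qbeta in the data argument fixes inference over beta;
# each update substitutes a single sample beta* ~ q(beta) into
# log p(x, z, beta*), as in the pseudo-marginal approximation above.
inference_z = ed.SGHMC({z: qz}, data={x: np.zeros(10, dtype=np.float32),
                                      beta: qbeta})
```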
__init__

```python
__init__(
    *args,
    **kwargs
)
```
build_update

```python
build_update()
```

Simulate Hamiltonian dynamics with friction using a discretized integrator; its discretization error goes to zero as the learning rate decreases. Implements the update equations from Eq. (15) of Chen et al. (2014).
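For intuition, here is a minimal NumPy sketch of that update, not Edward's implementation: the function name is made up, the friction parameter plays the role of the paper's \(\alpha\), and the noise term assumes the gradient-noise estimate \(\hat{B}\) is set to zero.

```python
import numpy as np

def sghmc_step(theta, v, stoch_grad_log_p, step_size, friction,
               rng=np.random.default_rng()):
  """One discretized SGHMC step in the spirit of Eq. (15) of Chen et al.

  stoch_grad_log_p is a stochastic estimate of the gradient of
  log p(x, theta), i.e. the negative gradient of the potential energy.
  """
  # Friction damps the momentum; injected noise N(0, 2 * friction * step_size)
  # compensates for the noise in the stochastic gradient (B-hat = 0).
  noise = rng.normal(size=np.shape(theta)) * np.sqrt(2.0 * friction * step_size)
  v = (1.0 - friction) * v + step_size * stoch_grad_log_p + noise
  theta = theta + v
  return theta, v
```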
finalize

```python
finalize()
```
Function to call after convergence.
initialize

```python
initialize(
    step_size=0.25,
    friction=0.1,
    *args,
    **kwargs
)
```

Initialize inference algorithm.

step_size: float. Constant scale factor of learning rate.
friction: float. Constant scale on the friction term in the Hamiltonian system.
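For example, to use a smaller step size and stronger friction than the defaults (the values here are purely illustrative, not recommendations):

```python
# Continuing the example above; values chosen only for illustration.
inference.initialize(step_size=0.01, friction=0.5)
```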
print_progress

```python
print_progress(info_dict)
```
Print progress to output.
run

```python
run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)
```
A simple wrapper to run inference.
1. Run initialize.
2. Run update for self.n_iter iterations.
3. While running, print_progress.
4. Run finalize.

To customize the way inference is run, run these steps individually.
variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
*args, **kwargs: Passed into initialize.
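Unrolled, the equivalent of run() looks roughly like the following, a sketch of Edward's usual manual-loop pattern, continuing the example above:

```python
inference.initialize(step_size=0.25, friction=0.1)

sess = ed.get_session()  # creates and installs a default session if needed
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  info_dict = inference.update()
  inference.print_progress(info_dict)

inference.finalize()
```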
update

```python
update(feed_dict=None)
```
Run one iteration of sampling.
feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:
dict. Dictionary of algorithm-specific information. In this case, the acceptance rate of samples since (and including) this iteration.
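A typical use of feed_dict is streaming data through a placeholder at each iteration. A sketch, reusing the model from the examples above (x_ph and the zero-filled batch are stand-ins, not part of the source):

```python
# Rebind the observed variable to a placeholder fed at each update.
x_ph = tf.placeholder(tf.float32, [10])
inference = ed.SGHMC({mu: qmu}, {x: x_ph})
inference.initialize()

ed.get_session()
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  x_batch = np.zeros(10, dtype=np.float32)  # stand-in for real data
  info_dict = inference.update(feed_dict={x_ph: x_batch})
  inference.print_progress(info_dict)
```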
We run the increment of t separately from the other ops. Whether the other ops would run with t before or after incrementing depends on which happens to execute first in the TensorFlow graph; running the increment separately forces consistent behavior.
Chen, T., Fox, E. B., & Guestrin, C. (2014). Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning.