Gibbs
Inherits From: MonteCarlo
ed.Gibbs
ed.inferences.Gibbs
Defined in edward/inferences/gibbs.py.
Gibbs sampling (Geman & Geman, 1984).
Note: Gibbs assumes the proposal distribution has the same support as the prior. The auto_transform attribute in the method initialize() is not applicable.
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Bernoulli, Beta, Empirical

x_data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
p = Beta(1.0, 1.0)
x = Bernoulli(probs=p, sample_shape=10)
qp = Empirical(tf.Variable(tf.zeros(500)))
inference = ed.Gibbs({p: qp}, data={x: x_data})
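Continuing this example, one plausible way to run the sampler and inspect the approximate posterior is sketched below. The choice of n_iter=500 (matching the number of samples stored in qp), as well as the use of ed.get_session() and qp.params, are assumptions about typical usage rather than part of the original example.

inference.run(n_iter=500)  # one update per sample slot in qp (our assumption)

sess = ed.get_session()
# Posterior mean estimate computed from the collected samples.
print(sess.run(tf.reduce_mean(qp.params)))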
__init__
__init__(
latent_vars,
proposal_vars=None,
data=None
)
Create an inference algorithm.
proposal_vars: dict of RandomVariable to RandomVariable. Collection of random variables to perform inference on; each is bound to its complete conditional, which the Gibbs sampler cycles through and draws from. If not specified, the default is to use ed.complete_conditional.
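As a sketch, passing proposal_vars explicitly might look like the following; this simply mirrors the default behavior, and it assumes the Beta-Bernoulli variables defined in the example above.

p_cond = ed.complete_conditional(p)  # Beta complete conditional of p given x
inference = ed.Gibbs({p: qp}, proposal_vars={p: p_cond}, data={x: x_data})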
build_update
build_update()
The updates assume each Empirical random variable is directly parameterized by tf.Variables.
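For instance, the qp defined above satisfies this assumption because its parameters are a tf.Variable; the commented alternative is a sketch of our own (not from the library docs) of a parameterization that would not.

qp = Empirical(params=tf.Variable(tf.zeros(500)))  # parameters are a tf.Variable

# qp_bad = Empirical(params=tf.nn.softplus(tf.Variable(tf.zeros(500))))
# Here the parameters are a derived tensor, not a tf.Variable, so there
# is no variable for the update op to assign new samples into.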
finalize
finalize()
Function to call after convergence.
initialize
initialize(
scan_order='random',
*args,
**kwargs
)
Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computation graph.
scan_order: list or str. The scan order for each Gibbs update. If a list, it is the deterministic order of latent variables; an element in the list can be a RandomVariable or itself a list of RandomVariables (this defines a blocked Gibbs sampler). If 'random', a random order is used at each update.
print_progress
print_progress(info_dict)
Print progress to output.
run
run(
variables=None,
use_coordinator=True,
*args,
**kwargs
)
A simple wrapper to run inference.
1. Run initialize.
2. Run update for self.n_iter iterations.
3. While running, print_progress.
4. Finalize the algorithm via finalize.

To customize the way inference is run, run these steps individually.
variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
*args, **kwargs: Passed into initialize.
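As a rough sketch of these options in use (continuing the Beta-Bernoulli example at the top of the page; whether skipping variable initialization is appropriate depends on your setup):

# Forward n_iter to initialize(); no queue runners are involved here,
# so the coordinator can be disabled.
inference.run(n_iter=500, use_coordinator=False)

# Re-running later without reinitializing any TensorFlow variables:
# inference.run(variables=[], n_iter=500)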
update
update(feed_dict=None)
Run one iteration of sampling.
feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.
Returns a dict of algorithm-specific information; in this case, the acceptance rate of samples since (and including) this iteration.
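Putting the methods together, a customized loop, in the spirit of the steps that run() performs, might look like the following sketch (reusing the Beta-Bernoulli example at the top of the page):

sess = ed.get_session()
inference.initialize(n_iter=500)
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
    info_dict = inference.update()        # one Gibbs sweep
    inference.print_progress(info_dict)   # reports the acceptance rate

inference.finalize()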
Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6), 721–741.