Inference
ed.Inference
ed.inferences.Inference
Defined in edward/inferences/inference.py.
Abstract base class for inference. All inference algorithms in Edward inherit from Inference, sharing common methods and properties via a class hierarchy.
Specific algorithms typically inherit from other subclasses of Inference rather than from Inference directly. For example, one might inherit from the abstract classes MonteCarlo or VariationalInference.
To build an algorithm inheriting from Inference, one must at minimum implement initialize and update: the former builds the computational graph for the algorithm; the latter runs the computational graph for the algorithm.
To reset inference (e.g., internal variable counters incremented over training), fetch inference's reset ops from the session with sess.run(inference.reset).
import edward as ed
import tensorflow as tf
from edward.models import Normal

# Set up probability model.
mu = Normal(loc=0.0, scale=1.0)
x = Normal(loc=mu, scale=1.0, sample_shape=50)
# Set up posterior approximation.
qmu_loc = tf.Variable(tf.random_normal([]))
qmu_scale = tf.nn.softplus(tf.Variable(tf.random_normal([])))
qmu = Normal(loc=qmu_loc, scale=qmu_scale)
inference = ed.Inference({mu: qmu}, data={x: tf.zeros(50)})
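As an illustration of the pattern above, the following is a minimal sketch (not part of the library) of a custom algorithm subclassing Inference. The objective and training op are stand-ins; a real algorithm would build its own ops in initialize().

import edward as ed
import tensorflow as tf

class MyInference(ed.Inference):
  def initialize(self, *args, **kwargs):
    # Build all ops for the algorithm's computation graph here.
    super(MyInference, self).initialize(*args, **kwargs)
    self.loss = tf.constant(0.0)  # stand-in objective
    self.train = tf.no_op()       # stand-in update rule

  def update(self, feed_dict=None):
    # Run one step of the graph and return algorithm-specific information.
    if feed_dict is None:
      feed_dict = {}
    sess = ed.get_session()
    _, loss = sess.run([self.train, self.loss], feed_dict)
    return {'loss': loss}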
__init__
__init__(
latent_vars=None,
data=None
)
Create an inference algorithm.
latent_vars: dict. Collection of latent variables (of type RandomVariable or tf.Tensor) to perform inference on. Each random variable is bound to another random variable; the latter will infer the former conditional on data.
data: dict. Data dictionary which binds observed variables (of type RandomVariable or tf.Tensor) to their realizations (of type tf.Tensor). It can also bind placeholders (of type tf.Tensor) used in the model to their realizations; and prior latent variables (of type RandomVariable) to posterior latent variables (of type RandomVariable).
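For example, building on the model above, the observed variable can be bound to a placeholder rather than a fixed tensor, so that data can be fed in later; x_ph here is introduced only for illustration.

x_ph = tf.placeholder(tf.float32, [50])
inference = ed.Inference({mu: qmu}, data={x: x_ph})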
finalize
finalize()
Function to call after convergence.
initialize
initialize(
n_iter=1000,
n_print=None,
scale=None,
auto_transform=True,
logdir=None,
log_timestamp=True,
log_vars=None,
debug=False
)
Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computation graph.
Any derived class of Inference must implement this method. No methods which build ops should be called outside initialize().
n_iter: int. Number of iterations for the algorithm when calling run(). Alternatively, if controlling inference manually, it is the expected number of calls to update(); this number determines tracking information during print progress.
n_print: int. Number of iterations between each print of progress. To suppress print progress, specify 0. Default is int(n_iter / 100).
scale: dict of RandomVariable to tf.Tensor. A tensor to scale computation for any random variable that it is bound to. Its shape must be broadcastable; it is multiplied element-wise to the random variable. For example, this is useful for mini-batch scaling when inferring global variables, or applying masks on a random variable.
auto_transform: bool. Whether to automatically transform continuous latent variables of unequal support to be on the unconstrained space. It is only applied if the argument is True, the latent variable pair are ed.RandomVariables with the support attribute, and the supports are both continuous and unequal.
logdir: str. Directory where the event file will be written. For details, see tf.summary.FileWriter. Default is to log nothing.
log_timestamp: bool. If True (and logdir is specified), create a subdirectory of logdir to save the specific run results. The subdirectory's name is the current UTC timestamp with format 'YYYYMMDD_HHMMSS'.
log_vars: list. Specifies the list of variables to log after each n_print steps. If None, will log all variables. If [], no variables will be logged. logdir must be specified for variables to be logged.
debug: bool. If True, add checks for NaN and Inf to all computations in the graph. May result in substantially slower execution times.
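For instance, a common use of scale is mini-batch scaling. A brief sketch, assuming N total observations, mini-batches of size M, and an observed variable x defined over a single batch:

N = 10000  # total number of data points (assumed)
M = 100    # mini-batch size (assumed)
inference.initialize(scale={x: float(N) / M})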
print_progress
print_progress(info_dict)
Print progress to output.
info_dict: dict. Dictionary of algorithm-specific information.
run
run(
variables=None,
use_coordinator=True,
*args,
**kwargs
)
A simple wrapper to run inference. It calls initialize, then update for self.n_iter iterations (calling print_progress along the way), and finally finalize. To customize the way inference is run, run these steps individually.
variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
*args, **kwargs: Passed into initialize.
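As a sketch of running these steps individually instead of calling run(), continuing the example above (the number of iterations is chosen arbitrarily):

inference.initialize(n_iter=250)
sess = ed.get_session()
sess.run(tf.global_variables_initializer())
for _ in range(inference.n_iter):
  info_dict = inference.update()
  inference.print_progress(info_dict)
inference.finalize()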
update
update(feed_dict=None)
Run one iteration of inference.
Any derived class of Inference must implement this method.
feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.
Returns: dict. Dictionary of algorithm-specific information.
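For example, a sketch of feeding a placeholder at update() time, assuming the observed variable was bound to a placeholder x_ph as in the earlier sketch, and that next_batch() is a hypothetical function returning one mini-batch of data:

for _ in range(inference.n_iter):
  x_batch = next_batch()  # hypothetical mini-batch loader
  info_dict = inference.update(feed_dict={x_ph: x_batch})
  inference.print_progress(info_dict)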