ed.VariationalInference

Class VariationalInference

Inherits From: Inference

Aliases:

  • Class ed.VariationalInference
  • Class ed.inferences.VariationalInference

Defined in edward/inferences/variational_inference.py.

Abstract base class for variational inference. Specific variational inference methods inherit from VariationalInference, sharing methods such as a default optimizer.

To build an algorithm inheriting from VariationalInference, one must at minimum implement build_loss_and_gradients: it determines the loss function and the gradients to apply for a given optimizer.
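The subclassing contract can be sketched in plain Python (this is an illustrative stand-in, not Edward source; the class MyVI and its return values are hypothetical, and a real subclass would return TensorFlow tensors):

```python
class VariationalInference(object):
    """Minimal stand-in for ed.VariationalInference."""

    def build_loss_and_gradients(self, var_list):
        # The abstract base class raises NotImplementedError,
        # as documented below.
        raise NotImplementedError()


class MyVI(VariationalInference):
    """Hypothetical subclass supplying a loss and gradients."""

    def build_loss_and_gradients(self, var_list):
        # In a real subclass these would be TensorFlow tensors; the
        # zeros here only show the expected shape of the result:
        # (loss, [(gradient, variable), ...]).
        loss = 0.0
        grads_and_vars = [(0.0, v) for v in var_list]
        return loss, grads_and_vars
```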

Methods

__init__

__init__(
    *args,
    **kwargs
)

build_loss_and_gradients

build_loss_and_gradients(var_list)

Build the loss function and its gradients, which an optimizer then applies to update the model and variational parameters.

Any derived class of VariationalInference must implement this method.

Raises:

NotImplementedError.

finalize

finalize()

Function to call after convergence.

initialize

initialize(
    optimizer=None,
    var_list=None,
    use_prettytensor=False,
    global_step=None,
    *args,
    **kwargs
)

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm's computation graph.

Args:

  • optimizer: str or tf.train.Optimizer, optional. A TensorFlow optimizer to use for optimizing the variational objective. Alternatively, one can pass in the name of a TensorFlow optimizer, and default parameters for that optimizer will be used.
  • var_list: list of tf.Variable, optional. List of TensorFlow variables to optimize over. Default is all trainable variables that latent_vars and data depend on, excluding those that are only used in conditionals in data.
  • use_prettytensor: bool, optional. If True, use a PrettyTensor optimizer (when training with PrettyTensor); if False, use a TensorFlow optimizer. Defaults to False (TensorFlow).
  • global_step: tf.Variable, optional. A TensorFlow variable to hold the global step.
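One plausible way to resolve the optimizer argument, which accepts either None, a string name, or an optimizer object, can be sketched as follows (an illustrative assumption, not Edward source; the name table and the default choice are hypothetical):

```python
# Hypothetical lookup table from string names to tf.train optimizer
# classes; class names stand in as strings so the sketch runs without
# TensorFlow installed.
_OPTIMIZERS = {
    'gradientdescent': 'GradientDescentOptimizer',
    'adam': 'AdamOptimizer',
    'rmsprop': 'RMSPropOptimizer',
}


def resolve_optimizer(optimizer=None):
    """Accept None, a string name, or a pre-built optimizer object."""
    if optimizer is None:
        # Assumed default: some optimizer with default parameters.
        return 'AdamOptimizer'
    if isinstance(optimizer, str):
        try:
            return _OPTIMIZERS[optimizer.lower()]
        except KeyError:
            raise ValueError('Optimizer class not found: ' + optimizer)
    # Otherwise assume an already-constructed tf.train.Optimizer.
    return optimizer
```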
print_progress

print_progress(info_dict)

Print progress to output.

run

run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)

A simple wrapper to run inference.

  1. Initialize algorithm via initialize.
  2. (Optional) Build a TensorFlow summary writer for TensorBoard.
  3. (Optional) Initialize TensorFlow variables.
  4. (Optional) Start queue runners.
  5. Run update for self.n_iter iterations.
  6. While running, print_progress.
  7. Finalize algorithm via finalize.
  8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

Args:

  • variables: list, optional. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
  • use_coordinator: bool, optional. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
  • *args, **kwargs: Passed into initialize.

update

update(feed_dict=None)

Run one iteration of optimization.

Args:

  • feed_dict: dict, optional. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict. Dictionary of algorithm-specific information. In this case, the loss function value after one iteration.