API and Documentation

Inference

We describe how to perform inference in probabilistic models. For background, see the Inference tutorial.

Suppose we have a model \(p(\mathbf{x}, \mathbf{z}, \beta)\) of data \(\mathbf{x}_{\text{train}}\) with latent variables \((\mathbf{z}, \beta)\). Consider the posterior inference problem, \[q(\mathbf{z}, \beta)\approx p(\mathbf{z}, \beta\mid \mathbf{x}_{\text{train}}),\] in which the task is to approximate the posterior \(p(\mathbf{z}, \beta\mid \mathbf{x}_{\text{train}})\) using a family of distributions, \(q(\mathbf{z},\beta; \lambda)\), indexed by parameters \(\lambda\).

In Edward, let z and beta be latent variables in the model, where we observe the random variable x with data x_train. Let qz and qbeta be random variables defined to approximate the posterior. We write this problem as follows:

inference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})

Inference is an abstract class which takes two inputs. The first is a collection of latent random variables beta and z, along with "posterior variables" qbeta and qz, each associated with its respective latent variable. The second is a collection of observed random variables x, each associated with its data, x_train.

Inference adjusts the parameters of the distributions of qbeta and qz so that they are close to the posterior \(p(\mathbf{z}, \beta\,|\,\mathbf{x}_{\text{train}})\).
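
For concreteness, a minimal sketch of such a setup might look as follows; the particular model and variational family here are illustrative assumptions, not part of the API.

import edward as ed
import tensorflow as tf
from edward.models import Normal

# Model: a global latent variable beta and local latent variables z.
beta = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
z = Normal(mu=tf.zeros(10), sigma=tf.ones(10))
x = Normal(mu=z + beta, sigma=tf.ones(10))

# Approximating family; its parameters (lambda) are TensorFlow variables.
qbeta = Normal(mu=tf.Variable(tf.zeros(1)),
               sigma=tf.nn.softplus(tf.Variable(tf.zeros(1))))
qz = Normal(mu=tf.Variable(tf.zeros(10)),
            sigma=tf.nn.softplus(tf.Variable(tf.zeros(10))))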

Running inference is as simple as running one method.

inference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})
inference.run()

Inference also supports fine control of the training procedure.

inference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})
inference.initialize()

tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  info_dict = inference.update()
  inference.print_progress(info_dict)

inference.finalize()

initialize() builds the algorithm’s update rules (the computational graph) for \(\lambda\); tf.global_variables_initializer().run() initializes \(\lambda\) (the TensorFlow variables in the graph); update() runs the graph once to update \(\lambda\), and is called in a loop until convergence; finalize() runs any computation as the algorithm terminates.

The run() method is a simple wrapper for this procedure.

Other Settings

We highlight other settings during inference.

Model parameters. Model parameters are parameters in a model for which we always compute point estimates rather than maintain posterior uncertainty. They are defined as tf.Variables, and the inference problem is \[\hat{\theta} \;\leftarrow\; \operatorname*{arg\,max}_{\theta}\; p(\mathbf{x}_{\text{train}}; \theta).\]

import edward as ed
import tensorflow as tf
from edward.models import Normal

theta = tf.Variable(0.0)  # model parameter; a point estimate is computed
x = Normal(mu=tf.ones(10) * theta, sigma=1.0)

inference = ed.Inference({}, {x: x_train})

Only a subset of inference algorithms support estimation of model parameters. (Note also that this example has no latent variables; it only estimates theta given that we observe \(\mathbf{x} = \mathbf{x}_{\text{train}}\). We can add latent variables so that inference performs both posterior inference and parameter estimation, as the sketch below illustrates.)
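
For instance, here is a sketch that combines a model parameter with a latent variable; the variational family for beta is an illustrative assumption.

theta = tf.Variable(0.0)  # model parameter, point-estimated
beta = Normal(mu=tf.zeros(1), sigma=tf.ones(1))  # latent variable
x = Normal(mu=tf.ones(10) * theta + beta, sigma=1.0)

qbeta = Normal(mu=tf.Variable(tf.zeros(1)),
               sigma=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Posterior inference over beta; theta is optimized as a point estimate.
inference = ed.Inference({beta: qbeta}, {x: x_train})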

For example, model parameters are useful when applying neural networks from high-level libraries such as Keras and TensorFlow Slim. See the model compositionality page for more details.

Conditional inference. In conditional inference, only a subset of the posterior is inferred while the rest are fixed using other inferences. The inference problem is \[q(\beta)q(\mathbf{z})\approx p(\mathbf{z}, \beta\mid\mathbf{x}_{\text{train}})\] where parameters in \(q(\beta)\) are estimated and \(q(\mathbf{z})\) is fixed. In Edward, we enable conditioning by binding random variables to other random variables in data.

inference = ed.Inference({beta: qbeta}, {x: x_train, z: qz})

In the compositionality page, we describe how to construct inference by composing many conditional inference algorithms.
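
For example, a variational-EM-style loop can alternate updates between two conditional inferences. The use of ed.KLqp for both steps below is an illustrative choice, not a requirement.

inference_z = ed.KLqp({z: qz}, {x: x_train, beta: qbeta})
inference_beta = ed.KLqp({beta: qbeta}, {x: x_train, z: qz})

inference_z.initialize()
inference_beta.initialize()
tf.global_variables_initializer().run()

for _ in range(1000):
  inference_z.update()     # update q(z) holding q(beta) fixed
  inference_beta.update()  # update q(beta) holding q(z) fixed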

Implicit prior samples. Latent variables can be defined in the model without any posterior inference over them. They are implicitly marginalized out with a single sample. The inference problem is \[q(\beta)\approx p(\beta\mid\mathbf{x}_{\text{train}}, \mathbf{z}^*)\] where \(\mathbf{z}^*\sim p(\mathbf{z}\mid\beta)\) is a prior sample.

inference = ed.Inference({beta: qbeta}, {x: x_train})

For example, implicit prior samples are useful for generative adversarial networks. Their inference problem does not require any inference over the latent variables; it uses samples from the prior.
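
A sketch of this setting, with an illustrative model: z has a prior but no corresponding entry in the latent variable dictionary, so it is implicitly marginalized out with a single sample.

z = Normal(mu=tf.zeros(10), sigma=tf.ones(10))
beta = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
x = Normal(mu=z + beta, sigma=tf.ones(10))

qbeta = Normal(mu=tf.Variable(tf.zeros(1)),
               sigma=tf.nn.softplus(tf.Variable(tf.zeros(1))))

inference = ed.Inference({beta: qbeta}, {x: x_train})  # no entry for z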


class edward.inferences.Inference(latent_vars=None, data=None)[source]

Abstract base class for inference. All inference algorithms in Edward inherit from Inference, sharing common methods and properties via a class hierarchy.

Specific algorithms typically inherit from other subclasses of Inference rather than Inference directly. For example, one might inherit from the abstract classes MonteCarlo or VariationalInference.

To build an algorithm inheriting from Inference, one must at a minimum implement initialize and update: the former builds the computational graph for the algorithm; the latter runs the computational graph for the algorithm.
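
A minimal skeleton of such a subclass might look as follows; the class name and the trivial bodies are placeholders, and real algorithms typically extend MonteCarlo or VariationalInference instead.

from edward.inferences import Inference

class MyInference(Inference):
  def initialize(self, *args, **kwargs):
    # Build ops for the algorithm's update rule here.
    return super(MyInference, self).initialize(*args, **kwargs)

  def update(self, feed_dict=None):
    # Run the ops built in initialize() for one iteration.
    return {}  # algorithm-specific information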


Initialization.

Parameters:

latent_vars : dict, optional

Collection of latent variables (of type RandomVariable or tf.Tensor) to perform inference on. Each random variable is bound to another random variable; the latter will be used to infer the former conditional on data.

data : dict, optional

Data dictionary which binds observed variables (of type RandomVariable or tf.Tensor) to their realizations (of type tf.Tensor). It can also bind placeholders (of type tf.Tensor) used in the model to their realizations; and prior latent variables (of type RandomVariable) to posterior latent variables (of type RandomVariable).

Examples

import edward as ed
import tensorflow as tf
from edward.models import Normal

mu = Normal(mu=tf.constant(0.0), sigma=tf.constant(1.0))
x = Normal(mu=tf.ones(50) * mu, sigma=tf.constant(1.0))

qmu_mu = tf.Variable(tf.random_normal([]))
qmu_sigma = tf.nn.softplus(tf.Variable(tf.random_normal([])))
qmu = Normal(mu=qmu_mu, sigma=qmu_sigma)

inference = ed.Inference({mu: qmu}, data={x: tf.zeros(50)})
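
Since Inference is abstract, in practice one instantiates a concrete subclass; for example, with variational inference via ed.KLqp:

inference = ed.KLqp({mu: qmu}, data={x: tf.zeros(50)})
inference.run(n_iter=1000)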

Methods

run(variables=None, use_coordinator=True, *args, **kwargs)[source]

A simple wrapper to run inference.

  1. Initialize algorithm via initialize.
  2. (Optional) Build a TensorFlow summary writer for TensorBoard.
  3. (Optional) Initialize TensorFlow variables.
  4. (Optional) Start queue runners.
  5. Run update for self.n_iter iterations.
  6. While running, print_progress.
  7. Finalize algorithm via finalize.
  8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

Parameters:

variables : list, optional

A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.

use_coordinator : bool, optional

Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.

*args

Passed into initialize.

**kwargs

Passed into initialize.
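
For example, a call that forwards n_iter to initialize, skips variable initialization, and disables queue runners (a sketch for when the session and variables are managed manually) is:

inference.run(n_iter=500, variables=[], use_coordinator=False)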

initialize(n_iter=1000, n_print=None, scale=None, logdir=None, debug=False)[source]

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computational graph. No ops should be created outside the call to initialize().

Any derived class of Inference must implement this method.

Parameters:

n_iter : int, optional

Number of iterations for algorithm.

n_print : int, optional

Number of iterations between each progress printout. To suppress printing, specify 0. Default is int(n_iter / 10).

scale : dict of RandomVariable to tf.Tensor, optional

A tensor to scale computation for any random variable it is bound to. Its shape must be broadcastable; it is multiplied element-wise with the random variable. For example, this is useful for mini-batch scaling when inferring global variables, or for applying masks to a random variable.

logdir : str, optional

Directory where event file will be written. For details, see tf.summary.FileWriter. Default is to write nothing.

debug : bool, optional

If True, add checks for NaN and Inf to all computations in the graph. May result in substantially slower execution times.
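
For example, when each update sees a mini-batch of M data points out of N in total, one might scale the likelihood contribution of x accordingly (N and M are assumptions of this sketch):

N = 10000  # full data set size (assumed)
M = 128    # mini-batch size (assumed)
inference.initialize(scale={x: float(N) / M})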

update(feed_dict=None)[source]

Run one iteration of inference.

Any derived class of Inference must implement this method.

Parameters:

feed_dict : dict, optional

Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict

Dictionary of algorithm-specific information.
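
For example, one might feed a mini-batch through a placeholder at each iteration; x_ph and the next_batch helper below are assumptions of this sketch.

x_ph = tf.placeholder(tf.float32, [M])
inference = ed.Inference({z: qz, beta: qbeta}, {x: x_ph})
inference.initialize()
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  x_batch = next_batch(M)  # assumed helper returning an array of shape [M]
  info_dict = inference.update(feed_dict={x_ph: x_batch})
  inference.print_progress(info_dict)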

print_progress(info_dict)[source]

Print progress to output.

Parameters:

info_dict : dict

Dictionary of algorithm-specific information.

finalize()[source]

Function to call after convergence.