`Inference`

- Class `ed.Inference`
- Class `ed.inferences.Inference`

Defined in `edward/inferences/inference.py`.

Abstract base class for inference. All inference algorithms in Edward inherit from `Inference`, sharing common methods and properties via a class hierarchy.

Specific algorithms typically inherit from other subclasses of `Inference` rather than from `Inference` directly. For example, one might inherit from the abstract classes `MonteCarlo` or `VariationalInference`.

To build an algorithm inheriting from `Inference`, one must at minimum implement `initialize` and `update`: the former builds the computational graph for the algorithm; the latter runs the computational graph for the algorithm.

To reset inference (e.g., internal variable counters incremented over training), fetch inference's reset ops from the session with `sess.run(inference.reset)`.

```
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Set up probability model.
mu = Normal(loc=0.0, scale=1.0)
x = Normal(loc=mu, scale=1.0, sample_shape=50)
# Set up posterior approximation.
qmu_loc = tf.Variable(tf.random_normal([]))
qmu_scale = tf.nn.softplus(tf.Variable(tf.random_normal([])))
qmu = Normal(loc=qmu_loc, scale=qmu_scale)
inference = ed.Inference({mu: qmu}, data={x: tf.zeros(50)})
```
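To make the `initialize`/`update` contract concrete without requiring Edward or TensorFlow, here is a framework-free sketch of the pattern. The class and method names merely mirror the API described above; this is not Edward code, and the base class here is a stand-in, not `ed.Inference` itself.

```python
class Inference(object):
    """Stand-in base class: stores bindings and defines the contract."""

    def __init__(self, latent_vars=None, data=None):
        self.latent_vars = latent_vars or {}
        self.data = data or {}

    def initialize(self, n_iter=1000):
        # Build everything needed for iteration (in Edward, the
        # computational graph and its ops).
        self.n_iter = n_iter
        self.t = 0  # iteration counter

    def update(self):
        # Derived classes must implement this: run one step.
        raise NotImplementedError("subclasses must implement update()")

    def finalize(self):
        # Called once after convergence; nothing to do in this sketch.
        pass


class CountingInference(Inference):
    """Toy subclass: `update` advances the counter and reports it."""

    def update(self):
        self.t += 1
        return {"t": self.t}


inference = CountingInference()
inference.initialize(n_iter=3)
for _ in range(inference.n_iter):
    info_dict = inference.update()
inference.finalize()
# info_dict["t"] is now 3
```

The division of labor is the point: `initialize` does all one-time setup, `update` does one cheap step returning a dict of diagnostics, and `finalize` performs any post-convergence cleanup.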

`__init__`

```
__init__(
latent_vars=None,
data=None
)
```

Create an inference algorithm.

- `latent_vars`: dict, optional. Collection of latent variables (of type `RandomVariable` or `tf.Tensor`) to perform inference on. Each random variable is bound to another random variable; the latter will infer the former conditional on data.
- `data`: dict, optional. Data dictionary which binds observed variables (of type `RandomVariable` or `tf.Tensor`) to their realizations (of type `tf.Tensor`). It can also bind placeholders (of type `tf.Tensor`) used in the model to their realizations, and prior latent variables (of type `RandomVariable`) to posterior latent variables (of type `RandomVariable`).

`finalize`

`finalize()`

Function to call after convergence.

`initialize`

```
initialize(
n_iter=1000,
n_print=None,
scale=None,
logdir=None,
log_timestamp=True,
log_vars=None,
debug=False
)
```

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm's computation graph.

Any derived class of `Inference` **must** implement this method. No methods which build ops should be called outside `initialize()`.

- `n_iter`: int, optional. Number of iterations for the algorithm when calling `run()`. Alternatively, if controlling inference manually, it is the expected number of calls to `update()`; this number determines the tracking information reported during print progress.
- `n_print`: int, optional. Number of iterations for each print progress. To suppress print progress, specify 0. Default is `int(n_iter / 100)`.
- `scale`: dict of `RandomVariable` to `tf.Tensor`, optional. A tensor to scale computation for any random variable that it is bound to. Its shape must be broadcastable; it is multiplied element-wise with the random variable. For example, this is useful for mini-batch scaling when inferring global variables, or for applying masks on a random variable.
- `logdir`: str, optional. Directory where the event file will be written. For details, see `tf.summary.FileWriter`. Default is to log nothing.
- `log_timestamp`: bool, optional. If True (and `logdir` is specified), create a subdirectory of `logdir` to save the specific run results. The subdirectory's name is the current UTC timestamp with format 'YYYYMMDD_HHMMSS'.
- `log_vars`: list, optional. Specifies the list of variables to log after each `n_print` steps. If None, will log all variables. If `[]`, no variables will be logged. `logdir` must be specified for variables to be logged.
- `debug`: bool, optional. If True, add checks for `NaN` and `Inf` to all computations in the graph. May result in substantially slower execution times.
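For the mini-batch use of `scale`, the scaling constant is typically the ratio of the full data size to the batch size, so that the mini-batch likelihood term stands in for the full-data one. A small arithmetic sketch (the dataset and batch sizes below are made up, and the `initialize` call is shown only as a comment):

```python
N = 10000  # assumed full dataset size
M = 100    # assumed mini-batch size
scale_factor = float(N) / M  # weight multiplied element-wise into the observed variable

# With an observed variable `x` modeled over a mini-batch, one would then
# (hypothetically) pass: inference.initialize(scale={x: scale_factor})
```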

`print_progress`

`print_progress(info_dict)`

Print progress to output.

- `info_dict`: dict. Dictionary of algorithm-specific information.

`run`

```
run(
variables=None,
use_coordinator=True,
*args,
**kwargs
)
```

A simple wrapper to run inference.

- Initialize algorithm via `initialize`.
- (Optional) Build a TensorFlow summary writer for TensorBoard.
- (Optional) Initialize TensorFlow variables.
- (Optional) Start queue runners.
- Run `update` for `self.n_iter` iterations.
- While running, `print_progress`.
- Finalize algorithm via `finalize`.
- (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

- `variables`: list, optional. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
- `use_coordinator`: bool, optional. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
- `*args, **kwargs`: Passed into `initialize`.
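Running the steps individually means writing the loop that `run` would otherwise write for you. The sketch below shows the shape of that manual loop using a stand-in object, so the TensorFlow-specific steps (summary writer, variable initialization, queue runners) are elided; the class and its numbers are invented for illustration.

```python
class ToyInference(object):
    """Stand-in with the same lifecycle methods as an inference algorithm."""

    def initialize(self, n_iter=1000):
        self.n_iter, self.t, self.loss = n_iter, 0, 100.0

    def update(self):
        self.t += 1
        self.loss *= 0.9  # pretend one optimization step shrinks the loss
        return {"t": self.t, "loss": self.loss}

    def print_progress(self, info_dict):
        print("iteration {t}: loss {loss:.2f}".format(**info_dict))

    def finalize(self):
        self.finalized = True


# The same steps `run()` performs, written out by hand:
inference = ToyInference()
inference.initialize(n_iter=5)           # build ops / hyperparameters
for _ in range(inference.n_iter):        # iterate
    info_dict = inference.update()       # one step
    inference.print_progress(info_dict)  # report
inference.finalize()                     # tear down after convergence
```

In real Edward code the loop has the same shape, with TensorFlow variable initialization (e.g., `tf.global_variables_initializer().run()`) between `initialize` and the loop.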

`update`

`update(feed_dict=None)`

Run one iteration of inference.

Any derived class of `Inference` **must** implement this method.

- `feed_dict`: dict, optional. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns: dict. Dictionary of algorithm-specific information.