`MonteCarlo`

Inherits From: `Inference`

- Class `ed.MonteCarlo`
- Class `ed.inferences.MonteCarlo`

Defined in `edward/inferences/monte_carlo.py`.

Abstract base class for Monte Carlo. Specific Monte Carlo methods inherit from `MonteCarlo`, sharing methods in this class.

To build an algorithm inheriting from `MonteCarlo`, one must at a minimum implement `build_update`: it determines how to assign the samples in the `Empirical` approximations.

The number of Monte Carlo iterations is set according to the minimum of all `Empirical` sizes.
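As a rough sketch in plain Python (the sample sizes below are hypothetical stand-ins for the outer dimensions of real `Empirical` variables), the iteration count is the smallest outer dimension among the approximations:

```python
# Hypothetical outer dimensions (number of samples) of three Empirical
# approximations; MonteCarlo would run for the smallest of them.
empirical_shapes = [(10000, 3), (5000, 2), (8000, 4)]
n_iter = min(shape[0] for shape in empirical_shapes)
print(n_iter)  # -> 5000
```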

Initialization is assumed from `params[0, :]`. This generalizes initializing randomly and initializing from user input. Updates are along this outer dimension, where iteration `t` updates `params[t, :]` in each `Empirical` random variable.
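The outer-dimension indexing can be illustrated with a NumPy sketch (the array, shapes, and random-walk update below are all illustrative stand-ins, not Edward's actual assignment ops):

```python
import numpy as np

T, D = 5, 3  # illustrative number of samples and latent dimension
params = np.zeros([T, D])
params[0, :] = np.random.randn(D)  # initialization lives in the first row

# Iteration t writes its sample into params[t, :], mirroring how a
# MonteCarlo method assigns into an Empirical's params variable.
for t in range(1, T):
    params[t, :] = params[t - 1, :] + 0.1 * np.random.randn(D)
```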

No warm-up is implemented. Users must run MCMC for a long period of time, then manually burn in the Empirical random variable.
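For example, a manual burn-in can be done by discarding the first rows of the sample array after inference has run (a NumPy stand-in for the `Empirical` parameters; the sizes and variable names are illustrative):

```python
import numpy as np

samples = np.random.randn(10000, 2)  # stand-in for the trained Empirical's params
burn_in = 2000
kept = samples[burn_in:]  # drop the warm-up samples by hand
```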

Most explicitly, `MonteCarlo` is specified via a dictionary:

```
import edward as ed
import tensorflow as tf
from edward.models import Empirical

qpi = Empirical(params=tf.Variable(tf.zeros([T, K-1])))
qmu = Empirical(params=tf.Variable(tf.zeros([T, K*D])))
qsigma = Empirical(params=tf.Variable(tf.zeros([T, K*D])))
ed.MonteCarlo({pi: qpi, mu: qmu, sigma: qsigma}, data)
```

The inferred posterior is composed of `Empirical` random variables with `T` samples. We also automate the specification of `Empirical` random variables. One can pass in a list of latent variables instead:

```
ed.MonteCarlo([beta], data)
ed.MonteCarlo([pi, mu, sigma], data)
```

It defaults to `Empirical` random variables with 10,000 samples for each dimension.
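Conceptually (this helper is a sketch, not Edward's internal code), the list form builds, for each latent variable, a params variable with an extra outer dimension of 10,000 samples:

```python
def empirical_params_shape(latent_shape, n_samples=10000):
    """Shape of the internally defined Empirical's params variable (sketch)."""
    return (n_samples,) + tuple(latent_shape)

print(empirical_params_shape((3,)))  # -> (10000, 3)
```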

`__init__`

```
__init__(
latent_vars=None,
data=None
)
```

Create an inference algorithm.

Args:

- `latent_vars`: list or dict, optional. Collection of random variables (of type `RandomVariable` or `tf.Tensor`) to perform inference on. If list, each random variable will be approximated using an `Empirical` random variable that is defined internally (with unconstrained support). If dictionary, each value in the dictionary must be an `Empirical` random variable.
- `data`: dict, optional. Data dictionary which binds observed variables (of type `RandomVariable` or `tf.Tensor`) to their realizations (of type `tf.Tensor`). It can also bind placeholders (of type `tf.Tensor`) used in the model to their realizations.

`build_update`

`build_update()`

Build update rules, returning an assign op for parameters in the `Empirical` random variables.

Any derived class of `MonteCarlo` **must** implement this method.

Raises:

- `NotImplementedError`.
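The contract can be sketched in plain Python (no TensorFlow; the class names and method bodies below are illustrative only):

```python
class MonteCarloSketch:
    """Stand-in for the abstract MonteCarlo interface."""
    def build_update(self):
        # Derived classes must override this to build the assign op that
        # writes new samples into the Empirical approximations.
        raise NotImplementedError()

class RandomWalkSketch(MonteCarloSketch):
    def build_update(self):
        # A real implementation would return a TensorFlow assign op that
        # writes into params[t, :]; here we return a placeholder string.
        return "assign_op"
```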

`finalize`

`finalize()`

Function to call after convergence.

`initialize`

```
initialize(
*args,
**kwargs
)
```

`print_progress`

`print_progress(info_dict)`

Print progress to output.

`run`

```
run(
variables=None,
use_coordinator=True,
*args,
**kwargs
)
```

A simple wrapper to run inference.

- Initialize algorithm via `initialize`.
- (Optional) Build a TensorFlow summary writer for TensorBoard.
- (Optional) Initialize TensorFlow variables.
- (Optional) Start queue runners.
- Run `update` for `self.n_iter` iterations.
- While running, `print_progress`.
- Finalize algorithm via `finalize`.
- (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

Args:

- `variables`: list, optional. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
- `use_coordinator`: bool, optional. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
- `*args, **kwargs`: Passed into `initialize`.

`update`

`update(feed_dict=None)`

Run one iteration of sampling.

Args:

- `feed_dict`: dict, optional. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict. Dictionary of algorithm-specific information. In this case, the acceptance rate of samples since (and including) this iteration.
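The acceptance-rate bookkeeping reported in the returned dictionary can be sketched as follows (the accept/reject rule here is a placeholder, not a real Metropolis-Hastings decision):

```python
n_accept = 0
for t in range(1, 101):
    accepted = (t % 2 == 0)  # placeholder accept/reject decision
    n_accept += int(accepted)
    # Running acceptance rate over iterations 1..t, as update() would report.
    info_dict = {'t': t, 'accept_rate': n_accept / t}
```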

We run the increment of `t` separately from other ops. Whether the other ops run with the value of `t` before or after incrementing depends on which runs faster in the TensorFlow graph. Running the increment separately forces consistent behavior.