ed.WakeSleep

Class WakeSleep

Inherits From: VariationalInference

Aliases:

  • Class ed.WakeSleep
  • Class ed.inferences.WakeSleep

Defined in edward/inferences/wake_sleep.py.

Wake-Sleep algorithm (Hinton, Dayan, Frey, & Neal, 1995).

Given a probability model \(p(x, z; \theta)\) and variational distribution \(q(z\mid x; \lambda)\), wake-sleep alternates between two phases:

  • In the wake phase, \(\log p(x, z; \theta)\) is maximized with respect to model parameters \(\theta\) using bottom-up samples \(z\sim q(z\mid x; \lambda)\).
  • In the sleep phase, \(\log q(z\mid x; \lambda)\) is maximized with respect to variational parameters \(\lambda\) using top-down “fantasy” samples \(z\sim p(x, z; \theta)\).

Hinton et al. (1995) justify wake-sleep under the variational lower bound of the description length,

\(\mathbb{E}_{q(z\mid x; \lambda)} [ \log p(x, z; \theta) - \log q(z\mid x; \lambda)].\)

Maximizing this bound with respect to \(\theta\) corresponds to the wake phase. Rather than maximizing it with respect to \(\lambda\) (which would minimize \(\text{KL}(q\|p)\)), the sleep phase instead minimizes the reverse KL divergence \(\text{KL}(p\|q)\) in expectation over the data distribution.
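
To make the two phases concrete, here is a toy sketch (not the library's implementation) of one wake-sleep iteration for a linear-Gaussian model \(p(x, z; \theta) = \mathcal{N}(z; 0, 1)\,\mathcal{N}(x; \theta z, 1)\) with variational distribution \(q(z \mid x; \lambda) = \mathcal{N}(z; \lambda x, 1)\); all names and values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
theta, lam = 0.5, 0.0   # model and variational parameters (illustrative)
x_obs = 1.3             # a single observed data point
lr, S = 0.1, 50         # step size and number of samples

# Wake phase: z ~ q(z | x_obs; lam), ascend log p(x_obs, z; theta) in theta.
z = rng.normal(loc=lam * x_obs, scale=1.0, size=S)
grad_theta = np.mean((x_obs - theta * z) * z)  # d/dtheta log N(x; theta z, 1)
theta += lr * grad_theta

# Sleep phase: fantasy (x, z) ~ p(x, z; theta), ascend log q(z | x; lam) in lam.
z_fantasy = rng.normal(loc=0.0, scale=1.0, size=S)
x_fantasy = rng.normal(loc=theta * z_fantasy, scale=1.0)
grad_lam = np.mean((z_fantasy - lam * x_fantasy) * x_fantasy)  # d/dlam log N(z; lam x, 1)
lam += lr * grad_lam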

Notes

In conditional inference, we infer \(z\) in \(p(z, \beta \mid x)\) while fixing inference over \(\beta\) using another distribution \(q(\beta)\). During gradient calculation, instead of using the model’s density

\(\log p(x, z^{(s)}), z^{(s)} \sim q(z; \lambda),\)

for each sample \(s=1,\ldots,S\), WakeSleep uses

\(\log p(x, z^{(s)}, \beta^{(s)}),\)

where \(z^{(s)} \sim q(z; \lambda)\) and \(\beta^{(s)} \sim q(\beta)\).
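
At the API level, fixing inference over \(\beta\) is typically done by binding \(\beta\) to \(q(\beta)\) in the data argument, as with other Edward inference classes. The sketch below is illustrative; it assumes the random variables z, beta, x, the approximations qz, qbeta, and the data x_train are already defined.

inference_z = ed.WakeSleep({z: qz}, data={x: x_train, beta: qbeta})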

The objective function also includes the sum of all tensors in the REGULARIZATION_LOSSES collection.
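
For example (an illustrative sketch, not part of this class), any tensor placed in that collection through the standard TensorFlow 1.x mechanism is picked up by the objective:

weights = tf.Variable(tf.zeros([10, 5]))  # some model parameter (illustrative)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES,
                     0.01 * tf.nn.l2_loss(weights))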

Methods

__init__

__init__(
    *args,
    **kwargs
)
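
A minimal construction sketch (variable names and data are illustrative): as with other Edward inference classes, the first argument maps latent random variables to their approximating distributions, and data maps observed random variables to their realizations.

import tensorflow as tf
import edward as ed
from edward.models import Normal

# Toy model p(x, z; theta): standard normal prior, Gaussian likelihood.
z = Normal(loc=0.0, scale=1.0)
x = Normal(loc=z, scale=1.0)

# Variational distribution q(z; lambda) with trainable parameters.
qz = Normal(loc=tf.Variable(0.0),
            scale=tf.nn.softplus(tf.Variable(0.0)))

inference = ed.WakeSleep({z: qz}, data={x: 2.5})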

build_loss_and_gradients

build_loss_and_gradients(var_list)

finalize

finalize()

Function to call after convergence.

initialize

initialize(
    n_samples=1,
    phase_q='sleep',
    *args,
    **kwargs
)

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm’s computation graph.

Args:

  • n_samples: int. Number of samples for calculating stochastic gradients during wake and sleep phases.
  • phase_q: str. Phase for updating parameters of q. If ‘sleep’, update using a sample from p. If ‘wake’, update using a sample from q. (Unlike reparameterization gradients, the sample is held fixed.)
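
For example (a sketch; inference is assumed to be constructed as in the __init__ example above), updating the parameters of q in the wake phase with five samples per gradient:

inference.initialize(n_samples=5, phase_q='wake')
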
print_progress

print_progress(info_dict)

Print progress to output.

run

run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)

A simple wrapper to run inference.

  1. Initialize algorithm via initialize.
  2. (Optional) Build a TensorFlow summary writer for TensorBoard.
  3. (Optional) Initialize TensorFlow variables.
  4. (Optional) Start queue runners.
  5. Run update for self.n_iter iterations.
  6. While running, print_progress.
  7. Finalize algorithm via finalize.
  8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

Args:

  • variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
  • use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
  • *args, **kwargs: Passed into initialize.
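
A typical call looks like the sketch below (illustrative values; keyword arguments such as n_iter, n_samples, and phase_q are forwarded to initialize):

inference = ed.WakeSleep({z: qz}, data={x: x_train})
inference.run(n_iter=1000, n_samples=5, phase_q='sleep')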

update

update(feed_dict=None)

Run one iteration of optimization.

Args:

  • feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

Returns:

dict. Dictionary of algorithm-specific information. In this case, the loss function value after one iteration.
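
For example, a manual loop in the spirit of run (a sketch; assumes inference has been constructed as above and a TensorFlow session is active):

inference.initialize(n_samples=5)
tf.global_variables_initializer().run()
for _ in range(inference.n_iter):
  info_dict = inference.update()
  inference.print_progress(info_dict)
inference.finalize()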

Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214), 1158–1161.