API and Documentation


A probabilistic model is a joint distribution \(p(\mathbf{x}, \mathbf{z})\) of data \(\mathbf{x}\) and latent variables \(\mathbf{z}\). For background, see the Probabilistic Models tutorial.

In Edward, we specify models using a simple language of random variables. A random variable \(\mathbf{x}\) is an object parameterized by tensors \(\theta^*\), where the number of random variables in one object is determined by the dimensions of its parameters.

import tensorflow as tf
from edward.models import Normal, Exponential

# univariate normal
Normal(mu=tf.constant(0.0), sigma=tf.constant(1.0))
# vector of 5 univariate normals
Normal(mu=tf.zeros(5), sigma=tf.ones(5))
# 2 x 3 matrix of Exponentials
Exponential(lam=tf.ones([2, 3]))

For multivariate distributions, the multivariate dimension is the innermost (right-most) dimension of the parameters.

import tensorflow as tf
from edward.models import Dirichlet, MultivariateNormalFull

K = 3  # dimension of each event

# K-dimensional Dirichlet
Dirichlet(alpha=tf.constant([0.1] * K))
# vector of 5 K-dimensional multivariate normals;
# covariance matrices must be positive definite, e.g. identity matrices
MultivariateNormalFull(mu=tf.zeros([5, K]),
                       sigma=tf.matrix_diag(tf.ones([5, K])))
# 2 x 5 matrix of K-dimensional multivariate normals
MultivariateNormalFull(mu=tf.zeros([2, 5, K]),
                       sigma=tf.matrix_diag(tf.ones([2, 5, K])))
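The same shape convention can be illustrated outside Edward. As a sketch in plain NumPy (an illustration only, not Edward code): a batch of 5 K-dimensional multivariate normals has a mean of shape (5, K) and covariances of shape (5, K, K), and a single draw has shape (5, K).

```python
import numpy as np

K = 3
rng = np.random.RandomState(0)

# Parameters for a batch of 5 K-dimensional multivariate normals:
# the event (multivariate) dimension is the innermost axis.
mu = np.zeros([5, K])              # batch shape (5,), event shape (K,)
sigma = np.stack([np.eye(K)] * 5)  # (5, K, K): one covariance per batch member

# One draw from each distribution in the batch.
sample = np.stack([rng.multivariate_normal(mu[i], sigma[i]) for i in range(5)])
print(sample.shape)  # (5, 3) = batch shape + event shape
```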

Random variables are equipped with methods such as log_prob(), \(\log p(\mathbf{x}\mid\theta^*)\), mean(), \(\mathbb{E}_{p(\mathbf{x}\mid\theta^*)}[\mathbf{x}]\), and sample(), \(\mathbf{x}^*\sim p(\mathbf{x}\mid\theta^*)\). Further, each random variable is associated with a tensor \(\mathbf{x}^*\) in the computational graph, which represents a single sample \(\mathbf{x}^*\sim p(\mathbf{x}\mid\theta^*)\).
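As a sketch of what these methods compute for a univariate normal with parameters \(\theta^* = (\mu, \sigma)\) (plain NumPy, not Edward's implementation):

```python
import numpy as np

mu, sigma = 0.0, 1.0

def log_prob(x):
    """log p(x | mu, sigma) for a univariate normal."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))

mean = mu                                            # E[x] under p(x | theta*)
sample = np.random.RandomState(0).normal(mu, sigma)  # x* ~ p(x | theta*)

print(log_prob(0.0))  # -0.9189...: log density at the mean
```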

This makes it easy to parameterize random variables with complex deterministic structure, such as deep neural networks and a diverse set of math operations, and it gives compatibility with third-party libraries that also build on TensorFlow. The design also enables compositions of random variables to capture complex stochastic structure; such operations act on the sample tensor \(\mathbf{x}^*\).


import tensorflow as tf
from edward.models import Normal

x = Normal(mu=tf.zeros(10), sigma=tf.ones(10))
y = tf.constant(5.0)
x + y, x - y, x * y, x / y
tf.tanh(x * y)
x[2]  # 3rd normal rv in the vector

In the compositionality page, we describe how to build models by composing random variables.

For a list of random variables supported in Edward, see the API reference page. Edward random variables build on top of distributions in tf.contrib.distributions, inheriting the same arguments and class methods. Additional methods, detailed below, are also available.

class edward.models.RandomVariable(*args, **kwargs)

Base class for random variables.

A random variable is an object parameterized by tensors. It is equipped with methods such as the log-density, mean, and sample.

It also wraps a tensor, where the tensor corresponds to a sample from the random variable. This enables operations on the TensorFlow graph, allowing random variables to be used in conjunction with other TensorFlow ops.


RandomVariable assumes use in a multiple inheritance setting. The child class must first inherit RandomVariable, then second inherit a class in tf.contrib.distributions. With Python’s method resolution order, this implies the following during initialization (using distributions.Bernoulli as an example):

  1. Start the __init__() of the child class, which passes all *args, **kwargs to RandomVariable.
  2. This in turn passes all *args, **kwargs to distributions.Bernoulli, completing the __init__() of distributions.Bernoulli.
  3. Complete the __init__() of RandomVariable, which calls self.sample(), relying on the method from distributions.Bernoulli.
  4. Complete the __init__() of the child class.

Methods from both RandomVariable and distributions.Bernoulli populate the namespace of the child class. Methods from RandomVariable will take higher priority if there are conflicts.
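The initialization order above can be sketched with plain Python classes standing in for RandomVariable and distributions.Bernoulli (the class bodies here are stand-ins, not Edward's actual implementation):

```python
order = []

class Distribution(object):
    """Stands in for distributions.Bernoulli."""
    def __init__(self, *args, **kwargs):
        order.append("Distribution.__init__")

    def sample(self):
        order.append("Distribution.sample")
        return 0

class RandomVariable(object):
    """Stands in for edward.models.RandomVariable."""
    def __init__(self, *args, **kwargs):
        # Pass all arguments up the MRO to the distribution class.
        super(RandomVariable, self).__init__(*args, **kwargs)
        # Once the distribution is initialized, draw the sample tensor,
        # relying on the distribution's sample() method.
        self.value = self.sample()
        order.append("RandomVariable.__init__ done")

class Bernoulli(RandomVariable, Distribution):
    """The child class: RandomVariable first, the distribution second."""
    def __init__(self, *args, **kwargs):
        super(Bernoulli, self).__init__(*args, **kwargs)
        order.append("Bernoulli.__init__ done")

Bernoulli(p=0.5)
print(order)
# ['Distribution.__init__', 'Distribution.sample',
#  'RandomVariable.__init__ done', 'Bernoulli.__init__ done']
```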


import tensorflow as tf
from edward.models import Bernoulli, Normal

p = tf.constant(0.5)
x = Bernoulli(p=p)

z1 = tf.constant([[2.0, 8.0]])
z2 = tf.constant([[1.0, 2.0]])
# a 1 x 2 times 2 x 1 matmul; logits may be any real number
x = Bernoulli(logits=tf.matmul(z1, z2, transpose_b=True))

mu = Normal(mu=tf.constant(0.0), sigma=tf.constant(1.0))
x = Normal(mu=mu, sigma=tf.constant(1.0))




shape

Shape of random variable.

eval(session=None, feed_dict=None)

In a session, computes and returns the value of this random variable.

This is not a graph construction method; it does not add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used.


session : tf.BaseSession, optional

The tf.Session to use to evaluate this random variable. If none, the default session is used.

feed_dict : dict, optional

A dictionary that maps tf.Tensor objects to feed values. See tf.Session.run() for a description of the valid feed values.


x = Normal(mu=0.0, sigma=1.0)
with tf.Session() as sess:
  # Usage passing the session explicitly.
  print(x.eval(sess))
  # Usage with the default session.  The 'with' block
  # above makes 'sess' the default session.
  print(x.eval())


value()

Get tensor that the random variable corresponds to.


get_ancestors(collection=None)

Get ancestor random variables.


get_children(collection=None)

Get child random variables.


get_descendants(collection=None)

Get descendant random variables.


get_parents(collection=None)

Get parent random variables.


get_siblings(collection=None)

Get sibling random variables.


get_variables(collection=None)

Get TensorFlow variables that the random variable depends on.


get_shape()

Get shape of random variable.

class edward.models.DirichletProcess(alpha, base, validate_args=False, allow_nan_stats=True, name='DirichletProcess', *args, **kwargs)

Dirichlet process \(\mathcal{DP}(\alpha, H)\).

It has two parameters: a positive real value \(\alpha\), known as the concentration parameter (alpha), and a base distribution \(H\) (base).



Initialize a batch of Dirichlet processes.


alpha : tf.Tensor

Concentration parameter. Must be positive real-valued. Its shape determines the number of independent DPs (batch shape).

base : RandomVariable

Base distribution. Its shape determines the shape of an individual DP (event shape).


import tensorflow as tf
from edward.models import DirichletProcess, Exponential, Normal

# scalar concentration parameter, scalar base distribution
dp = DirichletProcess(0.1, Normal(mu=0.0, sigma=1.0))
assert dp.shape == ()

# vector of concentration parameters, matrix of Exponentials
dp = DirichletProcess(tf.constant([0.1, 0.4]),
                      Exponential(lam=tf.ones([5, 3])))
assert dp.shape == (2, 5, 3)
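A draw from \(\mathcal{DP}(\alpha, H)\) is itself a discrete distribution over atoms drawn from the base \(H\). As a hedged sketch of the roles of the two parameters, here is a truncated stick-breaking construction in plain NumPy (an illustration only, not how Edward's DirichletProcess is implemented):

```python
import numpy as np

def sample_dp(alpha, base_sampler, truncation=1000, seed=0):
    """Approximate one draw from DP(alpha, H) via truncated stick-breaking.

    Returns (atoms, weights): atoms drawn from the base distribution H,
    and the probability mass placed on each atom.
    """
    rng = np.random.RandomState(seed)
    betas = rng.beta(1.0, alpha, size=truncation)          # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining                            # broken-off pieces
    weights[-1] += 1.0 - weights.sum()                     # fold leftover mass in
    atoms = base_sampler(rng, truncation)                  # atoms from H
    return atoms, weights

# Scalar concentration, standard normal base distribution.
atoms, weights = sample_dp(
    alpha=1.0, base_sampler=lambda rng, n: rng.normal(0.0, 1.0, size=n))
assert np.isclose(weights.sum(), 1.0)
```

Smaller \(\alpha\) concentrates the mass on few atoms; larger \(\alpha\) spreads it out, approaching the base distribution.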


class edward.models.Empirical(params, validate_args=False, allow_nan_stats=True, name='Empirical', *args, **kwargs)

Empirical random variable.



Initialize an Empirical random variable.


params : tf.Tensor

Collection of samples. Its outer (left-most) dimension determines the number of samples.


import tensorflow as tf
from edward.models import Empirical

# 100 samples of a scalar
x = Empirical(params=tf.zeros(100))
assert x.shape == ()

# 5 samples of a 2 x 3 matrix
x = Empirical(params=tf.zeros([5, 2, 3]))
assert x.shape == (2, 3)
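As a sketch of the semantics in plain NumPy (not Edward code): statistics of an empirical distribution are computed across the outer sample dimension, leaving the event shape.

```python
import numpy as np

params = np.arange(10.0)            # 10 "samples" of a scalar
mean = params.mean(axis=0)          # empirical mean over the sample axis
print(mean)  # 4.5

samples = np.ones([5, 2, 3])        # 5 samples of a 2 x 3 matrix
print(samples.mean(axis=0).shape)   # (2, 3): the event shape remains
```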


class edward.models.PointMass(params, validate_args=False, allow_nan_stats=True, name='PointMass', *args, **kwargs)

PointMass random variable.

It is analogous to an Empirical random variable with one sample, but its parameter argument does not have an outer dimension.
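The shape relationship can be checked in plain NumPy (a sketch, not Edward code): a PointMass parameter equals an Empirical parameter with its single leading sample dimension removed.

```python
import numpy as np

point = np.zeros([5, 2, 3])            # PointMass params: the location itself
one_sample = point[np.newaxis]         # Empirical params: 1 x 5 x 2 x 3
assert one_sample.shape == (1, 5, 2, 3)
assert (one_sample[0] == point).all()  # the single sample is the location
```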



Initialize a PointMass random variable.


params : tf.Tensor

The location with all probability mass.


import tensorflow as tf
from edward.models import PointMass

# scalar
x = PointMass(params=28.3)
assert x.shape == ()

# 5 x 2 x 3 tensor
x = PointMass(params=tf.zeros([5, 2, 3]))
assert x.shape == (5, 2, 3)